https://en.wikipedia.org/wiki/Trace%20class
In mathematics, specifically functional analysis, a trace-class operator is a linear operator for which a trace may be defined, such that the trace is a finite number independent of the choice of basis used to compute the trace. This trace of trace-class operators generalizes the trace of matrices studied in linear algebra. All trace-class operators are compact operators. In quantum mechanics, quantum states are described by density matrices, which are certain trace-class operators. Trace-class operators are essentially the same as nuclear operators, though many authors reserve the term "trace-class operator" for the special case of nuclear operators on Hilbert spaces and use the term "nuclear operator" in more general topological vector spaces (such as Banach spaces). Note that the trace operator studied in partial differential equations is an unrelated concept. Definition Let $H$ be a separable Hilbert space, $\{e_k\}_k$ an orthonormal basis and $A$ a positive bounded linear operator on $H$. The trace of $A$ is denoted by $\operatorname{Tr}(A)$ and defined as $\operatorname{Tr}(A) = \sum_k \langle e_k, A e_k \rangle$, independent of the choice of orthonormal basis. A (not necessarily positive) bounded linear operator $T$ is called trace class if and only if $\operatorname{Tr}(|T|) < \infty$, where $|T| := \sqrt{T^* T}$ denotes the positive-semidefinite Hermitian square root. The trace-norm of a trace class operator $T$ is defined as $\|T\|_1 := \operatorname{Tr}(|T|)$. One can show that the trace-norm is a norm on the space $B_1(H)$ of all trace class operators and that $B_1(H)$, with the trace-norm, becomes a Banach space. When $H$ is finite-dimensional, every (positive) operator is trace class and this definition of the trace of $A$ coincides with the definition of the trace of a matrix. If $H$ is complex, then $A$ is always self-adjoint (i.e. $A = A^*$), though the converse is not necessarily true. Equivalent formulations Given a bounded linear operator $T : H \to H$, each of the following statements is equivalent to $T$ being in the trace class: (1) $\operatorname{Tr}(|T|) = \sum_k \langle e_k, |T| e_k \rangle$ is finite for every orthonormal basis $(e_k)_k$ of $H$. (2) $T$ is a nuclear operator: there exist two orthogonal sequences $(x_i)_i$ and $(y_i)_i$ in $H$ and positive real numbers $(\lambda_i)_i$ in $\ell^1$ such that $T(x) = \sum_i \lambda_i \langle x, x_i \rangle y_i$ and $\sum_i \lambda_i < \infty$, where $(\lambda_i)_i$ are the singular values of $T$ (or, equivalently, the eigenvalues of $|T|$), with each value repeated as often as its multiplicity. (3) $T$ is a compact operator with $\sum_i s_i(T) < \infty$, where $s_i(T)$ denote the singular values of $T$. (If $T$ is trace class then $T$ is an integral operator.) (4) $T$ is equal to the composition of two Hilbert–Schmidt operators. (5) $|T|^{1/2}$ is a Hilbert–Schmidt operator. Examples Spectral theorem Let $T$ be a bounded self-adjoint operator on a Hilbert space. Then $T$ is trace class if and only if $T$ has a pure point spectrum with eigenvalues $(\lambda_i)_i$ such that $\sum_i |\lambda_i| < \infty$. Mercer's theorem Mercer's theorem provides another example of a trace class operator. That is, suppose $K$ is a continuous symmetric positive-definite kernel on $L^2([a,b])$, defined as $K(s,t) = \sum_j \lambda_j \, e_j(s) e_j(t)$; then the associated Hilbert–Schmidt integral operator $T_K$ is trace class, i.e., $\operatorname{Tr}(T_K) = \int_a^b K(t,t) \, dt = \sum_j \lambda_j$. Finite-rank operators Every finite-rank operator is a trace-class operator. Furthermore, the space of all finite-rank operators is a dense subspace of $B_1(H)$ (when endowed with the trace norm). Given any $x, y \in H$, define the operator $T_{x,y} : H \to H$ by $T_{x,y}(z) := \langle z, y \rangle x$. Then $T_{x,y}$ is a continuous linear operator of rank 1 and is thus trace class; moreover, for any bounded linear operator $A$ on $H$ (and into $H$), $\operatorname{Tr}(A T_{x,y}) = \langle A x, y \rangle$. Properties If $A$ is a non-negative self-adjoint operator, then $A$ is trace-class if and only if $\operatorname{Tr}(A) < \infty$. Therefore, a self-adjoint operator $A$ is trace-class if and only if its positive part $A^+$ and negative part $A^-$ are both trace-class. (The positive and negative parts of a self-adjoint operator are obtained by the continuous functional calculus.)
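The finite-dimensional case can be checked numerically. The following sketch is a minimal illustration with made-up test matrices (the variable names and random data are not taken from the article): it verifies that the trace of a matrix does not depend on the orthonormal basis and that the trace norm $\operatorname{Tr}|T|$ equals the sum of the singular values of $T$.

```python
import numpy as np

# Minimal finite-dimensional sketch (made-up test data): in C^n every operator
# is trace class, the trace is independent of the orthonormal basis, and the
# trace norm Tr|T| equals the sum of the singular values of T.
rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Trace in the standard basis.
tr_standard = np.trace(T)

# Trace in another orthonormal basis {Q e_k}, obtained from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
tr_rotated = sum(Q[:, k].conj() @ T @ Q[:, k] for k in range(n))

# |T| = sqrt(T* T): diagonalize the positive matrix T* T and take square roots.
evals = np.linalg.eigvalsh(T.conj().T @ T)
trace_norm = np.sqrt(np.clip(evals, 0.0, None)).sum()  # = Tr|T|

print(np.allclose(tr_standard, tr_rotated))                               # True: basis independence
print(np.allclose(trace_norm, np.linalg.svd(T, compute_uv=False).sum()))  # True: ||T||_1 = sum of singular values
```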
The trace is a linear functional over the space of trace-class operators, that is, $\operatorname{Tr}(aA + bB) = a\operatorname{Tr}(A) + b\operatorname{Tr}(B)$. The bilinear map $\langle A, B \rangle = \operatorname{Tr}(A^* B)$ is an inner product on the trace class; the corresponding norm is called the Hilbert–Schmidt norm. The completion of the trace-class operators in the Hilbert–Schmidt norm is called the Hilbert–Schmidt operators. $\operatorname{Tr}$ is a positive linear functional such that if $T$ is a trace class operator satisfying $T \geq 0$ and $\operatorname{Tr}(T) = 0$, then $T = 0$. If $T$ is trace-class then so is $T^*$, and $\|T\|_1 = \|T^*\|_1$. If $A$ is bounded, and $T$ is trace-class, then $AT$ and $TA$ are also trace-class (i.e. the space of trace-class operators on $H$ is an ideal in the algebra of bounded linear operators on $H$), and $\|AT\|_1 \leq \|A\| \, \|T\|_1$, $\|TA\|_1 \leq \|A\| \, \|T\|_1$. Furthermore, under the same hypothesis, $\operatorname{Tr}(AT) = \operatorname{Tr}(TA)$ and $|\operatorname{Tr}(AT)| \leq \|A\| \, \|T\|_1$. The last assertion also holds under the weaker hypothesis that $A$ and $T$ are Hilbert–Schmidt. If $(e_k)_k$ and $(f_k)_k$ are two orthonormal bases of $H$ and if $T$ is trace class then $\sum_k |\langle e_k, T f_k \rangle| \leq \|T\|_1$. If $A$ is trace-class, then one can define the Fredholm determinant of $I + A$: $\det(I + A) := \prod_{n \geq 1} (1 + \lambda_n)$, where $\{\lambda_n\}_n$ is the spectrum of $A$. The trace class condition on $A$ guarantees that the infinite product is finite: indeed, $\det(I + A) \leq e^{\|A\|_1}$. It also implies that $\det(I + A) \neq 0$ if and only if $I + A$ is invertible. If $T$ is trace class then for any orthonormal basis $(e_k)_k$ of $H$ the sum of positive terms $\sum_k |\langle e_k, T e_k \rangle|$ is finite. If $T = AB$ for some Hilbert–Schmidt operators $A$ and $B$, then for any unit vector $e \in H$, $|\langle e, T e \rangle| \leq \tfrac{1}{2}\left(\|A^* e\|^2 + \|B e\|^2\right)$ holds. Lidskii's theorem Let $T$ be a trace-class operator in a separable Hilbert space $H$ and let $\lambda_1(T), \lambda_2(T), \dots$ be the eigenvalues of $T$. Let us assume that these are enumerated with algebraic multiplicities taken into account (that is, if the algebraic multiplicity of $\lambda$ is $k$, then $\lambda$ is repeated $k$ times in the list). Lidskii's theorem (named after Victor Borisovich Lidskii) states that $\operatorname{Tr}(T) = \sum_n \lambda_n(T)$. Note that the series on the right converges absolutely due to Weyl's inequality $\sum_n |\lambda_n(T)| \leq \sum_m s_m(T)$ between the eigenvalues $\{\lambda_n(T)\}_n$ and the singular values $\{s_m(T)\}_m$ of the compact operator $T$. Relationship between common classes of operators One can view certain classes of bounded operators as noncommutative analogues of classical sequence spaces, with trace-class operators as the noncommutative analogue of the sequence space $\ell^1(\mathbb{N})$. Indeed, it is possible to apply the spectral theorem to show that every normal trace-class operator on a separable Hilbert space can be realized in a certain way as an $\ell^1$ sequence with respect to some choice of a pair of Hilbert bases. In the same vein, the bounded operators are noncommutative versions of $\ell^\infty(\mathbb{N})$, the compact operators that of $c_0$ (the sequences convergent to 0), Hilbert–Schmidt operators correspond to $\ell^2(\mathbb{N})$, and finite-rank operators to $c_{00}$ (the sequences that have only finitely many non-zero terms). To some extent, the relationships between these classes of operators are similar to the relationships between their commutative counterparts. Recall that every compact operator $T$ on a Hilbert space takes the following canonical form: there exist orthonormal bases $(u_i)_i$ and $(v_i)_i$ and a sequence of non-negative numbers $(\alpha_i)_i$ with $\alpha_i \to 0$ such that $T x = \sum_i \alpha_i \langle x, v_i \rangle u_i$ for all $x \in H$. Making the above heuristic comments more precise, we have that $T$ is trace-class iff the series $\sum_i \alpha_i$ is convergent, $T$ is Hilbert–Schmidt iff $\sum_i \alpha_i^2$ is convergent, and $T$ is finite-rank iff the sequence $(\alpha_i)_i$ has only finitely many nonzero terms. This allows one to relate these classes of operators. The following inclusions hold and are all proper when $H$ is infinite-dimensional: finite-rank $\subseteq$ trace-class $\subseteq$ Hilbert–Schmidt $\subseteq$ compact. The trace-class operators are given the trace norm $\|T\|_1 = \operatorname{Tr}\left[(T^* T)^{1/2}\right] = \sum_i \alpha_i$. The norm corresponding to the Hilbert–Schmidt inner product is $\|T\|_2 = \left[\operatorname{Tr}(T^* T)\right]^{1/2} = \left(\sum_i \alpha_i^2\right)^{1/2}$. Also, the usual operator norm is $\|T\| = \sup_i \alpha_i$. By classical inequalities regarding sequences, $\|T\| \leq \|T\|_2 \leq \|T\|_1$ for appropriate $T$. It is also clear that finite-rank operators are dense in both trace-class and Hilbert–Schmidt in their respective norms.
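Lidskii's theorem and the chain of norm inequalities above are easy to see numerically in finite dimensions; the sketch below (illustrative only, using a random test matrix) checks both.

```python
import numpy as np

# Finite-dimensional illustration (made-up matrix): Tr(T) equals the sum of the
# eigenvalues counted with algebraic multiplicity (Lidskii's theorem), and the
# operator, Hilbert-Schmidt and trace norms satisfy ||T|| <= ||T||_2 <= ||T||_1.
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5))

eigenvalues = np.linalg.eigvals(T)
print(np.allclose(np.trace(T), eigenvalues.sum()))   # True

s = np.linalg.svd(T, compute_uv=False)        # singular values alpha_i
operator_norm = s.max()                       # ||T||   = sup_i alpha_i
hilbert_schmidt_norm = np.sqrt((s**2).sum())  # ||T||_2 = (sum alpha_i^2)^(1/2)
trace_norm = s.sum()                          # ||T||_1 = sum alpha_i
print(operator_norm <= hilbert_schmidt_norm <= trace_norm)   # True
```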
Trace class as the dual of compact operators The dual space of $c_0$ is $\ell^1(\mathbb{N})$. Similarly, we have that the dual of the compact operators, denoted by $K(H)^*$, is the trace-class operators, denoted by $B_1(H)$. The argument, which we now sketch, is reminiscent of that for the corresponding sequence spaces. Let $f \in K(H)^*$; we identify $f$ with the operator $T_f$ defined by $\langle T_f x, y \rangle = f(S_{x,y})$, where $S_{x,y}$ is the rank-one operator given by $S_{x,y}(h) = \langle h, y \rangle x$. This identification works because the finite-rank operators are norm-dense in $K(H)$. In the event that $T_f$ is a positive operator, for any orthonormal basis $(u_i)_i$ one has $\sum_i \langle T_f u_i, u_i \rangle = f(I) \leq \|f\|$, where $I = \sum_i \langle \cdot, u_i \rangle u_i$ is the identity operator. But this means that $T_f$ is trace-class. An appeal to polar decomposition extends this to the general case, where $T_f$ need not be positive. A limiting argument using finite-rank operators shows that $\|T_f\|_1 = \|f\|$. Thus $K(H)^*$ is isometrically isomorphic to $B_1(H)$. As the predual of bounded operators Recall that the dual of $\ell^1(\mathbb{N})$ is $\ell^\infty(\mathbb{N})$. In the present context, the dual of the trace-class operators $B_1(H)$ is the bounded operators $B(H)$. More precisely, the set $B_1(H)$ is a two-sided ideal in $B(H)$. So given any operator $T \in B(H)$, we may define a continuous linear functional $\varphi_T$ on $B_1(H)$ by $\varphi_T(A) = \operatorname{Tr}(AT)$. This correspondence between bounded linear operators $T$ and elements $\varphi_T$ of the dual space of $B_1(H)$ is an isometric isomorphism. It follows that the dual space of $B_1(H)$ is $B(H)$. This can be used to define the weak-* topology on $B(H)$. See also Trace operator References Bibliography Dixmier, J. (1969). Les Algèbres d'Opérateurs dans l'Espace Hilbertien. Gauthier-Villars. Operator theory Topological tensor products Linear operators
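A compact way to summarize the two duality statements above (standard facts, written here with $K(H)$ for the compact, $B_1(H)$ for the trace-class and $B(H)$ for the bounded operators; the pairing by the trace is the one used in the text):

```latex
% Duality chain for a separable Hilbert space H, with the trace as the pairing,
% mirroring the commutative chain  c_0^* = l^1  and  (l^1)^* = l^infty.
K(H)^{*} \;\cong\; B_1(H),
\qquad
B_1(H)^{*} \;\cong\; B(H),
\qquad
\varphi_T(A) = \operatorname{Tr}(AT), \quad A \in B_1(H),\ T \in B(H).
```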
Trace class
Mathematics,Engineering
https://en.wikipedia.org/wiki/The%20CIS%20Critical%20Security%20Controls%20for%20Effective%20Cyber%20Defense
The CIS Controls (formerly called the Center for Internet Security Critical Security Controls for Effective Cyber Defense) is a publication of best practice guidelines for computer security. The project was initiated early in 2008 in response to extreme data losses experienced by organizations in the US defense industrial base. The publication was initially developed by the SANS Institute and released as the "SANS Top 20." Ownership was then transferred to the Council on Cyber Security (CCS) in 2013, and then transferred to the Center for Internet Security (CIS) in 2015. CIS released version 8 of the CIS Controls in 2021. Goals The guidelines consist of 18 (originally 20) key actions, called critical security controls (CSC), that organizations should implement to block or mitigate known attacks. The controls are designed so that primarily automated means can be used to implement, enforce and monitor them. The security controls give no-nonsense, actionable recommendations for cyber security, written in language that's easily understood by IT personnel. Goals of the Consensus Audit Guidelines include: leveraging cyber offense to inform cyber defense, focusing on high-payoff areas; ensuring that security investments are focused to counter the highest threats; maximizing the use of automation to enforce security controls, thereby negating human errors; and using a consensus process to collect the best ideas. Supported Platforms CIS Benchmarks cover a wide range of technologies, including operating systems (Windows, Linux, macOS); servers (Apache, NGINX, Microsoft IIS); cloud platforms (AWS, Azure, Google Cloud Platform); network devices (Cisco, Juniper); and applications (Microsoft Office, Google Chrome, Mozilla Firefox). References Information privacy Security compliance
The CIS Critical Security Controls for Effective Cyber Defense
Engineering
https://en.wikipedia.org/wiki/Substance%20abuse
Substance misuse, also known as drug misuse or, in older vernacular, substance abuse, is the use of a drug in amounts or by methods that are harmful to the individual or others. It is a form of substance-related disorder. Differing definitions of drug misuse are used in public health, medical, and criminal justice contexts. In some cases, criminal or anti-social behavior occurs when the person is under the influence of a drug, and long-term personality changes in individuals may also occur. In addition to possible physical, social, and psychological harm, the use of some drugs may also lead to criminal penalties, although these vary widely depending on the local jurisdiction. Drugs most often associated with this term include alcohol, amphetamines, barbiturates, benzodiazepines, cannabis, cocaine, hallucinogens, methaqualone, and opioids. The exact cause of substance misuse is not clear, but there are two predominant theories: either a genetic predisposition or a habit learned from others, which, if addiction develops, manifests itself as a chronic debilitating disease. In 2010, about 5% of adults (230 million) used an illicit substance. Of these, 27 million have high-risk drug use—otherwise known as recurrent drug use—causing harm to their health, causing psychological problems, and or causing social problems that put them at risk of those dangers. In 2015, substance use disorders resulted in 307,400 deaths, up from 165,000 deaths in 1990. Of these, the highest numbers are from alcohol use disorders at 137,500, opioid use disorders at 122,100 deaths, amphetamine use disorders at 12,200 deaths, and cocaine use disorders at 11,100. Classification Public health definitions Public health practitioners have attempted to look at substance use from a broader perspective than the individual, emphasizing the role of society, culture, and availability. Some health professionals choose to avoid the terms alcohol or drug "abuse" in favor of language considered more objective, such as "substance and alcohol type problems" or "harmful/problematic use" of drugs. The Health Officers Council of British Columbia — in their 2005 policy discussion paper, A Public Health Approach to Drug Control in Canada — has adopted a public health model of psychoactive substance use that challenges the simplistic black-and-white construction of the binary (or complementary) antonyms "use" vs. "abuse". This model explicitly recognizes a spectrum of use, ranging from beneficial use to chronic dependence. Medical definitions 'Drug abuse' is no longer a current medical diagnosis in either of the most used diagnostic tools in the world, the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), and the World Health Organization's International Classification of Diseases (ICD). Value judgment Philip Jenkins suggests that there are two issues with the term "drug abuse". First, what constitutes a "drug" is debatable. For instance, GHB, a naturally occurring substance in the central nervous system is considered a drug, and is illegal in many countries, while nicotine is not officially considered a drug in most countries. Second, the word "abuse" implies a recognized standard of use for any substance. Drinking an occasional glass of wine is considered acceptable in most Western countries, while drinking several bottles is seen as abuse. Strict temperance advocates, who may or may not be religiously motivated, would see drinking even one glass as abuse. 
Some groups (Mormons, as prescribed in "the Word of Wisdom") even condemn caffeine use in any quantity. Similarly, adopting the view that any (recreational) use of cannabis or substituted amphetamines constitutes drug abuse implies a decision made that the substance is harmful, even in minute quantities. In the U.S., drugs have been legally classified into five categories, schedule I, II, III, IV, or V in the Controlled Substances Act. The drugs are classified on their deemed potential for abuse. The usage of some drugs is strongly correlated. For example, the consumption of seven illicit drugs (amphetamines, cannabis, cocaine, ecstasy, legal highs, LSD, and magic mushrooms) is correlated and the Pearson correlation coefficient r>0.4 in every pair of them; consumption of cannabis is strongly correlated (r>0.5) with the usage of nicotine (tobacco), heroin is correlated with cocaine (r>0.4) and methadone (r>0.45), and is strongly correlated with crack (r>0.5) Drug misuse Drug misuse is a term used commonly when prescription medication with sedative, anxiolytic, analgesic, or stimulant properties is used for mood alteration or intoxication ignoring the fact that overdose of such medicines can sometimes have serious adverse effects. It sometimes involves drug diversion from the individual for whom it was prescribed. Prescription misuse has been defined differently and rather inconsistently based on the status of drug prescription, the uses without a prescription, intentional use to achieve intoxicating effects, route of administration, co-ingestion with alcohol, and the presence or absence of dependence symptoms. Chronic use of certain substances leads to a change in the central nervous system known as a "tolerance" to the medicine such that more of the substance is needed in order to produce desired effects. With some substances, stopping or reducing use can cause withdrawal symptoms to occur, but this is highly dependent on the specific substance in question. The rate of prescription drug use is fast overtaking illegal drug use in the United States. According to the National Institute of Drug Abuse, 7 million people were taking prescription drugs for nonmedical use in 2010. Among 12th graders, nonmedical prescription drug use is now second only to cannabis. In 2011, "Nearly 1 in 12 high school seniors reported nonmedical use of Vicodin; 1 in 20 reported such use of OxyContin." Both of these drugs contain opioids. Fentanyl is an opioid that is 100 times more potent than morphine, and 50 times more potent than heroin. A 2017 survey of 12th graders in the United States, found misuse of OxyContin of 2.7 percent, compared to 5.5 percent at its peak in 2005. Misuse of the combination hydrocodone/paracetamol was at its lowest since a peak of 10.5 percent in 2003. This decrease may be related to public health initiatives and decreased availability. Avenues of obtaining prescription drugs for misuse are varied: sharing between family and friends, illegally buying medications at school or work, and often "doctor shopping" to find multiple physicians to prescribe the same medication, without the knowledge of other prescribers. Increasingly, law enforcement is holding physicians responsible for prescribing controlled substances without fully establishing patient controls, such as a patient "drug contract". 
Concerned physicians are educating themselves on how to identify medication-seeking behavior in their patients, and are becoming familiar with "red flags" that would alert them to potential prescription drug abuse. Signs and symptoms Depending on the actual compound, drug abuse including alcohol may lead to health problems, social problems, morbidity, injuries, unprotected sex, violence, deaths, motor vehicle accidents, homicides, suicides, physical dependence or psychological addiction. There is a high rate of suicide in alcoholics and other drug abusers. The reasons believed to cause the increased risk of suicide include the long-term abuse of alcohol and other drugs causing physiological distortion of brain chemistry as well as the social isolation. Another factor is the acute intoxicating effects of the drugs may make suicide more likely to occur. Suicide is also very common in adolescent alcohol abusers, with 1 in 4 suicides in adolescents being related to alcohol abuse. In the US, approximately 30% of suicides are related to alcohol abuse. Alcohol abuse is also associated with increased risks of committing criminal offences including child abuse, domestic violence, rapes, burglaries and assaults. Drug abuse, including alcohol and prescription drugs, can induce symptomatology which resembles mental illness. This can occur both in the intoxicated state and also during withdrawal. In some cases, substance-induced psychiatric disorders can persist long after detoxification, such as prolonged psychosis or depression after amphetamine or cocaine abuse. A protracted withdrawal syndrome can also occur with symptoms persisting for months after cessation of use. Benzodiazepines are the most notable drug for inducing prolonged withdrawal effects with symptoms sometimes persisting for years after cessation of use. Both alcohol, barbiturate as well as benzodiazepine withdrawal can potentially be fatal. Abuse of hallucinogens, although extremely unlikely, may in some individuals trigger delusional and other psychotic phenomena long after cessation of use. This is mainly a risk with deliriants, and most unlikely with psychedelics and dissociatives. Cannabis may trigger panic attacks during intoxication and with continued use, it may cause a state similar to dysthymia. Researchers have found that daily cannabis use and the use of high-potency cannabis are independently associated with a higher chance of developing schizophrenia and other psychotic disorders. Severe anxiety and depression are often induced by sustained alcohol abuse. Even sustained moderate alcohol use may increase anxiety and depression levels in some individuals. In most cases, these drug-induced psychiatric disorders fade away with prolonged abstinence. Similarly, although substance abuse induces many changes to the brain, there is evidence that many of these alterations are reversed following periods of prolonged abstinence. Impulsivity Impulsivity is characterized by actions based on sudden desires, whims, or inclinations rather than careful thought. Individuals with substance abuse have higher levels of impulsivity, and individuals who use multiple drugs tend to be more impulsive. A number of studies using the Iowa gambling task as a measure for impulsive behavior found that drug using populations made more risky choices compared to healthy controls. There is a hypothesis that the loss of impulse control may be due to impaired inhibitory control resulting from drug induced changes that take place in the frontal cortex. 
The neurodevelopmental and hormonal changes that happen during adolescence may modulate impulse control, which could possibly lead to experimentation with drugs and may lead to addiction. Impulsivity is thought to be a facet trait in the neuroticism personality domain (overindulgence/negative urgency) which is prospectively associated with the development of substance abuse. Screening and assessment The screening and assessment process of substance use behavior is important for the diagnosis and treatment of substance use disorders. Screening is the process of identifying individuals who have or may be at risk for a substance use disorder; screeners are usually brief to administer. Assessments are used to clarify the nature of the substance use behavior to help determine appropriate treatment. Assessments usually require specialized skills, and are longer to administer than screeners. Given that addiction manifests in structural changes to the brain, it is possible that non-invasive magnetic resonance imaging could help diagnose addiction in the future. Targeted assessments There are several different screening tools that have been validated for use with adolescents, such as the CRAFFT Screening Test, and in adults, the CAGE questionnaire. Some recommendations for screening tools for substance misuse in pregnancy include that they take less than 10 minutes, should be used routinely, and include an educational component. Tools suitable for pregnant women include i.a. 4Ps, T-ACE, TWEAK, TQDH (Ten-Question Drinking History), and AUDIT. Treatment Psychological From the applied behavior analysis literature, behavioral psychology, and from randomized clinical trials, several evidence-based interventions have emerged: behavioral marital therapy, motivational interviewing, the community reinforcement approach, exposure therapy, and contingency management. These interventions help suppress cravings and mental anxiety, improve focus on treatment and on learning new behavioral skills, ease withdrawal symptoms and reduce the chances of relapse. In children and adolescents, cognitive behavioral therapy (CBT) and family therapy currently have the most research evidence for the treatment of substance abuse problems. Well-established studies also include ecological family-based treatment and group CBT. These treatments can be administered in a variety of different formats, each of which has varying levels of research support. Research has shown that what makes group CBT most effective is that it promotes the development of social skills, developmentally appropriate emotional regulatory skills and other interpersonal skills. A few integrated treatment models, which combine parts from various types of treatment, have also been rated as either well-established or probably effective. A study on maternal alcohol and other drug use has shown that integrated treatment programs have produced significant results, such as higher rates of negative toxicology screens. Additionally, brief school-based interventions have been found to be effective in reducing adolescent alcohol and cannabis use and abuse. Motivational interviewing can also be effective in treating substance use disorder in adolescents. Alcoholics Anonymous and Narcotics Anonymous are widely known self-help organizations in which members support each other in abstaining from substances. Social skills are significantly impaired in people with alcoholism due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex area of the brain.
It has been suggested that social skills training adjunctive to inpatient treatment of alcohol dependence is probably efficacious, including managing the social environment. Medication A number of medications have been approved for the treatment of substance abuse. These include replacement therapies such as buprenorphine and methadone as well as antagonist medications like disulfiram and naltrexone in either short acting, or the newer long acting form. Several other medications, often ones originally used in other contexts, have also been shown to be effective including bupropion and modafinil. Methadone and buprenorphine are sometimes used to treat opiate addiction. These drugs are used as substitutes for other opioids and still cause withdrawal symptoms but they facilitate the tapering off process in a controlled fashion. When a person goes from using fentanyl every day, to not using it at all, they will experience a point where they need to get used to not using the substance. This is called withdrawal. Antipsychotic medications have not been found to be useful. Acamprostate is a glutamatergic NMDA antagonist, which helps with alcohol withdrawal symptoms because alcohol withdrawal is associated with a hyperglutamatergic system. Heroin-assisted treatment Three countries in Europe have active HAT programs, namely England, the Netherlands and Switzerland. Despite critical voices by conservative think-tanks with regard to these harm-reduction strategies, significant progress in the reduction of drug-related deaths has been achieved in those countries. For example, the US, devoid of such measures, has seen large increases in drug-related deaths since 2000 (mostly related to heroin use), while Switzerland has seen large decreases. In 2018, approximately 60,000 people have died of drug overdoses in America, while in the same time period, Switzerland's drug deaths were at 260. Relative to the population of these countries, the US has 10 times more drug-related deaths compared to the Swiss Confederation, which in effect illustrates the efficacy of HAT to reduce fatal outcomes in opiate/opioid addiction. Dual diagnosis It is common for individuals with drugs use disorder to have other psychological problems. The terms "dual diagnosis" or "co-occurring disorders", refer to having a mental health and substance use disorder at the same time. According to the British Association for Psychopharmacology (BAP), "symptoms of psychiatric disorders such as depression, anxiety and psychosis are the rule rather than the exception in patients misusing drugs and/or alcohol." Individuals who have a comorbid psychological disorder often have a poor prognosis if either disorder is untreated. Historically most individuals with dual diagnosis either received treatment only for one of their disorders or they did not receive any treatment all. However, since the 1980s, there has been a push towards integrating mental health and addiction treatment. In this method, neither condition is considered primary and both are treated simultaneously by the same provider. Epidemiology The initiation of drug use including alcohol is most likely to occur during adolescence, and some experimentation with substances by older adolescents is common. For example, results from 2010 Monitoring the Future survey, a nationwide study on rates of substance use in the United States, show that 48.2% of 12th graders report having used an illicit drug at some point in their lives. 
In the 30 days prior to the survey, 41.2% of 12th graders had consumed alcohol and 19.2% of 12th graders had smoked tobacco cigarettes. In 2009 in the United States about 21% of high school students have taken prescription drugs without a prescription. And earlier in 2002, the World Health Organization estimated that around 140 million people were alcohol dependent and another 400 million with alcohol-related problems. Studies have shown that the large majority of adolescents will phase out of drug use before it becomes problematic. Thus, although rates of overall use are high, the percentage of adolescents who meet criteria for substance abuse is significantly lower (close to 5%). According UN estimates, there are "more than 50 million regular users of morphine diacetate (heroin), cocaine and synthetic drugs". More than 70,200 Americans died from drug overdoses in 2017. Among these, the sharpest increase occurred among deaths related to fentanyl and synthetic opioids (28,466 deaths). See charts below. History APA, AMA, and NCDA In 1966, the American Medical Association's Committee on Alcoholism and Addiction defined abuse of stimulants (amphetamines, primarily) in terms of 'medical supervision': In 1972, the American Psychiatric Association created a definition that used legality, social acceptability, and cultural familiarity as qualifying factors: In 1973, the National Commission on Marijuana and Drug Abuse stated: ...drug abuse may refer to any type of drug or chemical without regard to its pharmacologic actions. It is an eclectic concept having only one uniform connotation: societal disapproval. ... The Commission believes that the term drug abuse must be deleted from official pronouncements and public policy dialogue. The term has no functional utility and has become no more than an arbitrary codeword for that drug use which is presently considered wrong. DSM The first edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (published in 1952) grouped alcohol and other drug abuse under "sociopathic personality disturbances", which were thought to be symptoms of deeper psychological disorders or moral weakness. The third edition, published in 1980, was the first to recognize substance abuse (including drug abuse) and substance dependence as conditions separate from substance abuse alone, bringing in social and cultural factors. The definition of dependence emphasised tolerance to drugs, and withdrawal from them as key components to diagnosis, whereas abuse was defined as "problematic use with social or occupational impairment" but without withdrawal or tolerance. In 1987, the DSM-IIIR category "psychoactive substance abuse", which includes former concepts of drug abuse is defined as "a maladaptive pattern of use indicated by...continued use despite knowledge of having a persistent or recurrent social, occupational, psychological or physical problem that is caused or exacerbated by the use (or by) recurrent use in situations in which it is physically hazardous". It is a residual category, with dependence taking precedence when applicable. It was the first definition to give equal weight to behavioural and physiological factors in diagnosis. By 1988, the DSM-IV defined substance dependence as "a syndrome involving compulsive use, with or without tolerance and withdrawal"; whereas substance abuse is "problematic use without compulsive use, significant tolerance, or withdrawal". 
Substance abuse can be harmful to health and may even be deadly in certain scenarios. By 1994, the fourth edition of the DSM issued by the American Psychiatric Association, the DSM-IV-TR, defined substance dependence as "when an individual persists in use of alcohol or other drugs despite problems related to use of the substance, substance dependence may be diagnosed", along with criteria for the diagnosis. The DSM-IV-TR defines substance abuse as: A. A maladaptive pattern of substance use leading to clinically significant impairment or distress, as manifested by one (or more) of the following, occurring within a 12-month period: Recurrent substance use resulting in a failure to fulfill major role obligations at work, school, or home (e.g., repeated absences or poor work performance related to substance use; substance-related absences, suspensions or expulsions from school; neglect of children or household) Recurrent substance use in situations in which it is physically hazardous (e.g., driving an automobile or operating a machine when impaired by substance use) Recurrent substance-related legal problems (e.g., arrests for substance-related disorderly conduct) Continued substance use despite having persistent or recurrent social or interpersonal problems caused or exacerbated by the effects of the substance (e.g., arguments with spouse about consequences of intoxication, physical fights) the symptoms have never met the criteria for substance dependence for this class of substance The fifth edition of the DSM (DSM-5), was released in 2013, and it revisited this terminology. The principal change was a transition from the abuse-dependence terminology. In the DSM-IV era, abuse was seen as an early form or less hazardous form of the disease characterized with the dependence criteria. However, the APA's dependence term does not mean that physiologic dependence is present but rather means that a disease state is present, one that most would likely refer to as an addicted state. Many involved recognize that the terminology has often led to confusion, both within the medical community and with the general public. The American Psychiatric Association requested input as to how the terminology of this illness should be altered as it moves forward with DSM-5 discussions. In the DSM-5, substance abuse and substance dependence have been merged into the category of substance use disorders and they no longer exist as individual concepts. While substance abuse and dependence were either present or not, substance use disorder has three levels of severity: mild, moderate and severe. Society and culture Legal approaches Related articles: Drug control law, Prohibition (drugs), Arguments for and against drug prohibition, Harm reduction Most governments have designed legislation to criminalize certain types of drug use. These drugs are often called "illegal drugs" but generally what is illegal is their unlicensed production, distribution, and possession. These drugs are also called "controlled substances". Even for simple possession, legal punishment can be quite severe (including the death penalty in some countries). Laws vary across countries, and even within them, and have fluctuated widely throughout history. Attempts by government-sponsored drug control policy to interdict drug supply and eliminate drug abuse have been largely unsuccessful. 
In spite of the huge efforts by the U.S., drug supply and purity has reached an all-time high, with the vast majority of resources spent on interdiction and law enforcement instead of public health. In the United States, the number of nonviolent drug offenders in prison exceeds by 100,000 the total incarcerated population in the EU, despite the fact that the EU has 100 million more citizens. Despite drug legislation (or perhaps because of it), large, organized criminal drug cartels operate worldwide. Advocates of decriminalization argue that drug prohibition makes drug dealing a lucrative business, leading to much of the associated criminal activity. Some states in the U.S., as of late, have focused on facilitating safe use as opposed to eradicating it. For example, as of 2022, New Jersey has made the effort to expand needle exchange programs throughout the state, passing a bill through legislature that gives control over decisions regarding these types of programs to the state's department of health. This state level bill is not only significant for New Jersey, as it could be used as a model for other states to possibly follow as well. This bill is partly a reaction to the issues occurring at local level city governments within the state of New Jersey as of late. One example of this is in the Atlantic City Government which came under lawsuit after they halted the enactment of said programs within their city. This suit came a year before the passing of this bill, stemming from a local level decision to shut down related operations in Atlantic City made in July that same year. This lawsuit highlights the feelings of New Jersey residents, who had a great influence on this bill passing the legislature. These feelings were demonstrated in front of Atlantic City City hall, where residents exclaimed their desire for these programs. All in all, the aforementioned bill was signed effectively into law just days after it passed legislature, by New Jersey Governor Phil Murphy. Cost Policymakers try to understand the relative costs of drug-related interventions. An appropriate drug policy relies on the assessment of drug-related public expenditure based on a classification system where costs are properly identified. Labelled drug-related expenditures are defined as the direct planned spending that reflects the voluntary engagement of the state in the field of illicit drugs. Direct public expenditures explicitly labeled as drug-related can be easily traced back by exhaustively reviewing official accountancy documents such as national budgets and year-end reports. Unlabelled expenditure refers to unplanned spending and is estimated through modeling techniques, based on a top-down budgetary procedure. Starting from overall aggregated expenditures, this procedure estimates the proportion causally attributable to substance abuse (Unlabelled Drug-related Expenditure = Overall Expenditure × Attributable Proportion). For example, to estimate the prison drug-related expenditures in a given country, two elements would be necessary: the overall prison expenditures in the country for a given period, and the attributable proportion of inmates due to drug-related issues. The product of the two will give a rough estimate that can be compared across different countries. 
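The top-down estimate described above is a single multiplication; a toy sketch with hypothetical placeholder figures (not real budget data) makes the bookkeeping explicit.

```python
# Toy illustration of the top-down estimate described above.
# All figures are hypothetical placeholders, not real budget data.
overall_prison_expenditure = 1_200_000_000  # total prison spending for the period, in currency units
attributable_proportion = 0.18              # estimated share of inmates incarcerated for drug-related issues

# Unlabelled Drug-related Expenditure = Overall Expenditure x Attributable Proportion
unlabelled_drug_related_expenditure = overall_prison_expenditure * attributable_proportion
print(f"{unlabelled_drug_related_expenditure:,.0f}")  # 216,000,000
```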
Europe As part of the reporting exercise corresponding to 2005, the European Monitoring Centre for Drugs and Drug Addiction's network of national focal points set up in the 27 European Union (EU) the member states, Norway, and the candidates' countries to the EU, were requested to identify labeled drug-related public expenditure, at the national level. This was reported by 10 countries categorized according to the functions of government, amounting to a total of EUR 2.17 billion. Overall, the highest proportion of this total came within the government functions of health (66%) (e.g. medical services), and public order and safety (POS) (20%) (e.g. police services, law courts, prisons). By country, the average share of GDP was 0.023% for health, and 0.013% for POS. However, these shares varied considerably across countries, ranging from 0.00033% in Slovakia, up to 0.053% of GDP in Ireland in the case of health, and from 0.003% in Portugal, to 0.02% in the UK, in the case of POS; almost a 161-fold difference between the highest and the lowest countries for health, and a six-fold difference for POS. To respond to these findings and to make a comprehensive assessment of drug-related public expenditure across countries, this study compared health and POS spending and GDP in the 10 reporting countries. Results suggest GDP to be a major determinant of the health and POS drug-related public expenditures of a country. Labeled drug-related public expenditure showed a positive association with the GDP across the countries considered: r = 0.81 in the case of health, and r = 0.91 for POS. The percentage change in health and POS expenditures due to a one percent increase in GDP (the income elasticity of demand) was estimated to be 1.78% and 1.23% respectively. Being highly income elastic, health and POS expenditures can be considered luxury goods; as a nation becomes wealthier it openly spends proportionately more on drug-related health and public order and safety interventions. United Kingdom The UK Home Office estimated that the social and economic cost of drug abuse to the UK economy in terms of crime, absenteeism and sickness is in excess of £20 billion a year. However, the UK Home Office does not estimate what portion of those crimes are unintended consequences of drug prohibition (crimes to sustain expensive drug consumption, risky production and dangerous distribution), nor what is the cost of enforcement. Those aspects are necessary for a full analysis of the economics of prohibition. United States These figures represent overall economic costs, which can be divided in three major components: health costs, productivity losses and non-health direct expenditures. Health-related costs were projected to total $16 billion in 2002. Productivity losses were estimated at $128.6 billion. In contrast to the other costs of drug abuse (which involve direct expenditures for goods and services), this value reflects a loss of potential resources: work in the labor market and in household production that was never performed, but could reasonably be expected to have been performed absent the impact of drug abuse. Included are estimated productivity losses due to premature death ($24.6 billion), drug abuse-related illness ($33.4 billion), incarceration ($39.0 billion), crime careers ($27.6 billion) and productivity losses of victims of crime ($1.8 billion). 
The non-health direct expenditures primarily concern costs associated with the criminal justice system and crime victim costs, but also include a modest level of expenses for administration of the social welfare system. The total for 2002 was estimated at $36.4 billion. The largest detailed component of these costs is for state and federal corrections at $14.2 billion, which is primarily for the operation of prisons. Another $9.8 billion was spent on state and local police protection, followed by $6.2 billion for federal supply reduction initiatives. According to a report from the Agency for Healthcare Research and Quality (AHRQ), Medicaid was billed for a significantly higher number of hospitals stays for opioid drug overuse than Medicare or private insurance in 1993. By 2012, the differences were diminished. Over the same time, Medicare had the most rapid growth in number of hospital stays. Canada Substance abuse takes a financial toll on Canada's hospitals and the country as a whole. In the year 2011, around $267 million of hospital services were attributed to dealing with substance abuse problems. The majority of these hospital costs in 2011 were related to issues with alcohol. Additionally, in 2014, Canada also allocated almost $45 million towards battling prescription drug abuse, extending into the year 2019. Most of the financial decisions made on substance abuse in Canada can be attributed to the research conducted by the Canadian Centre on Substance Abuse (CCSA) which conduct both extensive and specific reports. In fact, the CCSA is heavily responsible for identifying Canada's heavy issues with substance abuse. Some examples of reports by the CCSA include a 2013 report on drug use during pregnancy and a 2015 report on adolescents' use of cannabis. Special populations Immigrants and refugees Immigrant and refugees have often been under great stress, physical trauma and depression and anxiety due to separation from loved ones often characterize the pre-migration and transit phases, followed by "cultural dissonance", language barriers, racism, discrimination, economic adversity, overcrowding, social isolation, and loss of status and difficulty obtaining work and fears of deportation are common. Refugees frequently experience concerns about the health and safety of loved ones left behind and uncertainty regarding the possibility of returning to their country of origin. For some, substance abuse functions as a coping mechanism to attempt to deal with these stressors. Immigrants and refugees may bring the substance use and abuse patterns and behaviors of their country of origin, or adopt the attitudes, behaviors, and norms regarding substance use and abuse that exist within the dominant culture into which they are entering. Street children Street children in many developing countries are a high-risk group for substance misuse, in particular solvent abuse. Drawing on research in Kenya, Cottrell-Boyce argues that "drug use amongst street children is primarily functional—dulling the senses against the hardships of life on the street—but can also provide a link to the support structure of the 'street family' peer group as a potent symbol of shared experience." Musicians In order to maintain high-quality performance, some musicians take chemical substances. Some musicians take drugs such as alcohol to deal with the stress of performing. As a group they have a higher rate of substance abuse. 
The most common chemical substance which is abused by pop musicians is cocaine, because of its neurological effects. Stimulants like cocaine increase alertness and cause feelings of euphoria, and can therefore make the performer feel as though they in some ways 'own the stage'. One way in which substance abuse is harmful for a performer (musicians especially) is if the substance being abused is aspirated. The lungs are an important organ used by singers, and addiction to cigarettes may seriously harm the quality of their performance. Smoking harms the alveoli, which are responsible for absorbing oxygen. Veterans Substance abuse can be a factor that affects the physical and mental health of veterans. Substance abuse may also harm personal and familial relationships, leading to financial difficulty. There is evidence to suggest that substance abuse disproportionately affects the homeless veteran population. A 2015 Florida study, which compared causes of homelessness between veterans and non-veteran populations in a self-reporting questionnaire, found that 17.8% of the homeless veteran participants attributed their homelessness to alcohol and other drug-related problems compared to just 3.7% of the non-veteran homeless group. A 2003 study found that homelessness was correlated with access to support from family/friends and services. However, this correlation was not true when comparing homeless participants who had a current substance-use disorders. The U.S. Department of Veterans Affairs provides a summary of treatment options for veterans with substance-use disorder. For treatments that do not involve medication, they offer therapeutic options that focus on finding outside support groups and "looking at how substance use problems may relate to other problems such as PTSD and depression". Sex and gender There are many sex differences in substance abuse. Men and women express differences in the short- and long-term effects of substance abuse. These differences can be credited to sexual dimorphisms in the brain, endocrine and metabolic systems. Social and environmental factors that tend to disproportionately affect women, such as child and elder care and the risk of exposure to violence, are also factors in the gender differences in substance abuse. Women report having greater impairment in areas such as employment, family and social functioning when abusing substances but have a similar response to treatment. Co-occurring psychiatric disorders are more common among women than men who abuse substances; women more frequently use substances to reduce the negative effects of these co-occurring disorders. Substance abuse puts both men and women at higher risk for perpetration and victimization of sexual violence. Men tend to take drugs for the first time to be part of a group and fit in more so than women. At first interaction, women may experience more pleasure from drugs than men do. Women tend to progress more rapidly from first experience to addiction than men. Physicians, psychiatrists and social workers have believed for decades that women escalate alcohol use more rapidly once they start. Once the addictive behavior is established for women they stabilize at higher doses of drugs than males do. When withdrawing from smoking women experience greater stress response. Males experience greater symptoms when withdrawing from alcohol. There are gender differences when it comes to rehabilitation and relapse rates. For alcohol, relapse rates were very similar for men and women. 
For women, marriage and marital stress were risk factors for alcohol relapse. For men, being married lowered the risk of relapse. This difference may be a result of gendered differences in excessive drinking. Alcoholic women are much more likely to be married to partners that drink excessively than are alcoholic men. As a result of this, men may be protected from relapse by marriage while women are at higher risk when married. However, women are less likely than men to experience relapse to substance use. When men experience a relapse to substance use, they more than likely had a positive experience prior to the relapse. On the other hand, when women relapse to substance use, they were more than likely affected by negative circumstances or interpersonal problems. See also ΔFosB Combined drug intoxication Drug addiction Handbook on Drug and Alcohol Abuse Harm reduction Hedonism International Day Against Drug Abuse and Illicit Trafficking List of controlled drugs in the United Kingdom United States drug overdose death rates and totals over time List of deaths from drug overdose and intoxication Low-threshold treatment programs Needle-exchange programme Nihilism Poly drug use Polysubstance abuse Responsible drug use Supervised injection site Wellness check References External links Dr. Robert Anda of the U.S. Centers for Disease Control describes the relation between childhood adversity and later ill-health, including substance abuse (video) Causes of death Addiction Abuse
Substance abuse
Biology
https://en.wikipedia.org/wiki/Finite%20difference
A finite difference is a mathematical expression of the form $f(x + b) - f(x + a)$. Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation. The difference operator, commonly denoted $\Delta$, is the operator that maps a function $f$ to the function $\Delta[f]$ defined by $\Delta[f](x) = f(x + 1) - f(x)$. A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences. In numerical analysis, finite differences are widely used for approximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives". Finite differences were introduced by Brook Taylor in 1715 and have also been studied as abstract self-standing mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms and work by others including Isaac Newton. The formal calculus of finite differences can be viewed as an alternative to the calculus of infinitesimals. Basic types Three basic types are commonly considered: forward, backward, and central finite differences. A forward difference, denoted $\Delta_h[f]$, of a function $f$ is a function defined as $\Delta_h[f](x) = f(x + h) - f(x)$. Depending on the application, the spacing $h$ may be variable or constant. When omitted, $h$ is taken to be 1; that is, $\Delta[f](x) = \Delta_1[f](x) = f(x + 1) - f(x)$. A backward difference uses the function values at $x$ and $x - h$, instead of the values at $x + h$ and $x$: $\nabla_h[f](x) = f(x) - f(x - h) = \Delta_h[f](x - h)$. Finally, the central difference is given by $\delta_h[f](x) = f(x + \tfrac{h}{2}) - f(x - \tfrac{h}{2})$. Relation with derivatives The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems. The derivative of a function $f$ at a point $x$ is defined by the limit $f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$. If $h$ has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written $\frac{f(x + h) - f(x)}{h} = \frac{\Delta_h[f](x)}{h}$. Hence, the forward difference divided by $h$ approximates the derivative when $h$ is small. The error in this approximation can be derived from Taylor's theorem. Assuming that $f$ is twice differentiable, we have $\frac{\Delta_h[f](x)}{h} - f'(x) = O(h)$ as $h \to 0$. The same formula holds for the backward difference: $\frac{\nabla_h[f](x)}{h} - f'(x) = O(h)$. However, the central (also called centered) difference yields a more accurate approximation. If $f$ is three times differentiable, $\frac{\delta_h[f](x)}{h} - f'(x) = O(h^2)$. The main problem with the central difference method, however, is that oscillating functions can yield zero derivative. If $f(nh) = 1$ for $n$ odd, and $f(nh) = 2$ for $n$ even, then $f'(nh) = 0$ if it is calculated with the central difference scheme. This is particularly troublesome if the domain of $f$ is discrete. See also Symmetric derivative. Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section). Higher-order differences In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for $f'(x + \tfrac{h}{2})$ and $f'(x - \tfrac{h}{2})$ and applying a central difference formula for the derivative of $f'$ at $x$, we obtain the central difference approximation of the second derivative of $f$: Second-order central $f''(x) \approx \frac{\delta_h^2[f](x)}{h^2} = \frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}$. Similarly we can apply other differencing formulas in a recursive manner.
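A short numerical sketch (illustrative; the test function sin and the step sizes are arbitrary choices, not taken from the article) shows the three difference quotients in action and the higher accuracy of the central difference.

```python
import numpy as np

# Forward, backward and central difference quotients as derivative approximations.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h               # error O(h)

def backward_diff(f, x, h):
    return (f(x) - f(x - h)) / h               # error O(h)

def central_diff(f, x, h):
    return (f(x + h / 2) - f(x - h / 2)) / h   # error O(h^2)

x, exact = 1.0, np.cos(1.0)                    # d/dx sin(x) = cos(x)
for h in (0.1, 0.01):
    print(h,
          abs(forward_diff(np.sin, x, h) - exact),
          abs(backward_diff(np.sin, x, h) - exact),
          abs(central_diff(np.sin, x, h) - exact))
# Shrinking h by a factor of 10 cuts the one-sided errors by about 10x,
# but the central-difference error by about 100x.
```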
Second order forward Second order backward More generally, the -th order forward, backward, and central differences are given by, respectively, Forward Backward Central These equations use binomial coefficients after the summation sign shown as . Each row of Pascal's triangle provides the coefficient for each value of . Note that the central difference will, for odd , have multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied substituting the average of and Forward differences applied to a sequence are sometimes called the binomial transform of the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using the Nörlund–Rice integral. The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large . The relationship of these higher-order differences with the respective derivatives is straightforward, Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order . However, the combination approximates up to a term of order . This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below. If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences. Polynomials For a given polynomial of degree , expressed in the function , with real numbers and and lower order terms (if any) marked as : After pairwise differences, the following result can be achieved, where is a real number marking the arithmetic difference: Only the coefficient of the highest-order term remains. As this result is constant with respect to , any further pairwise differences will have the value . Inductive proof Base case Let be a polynomial of degree : This proves it for the base case. Inductive step Let be a polynomial of degree where and the coefficient of the highest-order term be . Assuming the following holds true for all polynomials of degree : Let be a polynomial of degree . With one pairwise difference: As , this results in a polynomial of degree , with as the coefficient of the highest-order term. Given the assumption above and pairwise differences (resulting in a total of pairwise differences for ), it can be found that: This completes the proof. Application This identity can be used to find the lowest-degree polynomial that intercepts a number of points where the difference on the x-axis from one point to the next is a constant . For example, given the following points: We can use a differences table, where for all cells to the right of the first , the following relation to the cells in the column immediately to the left exists for a cell , with the top-leftmost cell being at coordinate : To find the first term, the following table can be used: This arrives at a constant . The arithmetic difference is , as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised this is a polynomial of degree . Thus, using the identity above: Solving for , it can be found to have the value . Thus, the first term of the polynomial is . 
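The n-th order forward difference and the constant-difference property of polynomials discussed above can be illustrated with a few lines of code (a sketch; the particular polynomial and step size are arbitrary choices).

```python
import math

# n-th order forward difference:
#   Delta_h^n f(x) = sum_{i=0}^{n} (-1)^i * C(n, i) * f(x + (n - i) * h)
def nth_forward_difference(f, x, h, n):
    return sum((-1) ** i * math.comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

# For a degree-n polynomial with leading coefficient a, the n-th difference is
# the constant a * n! * h**n, independent of x.
p = lambda x: 2 * x**3 - 4 * x + 1   # degree 3, leading coefficient a = 2
h = 0.5
for x in (0.0, 1.0, 2.5):
    print(nth_forward_difference(p, x, h, 3))   # always 2 * 3! * 0.5**3 = 1.5
```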
Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again: Here, the constant is achieved after only two pairwise differences, thus the following result: Solving for , which is , the polynomial's second term is . Moving on to the next term, by subtracting out the second term: Thus the constant is achieved after only one pairwise difference: It can be found that and thus the third term of the polynomial is . Subtracting out the third term: Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant . Thus, the lowest-degree polynomial intercepting all the points in the first table is found: Arbitrarily sized kernels Using linear algebra one can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that the Taylor expansion of the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid. This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side. Finite difference approximations for non-standard (and even non-integer) stencils given an arbitrary stencil and a desired derivative order may be constructed. Properties For all positive and Leibniz rule: In differential equations An important application of finite differences is in numerical analysis, especially in numerical differential equations, which aim at the numerical solution of ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods. Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering, fluid mechanics, etc. Newton's series The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Gregory–Newton interpolation formula (named after Isaac Newton and James Gregory), first published in his Principia Mathematica in 1687, namely the discrete analog of the continuous Taylor expansion, which holds for any polynomial function and for many (but not all) analytic functions. (It does not hold when is exponential type . This is easily seen, as the sine function vanishes at integer multiples of ; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression is the binomial coefficient, and is the "falling factorial" or "lower factorial", while the empty product is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of of the generalization below. Note the formal correspondence of this result to Taylor's theorem. Historically, this, as well as the Chu–Vandermonde identity, (following from it, and corresponding to the binomial theorem), are included in the observations that matured to the system of umbral calculus. 
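The "arbitrarily sized kernels" construction described earlier in this section, choosing weights so that the Taylor expansion of a weighted sum of samples matches the Taylor expansion of the desired derivative, reduces to a small linear system. The sketch below is a minimal Python/NumPy version under the usual moment conditions (the weights c_j on nodes x + s_j · h are required to have k-th moment equal to m! when k = m and 0 otherwise); the stencil offsets in the examples are illustrative, and by this normalization the returned weights approximate h^m · f^(m)(x), so in use they are divided by h^m.

import numpy as np
from math import factorial

def stencil_coefficients(offsets, m):
    # Weights c_j such that sum_j c_j * f(x + s_j*h) ~ h**m * f^(m)(x).
    # Moment conditions from the Taylor expansion:
    #   sum_j c_j * s_j**k = m!  if k == m, else 0,  for k = 0 .. len(offsets)-1.
    s = np.asarray(offsets, dtype=float)
    n = len(s)
    A = np.vander(s, n, increasing=True).T   # A[k, j] = s_j**k
    b = np.zeros(n)
    b[m] = factorial(m)
    return np.linalg.solve(A, b)

# Central 3-point stencil for the second derivative: weights [1, -2, 1] (divided by h**2 in use).
print(stencil_coefficients([-1, 0, 1], 2))
# One-sided 3-point stencil for the first derivative: weights [-1.5, 2, -0.5] (divided by h in use).
print(stencil_coefficients([0, 1, 2], 1))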
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (see Holstein–Primakoff transformation), bosonic operator functions or discrete counting statistics. To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence One can find a polynomial that reproduces these values, by first computing a difference table, and then substituting the differences that correspond to (underlined) into the formula as follows, For the case of nonuniform steps in the values of , Newton computes the divided differences, the series of products, and the resulting polynomial is the scalar product, In analysis with -adic numbers, Mahler's theorem states that the assumption that is a polynomial function can be weakened all the way to the assumption that is merely continuous. Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist. The Newton series, together with the Stirling series and the Selberg series, is a special case of the general difference series, all of which are defined in terms of suitably scaled forward differences. In a compressed and slightly more general form and equidistant nodes the formula reads Calculus of finite differences The forward difference can be considered as an operator, called the difference operator, which maps the function to . This operator amounts to where is the shift operator with step , defined by and is the identity operator. The finite difference of higher orders can be defined in recursive manner as Another equivalent definition is The difference operator is a linear operator, as such it satisfies It also satisfies a special Leibniz rule: Similar Leibniz rules hold for the backward and central differences. Formally applying the Taylor series with respect to , yields the operator equation where denotes the conventional, continuous derivative operator, mapping to its derivative The expansion is valid when both sides act on analytic functions, for sufficiently small ; in the special case that the series of derivatives terminates (when the function operated on is a finite polynomial) the expression is exact, for all finite stepsizes, Thus and formally inverting the exponential yields This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to mentioned at the end of the section . The analogous formulas for the backward and central difference operators are The calculus of finite differences is related to the umbral calculus of combinatorics. 
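A compact way to see the Gregory–Newton forward-difference formula in action is to build the difference table and evaluate the series of binomial-coefficient terms directly. The Python sketch below assumes unit steps and samples at x = 0, 1, 2, ...; the sample values (2, 2, 4, plausibly the first few terms of the doubled Fibonacci sequence mentioned above, used here only as an assumed illustration) yield the interpolating quadratic x² − x + 2.

def newton_forward_interpolation(ys):
    # Interpolate the samples ys given at x = 0, 1, ..., len(ys) - 1
    # with the Gregory-Newton forward-difference formula (unit steps).
    diffs, row = [], list(ys)
    while row:
        diffs.append(row[0])                                    # leading entries of the table
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]

    def p(x):
        total, binom = 0.0, 1.0        # binom tracks the binomial coefficient C(x, k)
        for k, d in enumerate(diffs):
            total += d * binom
            binom *= (x - k) / (k + 1)  # C(x, k+1) = C(x, k) * (x - k) / (k + 1)
        return total
    return p

p = newton_forward_interpolation([2, 2, 4])
print([p(x) for x in range(3)])   # [2.0, 2.0, 4.0]: the samples are reproduced exactly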
This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs ( limits), A large number of formal differential relations of standard calculus involving functions    thus systematically map to umbral finite-difference analogs involving For instance, the umbral analog of a monomial is a generalization of the above falling factorial (Pochhammer k-symbol), so that hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function    in such symbols), and so on. For example, the umbral sine is As in the continuum limit, the eigenfunction of    also happens to be an exponential, and hence Fourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols. Thus, for instance, the Dirac delta function maps to its umbral correspondent, the cardinal sine function and so forth. Difference equations can often be solved with techniques very similar to those for solving differential equations. The inverse operator of the forward difference operator, so then the umbral integral, is the indefinite sum or antidifference operator. Rules for calculus of finite difference operators Analogous to rules for finding the derivative, we have: Constant rule: If is a constant, then Linearity: If and are constants, All of the above rules apply equally well to any difference operator as to , including and Product rule: Quotient rule: or Summation rules: See references. Generalizations A generalized finite difference is usually defined as where is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalization is making coefficients depend on point : , thus considering weighted finite difference. Also one may make the step depend on point : . Such generalizations are useful for constructing different modulus of continuity. The generalized difference can be seen as the polynomial rings . It leads to difference algebras. Difference operator generalizes to Möbius inversion over a partially ordered set. As a convolution operator: Via the formalism of incidence algebras, difference operators and other Möbius inversion can be represented by convolution with a function on the poset, called the Möbius function ; for the difference operator, is the sequence . Multivariate finite differences Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables. Some partial derivative approximations are: Alternatively, for applications in which the computation of is the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case is since the only values to compute that are not already needed for the previous four equations are and . See also References Richardson, C. H. (1954): An Introduction to the Calculus of Finite Differences (Van Nostrand (1954) online copy Mickens, R. E. (1991): Difference Equations: Theory and Applications (Chapman and Hall/CRC) External links Table of useful finite difference formula generated using Mathematica D. 
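Two of the operator properties listed above, the modified Leibniz (product) rule for the forward difference and the fact that the indefinite sum inverts the difference operator, are easy to verify mechanically. A minimal Python sketch, assuming unit step h = 1, the product rule in one common form Δ(f·g)(x) = f(x + h)·Δg(x) + Δf(x)·g(x), and arbitrary test functions:

h = 1.0

def delta(f):
    # forward difference operator with step h: (delta f)(x) = f(x + h) - f(x)
    return lambda x: f(x + h) - f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x + 1

# Discrete product (Leibniz) rule in the form stated in the lead-in.
x = 2.0
lhs = delta(lambda t: f(t) * g(t))(x)
rhs = f(x + h) * delta(g)(x) + delta(f)(x) * g(x)
print(lhs, rhs)   # both 62.0 for these choices

# The indefinite sum (antidifference) inverts the difference operator:
# summing delta f over a range telescopes to f(b) - f(a).
a, b = 0, 10
print(sum(delta(f)(a + k * h) for k in range(b - a)), f(b) - f(a))   # both 100.0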
Gleich (2005), Finite Calculus: A Tutorial for Solving Nasty Sums Discrete Second Derivative from Unevenly Spaced Points Numerical differential equations Mathematical analysis Factorial and binomial topics Linear operators in calculus Numerical analysis Non-Newtonian calculus
Finite difference
Mathematics
3,296
20,596,619
https://en.wikipedia.org/wiki/Cell%20and%20Tissue%20Research
Cell and Tissue Research presents regular articles and reviews in the areas of molecular, cell, stem cell biology and tissue engineering. In particular, the journal provides a forum for publishing data that analyze the supracellular, integrative actions of gene products and their impact on the formation of tissue structure and function. Articles emphasize structure–function relationships as revealed by recombinant molecular technologies. The coordinating editor of the journal is Klaus Unsicker. Subjects covered in journal Areas of research frequently published in Cell and Tissue Research include: neurobiology, neuroendocrinology, endocrinology, reproductive biology, skeletal and immune systems, and development. Editors The coordinating editor of the journal is Klaus Unsicker, of the University of Heidelberg. Section editors are K. Unsicker, neurobiology/sense organs/endocrinology; M. Furutani-Seiki, development/growth/regeneration; W.W. Franke, molecular/cell biology; Andreas Oksche and Horst-Werner Korf, neuroendocrinology; T. Pihlajaniemi, extracellular matrix; D. Furst, muscle; Joseph Bonventre, kidney and related subjects; P. Sutovsky, reproductive biology; B. Singh, immunology/hematology; and V. Hartenstein, invertebrates. See also Autophagy (journal) Cell Biology International Cell Cycle (journal) References External links Cell & Tissue Research Springer Science+Business Media SpringerLink.com English-language journals Molecular and cellular biology journals Academic journals established in 1924
Cell and Tissue Research
Chemistry
326
34,076,639
https://en.wikipedia.org/wiki/Betongtavlen
Betongtavlen is a Norwegian architecture and civil engineering award issued by the National Associations of Norwegian Architects and the Norwegian Concrete Association. The award is issued to a structure "where concrete is used in an environmentally, esthetically and technically excellent way". The award was first issued in 1961 for Bakkehaugen Church and has as of 2011 been awarded 53 times. The award is not necessarily awarded every year, and up to four structures have been awarded in a year. Structures awarded prizes include office buildings, campus buildings, ski jumps, houses, hotels, bridges, tunnels, dams, oil platforms, industrial facilities, viewpoints and cultural institutions. Prizes are not necessarily awarded immediately after the structure was completed—for instance, Elgeseter Bridge was completed in 1951 but awarded the prize in 2006. List of awards The following is a list of the awards, including the year it was awarded, the structure, the credited architects and engineering firms or people, the type of structure and the municipality in which it is located. See also List of engineering awards References Civil engineering awards Architecture awards 1961 establishments in Norway Norwegian awards
Betongtavlen
Engineering
228
8,678,512
https://en.wikipedia.org/wiki/Toftness%20device
The Toftness Radiation Detector was a quack instrument used by some chiropractors. It was patented by Irwing N. Toftness in 1971, and was banned from use in the United States in 1982. Toftness claimed that it detected electromagnetic radiation emanating from vertebral subluxations. The device had multiple forms, but a common configuration consisted of a plastic cylinder with a series of plastic lenses inside, as well as a clear plastic "detection plate". The operator would rub their finger against the detection plate while the device was held close to an area of the spine, and report the degree of perceived resistance against the movement of their fingers. An increase in perceived resistance would indicate which area of the body required chiropractic manipulation. Specifically, Toftness made the claim in his 1971 patent that "what is sensed by the operator is a friction or dragging sensation which retards the passage of a finger or fingers over the surface of the deflection plate." Toftness devices were banned by the United States District Court in Wisconsin in January 1982. The Court issued a permanent nationwide injunction against the manufacture, promotion, sale, lease, distribution, shipping, delivery, or use of the Toftness Radiation Detector, or any product which utilizes the same principles as the Toftness Radiation Detector. The United States Court of Appeals for the Seventh Circuit upheld the decision in 1984. According to the United States Food and Drug Administration, the Toftness Radiation Detectors were misbranded under the Food, Drug, and Cosmetic Act because they could not be used safely or effectively for their intended purposes. The devices were purportedly being used to assist with the diagnosis and treatment of injuries, without FDA approval. In 2013, David Toftness, nephew of Irwing N. Toftness, and the Toftness Post-Graduate School of Chiropractic were fined for shipping the devices across state borders. See also Chiropractic Dowsing N-ray Pathological science References External links Disciplinary Action against Harold J. Dykema, D.C. Pseudoscience Chiropractic Radioactive quackery
Toftness device
Chemistry
436
8,771,718
https://en.wikipedia.org/wiki/Instant%20Insanity
Instant Insanity is the name given by Parker Brothers to their 1967 version of a puzzle which has existed since antiquity, and which has been marketed by many toy and puzzle makers under a variety of names, including: Devil's Dice (Pressman); DamBlocks (Schaper); Logi-Qubes (Schaeffer); Logi Cubes (ThinkinGames); Daffy Dots (Reiss); Those Blocks (Austin); PsykoNosis (A to Z Ideas), and many others. The puzzle consists of four cubes with faces colored with four colors (commonly red, blue, green, and white). The objective of the puzzle is to stack these cubes in a column so that each side of the stack (front, back, left, and right) shows each of the four colors. The distribution of colors on each cube is unique, and the order in which the four cubes are stacked is irrelevant as long as each side shows every color. This problem has a graph-theoretic solution in which a graph with four vertices labeled B, G, R, W (for blue, green, red, and white) can be used to represent each cube; there is an edge between two vertices if the two colors are on the opposite sides of the cube, and a loop at a vertex if the opposite sides have the same color. Each individual cube can be placed in one of 24 positions, by placing any one of the six faces upward and then giving the cube up to three quarter-turns. Once the stack is formed, it can be rotated up to three quarter-turns without altering the orientation of any cube relative to the others. Ignoring the order in which the cubes are stacked, the total possible number of arrangements is therefore 3,456 (24 * 24 * 24 * 24 / (4 * 4!)). The puzzle is studied by D. E. Knuth in an article on estimating the running time of exhaustive search procedures with backtracking. Every position of the puzzle can be solved in eight moves or less. The first known patented version of the puzzle was created by Frederick Alvin Schossow in 1900, and marketed as the Katzenjammer puzzle. The puzzle was recreated by Franz Owen Armbruster, also known as Frank Armbruster, and independently published by Parker Brothers and Pressman, in 1967. Over 12 million puzzles were sold by Parker Brothers alone. The puzzle is similar or identical to numerous other puzzles (e.g., The Great Tantalizer, circa 1940, and the most popular name prior to Instant Insanity). One version of the puzzle is currently being marketed by Winning Moves Games USA. Solution Given the already colored cubes and the four distinct colors are (Red, Green, Blue, White), we will try to generate a graph which gives a clear picture of all the positions of colors in all the cubes. The resultant graph will contain four vertices one for each color and we will number each edge from one through four (one number for each cube). If an edge connects two vertices (Red and Green) and the number of the edge is three, then it means that the third cube has Red and Green faces opposite to each other. To find a solution to this problem we need the arrangement of four faces of each of the cubes. To represent the information of two opposite faces of all the four cubes we need a directed subgraph instead of an undirected one because two directions can only represent two opposite faces, but not whether a face should be at the front or at the back. So if we have two directed subgraphs, we can actually represent all the four faces (which matter) of all the four cubes. First directed graph will represent the front and back faces. Second directed graph will represent the left and right faces. 
We cannot randomly select any two subgraphs - so what are the criteria for selecting? We need to choose graphs such that: the two subgraphs have no edges in common, because if there is an edge which is common that means at least one cube has the pair of opposite faces of exactly the same color, that is, if a cube has Red and Blue as its front and back faces, then the same is true for its left and right faces. a subgraph contains only one edge from each cube, because the sub graph has to account for all the cubes and one edge can completely represent a pair of opposite faces. a subgraph can contain only vertices of degree two, because a degree of two means a color can only be present at faces of two cubes. Easy way to understand is that there are eight faces to be equally divided into four colors. So, two per color. After understanding these restrictions if we try to derive the two sub graphs, we may end up with one possible set as shown in Image 3. Each edge line style represents a cube. The upper subgraph lets one derive the left and the right face colors of the corresponding cube. E.g.: The solid arrow from Red to Green says that the first cube will have Red in the left face and Green at the Right. The dashed arrow from Blue to Red says that the second cube will have Blue in the left face and Red at the Right. The dotted arrow from White to Blue says that the third cube will have White in the left face and Blue at the Right. The dash-dotted arrow from Green to White says that the fourth cube will have Green in the left face and White at the Right. The lower subgraph lets one derive the front and the back face colors of the corresponding cube. E.g.: The solid arrow from White to Blue says that the first cube will have White in the front face and Blue at the Back. The dashed arrow from Green to White says that the second cube will have Green in the front face and White at the Back. The dotted arrow from Blue to Red says that the third cube will have Blue in the front face and Red at the Back. The dash-dotted arrow from Red to Green says that the fourth cube will have Red in the front face and Green at the Back. The third image shows the derived stack of cube which is the solution to the problem. It is important to note that: You can arbitrarily label the cubes as one such solution will render 23 more by swapping the positions of the cubes but not changing their configurations. The two directed subgraphs can represent front-to-back, and left-to-right interchangeably, i.e. one of them can represent front-to-back or left-to-right. This is because one such solution also render 3 more just by rotating. Adding the effect in 1., we generate 95 more solutions by providing only one. To put it into perspective, such four cubes can generate 243 × 3 = 41472 configurations. It is not important to take notice of the top and the bottom of the stack of cubes. Generalizations Given n cubes, with the faces of each cube coloured with one of n colours, determining if it is possible to stack the cubes so that each colour appears exactly once on each of the 4 sides of the stack is NP-complete. The cube stacking game is a two-player game version of this puzzle. Given an ordered list of cubes, the players take turns adding the next cube to the top of a growing stack of cubes. The loser is the first player to add a cube that causes one of the four sides of the stack to have a color repeated more than once. 
Robertson and Munro proved that this game is PSPACE-complete, which illustrates the observation that NP-complete puzzles tend to lead to PSPACE-complete games. References Computational problems in graph theory Combination puzzles NP-complete problems
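Because each cube has only 24 orientations, the stacking condition described above can also be checked by brute force rather than through the graph construction. The Python sketch below is a minimal solver under a few stated assumptions: cubes are written as (up, down, front, back, left, right) tuples of color labels, the 24 orientations are generated by closing under two 90-degree rotations, and the four example cubes are made up to be trivially solvable; they are not the coloring of the commercial Instant Insanity set.

from itertools import product

def rot_x(c):   # tip forward about the left-right axis: U->F, F->D, D->B, B->U
    U, D, F, B, L, R = c
    return (B, F, U, D, L, R)

def rot_y(c):   # spin about the vertical axis: F->L, L->B, B->R, R->F
    U, D, F, B, L, R = c
    return (U, D, R, L, F, B)

def all_orientations(cube):
    # Close the starting orientation under the two generators; for a cube with
    # distinct face labels this yields all 24 rotations (repeated colors merely
    # collapse duplicates, which is harmless for the search).
    seen, frontier = {cube}, [cube]
    while frontier:
        c = frontier.pop()
        for r in (rot_x(c), rot_y(c)):
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return list(seen)

def solve(cubes):
    # Try every combination of orientations; a stack works if each of the four
    # visible sides (front, back, left, right = indices 2..5) shows 4 distinct colors.
    for stack in product(*(all_orientations(c) for c in cubes)):
        if all(len({cube[side] for cube in stack}) == 4 for side in (2, 3, 4, 5)):
            return stack
    return None

cubes = [("R", "R", "R", "G", "B", "W"),   # made-up, trivially solvable example set
         ("G", "G", "G", "B", "W", "R"),
         ("B", "B", "B", "W", "R", "G"),
         ("W", "W", "W", "R", "G", "B")]
print(solve(cubes))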
Instant Insanity
Mathematics
1,599
78,136,945
https://en.wikipedia.org/wiki/Cancer%20exodus%20hypothesis
The cancer exodus hypothesis establishes that circulating tumor cell clusters (CTC clusters) maintain their multicellular structure throughout the metastatic process. It was previously thought that these clusters must dissociate into single cells during metastasis. According to the hypothesis, CTC clusters intravasate (enter the bloodstream), travel through circulation as a cohesive unit, and extravasate (exit the bloodstream) at distant sites without disaggregating, significantly enhancing their metastatic potential. This concept is considered a key advancement in understanding of cancer biology and CTCs role in cancer metastasis. Mechanism Traditionally, it was believed that CTC clusters needed to dissociate into individual cells during their journey through the bloodstream to seed secondary tumors. However, recent studies show that CTC clusters can travel through the bloodstream intact, enabling them to perform every step of metastasis while maintaining their group/cluster structure. The cancer exodus hypothesis asserts that CTC clusters have several distinct advantages that increase their metastatic potential: Higher metastatic efficiency: CTC clusters have been shown to possess superior seeding capabilities at distant sites compared to single CTCs. Survival and proliferation: The collective nature of CTC clusters allows them to share resources and offer intercellular support, improving their overall survival rates in the bloodstream. Resistance to treatment: CTC clusters exhibit unique gene expression profiles that contribute to their ability to evade certain cancer therapies, making them more resistant than individual tumor cells. Clinical relevance The cancer exodus hypothesis offers important insights into how metastasis occurs and highlights the significance of CTC clusters in cancer progression. Detecting and analyzing CTC clusters through liquid biopsies could offer valuable information about the aggressiveness and metastatic potential of cancers. This information is particularly useful for identifying patients who may benefit from more aggressive treatment strategies. Characterization The hypothesis was developed due to several key studies, which have demonstrated the ability of CTC clusters to: Intravasate and travel as clusters: Research has shown that CTC clusters can enter the bloodstream as a group, travel through the circulatory system intact, and maintain their cluster phenotype during transit. Extravasate through angiopellosis: A key finding of the hypothesis is that CTC clusters do not need to disaggregate to exit the bloodstream. Instead, they can undergo a process called angiopellosis, in which entire clusters migrate out of the blood vessels as a group, retaining their multicellular form. These findings underscore the critical role of CTC clusters in driving the metastatic cascade and suggest that CTC clusters could serve as important biomarkers in cancer diagnosis, prognosis, and treatment planning. Additionally, understanding the mechanisms that allow CTC clusters to retain their structure and survive in circulation opens new avenues for targeted cancer therapies designed to disrupt this process. Future directions As research into the cancer exodus hypothesis progresses, new therapeutic strategies could emerge to specifically target CTC clusters. Blocking their formation, disrupting their cohesion, or preventing their ability to survive in the bloodstream could offer new ways to prevent metastasis in aggressive cancers. 
Continued studies will be essential to further elucidate the biological pathways involved in CTC cluster-mediated metastasis and develop potential treatment interventions. References Cancer pathology Oncology Biology Medicine
Cancer exodus hypothesis
Biology
677
31,794,459
https://en.wikipedia.org/wiki/Simple%20Model%20of%20the%20Atmospheric%20Radiative%20Transfer%20of%20Sunshine
The Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS) is a computer program designed to evaluate the surface solar irradiance components in the shortwave spectrum (spectral range 280 to 4000 nm) under cloudless conditions. The program, written in FORTRAN, relies on simplifications of the equation of radiative transfer to allow extremely fast calculations of the surface irradiance. The irradiance components can be incident on a horizontal, a fixed-tilt or a 2-axis tracking surface. SMARTS can be used for example to evaluate the energy production of solar panels under variable atmospheric conditions. Many other applications are possible. History The first versions of SMARTS were developed by Dr. Gueymard while he was at the Florida Solar Energy Center. The model employed a structure similar to the earlier SPCTRAL2 model, still offered by the National Renewable Energy Laboratory (NREL), but with finer spectral resolution, as well as updated extraterrestrial spectrum and transmittance functions. The latter consisted mostly of parameterizations of results obtained with MODTRAN. The latest versions (2.9.2 and 2.9.5) of SMARTS are hosted by NREL. The program can be freely downloaded but is subject to a License Agreement, which limits its use to civilian research and education. For new users, an optional graphical interface (for Windows OS only) is available to ease the preparation of the input file. Program packages are available for the Windows, Macintosh, and Linux platforms. Applications SMARTS version 2.9.2 was selected to prepare various reference terrestrial spectra, which have been standardized by ASTM under the designations G173, G177 and G197, and by IEC under 60904-3. The latter standard represents the spectral distribution of global irradiance incident on a 37° tilted surface facing the sun at an air mass of 1.5. The integrated irradiance amounts to 1000 W/m2. This standard spectrum is mandated by IEC to evaluate the rating of photovoltaic (PV) solar cells in the absence of optical concentration. PV cells requiring concentration referred to as CPV cells are normally evaluated against the direct spectrum at air mass 1.5 described in ASTM G173. This spectrum integrates to 900 W/m2. The reasons behind the selection of the atmospheric and environmental conditions that eventually led to the development of ASTM G173 are described in a scientific paper. SMARTS version 2.9.2 is considered an adjunct standard to G173 by ASTM. Further details on the use of SMARTS for PV or CPV applications are available in other publications. In particular, the model is frequently used to evaluate real-world efficiencies of PV or CPV modules and evaluate mismatch factors. The reference spectra in ASTM G197 have been developed to evaluate the optical characteristics of fenestration devices when mounted vertically (windows) or on structures inclined at 20° from the horizontal (skylights on roofs). The reference spectrum in ASTM G177 is limited to the global irradiance in the ultraviolet (280–400 nm), and corresponds to "high-UV" conditions frequently encountered in arid and elevated sites, such as in the southwest USA. This spectrum is to be used as a reference for testing the degradation and durability of materials. Features The program uses various inputs that describe the atmospheric conditions for which the irradiance spectra are to be calculated. Ideal conditions, based on various possible model atmospheres and aerosol models, can be selected by the user. 
Alternatively, realistic conditions can also be specified as inputs, based for example on aerosol and water vapor data provided by a sunphotometer. In turn, these realistic conditions are necessary to compare the modeled spectra to those measured by a spectroradiometer. Reciprocally, since the model is well validated, this comparative method can be used as guidance to detect malfunction or miscalibration of instruments. The original spectral resolution of the model is 0.5 nm in the UV, 1 nm in the visible and near-infrared, and 5 nm above 1700 nm. To facilitate comparisons between the modeled spectra and actual measurements at a different spectral resolution, the SMARTS post-processor may be used to smooth the modeled spectra and adapt them to simulate the optical characteristics of a specific spectroradiometer. Additionally, the model provides the spectrally-integrated (or "broadband") irradiance values, which can then be compared to measurements from a pyrheliometer (for direct radiation) or pyranometer (for diffuse or global radiation) at any instant. Besides the atmospheric conditions, another important input is the solar geometry, which can be defined by the sun position (zenith angle and azimuth), the air mass, or by specifying the date, time, and location. Optional calculations include the circumsolar irradiance, illuminance components, photosynthetically active radiation (PAR) components, and irradiance calculations in the UV, involving a variety of action spectra (such as that corresponding to the erythema). The program outputs its results to text files, which can be further imported and processed into spreadsheets. A graphic interface, providing plots of the calculated spectra using National Instruments' LabVIEW software, is also available. See also Air mass (solar energy) Atmosphere of Earth Concentrated photovoltaics Diffuse sky radiation Electromagnetic radiation and health Illuminance Insolation Irradiance List of atmospheric radiative transfer codes MODTRAN Rayleigh scattering Sunlight Sunshine References External links Official website : http://www.solarconsultingservices.com/smarts.php Download website : http://www.nrel.gov/rredc/smarts/ Electromagnetic radiation Atmospheric radiative transfer codes
Simple Model of the Atmospheric Radiative Transfer of Sunshine
Physics
1,205
7,125,109
https://en.wikipedia.org/wiki/Nuclear%20transport
Nuclear transport refers to the mechanisms by which molecules move across the nuclear membrane of a cell. The entry and exit of large molecules from the cell nucleus is tightly controlled by the nuclear pore complexes (NPCs). Although small molecules can enter the nucleus without regulation, macromolecules such as RNA and proteins require association with transport factors known as nuclear transport receptors, like karyopherins called importins to enter the nucleus and exportins to exit. Nuclear import Protein that must be imported to the nucleus from the cytoplasm carry nuclear localization signals (NLS) that are bound by importins. An NLS is a sequence of amino acids that acts as a tag. They are most commonly hydrophilic sequences containing lysine and arginine residues, although diverse NLS sequences have been documented. Proteins, transfer RNA, and assembled ribosomal subunits are exported from the nucleus due to association with exportins, which bind signaling sequences called nuclear export signals (NES). The ability of both importins and exportins to transport their cargo is regulated by the Ran small G-protein. G-proteins are GTPase enzymes that bind to a molecule called guanosine triphosphate (GTP) which they then hydrolyze to create guanosine diphosphate (GDP) and release energy. The RAN enzymes exist in two nucleotide-bound forms: GDP-bound and GTP-bound. In its GTP-bound state, Ran is capable of binding importins and exportins. Importins release cargo upon binding to RanGTP, while exportins must bind RanGTP to form a ternary complex with their export cargo. The dominant nucleotide binding state of Ran depends on whether it is located in the nucleus (RanGTP) or the cytoplasm (RanGDP). Nuclear export Nuclear export roughly reverses the import process; in the nucleus, the exportin binds the cargo and Ran-GTP and diffuses through the pore to the cytoplasm, where the complex dissociates. Ran-GTP binds GAP and hydrolyzes GTP, and the resulting Ran-GDP complex is restored to the nucleus where it exchanges its bound ligand for GTP. Hence, whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo. A specialized mRNA exporter protein moves mature mRNA to the cytoplasm after post-transcriptional modification is complete. This translocation process is actively dependent on the Ran protein, although the specific mechanism is not yet well understood. Some particularly commonly transcribed genes are physically located near nuclear pores to facilitate the translocation process. Export of tRNA is also dependent on the various modifications it undergoes, thus preventing export of improperly functioning tRNA. This quality control mechanism is important due to tRNA's central role in translation, where it is involved in adding amino acids to a growing peptide chain. The tRNA exporter in vertebrates is called exportin-t. Exportin-t binds directly to its tRNA cargo in the nucleus, a process promoted by the presence of RanGTP. Mutations that affect tRNA's structure inhibit its ability to bind to exportin-t, and consequentially, to be exported, providing the cell with another quality control step. As described above, once the complex has crossed the envelope it dissociates and releases the tRNA cargo into the cytosol. Protein shuttling Many proteins are known to have both NESs and NLSs and thus shuttle constantly between the nucleus and the cytosol. 
In certain cases one of these steps (i.e., nuclear import or nuclear export) is regulated, often by post-translational modifications. Nuclear import limits the propagation of large proteins expressed in skeletal muscle fibers and possibly other syncytial tissues, maintaining localized gene expression in certain nuclei. Combining both NESs and NLSs promotes propagation of large proteins to more distant nuclei in muscle fibers. Protein shuttling can be assessed using a heterokaryon fusion assay. References External links Nuclear Transport animations Nuclear Transport illustrations Cell biology
Nuclear transport
Biology
853
1,932,759
https://en.wikipedia.org/wiki/Making%20out
Making out is a term of American origin dating back to at least 1949, and is used to refer to kissing, including extended French kissing or necking (heavy kissing of the neck, and above), or to acts of non-penetrative sex such as heavy petting ("intimate contact, just short of sexual intercourse"). Equivalent terms in other dialects include the British English getting off and the Hiberno-English shifting. When performed in a stationary vehicle, it has been euphemistically referred to as parking, coinciding with American car culture. History The sexual connotations of the phrase "make out" appear to have developed in the 1930s and 1940s from the phrase's other meaning: "to succeed". Originally, it meant "to seduce" or "to have sexual intercourse". "Petting" ("making out" or foreplay) was popularized in the 1920s, as youth culture challenged earlier Victorian era strictures on sexuality with the rise in popularity of "petting parties". At these parties, promiscuity became more commonplace, breaking from the traditions of monogamy or courtship with their expectations of eventual marriage. This was typical on college campuses, where young people "spent a great deal of unsupervised time in mixed company", and theaters. In the 1950s, Life magazine depicted petting parties as "that famed and shocking institution of the '20s", and commenting on the Kinsey Report, said that they have been "very much with us ever since". In the Kinsey Report of 1950, there was an indicated increase in premarital intercourse for the generation of the 1920s. Kinsey found that of women born before 1900, 14 percent acknowledged premarital sex before the age of 25, while those born after 1900 were two and a half times more likely (36 percent) to have premarital intercourse and experience an orgasm. The Continental zeitgeist is illustrated by a letter that Sigmund Freud wrote to Sándor Ferenczi in 1931, playfully admonishing him to stop kissing his patients; Freud warned him lest "a number of independent thinkers in matters of technique will say to themselves: Why stop at a kiss? Certainly one gets further when one adopts 'pawing' as well, which, after all, doesn't make a baby. And then bolder ones will come along who will go further, to peeping and showing – and soon we shall have accepted in the technique of analysis the whole repertoire of demi-viergerie and petting parties". In the years following World War I, necking and petting became accepted behavior in mainstream American culture as long as the partners were dating. A 1956 study defined necking as "kissing and light caressing above the neck" and petting as "more intimate contact with the erogenous zones, short of sexual intercourse". Alfred Kinsey's definition of petting was "deliberately touching body parts above or below the waist", compared to necking which only involved general body contact. Characteristics Making out is usually considered an expression of romantic affection or sexual attraction. An episode of making out is frequently referred to as a "make-out session" or simply "making out", depending on the speaker's vernacular. It covers a wide range of sexual behavior, and means different things to different age groups in different parts of the United States. It typically refers to kissing, including prolonged, passionate, open-mouth kissing (also known as French kissing), and intimate skin-to-skin contact. 
The term can also refer to other forms of foreplay such as heavy petting (sometimes simply called petting), which typically involves some genital stimulation, but usually not the direct act of penetrative sexual intercourse. The perceived significance of making out may be affected by the age and relative sexual experience of the participants. Teenagers sometimes play party games in which making out is the main activity as an act of exploration. Games in this category include seven minutes in heaven and spin the bottle. Teenagers may have had social gatherings in which making out was the predominant event. In the United States, these events were referred to as "make-out parties" and may have been confined to a specific area, called the "make-out room". These make-out parties were generally not regarded as sex parties, though heavy petting may have been involved, depending on the group. See also Sexual slang References Sexual euphemisms Sexual acts Kissing sv:Hångel
Making out
Biology
922
56,812,359
https://en.wikipedia.org/wiki/Deplatforming
Deplatforming, also called no-platforming, is a form of Internet censorship of an individual or group by preventing them from posting on the platforms they use to share their information/ideas. This typically involves suspension, outright bans, or reducing spread (shadow banning). As early as 2015, platforms such as Reddit began to enforce selective bans based, for example, on terms of service that prohibit "hate speech". A famous example of deplatforming was Twitter's ban of then-US President Donald Trump shortly after the January 6 United States Capitol attack. History Deplatforming of invited speakers In the United States, the banning of speakers on university campuses dates back to the 1940s. This was carried out by the policies of the universities themselves. The University of California had a policy known as the Speaker Ban, codified in university regulations under President Robert Gordon Sproul, that mostly, but not exclusively, targeted communists. One rule stated that "the University assumed the right to prevent exploitation of its prestige by unqualified persons or by those who would use it as a platform for propaganda." This rule was used in 1951 to block Max Shachtman, a socialist, from speaking at the University of California at Berkeley. In 1947, former U.S. Vice President Henry A. Wallace was banned from speaking at UCLA because of his views on U.S. Cold War policy, and in 1961, Malcolm X was prohibited from speaking at Berkeley as a religious leader. Controversial speakers invited to appear on college campuses have faced deplatforming attempts to disinvite them or to otherwise prevent them from speaking. The British National Union of Students established its No Platform policy as early as 1973. In the mid-1980s, visits by South African ambassador Glenn Babb to Canadian college campuses faced opposition from students opposed to apartheid. In the United States, recent examples include the March 2017 disruption by protestors of a public speech at Middlebury College by political scientist Charles Murray. In February 2018, students at the University of Central Oklahoma rescinded a speaking invitation to creationist Ken Ham, after pressure from an LGBT student group. In March 2018, a "small group of protesters" at Lewis & Clark Law School attempted to stop a speech by visiting lecturer Christina Hoff Sommers. In the 2019 film No Safe Spaces, Adam Carolla and Dennis Prager documented their own disinvitation along with others. , the Foundation for Individual Rights in Education, a speech advocacy group, documented 469 disinvitation or disruption attempts at American campuses since 2000, including both "unsuccessful disinvitation attempts" and "successful disinvitations"; the group defines the latter category as including three subcategories: formal disinvitation by the sponsor of the speaking engagement; the speaker's withdrawal "in the face of disinvitation demands"; and "heckler's vetoes" (situations when "students or faculty persistently disrupt or entirely prevent the speakers' ability to speak"). Deplatforming in social media Beginning in 2015, Reddit banned several communities on the site ("subreddits") for violating the site's anti-harassment policy. 
A 2017 study published in the journal Proceedings of the ACM on Human-Computer Interaction, examining "the causal effects of the ban on both participating users and affected communities," found that "the ban served a number of useful purposes for Reddit" and that "Users participating in the banned subreddits either left the site or (for those who remained) dramatically reduced their hate speech usage. Communities that inherited the displaced activity of these users did not suffer from an increase in hate speech." In June 2020 and January 2021, Reddit also issued bans to two prominent online pro-Trump communities over violations of the website's content and harassment policies. On May 2, 2019, Facebook and the Facebook-owned platform Instagram announced a ban of "dangerous individuals and organizations" including Nation of Islam leader Louis Farrakhan, Milo Yiannopoulos, Alex Jones and his organization InfoWars, Paul Joseph Watson, Laura Loomer, and Paul Nehlen. In the wake of the 2021 storming of the US Capitol, Twitter banned then-president Donald Trump, as well as 70,000 other accounts linked to the event and the far-right movement QAnon. Some studies have found that the deplatforming of extremists reduced their audience, although other research has found that some content creators became more toxic following deplatforming and migration to alt-tech platform. Twitter On November 18, 2022, Elon Musk, as newly appointed CEO of Twitter, reopened previously banned Twitter accounts of high-profile users, including Kathy Griffin, Jordan Peterson, and The Babylon Bee as part of the new Twitter policy. As Musk exclaimed, "New Twitter policy is freedom of speech, but not freedom of reach". Alex Jones On August 6, 2018, Facebook, Apple, YouTube and Spotify removed all content by Jones and InfoWars for policy violations. YouTube removed channels associated with InfoWars, including The Alex Jones Channel. On Facebook, four pages associated with InfoWars and Alex Jones were removed over repeated policy violations. Apple removed all podcasts associated with Jones from iTunes. On August 13, 2018, Vimeo removed all of Jones's videos because of "prohibitions on discriminatory and hateful content". Facebook cited instances of dehumanizing immigrants, Muslims and transgender people, as well as glorification of violence, as examples of hate speech. After InfoWars was banned from Facebook, Jones used another of his websites, NewsWars, to circumvent the ban. Jones's accounts were also removed from Pinterest, Mailchimp and LinkedIn. , Jones retained active accounts on Instagram, Google+ and Twitter. In September, Jones was permanently banned from Twitter and Periscope after berating CNN reporter Oliver Darcy. On September 7, 2018, the InfoWars app was removed from the Apple App Store for "objectionable content". He was banned from using PayPal for business transactions, having violated the company's policies by expressing "hate or discriminatory intolerance against certain communities and religions." After Elon Musk's purchase of Twitter several previously banned accounts were reinstated including Donald Trump, Andrew Tate and Ye resulting in questioning if Alex Jones will be unbanned as well. However Musk denied that Alex Jones will be unbanned criticizing Jones as a person that "would use the deaths of children for gain, politics or fame". InfoWars remained available on Roku devices in January 2019, a year after the channel's removal from multiple streaming services. 
Roku indicated that they do not "curate or censor based on viewpoint," and that it had policies against content that is "unlawful, incited illegal activities, or violates third-party rights," but that InfoWars was not in violation of these policies. Following a social media backlash, Roku removed InfoWars and stated "After the InfoWars channel became available, we heard from concerned parties and have determined that the channel should be removed from our platform." In March 2019, YouTube terminated the Resistance News channel due to its reuploading of live streams from InfoWars. On May 1, 2019, Jones was barred from using both Facebook and Instagram. Jones briefly moved to Dlive, but was suspended in April 2019 for violating community guidelines. In March 2020, the InfoWars app was removed from the Google Play store due to claims of Jones disseminating COVID-19 misinformation. A Google spokesperson stated that "combating misinformation on the Play Store is a top priority for the team" and apps that violate Play policy by "distributing misleading or harmful information" are removed from the store. Donald Trump On January 6, 2021, in a joint session of the United States Congress, the counting of the votes of the Electoral College was interrupted by a breach of the United States Capitol chambers. The rioters were supporters of President Donald Trump who hoped to delay and overturn the President's loss in the 2020 election. The event resulted in five deaths and at least 400 people being charged with crimes. The certification of the electoral votes was only completed in the early morning hours of January 7, 2021. In the wake of several Tweets by President Trump on January 7, 2021 Facebook, Instagram, YouTube, Reddit, and Twitter all deplatformed Trump to some extent. Twitter deactivated his personal account, which the company said could possibly be used to promote further violence. Trump subsequently tweeted similar messages from the President's official US Government account @POTUS, which resulted in him being permanently banned on January 8. Twitter then announced that Trump's ban from their platform would be permanent. Trump planned to rejoin on social media through the use of a new platform by May or June 2021, according to Jason Miller on a Fox News broadcast. The same week Musk announced Twitter's new freedom of speech policy, he tweeted a poll to ask whether to bring back Trump into the platform. The poll ended with 51.8% in favor of unbanning Trump's account. Twitter has since reinstated Trump's Twitter account @realDonaldTrump (as of 19 Nov 2022 — but by then Trump's platform was Truth Social). Andrew Tate In 2017, Andrew Tate was banned from Twitter for tweeting that women should "bare some responsibility" in response to the #MeToo movement. Similarly, in August 2022, Tate was banned on four more major social media platforms: Instagram, Facebook, TikTok, and YouTube. These platforms indicated that Tate's misogynistic comments violated their hate speech policies. Tate has since been unbanned from Twitter as part of the new freedom of speech policy on Twitter. Demonetization Social media platforms such as YouTube and Instagram allow their content producers or influencers to earn money based on the content (videos, images, etc.), most typically based around some sort of payment per a set number of new "likes" or clicks etc. 
When the content is deemed inappropriate for compensation, but still left on the platform, this is called "demonetization" because the content producer is left with no compensation for their content that they created, while at the same time the content is still left up and available for viewing or listening by the general public. In September 2016, Vox reported that demonetization—as it pertained to YouTube specifically—involved the following key points: "Since 2012, YouTube has been automatically 'demonetizing' some videos because its software thought the content was unfriendly for advertisers." "Many YouTube video makers didn't realize this until last week, when YouTube began actively telling them about it." "This has freaked YouTubers out, even though YouTube has been behaving rationally by trying to connect advertisers to advertiser-friendly content. It's not censorship, since YouTube video makers can still post (just about) anything they want." "YouTube's software will screw things up, which means videos that should have ads don't, which means YouTube video makers have been missing out on ad revenue." Other examples Deplatforming tactics have also included attempts to silence controversial speakers through various forms of personal harassment, such as doxing, the making of false emergency reports for purposes of swatting, and complaints or petitions to third parties. In some cases, protesters have attempted to have speakers blacklisted from projects or fired from their jobs. In 2019, students at the University of the Arts in Philadelphia circulated an online petition demanding that Camille Paglia "should be removed from UArts faculty and replaced by a queer person of color." According to The Atlantics Conor Friedersdorf, "It is rare for student activists to argue that a tenured faculty member should be denied a platform." Paglia, a tenured professor for over 30 years who identifies as transgender, had long been unapologetically outspoken on controversial "matters of sex, gender identity, and sexual assault". In December 2017, after learning that a French artist it had previously reviewed was a neo-Nazi, the San Francisco punk magazine Maximum Rocknroll apologized and announced that it has "a strict no-platform policy towards any bands and artists with a Nazi ideology". Legislative responses United Kingdom In May 2021, the UK government under Boris Johnson announced a Higher Education (Freedom of Speech) Bill that would allow speakers at universities to seek compensation for no-platforming, impose fines on universities and student unions that promote the practice, and establish a new ombudsman charged with monitoring cases of no-platforming and academic dismissals. In addition, the government published an Online Safety Bill that would prohibit social media networks from discriminating against particular political views or removing "democratically important" content, such as comments opposing or supporting political parties and policies. United States Some critics of deplatforming have proposed that governments should treat social media as a public utility to ensure that constitutional rights of the users are protected, citing their belief that an Internet presence using social media websites is imperative in order to adequately take part in the 21st century as an individual. 
Republican politicians have sought to weaken the protections established by Section 230 of the Communications Decency Act—which provides immunity from liability for providers and users of an "interactive computer service" who publish information provided by third-party users—under allegations that the moderation policies of major social networks are not politically neutral. Reactions Support for deplatforming According to its defenders, deplatforming has been used as a tactic to prevent the spread of hate speech and disinformation. Social media has evolved into a significant source of news reporting for its users, and support for content moderation and banning of inflammatory posters has been defended as an editorial responsibility required by news outlets. Supporters of deplatforming have justified the action on the grounds that it produces the desired effect of reducing what they characterize as hate speech. Angelo Carusone, president of the progressive organization Media Matters for America and who had run deplatforming campaigns against conservative talk hosts Rush Limbaugh in 2012 and Glenn Beck in 2010, pointed to Twitter's 2016 ban of Milo Yiannopoulos, stating that "the result was that he lost a lot.... He lost his ability to be influential or at least to project a veneer of influence." In the United States, the argument that deplatforming violates rights protected by the First Amendment is sometimes raised as a criticism. Proponents say that deplatforming is a legal way of dealing with controversial users online or in other digital spaces, so long as the government is not involved with causing the deplatforming. According to Audie Cornish, host of the NPR show Consider This, "the government can't silence your ability to say almost anything you want on a public street corner. But a private company can silence your ability to say whatever you want on a platform they created." Critical responses In the words of technology journalist Declan McCullagh, "Silicon Valley's efforts to pull the plug on dissenting opinions" began around 2018 with Twitter, Facebook, and YouTube denying service to selected users of their platforms; he said they devised "excuses to suspend ideologically disfavored accounts". In 2019, McCullagh predicted that paying customers would become targets for deplatforming as well, citing protests and open letters by employees of Amazon, Microsoft, Salesforce, and Google who opposed policies of U.S. Immigration and Customs Enforcement (ICE), and who reportedly sought to influence their employers to deplatform the agency and its contractors. Law professor Glenn Reynolds dubbed 2018 the "Year of Deplatforming" in an August 2018 article in The Wall Street Journal. Reynolds criticized the decision of "internet giants" to "slam the gates on a number of people and ideas they don't like", naming Alex Jones and Gavin McInnes. Reynolds cited further restrictions on "even mainstream conservative figures" such as Dennis Prager, as well as Facebook's blocking of a campaign advertisement by a Republican candidate "ostensibly because her video mentioned the Cambodian genocide, which her family survived." In a 2019 The Atlantic article, Conor Friedersdorf described what he called "standard practice" among student activists. He wrote: "Activists begin with social-media callouts; they urge authority figures to impose outcomes that they favor, without regard for overall student opinion; they try to marshal antidiscrimination law to limit freedom of expression." 
Friedersdorf pointed to evidence of a chilling effect on free speech and academic freedom. Of the faculty members he had contacted for interviews, he said a large majority "on both sides of the controversy insisted that their comments be kept off the record or anonymous. They feared openly participating in a debate about a major event at their institution—even after their university president put out an uncompromising statement in support of free speech." See also Boycott No Platform Freedom of speech Censorship Online shaming Social media as a public utility Cancel culture Marsh v. Alabama References Further reading External links Internet censorship Social media Web 2.0 neologisms New media Social networks Blacklisting Censorship Political repression Internet manipulation and propaganda Internet-based activism Shunning Excluded people
Deplatforming
Technology
3,598
46,766,187
https://en.wikipedia.org/wiki/Russula%20clelandii
Russula clelandii is a species of fungus in the family Russulaceae. Found in Australia, it was described as new to science in 1987. The fungus fruits on the ground in mixed woodlands of jarrah (Eucalyptus marginata) and karri (E. diversicolor), plants with which it is suspected of forming ectomycorrhizae. Fruitbodies are similar in morphology to the North American species Russula mariae. The specific epithet honours Australian naturalist John Burton Cleland. See also List of Russula species References External links clelandii Fungi described in 1987 Fungi of Australia Fungus species
Russula clelandii
Biology
127
30,721,542
https://en.wikipedia.org/wiki/Nanofountain%20probe
A nanofountain probe (NFP) is a device for 'drawing' micropatterns of liquid chemicals at extremely small resolution. An NFP consists of a cantilevered microfluidic device terminated in a nanofountain. The embedded microfluidics facilitates rapid and continuous delivery of molecules from on-chip reservoirs to the fountain tip. When the tip is brought into contact with the substrate, a liquid meniscus forms, providing a path for molecular transport to the substrate. By controlling the geometry of the meniscus through hold time and deposition speed, various inks and biomolecules can be patterned on a surface with sub-100 nm resolution. Historical background The advent of dip-pen nanolithography (DPN) represented a revolution in nanoscale patterning technology. With sub-100-nanometer resolution and an architecture conducive to massive parallelization, DPN is capable of producing large arrays of nanoscale features. However, conventional DPN and other probe-based techniques are generally limited in their rate of deposition and by the need for repeated re-inking during extended patterning. To address these challenges, the nanofountain probe was developed by Espinosa et al., in which microchannels were embedded in AFM probes to transport ink or biomolecules from reservoirs to substrates, enabling continuous writing at the nanoscale. Integration of continuous liquid ink feeding within the NFP facilitates more rapid deposition and eliminates the need for repeated dipping, all while preserving the sub-100-nanometer resolution of DPN. Microfabrication Nanofountain probes (NFPs) are fabricated at the wafer scale using microfabrication techniques, allowing batch fabrication of numerous chips. Across successive generations of devices, design iteration and experimentation yielded a robust fabrication process. The enhanced feature dimensions and shapes are expected to improve performance in writing and imaging. Applications Direct-write nanopatterning The NFP is used in the development of a scalable, direct-write nanomanufacturing platform. The platform is capable of constructing complex, highly functional nanoscale devices from a diverse suite of materials (e.g., nanoparticles, catalysts, biomolecules, and chemical solutions). Demonstrated nanopatterning capabilities include: • Biomolecules (proteins, DNA) for biodetection assays or cell adhesion studies • Functional nanoparticles for drug delivery studies and nanosystems fabrication • Catalysts (which increase reaction rates) for carbon nanotube growth in nanodevice fabrication • Thiols for directed self-assembly of nanostructures. Direct in-vitro single-cell injection Taking advantage of the unique tip geometry of the NFP, nanomaterials are directly injected into live cells with minimal invasiveness. This enables unique studies of nanoparticle-mediated delivery, as well as of cellular pathways and toxicity. Whereas typical in vitro studies are limited to cell populations, these broadly applicable tools enable multifaceted interrogation at a truly single-cell level. See also Nanolithography References Lithography (microfabrication) Microtechnology Scanning probe microscopy Biological engineering Tissue engineering
Nanofountain probe
Chemistry,Materials_science,Engineering,Biology
676
25,465,563
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20July%201%2C%202057
An annular solar eclipse will occur at the Moon's ascending node of orbit between Sunday, July 1 and Monday, July 2, 2057, with a magnitude of 0.9464. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Because the eclipse occurs about 1.7 days after apogee (reached on June 30, 2057, at 6:30 UTC), the Moon's apparent diameter will be smaller than average. The path of annularity will be visible from parts of northwest China, Mongolia, eastern Russia, northern Alaska, western and central Canada, and far northeast Minnesota, northern Michigan, and far western New York in the United States. A partial solar eclipse will also be visible for parts of East Asia, Northeast Asia, Northern Europe, and North America. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains a specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year. Either two or three eclipses happen in each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2057 A total solar eclipse on January 5. A partial lunar eclipse on June 17. An annular solar eclipse on July 1. A partial lunar eclipse on December 11. A total solar eclipse on December 26. Metonic Preceded by: Solar eclipse of September 12, 2053 Followed by: Solar eclipse of April 20, 2061 Tzolkinex Preceded by: Solar eclipse of May 20, 2050 Followed by: Solar eclipse of August 12, 2064 Half-Saros Preceded by: Lunar eclipse of June 26, 2048 Followed by: Lunar eclipse of July 7, 2066 Tritos Preceded by: Solar eclipse of August 2, 2046 Followed by: Solar eclipse of May 31, 2068 Solar Saros 147 Preceded by: Solar eclipse of June 21, 2039 Followed by: Solar eclipse of July 13, 2075 Inex Preceded by: Solar eclipse of July 22, 2028 Followed by: Solar eclipse of June 11, 2086 Triad Preceded by: Solar eclipse of August 31, 1970 Followed by: Solar eclipse of May 3, 2144 Solar eclipses of 2054–2058 Saros 147 Metonic series Tritos series Inex series References External links NASA graphics 2057 in science
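The saros relationship listed above can be checked with simple date arithmetic: one saros cycle is approximately 6,585.32 days, so adding it to the date of the preceding Saros 147 member (June 21, 2039) should land on this eclipse. A minimal Python sketch using only the standard library (the 6,585.32-day figure is the conventional saros length, not a value taken from this article):

from datetime import datetime, timedelta

SAROS_DAYS = 6585.32  # one saros cycle: about 18 years, 11 days, 8 hours

previous = datetime(2039, 6, 21)  # preceding member of Solar Saros 147
following = previous + timedelta(days=SAROS_DAYS)
print(following)  # 2057-07-01 07:40:48, consistent with the July 1-2, 2057 eclipse

The roughly one-third-day fraction in the saros is also why successive members of the series shift about 120 degrees westward in longitude.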
Solar eclipse of July 1, 2057
Astronomy
648
53,359,527
https://en.wikipedia.org/wiki/Ceronapril
Ceronapril (INN, proposed trade names Ceranapril, Novopril) is a phosphonate ACE inhibitor that was never marketed. References ACE inhibitors Carboxamides Enantiopure drugs Phosphonates Prodrugs Pyrrolidines
Ceronapril
Chemistry
60
42,403,060
https://en.wikipedia.org/wiki/Sat%20Bir%20Singh%20Khalsa
Sat Bir Singh Khalsa is a researcher in the field of mind-body medicine, specializing in yoga therapy. Originally from Toronto, he earned his Ph.D. at the University of Toronto, where he also began his practice of Kundalini Yoga under the tutelage of Yogi Bhajan. Since 2006 he has been an Associate Professor of Medicine at Harvard Medical School. He serves as the Director of Yoga Research for Yoga Alliance and the Kundalini Research Institute, Research Associate at the Benson Henry Institute for Mind Body Medicine, and Research Affiliate of the Osher Center for Integrative Medicine. Research Studies Sat Bir Singh Khalsa has participated in numerous mind-body studies. His work has been published in more than eighty papers. He has conducted clinical research trials evaluating yoga interventions for insomnia, post-traumatic stress disorder, chronic stress, and anxiety disorders, in both public school and occupational settings. He works with the International Association of Yoga Therapists to promote research on yoga and yoga therapy, as the chair of the scientific program committee for the annual Symposium on Yoga Research and as editor-in-chief of the International Journal of Yoga Therapy. He is medical editor of the Harvard Medical School Special Report, An Introduction to Yoga, and chief editor of the medical textbook The Principles and Practice of Yoga in Health Care (2016). His papers explore the application of yoga as therapy for mental health conditions, including insomnia, performance anxiety, drug addiction, and depression, and as a predictor of low body mass and low medication usage. Media and Public Speaking Sat Bir Singh Khalsa is often hired to speak about his research worldwide, sharing his findings with the general public, governments and NGOs, schools, universities, and corporations. General Publication Sat Bir Singh Khalsa. (2009). "Kundalini Yoga as Therapy: A Research Perspective," chapter in Kundalini Rising: Exploring the Awakening of Kundalini. Boulder, Colorado, Sounds True, Inc. External links Brigham and Women's Hospital International Association of Yoga Therapists Kundalini Research Institute 1951 births American Sikhs Converts to Sikhism Living people Sleep researchers University of Toronto alumni American yoga teachers Canadian Sikhs American people of Canadian descent
Sat Bir Singh Khalsa
Biology
473
671,260
https://en.wikipedia.org/wiki/Abell%201835%20IR1916
Abell 1835 IR1916 (also known as Abell 1835, Galaxy Abell 1835, Galaxy Abell 1835 IR1916, or simply The Abell) was a candidate for being the most distant galaxy ever observed, although that claim has not been verified by additional observations. It was claimed to lie behind the galaxy cluster Abell 1835, in the Virgo constellation. Initial observation Abell 1835 IR1916 was discovered by French and Swiss astronomers of the European Southern Observatory, namely Roser Pelló, Johan Richard, Jean-François Le Borgne, Daniel Schaerer, and Jean-Paul Kneib. The astronomers used a near-infrared instrument on the Very Large Telescope to detect the galaxy; other observatories were then used to make an image of it possible. The Observatory, in conjunction with the Swiss National Science Foundation, the French Centre National de la Recherche Scientifique, and the journal Astronomy and Astrophysics, issued a press release on 1 March 2004 announcing the discovery. It was believed to be more distant than the galaxy lensed by Abell 2218. Age and distance The initial observers' analysis of J-band observations indicated that Abell 1835 IR1916 has a redshift of z ~ 10.0, meaning that it appears to us as it was about 13.2 billion years ago, only 470 million years after the Big Bang and very close to the first burst of star formation in the universe. Its visibility at such a great distance was credited to gravitational lensing by the galaxy cluster Abell 1835 between it and us. Further analysis of the data that led to the first announcement has cast doubt on the claim that it is a distant object, and follow-up observations in the H-band using the Gemini North Telescope and observations from the orbiting Spitzer Space Telescope were not able to detect it at all, with the latter suggesting it was an artefact. See also Abell 370 IOK-1 HD1 (galaxy), the most distant galaxy known Notes References Galaxies Dwarf galaxies Virgo (constellation)
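The quoted lookback time and age follow from the claimed redshift under a standard cosmology. A minimal sketch using astropy (here with the Planck 2018 parameters, which postdate the 2004 analysis; the discoverers would have adopted slightly different values, so the numbers agree with the article's 13.2 billion years only approximately):

from astropy.cosmology import Planck18 as cosmo

z = 10.0
print(cosmo.lookback_time(z))  # about 13.3 Gyr
print(cosmo.age(z))            # about 0.47 Gyr after the Big Bang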
Abell 1835 IR1916
Astronomy
428
3,994,836
https://en.wikipedia.org/wiki/Piecewise%20syndetic%20set
In mathematics, piecewise syndeticity is a notion of largeness of subsets of the natural numbers. A set $S \subseteq \mathbb{N}$ is called piecewise syndetic if there exists a finite subset $G$ of $\mathbb{N}$ such that for every finite subset $F$ of $\mathbb{N}$ there exists an $x \in \mathbb{N}$ such that $x + F \subseteq \bigcup_{n \in G} (S - n)$, where $S - n = \{ m \in \mathbb{N} : m + n \in S \}$. Equivalently, $S$ is piecewise syndetic if there is a constant $b$ such that there are arbitrarily long intervals of $\mathbb{N}$ where the gaps in $S$ are bounded by $b$. Properties A set is piecewise syndetic if and only if it is the intersection of a syndetic set and a thick set. If $S$ is piecewise syndetic then $S$ contains arbitrarily long arithmetic progressions. A set $S$ is piecewise syndetic if and only if there exists some ultrafilter $U$ which contains $S$ and is in the smallest two-sided ideal of $\beta\mathbb{N}$, the Stone–Čech compactification of the natural numbers. Partition regularity: if $S$ is piecewise syndetic and $S = C_1 \cup C_2 \cup \cdots \cup C_n$, then for some $i \le n$, $C_i$ contains a piecewise syndetic set. (Brown, 1968) If $A$ and $B$ are subsets of $\mathbb{N}$ with positive upper Banach density, then $A + B = \{ a + b : a \in A, b \in B \}$ is piecewise syndetic. Other notions of largeness There are many alternative definitions of largeness that also usefully distinguish subsets of natural numbers: Cofiniteness IP set member of a nonprincipal ultrafilter positive upper density syndetic set thick set See also Ergodic Ramsey theory Notes References Semigroup theory Ergodic theory Ramsey theory Combinatorics
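The bounded-gaps formulation above lends itself to a finite sanity check. The following Python sketch is illustrative only: piecewise syndeticity is a statement about arbitrarily long intervals, so no computation on a finite prefix can prove or refute it, but one can test whether a prefix of a set contains a window of a given length whose internal gaps are bounded by b:

def has_window_with_bounded_gaps(elements, b, length):
    """Return True if some run of consecutive elements spans at least
    `length` while successive elements differ by at most `b`."""
    xs = sorted(elements)
    start = 0
    for i in range(1, len(xs)):
        if xs[i] - xs[i - 1] > b:
            start = i  # gap too large; the candidate run restarts here
        if xs[i] - xs[start] >= length:
            return True
    return False

# The multiples of 3 are syndetic (all gaps equal 3), hence piecewise syndetic:
S = [3 * k for k in range(10_000)]
print(has_window_with_bounded_gaps(S, b=3, length=1000))  # True

# The perfect squares have growing gaps, so long windows with small gaps die out:
Q = [k * k for k in range(10_000)]
print(has_window_with_bounded_gaps(Q, b=3, length=1000))  # False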
Piecewise syndetic set
Mathematics
314
543,329
https://en.wikipedia.org/wiki/Optician
An optician is an individual who fits glasses or contact lenses by filling a refractive prescription from an optometrist or ophthalmologist. They are able to translate and adapt ophthalmic prescriptions, dispense products, and work with accessories. There are several specialties within the field. Types Dispensing optician or ophthalmic dispenser A dispensing optician is anyone who prepares, fits, and dispenses prescription lenses, spectacles, glasses, contact lenses, or any other type of vision-correcting optical device to the intended user. They may interpret optical prescriptions issued by an ophthalmologist, optometrist, or physician for the lab optician who fabricates vision-correcting optical lenses. They also measure inter-ocular or pupillary distances, vertex distances, pupil fitting heights, and frame angles to determine the proper position of vision-correcting lenses. In addition, they adapt, modify, or align frames with vision-correcting lenses to the face of the intended wearer. Dispensing opticians must have a basic knowledge of laboratory techniques such as lens surfacing and lens preparation. Mechanical optician, lab optician, or ophthalmic lab technician Ophthalmic laboratory technicians must understand optics and how to use machinery in order to surface, coat, edge, or finish lenses according to specifications provided by dispensing opticians. They typically insert lenses into frames, a process also called glazing, to produce finished glasses, and conduct all quality and safety testing required by the applicable local and national regulations. Although most lenses are produced with fully automated equipment, such as computer-based generators, automatic edgers, and lens measurement instruments, a highly skilled lab optician will often finish lenses by hand for more difficult prescriptions and lens designs in order to achieve the best finished outcome. Contact lens fitter or contact lens technician Contact lens fitters may work independently or under the direction of an ophthalmologist or optometrist to fill a doctor's prescription for contact lenses. A patient must obtain a prescription for contact lenses from a physician, and then the fitter will review contact lens handling, fitting, and follow-up care. Contact lens fitters must have computer skills, communication skills, and an understanding of medical-legal implications. Ocularist An ocularist is a trained technician who specializes in fitting a patient with a prosthetic eye after management by an ophthalmologist. Ocularists are trained in assessing the status of the orbit, fabricating and fitting a cosmetic ocular prosthesis, and periodically monitoring the prosthesis and related tissues. They ensure the correct fitting, shaping, and painting of ocular prostheses. The ocularist also educates the patient on handling and care of the prosthesis. Ocularists provide long-term care through follow-up examinations for evaluation and polishing of prostheses. Work environment Corporate practice Corporate practices may require more night and weekend work than other environments because of the longer opening hours of the corporate chains. Many who work for them report that the trade-off is greater room for growth, higher pay, and better benefits due to the larger scale of the employer. Purchasing of goods is conducted by the corporate headquarters and not by individuals at the locations.
Independent practice Opticians who operate independent practices own them, and so carry all of the responsibilities of an entrepreneur and business owner as well as those of an optician. In the United States, due to certain local and state regulations, opticians cannot employ optometrists in various areas and are limited in the vision discount plans they can accept. This means they must rely more heavily on walk-in consumers than practices owned by a doctor. Optometrist or ophthalmologist office A smaller, more intimate environment than corporate or clinical settings, doctor-owned practices usually do not require as many evening or weekend hours as corporate locations; however, every medical office is different and will have a unique set of features and characteristics. Hospitals and clinics Opticians working in a hospital or clinic typically oversee patient care, administer treatment, and operate medical equipment under the supervision of an ophthalmologist or optometrist. Lab manufacturing This role typically does not involve working directly with patients and is centred on the use of high-tech equipment and hand-held tools. History of opticians and spectacle makers The first known artistic representation of glasses was painted by Tommaso da Modena in 1352. He painted a sequence of frescoes of brothers busily reading or copying manuscripts; one holds a magnifying glass while another has glasses suspended on his nose. Once Tommaso had established the example, other painters positioned spectacles on the noses of many of their subjects, almost certainly as a representation of wisdom and respect. One of the most noteworthy developments in spectacle production in the 15th century was the introduction of concave lenses for the myopic or nearsighted. Pope Leo X, who was very myopic, wore concave spectacles when hunting and professed that they enabled him to see more clearly than his cohorts. The first spectacles utilized quartz lenses, since optical glass had not been developed. The lenses were set into bone, metal, and leather mountings, frequently fashioned like two small magnifying glasses with handles riveted together and set in an inverted V shape that could be balanced on the bridge of the nose. The use of spectacles extended from Italy to Germany, Spain, France, and Portugal. From their inception, eyeglasses posed a dilemma that was not solved for almost 350 years: how to keep them on the bridge of the nose without slipping off. Spanish spectacle makers of the 17th century experimented with ribbons of silk that could be attached to the frames and then looped over the ears. Spanish and Italian missionaries carried the new models to spectacle wearers in China. The Chinese attached little ceramic or metal weights to the strings instead of making loops. In 1730 a London optician named Edward Scarlett perfected the use of rigid sidepieces that rested atop the ears. This innovation rapidly spread across the continent. In 1752 James Ayscough publicized his latest invention, spectacles with double-hinged side pieces. These became very popular and appear more often than any other kind in paintings and prints of the period. Lenses were fabricated of tinted glass as well as clear, as Ayscough felt that clear glass lenses gave an unpleasant glare. In Spain in 1763 Pablo Minguet recommended turquoise, green, or yellow lenses, but not amber or red. Europeans, in particular the French, were self-conscious about the use of glasses. Parisian aristocrats used reading aids only in private.
The gentry of England and France used a "perspective glass" or monocular which could be concealed from view easily. In Spain, however, spectacles were popular amongst all classes, since people considered that glasses made them look more important and dignified. Far-sighted or aging colonial Americans imported spectacles from Europe. Spectacles were primarily for the affluent and literate colonists, for whom they were a valuable and precious appliance. In the 1780s Benjamin Franklin developed bifocals. Bifocal lenses advanced little in the first half of the 19th century. The terms bifocal and trifocal were introduced in London by John Isaac Hawkins, whose trifocals were patented in 1827. In 1884 B. M. Hanna was granted patents on two forms of bifocals which became commercially standardized as the "cemented" and "perfection" bifocals. Both had the serious faults of ugly appearance, fragility, and dirt-collection at the dividing line. At the end of the 19th century the two sections of the lens were fused instead of cemented. At the turn of the 20th century, there was a considerable increase in the use of bifocals. Between 1781 and 1789, silver spectacles with sliding extension temples were being fabricated in France; however, it was not until the 19th century that they gained extensive popularity. John McAllister of Philadelphia began fabricating spectacles with sliding temples containing looped ends, which were much easier to use with the then-popular wigs. The loops compensated for the lack of stability by allowing the addition of a cord or ribbon which could be tied behind the head, thus holding the eyeglasses firmly in place. In 1826, William Beecher moved to Massachusetts from Connecticut to establish a jewellery-optical manufacturing shop. The first ophthalmic pieces he fabricated were silver spectacles, which were later followed by blue steel. In 1869 the American Optical Company was incorporated and acquired the holdings of William Beecher. In 1849 J. J. Bausch immigrated to the United States from Germany. He had already served an apprenticeship as an optician in his native land and had found work in Berne. His reimbursement for the labour on a complete pair of spectacles was equal to six cents. Bausch encountered difficult times in America from 1849 until 1861, when war broke out. When the war prevented the import of glass frames, demand for his hard rubber frames skyrocketed. Continuous expansion followed, and the large Bausch and Lomb Company was formed. The monocle, which was first called an "eye-ring", was initially introduced in England in the early 19th century, although it had been developed in Germany during the 18th century. A young Austrian who had studied optics in London took the monocle idea back to Germany with him. He started making monocles in Vienna about 1814, and the fashion spread and took particularly strong root in Germany and in Russia. The first monocle wearers were upper-class gentlemen, which may account for the aura of arrogance the monocle seemed to confer on the wearer. After World War I, the monocle fell into disrepute, its downfall in the allied sphere hastened, no doubt, by its association with the German military. The lorgnette, two lenses in a frame the user held with a lateral handle, was another 18th-century development (by Englishman George Adams). The lorgnette almost certainly developed from the scissors-glass, which was a double glass on a handle.
Given that the two branches of the handle came together under the nose and looked as if they were about to cut it off, they were known as binocles-ciseaux or scissors glasses. The English altered the size and form of the scissors-glasses and produced the lorgnette. The frame and handle were often artistically embellished, given that they were used mostly by women and more often as a piece of jewellery than as a visual aid. The lorgnette maintained its popularity with ladies of fashion, who chose not to wear spectacles, until the end of the 19th century. Pince-nez are believed to have appeared in the 1840s, but in the latter part of the century there was a great upsurge in the popularity of the pince-nez for both men and women. Gentlemen wore any style which suited them—heavy or delicate, round or oval, straight or drooping—usually on a ribbon, cord, or chain about the neck or attached to the lapel. Ladies more often than not wore the oval rimless style on a fine gold chain which could be reeled automatically into a button-size eyeglass holder pinned to the dress. Whatever its disadvantages, the pince-nez was convenient. In the 19th century, the responsibility of choosing the correct lens lay, as it always had, with the customer. Even when the optician was asked to choose, it was often on a rather casual basis. Spectacles were still available from travelling salesmen. Spectacles with round lenses (as worn by Winston Churchill), oval and panto shapes, and tortoiseshell frames became the fashion around 1930. The round spectacles and the pince-nez continued to be worn in the '30s. In the '40s there was increased emphasis on style, with a variety of spectacles available. Meta Rosenthal wrote in 1938 that the pince-nez was still being worn by dowagers, headwaiters, old men, and a few others. The monocle was worn by only a minority in the United States. Sunglasses, however, became very popular in the late '30s. Equipment Opticians use a variety of equipment to fit, adjust, and dispense eyewear, contact lenses, and low-vision aids. Manual lensometer Although technically identified by the generic term manual lensometer, opticians often refer to this piece of equipment as a lensometer, focimeter, or vertometer. The modern lensometer was invented in 1922 by Edgar Derry Tillyer of American Optical to determine "whether lenses have the refraction and power prescribed." Proper use of the lensometer by a dispensing optician or a lab optician includes verifying back or front vertex power, orienting uncut lenses for finishing and glazing, and confirming the mounting of lenses into the frame. Manual lensometers can also be outfitted with an attachment to read the back vertex power of a contact lens for modification and verification purposes. The optician uses the refracted (bent) light displayed within a lensometer to read the sphere, cylinder, and add powers (if prescribed), the axis orientation, and the prismatic effect, and to locate the major reference point of the lens. Correct interpretation of these readings is critical to the performance of the eyewear and user satisfaction. Automated lensometer An automated lensometer uses green light reflected off the lens surface along every lens meridian to determine all of the data points that the optician interprets with the manual lensometer.
The benefits of an automated lensometer are increased speed, automatic adjustment for variations in the index of refraction of the lens material, the ability to measure UV and light transmittance, and a decrease in on-the-job training time. The drawbacks of automated lensometers in comparison with manual lensometers are greater difficulty in identifying larger prismatic errors, aberrations, and surfacing power errors (optic waves), and the need for the optician to avoid tipping the lens, which yields an erroneous result. Corneal reflex pupilometer A corneal reflex pupilometer is a digital device used to measure interpupillary distance (IPD), otherwise known as pupillary distance (PD). The measurement is used to align the major reference point (MRP) of the lenses along the visual axis to reduce unwanted prismatic effect, eyestrain, and lens aberrations. A PD can be taken binocularly (from the corneal reflex of one pupil to the corneal reflex of the other) or monocularly (from the centre of the spectacle bridge to the centre of the corneal reflex of each eye independently, with the non-measured eye being occluded). By providing a rest point on the bridge similar to a glasses frame, pupilometers provide a proper reference point for obtaining an accurate monocular PD value. PDs are also taken in relation to the focus point. The eyes can be focused at infinity (distance), focused near (approximately 16 inches or 40 centimetres), or at an intermediate working distance between near and distance. Because a pupilometer can be dialed to a specific distance and easily occluded, it is often easier to work with. This does not mean that it is more accurate, for certain age groups and pathologies, than a skilled optician with a corneal reflex light, a millimeter ruler called a PD stick, and fully adjusted eyewear. While a ruler alone is susceptible to parallax error, when it is used in conjunction with the other tools previously mentioned the accuracy can exceed that of the pupilometer for these patient groups. The fitting and dispensing of contact lenses requires the use of additional equipment, all with very specific purposes. A keratometer is a diagnostic instrument for measuring the curvature of the anterior surface of the cornea, particularly for assessing the extent and axis of astigmatism; its modern form was introduced by the French ophthalmologist Louis Émile Javal around 1880. Opticians, like ophthalmologists and optometrists, also use a slit lamp (biomicroscope) to examine the anterior and posterior segments of the human eye, which include the eyelid, sclera, conjunctiva, iris, natural crystalline lens, and cornea. The binocular slit-lamp examination provides a stereoscopic magnified view of the eye structures in detail, enabling anatomical diagnoses to be made for a variety of eye conditions. While seated in the examination chair, the patient rests the chin and forehead on a support to steady the head. Using the biomicroscope, the optician then proceeds to examine the patient's eye. A fine strip of paper, stained with fluorescein, a fluorescent dye, may be touched to the side of the eye; this stains the tear film on the surface of the eye to aid examination. The dye is naturally rinsed out of the eye by tears. Adults need no special preparation for the test; however, children may need some preparation, depending on age, previous experiences, and level of trust. The list of equipment used by an optician is extensive and is often specified in jurisdiction-specific professional standards of practice.
The standards of the College of Opticians of British Columbia serve as an example. By country Canada All provinces in Canada require opticians to complete formal training and education in opticianry and then pass competency examinations prior to receiving governmental licensure. Some provinces (Ontario and Quebec) require a single optician's license that covers the dispensing of both glasses and contact lenses, while the other provinces have two separate licenses, one each for glasses and contact lens dispensing. Recent changes to the British Columbia opticians regulations allow qualified opticians in that province to test a person's vision and prepare an assessment of the corrective lenses required for a client. Using the results of the assessment, an optician is able to prepare and dispense glasses or contact lenses. Opticians in Alberta and Ontario are also permitted, under certain conditions, to refract and to prepare and dispense glasses and contact lenses. Provincial regulatory organizations Each Canadian province has its own regulatory College or Board that provides registration or licensure to its opticians. The regulatory body (often known as a "College" but separate from, and not to be confused with, an educational institute) has a government mandate to protect the public. This includes enforcement of provincial statutes (Opticians Acts) and public awareness campaigns. The National Association of Canadian Opticianry Regulators (NACOR) The National Association of Canadian Opticianry Regulators (NACOR) is an organization of all the provincial opticianry regulatory bodies in Canada (except Quebec). NACOR also administers Canada's national opticianry examinations. Since 2001, all jurisdictions (except Quebec) have agreed to and signed the Mutual Recognition Agreement among Opticianry Regulators, which ensures labour mobility for opticians across the entire nation without need for further examination. All provinces (with the exception of Quebec) require individuals to achieve a passing mark in a national examination as a requirement of licensure as an optician. Despite the non-participation of Quebec in national initiatives, Canadian opticians who relocate to Quebec are able to register and practice in that province provided they meet certain language requirements. Provincial associations Most Canadian provinces have their own provincial opticianry associations that look after the interests of their members at the provincial level, such as through advocacy. Some provincial regulatory agencies have a dual role and also serve as the association for that province. In addition to protecting their members' interests, provincial associations also undertake public-interest initiatives such as providing vision screening for children in schools or organizing professional development seminars. Established in 1989, the Opticians Association of Canada is a national organization of all provincial opticianry associations in Canada. The role of the OAC is to advocate for the various interests of opticians on a national basis. Education As a prerequisite for registration in any province of Canada, opticians are required to complete a course at one of the NACOR-accredited teaching institutions. Persons from an international jurisdiction may apply to a provincial regulatory agency for an assessment of the equivalency of their education. Such applications are not unreasonably denied.
Nigeria Dispensing opticians are regulated by the Optometrists and Dispensing Opticians Registration Board of Nigeria (ODORBN). The training programme is a 3-year diploma programme in a Board-accredited institution; such institutions are located in all geopolitical zones of the nation and include Kwara State College of Health Technology (Offa), Federal Polytechnic (Nekede), and Millennium College of Health Technology. United Kingdom Opticians or dispensing opticians are regulated by the General Optical Council (GOC). A dispensing optician advises on, fits, and supplies the most appropriate spectacles after taking account of each patient's visual, lifestyle, and vocational needs. Dispensing opticians also play an important role in fitting contact lenses, in advising on and dispensing low-vision aids to those who are partially sighted, and in advising on and dispensing to children where appropriate. The Association of British Dispensing Opticians (ABDO) is the qualifying body for dispensing opticians in the United Kingdom (UK). The Fellow of British Dispensing Opticians (FBDO) is the base qualification for UK dispensing opticians. This qualification has been awarded level 6 status (equivalent to a BSc) by Ofqual, the Welsh Assembly Government, and the Council for the Curriculum, Examinations and Assessment (CCEA). Additional qualifications in contact lenses and low vision have been assessed at level 7 (equivalent to an MSc). United States In the United States, an optician may, through testing, be certified by the American Board of Opticianry (ABO) to fill the prescription ordered by an ophthalmologist or optometrist. Note: the ABO exam is not nationally recognized and does not indicate a license to practice as an optician. In roughly half the states, licensing is not a requirement to make or dispense eyewear. Many eye doctors do their own dispensing, and it is common for eye clinics to have an optician on their premises or, conversely, for large optical chains to have optometrists in offices on their premises. Some opticians learn their skills through formal training programs. Professional technical schools and two-year colleges offer programs in opticianry. Two-year programs usually grant an associate degree; one-year programs offer a certificate. Training usually includes courses in optical math, optical physics, and the use of tools and equipment. Other opticians can apprentice to learn the required skills, and many formal education programs accept hours worked as an apprentice to supplement or replace course credits. United States Organizations That Impact Opticianry on a National Level Notable opticians Euclid of Alexandria Roger Bacon Christiaan Huygens Isaac Newton René Descartes Benedictus Spinoza James Ayscough Carl Laubman John Jacob Bausch Henry Lomb Eugene Kalt Achim Leistner See also Ophthalmologist Optometrist Scientific equipment optician References Health care occupations
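The vertex distances mentioned in the sections above matter because moving a lens toward or away from the eye changes its effective power. The standard compensation formula is F' = F / (1 − dF), where F is the lens power in diopters and d is the change in vertex distance in meters, taken positive when the lens moves toward the eye. A small illustrative Python helper (a sketch of the textbook formula, not a dispensing tool):

def compensated_power(F, d_mm):
    """Effective power (diopters) after moving a lens d_mm millimeters
    closer to the eye; use a negative d_mm for moving it farther away."""
    d = d_mm / 1000.0  # convert to meters
    return F / (1 - d * F)

# A -8.00 D spectacle lens moved 12 mm closer (spectacle plane to cornea),
# as when converting a spectacle prescription to a contact lens power:
print(round(compensated_power(-8.00, 12), 2))  # -7.3 (diopters)

This is why strong prescriptions are adjusted when refit at a different vertex distance, while weak ones, for which dF is tiny, are not.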
Optician
Astronomy
4,828
39,706,267
https://en.wikipedia.org/wiki/Chrysomyxa%20pyrolae
Chrysomyxa pyrolae is a species of rust fungus in the family Coleosporiaceae found in US states such as Alabama, Colorado, Maine, and Vermont. References Fungal plant pathogens and diseases pyrolae Fungi of the United States Fungi without expected TNC conservation status Fungus species Taxa named by Augustin Pyramus de Candolle
Chrysomyxa pyrolae
Biology
79
27,480,321
https://en.wikipedia.org/wiki/Designated%20verifier%20signature
A designated verifier signature is a signature scheme in which signatures can only be verified by a single, designated verifier, chosen as part of the signature creation. Designated verifier signatures were first proposed in 1996 by Markus Jakobsson, Kazue Sako, and Russell Impagliazzo. Proposed as a way to combine authentication and off-the-record messages, designated verifier signatures allow authenticated, private conversations to take place. Unlike an undeniable signature scheme, the verification protocol is non-interactive; i.e., the signer chooses the designated verifier (or the set of designated verifiers) in advance and does not take part in the verification process. See also Non-repudiation Undeniable signature References Cryptography Digital signature schemes
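One folklore construction that conveys the idea (not the Jakobsson–Sako–Impagliazzo scheme itself) is a MAC keyed by a Diffie–Hellman secret shared between the signer and the designated verifier: the verifier can check the tag but cannot convince any third party, since the verifier could equally well have produced it. A sketch using the pyca/cryptography package and the standard library, with key distribution and encoding details omitted:

import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Long-term keys for the signer (Alice) and the designated verifier (Bob).
alice_sk = X25519PrivateKey.generate()
bob_sk = X25519PrivateKey.generate()

# Each side derives the same shared secret from its own private key
# and the other party's public key.
k_sign = alice_sk.exchange(bob_sk.public_key())
k_verify = bob_sk.exchange(alice_sk.public_key())
assert k_sign == k_verify

def dv_sign(key, message):
    return hmac.new(key, message, hashlib.sha256).digest()

sig = dv_sign(k_sign, b"off the record")
print(hmac.compare_digest(sig, dv_sign(k_verify, b"off the record")))  # True

Because Bob holds the same key, the tag authenticates Alice only in Bob's eyes; shown to anyone else it proves nothing, which is the designated-verifier property.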
Designated verifier signature
Mathematics,Engineering
170
35,764,282
https://en.wikipedia.org/wiki/Fruit%20and%20vegetable%20wash
A vegetable wash is a cleaning product designed to aid in the removal of dirt, wax, and pesticides from fruit and vegetables before they are consumed. Contents and use All fresh produce, even organic, can harbor residual pesticides, dirt, or harmful microorganisms on its surface. Vegetable washes are marketed as removing germs, waxes, and pesticide residues from fruits and vegetables. They may be specially marketed commercial brands or home recipes. Commercial vegetable washes generally contain surfactants, along with chelating agents, antioxidants, and other agents. Home recipes are generally dilutions of hydrogen peroxide or vinegar, the former of which may be dangerous at high concentrations. Effectiveness Neither the U.S. Food and Drug Administration nor the United States Department of Agriculture recommends washing fruits and vegetables in anything other than cold water. To date, there is little evidence that vegetable washes are effective at reducing the presence of harmful microorganisms, though their usefulness in removing simple dirt and wax is not contested. References Cleaning products Vegetables Edible fruits
Fruit and vegetable wash
Chemistry
225
274,975
https://en.wikipedia.org/wiki/C-QUAM
C-QUAM (Compatible QUadrature Amplitude Modulation) is the method of AM stereo broadcasting used in Canada, the United States, and most other countries. It was invented in 1977 by Norman Parker, Francis Hilbert, and Yoshio Sakaie, and published in an IEEE journal. Using circuitry developed by Motorola, C-QUAM uses quadrature amplitude modulation (QAM) to encode the stereo separation signal. This extra signal is then stripped down in such a way that it is compatible with the envelope detector of older receivers, hence the "C" in C-QUAM, for "Compatible". A 25 Hz pilot tone is added to trigger receivers; unlike its counterpart in FM stereo, this tone is not necessary for the reconstruction of the original audio sources. Description The C-QUAM signal is composed of two distinct modulation stages: a conventional AM stage and a compatible quadrature phase-modulated stage. Stage 1 provides the transmitter with a summed L+R mono audio input. This input is precisely the same as in conventional AM mono transmission and ensures 100% compatibility with conventional envelope-detector receivers. Stage 2 provides the stereo multiplexed (muxed) audio input and replaces the conventional crystal oscillator stage of an otherwise AM-mono transmitter. So as not to create interference with envelope-detector receivers, the stage 2 signal phase modulates the multiplexed audio signals using a divide-by-4 Johnson counter and two balanced modulators operating 90 degrees out of phase with each other. Stage 2 is phase modulated rather than amplitude modulated, and is made up of both an L+R input and an L−R input. To recover the stereo audio signals, a synchronous detector extracts the L−R audio from the phase-modulated quadrature portion of the signal created in stage 2. The L+R audio can be extracted from either the AM (stage 1) or the PM (stage 2) modulation component. From there, the audio can readily be de-multiplexed (de-muxed) back into stereo left and right channels. For additional information, see the referenced PDF, "Introduction to the Motorola C-QUAM AM Stereo System". Known problems C-QUAM is not perfect, however, in large part because, pre-AMAX, it exhibited platform motion, with the audio "center" rocking back and forth as if the balance knob were being turned. This effect is potentially bothersome, especially in a moving vehicle where the received signal changes rapidly and occupants (particularly the driver) are more prone to its effects; it occurred primarily with skywave signals, and groundwave or local coverage usually did not suffer from this issue. It has been alleviated in subsequent revisions. Also, since some stereo information is contained in the sidebands, adjacent channel interference can cause problems. Finally, when only part of a sideband is attenuated (as often happens to skywave signals reflecting off the ionosphere), an effect known as selective fading, very unpleasant artifacts result; hence, the C-QUAM system is seldom, if ever, used for shortwave broadcasting, nor by stations which receive a great deal of skywave interference. User base Nowadays, this standard faces fierce competition from other stereophonic standards on AM. A number of AM radio stations in North America still broadcast in C-QUAM stereo.
Among those stations are WXYG/540: Sauk Rapids, MN; CFCB/570: Corner Brook, NL; CFCO/630: Chatham, Ontario (covering SW Ontario, Eastern Michigan and Northern Ohio); WNMB/900: North Myrtle Beach, South Carolina; WBLQ/1230: Westerly, Rhode Island; WIRY/1340: Plattsburgh, New York; WAXB/850: Ridgefield, Connecticut; and WYLD-AM/940: New Orleans, Louisiana. In addition to FCC-licensed C-QUAM AM broadcast stations, low-powered (<100 mW) Part 15 C-QUAM stereo transmitters are available for sale for use in the United States. In Rome, Italy, there is Broadcastitalia on 1485 kHz. Also see: AM Stereo radio stations in the United States AM Stereo radio stations worldwide Competition from IBOC hybrid digital systems While C-QUAM is an accepted international standard for AM radio broadcasting, it is incompatible with the IBOC (in-band on-channel) "HD" (hybrid digital) radio system, so a broadcaster must choose which system to use. The IBOC system allows transmission of an audio frequency range extending to approximately 15 kHz in two-channel stereo on the AM band, but with significant digital artifacts and aliasing due to substantial codec inadequacy. In addition, C-QUAM's patents have expired, whereas iBiquity still controls IBOC intellectual property through patents and through licensing fees for both the use of the technology and any modifications to be made, even if the broadcaster in question has purchased the equipment outright and made costly modifications to their transmitter plant in order to implement it. Very few AM radio stations that broadcast with IBOC HD Radio during the day switch to C-QUAM AM stereo during nighttime operation to reduce sideband digital (hash) interference and to provide long-range stereo reception. A number of HD Radio tuners have a limited ability to decode C-QUAM stereo transmissions, typically with lower bandwidth and, as a result, reduced audio quality compared with what could be expected from a purpose-built AMAX/C-QUAM tuner. C-QUAM AM stereo transmissions have the same range as AM monaural transmissions, a key benefit. Whereas many stations in the late 2000s changed from C-QUAM to HD Radio, in the 2010s the trend reversed, with many HD Radio stations shutting off their digital equipment. However, few of these stations returned to C-QUAM broadcasts. There has been a move to bring back C-QUAM in the last few years, due to the poor sound quality of digital audio encoding at low bit rates: AM stereo receivers can use a dual IF bandwidth setup for an extended audio frequency response over mono receivers, providing a full, rich stereo sound that low-bit-rate digital audio encoding cannot match. The downside of analog broadcasting is the amount of unwanted noise. See also List of AM stereo radio stations References Introduction to the Motorola C-QUAM AM Stereo System External links History of AM Stereo Another AM Stereo information and vendor site - meduci.com 1977 in radio Telecommunications-related introductions in 1977 Broadcast engineering Motorola Radio technology Standards of the United States Stereophonic sound
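The two-stage structure described above can be sketched numerically. The snippet below is an idealized baseband illustration with single-tone left and right inputs, no 25 Hz pilot, and no bandlimiting, and makes no claim to match Motorola's actual circuitry; it builds the quadrature phase angle, imposes the mono-compatible envelope, and checks the algebra that lets a receiver recover both L+R and L−R:

import numpy as np

fs, fc = 200_000, 10_000                    # sample rate and carrier, in Hz
t = np.arange(0, 0.01, 1 / fs)
L = 0.3 * np.sin(2 * np.pi * 400 * t)       # left-channel test tone
R = 0.3 * np.sin(2 * np.pi * 700 * t)       # right-channel test tone

env = 1 + L + R                             # stage 1: mono (L+R) envelope
phi = np.arctan2(L - R, 1 + L + R)          # stage 2: QUAM phase angle
s = env * np.cos(2 * np.pi * fc * t + phi)  # transmitted C-QUAM waveform

# An ideal envelope detector sees exactly 1 + L + R (mono compatibility),
# and a synchronous detector that recovers phi gets L - R algebraically:
recovered_diff = env * np.tan(phi)          # (1 + L + R) * tan(phi) = L - R
print(np.allclose(recovered_diff, L - R))   # True

Recovering env and phi from the waveform s itself requires synchronous detection and filtering around the carrier; the point here is only that the two stages are mutually compatible.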
C-QUAM
Technology,Engineering
1,372
4,150,226
https://en.wikipedia.org/wiki/Centre%20de%20Sociologie%20de%20l%27Innovation
The Centre de Sociologie de l'Innovation (CSI; "Center for the Sociology of Innovation") is a research center at the Mines Paris – PSL, France, and a research unit affiliated to the French National Centre for Scientific Research. The CSI was created in 1967 and is known for its members' contributions to the field of science and technology studies and to actor–network theory. Prominent past and current members include academics such as Bruno Latour and Michel Callon. References External links Centre de Sociologie de l'Innovation Science and technology studies Actor-network theory Universities and colleges in Paris Engineering universities and colleges in France French National Centre for Scientific Research Educational institutions established in 1967 1967 establishments in France
Centre de Sociologie de l'Innovation
Technology
142
1,217,448
https://en.wikipedia.org/wiki/First%20appearance
In comic books and other stories with a long history, first appearance refers to the first issue to feature a fictional character. These issues are often highly valued by collectors due to their rarity and iconic status. Reader interest in first appearances Collectors value first appearances for their rarity and historical value, while many regular readers are interested in seeing how their favorite characters were originally portrayed. Reprints of first appearances are often published, both as single comic books and in trade paperbacks, usually with other early appearances of the character. Marvel Comics' "Essential" line has become popular by giving readers an affordable glimpse into characters' early history. Historically, first appearances told the origin story of the character, although some characters, such as Batman and the Green Goblin, remained mysterious figures for several issues. Modern writers prefer to tell a character's origin across an entire story arc or to keep a newly introduced character mysterious until a "secret origin" issue. Some fans consider this a gimmick and prefer the older method. The artistic merit of many first appearances is debatable. The events portrayed in most famous first appearances are continuously retconned, rebooted, or expanded upon by subsequent writers. Like many Golden and Silver Age comics, first appearances often become dated and do not fit the modern portrayal of the character. However, some first appearances are considered classics. 1990s-era Spider-Man writer Howard Mackie said that his favorite story featuring the character was his first appearance and origin story in Amazing Fantasy #15 (August 1962), stating that writer Stan Lee and artist Steve Ditko "gave us everything we needed, I wanted or could ask for in the least possible space. Every single person who retells the origin never improves on the original, they simply expand it." Monetary value of first appearance issues First appearances of popular characters are among the most valuable comic books in existence. Of the "ten most valuable comic books" listed in the spring 2002 issue of The Overstreet Comic Book Price Guide, seven are first appearances of popular superheroes. Another, Marvel Comics #1 (October 1939), is the first appearance of the Golden Age Human Torch, but is more noteworthy as the first comic published by Marvel Comics. It can take many years for a character to attain sufficient popularity after their first appearance to be considered "iconic." By the time a character reaches that level of popularity, it is common for few copies of their first appearance issue to remain. Furthermore, even fewer of those remaining copies will be in the pristine condition prized by collectors. The few that remain can be worth thousands of dollars to interested collectors. For example, in 2004, a copy of Flash Comics #1 (January 1940), the first appearance of the Flash, was auctioned for $42,000, and a copy of Captain America Comics #1 (March 1941), the first appearance of Captain America, sold for $64,400. In 2010, another copy of Flash Comics #1 sold privately for $450,000. The first appearance of Superman, Action Comics #1 (June 1938), has been regarded as the "holy grail" of comic books due to its cultural significance and rarity; fewer than one hundred copies are thought to exist. Superman is widely considered to have solidified, if not created, the superhero archetype; therefore, his first appearance is important not only to fans of the character but to fans of superheroes and comic books as a whole.
Well-preserved copies of Action Comics #1 have been sold at auction for record-breaking prices. A copy graded at 8.0 ("very fine") on the 10-point scale typically used by collectors was sold at auction for $1,000,000 in 2010. Even a copy graded at a much lower 5.5 ("fine minus") sold for $956,000 in 2016. Shortly after the record-breaking million-dollar sale of Action Comics #1 in 2010, a copy of Detective Comics #27, featuring the first appearance of Batman, was sold for $1,075,000 in a Heritage auction. Several factors determine the value of a first appearance. All values below are according to ComicsPriceGuide.com and are for editions certified by the Certified Collectibles Group (see below): The importance of the character(s) that debuted: the first appearance of Spider-Man in fine condition is listed at $45,150; the first appearance of the similarly popular Iron Man, in the same condition, is listed at $3,837; and the first appearances of most characters are not valued significantly higher than other comics published the same month. The rarity of the comic book itself: comics from the Golden Age are usually more valuable than later comic books because they are older and fewer copies survive. Spider-Man is more popular than the Spectre, but Spider-Man's 1962 first appearance is valued at $45,150 while a copy of the Spectre's 1940 debut, in fine condition, is valued at $54,000. Also, first appearances often lack value if they are relatively recent issues of high-profile, best-selling titles. Except during a 1990s collector's bubble, the first appearances of several Image Comics characters and newer X-Men have not been as valuable as one might expect for such popular characters, because those comics were widely produced. Other reasons for historical importance: The Fantastic Four #1 (November 1961) is not only the first appearance of the eponymous group but also represents a turning point in the history of Marvel Comics and is the first issue of a long-running series. Occasionally, a comic book is the first appearance of more than one important character. Usually the characters are related; X-Men #1 (September 1963) introduced the X-Men and their archenemy Magneto. Rarely, however, a comic book is the first appearance of two unrelated, important characters. More Fun Comics #73 (November 1941) introduced both Green Arrow and Aquaman, who have little relation to one another. This is also the case with Action Comics #1, which contained the first appearances of Zatara and Tex Thompson, as well as Superman. Occasionally a first appearance will lack the value expected for a character of such stature because the debut was not splashy. Wonder Woman, a popular and historically important hero, debuted in the anthology title All Star Comics #8 (December 1941) and was not featured on the cover. This issue is valued at $30,000 in fine condition. Comparatively, the first appearances of equally (or even less) important peers Green Lantern and the Flash, boldly introduced on their covers, are worth $131,250 and $69,000, respectively. Arguably, the first appearance of Wonder Woman is worth much less because she did not make a flashy debut that lent the comic book an air of history. As is the case with all collectibles, condition greatly affects the value of comic books, although considerable wear is expected for decades-old comics.
Most comic books are worth more if their condition is certified and they are protectively packaged (or "slabbed") by the Certified Collectibles Group, a professional grading service involved in the sale of most high-value comic books, although some fans accuse the group of inflating the value of comics. Ambiguous cases While seemingly a simple concept, determining a first appearance may be complex. The following are instances in which a character's first appearance may be difficult to determine: Those unfamiliar with comics may assume that Iron Man's first appearance is The Invincible Iron Man #1 (May 1968). However, in the Golden and early Silver Ages of comic books, few superheroes debuted in magazines carrying their names. More often a character first appeared in a generically titled anthology series. If the character proved popular, a new series was launched. For example, Iron Man first appeared in Tales of Suspense #39 (March 1963) and appeared regularly in that series for five years before Marvel launched a series properly named Iron Man. Wonder Woman, Spider-Man, Thor, and many others also first appeared in anthology series. The first appearance of "all-star" teams is given as the first instance in which the team banded together, regardless of whether it consists of previously existing characters. The first appearance of the Justice League of America is considered to be The Brave and the Bold #28 (May 1960), the issue in which they first operated as a group, although none of its members first appeared in that issue. Alternatively, X-Men #1 (September 1963) is the first appearance both of the X-Men and of its original members. Sometimes a character first appears on the last page of an issue, foreshadowing his or her greater role in the next issue. Arguments can ensue over whether the first appearance is the issue containing the final-page cameo or the subsequent issue which more adequately introduced the character. Wolverine was first seen on the last page of The Incredible Hulk #180 (October 1974) but makes a fuller appearance in issue #181 (November 1974). Stricter fans may consider The Incredible Hulk #180 Wolverine's first appearance, but most consider it #181. ComicsPriceGuide.com lists a copy of issue #180, rated very fine, at $149 and #181 at $2,075. Comparatively, The Incredible Hulk #179 (September 1974), which has no special importance, is listed at $11, so both types of first appearance add value to a comic book. Retconning can also complicate first appearances. Initially, Cable was portrayed as a wholly new character, first appearing in The New Mutants #87 (March 1990). However, writers later changed his background, stating that Cable is an adult, time-traveling Nathan Summers, the son of Cyclops and Madelyne Pryor, first seen in Uncanny X-Men #201 (January 1986). Both issues could be given as the first appearance of Cable. Further complicating the matter, Cable was seen in a cameo at the end of The New Mutants #86 (February 1990). Some superhero identities are used by more than one character. The original Green Lantern first appeared in All-American Comics #16 (April 1940). During the Silver Age, Green Lantern, like many DC heroes, was rebooted with a totally new identity. The second Green Lantern, Hal Jordan, debuted in Showcase #22 (October 1959). All-American Comics #16 is still considered the first appearance of Green Lantern, both of the original title-bearer and of the superhero identity itself.
To avoid confusion, Showcase #22 is called the first appearance of Hal Jordan, of Green Lantern II or of the Silver Age Green Lantern. Occasionally, a character will appear in the background of a comic book before fully introduced. Spider-Man's early love interest Liz Allan is first addressed by name in Amazing Spider-Man #4 (September 1963). However, an unnamed character in Amazing Spider-Man #1 (March 1963) is, based on her appearance and dialogue, probably Allan. Plus, Amazing Fantasy #15 (August 1962), shows an unnamed, unspeaking character who looks exactly like Allan. Thus Allan's first appearance may be given as any of the three. Some characters appear in more than one continuity. While the first appearance of Nightcrawler is Giant-Size X-Men #1 (May 1975), the first appearance of "Ultimate Nightcrawler" (Nightcrawler in the alternate Ultimate Marvel universe) is Ultimate X-Men #6 (August 2001). Sometimes new characters are created for television or film adaptations of a franchise and are later added to the comic book continuity. The Batman adversary Harley Quinn debuted in the 1992 Batman: The Animated Series episode "Joker's Favor". Her first appearance in comic format was the graphic novel The Batman Adventures #12, which took place in the continuity of Batman: The Animated Series. Her first appearance in the regular "DC Universe" was the 1999 one-shot Batman: Harley Quinn. Thus, her first appearance is technically "Joker's Favor", her first appearance in a comic book was The Batman Adventures #12 and her first appearance in the regular DC Comics continuity was Batman: Harley Quinn. Similarly, Firestar first appeared in Spider-Man and His Amazing Friends #1, which adapted the first episode of the TV series. Her first Earth-616 appearance was in The Uncanny X-Men #193. Rarely, a character debuts in a publisher's foreign branch and then appears in a domestic series. Psylocke first appeared in Captain Britain #8 (December 1976), an original series of Marvel UK not widely available outside Great Britain. Her debut in an American series was The New Mutants Annual #2 (1986). Her first appearance is sometimes given as either but more correctly it is Captain Britain #8 while The New Mutants Annual #2 is her first US appearance. Some characters appear first in a normal supporting role before becoming a superhero or villain. For example, Roderick Kingsley first appeared as a minor supporting character in The Spectacular Spider-Man #43 (June 1980). However, he would later take on the villainous role of the Hobgoblin in The Amazing Spider-Man #238 (March 1983), becoming one of Spider-Man's most dangerous foes. The latter issue, featuring his first appearance as the Hobgoblin, is worth quite more than his original debut. First appearances of popular heroes, villains and teams See also Comic book collecting List of first appearances in Marvel Comics publications List of Marvel Comics superhero debuts Notes Nicolas Cage's 9.0 graded Action Comics #1 sold in 2011. Batman #1, the first appearance of the Joker and Catwoman, is especially valuable since it is also the first issue of a long-running series and the first comic book to bear Batman's name as its title. References Comics terminology Beginnings
First appearance
Physics
2,838
2,797,401
https://en.wikipedia.org/wiki/COMSOL%20Multiphysics
COMSOL Multiphysics is a finite element analysis, solver, and simulation software package for various physics and engineering applications, especially coupled phenomena and multiphysics. The software provides conventional physics-based user interfaces and supports coupled systems of partial differential equations (PDEs). COMSOL Multiphysics provides an IDE and unified workflow for electrical, mechanical, fluid, acoustics, and chemical applications. Besides the classical problems that can be addressed with application modules, the core Multiphysics package can be used to solve PDEs in weak form. An API for Java and MATLAB can be used to control the software externally. The program also serves as an application builder for physics applications. Several modules are available for COMSOL, categorized according to the application areas of Electrical, Mechanical, Fluid, Acoustic, Chemical, Multipurpose, and Interfacing. See also Finite element method Multiphysics List of computer simulation software References External links Finite element software Finite element software for Linux Computer-aided engineering software Physics software
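The "weak form" mentioned above is standard finite element background rather than anything COMSOL-specific. As an illustrative sketch (the Poisson problem is an assumed example, not taken from the article), the PDE is multiplied by a test function and integrated by parts before discretization:

```latex
% Strong form: find u with  -\nabla^2 u = f  in \Omega,  u = 0 on \partial\Omega.
% Weak form: find u \in H_0^1(\Omega) such that
\int_{\Omega} \nabla u \cdot \nabla v \,\mathrm{d}\Omega
    = \int_{\Omega} f\, v \,\mathrm{d}\Omega
\qquad \text{for all test functions } v \in H_0^1(\Omega).
```

A finite element solver discretizes this integral statement over a mesh, which is why supplying a PDE directly in weak form is enough to define a problem.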
COMSOL Multiphysics
Physics
201
296,435
https://en.wikipedia.org/wiki/Electrical%20susceptance
In electrical engineering, susceptance (B) is the imaginary part of admittance (Y), where the real part is conductance (G). The reciprocal of admittance is impedance (Z), where the imaginary part is reactance (X) and the real part is resistance (R). In SI units, susceptance is measured in siemens (S). Origin The term was coined by C.P. Steinmetz in an 1894 paper. In some sources Oliver Heaviside is given credit for coining the term, or with introducing the concept under the name permittance. This claim is mistaken according to Steinmetz's biographer: the term susceptance does not appear anywhere in Heaviside's collected works, and Heaviside used the term permittance to mean capacitance, not susceptance. Formula The general equation defining admittance is Y = G + jB, where Y is the admittance, G is the conductance and B is the susceptance (all in siemens), and j is the imaginary unit (j² = −1). The admittance (Y) is the reciprocal of the impedance (Z), if the impedance is not zero: Y = 1/Z, with Z = R + jX, where Z is the impedance, R is the resistance and X is the reactance (all in ohms). The susceptance is the imaginary part of the admittance, B = Im(Y). The magnitude of the admittance is |Y| = √(G² + B²). Similar formulas transform admittance into impedance, hence susceptance (B) into reactance (X): Z = 1/Y = (G − jB)/(G² + B²), hence X = −B/(G² + B²); conversely, B = −X/(R² + X²). The reactance and susceptance are only reciprocals in the absence of either resistance or conductance (only if either R = 0 or G = 0, either of which implies the other, as long as Z ≠ 0, or equivalently as long as Y ≠ 0). Relation to capacitance In electronic and semiconductor devices, transient or frequency-dependent current between terminals contains both conduction and displacement components. Conduction current is related to moving charge carriers (electrons, holes, ions, etc.), while displacement current is caused by a time-varying electric field. Carrier transport is affected by electric field and by a number of physical phenomena, such as carrier drift and diffusion, trapping, injection, contact-related effects, and impact ionization. As a result, device admittance is frequency-dependent, and the simple electrostatic formula for capacitance, C = q/V, is not applicable. A more general definition of capacitance, encompassing the electrostatic formula, is C(ω) = B(ω)/ω, where Y(ω) is the device admittance, B(ω) is its susceptance, both evaluated at the angular frequency in question, and ω is that angular frequency. It is common for electrical components to have slightly reduced capacitances at extreme frequencies, due to slight inductance of the internal conductors used to make capacitors (not just the leads), and permittivity changes in insulating materials with frequency: C is very nearly, but not quite, a constant. Relationship to reactance Reactance is defined as the imaginary part of electrical impedance, and is analogous to but not generally equal to the negative reciprocal of the susceptance; the two are negative reciprocals of one another only in the special case where the real parts vanish (either zero resistance or zero conductance). In the special case of entirely zero admittance or exactly zero impedance, the relations are encumbered by infinities. However, for purely reactive impedances (which are purely susceptive admittances), the susceptance is equal to the negative reciprocal of the reactance, except when either is zero. 
In mathematical notation: if Z = jX then Y = jB with B = −1/X and X = −1/B. The minus sign is not present in the relationship between electrical resistance and the analogue of conductance, G = 1/R, but otherwise a similar relation holds for the special case of reactance-free impedance (or susceptance-free admittance): Y = G = 1/R = 1/Z. If the imaginary unit is included, we get B = −1/X for the resistance-free case since Y = 1/Z = 1/(jX) = −j/X = jB. Applications High susceptance materials are used in susceptors built into microwavable food packaging for their ability to convert microwave radiation into heat. See also Electrical measurements SI electromagnetism units References Physical quantities Electrical engineering
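As a quick numeric illustration of the relations above, here is a minimal Python sketch; the resistance and reactance values are arbitrary examples chosen for the sketch, not values from the article:

```python
# Sketch: impedance Z = R + jX  ->  admittance Y = G + jB (ohms in, siemens out)
def admittance(R: float, X: float) -> complex:
    Z = complex(R, X)
    return 1 / Z          # Y = 1/Z, defined only for Z != 0

R, X = 3.0, 4.0           # example component values (ohms)
Y = admittance(R, X)
G, B = Y.real, Y.imag     # conductance and susceptance (siemens)

# Check the closed forms G = R/(R^2 + X^2) and B = -X/(R^2 + X^2)
assert abs(G - R / (R**2 + X**2)) < 1e-12
assert abs(B - (-X / (R**2 + X**2))) < 1e-12

print(G, B)                        # 0.12 -0.16
# B = -1/X only in the purely reactive case (R = 0):
print(admittance(0.0, X).imag)     # -0.25 == -1/X
```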
Electrical susceptance
Physics,Mathematics,Engineering
787
53,082,558
https://en.wikipedia.org/wiki/Contaminants%20of%20emerging%20concern
Contaminants of emerging concern (CECs) is a term used by water quality professionals to describe pollutants that have been detected in environmental monitoring samples, that may cause ecological or human health impacts, and typically are not regulated under current environmental laws. Sources of these pollutants include agriculture, urban runoff and ordinary household products (such as soaps and disinfectants) and pharmaceuticals that are disposed to sewage treatment plants and subsequently discharged to surface waters. CECs include different substances like pharmaceuticals, personal care products, industrial byproducts, and agricultural chemicals. These substances often bypass regular detection and treatment processes, leading to their unintended persistence in the environment. The complexity of CECs arises not only from their different chemical nature but also from the complex ways they interact with ecosystems and human health. As such, they are the focus of increasing examination by researchers, policymakers, and public health officials who want to understand their long-term effects and develop effective interventions. Global initiatives, like those from the World Health Organization (WHO) and the United States Environmental Protection Agency (US EPA), emphasize the need to create international standards and effective environmental policies to address the challenges posed by CECs. Public awareness and advocacy play crucial roles in driving the research agenda and policy development for CECs, highlighting the need for updated manufacturing practices and developing more remediation and detection methods. History and background The concept of CECs gained significant attention in the early 21st century as advances in analytical techniques allowed for the detection of these substances at trace levels in various environmental matrices. The increased awareness of CECs is partly due to their abundant presence in wastewater, surface water, groundwater, and drinking water, often because of urbanization, industrial activities, and the widespread use of pharmaceuticals and personal care products. The recognition of the potential risks posed by CECs has led to a growing body of research aimed at understanding their sources, fate, and effects in the environment, as well as the development of strategies for their management and removal. Past events In the 19th and early 20th centuries asbestos was used in many products and in building construction and was not considered a threat to human health or the environment. Deaths and lung problems caused by asbestos were first documented in the early 20th century. The first regulations of the asbestos industry were published in the UK in the 1930s. Regulation of asbestos in the US did not occur until the 1980s. In the 1970s there was a serious issue with the water treatment infrastructure of some US states, notably in Southern California with water sourced from the Sacramento–San Joaquin River Delta. Water was being disinfected for domestic use through chlorine treatment, which was effective for killing microbial contaminants and bacteria, but in some cases, it reacted with runoff chemicals and organic matter to form trihalomethanes (THMs). Research done in the subsequent years began to suggest the carcinogenic and harmful nature of this category of compounds. EPA issued its first standard for THMs, applicable to public water systems, in 1979, and more stringent standards in 1998 and 2006. 
Rapid industry changes also make the treatment and regulation of CECs particularly challenging. For instance, the replacing substance (GenX), for the recently regulated perfluorooctanoic acid (PFOA), a PFAS, had a more detrimental environmental impact, resulting in the subsequently banning of GenX as well. Hence, there is a pressing need for the treatment and management of CECs to keep up with global trends. Classification For a compound to be recognized as an emerging contaminant it has to meet at least two requirements: Adverse human health effects have been associated with a compound. There is an established relationship between the positive and negative effect(s) of the compound. Emerging contaminants are those which have not previously been detected through water quality analysis, or have been found in small concentrations with uncertainty as to their effects. The risk they pose to human or environmental health is not fully understood. Contaminant classes Contaminants of emerging concern (CECs) can be broadly classed into several categories of chemicals such as pharmaceuticals and personal care products, cyanotoxins, nanoparticles, and flame retardants, among others. However, these classifications are constantly changing as new contaminants (or effects) are discovered and emerging contaminants from past years become less of a priority. These contaminants can generally be categorized as truly "new" contaminants that have only recently been discovered and researched, contaminants that were known about but their environmental effects were not fully understood, or "old" contaminants that have new information arising regarding their risks. Pharmaceuticals Pharmaceuticals are gaining more attention as CECs because of their continual introduction into the environment and their general lack of regulation. These compounds are often present at low concentrations in water bodies and little is currently known about their environmental and health effects from chronic exposure; pharmaceuticals are only now becoming a focus in toxicology due to improved analytical techniques that allow very low concentrations to be detected. There are several sources of pharmaceuticals in the environment, including most prominently effluent from sewage treatment plants, aquaculture and agricultural runoff. Personal care products Personal care products often contain a complex mixture of chemicals such as preservatives (e.g., parabens), UV filters (e.g., oxybenzone), plasticizers (e.g., phthalates), antimicrobials (e.g., triclosan), fragrances, and colorants. Many of these compounds are synthesized chemicals that are not typically found in nature. Chemicals from personal care products can enter the environment through various pathways. After use, they are often washed down the drain and can end up in the wastewater stream. These substances are not all completely removed by conventional wastewater treatment processes, leading to their release into natural water bodies. Some of these chemicals are persistent in the environment and can bioaccumulate in the tissues of organisms, potentially causing ecological disruptions. They can also have endocrine-disrupting properties that interfere with the hormonal systems of wildlife and humans. Cyanotoxins In recent years, there has been an increase of cyanobacterial blooms due to the eutrophication (or increase in nutrient levels) of surface waters around the world. 
Increases in certain nutrients, such as nitrogen and phosphorus, are linked to fertilizer runoff from agricultural fields, and are also found in certain products, such as detergents, in urban spaces. These blooms can release toxins that can decrease water quality and are a risk to human and wildlife health. Additionally, there are a lack of regulations regarding the maximum contaminant levels (MCL) allowed in drinking water sources. Cyanotoxins can have both acute and chronic toxic effects, and there are often many consequences for the health of the environment where these blooms occur. Industrial chemicals Industrial chemicals from various industries produce harmful chemicals that are known to cause harm to human health and the environment. Common industrial chemicals, like 1,4-Dioxanes, Perfluorooctane sulfonate (PFOS) and Perfluorooctanoic acid (PFOA), are commonly found in various water sources. Nanomaterials Nanomaterials include carbon-based materials, metal oxides, metals, and quantum dots. Nanomaterials can enter the environment during their manufacturing, consumer use, or disposal. Due to their small size, nanomaterials behave differently than larger particles. They have a high surface area to volume ratio, which can lead to increased reactivity and the potential to transport throughout the environment. Nanomaterials are challenging to detect and monitor due to their size and the absence of standardized methods for measuring their presence and concentration in various media. Sources and pathways Agricultural runoff Agricultural runoff is a major pathway through which CECs enter the environment. Compounds like pesticides and pharmaceuticals from fertilizers are carried by water from farms into their surrounding areas soil and water bodies. Then runoff happens after rainfall or irrigation, which causes an influx of chemicals to leak out of the soil where they were dumped and into rivers, lakes, and groundwater. The runoff can contain a CEC’s which are not regulated or whose environmental impacts are not well understood, contributing to the pollution of aquatic ecosystems, and potentially affecting human water sources. A significant challenge is monitoring levels of CECs in bodies of water. A nationwide survey revealed that soil erosion, nutrient loss, and pesticide runoff from America's vast agricultural lands are leading causes of water quality pollution. Approximately 46% of rivers and streams in the United States have conditions which are harmful to aquatic life. Additionally, only about 28% of these water bodies are rated as 'healthy' based on their biological communities. Industrial discharge   Industrial discharge is when waste products are released into the environment from manufacturing and chemical processing facilities. This waste can include a wide variety of CECs like heavy metals, solvents, and various organic compounds that are not regularly detected for or removed by standard treatment processes. These contaminants can accumulate in sediments and biota, posing risks to aquatic life and human health. The complexity and diversity of industrial discharge requires advanced treatment technologies and stricter regulatory frameworks to prevent CECs from contaminating the environment. Advanced oxidation processes and membrane technologies have been researched and shown to reduce CECs from industrial discharge, however there is an excessive cost to retrofit existing treatment facilities with this technology. 
Urban runoff   Urban runoff is rainwater that runs through streets, gardens, and other urban surfaces, picking up various pollutants along the way. These pollutants can include CECs like microplastics from synthetic materials, polycyclic aromatic hydrocarbons (PAHs) from vehicle exhausts, and pharmaceuticals from improperly disposed medications. This untreated runoff can enter storm drains and eventually discharge into natural water bodies, often bypassing wastewater treatment facilities and leading to their accumulation in the environment, where they can cause harm to wildlife and potentially enter the human food chain. Permeable pavements and rain gardens are being implemented and tested in some urban areas to mitigate the effects of runoff, helping to filter pollutants before they reach the water system. Wastewater treatment plants   Wastewater treatment plants (WWTPs) are designed to remove contaminants from domestic and industrial wastewater before it is released into the environment. However, some WWTPs, particularly older or under-resourced ones are not equipped to effectively remove all CECs, such as advanced pharmaceuticals, personal care product ingredients, and certain types of industrial chemicals. These substances can pass through the treatment process and enter aquatic ecosystems, which creates a challenge for water treatment technology and emphasizes the need for ongoing research and infrastructure improvement to address the removal of CECs from wastewater. Advances like tertiary treatment stages, which incorporate advanced filtration and chemical removal techniques, are being tested to address the presence of CECs in waste, though widespread implementation is yet to be seen due to novelty, cost, and logistical challenges. Environmental and health impacts Relation between compound and effects There is an overlap of many anthropogenically sourced chemicals that humans are exposed to regularly. This makes it difficult to attribute negative health causality to a specific, isolated compound. EPA manages a Contaminant Candidate List to review substances that may need to be controlled in public water systems. EPA has also listed twelve contaminants of emerging concern at federal facilities, with ranging origins, health effects, and means of exposure. The twelve listed contaminants are as follows: Trichloropropane (TCP), Dioxane, Trinitrotoluene (TNT), Dinitrotoluene, Hexahydro-trinitro-triazane (RDX), N-nitroso-dimethylamine (NDMA), Perchlorate, Polybrominated biphenyls (PBBs), Tungsten, Polybrominated diphenyl ethers (PBDEs) and Nanomaterials. Selected compounds listed as emerging contaminants The NORMAN network enhances the exchange of information on emerging environmental substances. A Suspect List Exchange (SLE) has been created to allow sharing of the many potential contaminants of emerging concern. The list contains more than 100,000 chemicals. Table 1 is a summary of emerging contaminants currently listed on one EPA website and a review article. Detailed use and health risk of commonly identified CECs are listed in the table below. Aquatic life The environmental impact of CECs on aquatic life is broad. For example, endocrine-disrupting chemicals (EDCs) have the potential to imitate natural hormones, which can lead to reproductive failures and eventually population declines or increases in fish and amphibians. 
EDCs are found in a variety of common contaminants, including pesticides and industrial chemicals, and they can also lead to altered growth and reproduction in aquatic life (US EPA) (USGS.gov). Microplastics are another concern, as they can lead to physical blockages in the digestive tracts of aquatic organisms and act as paths for other toxins, leading to bioaccumulation and increase in concentration as they move up each level of the food chain. These impacts not only threaten biodiversity but also the stability of aquatic ecosystems upon which many species depend. Ongoing monitoring and regulatory efforts are crucial for assessing the full scope of CECs' impacts and for the development of effective strategies to mitigate their presence in aquatic ecosystems (NOAA.gov). Human health When CECs bypass water filtration systems and contaminate drinking water or accumulate in the food chain, they can also cause risks to human health. Chronic exposure to low doses of CECs has been linked to various health issues. For example, certain pharmaceutical CECs and EDCs have been associated with hormonal imbalances, increased risks of certain cancers, and developmental problems. The antibiotics present in the environment can also contribute to the development of antibiotic-resistant bacteria, which poses a serious threat to human health by reducing the effectiveness of antibiotic treatments. Studies have shown that even at low concentrations, the presence of CECs in drinking water can correlate with neurological disorders and can decrease cognitive function over time. Certain perfluoroalkyl substances (PFAS), which are a type of CEC, have been linked to different adverse health outcomes like increased cholesterol levels, changes in liver enzymes, and reduced vaccine efficacy, which raises concerns about widespread exposure to these chemicals. The CDC also identifies exposure to high levels of CECs with negative effects on the immune system, by compromising the body’s ability to fight infections and increasing the risk of rheumatological diseases. Exposure to a combination of various CECs, which can occur through contaminated drinking water or food chains, may lead to cumulative on human health that are not yet fully understood. Wildlife Wildlife, particularly species reliant on aquatic environments, are exceptionally vulnerable to the disruptions caused by CECs. Terrestrial species can be exposed to CECs through contaminated food, water, and soil. These contaminants can cause pollution which can lead to mortality or can indirectly result in changes in behavior which affect essential activities like feeding and mating. Migratory species are especially at risk as they can spread the impact of CECs across various ecosystems. The health of wildlife populations is an important indicator of environmental quality, and the presence of CECs can signal broader ecological issues that require attention. Detection and monitoring Detection and monitoring of CECs is done through a variety of sophisticated analytical techniques. High-performance liquid chromatography (HPLC) paired with mass spectrometry (MS) can help identify organic CECs, due to their high sensitivity and selectivity EPA. For volatile and semi-volatile compounds, gas chromatography (GC) coupled with MS is commonly used FDA. Metals and metalloids are typically analyzed using techniques like inductively coupled plasma mass spectrometry (ICP-MS), which allows for the simultaneous analysis of multiple elements USGS. 
The complications with monitoring CECs go past just detection. Their pathways across different environmental also must be monitored. This can be done with passive sampling devices, which accumulate contaminants over time and give a comprehensive view of contaminant levels at different locations NOAA. Biosensors are also used and integrated to detect specific contaminants rapidly, which is important for on-site monitoring applications NIH. The use of remote sensing and geographic information systems (GIS) for spatial analysis is expanding, these tools facilitate the tracking of pollution spread NASA Earth Science. Recent advancements in nanotechnology have led to the development of nano-sensors which can detect trace amounts of CECs Nature Nanotechnology.   There are sites with waste that would take hundreds of years to clean up and prevent further seepage and contamination into the water table and surrounding biosphere. In the United States, the environmental regulatory agencies on the federal level are primarily responsible for determining standards and statutes which guide policy and control in the state to prevent citizens and the environment from being exposed to harmful compounds. Emerging contaminants are examples of instances in which regulation did not do what it was supposed to, and communities have been left vulnerable to adverse health effects. Many states have assessed what can be done about emerging contaminants and currently view it as a serious issue, but only eight states have specific risk management programs addressing emerging contaminants. Regulations and management These are tactics and methods that aim to remediate the effects of certain, or all, CECs by preventing movement throughout the environment, or limiting their concentrations in certain environmental systems. It is particularly important to ensure that water treatment approaches do not simply move contaminants from effluent to sludge given the potential for sludge to be spread to land providing an alternative route to entering the environment. Advanced treatment plant technology For some emerging contaminants, several advanced technologies—sonolysis, photocatalysis, Fenton-based oxidation and ozonation—have treated pollutants in laboratory experiments. Another technology is "enhanced coagulation" in which the treatment entity would work to optimize filtration by removing precursors to contamination through treatment. In the case of THMs, this meant lowering the pH, increasing the feed rate of coagulants, and encouraging domestic systems to operate with activated carbon filters and apparatuses that can perform reverse osmosis. Although these methods are effective, they are costly, and there have been many instances of treatment plants being resistant to pay for the removal of pollution, especially if it wasn't created in the water treatment process as many EC's occur from runoff, past pollution sources, and personal care products. It is also difficult to incentivize states to have their own policies surrounding contamination because it can be burdensome for states to pay for screening and prevention processes. There is also an element of environmental injustice, in that lower income communities with less purchasing and political power cannot buy their own system for filtration and are regularly exposed to harmful compounds in drinking water and food. However, recent treads for light-based systems shows great potential for such applications. 
With the decrease in cost of UV-LED systems and growing prevalence of solar powered systems, it shows great potential to remove CECs while keeping costs low. Metal–organic framework-based nano-adsorbent remediation Researchers have suggested that metal–organic frameworks (MOFs) and MOF-based nano-adsorbents (MOF-NAs) could be used in the removal of certain CECs, such as pharmaceuticals and personal care products, especially in wastewater treatment. Widespread use of MOF-based nano-adsorbents has yet to be implemented due to complications created by the vast physicochemical properties that CECs contain. The removal of CECs largely depends on the structure and porosity of the MOF-NAs and the physicochemical compatibility of both the CECs and the MOF-NAs. If a CEC is not compatible with the MOF-NA, then particular functional groups can be chemically added to increase compatibility between the two molecules. The addition of functional groups causes the reactions to rely on other chemical processes and mechanisms, such as hydrogen bonding, acid-base reactions, and complex electrostatic forces. MOF-based nano-adsorbent remediation heavily relies on water-qualities, such as pH, in order for the reaction to be executed efficiently. MOF-NA remediation can also be used to efficiently remove other heavy metals and organic compounds in wastewater treatment. Membrane bioreactors Another method of possible remediation for CECs is through the use of membrane bioreactors (MBRs) that act through mechanisms of sorption and biodegradation. Membrane bioreactors have shown results on being able to filter out certain solutes and chemicals from wastewater through methods of microfiltration, but due to the extremely small size of CECs, MBRs must rely on other mechanisms in order to ensure the removal of CECs. One mechanism that MBRs use to remove CECs from wastewater is sorption. Sorption of the CECs to sludge deposits in the MBR's system can allow the deposits to sit and be bombarded with water, causing the eventual biodegradation of CECs in the membrane. Sorption of a particular CEC can be even more efficient in the system if the CEC is hydrophobic, causing it to move from the wastewater to the sludge deposits more quickly. Current events and advocacy The management of CECs has gained increasing attention in recent years due to their potential impact on public health and the environment. In response to these concerns, various governmental and international organizations have initiated efforts to address CECs through research, regulation, and public outreach. In January 2024, the White House Office of Science and Technology Policy announced a coordinated federal research initiative to address CECs in surface waters. The initiative aims to enhance understanding of the sources, occurrence, and effects of CECs, as well as to develop effective strategies for their removal and management. Furthermore, the Organization for Economic Co-operation and Development (OECD) has been actively involved in addressing CECs. The OECD Workshop on Managing Contaminants of Emerging Concern in Surface Waters brought together experts from various countries to discuss challenges and solutions related to CECs, emphasizing the importance of international collaboration in tackling this global issue. These recent developments underscore the growing recognition of the need for concerted efforts to address the challenges posed by CECs to protect public health and the environment. 
Advocacy efforts for the regulation of CECs are important to push for legislation and regulatory action. Environmental advocacy groups raise awareness about the potential risks associated with CECs and urge for the advancement of environmental protection policies. These groups lobby for the enhancement of water quality standards, particularly the inclusion of CECs in the monitoring and treatment protocols of wastewater facilities, resulting in improved effluent quality NECRI. Additionally, they push for a comprehensive detection framework, and advocate for precautionary policies to prevent the release of harmful chemicals into the environment (Environmental Working Group). References Pollutants Water pollution in the United States Water pollution
Contaminants of emerging concern
Chemistry,Environmental_science
4,795
2,574,694
https://en.wikipedia.org/wiki/Cone%20beam%20reconstruction
In microtomography X-ray scanners, cone beam reconstruction is one of two common scanning methods, the other being fan beam reconstruction. Cone beam reconstruction uses a two-dimensional approach for obtaining projection data. Instead of utilizing a single row of detectors, as fan beam methods do, a cone beam system uses a standard charge-coupled device camera focused on a scintillator material. The scintillator converts X-ray radiation to visible light, which is picked up by the camera and recorded. The method has enjoyed widespread implementation in microtomography, and is also used in several larger-scale systems. An X-ray source is positioned across from the detector, with the object being scanned in between (this is essentially the same setup used for an ordinary X-ray fluoroscope). Projections from different angles are obtained in one of two ways. In one method, the object being scanned is rotated; this has the advantage of simplicity, since a rotating stage adds little complexity to the system. The second method involves rotating the X-ray source and camera around the object, as is done in ordinary CT scanning and SPECT imaging. This adds complexity, size and cost to the system, but removes the need to rotate the object. The method is referred to as cone-beam reconstruction because the X-rays are emitted from the source as a cone-shaped beam: it begins as a tight beam at the source and expands as it moves away. See also Computed tomography Industrial CT scanning Tomographic reconstruction References Medical imaging Medical physics X-ray computed tomography
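As a rough illustration of the cone-beam geometry described above, the sketch below (Python) computes where a single object point lands on a flat detector for one projection angle. The source-to-axis and axis-to-detector distances, and the choice of rotating the object stage rather than the gantry, are assumptions made for the example, not values from the article:

```python
import numpy as np

def cone_beam_projection(point, angle_rad, src_to_axis=100.0, axis_to_det=50.0):
    """Project a 3-D object point onto a flat detector, returning (u, v).

    The source sits on the -x axis, the detector plane at x = axis_to_det,
    and rotating the object stage by `angle_rad` stands in for rotating the
    source and camera around the object (the two are geometrically equivalent).
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    p = rot @ np.asarray(point, dtype=float)       # object point after stage rotation
    # Ray from the source (-src_to_axis, 0, 0) through p, intersected with the detector plane:
    t = (src_to_axis + axis_to_det) / (src_to_axis + p[0])
    return float(t * p[1]), float(t * p[2])        # v captures the cone (out-of-plane) angle

# A point above the rotation axis is magnified by the diverging, cone-shaped beam.
print(cone_beam_projection([0.0, 0.0, 10.0], 0.0))  # (0.0, 15.0) with these distances
```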
Cone beam reconstruction
Physics
324
857,110
https://en.wikipedia.org/wiki/Centered%20triangular%20number
A centered (or centred) triangular number is a centered figurate number that represents an equilateral triangle with a dot in the center and all its other dots surrounding the center in successive equilateral triangular layers. This is also the number of points of a hexagonal lattice with nearest-neighbor coupling whose distance from a given point is less than or equal to n. The figures are built step by step: at each step, the previous triangle is surrounded by a triangular layer of new dots. Properties The gnomon of the n-th centered triangular number, corresponding to the (n + 1)-th triangular layer, is C₃,ₙ₊₁ − C₃,ₙ = 3(n + 1). The n-th centered triangular number, corresponding to n layers plus the center, is given by the formula C₃,ₙ = 1 + 3n(n + 1)/2 = (3n² + 3n + 2)/2. Each centered triangular number has a remainder of 1 when divided by 3, and the quotient (if positive) is the previous regular triangular number. Each centered triangular number from 10 onwards is the sum of three consecutive regular triangular numbers. For n > 2, the sum of the first n centered triangular numbers is the magic constant for an n by n normal magic square. Relationship with centered square numbers The centered triangular numbers can be expressed in terms of the centered square numbers C₄,ₙ: C₃,ₙ = (3C₄,ₙ + 1)/4, where C₄,ₙ = n² + (n + 1)². Lists of centered triangular numbers The first centered triangular numbers (C₃,ₙ < 3000) are: 1, 4, 10, 19, 31, 46, 64, 85, 109, 136, 166, 199, 235, 274, 316, 361, 409, 460, 514, 571, 631, 694, 760, 829, 901, 976, 1054, 1135, 1219, 1306, 1396, 1489, 1585, 1684, 1786, 1891, 1999, 2110, 2224, 2341, 2461, 2584, 2710, 2839, 2971, … . The first simultaneously triangular and centered triangular numbers (C₃,ₙ = T_N < 10⁹) are: 1, 10, 136, 1891, 26335, 366796, 5108806, 71156485, 991081981, … . The generating function If the centered triangular numbers are treated as the coefficients of the Maclaurin series of a function, that series converges for all |x| < 1, in which case it can be expressed as the meromorphic generating function (1 + x + x²)/(1 − x)³ = 1 + 4x + 10x² + 19x³ + ⋯. References Lancelot Hogben: Mathematics for the Million (1936), republished by W. W. Norton & Company (September 1993). Figurate numbers
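A short Python sketch of the closed form and of the magic-constant property stated above (the function name is illustrative):

```python
def centered_triangular(n: int) -> int:
    """n-th centered triangular number, 1 + 3*n*(n+1)/2, for n = 0, 1, 2, ..."""
    return (3 * n * n + 3 * n + 2) // 2

# First few values match the list in the article.
assert [centered_triangular(n) for n in range(6)] == [1, 4, 10, 19, 31, 46]

# Each value is 1 mod 3, and the quotient is an ordinary triangular number.
for n in range(1, 10):
    assert centered_triangular(n) % 3 == 1
    assert centered_triangular(n) // 3 == n * (n + 1) // 2

# For n > 2, the sum of the first n centered triangular numbers equals the
# magic constant n*(n^2 + 1)/2 of an n-by-n normal magic square.
n = 5
assert sum(centered_triangular(k) for k in range(n)) == n * (n * n + 1) // 2  # 65
```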
Centered triangular number
Mathematics
537
27,924,015
https://en.wikipedia.org/wiki/DNA%20Plant%20Technology
DNA Plant Technology was an early pioneer in applying transgenic biotechnology to problems in agriculture. The company was founded in Cinnaminson Township, New Jersey, and moved to California in 1994. Some of the plants and products they developed included vine sweet mini peppers, the Fish tomato and Y1 tobacco. In 1996 the company merged with the Mexican conglomerate Empresas La Moderna, through its Bionovo subsidiary. In 2002, Bionova shut down DNA Plant Technology. History DNA Plant Technology was founded in 1981 by Dr. William R. Sharp and Dr. David A. Evans, in Cinnaminson, New Jersey, "to develop tastier, value-added plant-based products for industrial and consumer markets" using "advanced plant-breeding techniques, tissue-culture methods and molecular biology in developing premium food products and improving agricultural raw materials." By 1986, the company had gone public (NASDAQ:DNAP), and had partnerships with American Home Foods, Campbell Soup, Firmenich (a fragrance and flavor company), General Foods, Koppers Company, Hershey Foods, Brown and Williamson Tobacco, United Fruit, and others. By 1992 the company was investing heavily in genetic engineering and had invented, and obtained an issued patent for, the fish antifreeze gene that would become part of the infamous Fish tomato. In 1993, DNAP purchased the Freshworld premium fruit and vegetable brand from Du Pont for a mixture of shares, cash and intellectual rights valued at over $30 million. In 1994, their headquarters moved to Oakland, California. In 1996, the company was out of cash, and agreed to a merger with Empresas La Moderna, S.A. de C.V. (NYSE/ADR:ELM) (“ELM”) through ELM's subsidiary, Bionova, which also controlled the seed company, Seminis. The company became a wholly owned subsidiary of DNAP Holding Corporation (NASDAQ: DNAPD) of which it retained a 30% equity stake. ELM and Bionova were controlled by Alfonso Romo Garza. ELM was a company based in Monterrey, Mexico that operated in three fields: cigarettes (where it held 53% of the Mexican market), vegetable seeds, and packaging. In 1999 DNAP Holding Corporation changed its name to Bionova Holding Corporation and changed its NASDAQ ticker to BNVA. In 2002 Bionova closed down its R&D operations, which had been carried out through its DNA Plant Technology subsidiary. Major works Fish tomato In 1991, DNA Plant Technology applied for and were granted permission to conduct a field test permit for their transgenic fish tomato product (tomato; antifreeze gene; staphylococcal Protein A) from the USDA's Animal and Plant Health Inspection Service. This product remains controversial in the history of biotechnology, because an antifreeze gene isolated from an arctic flounder was transgenically inserted into a tomato in an attempt to create a frost-tolerant tomato. Although this product was tested in a greenhouse, and may have been tested in the field, it was never commercialized. In 1995, DNA Plant Technology unveiled a second generation of a different transgenic tomato and served it at a meeting of its shareholders. That same year, DNA Plant Technology sold its wholly owned subsidiary called to Frost Technology Corporation to Simplot. Tobacco Via its collaboration with the cigarette company, Brown & Williamson, DNA Plant Technology developed a genetically engineered cultivar of tobacco with a higher nicotine content, based on a high-nicotine strain already owned by Brown & Williamson called Y-1. 
Brown & Williamson and DNA Plant Technology were indicted by the US government for exporting the seeds to Brazil in violation of the Tobacco Seed Export law. Popcorn In the mid-1980s, DNAP attempted to use somaclonal variation with corn to produce buttery-tasting popcorn without the need to add butter. Discovery of gene silencing While working for DNA Plant Technology, the scientists Richard A. Jorgensen and Carolyn Napoli made discoveries about post transcriptional gene silencing that went on to form the basis of a number of U.S. patents on gene regulation and crop manipulation. Key experiments in the control of plant transgene expression were performed by Jorgensen after he joined DNA Plant Technology corporation / Advanced Genetic Sciences, Inc., including the modification of flower color in ornamental plants. This research led to the discovery of gene silencing when an extra copy of a key gene yielded white rather than blue flowers. Legal controversy In the 1990s, the FDA targeted DNA Plant Technology, charging that it had illegally smuggled Y1 Tobacco seeds out of the United States. The U.S. Justice Department charged DNA Plant Technology with one misdemeanor count of conspiracy to violate the Tobacco Seed Export law, prohibiting the export of tobacco seeds without a permit (a law which was repealed in 1991). DNA Plant Technology pleaded guilty in 1998 and agreed to cooperate with further investigations of Brown & Williamson. However, the U.S. Supreme Court eventually ruled in March 2000 that the FDA did not have the authority to regulate tobacco as a drug. References Agriculture companies of the United States Defunct biotechnology companies of the United States Biotechnology companies established in 1981 Biotechnology companies disestablished in 2002 Defunct technology companies based in California Genetic engineering and agriculture 1981 establishments in New Jersey 2002 disestablishments in California
DNA Plant Technology
Engineering,Biology
1,094
28,423,132
https://en.wikipedia.org/wiki/Gautami%20%28typeface%29
Gautami is a Microsoft Windows typeface used to display the Telugu script. Versions of it have been supplied in Windows Server 2003, Windows Server 2008, Windows XP, Windows Vista, Windows 7 and Windows 8. It contains Unicode support for the following ranges: Basic Latin Latin-1 Supplement Telugu The name "Gautami" was given by Radhika Mamidi, a resource person for Telugu, from the National Centre for Software Technology, working with R K Joshi's team. It refers to the tributary Gautami of Godavari River. References Microsoft typefaces Brahmic typefaces Sans-serif typefaces Typefaces and fonts introduced in 2011
Gautami (typeface)
Technology
142
1,840,378
https://en.wikipedia.org/wiki/Lad%20culture
Lad culture (also the new lad, laddism) was a media-driven, principally British and Irish subculture of the 1990s and the early 2000s. The term lad culture continues to be used today to refer to collective, boorish or misogynistic behaviour by young heterosexual men, particularly university students. In the lad culture of the 1990s and 2000s, the image of the "lad"—or "new lad"—was that of a generally middle class figure espousing attitudes typically attributed to the working classes. The subculture involved heterosexual young men assuming an anti-intellectual position, shunning cultural pursuits and sensitivity in favour of drinking, sport, sex and sexism. Lad culture was diverse and popular, involving literature, magazines, film, music and television, with ironic humour being a defining trope. Principally understood at the time as a male backlash against feminism and the pro-feminist "new man", the discourse around the new lad represented some of the earliest mass public discussion of how heterosexual masculinity is constructed. Lad culture as a mainstream cultural phenomenon peaked around the turn of the millennium and can be seen as going into decline as the market for lad mags collapsed in the early 2000s, driven by the rise of Internet. Nonetheless, the stereotype of the lad continued to be exploited in advertising and marketing as late as the mid-2010s. Though the term "lad culture" was predominantly used in Britain and Ireland, it was part of a global cultural trend in the developed English speaking world. The title of a 2007 book by the gender studies academic David Nylund about USA Sports Radio, "Beer, Babes and Balls" mirrors the three stereotypical interests of the "lad." The American term Bro culture is clearly closely related, though it originated around two decades later than the term lad culture and therefore needs to be understood against a different cultural context. In popular culture Lad culture did not emerge organically as with earlier British male sub-cultures such as the mods of the 1960s; rather it was a media creation. The term "new lad" was first coined - as a response to then popular concept of the new man - by journalist Sean O'Hagan in a 1993 article in the magazine Arena. The concept was developed and sustained across a diverse range of media: there was a literary component - lad lit; it was closely associated with the musical style Britpop and with certain television shows and stand-up comedians; a number of glossy, violent films in the later 1990s were also popularly linked to lad culture. Most important in shaping and popularising lad culture, though, was the lad mag a new style of lifestyle magazine for young, heterosexual men that became suddenly popular in the mid-1990s. Lad mags Lad mags included Maxim, FHM and Loaded. Britpop Television Men Behaving Badly, Game On and They Think It's All Over were 1990s television programmes that presented images of laddishness dominated by the male pastimes of drinking, watching football, and sex. Film Lad culture grew beyond men's magazines to films such as Snatch and Lock, Stock and Two Smoking Barrels. Irony Lad culture was strongly associated with an ironic position. The strapline of the leading lad mag Loaded was "for men who should know better." The BBC in a 1999 review called "Our Decade: New Lad Rules the World" identified that one of the key concepts associated with lad culture (alongside curry and foreign stag weekends) was "anything being acceptable if its "ironic"." 
Humour in lad mags and in television comedy was a major element of lad culture: the ironic position allowed comedians to both identify themselves as opposed to and, at the same time, indulge in racist, sexist and homophobic jokes. Part of the ironic position can be seen in relation to the term lad itself. Despite the ubiquity of lad culture in the media of the 1990s there was no expectation that real, individual men would seriously identify themselves as lads: to do so would be to invite ridicule. This was a form of distinctively British class play: middle or aspiring middle men were playing at being working class. A 2012 National Union of Students report citing the academic John Benyon identified how "Uncensored displays of masculinity during the 1990s were deemed by those involved to be ironic by their very nature. He [Benyon] highlights how the magazine Loaded consciously reduced working class masculinities to jokes, interest in cars and the objectification of women, and dismissed criticisms as humourless attacks on free speech which failed to see the ironic nature of the representations." Oddly, the lad was both ironic and authentic. Irony was the lad's defining behaviours but the lad himself was often presented as the authentic form of masculinity. For example, GQ in a press-release from 1991 wrote, "GQ is proud to announce that the New Man has officially been laid to rest (if indeed he ever drew breath). The Nineties man knows who he is, what he wants and where he's going, and he's not afraid to say so. And yes, he still wants to get laid." In gender studies Though always principally driven by the media, the concept of the "lad" or "new lad" was widely discussed at the time as a male backlash to feminism and changing gender norms. For example, the writer Fay Weldon claimed in 1999 that, "laddishness is a response to humiliation and indignity ... the girl-power! girl-power! female triumphalism which echoes through the land". The press frequently presented the new lad in opposition to a slightly earlier media construct, the "new man," who supposedly eschewed traditionally male interests as part of his feminist values, a man who "has subjugated his masculinity in order to fulfill the needs of women .." and has a "passive and insipid image". Both the "new lad" and the "new man" were - it was always implicitly assumed - heterosexual and cisgender. Many feminists were robust in their criticism of lad culture. Naomi Wolf stated: "the stereotypes for men attentive to feminism were two: Eunuch, or Beast", in the New Statesman, Kira Cochrane argued that "it's a dark world that Loaded and the lad culture has bequeathed us". Joanne Knowles of Liverpool John Moores University wrote that the "lad" displays "a pre-feminist and racist attitude to women as both sex objects and creatures from another species". An article in Frieze magazine proposed a psychoanalytic reading of the new lad phenomenon: Other writers saw less new about the lad. Nylund, in his 2007 "Beer, Babes and Balls" discussion of parallel developments in American popular culture, identifies "a return to hegemonic masculine values of male homosociality". Other writers observed that social constraints simply meant that "it is easier to be a lad rather than a new man in most workplaces". Meanwhile, the lad could be seen as the ongoing reaction to a far older perceived threat from women to men's freedom, one that predated feminism: the lad image was "a refuge from the constraints and demands of marriage and nuclear family". 
Social studies A study by Gabrielle Ivinson of Cardiff University and Patricia Murphy of the Open University identified lad culture as a source of behavioural confusion, and an investigation by Adrienne Katz linked it to suicide and depression. A study of the architecture profession found that lad culture had a negative impact on women completing their professional education. Commentator Helen Wilkinson believes that lad culture has affected politics and decreased the ability of women to participate. The UK's largest student union warned in a 2015 study that universities were failing to address the issue of lad culture, with almost half (49%) of all universities having no policy against discrimination due to sexuality, or anti-sexual harassment policies. Related terms and uses The word "ladette" was coined to describe young women who take part in laddish behaviour. Ladettes are defined by the Concise Oxford Dictionary as: "Young women who behave in a boisterously assertive or crude manner and engage in heavy drinking sessions." The term is no longer widely used. The term "lad" is also used in Australian youth culture to refer to the Eshay subculture which is more similar to the chav or football casual subcultures, rather than the middle class student subculture the term refers to in the United Kingdom. Australian lads wear a distinctive dress code, consisting of running caps and shoes combined with striped polo shirts and sports shorts. They frequently use pig latin phrases in conversation, for example "Ad-lay" to refer to a fellow "Lad". Lad-rap is a growing underground hip hop scene in Australia. See also References Counterculture of the 1990s Counterculture of the 2000s 1990s in the United Kingdom 1990s in the Republic of Ireland 2000s in the United Kingdom 2000s in the Republic of Ireland Adolescence Anti-intellectualism Drinking culture Masculinity Interpersonal relationships Men's culture Men's movement Misogyny Postmodernism Slang terms for men Subcultures Youth culture in the United Kingdom Middle class culture Antifeminism 1990s neologisms
Lad culture
Biology
1,898
3,298,854
https://en.wikipedia.org/wiki/Graph%20factorization
In graph theory, a factor of a graph G is a spanning subgraph, i.e., a subgraph that has the same vertex set as G. A k-factor of a graph is a spanning k-regular subgraph, and a k-factorization partitions the edges of the graph into disjoint k-factors. A graph G is said to be k-factorable if it admits a k-factorization. In particular, a 1-factor is a perfect matching, and a 1-factorization of a k-regular graph is a proper edge coloring with k colors. A 2-factor is a collection of cycles that spans all vertices of the graph. 1-factorization If a graph is 1-factorable then it has to be a regular graph. However, not all regular graphs are 1-factorable. A k-regular graph is 1-factorable if it has chromatic index k; examples of such graphs include: Any regular bipartite graph. Hall's marriage theorem can be used to show that a k-regular bipartite graph contains a perfect matching. One can then remove the perfect matching to obtain a (k − 1)-regular bipartite graph, and apply the same reasoning repeatedly. Any complete graph with an even number of nodes (see below). However, there are also k-regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include: Any regular graph with an odd number of nodes. The Petersen graph. Complete graphs A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament. The 1-factorization of complete graphs is a special case of Baranyai's theorem concerning the 1-factorization of complete hypergraphs. One method for constructing a 1-factorization of a complete graph on an even number of vertices involves placing all but one of the vertices in a regular polygon, with the remaining vertex at the center. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on lines perpendicular to e. The 1-factors that can be constructed in this way form a 1-factorization of the graph. The number of distinct 1-factorizations of K2, K4, K6, K8, ... is 1, 1, 6, 6240, 1225566720, 252282619805368320, 98758655816833727741338583040, ... (). 1-factorization conjecture Let G be a k-regular graph with 2n nodes. If k is sufficiently large, it is known that G has to be 1-factorable: If k = 2n − 1, then G is the complete graph K2n, and hence 1-factorable (see above). If k = 2n − 2, then G can be constructed by removing a perfect matching from K2n. Again, G is 1-factorable. show that if k ≥ 12n/7, then G is 1-factorable. The 1-factorization conjecture is a long-standing conjecture that states that k ≈ n is sufficient. In precise terms, the conjecture is: If n is odd and k ≥ n, then G is 1-factorable. If n is even and k ≥ n − 1 then G is 1-factorable. The overfull conjecture implies the 1-factorization conjecture. Perfect 1-factorization A perfect pair from a 1-factorization is a pair of 1-factors whose union induces a Hamiltonian cycle. A perfect 1-factorization (P1F) of a graph is a 1-factorization having the property that every pair of 1-factors is a perfect pair. A perfect 1-factorization should not be confused with a perfect matching (also called a 1-factor). In 1964, Anton Kotzig conjectured that every complete graph K2n where n ≥ 2 has a perfect 1-factorization. 
So far, it is known that the following graphs have a perfect 1-factorization: the infinite family of complete graphs K2p where p is an odd prime (by Anderson and also Nakamura, independently), the infinite family of complete graphs Kp+1 where p is an odd prime, and sporadic additional results, including K2n where 2n ∈ {16, 28, 36, 40, 50, 126, 170, 244, 344, 730, 1332, 1370, 1850, 2198, 3126, 6860, 12168, 16808, 29792}. Newer results have since been collected in the literature. If the complete graph Kn+1 has a perfect 1-factorization, then the complete bipartite graph Kn,n also has a perfect 1-factorization. 2-factorization If a graph is 2-factorable, then it has to be 2k-regular for some integer k. Julius Petersen showed in 1891 that this necessary condition is also sufficient: any 2k-regular graph is 2-factorable. If a connected graph is 2k-regular and has an even number of edges, it may also be k-factored, by choosing each of the two factors to be an alternating subset of the edges of an Euler tour. This applies only to connected graphs; disconnected counterexamples include disjoint unions of odd cycles, or of copies of K2k+1. The Oberwolfach problem concerns the existence of 2-factorizations of complete graphs into isomorphic subgraphs. It asks for which subgraphs this is possible. This is known when the subgraph is connected (in which case it is a Hamiltonian cycle and this special case is the problem of Hamiltonian decomposition) but the general case remains open. References Bibliography Section 5.1: "Matchings". Chapter 2: "Matching, covering and packing" (electronic edition). Chapter 9: "Factorization". Further reading Graph theory objects Factorization
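As a supplement to the polygon-and-centre construction described in the section on complete graphs above, here is a minimal Python sketch of that round-robin 1-factorization (the vertex labelling and function name are illustrative choices):

```python
def one_factorization(order: int):
    """1-factorization of the complete graph on `order` vertices (order even).

    Vertices 0..order-2 sit on a regular polygon and vertex order-1 is the
    centre; each 1-factor consists of one centre edge plus the chords
    perpendicular to it, as described in the article.
    """
    assert order % 2 == 0 and order >= 2
    m = order - 1                       # number of polygon vertices
    factors = []
    for r in range(m):                  # one 1-factor per polygon vertex
        factor = [(r, m)]               # edge from the centre to vertex r
        for k in range(1, m // 2 + 1):
            factor.append(((r + k) % m, (r - k) % m))  # perpendicular chords
        factors.append(factor)
    return factors

# K_6: 5 factors of 3 edges each, together covering all 15 edges exactly once.
fs = one_factorization(6)
edges = {frozenset(e) for f in fs for e in f}
assert len(fs) == 5 and all(len(f) == 3 for f in fs) and len(edges) == 15
```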
Graph factorization
Mathematics
1,284
22,658,615
https://en.wikipedia.org/wiki/Perfect%20ring
In the area of abstract algebra known as ring theory, a left perfect ring is a type of ring over which all left modules have projective covers. The right case is defined by analogy, and the condition is not left-right symmetric; that is, there exist rings which are perfect on one side but not the other. Perfect rings were introduced by Hyman Bass in 1960. A semiperfect ring is a ring over which every finitely generated left module has a projective cover. This property is left-right symmetric. Perfect ring Definitions The following equivalent definitions of a left perfect ring R are found in Anderson and Fuller: Every left R-module has a projective cover. R/J(R) is semisimple and J(R) is left T-nilpotent (that is, for every infinite sequence of elements of J(R) there is an n such that the product of the first n terms is zero), where J(R) is the Jacobson radical of R. (Bass' Theorem P) R satisfies the descending chain condition on principal right ideals. (There is no mistake; this condition on right principal ideals is equivalent to the ring being left perfect.) Every flat left R-module is projective. R/J(R) is semisimple and every non-zero left R-module contains a maximal submodule. R contains no infinite orthogonal set of idempotents, and every non-zero right R-module contains a minimal submodule. Examples Right or left Artinian rings, and semiprimary rings, are known to be right-and-left perfect. The following is an example (due to Bass) of a local ring which is right but not left perfect. Let F be a field, and consider a certain ring of infinite matrices over F. Take the set of infinite matrices with entries indexed by pairs of natural numbers, and which have only finitely many nonzero entries, all of them above the diagonal, and denote this set by J. Also take the matrix I with all 1's on the diagonal, and form the set R of all matrices of the form f·I + j with f in F and j in J. It can be shown that R is a ring with identity, whose Jacobson radical is J. Furthermore R/J is a field, so that R is local, and R is right but not left perfect. Properties For a left perfect ring R: From the equivalences above, every left R-module has a maximal submodule and a projective cover, and the flat left R-modules coincide with the projective left modules. An analogue of Baer's criterion holds for projective modules. Semiperfect ring Definition Let R be a ring. Then R is semiperfect if any of the following equivalent conditions hold: R/J(R) is semisimple and idempotents lift modulo J(R), where J(R) is the Jacobson radical of R. R has a complete orthogonal set e1, ..., en of idempotents with each eiRei a local ring. Every simple left (right) R-module has a projective cover. Every finitely generated left (right) R-module has a projective cover. The category of finitely generated projective R-modules is Krull-Schmidt. Examples Examples of semiperfect rings include: Left (right) perfect rings. Local rings (see Kaplansky's theorem on projective modules). Left (right) Artinian rings. Finite-dimensional k-algebras. Properties Since a ring R is semiperfect if and only if every simple left R-module has a projective cover, every ring Morita equivalent to a semiperfect ring is also semiperfect. Citations References Ring theory
Perfect ring
Mathematics
746
55,764,776
https://en.wikipedia.org/wiki/Sweat%20allergy
A sweat allergy is the exacerbation of atopic dermatitis associated with an elevated body temperature and the resulting increase in the production of sweat. It appears as small reddish welts that become visible in response to increased temperature and the resulting production of sweat. It can affect all ages. Sweating can trigger intense itching or cholinergic urticaria. The protein MGL_1304, secreted by mycobiota (fungi) present on the skin such as Malassezia globosa, acts as an antigen, provoking the release of histamine. People can be desensitized using purified samples of their own sweat that contain small amounts of the allergen. The allergy is not due to the sweat itself but instead to an allergy-producing protein secreted by microorganisms found on the skin. Cholinergic urticaria (CU) is one of the physical urticarias (hives), provoked during sweating events such as exercise, bathing, staying in a heated environment, or emotional stress. The hives produced are typically smaller than classic hives and are generally shorter-lasting. Multiple subtypes have been elucidated, each of which requires distinct treatment. Tannic acid has been found to suppress the allergic response, as has showering. See also Miliaria Exercise-induced anaphylaxis Idiopathic pure sudomotor failure Hypohidrosis Fabry disease Allergy Food allergy List of allergens Tree nut allergy Cholinergic urticaria References Allergology Dermatitis Immunology
Sweat allergy
Biology
328
5,745,387
https://en.wikipedia.org/wiki/Subsurface%20utilities
Subsurface utilities are the utility networks generally laid beneath the ground surface. These utilities include pipeline networks for water supply, sewage disposal, and petrochemical liquid or gas transmission, as well as cable networks for power transmission, telecommunications, and other data or signal transmission. In North America alone, there are an estimated 35 million miles of subsurface infrastructure that deliver critical services to homes and businesses. The field of engineering dealing with locating and mapping subsurface utilities is termed Subsurface Utility Engineering (SUE). References Building engineering
Subsurface utilities
Engineering
113
32,703,814
https://en.wikipedia.org/wiki/Chirality
Chirality () is a property of asymmetry important in several branches of science. The word chirality is derived from the Greek (kheir), "hand", a familiar chiral object. An object or a system is chiral if it is distinguishable from its mirror image; that is, it cannot be superposed (not to be confused with superimposed) onto it. Conversely, a mirror image of an achiral object, such as a sphere, cannot be distinguished from the object. A chiral object and its mirror image are called enantiomorphs (Greek, "opposite forms") or, when referring to molecules, enantiomers. A non-chiral object is called achiral (sometimes also amphichiral) and can be superposed on its mirror image. The term was first used by Lord Kelvin in 1893 in the second Robert Boyle Lecture at the Oxford University Junior Scientific Club, which was published in 1894. Human hands are perhaps the most recognized example of chirality. The left hand is a non-superposable mirror image of the right hand; no matter how the two hands are oriented, it is impossible for all the major features of both hands to coincide across all axes. This difference in symmetry becomes obvious if someone attempts to shake the right hand of a person using their left hand, or if a left-handed glove is placed on a right hand. In mathematics, chirality is the property of a figure that is not identical to its mirror image. Mathematics In mathematics, a figure is chiral (and said to have chirality) if it cannot be mapped to its mirror image by rotations and translations alone. For example, a right shoe is different from a left shoe, and clockwise is different from anticlockwise. A chiral object and its mirror image are said to be enantiomorphs. The word enantiomorph stems from the Greek enantios 'opposite' + morphe 'form'. A non-chiral figure is called achiral or amphichiral. The helix (and by extension a spun string, a screw, a propeller, etc.) and Möbius strip are chiral two-dimensional objects in three-dimensional ambient space. The J, L, S and Z-shaped tetrominoes of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves, glasses (sometimes), and shoes. A similar notion of chirality is considered in knot theory, as explained below. Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness, according to the right-hand rule. Geometry In geometry, a figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry. In two dimensions, every figure that possesses an axis of symmetry is achiral, and it can be shown that every bounded achiral figure must have an axis of symmetry. In three dimensions, every figure that possesses a plane of symmetry or a center of symmetry is achiral. There are, however, achiral figures lacking both plane and center of symmetry. In terms of point groups, all chiral figures lack an improper axis of rotation (Sn). This means that they cannot contain a center of inversion (i) or a mirror plane (σ). Only figures with a point group designation of C1, Cn, Dn, T, O, or I can be chiral. Knot theory A knot is called achiral if it can be continuously deformed into its mirror image, otherwise it is called chiral. For example, the unknot and the figure-eight knot are achiral, whereas the trefoil knot is chiral.
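As an illustration of the geometric definition above, the following minimal Python sketch tests whether a finite planar point set can be superposed on its mirror image using rotations and translations alone; the helper names and the two sample figures are invented for the example.

```python
import numpy as np

def is_achiral_2d(points, tol=1e-9):
    """True if the planar point set can be superposed on its mirror image
    using only rotations and translations (i.e. the figure is achiral)."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)               # translate the centroid to the origin
    mir = pts * np.array([-1.0, 1.0])          # mirror image: reflect across the y-axis

    def same_set(a, b):
        # every point of a must coincide with some point of b, and vice versa
        return all(np.min(np.linalg.norm(b - p, axis=1)) < tol for p in a) and \
               all(np.min(np.linalg.norm(a - q, axis=1)) < tol for q in b)

    # any superposing rotation must send a chosen mirrored point onto some
    # original point at the same distance from the centroid; try all candidates
    ref = mir[np.argmax(np.linalg.norm(mir, axis=1))]
    for target in pts:
        if abs(np.linalg.norm(ref) - np.linalg.norm(target)) > tol:
            continue
        angle = np.arctan2(target[1], target[0]) - np.arctan2(ref[1], ref[0])
        c, s = np.cos(angle), np.sin(angle)
        rotated = mir @ np.array([[c, s], [-s, c]])   # rotate the mirror image by `angle`
        if same_set(rotated, pts):
            return True
    return False

square = [(0, 0), (1, 0), (1, 1), (0, 1)]             # mirror-symmetric, hence achiral
scalene = [(0, 0), (3, 0), (0.5, 1.5), (2.0, 0.4)]    # generic asymmetric quadrilateral, chiral
print(is_achiral_2d(square), is_achiral_2d(scalene))  # True False
```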
Physics In physics, chirality may be found in the spin of a particle, where the handedness of the object is determined by the direction in which the particle spins. Not to be confused with helicity, which is the projection of the spin along the linear momentum of a subatomic particle, chirality is an intrinsic quantum mechanical property, like spin. Although both chirality and helicity can have left-handed or right-handed properties, only in the massless case are they identical. In particular, for a massless particle the helicity is the same as the chirality, while for an antiparticle they have opposite sign. The handedness in both chirality and helicity relates to the rotation of a particle while it proceeds in linear motion with reference to the human hands. The thumb of the hand points towards the direction of linear motion whilst the fingers curl into the palm, representing the direction of rotation of the particle (i.e. clockwise and counterclockwise). Depending on the linear and rotational motion, the particle can either be defined by left-handedness or right-handedness. A symmetry transformation between the two is called parity. Invariance under parity by a Dirac fermion is called chiral symmetry. Electromagnetism Electromagnetic waves can have handedness associated with their polarization. Polarization of an electromagnetic wave is the property that describes the orientation, i.e., the time-varying direction and amplitude, of the electric field vector. For example, the electric field vectors of left-handed or right-handed circularly polarized waves form helices of opposite handedness in space. Circularly polarized waves of opposite handedness propagate through chiral media at different speeds (circular birefringence) and with different losses (circular dichroism). Both phenomena are jointly known as optical activity. Circular birefringence causes rotation of the polarization state of electromagnetic waves in chiral media and can cause a negative index of refraction for waves of one handedness when the effect is sufficiently large. While optical activity occurs in structures that are chiral in three dimensions (such as helices), the concept of chirality can also be applied in two dimensions. 2D-chiral patterns, such as flat spirals, cannot be superposed with their mirror image by translation or rotation in two-dimensional space (a plane). 2D chirality is associated with directionally asymmetric transmission (reflection and absorption) of circularly polarized waves. 2D-chiral materials, which are also anisotropic and lossy, exhibit different total transmission (reflection and absorption) levels for the same circularly polarized wave incident on their front and back. The asymmetric transmission phenomenon arises from different, e.g. left-to-right, circular polarization conversion efficiencies for opposite propagation directions of the incident wave and therefore the effect is referred to as circular conversion dichroism. Just as the twist of a 2D-chiral pattern appears reversed for opposite directions of observation, 2D-chiral materials have interchanged properties for left-handed and right-handed circularly polarized waves that are incident on their front and back. In particular, left-handed and right-handed circularly polarized waves experience opposite directional transmission (reflection and absorption) asymmetries.
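The link between circular birefringence and polarization rotation can be made concrete with the standard relation θ = πL(nL − nR)/λ for a path length L and vacuum wavelength λ; the following short sketch assumes this relation, and the numerical values are purely illustrative.

```python
import math

def optical_rotation_deg(n_left, n_right, path_length_m, wavelength_m):
    """Rotation (degrees) of the plane of linear polarization produced by
    circular birefringence: theta = pi * L * (n_L - n_R) / lambda."""
    theta_rad = math.pi * path_length_m * (n_left - n_right) / wavelength_m
    return math.degrees(theta_rad)

# Illustrative (made-up) numbers: a circular birefringence of 1e-6 over 1 mm
# of material at a 633 nm wavelength gives a rotation of roughly 0.28 degrees.
print(optical_rotation_deg(1.500001, 1.500000, 1e-3, 633e-9))
```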
While optical activity is associated with 3d chirality and circular conversion is associated with 2d chirality, both effects have also been observed in structures that are not chiral by themselves. For the observation of these chiral electromagnetic effects, chirality does not have to be an intrinsic property of the material that interacts with the electromagnetic wave. Instead, both effects can also occur when the propagation direction of the electromagnetic wave together with the structure of an (achiral) material form a chiral experimental arrangement. This case, where the mutual arrangement of achiral components forms a chiral (experimental) arrangement, is known as extrinsic chirality. Chiral mirrors are a class of metamaterials that reflect circularly polarized light of a certain helicity in a handedness-preserving manner, while absorbing circular polarization of the opposite handedness. However, most absorbing chiral mirrors operate only in a narrow frequency band, as limited by the causality principle. Employing a different design methodology that allows undesired waves to pass through instead of absorbing the undesired waveform, chiral mirrors are able to show good broadband performance. Chemistry A chiral molecule is a type of molecule that has a non-superposable mirror image. The feature that is most often the cause of chirality in molecules is the presence of an asymmetric carbon atom. The term "chiral" in general is used to describe an object that is non-superposable on its mirror image. In chemistry, chirality usually refers to molecules. Two mirror images of a chiral molecule are called enantiomers or optical isomers. Pairs of enantiomers are often designated as "right-handed", "left-handed" or, if they have no bias, "achiral". As polarized light passes through a chiral molecule, the plane of polarization, when viewed along the axis toward the source, will be rotated clockwise (to the right) or anticlockwise (to the left). A right-handed rotation is dextrorotatory (d); rotation to the left is levorotatory (l). The d- and l-isomers of a compound are mirror images of one another and are called enantiomers. An equimolar mixture of the two optical isomers, which is called a racemic mixture, will produce no net rotation of polarized light as it passes through. Left-handed molecules have l- prefixed to their names; d- is prefixed to right-handed molecules. However, this d- and l- notation for distinguishing enantiomers does not say anything about the actual spatial arrangement of the ligands/substituents around the stereogenic center, which is defined as configuration. Another nomenclature system employed to specify configuration is the Fischer convention. This is also referred to as the D- and L-system. Here the relative configuration is assigned with reference to D-(+)-glyceraldehyde and L-(−)-glyceraldehyde, which are taken as standards. The Fischer convention is widely used in sugar chemistry and for α-amino acids. Due to its drawbacks, the Fischer convention has been almost entirely replaced by the Cahn-Ingold-Prelog convention, also known as the sequence rule or R and S nomenclature. This was further extended to assign configuration to cis-trans isomers with the E-Z notation. Molecular chirality is of interest because of its application to stereochemistry in inorganic chemistry, organic chemistry, physical chemistry, biochemistry, and supramolecular chemistry.
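The statement that a racemic mixture produces no net rotation can be expressed quantitatively: the observed rotation scales with the enantiomeric excess of the sample. The sketch below assumes the usual polarimetry convention, observed rotation = specific rotation times path length times concentration, scaled by the enantiomeric excess; all numerical values are invented for illustration.

```python
def observed_rotation(specific_rotation, path_dm, conc_g_per_ml, ee):
    """Observed optical rotation (degrees) of a mixture of enantiomers.

    specific_rotation : [alpha] of the pure (+)-enantiomer
    path_dm           : path length of the polarimeter cell, in decimetres
    conc_g_per_ml     : total concentration of both enantiomers, in g/mL
    ee                : enantiomeric excess, -1.0 (pure (-)) to +1.0 (pure (+))
    """
    return specific_rotation * path_dm * conc_g_per_ml * ee

# Hypothetical compound with [alpha] = +62.0 for the pure (+)-form,
# measured in a 1 dm cell at 0.10 g/mL total concentration.
print(observed_rotation(62.0, 1.0, 0.10, 1.0))   # pure (+)-enantiomer: +6.2 degrees
print(observed_rotation(62.0, 1.0, 0.10, 0.0))   # racemic mixture: 0.0 degrees
print(observed_rotation(62.0, 1.0, 0.10, -0.5))  # excess of the (-)-form: -3.1 degrees
```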
More recent developments in chiral chemistry include the development of chiral inorganic nanoparticles that may have a tetrahedral geometry similar to that of the chiral centers associated with sp3 carbon atoms traditionally associated with chiral compounds, but at larger scale. Helical and other symmetries of chiral nanomaterials were also obtained. Biology All of the known life-forms show specific chiral properties in chemical structures as well as macroscopic anatomy, development and behavior. In any specific organism or evolutionarily related set thereof, individual compounds, organs, or behavior are found in the same single enantiomorphic form. Deviation (having the opposite form) can be found in a small number of chemical compounds, or in a particular organ or behavior, but such variation strictly depends upon the genetic makeup of the organism. At the chemical level (molecular scale), biological systems show extreme stereospecificity in synthesis, uptake, sensing, and metabolic processing. A living system usually deals with two enantiomers of the same compound in drastically different ways. In biology, homochirality is a common property of amino acids and carbohydrates. The chiral protein-making amino acids, which are translated through the ribosome from genetic coding, occur in the L form. However, D-amino acids are also found in nature. The monosaccharides (carbohydrate-units) are commonly found in the D-configuration. The DNA double helix is chiral (as any kind of helix is chiral), and the B-form of DNA shows a right-handed turn. Sometimes, when two enantiomers of a compound are found in organisms, they significantly differ in their taste, smell and other biological actions. For example, (+)-carvone is responsible for the smell of caraway seed oil, whereas (–)-carvone is responsible for the smell of spearmint oil. However, it is a commonly held misconception that (+)-limonene is found in oranges (causing its smell), and (–)-limonene is found in lemons (causing its smell). In 2021, after rigorous experimentation, it was found that all citrus fruits contain only (+)-limonene and the odor difference is due to other contributing factors. Also, for artificial compounds, including medicines, the two enantiomers of a chiral drug sometimes show a remarkable difference in their biological effects. Darvon (dextropropoxyphene) is a painkiller, whereas its enantiomer, Novrad (levopropoxyphene), is an anti-cough agent. In the case of penicillamine, the (S)-isomer is used in the treatment of primary chronic arthritis, whereas the (R)-isomer has no therapeutic effect and is highly toxic. In some cases, the less therapeutically active enantiomer can cause side effects. For example, (S)-naproxen is an analgesic but the (R)-isomer causes renal problems. In situations where one enantiomer of a racemic drug is active and the other has undesirable or toxic effects, one may switch from the racemate to a single-enantiomer drug for better therapeutic value. Such a switch from a racemic drug to an enantiopure drug is called a chiral switch. The naturally occurring plant form of alpha-tocopherol (vitamin E) is RRR-α-tocopherol whereas the synthetic form (all-racemic vitamin E, or dl-tocopherol) is equal parts of the stereoisomers RRR, RRS, RSS, SSS, RSR, SRS, SRR, and SSR with progressively decreasing biological equivalency, so that 1.36 mg of dl-tocopherol is considered equivalent to 1.0 mg of d-tocopherol.
Macroscopic examples of chirality are found in the plant kingdom, the animal kingdom and all other groups of organisms. A simple example is the coiling direction of any climber plant, which can grow to form either a left- or right-handed helix. In anatomy, chirality is found in the imperfect mirror image symmetry of many kinds of animal bodies. Organisms such as gastropods exhibit chirality in their coiled shells, resulting in an asymmetrical appearance. Over 90% of gastropod species have dextral (right-handed) shells in their coiling, but a small minority of species and genera are virtually always sinistral (left-handed). A very few species (for example Amphidromus perversus) show an equal mixture of dextral and sinistral individuals. In humans, chirality (also referred to as handedness or laterality) is the unequal distribution of fine motor skill between the left and right hands. An individual who is more dexterous with the right hand is called right-handed, and one who is more skilled with the left is said to be left-handed. Chirality is also seen in the study of facial asymmetry and is known as aurofacial asymmetry. According to the Axial Twist theory, vertebrate animals develop with a left-handed chirality. Due to this, the brain is turned around and the heart and bowels are turned by 90°. In the case of the health condition situs inversus totalis, in which all the internal organs are flipped horizontally (i.e. the heart placed slightly to the right instead of the left), chirality poses some problems should the patient require a liver or heart transplant, as these organs are chiral, meaning that the blood vessels which supply them would need to be rearranged if a normal, non-situs-inversus (situs solitus) organ were required. In the monocot bloodroot family, the species of the genera Wachendorfia and Barberetta have only individuals in which the style points either to the right or to the left, with both morphs appearing within the same populations. This is thought to increase outcrossing and so boost genetic diversity, which in turn may help the species survive in a changing environment. Remarkably, the related genus Dilatris also has chirally dimorphic flowers, but here both morphs occur on the same plant. In flatfish, the summer flounder or fluke are left-eyed, while halibut are right-eyed. Resources and Research Journal Chirality, a scientific journal focused on chirality in chemistry and biochemistry in respect to biological, chemical, materials, pharmacological, spectroscopic and physical properties. Selected Books Creutz, Michael (2018). From quarks to pions: chiral symmetry and confinement. World Scientific. Wolf, Christian (2008). Dynamic stereochemistry of chiral compounds: principles and applications. Cambridge: RSC Publ. Beesley, Thomas E.; Scott, Raymond P. W. (1998). Chiral chromatography. Separation science series. Chichester: Wiley. See also Handedness Chiral drugs Chiral switch Chiral inversion Metachirality Orientation (space) Sinistral and dextral Tendril perversion Chirality (physics) References External links Asymmetry Biochemistry Stereochemistry Pharmacology Origin of life 1890s neologisms
Chirality
Physics,Chemistry,Biology
3,828
713,636
https://en.wikipedia.org/wiki/Pruno
Pruno, also known as prison hooch or prison wine, is a term used in the United States to describe an improvised alcoholic beverage. It is variously made from apples, oranges, fruit cocktail, fruit juices, hard candy, sugar, high fructose syrup, and possibly other ingredients, including crumbled bread. Bread is sometimes added in the mistaken belief that it supplies live yeast for the pruno to ferment. Pruno originated in US prisons, where it can be produced with the limited selection of equipment and ingredients available to inmates. It can be made using only a plastic bag, hot running water, and a towel or sock to conceal the pulp during fermentation. The end result has been described as a "vile-flavored wine cooler". Depending on the time spent fermenting (always balanced against the risk of discovery by officers), the sugar content, and the quality of the ingredients and preparation, pruno's alcohol content by volume can range from as low as 2% (equivalent to a very weak beer) to as high as 14% (equivalent to a strong wine). Description Typically, the fermenting mass of fruit—called the motor or kicker in US prison parlance—is retained from batch to batch to make the fermentation start faster. The more sugar is added, the greater the potential for a higher alcohol content—to a point. Beyond this point, the waste products of fermentation (mainly alcohol) cause the motor to die or go dormant as the yeasts' environment becomes too poisoned for them to continue fermenting. This also causes the taste of the end product to suffer. Ascorbic acid powder is sometimes used to stop the fermentation at a certain point, which, combined with the tartness of the added acid, somewhat enhances the taste by reducing the cloyingly sweet flavor associated with pruno. In 2004 and 2005, botulism outbreaks were reported among inmates in two California prisons; the Centers for Disease Control and Prevention suspects that potatoes used in making pruno were to blame in both cases. In 2012, similar botulism outbreaks caused by potato-based pruno were reported among inmates at prisons in Arizona and Utah. Inmates are not permitted to have alcoholic beverages, and correctional officers confiscate pruno whenever and wherever they find it. In an effort to eradicate pruno, some wardens have gone as far as banning all fresh fruit, fruit juices, and fruit-based food products from prison cafeterias. But even this is not always enough; there are pruno varieties made almost entirely from sauerkraut and orange juice. Food hoarding in the inmate cells in both prisons and jails allows the inmates to acquire ingredients and produce pruno. During jail and prison inmate cell searches, correctional officers remove excessive or unauthorized food items to halt the production of pruno. Pruno is hidden under bunks, inside toilets, inside walls, in trash cans, in the shower area, and anywhere else inmates feel it is safe to brew away from the prying eyes of correctional officers and jailers. Jarvis Jay Masters, a death row inmate at San Quentin, offers an oft-referenced recipe for pruno in his poem "Recipe for Prison Pruno", which won a PEN award in 1992. Another recipe for pruno can be found in Michael Finkel's Esquire article on Oregon death row inmate Christian Longo. In 2004, a pruno competition and judging was held at the American Homebrewers Association's National Homebrew Conference in Las Vegas.
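The quoted range of alcohol content can be related to sugar content by ordinary fermentation-yield arithmetic. The sketch below is an idealized upper-bound estimate using the stoichiometric yield of roughly 0.511 g ethanol per gram of glucose or fructose and an ethanol density of 0.789 g/mL; it is not a description of any actual batch, which would fall well short of these figures.

```python
def max_abv_from_sugar(sugar_g_per_l, efficiency=1.0):
    """Upper-bound estimate of alcohol by volume from fermentable sugar.

    Uses the stoichiometric yield of ethanol from glucose/fructose
    (~0.511 g ethanol per g sugar) and the density of ethanol (0.789 g/mL).
    Real-world yields are lower, so an efficiency factor < 1 can be supplied.
    """
    ethanol_g_per_l = 0.511 * sugar_g_per_l * efficiency
    ethanol_ml_per_l = ethanol_g_per_l / 0.789
    return ethanol_ml_per_l / 1000.0 * 100.0        # percent by volume

print(round(max_abv_from_sugar(50), 1))    # ~3.2 % ABV theoretical maximum from 50 g/L sugar
print(round(max_abv_from_sugar(220), 1))   # ~14.2 % ABV theoretical maximum from 220 g/L sugar
```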
See also Bum wine Changaa Chicha Drinking culture Jenkem Kilju Kvas Moonshine (hooch) Pájaro verde Poitín Bootlegging Tepache Tharra White Lightning References External links "Jailhouse Hooch: How to Get Liquored Up While Locked Down" from Modern Drunkard Magazine Blacktable.com —a complete pruno recipe, including detailed instructions and frequent disclaimers. "Steve Don't Eat It, Vol. 8: Prison Wine"—extensive, humorous account of pruno preparation and tasting, with photographs, from The Sneeze. "How to Make Pruno: 8 steps"—on wikiHow "Jailhouse Pruno -- Homemade Booze: It'll Kill You"—story about pruno-making methods, pruno stashing, and the cost of pruno in Sacramento, California's New Folsom prison circa 1995. Fermented drinks Prison-related crime Prison drinks
Pruno
Biology
926
2,120,575
https://en.wikipedia.org/wiki/Synaptonemal%20complex
The synaptonemal complex (SC) is a protein structure that forms between homologous chromosomes (two pairs of sister chromatids) during meiosis and is thought to mediate synapsis and recombination during prophase I of meiosis in eukaryotes. It is currently thought that the SC functions primarily as a scaffold to allow interacting chromatids to complete their crossover activities. Composition The synaptonemal complex is a tripartite structure consisting of two parallel lateral regions and a central element. This "tripartite structure" is seen during the pachytene stage of the first meiotic prophase, both in males and in females during gametogenesis. Prior to the pachytene stage, during leptonema, the lateral elements begin to form and they initiate and complete their pairing during the zygotene stage. After pachynema ends, the SC usually becomes disassembled and can no longer be identified. In humans, three specific components of the synaptonemal complex have been characterized: SC protein-1 (SYCP1), SC protein-2 (SYCP2), and SC protein-3 (SYCP3). The SYCP1 gene is on chromosome 1p13; the SYCP2 gene is on chromosome 20q13.33; and the gene for SYCP3 is on chromosome 12q. The synaptonemal complex was described by Montrose J. Moses in 1956 in primary spermatocytes of crayfish and by D. Fawcett in spermatocytes of pigeon, cat and man. As seen with the electron microscope, the synaptonemal complex is formed by two "lateral elements", mainly formed by SYCP3 and secondarily by SYCP2, a "central element" that contains at least two additional proteins and the amino terminal region of SYCP1, and a "central region" spanned between the two lateral elements, that contains the "transverse filaments" composed mainly by the protein SYCP1. The SCs can be seen with the light microscope using silver staining or with immunofluorescence techniques that label the proteins SYCP3 or SYCP2. Assembly and disassembly Formation of the SC usually reflects the pairing or "synapsis" of homologous chromosomes and may be used to probe the presence of pairing abnormalities in individuals carrying chromosomal abnormalities, either in number or in chromosomal structure. The sex chromosomes in male mammals show only "partial synapsis" as they usually form only a short SC in the XY pair. The SC shows very little structural variability among eukaryotic organisms despite some significant protein differences. In many organisms the SC carries one or several "recombination nodules" associated with its central space. These nodules are thought to correspond to mature genetic recombination events or "crossovers". In male mice, gamma irradiation increases meiotic crossovers in SCs. This indicates that exogenously caused DNA damage is likely repaired by crossover recombination in SCs. The finding of an interaction between an SC structural component [synaptonemal central element protein 2 (SYCE2)] and the recombinational repair protein RAD51 also suggests a role for the SC in DNA repair. During cell development, the synaptonemal complex is formed during zygotene and disappears during the late prophase of meiosis I. Cancer Although synaptonemal complex protein 2 (SYCP2) is a meiotic protein, it is aberrantly and commonly expressed in breast and ovarian cancers. SYCP2 protein expression in these cancers is associated with broad resistance to drugs that induce DNA damage, i.e. DNA damage response (DDR) drugs. SYCP2 is employed in the repair of DNA double-strand breaks by transcription-coupled homologous recombination.
SYCP2 appears to confer cancer cell resistance to therapeutic DNA damaging agents by stimulating R-loop mediated double strand break repair. Thus, inhibition of SYCP2 expression is being studied in efforts to improve therapy for breast and ovarian cancers. Necessity in eukaryotes It is now evident that the synaptonemal complex is not required for genetic recombination in some organisms. For instance, in protozoan ciliates such as Tetrahymena thermophila and Paramecium tetraurelia, genetic crossover does not appear to require synaptonemal complex formation. Research has shown not only that the SC forms after genetic recombination, but also that mutant yeast cells unable to assemble a synaptonemal complex can still engage in the exchange of genetic information. However, in other organisms, such as the nematode C. elegans, formation of chiasmata requires the formation of the synaptonemal complex. References External links Synaptonemal complex by 3D-Structured Illumination, photograph by Dr. Chung-Ju Rachel Wang, University of California Berkeley, Department of Molecular and Cell Biology, Berkeley, CA, USA, second place winner of the 2009 Olympus Bioscapes Digital Imaging Competition. Kounetsova A. et al., Meiosis in Mice without a Synaptonemal Complex, PLOS ONE (2011) Molecular genetics
Synaptonemal complex
Chemistry,Biology
1,098
77,793,120
https://en.wikipedia.org/wiki/Chiang%20Mai%20Design%20Week
Chiang Mai Design Week () is an annual event held to celebrate and promote design and creativity in Chiang Mai, Thailand. Supported by the Creative Economy Agency, it was Thailand's first design week. In 2017, UNESCO designated Chiang Mai a member of its Creative Cities Network in the category of Crafts and Folk Art. Overview The inaugural event was held 6–14 December 2014 by the Thailand Creative & Design Center. In 2021, the event was hosted under the theme "Co-Forward" during the COVID-19 pandemic. In 2023, the event was hosted under the theme "Transforming Local: Adapt / Enhance / Local / Grow". The 2024 event will be hosted under the theme "Scaling Local: Creativity, Technology, Sustainability". See also Chiang Mai Creative City Bangkok Design Week Pakk Taii Design Week References Festivals in Thailand Events in Thailand Design events Culture of Chiang Mai
Chiang Mai Design Week
Engineering
177
275,222
https://en.wikipedia.org/wiki/Telephone%20keypad
A telephone keypad is a keypad installed on a push-button telephone or similar telecommunication device for dialing a telephone number. It was standardized when the dual-tone multi-frequency signaling (DTMF) system was developed in the Bell System in the United States in the 1960s – this replaced rotary dialing, which had been developed for electromechanical telephone switching systems. Because of the abundance of rotary dial equipment still in use well into the 1990s, many telephone keypads were also designed to be backwards-compatible: as well as producing DTMF pulses, they could optionally be switched to produce loop-disconnect pulses electronically. The development of the modern telephone keypad is attributed to research in the 1950s by Richard Deininger under the directorship of John Karlin at the Human Factors Engineering Department of Bell Labs. The modern keypad is laid out in a rectangular array of twelve push buttons arranged as four rows of three keys each. For military applications, a fourth column of keys was added to the right for priority signaling in the Autovon system in the 1960s. Initially, between 1963 and 1968, the keypads for civilian subscriber service omitted the lower left and lower right keys. These two keys are commonly labelled star (*) and number sign/hash (#), respectively, and produce the signals associated with those symbols. These keys were added to provide signals for anticipated data entry purposes in business applications, but found use in Custom Calling Services (CLASS) features installed in electronic switching systems. Layout The layout of the digit keys is different from that commonly appearing on calculators and numeric keypads. This layout was chosen after extensive human factors testing at Bell Labs. At the time (late 1950s), mechanical calculators were not widespread, and few people had experience with them. Indeed, calculators were only just starting to settle on a common layout; a 1955 paper states "Of the several calculating devices we have been able to look at ... Two other calculators have keysets resembling [the layout that would become the most common layout] ... . Most other calculators have their keys reading upward in vertical rows of ten." Meanwhile, a 1960 paper – just five years later – refers to today's common calculator layout as "the arrangement frequently found in ten-key adding machines". In any case, Bell Labs' testing found that the telephone layout, with 1, 2, and 3 on the top row, was slightly faster in use than the calculator layout, with them in the bottom row. The key labeled * was officially named the "star" key. The key labeled # is officially called the "number sign" key, but other names such as "pound", "hash", "hex", "octothorpe", "gate", "lattice", and "square" are common, depending on national or personal preference. The Greek symbols alpha and omega had been planned originally. These keys can be used for special functions. For example, in the UK, users can order a 7:30am alarm call from a BT telephone exchange by dialing: ✻55✻0730#. In the Americas and a number of other countries, most dials and, later, keypads also bear letters, assigned to the digits 2 through 9. In the UK, dials and keypads also bore letters, though these were later dropped; the arrangement differed from the American system. Putting the letter O on the zero makes sense, as in British speech, "oh" is often said rather than "nought" or "zero"; Q is visually similar to O, and therefore the two might be confused. In this way, two possible mistakes were avoided.
These letter assignments have been used for multiple purposes. Originally, they referred to the leading letters of telephone exchange names. In the mid-20th century United States, before the switch to All-Number Calling, telephone numbers had seven digits, including a two-digit prefix which was expressed in letters rather than digits, e.g., KL5-5445. The UK telephone numbering system used a similar two-letter code after an initial zero (the zero prefix selected trunk dialling) to form the first part of the subscriber trunk dialling code for a region; the letters were followed by one or more digits. For example, Aylesbury was assigned 0AY6, which translated to 0296. The letters have also been used, mainly in the United States, as a technique for remembering telephone numbers easily. For example, an interior decorator might license the telephone number 1-800-724-6837, but advertise it as the more memorable phoneword "1-800-PAINTER". Sometimes businesses advertise a number with a mnemonic word having more letters than there are digits in the phone number. Usually, this means that the caller just stops dialing at seven digits after the area code or that the extra digits are ignored by the telephone exchange. In early cell phones, or feature phones, the letters on the keys were used for text entry tasks such as text messaging, entering names in the phone book, and browsing the web. To compensate for the smaller number of keys, phones used multi-tap and later predictive text processing to speed up the process. Touchscreen phones have made these input methods obsolete, as the screens are typically large enough to show as many virtual buttons as necessary for a full keyboard. Key tones Pressing a single key of a traditional analog telephone keypad produces a telephony signaling event to the remote switching system. For touchtone service, the signal is a dual-tone multi-frequency signaling tone consisting of two simultaneous pure tone sinusoidal frequencies. The row in which the key appears determines the low-frequency component, and the column determines the high-frequency component. For example, pressing key 1 results in a signal composed of tones with frequencies 697 hertz (Hz) and 1209 Hz. Letter mapping In the course of telephone history, dials as well as keypads have been associated with various mappings of letters and characters to numbers. The system used in Denmark was different from that used in the UK, which, in turn, was different from the US and Australia. The use of alphanumeric codes for area codes was abandoned in Europe when international direct dialing was introduced in the 1960s, because, for example, dialing VIC 8900 on a Danish telephone would result in a different number to dialling it on a British telephone. At the same time, letters were no longer placed on the dials/keypads of new telephones. Letters did not reappear on phones in Europe until the introduction of mobile phones, and the layout followed the new international standard ITU E.161/ISO 9995-8. The ITU established an international standard (ITU E.161) in the mid-1990s, which recommended that this layout be used on any new devices. There is a standard, ETSI ES 202 130, that covers European languages and other languages used in Europe, published by the independent ETSI organisation in 2003 and updated in 2007. Documentation describing some principles of the standard is available.
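Both the DTMF frequency plan quoted above and the ITU E.161 letter mapping used for phonewords are small lookup tables; the following minimal Python sketch (helper names are illustrative, not from any standard library) reproduces them.

```python
# Standard DTMF row and column frequencies (Hz) for the 4x4 keypad.
DTMF_ROWS = (697, 770, 852, 941)
DTMF_COLS = (1209, 1336, 1477, 1633)
KEYPAD = ("123A", "456B", "789C", "*0#D")   # fourth column = military priority keys

def dtmf_pair(key):
    """Return the (low, high) frequency pair for a keypad key, e.g. '1' -> (697, 1209)."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return DTMF_ROWS[r], DTMF_COLS[row.index(key)]
    raise ValueError(f"not a DTMF key: {key!r}")

# ITU E.161 letter-to-digit mapping, as used for phonewords.
E161 = {c: d for d, letters in
        {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
         "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}.items()
        for c in letters}

def phoneword_to_digits(number):
    """Translate a vanity number such as '1-800-PAINTER' into plain digits."""
    return "".join(E161.get(ch, ch) for ch in number.upper() if ch.isalnum())

print(dtmf_pair("1"))                         # (697, 1209), as quoted above
print(phoneword_to_digits("1-800-PAINTER"))   # 18007246837
```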
Early smartphones such as the Palm Treo, HTC Wizard and BlackBerry had full alphanumeric keyboards instead of the traditional telephone keypads, and the user had to execute additional steps to dial a number containing convenience letters. On certain BlackBerry devices, a user can press the key followed by the desired letter, and the device will generate the appropriate DTMF tone. Later smartphones moved to on-screen virtual keyboards and keypads. The latter typically include the ITU standard letters next to each number (and many Android phones use the 1 key to access voicemail and the zero key to type a "+"). See also E.161 Phoneword Rotary dial T9 References Telephony equipment Telephone numbers
Telephone keypad
Mathematics
1,612
52,720,345
https://en.wikipedia.org/wiki/Miproxifene
Miproxifene (INN) (former developmental code name DP-TAT-59) is a nonsteroidal selective estrogen receptor modulator (SERM) of the triphenylethylene group that was never marketed. It is a derivative of afimoxifene (4-hydroxytamoxifen) in which an additional 4-isopropyl group is present in the β-phenyl ring. The drug has been found to be 3- to 10-fold more potent than tamoxifen in inhibiting breast cancer cell growth in in vitro models. Miproxifene is the active metabolite of miproxifene phosphate (TAT-59), a phosphate ester and prodrug of miproxifene that was developed to improve its water solubility. Miproxifene phosphate was under development for the treatment of breast cancer and reached phase III clinical trials for this indication but development was discontinued. References 4-Hydroxyphenyl compounds Dimethylamino compounds Hormonal antineoplastic drugs Human drug metabolites Selective estrogen receptor modulators Triphenylethylenes Ethers Isopropyl compounds
Miproxifene
Chemistry
252
9,590,785
https://en.wikipedia.org/wiki/Feature-oriented%20positioning
Feature-oriented positioning (FOP) is a method of precise movement of the scanning microscope probe across the surface under investigation. With this method, surface features (objects) are used as reference points for microscope probe attachment. FOP is essentially a simplified variant of feature-oriented scanning (FOS). With FOP, no topographical image of the surface is acquired. Instead, the probe is moved from feature to feature, from a start point A on the surface (the neighborhood of the start feature) to a destination point B (the neighborhood of the destination feature), along some route that passes through intermediate features of the surface. The method may also be referred to by another name—object-oriented positioning (OOP). Two cases are distinguished: "blind" FOP, in which the coordinates of the features used for probe movement are unknown in advance, and FOP by an existing feature "map", in which the relative coordinates of all the features are known, for example because they were obtained during preliminary FOS. Probe movement by a navigation structure is a combination of these two methods. The FOP method may be used in bottom-up nanofabrication to implement high-precision movement of the nanolithograph/nanoassembler probe along the substrate surface. Moreover, once a movement along some route has been made, it may then be repeated exactly the required number of times. After moving to the specified position, an action on the surface or a manipulation of a surface object (nanoparticle, molecule, atom) is performed. All the operations are carried out in automatic mode. With multiprobe instruments, the FOP approach allows any number of specialized technological and/or analytical probes to be applied successively to a surface feature/object or to a specified point in the neighborhood of the feature/object. This opens the prospect of building complex nanofabrication processes consisting of a large number of technological, measuring, and checking operations. See also Feature-oriented scanning References External links Feature-oriented positioning, Research section, Lapshin's Personal Page on SPM & Nanotechnology Microscopes Nanotechnology Scanning probe microscopy
Feature-oriented positioning
Chemistry,Materials_science,Technology,Engineering
429
167,120
https://en.wikipedia.org/wiki/Gravel
Gravel () is a loose aggregation of rock fragments. Gravel occurs naturally on Earth as a result of sedimentary and erosive geological processes; it is also produced in large quantities commercially as crushed stone. Gravel is classified by particle size range and includes size classes from granule- to boulder-sized fragments. In the Udden-Wentworth scale gravel is categorized into granular gravel (2–4 mm) and pebble gravel (4–64 mm). ISO 14688 grades gravels as fine, medium, and coarse, with ranges of 2–6.3 mm for fine and 20–63 mm for coarse. One cubic metre of gravel typically weighs about , or one cubic yard weighs about . Gravel is an important commercial product, with a number of applications. Almost half of all gravel production is used as aggregate for concrete. Much of the rest is used for road construction, either in the road base or as the road surface (with or without asphalt or other binders). Naturally occurring porous gravel deposits have a high hydraulic conductivity, making them important aquifers. Definition and properties Colloquially, the term gravel is often used to describe a mixture of different size pieces of stone mixed with sand and possibly some clay. The American construction industry distinguishes between gravel (a natural material) and crushed stone (produced artificially by mechanical crushing of rock). The technical definition of gravel varies by region and by area of application. Many geologists define gravel simply as loose rounded rock particles over in diameter, without specifying an upper size limit. Gravel is sometimes distinguished from rubble, which is loose rock particles in the same size range but angular in shape. The Udden-Wentworth scale, widely used by geologists in the US, defines granular gravel as particles with a size from 2 to 4 mm and pebble gravel as particles with a size from 4 to 64 mm. This corresponds to all particles with sizes between coarse sand and cobbles. The U.S. Department of Agriculture and the Soil Science Society of America define gravel as particles from in size, while the German scale (Atterberg) defines gravel as particles from in size. The U.S. Army Corps of Engineers defines gravel as particles under in size that are retained by a number 4 mesh, which has a mesh spacing of . ISO 14688 for soil engineering grades gravels as fine, medium, and coarse, with ranges of 2–6.3 mm (fine), 6.3–20 mm (medium), and 20–63 mm (coarse). The bulk density of gravel varies from . Natural gravel has a high hydraulic conductivity, sometimes reaching above 1 cm/s. Origin Most gravel is derived from disintegration of bedrock as it weathers. Quartz is the most common mineral found in gravel, as it is hard, chemically inert, and lacks cleavage planes along which the rock easily splits. Most gravel particles consist of multiple mineral grains, since few rocks have mineral grains coarser than about in size. Exceptions include quartz veins, pegmatites, deep intrusions, and high-grade metamorphic rock. The rock fragments are rapidly rounded as they are transported by rivers, often within a few tens of kilometers of their source outcrops. Gravel is deposited as gravel blankets or bars in stream channels; in alluvial fans; in near-shore marine settings, where the gravel is supplied by streams or erosion along the coast; and in the deltas of swift-flowing streams. The upper Mississippi embayment contains extensive chert gravels thought to have their origin less than from the periphery of the embayment. It has been suggested that wind-formed (aeolian) gravel "megaripples" in Argentina have counterparts on the planet Mars.
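The size classes quoted above amount to simple range checks; the following short sketch classifies a particle diameter against the Udden-Wentworth and ISO 14688 gravel bands (the function and label names are illustrative only).

```python
def classify_gravel(size_mm, scale="udden-wentworth"):
    """Classify a particle by size according to the scales quoted above.

    'udden-wentworth': granular gravel 2-4 mm, pebble gravel 4-64 mm
    'iso14688'       : fine 2-6.3 mm, medium 6.3-20 mm, coarse 20-63 mm
    Sizes outside the gravel range return a 'not gravel' label.
    """
    scales = {
        "udden-wentworth": [(2, 4, "granular gravel"), (4, 64, "pebble gravel")],
        "iso14688": [(2, 6.3, "fine gravel"), (6.3, 20, "medium gravel"),
                     (20, 63, "coarse gravel")],
    }
    bands = scales[scale]
    if size_mm < bands[0][0]:
        return "not gravel (finer)"
    for lower, upper, name in bands:
        if lower <= size_mm < upper:
            return name
    return "not gravel (coarser)"

print(classify_gravel(3))                      # granular gravel
print(classify_gravel(30))                     # pebble gravel
print(classify_gravel(30, scale="iso14688"))   # coarse gravel
print(classify_gravel(1.0))                    # not gravel (finer)
```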
Production and uses Gravel is a major basic raw material in construction. Sand is not usually distinguished from gravel in official statistics, but crushed stone is treated as a separate category. In 2020, sand and gravel together made up 23% of all industrial mineral production in the U.S., with a total value of about $12.6 billion. Some 960 million tons of construction sand and gravel were produced. This greatly exceeds production of industrial sand and gravel (68 million tons), which is mostly sand rather than gravel. It is estimated that almost half of construction sand and gravel is used as aggregate for concrete. Other important uses include in road construction, as road base or in blacktop; as construction fill; and in myriad minor uses. Gravel is widely and plentifully distributed, mostly as river deposits, river flood plains, and glacial deposits, so that environmental considerations and quality dictate whether alternatives, such as crushed stone, are more economical. Crushed stone is already displacing natural gravel in the eastern United States, and recycled gravel is also becoming increasingly important. Etymology The word gravel comes from the Old French gravele or gravelle. Types Different varieties of gravel are distinguished by their composition, origin, and use cases. Types of gravel include: Bank gravel: naturally deposited gravel intermixed with sand or clay found in and next to rivers and streams. Also known as "bank run" or "river run". Bench gravel: a bed of gravel located on the side of a valley above the present stream bottom, indicating the former location of the stream bed when it was at a higher level. The term is most commonly used in Alaska and the Yukon Territory. Crushed stone: rock crushed and graded by screens and then mixed to a blend of stones and fines. It is widely used as a surfacing for roads and driveways, sometimes with tar applied over it. Crushed stone may be made from granite, limestone, dolomite, and other rocks. Also known as "crusher run", DGA (dense grade aggregate), QP (quarry process), and shoulder stone. Crushed stone is distinguished from gravel by the U.S. Geological Survey. Fine gravel: gravel consisting of particles with a diameter of Lag gravel: a surface accumulation of coarse gravel produced by the removal of finer particles. Pay gravel: also known as "pay dirt"; a nickname for gravel with a high concentration of gold and other precious metals. The metals are recovered through gold panning. Pea gravel: also known as "pea shingle", is clean gravel similar in size to garden peas. Used for concrete surfaces, walkways, driveways and as a substrate in home aquariums. Piedmont gravel: a coarse gravel carried down from high places by mountain streams and deposited on relatively flat ground, where the water runs more slowly. Plateau gravel: a layer of gravel on a plateau or other region above the height at which stream-terrace gravel is usually found. Shingle: coarse, loose, well-rounded, waterworn, specifically alluvial and beach, sediment that is largely composed of smooth and spheroidal or flattened pebbles, cobbles, and sometimes small boulders, generally measuring in diameter. Relationship to plant life In locales where gravelly soil is predominant, plant life is generally more sparse. This is due to the inferior ability of gravels to retain moisture, as well as the corresponding paucity of mineral nutrients, since finer soils that contain such minerals are present in smaller amounts.
In the geologic record Sediments containing over 30% gravel that become lithified into solid rock are termed conglomerate. Conglomerates are widely distributed in sedimentary rock of all ages, but usually as a minor component, making up less than 1% of all sedimentary rock. Alluvial fans likely contain the largest accumulations of gravel in the geologic record. These include conglomerates of the Triassic basins of eastern North America and the New Red Sandstone of south Devon. See also Construction aggregate Melon gravel Pebble Rock Shingle beach References External links British Geological Survey UKGravelBarriers: Understanding coastal protection by gravel barriers in a changing climate Aggregate (composite) Sedimentology Building stone Natural materials Pavements Gardening aids Stone (material) Soil-based building materials
Gravel
Physics
1,567
10,229,344
https://en.wikipedia.org/wiki/Glycerol%20phosphate%20shuttle
The glycerol-3-phosphate shuttle is a mechanism used in skeletal muscle and the brain that regenerates NAD+ from NADH, a by-product of glycolysis. NADH is a reducing equivalent that stores electrons generated in the cytoplasm during glycolysis. NADH must be transported into the mitochondria to enter the oxidative phosphorylation pathway. However, the inner mitochondrial membrane is impermeable to NADH and only contains a transport system for NAD+. Depending on the type of tissue, either the glycerol-3-phosphate shuttle pathway or the malate–aspartate shuttle pathway is used to transport electrons from cytoplasmic NADH into the mitochondria. The shuttle consists of two proteins acting in sequence. Cytoplasmic glycerol-3-phosphate dehydrogenase (cGPD) transfers an electron pair from NADH to dihydroxyacetone phosphate (DHAP), forming glycerol-3-phosphate (G3P) and regenerating the NAD+ needed to generate energy via glycolysis. Mitochondrial glycerol-3-phosphate dehydrogenase (mGPD) then catalyzes the oxidation of G3P by FAD, regenerating DHAP in the cytosol and forming FADH2 in the mitochondrial matrix. In mammals, its activity in transporting reducing equivalents across the mitochondrial membrane is secondary to the malate–aspartate shuttle. History The glycerol phosphate shuttle was first characterized as a major route of mitochondrial hydride transport in the flight muscles of blow flies. It was initially believed that the system would be inactive in mammals due to the predominance of lactate dehydrogenase activity over glycerol-3-phosphate dehydrogenase 1 (GPD1), until high GPD1 and GPD2 activity was demonstrated in mammalian brown adipose tissue and pancreatic ß-islets. Reaction In this shuttle, the enzyme called cytoplasmic glycerol-3-phosphate dehydrogenase 1 (GPD1 or cGPD) converts dihydroxyacetone phosphate to glycerol 3-phosphate by oxidizing one molecule of NADH to NAD+, as in the following reaction: DHAP + NADH + H+ → glycerol 3-phosphate + NAD+. Glycerol-3-phosphate is converted back to dihydroxyacetone phosphate by an inner membrane-bound mitochondrial glycerol-3-phosphate dehydrogenase 2 (GPD2 or mGPD), this time reducing one molecule of enzyme-bound flavin adenine dinucleotide (FAD) to FADH2. FADH2 then reduces coenzyme Q (ubiquinone to ubiquinol), and the electrons enter oxidative phosphorylation. This reaction is irreversible. These electrons bypass Complex I of the electron transport chain, making the glycerol-3-phosphate shuttle less energetically efficient than oxidation of NADH by Complex I. See also Malate-aspartate shuttle Mitochondrial shuttle References External links http://chemistry.elmhurst.edu/vchembook/601glycolysissum.html (describes the shuttle in the context of glycolysis) Biochemical reactions Cellular respiration
Glycerol phosphate shuttle
Chemistry,Biology
703
69,147,292
https://en.wikipedia.org/wiki/Tampico-Misantla%20Basin
The Tampico-Misantla Basin or TMB is a geological depression located in north-eastern Mexico. The area is well known for its oil and gas reserves and includes the Chicontepec Formation. The TMB is located in the coastal plain of the Gulf of Mexico, extending about 50 km to the east into shallow offshore waters. The basin is bordered to the west by the Sierra Madre Oriental, to the north by the Tamaulipas arch, and to the south by the Teziutlán Massif. In 2018, IHS Markit classified the Tampico-Misantla Basin as a global onshore Super Basin and compared the area with the Permian Basin. Oil Extraction Oil has been extracted from the Tampico-Misantla Basin since the 1920s, accounting for 7.4 billion BOE. Meanwhile, 5.2 billion BOE remains in discovered conventional fields of the basin. The geological area contains three mature source rocks: the Agua Nueva Formation, the Huayacocotla Formation, and the Pimienta Formation. Some of the most important oil fields in the zone are Chicontepec, Golden Lane, Remolino, San Andrés, Presidente Miguel Alemán, Coatzintla, Coralillo, M.A. Camacho, and Coapechaca. See also Permian Basin Super Basin Petroleum industry in Mexico References Petroleum industry Oil exploration Oil fields
Tampico-Misantla Basin
Chemistry
295
4,969,106
https://en.wikipedia.org/wiki/Catastrophic%20optical%20damage
Catastrophic optical damage (COD), or catastrophic optical mirror damage (COMD), is a failure mode of high-power semiconductor lasers. It occurs when the semiconductor junction is overloaded by exceeding its power density limit and absorbs too much of the produced light energy, leading to melting and recrystallization of the semiconductor material at the facets of the laser. This is often colloquially referred to as "blowing the diode". The affected area contains a large number of lattice defects, negatively affecting its performance. If the affected area is sufficiently large, it can be observed under an optical microscope as darkening of the laser facet, and/or as presence of cracks and grooves. The damage can occur within a single laser pulse, in less than a millisecond. The time to COD is inversely proportional to the power density. Catastrophic optical damage is one of the limiting factors in increasing performance of semiconductor lasers. It is the primary failure mode for AlGaInP/AlGaAs red lasers. Short-wavelength lasers are more susceptible to COD than long-wavelength ones. The typical values for COD in industrial products range between 12 and 20 MW/cm2. Causes and mechanisms At the edge of a diode laser, where light is emitted, a mirror is traditionally formed by cleaving the semiconductor wafer to form a specularly reflecting plane. This approach is facilitated by the weakness of the [110] crystallographic plane in III-V semiconductor crystals (such as GaAs, InP, GaSb, etc.) compared to other planes. A scratch made at the edge of the wafer and a slight bending force cause a nearly atomically perfect mirror-like cleavage plane to form and propagate in a straight line across the wafer. However, the atomic states at the cleavage plane are altered (compared to their bulk properties within the crystal) by the termination of the perfectly periodic lattice at that plane. Surface states at the cleaved plane have energy levels within the (otherwise forbidden) band gap of the semiconductor. The absorbed light causes generation of electron-hole pairs. These can lead to breaking of chemical bonds on the crystal surface followed by oxidation, or to release of heat by nonradiative recombination. The oxidized surface then shows increased absorption of the laser light, which further accelerates its degradation. The oxidation is especially problematic for semiconductor layers containing aluminium. Essentially, as a result, when light propagates through the cleavage plane and transits to free space from within the semiconductor crystal, a fraction of the light energy is absorbed by the surface states, where it is converted to heat by phonon-electron interactions. This heats the cleaved mirror. In addition, the mirror may heat simply because the edge of the diode laser—which is electrically pumped—is in less-than-perfect contact with the mount that provides a path for heat removal. The heating of the mirror causes the band gap of the semiconductor to shrink in the warmer areas. The band gap shrinkage brings more electronic band-to-band transitions into alignment with the photon energy, causing yet more absorption. This is thermal runaway, a form of positive feedback, and the result can be melting of the facet, known as catastrophic optical damage, or COD. Deterioration of the laser facets with aging and effects of the environment (erosion by water, oxygen, etc.) increases light absorption by the surface, and decreases the COD threshold.
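As a rough order-of-magnitude illustration of the power-density figures quoted above, the following sketch estimates the optical power density at a facet from the output power and stripe width; the 0.5 μm effective near-field height is an assumed, typical value for illustration only, not a parameter of any particular device.

```python
def facet_power_density_mw_cm2(power_w, stripe_width_um, near_field_height_um=0.5):
    """Rough optical power density (MW/cm^2) at a diode-laser facet.

    Treats the emitting spot as a rectangle: stripe width times an effective
    vertical near-field height (assumed 0.5 um here as a typical order of
    magnitude, not a measured value for any particular device).
    """
    area_cm2 = (stripe_width_um * 1e-4) * (near_field_height_um * 1e-4)
    return power_w / area_cm2 / 1e6

# 100 mW from a 1 um-wide stripe gives about 20 MW/cm^2, i.e. within the
# 12-20 MW/cm^2 COD range quoted above.
print(round(facet_power_density_mw_cm2(0.1, 1.0), 1))
print(facet_power_density_mw_cm2(0.1, 1.0) > 12)   # True: at or above the COD threshold
```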
A sudden catastrophic failure of the laser due to COD can then occur after many thousands of hours in service. Improvements One of the methods of increasing the COD threshold in AlGaInP laser structures is sulfur treatment, which replaces the oxides at the laser facet with chalcogenide glasses. This decreases the recombination velocity of the surface states. Reduction of the recombination velocity of surface states can also be achieved by cleaving the crystals in ultrahigh vacuum and immediately depositing a suitable passivation layer. A thin layer of aluminium can be deposited over the surface for gettering the oxygen. Another approach is doping of the surface, increasing the band gap and decreasing absorption at the lasing wavelength, shifting the absorption maximum several nanometers up. Current crowding near the mirror area can be avoided by preventing the injection of charge carriers near the mirror region. This is achieved by depositing the electrodes at least several carrier diffusion distances away from the mirror. Energy density on the surface can be reduced by employing a waveguide that broadens the optical cavity, so the same amount of energy exits through a larger area. Energy densities of 15–20 MW/cm2, corresponding to 100 mW per micrometer of stripe width, are now achievable. A wider laser stripe can be used for higher output power, at the cost of transverse mode oscillations and therefore a worsening of spectral and spatial beam quality. This problem, which is particularly nettlesome for GaAs-based lasers emitting between 0.630 μm and 1 μm wavelengths (less so for InP-based lasers used for long-haul telecommunications, which emit between 1.3 μm and 2 μm), was identified in the 1970s. Michael Ettenberg, a researcher and later Vice President at RCA Laboratories' David Sarnoff Research Center in Princeton, New Jersey, devised a solution. A thin layer of aluminum oxide was deposited on the facet. If the aluminum oxide thickness is chosen correctly, it functions as an anti-reflective coating, reducing reflection at the surface. This alleviated the heating and COD at the facet. Since then, various other refinements have been employed. One approach is to create a so-called non-absorbing mirror (NAM) such that the final 10 μm or so before the light exits from the cleaved facet are rendered non-absorbing at the wavelength of interest. Such lasers are called window lasers. In the very early 1990s, SDL, Inc. began supplying high power diode lasers with good reliability characteristics. CEO Donald Scifres and CTO David Welch presented new reliability performance data at, e.g., SPIE Photonics West conferences of the era. The methods used by SDL to defeat COD were considered to be highly proprietary and have still not been disclosed publicly as of June 2006. In the mid-1990s, IBM Research (Ruschlikon, Switzerland) announced that it had devised its so-called "E2 process" which conferred extraordinary resistance to COD in GaAs-based lasers. This process, too, has never been disclosed as of June 2006. Further reading Graduate thesis about COD in high power diode lasers from 2013 References Semiconductor device defects Laser science
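The thresholds quoted above lend themselves to a quick back-of-the-envelope check. The sketch below estimates the optical power density at the facet from the output power and the emitting aperture and compares it with the 12–20 MW/cm2 range; it is only an illustration, and the 5 W output and the 100 μm × 1 μm aperture are assumed example values, not figures taken from this article.

```python
# Illustrative estimate of facet power density for a broad-area diode laser.
# The output power and emitter dimensions below are assumed example values.

def facet_power_density_mw_cm2(power_w: float, stripe_um: float, height_um: float) -> float:
    """Optical power density at the facet in MW/cm^2."""
    area_cm2 = (stripe_um * 1e-4) * (height_um * 1e-4)  # convert micrometres to cm
    return power_w / area_cm2 / 1e6                      # W/cm^2 -> MW/cm^2

COD_RANGE_MW_CM2 = (12.0, 20.0)  # typical industrial COD values quoted above

if __name__ == "__main__":
    density = facet_power_density_mw_cm2(power_w=5.0, stripe_um=100.0, height_um=1.0)
    low, high = COD_RANGE_MW_CM2
    print(f"Estimated facet power density: {density:.1f} MW/cm^2")
    print("Above the quoted COD range" if density > high else "Within or below the quoted COD range")
```

With these assumed values the estimate comes out at 5 MW/cm2, comfortably below the quoted COD range; halving the stripe width or doubling the power pushes it toward the danger zone, which is consistent with the inverse relationship between power density and time to COD described above.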
Catastrophic optical damage
Technology
1,364
14,800,527
https://en.wikipedia.org/wiki/CACNG2
Calcium channel, voltage-dependent, gamma subunit 2, also known as CACNG2 or stargazin, is a protein that in humans is encoded by the CACNG2 gene. Function L-type calcium channels are composed of five subunits. The protein encoded by this gene represents one of these subunits, gamma, and is one of several gamma subunit proteins. It is an integral membrane protein that is thought to stabilize the calcium channel in an inactive (closed) state. This protein is similar to the mouse stargazin protein, mutations in which have been associated with absence seizures, also known as petit-mal or spike-wave seizures. This gene is a member of the neuronal calcium channel gamma subunit gene subfamily of the PMP-22/EMP/MP20 family. Stargazin is involved in the transport of AMPA receptors to the synaptic membrane, and the regulation of their receptor rate constants — via its extracellular domain — once they are there. As it is highly expressed throughout the cerebral cortex, it is likely to have an important role in learning within these areas, due to the importance of AMPA receptors in LTP. Clinical significance Disruptions of CACNG2 have been implicated in autism. Interactions CACNG2 has been shown to interact with GRIA4, DLG4, and MAGI2. See also Voltage-dependent calcium channel References Further reading External links Ion channels
CACNG2
Chemistry
292
12,552,550
https://en.wikipedia.org/wiki/C9H10
{{DISPLAYTITLE:C9H10}} The molecular formula C9H10 may refer to: Allylbenzene Cyclononatetraene Indane α-Methylstyrene (AMS) Phenylpropene trans-Propenylbenzene 4-Vinyltoluene Molecular formulas
C9H10
Physics,Chemistry
70
15,215,841
https://en.wikipedia.org/wiki/Chinese-American%20Chemical%20Society
Founded in 1981, the Chinese American Chemical Society (CACS) is a nonprofit, professional organization that has neither national nor regional political affiliation. Membership is open to professionals and students in chemistry, chemical engineering, and related fields, as well as to individuals and corporations supporting the objectives of the society. Currently, CACS has three local chapters in North America. References "Silver anniversary for Chinese society" by Linda Wang. C&EN May 1, 2006, page 37. External links CACS homepage Chemistry societies Scientific societies based in the United States Organizations established in 1981 Chinese-American organizations 1981 establishments in the United States
Chinese-American Chemical Society
Chemistry
124
76,817,298
https://en.wikipedia.org/wiki/De%20novo%20domestication
De novo domestication is a process where new species are genetically altered to meet human needs, such as agriculture or companionship. It is performed both by farmers and scientists, and can be done through traditional selective breeding or modern biotechnological methods. Targets for de novo domestication are often species that have never been under cultivation, but may also include wild relatives of already domesticated species. Definition De novo domestication refers to the process by which wild species are intentionally transformed into domesticated varieties. The majority of domesticated species have been under domestication for millennia, with the first animal, the dog, having been under domestication for between 30,000 and 40,000 years, and the first plants since the start of the Neolithic Revolution, approximately 12,000 years ago. This initial process of domestication is hypothesized to have been a passive one, resulting from the subconscious selection of individuals performing better in agricultural contexts. The scientific field of de novo domestication seeks to domesticate new species in an accelerated manner, as opposed to over the course of thousands of years, as more domesticated species may provide an advantage to humanity, especially in agriculture. Newly domesticated crop species may allow for alternatives to agricultural extensification in regions where yields are plateauing, make agricultural systems more resilient to climate change, and increase the sustainability of agriculture. De novo domestication does not only happen in a scientific context: the active domestication of new species is also performed by farmers, especially in the Global South. The collection and subsequent agricultural integration of traditionally wild-gathered food plants still happens to this day, and also constitutes de novo domestication. The terminology in the scientific field of domestication is poorly standardized, with the same term meaning different things to different scientists. This means that in some cases, de novo domestication is solely used for species that have no history of domestication, while in other cases, it can be used to describe further studies into semi-domesticated crops, which have already gone through (early) stages of domestication. In plants The study of de novo domestication is most prevalent in plants, due to the implications new crops may bring to agriculture. There are two potential applications of the study of de novo domestication in plant sciences: the introduction of novel crops into agricultural systems and the redomestication of wild relatives of conventionally domesticated crops. Novel species The introduction of novel species into agricultural systems has the potential to radically alter their workings. One set of candidates for de novo domestication are perennial grains, cereal crops that can be harvested for multiple seasons after planting, as opposed to the annual grains that dominate agriculture. The successful de novo domestication of a perennial grain would drastically reduce the need for yearly plowing, seedling protection and energy spent on reaching maturity, thus decreasing environmental impact and labour use. The de novo domestication of tropical fruit trees has been suggested to be able to help address 14 of the 17 Sustainable Development Goals set by the United Nations, either directly or indirectly. 
Redomestication Another use for de novo domestication is the redomestication of wild relatives of domesticated crops. Through millennia under selection, most domesticated crops have undergone many genetic bottlenecks, drastically reducing their genetic diversity, and thus the ability to breed in new traits. Meanwhile, these bottlenecked crops have been spread over the entire world, and are often grown in areas with climates that differ significantly from their genetic center of origin. Redomestication of crop wild relatives may offer a solution to long-term, repetitive plant breeding projects seeking to integrate wild relative DNA from the center of origin into established hybrid cultivars. This is especially relevant for crops, such as hexaploid wheat, that are reproductively incompatible with their wild relatives through processes such as polyploidization, and where integration of wild relative DNA through traditional breeding projects is therefore difficult. In animals The de novo domestication of animals has less scientific traction than that of plants, but one notable project is that undertaken by the Russian Institute of Cytology and Genetics to domesticate the fox. This project aimed to study the theory of evolution and domestication syndrome by attempting the domestication of foxes, but was not primarily aimed at providing a new domesticated animal. De novo domestication of fish, either in the ornamental aquarium trade or for the purposes of pisciculture, is also ongoing. In fungi Fungiculture, the cultivation of fungi such as mushrooms, has historically been less important than horticulture or animal husbandry in providing food for humans. Mushrooms were often gathered from the wild, but the knowledge to do so has largely disappeared in the Global North due to lifestyle changes and urbanization, prompting an increased need for mushroom cultivation. As a result, many fungi were de novo domesticated, such as snow fungus (1866), oyster mushroom (1917), and milky white mushroom (1974). A fungus that has been notoriously difficult to bring under cultivation is the white truffle, and projects to de novo domesticate it are ongoing. See also Genetic manipulation References Genetics Domestication Selection Breeding Biotechnology
De novo domestication
Biology
1,042
50,995,425
https://en.wikipedia.org/wiki/2014%20GSOC%20bugging%20scandal
The GSOC bugging scandal in February 2014 involved revelations that the offices of the Garda Síochána Ombudsman Commission, Ireland's independent police watchdog, were under covert electronic surveillance by an unknown party. John Mooney, security correspondent for The Sunday Times, first published the story alleging that GSOC had been the subject of surveillance by an unidentified party using "government level technology" to hack into its emails, Wi-Fi and telephone systems. The espionage operation was uncovered by a private British counter-surveillance firm, Verrimus, which GSOC hired after its suspicions were aroused that an outside party was spying on the organisation and its activities. The scandal and its aftermath are widely regarded as one of the main reasons, along with the Garda whistleblower scandal, for the resignations of the then Garda Commissioner Martin Callinan (in March 2014) and Minister for Justice and Defence Alan Shatter (in May 2014). GSOC Chairman, Simon O'Brien, also resigned from his job in January 2015, ten months after the bugging allegations became public knowledge. Discovery of surveillance operation Verrimus, the UK-based private counterintelligence company which uses countermeasures and specialist devices to uncover electronic surveillance, and employs former British military and intelligence personnel, was paid €18,000 by GSOC for its services over a number of days (it came to Dublin during the night to avoid arousing the suspicions of anyone watching GSOC) and found the following: A conference speaker phone on the upper floor of the GSOC building on Abbey Street may have been tampered with. This room was regularly used to hold case conferences on sensitive investigations. GSOC's internal wireless local area network (WLAN) was compromised in order to steal emails, data and confidential reports, and possibly to eavesdrop on mobile phone calls. A second Wi-Fi network had been created to harvest GSOC data. It was operated using an IP address in the UK, which concealed the identities and whereabouts of those operating the network. Another device, which worked off GSOC's broadband network, was also found to have been compromised. However, it was wiped of all data by those involved in the illicit operation when it became clear that their activities had been detected. A UK 4G cellular network was discovered in the vicinity of GSOC's headquarters, claimed to have been operated using an IMSI-catcher (cell tower spoofing) which, instead of displaying an Irish Mobile Country Code (MCC) and Mobile Network Code (MNC), displayed a UK country code – probably by accident, and the only reason why it was found. However, Vodafone were testing the rollout of 4G at the time and it has also been claimed that it was this test network that was detected. GSOC employed Verrimus after it had consulted with the Independent Police Complaints Commission (IPCC), which is responsible for investigating complaints against police forces in England and Wales. Suspected culprits The party widely considered most likely to have been behind the surveillance operation, having the most to gain from it as well as the experience and access to the technology required, was the Garda Síochána, Ireland's national police service. Although no direct evidence was ever found linking the Garda force or its members to the espionage, GSOC investigated many sensitive matters relating to the force, including investigations involving senior members of the force. 
It was reported that the reason GSOC ordered the bug sweep in the first place was that, after a meeting with a senior Garda officer during the course of a malpractice investigation by the watchdog, the senior Garda inadvertently let slip that he was aware of the contents of a secret report which GSOC had been working on but which had not yet been released, and that he was aware of text that never actually made it into the final report. Units of the force which have the ability to carry out such high-tech monitoring include the Crime and Security Branch, National Surveillance Unit and Special Detective Unit. The Irish Defence Forces and Revenue Commissioners are the only other two state agencies in Ireland which have the legal authority to carry out covert surveillance operations. The Irish Army and its Military Intelligence and Communications & Information Services Corps have the ability to undertake sophisticated intelligence operations, but no evidence whatsoever was proffered implicating either the military or Revenue, nor would they have stood much to gain from any information gathered. The United Kingdom's GCHQ and other intelligence services have in the past collected information concerning actions taken by the Irish government, and a second unauthorised spoofing Wi-Fi network discovered at GSOC's head office was traced back to the UK; however, it is believed this was a deliberate act to hide the culprits' tracks. The Sunday Times reported that the NSA in the United States had in the past used very similar technology to spy on targets, and in the aftermath of the Edward Snowden leaks the year before, suspicion was rife about NSA activities in Europe. However, the US had little to gain by surveilling an Irish police watchdog's investigations into corruption and malpractice, and none of GSOC's investigations at the time involved either the UK or US. Motives Journalist John Mooney linked the bugging operation to GSOC's investigation of the Garda handling of the Kieran Boylan case; Boylan was a convicted drug-runner who was assisted by Gardaí in obtaining a passport and a haulage licence, and who had a prosecution for drug offences annulled in extraordinary circumstances. GSOC did not bring the results of the security sweep to the attention of the Minister for Justice or the Garda Síochána (who would usually investigate such matters); instead, they emerged through the media. Aftermath and resignations This was the second such security sweep GSOC had undertaken, and it was also understood to be concerned about the level of detail emerging publicly regarding ongoing cases. Electronic security procedures were improved after the sweep, including a conference room which cannot be bugged. The government appointed retired High Court Judge John Cooke to conduct an independent inquiry into reports of unlawful surveillance of the Garda Síochána Ombudsman Commission. He found no conclusive evidence that the surveillance had taken place, nor of who might have carried it out, nor indeed that it had not occurred. Judge Cooke was the only person to undertake the inquiry, which did not include any technical expertise, as had been called for by opposition parties. A number of weeks after news of the bugging at GSOC broke, on 25 March 2014, Garda Commissioner Martin Callinan resigned, citing "early retirement", after it was believed the government had lost confidence in his leadership and wanted a fresh face to head the force. 
Minister for Justice and Defence Alan Shatter, who had a very close working relationship with Commissioner Callinan, resigned from government on 7 May 2014 and later lost his seat as a TD in Dáil Éireann at the 2016 general election. Questions had been raised about the unusual and potentially conflicting arrangement of a Minister not only holding both the Justice and Defence portfolios (housing the two main intelligence services of the state), but also being in charge of both the Gardaí and the watchdog whose sole responsibility it is to investigate them. Chairman of the Garda Síochána Ombudsman Commission, Simon O'Brien, announced his resignation on 7 January 2015, with two years remaining on his contract, to take up a role as chief executive of the Pensions Ombudsman Service in the UK. Both the Association of Garda Sergeants and Inspectors (AGSI) and the Garda Representative Association (GRA) had previously called on him to step down over his handling of the bugging scandal, despite GSOC being the victim of it. See also Kieran Boylan affair Garda whistleblower scandal Garda phone recordings scandal Irish phone tapping scandal References GSOC bugging scandal GSOC bugging scandal 21st-century scandals Crime in the Republic of Ireland GSOC bugging scandal Police misconduct in Ireland Political scandals in the Republic of Ireland Privacy of telecommunications Surveillance scandals Computing-related controversies GSOC bugging scandal
2014 GSOC bugging scandal
Technology
1,647
35,547,268
https://en.wikipedia.org/wiki/Bacterial%20morphological%20plasticity
Bacterial morphological plasticity refers to changes in the shape and size that bacterial cells undergo when they encounter stressful environments. Although bacteria have evolved complex molecular strategies to maintain their shape, many are able to alter their shape as a survival strategy in response to protist predators, antibiotics, the immune response, and other threats. Bacterial shape and size under selective forces Normally, bacteria have different shapes and sizes, which include coccus, rod and helical/spiral (among other less common ones) and which allow for their classification. For instance, rod shapes may allow bacteria to attach more readily in environments with shear stress (e.g., in flowing water). Cocci may have access to small pores, creating more attachment sites per cell and hiding themselves from external shear forces. Spiral bacteria combine some of the characteristics of cocci (small footprints) and of filaments (more surface area on which shear forces can act) with the ability to form an unbroken set of cells to build biofilms. Several bacteria alter their morphology in response to the types and concentrations of external compounds. Bacterial morphology changes help to optimize interactions with cells and the surfaces to which they attach. This mechanism has been described in bacteria such as Escherichia coli and Helicobacter pylori. Bacterial filamentation Physiological mechanisms Oxidative stress, nutrient limitation, DNA damage and antibiotic exposure are examples of stressors that cause bacteria to halt septum formation and cell division. Filamentous bacteria have been considered to be over-stressed, sick and dying members of the population. However, the filamentous members of some communities have vital roles in the population's continued existence, since the filamentous phenotype can confer protection against lethal environments. Filamentous bacteria can be over 90 μm in length and play an important role in the pathogenesis of human cystitis. Filamentous forms arise via several different mechanisms. Base Excision Repair (BER) mechanism This is a strategy to repair DNA damage observed in E. coli. It involves two types of enzymes: Bifunctional glycosylases: endonuclease III (encoded by the nth gene) Apurinic/apyrimidinic (AP) endonucleases: endonuclease IV (encoded by the nfo gene) and exonuclease III (encoded by the xth gene). Under this mechanism, daughter cells are protected from receiving damaged copies of the bacterial chromosome, which at the same time promotes bacterial survival. Mutants for these genes lack BER activity, and a strong formation of filamentous structures is observed. SulA/FtsZ mediated filamentation This is a mechanism to halt cell division and repair DNA. In the presence of single-stranded DNA regions, generated by the action of different external cues (that induce mutations), the major bacterial recombinase (RecA) binds to these DNA regions and is activated by the presence of free nucleotide triphosphates. This activated RecA stimulates the autoproteolysis of the SOS transcriptional repressor LexA. The LexA regulon includes a cell division inhibitor, SulA, that prevents the transmission of mutant DNA to the daughter cells. SulA is a dimer that binds FtsZ (a tubulin-like GTPase) in a 1:1 ratio and acts specifically on its polymerization, which results in the formation of non-septated bacterial filaments. A similar mechanism may occur in Mycobacterium tuberculosis, which also elongates after being phagocytized. M. 
tuberculosis Septum site determining protein (Ssd), encoded by rv3660c, promotes filamentation in response to the stressful intracellular environment. Ssd inhibits septum formation and is also found in Mycobacterium smegmatis. The bacterial filament ultrastructure is consistent with inhibition of FtsZ polymerization (previously described). Ssd is believed to be part of a global regulatory mechanism in this bacterium that promotes a shift into an altered metabolic state. Helicobacter pylori In this spiral-shaped Gram-negative bacterium, filamentation is regulated by two mechanisms: the peptidases that cause peptidoglycan relaxation and the coiled-coil-rich proteins (Ccrp) that are responsible for the helical cell shape in vitro as well as in vivo. A rod shape probably offers an advantage in motility over the regular helical shape. In this model, there is another protein, Mre, which is involved not so much in the maintenance of cell shape as in the cell cycle. It has been demonstrated that mutant cells were highly elongated due to a delay in cell division and contained non-segregated chromosomes. Environmental cues Immune response Some of the strategies for bacteria to bypass host defenses include the generation of filamentous structures. As has been observed in other organisms (such as fungi), filamentous forms are resistant to phagocytosis. As an example of this, during urinary tract infection, filamentous structures of uropathogenic E. coli (UPEC) start to develop in response to the host innate immune response (more exactly, in response to Toll-like receptor 4, TLR4). TLR4 is stimulated by lipopolysaccharide (LPS) and recruits neutrophils (PMNs), which are important leukocytes for eliminating these bacteria. By adopting filamentous structures, bacteria resist these phagocytic cells and their neutralizing activity (which includes antimicrobial peptides, degradative enzymes and reactive oxygen species). It is believed that filamentation is induced as a response to DNA damage (by the mechanisms previously described), involving the SulA mechanism and additional factors. Furthermore, filamentous bacteria, by virtue of their length, may attach more strongly to epithelial cells, with an increased number of adhesins participating in the interaction, making the work of the PMNs even harder. The interaction between phagocytic cells and bacteria that adopt a filamentous shape thus provides an advantage to bacterial survival. In this regard, filamentation could be not only a virulence factor but also a resistance factor in these bacteria. Predator protist Bacteria exhibit a high degree of "morphological plasticity" that protects them from predation. Bacterial capture by protozoa is affected by the size and shape irregularities of the bacteria. Oversized, filamentous, or prosthecate bacteria may be too large to be ingested. On the other hand, other factors such as extremely tiny cells, high-speed motility, tenacious attachment to surfaces, formation of biofilms and multicellular conglomerates may also reduce predation. Several phenotypic features of bacteria are adapted to escape protistan-grazing pressure. Protistan grazing, or bacterivory, is protozoan feeding on bacteria. It affects prokaryotic size and the distribution of microbial groups. Protists use several feeding mechanisms to seek and capture prey, and bacteria must avoid being consumed by them. There are six feeding mechanisms listed by Kevin D. Young. 
Filter feeding: transport of water through a filter or sieve Sedimentation: allows prey to settle into a capture device Interception: capture by predator-induced current or motility and phagocytosis Raptorial: predator grabs and ingests prey through the pharynx or by pseudopods Pallium: prey engulfed, e.g. by extrusion of a feeding membrane Myzocytosis: punctures prey and sucks out the cytoplasm and contents Bacterial responses are elicited depending on the predator and prey combination, because feeding mechanisms differ among the protists. Moreover, the grazing protists also produce by-products, which directly lead to the morphological plasticity of prey bacteria. For example, the morphological phenotypes of Flectobacillus spp. were evaluated in the presence and absence of the flagellate grazer Ochromonas spp. under environmentally controlled laboratory conditions in a chemostat. Without the grazer and with an adequate nutrient supply, the Flectobacillus spp. grew mainly as medium-sized rods (4–7 μm), remaining at a typical length of 6.2 μm. With the predator present, the Flectobacillus spp. size increased to an average of 18.6 μm, making the cells resistant to grazing. If the bacteria are exposed to the soluble by-products produced by grazing Ochromonas spp. and passed through a dialysis membrane, the bacterial length can increase to an average of 11.4 μm. Filamentation occurs as a direct response to these effectors produced by the predator, and there is a size preference for grazing that varies for each species of protist. Filamentous bacteria that are larger than 7 μm in length are generally inedible to marine protists. This morphological class is called grazing resistant. Thus, filamentation leads to the prevention of phagocytosis and killing by predators. Bimodal effect The bimodal effect is a situation in which bacterial cells in an intermediate size range are consumed more rapidly than the very large or the very small. Bacteria smaller than 0.5 μm in diameter are grazed by protists four to six times less than larger cells. Moreover, filamentous cells or cells with diameters greater than 3 μm are often too large for protists to ingest, or are grazed at substantially lower rates than smaller bacteria. The specific effects vary with the size ratio between predator and prey. Pernthaler et al. classified susceptible bacteria into four groups by rough size: bacteria < 0.4 μm were not grazed well; bacteria between 0.4 μm and 1.6 μm were "grazing vulnerable"; bacteria between 1.6 μm and 2.4 μm were "grazing suppressed"; and bacteria > 2.4 μm were "grazing resistant". Filamentous prey are resistant to protist predation in a number of marine environments. In fact, no bacterium is entirely safe; some predators graze the larger filaments to some degree. The morphological plasticity of some bacterial strains shows itself under different growth conditions. For instance, at enhanced growth rates, some strains can form large thread-like morphotypes, while filament formation in subpopulations can occur during starvation or under suboptimal growth conditions. These morphological shifts could be triggered by external chemical cues that might be released by the predator itself. Besides bacterial size, there are several factors affecting predation by protists. As for bacterial shape, a spiral morphology may play a defensive role against predation. For example, Arthrospira may reduce its susceptibility to predation by altering its spiral pitch. 
This alteration inhibits some natural geometric feature of the protist's ingestion apparatus. Multicellular complexes of bacterial cells also change the ability of protists to ingest them. Cells in biofilms or microcolonies are often more resistant to predation. For instance, the swarm cells of Serratia liquefaciens resist predation by its predator, Tetrahymena. Because the normal-sized cells that first contact a surface are most susceptible, bacteria need elongated swarm cells to protect them from predation until the biofilm matures. Aquatic bacteria can produce a wide range of extracellular polymeric substances (EPS), which comprise proteins, nucleic acids, lipids, polysaccharides and other biological macromolecules. EPS secretion protects bacteria from grazing by heterotrophic nanoflagellates (HNFs). The EPS-producing planktonic bacteria typically develop subpopulations of single cells and microcolonies that are embedded in an EPS matrix. The larger microcolonies are also protected from flagellate predation because of their size. The shift to the colonial type may be a passive consequence of selective feeding on single cells. However, microcolony formation can be specifically induced in the presence of predators by cell-cell communication (quorum sensing). As for bacterial motility, bacteria with high-speed motility sometimes avoid grazing better than their nonmotile or slower counterparts, especially the smallest, fastest bacteria. Moreover, a cell's movement strategy may be altered by predation. Bacteria may move by a run-and-reverse strategy, which helps them to beat a hasty retreat before being trapped, instead of by the run-and-tumble strategy. However, one study showed that the probability of random contacts between predators and prey increases with bacterial swimming, and motile bacteria can be consumed at higher rates by HNFs. In addition, bacterial surface properties also affect predation. For example, there is evidence that protists prefer gram-negative bacteria over gram-positive bacteria. Protists consume gram-positive cells at much lower rates than gram-negative cells. Heterotrophic nanoflagellates actively avoid grazing on gram-positive actinobacteria as well. Grazing on gram-positive cells requires a longer digestion time than grazing on gram-negative cells. As a result, the predator cannot handle more prey until the previously ingested material is digested or expelled. Moreover, bacterial cell surface charge and hydrophobicity have also been suggested to reduce grazing. Another strategy that bacteria can use to avoid predation is to poison their predator. For example, certain bacteria such as Chromobacterium violaceum and Pseudomonas aeruginosa can secrete toxic agents related to quorum sensing to kill their predators. Antibiotics Antibiotics can induce a broad range of morphological changes in bacterial cells including spheroplast, protoplast and ovoid cell formation, filamentation (cell elongation), localized swelling, bulge formation, blebbing, branching, bending, and twisting. Some of these changes are accompanied by altered antibiotic susceptibility or altered bacterial virulence. Filamentous bacteria, for example, are commonly found in clinical specimens from patients treated with β-lactam antibiotics. Filamentation is accompanied by both a decrease in antibiotic susceptibility and an increase in bacterial virulence. This has implications for both disease treatment and disease progression. 
Antibiotics used to treat Burkholderia pseudomallei infection (melioidosis), for example β-lactams, fluoroquinolones and thymidine synthesis inhibitors, can induce filamentation and other physiological changes. The ability of some β-lactam antibiotics to induce bacterial filamentation is attributable to their inhibition of certain penicillin-binding proteins (PBPs). PBPs are responsible for assembly of the peptidoglycan network in the bacterial cell wall. Inhibition of PBP-2 changes normal cells to spheroplasts, while inhibition of PBP-3 changes normal cells to filaments. PBP-3 synthesizes the septum in dividing bacteria, so inhibition of PBP-3 leads to the incomplete formation of septa in dividing bacteria, resulting in cell elongation without separation. Ceftazidime, ofloxacin, trimethoprim and chloramphenicol have all been shown to induce filamentation. Treatment at or below the minimal inhibitory concentration (MIC) induces bacterial filamentation and decreases killing within human macrophages. B. pseudomallei filaments revert to normal forms when the antibiotics are removed, and daughter cells maintain cell-division capacity and viability when re-exposed to antibiotics. Thus, filamentation may be a bacterial survival strategy. In Pseudomonas aeruginosa, antibiotic-induced filamentation appears to trigger a change from normal growth phase to stationary growth phase. Filamentous bacteria also release more endotoxin (lipopolysaccharide), one of the toxins responsible for septic shock. In addition to the mechanism described above, some antibiotics induce filamentation via the SOS response. During repair of DNA damage, the SOS response aids bacterial propagation by inhibiting cell division. DNA damage induces the SOS response in E. coli through the DpiBA two-component signal transduction system, leading to inactivation of the ftsL gene product, penicillin binding protein 3 (PBP-3). The ftsL gene belongs to a group of filamentation temperature-sensitive genes used in cell division. Their product (PBP-3), as mentioned above, is a membrane transpeptidase required for peptidoglycan synthesis at the septum. Inactivation of the ftsL gene product requires the SOS-promoting recA and lexA genes as well as dpiA, and transiently inhibits bacterial cell division. DpiA is the effector for the DpiB two-component system. Interaction of DpiA with replication origins competes with the binding of the replication proteins DnaA and DnaB. When overexpressed, DpiA can interrupt DNA replication and induce the SOS response, resulting in inhibition of cell division. Nutritional stress Nutritional stress can change bacterial morphology. A common shape alteration is filamentation, which can be triggered by a limited availability of one or more substrates, nutrients or electron acceptors. A filament can increase a cell's uptake surface area without appreciably changing its volume. Moreover, filamentation benefits bacterial cells attaching to a surface because it increases the specific surface area in direct contact with the solid medium. In addition, filamentation may allow bacterial cells to access nutrients by enhancing the possibility that part of the filament will contact a nutrient-rich zone and pass compounds to the rest of the cell's biomass. For example, Actinomyces israelii grows as filamentous or branched rods in the absence of phosphate, cysteine, or glutathione. However, it returns to a regular rod-like morphology when these nutrients are added back. 
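The rough size classes attributed to Pernthaler et al. in the predation section above can be written as a simple lookup. The sketch below is illustrative only; in particular, the handling of the boundary values 0.4, 1.6 and 2.4 μm is a choice made here, since the quoted classes do not state which group the boundaries belong to.

```python
def grazing_class(size_um: float) -> str:
    """Rough grazing-susceptibility class by cell size, per the thresholds quoted above.
    Boundary values are assigned to the larger class as an arbitrary convention."""
    if size_um < 0.4:
        return "not grazed well"
    if size_um <= 1.6:
        return "grazing vulnerable"
    if size_um <= 2.4:
        return "grazing suppressed"
    return "grazing resistant"

# 18.6 um matches the average filament length reported for Flectobacillus above.
for size in (0.3, 1.0, 2.0, 18.6):
    print(f"{size:>5.1f} um -> {grazing_class(size)}")
```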
See also Filamentation Protoplasts Spheroplasts References Bacteriology Morphology (biology)
Bacterial morphological plasticity
Biology
3,805
13,736,699
https://en.wikipedia.org/wiki/Cobalt%28II%29%20thiocyanate
Cobalt(II) thiocyanate is an inorganic compound with the formula Co(SCN)2. The anhydrous compound is a coordination polymer with a layered structure. The trihydrate, Co(SCN)2(H2O)3, is an isothiocyanate complex used in the cobalt thiocyanate test (or Scott test) for detecting cocaine. The test has been responsible for widespread false positives and false convictions. Structure and preparation The structures of Co(SCN)2 and its hydrate Co(SCN)2(H2O)3 have been determined using X-ray crystallography. Co(SCN)2 forms infinite 2D sheets as in the mercury(II) thiocyanate structure type, whereas Co(SCN)2(H2O)3 consists of isolated tetrahedral Co(SCN)2(H2O)2 centers and one equivalent of water of crystallization. The hydrate may be prepared by salt metathesis reactions, such as the reaction of aqueous cobalt(II) sulfate and barium thiocyanate to produce a barium sulfate precipitate, leaving the hydrate of Co(SCN)2 in solution: CoSO4 + Ba(SCN)2 → BaSO4 + Co(SCN)2 or the reaction of hexakisacetonitrile cobalt(II) tetrafluoroborate and potassium thiocyanate, precipitating KBF4: [Co(NCMe)6](BF4)2 + 2KSCN → 2KBF4 + Co(SCN)2. The anhydrous compound can then be prepared via addition of diethyl ether as an antisolvent. Cobalt thiocyanate test Detailed procedures for the cobalt thiocyanate test, often sold as the "Morris reagent", are available. The reagent consists of 2% cobalt thiocyanate dissolved in dilute acid. Glycerol is often added to stabilise the cobalt complex, ensuring it only goes blue when in contact with an analyte and not due to drying. Addition of the cobalt thiocyanate reagent to cocaine hydrochloride results in the surface of the particles turning a bright blue (faint blue for cocaine base). The solution changes back to pink upon adding some hydrochloric acid. Addition of chloroform results in a blue organic layer for both cocaine hydrochloride and cocaine base. Diphenhydramine and lidocaine also give blue organic layers. These compounds are known false positives for cocaine. Lidocaine is commonly used to adulterate or mimic cocaine due to its local anaesthetic effect. If the procedure is adjusted to basify the sample rather than acidify it, the test can be used to test for ketamine hydrochloride. References Thiocyanates Cobalt(II) compounds Chemical tests Drug testing reagents Cocaine
Cobalt(II) thiocyanate
Chemistry
637
38,408,206
https://en.wikipedia.org/wiki/Iron%28II%29%20hydride
Iron(II) hydride, systematically named iron dihydride and poly(dihydridoiron), is a solid inorganic compound with the chemical formula FeH2 (also written ([FeH2])n). It is kinetically unstable at ambient temperature, and as such, little is known about its bulk properties. However, it is known as a black, amorphous powder, which was synthesised for the first time in 2014. Iron(II) hydride is the second simplest polymeric iron hydride (after iron(I) hydride). Due to its instability, it has no practical industrial uses. However, in metallurgical chemistry, iron(II) hydride is fundamental to certain forms of iron-hydrogen alloys. Nomenclature The systematic name iron dihydride, a valid IUPAC name, is constructed according to the compositional nomenclature. However, as the name is compositional in nature, it does not distinguish between compounds of the same stoichiometry, such as molecular species, which exhibit distinct chemical properties. The systematic names poly(dihydridoiron) and poly[ferrane(2)], also valid IUPAC names, are constructed according to the additive and electron-deficient substitutive nomenclatures, respectively. They do distinguish the titular compound from the others. Dihydridoiron Dihydridoiron, also systematically named ferrane(2), is a related inorganic compound with the chemical formula FeH2 (also written [FeH2]). It is kinetically unstable both in concentrated form and at ambient temperature. Dihydridoiron is the second simplest molecular iron hydride (after hydridoiron), and is also the progenitor of clusters with the same stoichiometry. In addition, it may be considered to be the iron(II) hydride monomer. It has been observed in matrix isolation. Properties Acidity and basicity An electron pair of a Lewis base can join with the iron centre in dihydridoiron by adduction: FeH2 + :L → FeH2L Because of this capture of an adducted electron pair, dihydridoiron has Lewis acidic character. Dihydridoiron has the capacity to capture up to four electron pairs from Lewis bases. A proton can join with the iron centre by dissociative protonation: FeH2 + H+ → FeH+ + H2 Because dissociative protonation involves the capture of the proton (H+) to form a Kubas complex ([FeH(H2)]+) as an intermediate, dihydridoiron and its adducts of weak-field Lewis bases, such as water, also have Brønsted–Lowry basic character. They have the capacity to capture up to two protons. Its dissociated conjugate acids are hydridoiron(1+) and iron(2+) (FeH+ and Fe2+). FeH2 + H+ → FeH+ + H2 FeH+ + H+ → Fe2+ + H2 Aqueous solutions of adducts of weak-field Lewis bases are, however, unstable due to hydrolysis of the dihydridoiron and hydridoiron(1+) groups: FeH2 + 2 H2O → Fe(OH)2 + 2 H2 FeH+ + 3 H2O → Fe(OH)2 + H3O+ + H2 It should be expected that iron dihydride clusters and iron(II) hydride have similar acid-base properties, although reaction rates and equilibrium constants are different. Alternatively, a hydrogen centre in the dihydridoiron group in adducts of strong-field Lewis bases, such as carbon monoxide, may separate from the molecule by ionisation: FeH2L4 → [FeHL4]− + H+ Because of this release of the proton, adducts of strong-field Lewis bases may have Brønsted–Lowry acidic character. They have the capacity to release up to two protons. [FeHL4]− → [FeL4]2− + H+ Mixed adducts with Lewis bases of differing field strengths may exhibit intermediate behaviour. Structure In iron(II) hydride, the atoms form a network, individual atoms being interconnected by covalent bonds. 
Since it is a polymeric solid, a monocrystalline sample is not expected to undergo state transitions, such as melting and dissolution, as this would require the rearrangement of molecular bonds and, consequently, change its chemical identity. Colloidal crystalline samples, wherein intermolecular forces are relevant, are expected to undergo state transitions. At least up to , iron(II) hydride is predicted to have a body-centred tetragonal crystalline structure with the I4/mmm space group. In this structure, iron centres have a capped square-antiprismatic coordination geometry, and hydrogen centres have square-planar and square-pyramidal geometries. An amorphous form of iron(II) hydride is also known. The infrared spectrum for dihydridoiron shows that the molecule has a linear H−Fe−H structure in the gas phase, with an equilibrium distance between the iron atom and the hydrogen atoms of 0.1665 nm. Electronic properties A few of dihydridoiron's electronic states lie relatively close to each other, giving rise to varying degrees of radical chemistry. The ground state and the first two excited states are all quintet radicals with four unpaired electrons (X5Δg, A5Πg, B5Σg+). With the first two excited states only 22 and 32 kJ mol−1 above the ground state, a sample of dihydridoiron contains trace quantities of excited states even at room temperature. Furthermore, crystal field theory predicts that the low transition energies correspond to a colourless compound. The ground electronic state is 5Δg. Metallurgical chemistry In iron-hydrogen alloys that have a hydrogen content near 3.48 wt%, hydrogen can precipitate as iron(II) hydride and lesser quantities of other polymeric iron hydrides. However, due to the limited solubility of hydrogen in iron, the optimum content for the formation of iron(II) hydride can only be reached by applying extreme pressure. In metallurgical chemistry, iron(II) hydride is fundamental to certain forms of iron-hydrogen alloys. It occurs as a brittle component within the solid matrix, with a physical makeup that depends on its formation conditions and subsequent heat treatment. As it decomposes over time, the alloy will slowly become softer and more ductile, and may start to suffer from hydrogen embrittlement. Production Dihydridoiron has been produced by several means, including: By reaction of and PhMgBr under a hydrogen atmosphere (1929). Electrical discharge in a mixture of pentacarbonyliron and dihydrogen diluted in helium at 8.5 Torr. Evaporation of iron with a laser in an atmosphere of hydrogen, pure or diluted in neon or argon, and condensing the products on a cold surface below 10 K. Decomposition product of collision-excited ferrocenium ions. Iron reduction Most iron(II) hydride is produced by iron reduction. In this process, stoichiometric amounts of iron and hydrogen react under an applied pressure of between approximately 45 and 75 GPa to produce iron(II) hydride according to the reaction: nFe + nH2 → (FeH2)n The process involves iron(I) hydride as an intermediate, and occurs in two steps. 2nFe + nH2 → 2(FeH)n 2(FeH)n + nH2 → 2(FeH2)n Bis[bis(mesityl)iron] reduction Amorphous iron(II) hydride is produced by bis[bis(mesityl)iron] reduction. In this process, bis[bis(mesityl)iron] is reduced with hydrogen under an applied pressure of 100 atmospheres to produce iron(II) hydride according to the reaction: n [Fe(mes)2]2 + 4n H2 → 2 (FeH2)n + 4n Hmes The process involves bis[hydrido(mesityl)iron] and dihydridoiron as intermediates, and occurs in three steps. 
[Fe(mes)2]2 + 2 H2 → [FeH(mes)]2 + 2 Hmes [FeH(mes)]2 + 2 H2 → 2 FeH2 + 2 Hmes n FeH2 → (FeH2)n Reactions As dihydridoiron is an electron-deficient molecule, it spontaneously autopolymerises in its pure form, or converts to an adduct upon treatment with a Lewis base. Upon treatment of adducts of weak-field Lewis bases with a dilute standard acid, it converts to a hydridoiron(1+) salt and elemental hydrogen. Treatment of adducts of strong-field Lewis bases with a standard base converts it to a metal ferrate(1−) salt and water. Oxidation of iron dihydrides gives iron(II) hydroxide, whereas reduction gives hexahydridoferrate(4−) salts. Unless cooled to or below, dihydridoiron decomposes to produce elemental iron and hydrogen. Other iron dihydrides and adducts of dihydridoiron decompose at higher temperatures to also produce elemental hydrogen, and iron or polynuclear iron adducts: FeH2 → Fe + H2 Non-metals, including oxygen, strongly attack iron dihydrides, forming hydrogenated compounds and iron(II) compounds: FeH2 + O2 → FeO + H2O Iron(II) compounds can also be prepared from an iron dihydride and an appropriate, concentrated acid: FeH2 + 2 HCl → FeCl2 + 2 H2 History Even though complexes containing dihydridoiron have been known since 1931, the simple compound with the molecular formula FeH2 is only a much more recent discovery. Following the discovery of the first complex containing dihydridoiron, the tetracarbonyl, it was also quickly discovered that it is not possible to remove the carbon monoxide by thermal means - heating a dihydridoiron-containing complex only causes it to decompose, a behaviour attributable to the weak iron-hydrogen bond. Thus, a practical method has been sought since then for the production of the pure compound, without the involvement of a liquid phase. Furthermore, there is also ongoing research into its other adducts. Although iron(II) hydride has received attention only recently, complexes containing the dihydridoiron group have been known at least since 1931, when iron carbonyl hydride FeH2(CO)4 was first synthesised. The most precisely characterised FeH2L4 complex as of 2003 is FeH2(CO)2[P(OPh)3]2. Complexes can also contain FeH2 with hydrogen molecules as a ligand. Those with one or two molecules of hydrogen are unstable, but FeH2(H2)3 is stable and can be produced by the evaporation of iron into hydrogen gas. From infrared spectra of samples of dihydridoiron trapped in frozen argon between 10 and 30 K, Chertihin and Andrews conjectured in 1995 that dihydridoiron readily dimerized into Fe2H4, and that it reacts with atomic hydrogen to produce trihydridoiron (FeH3). However, it was later proven that the product of the reaction was likely to have been hydrido(dihydrogen)iron (FeH(H2)). References Iron(II) compounds Metal hydrides
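The 3.48 wt% hydrogen content quoted above for iron-hydrogen alloys corresponds to the stoichiometry of FeH2 itself, which can be checked with standard atomic masses. The short calculation below is only a verification of that figure.

```python
# Verify that FeH2 stoichiometry corresponds to roughly 3.48 wt% hydrogen,
# the figure quoted above for iron-hydrogen alloys. Standard atomic masses used.

M_FE = 55.845   # g/mol, iron
M_H = 1.008     # g/mol, hydrogen

def hydrogen_weight_percent(n_h: int = 2) -> float:
    """Weight percent of hydrogen in FeH_n (n_h = 2 for FeH2)."""
    return 100.0 * n_h * M_H / (M_FE + n_h * M_H)

print(f"H content of FeH2: {hydrogen_weight_percent():.2f} wt%")  # prints ~3.48 wt%
```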
Iron(II) hydride
Chemistry
2,305
12,112,279
https://en.wikipedia.org/wiki/The%20Critical%20Eye
The Critical Eye is a Discovery Science Channel documentary series examining pseudoscientific and paranormal phenomena. The eight-part documentary series aired from October 2002 through February 2003 and was hosted by actor and scientific skeptic William B. Davis. Series description The Critical Eye, alternately labeled as just Critical Eye, was produced by the Discovery Science Channel, and was produced in association with Skeptical Inquirer Magazine. The show was described by cosmolearning.org as "William B. Davis hosts this programme bringing to the viewers the science behind the paranormal and the unexplained." Historical event references The series discusses several notable events: The 1990 civil trial brought against Judas Priest alleging subliminal messaging in their music The Stargate Project The Phoenix Lights The Roswell UFO incident Project Blue Book The Heaven's Gate mass suicide Episodes Each episode of the series consists of four or five segments focused specifically on one pseudoscientific or paranormal phenomenon. Each segment begins by explaining the phenomenon in question, discusses it with both scientists/skeptics and proponents/believers, and concludes with street interviews regarding the legitimacy of the phenomenon in question. References American documentary television series Documentary television series about science American English-language television shows Scientific skepticism mass media 2000s American documentary television series Pseudoscience Reiki Acupuncture Alternative medicine Energy (esotericism) Telepathy Homeopathy Parapsychology Vampirism Hypnosis Modern witchcraft Exorcism Bigfoot Loch Ness Monster Atlantis Noah's Ark Extraterrestrial life Near-death experiences Ghosts Reincarnation Mediumship Nostradamus Stonehenge Pyramids
The Critical Eye
Astronomy,Biology
329
2,937,841
https://en.wikipedia.org/wiki/Horizontal%20blanking%20interval
Horizontal blanking interval refers to a part of the process of displaying images on a computer monitor or television screen via raster scanning. CRT screens display images by moving beams of electrons very quickly across the screen. Once the beam of the monitor has reached the edge of the screen, it is switched off, and the deflection circuit voltages (or currents) are returned to the values they had for the other edge of the screen; this would have the effect of retracing the screen in the opposite direction, so the beam is turned off during this time. This part of the line display process is the horizontal blank. In detail, the horizontal blanking interval consists of: front porch – blank while still moving right, past the end of the scanline, sync pulse – blank while rapidly moving left; in terms of amplitude, "blacker than black". back porch – blank while moving right again, before the start of the next scanline. Colorburst occurs during the back porch, and unblanking happens at the end of the back porch. In the NTSC television standard, horizontal blanking occupies about 10.9 μs out of every 63.6 μs scan line (17.2%). In PAL, it occupies about 12 μs out of every 64 μs scan line (18.8%). Some modern monitors and video cards support reduced blanking, standardized with Coordinated Video Timings. In the PAL television standard, the blanking level corresponds to the black level, whilst other standards, most notably some variants of NTSC, may set the black level slightly above the blanking level on a pedestal or "setup level". HBlank effects Some graphics systems can count horizontal blanks and change how the display is generated during this blank time in the signal; this is called a raster effect, of which an example is raster bars. In video games, the horizontal blanking interval was used to create some notable effects. Some methods of parallax scrolling use a raster effect to simulate depth in consoles that do not natively support multiple background layers or do not support enough background layers to achieve the desired effect. One example of this is in the game Castlevania: Rondo of Blood, which was written for the PC Engine CD-ROM, which does not support multiple background layers. The Super Nintendo Entertainment System's Mode 7 uses the horizontal blanking interval to vary the scaling and rotation, per scan line, of one background layer to make the background appear to be a 3D plane. See also Nominal analogue blanking Vertical blanking interval References Video signal Television technology Television terminology
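The blanking percentages quoted above follow directly from the line period and the blanking duration of each standard. The sketch below recomputes them from the commonly cited timings (about 10.9 μs of a roughly 63.6 μs NTSC line, and about 12 μs of a 64 μs PAL line); it is a simple arithmetic check, not part of either standard's specification.

```python
# Fraction of each scan line occupied by horizontal blanking,
# using commonly cited line periods and blanking durations.

def blanking_fraction(blanking_us: float, line_us: float) -> float:
    """Percentage of the scan line spent in horizontal blanking."""
    return 100.0 * blanking_us / line_us

print(f"NTSC: {blanking_fraction(10.9, 63.555):.1f}% of each line")  # ~17.2%
print(f"PAL:  {blanking_fraction(12.05, 64.0):.1f}% of each line")   # ~18.8%
```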
Horizontal blanking interval
Technology
518
39,562,449
https://en.wikipedia.org/wiki/Crop%20desiccation
Pre-harvest crop desiccation is the application of an agent to a crop just before harvest to kill the leaves and/or plants so that the crop dries out from environmental conditions, or "dry-down", more quickly and evenly. Crop desiccants (not to be confused with chemical desiccants) include herbicides and defoliants, used to accelerate the natural drying of plant tissues. Desiccation of crops through the use of herbicides is practiced worldwide on a variety of food and non-food crops. Uses Crop desiccation can improve the efficiency and economics of mechanical harvesting. In grain crops such as wheat, barley and oats, uniformly dried crops do not have to be windrowed (swathed and dried) prior to harvest, but can easily be straight-cut and harvested. This saves the farmer time and money, which is important in northern regions where the growing season is short. In a non-food crop such as cotton, reliance on natural frost may be too late to be effective in some regions. Thus leaves that remain on the cotton plant will interfere with mechanical harvesters and stain the white cotton, resulting in a lower quality grade; herbicides which cause both defoliation and desiccation reduce these problems. Desiccation can improve the uniformity of a crop. It may correct for uneven crop growth, which is a problem in northern climates, during wet summers, or when weed control is poor. Plants that have naturally reached the end of their maturation may be mingled with plants in earlier stages of growth; controlled desiccation evens that out. This also increases uniformity of moisture content in grain, which has positive economic benefits in the storage of the grain and the price of the grain. Desiccation can enhance the ripening of a crop. With sugarcane, for example, glyphosate application increases sucrose concentration before harvest. With grains, for example, as a consequence of crop plant uniformity as noted above, grain ripeness can be made more uniform through the same process. Several additional advantages of desiccation have been cited: harvest can be conducted earlier; weed control is initiated for a future crop; earlier ripening allows for earlier replanting; desiccation reduces green material in the harvest, putting less strain on harvesting machinery. Examples of crops that may be subjected to pre-harvest desiccation include: Cereals/Grains such as barley, oats, rice, sorghum (millet), and wheat Flax Cotton Legumes including beans (common, fava, garbanzos, etc.), lentils, peas, and soybeans Maize (corn) Mustard Oilseed such as canola, linseed, rapeseed, safflower, sunflower, and soy Potato Sugarcane Sunflower Active agents In agricultural parlance, desiccation is divided into two distinct groups: "true desiccants" and pre-harvest systemic herbicides. True desiccants are not chemical desiccants; rather, they are contact herbicides which kill only the parts of the plant they touch. They induce plant death/defoliation rapidly, and dry-down occurs within a few days. True desiccants do not often provide good weed control because killing only the top growth may allow plants to begin re-growing again. In contrast, systemic herbicides are absorbed by foliage or roots and translocated to other parts of the plant. They poison metabolism throughout the plant, thus the process is slower, with die-off and dry-down taking up to a couple of weeks. "True desiccants" Most of these kinds of contact herbicides are cell membrane disruptors that are either "PPO inhibitors" or "Photosystem I inhibitors." 
Plant cells have chloroplasts, which contain the protoporphyrinogen oxidase (PPO) enzyme complex. PPO inhibitors poison that enzyme, causing a build up of Protoporphyrin IX (Proto). Normally Proto is present in very low amounts, but when there is too much Proto it interacts with light to form singlet oxygen radicals (1O2). These interact with the fatty acids of membranes, causing disruption of membrane integrity and leakage of cell contents. The plants soon begin to wilt and quickly dry out in the sun. Plants can burn within hours of exposure to these herbicides. In contrast, Photosystem I inhibitors such as diquat and paraquat work by entering plant cells and immediately diverting electrons away from photosynthetic chain, poisoning photosynthesis. In addition, hydroxyl radicals (•OH) are formed which interact with the fatty acids of membranes, causing disruption of membranes, leakage, plant wilting, and drying out in the sun. Contact herbicides used for desiccation include: carfentrazone-ethyl, cyclanilide, diquat, endothall, glufosinate, paraquat, pelargonic acid / ammonium nonanoate, pyraflufen-ethyl, saflufenacil, sodium chlorate, thidiazuron, and tribufos. The most common and widely used contact desiccant is diquat (Reglone). For potatoes, sulfuric acid is sometimes used as a non-herbicide chemical desiccating agent. Systemic desiccants (glyphosate) Glyphosate (Roundup) is the principal pre-harvest systemic herbicide used for desiccation of a wide variety of crops. As a systemic herbicide it is not a true desiccant as it can take weeks rather than days for the crop to die back and dry out after application. Glyphosate works by poisoning the shikimate pathway which is found in plants and microorganisms but not in animals. Specifically, it inhibits the EPSP synthase enzyme which is required for plants to make certain amino acids. Without these, metabolism in the plant collapses. In addition, shikimate accumulates in plant tissues and diverts energy and resources away from other processes, eventually killing the plant over a period of days to weeks. In the UK, glyphosate began to be applied to wheat crops in the 1980s to control perennial weeds such as common couch which was very effective and meant that sowing of the next crop could occur sooner. Use as a harvest aid in the UK increased after the introduction of strobilurin fungicides which prolong the longevity of the leaves, and by 2002, 12% of UK wheat crops were treated in this way. The timing of application is crucial as the moisture content of the grain must be below 30% for the yield of the crop to be unaffected and to minimize uptake of glyphosate by the grain. Yield may be affected and residues increased if applications are made to uneven fields in which some areas have a moisture content over 30%. Although used in weed-free and evenly maturing crops with the aim of reducing the grain moisture content more rapidly to hasten the harvest, there is little or no advantage in doing so. The application of glyphosate differs between regions and countries significantly. In North America, for example, its use on wheat crops is uncommon in the United States but more common in Canada which has a colder climate and shorter growing season. In the UK where summers are wet and crops may ripen unevenly 78% of oilseed rape is desiccated before harvest, but only 4% in Germany. Questions over practice Herbicide residue in food has been raised as a concern. 
Residue quantities are regulated by Codex Alimentarius of the Food and Agriculture Organization of the United Nations. In July 2013 Austria banned the use of pre-harvest glyphosate citing the precautionary principle. In April 2015 an oat buyer in Western Canada announced that it was refusing oats in which pre-harvest glyphosate had been used. Glyphosate was found in 5–15% of cereal crop samples tested in the UK between 2000 and 2004, although never exceeding the Maximum Residue Level of 20 mg/kg. A survey of British wheat in 2006-2008 found average levels of 0.05–0.22 mg/kg with maximum levels of 1.2 mg/kg. Evidence that traces of glyphosate used on crops could make its way into a final, processed food product was raised in 2016 by a German environmental group, Munich Environmental Institute (note that this example does not distinguish between specific crop desiccation use and general use). The group issued a report that stated glyphosate was detected in Germany's 14 most popular beers, ranging from 0.46 to 29.74 micrograms per liter, noting that the German government limit for glyphosate in drinking water is 0.1 microgram per liter. However, the German government's Federal Institute for Risk Assessment made an official comment that those levels did not pose a risk to consumer's health, emphasizing that “An adult would have to drink around 1,000 liters (264 U.S. gallons) of beer a day to ingest enough quantities to be harmful for health.” An industry group, Brauer-Bund Beer Association, asserted that the results weren't credible because of insufficient sampling and that its own monitoring system for malt never detected glyphosate levels above maximum permitted levels. Separately, in 2020 scientists from the Max Rubner-Institut published a study about correlations between trace levels of glyphosate in human urine with consumption of various food types. They found no relationship between traces of glyphosate and the consumption of beer. Neither did they find an association with bread, honey, mushroom, and soy products but there was a correlation with pulses (legumes). References External links Harvest Herbicides Agricultural practices
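As a rough illustration of the scale of the figures quoted above (my own back-of-envelope arithmetic, not a claim from the cited surveys), the highest reported wheat residue is a small fraction of the Maximum Residue Level, while the highest reported beer value is large relative to the drinking-water limit:

# Illustrative arithmetic using only the numbers quoted in the text above
max_wheat_residue = 1.2      # mg/kg, highest value in the 2006-2008 British wheat survey
mrl = 20.0                   # mg/kg, Maximum Residue Level cited above
print(max_wheat_residue / mrl)      # 0.06, i.e. about 6% of the permitted level

max_beer = 29.74             # micrograms per litre, highest reported beer value
water_limit = 0.1            # micrograms per litre, German drinking-water limit for glyphosate
print(max_beer / water_limit)       # roughly 297 times the drinking-water limit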
Crop desiccation
Biology
2,041
26,117,670
https://en.wikipedia.org/wiki/Illuminationism
Illuminationism (Persian حكمت اشراق hekmat-e eshrāq, Arabic: حكمة الإشراق ḥikmat al-ishrāq, both meaning "Wisdom of the Rising Light"), also known as Ishrāqiyyun or simply Ishrāqi (Persian اشراق, Arabic: الإشراق, lit. "Rising", as in "Shining of the Rising Sun") is a philosophical and mystical school of thought introduced by Shahab al-Din Suhrawardi (honorific: Shaikh al-ʿIshraq or Shaikh-i-Ishraq, both meaning "Master of Illumination") in the twelfth century, established with his Kitab Hikmat al-Ishraq (lit: "Book of the Wisdom of Illumination"), a fundamental text finished in 1186. Written with influence from Avicennism, Peripateticism, and Neoplatonism, the philosophy is nevertheless distinct as a novel and holistic addition to the history of Islamic philosophy. History While the Ilkhanate-Mongol Siege of Baghdad and the destruction of the House of Wisdom (Arabic: بيت الحكمة, romanized: Bayt al-Ḥikmah) effectively ended the Islamic Golden Age in 1258, it also paved the way for novel philosophical invention. Such an example is the work of philosopher Abu'l-Barakāt al-Baghdādī, specifically his Kitāb al-Muʿtabar ("The Book of What Has Been Established by Personal Reflection"); the book's challenges to the Aristotelian norm in Islamic philosophy along with al-Baghdādī's emphasis on "evident self-reflection" and his revival of the Platonic use of light as a metaphor for phenomena like inspiration all influenced the philosophy of Suhrawardi. The philosopher and logician Zayn al-Din Omar Savaji further inspired Suhrawardi with his foundational works on mathematics and his creativity in reconstructing the Organon; Savaji's two-part logic based on "expository propositions" (al-aqwāl al-šāreḥa) and "proof theory" (ḥojaj) served as the precursory model for Suhrawardi's own "Rules of Thought" (al-Żawābeṭ al-fekr). Among the three Islamic philosophers mentioned in Suhrawardi's work, al-Baghdādī and Savaji are two of them. Upon finishing his Kitab Hikmat al-Ishraq (lit: "Book of the Wisdom of Illumination"), the Persian philosopher Shahab al-Din Suhrawardi founded Illuminationism in 1186. The Persian and Islamic school draws on ancient Iranian philosophical disciplines, Avicennism (Ibn Sina's early Islamic philosophy), Neoplatonic thought (modified by Ibn Sina), and the original ideas of Suhrawardi. Key concepts In his Philosophy of Illumination, Suhrawardi argued that light operates at all levels and hierarchies of reality (PI, 97.7–98.11). Light produces immaterial and substantial lights, including immaterial intellects (angels), human and animal souls, and even 'dusky substances', such as bodies. Suhrawardi's metaphysics is based on two principles. The first is a form of the principle of sufficient reason. The second principle is Aristotle's principle that an actual infinity is impossible. Ishraq The essential meaning of ishrāq (Persian اشراق, Arabic: الإشراق) is "rising", specifically referring to the sunrise, though "illumination" is the more common translation. It has used both Arabic and Persian philosophical texts as means to signify the relation between the "apprehending subject" (al-mawżuʿ al-modrek) and the "apprehensible object" (al-modrak); beyond philosophical discourse, it is a term used in common discussion. Suhrawardi utilized the ordinariness of the word in order to encompass the all that is mystical along with an array of different kinds of knowledge, including elhām, meaning personal inspiration. 
Legacy None of Suhrawardi's works were translated into Latin, so he remained unknown in the Latin West, although his work continued to be studied in the Islamic East. According to Hosein Nasr, Suhrawardi was unknown to the west until he was translated to western languages by contemporary thinkers such as Henry Corbin, and he remains largely unknown even in countries within the Islamic world. Suhrawardi tried to present a new perspective on questions like those of existence. He not only caused peripatetic philosophers to confront such new questions, but also gave new life to the body of philosophy after Avicenna. According to John Walbridge, Suhrawardi's critiques of Peripatetic philosophy could be counted as an important turning point for his successors. Although Suhravardi was first a pioneer of Peripatetic philosophy, he later became a Platonist following a mystical experience. He is also counted as one who revived the ancient wisdom in Persia by his philosophy of illumination. His followers, such as Shahrzouri and Qutb al-Din al-Shirazi tried to continue the way of their teacher. Suhrewardi makes a distinction between two approaches in the philosophy of illumination: one approach is discursive and another is intuitive. Illuminationist thinkers in the School of Isfahan played a significant role in revitalizing academic life in the Safavid Empire under Shah Abbas I (1588–1629). Avicennan thought continued to inform philosophy during the reign of the Safavid Empire. Illuminationism was taught in Safavid Madrasas (Place of Study) established by pious shahs. Mulla Sadra Mulla Sadra (Ṣadr ad-Dīn Muḥammad Shīrāzī) was a 17th-century Iranian philosopher who was considered a master of illuminationism. He wrote a book titled al-Asfār al-Arbaʻah meaning 'the four journeys', referring to the soul's journey back to Allah. He developed his book into an entire school of thought; he did not refer to al-Asfār as a philosophy but as "wisdom." Sadra taught how one could be illuminated or given wisdom until becoming a sage. Al-Asfar was one piece of illuminationism which is still an active part of Islamic philosophy today. It was representative of Mulla Sadra's entire philosophical worldview. Like many important Arabian works it is difficult for the western world to understand because it has not been translated into English. Mulla Sadra eventually became the most significant teacher at the religious school known as Madrasa-yi Khan. His philosophies are still taught throughout the Islamic East and South Asia. Al-Asfar is Mulla Sadra's book explaining his view of illuminationism. He views problems starting with a Peripatetic sketch. This Aristotelian style of teaching is reminiscent of Islamic Golden Age Philosopher Avicenna. Mulla Sadra often refers to the Qur'an when dealing with philosophical problems. He quotes Qur'anic verses while explaining philosophy. He wrote exegeses of the Qur'an such as his explanation of Āyat al-Kursī. Asfār means journeys. In al-Asfar is a journey to gain wisdom. Mulla Sadra used philosophy as a set of spiritual exercises to become more wise. In Mulla Sadra's book The Transcendent Philosophy of the Four Journeys of the Intellect he describes the four journeys of A journey from creation to the Truth or Creator A journey from the Truth to the Truth A journey that stands in relation to the first journey because it is from the Truth to creation with the Truth A journey that stands in relation to the second journey because it is from the Truth to the creation. 
See also Divine illumination Divine light Perennialism Notes Further reading Persian philosophy History of logic Theories of deduction Iranian philosophy fr:Philosophie illuminative
Illuminationism
Mathematics
1,695
57,820,194
https://en.wikipedia.org/wiki/NGTS-3Ab
NGTS-3Ab is a gas giant exoplanet that orbits a G-type star. Its mass is 2.38 Jupiters, it takes 1.7 days to complete one orbit of its star, and it lies 0.023 AU from its star. Its discovery was announced in 2018. The Jupiter-like planet was discovered by a team of 39 astronomers, including Max Günther, Didier Queloz, Edward Gillen, Laetitia Delrez, and Francois Bouchy. Overview NGTS-3Ab was discovered in 2018 using the transit method. It is the only planet known to orbit NGTS-3A, a G6V-class star situated in the constellation Columba, about 2,480 light-years from the Sun. The exoplanet orbits its star in about 2 terrestrial days. The orbit lies closer to the star than the inner edge of the habitable zone. It has a low density and is likely composed largely of gas. It has a low Earth similarity index (0.06) and should be very different from our planet. Discovery The discovery of NGTS-3Ab, a hot Jupiter found orbiting a star in a still visually unresolved binary system, was announced in June 2018. The characterization of the exoplanet is based on data gathered with the Next Generation Transit Survey (NGTS), SPECULOOS, and HARPS, and enhanced by recent advances in the centroiding technique for NGTS. The planetary system NGTS-3A was detected by jointly modelling multi-colour photometry, centroids and radial velocities (RV). RV cross-correlation functions (CCFs) and correlations of the bisector inverse span (BIS) were simulated and studied in order to define the characteristics of the exoplanet NGTS-3Ab. See also NGTS-3 List of exoplanet firsts List of exoplanetary host stars List of exoplanets discovered using the Kepler spacecraft List of planets observed during Kepler's K2 mission References Further reading "NGTS-3Ab". exoplanetarchive.ipac.caltech.edu. "The Extrasolar Planet Encyclopaedia — NGTS-3Ab". exoplanet.eu. Transiting exoplanets Exoplanets discovered in 2018 Columba (constellation) Hot Jupiters
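As a rough consistency sketch (not taken from the discovery paper), Kepler's third law relates the quoted orbital period and separation to the host star's mass. The snippet below assumes a roughly solar-mass G-type host and standard constants; treat the result as an order-of-magnitude check only.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m
DAY = 86400.0          # seconds per day

def semi_major_axis(period_days, stellar_mass_solar=1.0):
    """Kepler's third law a^3 = G M P^2 / (4 pi^2), neglecting the planet's mass."""
    P = period_days * DAY
    M = stellar_mass_solar * M_sun
    a = (G * M * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    return a / AU

# For a ~1.7 day period around a roughly solar-mass star this gives ~0.03 AU,
# the same order as the ~0.023 AU separation quoted above.
print(semi_major_axis(1.7))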
NGTS-3Ab
Astronomy
495
16,844,845
https://en.wikipedia.org/wiki/Cobalt%28II%29%20sulfate
Cobalt(II) sulfate is any of the inorganic compounds with the formula CoSO4(H2O)x. Usually cobalt sulfate refers to the hexa- or heptahydrates CoSO4.6H2O or CoSO4.7H2O, respectively. The heptahydrate is a red solid that is soluble in water and methanol. Since cobalt(II) has an odd number of electrons, its salts are paramagnetic. Preparation and structure It forms by the reaction of metallic cobalt, its oxide, hydroxide, or carbonate with aqueous sulfuric acid: Co + H2SO4 → CoSO4 + H2; CoO + H2SO4 → CoSO4 + H2O; Co(OH)2 + H2SO4 → CoSO4 + 2 H2O; CoCO3 + H2SO4 → CoSO4 + H2O + CO2. The heptahydrate is only stable at humidities above 70% at room temperature; otherwise it converts to the hexahydrate. The hexahydrate converts to the monohydrate and the anhydrous forms at 100 and 250 °C, respectively. The hexahydrate is a metal aquo complex consisting of octahedral [Co(H2O)6]2+ ions associated with sulfate anions (see image in table). The monoclinic heptahydrate has also been characterized by X-ray crystallography. It also features [Co(H2O)6]2+ octahedra as well as one water of crystallization. Uses and reactions Cobalt sulfates are important intermediates in the extraction of cobalt from its ores. Thus, crushed, partially refined ores are treated with sulfuric acid to give red-colored solutions containing cobalt sulfate. Hydrated cobalt(II) sulfate is used in the preparation of pigments, as well as in the manufacture of other cobalt salts. Cobalt pigment is used in porcelains and glass. Cobalt(II) sulfate is used in storage batteries and electroplating baths, sympathetic inks, and as an additive to soils and animal feeds. For these purposes, the cobalt sulfate is produced by treating cobalt oxide with sulfuric acid. Being commonly available commercially, the heptahydrate is a routine source of cobalt in coordination chemistry. Natural occurrence Rarely, cobalt(II) sulfate is found in the form of a few crystallohydrate minerals, occurring in oxidation zones of primary Co minerals (such as skutterudite or cobaltite). These minerals are: bieberite (the heptahydrate), moorhouseite (Co,Ni,Mn)SO4.6H2O, aplowite (Co,Mn,Ni)SO4.4H2O and cobaltkieserite (the monohydrate). Health issues Cobalt is an essential mineral for mammals, but more than a few micrograms per day is harmful. Although poisonings have rarely resulted from cobalt compounds, their chronic ingestion has caused serious health problems at doses far less than the lethal dose. In 1965, the addition of a cobalt compound to stabilize beer foam in Canada led to a peculiar form of toxin-induced cardiomyopathy, which came to be known as beer drinker's cardiomyopathy. Furthermore, cobalt(II) sulfate is suspected of causing cancer (i.e., possibly carcinogenic, IARC Group 2B) as per the International Agency for Research on Cancer (IARC) Monographs. Related compounds The Tutton salt K2Co(SO4)2 References Cobalt(II) compounds Sulfates IARC Group 2B carcinogens Hydrates
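A small worked illustration of the stoichiometry above (my own sketch, using rounded atomic masses as stated assumptions) computes the molar masses of the anhydrous salt and the heptahydrate and the fraction of cobalt by mass:

# Approximate atomic masses in g/mol (rounded; values are assumptions for illustration)
masses = {"Co": 58.93, "S": 32.06, "O": 16.00, "H": 1.008}

def molar_mass(formula):
    """formula given as a dict of element -> atom count."""
    return sum(masses[el] * n for el, n in formula.items())

coso4 = {"Co": 1, "S": 1, "O": 4}
heptahydrate = {"Co": 1, "S": 1, "O": 4 + 7, "H": 14}   # CoSO4.7H2O

m_anhydrous = molar_mass(coso4)        # ~155 g/mol
m_hepta = molar_mass(heptahydrate)     # ~281 g/mol
print(m_anhydrous, m_hepta, masses["Co"] / m_hepta)   # cobalt is roughly 21% of the heptahydrate by mass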
Cobalt(II) sulfate
Chemistry
706
1,753,270
https://en.wikipedia.org/wiki/Self-assembled%20monolayer
Self-assembled monolayers (SAM) are assemblies of organic molecules that form spontaneously on surfaces by adsorption and organize themselves into more or less distinct domains (head group, chain/backbone, and tail/end group). In some cases, molecules that form the monolayer do not interact strongly with the substrate. This is the case for porphyrins on HOPG and two-dimensional supramolecular networks of PTCDA on gold. In other cases, the head group has a strong affinity for the substrate and anchors the molecule. Such an SAM consisting of a head group, chain (labeled "tail"), and functional end group is depicted in Figure 1. Common head groups include thiols, silanes, and phosphonates. SAMs are created by the chemisorption of head groups onto a substrate from either the vapor or liquid phase followed by a slower organization of "tail groups". Initially, at small molecular density on the surface, adsorbate molecules form either a disordered mass of molecules or an ordered two-dimensional "lying down phase". At higher molecular coverage, adsorbates can begin to form three-dimensional crystalline or semicrystalline structures on the substrate surface over a period of minutes to hours. The head groups assemble on the substrate, while the tail groups assemble far from the substrate. Areas of close-packed molecules nucleate and grow until the surface of the substrate is covered in a single monolayer. Adsorbate molecules adsorb readily because they lower the surface free-energy of the substrate and are stable due to the strong chemisorption of the head groups. These bonds create monolayers that are more stable than the physisorbed bonds of Langmuir–Blodgett films. For example, the trichlorosilane head group of an FDTS molecule reacts with a hydroxyl group on a substrate to form a very stable covalent bond [R-Si-O-substrate] with an energy of 452 kJ/mol. Thiol-metal bonds are on the order of 100 kJ/mol, making them fairly stable in a variety of temperatures, solvents, and potentials. Monolayers pack tightly due to van der Waals interactions, thereby reducing their own free energy. The adsorption can be described by the Langmuir adsorption isotherm if lateral interactions are neglected. If they cannot be neglected, the adsorption is better described by the Frumkin isotherm. Types Selecting the type of head group depends on the application of the SAM. Typically, head groups are connected to a molecular chain in which the terminal end can be functionalized (i.e. adding –OH, –NH2, –COOH, or –SH groups) to vary the wetting and interfacial properties. An appropriate substrate is chosen to react with the head group. Substrates can be planar surfaces, such as silicon and metals, or curved surfaces, such as nanoparticles. Alkanethiols are the most commonly used molecules for SAMs. Alkanethiols are molecules with an alkyl chain, (C-C)ⁿ chain, as the back bone, a tail group, and a S-H head group. Other types of interesting molecules include aromatic thiols, of interest in molecular electronics, in which the alkane chain is (partly) replaced by aromatic rings. An example is the dithiol 1,4-Benzenedimethanethiol (SHCH2C6H4CH2SH)). Interest in such dithiols stems from the possibility of linking the two sulfur ends to metallic contacts, which was first used in molecular conduction measurements. Thiols are frequently used on noble metal substrates because of the strong affinity of sulfur for these metals. The sulfur gold interaction is semi-covalent and has a strength of approximately 45 kcal/mol. 
In addition, gold is an inert and biocompatible material that is easy to acquire. It is also easy to pattern via lithography, a useful feature for applications in nanoelectromechanical systems (NEMS). Additionally, it can withstand harsh chemical cleaning treatments. Recently other chalcogenide SAMs: selenides and tellurides have attracted attention in a search for different bonding characteristics to substrates affecting the SAM characteristics and which could be of interest in some applications such as molecular electronics. Silanes are generally used on nonmetallic oxide surfaces; however monolayers formed from covalent bonds between silicon and carbon or oxygen cannot be considered self assembled because they do not form reversibly. Self-assembled monolayers of thiolates on noble metals are a special case because the metal-metal bonds become reversible after the formation of the thiolate-metal complex. This reversibility is what gives rise to vacancy islands and it is why SAMs of alkanethiolates can be thermally desorbed and undergo exchange with free thiols. Preparation Metal substrates for use in SAMs can be produced through physical vapor deposition techniques, electrodeposition or electroless deposition. Thiol or selenium SAMs produced by adsorption from solution are typically made by immersing a substrate into a dilute solution of alkane thiol in ethanol, though many different solvents can be used besides use of pure liquids. While SAMs are often allowed to form over 12 to 72 hours at room temperature, SAMs of alkanethiolates form within minutes. Special attention is essential in some cases, such as that of dithiol SAMs to avoid problems due to oxidation or photoinduced processes, which can affect terminal groups and lead to disorder and multilayer formation. In this case appropriate choice of solvents, their degassing by inert gasses and preparation in the absence of light is crucial and allows formation of "standing up" SAMs with free –SH groups. Self-assembled monolayers can also be adsorbed from the vapor phase. In some cases when obtaining an ordered assembly is difficult or when different density phases need to be obtained substitutional self-assembly is used. Here one first forms the SAM of a given type of molecules, which give rise to ordered assembly and then a second assembly phase is performed (e.g. by immersion into a different solution). This method has also been used to give information on relative binding strengths of SAMs with different head groups and more generally on self-assembly characteristics. Characterization The thicknesses of SAMs can be measured using ellipsometry and X-ray photoelectron spectroscopy (XPS), which also give information on interfacial properties. The order in the SAM and orientation of molecules can be probed by Near Edge Xray Absorption Fine Structure (NEXAFS) and Fourier Transform Infrared Spectroscopy in Reflection Absorption Infrared Spectroscopy (RAIRS) studies. Numerous other spectroscopic techniques are used such as Second-harmonic generation (SHG), Sum-frequency generation (SFG), Surface-enhanced Raman scattering (SERS), as well as High-resolution electron energy loss spectroscopy (HREELS). The structures of SAMs are commonly determined using scanning probe microscopy techniques such as atomic force microscopy (AFM) and scanning tunneling microscopy (STM). 
STM has been able to help understand the mechanisms of SAM formation as well as determine the important structural features that lend SAMs their integrity as surface-stable entities. In particular STM can image the shape, spatial distribution, terminal groups and their packing structure. AFM offers an equally powerful tool without the requirement of the SAM being conducting or semi-conducting. AFM has been used to determine chemical functionality, conductance, magnetic properties, surface charge, and frictional forces of SAMs. The scanning vibrating electrode technique (SVET) is a further scanning probe microscopy which has been used to characterize SAMs, with defect free SAMs showing homogeneous activity in SVET. More recently, however, diffractive methods have also been used. The structure can be used to characterize the kinetics and defects found on the monolayer surface. These techniques have also shown physical differences between SAMs with planar substrates and nanoparticle substrates. An alternative characterisation instrument for measuring the self-assembly in real time is dual polarisation interferometry where the refractive index, thickness, mass and birefringence of the self assembled layer are quantified at high resolution. Another method that can be used to measure the self-assembly in real-time is Quartz Crystal Microbalance with Dissipation monitoring technology where the mass and viscoelastic properties of the adlayer are quantified. Contact angle measurements can be used to determine the surface free-energy which reflects the average composition of the surface of the SAM and can be used to probe the kinetics and thermodynamics of the formation of SAMs. The kinetics of adsorption and temperature induced desorption as well as information on structure can also be obtained in real time by ion scattering techniques such as low energy ion scattering (LEIS) and time of flight direct recoil spectroscopy (TOFDRS). Defects Defects due to both external and intrinsic factors may appear. External factors include the cleanliness of the substrate, method of preparation, and purity of the adsorbates. SAMs intrinsically form defects due to the thermodynamics of formation, e.g. thiol SAMs on gold typically exhibit etch pits (monatomic vacancy islands) likely due to extraction of adatoms from the substrate and formation of adatom-adsorbate moieties. Recently, a new type of fluorosurfactants have found that can form nearly perfect monolayer on gold substrate due to the increase of mobility of gold surface atoms. Nanoparticle properties The structure of SAMs is also dependent on the curvature of the substrate. SAMs on nanoparticles, including colloids and nanocrystals, "stabilize the reactive surface of the particle and present organic functional groups at the particle-solvent interface". These organic functional groups are useful for applications, such as immunoassays or sensors, that are dependent on chemical composition of the surface. Kinetics There is evidence that SAM formation occurs in two steps: an initial fast step of adsorption and a second slower step of monolayer organization. Adsorption occurs at the liquid–liquid, liquid–vapor, and liquid-solid interfaces. The transport of molecules to the surface occurs due to a combination of diffusion and convective transport. According to the Langmuir or Avrami kinetic model the rate of deposition onto the surface is proportional to the free space of the surface. 
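In its standard Langmuir form this rate law can be written dθ/dt = k(1 − θ), which integrates to θ(t) = 1 − e^(−kt).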
Here θ is the proportional amount of area deposited and k is the rate constant. Although this model is robust it is only used for approximations because it fails to take into account intermediate processes. Dual polarisation interferometry, being a real-time technique with ~10 Hz resolution, can measure the kinetics of monolayer self-assembly directly. Once the molecules are at the surface the self-organization occurs in three phases: 1. A low-density phase with random dispersion of molecules on the surface. 2. An intermediate-density phase with conformationally disordered molecules or molecules lying flat on the surface. 3. A high-density phase with close-packed order and molecules standing normal to the substrate's surface. The phase transitions in which a SAM forms depend on the temperature of the environment relative to the triple point temperature, the temperature at which the tip of the low-density phase intersects with the intermediate-phase region. At temperatures below the triple point the growth goes from phase 1 to phase 2, where many islands form with the final SAM structure but are surrounded by random molecules. Similar to nucleation in metals, as these islands grow larger they intersect, forming boundaries, until they end up in phase 3, as seen below. At temperatures above the triple point the growth is more complex and can take two paths. In the first path the heads of the SAM organize to their near-final locations with the tail groups loosely formed on top. Then, as they transit to phase 3, the tail groups become ordered and straighten out. In the second path the molecules start in a lying-down position along the surface. These then form into islands of ordered SAMs, where they grow into phase 3, as seen below. The manner in which the tail groups organize themselves into a straight ordered monolayer depends on the inter-molecular attraction, or van der Waals forces, between the tail groups. To minimize the free energy of the organic layer the molecules adopt conformations that allow a high degree of van der Waals interaction with some hydrogen bonding. The small size of the SAM molecules is important here because van der Waals forces arise from the dipoles of molecules and are thus much weaker than the surrounding surface forces at larger scales. The assembly process begins with a small group of molecules, usually two, getting close enough that the van der Waals forces overcome the surrounding force. The forces between the molecules orient them so they are in their straight, optimal configuration. Then as other molecules come close by they interact with these already organized molecules in the same fashion and become part of the conformed group. When this occurs across a large area the molecules support each other into forming their SAM shape seen in Figure 1. The orientation of the molecules can be described with two parameters: α and β. α is the angle of tilt of the backbone from the surface normal. In typical applications α varies from 0 to 60 degrees depending on the substrate and type of SAM molecule. β is the angle of rotation along the long axis of the molecule. β is usually between 30 and 40 degrees. In some cases the existence of kinetic traps hindering the final ordered orientation has been pointed out. Thus, in the case of dithiols, formation of a "lying down" phase was considered an impediment to formation of the "standing up" phase; however, various recent studies indicate this is not the case.
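A minimal numerical sketch of this integrated Langmuir-type behaviour is shown below; the rate constant is arbitrary and illustrative only, not a measured value for any particular adsorbate.

import math

def langmuir_coverage(t, k):
    """Integrated Langmuir-type rate law dθ/dt = k(1 − θ): θ(t) = 1 − exp(−k t)."""
    return 1.0 - math.exp(-k * t)

# Illustrative rate constant (assumed): k = 0.5 per minute
for t in (1, 5, 10, 30):
    print(t, round(langmuir_coverage(t, 0.5), 3))   # coverage approaches 1 as t grows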
Many of the SAM properties, such as thickness, are determined in the first few minutes. However, it may take hours for defects to be eliminated via annealing and for final SAM properties to be determined. The exact kinetics of SAM formation depends on the adsorbate, solvent and substrate properties. In general, however, the kinetics are dependent on both preparations conditions and material properties of the solvent, adsorbate and substrate. Specifically, kinetics for adsorption from a liquid solution are dependent on: Temperature – room-temperature preparation improves kinetics and reduces defects. Concentration of adsorbate in the solution – low concentrations require longer immersion times and often create highly crystalline domains. Purity of the adsorbate – impurities can affect the final physical properties of the SAM Dirt or contamination on the substrate – imperfections can cause defects in the SAM The final structure of the SAM is also dependent on the chain length and the structure of both the adsorbate and the substrate. Steric hindrance and metal substrate properties, for example, can affect the packing density of the film, while chain length affects SAM thickness. Longer chain length also increases the thermodynamic stability. Patterning 1. Locally attract This first strategy involves locally depositing self-assembled monolayers on the surface only where the nanostructure will later be located. This strategy is advantageous because it involves high throughput methods that generally involve fewer steps than the other two strategies. The major techniques that use this strategy are: Micro-contact printing Micro-contact printing or soft lithography is analogous to printing ink with a rubber stamp. The SAM molecules are inked onto a pre-shaped elastomeric stamp with a solvent and transferred to the substrate surface by stamping. The SAM solution is applied to the entire stamp but only areas that make contact with the surface allow transfer of the SAMs. The transfer of the SAMs is a complex diffusion process that depends on the type of molecule, concentration, duration of contact, and pressure applied. Typical stamps use PDMS because its elastomeric properties, E = 1.8 MPa, allow it to fit the contour of micro surfaces and its low surface energy, γ = 21.6 dyn/cm². This is a parallel process and can thus place nanoscale objects over a large area in a short time. Dip-pen nanolithography Dip-pen nanolithography is a process that uses an atomic force microscope to transfer molecules on the tip to a substrate. Initially the tip is dipped into a reservoir with an ink. The ink on the tip evaporates and leaves the desired molecules attached to the tip. When the tip is brought into contact with the surface a water meniscus forms between the tip and the surface resulting in the diffusion of molecules from the tip to the surface. These tips can have radii in the tens of nanometers, and thus SAM molecules can be very precisely deposited onto a specific location of the surface. This process was discovered by Chad Mirkin and co-workers at Northwestern University. 2. Locally remove The locally remove strategy begins with covering the entire surface with a SAM. Then individual SAM molecules are removed from locations where the deposition of nanostructures is not desired. The result is the same as in the locally attract strategy, the difference being in the way this is achieved. 
The major techniques that use this strategy are: Scanning tunneling microscope The scanning tunneling microscope can remove SAM molecules in many different ways. The first is to remove them mechanically by dragging the tip across the substrate surface. This is not the most desired technique as these tips are expensive and dragging them causes a lot of wear and reduction of the tip quality. The second way is to degrade or desorb the SAM molecules by shooting them with an electron beam. The scanning tunneling microscope can also remove SAMs by field desorption and field enhanced surface diffusion. Atomic force microscope The most common use of this technique is to remove the SAM molecules in a process called shaving, where the atomic force microscope tip is dragged along the surface mechanically removing the molecules. An atomic force microscope can also remove SAM molecules by local oxidation nanolithography. Ultraviolet irradiation In this process, UV light is projected onto the surface with a SAM through a pattern of apertures in a chromium film. This leads to photo oxidation of the SAM molecules. These can then be washed away in a polar solvent. This process has 100 nm resolutions and requires exposure time of 15–20 minutes. 3. Modify tail groups The final strategy focuses not on the deposition or removal of SAMS, but the modification of terminal groups. In the first case the terminal group can be modified to remove functionality so that SAM molecule will be inert. In the same regards the terminal group can be modified to add functionality so it can accept different materials or have different properties than the original SAM terminal group. The major techniques that use this strategy are: Focused electron beam and ultraviolet irradiation Exposure to electron beams and UV light changes the terminal group chemistry. Some of the changes that can occur include the cleavage of bonds, the forming of double carbon bonds, cross-linking of adjacent molecules, fragmentation of molecules, and confromational disorder. Atomic force microscope A conductive AFM tip can create an electrochemical reaction that can change the terminal group. Applications Thin-film SAMs SAMs are an inexpensive and versatile surface coating for applications including control of wetting and adhesion, chemical resistance, bio compatibility, sensitization, and molecular recognition for sensors and nano fabrication. Areas of application for SAMs include biology, electrochemistry and electronics, nanoelectromechanical systems (NEMS) and microelectromechanical systems (MEMS), and everyday household goods. SAMs can serve as models for studying membrane properties of cells and organelles and cell attachment on surfaces. SAMs can also be used to modify the surface properties of electrodes for electrochemistry, general electronics, and various NEMS and MEMS. For example, the properties of SAMs can be used to control electron transfer in electrochemistry. They can serve to protect metals from harsh chemicals and etchants. SAMs can also reduce sticking of NEMS and MEMS components in humid environments. In the same way, SAMs can alter the properties of glass. A common household product, Rain-X, utilizes SAMs to create a hydrophobic monolayer on car windshields to keep them clear of rain. Another application is an anti-adhesion coating on nanoimprint lithography (NIL) tools and stamps. One can also coat injection molding tools for polymer replication with a Perfluordecyltrichlorosilane SAM. 
Thin film SAMs can also be placed on nanostructures. In this way they functionalize the nanostructure. This is advantageous because the nanostructure can now selectively attach itself to other molecules or SAMs. This technique is useful in biosensors or other MEMS devices that need to separate one type of molecule from its environment. One example is the use of magnetic nanoparticles to remove a fungus from a blood stream. The nanoparticle is coated with a SAM that binds to the fungus. As the contaminated blood is filtered through a MEMS device the magnetic nanoparticles are inserted into the blood where they bind to the fungus and are then magnetically driven out of the blood stream into a nearby laminar waste stream. Patterned SAMs Photolithographic methods are useful in patterning SAMs. SAMs are also useful in depositing nanostructures, because each adsorbate molecule can be tailored to attract two different materials. Current techniques utilize the head to attract to a surface, like a plate of gold. The terminal group is then modified to attract a specific material like a particular nanoparticle, wire, ribbon, or other nanostructure. In this way, wherever the SAM is patterned to a surface there will be nanostructures attached to the tail groups. One example is the use of two types of SAMs to align single wall carbon nanotubes, SWNTs. Dip pen nanolithography was used to pattern a 16-mercaptohexadecanoic acid (MHA)SAM and the rest of the surface was passivated with 1-octadecanethiol (ODT) SAM. The polar solvent that is carrying the SWNTs is attracted to the hydrophilic MHA; as the solvent evaporates, the SWNTs are close enough to the MHA SAM to attach to it due to Van der Waals forces. The nanotubes thus line up with the MHA-ODT boundary. Using this technique Chad Mirkin, Schatz and their co-workers were able to make complex two-dimensional shapes, a representation of a shape created is shown to the right. Another application of patterned SAMs is the functionalization of biosensors. The tail groups can be modified so they have an affinity for cells, proteins, or molecules. The SAM can then be placed onto a biosensor so that binding of these molecules can be detected. The ability to pattern these SAMs allows them to be placed in configurations that increase sensitivity and do not damage or interfere with other components of the biosensor. Metal organic superlattices There has been considerable interest in use of SAMs for new materials e.g. via formation of two- or three-dimensional metal organic superlattices by assembly of SAM capped nanoparticles or layer by layer SAM-nanoparticle arrays using dithiols. A detailed review on this subject using dithiols is given by Hamoudi and Esaulov References Further reading I. Rubinstein, E. Sabatani, R. Maoz and J. Sagiv, Organized Monolayers on Gold Electrodes, in Electrochemical Sensors for Biomedical Applications, C.K.N. Li (Ed.), The Electrochemical Society 1986: 175. Sigma-Aldrich "Material Matters", Molecular Self-Assembly Nanotechnology Thin films Supramolecular chemistry Self-organization
Self-assembled monolayer
Chemistry,Materials_science,Mathematics,Engineering
4,996
9,451,015
https://en.wikipedia.org/wiki/Expo%20Mimio
EXPO mimio is a brand name of computer whiteboard capture devices marketed by Sanford Brands. EXPO mimio devices allow users to digitally capture whiteboard images and text. The devices link a physical whiteboard to software-created whiteboards, such as in NetMeeting, and can also be used to control desktop applications and documents directly from a whiteboard when used with a projector and computer. On October 4, 2006, Newell Rubbermaid acquired the mimio interactive whiteboard (iWB) product line. The mimio line has become part of the Sanford Brands portfolio of products. Models In production: EXPO mimio Interactive EXPO mimio Xi EXPO mimio Board EXPO mimio wireless EXPO mimio studio (win) EXPO mimio Mac EXPO mimio writingRecognition (win) EXPO mimio screenRecorder (win) Computer peripherals
Expo Mimio
Technology
172
15,023,595
https://en.wikipedia.org/wiki/C166%20family
The C166 family is a 16-bit microcontroller architecture from Infineon (formerly the semiconductor division of Siemens) in cooperation with STMicroelectronics. It was first released in 1990 and is a controller for measurement and control tasks. It uses the well-established RISC architecture, but features some microcontroller-specific extensions such as bit-addressable memory and an interrupt system optimized for low-latency. When this architecture was introduced the main focus was to replace 8051 controllers (from Intel). Opcode-compatible successors of the C166 family are the C167 family, XC167 family, the XE2000 family and the XE166 family. As of 2017, microcontrollers using the C166 architecture are still being manufactured by NIIET in Voronezh, Russia, as part of the 1887 series of integrated circuits. This includes a radiation-hardened device under the designation 1887VE6T (). C167 / ST10 family The Siemens/Infineon C167 family or STMicroelectronics ST10 family is a further development of the C166 family. It has improved addressing modes and support for "atomic" instructions. Variants include, for example, Controller Area Network (CAN bus). C167 architecture is used predominantly on German and German-owned automobile marques as well as certain models from Renault, Dacia, Peugeot, Citroen, Hyundai, Kia etc. See also (CCU, CAPCOM) Peripheral Event Controller (PEC) References Digital electronics Microcontrollers
C166 family
Engineering
333
1,423,860
https://en.wikipedia.org/wiki/Pulaski%20%28tool%29
The Pulaski is a specialty hand tool used in fighting fires, particularly wildfires, which combines an axe and an adze in one head. Similar to a cutter mattock, it has a rigid handle of wood, plastic, or fiberglass. The Pulaski was developed for constructing firebreaks, able to both dig soil and chop wood. It is also well adapted for trail construction, and can be used for gardening and other outdoor work for general excavation and digging holes in root-bound or hard soil. The invention of the Pulaski is credited to Ed Pulaski, an assistant ranger with the United States Forest Service in 1911. Similar tools were introduced in 1876 by the Collins Tool Company. A tool that serves the same purpose was used in the Alps for over 300 years for planting trees (Wiedehopfhaue) or the dolabra in ancient Rome. Pulaski was famous for taking action to save the lives of a crew of 45 firefighters during the disastrous August 1910 wildfires in Idaho. His invention (or reinvention) of a combination axe and adze may have been a result of the disaster, as he saw the need for better firefighting tools. Pulaski further refined the tool by 1913, and it came into use in the Rocky Mountain region. In 1920 the Forest Service began contracting for the tool to be commercially manufactured but its use remained regional until the tool became a national standard in the 1930s. An initialed ("E.P.") tool, which purportedly belonged to Pulaski himself, is part of the collection of the Smithsonian Institution at the Wallace District Mining Museum in Wallace, Idaho. See also Driptorch Fire flapper (tool) Fire rake Flare Halligan bar McLeod (tool) Pickaroon References External links Firefighter tools Forestry tools Mechanical hand tools Wildfire suppression equipment
Pulaski (tool)
Physics
378
2,786,447
https://en.wikipedia.org/wiki/Monomial%20representation
In the mathematical fields of representation theory and group theory, a linear representation ρ (rho) of a group G is a monomial representation if there is a finite-index subgroup H of G and a one-dimensional linear representation λ of H, such that ρ is equivalent to the induced representation Ind_H^G λ. Alternatively, one may define it as a representation whose image is in the monomial matrices. Here for example G and H may be finite groups, so that the induced representation has a classical sense. The monomial representation is only a little more complicated than the permutation representation of G on the cosets of H. It is necessary only to keep track of scalars coming from λ applied to elements of H. Definition To define the monomial representation, we first need to introduce the notion of monomial space. A monomial space is a triple (V, X, (V_x)_{x in X}) where V is a finite-dimensional complex vector space, X is a finite set and (V_x)_{x in X} is a family of one-dimensional subspaces of V such that V is the direct sum of the V_x. Now let G be a group; the monomial representation of G on V is a group homomorphism ρ: G → GL(V) such that for every element g in G, ρ(g) permutes the V_x's; this means that ρ induces an action by permutation of G on X. References Representation theory of groups
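As a small worked illustration (my own example, not drawn from the source): take G = Z/4 with generator g, H = {e, g²} its index-2 subgroup, and λ the one-dimensional representation with λ(g²) = −1. In the basis indexed by the two cosets H and gH, the induced representation sends g to a monomial matrix, as the sketch below checks numerically.

import numpy as np

# Induced representation of Z/4 from the sign character of its index-2 subgroup.
# Basis vectors are indexed by the cosets H and gH; rho(g) permutes them and
# picks up the scalar lambda(g^2) = -1 when g carries gH back to H.
rho_g = np.array([[0, -1],
                  [1,  0]])

print(np.linalg.matrix_power(rho_g, 4))   # identity: rho(g) has order 4, as expected

# Every power has exactly one nonzero entry per row and per column,
# i.e. the image consists of monomial matrices.
for k in range(1, 5):
    M = np.linalg.matrix_power(rho_g, k)
    assert all((M[i] != 0).sum() == 1 for i in range(2))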
Monomial representation
Mathematics
243
61,751,564
https://en.wikipedia.org/wiki/Aspergillus%20carneus
Aspergillus carneus is a fast-growing, filamentous fungus found on detritus and in fertile soil worldwide. It is characterized by its yellow, thick-walled hyphae and biseriate sterigmata. The fungus produces citrinin and 5 unique depsipeptides, Aspergillicins A-E. History and taxonomy The fungus was originally isolated by van Tieghem in 1877 from a soil sample in Java, where it was named Sterigmatocystis carnea. The epithet carnea was derived from the Latin meaning "flesh coloured". The fungus was later described by Sartory, Sartory and Meyer in 1930 under the probable synonym Sterigmatocystis albo-rosea, as they erroneously believed it to be a new species. However, the genus Sterigmatocystis is now obsolete. In 1933, the fungus was renamed Aspergillus carneus by Blochwitz, because he believed that van Tieghem's initial description was inadequate. Thus, although some authors have erroneously attributed it to van Tieghem, the epithet carneus that is currently in use was originally coined by Blochwitz. The classification of A. carneus in section Terrei or section Flavipedes (2 closely related groups belonging to the Aspergillus subgenus Terrei) has been contested. Classifications were originally based on morphological examination. A. carneus was first placed in section Terrei, along with Aspergillus terreus and Aspergillus nivea, by Thom and Raper (1945) due to its transparent aleurioconidia. In 1965, A. carneus was reclassified to Aspergillus section Flavipedes, along with A. flavipes and A. nivea, by Raper and Fennell because its conidial heads were less columnar than those of A. terreus and its unique thick-walled, yellow hyphae. This taxonomic debate was addressed using modern molecular genetic techniques as they became available. In 2000, sequencing of the D1/D2 region of the 28S ribosomal RNA gene indicated that Thom and Raper's original classification (1945) of A. carneus in Aspergillus section Terrei was likely correct. However, Varga et al. (2005) re-examined 3 isolates of A. carneus from the USA and Haiti by sequencing the fungus' internal transcribed spacer gene region and conducting a random amplified polymorphic DNA analysis. Their work revealed that A. flavipes, A. terreus and A. carneus species were equally related, rendering the distinction between sections Terrei and Flavipedes obsolete. They recommended that both sections be merged, pending further confirmatory genetic analyses. However, as of 2016, A. carneus continues to be considered a member of section Terrei, along with 15 other members of the genus Aspergillus. Growth and physiology Growth of A. carneus is moderate from 24–26 °C and optimal from 41–42 °C. The fungus cannot grow at temperatures below 6–7 °C or above 46–48 °C. Colonies of A. carneus grow rapidly, reaching a diameter of 4–5 cm in 2 weeks on Czapek medium. Normal growth also occurs on media containing 1% dextrane, and growth is resistant to low water potentials and high salt concentrations. Growth is impaired on malt extract agar and media which are deficient of important fungal macronutrients. On media lacking sulphur, potassium or magnesium, growth is halved. Growth is negligible but present on nitrogen- or phosphorus-deficient media. This observation of mild tolerance to nitrogen or phosphorus deficiency is attributed either to contamination of the media or the ability of A. carneus to utilize atmospheric sources of these nutrients for growth. The fungus is also resistant to heavy metal toxicity. 
Growth is inhibited but present after inoculation with cobalt, lead, nickel, zinc or cadmium, in concentrations ranging from 100–300 mg/L. No growth occurs at heavy metal concentrations exceeding 500 mg/L. Cobalt and cadmium are most toxic to A. carneus, while lead affects growth the least. A. carneus is known to produce citrinin, a secondary metabolite and mycotoxin characterized by its hepatotoxicity, nephrotoxicity and cytotoxicity. Sclerin, a compound which stimulates plant growth, and dihydrocitrinone, a metabolite of citrinin, have also been isolated from samples of the fungus. Additionally, A. carneus may be characterized by its production of novel secondary metabolites carneamides A-C, carnequinazolines A-C and aryl C-glycoside. Morphology On Czapek's agar, A. carneus colonies appear white at the beginning of development, progressing to variable shades of deep red, brownish red or yellow with age. The fungus produces conidiophores (250–400 μm) which are smooth and brown, yellow or colourless. Conidia are smooth, unpigmented and approximately spherical, with a diameter of 2.4–2.8 μm. Conidial heads are columnar in shape (150–200 μm x 25–35 μm) and initially white, appearing pale pink to brown in older cultures. Vesicles are globose, subglobose or hemispherical. Sterigmata are biseriate, while hyphae are characteristically thick-walled. Irregular hyphal branching may occur. Exudate may be absent or present in brown droplets with a strong odour. The morphology of A. carneus varies based on its growth medium or ecological habitat, presenting a challenge for identification. Grown on malt extract agar, A. carneus exhibits increased sporing, darker pigmentation and larger conidial heads. Otherwise, its morphology is consistent with the description above. A unique strain of A. carneus lacking its unique yellow, thick-walled hyphae was isolated in Arkansas. Its appearance was pale grey-brown. Additionally, a culture of A. carneus derived from estuarine sediment in Tasmania was characterized by a brown mycelium, indicating that morphological strain divergence may have occurred in marine environments. Habitat and ecology A. carneus is primarily a soil fungus, preferentially colonizing tropical and subtropical terrestrial environments. It is also found worldwide. The fungus inhabits alkaline, fertile soils and decomposing vegetation. A. carneus has been isolated from soils in Asia, Hawaii, north Africa, South America, Kuwait and southern Europe. The fungus colonizes a range of habitats, including podzolic forests, teak forests and mangrove swamps. It has also been found in forest nurseries in North America and eastern Europe. Rarely, it has been reported to grow on wheat and wild bees. A. carneus has also been isolated from the mycobiotia of the marine algae Laminaria sachalinensis in Russia and from estuarine sediment in Australia, demonstrating its potential to colonize aquatic organisms and environments. Industrial and medical applications A. carneus contributes to both medicine and industry, often simultaneously. The fungus produces a unique alkaline lipase (Aspergillus carneus lipase) with high pH and temperature tolerance, 1,3-regioselectivity, stability in organic solvents and esterification and transesterification properties. The lipase hydrolyzes a variety of oils and triglycerides, most notably sunflower oil. It is also extracellular, which improves yield during purification, and is resistant to inhibition by sodium propionate, a common food preservative. 
No other known fungal lipase exhibits this precise combination of abilities, making it a promising catalyst for industrial processes including synthesis of pharmaceuticals, agricultural chemicals, dairy products and detergents. Media containing glucose, sunflower oil, nitrogen and phosphorus may be used to maximize lipase yield from A. carneus. The Aspergillus carneus lipase is also capable of producing synthetic plant polyphenolics through deacetylation, compounds which may protect against oxidative damage in human neurodegenerative disease. A. carneus also produces cystathionine-γ-lyase (CGL), an enzyme which catalyzes the breakdown of L-cystathionine, a human cysteine synthesis intermediate. Excess L-cystathionine due to CGL deficiency (cystathioninuria) is associated with cardiovascular disease, diabetes and cystic fibrosis. Thus, fungal CGL may have therapeutic potential as a human CGL substitute. The activity of CGL isolated from A. carneus is maximized at 40 °C and slightly basic pH (8–9). It is also non-toxic and correlated with decreased blood concentrations of L-cystathionine in rabbits, a preliminary indicator of safety and therapeutic effectiveness in vivo. A. carneus secretes a low molecular weight xylanase which hydrolyzes heteroxylan, a component of the plant cell wall. The activity of A. carneus xylanase is stable over a broad pH range (3–10), but is optimized at acidic pH and 60 °C. The enzyme is highly specific to low-cost agricultural waste products, particularly corn cobs and coba husks, which it can degrade into xylooligosaccharides. Xylooligosaccharides may be used as food additives, components of animal feed and prebiotics. A. carneus also produces a thermostable pectinase, which can be used to degrade orange peel and pulp waste, notably in the Egyptian orange juice industry. A. carneus produces the known fungal metabolite marcfortine A, as well as 5 novel depsipeptides, aspergillicins A–E. Marcfortine A is a paralytic, nematocidal agent which is also active against the commercially relevant ruminant parasite Haemonchus contortus. The aspergillicins exhibit mild cytotoxic activity. A. carneus produces other cytotoxic compounds, including sterigmatocystin, isopropylchaetominine and asteltoxin E, which are potently active against the mouse lymphoma cell line L5178Y and may have therapeutic potential as anti-cancer agents. Role in disease Human disease A. carneus is an infrequent human pathogen. However, 2 out of 9 cases of disseminated aspergillosis in a cohort of Czech patients, which had originally been attributed to Aspergillus candidus, were found to have instead been caused by A. carneus. The presence of A. carneus in clinical isolates from these patients was confirmed by sequencing of the β-tubulin and calmodulin genes, in addition to the internal transcribed spacer (ITS) region of the ribosomal DNA. A. carneus has also been implicated in a case of appendicitis in a 6-year-old Romanian boy with acute myeloid leukemia and neutropenia. The patient's resected appendix was encircled by hyphae, which had penetrated blood vessels inside the tissue. The causative agent of the infection was confirmed to be A. carneus using genomic sequencing of the ITS region. The patient was successfully treated with an appendectomy procedure and the triazole antifungal voriconazole, followed by fluconazole treatment until his neutropenia had resolved. The fungus was also susceptible to the anti-fungal drugs amphotericin B, itraconazole and posaconazole. A.
carneus has also been implicated in 2 cases of human lung aspergillosis in immunocompromised patients. Fragments of A. carneus hyphae and aleuriospores were identified in one patient's sputum at autopsy by morphological examination. Animal disease Rarely, the fungus has also been recognized as a contributor to animal disease. Grains and legumes colonized by A. carneus are toxic to ducklings. Additionally, wild-type mice injected with A. carneus conidia (10^5) develop cerebral aspergillosis and ataxia after 2–10 days. Inoculation with corticosterone (10 mg) decreases the threshold for neurological symptoms to appear (to 10^4 conidia), indicating that immune suppression may increase vulnerability to A. carneus infection. References carneus Fungi described in 1933 Fungus species
Aspergillus carneus
Biology
2,698
3,005,139
https://en.wikipedia.org/wiki/Cauchy%27s%20equation
In optics, Cauchy's transmission equation is an empirical relationship between the refractive index and wavelength of light for a particular transparent material. It is named for the mathematician Augustin-Louis Cauchy, who originally defined it in 1830 in his article "The refraction and reflection of light". The equation The most general form of Cauchy's equation is n(λ) = A + B/λ² + C/λ⁴ + ⋯, where n is the refractive index, λ is the wavelength, and A, B, C, etc., are coefficients that can be determined for a material by fitting the equation to measured refractive indices at known wavelengths. The coefficients are usually quoted for λ as the vacuum wavelength in micrometres. Usually, it is sufficient to use a two-term form of the equation, n(λ) = A + B/λ², where the coefficients A and B are determined specifically for this form of the equation. A table of coefficients for common optical materials is shown below: The theory of light-matter interaction on which Cauchy based this equation was later found to be incorrect. In particular, the equation is only valid for regions of normal dispersion in the visible wavelength region. In the infrared, the equation becomes inaccurate, and it cannot represent regions of anomalous dispersion. Despite this, its mathematical simplicity makes it useful in some applications. The Sellmeier equation is a later development of Cauchy's work that handles anomalously dispersive regions, and more accurately models a material's refractive index across the ultraviolet, visible, and infrared spectrum. Humidity dependence for air Cauchy's two-term equation for air, expanded by Lorentz to account for humidity, is as follows: where p is the air pressure in millibar, T is the temperature in kelvin, and v is the vapor pressure of water in millibar. See also Sellmeier equation References F.A. Jenkins and H.E. White, Fundamentals of Optics, 4th ed., McGraw-Hill, Inc. (1981). Augustin-Louis Cauchy Optics Electric and magnetic fields in matter Eponymous equations of physics
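As a rough illustration of the two-term form described above, the following sketch evaluates n(λ) = A + B/λ² at a few visible wavelengths. The coefficient values are assumptions chosen purely for demonstration (roughly those often quoted for a common crown glass) and are not taken from this article.

```typescript
// Two-term Cauchy equation: n(λ) = A + B / λ², with λ the vacuum wavelength in micrometres.
function cauchyIndex(lambdaUm: number, A: number, B: number): number {
  return A + B / (lambdaUm * lambdaUm);
}

const A = 1.5046;   // dimensionless (assumed example value for a crown glass)
const B = 0.0042;   // in µm² (assumed example value)

for (const lambda of [0.4861, 0.5893, 0.6563]) {   // blue, yellow and red wavelengths in µm
  console.log(`n(${lambda} µm) ≈ ${cauchyIndex(lambda, A, B).toFixed(4)}`);
}
```

With these assumed coefficients the index rises toward shorter wavelengths, the normal-dispersion behaviour for which the equation is valid.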
Cauchy's equation
Physics,Chemistry,Materials_science,Engineering
422
67,352,502
https://en.wikipedia.org/wiki/Nano%20tape
Nano tape, also called gecko tape, is a synthetic adhesive tape consisting of arrays of carbon nanotubes transferred onto a backing material of flexible polymer tape. These arrays are called synthetic setae and mimic the nanostructures found on the toes of a gecko; this is an example of biomimicry. The adhesion is achieved not with chemical adhesives, but via van der Waals forces, which are weak electric forces generated between two atoms or molecules that are very close to each other. Explanation Geckos show a remarkable ability to climb smooth vertical surfaces at high speeds, exhibiting both strong attachment and easy rapid removal, or shear adhesion, of their feet. On a gecko's foot, micrometer-sized elastic hairs called setae are split into nanometer-sized structures called spatulas. The shear adhesion is achieved by forming and breaking van der Waals forces between these microscopic structures and the substrate. Nano tapes mimic these structures with carbon nanotube bundles, which simulate setae, and individual nanotubes, which simulate spatulas, to achieve macroscopic shear adhesion and to translate the weak van der Waals interactions into high shear forces. The shear adhesion allows the tape to be easily peeled off in the manner a gecko lifts its foot. Since the carbon nanotube arrays leave no residue on the substrate, the tape can be reused many times. History Nano tape is one of the first developments of synthetic setae, which arose from a collaboration between the Manchester Centre for Mesoscience and Nanotechnology, and the Institute for Microelectronics Technology in Russia. Work started in 2001 and two years later results were published in Nature Materials. The group prepared flexible fibers of polyimide as the synthetic setae structures on the surface of a 5 μm thick film of the same material using electron beam lithography and dry etching in an oxygen plasma. The fibres were 2 μm long, with a diameter of around 500 nm and a periodicity of 1.6 μm, and covered an area of roughly 1 cm² (see figure on the left). Initially, the team used a silicon wafer as a substrate, but found that the tape's adhesive power increased by almost 1,000 times if they used a soft bonding substrate such as Scotch tape. This is because the flexible substrate yields a much higher ratio of the number of setae in contact with the surface over the total number of setae. The resulting "gecko tape" was tested by attaching a sample to the hand of a 15 cm high plastic Spider-Man figure weighing 40 g, which enabled it to stick to a glass ceiling, as is shown in the figure. The tape, which had a contact area of around with the glass, was able to carry a load of more than . However, the adhesion coefficient was only 0.06, which is low compared with real geckos (8~16). Commercial use Commercial nano tape is usually sold as double-sided tape that is useful for hanging lightweight items, such as pictures and decorative items on smooth walls. Using superaligned carbon nanotubes, some nano tapes can stay sticky in extreme temperatures. Gallery References Adhesive tape Biophysics Biomimetics Nanotechnology Carbon nanotubes
Nano tape
Physics,Materials_science,Engineering,Biology
679
5,483,579
https://en.wikipedia.org/wiki/Tin%28II%29%20bromide
Tin(II) bromide is a chemical compound of tin and bromine with a chemical formula of SnBr2. Tin is in the +2 oxidation state. The stability of tin compounds in this oxidation state is attributed to the inert pair effect. Structure and bonding In the gas phase SnBr2 is non-linear with a bent configuration similar to SnCl2 in the gas phase. The Br-Sn-Br angle is 95° and the Sn-Br bond length is 255 pm. There is evidence of dimerisation in the gaseous phase. The solid state structure is related to that of SnCl2 and PbCl2 and the tin atoms have five near bromine atom neighbours in an approximately trigonal bipyramidal configuration. Two polymorphs exist: a room-temperature orthorhombic polymorph, and a high-temperature hexagonal polymorph. Both contain (SnBr2)∞ chains but the packing arrangement differs. Preparation Tin(II) bromide can be prepared by the reaction of metallic tin and HBr, distilling off the H2O/HBr and cooling: Sn + 2 HBr → SnBr2 + H2 However, the reaction will produce tin(IV) bromide in the presence of oxygen. Reactions SnBr2 is soluble in donor solvents such as acetone, pyridine and dimethylsulfoxide to give pyramidal adducts. A number of hydrates are known, 2SnBr2·H2O, 3SnBr2·H2O and 6SnBr2·5H2O, which in the solid phase have tin coordinated by a distorted trigonal prism of 6 bromine atoms with Br or H2O capping one or two faces. When dissolved in HBr the pyramidal SnBr3− ion is formed. Like SnCl2 it is a reducing agent. With a variety of alkyl bromides oxidative addition can occur to yield the alkyltin tribromide, e.g. SnBr2 + RBr → RSnBr3 Tin(II) bromide can act as a Lewis acid, forming adducts with donor molecules, e.g. trimethylamine, where it forms NMe3·SnBr2 and 2NMe3·SnBr2. It can also act as both donor and acceptor in, for example, the complex F3B·SnBr2·NMe3, where it is a donor to boron trifluoride and an acceptor to trimethylamine. References Bromides Metal halides Tin(II) compounds Reducing agents
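To make the preparation reaction above concrete, the short sketch below works through its stoichiometry (1 mol of Sn yields 1 mol of SnBr2). The molar masses are standard values; the 10 g of tin is an arbitrary, assumed input.

```typescript
// Stoichiometry of Sn + 2 HBr → SnBr2 + H2.
const M_Sn = 118.71;               // g/mol
const M_Br = 79.904;               // g/mol
const M_SnBr2 = M_Sn + 2 * M_Br;   // ≈ 278.5 g/mol

const massSn = 10.0;               // grams of tin (assumed example input)
const molSn = massSn / M_Sn;       // moles of Sn reacted
console.log(`${massSn} g Sn → ${(molSn * M_SnBr2).toFixed(1)} g SnBr2 (theoretical yield)`);
```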
Tin(II) bromide
Chemistry
559
2,902,622
https://en.wikipedia.org/wiki/66%20Aquarii
66 Aquarii is a single star in the equatorial constellation of Aquarius. 66 Aquarii is the Flamsteed designation though the star also bears the Bayer designation of g1 Aquarii. It is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of 4.673. Based upon an annual parallax shift of 7.53 milliarcseconds, the distance to this star is about . This is an evolved giant star with a stellar classification of K3 III. It has expanded to 37 times the radius of the Sun and is radiating 434 times the luminosity of the Sun from its outer envelope at an effective temperature of 4,170 K. This gives it the orange-hued glow of a K-type star. It is a suspected variable star that ranges in magnitude between 4.66 and 4.71. References External links Image 66 Aquarii K-type giants Suspected variables Aquarius (constellation) BD-19 6324 Aquarii, g1 Aquarii, 066 215167 112211 8649
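The distance figure left blank above follows from the quoted parallax by the standard conversion d [pc] = 1 / p [arcsec]; the sketch below applies it to the 7.53 milliarcsecond value stated in the article, ignoring measurement uncertainty.

```typescript
// Distance from annual parallax: d [parsec] = 1000 / p [milliarcseconds].
const parallaxMas = 7.53;                             // mas, as quoted above
const distanceParsec = 1000 / parallaxMas;            // ≈ 133 pc
const distanceLightYears = distanceParsec * 3.2616;   // 1 pc ≈ 3.2616 ly
console.log(`≈ ${distanceParsec.toFixed(0)} pc, or ≈ ${distanceLightYears.toFixed(0)} light-years`);
```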
66 Aquarii
Astronomy
231
14,858,960
https://en.wikipedia.org/wiki/3C%20219
3C 219 is a Seyfert galaxy with a quasar-like appearance located in the constellation Ursa Major. This galaxy's radio jets are not detectable between the core and the outer radio lobes. See also Lists of galaxies References External links www.jb.man.ac.uk/atlas/ (J. P. Leahy) 3C219 = B0917+458 (Alan Bridle / 23 September 1999) Radio structure in radio galaxies (William C. Keel @ University of Alabama) Radio galaxies Seyfert galaxies 219 3C 219 2817605 Ursa Major
3C 219
Astronomy
131
642,936
https://en.wikipedia.org/wiki/Immersion%20lithography
Immersion lithography is a technique used in semiconductor manufacturing to enhance the resolution and accuracy of the lithographic process. It involves using a liquid medium, typically water, between the lens and the wafer during exposure. By using a liquid with a higher refractive index than air, immersion lithography allows for smaller features to be created on the wafer. Immersion lithography replaces the usual air gap between the final lens and the wafer surface with a liquid medium that has a refractive index greater than one. The angular resolution is increased by a factor equal to the refractive index of the liquid. Current immersion lithography tools use highly purified water for this liquid, achieving feature sizes below 45 nanometers. Background The ability to resolve features in optical lithography is directly related to the numerical aperture of the imaging equipment, the numerical aperture being the sine of the maximum refraction angle multiplied by the refractive index of the medium through which the light travels. The lenses in the highest resolution "dry" photolithography scanners focus light in a cone whose boundary is nearly parallel to the wafer surface. As it is impossible to increase resolution by further refraction, additional resolution is obtained by inserting an immersion medium with a higher index of refraction between the lens and the wafer. The blurriness is reduced by a factor equal to the refractive index of the medium. For example, for water immersion using ultraviolet light at 193 nm wavelength, the index of refraction is 1.44. The resolution enhancement from immersion lithography is about 30–40% depending on materials used. However, the depth of focus, or tolerance in wafer topography flatness, is improved compared to the corresponding "dry" tool at the same resolution. The idea for immersion lithography was patented in 1984 by Takanashi et al. It was also proposed by Taiwanese engineer Burn J. Lin and realized in the 1980s. In 2004, IBM's director of silicon technology, Ghavam Shahidi, announced that IBM planned to commercialize lithography based on light filtered through water. Defects Defect concerns, e.g., water left behind (watermarks) and loss of resist-water adhesion (air gap or bubbles), have led to considerations of using a topcoat layer directly on top of the photoresist. This topcoat would serve as a barrier for chemical diffusion between the liquid medium and the photoresist. In addition, the interface between the liquid and the topcoat would be optimized for watermark reduction. At the same time, defects from topcoat use should be avoided. As of 2005, topcoats had been tuned for use as antireflection coatings, especially for hyper-NA (NA>1) cases. By 2008, defect counts on wafers printed by immersion lithography had reached zero-level capability. Polarization impacts As of 2000, polarization effects due to high angles of interference in the photoresist were being considered as features approached 40 nm. Hence, illumination sources generally need to be azimuthally polarized to match the pole illumination for ideal line-space imaging. Throughput As of 1996, higher throughput was achieved through higher stage speeds, which in turn, as of 2013, were enabled by higher-power ArF laser pulse sources. Specifically, the throughput is directly proportional to stage speed V, which is related to dose D and rectangular slit width S and slit intensity Iss (which is directly related to pulse power) by V=Iss*S/D. The slit height is the same as the field height. 
The slit width S, in turn, is limited by the number of pulses to make the dose (n), divided by the frequency of the laser pulses (f), at the maximum scan speed Vmax by S=Vmax*n/f. At a fixed frequency f and pulse number n, the slit width will be proportional to the maximum stage speed. Hence, throughput at a given dose is improved by increasing maximum stage speed as well as increasing pulse power. According to ASML's product information for the TWINSCAN NXT:1980Di, immersion lithography tools boast the highest throughputs (275 WPH) targeted for high-volume manufacturing. Multiple patterning The resolution limit for a 1.35 NA immersion tool operating at 193 nm wavelength is 36 nm. Going beyond this limit to sub-20 nm nodes requires multiple patterning. At the 20 nm foundry and memory nodes and beyond, double patterning and triple patterning are already being used with immersion lithography for the densest layers. See also Oil immersion Water immersion objective References Lithography (microfabrication) Taiwanese inventions
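The resolution limit and the throughput relations quoted above can be put into a small numerical sketch. The k1 = 0.25 factor is the theoretical single-exposure limit, and every other input below is an assumed, round-number example rather than a specification of any particular scanner.

```typescript
// Resolution limit ≈ k1 · λ / NA for single-exposure immersion lithography.
const k1 = 0.25, lambdaNm = 193, NA = 1.35;
console.log(`resolution limit ≈ ${(k1 * lambdaNm / NA).toFixed(1)} nm`);   // ≈ 35.7 nm

// Slit width and stage speed from the relations in the text: S = Vmax·n/f and V = Iss·S/D.
const Vmax = 70;     // maximum stage scan speed, cm/s (assumed)
const n = 30;        // laser pulses needed to accumulate the dose (assumed)
const f = 6000;      // pulse repetition rate, Hz (assumed)
const Iss = 6000;    // slit intensity, mW/cm² (assumed)
const D = 30;        // exposure dose, mJ/cm² (assumed)

const S = (Vmax * n) / f;    // slit width, cm  → 0.35 cm
const V = (Iss * S) / D;     // dose-limited stage speed, cm/s → 70 cm/s
console.log(`slit width S = ${S} cm, stage speed V = ${V} cm/s`);
```

With these assumed numbers the dose-limited speed equals the maximum stage speed, illustrating why raising pulse power and stage speed together is what improves throughput.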
Immersion lithography
Materials_science
967
60,442,232
https://en.wikipedia.org/wiki/Microwave%20welding
Microwave welding is a plastic welding process that utilizes alternating electromagnetic fields in the microwave band to join thermoplastic base materials that are melted by the phenomenon of dielectric heating. See also Dielectric heating Plastic welding Radio-frequency welding References Welding Radio technology
Microwave welding
Technology,Engineering
54
2,902,891
https://en.wikipedia.org/wiki/4%20Aquilae
4 Aquilae, abbreviated 4 Aql, is a single, white-hued star in the equatorial constellation of Aquila. 4 Aquilae is the Flamsteed designation. It has an apparent visual magnitude of 5.02, making it a faint star visible to the naked eye. The distance to 4 Aql can be estimated from its annual parallax shift of , yielding an estimated range of around 480 light years. It is moving closer to the Earth with a heliocentric radial velocity of −13 km/s. This is a B-type main-sequence star with a stellar classification of B9 V. It was classed as a Be star by Arne Slettebak in 1982, indicating it has ionized circumstellar gas. The star is spinning rapidly, showing a projected rotational velocity of 259 km/s, and is being viewed almost equator-on. It has 3.6 times the mass of the Sun and 3 times the Sun's radius. The star is radiating 294 times the Sun's luminosity from its photosphere at an effective temperature of 10,965 K. References B-type main-sequence stars Be stars Aquila (constellation) Durchmusterung objects Aquilae, 04 173370 091975 7040
4 Aquilae
Astronomy
271
5,064,853
https://en.wikipedia.org/wiki/HD%2061248
HD 61248 is a single star in the southern constellation of Carina. It has the Bayer designation Q Carinae, while HD 61248 is the star's identifier in the Henry Draper Catalogue. This star has an orange hue and is visible to the naked eye with an apparent visual magnitude of 4.93. Based upon parallax measurements, it is located approximately 402 light years from the Sun. The object is drifting further away with a radial velocity of +63 km/s, having made its closest approach to the Sun some 1.8 million years ago. This is an aging giant star with a stellar classification of K3 III, which means it is no longer undergoing core hydrogen fusion. It has expanded to 30 times the Sun's radius and is radiating 279 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,289 K. References K-type giants Carina (constellation) Carinae, Q Durchmusterung objects 061248 036942 2934
HD 61248
Astronomy
212
44,305,631
https://en.wikipedia.org/wiki/Aerosol%20mass%20spectrometry
Aerosol mass spectrometry is the application of mass spectrometry to the analysis of the composition of aerosol particles. Aerosol particles are defined as solid and liquid particles suspended in a gas (air), with a size range of 3 nm to 100 μm in diameter, and are produced from natural and anthropogenic sources, through a variety of different processes that include wind-blown suspension and combustion of fossil fuels and biomass. Analysis of these particles is important owing to their major impacts on global climate change, visibility, regional air pollution and human health. Aerosols are very complex in structure, can contain thousands of different chemical compounds within a single particle, and need to be analysed for both size and chemical composition, in real-time or off-line applications. Off-line mass spectrometry is performed on collected particles, while on-line mass spectrometry is performed on particles introduced in real time. History In literature from ancient Rome there are complaints of foul air, while in 1273 the inhabitants of London were discussing the prohibition of coal burning to improve air quality. However, the measurement and analysis of aerosols only became established in the second half of the 19th century. In 1847 Henri Becquerel presented the first concept of particles in the air in his condensation nuclei experiment and his ideas were confirmed in later experiments by Coulier in 1875. These ideas were expanded on between 1880 and 1890 by meteorologist John Aitken who demonstrated the fundamental role of dust particles in the formation of clouds and fogs. Aitken's method for aerosol analysis consisted of counting and sizing particles mounted on a slide, using a microscope. The composition of the particles was determined by their refractive index. In the 1920s aerosol measurements, using Aitken's simple microscopic method, became more commonplace because the negative health effects of industrial aerosols and dust were starting to be recognized by health organizations. Technological and instrumentation advancements, including improved filters, led to improvement in the aerosol measurement methods in the 1960s. The introduction of polycarbonate filters, called nucleopore filters, enhanced the collection, storage, and transportation of samples without disturbing the physical and chemical state of the particles. On-line aerosol measurement methods took a little longer than off-line to be developed and perfected. It was not until 1973 that W.D. Davis developed and patented the real-time single particle mass spectrometry (RTSPMS) instrument. The setup was quite similar to today's AMS system, with the sample being introduced through a small steel capillary into the ion source region. The sample would ionize after striking a hot rhenium filament. The resulting ions were separated in a magnetic sector and detected by an electron multiplier. The method could only ionize elements with ionization potentials below the work function of the filament (~8 eV), typically alkali and alkaline earth metals. The instrument yielded unit resolution up to a mass-to-charge ratio of 115. The RTSPMS instrument had a particle transmission/detection efficiency of 0.2-0.3%. Davis used the RTSPMS instrument to study samples from calibration aerosols, ambient laboratory air, and aerosol sources. The majority of his studies were focused on inorganic salts created in the lab. 
In Davis's analysis of ambient air, he found a significant increase in lead at the end of the day, which was concluded to be due to automobile emissions. This development was the first step towards today's modern on-line instruments. The next major technological development that came out of the 1970s was in 1976, when Stoffel developed a magnetic sector RTSPMS technique that used direct-inlet mass spectrometry (DIMS), also known as particle-inlet mass spectrometry (PIMS). The PIMS instrument was the first to have a differentially-pumped direct inlet, consisting of a stainless steel capillary followed by a skimmer and conical collimator that focuses the sample into a particle beam that goes on to the ionization region. This type of inlet system is what modern on-line aerosol mass spectrometer instruments use today. In 1982 Sinha and Friedlander developed particle analysis by mass spectrometry (PAMS); this method was the first to incorporate the optical detection of particles followed by laser desorption/ionization (LDI) in an RTSPMS technique. Prior to this point all RTSPMS methods used surface desorption/ionization (SDI), which consisted of a heated metal surface that ionized the samples. The LDI method involves the sample being hit with a laser pulse; the particle absorbs photons and undergoes both desorption and ionization within that same pulse. LDI has several advantages over SDI for on-line single particle mass spectrometry; as such, since its development it has been the primary ionization method for RTSPMS. The last major step in RTSPMS development came in 1994 from Kimberly A. Prather. Prather developed aerosol time-of-flight mass spectrometry (ATOFMS); this method was the first to allow simultaneous measurement of the size and composition of a single airborne particle. This technique differed from previous methods in that, instead of using the unreliable approach of inferring particle size from light-scattering signal intensity, it uses a two-laser system that allows for aerodynamic sizing. Off-line Off-line analysis is an older method than on-line and involves the chemical analysis of sampled aerosols, traditionally collected on filters or with cascade impactors (shown to the right) in the field and analyzed back in the lab. Cascade impactors collect particles as they traverse a series of impaction plates, and separate them based on size. The aerosol samples are analyzed by the coupling of pre-separation methods with mass spectrometry. The benefit of this method relative to on-line sampling is greater molecular and structural speciation. The greater molecular and structural speciation is due to the pre-separation. There are many different types of instrumentation used for the analysis, owing to the various types and combinations of ionization, separation, and mass detection methods. No one combination is best for all samples, and as such, depending on the analytical need, different instrumentation is used. The most commonly used ionization method for off-line instruments is electron ionization (EI), a hard ionization technique that uses 70 eV electrons to ionize the sample, causing significant fragmentation that can be used in a library search to identify the compounds. The separation method that EI is usually coupled with is gas chromatography (GC), in which the compounds are separated by their boiling points and polarity after solvent extraction of the samples collected on the filters. 
An alternative to solvent-based extraction for particulates on filters is the use of thermal extraction (TE)-GC/MS, which utilizes an oven interfaced with the GC inlet to vaporize the analytes of the sample directly into the GC inlet. This technique is more often used than solvent-based extraction because it offers better sensitivity, eliminates the need for solvents, and can be fully automated. To increase the separation of the particles the GC can be coupled with a time of flight (TOF)-MS, a mass separation method that separates ions based on their mass-to-charge ratio. Another method that utilizes EI is isotope ratio mass spectrometry (IR-MS); this instrumentation incorporates a magnetic sector analyzer and a Faraday-collector detector array and separates ions based on their isotopic abundance. The isotopic abundances of carbon, hydrogen, nitrogen, and oxygen become locally enriched or depleted through a variety of atmospheric processes. This information helps in determining the source of the aerosols and the interactions they have undergone. EI is a universal ionization method, but it does cause excessive fragmentation, and thus can be substituted with chemical ionization (CI), which is a much softer ionization method and is often used to determine the molecular ion. One ionization method that utilizes CI is atmospheric-pressure chemical ionization (APCI). In APCI the ionization occurs at atmospheric pressure with ions produced by corona discharges on a solvent spray, and it is often coupled with high-performance liquid chromatography (HPLC), which provides quality determination of polar and ionic compounds in the collected atmospheric aerosols. The use of APCI allows for the sampling of the filters without the need of solvents for the extraction. The APCI is typically connected to a quadrupole mass spectrometer. Another ionization method often used for off-line mass spectrometry is inductively coupled plasma (ICP). ICP is commonly used in the elemental analysis of trace metals, and can be used to determine the source of the particles and their health effects. There is also a range of soft ionisation techniques available for assessing the molecular composition of aerosol particles in greater detail, such as electrospray ionization, which results in less fragmentation of compounds within the aerosol. These techniques are only beneficial when coupled with a high or ultra-high resolution mass spectrometer, such as an FTICR-MS or an Orbitrap, as very high resolution is needed to differentiate between the high number of compounds present. On-line On-line mass spectrometry was developed to solve some of the limitations and problems that arise from off-line analysis, such as evaporation and chemical reactions of particles in the filters during long analysis times. On-line mass spectrometry solves these problems through the collection and analysis of aerosol particles in real time. On-line instruments are very portable and allow for spatial variability to be examined. These portable instruments can be put on many different platforms such as boats, planes, and mobile platforms (e.g. car trailers). An example of this is in the picture at the beginning with the instrumentation attached to an aircraft. Like off-line, on-line mass spectrometry has many different types of instruments, which can be broken up into two categories: instruments that measure the chemistry of the particle ensemble (bulk measurement) and those that measure the chemistry of individual particles (single-particle measurement). 
Thus, based on analytical needs, different instrumentation is used in the analysis of aerosol particles. Bulk measurement Generally speaking, bulk measurement instruments thermally vaporize the particles prior to ionization, and there are several different ways that the vaporization and ionization are performed. The main instrument that is used for bulk measurements is the Aerodyne aerosol mass spectrometer (AMS). Aerosol mass spectrometer The Aerodyne AMS provides real-time aerosol mass spectrometry analysis of size-resolved mass concentrations of non-refractory components (e.g. organics, sulfate, nitrate, and ammonium). The term non-refractory is assigned to species that evaporate rapidly at 600 °C under vacuum conditions (e.g. organic matter, NH4NO3 and (NH4)2SO4). The schematic of a typical AMS is shown in the figure to the right. The Aerodyne AMS is made up of three sections: the aerosol inlet, the particle sizing chamber, and the particle detection chamber. The aerosol inlet has a flow-limiting orifice entrance that is around 100 μm in diameter. Once in the chamber the sample goes through an aerodynamic focusing lens system, which consists of several orifice lenses mounted in sequence of decreasing inner diameter. The lens focuses the particles into a narrow particle beam. The beam now travels through the particle sizing chamber, where the particle aerodynamic diameter is measured. The particle sizing chamber is made up of a flight tube maintained at ~10^−5 torr. The entrance of the flight tube is a mechanical chopper that is used to modulate the particle beam; then, using both the fixed length of the tube and the time-resolved detection of the arrival at the end, the particles' velocities can be determined. Using the velocity, the particle's diameter is obtained. As the particle beam exits the flight tube, it enters the particle composition detection chamber. In this section, the particles collide with a heated tungsten element (~600 °C). At this tungsten element the non-refractory components of the particle beam are flash vaporized and then ionized by EI. Once ionized, the sample can be analyzed with either a quadrupole (Q), time-of-flight (ToF), or high-resolution (HR)-ToF mass analyzer. Single-particle measurements Generally speaking, single-particle measurement instruments desorb particles one at a time using a pulsed laser. The process is called laser desorption/ionization (LDI) and is the primary ionization method used for single-particle measurements. The main advantage of using LDI over thermal desorption is the ability to analyze both non-refractory and refractory (e.g., mineral dust, soot) components of atmospheric aerosols. Laser vaporization allows precise laser firing when individual particles fly through the vaporization zone, and the systems are thus dubbed single particle mass spectrometers (SPMS). Several versions of SPMS have been reported, including the aerosol time-of-flight mass spectrometer (AToFMS), the laser mass analyzer for particles in the airborne state (LAMPAS), particle analysis by laser mass spectrometer (PALMS), the rapid single-particle mass spectrometer (RSMS), the bioaerosol mass spectrometer (BAMS), the nanoaerosol mass spectrometer (NAMS), the single-particle laser ablation time-of-flight mass spectrometer (SPLAT), the single-particle aerosol mass spectrometer (SPAMS), and the laser ablation aerosol particle time-of-flight mass spectrometer (LAAP-ToF-MS). 
Among the most common of these instruments is the aerosol time-of-flight mass spectrometer (AToFMS). Aerosol time-of-flight mass spectrometer The AToFMS allows for the determination of mixing state, or distribution of chemical species, within individual particles. These mixing states are important in the determination of the climate and health impacts of aerosols. The schematic of a typical AToFMS is shown to the right. The overall structure of ATOF instruments comprises the sampling, sizing, and mass analyzer regions. The inlet system is similar to that of the AMS, using the same aerodynamic focusing lens, but it has smaller orifices because it analyzes single particles. In the sizing region the particle passes through the first continuous solid-state laser, which generates an initial pulse of scattered light. Then the particle passes through the second laser, which is orthogonal to the first and produces a pulse of scattered light. The light is detected by a photomultiplier (PMT) that is matched up to each laser. Using the transit time between the two detected pulses and the fixed distance, the velocity and size of each particle are calculated. Next the particles travel to the mass analyzer region, where they are ionized by a pulsed LDI laser, which is timed to hit the particle as it reaches the center of the ion extraction region. Once ionized, the positive ions are accelerated towards the positive ToF section and the negative ions are accelerated towards the negative ToF section where they are detected. Applications The aerosol science and measurement field, especially aerosol mass spectrometry, has grown considerably over the last couple of decades. Its growth is partly due to the instruments' versatility: they have the ability to analyze a particle's size and chemical composition, and to perform bulk and single-particle measurements. The versatility of aerosol mass spectrometers allows them to be used for many different applications in both the lab and field. Over the years aerosol mass spectrometers have been used for applications ranging from determining emission sources and human exposure to pollutants to studying radiative transfer and cloud microphysics. Most of these studies have utilized the mobility of the AMS, which has been fielded in urban, remote, rural, marine, and forested environments around the world. AMS instruments have also been deployed on mobile platforms such as ships, mobile laboratories, and aircraft. One recent emission study in 2014 was performed by two NASA research aircraft, a DC-8 and a P-3B, that were outfitted with aerosol instrumentation (AMS). The aircraft were sent to perform analysis of atmospheric samples over the oil sands mining and upgrading facilities near Ft. McMurray, Alberta, Canada. The purpose of the study was to measure the emissions from the facilities and determine whether they matched requirements. The result of the study was that, compared to estimates of annual forest fire emissions in Canada, the oil sands facilities are a minor source of aerosol number, aerosol mass, particulate organic matter, and black carbon. Aerosol mass spectrometry has also found its way into the field of pharmaceutical aerosol analysis, due to its ability to provide real-time measurements of particle size and chemical composition. People who suffer from chronic respiratory disease commonly receive their medication through the use of either a pressurized metered-dose inhaler (pMDI) or a dry powder inhaler (DPI). In both methods the drug is delivered directly into the lungs by inhalation. 
In recent years, inhaled products have become available which deliver two types of drug within a single dose. Research has shown that these two-drug inhalers provide an enhanced clinical effect beyond that achieved when the two drugs are administered concurrently from two separate inhalers. It was determined using an AToFMS that the respirable particles in a DPI product and a pMDI product were composed of co-associated active pharmaceutical ingredients, which is the reason behind the increased effects of the two-drug inhalers. See also Laser microprobe mass spectrometer Particulate matter sampler Aerosol impaction Particle size analysis References Further reading External links Centre for Atmospheric Science TOF-AMS resources Q-AMS resources List of publications using all versions of the AMS Glossary of AMS terms Mass spectrometry Aerosols Aerosol measurement
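As a minimal sketch of the flight-tube sizing arithmetic used by the AMS and AToFMS designs described above: the particle velocity follows from a fixed flight distance and the measured transit time, and the aerodynamic diameter is then read off an instrument calibration. The flight length and transit time below are invented for illustration.

```typescript
// Particle velocity from the fixed flight-tube length and the measured transit time.
const flightLengthM = 0.15;     // metres between the two timing points (assumed)
const transitTimeS = 4.2e-4;    // seconds between the two detected signals (assumed)
const velocityMs = flightLengthM / transitTimeS;
console.log(`particle velocity ≈ ${velocityMs.toFixed(0)} m/s`);

// The aerodynamic diameter is then obtained from an instrument-specific calibration
// curve (velocity versus diameter, measured with particles of known size); no
// universal closed-form expression is assumed here.
```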
Aerosol mass spectrometry
Physics,Chemistry
3,832
35,850,400
https://en.wikipedia.org/wiki/Babygrow
A babygrow, babygro, sleepsuit, or sleep suit in British English is a one-piece item of baby clothing with long sleeves and legs used for sleep and everyday wear. They are typically made from cotton and closed with snaps, although they may also be made from fleece or closed with zips. The feet are often enclosed, but they may be footless. They are distinguished from bodysuits by having legs and long sleeves. This terminology is common in the United Kingdom, where the trademark is not registered. In the United States, where the trademark is registered, the name is uncommon as other manufacturers of the item use different terms. Synonyms In American English, different terms are more usual. The most common are sleeper or sleep and play. If made of fleece, they are considered blanket sleepers. They may also be referred to in American English simply as pajamas or one-pieces, or if they have feet as footie pajamas or footed one-pieces. If closed with a zipper, they may be referred to as zipper pajamas, zip up pajamas, or simply zip pajamas. History Babygro is a trademark brand, invented in the U.S. in the 1950s by Walter Artzt. See also Romper suit Infant bodysuit References Nightwear One-piece suits Children's clothing Infants' clothing
Babygrow
Biology
273
2,779,031
https://en.wikipedia.org/wiki/Rosette%20%28botany%29
In botany, a rosette is a circular arrangement of leaves or of structures resembling leaves. In flowering plants, rosettes usually sit near the soil. Their structure is an example of a modified stem in which the internode gaps between the leaves do not expand, so that all the leaves remain clustered tightly together and at a similar height. Some insects induce the development of galls that are leafy rosettes. In bryophytes and algae, a rosette results from the repeated branching of the thallus as the plant grows, resulting in a circular outline. Taxonomies Many plant families have varieties with rosette morphology; they are particularly common in Asteraceae (such as dandelions), Brassicaceae (such as cabbage), and Bromeliaceae. The fern Blechnum fluviatile or New Zealand Water Fern (kiwikiwi) is a rosette plant. Function in flowering plants Often, rosettes form in perennial plants whose upper foliage dies back with the remaining vegetation protecting the plant. Another form occurs when internodes along a stem are shortened, bringing the leaves closer together, as in lettuce, dandelion and some succulents. (When plants such as lettuce grow too quickly, the stem lengthens instead, a condition known as bolting.) In yet other forms, the rosette persists at the base of the plant (such as the dandelion), and there is a taproot. Protection Part of the protective function of a rosette like the dandelion is that it is hard to pull from the ground; the leaves come away easily while the taproot is left intact. Another kind of protection is provided by the caulescent rosette, which is part of the growth form of the giant herb genus Espeletia in South America, which has a well-developed stem above the ground. In tropical alpine environments, a wide variety of plants in different plant families and different parts of the world have evolved this growth form characterized by evergreen rosettes growing above marcescent leaves. Examples where this arrangement has been confirmed to improve survival, help water balance, or protect the plant from cold injury are Espeletia schultzii and Espeletia timotensis, both from the Andes. Form The rosette form is the structure, the relationship of the parts, and the variations within it, as shown in the following study from a herbarium: Dryas octopetala (white dryas, Rosaceae) has a leaf rosette of leaf blades with a short petiole, slim, egg-shaped leaves with cordate bases with clearly and regularly toothed margins, and single flowers on usually long peduncles or stalks, two to four centimetres across. The flowers have seven to nine, often even more, white egg-shaped petals. The sepals are lanceolate. Silene nutans (Nottingham catchfly, Caryophyllaceae) shows ensiform-lanceolate leaves. The slightly rosette-like ground leaves are bigger and of different shape than the sparse, opposite leaves on the stem. This is explained in that side shoots with greatly prolonged internodes may spring from rosettes. They have one or more flowers at their tip, like the primrose. Especially in biennial plants, the main shoot can grow with prolonged internodes and even branches. It is not unusual that the leaves of the rosette and those of the shoot differ in shape. As form, "rosette" is used to describe plants that perpetually grow as a rosette and the immature stage of plants such as some ferns. See also Asteraceae Bromeliad Billbergia Fern Bird's nest fern Ostrich fern Wild bird's-nest fern References Plant morphology Leaves
Rosette (botany)
Biology
771
5,298,132
https://en.wikipedia.org/wiki/AIDS%20%28journal%29
AIDS is a peer-reviewed scientific journal that is published by Lippincott Williams & Wilkins. It was established in 1987 and is an official journal of the International AIDS Society. It covers all aspects of HIV and AIDS, including basic science, clinical trials, epidemiology, and social science. The editor in chief is Jay A. Levy. Eighteen issues are published annually. Abstracting and indexing The journal is abstracted and indexed in Chemical Abstracts Service, EMBASE, Index Medicus, MEDLINE, and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.632. See also Journal of Acquired Immune Deficiency Syndromes References External links HIV/AIDS journals Delayed open access journals Academic journals established in 1987 Immunology journals English-language journals Lippincott Williams & Wilkins academic journals Academic journals associated with international learned and professional societies Journals published between 13 and 25 times per year
AIDS (journal)
Biology
191
3,819,714
https://en.wikipedia.org/wiki/SOAP%20with%20Attachments
SOAP with Attachments (SwA) or MIME for Web Services is the use of web services to send and receive files with a combination of SOAP and MIME, primarily over HTTP. Note that SwA is not a new specification, but rather a mechanism for using the existing SOAP and MIME facilities to perfect the transmission of files using Web Services invocations. Status SwA is a W3C Note. It was submitted as a proposal, but it was not adopted by the W3C. Instead, MTOM is the W3C Recommendation for handling binary data in SOAP messages. With the release of SOAP 1.2 additionally the note SOAP 1.2 Attachment Feature was published. See also DIME MTOM SOAP with Attachments API for Java References External links Note by the World Wide Web Consortium on 11 December 2000 World Wide Web Consortium standards Web service specifications XML-based standards
SOAP with Attachments
Technology
180
1,354,854
https://en.wikipedia.org/wiki/Ludwig%20B%C3%BCchner
Friedrich Karl Christian Ludwig Büchner (29 March 1824 – 30 April 1899) was a German philosopher, physiologist and physician who became one of the exponents of 19th-century scientific materialism. Biography Büchner was born at Darmstadt on 29 March 1824. From 1842 to 1848 he studied physics, chemistry, botany, mineralogy, philosophy and medicine at the University of Giessen, where he graduated in 1848 with a dissertation entitled Beiträge zur Hall'schen Lehre von einem excitomotorischen Nervensystem (Contributions to the Hallerian Theory of an Excitomotor Nervous System). Afterwards, he continued his studies at the University of Strasbourg, the University of Würzburg (where he studied pathology with the great Rudolf Virchow) and the University of Vienna. In 1852 he became lecturer in medicine at the University of Tübingen, where he published his magnum opus Kraft und Stoff: Empirisch-naturphilosophische Studien (Force and Matter: Empiricophilosophical Studies, 1855). Büchner was one of the founding members of the Freies Deutsches Hochstift (Free German Foundation). According to Friedrich Albert Lange (Geschichte des Materialismus, 1866), Kraft und Stoff was imbued with a fanatical enthusiasm for humanity. Büchner sought to demonstrate the indestructibility of matter, and the finality of physical force. The scientific materialism of this work, which contemporaries often lumped together with the publications of other 'materialists' like Karl Vogt and Jacob Moleschott, caused so much opposition that he was compelled to give up his post at Tübingen, and he retired to Darmstadt. He practiced as a physician and contributed regularly to pathological, physiological and popular magazines. He continued his philosophical work in defense of materialism, and published Natur und Geist (Nature and Spirit, 1857), Aus Natur und Wissenschaft (From Nature and Science, vol. I., 1862; vol. II., 1884), Der Fortschritt in Natur und Geschichte im Lichte der Darwinschen Theorie (Progress in Nature and History in the Light of the Darwinian Theory, 1884), Tatsachen und Theorien aus dem naturwissenschaftlichen Leben der Gegenwart (Facts and Theories in the Scientific Life of Present, 1887), Fremdes und Eigenes aus dem geistligen Leben der Gegenwart (Strangers and Selves in the Spiritual Life of the Present, 1890), Darwinismus und Socialismus (Darwinism and Socialism, 1894), Im Dienste der Wahrheit (In the Service of Truth, 1899). Ludwig Büchner's materialism was the founding ground for the freethinkers' movement in Germany. In 1881 he founded in Frankfurt the "German Freethinkers League" ("Deutsche Freidenkerbund"). Being politically active, Büchner was a member of the second chamber of the Landstände of the Grand Duchy of Hesse as a representative of the German Free-minded Party from 1884 to 1890. He died at Darmstadt on 30 April 1899. Philosophical work In estimating Büchner's philosophy it must be remembered that he was primarily a physiologist, not a metaphysician. Matter and force (or energy) are, he maintained, infinite; the conservation of force follows from the imperishability of matter, the ultimate basis of all science. Büchner is not always clear in his theory of the relation between matter and force. At one time he refuses to explain it, but generally he assumes that all natural and spiritual forces are indwelling in matter. Just as a steam engine, he says in Kraft und Stoff (7th ed., p. 
130), produces motion, so the intricate organic complex of force-bearing substance in an animal organism produces a total sum of certain effects, which, when bound together in a unity, are called by us mind, soul, thought. Here he postulates force and mind as emanating from original matter, a materialistic monism. But in other parts of his works he suggests that mind and matter are two different aspects of that which is the basis of all things, a monism which is not necessarily materialistic. Büchner was much less concerned to establish a scientific metaphysics than to protest against the romantic idealism of his predecessors and the theological interpretations of the universe. Nature according to him is purely physical; it has no purpose, no will, no laws imposed by extraneous authority, no supernatural ethical sanction. Büchner endorsed Charles Darwin's theory of evolution within a decade of its first issuance, writing the book Man in the Past, Present and Future in 1869 about what he felt were Darwinism's implications. He believed that this included humanity moving into a kinder state of being, where a primitive struggle for life would no longer apply or at least be replaced with purely intellectual struggles, and war would end. To achieve this, Büchner advocated government social programs which would aid greater equality, including the collective ownership of land and women's rights (however he did not extend this to them receiving suffrage, deeming that premature at the time). Büchner, together with Edward Aveling, had attended the congress of the "International Federation of Freethinkers" held in London from 25 to 27 September 1881, the following day they visited Darwin on 28 September. Aveling published a full account of his visit in the National Reformer in 1882. Family Ludwig Büchner was born in the family of Ernst Karl Büchner, a senior medical councilor and court doctor in the Grand Duchy of Hesse. Ludwig was the younger brother of Georg Büchner, a famous revolutionary playwright, and Luise Büchner, a women's rights advocate; and the uncle of Ernst Büchner, inventor of the Büchner flask. Notes References Andreas Daum, Wissenschaftspopularisierung im 19. Jahrhundert: Bürgerliche Kultur, naturwissenschaftliche Bildung und die deutsche Öffentlichkeit, 1848–1914. Munich: Oldenbourg, 1998, . Fredrick Gregory: Scientific Materialism in Nineteenth Century Germany, Springer, Berlin u.a. 1977, Attribution External links Biography and bibliography in the Virtual Laboratory of the Max Planck Institute for the History of Science Complete scanned text of Büchner's Force and Matter 1824 births 1899 deaths 19th-century atheists 19th-century German non-fiction writers 19th-century German philosophers Atheist philosophers Critics of religions Freethought writers German atheists German humanists German male non-fiction writers German cognitive neuroscientists 19th-century German physicians German physiologists Materialists Members of the Second Chamber of the Estates of the Grand Duchy of Hesse Ontologists Physicians from Darmstadt German philosophers of science German philosophers of technology Founding members of the Freies Deutsches Hochstift
Ludwig Büchner
Physics
1,444
6,316
https://en.wikipedia.org/wiki/Water%20%28classical%20element%29
Water is one of the classical elements in ancient Greek philosophy along with air, earth and fire, in the Asian Indian system Panchamahabhuta, and in the Chinese cosmological and physiological system Wu Xing. In contemporary esoteric traditions, it is commonly associated with the qualities of emotion and intuition. Greek and Roman tradition Water was one of many archai proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BC) selected four archai for his four roots: air, fire, water and earth. Empedocles' roots became the four classical elements of Greek philosophy. Plato (427–347 BC) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with water is the icosahedron, which is formed from twenty equilateral triangles. This makes water the element with the greatest number of sides, which Plato regarded as appropriate because water flows out of one's hand when picked up, as if it is made of tiny little balls. Plato's student Aristotle (384–322 BC) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the Universe to form the sublunary sphere. According to Aristotle, water is both cold and wet and occupies a place between air and earth among the elemental spheres. In ancient Greek medicine, each of the four humours became associated with an element. Phlegm was the humour identified with water, since both were cold and wet. Other things associated with water and phlegm in ancient and medieval medicine included the season of winter, since it increased the qualities of cold and moisture, the phlegmatic temperament, the feminine and the western point of the compass. In alchemy, the chemical element of mercury was often associated with water and its alchemical symbol was a downward-pointing triangle. Indian tradition Ap () is the Vedic Sanskrit term for water, occurring in Classical Sanskrit only in the plural (sometimes re-analysed as a thematic singular), whence Hindi . The term is from PIE hxap, meaning water. In Hindu philosophy, the term refers to water as an element, one of the Panchamahabhuta, or "five great elements". In Hinduism, it is also the name of the deva, a personification of water (one of the Vasus in most later Puranic lists). The element water is also associated with Chandra or the moon and Shukra, who represent feelings, intuition and imagination. According to Jain tradition, water itself is inhabited by spiritual Jīvas called apakāya ekendriya. Ceremonial magic Water and the other Greek classical elements were incorporated into the Golden Dawn system. The elemental weapon of water is the cup. Each of the elements has several associated spiritual beings. The archangel of water is Gabriel, the angel is Taliahad, the ruler is Tharsis, the king is Nichsa and the water elementals are called Ondines. It is referred to the upper right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community. Modern witchcraft Water is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn. 
See also Water Sea and river deity Notes External links Different versions of the classical elements Classical elements Water in culture Esoteric cosmology History of astrology Technical factors of astrology Concepts in ancient Greek metaphysics
Water (classical element)
Astronomy
787
46,664,026
https://en.wikipedia.org/wiki/Modified%20pressure
Some systems in fluid dynamics involve a fluid being subject to conservative body forces. Since a conservative body force is the gradient of some potential function, it has the same effect as a gradient in fluid pressure. It is often convenient to define a modified pressure equal to the true fluid pressure plus the potential. Examples of conservative body forces include gravity and the centrifugal force in a rotating reference frame. See also Reduced gravity References Fluid dynamics
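A minimal worked form of this definition, assuming a constant-density fluid and writing the conservative body force per unit mass as the gradient of a potential Φ:

```latex
\[
  \mathbf{g} = -\nabla\Phi, \qquad
  \rho\,\frac{D\mathbf{u}}{Dt}
    = -\nabla p + \rho\,\mathbf{g} + \mu\,\nabla^{2}\mathbf{u}
    = -\nabla\!\left(p + \rho\,\Phi\right) + \mu\,\nabla^{2}\mathbf{u},
  \qquad P \equiv p + \rho\,\Phi .
\]
```

For uniform gravity Φ = gz, so the modified pressure is the familiar P = p + ρgz; in a rotating frame the centrifugal potential can be absorbed in the same way.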
Modified pressure
Chemistry,Engineering
88
67,513,406
https://en.wikipedia.org/wiki/2021%20Leaders%20Summit%20on%20Climate
The 2021 Leaders' Summit on Climate was a virtual climate summit on April 22–23, 2021, organized by the Joe Biden administration, with leaders from various countries. At the summit Biden announced that greenhouse gas emissions by the United States would be reduced by 50% - 52% relative to the level of 2005 by 2030. Overall, the commitments made at the summit reduce the gap between governments' current pledges and the 1.5 degrees target of the Paris Agreement by 12% - 14%. If the pledges are accomplished, greenhouse gas emissions will fall by 2.6% - 3.7% more in comparison to the pledges before the summit. The results of the summit were described by Climate Action Tracker as a step forward in the fight against climate change. Invited countries and their representatives Results At the summit Biden announced that greenhouse gas emissions by the United States would be reduced by 50% - 52% relative to the level of 2005 by 2030. Overall, the commitments made at the summit reduce the gap between governments' current pledges and the 1.5 degrees target of the Paris Agreement by 12% - 14%. If the pledges are accomplished, greenhouse gas emissions will fall by 2.6 - 3.7 GtCO2e more in comparison to the pledges before the summit. The results of the summit were described by Climate Action Tracker as a step forward in the fight against climate change, even though there is still a long way to go to reach the 1.5 degrees target. The most important commitments were made by the United States, the United Kingdom, the European Union, China and Japan. At the summit the Biden administration submitted a new Nationally Determined Contribution to the United Nations Framework Convention on Climate Change (UNFCCC), according to Climate Action Tracker "the biggest climate step made by any US government in history". At the summit Biden's administration launched a number of coalitions and initiatives to limit climate change and help to reduce its impacts, among others a Global Climate Ambition Initiative to help low-income countries achieve those targets, and a "Net-Zero Producers Forum, with Canada, Norway, Qatar, and Saudi Arabia, together representing 40% of global oil and gas production". Several countries increased their climate pledges at the summit. Several countries delivered vague promises and statements. In early May 2021, Climate Action Tracker released a more detailed report about the significance of the summit. According to the report the summit, together with the pledges made since September 2020, reduces the expected rise in temperature by 2100 by 0.2 degrees. If all pledges are fulfilled, the temperature will rise by 2.4 °C. However, if policies remain as they are now, it will rise by 2.9 °C. In the most optimistic scenario, if the countries also fulfill the pledges that are not part of the Paris Agreement, it will rise by 2.0 °C. Use of masks After the summit, claims spread that Joe Biden was the only leader there wearing a mask, which were later proved wrong, as at least 5 other world leaders were wearing masks. 
Notes References External links whitehouse.gov: President Biden Invites 40 World Leaders to Leaders Summit on Climate (March 26) Leaders Summit on Climate Summary of Proceedings (April 23) Remarks by President Biden at the Virtual Leaders Summit on Climate Opening Session (April 22) Remarks by President Biden at the Virtual Leaders Summit on Climate Session 2: Investing in Climate Solutions International conferences in the United States 2021 in international relations 2021 in American politics 2021 conferences 2021 in the United States April 2021 events in the United States Presidency of Joe Biden Politics of climate change Emissions reduction
2021 Leaders Summit on Climate
Chemistry
747
29,900,217
https://en.wikipedia.org/wiki/Ascochyta%20viciae
Ascochyta viciae is an ascomycete fungus species in the genus Ascochyta. Ascofuranone is an antibiotic first isolated from a strain of A. viciae in 1972; however, the strain was later re-identified as Acremonium sclerotigenum, and A. viciae itself cannot produce this antibiotic. See also List of Ascochyta species References viciae Fungus species
Ascochyta viciae
Biology
90
9,763,317
https://en.wikipedia.org/wiki/Sterility%20assurance%20level
In microbiology, sterility assurance level (SAL) is the probability that a single unit that has been subjected to sterilization nevertheless remains nonsterile. It is never possible to prove that all organisms have been destroyed, as the likelihood of survival of an individual microorganism is never zero. So SAL is used to express the probability of survival. For example, medical device manufacturers design their sterilization processes for an extremely low SAL, such as 10−6, which is a 1 in 1,000,000 chance of a non-sterile unit. SAL also describes the killing efficacy of a sterilization process. A very effective sterilization process has a very low SAL. Terminology Mathematically, SALs are probabilities, often very small but (by definition) always lying between zero and one. So when they are expressed in scientific notation their exponents are negative, as for instance, "The SAL of this process is 10−6". But the term SAL is sometimes also used to refer to a sterilization's efficacy. This usage (technically the multiplicative inverse) results in positive exponents, as in "The SAL of this process is 106". To avoid ambiguity from these inverse usages, some authors use the term log reduction (e.g., "This process gives a six-log reduction"). SALs can also be used to describe the microbial population that was destroyed by the sterilization process, though this is not the same as the probabilistic definition. What is often called a "log reduction" (technically a reduction by one order of magnitude) represents a 90% reduction in microbial population. Thus a process that achieves a "6-log reduction" (10−6) will theoretically reduce an initial population of one million organisms to very close to zero. The difference in meaning between this and the probabilistic sense can be seen from an example: if careful assays before and after indicate that a procedure has inactivated 90% of the biological agents in some unit, then the procedure can be correctly reported to have achieved a 1-log reduction, even though the probability that the unit is sterile is not 90% but 0. Because of all these ambiguities, contexts in which it is critical to prevent any confusion, such as in the setting of standards, require that SAL terminology be defined carefully and explicitly. Some literature expresses SALs more specifically as the "probability of a non-sterile unit". References Microbiology terms Sterilization (microbiology)
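The arithmetic linking log reduction, the surviving population, and the probability of a non-sterile unit can be illustrated with a short calculation. The sketch below is only illustrative: the function names are invented here, and the final probability assumes independent survival of organisms (a Poisson approximation), which is a modelling assumption rather than part of any standard.

import math

def surviving_population(initial_population: float, log_reduction: float) -> float:
    """Expected number of surviving organisms after a given log reduction."""
    return initial_population * 10 ** (-log_reduction)

def probability_nonsterile(initial_population: float, log_reduction: float) -> float:
    """Probability that at least one organism survives, assuming independent
    survival of organisms (Poisson approximation) -- the SAL in the
    probabilistic sense."""
    expected_survivors = surviving_population(initial_population, log_reduction)
    return 1 - math.exp(-expected_survivors)

# A unit carrying 1,000 organisms subjected to a 9-log-reduction process:
# on average 1e-6 organisms survive, so the chance of a non-sterile unit
# is about 1e-6 -- the "10^-6" SAL commonly cited for medical devices.
print(surviving_population(1_000, 9))     # 1e-06
print(probability_nonsterile(1_000, 9))   # ~9.999995e-07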
Sterility assurance level
Chemistry,Biology
527
50,530,275
https://en.wikipedia.org/wiki/WebUSB
WebUSB is a JavaScript application programming interface (API) specification for securely providing access to USB devices from web applications. It was published by the Web Platform Incubator Community Group. As of July 2021, it is in Draft Community status, and is supported by Chromium-based browsers. Introduction A Universal Serial Bus, or USB, is an industry-standard communication protocol used to communicate data across connectors and cables from computers to peripheral devices and/or other computers. WebUSB is a set of API calls that enable access to these hardware devices from web pages. WebUSB is developed by the World Wide Web Consortium (W3C). The WebUSB API provides a safe and developer-familiar means of communication with edge devices from web pages. The WebUSB API integrates into existing USB libraries and shortens the development cycle for integrating new devices into the web environment by not needing to wait for browser support for these devices. Early versions of WebUSB emerged as an alternative to Flash, Chrome Serial, and other custom approaches to connecting browsers to hardware. WebUSB aims to meet the four goals of any interface: being fast to build with, cross-platform, good-looking, and accessible. Application to Internet of Things (IoT) architecture WebUSB APIs are able to bridge hardware protocols to internet protocols, enabling the creation of uniform gateways linking edge devices to centralised networks. The explosion in computing ability over the last few decades has led to an increase in edge devices. Devices such as lights, thermometers, HVAC systems and motors are increasingly integrated into centralised internet control servers. These devices have evolved from isolated and previously non-integrated development environments. Consequently, they lack the uniform and consistent communication protocol necessary for immediate connectivity to a web service. The WebUSB API framework standardises disparate protocols and is able to expose non-standard Universal Serial Bus (USB) compatible devices to the web. WebUSB sits between the perception layer and the network layer. The main goals of software in this gateway are scalability, cost and reliability. The cloud-based deployment of WebUSB libraries enables it to cover scalability, its low-overhead deployment significantly lowers cost, and its continual in-use development over its lifetime has enabled the framework to attain a high degree of reliability. WebUSB has formed a cornerstone of the BIPES (Block-based Integrated Platform for Embedded Systems) architecture framework. This systems architecture model aims to reduce the complexity of IoT systems development by aggregating relevant software into 'Blocks' that are complete units of code and can be deployed to an edge device from a centralised cloud infrastructure. As already mentioned, the role of WebUSB is critically tied to its ability to communicate with embedded software through the USB communication protocol. Once the information is inside WebUSB's JavaScript environment it can be transposed and communicated through a variety of software protocols. In this particular architecture model WebUSB bridges the gap between embedded software and the web browser. The web browser then communicates with the cloud environment using uniform, WebUSB-constructed data. Security considerations WebUSB provides a web page with access to a connected edge device. The exposure of any device to the internet carries inherent risks and security concerns.
By design, USB ports trust the device they are connected to. Connecting such a port to an internet-facing application introduces a new set of security risks and massively expands the attack surface for would-be malicious actors. For instance, a malicious host web page could request data from a peripheral device, which the device would happily fulfil, thinking it was communicating through a standard USB connector. To mitigate this type of attack, WebUSB developed the requestDevice() function call. This notifies the user that the site is requesting access to the edge device. This is similar to the browser access controls used when a web page would like to access the built-in camera or microphone. Depending on the wariness of the user, this protocol can be enough to prevent certain attacks. A second protocol that was developed is the requirement that a request originate from a secure context. This ensures that both the code to be executed and the data returned are not intercepted or modified in transit. This security is implemented through the claimInterface() function. This is an OS-supported function, and it ensures that only a single execution instance can have user-space or kernel-space driver access to the device, preventing malicious code on a web page from opening a second channel of communication to the device. Other security considerations included creating a public registry of approved connections, but this idea was ultimately scrapped as it required vendors to develop devices with WebUSB in mind. The threat surface of USB, however, is bi-directional, and a malicious peripheral device could attack the host. An infected edge device cannot easily be mitigated by the WebUSB API. In many device configurations trusted USB ports are used to deliver firmware upgrades, and a malicious edge device could grant attackers persistence in a system. In light of the security concerns posed by WebUSB, it is only supported by an estimated 76% of browsers. Also notable is that support for WebUSB at the browser level has been volatile over time, with stretches of time in which certain browsers turned off access after the discovery of particular security threats. It is these security concerns that have plagued alternatives to WebUSB. In particular, Flash and Google Serial failed to take off because they could not be used with adequate answers to these fundamental security risks. Use in multi-factor authentication The ability to own and verify a digital identity on the internet is critical to interaction with internet-facing infrastructure. WebUSB, in combination with special-purpose devices and public identification registries, can be used as a key piece in an infrastructure-scale solution to digital identity on the internet. The WebUSB API library is able to standardise the connection of peripheral devices to web pages. The security investment in WebUSB makes it a suitable software component for connecting identifiable devices to the internet. Recent research has shown the fallibility of SMS-based authentication, highlighting how key pieces of the infrastructure can be subverted. Alternative proposals for securing a digital identity involve the use of biometric sensors and/or personal identifiers. However, while these are good at identifying an individual, it is only through WebUSB that they can adequately be integrated into the existing internet tech stack. Cryptographically secure solutions for personal identification exist with support from governments and specialised hardware.
However, these solutions lack a generalised specification for web-based infrastructure and are generally hard to support. Gateway support for such a communication protocol can be provided by software intermediaries such as WebUSB. A model system for multi-factor authentication uses WebUSB in tandem with identifying hardware such as an ID card built to ISO/IEC 7810:2003 ID-1 standards. This card would constitute a physical representation of an individual's identity. WebUSB would then act as a middleman, facilitating the transfer of data stored on the hardware to a given web server. The card would be digitally signed by an authorised party and would digitally connect to a server. This connection would require a device capable of reading ISO/IEC 14443 type B connections. In order to make this digital connection valid, WebUSB would serve as the software connector. Usage WebUSB will only work on supported browsers, for example Chrome. Due to privacy and security concerns it will also only work in a secure context, i.e. over HTTPS, and can only be called through user actions. For instance, in order to instantiate a connection, navigator.usb.requestDevice() can only be called through a user gesture, such as a touch or mouse click. Similarly, protection from WebUSB can be provided using a feature policy; for instance, a policy that disallows the "usb" feature would prevent WebUSB from running. To get access to devices visible to the browser two options are available. navigator.usb.requestDevice() will prompt the user to select which USB access is to be given, or navigator.usb.getDevices() will return a list of USB devices that the origin has access to. To better search for devices, WebUSB has a number of filtering options. These filters are passed into navigator.usb.requestDevice() as a JavaScript filtering object. These filters are vendorId, productId, classCode, protocolCode, serialNumber, and subclassCode. For example, connecting to an Arduino device could be done in the following way, where 0x2341 is the vendor ID assigned to Arduino in the list of USB IDs:

navigator.usb.requestDevice({ filters: [{ vendorId: 0x2341 }] })
  .then(device => {
    console.log(device.productName);
    console.log(device.manufacturerName);
  })
  .catch(error => { console.error(error); });

The USB device descriptor returned from the above snippet will contain all important information about the device, such as version, packet size, configuration options etc. The alternative call to navigator.usb.getDevices() will instead look like this:

navigator.usb.getDevices().then(devices => {
  devices.forEach(device => {
    console.log(device.productName);
    console.log(device.manufacturerName);
  });
});

In order to talk to the device there are a few important function calls to run through. device.open() will run through all the required steps of setting up the device; device.selectConfiguration() sets up the configuration, importantly how the device is powered and the number of interfaces. It is then important to claim the interface. This can be done through the device.claimInterface() function call. This will simulate a real wired connection and ensure that this web page is the only one able to read and write to the device until the connection is released. Finally the call device.controlTransferOut() will set up the device to communicate through the WebUSB Serial API. Once the set-up is done, bulk data can be sent to the device using device.transferOut(), and read back from the device using its sister function device.transferIn().
Interfaces In order to generalise interaction with hardware devices, WebUSB supports a number of interfaces that abstract away the specific hardware functionality. References External links WebUSB API - Draft Community Group Report, 7 July 2020 Application programming interfaces USB Web programming Web development Web standards
WebUSB
Engineering
2,177
14,343,887
https://en.wikipedia.org/wiki/Precision%20and%20recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Written as a formula: Both precision and recall are therefore based on relevance. Consider a computer program for recognizing dogs (the relevant element) in a digital photograph. Upon processing a picture which contains ten cats and twelve dogs, the program identifies eight dogs. Of the eight elements identified as dogs, only five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting a hypothesis-testing approach, where in this case, the null hypothesis is that a given item is irrelevant (not a dog), absence of type I and type II errors (perfect specificity and sensitivity) corresponds respectively to perfect precision (no false positives) and perfect recall (no false negatives). More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors (false negatives), for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). Introduction In a classification task, the precision for a class is the number of true positives (i.e. the number of items correctly labelled as belonging to the positive class) divided by the total number of elements labelled as belonging to the positive class (i.e. the sum of true positives and false positives, which are items incorrectly labelled as belonging to the class). Recall in this context is defined as the number of true positives divided by the total number of elements that actually belong to the positive class (i.e. the sum of true positives and false negatives, which are items which were not labelled as belonging to the positive class but should have been). Precision and recall are not particularly useful metrics when used in isolation. For instance, it is possible to have perfect recall by simply retrieving every single item. Likewise, it is possible to achieve perfect precision by selecting only a very small number of extremely likely items. 
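The dog-recognition example above can be reproduced in a few lines; this is a minimal illustrative sketch rather than a reference implementation, and the variable names are ours:

# Counts from the dog-recognition example: 12 dogs and 10 cats in the photo,
# the program flags 8 items as dogs, of which 5 really are dogs.
tp = 5   # true positives: dogs correctly identified
fp = 3   # false positives: cats mistaken for dogs
fn = 7   # false negatives: dogs that were missed
tn = 7   # true negatives: cats correctly excluded

precision = tp / (tp + fp)   # 5/8  = 0.625
recall    = tp / (tp + fn)   # 5/12 ~= 0.417

type_i_error_rate  = fp / (fp + tn)   # 3/10, as noted in the text
type_ii_error_rate = fn / (fn + tp)   # 7/12, and recall = 1 - 7/12

print(f"precision = {precision:.3f}, recall = {recall:.3f}")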
In a classification task, a precision score of 1.0 for a class C means that every item labelled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labelled correctly) whereas a recall of 1.0 means that every item from class C was labelled as belonging to class C (but says nothing about how many items from other classes were incorrectly also labelled as belonging to class C). Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other, but context may dictate if one is more valued in a given situation: A smoke detector is generally designed to commit many Type I errors (to alert in many situations when there is no danger), because the cost of a Type II error (failing to sound an alarm during a major fire) is prohibitively high. As such, smoke detectors are designed with recall in mind (to catch all real danger), even while giving little weight to the losses in precision (and making many false alarms). In the other direction, Blackstone's ratio, "It is better that ten guilty persons escape than that one innocent suffer," emphasizes the costs of a Type I error (convicting an innocent person). As such, the criminal justice system is geared toward precision (not convicting innocents), even at the cost of losses in recall (letting more guilty people go free). A brain surgeon removing a cancerous tumor from a patient's brain illustrates the tradeoffs as well: The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain they remove to ensure they have extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain cells they remove to ensure they extracts only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome). Usually, precision and recall scores are not discussed in isolation. A precision-recall curve plots precision as a function of recall; usually precision will decrease as the recall increases. Alternatively, values for one measure can be compared for a fixed level at the other measure (e.g. precision at a recall level of 0.75) or both are combined into a single measure. Examples of measures that are a combination of precision and recall are the F-measure (the weighted harmonic mean of precision and recall), or the Matthews correlation coefficient, which is a geometric mean of the chance-corrected variants: the regression coefficients Informedness (DeltaP') and Markedness (DeltaP). Accuracy is a weighted arithmetic mean of Precision and Inverse Precision (weighted by Bias) as well as a weighted arithmetic mean of Recall and Inverse Recall (weighted by Prevalence). Inverse Precision and Inverse Recall are simply the Precision and Recall of the inverse problem where positive and negative labels are exchanged (for both real classes and prediction labels). 
True Positive Rate and False Positive Rate, or equivalently Recall and 1 - Inverse Recall, are frequently plotted against each other as ROC curves and provide a principled mechanism to explore operating point tradeoffs. Outside of Information Retrieval, the application of Recall, Precision and F-measure is argued to be flawed, as they ignore the true negative cell of the contingency table and they are easily manipulated by biasing the predictions. The first problem is 'solved' by using Accuracy and the second problem is 'solved' by discounting the chance component and renormalizing to Cohen's kappa, but this no longer affords the opportunity to explore tradeoffs graphically. However, Informedness and Markedness are Kappa-like renormalizations of Recall and Precision, and their geometric mean, the Matthews correlation coefficient, thus acts like a debiased F-measure. Definition For classification tasks, the terms true positives, true negatives, false positives, and false negatives compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows: Precision and recall are then defined as: Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity. Precision vs. Recall Both precision and recall may be useful in cases where there is imbalanced data. However, it may be valuable to prioritize one metric over the other in cases where the outcome of a false positive or false negative is costly. For example, in medical diagnosis, a false positive test can lead to unnecessary treatment and expenses. In this situation, it is useful to value precision over recall. In other cases, the cost of a false negative is high, and recall may be a more valuable metric. For instance, the cost of a false negative in fraud detection is high, as failing to detect a fraudulent transaction can result in significant financial loss. Probabilistic Definition Precision and recall can be interpreted as (estimated) conditional probabilities: precision is the probability that the actual class is positive given that the predicted class is positive, while recall is the probability that the predicted class is positive given that the actual class is positive. Both quantities are, therefore, connected by Bayes' theorem. No-Skill Classifiers The probabilistic interpretation makes it easy to derive how a no-skill classifier would perform. A no-skill classifier is defined by the property that the joint probability of prediction and class is just the product of the unconditional probabilities, since the classification and the presence of the class are independent. For example, the precision of a no-skill classifier is simply a constant, determined by the probability/frequency with which the positive class occurs. A similar argument can be made for the recall of a no-skill classifier, which is simply the probability of a positive classification. Imbalanced data Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values.
Classifying all values as negative in this case gives 0.95 accuracy score. There are many metrics that don't suffer from this problem. For example, balanced accuracy (bACC) normalizes true positive and true negative predictions by the number of positive and negative samples, respectively, and divides their sum by two: For the previous example (95 negative and 5 positive samples), classifying all as negative gives 0.5 balanced accuracy score (the maximum bACC score is one), which is equivalent to the expected value of a random guess in a balanced data set. Balanced accuracy can serve as an overall performance metric for a model, whether or not the true labels are imbalanced in the data, assuming the cost of FN is the same as FP. The TPR and FPR are a property of a given classifier operating at a specific threshold. However, the overall number of TPs, FPs etc depend on the class imbalance in the data via the class ratio . As the recall (or TPR) depends only on positive cases, it is not affected by , but the precision is. We have that Thus the precision has an explicit dependence on . Starting with balanced classes at and gradually decreasing , the corresponding precision will decrease, because the denominator increases. Another metric is the predicted positive condition rate (PPCR), which identifies the percentage of the total population that is flagged. For example, for a search engine that returns 30 results (retrieved documents) out of 1,000,000 documents, the PPCR is 0.003%. According to Saito and Rehmsmeier, precision-recall plots are more informative than ROC plots when evaluating binary classifiers on imbalanced data. In such scenarios, ROC plots may be visually deceptive with respect to conclusions about the reliability of classification performance. Different from the above approaches, if an imbalance scaling is applied directly by weighting the confusion matrix elements, the standard metrics definitions still apply even in the case of imbalanced datasets. The weighting procedure relates the confusion matrix elements to the support set of each considered class. F-measure A measure that combines precision and recall is the harmonic mean of precision and recall, the traditional F-measure or balanced F-score: This measure is approximately the average of the two when they are close, and is more generally the harmonic mean, which, for the case of two numbers, coincides with the square of the geometric mean divided by the arithmetic mean. There are several reasons that the F-score can be criticized, in particular circumstances, due to its bias as an evaluation metric. This is also known as the measure, because recall and precision are evenly weighted. It is a special case of the general measure (for non-negative real values of ): Two other commonly used measures are the measure, which weights recall higher than precision, and the measure, which puts more emphasis on precision than recall. The F-measure was derived by van Rijsbergen (1979) so that "measures the effectiveness of retrieval with respect to a user who attaches times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure , the second term being the weighted harmonic mean of precision and recall with weights . Their relationship is where . Limitations as goals There are other parameters and strategies for performance metric of information retrieval system, such as the area under the ROC curve (AUC) or pseudo-R-squared. 
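To make the imbalanced-data example above concrete, the sketch below (illustrative only; the helper function is not from any particular library) computes accuracy, balanced accuracy and the F-measure for the "predict everything negative" classifier on 95 negative and 5 positive samples:

def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall (the F-measure)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# "Classify everything as negative" on a sample of 95 negatives and 5 positives.
tp, fp, fn, tn = 0, 0, 5, 95

accuracy = (tp + tn) / (tp + fp + fn + tn)   # 0.95, looks good but is misleading
tpr = tp / (tp + fn)                         # recall / sensitivity = 0.0
tnr = tn / (tn + fp)                         # specificity = 1.0
balanced_accuracy = (tpr + tnr) / 2          # 0.5, no better than random guessing

precision = tp / (tp + fp) if (tp + fp) else 0.0   # undefined here; treated as 0
print(accuracy, balanced_accuracy, f_beta(precision, tpr, beta=1.0))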
Multi-class evaluation Precision and recall values can also be calculated for classification problems with more than two classes. To obtain the precision for a given class, we divide the number of true positives by the classifier bias towards this class (number of times that the classifier has predicted the class). To calculate the recall for a given class, we divide the number of true positives by the prevalence of this class (number of times that the class occurs in the data sample). The class-wise precision and recall values can then be combined into an overall multi-class evaluation score, e.g., using the macro F1 metric. See also Uncertainty coefficient, also called proficiency Sensitivity and specificity Confusion matrix Scoring rule Base rate fallacy References Baeza-Yates, Ricardo; Ribeiro-Neto, Berthier (1999). Modern Information Retrieval. New York, NY: ACM Press, Addison-Wesley, Seiten 75 ff. Hjørland, Birger (2010); The foundation of the concept of relevance, Journal of the American Society for Information Science and Technology, 61(2), 217-237 Makhoul, John; Kubala, Francis; Schwartz, Richard; and Weischedel, Ralph (1999); Performance measures for information extraction, in Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February 1999 van Rijsbergen, Cornelis Joost "Keith" (1979); Information Retrieval, London, GB; Boston, MA: Butterworth, 2nd Edition, External links Information Retrieval – C. J. van Rijsbergen 1979 Computing Precision and Recall for a Multi-class Classification Problem Information retrieval evaluation Information science Bioinformatics
Precision and recall
Engineering,Biology
3,119
338,526
https://en.wikipedia.org/wiki/Humoral%20immunity
Humoral immunity is the aspect of immunity that is mediated by macromolecules – including secreted antibodies, complement proteins, and certain antimicrobial peptides – located in extracellular fluids. Humoral immunity is named so because it involves substances found in the humors, or body fluids. It contrasts with cell-mediated immunity. Humoral immunity is also referred to as antibody-mediated immunity. The study of the molecular and cellular components that form the immune system, including their function and interaction, is the central science of immunology. The immune system is divided into a more primitive innate immune system and an acquired or adaptive immune system of vertebrates, each of which contain both humoral and cellular immune elements. Humoral immunity refers to antibody production and the coinciding processes that accompany it, including: Th2 activation and cytokine production, germinal center formation and isotype switching, and affinity maturation and memory cell generation. It also refers to the effector functions of antibodies, which include pathogen and toxin neutralization, classical complement activation, and opsonin promotion of phagocytosis and pathogen elimination. History The concept of humoral immunity developed based on the analysis of antibacterial activity of the serum components. Hans Buchner is credited with the development of the humoral theory. In 1890, Buchner described alexins as "protective substances" that exist in the blood serum and other bodily fluids and are capable of killing microorganisms. Alexins, later redefined as "complements" by Paul Ehrlich, were shown to be the soluble components of the innate response that leads to a combination of cellular and humoral immunity. This discovery helped to bridge the features of innate and acquired immunity. Following the 1888 discovery of the bacteria that cause diphtheria and tetanus, Emil von Behring and Kitasato Shibasaburō showed that disease need not be caused by microorganisms themselves. They discovered that cell-free filtrates were sufficient to cause disease. In 1890, filtrates of diphtheria, later named diphtheria toxins, were used to vaccinate animals in an attempt to demonstrate that immunized serum contained an antitoxin that could neutralize the activity of the toxin and could transfer immunity to non-immune animals. In 1897, Paul Ehrlich showed that antibodies form against the plant toxins ricin and abrin, and proposed that these antibodies are responsible for immunity. Ehrlich, with his colleague von Behring, went on to develop the diphtheria antitoxin, which became the first major success of modern immunotherapy. The discovery of specified compatible antibodies became a major tool in the standardization of immunity and the identification of lingering infections. Antibodies Antibodies or Immunoglobulins are glycoproteins found within blood and lymph. Structurally, antibodies are large Y-shaped globular proteins. In mammals, there are five types of antibodies: immunoglobulin A, immunoglobulin D, immunoglobulin E, immunoglobulin G, and immunoglobulin M. Each immunoglobulin class differs in its biological properties and has evolved to deal with different antigens. Antibodies are synthesized and secreted by plasma cells that are derived from the B cells of the immune system. An antibody is used by the acquired immune system to identify and neutralize foreign objects like bacteria and viruses. Each antibody recognizes a specific antigen unique to its target. 
By binding their specific antigens, antibodies can cause agglutination and precipitation of antibody-antigen products, prime for phagocytosis by macrophages and other cells, block viral receptors, and stimulate other immune responses, such as the complement pathway. An incompatible blood transfusion causes a transfusion reaction, which is mediated by the humoral immune response. This type of reaction, called an acute hemolytic reaction, results in the rapid destruction (hemolysis) of the donor red blood cells by host antibodies. The cause is usually a clerical error, such as the wrong unit of blood being given to the wrong patient. The symptoms are fever and chills, sometimes with back pain and pink or red urine (hemoglobinuria). The major complication is that hemoglobin released by the destruction of red blood cells can cause acute kidney failure. Antibody production In humoral immune response, the naive B cells begin the maturation process in the bone marrow, gaining B-cell receptors (BCRs) along the cell surface. These BCRs are membrane-bound protein complexes that have a high binding affinity for specific antigens; this specificity is derived from the amino acid sequence of the heavy and light polypeptide chains that constitute the variable region of the BCR. Once a BCR interacts with an antigen, it creates a binding signal which directs the B cell to produce a unique antibody that only binds with that antigen. The mature B cells then migrate from the bone marrow to the lymph nodes or other lymphatic organs, where they begin to encounter pathogens. B cell activation When a B cell encounters an antigen, a signal is activated, the antigen binds to the receptor and is taken inside the B cell by endocytosis. The antigen is processed and presented on the B cell's surface again by MHC-II proteins. The MHC-II proteins are recognized by helper T cells, stimulating the production of proteins, allowing for B cells to multiply and the descendants to differentiate into antibody-secreting cells circulating in the blood. B cells can be activated through certain microbial agents without the help of T-cells and have the ability to work directly with antigens to provide responses to pathogens present. B cell proliferation The B cell waits for a helper T cell (TH) to bind to the complex. This binding will activate the TH cell, which then releases cytokines that induce B cells to divide rapidly, making thousands of identical clones of the B cell. These daughter cells either become plasma cells or memory cells. The memory B cells remain inactive here; later, when these memory B cells encounter the same antigen due to reinfection, they divide and form plasma cells. On the other hand, the plasma cells produce a large number of antibodies which are released freely into the circulatory system. Antibody-antigen reaction These antibodies will encounter antigens and bind with them. This will either interfere with the chemical interaction between host and foreign cells, or they may form bridges between their antigenic sites hindering their proper functioning. Their presence might also attract macrophages or killer cells to attack and phagocytose them. Complement system The complement system is a biochemical cascade of the innate immune system that helps clear pathogens from an organism. It is derived from many small blood plasma proteins that work together to disrupt the target cell's plasma membrane leading to cytolysis of the cell. 
The complement system consists of more than 35 soluble and cell-bound proteins, 12 of which are directly involved in the complement pathways. The complement system is involved in the activities of both innate immunity and acquired immunity. Activation of this system leads to cytolysis, chemotaxis, opsonization, immune clearance, and inflammation, as well as the marking of pathogens for phagocytosis. The proteins account for 5% of the serum globulin fraction. Most of these proteins circulate as zymogens, which are inactive until proteolytic cleavage. Three biochemical pathways activate the complement system: the classical complement pathway, the alternate complement pathway, and the mannose-binding lectin pathway. These processes differ only in the process of activating C3 convertase, which is the initial step of complement activation, and the subsequent process are eventually the same. The classical pathway is initiated through exposure to free-floating antigen-bound antibodies. This leads to enzymatic cleavage of smaller complement subunits which synthesize to form the C3 convertase. This differs from the mannose-binding lectin pathway, which is initiated by bacterial carbohydrate motifs, such as mannose, found on the surface of bacterium. After the binding process, the same subunit cleavage and synthesis occurs as in the classical pathway. The alternate complement pathway completely diverges from the previous pathways, as this pathway spontaneously initiates in the presence of hydrolyzed C3, which then recruits other subunits which can be cleaved to form C3 convertase. In all three pathways, once C3 convertase is synthesized, complements are cleaved into subunits which either form a structure called the membrane attack complex (MAC) on the bacterial cell wall to destroy the bacteria or act as cytokines and chemokines, amplifying the immune response. See also Cell-mediated immunity (vs. humoral immunity) Immune system Polyclonal response Serology References Further reading Immunology
Humoral immunity
Biology
1,852
5,544,986
https://en.wikipedia.org/wiki/Ingress%20router
An ingress router is a label switch router that is a starting point (source) for a given label-switched path (LSP). An ingress router may be an egress router or an intermediate router for any other LSP(s). Hence the role of ingress and egress routers is LSP-specific. Usually, the MPLS label is attached to an IP packet at the ingress router and removed at the egress router, whereas label swapping is performed at the intermediate routers. However, in special cases (such as LSP Hierarchy in RFC 4206, LSP Stitching and MPLS local protection) the ingress router could be pushing a label onto the label stack of an already existing MPLS packet (instead of an IP packet). Note that, although the ingress router is the starting point of an LSP, it may or may not be the source of the underlying IP packets. MPLS networking
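The push, swap and pop operations described above can be sketched with a toy label stack. Everything in the snippet (the dictionary layout, the label values) is an illustrative assumption, not part of any MPLS implementation:

# Toy model of an MPLS label stack travelling along one label-switched path.
packet = {"payload": "IP packet", "label_stack": []}

def ingress_push(pkt, label):
    """Ingress router: attach (push) the first label for this LSP."""
    pkt["label_stack"].insert(0, label)

def transit_swap(pkt, new_label):
    """Intermediate router: swap the top label for a new one."""
    pkt["label_stack"][0] = new_label

def egress_pop(pkt):
    """Egress router: remove the label, forwarding the payload as plain IP."""
    return pkt["label_stack"].pop(0)

ingress_push(packet, 100)   # ingress attaches label 100
transit_swap(packet, 200)   # an intermediate LSR swaps 100 -> 200
egress_pop(packet)          # egress removes the label
print(packet)               # {'payload': 'IP packet', 'label_stack': []}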
Ingress router
Technology
207
21,561,133
https://en.wikipedia.org/wiki/J.%20Schmalz
J. Schmalz GmbH is a manufacturer of automation technology based in Glatten, Germany. The family-run company is one of the leading suppliers of vacuum technology in the fields of automation and manual handling and is also active in the energy storage business area. Over the years, the company's products changed from razor blades to transport equipment and finally to vacuum technology. The Schmalz Group employs 1,800 people, 1,164 of whom work at J. Schmalz GmbH (2022). History Foundation and beginnings In 1910, Johannes Schmalz founded the Johannes Schmalz Rasierklingenfabrik in Glatten with the "Glattis" razor blade brand. Expansion at home and abroad The proliferation of the electric shaver required the company to change its focus. Subsequently, the production of trailers and transportation equipment for agriculture and industry began. In 1948, Johannes Schmalz's son Artur took over the management of the company and developed products in the field of light vehicles. When Kurt Schmalz took over the management of the company in 1984, focus shifted to vacuum technology. In 1990, his brother Wolfgang Schmalz joined the company's management. 1998 saw the opening of the company's first branch office in Switzerland. One year later, Schmalz entered the US market with the establishment of a branch in Raleigh, North Carolina. From 2009 to 2017, Schmalz expanded the Glatten site to include additional production facilities, a research and testing center and a communication center for employees and visitors. Schmalz invested around €6 million in the expansion, with energy-saving measures at its headquarters and the creation of creative spaces for employees. At the end of 2017, Wolfgang Schmalz stepped down from the Management Board and joined the company's Advisory Board. Acquisitions and realignment In 2017, Schmalz acquired all shares in Stuttgart-based Gesellschaft für Produktionssysteme GmbH (GPS), which was founded as an offshoot of the Fraunhofer Institute for Manufacturing Engineering and Automation. GPS is active in the fields of hardware, software and data aggregation, among others. The company entered the Australian market in 2018 with the acquisition of Millsom Hoists, who had been their distributor in the area for thirty years previously. In January 2022, the British company Palamatic Ltd., which develops handling systems for the pharmaceutical and chemical industries, was acquired. Two months later, Schmalz acquired the Swedish company Binar Handling AB and its subsidiaries in China, France, Germany and Turkey. The company is a manufacturer of cranes, balancers and end effectors based in Trollhättan, Sweden. In June 2023, Schmalz presented its own redox flow battery systems. Corporate structure J. Schmalz GmbH is the parent company of the Schmalz Group. The Schmalz Group consists of the companies Schmalz, Binar Handling, Palamatic and Gesellschaft für Produktionssysteme (GPS), among others. In the 2022 financial year, the Schmalz Group employed around 1,800 people internationally, including 1,164 employees at J. Schmalz GmbH, and it generated a revenue of €207.3 million. Schmalz is represented by trading partners and 31 locations in over 80 countries in Europe, Asia, Australia, and North America. In addition to Germany, Italy, Eastern Europe, the United States, China and Japan are among the most important markets for Schmalz. Products Schmalz is active in the fields of vacuum automation, manual handling and energy storage. 
The technologies and products developed by Schmalz are primarily aimed at the logistics, automotive, wood, electronics, food, plastics, and furniture industries. Vacuum technology In the field of vacuum technology, Schmalz produces gripping systems such as mounting elements, system monitors, suction pads, lifters, generators, or crane systems with vacuum technology for manual and automated production processes. The grippers are developed in different designs, sizes, and materials depending on the requirements of use. In addition to the grippers, Schmalz manufactures other components of vacuum systems, including valves and switches. The company also produces vacuum clamping systems that are used in CNC machine tools. For the battery technology sector, Schmalz manufactures automated grippers and end effectors that enable the precise transportation of cathodes, anodes, separators, and pouch cells as well as pressure-free and particle-free handling of battery, fuel and solar cells. Other components of the business division are production systems, plastics and handling technology. Energy storage The energy storage division develops intermediate storage options for renewable energy. Schmalz uses redox flow battery systems for this purpose, which use liquid media in external tanks to store energy. Sustainability In 2020, Schmalz became one of the first companies to join the Baden-Württemberg Climate Alliance. Members of the alliance aim to reduce their overall energy consumption, produce carbon dioxide-free and ultimately become climate-neutral. At this point, Schmalz was already generating around 80% of its energy from wind and hydropower, photovoltaics, solar parks and wood chip plants. The remaining electricity required was purchased from a regional CO2 -neutral supplier. Large stationary batteries are used for energy storage. By August 2022, Schmalz was able to generate more energy than the company actually needed. Schmalz is also a founding member of the Baden-Württemberg Sustainability Initiative (WIN). As part of the H2 Black Forest initiative, Schmalz is part of the ReduCO2 sustainability project, which aims to reduce CO2 emissions. Awards 2019: National German Sustainability Award for medium-sized companies, by a jury of experts from industry practice, civil society, consulting, and research 2020: Award as Germany's innovation leader by the FAZ Institute 2021: Grand Prize for Small and Medium-Sized Enterprises, awarded by the Oskar Patzelt Foundation References External links Official Website Industrial machine manufacturers Machine tool builders Pneumatic tool manufacturers Technology companies of Germany German brands Manufacturing companies established in 1910 1910 establishments in Germany Companies based in Baden-Württemberg Tool manufacturing companies of Germany
J. Schmalz
Engineering
1,261
73,249,226
https://en.wikipedia.org/wiki/List%20of%20large%20language%20models
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. This page lists notable large language models. For the training cost column, 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64E19 FLOP. Also, only the cost of the largest model is listed. See also List of chatbots Notes References Software comparisons
List of large language models
Technology
114
193,341
https://en.wikipedia.org/wiki/Offshore%20construction
Offshore construction is the installation of structures and facilities in a marine environment, usually for the production and transmission of electricity, oil, gas and other resources. It is also called maritime engineering. Construction and pre-commissioning are typically performed as much as possible onshore. To optimize the costs and risks of installing large offshore platforms, different construction strategies have been developed. One strategy is to fully construct the offshore facility onshore, and tow the installation to site floating on its own buoyancy. Bottom-founded structures are lowered to the seabed by de-ballasting (see for instance Condeep or Cranefree), whilst floating structures are held in position with substantial mooring systems. The size of offshore lifts can be reduced by making the construction modular, with each module being constructed onshore and then lifted using a crane vessel into place onto the platform. A number of very large crane vessels were built in the 1970s which allow very large single modules weighing up to 14,000 tonnes to be fabricated and then lifted into place. Specialist floating hotel vessels known as flotels or accommodation rigs are used to accommodate workers during the construction and hook-up phases. This is a high-cost activity due to the limited space and access to materials. Oil platforms are key fixed installations from which drilling and production activity is carried out. Drilling rigs are either floating vessels for deeper water or jack-up designs, which are barges with liftable legs. Both of these types of vessel are constructed in marine yards but are often involved during the construction phase to pre-drill some production wells. Other key factors in offshore construction are the weather windows, which define periods of relatively light weather during which continuous construction or other offshore activity can take place. Safety of personnel is another key construction parameter, an obvious hazard being a fall into the sea from which speedy recovery in cold waters is essential. Environmental issues are also often a major concern, and environmental impact assessment may be required during planning. The main types of vessels used for pipe laying are the "derrick barge (DB)", the "pipelay barge (LB)" and the "derrick/lay barge (DLB)" combination. Closed diving bells in offshore construction are mainly used for saturation diving in water depths greater than ; at lesser depths, surface-oriented divers are transported through the water in a wet bell or diving stage (basket), a suspended platform deployed from a launch and recovery system (LARS, or "A" frame) on the deck of the rig or a diving support vessel. The basket is lowered to the working depth and recovered at a controlled rate for decompression. Closed bells can go to , but are normally used at . Offshore construction includes foundations engineering, structural design, construction, and/or repair of offshore structures, both commercial and military. Outline Mariculture Offshore aquaculture Offshore windfarm Floating solar Offshore platform Fixed platforms, Spar (platform) Tension leg platform Floating production storage and offloading (FPSOs) Oil platform Semi-submersible platform Sea fort Accommodation platform Offshore embedded anchors Offshore geotechnical engineering Offshore drilling Land reclamation Artificial island Subsea Submarine pipelines Underwater habitat See also References Coastal construction Offshore engineering
Offshore construction
Engineering
641
26,998,547
https://en.wikipedia.org/wiki/Degrees%20of%20freedom%20%28physics%20and%20chemistry%29
In physics and chemistry, a degree of freedom is an independent physical parameter in the chosen parameterization of a physical system. More formally, given a parameterization of a physical system, the number of degrees of freedom is the smallest number of parameters whose values need to be known in order for it to always be possible to determine the values of all parameters in the chosen parameterization. In this case, any set of such parameters are called degrees of freedom. The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. So, if the time evolution of the system is deterministic (where the state at one instant uniquely determines its past and future position and velocity as a function of time), such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions – for example, the particle must move along a wire or on a fixed surface – then the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom. In classical mechanics, the state of a point particle at any given time is often described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system. The specification of all microstates of a system is a point in the system's phase space. In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom. These are degrees of freedom that contribute in a quadratic function to the energy of the system. Depending on what one is counting, there are several different ways that degrees of freedom can be defined, each with a different value. Thermodynamic degrees of freedom for gases By the equipartition theorem, internal energy per mole of gas equals (f)(R/2)T, where T is absolute temperature and the specific heat at constant volume is cv = (f)(R/2). R = 8.314 J/(K mol) is the universal gas constant, and "f" is the number of thermodynamic (quadratic) degrees of freedom, counting the number of ways in which energy can occur. Any atom or molecule has three degrees of freedom associated with translational motion (kinetic energy) of the center of mass with respect to the x, y, and z axes. These are the only degrees of freedom for a monoatomic species, such as noble gas atoms. For a structure consisting of two or more atoms, the whole structure also has rotational kinetic energy, where the whole structure turns about an axis. A linear molecule, where all atoms lie along a single axis, such as any diatomic molecule and some other molecules like carbon dioxide (CO2), has two rotational degrees of freedom, because it can rotate about either of two axes perpendicular to the molecular axis. A nonlinear molecule, where the atoms do not lie along a single axis, like water (H2O), has three rotational degrees of freedom, because it can rotate around any of three perpendicular axes. In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one.
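As a quick check of the arithmetic behind cv = (f)(R/2), the following sketch (an illustration only; the function names are ours) evaluates the molar heat capacity and the heat capacity ratio for a monatomic gas (f = 3) and a diatomic gas at room temperature (f = 5), using the ideal-gas relation cp = cv + R:

R = 8.314  # universal gas constant, J/(K mol)

def cv_molar(f: int) -> float:
    """Molar heat capacity at constant volume for f quadratic degrees of freedom."""
    return f * R / 2

def gamma(f: int) -> float:
    """Heat capacity ratio cp/cv, using cp = cv + R for an ideal gas."""
    return (cv_molar(f) + R) / cv_molar(f)

print(cv_molar(3), gamma(3))  # monatomic: ~12.47 J/(K mol), gamma = 5/3
print(cv_molar(5), gamma(5))  # diatomic at room temperature: ~20.79 J/(K mol), gamma = 7/5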
A structure consisting of two or more atoms also has vibrational energy, where the individual atoms move with respect to one another. A diatomic molecule has one molecular vibration mode: the two atoms oscillate back and forth with the chemical bond between them acting as a spring. A molecule with N atoms has more complicated modes of molecular vibration, with 3N − 5 vibrational modes for a linear molecule and 3N − 6 modes for a nonlinear molecule. As specific examples, the linear CO2 molecule has 4 modes of oscillation, and the nonlinear water molecule has 3 modes of oscillation. Each vibrational mode has two energy terms: the kinetic energy of the moving atoms and the potential energy of the spring-like chemical bond(s). Therefore, the number of vibrational energy terms is 2(3N − 5) for a linear molecule and 2(3N − 6) for a nonlinear molecule. Both the rotational and vibrational modes are quantized, requiring a minimum temperature to be activated. The "rotational temperature" to activate the rotational degrees of freedom is less than 100 K for many gases. For N2 and O2, it is less than 3 K. The "vibrational temperature" necessary for substantial vibration is between 10^3 K and 10^4 K, 3521 K for N2 and 2156 K for O2. Typical atmospheric temperatures are not high enough to activate vibration in N2 and O2, which comprise most of the atmosphere. (See the next figure.) However, the much less abundant greenhouse gases keep the troposphere warm by absorbing infrared from the Earth's surface, which excites their vibrational modes. Much of this energy is reradiated back to the surface in the infrared through the "greenhouse effect." Because room temperature (≈298 K) is over the typical rotational temperature but lower than the typical vibrational temperature, only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio. This is why the heat capacity ratio is ≈ 5/3 for monatomic gases and ≈ 7/5 for diatomic gases at room temperature. Since the air is dominated by diatomic gases (with nitrogen and oxygen contributing about 99%), its molar internal energy is close to (5/2)RT, determined by the 5 degrees of freedom exhibited by diatomic gases. See the graph at right. For 140 K < T < 380 K, cv differs from (5/2)Rd by less than 1%, where Rd is the specific gas constant of dry air. Only at temperatures well above temperatures in the troposphere and stratosphere do some molecules have enough energy to activate the vibrational modes of N2 and O2. The specific heat at constant volume, cv, increases slowly toward (7/2)Rd as temperature increases above T = 400 K, where cv is 1.3% above (5/2)Rd = 717.5 J/(K kg). Counting the minimum number of co-ordinates to specify a position One can also count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows: For a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space. Thus its degree of freedom in a 3-D space is 3. For a body consisting of 2 particles (ex. a diatomic molecule) in a 3-D space with constant distance between them (let's say d) we can show (below) its degrees of freedom to be 5. Let's say one particle in this body has the coordinate (x1, y1, z1) and the other has the coordinate (x2, y2, z2) with z2 unknown. Application of the formula for distance between two coordinates results in one equation with one unknown, in which we can solve for z2. One of x1, x2, y1, y2, z1, or z2 can be unknown.
Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules typically makes negligible contributions to the heat capacity. This is because these degrees of freedom are frozen out: the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperature (on the order of k_B T).

Independent degrees of freedom

The set of degrees of freedom X_1, ..., X_N of a system is independent if the energy associated with the set can be written in the following form:
E = Σ_{i=1}^{N} E_i(X_i),
where E_i is a function of the sole variable X_i.

Example: if X_1 and X_2 are two degrees of freedom, and E is the associated energy:
If E = X_1^4 + X_2^4, then the two degrees of freedom are independent.
If E = X_1^4 + X_1 X_2 + X_2^4, then the two degrees of freedom are not independent. The term involving the product of X_1 and X_2 is a coupling term that describes an interaction between the two degrees of freedom.

For i from 1 to N, the value of the i-th degree of freedom X_i is distributed according to the Boltzmann distribution. Its probability density function is the following:
p_i(X_i) = exp(−E_i(X_i)/(k_B T)) / ∫ exp(−E_i(X_i)/(k_B T)) dX_i.

In this section, and throughout the article, the brackets ⟨ ⟩ denote the mean of the quantity they enclose.

The internal energy of the system is the sum of the average energies associated with each of the degrees of freedom:
⟨E⟩ = Σ_{i=1}^{N} ⟨E_i⟩.

Quadratic degrees of freedom

A degree of freedom X_i is quadratic if the energy terms associated with this degree of freedom can be written as
E = α X_i^2 + β X_i Y,
where Y is a linear combination of other quadratic degrees of freedom.

Example: if X_1 and X_2 are two degrees of freedom, and E is the associated energy:
If E = X_1^4 + X_1^3 X_2 + X_2^4, then the two degrees of freedom are not independent and non-quadratic.
If E = X_1^4 + X_2^4, then the two degrees of freedom are independent and non-quadratic.
If E = X_1^2 + X_1 X_2 + X_2^2, then the two degrees of freedom are not independent but are quadratic.
If E = X_1^2 + X_2^2, then the two degrees of freedom are independent and quadratic.
For example, in Newtonian mechanics, the dynamics of a system of quadratic degrees of freedom are controlled by a set of homogeneous linear differential equations with constant coefficients.

Quadratic and independent degrees of freedom

X_1, ..., X_N are quadratic and independent degrees of freedom if the energy associated with a microstate of the system they represent can be written as:
E = Σ_{i=1}^{N} α_i X_i^2.

Equipartition theorem

In the classical limit of statistical mechanics, at thermodynamic equilibrium, the internal energy of a system of N quadratic and independent degrees of freedom is:
U = ⟨E⟩ = N k_B T / 2.
Here, the mean energy associated with a degree of freedom is:
⟨E_i⟩ = ∫ α_i X_i^2 exp(−α_i X_i^2/(k_B T)) dX_i / ∫ exp(−α_i X_i^2/(k_B T)) dX_i = k_B T / 2.
Since the degrees of freedom are independent, the internal energy of the system is equal to the sum of the mean energy associated with each degree of freedom, which demonstrates the result (a short numerical check of the k_B T / 2 value is sketched at the end of this entry).

Generalizations

The description of a system's state as a point in its phase space, although mathematically convenient, is thought to be fundamentally inaccurate. In quantum mechanics, the motion degrees of freedom are superseded by the concept of the wave function, and operators which correspond to other degrees of freedom have discrete spectra. For example, the intrinsic angular momentum operator (which corresponds to the rotational freedom) for an electron or photon has only two eigenvalues. This discreteness becomes apparent when the action has an order of magnitude of the Planck constant, and individual degrees of freedom can be distinguished.

References

Physical quantities
Dimension
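As a quick numerical check of the equipartition result derived above, the ratio of integrals defining ⟨E_i⟩ can be evaluated on a grid: under the Boltzmann weight, the mean of a quadratic energy term α X² comes out close to k_B T / 2 regardless of the coefficient α. The following Python sketch is an illustration only (the function name, grid parameters, and test values are assumptions, not part of the original article):

import numpy as np

def mean_quadratic_energy(alpha, kT, x_max=50.0, n=200_001):
    # <alpha * x^2> under the Boltzmann weight exp(-alpha * x^2 / kT); on a uniform
    # grid the spacing cancels between numerator and denominator.
    x = np.linspace(-x_max, x_max, n)
    weight = np.exp(-alpha * x**2 / kT)
    return np.sum(alpha * x**2 * weight) / np.sum(weight)

kT = 1.0
for alpha in (0.5, 1.0, 4.0):
    print(alpha, mean_quadratic_energy(alpha, kT))  # each value is ~0.5 = kT/2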
Degrees of freedom (physics and chemistry)
Physics,Mathematics
2,098
2,893,811
https://en.wikipedia.org/wiki/National%20Longitudinal%20Surveys
The National Longitudinal Surveys (NLS) are a set of surveys sponsored by the Bureau of Labor Statistics (BLS) of the U.S. Department of Labor. These surveys have gathered information at multiple points in time on the labor market experiences and other significant life events of several groups of men and women. Each of the NLS samples consists of several thousand individuals, many of whom have been surveyed over several decades.

Surveys

The National Longitudinal Survey of Youth 1997 (NLSY97) began in 1997 with 8,984 men and women born in 1980-84 (ages 12–17 in 1997). Sample members were interviewed annually from 1997 to 2011 and biennially thereafter. The 2015 interview was conducted with 7,103 men and women ages 30–36. Data are available from Round 1 (1997–98) to Round 17 (2015–16).

The National Longitudinal Survey of Youth 1979 (NLSY79) began in 1979 with 12,686 men and women born in 1957-64 (ages 14–22 in 1979). Sample members were interviewed annually from 1979 to 1994 and biennially thereafter. Oversamples of military and economically disadvantaged, nonblack/non-Hispanic respondents were dropped in 1985 and 1991, leaving a sample size of 9,964. The 2014 interview (Round 26) was conducted with 7,071 men and women ages 49–58.

The NLSY79 Children and Young Adults (NLSCYA) survey began in 1986 with children born to female NLSY79 respondents. Biennial data collection consists of interviews with the mothers and interviews with the children themselves; from 1994 onward, children turning age 15 or older during the survey year have been administered a Young Adult questionnaire that is similar to the NLSY79 questionnaire. In 2014, 276 children (ages 0–14) and 5,735 young adults (ages 15–42) were interviewed. To date, about 10,500 children have been interviewed in at least one survey round.

The National Longitudinal Surveys of Young Women and Mature Women (NLSW) comprised two separate surveys. The Young Women's survey began in 1968 with 5,159 women born in 1943-53 (ages 14–24 in 1968). Sample members were interviewed 22 times from 1968 to 2003. The final interview in 2003 was conducted with 2,857 women ages 49–59. The Mature Women's survey began in 1967 with 5,083 women born in 1922-37 (ages 30–44 in 1967). Sample members were interviewed 21 times from 1967 to 2003. The final interview in 2003 was conducted with 2,237 women ages 66–80.

The National Longitudinal Surveys of Young Men and Older Men (NLSM) comprised two separate surveys. The Young Men's survey began in 1966 with 5,225 men born in 1941-51 (ages 14–24 in 1966). Sample members were interviewed 12 times from 1966 to 1981. The Older Men's survey began in 1966 with 5,020 men born in 1906-21 (ages 45–59 in 1966). Sample members were interviewed 12 times from 1966 to 1983. A final interview in 1990 was conducted with 2,092 respondents who were 69–83 years old, and with 2,206 family members of deceased respondents.

NLSY97

The National Longitudinal Survey of Youth 1997 (NLSY97), the newest survey in the NLS program, is a sample of 8,984 young men and women born during the years 1980 through 1984 and living in the United States when first interviewed. Survey respondents were ages 12 to 17 when first interviewed in 1997. The U.S. Department of Labor selected the NLSY97 cohort to enable research on youths' transition from school to the labor market and into adulthood. Data from the first 17 rounds of data collection are available to researchers.
Round 17 consisted of 7,103 respondents, ages 30–36, and was completed in 2015–2016, with data made available in the fall of 2017. In addition, survey staff conducted special high school and college transcript data collections to supplement the data on schooling provided by respondents. Many NLSY97 respondents also participated in a special administration of the computer-adaptive form of the Armed Services Vocational Aptitude Battery, and scores from that test are available for approximately 80 percent of sample members.

NLSY79

The National Longitudinal Survey of Youth 1979 (NLSY79) is a sample of 12,686 men and women born during the years 1957 through 1964 and living in the United States when the survey began. Survey respondents were ages 14 to 22 when first interviewed in 1979. The U.S. Department of Labor selected the NLSY79 cohort to replicate the NLS of Young Women and the NLS of Young Men, which began in the 1960s. The NLSY79 also was designed to help researchers and policymakers evaluate the expanded employment and training programs for youths legislated by the 1977 amendments to the Comprehensive Employment and Training Act (CETA). Data are available for this cohort through 2014, when the 7,071 men and women in the sample were ages 49 to 58. Data from the 2016–2017 survey will be released in late 2018/early 2019. To supplement the main data collection, survey staff conducted special high school and transcript surveys. NLSY79 respondents also participated in a special administration of the Armed Services Vocational Aptitude Battery.

NLSCYA

Funded by the Bureau of Labor Statistics (BLS) and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NLSY79 Children and Young Adults (NLSCYA) surveys contain comprehensive information on the experiences of children born to female NLSY79 respondents. The collection of data on these NLSY79 children began in 1986, and a battery of cognitive, socioemotional, and physiological assessments has been administered biennially since that year. Their mothers also provide reports on their children's health, temperament, motor and social development, behavior problems, school activities, and home environments. Beginning in 1988, children age 10 and older have answered a self-administered set of questions about family, friends, jobs, school, after-school activities, computer use, religious attendance, smoking, alcohol and drug use, and more. Starting in 1994, children who have reached age 15 by December 31 of the survey year complete a questionnaire that is similar to the main NLSY79 survey and asks about work experiences, training, schooling, health, fertility, parenting, and attitudes. The Young Adult questionnaire, conducted primarily by telephone, replaced the child assessments for young adults 15 years or older. Young adults also report on sensitive topics such as parent-child conflict, participation in delinquent or criminal activities, use of controlled and uncontrolled substances, sexual activity, volunteer activities, and expectations for the future. The data collected about the children can be linked with information collected from their mothers in the main NLSY79 survey. The NLSY79 Child and Young Adult surveys are a valuable resource for studying how individual and family characteristics and experiences affect the well-being and development of children, adolescents, and young adults.
Original cohorts

The NLSW and NLSM make up the original four cohorts, which were designed to represent the U.S. civilian noninstitutional population at the time of the initial survey. The surveys were funded by the Office of Manpower, Automation, and Training (now the Employment and Training Administration) of the Department of Labor, and conducted by the Center for Human Resource Research (CHRR) of Ohio State University.

The National Longitudinal Surveys of Young Women and Mature Women (NLSW)

The NLS of Young Women was a sample of 5,159 women who were ages 14 to 24 in 1968. The survey was one of four original groups first interviewed when the NLS program began in the mid-1960s. The U.S. Department of Labor selected the Young Women cohort to enable research on the employment patterns of women who were finishing school, making initial career decisions, and starting families. Data are available for this cohort from 1968 through 2003, when the survey was discontinued. The survey covered a variety of topics, including characteristics of jobs, labor market status, education, health and physical condition, marital and family characteristics, income and assets, attitudes and perspectives, retirement, environmental characteristics, and transfers of time and money. A special survey of the high schools of Young Women respondents provided additional information about their educational experiences. The survey also included questions on topics specific to the life stage of respondents, such as educational experiences and plans in the earlier years of the survey, childcare issues and fertility expectations a few years later, health, pension, and retirement information and, finally, transfers of time and money between respondents, their parents, and their children.

The NLS of Mature Women was a sample of 5,083 women who were ages 30 to 44 in 1967. The survey was one of four original groups first interviewed when the NLS program began in the mid-1960s. The U.S. Department of Labor selected the Mature Women cohort to enable research on the employment patterns of women who were reentering the workforce and balancing the roles of homemaker, mother, and labor force participant. Data are available for this cohort from 1967 through 2003, when the survey was discontinued. The survey covered a variety of topics, including characteristics of jobs, labor market status, education, health and physical condition, marital and family characteristics, income and assets, attitudes and perspectives, retirement, environmental characteristics, and transfers of time and money. The survey also included questions on topics specific to the life stage of respondents, such as childcare issues in the earlier years of the survey, health, pension, and retirement information and, finally, transfers of time and money between respondents, their parents, and their children.

The National Longitudinal Surveys of Young Men and Older Men (NLSM)

The NLS of Young Men was a sample of 5,225 men who were ages 14 to 24 in 1966. The survey was one of four original groups first interviewed when the NLS program began in the mid-1960s. The U.S. Department of Labor selected the Young Men cohort to enable research on the employment patterns of men who were completing school and entering the workforce or joining the military, and who were thus making initial career and job decisions that would impact their employment in the coming decades.
Data are available for this cohort from 1966 through 1981, when the survey was discontinued. A special survey of the high schools of Young Men respondents provided additional information about their educational experiences. The survey covered a variety of topics, including characteristics of jobs, labor market status, education, health and physical condition, marital and family characteristics, income and assets, attitudes and perspectives, environmental characteristics, military service, and training.

The NLS of Older Men was a sample of 5,020 men who were ages 45 to 59 in 1966. The survey was one of four original groups first interviewed when the NLS program began in the mid-1960s. The U.S. Department of Labor selected the Older Men cohort to enable research on the employment patterns of men who were nearing the completion of their careers, making decisions about the timing and extent of their labor force withdrawal, and planning for retirement. Data are available for this cohort from 1966 through 1983. Additional information was collected in 1990 during final interviews with the remaining respondents and the widows or other family members of deceased sample members. The survey covered a variety of topics, including characteristics of jobs, labor market status, education, health and physical condition, marital and family characteristics, income and assets, attitudes and perspectives, retirement, environmental characteristics, and military service.

Survey topics

Demographic and family background, education, military experiences, job characteristics and training, labor market status and histories, marital and family characteristics, income and assets, transfers of time and money, retirement, geographic location and mobility, health, nutrition, and physical activity, fertility and parenting, sexual activity, attitudes and expectations, behaviors and perspectives, environmental characteristics, and civic engagement. Additionally, the NLSY79 Child and Young Adult surveys include assessments of the quality of the home environment, cognitive development, temperament, and motor, social, and emotional development.

Accessing the surveys

NLS public-use data for each cohort are available at no cost via the Investigator, an online search and extraction site that enables individuals to review NLS variables and create their own data sets. An application is necessary to access NLS geocode and school survey data. The geocode application document is available on the BLS website.

See also

List of household surveys in the United States

References

Additional resources

Meet Herbert S. Parnes, The First Director of the NLS Program at Ohio State
Donna S. Rothstein, "Leaving a job during the Great Recession: evidence from the National Longitudinal Survey of Youth 1979," Monthly Labor Review, U.S. Bureau of Labor Statistics, December 2018.
Pierret, Charles R. "The National Longitudinal Survey of Youth: 1979 Cohort at 25." Monthly Labor Review 128, 2 (February 2005): 3-7. https://www.bls.gov/opub/mlr/2005/02/art1full.pdf

Demography
Economic data
Reports of the Bureau of Labor Statistics
Surveys (human research)
Longitudinal studies
National Longitudinal Surveys
Environmental_science
2,743
72,749,655
https://en.wikipedia.org/wiki/Diaporthomycetidae
Diaporthomycetidae is a subclass of sac fungi under the class Sordariomycetes. The subclass was formed in 2015 for fungal taxa that had previously been placed within the subclass Sordariomycetidae but that were phylogenetically and morphologically distinct from the genera in Sordariomycetidae. Members of Diaporthomycetidae can occur in both aquatic and terrestrial habitats as saprobes (living on decayed dead or waste organic matter), pathogens, or endophytes (living within a plant for at least part of the life cycle without causing apparent disease). In 2017, up to 15 orders and 65 families were recognized in this subclass. More orders may be confirmed by DNA-based phylogenetic analysis studies published in 2021.

Distribution

Members of the subclass have a cosmopolitan distribution, having been recorded in China, Thailand, and parts of Europe. They can be found in freshwater habitats.

Orders

As accepted by Wijayawardene et al. 2020:
Annulatascales - family Annulatascaceae (with 13 genera)
Atractosporales - Atractosporaceae (2), Conlariaceae (2), Pseudoproboscisporaceae (3)
Calosphaeriales - Calosphaeriaceae (4), Pleurostomataceae (1)
Diaporthales - Apiosporopsidaceae (1), Apoharknessiaceae (2), Asterosporiaceae (1), Auratiopycnidiellaceae (1), Coryneaceae (2), Cryphonectriaceae (27), Cytosporaceae (6), Diaporthaceae (15), Diaporthosporellaceae (1), Diaporthostomataceae (1), Dwiroopaceae (1), Erythrogloeaceae (4), Foliocryphiaceae (2), Gnomoniaceae (37), Harknessiaceae (2), Juglanconidaceae (2), Lamproconiaceae (2), Macrohilaceae (1), Mastigosporellaceae (1), Melanconidaceae (1), Melanconiellaceae (8), Neomelanconiellaceae (1), Phaeoappendicosporaceae (2), Prosopidicolaceae (1), Pseudomelanconidaceae (3), Pseudoplagiostomataceae (1), Pyrisporaceae (1), Schizoparmaceae (1), Stilbosporaceae (4), Sydowiellaceae (21), Synnemasporellaceae (1), Tubakiaceae (8)
Distoseptisporales - Distoseptisporaceae (1)
Jobellisiales - Jobellisiaceae (1)
Magnaporthales - Ceratosphaeriaceae (1), Magnaporthaceae (24), Ophioceraceae (2), Pseudohalonectriaceae (1), Pyriculariaceae (11)
Myrmecridiales - Myrmecridiaceae (2), Xenodactylariaceae (1)
Ophiostomatales - Kathistaceae (3), Ophiostomataceae (12)
Pararamichloridiales - Pararamichloridiaceae (1, Pararamichloridium)
Phomatosporales - Phomatosporaceae (3)
Sporidesmiales - Sporidesmiaceae (1, Sporidesmium)
Tirisporellales - Tirisporellaceae (3)
Togniniales - Togniniaceae (1)
Xenospadicoidales - Xenospadicoidaceae (5)

Incertae sedis

As accepted by Wijayawardene et al. 2020:

Families
Barbatosphaeriaceae - Barbatosphaeria (9), Ceratostomella (18), Xylomelasma (4)
Papulosaceae - Brunneosporella (1), Fluminicola (5), Papulosa (1), Wongia (3)
Rhamphoriaceae - Rhamphoria (15), Rhamphoriopsis (1), Rhodoveronaea (1), Xylolentia (1)
Thyridiaceae - Pleurocytospora (3), Thyridium (34)
Trichosphaeriaceae - Aquidictyomyces (1)*, Brachysporium (25), Collematospora (1), Coniobrevicolla (1), Eriosphaeria (24), Koorchaloma Subram. (11), Rizalia (6), Schweinitziella (4), Setocampanula (1), Trichosphaeria (20), Unisetosphaeria (1)
Woswasiaceae - Cyanoannulus (1), Woswasia (1), Xylochrysis (1)

Genera incertae sedis
Aquimonospora (1), Aquaticola (5), Fusoidispora (1), Kaarikia (1)*, Platytrachelon (1), Proliferophorum (1), Pseudoconlarium (1), Pseudostanjehughesia (1)

References

Sordariomycetes
Fungus subclasses
Fungus taxa
Taxa described in 2015
Diaporthomycetidae
Biology
1,099