| source | text |
|---|---|
https://en.wikipedia.org/wiki/Dvoretzky%E2%80%93Kiefer%E2%80%93Wolfowitz%20inequality | In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) bounds how close an empirically determined distribution function will be to the distribution function from which the empirical samples are drawn. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who in 1956 proved the inequality
$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl(F_n(x)-F(x)\bigr)>\varepsilon\Bigr)\le C e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon>0,$$
with an unspecified multiplicative constant C in front of the exponent on the right-hand side.
In 1990, Pascal Massart proved the inequality with the sharp constant C = 2, confirming a conjecture due to Birnbaum and McCarty. In 2021, Michael Naaman proved the multivariate version of the DKW inequality and generalized Massart's tightness result to the multivariate case, which results in a sharp constant of twice the dimension k of the space in which the observations are found: C = 2k.
The DKW inequality
Given a natural number n, let X1, X2, …, Xn be real-valued independent and identically distributed random variables with cumulative distribution function F(·). Let Fn denote the associated empirical distribution function defined by
$$F_n(x)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{X_i\le x\}},\qquad x\in\mathbb{R},$$
so $F(x)$ is the probability that a single random variable $X$ is smaller than $x$, and $F_n(x)$ is the fraction of random variables that are smaller than $x$.
The Dvoretzky–Kiefer–Wolfowitz inequality bounds the probability that the random function Fn differs from F by more than a given constant ε > 0 anywhere on the real line. More precisely, there is the one-sided estimate
$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl(F_n(x)-F(x)\bigr)>\varepsilon\Bigr)\le e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon\ge\sqrt{\tfrac{1}{2n}\ln 2},$$
which also implies a two-sided estimate
$$\Pr\Bigl(\sup_{x\in\mathbb{R}}\bigl|F_n(x)-F(x)\bigr|>\varepsilon\Bigr)\le 2e^{-2n\varepsilon^2}\qquad\text{for every }\varepsilon>0.$$
This strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence as n tends to infinity. It also estimates the tail probability of the Kolmogorov–Smirnov statistic. The inequalities above follow from the case where F corresponds to the uniform distribution on [0,1], as Fn has the same distribution as Gn(F), where Gn is the empirical distribution function of U1, U2, …, Un, independent Uniform(0,1) random variables, and noting that
$$\sup_{x\in\mathbb{R}}\bigl|F_n(x)-F(x)\bigr|=\sup_{x\in\mathbb{R}}\bigl|G_n(F(x))-F(x)\bigr|\le\sup_{0\le t\le 1}\bigl|G_n(t)-t\bigr|,$$
with equality if and only if F is continuous. |
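The two-sided form with Massart's constant C = 2 translates directly into a distribution-free confidence band for F. Below is a minimal sketch of that calculation (assuming NumPy; the function and variable names are illustrative, not from any source):

```python
import numpy as np

def dkw_band(samples, alpha=0.05):
    """Two-sided DKW confidence band for the true CDF F.

    With probability at least 1 - alpha, F lies within eps of the
    empirical CDF everywhere, where solving
    2 * exp(-2 * n * eps**2) = alpha gives
    eps = sqrt(log(2 / alpha) / (2 * n)).
    """
    x = np.sort(np.asarray(samples))
    n = x.size
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    ecdf = np.arange(1, n + 1) / n          # F_n at the sorted sample points
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, lower, upper

# Example: a 95% band from 1000 draws of a standard normal.
rng = np.random.default_rng(0)
x, lo, hi = dkw_band(rng.standard_normal(1000))
```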
https://en.wikipedia.org/wiki/Midkine | Midkine (MK or MDK), also known as neurite growth-promoting factor 2 (NEGF2), is a protein that in humans is encoded by the MDK gene.
Midkine is a basic heparin-binding growth factor of low molecular weight, and forms a family with pleiotrophin (NEGF1, 46% homologous with MK). It is a nonglycosylated protein, composed of two domains held by disulfide bridges. It is a developmentally important retinoic acid-responsive gene product strongly induced during mid-gestation, hence the name midkine. Restricted mainly to certain tissues in the normal adult, it is strongly induced during oncogenesis, inflammation and tissue repair.
MK is pleiotropic, capable of exerting activities such as cell proliferation, cell migration, angiogenesis and fibrinolysis. A molecular complex containing receptor-type tyrosine phosphatase zeta (PTPζ), low density lipoprotein receptor-related protein (LRP1), anaplastic lymphoma kinase (ALK) and syndecans is considered to be its receptor.
Role in cancer
MK appears to enhance the angiogenic and proliferative activities of cancer cells. The expression of MK (mRNA and protein expression) has been found to be elevated in multiple cancer types, such as neuroblastoma, glioblastoma, Wilms' tumors, thyroid papillary carcinomas, colorectal, liver, ovary, bladder, breast, lung, esophageal, stomach, and prostate cancers. Serum MK in normal individuals is usually below 0.5–0.6 ng/ml, whereas patients with these malignancies have much higher levels. In some cases, these elevated levels of MK also indicate a poorer prognosis of the disease, such as in neuroblastoma, glioblastoma, and bladder carcinomas. In neuroblastoma, for example, MK levels in Stage 4 of the cancer (one of the final stages) are about three times those in Stage 1.
In neuroblastoma, MK has been found to be over expressed in the cancer cells that are resistant to chemotherapeutic drugs. The resistance to chemotherapy seems to be reversible b |
https://en.wikipedia.org/wiki/FASEB%20Excellence%20in%20Science%20Award | The Excellence in Science Award was established by the Federation of American Societies for Experimental Biology (FASEB) in 1989 to recognize outstanding achievement by women in biological science. All women who are members of one or more of the societies of FASEB are eligible for nomination. Nominations recognize a woman whose career achievements have contributed significantly to our understanding of a particular discipline through excellence in research.
The award includes a $10,000 unrestricted research grant, funded by Eli Lilly and Company.
Award recipients
Source: FASEB
1989 Marian Koshland
1990 Elizabeth Hay
1991 Ellen Vitetta
1992 Bettie Sue Masters
1993 Susan Leeman
1994 Lucille Shapiro
1995 Philippa Marrack
1996 Zena Werb
1997 Claude Klee
1998 Eva Neer
1999 Helen Blau
2000 Peng Loh
2001 Laurie Glimcher
2002 Phyllis Wise
2003 Joan A. Steitz
2004 Janet Rossant
2005 Anita Roberts
2006 Marilyn Farquhar and Elaine Fuchs
2007 Frances Arnold
2008 Mina J. Bissell
2009 Susan L. Lindquist
2010 Susan S. Taylor
2011 Gail R. Martin
2012 Susan R. Wessler
2013 Terry Orr-Weaver
2014 Kathryn V. Anderson
2015 Diane Griffin
2016 Bonnie Bassler
2017 Diane Mathis
2018 Lynne E. Maquat
2019 Barbara B. Kahn
2020:
Lifetime Achievement: Brigid Hogan
Mid-Career Investigator: Aviv Regev
Early-Career Investigator: Karen Schindler
2021:
Lifetime Achievement: M. Celeste Simon
Mid-Career Investigator: Valentina Greco
Early-Career Investigator: Cigall Kadoch
2022:
Lifetime Achievement: Arlene H. Sharpe
Mid-Career Investigator: Sallie R. Permar
Early-Career Investigator: Smita Krishnaswamy
2023:
Lifetime Achievement: Elaine S. Jaffe
Mid-Career Investigator: Paola Arlotta
Early-Career Investigator: Diana Libuda
See also
List of biology awards |
https://en.wikipedia.org/wiki/Indoleamine%202%2C3-dioxygenase | Indoleamine-pyrrole 2,3-dioxygenase (IDO or INDO) is a heme-containing enzyme physiologically expressed in a number of tissues and cells, such as the small intestine, lungs, female genital tract or placenta. In humans it is encoded by the IDO1 gene. IDO is involved in tryptophan metabolism. It is one of three enzymes that catalyze the first and rate-limiting step in the kynurenine pathway, the O2-dependent oxidation of L-tryptophan to N-formylkynurenine, the others being indoleamine 2,3-dioxygenase 2 (IDO2) and tryptophan 2,3-dioxygenase (TDO). IDO is an important part of the immune system and plays a part in natural defense against various pathogens. It is produced by cells in response to inflammation and has an immunosuppressive function because of its ability to limit T-cell function and engage mechanisms of immune tolerance. Emerging evidence suggests that IDO becomes activated during tumor development, helping malignant cells escape eradication by the immune system. Expression of IDO has been described in a number of types of cancer, such as acute myeloid leukemia, ovarian cancer or colorectal cancer. IDO is part of the malignant transformation process and plays a key role in suppressing the anti-tumor immune response in the body, so inhibiting it could increase the effect of chemotherapy as well as other immunotherapeutic protocols. Furthermore, there are data implicating a role for IDO1 in the modulation of vascular tone in conditions of inflammation via a novel pathway involving singlet oxygen.
Physiological function
Indoleamine 2,3-dioxygenase is the first and rate-limiting enzyme of tryptophan catabolism through the kynurenine pathway.
IDO is an important molecule in the mechanisms of tolerance and its physiological functions include the suppression of potentially dangerous inflammatory processes in the body. IDO also plays a role in natural defense against microorganisms. Expression of IDO is induced by interferon-gamma, which explains why the express |
https://en.wikipedia.org/wiki/Situational%20application | In computing, a situational application is "good enough" software created for a narrow group of users with a unique set of needs. The application typically (but not always) has a short life span, and is often created within the group where it is used, sometimes by the users themselves. As the requirements of a small team using the application change, the situational application often also continues to evolve to accommodate these changes. Although situational applications are specifically designed to embrace change, significant changes in requirements may lead to an abandonment of the situational application altogether – in some cases it is just easier to develop a new one than to evolve the one in use.
Characteristics
Situational applications are developed fast, easy to use, uncomplicated, and serve a unique set of requirements. They have a narrow focus on a specific business problem, and they are written in a way where if the business problem changes rapidly, so can the situational application.
This contrasts with more common enterprise applications, which are designed to address a large set of business problems, require meticulous planning, and impose a sometimes-slow and often-meticulous change process.
Origination
Clay Shirky in his essay entitled "Situated Software" described a type of software that "...is designed for use by a specific social group, rather than for a generic set of 'users'." IBM later morphed the term into "situational applications".
Evolution
The successful large-scale implementation of a situational application environment in an organization requires a strategy, mindset, methodology and support structure quite different from traditional application development. This is now evolving as more companies learn how to best leverage the ideas behind situational applications. In addition, the advent of cloud-based application development and deployment platforms makes the implementation of a comprehensive situational application environment mu |
https://en.wikipedia.org/wiki/I-CreI | I-CreI is a homing endonuclease whose gene was first discovered in the chloroplast genome of Chlamydomonas reinhardtii, a species of unicellular green algae. It is named for the facts that: it resides in an Intron; it was isolated from Chlamydomonas reinhardtii; it was the first (I) such gene isolated from C. reinhardtii. Its gene resides in a group I intron in the 23S ribosomal RNA gene of the C. reinhardtii chloroplast, and I-CreI is only expressed when its mRNA is spliced from the primary transcript of the 23S gene. I-CreI enzyme, which functions as a homodimer, recognizes a 22-nucleotide sequence of duplex DNA and cleaves one phosphodiester bond on each strand at specific positions. I-CreI is a member of the LAGLIDADG family of homing endonucleases, all of which have a conserved LAGLIDADG amino acid motif that contributes to their associative domains and active sites. When the I-CreI-containing intron encounters a 23S allele lacking the intron, I-CreI enzyme "homes" in on the "intron-minus" allele of 23S and effects its parent intron's insertion into the intron-minus allele. Introns with this behavior are called mobile introns. Because I-CreI provides for its own propagation while conferring no benefit on its host, it is an example of selfish DNA.
Discovery
I-CreI was first observed as an intervening sequence in the 23S rRNA gene of the C. reinhardtii chloroplast genome. The 23S gene is an RNA gene, meaning that its transcript is not translated into protein. As RNA, it forms part of the large subunit of the ribosome. An open reading frame coding for a 163-amino acid protein was found in this 23S intron, suggesting that a protein might facilitate the homing behavior of the mobile intron. Furthermore, the predicted protein had a LAGLIDADG motif, a conserved amino acid sequence that is present in other proteins coded for in group I mobile introns. A 1991 study established that the ORF codes for a DNA endonuclease, I-CreI, which selectively cuts a site corresponding |
https://en.wikipedia.org/wiki/Testware | Generally speaking, testware is a subset of software with a special purpose, namely software testing, especially software test automation. Automation testware, for example, is designed to be executed on automation frameworks. Testware is an umbrella term for all utilities and application software that serve in combination for testing a software package, but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration, but merely a working environment for application software or subsets thereof.
It includes artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Testware is produced by both verification and validation testing methods. Like software, testware includes code and binaries as well as test cases, test plans, test reports, and similar artifacts. Testware should be placed under the control of a configuration management system, saved, and faithfully maintained.
Compared to general software, testware is special because it has:
a different purpose
different metrics for quality and
different users
Methods different from those used to develop general software should therefore be adopted when developing testware.
Testware is also referred to as test tools in a narrow sense. |
https://en.wikipedia.org/wiki/CMA-ES | Covariance matrix adaptation evolution strategy (CMA-ES) is a particular kind of strategy for numerical optimization. Evolution strategies (ES) are stochastic, derivative-free methods for numerical optimization of non-linear or non-convex continuous optimization problems. They belong to the class of evolutionary algorithms and evolutionary computation. An evolutionary algorithm is broadly based on the principle of biological evolution, namely the repeated interplay of variation (via recombination and mutation) and selection: in each generation (iteration) new individuals (candidate solutions, denoted as $x$) are generated by variation, usually in a stochastic way, of the current parental individuals. Then, some individuals are selected to become the parents in the next generation based on their fitness or objective function value $f(x)$. Like this, over the generation sequence, individuals with better and better $f$-values are generated.
In an evolution strategy, new candidate solutions are sampled according to a multivariate normal distribution in $\mathbb{R}^n$. Recombination amounts to selecting a new mean value for the distribution. Mutation amounts to adding a random vector, a perturbation with zero mean. Pairwise dependencies between the variables in the distribution are represented by a covariance matrix. The covariance matrix adaptation (CMA) is a method to update the covariance matrix of this distribution. This is particularly useful if the function $f$ is ill-conditioned.
Adaptation of the covariance matrix amounts to learning a second order model of the underlying objective function similar to the approximation of the inverse Hessian matrix in the quasi-Newton method in classical optimization. In contrast to most classical methods, fewer assumptions on the underlying objective function are made. Because only a ranking (or, equivalently, sorting) of candidate solutions is exploited, neither derivatives nor even an (explicit) objective function is required by the method. For exampl |
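As a rough illustration of the variation-selection-recombination loop described above, here is a deliberately simplified evolution strategy with a rank-μ-style covariance update (assuming NumPy; all names and the smoothing constant are illustrative). It omits the evolution paths and step-size adaptation that distinguish real CMA-ES, so it is a sketch of the idea rather than the algorithm itself:

```python
import numpy as np

def simple_es(f, x0, sigma=0.5, lam=20, iters=200, seed=0):
    """Toy evolution strategy with a crude covariance update."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2                       # number of selected parents
    mean = np.asarray(x0, dtype=float)
    C = np.eye(n)                       # covariance of the search distribution
    for _ in range(iters):
        # Variation: sample lambda candidates from N(mean, sigma^2 * C).
        z = rng.multivariate_normal(np.zeros(n), C, size=lam)
        xs = mean + sigma * z
        # Selection: keep the mu best candidates by objective value.
        order = np.argsort([f(x) for x in xs])
        zsel = z[order[:mu]]
        # Recombination: move the mean; rank-mu-style covariance update.
        mean = mean + sigma * zsel.mean(axis=0)
        C = 0.7 * C + 0.3 * (zsel.T @ zsel) / mu
    return mean

# Example: minimize a badly conditioned quadratic.
xmin = simple_es(lambda x: x[0]**2 + 100.0 * x[1]**2, [3.0, 2.0])
```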
https://en.wikipedia.org/wiki/Susac%27s%20syndrome | Susac's syndrome (retinocochleocerebral vasculopathy) is a very rare form of microangiopathy characterized by encephalopathy, branch retinal artery occlusions and hearing loss. The cause is unknown, but it is theorized that antibodies are produced against endothelial cells in tiny arteries, which leads to damage and the symptoms related to the illness. Despite this being an extremely rare disease, there are four registries collecting data on the illness; two are in the United States, one in Germany, and one in Portugal.
Presentation
Susac's syndrome is named for Dr. John Susac (1940–2012), of Winter Haven, Florida, who first described it in 1979. Susac's syndrome is a very rare disease, of unknown cause, and many persons who experience it do not display all of the symptoms named here. Speech can be affected, as in the case of a woman in her late teens who suffered speech and hearing problems, and many patients experience unrelenting and intense headaches and migraines, some form of hearing loss, and impaired vision. The problem usually corrects itself, but this can take up to five years. In some cases, subjects can become confused. The syndrome usually affects women around the age of 18 years, with a female-to-male ratio of cases of 2:1.
William F. Hoyt was the first to call the syndrome Susac syndrome and later Robert Daroff asked Dr. Susac to write an editorial in Neurology about the disorder and to use the eponym of Susac syndrome in the title, forever linking this disease with him.
Pathogenesis
In the March 1979 report in Neurology, Drs. Susac, Hardman and Selhorst reported two patients with the triad of encephalopathy, hearing loss and microangiopathy of the retina. The first patient underwent brain biopsy, which revealed sclerosis of the media and adventitia of small pial and cortical vessels, suggestive of a healed angiitis. Both patients underwent fluorescein retinal angiography that demonstrated multifocal retinal artery occlusions without evidence of em |
https://en.wikipedia.org/wiki/Soft%20Hard%20Real-Time%20Kernel | S.Ha.R.K. (the acronym stands for Soft Hard Real-time Kernel) is a completely configurable kernel architecture designed for supporting hard, soft, and non real-time applications with interchangeable scheduling algorithms.
Main features
The kernel architecture's main benefit is that an application can be developed independently from a particular system configuration. This allows new modules to be added or replaced in the same application, so that specific scheduling policies can be evaluated for predictability, overhead and performance.
Applications
S.Ha.R.K. was developed at RETIS Lab, a research facility of the Sant'Anna School of Advanced Studies, and at the University of Pavia, as a tool for teaching, testing and developing real-time software systems. It is used for teaching at many universities, including the Sant'Anna School of Advanced Studies and Mälardalen University in Sweden.
Modularity
Unlike the kernels in traditional operating systems, S.Ha.R.K. is fully modular in terms of scheduling policies, aperiodic servers, and concurrency control protocols. Modularity is achieved by partitioning system activities between a generic kernel and a set of modules, which can be registered at initialization to configure the kernel according to specific application requirements.
History
S.Ha.R.K. is the evolution of the Hartik Kernel and it is based on the OSLib Project.
See also
Real-time operating system
External links
The S.Ha.R.K. Project official site
Real-time operating systems
Free software operating systems |
https://en.wikipedia.org/wiki/Mead%E2%80%93Conway%20VLSI%20chip%20design%20revolution | The Mead–Conway VLSI chip design revolution, or Mead and Conway revolution, was a very-large-scale integration (VLSI) design revolution starting in 1978 which resulted in a worldwide restructuring of academic materials in computer science and electrical engineering education, and was paramount for the development of industries based on the application of microelectronics.
A prominent factor in promoting this design revolution throughout industry was the DARPA-funded VLSI Project instigated by Mead and Conway which spurred development of electronic design automation.
Details
When the integrated circuit was originally invented and commercialized, the initial chip designers were co-located with the physicists, engineers and factories that understood integrated circuit technology. At that time, fewer than 100 transistors would fit in an integrated circuit "chip". The design capability for such circuits was centered in industry, with universities struggling to catch up. Soon, the number of transistors which fit in a chip started doubling every year. (The doubling period later grew to two years.) Much more complex circuits could then fit on a single chip, but the device physicists who fabricated the chips were not experts in electronic circuit design, so their designs were limited more by their expertise and imaginations than by limitations in the technology.
In 1978–79, when approximately 20,000 transistors could be fabricated in a single chip, Carver Mead and Lynn Conway wrote the textbook Introduction to VLSI Systems. It was published in 1979 and became a bestseller, since it was the first VLSI (Very Large Scale Integration) design textbook usable by non-physicists. ("In a self-aligned CMOS process, a transistor is formed wherever the gate layer ... crosses a diffusion layer." from: Integrated circuit § Manufacturing) The authors intended the book to fill a gap in the literature and introduce electrical engineering and computer science students to integrated s |
https://en.wikipedia.org/wiki/Salicide | The term salicide refers to a technology used in the microelectronics industry to form electrical contacts between a semiconductor device and the supporting interconnect structure. The salicide process involves the reaction of a metal thin film with silicon in the active regions of the device, ultimately forming a metal silicide contact through a series of annealing and/or etch processes. The term "salicide" is a compaction of the phrase self-aligned silicide. The description "self-aligned" suggests that the contact formation does not require photolithography patterning processes, as opposed to a non-aligned technology such as polycide.
The term salicide is also used to refer to the metal silicide formed by the contact formation process, such as "titanium salicide", although this usage is inconsistent with accepted naming conventions in chemistry.
Contact formation
The salicide process begins with deposition of a thin transition metal layer over fully formed and patterned semiconductor devices (e.g. transistors). The wafer is heated, allowing the transition metal to react with exposed silicon in the active regions of the semiconductor device (e.g., source, drain, gate), forming a low-resistance transition metal silicide. The transition metal reacts with neither the silicon dioxide nor the silicon nitride insulators present on the wafer. Following the reaction, any remaining transition metal is removed by chemical etching, leaving silicide contacts in only the active regions of the device. A fully integrable manufacturing process may be more complex, involving additional anneals, surface treatments, or etch processes.
Chemistry
Typical transition metals used or considered for use in salicide technology include titanium, cobalt, nickel, platinum, and tungsten. One key challenge in developing a salicide process is controlling the specific phase (compound) formed by the metal-silicon reaction. Cobalt, for example, may react with silicon to form Co2Si, |
https://en.wikipedia.org/wiki/Calgary%20corpus | The Calgary corpus is a collection of text and binary data files, commonly used for comparing data compression algorithms. It was created by Ian Witten, Tim Bell and John Cleary from the University of Calgary in 1987 and was commonly used in the 1990s. In 1997 it was replaced by the Canterbury corpus, based on concerns about how representative the Calgary corpus was, but the Calgary corpus still exists for comparison and is still useful for its originally intended purpose.
Contents
In its most commonly used form, the corpus consists of 14 files totaling 3,141,622 bytes, as follows: BIB (111,261 bytes), BOOK1 (768,771), BOOK2 (610,856), GEO (102,400), NEWS (377,109), OBJ1 (21,504), OBJ2 (246,814), PAPER1 (53,161), PAPER2 (82,199), PIC (513,216), PROGC (39,611), PROGL (71,646), PROGP (49,379), and TRANS (93,695).
There is also a less commonly used 18-file version, which includes 4 additional text files in UNIX "troff" format, PAPER3 through PAPER6. The maintainers of the Canterbury corpus website note that "they don't add to the evaluation".
Benchmarks
The Calgary corpus was a commonly used benchmark for data compression in the 1990s. Results were most commonly listed in bits per byte (bpb) for each file and then summarized by averaging. More recently, it has been common to just add the compressed sizes of all of the files. This is called a weighted average because it is equivalent to weighting the compression ratios by the original file sizes. The UCLC benchmark by Johan de Bock uses this method.
For some data compressors it is possible to compress the corpus smaller by combining the inputs into an uncompressed archive (such as a tar file) before compression because of mutual information between the text files. In other cases, the compression is worse because the compressor handles nonuniform statistics poorly. This method was used in a benchmark in the online book Data Compression Explained by Matt Mahoney.
The table below shows the compressed sizes of the 14 file Calgary corpus using both methods for some popular compression programs. Options, when used, select best compression. For a more complete list, see the above benchmarks.
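A small sketch of the equivalence just described (the original sizes below are three of the corpus files; the compressed sizes are hypothetical): summing compressed bytes and dividing by total original bytes gives exactly the size-weighted average of the per-file compression ratios.

```python
# Hypothetical per-file results: (original_size, compressed_size) in bytes.
results = [(111_261, 28_750), (768_771, 230_000), (102_400, 60_000)]

orig = sum(o for o, _ in results)
comp = sum(c for _, c in results)

# Each file's ratio (compressed/original), weighted by its share of the
# total original size, sums to the overall ratio comp/orig.
weighted = sum((o / orig) * (c / o) for o, c in results)
assert abs(weighted - comp / orig) < 1e-12
```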
Compression challenge
The "Calgary corpus Compression a |
https://en.wikipedia.org/wiki/Motor%20soft%20starter | A motor soft starter is a device used with AC electrical motors to temporarily reduce the load and torque in the powertrain and electric current surge of the motor during start-up. This reduces the mechanical stress on the motor and shaft, as well as the electrodynamic stresses on the attached power cables and electrical distribution network, extending the lifespan of the system.
It can consist of mechanical or electrical devices, or a combination of both. Mechanical soft starters include clutches and several types of couplings using a fluid, magnetic forces, or steel shot to transmit torque, similar to other forms of torque limiter. Electrical soft starters can be any control system that reduces the torque by temporarily reducing the voltage or current input, or a device that temporarily alters how the motor is connected in the electric circuit.
Operating principles
Whenever the armature of an electric motor is moving, both the motor action and generator action are occurring simultaneously; the electromagnetic force produced by generator action opposes the desired motor action and effectively creates a variable motor resistance which increases with motor speed. When a voltage is applied to the motor, this resistance dictates the current drawn by the motor. At rest, the resistance is relatively low, so the starting or inrush current can be high if the full line voltage is applied to the motor. Compared to DC motors, AC motors tend to have significantly higher stator resistance and correspondingly lower inrush current.
Nevertheless, across-the-line starting of induction motors is accompanied by inrush currents up to 7–10 times the running current, and higher-efficiency motors can experience inrush currents 10–15 times the running current. In addition, starting torque can be up to 3 times higher than running torque. The starting torque transient can create a sudden mechanical stress on the machine, which leads to a reduced service life. Moreover, the high inru |
https://en.wikipedia.org/wiki/Benznidazole | Benznidazole is an antiparasitic medication used in the treatment of Chagas disease. While it is highly effective in early disease, the effectiveness decreases in those who have long-term infection. It is the first-line treatment given its moderate side effects compared to nifurtimox. It is taken by mouth.
Side effects are fairly common. They include rash, numbness, fever, muscle pain, loss of appetite, and trouble sleeping. Rare side effects include bone marrow suppression which can lead to low blood cell levels. It is not recommended during pregnancy or in people with severe liver or kidney disease. Benznidazole is in the nitroimidazole family of medication and works by the production of free radicals.
Benznidazole came into medical use in 1971. It is on the World Health Organization's List of Essential Medicines. As of 2012, Laboratório Farmacêutico do Estado de Pernambuco, a government-run pharmaceutical company in Brazil, was the only producer.
Medical uses
Benznidazole has a significant activity during the acute phase of Chagas disease, with a success rate of up to 80%. Its curative capabilities during the chronic phase are, however, limited. Some studies have found parasitologic cure (a complete elimination of T. cruzi from the body) in children during the early stage of the chronic phase, but overall failure rate in chronically infected individuals is typically above 80%.
Some studies indicate that treatment with benznidazole during the chronic phase is worthwhile even if it is incapable of producing parasitologic cure, because it reduces electrocardiographic changes and delays worsening of the clinical condition of the patient.
Benznidazole has proven to be effective in the treatment of reactivated T. cruzi infections caused by immunosuppression, such as in people with AIDS or in those under immunosuppressive therapy related to organ transplants.
Children
Benznidazole can be used in children, with the same 5–7 mg/kg per day weight-based dosing regimen that is used to treat |
https://en.wikipedia.org/wiki/Thermoproteus | Thermoproteus is a genus of archaeans in the family Thermoproteaceae. These prokaryotes are thermophilic sulphur-dependent organisms related to the genera Sulfolobus, Pyrodictium and Desulfurococcus. They are hydrogen-sulphur autotrophs and can grow at temperatures of up to 95 °C.
Description and significance
Thermoproteus is a genus of anaerobes that grow in the wild by autotrophic sulfur reduction. Like other hyperthermophiles, Thermoproteus represents a living example of some of Earth's earliest organisms, located at the base of the Archaea.
Genome structure
Genetic sequencing of Thermoproteus has revealed much about the organism's modes of metabolism. Total genome length is 1.84 Mbp, and the DNA is double-stranded and circular. Genes are arranged in co-transcribed clusters called operons. The Thermoproteus tenax genome has been completely sequenced.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
Cell structure and metabolism
A significant amount of research has been done on the metabolism of Thermoproteus and other hyperthermophiles as well. Thermoproteus metabolizes autotrophically through sulfur reduction, but it grows much faster by sulfur respiration in cultivation. In T. tenax, a number of metabolic pathways allow the cell to select a mode of metabolism depending on the energy requirements of the cell (depending, for example, on the cell's developmental or growth stage). Like all archaea, Thermoproteus possesses unique membrane lipids, which are ether-linked glycerol derivatives of 20- or 40-carbon branched lipids. The lipids' unsaturations are generally conjugated (as opposed to the unconjugated forms found in Bacteria and Eukaryota). In Thermosphaera, as in all members of the Crenarchaeota, the membranes are dominated by the 40-carbon lipids that span the entire membrane. This causes the membrane to be composed of monolayers |
https://en.wikipedia.org/wiki/Richard%20Lounsbery%20Award | The Richard Lounsbery Award is given to American and French scientists, 45 years or younger, in recognition of "extraordinary scientific achievement in biology and medicine."
The award alternates between French and American scientists: it is given by the National Academy of Sciences and the French Academy of Sciences in alternating years, each time to a scientist from the other country. The award is selected by a seven-member jury representing both the French and the US Academies. The recipient receives a $75,000 prize, funding to visit a lab or research institution in the awarding country, and an invitation to give the Lounsbery Lecture in the awarding country.
The Lounsbery Award was established in 1979 by Vera Lounsbery in memory of her husband, Richard Lounsbery, and is funded by the Richard Lounsbery Foundation. Richard and Vera met in Paris after World War I, and the couple divided their time between Paris and New York.
Award recipients
Source:
2022 Claire Wyart, for her outstanding research on the sensory interface between the central nervous system and cerebrospinal fluid that controls our posture and movements.
2021 Feng Zhang, for his pioneering achievements in the field of genome editing, including the discovery of novel CRISPR systems and their development as molecular tools.
2020 , for her work in developmental biology, in particular training and evolution of periodic patterns on the plumage of birds.
2019 Jay Shendure, for his pioneering work and leadership in the second wave of genomics that is transforming genetics and medicine. Through his development of exome sequencing and other novel technologies, he has defined new paradigms for implicating Mendelian disease genes, interpreting genetic variation, and single cell profiling of developmental lineages and gene regulation in whole organisms.
2018 , For his work on the genetic and mechanical regulation that underlies tissue proliferation, homeostasis and repair in physiological and pathological conditions |
https://en.wikipedia.org/wiki/Graph%20embedding | In topological graph theory, an embedding (also spelled imbedding) of a graph $G$ on a surface $\Sigma$ is a representation of $G$ on $\Sigma$ in which points of $\Sigma$ are associated with vertices and simple arcs (homeomorphic images of $[0,1]$) are associated with edges in such a way that:
the endpoints of the arc associated with an edge $e$ are the points associated with the end vertices of $e$,
no arcs include points associated with other vertices,
two arcs never intersect at a point which is interior to either of the arcs.
Here a surface is a compact, connected $2$-manifold.
Informally, an embedding of a graph into a surface is a drawing of the graph on the surface in such a way that its edges may intersect only at their endpoints. It is well known that any finite graph can be embedded in 3-dimensional Euclidean space $\mathbb{R}^3$. A planar graph is one that can be embedded in 2-dimensional Euclidean space $\mathbb{R}^2$.
Often, an embedding is regarded as an equivalence class (under homeomorphisms of $\Sigma$) of representations of the kind just described.
Some authors define a weaker version of the definition of "graph embedding" by omitting the non-intersection condition for edges. In such contexts the stricter definition is described as "non-crossing graph embedding".
This article deals only with the strict definition of graph embedding. The weaker definition is discussed in the articles "graph drawing" and "crossing number".
Terminology
If a graph $G$ is embedded on a closed surface $\Sigma$, the complement of the union of the points and arcs associated with the vertices and edges of $G$ is a family of regions (or faces). A 2-cell embedding, cellular embedding or map is an embedding in which every face is homeomorphic to an open disk. A closed 2-cell embedding is an embedding in which the closure of every face is homeomorphic to a closed disk.
The genus of a graph is the minimal integer $n$ such that the graph can be embedded in a surface of genus $n$. In particular, a planar graph has genus $0$, because it can be drawn on a sphere without self-crossings. |
https://en.wikipedia.org/wiki/Limb%20bud | The limb bud is a structure formed early in vertebrate limb development. As a result of interactions between the ectoderm and underlying mesoderm, formation occurs roughly around the fourth week of development. In the development of the human embryo the upper limb bud appears in the third week and the lower limb bud appears four days later.
The limb bud consists of undifferentiated mesoderm cells that are sheathed in ectoderm. As a result of cell signaling interactions between the ectoderm and underlying mesoderm cells, formation of the developing limb bud occurs as mesenchymal cells from the lateral plate mesoderm and somites begin to proliferate to the point where they create a bulge under the ectodermal cells above. The mesoderm cells in the limb bud that come from the lateral plate mesoderm will eventually differentiate into the developing limb's connective tissues, such as cartilage, bone, and tendon. Moreover, the mesoderm cells that come from the somites will eventually differentiate into the myogenic cells of the limb muscles.
The limb bud remains active throughout much of limb development as it stimulates the creation and positive feedback retention of two signaling regions: the apical ectodermal ridge (AER) and the zone of polarizing activity (ZPA) with the mesenchymal cells. These signaling centers are crucial to the proper formation of a limb that is correctly oriented with its corresponding axial polarity in the developing organism. Research has determined that the AER signaling region within the limb bud determines the proximal-distal axis formation of the limb using FGF signals. ZPA signaling establishes the anterior-posterior axis formation of the limb using Shh signals. Additionally, though not known as a specific signaling region like AER and ZPA, the dorsal-ventral axis is established in the limb bud by the competitive Wnt7a and BMP signals that the dorsal ectoderm and ventral ectoderm use respectively. Because all of these signaling systems re |
https://en.wikipedia.org/wiki/Agostic%20interaction | In organometallic chemistry, agostic interaction refers to the interaction of a coordinatively-unsaturated transition metal with a C−H bond, when the two electrons involved in the C−H bond enter the empty d-orbital of the transition metal, resulting in a three-center two-electron bond. Many catalytic transformations, e.g. oxidative addition and reductive elimination, are proposed to proceed via intermediates featuring agostic interactions. Agostic interactions are observed throughout organometallic chemistry in alkyl, alkylidene, and polyenyl ligands.
History
The term agostic, derived from the Ancient Greek word for "to hold close to oneself", was coined by Maurice Brookhart and Malcolm Green, on the suggestion of the classicist Jasper Griffin, to describe this and many other interactions between a transition metal and a C−H bond. Often such agostic interactions involve alkyl or aryl groups that are held close to the metal center through an additional σ-bond.
Short interactions between hydrocarbon substituents and coordinatively unsaturated metal complexes have been noted since the 1960s. For example, in tris(triphenylphosphine) ruthenium dichloride, a short interaction is observed between the ruthenium(II) center and a hydrogen atom on the ortho position of one of the nine phenyl rings. Complexes of borohydride are described as using the three-center two-electron bonding model.
The nature of the interaction was foreshadowed in main group chemistry in the structural chemistry of trimethylaluminium.
Characteristics of agostic bonds
Agostic interactions are best demonstrated by crystallography. Neutron diffraction data have shown that C−H and M┄H bond distances are 5-20% longer than expected for isolated metal hydride and hydrocarbons. The distance between the metal and the hydrogen is typically 1.8–2.3 Å, and the M┄H−C angle is in the range of 90°–140°. The presence of a 1H NMR signal that is shifted upfield from that of a normal aryl or alkane, often to the |
https://en.wikipedia.org/wiki/Cleistogamy | Cleistogamy is a type of automatic self-pollination of certain plants that can propagate by using non-opening, self-pollinating flowers. Especially well known in peanuts, peas, and pansies, this behavior is most widespread in the grass family. However, the largest genus of cleistogamous plants is Viola.
The more common opposite of cleistogamy ('closed marriage') is chasmogamy ('open marriage'). Virtually all plants that produce cleistogamous flowers also produce chasmogamous ones. The principal advantage of cleistogamy is that it requires fewer plant resources to produce seeds than does chasmogamy, because development of petals, nectar and large amounts of pollen is not required. This efficiency makes cleistogamy particularly useful for seed production on unfavorable sites or in adverse conditions. Impatiens capensis, for example, has been observed to produce only cleistogamous flowers after being severely damaged by grazing and to maintain populations on unfavorable sites with only cleistogamous flowers. The obvious disadvantage of cleistogamy is that self-fertilization occurs, which may suppress the creation of genetically superior plants. Another disadvantage of self-fertilization is that it leads to the expression in progeny of deleterious recessive mutations.
For genetically modified (GM) rapeseed, researchers hoping to minimise the admixture of GM and non-GM crops are attempting to use cleistogamy to prevent gene flow. However, preliminary results from Co-Extra, a current project within the EU research program, show that although cleistogamy reduces gene flow, it is not at the moment a consistently reliable tool for biocontainment; due to a certain instability of the cleistogamous trait, some flowers may open and release genetically modified pollen.
See also
Co-existence of genetically modified, conventional, and organic crops |
https://en.wikipedia.org/wiki/Work%20output | In physics, work output is the work done by a simple machine, compound machine, or any type of engine model. In common terms, it is the energy output, which for simple machines is always less than the energy input, even though the forces may be drastically different.
In thermodynamics, work output can refer to the thermodynamic work done by a heat engine, in which case the amount of work output must be less than the input as energy is lost to heat, as determined by the engine's efficiency. |
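In symbols (a standard formulation supplied here for clarity, not taken from the article): the efficiency η relates output to input, and for a heat engine the work output is additionally bounded by the Carnot limit.

```latex
W_{\text{out}} = \eta\, W_{\text{in}}, \quad 0 \le \eta < 1,
\qquad\text{and, for a heat engine,}\qquad
W_{\text{out}} = \eta\, Q_{\text{in}} \;\le\; \left(1 - \frac{T_C}{T_H}\right) Q_{\text{in}}.
```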
https://en.wikipedia.org/wiki/Letters%20to%20a%20Young%20Mathematician | Letters to a Young Mathematician is a 2006 book by Ian Stewart, and is part of Basic Books' Art of Mentoring series. Stewart mentions in the preface that he considers this book an update to G.H. Hardy's A Mathematician's Apology.
The book is made up of letters to a fictional correspondent of Stewart's, an aspiring mathematician named Meg. The roughly chronological letters follow Meg from her high school years up to her receiving tenure from an American university.
Reviews of the book were generally positive. Fernando Q. Gouvêa's review for the MAA calls it "full of good advice, much of it direct and to the point" and, later, says that "while it won't change the world, it may well help some young people decide to be (or not to be) mathematicians." Emma Carberry, in her review for the AMS, reacted differently, saying that "one does not so much feel the benefit of a ream of practical advice, but rather of exposure to the inner realm of mathematics". A review in Nature was harsher, however, saying that "there is a general lack of information ... [and] too much jargon" and that it "suffers from being written entirely for a US audience", but even this review finds a bright note: "The letter in which Stewart tells Meg how to teach undergraduates should be compulsory reading for all lecturers and tutors." |
https://en.wikipedia.org/wiki/Fixation%20%28histology%29 | In the fields of histology, pathology, and cell biology, fixation is the preservation of biological tissues from decay due to autolysis or putrefaction. It terminates any ongoing biochemical reactions and may also increase the treated tissues' mechanical strength or stability. Tissue fixation is a critical step in the preparation of histological sections, its broad objective being to preserve cells and tissue components and to do this in such a way as to allow for the preparation of thin, stained sections. This allows the investigation of the tissues' structure, which is determined by the shapes and sizes of such macromolecules (in and around cells) as proteins and nucleic acids.
Purposes
In performing their protective role, fixatives denature proteins by coagulation, by forming additive compounds, or by a combination of coagulation and additive processes. A compound that adds chemically to macromolecules stabilizes structure most effectively if it is able to combine with parts of two different macromolecules, an effect known as cross-linking.
Fixation of tissue is done for several reasons. One reason is to kill the tissue so that postmortem decay (autolysis and putrefaction) is prevented.
Fixation preserves biological material (tissue or cells) as close to its natural state as possible in the process of preparing tissue for examination. To achieve this, several conditions usually must be met.
First, a fixative usually acts to disable intrinsic biomolecules—particularly proteolytic enzymes—which otherwise digest or damage the sample.
Second, a fixative typically protects a sample from extrinsic damage. Fixatives are toxic to most common microorganisms (bacteria in particular) that might exist in a tissue sample or which might otherwise colonize the fixed tissue. In addition, many fixatives chemically alter the fixed material to make it less palatable (either indigestible or toxic) to opportunistic microorganisms.
Finally, fixatives often alter the cells or tiss |
https://en.wikipedia.org/wiki/Tarjan%27s%20algorithm | Tarjan's algorithm may refer to one of several algorithms attributed to Robert Tarjan, including:
Tarjan's strongly connected components algorithm
Tarjan's off-line lowest common ancestors algorithm
Tarjan's algorithm for finding bridges in an undirected graph
Tarjan's algorithm for finding simple circuits in a directed graph
See also
List of algorithms |
https://en.wikipedia.org/wiki/TypeScript | TypeScript is a free and open-source high-level programming language developed by Microsoft that adds static typing with optional type annotations to JavaScript. It is designed for the development of large applications and transpiles to JavaScript. Because TypeScript is a superset of JavaScript, all JavaScript programs are syntactically valid TypeScript, but they can fail to type-check for safety reasons.
TypeScript may be used to develop JavaScript applications for both client-side and server-side execution (as with Node.js or Deno). Multiple options are available for transpilation. The default TypeScript Compiler can be used, or the Babel compiler can be invoked to convert TypeScript to JavaScript.
TypeScript supports definition files that can contain type information of existing JavaScript libraries, much like C++ header files can describe the structure of existing object files. This enables other programs to use the values defined in the files as if they were statically typed TypeScript entities. There are third-party header files for popular libraries such as jQuery, MongoDB, and D3.js. TypeScript headers for the Node.js library modules are also available, allowing development of Node.js programs within TypeScript.
The TypeScript compiler is itself written in TypeScript and compiled to JavaScript. It is licensed under the Apache License 2.0. Anders Hejlsberg, lead architect of C# and creator of Delphi and Turbo Pascal, has worked on the development of TypeScript.
History
TypeScript was released to the public in October 2012, with version 0.8, after two years of internal development at Microsoft. Soon after the initial public release, Miguel de Icaza praised the language itself, but criticized the lack of mature IDE support apart from Microsoft Visual Studio, which was not available on Linux and OS X at that time. As of April 2021 there is support in other IDEs and text editors, including Emacs, Vim, WebStorm, Atom and Microsoft's own Visual Studio Code. Ty |
https://en.wikipedia.org/wiki/Bletting | Bletting is a process of softening that certain fleshy fruits undergo, beyond ripening. There are some fruits that are either sweeter after some bletting, such as sea buckthorn, or for which most varieties can be eaten raw only after bletting, such as medlars, persimmons, quince, service tree fruit, and wild service tree fruit (popularly known as chequers). The rowan or mountain ash fruit must be bletted and cooked to be edible, to break down the toxic parasorbic acid (hexenollactone) into sorbic acid.
History
The English verb to blet was coined by John Lindley, in his Introduction to Botany (1835). He derived it from the French poire blette meaning 'overripe pear'. "After the period of ripeness", he wrote, "most fleshy fruits undergo a new kind of alteration; their flesh either rots or blets."
Shakespeare alluded to bletting in Measure for Measure, writing (IV. iii. 167) "They would have married me to the rotten Medler." Thomas Dekker also draws a similar comparison in his play The Honest Whore: "I scarce know her, for the beauty of her cheek hath, like the moon, suffered strange eclipses since I beheld it: women are like medlars – no sooner ripe but rotten." Elsewhere in literature, D. H. Lawrence dubbed medlars "wineskins of brown morbidity."
There is also an old saying, used in Don Quixote, that "time and straw make medlars ripe", referring to the bletting process.
Process
Chemically speaking, bletting brings about an increase in sugars and a decrease in the acids and tannins that make the unripe fruit astringent.
Ripe medlars, for example, are taken from the tree, placed somewhere cool, and allowed to further ripen for several weeks. In Trees and Shrubs, horticulturist F. A. Bush wrote about medlars that "if the fruit is wanted it should be left on the tree until late October and stored until it appears in the first stages of decay; then it is ready for eating. More often the fruit is used for making jelly." Ideally, the fruit should be harve |
https://en.wikipedia.org/wiki/Hyperbolic%20volume | In the mathematical field of knot theory, the hyperbolic volume of a hyperbolic link is the volume of the link's complement with respect to its complete hyperbolic metric. The volume is necessarily a finite real number, and is a topological invariant of the link. As a link invariant, it was first studied by William Thurston in connection with his geometrization conjecture.
Knot and link invariant
A hyperbolic link is a link in the 3-sphere whose complement (the space formed by removing the link from the 3-sphere) can be given a complete Riemannian metric of constant negative curvature, giving it the structure of a hyperbolic 3-manifold, a quotient of hyperbolic space by a group acting freely and discontinuously on it. The components of the link will become cusps of the 3-manifold, and the manifold itself will have finite volume. By Mostow rigidity, when a link complement has a hyperbolic structure, this structure is uniquely determined, and any geometric invariants of the structure are also topological invariants of the link. In particular, the hyperbolic volume of the complement is a knot invariant. In order to make it well-defined for all knots or links, the hyperbolic volume of a non-hyperbolic knot or link is often defined to be zero.
There are only finitely many hyperbolic knots for any given volume. A mutation of a hyperbolic knot will have the same volume, so it is possible to concoct examples with equal volumes; indeed, there are arbitrarily large finite sets of distinct knots with equal volumes.
In practice, hyperbolic volume has proven very effective in distinguishing knots, utilized in some of the extensive efforts at knot tabulation. Jeffrey Weeks's computer program SnapPea is the ubiquitous tool used to compute hyperbolic volume of a link.
Arbitrary manifolds
More generally, the hyperbolic volume may be defined for any hyperbolic 3-manifold. The Weeks manifold has the smallest possible volume of any closed manifold (a manifold that, unlike link co |
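For instance, SnapPea's functionality is available through the SnapPy Python package, which can report the volume of a manifold directly (assuming SnapPy is installed; "4_1" denotes the figure-eight knot, whose complement is the hyperbolic knot complement of smallest volume):

```python
import snappy

# Complement of the figure-eight knot, a standard hyperbolic example.
M = snappy.Manifold("4_1")
print(M.volume())  # approximately 2.02988...
```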
https://en.wikipedia.org/wiki/Stefan%20Marinov | Stefan Marinov (1 February 1931 – 15 July 1997) was a Bulgarian physicist, researcher, writer and lecturer who promoted anti-relativistic theoretical viewpoints, and later in his life defended the ideas of perpetual motion and free energy. In 1997 he self-published experimental results that confirmed classical electromagnetism and showed that a machine constructed by Marinov himself could not be a source of perpetual motion. Devastated by the negative results, he committed suicide in Graz, Austria on 15 July 1997.
Life and education
Marinov was born on 1 February 1931 in Sofia to a family of intellectual communists.
In 1948 he finished Soviet College in Prague, then studied physics at the Czech Technical University in Prague and Sofia University. He was an Assistant Professor of Physics from 1960 to 1974 at Sofia University. In 1966–67, 1974, and 1977 he was subjected to compulsory psychiatric treatment in Sofia because of his political dissent. In September 1977 Marinov received a passport and emigrated, moving to Brussels. In 1978, Marinov moved to Washington, D.C. Later he lived in Italy and Austria. In his later years, Marinov earned a living as a groom for horses.
On 15 July 1997, Marinov jumped to his death from a staircase at a library at the University of Graz, after leaving suicide notes. He was 66 years old and was survived by his son Marin Marinov, who at the time was a vice-Minister of Industry of Bulgaria.
Work
One of Marinov's interests was the quest for free energy sources via construction of toy theories (new axiomatic systems that putatively describe our physical reality) and their experimental testing against mainstream physical theories. In 1992 Marinov wrote a letter to German Federal Chancellor Helmut Kohl in support of a German company, Becocraft, that was doing research into "free energy" technologies and had recently been the target of lawsuits. In the letter, Marinov threatened to set himself on fire a |
https://en.wikipedia.org/wiki/Public%20Netbase | Public Netbase was a cultural media initiative, open access internet platform, media art space, and advocate for the development of electronic art.
Early development
Public Netbase was founded by Konrad Becker and Francisco de Sousa Webber in Vienna's Messepalast (later renamed the Museumsquartier) in 1994 as a non-profit internet provider and a platform for the participatory use and critical analysis of information and communication technology. Its parent organization was the Institute for New Culture Technologies-t0, founded in 1993. Most of the Institute's activities after 1994 occurred through Public Netbase, leading to the names and establishment dates being loosely applied, even in the organization's official material. The name of its World Wide Web server, t0, was often appended to either name as well. During the first years of its existence, Public Netbase shared a space in the Museumsquartier with the initiative Depot – Kunst und Diskussion. After it relocated to its own rooms (also in the Museumsquartier), the number of workshops and instruction courses increased, and the program of discursive events was realized on an almost daily basis.
Art, culture, philosophy
Public Netbase focused on aiding the development of electronic art and the impact of the nascent World Wide Web on culture. Its own online presence earned it an award for distinction at the Prix Ars Electronica in 1995. Much of the web space provided through Public Netbase supported Austrian artists, although some hosted projects, such as the Transformation Story Archive, had wider recognition. The physical location in the Museumsquartier was also used for sponsored events, ranging from art symposia to a conference of the Association of Autonomous Astronauts to a "lecture/performance/event" by Critical Art Ensemble about biotechnology and a number of conferences/exhibitions such as ROBOTRONIKA (1998) and SYNWORLD (1999).
Controversy
Austrian Freedom Party
The project increasingly came into conflict |
https://en.wikipedia.org/wiki/Cerebellar%20model%20articulation%20controller | The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory.
The CMAC was first proposed as a function modeler for robotic controllers by James Albus in 1975 (hence the name), but has been extensively used in reinforcement learning and also for automated classification in the machine learning community. The CMAC is an extension of the perceptron model. It computes a function $f(x_1, \dots, x_n)$, where $n$ is the number of input dimensions. The input space is divided up into hyper-rectangles, each of which is associated with a memory cell. The contents of the memory cells are the weights, which are adjusted during training. Usually, more than one quantisation of input space is used, so that any point in input space is associated with a number of hyper-rectangles, and therefore with a number of memory cells. The output of a CMAC is the algebraic sum of the weights in all the memory cells activated by the input point.
A change of value of the input point results in a change in the set of activated hyper-rectangles, and therefore a change in the set of memory cells participating in the CMAC output. The CMAC output is therefore stored in a distributed fashion, such that the output corresponding to any point in input space is derived from the value stored in a number of memory cells (hence the name associative memory). This provides generalisation.
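A minimal Python sketch of this structure, for a two-dimensional input: the grid size, the number of offset quantisations (tilings), and the LMS-style training update are illustrative assumptions, not details fixed by the description above.

```python
import numpy as np

class CMAC:
    """Toy 2-D CMAC: several offset grids; output = sum of activated weights."""

    def __init__(self, n_tilings=4, cells_per_dim=10, lo=0.0, hi=1.0):
        self.n_tilings, self.cells = n_tilings, cells_per_dim
        self.lo, self.span = lo, hi - lo
        self.w = np.zeros((n_tilings, cells_per_dim, cells_per_dim))

    def _active_cells(self, x, y):
        # Each quantisation (tiling) maps the point to one hyper-rectangle.
        for t in range(self.n_tilings):
            shift = t / self.n_tilings * self.span / self.cells
            i = min(int((x - self.lo + shift) / self.span * self.cells), self.cells - 1)
            j = min(int((y - self.lo + shift) / self.span * self.cells), self.cells - 1)
            yield t, i, j

    def predict(self, x, y):
        # Algebraic sum of the weights in all activated memory cells.
        return sum(self.w[t, i, j] for t, i, j in self._active_cells(x, y))

    def train(self, x, y, target, lr=0.5):
        # Spread the correction equally over the activated cells (LMS-style).
        error = target - self.predict(x, y)
        for t, i, j in self._active_cells(x, y):
            self.w[t, i, j] += lr * error / self.n_tilings

net = CMAC()
rng = np.random.default_rng(0)
for _ in range(20000):
    x, y = rng.random(2)
    net.train(x, y, np.sin(2 * np.pi * x) * y)
print(net.predict(0.3, 0.8))   # close to sin(0.6*pi) * 0.8, roughly 0.76
```

Because nearby inputs share memory cells, the trained network also produces sensible outputs for points it never saw during training, which is the generalisation property described above.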
Building blocks
In the adjacent image, there are two inputs to the CMAC, represented as a 2D space. Two quantising functions have been used to divide this space with two overlapping grids (one shown in heavier lines). A single input is shown near the middle, and this has activated two memory cells, corresponding to the shaded area. If another point occurs close to the one shown, it will share some of the same memory cells, providing generalisation.
The CMAC is trained by presenting pairs of input poin |
https://en.wikipedia.org/wiki/Biocontainment%20of%20genetically%20modified%20organisms | Since the advent of genetic engineering in the 1970s, concerns have been raised about the dangers of the technology. Laws, regulations, and treaties were created in the years following to contain genetically modified organisms and prevent their escape. Nevertheless, there are several examples of failure to keep GM crops separate from conventional ones.
Overview
In the context of agriculture and food and feed production, co-existence means using cropping systems with and without genetically modified crops in parallel. In some countries, such as the United States, co-existence is not governed by any single law but instead is managed by regulatory agencies and tort law. In other regions, such as Europe, regulations require that the separation and the identity of the respective food and feed products must be maintained at all stages of the production process.
Many consumers are critical of genetically modified plants and their products, while, conversely, most experts in charge of GMO approvals do not perceive concrete threats to health or the environment. The compromise chosen by some countries - notably the European Union - has been to implement regulations specifically governing co-existence and traceability. Traceability has become commonplace in the food and feed supply chains of most countries in the world, but the traceability of GMOs is made more challenging by the addition of very strict legal thresholds for unwanted mixing. Within the European Union, since 2001, conventional and organic food and feedstuffs can contain up to 0.9% of authorised GM material without being labelled GM (any trace of non-authorised GM products would cause shipments to be rejected).
In the United States there is no legislation governing the co-existence of neighboring farms growing organic and GM crops; instead the US relies on a "complex but relaxed" combination of three federal agencies (FDA, EPA, and USDA/APHIS) and the common law tort system, governed by state law, to ma |
https://en.wikipedia.org/wiki/MPX%20filter | MPX filter is a function found in analogue stereo FM broadcasting and personal monitor equipment, FM tuners and cassette decks. An MPX filter is, at least, a notch filter blocking the 19 kHz pilot tone, and possibly higher frequencies in the 23-53 kHz and 63-75 kHz bands.
Broadcasting and personal monitors
FM stereo broadcasts contain a pilot tone - a 19 kHz sinewave serving as a phase reference for decoding the stereophonic information. The system was developed jointly by Zenith and General Electric, and approved by the FCC in 1961. The normal monaural audio, the pilot tone and the double-sideband stereophonic difference information are all mixed together into a composite FM baseband signal extending to 53 kHz (stereo audio only) or 99 kHz (stereo audio plus an auxiliary subchannel, the so-called SCA). The process of encoding the difference signal into the 23-53 kHz band via double-sideband carrier-suppressed amplitude modulation is an instance of multiplexing (hence the name MPX filter).
The pilot tone resides inside the audio band (although beyond the range of many adult listeners), and can be compromised by high-energy treble components of the source. Any energy at frequencies above 19 kHz, which is the Nyquist frequency of FM stereo, may cause offensive audible aliasing described as "monkey chatter". Any energy between 18.5 and 19.5 kHz may disrupt stereo decoding, causing sudden rotation of the soundfield. For these reasons, source programs for commercial FM broadcasting are limited to a 50 Hz – 15 kHz bandwidth with very steep 15 kHz low-pass filters.
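As a rough illustration of the minimal MPX filter described above, the following Python sketch designs a notch at 19 kHz with SciPy; the sample rate and Q factor here are arbitrary assumptions, not broadcast-standard values.

```python
import numpy as np
from scipy import signal

fs = 192_000            # assumed sample rate, high enough to cover the 19 kHz region
f0, Q = 19_000, 30.0    # notch centred on the pilot tone; Q chosen arbitrarily

b, a = signal.iirnotch(f0, Q, fs=fs)

# Inspect the response: near 0 dB through the audio band, deep rejection at 19 kHz.
freqs, h = signal.freqz(b, a, worN=8192, fs=fs)
pilot_bin = np.argmin(np.abs(freqs - f0))
print(20 * np.log10(np.abs(h[pilot_bin])))   # strongly negative (deep notch)

# A full MPX filter would cascade this with filters covering the 23-53 kHz and
# 63-75 kHz bands; apply to an audio array with: signal.lfilter(b, a, audio)
```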
Source programs transmitted to personal monitors on stage or in the recording studio do not need to follow the broadcast 50 Hz – 15 kHz standard, and may have substantial upper treble content. Sources that were subjected to excessive treble boost, or excessive stereo panning are particularly capable of degrading stereo FM reception. For this reason, they must pass through a brickwall MPX filter to clean up space for the p |
https://en.wikipedia.org/wiki/High%20frequency%20content%20measure | In signal processing, the high frequency content measure is a simple measure, taken across a signal spectrum (usually a STFT spectrum), that can be used to characterize the amount of high-frequency content in the signal. The magnitudes of the spectral bins are added together, with each magnitude weighted by the bin "position" (proportional to the frequency). Thus if X(k) is a discrete spectrum with N unique points, its high frequency content measure is: HFC = \sum_{k=0}^{N-1} k \, |X(k)|
In contrast to perceptual measures, this is not based on any evidence about its relevance to human hearing. Despite that, it can be useful for some applications, such as onset detection.
The measure has close similarities to the spectral centroid measure, being essentially the same calculation but without normalization according to overall magnitude. |
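A small NumPy sketch of the calculation above, using a single FFT frame as the spectrum; the frame length and test tones are arbitrary choices for illustration.

```python
import numpy as np

def high_frequency_content(frame):
    """Sum of bin index times bin magnitude over one spectral frame."""
    magnitudes = np.abs(np.fft.rfft(frame))   # N unique points of the spectrum
    bins = np.arange(magnitudes.size)         # bin "position", proportional to frequency
    return np.sum(bins * magnitudes)

fs = 44100
t = np.arange(1024) / fs
# A 5 kHz tone scores far higher than a 200 Hz tone of equal amplitude.
print(high_frequency_content(np.sin(2 * np.pi * 200 * t)))
print(high_frequency_content(np.sin(2 * np.pi * 5000 * t)))
```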
https://en.wikipedia.org/wiki/Tower%20of%20Babel%20%28M.%20C.%20Escher%29 | Tower of Babel is a 1928 woodcut by M. C. Escher. It depicts the Tower of Babel, a biblical story about people attempting to build a tower to reach God, which is found in Genesis 11:9. Although Escher dismissed his works before 1935 as of little or no value as they were "for the most part merely practice exercises," some of them, including the Tower of Babel, chart the development of his interest in perspective and unusual viewpoints that would become the hallmarks of his later, more famous, work.
In contrast to many other depictions of the biblical story, such as those by Pieter Brueghel the Elder (The Tower of Babel) and Gustave Doré (The Confusion of Tongues), Escher depicts the tower as a geometrical structure and places the viewpoint above the tower. This allows him to exercise his skill with perspective, but he also chose to centre the picture around the top of the tower as the focus for the climax of the action. He later commented:
See also
Belvedere
Waterfall |
https://en.wikipedia.org/wiki/Loft%20%283D%29 | Loft is a method to create complicated smooth 3D shapes in CAD and other 3D modeling software. Planar cross-sections of the desired shape are defined at chosen locations. Algorithms then find a smooth 3D shape that fits these cross-sections. Designers can modify the shape through the choice of fitting algorithm and input parameters. The method is used in packages such as Onshape, 3D Studio Max, Creo, SolidWorks, NX, Autodesk Revit, and FreeCAD.
To visualise the process, consider lofting in boat building: the planar sections are defined by the boat's ribs, spaced along its length, and the final shape is produced by placing planks over the ribs to form a smooth skin.
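A toy Python sketch of the same idea, blending linearly between two square "ribs"; production CAD lofts fit smooth spline surfaces, so the linear blend here stands in for the simplest possible fitting algorithm.

```python
import numpy as np

def loft(sections, n_steps=20):
    """Interpolate between successive cross-sections (each a (k, 3) array of
    vertices, all with the same vertex count) to produce a skin of sections."""
    skin = []
    for a, b in zip(sections, sections[1:]):
        for s in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            skin.append((1 - s) * a + s * b)   # linear blend of matching vertices
    skin.append(sections[-1])
    return np.array(skin)

# Two square ribs of different sizes, stationed along the x axis:
rib1 = np.array([[0, -1, -1], [0, 1, -1], [0, 1, 1], [0, -1, 1]], dtype=float)
rib2 = rib1 * [1, 0.5, 0.5] + [5, 0, 0]        # half-size square, 5 units along x
surface = loft([rib1, rib2])
print(surface.shape)                           # (21, 4, 3): 21 sections of 4 vertices
```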
In PTC's Creo and in Autodesk Revit it is referred to as a Blend or Swept Blend.
See also
Parallel transport
Lathe (graphics)
Examples (external links)
Modeling an irregular funnel with the loft tool
Computer-aided design |
https://en.wikipedia.org/wiki/Traceability%20of%20genetically%20modified%20organisms | The traceability of genetically modified organisms (GMOs) describes a system that ensures the forwarding of the identity of a GMO from its production to its final buyer. Traceability is an essential prerequisite for the co-existence of GM and non-GM foods, and for the freedom of choice for consumers.
Why traceability is needed
The traceability of GMOs is founded on two needs. First, consumers in many countries are reluctant to buy genetically modified foods, and are skeptical of the use of GM crops for animal feed. Consequently, the concept of co-existence has been developed to separate GM and non-GM supply chains, and is only possible if all purchasers along the production chain know what they are buying. Secondly, although every GMO that is approved for commercialisation must have passed a safety assessment, it may be necessary to withdraw a certain GMO from the market - for example, if new scientific evidence raises doubts about its safety.
Unique identifiers for GMOs
For these purposes, after three years of debate, the OECD countries came up with an identity code for GMOs in 2002. Initially, some member countries (for example, the US, but also Canada and Australia) were opposed to the concept. The final decision requires the assignment of a "unique identifier" to each GMO event which is authorised in one or more OECD countries. The unique identifier is a code consisting of nine letters and/or numbers. The first two or three characters indicate the company submitting the application, while the following six or five characters specify the respective transformation event. The last digit serves as a verifier. All the crop varieties derived from one transformation event will share the same unique identifier.
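A hedged Python sketch of a structural check for this format. It enforces only the layout described above (two or three applicant characters plus six or five event characters, nine in total, then a verifier digit); the actual verifier-digit algorithm is not specified here and is not checked. The sample code MON-00810-6 is the OECD identifier for the MON810 maize event, normally printed with slashed zeros (MON-ØØ81Ø-6).

```python
import re

# Hyphen-separated layout: applicant code, transformation event, verifier digit.
UID = re.compile(r"^([A-Z0-9]{2,3})-([A-Z0-9]{5,6})-([0-9])$")

def parse_unique_identifier(code):
    m = UID.fullmatch(code.upper())
    # The applicant and event parts must total eight characters (2+6 or 3+5),
    # so that the whole identifier is nine letters and/or numbers.
    if not m or len(m.group(1)) + len(m.group(2)) != 8:
        return None
    applicant, event, verifier = m.groups()
    return {"applicant": applicant, "event": event, "verifier": verifier}

print(parse_unique_identifier("MON-00810-6"))
# {'applicant': 'MON', 'event': '00810', 'verifier': '6'}
```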
The unique identifier has been integrated in the Cartagena Protocol on Biosafety and in the European Union legislation on the labelling and traceability of genetically modified organisms (Regulation (EC) No 1830/2003). Detailing the unique identifier, the regu |
https://en.wikipedia.org/wiki/Effective%20number%20of%20bits | Effective number of bits (ENOB) is a measure of the dynamic range of an analog-to-digital converter (ADC), digital-to-analog converter, or their associated circuitry. The resolution of an ADC is specified by the number of bits used to represent the analog value. Ideally, a 12-bit ADC will have an effective number of bits of almost 12. However, real signals have noise, and real circuits are imperfect and introduce additional noise and distortion. Those imperfections reduce the number of bits of accuracy in the ADC. The ENOB describes the effective resolution of the system in bits. An ADC may have a 12-bit resolution, but the effective number of bits, when used in a system, may be 9.5.
ENOB is also used as a quality measure for other blocks such as sample-and-hold amplifiers. Thus analog blocks may be included in signal-chain calculations. The total ENOB of a chain of blocks is usually less than the ENOB of the worst block.
The frequency band of a signal converter where ENOB is still guaranteed is called the effective resolution bandwidth and is limited by dynamic quantization problems. For example, an ADC has some aperture uncertainty: the instant at which a real ADC samples its input varies from sample to sample. Because the input signal changes, that time variation translates to an output variation. For example, an ADC may sample 1 ns late. If the input signal is a 1 V sinewave at 1,000,000 radians/second (roughly 160 kHz), the input voltage may change at a rate of up to 1 MV/s (1 mV per nanosecond). A sampling time error of 1 ns would therefore cause a sampling error of about 1 mV (an error in the 10th bit). If the frequency were 100 times higher (about 16 MHz), then the maximum error would be 100 times greater: about 100 mV on a 1 V signal (an error in the third or fourth bit).
Definition
An often used definition for ENOB is ENOB = (SINAD − 1.76) / 6.02
where
ENOB is given in bits
SINAD (signal, noise, and distortion) is a power ratio indicating the quality of the signal in dB.
the 6.02 term in the divisor converts decibels (a l |
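A one-line helper implementing this definition, checked against the 12-bit and 9.5-bit figures from the introduction:

```python
def enob(sinad_db):
    """Effective number of bits from a measured SINAD in dB."""
    return (sinad_db - 1.76) / 6.02

# An ideal 12-bit converter has SINAD = 6.02 * 12 + 1.76 = 74.0 dB.
print(enob(74.0))   # ~12.0 bits
print(enob(59.0))   # ~9.5 bits, the in-system example above
```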
https://en.wikipedia.org/wiki/Galenic%20formulation | Galenic formulation deals with the principles of preparing and compounding medicines in order to optimize their absorption. Galenic formulation is named after Claudius Galen, a 2nd-century AD Greek physician, who codified the preparation of drugs using multiple ingredients. Today, galenic formulation is part of pharmaceutical formulation. The pharmaceutical formulation of a medicine affects the pharmacokinetics, pharmacodynamics and safety profile of a drug.
See also
Formulations
Pharmaceutical formulation
ADME
Pharmacology
Medicinal chemistry
Pesticide formulation |
https://en.wikipedia.org/wiki/Dehalococcoides | Dehalococcoides is a genus of bacteria within class Dehalococcoidia that obtain energy via the oxidation of hydrogen and subsequent reductive dehalogenation of halogenated organic compounds in a mode of anaerobic respiration called organohalide respiration. They are well known for their great potential to remediate halogenated ethenes and aromatics. They are the only bacteria known to transform highly chlorinated dioxins and PCBs. In addition, they are the only known bacteria to transform tetrachloroethene (perchloroethene, PCE) to ethene.
Microbiology
The first member of the genus Dehalococcoides was described in 1997 as Dehalococcoides ethenogenes strain 195 (nom. inval.). Additional Dehalococcoides members were later described as strains CBDB1, BAV1, FL2, VS, and GT. In 2012 all yet-isolated Dehalococcoides strains were summarized under the new taxonomic name D. mccartyi, with strain 195 as the type strain.
GTDB release 202 clusters the genus into three species, all labeled Dehalococcoides mccartyi in their NCBI accession.
Activities
Dehalococcoides are obligately organohalide-respiring bacteria, meaning that they can only grow by using halogenated compounds as electron acceptors. Currently, hydrogen (H2) is often regarded as the only known electron donor to support growth of Dehalococcoides bacteria. However, studies have shown that various electron donors, such as formate and methyl viologen, have also been effective in promoting growth for various species of Dehalococcoides. In order to perform reductive dehalogenation, electrons are transferred from electron donors through dehydrogenases and are ultimately used to reduce halogenated compounds, many of which are human-synthesized chemicals acting as pollutants. Furthermore, it has been shown that a majority of reductive dehalogenase activity lies within the extracellular and membranous components of D. ethenogenes, indicating that dechlorination processes may function semi-independent |
https://en.wikipedia.org/wiki/Bowen%20knot | The Bowen knot (also known as the heraldic knot in symbolism) is not a true knot, but is rather a heraldic knot, sometimes used as a heraldic charge. It is named after the Welshman James Bowen (died 1629) and is also called true lover's knot. It consists of a rope in the form of a continuous loop laid out as an upright square shape with loops at each of the four corners. Since the rope is not actually knotted, it would in topological terms be considered an unknot.
In Norwegian heraldry a Bowen knot is called a valknute (valknut) and the municipal coat of arms of Lødingen from 1984 has a femsløyfet valknute which means a Bowen knot with five loops.
An angular Bowen knot is such a knot with no rounded sides, so that it appears to be made of five squares. A Bowen knot with lozenge-shaped loops is called a bendwise Bowen knot or a Bowen cross.
The Dacre, Hungerford, Lacy, Shakespeare, and Tristram knots are all considered variations of the Bowen knot, and are sometimes blazoned as such.
The Bowen knot resembles the symbol ⌘ (looped square), which is used on Apple Keyboards as the symbol of the Command key. However, the origin of this use is not related to the use of the Bowen knot in heraldic designs. |
https://en.wikipedia.org/wiki/Synonym%20%28taxonomy%29 | The Botanical and Zoological Codes of nomenclature treat the concept of synonymy differently.
In botanical nomenclature, a synonym is a scientific name that applies to a taxon that (now) goes by a different scientific name. For example, Linnaeus was the first to give a scientific name (under the currently used system of scientific nomenclature) to the Norway spruce, which he called Pinus abies. This name is no longer in use, so it is now a synonym of the current scientific name, Picea abies.
In zoology, moving a species from one genus to another results in a different binomen, but the name is considered an alternative combination rather than a synonym. The concept of synonymy in zoology is reserved for two names at the same rank that refer to a taxon at that rank – for example, the name Papilio prorsa Linnaeus, 1758 is a junior synonym of Papilio levana Linnaeus, 1758, being names for different seasonal forms of the species now referred to as Araschnia levana (Linnaeus, 1758), the map butterfly. However, Araschnia levana is not a synonym of Papilio levana in the taxonomic sense employed by the Zoological code.
Unlike synonyms in other contexts, in taxonomy a synonym is not interchangeable with the name of which it is a synonym. In taxonomy, synonyms are not equals, but have a different status. For any taxon with a particular circumscription, position, and rank, only one scientific name is considered to be the correct one at any given time (this correct name is to be determined by applying the relevant code of nomenclature). A synonym cannot exist in isolation: it is always an alternative to a different scientific name. Given that the correct name of a taxon depends on the taxonomic viewpoint used (resulting in a particular circumscription, position and rank) a name that is one taxonomist's synonym may be another taxonomist's correct name (and vice versa).
Synonyms may arise whenever the same taxon is described and named more than once, independently. They may |
https://en.wikipedia.org/wiki/GenePattern | GenePattern is a freely available computational biology open-source software package originally created and developed at the Broad Institute for the analysis of genomic data. Designed to enable researchers to develop, capture, and reproduce genomic analysis methodologies, GenePattern was first released in 2004. GenePattern is currently developed at the University of California, San Diego.
Functionality
GenePattern is a scientific workflow system that provides access to hundreds of genomic analysis tools. These tools can be used as building blocks to design sophisticated analysis pipelines that capture the methods, parameters, and data used to produce analysis results. Pipelines can be used to create, edit and share reproducible in silico results.
Project Objectives
Accessibility: Run over 200 regularly updated analysis and visualization tools (that support data preprocessing, gene expression analysis, proteomics, Single nucleotide polymorphism (SNP) analysis, flow cytometry, and next-generation sequencing) and create analytic workflows without any programming through a point and click user interface.
Reproducibility: Automated history and provenance tracking with versioning so that any user can share, repeat and understand a complete computational analysis
Extensibility: Computational users can import their methods and code for sharing using tools that support easy creation and integration
Multiple interfaces: Web browser, application, and programmatic interfaces make analysis modules and pipelines available to a broad range of users; public hosted server
Features
A regularly updated repository of hundreds of computational analysis modules that support data preprocessing, gene expression analysis, proteomics, single nucleotide polymorphism (SNP) analysis, flow cytometry, and short-read sequencing.
A programmatic interface that makes analysis modules available to computational biologists and developers from Python, Java, MATLAB, and R.
The GenePa |
https://en.wikipedia.org/wiki/KE%20family | The KE family is a medical name designated for a British family, about half of whom exhibit a severe speech disorder called developmental verbal dyspraxia. It is the first family with a speech disorder to be investigated using genetic analysis, through which the speech impairment was found to be due to a genetic mutation, and from which the gene FOXP2, often dubbed the "language gene", was discovered. Their condition is also the first human speech and language disorder known to exhibit strict Mendelian inheritance.
Brought to medical attention through the family's schoolchildren in the late 1980s, the case of the KE family was taken up at the UCL Institute of Child Health in London in 1990. An initial report suggested that the family was affected by a genetic disorder. Canadian linguist Myrna Gopnik suggested that the disorder was characterized primarily by grammatical deficiency, supporting the controversial notion of a "grammar gene". Geneticists at the University of Oxford determined that the condition was indeed genetic, with complex physical and physiological effects, and in 1998, they identified the actual gene, eventually named FOXP2. Contrary to the grammar gene notion, FOXP2 does not control any specific grammar or language output. This discovery directly led to a broader understanding of human evolution, as the gene is directly implicated in the origin of language.
Two family members, a boy and a girl, were featured in the National Geographic documentary film Human Ape.
Background and identity
The individual identities of the KE family members are kept confidential. The family's children attended Elizabeth Augur's special educational needs unit at the Lionel Primary School in Brentford, West London. Towards the end of the 1980s, seven children of the family attended there. Augur learned that the family had had a speech disorder for three generations. Of the 30 members, half had a severe disability, some were affected mildly, and a few were unaffected. Their faces show rigidity at the lo |
https://en.wikipedia.org/wiki/Broadcast%20storm | A broadcast storm or broadcast radiation is the accumulation of broadcast and multicast traffic on a computer network. Extreme amounts of broadcast traffic constitute a broadcast storm. It can consume sufficient network resources so as to render the network unable to transport normal traffic. A packet that induces such a storm is occasionally nicknamed a Chernobyl packet.
Causes
Most commonly the cause is a switching loop in the Ethernet network topology (i.e. two or more paths exist between switches). A simple example is both ends of a single Ethernet patch cable connected to a switch. As broadcasts and multicasts are forwarded by switches out of every port, the switch or switches will repeatedly rebroadcast broadcast messages and flood the network. Since the layer-2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever.
In some cases, a broadcast storm can be instigated for the purpose of a denial of service (DOS) using one of the packet amplification attacks, such as the smurf attack or fraggle attack, in which an attacker sends a large amount of ICMP Echo Request (ping) traffic to a broadcast address, with each ICMP Echo packet containing the spoofed source address of the victim host. When the spoofed packet arrives at the destination network, all hosts on the network reply to the spoofed address. The initial Echo Request is multiplied by the number of hosts on the network. This generates a storm of replies to the victim host, tying up network bandwidth, using up CPU resources or possibly crashing the victim.
In wireless networks, a disassociation packet with its source address spoofed to that of the wireless access point and sent to the broadcast address can generate a disassociation broadcast DOS attack.
Prevention
Switching loops are largely addressed through link aggregation, shortest path bridging or spanning tree protocol. In Metro Ethernet rings it is prevented using the Ethernet Ring Protection Switchi |
https://en.wikipedia.org/wiki/Superior%20extensor%20retinaculum%20of%20foot | The superior extensor retinaculum of the foot (transverse crural ligament) is the upper part of the extensor retinaculum of foot which extends from the ankle to the heelbone.
The superior extensor retinaculum binds down the tendons of extensor digitorum longus, extensor hallucis longus, peroneus tertius, and tibialis anterior as they descend on the front of the tibia and fibula; under it are found also the anterior tibial vessels and deep peroneal nerve.
It is found on the lateral side of the lower leg, attached laterally to the lower end of the fibula, and medially to the tibia; above it is continuous with the fascia of the leg.
Additional images
See also
Peroneal retinacula |
https://en.wikipedia.org/wiki/Disk%20sector | In computer disk storage, a sector is a subdivision of a track on a magnetic disk or optical disc. For most disks, each sector stores a fixed amount of user-accessible data, traditionally 512 bytes for hard disk drives (HDDs) and 2048 bytes for CD-ROMs and DVD-ROMs. Newer HDDs and SSDs use 4096-byte (4 KiB) sectors, which are known as the Advanced Format (AF).
The sector is the minimum storage unit of a hard drive. Most disk partitioning schemes are designed to have files occupy an integral number of sectors regardless of the file's actual size. Files that do not fill a whole sector will have the remainder of their last sector filled with zeroes. In practice, operating systems typically operate on blocks of data, which may span multiple sectors.
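A short arithmetic sketch of this allocation rule, assuming the traditional 512-byte sector size:

```python
SECTOR = 512  # bytes; traditional HDD sector size

def sectors_needed(file_size):
    """Whole sectors a file occupies: files always take an integral number."""
    return -(-file_size // SECTOR)            # ceiling division

def slack_bytes(file_size):
    """Unused bytes in the last sector, filled with zeroes on disk."""
    return sectors_needed(file_size) * SECTOR - file_size

print(sectors_needed(1300), slack_bytes(1300))   # 3 sectors, 236 zero-filled bytes
```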
Geometrically, the word sector means a portion of a disk between a center, two radii and a corresponding arc (see Figure 1, item B), which is shaped like a slice of a pie. Thus, the disk sector (Figure 1, item C) refers to the intersection of a track and geometrical sector.
In modern disk drives, each physical sector is made up of two basic parts, the sector header area (typically called "ID") and the data area. The sector header contains information used by the drive and controller; this information includes sync bytes, address identification, flaw flag and error detection and correction information. The header may also include an alternate address to be used if the data area is undependable. The address identification is used to ensure that the mechanics of the drive have positioned the read/write head over the correct location. The data area contains the sync bytes, user data and an error-correcting code (ECC) that is used to check and possibly correct errors that may have been introduced into the data.
History
The first disk drive, the 1957 IBM 350 disk storage, had ten 100-character sectors per track; each character was six bits and included a parity bit. The number of sectors per track was identical on all recordi |
https://en.wikipedia.org/wiki/Network%20address | A network address is an identifier for a node or host on a telecommunications network. Network addresses are designed to be unique identifiers across the network, although some networks allow for local, private addresses, or locally administered addresses that may not be unique. Special network addresses are allocated as broadcast or multicast addresses. These too are not unique.
In some cases, network hosts may have more than one network address. For example, each network interface controller may be uniquely identified. Further, because protocols are frequently layered, more than one protocol's network address can occur in any particular network interface or node and more than one type of network address may be used in any one network.
Network addresses can be flat addresses, which contain no information about the node's location in the network (such as a MAC address), or may contain structure or hierarchical information used for routing (such as an IP address).
Examples
Examples of network addresses include:
Telephone number, in the public switched telephone network
IP address in IP networks including the Internet
IPX address, in NetWare
X.25 or X.21 address, in a circuit switched data network
MAC address, in Ethernet and other related IEEE 802 network technologies |
https://en.wikipedia.org/wiki/Aavasaksa | Aavasaksa is a sharp-edged hill in Ylitornio municipality in Finnish Lapland. It has an elevation of . Aavasaksa is famous for its views towards both Finland and Sweden, and it is included in the list of the National landscapes of Finland. The decorative hunting cabin "Imperial Lodge" (Keisarinmaja) is one of the buildings on top of the hill; its construction was begun in anticipation of a visit by Alexander II of Russia which, due to political instability, never took place. It is open only in the summer.
Due to Aavasaksa's distinctive elevation above other nearby hills, it was first used by Pierre Louis Maupertuis in the French Geodesic Mission (1736–1737), and later became part of the Struve Geodetic Arc. As a result of this, UNESCO named Aavasaksa a World Heritage Site, along with the 33 other sites used in the Struve Geodetic Arc.
Aavasaksa is often considered the southernmost point in Finland where the midnight sun is literally visible. The hill is surrounded by rivers running next to it: Torne River to the west and the smaller to the east and north.
Asteroid 2678 Aavasaksa is named after the hill. |
https://en.wikipedia.org/wiki/Many-sorted%20logic | Many-sorted logic can reflect formally our intention not to handle the universe as a homogeneous collection of objects, but to partition it in a way that is similar to types in typeful programming. Both functional and assertive "parts of speech" in the language of the logic reflect this typeful partitioning of the universe, even on the syntax level: substitution and argument passing can be done only accordingly, respecting the "sorts".
There are various ways to formalize the intention mentioned above; a many-sorted logic is any package of information which fulfils it. In most cases, the following are given:
a set of sorts, S
an appropriate generalization of the notion of signature to be able to handle the additional information that comes with the sorts.
The domain of discourse of any structure of that signature is then fragmented into disjoint subsets, one for every sort.
Example
When reasoning about biological organisms, it is useful to distinguish two sorts: plant and animal. While a function mother: animal → animal makes sense, a similar function mother: plant → plant usually does not. Many-sorted logic allows one to have terms like mother(lassie), but to discard terms like mother(my_favorite_oak) as syntactically ill-formed.
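A toy Python sketch of this kind of sort checking; the signature encoding and term representation are ad-hoc choices made for illustration.

```python
# Each function symbol is assigned argument sorts and a result sort.
SIGNATURE = {"mother": (("animal",), "animal")}   # mother: animal -> animal
CONSTANTS = {"lassie": "animal", "my_favorite_oak": "plant"}

def sort_of(term):
    """Return the sort of a well-formed term; raise for ill-sorted ones."""
    if isinstance(term, str):                     # a constant
        return CONSTANTS[term]
    fn, *args = term                              # a function application
    arg_sorts, result_sort = SIGNATURE[fn]
    actual = tuple(sort_of(a) for a in args)
    if actual != arg_sorts:
        raise TypeError(f"{fn} applied to {actual}: syntactically ill-formed")
    return result_sort

print(sort_of(("mother", "lassie")))              # animal
# sort_of(("mother", "my_favorite_oak"))          # raises TypeError
```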
Algebraization
The algebraization of many-sorted logic is explained in an article by Caleiro and Gonçalves, which generalizes abstract algebraic logic to the many-sorted case, but can also be used as introductory material.
Order-sorted logic
While many-sorted logic requires two distinct sorts to have disjoint universe sets, order-sorted logic allows one sort s1 to be declared a subsort of another sort s2, usually by writing s1 ⊆ s2 or similar syntax. In the above biology example, it is desirable to declare subsort relations such as
dog ⊆ animal, cat ⊆ animal, animal ⊆ organism, plant ⊆ organism, and so on; cf. picture.
Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead (Liskov substitution principle). For example, assuming a function declaration mother: animal → animal and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that |
https://en.wikipedia.org/wiki/SUN%20domain | SUN (Sad1p, UNC-84) domains are conserved C-terminal protein regions a few hundred amino acids long. SUN domains are usually found following a transmembrane domain and a less conserved region of amino acids. Most proteins containing SUN domains are thought to be involved in the positioning of the nucleus in the cell. It is thought that SUN domains interact directly with KASH domains in the space between the outer and inner nuclear membranes to bridge the nuclear envelope and transfer force from the nucleoskeleton to the cytoplasmic cytoskeleton which enables mechanosensory roles in cells. SUN proteins are thought to localize to the inner nuclear membrane. The S. pombe Sad1 protein localises at the spindle pole body. In mammals, the SUN domain is present in two proteins, Sun1 and Sun2. The SUN domain of Sun2 has been demonstrated to be in the periplasm.
Examples of SUN proteins
Caenorhabditis elegans
SUN-1/matefin
UNC-84
Drosophila melanogaster
Klaroid
Spag4
Mammals
SUN1, SUN2, SUN3, SUN4, SUN5
Schizosaccharomyces pombe
Sad1p
Saccharomyces cerevisiae
Mps3p
Maize
SUN1, SUN2, SUN3, SUN4, SUN5
Arabidopsis
SUN1, SUN2 |
https://en.wikipedia.org/wiki/Omega%20loop | The omega loop is a non-regular protein structural motif, consisting of a loop of six or more amino acid residues and any amino acid sequence. The defining characteristic is that residues that make up the beginning and end of the loop are close together in space with no intervening lengths of regular secondary structural motifs. It is named after its shape, which resembles the upper-case Greek letter Omega (Ω).
Structure
Omega loops, being non-regular, non-repeating secondary structural units, have a variety of three-dimensional shapes. Omega loop shapes are analyzed to identify recurring patterns in dihedral angles and overall loop shape to help identify potential roles in protein folding and function.
Since loops are almost always at the protein surface, it is often assumed that these structures are flexible; however, different omega loops exhibit different ranges of flexibility across different time scales of protein motion, and they have been identified as playing a role in the folding of some proteins, including HIV-1 reverse transcriptase, cytochrome c, and nucleases.
Function
Omega loops can contribute to protein function. For example, omega loops can help stabilize interactions between protein and ligand, such as in the enzyme triose phosphate isomerase, and can directly affect protein function in other enzymes. A heritable coagulation disorder is caused by a single-site mutation in an omega loop of protein C.
Likewise, omega loops play an interesting role in the function of the beta-lactamases: mutations in the "omega loop region" of a beta-lactamase can change its specific function and substrate profile, perhaps due to an important functional role of the correlated dynamics of the region.
Cytochrome c
Omega loops have also long been recognized for their importance in the function and folding of the protein cytochrome c, contributing both key functional residues and important dynamic properties. Many researchers have studied omega loop function and dynamics in |
https://en.wikipedia.org/wiki/Thymopoietin | Lamina-associated polypeptide 2 (LAP2), isoforms beta/gamma is a protein that in humans is encoded by the TMPO gene. LAP2 is an inner nuclear membrane (INM) protein.
Thymopoietin is a protein involved in the induction of CD90 in the thymus. The thymopoietin (TMPO) gene encodes three alternatively spliced mRNAs encoding proteins of 75 kDa (alpha), 51 kDa (beta) and 39 kDa (gamma), which are ubiquitously expressed in all cells. The human TMPO gene maps to chromosome band 12q22 and consists of eight exons. TMPO alpha is diffusely expressed within the cell nucleus, while TMPO beta and gamma are localized to the nuclear membrane. TMPO beta is a human homolog of the murine protein LAP2. LAP2 plays a role in the regulation of nuclear architecture by binding lamin B1 and chromosomes. This interaction is regulated by phosphorylation during mitosis. Given the nuclear localization of the three TMPO isoforms, it is unlikely that these proteins play any role in CD90 induction.
Interactions
Thymopoietin has been shown to interact with Barrier to autointegration factor 1, AKAP8L, LMNB1 and LMNA. |
https://en.wikipedia.org/wiki/VMware%20Workstation | VMware Workstation Pro (known as VMware Workstation until the release of VMware Workstation 12 in 2015) is a hosted (Type 2) hypervisor that runs only on x64 versions of Windows and Linux operating systems. Earlier versions of the software were also available for x86-32 systems. It enables users to set up virtual machines (VMs) on a single physical machine and use them simultaneously along with the host machine. Each virtual machine can execute its own operating system, including versions of Microsoft Windows, Linux, BSD, and MS-DOS. VMware Workstation is developed and sold by VMware, Inc. There is a free-of-charge version, VMware Workstation Player (known as VMware Player until the release of VMware Workstation 12 in 2015), for non-commercial use. An operating system license is needed to run proprietary guest systems such as Windows. Ready-made Linux VMs set up for different purposes are available from several sources.
VMware Workstation supports bridging existing host network adapters and sharing physical disk drives and USB devices with a virtual machine. It can simulate disk drives; an ISO image file can be mounted as a virtual optical disc drive, and virtual hard disk drives are implemented as .vmdk files.
VMware Workstation Pro can save the state of a virtual machine (a "snapshot") at any instant. These snapshots can later be restored, effectively returning the virtual machine to the saved state, as it was and free from any post-snapshot damage to the VM.
VMware Workstation includes the ability to group multiple virtual machines in an inventory folder. The machines in such a folder can then be powered on and powered off as a single object, useful for testing complex client-server environments.
2016 company changes and future development
VMware Workstation versions 12.0.0, 12.0.1, and 12.1.0 were released at intervals of about two months in 2015. In January 2016 the entire development team behind VMware Workstation and Fusion was disbanded and all US developers were |
https://en.wikipedia.org/wiki/VMware%20Server | VMware Server (formerly VMware GSX Server) is a discontinued free-of-charge virtualization-software server suite developed and supplied by VMware, Inc.
VMware Server has fewer features than VMware ESX, software available for purchase, but can create, edit, and play virtual machines. It uses a client–server model, allowing remote access to virtual machines, at the cost of some graphical performance (and 3D support). It can run virtual machines created by other VMware products and by Microsoft Virtual PC.
VMware Server can preserve and revert to a single snapshot copy of each separate virtual machine within the VMware Server environment. The software does not have a specific interface for cloning virtual machines, unlike VMware Workstation.
VMware Server has largely been replaced by the "Shared Virtual Machines" feature, introduced in VMware Workstation 8.0 and onwards.
Naming
The former name GSX Server allegedly stands for Ground Storm X, an early code name for the project.
Versions
VMware Server 1.0
VMware released version 1.0 of Server on July 12, 2006, replacing the discontinued VMware GSX Server product line. VMware Inc. continued to develop the VMware Server 1.0.x series, issuing a maintenance release (version 1.0.10) on 26 October 2009.
VMware Server 2.0
VMware Server 2 runs on several server-class host operating systems,
including different versions of Microsoft Windows Server 2000, 2003, and 2008, and mainly enterprise-class Linux distributions. The manual explicitly states: "you must use a Windows server operating system". The product also runs on Windows 7 Enterprise Edition.
Server 2 uses a web-based user-interface, the "VMware Infrastructure Web Access", instead of a GUI. For web interfaces, VMware Server 2 and VMware vCenter 4 use the Tomcat 6 web server, while VMware vCenter 2.5 is based on Tomcat 2.5.
As part of the product, the VMware Host Agent service (also carried over to VMware Workstation Server until today) allows remote access to VMware Server fun |
https://en.wikipedia.org/wiki/Vizing%27s%20conjecture | In graph theory, Vizing's conjecture concerns a relation between the domination number and the cartesian product of graphs.
This conjecture was first stated by Vizing (1968), and states that, if γ(G) denotes the minimum number of vertices in a dominating set for the graph G, then γ(G □ H) ≥ γ(G) γ(H).
A similar bound was conjectured for the domination number of the tensor product of graphs; however, a counterexample was later found. Since Vizing proposed his conjecture, many mathematicians have worked on it, with partial results described below; more detailed overviews of these results can be found in the survey literature.
Examples
A 4-cycle C4 has domination number two: any single vertex only dominates itself and its two neighbors, but any pair of vertices dominates the whole graph. The product C4 □ C4 is a four-dimensional hypercube graph; it has 16 vertices, and any single vertex can only dominate itself and four neighbors, so three vertices could only dominate 15 of the 16 vertices. Therefore, at least four vertices are required to dominate the entire graph, which is exactly the bound 2 × 2 = 4 given by Vizing's conjecture.
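This example is small enough to verify by brute force. The sketch below uses networkx and an exponential-time search (feasible only for tiny graphs) to compute the domination numbers of the 4-cycle and of its product with itself.

```python
from itertools import combinations
import networkx as nx

def domination_number(G):
    """Smallest k for which some k vertices dominate every vertex (brute force)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            dominated = set(S)
            for v in S:
                dominated.update(G.neighbors(v))
            if len(dominated) == len(nodes):
                return k

C4 = nx.cycle_graph(4)
product = nx.cartesian_product(C4, C4)     # the four-dimensional hypercube
print(domination_number(C4))               # 2
print(domination_number(product))          # 4 >= 2 * 2, matching the conjecture
```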
It is possible for the domination number of a product to be much larger than the bound given by Vizing's conjecture. For instance, the domination number of a star K1,n is one: it is possible to dominate the entire star with a single vertex at its hub. Therefore, for the graph K1,n □ K1,m formed as the product of two stars, Vizing's conjecture states only that the domination number should be at least 1 × 1 = 1. However, the domination number of this graph is actually much higher. It has (n + 1)(m + 1) vertices: nm formed from the product of a leaf in both factors, n + m from the product of a leaf in one factor and the hub in the other factor, and one remaining vertex formed from the product of the two hubs. Each leaf-hub product vertex dominates exactly the leaf-leaf vertices that share its leaf coordinate, so min(n, m) leaf-hub vertices are needed to dominate all of the leaf-leaf vertices. However, no leaf-hub vertex dominates any other such vertex, so even after leaf-hub vertices are chosen to be included in |
https://en.wikipedia.org/wiki/Bilin%20%28biochemistry%29 | Bilins, bilanes or bile pigments are biological pigments formed in many organisms as a metabolic product of certain porphyrins. Bilin (also called bilichrome) was named as a bile pigment of mammals, but can also be found in lower vertebrates, invertebrates, as well as red algae, green plants and cyanobacteria. Bilins can range in color from red, orange, yellow or brown to blue or green.
In chemical terms, bilins are linear arrangements of four pyrrole rings (tetrapyrroles). In human metabolism, bilirubin is a breakdown product of heme. A modified bilane is an intermediate in the biosynthesis of uroporphyrinogen III from porphobilinogen.
Examples of bilins are found in animals (cardinal examples are bilirubin and biliverdin), and phycocyanobilin, the chromophore of the photosynthetic pigment phycocyanin, in algae and plants. In plants, bilins also serve as the photopigments of the photoreceptor protein phytochrome. An example of an invertebrate bilin is micromatabilin, which is responsible for the green color of the Green Huntsman Spider, Micrommata virescens.
In plants
Most photosynthetic, oxygen-producing organisms contain the positive chlorophyll biosynthesis regulator GENOMES UNCOUPLED 4 (GUN4). Research suggests that GUN4 regulates chlorophyll synthesis, by activating the enzyme Magnesium chelatase, which catalyzes the insertion of Mg2+ into Protoporphyrin IX. Bilins noncovalently bind to CrGUN4, an algal GUN4 from Chlamydomonas reinhardtii, which has been shown to participate in retrograde signaling.
Bilin-binding protein in butterfly wings
Butterfly wings are a newly identified site of porphyrin synthesis and cleavage in which bilin occurs, as shown by the expression of the lipocalin bilin-binding protein in Pieris brassicae. The function of the biliprotein during wing development is still unknown, as was the existence of an active pathway for porphyrin synthesis and cleavage in insect wings before it was demonstrated in that work. The bilin-binding protein from |
https://en.wikipedia.org/wiki/Interaction%20nets | Interaction nets are a graphical model of computation devised by Yves Lafont in 1990 as a generalisation of the proof structures of linear logic. An interaction net system is specified by a set of agent types and a set of interaction rules. Interaction nets are an inherently distributed model of computation in the sense that computations can take place simultaneously in many parts of an interaction net, and no synchronisation is needed. The latter is guaranteed by the strong confluence property of reduction in this model of computation. Thus interaction nets provide a natural language for massive parallelism. Interaction nets are at the heart of many implementations of the lambda calculus, such as efficient closed reduction and Lambdascope, which is optimal in Lévy's sense.
Definitions
Interaction nets are graph-like structures consisting of agents and edges.
An agent of type α and with arity n has one principal port and n auxiliary ports. Any port can be connected to at most one edge. Ports that are not connected to any edge are called free ports. Free ports together form the interface of an interaction net. All agent types belong to a set Σ called the signature.
An interaction net that consists solely of edges is called a wiring and is usually denoted ω. A tree T with its root t is defined inductively either as an edge t, or as an agent α with its free principal port t and its auxiliary ports connected to the roots of other trees T1, …, Tn.
Graphically, the primitive structures of interaction nets can be represented as follows:
When two agents are connected to each other with their principal ports, they form an active pair. For
active pairs one can introduce interaction rules which describe how the active pair rewrites to another interaction
net. An interaction net with no active pairs is said to be in normal form. A signature (with arities defined on it) along with a set of interaction rules defined for its agents together constitute an interaction system.
Interaction calculus
Textual repre |
https://en.wikipedia.org/wiki/Edward%20A.%20Guggenheim | Edward Armand Guggenheim FRS (11 August 1901 – 9 August 1970) was an English physical chemist, noted for his contributions to thermodynamics.
Life
Guggenheim was born in Manchester on 11 August 1901, the son of Armand Guggenheim and Marguerite Bertha Simon. His father was Swiss, a naturalised British citizen. Guggenheim married Simone Ganzin in 1934 (she died in 1954), and in 1955 married Ruth Helen Aitkin (née Clarke), a widow. They had no children. He died in Reading, Berkshire, on 9 August 1970.
Education
Guggenheim was educated at Terra Nova School, Southport, Charterhouse School and Gonville and Caius College, Cambridge, where he obtained firsts in both the mathematics part 1 and chemistry part 2 triposes. Unable to gain a fellowship at the college, he went to Denmark where he studied under J. N. Brønsted at the University of Copenhagen.
Career
Returning to England, he found a place at University College, London where he wrote his first book, Modern Thermodynamics by the Methods of Willard Gibbs (1933), which "established his reputation and revolutionized the teaching of the subject". He was also a visiting professor of chemistry at Stanford University, and later became a reader in the chemical engineering department at Imperial College London. During World War II he worked on defence matters for the navy. In 1946 he was appointed professor of chemistry and head of department at Reading University, where he stayed until his retirement in 1966.
Publications
Guggenheim produced eleven books and more than 100 papers. His first book, Modern Thermodynamics by the Methods of Willard Gibbs (1933), was a 206-page, detailed study, with text, figures, index, and a preface by F. G. Donnan, showing how the analytical thermodynamic methods developed by Willard Gibbs lead in a straightforward manner to unambiguous and exact relations concerning phases, constants, solutions, systems, and laws. This book, together with Gilbert N. Lewis and Merle Randall's 1923 textbook Thermodynamics and the |
https://en.wikipedia.org/wiki/Dublin%20University%20Zoological%20Association | The Dublin University Zoological Association was founded in 1853 to promote zoological studies in Ireland. Dublin University is now Trinity College Dublin.
It commenced proceedings in the Natural History Review in 1854.
Notable members
Robert Ball
Edward Perceval Wright
George Henry Kinahan
Robert Warren
William Archer
Samuel Haughton
George James Allman
Alexander Henry Haliday |
https://en.wikipedia.org/wiki/History%20of%20CP/CMS | This article covers the History of CP/CMS — the historical context in which the IBM time-sharing virtual machine operating system was built.
CP/CMS development occurred in a complex political and technical milieu.
Historical notes, below, provide supporting quotes and citations from first-hand observers.
Early 60s: CTSS, early time-sharing, and Project MAC
The seminal first-generation time-sharing system was CTSS, first demonstrated at MIT in 1961 and in production use from 1964 to 1974. It paved the way for Multics, CP/CMS, and all other time-sharing environments. Time-sharing concepts were first articulated in the late 50s, particularly as a way to meet the needs of scientific computing. At the time, computers were primarily used for batch processing — where jobs were submitted on punch cards, and run in sequence. Time-sharing let users interact directly with a computer, so that calculation and simulation results could be seen immediately.
Scientific users quickly embraced the concept of time-sharing, and pressured computer vendors such as IBM for improved time-sharing capabilities. MIT researchers spearheaded this effort, launching Project MAC, which was intended to develop the next generation of time-sharing technology and which would ultimately build Multics, an extremely feature-rich time-sharing system that would later inspire the initial development of UNIX. This high-profile team of leading computer scientists formulated very specific technical recommendations and requirements, seeking an appropriate hardware platform for their new system. The technical problems were formidable. Most early time-sharing systems sidestepped these problems by giving users new or modified languages, such as Dartmouth BASIC, which were accessed through interpreters or restricted execution contexts. But the Project MAC vision was for shared, unrestricted access to general-purpose computing.
Along with other vendors, IBM submitted a proposal to Project MAC. However, IBM's proposal |
https://en.wikipedia.org/wiki/Pithecometra%20principle | The Pithecometra principle or Pithecometra thesis () describes the evolution of humans; the pithecometra law is analogous to the concept that "man evolved within apes" or "man descended from apes" as advocated by Thomas Henry Huxley.
In evolution, Huxley first developed the concept of the "Pithecometra principle", which was discussed by Charles Darwin and Ernst Haeckel, when Huxley wrote the 1863 book Evidence as to Man's Place in Nature, stating that humanity was more closely related to apes than the apes were to monkeys.
Huxley added that to hunt evidence of this close ancestry between apes and humans, the regions where modern apes are found should be the focal point, hence, Africa.
The pithecometra principle has been most notable in evolution theory by placing humanity as an offshoot of animal species, rather than a separate divine creation, and thus pithecometra has generated intense religious controversy for decades.
Impact
Another of Darwin's colleagues was Ernst Heinrich Haeckel (1834–1919). Haeckel agreed with Huxley on several aspects of the pithecometra thesis. However, Haeckel frequently lectured on the Asian origin of the "missing link" between apes and humans. Consequently, Eugene Dubois, a student of Haeckel's steeped in the idea of Asian hominid origins, traveled to Java, Indonesia in 1890–1892. It was during this expedition that Dubois made the remarkable discovery of Homo erectus fossils in Asia. Also known as Java Man, that specimen was validation of humanity's deep ancestry outside of Europe.
The pithecometra thesis with the work of Darwin, Huxley and Haeckel helped liberate the European scientific community of its Eurocentric biases. However, their work did not directly produce a change. It required the later revolution in evolutionary thought, of the Neo-Darwinian Synthesis of the mid-20th century, to cause a change in the recovery of fossils from regions outside Europe. Evidence of refusals to accept the fossils that began to be found in Asia |
https://en.wikipedia.org/wiki/Multi-party%20fair%20exchange%20protocol | In cryptography, a multi-party fair exchange protocol is a protocol in which parties agree to deliver an item if and only if they receive an item in return.
Definition
Matthew K. Franklin and Gene Tsudik suggested in 1998 the following classification:
An n-party single-unit general exchange is a permutation σ on {1, …, n}, where each party Pi offers a single unit of some commodity to party Pσ(i), and receives a single unit of some commodity from party Pσ−1(i).
An n-party multi-unit general exchange is an n × n matrix of baskets, where the entry in row i and column j is the basket of goods given by party Pi to party Pj.
See also
Secure multi-party computation |
https://en.wikipedia.org/wiki/Schizophyllum%20commune | Schizophyllum commune is a species of fungus in the genus Schizophyllum. The mushroom resembles undulating waves of tightly packed corals or a loose Chinese fan. "Gillies" or "split gills" vary from creamy yellow to pale white in colour. The cap is small, with a dense yet spongy body texture. It is known as the split-gill mushroom because of the unique longitudinally divided nature of the "gills" on the underside of the cap. This mushroom is found throughout the world.
It is found in the wild on decaying trees after rainy seasons followed by dry spells where the mushrooms are naturally collected.
Description
Schizophyllum commune is usually described as a morphological species of global distribution, but some research has suggested that it may be a species complex encompassing several cryptic species of more narrow distribution, as typical of many mushroom-forming Basidiomycota.
The caps are covered with white or grayish hairs. They grow in shelf-like arrangements, without stalks. The gills, which produce basidiospores on their surface, split when the mushroom dries out, earning this mushroom the common name split gill. It is common on rotting wood. The mushrooms can remain dry for decades and then be revived with moisture.
It has a tetrapolar mating system, with each cell containing two genetic loci (called A and B) that govern different aspects of the mating process, leading to 4 possible phenotypes after cell fusion. Each locus codes for a mating type (a or b) and each type is multi-allelic: the A locus has 9 alleles for its a type and an estimated 32 for its b type, and the B locus has 9 alleles each for both its a and b types. When combined, this gives an estimated 23,328 potential mating type specificities. While all mating types can initially fuse with any other mating type, a fertile fruitbody and subsequent spores will result only if both the A and B loci of the merging cells are compatible. If neither the A nor B are compatible the result is normal monokary |
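The 23,328 figure follows directly from multiplying the allele counts above: the A locus admits 9 × 32 = 288 specificities and the B locus 9 × 9 = 81, and 288 × 81 = 23,328.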
https://en.wikipedia.org/wiki/Switching%20loop | A switching loop or bridge loop occurs in computer networks when there is more than one layer 2 path between two endpoints (e.g. multiple connections between two network switches or two ports on the same switch connected to each other). The loop creates broadcast storms as broadcasts and multicasts are forwarded by switches out every port, the switch or switches will repeatedly rebroadcast the broadcast messages flooding the network. Since the layer-2 header does not include a time to live (TTL) field, if a frame is sent into a looped topology, it can loop forever.
A physical topology that contains switching or bridge loops is attractive for redundancy reasons, yet a switched network must not have loops. The solution is to allow physical loops, but create a loop-free logical topology using link aggregation, shortest path bridging, spanning tree protocol or TRILL on the network switches.
Broadcasts
In the case of broadcast packets over a switching loop, the situation may develop into a broadcast storm.
In a very simple example, a switch with three ports A, B, and C has a normal node connected to port A while ports B and C are connected to each other in a loop. All ports have the same link speed and run in full duplex mode. Now, when a broadcast frame enters the switch through port A, this frame is forwarded to all ports but the source port, i.e. ports B and C. Both frames exiting ports B and C traverse the loop in opposite directions and reenter the switch through their counterpart port. The frame received on port B is then forwarded to ports A and C, the frame received on port C to ports A and B. So, the node on port A receives two copies of its own broadcast frame while the other two copies produced by the loop continue to cycle. Likewise, each broadcast frame entering the system continues to cycle through the loop in both directions, rebroadcasting back to the network in each loop, and broadcasts accumulate. Eventually, the accumulated broadcasts exhaust the e |
https://en.wikipedia.org/wiki/Turret%20board | In electronics, turret boards were an early attempt at making circuits that were relatively rugged, producible, and serviceable in the days before printed circuit boards (PCBs). As this method was somewhat more expensive than conventional "point-to-point" wiring techniques, it was generally found in the more expensive components, such as professional, commercial, and military audio and test equipment. This is similar to cordwood construction.
Turret boards consist of a thin (generally 1/8 inch) piece of insulating material drilled in pattern to match the electronic layout of a set of components. Each hole drilled will have a metal post (the turret) positioned in it. Electronic components are suspended between these turrets and soldered to them to create a complete circuit layout.
Most of the military electronics used in WWII made use of this construction method, and Altec professional gear of similar vintage has the same construction. However, the underside of some turret boards, such as a consumer Zenith 1A10 console radio, circa 1940, consists of an array of electronics components that are simply suspended, rather than tethered or soldered down, and thus could move unexpectedly. Such construction methods tended to keep the neighborhood radio repairman in work. In general, however, the use of turrets and turret boards dramatically improved reliability and serviceability.
Turret boards additionally allowed some degree of "engineered" construction. That is, an engineer could design a turret board with listed component interconnects such that it could be assembled by someone skilled in component recognition and soldering. A schematic was unnecessary for assembly.
Until reliable high-temperature printed circuit boards were developed, turret board construction was considered the best available technology. Currently, the use of turret boards is limited to hand-wired vacuum tube electronics (be it commercial or hobby), often as an attempt to replicate a classic design |
https://en.wikipedia.org/wiki/Beam%20propagation%20method | The beam propagation method (BPM) is an approximation technique for simulating the propagation of light in slowly varying optical waveguides. It is essentially the same as the so-called parabolic equation (PE) method in underwater acoustics. Both BPM and the PE were first introduced in the 1970s. When a wave propagates along a waveguide for a large distance (large compared with the wavelength), rigorous numerical simulation is difficult. The BPM relies on approximate differential equations which are also called the one-way models. These one-way models involve only a first-order derivative in the variable z (for the waveguide axis) and they can be solved as an "initial" value problem. The "initial" value problem does not involve time; rather, it is posed in the spatial variable z.
The original BPM and PE were derived from the slowly varying envelope approximation and they are the so-called paraxial one-way models. Since then, a number of improved one-way models have been introduced. They come from a one-way model involving a square root operator. They are obtained by applying rational approximations to the square root operator. After a one-way model is obtained, one still has to solve it by discretizing the variable z. However, it is possible to merge the two steps (rational approximation to the square root operator and discretization of z) into one step. Namely, one can find rational approximations to the so-called one-way propagator (the exponential of the square root operator) directly. The rational approximations are not trivial. Standard diagonal Padé approximants have trouble with the so-called evanescent modes. These evanescent modes should decay rapidly in z, but the diagonal Padé approximants will incorrectly propagate them as propagating modes along the waveguide. Modified rational approximants that can suppress the evanescent modes are now available. The accuracy of the BPM can be further improved by using the energy-conserving one-way model or the single-scatter on |
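As a concrete illustration of solving a paraxial one-way model, here is a minimal split-step Fourier BPM sketch for free diffraction of a Gaussian beam in a uniform medium; all parameter values (wavelength, index, beam waist, grid and step sizes) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Paraxial one-way model du/dz = (i / (2 k0 n0)) d^2u/dx^2, advanced in z
# by multiplying the spatial spectrum by exp(-i kx^2 dz / (2 k0 n0)).
wavelength = 1.55e-6                       # metres (assumed)
n0 = 1.0                                   # uniform background index
k0 = 2 * np.pi / wavelength

N, width = 1024, 800e-6                    # transverse grid
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

w0 = 20e-6
u = np.exp(-(x / w0) ** 2)                 # initial envelope u(x, z=0)

dz, steps = 2e-6, 1000                     # propagate 2 mm in 2 um steps
H = np.exp(-1j * kx ** 2 * dz / (2 * k0 * n0))
for _ in range(steps):
    u = np.fft.ifft(H * np.fft.fft(u))     # one "initial value" step in z

intensity = np.abs(u) ** 2
rms = np.sqrt(np.sum(intensity * x ** 2) / np.sum(intensity))
print(f"RMS beam half-width after 2 mm: {rms * 1e6:.1f} um")   # beam diffracts
```

A guided-wave simulation would insert a phase screen for the index profile between diffraction steps, but the marching structure in z is the same.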
https://en.wikipedia.org/wiki/Elements%20of%20Algebra | Elements of Algebra is an elementary mathematics textbook written by mathematician Leonhard Euler around 1765 in German. It was first published in Russian as "Universal Arithmetic" (Универсальная арифметика), two volumes appearing in 1768–69, and the original German text was printed in 1770. Elements of Algebra is one of the earliest books to set out algebra in the modern form we would recognize today (another early book being Elements of Algebra by Nicholas Saunderson, published in 1740), and is one of Euler's few writings, along with Letters to a German Princess, that are accessible to the general public. Written in numbered paragraphs, as was common practice until the 19th century, Elements begins with the definition of mathematics and builds on the fundamental operations of arithmetic and number systems, and gradually moves towards more abstract topics.
In 1771, Joseph-Louis Lagrange published an addendum titled Additions to Euler's Elements of Algebra, which featured a number of important mathematical results.
The original German title of the book was Vollständige Anleitung zur Algebra, which literally translates to Complete Instruction to Algebra. Two English translations are now extant, one by John Hewlett (1822), and the other, translated into English from a French translation of the book, by Charles Tayler (1824). On the 300th anniversary of Euler's birth in 2007, mathematician Christopher Sangwin, working with Tarquin Publications, published a digitized copy based on Hewlett's translation of the first four sections (or Part I) of the book.
In 2015, Scott Hecht published both print and Kindle versions of Elements of Algebra with Euler's Part I (Containing the Analysis of Determinate Quantities), Part II (Containing the Analysis of Indeterminate Quantities), Lagrange's Additions, and footnotes by Johann Bernoulli and others.
See also
Introductio in analysin infinitorum (1748)
Institutiones calculi differentialis (1755) |
https://en.wikipedia.org/wiki/History%20of%20logarithms | The history of logarithms is the story of a correspondence (in modern terms, a group isomorphism) between multiplication on the positive real numbers and addition on the real number line that was formalized in seventeenth century Europe and was widely used to simplify calculation until the advent of the digital computer. The Napierian logarithms were published first in 1614. E. W. Hobson called it "one of the very greatest scientific discoveries that the world has seen." Henry Briggs introduced common (base 10) logarithms, which were easier to use. Tables of logarithms were published in many forms over four centuries. The idea of logarithms was also used to construct the slide rule, which became ubiquitous in science and engineering until the 1970s. A breakthrough generating the natural logarithm was the result of a search for an expression of area against a rectangular hyperbola, and required the assimilation of a new function into standard mathematics.
Napier's wonderful invention
The method of logarithms was publicly propounded for the first time by John Napier in 1614, in his book entitled Mirifici Logarithmorum Canonis Descriptio (Description of the Wonderful Canon of Logarithms). The book contains fifty-seven pages of explanatory matter and ninety pages of tables of trigonometric functions and their natural logarithms. These tables greatly simplified calculations in spherical trigonometry, which are central to astronomy and celestial navigation and which typically include products of sines, cosines and other functions. Napier described other uses, such as solving ratio problems, as well.
John Napier wrote a separate volume describing how he constructed his tables, but held off publication to see how his first book would be received. Napier died in 1617. His son, Robert, published his father's book, Mirifici Logarithmorum Canonis Constructio (Construction of the Wonderful Canon of Logarithms), with additions by Henry Briggs, in 1619 in Latin and then in 1620 i |
https://en.wikipedia.org/wiki/Schur%20test | In mathematical analysis, the Schur test, named after German mathematician Issai Schur, is a bound on the operator norm of an integral operator in terms of its Schwartz kernel (see Schwartz kernel theorem).
Here is one version. Let $(X, \mu)$ and $(Y, \nu)$ be two measurable spaces (such as $\mathbb{R}^n$). Let $T$ be an integral operator with the non-negative Schwartz kernel $K(x, y)$, $x \in X$, $y \in Y$:
$$Tf(x) = \int_Y K(x, y)\, f(y)\, d\nu(y).$$
If there exist real functions $p(x) > 0$ and $q(y) > 0$ and numbers $\alpha, \beta > 0$ such that
$$(1) \qquad \int_Y K(x, y)\, q(y)\, d\nu(y) \le \alpha\, p(x)$$
for almost all $x$ and
$$(2) \qquad \int_X p(x)\, K(x, y)\, d\mu(x) \le \beta\, q(y)$$
for almost all $y$, then $T$ extends to a continuous operator $T: L^2(\nu) \to L^2(\mu)$ with the operator norm
$$\|T\|_{L^2(\nu) \to L^2(\mu)} \le \sqrt{\alpha \beta}.$$
Such functions $p$, $q$ are called the Schur test functions.
In the original version, $T$ is a matrix and $\alpha = \beta = 1$.
Common usage and Young's inequality
A common usage of the Schur test is to take $p(x) = q(y) = 1$. Then we get:
$$\|T\|^2_{L^2(\nu) \to L^2(\mu)} \le \sup_{x \in X} \int_Y |K(x, y)|\, d\nu(y) \;\cdot\; \sup_{y \in Y} \int_X |K(x, y)|\, d\mu(x).$$
This inequality is valid no matter whether the Schwartz kernel is non-negative or not.
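The $p = q = 1$ case is easy to check numerically on a discretized kernel. In the sketch below an arbitrary random non-negative matrix stands in for $K$, and its spectral norm is compared with the Schur bound $\sqrt{\alpha\beta}$:

```python
import numpy as np

# Discrete Schur test with p = q = 1: for a non-negative matrix K,
# ||K||_2 <= sqrt(max row sum * max column sum).
rng = np.random.default_rng(0)
K = rng.random((200, 300))               # non-negative kernel K(x, y)

alpha = K.sum(axis=1).max()              # sup_x of the integral in y, cf. (1)
beta = K.sum(axis=0).max()               # sup_y of the integral in x, cf. (2)
schur_bound = np.sqrt(alpha * beta)
operator_norm = np.linalg.norm(K, 2)     # largest singular value

print(f"||K|| = {operator_norm:.2f} <= Schur bound = {schur_bound:.2f}")
assert operator_norm <= schur_bound + 1e-9
```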
A similar statement about operator norms is known as Young's inequality for integral operators:
if $|K(x, y)| \le h(x - y)$,
where $h \in L^s(\mathbb{R}^n)$ satisfies $1 + \frac{1}{q} = \frac{1}{s} + \frac{1}{p}$, for some $1 \le p, q, s \le \infty$, then the operator $T$ extends to a continuous operator $T: L^p(\mathbb{R}^n) \to L^q(\mathbb{R}^n)$, with
$$\|T\|_{L^p \to L^q} \le \|h\|_{L^s}.$$
Proof
Using the Cauchy–Schwarz inequality and inequality (1), we get:
$$|Tf(x)|^2 = \left| \int_Y K(x, y)\, f(y)\, d\nu(y) \right|^2 \le \left( \int_Y K(x, y)\, q(y)\, d\nu(y) \right) \left( \int_Y \frac{K(x, y)\, f(y)^2}{q(y)}\, d\nu(y) \right) \le \alpha\, p(x) \int_Y \frac{K(x, y)\, f(y)^2}{q(y)}\, d\nu(y).$$
Integrating the above relation in $x$, using Fubini's Theorem, and applying inequality (2), we get:
$$\|Tf\|^2_{L^2(\mu)} \le \alpha \int_Y \left( \int_X p(x)\, K(x, y)\, d\mu(x) \right) \frac{f(y)^2}{q(y)}\, d\nu(y) \le \alpha \beta \int_Y f(y)^2\, d\nu(y) = \alpha \beta\, \|f\|^2_{L^2(\nu)}.$$
It follows that $\|Tf\|_{L^2(\mu)} \le \sqrt{\alpha \beta}\, \|f\|_{L^2(\nu)}$ for any $f \in L^2(\nu)$.
See also
Hardy–Littlewood–Sobolev inequality |
https://en.wikipedia.org/wiki/Wick%27s%20theorem | Wick's theorem is a method of reducing high-order derivatives to a combinatorics problem. It is named after Italian physicist Gian-Carlo Wick. It is used extensively in quantum field theory to reduce arbitrary products of creation and annihilation operators to sums of products of pairs of these operators. This allows for the use of Green's function methods, and consequently the use of Feynman diagrams in the field under study. A more general idea in probability theory is Isserlis' theorem.
In perturbative quantum field theory, Wick's theorem is used to quickly rewrite each time ordered summand in the Dyson series as a sum of normal ordered terms. In the limit of asymptotically free ingoing and outgoing states, these terms correspond to Feynman diagrams.
Definition of contraction
For two operators $\hat{A}$ and $\hat{B}$ we define their contraction to be
$$\hat{A}^\bullet\, \hat{B}^\bullet \equiv \hat{A}\,\hat{B} - \mathopen{:}\hat{A}\,\hat{B}\mathclose{:}$$
where $\mathopen{:}\hat{O}\mathclose{:}$ denotes the normal order of an operator $\hat{O}$. Alternatively, contractions can be denoted by a line joining $\hat{A}$ and $\hat{B}$.
We shall look in detail at four special cases where $\hat{A}$ and $\hat{B}$ are equal to creation and annihilation operators. For particles we'll denote the creation operators by $\hat{a}_i^\dagger$ and the annihilation operators by $\hat{a}_i$.
They satisfy the commutation relations for bosonic operators, $[\hat{a}_i, \hat{a}_j^\dagger] = \delta_{ij}$, or the anti-commutation relations for fermionic operators, $\{\hat{a}_i, \hat{a}_j^\dagger\} = \delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta.
We then have
$$\hat{a}_i^\bullet\, \hat{a}_j^\bullet = \hat{a}_i^{\dagger\bullet}\, \hat{a}_j^{\dagger\bullet} = \hat{a}_i^{\dagger\bullet}\, \hat{a}_j^\bullet = 0, \qquad \hat{a}_i^\bullet\, \hat{a}_j^{\dagger\bullet} = \delta_{ij},$$
where $\delta_{ij}$ is the Kronecker delta.
These relationships hold true for bosonic operators or fermionic operators because of the way normal ordering is defined.
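These contraction values can be checked numerically for a single bosonic mode by representing the operators as matrices on a truncated Fock space; the truncation dimension below is an arbitrary choice.

```python
import numpy as np

# Truncated bosonic mode: verify that a a† differs from its normal-ordered
# form :a a†: = a† a by exactly the contraction value 1 (i.e. [a, a†] = 1),
# away from the truncation edge.
dim = 30
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # a|n> = sqrt(n)|n-1>
adag = a.conj().T

product = a @ adag              # a a†  (not normal ordered)
normal_ordered = adag @ a       # :a a†: = a† a
contraction = product - normal_ordered

print(np.allclose(contraction[:-1, :-1], np.eye(dim - 1)))   # True
print(contraction[0, 0])        # vacuum expectation value of a a†: 1.0
```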
Examples
We can use contractions and normal ordering to express any product of creation and annihilation operators as a sum of normal ordered terms. This is the basis of Wick's theorem. Before stating the theorem fully we shall look at some examples.
Suppose $\hat{a}_i$ and $\hat{a}_j^\dagger$ are bosonic operators satisfying the commutation relations:
$$[\hat{a}_i, \hat{a}_j] = 0, \qquad [\hat{a}_i^\dagger, \hat{a}_j^\dagger] = 0, \qquad [\hat{a}_i, \hat{a}_j^\dagger] = \delta_{ij},$$
where $[\hat{A}, \hat{B}] \equiv \hat{A}\hat{B} - \hat{B}\hat{A}$ denotes the commutator, and $\delta_{ij}$ is the Kronecker delta.
We can use these relations, and the above definition of contraction, to express products of and in other ways.
Example 1 |
https://en.wikipedia.org/wiki/Bob%20Grubba | Robert "Bob" Grubba was the director of engineering for Lionel Trains in the late 1990s.
He started Broadway Limited Imports in 2001 with partners Tony Wenzel of Oriental Limited and Bob Zimet of QS Industries, maker of QSI sound systems. Broadway Limited Imports was the first company to offer sound and remote control equipped HO trains, and won the Model Railroader Magazine 2002 and 2003 Product of the Year and HO Model of the Year awards. Mr. Zimet and Mr. Wenzel sold their shares to Mr. Grubba in 2004 and the company was moved to Ormond Beach, Florida.
In 2004, Grubba founded Precision Craft Models, Inc. which was the first company to install sound systems and remote control in the smaller N scale model trains.
Robert Grubba holds the following US patents:
External links
Broadway Limited Imports, LLC
"New York Times Technology Article" Steam Meets Silicon In New Toy Trains
"Amid recession woes, model train company expanding" at newsjournalonline.com
"Interview with Ken Silvestri and Bob Grubba" in Model Railroad News Magazine
"Classic Toy Trains" Lionel FasTrack article
Rail transport modellers
Living people
Year of birth missing (living people) |
https://en.wikipedia.org/wiki/Vis%20medicatrix%20naturae | Vis medicatrix naturae (literally "the healing power of nature", and also known as natura medica) is the Latin rendering of the Greek Νόσων φύσεις ἰητροί ("Nature is the physician(s) of diseases"), a phrase attributed to Hippocrates. While the phrase is not actually attested in his corpus, it nevertheless sums up one of the guiding principles of Hippocratic medicine, which is that organisms left alone can often heal themselves (cf. the Hippocratic primum non nocere).
Hippocrates
Hippocrates believed that an organism is not passive to injuries or disease, but rebalances itself to counteract them. The state of illness, therefore, is not a malady but an effort of the body to overcome a disturbed equilibrium. It is this capacity of organisms to correct imbalances that distinguishes them from non-living matter.
From this follows the medical approach that “nature is the best physician” or “nature is the healer of disease”. To this end, Hippocrates considered a doctor's chief aim to be helping this natural tendency of the body by observing its action, removing obstacles to its action, and thus allowing an organism to recover its own health. This underlies such Hippocratic practices as bloodletting, in which a perceived excess of a humor is removed, thus helping to rebalance the body's humors.
Renaissance and modern history
After Hippocrates, the idea of vis medicatrix naturae continued to play a key role in medicine. In the early Renaissance, the physician and early scientist Paracelsus had the idea of “inherent balsam”. Thomas Sydenham, in the 17th century, considered fever a healing force of nature.
In the nineteenth century, vis medicatrix naturae came to be interpreted as vitalism, and in this form it came to underlie the philosophical framework of homeopathy, chiropractic, hydropathy, osteopathy and naturopathy.
Relation to homeostasis
Walter Cannon's notion of homeostasis also has its origins in vis medicatrix naturae. "All that I have done thu |
https://en.wikipedia.org/wiki/Anti-hijack%20system | An anti-hijack system is an electronic system fitted to motor vehicles to deter criminals from hijacking them. Although these types of systems are becoming more common on newer cars, they have not caused a decrease in insurance premiums as they are not as widely known as other more common anti-theft systems such as alarms or steering locks. It can also be a part of an alarm or immobiliser system. An approved anti-hijacking system will achieve a safe, quick shutdown of the vehicle it is attached to. There are also mechanical anti-hijack devices.
Diversify Solutions, a company in South Africa, has announced its research and development, at Nelson Mandela University, of a GSM-based anti-hijacking system.
The system works off a verification process, with added features such as alcohol sensors and signal-jamming capabilities; this comes after increasing rates of hijackings in South Africa and alarming rates of accidents caused by driving under the influence and texting while driving.
Technology
There are three basic principles on which the systems work.
Lockout
A lockout system is armed when the driver turns the ignition key to the on position and carries out a specified action, usually flicking a hidden switch or depressing the brake pedal twice. It is activated when the vehicle drops below a certain speed or becomes stationary, and will cause all of the vehicle's doors to automatically lock, to prevent against thieves stealing the vehicle when it is stopped, for example at a traffic light or pedestrian crossing.
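As an illustrative sketch of the lockout logic just described (invented pseudologic, not any vendor's implementation; the speed threshold is an assumption):

```python
# Lockout sketch: arm on ignition plus hidden action, lock doors when slow.
ARM_SPEED_THRESHOLD_KMH = 5              # assumed trigger speed

class LockoutSystem:
    def __init__(self):
        self.armed = False
        self.doors_locked = False

    def arm(self, ignition_on, hidden_switch_pressed):
        # The driver turns the key to on and performs the specified action.
        self.armed = ignition_on and hidden_switch_pressed

    def on_speed_update(self, speed_kmh):
        # Once armed, dropping below the threshold locks all doors.
        if self.armed and speed_kmh < ARM_SPEED_THRESHOLD_KMH:
            self.doors_locked = True

system = LockoutSystem()
system.arm(ignition_on=True, hidden_switch_pressed=True)
for speed in (40, 25, 3):                # slowing to a stop at a light
    system.on_speed_update(speed)
print("doors locked:", system.doors_locked)   # True
```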
Transponder
A transponder system is a system which is always armed until a device, usually a small RFID transponder, enters the vehicle's transmitter radius. Since the device is carried by the driver, usually in their wallet or pocket, if the driver leaves the immediate vicinity of the vehicle, so will the transponder, causing the system to assume the vehicle has been hijacked and disable it.
As the transponder itself is concealed, the thief woul |
https://en.wikipedia.org/wiki/Ovarian%20fossa | The ovarian fossa is a shallow depression on the lateral wall of the pelvis, in which the ovary lies.
This ovarian fossa has the following boundaries:
anteriorly : by the external iliac artery and vein
inferiorly : by the broad ligament of the uterus
posteriorly: by the ureter, internal iliac artery and vein
laterally (on the floor of fossa): by the obturator nerve, artery and vein |
https://en.wikipedia.org/wiki/Medulla%20of%20ovary | The medulla of ovary (or Zona vasculosa of Waldeyer) is a highly vascular stroma in the center of the ovary. It forms from embryonic mesenchyme and contains blood vessels, lymphatic vessels, and nerves.
This stroma forms the tissue of the hilum by which the ovarian ligament is attached, and through which the blood vessels enter: it does not contain any ovarian follicles. |
https://en.wikipedia.org/wiki/Germinal%20epithelium%20%28female%29 | The ovarian surface epithelium, also called the germinal epithelium of Waldeyer, or coelomic epithelium is a layer of simple squamous-to-cuboidal epithelial cells covering the ovary.
The term germinal epithelium is a misnomer as it does not give rise to primary follicles.
Composition
These cells are derived from the mesoderm during embryonic development and are closely related to the mesothelium of the peritoneum. The germinal epithelium gives the ovary a dull gray color as compared with the shining smoothness of the peritoneum; and the transition between the mesothelium of the peritoneum and the cuboidal cells which cover the ovary is usually marked by a line around the anterior border of the ovary.
Diseases
Ovarian surface epithelium can give rise to surface epithelial-stromal tumor. |
https://en.wikipedia.org/wiki/Germinal%20epithelium%20%28male%29 | The germinal epithelium is the epithelial layer of the seminiferous tubules of the testicles. It is also known as the wall of the seminiferous tubules. The cells in the epithelium are connected via tight junctions.
There are two types of cells in the germinal epithelium. The large Sertoli cells (which are not dividing) function as supportive cells to the developing sperm. The second cell type are the cells belonging to the spermatogenic cell lineage. These develop to eventually become sperm cells (spermatozoon). Typically, the spermatogenic cells will make four to eight layers in the germinal epithelium. |
https://en.wikipedia.org/wiki/Anomaly%20detection | In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behaviour. Such examples may arouse suspicions of being generated by a different mechanism, or appear inconsistent with the remainder of that set of data.
Anomaly detection finds application in many domains including cyber security, medicine, machine vision, statistics, neuroscience, law enforcement and financial fraud, to name only a few. Anomalies were initially sought out so that they could be rejected or omitted from the data to aid statistical analysis, for example to compute the mean or standard deviation. They were also removed to improve predictions from models such as linear regression, and more recently their removal aids the performance of machine learning algorithms. However, in many applications anomalies themselves are of interest and are the most desirable observations in the entire data set, which need to be identified and separated from noise or irrelevant outliers.
Three broad categories of anomaly detection techniques exist. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involves training a classifier. However, this approach is rarely used in anomaly detection due to the general unavailability of labelled data and the inherent unbalanced nature of the classes. Semi-supervised anomaly detection techniques assume that some portion of the data is labelled. This may be any combination of the normal or anomalous data, but more often than not the techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance to be generated by the model. Unsupervised anomaly detection techniques assume the data is unlabelled and are b |
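As a minimal sketch of the unsupervised setting, the following flags points far from the bulk of the data using a robust z-score; the synthetic mixture and the 3.5 cut-off are illustrative assumptions, not part of the text above.

```python
import numpy as np

# Unsupervised detector: score each point by its distance from the median
# in units of the median absolute deviation (MAD), and flag the far tail.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 995),   # "normal" behaviour
                       rng.normal(8.0, 0.5, 5)])    # a few anomalies

median = np.median(data)
mad = np.median(np.abs(data - median))              # robust scale estimate
scores = 0.6745 * np.abs(data - median) / mad       # ~z-scores under normality

anomalies = np.where(scores > 3.5)[0]               # common rule-of-thumb cut-off
print(f"flagged {anomalies.size} of {data.size} points as anomalous")
```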
https://en.wikipedia.org/wiki/Membrana%20granulosa | The larger ovarian follicles consist of an external fibrovascular coat, connected with the surrounding stroma of the ovary by a network of blood vessels, and an internal coat, which consists of several layers of nucleated cells, called the membrana granulosa. It contains numerous granulosa cells.
At one part of the mature follicle the cells of the membrana granulosa are collected into a mass which projects into the cavity of the follicle. This is termed the discus proligerus. |
https://en.wikipedia.org/wiki/Ductuli%20transversi | The epoöphoron lies in the mesosalpinx between the ovary and the uterine tube, and consists of a few short tubules, the ductuli transversi which converge toward the ovary while their opposite ends open into a rudimentary duct, the ductus longitudinalis epoöphori (duct of Gartner).
The ductuli transversi of the epoophoron is a remnant of the tubules of the Wolffian body or mesonephros. |
https://en.wikipedia.org/wiki/Frink%20Medal | The Frink Medal for British Zoologists is awarded by the Zoological Society of London "For significant and original contributions by a professional zoologist to the development of zoology." It consists of a bronze plaque (76 by 83 millimetres), depicting a bison and carved by British sculptor Elisabeth Frink. The Frink Medal was instituted in 1973 and first presented in 1974.
Recipients
Source ZSL
See also
List of biology awards |
https://en.wikipedia.org/wiki/Digital%20Postmarks | A Digital Postmark (DPM) is a technology that applies a trusted time stamp issued by a postal operator to an electronic document, validates electronic signatures, and stores and archives all non-repudiation data needed to support a potential court challenge. It guarantees the certainty of date and time of the postmarking. This global standard was renamed the Electronic Postal Certification Mark (EPCM) in 2007 shortly after a new iteration of the technology was developed by Microsoft and Poste Italiane. The key addition to the traditional postmarking technology was integrity of the electronically postmarked item, meaning any kind of falsification and tampering will be easily and definitely detected. Additionally, content confidentiality is guaranteed since document certification is carried out without access or reading by the postal operator. The EPCM will eventually be available through the UPU to all international postal operators in the 191 member countries willing to be compliant with this standard, thus granting interoperability in certified communications between postal operators. In the United States, the US Postal Service operates a non-global standard called the Electronic Postmark, although it is soon expected to provide services utilizing the EPCM.
Providers
In the United States, until the end of 2010, Authentidate was the only authorized USPS EPM provider. However, this contract was allowed to expire.
The process
An electronic document is created
Digital Postmarking client software signs the document locally
The signed document is sent to the Digital Postmarking (DPM) service for postmarking
Upon receipt, the DPM service first validates the authenticity of the signature
If the signature is valid then a timestamp is generated by the DPM service as a counter-signature that includes the date and time
The document, signature, validation results and timestamp are stored in the Digital Postmark non-repudiation database
A Digital Postmark Receipt, including th |
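A toy sketch of this flow, assuming Ed25519 signatures via the third-party Python cryptography package; the record format and key handling here are invented for illustration and are not any postal operator's actual protocol.

```python
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()
service_key = Ed25519PrivateKey.generate()
non_repudiation_db = []                      # stands in for the archive

document = b"An electronic document."
sender_sig = sender_key.sign(document)       # client signs the document locally

# The service validates the signature, then counter-signs the document
# hash, the sender's signature, and the date and time.
sender_key.public_key().verify(sender_sig, document)   # raises if invalid
record = json.dumps({
    "doc_sha256": hashlib.sha256(document).hexdigest(),
    "sender_sig": sender_sig.hex(),
    "timestamp": time.time(),
}).encode()
postmark = service_key.sign(record)

non_repudiation_db.append((record, postmark))          # archived for disputes
service_key.public_key().verify(postmark, record)      # the receipt checks out
print("postmarked at", json.loads(record)["timestamp"])
```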
https://en.wikipedia.org/wiki/Progress%20in%20Physics | Progress in Physics is an open-access academic journal, publishing papers in theoretical and experimental physics, including related themes from mathematics. The journal was founded by Dmitri Rabounski, Florentin Smarandache, and Larissa Borissova in 2005, and is published quarterly. Rabounski is the editor-in-chief, while Smarandache and Borissova act as associate editors. It was included on Beall's List of potentially-predatory journals at the time that list was last updated. Since 2008, the Norwegian Scientific Index has rated it a "Level 0" journal, indicating that publication there does not count for official academic career or public funding purposes.
Aims and reviewing process
The journal aims to promote fair and non-commercialized science, as stated in its Declaration of Academic Freedom:
The journal describes itself as peer-reviewed. The review procedure is specified as follows:
The referees of the papers published are not listed, although anonymity of referees is specifically criticized in "Article 8: Freedom to publish scientific results" of the Declaration of Academic Freedom. This document harshly criticizes the current peer-review system using the words "censorship", "alleged expert referees", "blacklisting", and "bribes". The journal has published papers by several authors, who, along with some of the editors, claim to have been blacklisted by the Cornell University arXiv as proponents of fringe scientific theories.
Indexing and abstracting
The journal is or has been indexed and abstracted in the following bibliographic databases:
It was indexed in the (paywalled) aggregator Open J-Gate and in the website Scientific Commons. |
https://en.wikipedia.org/wiki/Blue%20Norther%20%28weather%29 | A Blue Norther, also known as a Texas Norther, is a fast moving cold front marked by a rapid drop in temperature, strong winds, and dark blue or "black" skies. The cold front originates from the north, hence the "norther", and can send temperatures plummeting by 20 or 30 degrees in merely minutes.
Effects
The Midwestern United States lacks natural geographic barriers to protect itself from the frigid winter air masses that originate in Canada and the Arctic. Multiple times per year conditions will become favorable to push severe cold fronts as far south as Texas, bringing sleet and snow and causing the wind chill to plunge into the teens. Depending on the time of year, high temperatures that immediately precede a Texas Norther can reach 85 °F (29 °C) or even 90 °F (32 °C) under bright sunlight in nearly-calm conditions before the cold front approaches.
However, most Blue Northers do not advance as far south as Mexico, and even the most severe examples typically reach their apex midway through Texas. For example, cities in North Texas, like Dallas, experience drastically more Blue Northers than cities along the Gulf of Mexico, like Houston. As a city is struck by a Blue Norther, its temperatures can be 30 to 50 degrees colder than those of neighboring cities only a few miles away that have not yet been struck. Blue Northers can be dangerous because their volatile temperature swings catch some people unprepared.
Frequency
Blue Northers occur multiple times per year. They are usually recorded between the months of November and March, although they have been recorded less frequently in October and April as well. The Blue Norther phenomenon is especially common in November, when the last vestiges of autumn are still clinging to life. One of the most famous Blue Northers was the Great Blue Norther of November 11, 1911, which spawned multiple tornadoes and dropped temperatures 40 degrees in only 15 minutes and 67 degrees in 10 hours, a world record.
See also
Weath |
https://en.wikipedia.org/wiki/Titchmarsh%20convolution%20theorem | The Titchmarsh convolution theorem describes the properties of the support of the convolution of two functions. It was proven by Edward Charles Titchmarsh in 1926.
Titchmarsh convolution theorem
If $\varphi$ and $\psi$ are integrable functions, such that
$$\int_0^x \varphi(t)\,\psi(x - t)\, dt = 0$$
almost everywhere in the interval $0 < x < \kappa$, then there exist $\lambda \ge 0$ and $\mu \ge 0$ satisfying $\lambda + \mu \ge \kappa$ such that $\varphi(t) = 0$ almost everywhere in $(0, \lambda)$ and $\psi(t) = 0$ almost everywhere in $(0, \mu)$.
As a corollary, if the integral above is 0 for all $x > 0$, then either $\varphi$ or $\psi$ is almost everywhere 0 in the interval $(0, \infty)$. Thus the convolution of two functions on $(0, \infty)$ cannot be identically zero unless at least one of the two functions is identically zero.
As another corollary, if $\int_0^x \varphi(t)\,\psi(x - t)\, dt = 0$ for all $x \in (0, \kappa)$ and one of the functions $\varphi$ or $\psi$ is almost everywhere not null in this interval, then the other function must be null almost everywhere in $(0, \kappa)$.
The theorem can be restated in the following form:
Let $f, g \in L^1(\mathbb{R})$. Then $\inf \operatorname{supp}(f * g) = \inf \operatorname{supp} f + \inf \operatorname{supp} g$ if the left-hand side is finite. Similarly, $\sup \operatorname{supp}(f * g) = \sup \operatorname{supp} f + \sup \operatorname{supp} g$ if the right-hand side is finite.
Above, $\operatorname{supp}$ denotes the support of a function, and $\inf$ and $\sup$ denote the infimum and supremum. This theorem essentially states that the well-known inclusion $\operatorname{supp}(f * g) \subset \operatorname{supp} f + \operatorname{supp} g$ is sharp at the boundary.
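The boundary statement is easy to see in a discrete analogue: for finitely supported sequences, the first and last nonzero indices of a convolution are the sums of those of the factors, because the boundary coefficients are products of nonzero leading coefficients and cannot cancel.

```python
import numpy as np

# supp f starts at index 2 and ends at 4; supp g starts at 1 and ends at 2.
f = np.array([0.0, 0.0, 3.0, -1.0, 2.0])
g = np.array([0.0, 5.0, 4.0])

support = np.nonzero(np.convolve(f, g))[0]
print(support.min(), support.max())          # 3 and 6: infima and suprema add
```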
The higher-dimensional generalization in terms of the convex hull of the supports was proven by Jacques-Louis Lions in 1951:
If $f, g \in \mathcal{E}'(\mathbb{R}^n)$, then
$$\operatorname{c.h.} \operatorname{supp}(f * g) = \operatorname{c.h.} \operatorname{supp} f + \operatorname{c.h.} \operatorname{supp} g.$$
Above, $\operatorname{c.h.}$ denotes the convex hull of a set and $\mathcal{E}'(\mathbb{R}^n)$ denotes the space of distributions with compact support.
The original proof by Titchmarsh uses complex-variable techniques, and is based on the Phragmén–Lindelöf principle, Jensen's inequality, Carleman's theorem, and Valiron's theorem. The theorem has since been proven several more times, typically using either real-variable or complex-variable methods. Gian-Carlo Rota has stated that no proof yet addresses the theorem's underlying combinatorial structure, which he believes is necessary for complete understanding. |
https://en.wikipedia.org/wiki/Uterine%20gland | Uterine glands or endometrial glands are tubular glands, lined by a simple columnar epithelium, found in the functional layer of the endometrium that lines the uterus. Their appearance varies during the menstrual cycle. During the proliferative phase, uterine glands appear long due to estrogen secretion by the ovaries. During the secretory phase, the uterine glands become very coiled with wide lumens and produce a glycogen-rich secretion known as histotroph or uterine milk. This change corresponds with an increase in blood flow to spiral arteries due to increased progesterone secretion from the corpus luteum. During the pre-menstrual phase, progesterone secretion decreases as the corpus luteum degenerates, which results in decreased blood flow to the spiral arteries. The functional layer of the uterus containing the glands becomes necrotic, and eventually sloughs off during the menstrual phase of the cycle.
They are of small size in the unimpregnated uterus, but shortly after impregnation become enlarged and elongated, presenting a contorted or waved appearance.
Function
Hormones produced in early pregnancy stimulate the uterine glands to secrete a number of substances to give nutrition and protection to the embryo and fetus, and the fetal membranes. These secretions are known as histiotroph, alternatively histotroph, and also as uterine milk. Important uterine milk proteins are glycodelin-A, and osteopontin.
Some secretory components from the uterine glands are taken up by the secondary yolk sac lining the exocoelomic cavity during pregnancy, and may thereby assist in providing fetal nutrition.
Additional images |
https://en.wikipedia.org/wiki/Oracle%20OLAP | The Oracle Database OLAP Option implements On-line Analytical Processing (OLAP) within an Oracle database environment. Oracle Corporation markets the Oracle Database OLAP Option as an extra-cost option to supplement the "Enterprise Edition" of its database. (Oracle offers Essbase for customers without the Oracle Database or who require multiple data-sources to load their cubes.)
As of Oracle Database 11g, the Oracle database optimizer can transparently redirect SQL queries to levels within the OLAP Option cubes. The cubes are managed and can take the place of multi-dimensional materialized views, simplifying Oracle data-warehouse management and speeding up query response.
Logical components
The Oracle Database OLAP Option offers:
an OLAP analytic engine
workspaces
an analytic workspace manager (AWM)
a worksheet environment
OLAP DML for DDL and DML
an interface from SQL
an analytic workspace Java API
a Java-based OLAP API
Physical implementation
The Oracle database tablespace CWMLITE stores OLAPSYS schema objects and integrates Oracle Database OLAP Option with the Oracle Warehouse Builder (OWB).
The CWMLITE name reflects the use of CWM — the Common Warehouse Metamodel, which Oracle Corporation refers to as "Common Warehouse Metadata".
See also
Business intelligence
Comparison of OLAP servers
Essbase
External links
Oracle OLAP, retrieved 2023-05-28 |
https://en.wikipedia.org/wiki/Portable%20DVD%20player | A portable DVD player is a mobile, battery powered DVD player in the format of a mobile device. Many recent players play files from USB flash drives and SD cards.
History
Portable DVD players were created in order to enhance the ability to watch DVDs away from home. They were created in 1998, first introduced by Panasonic. They are supposed to be practical for "on the go" use, and many are able to perform secondary functions such as playing music from audio CDs and displaying images from digital cameras or camcorders.
Impact
The popularity of low-cost battery powered portable DVD players in North Korea allows families to watch Chinese and South Korean programs on SD cards and USB flash drives. North Korean defectors run activist groups, like Fighters for a Free North Korea that smuggle DVDs and SD cards into the country "to introduce North Koreans to the rest of the world". Activist groups planned to distribute DVD copies of The Interview via balloon drops. The balloon drop was postponed after the North Korean government referred to the plan as a de facto "declaration of war."
Design
Most PDPs use TFT LCD screens, some using LED backlighting. The most common PDP screen size is , although some are as large as - the larger size competing with tablet computers. Some have articulating screens that rotate 180 degrees and fold flat. Portable DVD players generally have connections for additional screens and a car lighter plug.
Some PDPs now have iPod docks, USB and SD Card slots built in. Some can play videos in other formats such as MP4, DivX, either from CDs, flash memory cards or USB external hard disks. Also some DVD players include a USB video recorder.
Some DVD players have Wi-Fi access, enabling playback of Internet TV, and some have Bluetooth, allowing users to play content from or to other devices such as smartphones.
Previous models of portable DVD players had AV inputs for external game consoles; now some selected models have built-in emulators for playing, usually |
https://en.wikipedia.org/wiki/Intervillous%20space | In the placenta, the intervillous space is the space between chorionic villi, and contains maternal blood.
The trophoblast, which is a collection of cells that invades the maternal endometrium to gain access to nutrition for the fetus, proliferates rapidly and forms a network of branching processes which cover the entire embryo and invade and destroy the maternal tissues. With this physiologic destructive process, the maternal blood vessels of the endometrium are opened, with the result that the spaces in the trophoblastic network are filled with maternal blood; these spaces communicate freely with one another and become greatly distended and form the intervillous space from which the fetus gains nutrition.
Maternal arteries and veins directly enter the intervillous space after 8 weeks gestation, and the intervillous space will contain about a unit of blood (400–500 mL). Much of this blood is returned to the mother with normal uterine contractions; thus, when a woman has a cesarean section, she is liable to lose more blood than a woman who has a vaginal delivery, as the blood from the intervillous space is not pushed back toward her body during such a delivery. |
https://en.wikipedia.org/wiki/Placental%20cotyledon | The placenta of humans, and certain other mammals contains structures known as cotyledons, which transmit fetal blood and allow exchange of oxygen and nutrients with the maternal blood.
Ruminants
The Artiodactyla have a cotyledonary placenta. In this form of placenta the chorionic villi form a number of separate circular structures (cotyledons) which are distributed over the surface of the chorionic sac. Sheep, goats and cattle have between 72 and 125 cotyledons whereas deer have 4-6 larger cotyledons.
Human
The form of the human placenta is generally classified as a discoid placenta. Within this the cotyledons are the approximately 15-25 separations of the decidua basalis of the placenta, separated by placental septa. Each cotyledon consists of a main stem of a chorionic villus as well as its branches and sub-branches.
Vasculature
The cotyledons receive fetal blood from chorionic vessels, which branch off cotyledon vessels into the cotyledons, which, in turn, branch into capillaries. The cotyledons are surrounded by maternal blood, which can exchange oxygen and nutrients with the fetal blood in the capillaries. |
https://en.wikipedia.org/wiki/Decidual%20cells | Before the fertilized ovum reaches the uterus, the mucous membrane of the body of the uterus undergoes important changes and is then known as the decidua. The thickness and vascularity of the mucous membrane are greatly increased; its glands are elongated and open on its free surface by funnel-shaped orifices, while their deeper portions are tortuous and dilated into irregular spaces. The interglandular tissue is also increased in quantity, and is crowded with large round, oval, or polygonal cells, termed decidual cells. Their enlargement is due to glycogen and lipid accumulation in the cytoplasm allowing these cells to provide a rich source of nutrition for the developing embryo. Decidual cells are also thought to control the invasion of the endometrium by trophoblast cells.
Experimentally, human endometrial stromal cells can be decidualized in culture by using analogs of cAMP and progesterone. The cells will exhibit a decidualized phenotype and display upregulation of common decidualization markers such as prolactin and IGFBP1. |
https://en.wikipedia.org/wiki/Kari%20%28music%29 | Kari, in shakuhachi music, is both a property of a note and a technique. To play a note kari means to play it with raised pitch, relative to playing the note meri. In addition to sharpening the pitch, playing a note kari also modifies the tone color or timbre of a note.
The usual technique to play a note kari is to raise the chin, increasing the angle and distance between the embouchure and utaguchi (blowing edge). |
https://en.wikipedia.org/wiki/Kernel%20Patch%20Protection | Kernel Patch Protection (KPP), informally known as PatchGuard, is a feature of 64-bit (x64) editions of Microsoft Windows that prevents patching the kernel. It was first introduced in 2005 with the x64 editions of Windows XP and Windows Server 2003 Service Pack 1.
"Patching the kernel" refers to unsupported modification of the central component or kernel of the Windows operating system. Such modification has never been supported by Microsoft because, according to Microsoft, it can greatly reduce system security, reliability, and performance. Although Microsoft does not recommend it, it is possible to patch the kernel on x86 editions of Windows; however, with the x64 editions of Windows, Microsoft chose to implement additional protection and technical barriers to kernel patching.
Since patching the kernel is possible in 32-bit (x86) editions of Windows, several antivirus software developers use kernel patching to implement antivirus and other security services. These techniques will not work on computers running x64 editions of Windows. Because of this, Kernel Patch Protection resulted in antivirus makers having to redesign their software without using kernel patching techniques.
However, because of the design of the Windows kernel, Kernel Patch Protection cannot completely prevent kernel patching. This has led to criticism that since KPP is an imperfect defense, the problems caused to antivirus vendors outweigh the benefits because authors of malicious software will simply find ways around its defenses. Nevertheless, Kernel Patch Protection can still prevent problems of system stability, reliability, and performance caused by legitimate software patching the kernel in unsupported ways.
Technical overview
The Windows kernel is designed so that device drivers have the same privilege level as the kernel itself. Device drivers are expected to not modify or patch core system structures within the kernel. However, in x86 editions of Windows, Windows does not enforce t |
https://en.wikipedia.org/wiki/Adversary%20model | In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms, the adversary is the same as the adaptive offline adversary. For randomized online algorithms competitiveness can depend upon the adversary model used.
Common adversaries
The three common adversaries are the oblivious adversary, the adaptive online adversary, and the adaptive offline adversary.
The oblivious adversary is sometimes referred to as the weak adversary. This adversary knows the algorithm's code, but does not get to know the randomized results of the algorithm.
The adaptive online adversary is sometimes called the medium adversary. This adversary must make its own decision before it is allowed to know the decision of the algorithm.
The adaptive offline adversary is sometimes called the strong adversary. This adversary knows everything, even the random number generator. This adversary is so strong that randomization does not help against it.
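The strength of the oblivious adversary can be made concrete with the classic ski-rental problem (rent for 1 per day or buy for B): the sketch below uses the standard randomized buying distribution and computes the worst expected-cost ratio over every season length the adversary might pick, which approaches e/(e-1) ≈ 1.582. The parameter B is an arbitrary choice.

```python
import numpy as np

# Randomized ski rental against an oblivious adversary: the adversary
# knows the buying distribution but not the coin flips.
B = 100
j = np.arange(1, B + 1)
p = ((B - 1) / B) ** (B - j)          # classic distribution over buy days
p /= p.sum()

worst = 0.0
for d in range(1, 4 * B):             # adversary's choice of season length
    bought = j <= d
    cost = np.sum(p[bought] * (j[bought] - 1 + B))  # rented j-1 days, then bought
    cost += np.sum(p[~bought]) * d                  # rented the whole season
    opt = min(d, B)                   # offline optimum knows d in advance
    worst = max(worst, cost / opt)

print(f"competitive ratio vs oblivious adversary: {worst:.3f}")  # ~1.58
```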
Important results
From S. Ben-David, A. Borodin, R. Karp, G. Tardos, A. Wigderson we have:
If there is a randomized algorithm that is α-competitive against any adaptive offline adversary then there also exists an α-competitive deterministic algorithm.
If G is a c-competitive randomized algorithm against any adaptive online adversary, and there is a randomized d-competitive algorithm against any oblivious adversary, then G is a randomized (c * d)-competitive algorithm against any adaptive offline adversary.
See also
Competitive analysis (online algorithm)
K-server problem
Online algorithm |
https://en.wikipedia.org/wiki/Competitive%20analysis%20%28online%20algorithm%29 | Competitive analysis is a method invented for analyzing online algorithms, in which the performance of an online algorithm (which must satisfy an unpredictable sequence of requests, completing each request without being able to see the future) is compared to the performance of an optimal offline algorithm that can view the sequence of requests in advance. An algorithm is competitive if its competitive ratio—the ratio between its performance and the offline algorithm's performance—is bounded. Unlike traditional worst-case analysis, where the performance of an algorithm is measured only for "hard" inputs, competitive analysis requires that an algorithm perform well both on hard and easy inputs, where "hard" and "easy" are defined by the performance of the optimal offline algorithm.
For many algorithms, performance is dependent not only on the size of the inputs, but also on their values. For example, sorting an array of elements varies in difficulty depending on the initial order. Such data-dependent algorithms are analysed for average-case and worst-case data. Competitive analysis is a way of doing worst case analysis for on-line and randomized algorithms, which are typically data dependent.
In competitive analysis, one imagines an "adversary" which deliberately chooses difficult data, to maximize the ratio of the cost of the algorithm being studied and some optimal algorithm. When considering a randomized algorithm, one must further distinguish between an oblivious adversary, which has no knowledge of the random choices made by the algorithm pitted against it, and an adaptive adversary which has full knowledge of the algorithm's internal state at any point during its execution. (For a deterministic algorithm, there is no difference; either adversary can simply compute what state that algorithm must have at any time in the future, and choose difficult data accordingly.)
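For a concrete feel, the sketch below compares the online LRU paging rule with the offline optimum (Belady's farthest-in-future eviction rule) and reports the empirical ratio of their fault counts; the page universe, cache size, and random request sequence are arbitrary choices, and an adversarially chosen sequence would push LRU toward its worst-case bound of k times OPT.

```python
import random
from collections import OrderedDict

def lru_faults(requests, k):
    cache, faults = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)    # evict least recently used
            cache[page] = True
    return faults

def opt_faults(requests, k):
    cache, faults = set(), 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(p):                 # next request of p after time i
                for t in range(i + 1, len(requests)):
                    if requests[t] == p:
                        return t
                return float("inf")          # never used again
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults

random.seed(0)
requests = [random.randrange(8) for _ in range(2000)]   # 8 pages, cache of 4
lru, opt = lru_faults(requests, 4), opt_faults(requests, 4)
print(f"LRU: {lru} faults, OPT: {opt} faults, ratio {lru / opt:.2f}")
```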
For example, the quicksort algorithm chooses one element, called the "pivot", that is, o |
https://en.wikipedia.org/wiki/Applied%20spectroscopy | Applied spectroscopy is the application of various spectroscopic methods for the detection and identification of different elements or compounds to solve problems in fields like forensics, medicine, the oil industry, atmospheric chemistry, and pharmacology.
Spectroscopic methods
A common spectroscopic method for analysis is Fourier transform infrared spectroscopy (FTIR), where chemical bonds can be detected through their characteristic infrared absorption frequencies or wavelengths. These absorption characteristics make infrared analyzers an invaluable tool in geoscience, environmental science, and atmospheric science. For instance, atmospheric gas monitoring has been facilitated by the development of commercially available gas analyzers which can distinguish between carbon dioxide, methane, carbon monoxide, oxygen, and nitric oxide.
Ultraviolet (UV) spectroscopy is used where strong absorption of UV radiation occurs in a substance. The groups responsible are known as chromophores and include aromatic groups, conjugated systems of bonds, carbonyl groups and so on. Nuclear magnetic resonance spectroscopy detects hydrogen atoms in specific environments, and complements both infrared (IR) spectroscopy and UV spectroscopy. The use of Raman spectroscopy is growing for more specialist applications.
There are also derivative methods such as infrared microscopy, which allows very small areas to be analyzed in an optical microscope.
One method of elemental analysis that is important in forensic analysis is energy-dispersive X-ray spectroscopy (EDX) performed in the environmental scanning electron microscope (ESEM). The method involves analysis of back-scattered X-rays from the sample as a result of interaction with the electron beam. Automated EDX is further used in a range of automated mineralogy techniques for identification and textural mapping.
Sample preparation
In all three spectroscopic methods, the sample usually needs to be present in solution, which may present problems |
https://en.wikipedia.org/wiki/Stationary%20sequence | In probability theory – specifically in the theory of stochastic processes – a stationary sequence is a random sequence whose joint probability distribution is invariant over time. If a random sequence $X_j$ is stationary then the following holds for any natural number $n$ and any time shift $k$:
$$F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = F_{X_{1+k}, \ldots, X_{n+k}}(x_1, \ldots, x_n),$$
where $F$ is the joint cumulative distribution function of the random variables in the subscript.
If a sequence is stationary then it is wide-sense stationary.
If a sequence is stationary then it has a constant mean (which may not be finite):
$$E[X_j] = \mu \quad \text{for all } j.$$
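A quick empirical illustration with a moving-average sequence, which is stationary by construction; the coefficient, sample size, and window split are arbitrary choices.

```python
import numpy as np

# MA(1) sequence X_j = e_j + 0.5 e_{j-1} built from i.i.d. noise: the mean
# and the lag-1 dependence look the same in any window of the sequence.
rng = np.random.default_rng(2)
noise = rng.normal(size=100_001)
x = noise[1:] + 0.5 * noise[:-1]

early, late = x[:50_000], x[50_000:]
print(f"mean (early/late): {early.mean():+.3f} / {late.mean():+.3f}")
print(f"lag-1 corr (early): {np.corrcoef(early[:-1], early[1:])[0, 1]:.3f}")
print(f"lag-1 corr (late):  {np.corrcoef(late[:-1], late[1:])[0, 1]:.3f}")
```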
See also
Stationary process |
https://en.wikipedia.org/wiki/HisB | The hisB gene, found in the enterobacteria (such as E. coli), in Campylobacter jejuni and in Xylella/Xanthomonas, encodes a protein that catalyses two steps of histidine biosynthesis (the sixth and eighth steps), namely the bifunctional imidazoleglycerol-phosphate dehydratase/histidinol-phosphatase.
The former function, found at the N-terminus, dehydrates D-erythro-imidazoleglycerol phosphate to imidazole-acetol phosphate; the latter function, found at the C-terminus, dephosphorylates L-histidinol phosphate, producing histidinol.
The seventh step is catalysed instead by histidinol-phosphate aminotransferase (encoded by hisC).
The peptide is 40.5 kDa and associates to form a hexamer (unless truncated).
In E. coli, hisB is found on the hisGDCBHAFI operon.
The phosphatase activity possesses substrate ambiguity, and overexpression of hisB can rescue phosphoserine phosphatase (serB) knockouts.
Reactions
hisB-N
D-erythro-1-(imidazol-4-yl)glycerol 3-phosphate ⇌ 3-(imidazol-4-yl)-2-oxopropyl phosphate + H2O
hisB-C
L-histidinol phosphate + H2O ⇌ L-histidinol + phosphate
Non-fusion protein in other species
HIS3 from Saccharomyces cerevisiae is not a fused IGP dehydratase and histidinol phosphatase, but an IGP dehydratase only (homologous to hisB-N), whereas HIS2 is the histidinol phosphatase (analogous to hisB-C, called hisJ in some prokaryotes). |
https://en.wikipedia.org/wiki/Phosphopantetheine | Phosphopantetheine, also known as 4'-phosphopantetheine, is a prosthetic group of several acyl carrier proteins including the acyl carrier proteins (ACP) of fatty acid synthases, ACPs of polyketide synthases, the peptidyl carrier proteins (PCP), as well as aryl carrier proteins (ArCP) of nonribosomal peptide synthetases (NRPS). It is also present in formyltetrahydrofolate dehydrogenase.
Subsequent to the expression of the apo acyl carrier protein, the 4'-phosphopantetheine moiety is attached to a serine residue. The coupling involves formation of a phosphodiester linkage. This coupling is mediated by acyl carrier protein synthase (ACPS), a 4'-phosphopantetheinyl transferase.
The phosphopantetheine prosthetic group covalently links to the acyl group via a high-energy thioester bond. The flexibility and length of the phosphopantetheine chain (approximately 2 nm) allow the covalently tethered intermediates to access spatially distinct enzyme active sites. This accessibility increases the effective molarity of the intermediate and allows an assembly line-like process.
See also
Fatty acid synthase
Pantothenic acid |