source | text |
|---|---|
https://en.wikipedia.org/wiki/Monomethylhydrazine | Monomethylhydrazine (mono-methyl hydrazine, MMH) is a highly toxic, volatile hydrazine derivative with the chemical formula CH3NHNH2. It is used as a rocket propellant in bipropellant rocket engines because it is hypergolic with various oxidizers such as nitrogen tetroxide (N2O4) and nitric acid (HNO3). As a propellant, it is described in specification MIL-PRF-27404.
MMH is a hydrazine derivative that was once used in the orbital maneuvering system (OMS) and reaction control system (RCS) engines of NASA's Space Shuttle, which used MMH and MON-3 (a mixture of nitrogen tetroxide with approximately 3% nitric oxide). This chemical is toxic and carcinogenic, but it is easily stored in orbit, providing moderate performance for very low fuel tank system weight. MMH and its chemical relative unsymmetrical dimethylhydrazine (UDMH) have the key advantage of being stable enough to be used in regeneratively cooled rocket engines. The European Space Agency (ESA) has sought new bipropellant combinations that avoid highly toxic chemicals such as MMH and its relatives.
MMH is believed to be the main cause of the toxicity of mushrooms of genus Gyromitra, especially the false morel (Gyromitra esculenta). In these cases, MMH is formed by the hydrolysis of gyromitrin.
Monomethylhydrazine is considered to be a possible occupational carcinogen, and the occupational exposure limits to MMH are set at protective levels to account for the possible carcinogenicity.
A known use of MMH is in the synthesis of suritozole.
MMH is also assumed to be the active methylating agent in the drug temozolomide. |
https://en.wikipedia.org/wiki/Methods%20in%20Ecology%20and%20Evolution | Methods in Ecology and Evolution is a monthly peer-reviewed scientific journal covering methodologies in ecology and evolution. It was established in 2010 and is the 5th journal of the British Ecological Society. It is published by Wiley-Blackwell on behalf of the British Ecological Society and the editors-in-chief are Aaron M. Ellison (Harvard University), Natalie Cooper (Natural History Museum), Nicolas Lecomte (Université de Moncton), and Huijie Qiao (Chinese Academy of Sciences).
In June 2022 it was announced that the journal would switch to a full open access publishing model from January 2023, with all papers submitted to the journal from 6 July 2022 open access on publication.
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.6.
See also
Journal of Ecology
Journal of Animal Ecology
Journal of Applied Ecology
Functional Ecology |
https://en.wikipedia.org/wiki/Quasicontraction%20semigroup | In mathematical analysis, a C0-semigroup Γ(t), t ≥ 0, is called a quasicontraction semigroup if there is a constant ω such that ||Γ(t)|| ≤ exp(ωt) for all t ≥ 0. Γ(t) is called a contraction semigroup if ||Γ(t)|| ≤ 1 for all t ≥ 0.
See also
Contraction (operator theory)
Hille–Yosida theorem
Lumer–Phillips theorem |
https://en.wikipedia.org/wiki/Spaced%20seed | In bioinformatics, a spaced seed is a pattern of relevant and irrelevant positions in a biosequence and a method of approximate string matching that allows for substitutions. Spaced seeds are a straightforward modification of the earliest heuristic-based alignment efforts that allows for minor differences between the sequences of interest. They have been used in homology search, alignment, assembly, and metagenomics. They are usually represented as a sequence of zeroes and ones, where a one indicates relevance and a zero indicates irrelevance at the given position. Some visual representations use pound signs for relevant positions and dashes or asterisks for irrelevant ones.
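For illustration, a minimal Python sketch (the seed pattern, query, and reference strings are made-up examples) that reports every offset where the query matches the reference at all positions the seed marks as relevant:

```python
def spaced_seed_hits(seed, query, reference):
    """Find positions where `query` matches `reference` at every
    position the seed marks as relevant ('1'); '0' positions are
    "don't care" and may mismatch freely."""
    relevant = [i for i, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = []
    for pos in range(len(reference) - span + 1):
        if all(query[i] == reference[pos + i] for i in relevant):
            hits.append(pos)
    return hits

# Seed '110101' requires matches at offsets 0, 1, 3, 5 only.
print(spaced_seed_hits("110101", "ACGTAC", "TTACGAACTTACTTACGTAC"))
# -> [6, 10, 14]: one exact match (14) and two hits enabled
#    by mismatches at the "don't care" offsets.
```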
Principle
Due to a number of functional and evolutionary constraints, nucleic acid sequences between individuals tend to be highly conserved, with the typical difference between two human genomes estimated on the order of 0.6% (or around 20 million base pairs). Identification of highly similar regions in the genome may indicate functional importance, as mutations in these areas that would result in cessation of function or loss of regulatory ability would be evolutionarily unfavorable. Other observed differences between two sequences may arise as a result of stochastic sequencing errors. Similarly, when performing assembly of a previously characterized genome, an attempt is made to align the newly sequenced DNA fragments to the existing genome sequence.
In both cases, it is useful to be able to directly compare nucleic acid sequences. Since the sequences are not expected to be exactly identical, however, it is beneficial to focus on smaller subsequences that are more likely to be locally identical. Spaced seeds allow for even more permissive local matching by allowing certain base pairs (defined by the pattern of the specific spaced seed) to mismatch without penalty, thus allowing algorithms that use the general "hit-extend" strategy of alignment to explore additional potential matches that would be |
https://en.wikipedia.org/wiki/Transmission%20delay | In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay or serialization delay) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link.
Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits. It is given by the following formula:

$d_T = \dfrac{N}{R}$ seconds

where
$d_T$ is the transmission delay in seconds,
$N$ is the number of bits, and
$R$ is the rate of transmission (say in bits per second).
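A minimal Python illustration of $d_T = N/R$ (the packet size and link rate are arbitrary example values):

```python
def transmission_delay(packet_bits: int, link_rate_bps: float) -> float:
    """Transmission (store-and-forward) delay d_T = N / R, in seconds."""
    return packet_bits / link_rate_bps

# Example: a 1500-byte Ethernet frame on a 100 Mbit/s link.
packet_bits = 1500 * 8          # N = 12,000 bits
rate = 100e6                    # R = 100 Mbit/s
print(transmission_delay(packet_bits, rate))  # 0.00012 s = 120 microseconds
```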
Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
See also
End-to-end delay
Processing delay
Queuing delay
Propagation delay
Network delay |
https://en.wikipedia.org/wiki/List%20of%20SEA%20Games%20mascots | Since 1985, the Southeast Asian Games have had a mascot in each edition.
See also
List of Olympic mascots
List of Asian Games mascots
List of Commonwealth Games mascots |
https://en.wikipedia.org/wiki/Harmonious%20set | In mathematics, a harmonious set is a subset of a locally compact abelian group on which every weak character may be uniformly approximated by strong characters. Equivalently, a suitably defined dual set is relatively dense in the Pontryagin dual of the group. This notion was introduced by Yves Meyer in 1970 and later turned out to play an important role in the mathematical theory of quasicrystals. Some related concepts are model sets, Meyer sets, and cut-and-project sets.
Definition
Let Λ be a subset of a locally compact abelian group G and Λd be the subgroup of G generated by Λ, with discrete topology. A weak character is a restriction to Λ of an algebraic homomorphism from Λd into the circle group $\mathbb{T} = \{ z \in \mathbb{C} : |z| = 1 \}$.
A strong character is a restriction to Λ of a continuous homomorphism from G to T, that is an element of the Pontryagin dual of G.
A set Λ is harmonious if every weak character may be approximated by strong characters uniformly on Λ. Thus for any ε > 0 and any weak character χ, there exists a strong character ξ such that

$\sup_{\lambda \in \Lambda} |\chi(\lambda) - \xi(\lambda)| \le \varepsilon.$
If the locally compact abelian group G is separable and metrizable (its topology may be defined by a translation-invariant metric) then harmonious sets admit another, related, description. Given a subset Λ of G and a positive ε, let Mε be the subset of the Pontryagin dual of G consisting of all characters that are almost trivial on Λ:

$M_\varepsilon = \{ \xi \in \hat{G} : \sup_{\lambda \in \Lambda} |\xi(\lambda) - 1| \le \varepsilon \}.$
Then Λ is harmonious if the sets Mε are relatively dense in the sense of Besicovitch: for every ε > 0 there exists a compact subset Kε of the Pontryagin dual such that

$\hat{G} = M_\varepsilon + K_\varepsilon.$
Properties
A subset of a harmonious set is harmonious.
If Λ is a harmonious set and F is a finite set then the set Λ + F is also harmonious.
The next two properties show that the notion of a harmonious set is nontrivial only when the ambient group is neither compact nor discrete.
A finite set Λ is always harmonious. If the group G is compact then, conversely, every harmonious set is finite.
If G is a discrete group then every set i |
https://en.wikipedia.org/wiki/Transgenerational%20epigenetic%20inheritance | Transgenerational epigenetic inheritance is the transmission of epigenetic markers and modifications from one generation to multiple subsequent generations without altering the primary structure of DNA. Thus, the regulation of genes via epigenetic mechanisms can be heritable; the amount of transcripts and proteins produced can be altered by inherited epigenetic changes. In animals, however, epigenetic marks must occur in the gametes in order to be heritable; since plants lack a definitive germline and can propagate vegetatively, epigenetic marks in any tissue can be heritable.
It is important to note that the inheritance of epigenetic marks in the immediate generation is referred to as intergenerational inheritance. In male mice, the epigenetic signal is maintained through the F1 generation. In female mice, the epigenetic signal is maintained through the F2 generation as a result of the exposure of the germline in the womb. Many epigenetic signals are lost beyond the F2/F3 generation and are no longer inherited, because the subsequent generations were not exposed to the same environment as the parental generations. The signals that are maintained beyond the F2/F3 generation are referred to as transgenerational epigenetic inheritance (TEI), because initial environmental stimuli resulted in inheritance of epigenetic modifications. There are several mechanisms of TEI that have been shown to affect germline reprogramming, such as transgenerational increases in susceptibility to diseases, mutations, and stress inheritance. During germline reprogramming and early embryogenesis in mice, methylation marks are removed to allow for development to commence, but the methylation mark is converted into hydroxymethyl-cytosine so that it is recognized and methylated once that area of the genome is no longer being used, which serves as a memory for that TEI mark. Therefore, under lab conditions, inherited methyl marks are removed and restored to ensure TEI still occurs. However, observi |
https://en.wikipedia.org/wiki/Miedema%27s%20model | Miedema's model is a semi-empirical approach for estimating the heat of formation of solid or liquid metal alloys and compounds in the framework of thermodynamic calculations for metals and minerals. It was developed by the Dutch scientist Andries Rinse Miedema (15 November 1933 – 28 May 1992) while working at Philips Natuurkundig Laboratorium. It may provide or confirm basic enthalpy data needed for the calculation of phase diagrams of metals, via CALPHAD or ab initio quantum chemistry methods.
History
Miedema introduced his approach in several papers, beginning in 1973 in Philips Technical Review Magazine with "A simple model for alloys".
Miedema described his motivation with "Reliable rules for the alloying behaviour of metals have long been sought. There is the qualitative rule that states that the greater the difference in the electronegativity of two metals, the greater the heat of formation - and hence the stability. Then there is the Hume-Rothery rule, which states that two metals that differ by more than 15% in their atomic radius will not form substitutional solid solutions. This rule can only be used reliably (90 % success) to predict poor solubility; it cannot predict good solubility. The author has proposed a simple atomic model, which is empirical like the other two rules, but nevertheless has a clear physical basis and predicts the alloying behaviour of transition metals accurately in 98 % of cases. The model is very suitable for graphical presentation of the data and is therefore easy to use in practice."
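The excerpt quotes only the qualitative rules; for orientation, a commonly quoted form of the model's enthalpy expression is sketched below. The exact notation and constants vary between publications, so treat the details as an assumption rather than Miedema's precise formula.

```latex
% Commonly quoted form of Miedema's heat-of-formation term for alloying:
\Delta H \;\propto\; -P\,(\Delta\phi^{*})^{2} \;+\; Q\,\bigl(\Delta n_{\mathrm{ws}}^{1/3}\bigr)^{2} \;-\; R
% \Delta\phi^{*}: difference in work function (electronegativity-like term)
% \Delta n_{\mathrm{ws}}^{1/3}: difference in electron density at the
%   boundary of the Wigner--Seitz cell
% P, Q, R: empirical constants (R enters only for particular alloy families)
```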
Free web-based applications include Entall and Miedema Calculator. The latter was reviewed and improved in 2016, with an extension of the method. The original Algol program was ported to Fortran.
Informatics-guided classification of miscible and immiscible binary alloy systems
Miedema's approach has been applied to the classification of miscible and immiscible systems of binary alloys. These are relevant in the design of multicomponent al |
https://en.wikipedia.org/wiki/Multi-antimicrobial%20extrusion%20protein | Multi-antimicrobial extrusion protein (MATE) also known as multidrug and toxin extrusion or multidrug and toxic compound extrusion is a family of proteins which function as drug/sodium or proton antiporters.
Function
The MATE proteins in bacteria, archaea and eukaryotes function as fundamental transporters of metabolic and xenobiotic organic cations.
Structure
These proteins are predicted to have 12 alpha-helical transmembrane regions; some of the animal proteins may have an additional C-terminal helix. The X-ray structure of NorM was determined at 3.65 Å resolution, revealing an outward-facing conformation with two portals open to the outer leaflet of the membrane and a unique topology of the predicted 12 transmembrane helices distinct from any other known multidrug resistance transporter.
Discovery
The multidrug efflux transporter NorM from V. parahaemolyticus, which mediates resistance to multiple antimicrobial agents (norfloxacin, kanamycin, ethidium bromide etc.), and its homologue from E. coli were identified in 1998. NorM appears to function as a drug/sodium antiporter, the first example of a Na+-coupled multidrug efflux transporter to be discovered. NorM is a prototype of a new transporter family, which Brown et al. named the multidrug and toxic compound extrusion family. NorM is nicknamed "Last of the multidrug transporters" because it is the last multidrug transporter discovered functionally as well as structurally.
Genes
The following human genes encode MATE proteins:
SLC47A1
SLC47A2
See also
Solute carrier family
Resistance-Nodulation-Cell Division Superfamily (RND) |
https://en.wikipedia.org/wiki/Compression%20%28physics%29 | In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration.
In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume.
Technically, a material is under a state of compression, at some specific point and along a specific direction $d$, if the normal component of the stress vector across a surface with normal direction $d$ is directed opposite to $d$. If the stress vector itself is opposite to $d$, the material is said to be under normal compression or pure compressive stress along $d$. In a solid, the amount of compression generally depends on the direction $d$, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain.
The in |
https://en.wikipedia.org/wiki/Alexei%20Borodin | Alexei Mikhailovich Borodin (born June 30, 1975) is a professor of mathematics at the Massachusetts Institute of Technology.
Research
His research concerns asymptotic representation theory, relations with random matrices and integrable systems, and the difference equation formulation of monodromy.
Education and career
Borodin was born in Donetsk, the son of Donetsk State University mathematics professor Mikhail Borodin.
He competed for Ukraine in the 1992 International Mathematical Olympiad, earning a silver medal there.
In the same year, he began studying mathematics at Moscow State University, and (because of the collapse of the Soviet Union) was forced to choose between Ukrainian and Russian citizenship, deciding at that time to be Russian. He graduated from Moscow State in 1997 and received an M.S.E. in computer and information science and a Ph.D. in mathematics from the University of Pennsylvania.
He was a Clay Research Fellow and a researcher at the Institute for Advanced Study in Princeton, New Jersey.
Next, he taught at the California Institute of Technology from 2003 to 2010, before moving to MIT. In 2016–2017 he was a Fellow at the Radcliffe Institute at Harvard University.
Awards and honors
In 2008, Borodin won the European Mathematical Society Prize, one of ten prizes awarded every four years for excellence by a young mathematics researcher.
In 2010, he was one of four Caltech faculty invited to present their work at the International Congress of Mathematicians. In 2015 he won the Loève Prize and the Henri Poincaré Prize. In 2018 he became a Fellow of the American Academy of Arts and Sciences, and in 2019 he was awarded the Fermat Prize. |
https://en.wikipedia.org/wiki/Gossypol | Gossypol () is a natural phenol derived from the cotton plant (genus Gossypium). Gossypol is a phenolic aldehyde that permeates cells and acts as an inhibitor for several dehydrogenase enzymes. It is a yellow pigment. The structure exhibits atropisomerism, with the two enantiomers having different biochemical properties.
Among other applications, it has been tested as a male oral contraceptive in China. In addition to its putative contraceptive properties, gossypol has also long been known to possess antimalarial properties.
Biosynthesis
Gossypol is a terpenoid aldehyde which is formed metabolically through acetate via the isoprenoid pathway. The sesquiterpene dimer undergoes a radical coupling reaction to form gossypol. The biosynthesis begins when geranyl pyrophosphate (GPP) and isopentenyl pyrophosphate (IPP) are combined to make the sesquiterpene precursor farnesyl diphosphate (FPP). The cadinyl cation (1) is converted to 2 by (+)-δ-cadinene synthase. The (+)-δ-cadinene (2) is involved in making the basic aromatic sesquiterpene unit, hemigossypol, by oxidation, which generates 3 (8-hydroxy-δ-cadinene) with the help of (+)-δ-cadinene 8-hydroxylase. Compound 3 goes through various oxidative processes to make 4 (deoxyhemigossypol), which is oxidized by one electron into hemigossypol (5, 6, 7) and then undergoes a phenolic oxidative coupling, ortho to the phenol groups, to form gossypol (8). The coupling is catalyzed by a hydrogen peroxide-dependent peroxidase enzyme, which results in the final product.
Research
Contraception
A 1929 investigation in Jiangxi showed a correlation between low fertility in males and the use of crude cottonseed oil for cooking. The compound causing the contraceptive effect was determined to be gossypol.
In the 1970s, the Chinese government began researching the use of gossypol as a contraceptive. Their studies involved over 10,000 subjects, and continued for over a decade. They concluded that gossypol provided reliable contraception, cou |
https://en.wikipedia.org/wiki/Genus%20character | In number theory, a genus character of a quadratic number field K is a character of the genus group of K. In other words, it is a real character of the narrow class group of K. Reinterpreting this using the Artin map, the collection of genus characters can also be thought of as the unramified real characters of the absolute Galois group of K (i.e. the characters that factor through the Galois group of the genus field of K). |
https://en.wikipedia.org/wiki/MinIO | MinIO is a high-performance object storage system released under the GNU Affero General Public License v3.0. It is API-compatible with the Amazon S3 cloud storage service. It can handle unstructured data such as photos, videos, log files, backups, and container images, with a current maximum supported object size of 50 TB.
History & development
MinIO's main developer is MinIO, Inc., a Silicon Valley-based technology startup founded by Anand Babu Periasamy, Garima Kapoor, and Harshavardhana in November 2014.
MinIO has published a number of benchmarks to disclose both its own performance and the performance of object storage in general. These benchmarks include comparisons to Amazon S3 for Trino, Presto, and Spark, as well as throughput results for the S3Benchmark on HDD and NVMe drives.
Re-licensing
On April 23, 2021, MinIO, Inc. submitted a change that re-licensed the project from its previous Apache License 2.0 to the GNU Affero General Public License version 3 (AGPLv3).
Architecture
The MinIO storage stack has three major components: MinIO Server; MinIO Client (a.k.a. mc, a command-line client for object and file management with any Amazon S3-compatible server); and the MinIO Client SDK, which can be used by application developers to interact with any Amazon S3-compatible server.
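As an illustration of the client-side workflow, a minimal sketch using MinIO's Python SDK (the endpoint, credentials, bucket name, and file paths below are hypothetical placeholders):

```python
from minio import Minio

# Hypothetical endpoint and credentials -- substitute your deployment's.
client = Minio(
    "minio.example.com:9000",
    access_key="YOUR-ACCESS-KEY",
    secret_key="YOUR-SECRET-KEY",
    secure=True,
)

bucket = "backups"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object, then stream it back.
client.fput_object(bucket, "logs/app.log", "/var/log/app.log")
obj = client.get_object(bucket, "logs/app.log")
print(obj.read()[:80])
obj.close()
```

The same operations are available through mc or any other S3-compatible client, since the server speaks the S3 API.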
MinIO Server
The MinIO cloud storage server is designed to be minimal and scalable. It is light enough to be bundled with the application stack, similar to Node.js and Redis.
MinIO is optimized for large enterprise deployments, including features like erasure coding, bitrot protection, encryption/WORM, identity management, continuous replication, global federation, and multi-cloud deployments via gateway mode.
MinIO server is hardware agnostic and thus can be installed both on physical and virtual machines or launched as Docker containers and deployed on container orchestration platforms like Kubernetes.
MinIO Client
MinIO Client provides an alternative to the standard UNIX commands (e.g. ls, cat |
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2081 | In molecular biology, glycoside hydrolase family 81 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 81 is a family of eukaryotic beta-1,3-glucanases (CAZY GH_81). |
https://en.wikipedia.org/wiki/Alexander%20Kiselev%20%28mathematician%29 | Alexander A. Kiselev (born 1969) is an American mathematician, specializing in spectral theory, partial differential equations, and fluid mechanics.
Career
Alexander Kiselev received his bachelor's degree in 1992 from Saint Petersburg State University and his PhD in 1997 from Caltech under the supervision of Barry Simon.
In 1997–1998 he was a postdoctoral fellow at the Mathematical Sciences Research Institute, where he co-authored a paper on the Christ–Kiselev maximal inequality.
Between 1998 and 2002 he was an L. E. Dickson Instructor and then assistant professor at the University of Chicago, where he worked with Peter Constantin on reaction-diffusion equations and fluid mechanics. In 2001, Kiselev solved one of the Simon problems, on the existence of embedded singular continuous spectrum of the Schrödinger operator with slowly decaying potential.
He taught at the University of Wisconsin–Madison from 2002 to 2013, as an associate and full professor. He was a member of the Rice University faculty between 2013 and 2017. Since 2018, Kiselev has been the William T. Laprade Professor of Mathematics at Duke University. His research has been profiled by Science Watch, the Institute for Mathematics and its Applications, Duke Today, and Quanta Magazine.
Awards and honors
Sloan Research Fellowship, 2001
Guggenheim Fellowship, 2012
Invited speaker, 2018 International Congress of Mathematicians.
12th Brooke Benjamin Lecture, Oxford University, 2019
Simons Foundation Fellow in Mathematics, 2020
Publications |
https://en.wikipedia.org/wiki/Cryptographically%20Generated%20Address | A Cryptographically Generated Address (CGA) is an Internet Protocol Version 6 (IPv6) address that has a host identifier computed from a cryptographic hash function. This procedure is a method for binding a public signature key to an IPv6 address in the Secure Neighbor Discovery Protocol (SEND).
Characteristics
A Cryptographically Generated Address is an IPv6 address whose interface identifier has been generated according to the CGA generation method. The interface identifier is formed by the least-significant 64 bits of an IPv6 address and is used to identify the host's network interface on its subnet. The subnet is determined by the most-significant 64 bits, the subnet prefix.
Apart from the public key that is to be bound to the CGA, the CGA generation method takes several other input parameters including the predefined subnet prefix. These parameters, along with other parameters that are generated during the execution of the CGA generation method, form a set of parameters called the CGA Parameters data structure. The complete set of CGA Parameters has to be known in order to be able to verify the corresponding CGA.
The CGA Parameters data structure consists of:
modifier: a random 128-bit unsigned integer;
subnetPrefix: the 64-bit prefix that defines to which subnet the CGA belongs;
collCount: an 8-bit unsigned integer that must be 0, 1, or 2;
publicKey: the public key as a DER-encoded ASN.1 structure of the type SubjectPublicKeyInfo;
extFields: an optional variable-length field (default length 0).
Additionally, a security parameter Sec determines the CGA's strength against brute-force attacks. This is a 3-bit unsigned integer that can have any value from 0 up to (and including) 7 and is encoded in the three leftmost bits of the CGA's interface identifier. The higher the value of Sec, the higher the level of security, but also the longer it generally takes to generate a CGA. For convenience, the intermediate Sec values in the pseudocode below are assumed t |
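The pseudocode referenced above is not included in this excerpt; the following simplified Python sketch, loosely following RFC 3972 (hash extension abbreviated, duplicate-address collision handling omitted), illustrates how the interface identifier is derived:

```python
import hashlib
import os

def cga_interface_id(public_key_der: bytes, subnet_prefix: bytes, sec: int) -> bytes:
    """Illustrative, simplified CGA derivation loosely following RFC 3972.
    Omits the duplicate-address-detection loop that increments collCount."""
    assert len(subnet_prefix) == 8 and 0 <= sec <= 7
    coll_count = 0
    while True:
        modifier = os.urandom(16)
        # Hash2 is computed with subnet prefix and collision count zeroed;
        # the modifier is re-drawn until the 16*Sec leftmost bits are zero
        # (the "hash extension" that makes brute force exponentially harder;
        # e.g. sec=1 needs about 2**16 trials on average).
        h2 = hashlib.sha1(modifier + b"\x00" * 9 + public_key_der).digest()[:14]
        if int.from_bytes(h2, "big") >> (112 - 16 * sec) == 0:
            break
    params = modifier + subnet_prefix + bytes([coll_count]) + public_key_der
    iid = bytearray(hashlib.sha1(params).digest()[:8])  # Hash1: leftmost 64 bits
    iid[0] = (iid[0] & 0x1F) | (sec << 5)  # encode Sec in the 3 leftmost bits
    iid[0] &= 0xFC                         # clear the u and g bits
    return bytes(iid)
```

A verifier recomputes Hash1 from the received CGA Parameters and checks both the interface identifier and the Hash2 condition for the encoded Sec value.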
https://en.wikipedia.org/wiki/Sulcus%20of%20auditory%20tube | The lateral half of the great wing of the sphenoid bone articulates, by means of a synchondrosis, with the petrous part of the temporal bone. Between these two bones, on the under surface of the skull, is a furrow, the sulcus of the auditory tube, for the lodgement of the cartilaginous part of the auditory tube. |
https://en.wikipedia.org/wiki/ANSI/ISA-95 | ANSI/ISA-95, or ISA-95 as it is more commonly referred to, is an international standard from the International Society of Automation for developing an automated interface between enterprise and control systems. This standard has been developed for global manufacturers. It was developed to be applied in all industries and in all sorts of processes, such as batch, continuous, and repetitive processes.
The objectives of ISA-95 are to provide consistent terminology as a foundation for supplier and manufacturer communications, to provide consistent information models, and to provide consistent operations models, which are a foundation for clarifying application functionality and how information is to be used.
There are five parts to the ISA-95 standard.
ANSI/ISA-95.00.01-2000, Enterprise-Control System Integration Part 1: Models and Terminology consists of standard terminology and object models, which can be used to decide which information should be exchanged.
The models help define boundaries between the enterprise systems and the control systems. They help address questions such as which tasks can be executed by which function and what information must be exchanged between applications.
ISA-95 Models
Context
Hierarchy Models
Scheduling and control (Purdue)
Equipment hierarchy
Functional Data Flow Model
Manufacturing Functions
Data Flows
Object Models
Objects
Object Relationships
Object Attributes
Operations Activity Models
Operations Elements: PO, MO, QO, IO
Operations Data Flow Model
Operations Functions
Operations Flows
ANSI/ISA-95.00.02-2001, Enterprise-Control System Integration Part 2: Object Model Attributes consists of attributes for every object that is defined in part 1. The objects and attributes of Part 2 can be used for the exchange of information between different systems, but these objects and attributes can also be used as the basis for relational databases.
ANSI/ISA-95.00.03-2005, Enterprise-Control System Integration, Part 3: Models |
https://en.wikipedia.org/wiki/Trisomy%2018 | Trisomy 18, also known as Edwards syndrome, is a genetic disorder caused by the presence of a third copy of all or part of chromosome 18. Many parts of the body are affected. Babies are often born small and have heart defects. Other features include a small head, small jaw, clenched fists with overlapping fingers, and severe intellectual disability.
Most cases of trisomy 18 occur due to problems during the formation of the reproductive cells or during early development. The chance of this condition occurring increases with the mother's age. Rarely, cases may be inherited. Occasionally, not all cells have the extra chromosome, known as mosaic trisomy, and symptoms in these cases may be less severe. An ultrasound during pregnancy can increase suspicion for the condition, which can be confirmed by amniocentesis.
Treatment is supportive. After having one child with the condition, the risk of having a second is typically around one percent. It is the second-most common condition due to a third chromosome at birth, after Down syndrome.
Trisomy 18 occurs in around 1 in 5,000 live births. Many of those affected die before birth. Some studies suggest that, among babies who survive to birth, more are female. Survival beyond a year of life is around 5–10%. It is named after English geneticist John Hilton Edwards, who first described the syndrome in 1960.
Signs and symptoms
Children born with Edwards' syndrome may have some or all of these characteristics: kidney malformations, structural heart defects at birth (i.e., ventricular septal defect, atrial septal defect, patent ductus arteriosus), intestines protruding outside the body (omphalocele), esophageal atresia, intellectual disability, developmental delays, growth deficiency, feeding difficulties, breathing difficulties, and arthrogryposis (a muscle disorder that causes multiple joint contractures at birth).
Some physical malformations associated with Edwards' syndrome include small head (microcephaly) accompanied by a promi |
https://en.wikipedia.org/wiki/Alzheimer%27s%20research%20in%20Australia | Alzheimer's research in Australia is carried out at a number of institutions and supported by various charities.
Dementia Australia Research Foundation
This is the research arm of the advocacy organization Dementia Australia. It funds Australia's new and early-career dementia researchers. In recent years, 85% of its funding has gone to its grants program, which in 2013 provided more than $2.5 million in competitive research funding.
McCusker Alzheimer's Research Foundation
The McCusker Alzheimer's Research Foundation Inc was established in 2001 to support the research of Professor Ralph Martins, Foundation Chair in Aging and Alzheimer's Disease at Edith Cowan University. Its patron is Malcolm McCusker and vice-patron is Terrie Delroy.
The foundation is currently researching and testing various possible treatments for Alzheimer's disease. Tests are being conducted on diabetes drugs, turmeric, testosterone, and omega-3 fatty acids (the latter being tested for their ability to prevent beta-amyloid build-up).
University of Queensland
Researchers at the University of Queensland have developed an ultrasound technique that has been shown to work in mice and hope to trial it in humans. The ultrasound waves activate microglial cells that digest and remove the amyloid plaques. |
https://en.wikipedia.org/wiki/Mailing%20list | A mailing list is a collection of names and addresses used by an individual or an organization to send material to multiple recipients. The term is often extended to include the people subscribed to such a list, so the group of subscribers is referred to as "the mailing list", or simply "the list".
Transmission may be paper-based or electronic. Each has its strengths, although a 2022 article claimed that "direct mail still brings in the lion's share of revenue for most organizations."
Types
At least two types of mailing lists can be defined:
an announcement list is closer to the original sense, where a "mailing list" of people was used as a recipient for newsletters, periodicals or advertising. Traditionally this was done through the postal system, but with the rise of email, the electronic mailing list became popular. This type of list is used primarily as a one-way conduit of information and may only be "posted to" by selected people. This may also be referred to by the term newsletter. Newsletter and promotional emailing lists are employed in various sectors as parts of direct marketing campaigns.
a "discussion list" allows subscribing members (sometimes even people outside the list) to post their own items which are broadcast to all of the other mailing list members. Recipients may answer in a similar fashion, thus, actual discussion and information exchanges can occur. Mailing lists of this type are usually topic-oriented (for example, politics, scientific discussion, health problems, joke contests), and the topic may range from extremely narrow to "whatever you think could interest us." In this they are similar to Usenet newsgroups, another form of discussion group that may have an aversion to off-topic messages.
Historically mailing lists preceded email/web forums; both can provide analogous functionalities. When used in that fashion, mailing lists are sometimes known as discussion lists or discussion forums. Discussion lists provide some advantages ove |
https://en.wikipedia.org/wiki/Durham%20tube | Durham tubes are used in microbiology to detect production of gas by microorganisms. They are simply smaller test tubes inserted upside down in another test tube so they are freely movable. The culture media to be tested is then added to the larger tube and sterilized, which also eliminates the initial air gap produced when the tube is inserted upside down. The culture media typically contains a single substance to be tested with the organism, such as to determine whether an organism can ferment a particular carbohydrate. After inoculation and incubation, any gas that is produced will form a visible gas bubble inside the small tube. Litmus solution can also be added to the culture media to give a visual representation of pH changes that occur during the production of gas. The method was first reported in 1898 by British microbiologist Herbert Durham.
One limitation of the Durham tube is that it does not allow for precise determination of the type of gas produced within the inner tube, or measurement of the quantity of gas produced. However, Durham argued that quantitative measurements are of limited value because the culture solution will absorb some of the gas in unknown, variable proportions. Additionally, using Durham tubes to provide evidence of fermentation may not detect slow- or weakly-fermenting organisms when the resultant carbon dioxide diffuses back into the solution as quickly as it is formed, so a negative test using Durham tubes is not physiologically decisive. |
https://en.wikipedia.org/wiki/Branch%20point | In the mathematical field of complex analysis, a branch point of a multi-valued function (usually referred to as a "multifunction" in the context of complex analysis) is a point such that if the function is n-valued (has n values) at that point, all of its neighborhoods contain a point that has more than n values. Multi-valued functions are rigorously studied using Riemann surfaces, and the formal definition of branch points employs this concept.
Branch points fall into three broad categories: algebraic branch points, transcendental branch points, and logarithmic branch points. Algebraic branch points most commonly arise from functions in which there is an ambiguity in the extraction of a root, such as solving the equation w2 = z for w as a function of z. Here the branch point is the origin, because the analytic continuation of any solution around a closed loop containing the origin will result in a different function: there is non-trivial monodromy. Despite the algebraic branch point, the function w is well-defined as a multiple-valued function and, in an appropriate sense, is continuous at the origin. This is in contrast to transcendental and logarithmic branch points, that is, points at which a multiple-valued function has nontrivial monodromy and an essential singularity. In geometric function theory, unqualified use of the term branch point typically means the former more restrictive kind: the algebraic branch points. In other areas of complex analysis, the unqualified term may also refer to the more general branch points of transcendental type.
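To make the monodromy around the origin explicit for w2 = z (a standard computation, added for clarity):

```latex
% One branch of w^2 = z, in polar coordinates z = r e^{i\theta}:
w(z) \;=\; \sqrt{r}\, e^{i\theta/2}.
% Analytic continuation once around a loop enclosing the origin,
% \theta \mapsto \theta + 2\pi, yields
w \;\longmapsto\; \sqrt{r}\, e^{i(\theta+2\pi)/2} \;=\; -\sqrt{r}\, e^{i\theta/2} \;=\; -w,
% so the two branches are exchanged: nontrivial monodromy at z = 0.
```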
Algebraic branch points
Let $\Omega$ be a connected open set in the complex plane $\mathbb{C}$ and $f : \Omega \to \mathbb{C}$ a holomorphic function. If $f$ is not constant, then the set of the critical points of $f$, that is, the zeros of the derivative $f'(z)$, has no limit point in $\Omega$. So each critical point $z_0$ of $f$ lies at the center of a disc $B(z_0, r)$ containing no other critical point of $f$ in its closure.
Let $\gamma$ be the boundary of $B(z_0, r)$, taken with its positive orientation. The |
https://en.wikipedia.org/wiki/Front%20Panel%20Data%20Port | The front panel data port (FPDP) is a bus that provides high-speed data transfer between two or more VMEbus boards at up to 160 Mbyte/s with low latency. The FPDP bus uses a 32-bit parallel synchronous bus wired with an 80-conductor ribbon cable.
The following interface functions are supported:
FPDP/TM (transmitter master) - drives data and timing signals onto the FPDP, and also terminates the bus signals at one end of the ribbon cable
FPDP/RM (receiver master) - receives data from the FPDP synchronously with the timing signals provided by the FPDP/TM, and also terminates the bus at the opposite end of the cable to the FPDP/TM
FPDP/R (receiver) - receives data from the FPDP synchronously with the timing signals provided by the FPDP/TM; it does not terminate the bus. More than one FPDP/R can be connected to the FPDP bus. It can also be an alternate function to that of FPDP/RM via software control.
The connector designated by the FPDP specification is KEL P/N 8825E-080-175.
Interface signals
D<31:0> : Data bus driven by FPDP/TM
DIR_n : Active low Direction signal driven by FPDP/TM
DVALID_n: Active low data valid indication driven by FPDP/TM
STROBE : A free running clock supplied by FPDP/TM
NRDY_n : Active low not ready signal driven by FPDP/R or FPDP/RM. Asserted before the commencement of transfer of data by the FPDP/R or FPDP/RM asynchronous to STROBE.
PSTROBE : Optional Differential PECL version of the STROBE driven by FPDP/TM
SUSPEND_n : Active low suspend signal asserted by FPDP/R or FPDP/RM asynchronous to STROBE to inform the transmitter that a buffer overflow condition may occur. The transmitter may delay not more than 16 clocks before it suspends the data transfer.
SYNC_n : Active low synchronization pulse provided by FPDP/TM.
PIO1, PIO2 : Programmable I/O lines for user purposes
Data frames
The following types of data frames are supported:
Unframed Data
Single Frame Data
Fixed size Repeating Frame Data
Dynamic Size Repeating Frame Data
Ca |
https://en.wikipedia.org/wiki/Prime%20decomposition%20of%203-manifolds | In mathematics, the prime decomposition theorem for 3-manifolds states that every compact, orientable 3-manifold is the connected sum of a unique (up to homeomorphism) finite collection of prime 3-manifolds.
A manifold is prime if it cannot be presented as a connected sum of more than one manifold, none of which is the sphere of the same dimension. This condition is necessary since for any manifold $M$ of dimension $n$ it is true that $M = M \# S^n$ (where $M \# N$ means the connected sum of $M$ and $N$). If $P$ is a prime 3-manifold then either it is $S^2 \times S^1$ or the non-orientable $S^2$ bundle over $S^1$,
or it is irreducible, which means that any embedded 2-sphere bounds a ball. So the theorem can be restated to say that there is a unique connected sum decomposition into irreducible 3-manifolds and fiber bundles of $S^2$ over $S^1$.
The prime decomposition holds also for non-orientable 3-manifolds, but the uniqueness statement must be modified slightly: every compact, non-orientable 3-manifold is a connected sum of irreducible 3-manifolds and non-orientable $S^2$ bundles over $S^1$. This sum is unique as long as we specify that each summand is either irreducible or a non-orientable $S^2$ bundle over $S^1$.
The proof is based on normal surface techniques originated by Hellmuth Kneser. Existence was proven by Kneser, but the exact formulation and proof of the uniqueness was done more than 30 years later by John Milnor. |
https://en.wikipedia.org/wiki/Macroscopic%20scale | The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic.
Overview
When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers.
A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. An example of a topic that extends from macroscopic to microscopic viewpoints is histology.
Classical and quantum mechanics are distinguished in a way that does not quite coincide with the macroscopic/microscopic distinction. At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being considered far larger in mass and geometrical size than quantal objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of Planck's constant. Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points wi |
https://en.wikipedia.org/wiki/TAPPS2 | TAPPS2 (Technische Alternative Planungs- und Programmier-System) is a tool used for developing the program logic for the universal, heating, and solar thermal controllers made by Austrian manufacturer Technische Alternative. Its primary use case is defining the controller's exact reaction to a given event. Unlike its predecessor TAPPS, which could only be used to program controllers of type UVR1611, TAPPS2 is mainly used to program the UVR16x2 and RSM610 controllers, as well as several extension modules.
Development
Development in TAPPS2 is done on a vector-based drawing surface using components that can be placed via drag and drop. The components, which can be divided into inputs, functions, and outputs, are then connected according to their individual features. The available components vary according to the selected control unit.
External links
Website of Technische Alternative
TAPPS2 Manual |
https://en.wikipedia.org/wiki/2021%20Microsoft%20Exchange%20Server%20data%20breach | A global wave of cyberattacks and data breaches began in January 2021 after four zero-day exploits were discovered in on-premises Microsoft Exchange Servers, giving attackers full access to user emails and passwords on affected servers, administrator privileges on the server, and access to connected devices on the same network. Attackers typically install a backdoor that allows the attacker full access to impacted servers even if the server is later updated to no longer be vulnerable to the original exploits. It was estimated that 250,000 servers fell victim to the attacks, including servers belonging to around 30,000 organizations in the United States, 7,000 servers in the United Kingdom, as well as the European Banking Authority, the Norwegian Parliament, and Chile's Commission for the Financial Market (CMF).
On 2 March 2021, Microsoft released updates for Microsoft Exchange Server 2010, 2013, 2016 and 2019 to patch the exploit; this does not retroactively undo damage or remove any backdoors installed by attackers. Small and medium businesses, local institutions, and local governments are known to be the primary victims of the attack, as they often have smaller budgets to secure against cyber threats and typically outsource IT services to local providers that do not have the expertise to deal with cyber attacks.
On 12 March 2021, Microsoft announced the discovery of "a new family of ransomware" being deployed to servers initially infected, encrypting all files, making the server inoperable and demanding payment to reverse the damage. On 22 March 2021, Microsoft announced that on 92% of Exchange servers the exploit had been either patched or mitigated.
Background
Microsoft Exchange is considered a high-value target for hackers looking to penetrate business networks, as it is email server software, and, according to Microsoft, it provides "a unique environment that could allow attackers to perform various tasks using the same built-in tools or scripts that adm |
https://en.wikipedia.org/wiki/Friedrich%20Miescher | Johannes Friedrich Miescher (13 August 1844 – 26 August 1895) was a Swiss physician and biologist. He was the first scientist to isolate nucleic acid in 1869. He also identified protamine and made a number of other discoveries.
Miescher had isolated various phosphate-rich chemicals, which he called nuclein (now nucleic acids), from the nuclei of white blood cells in Felix Hoppe-Seyler's laboratory at the University of Tübingen, Germany, paving the way for the identification of DNA as the carrier of inheritance. The significance of the discovery, first published in 1871, was not at first apparent, and Albrecht Kossel made the initial inquiries into its chemical structure. Later, Miescher raised the idea that the nucleic acids could be involved in heredity and even posited that there might be something akin to an alphabet that might explain how variation is produced.
Early life and education
Friedrich Miescher came from a scientific family; his father and his uncle held the chair of anatomy at the University of Basel. As a boy, he was shy but intelligent. He had an interest in music, as his father performed publicly. Miescher studied medicine at Basel. In the summer of 1865, he worked for the organic chemist Adolf Strecker at the University of Göttingen, but his studies were interrupted for a year when he became ill with typhoid fever, which left him hearing-impaired. He received his MD in 1868.
Career
Miescher felt that his partial deafness would be a disadvantage as a doctor, so he turned to physiological chemistry. He originally wanted to study lymphocytes, but was encouraged by Felix Hoppe-Seyler to study neutrophils. He was interested in studying the chemistry of the nucleus. Lymphocytes were difficult to obtain in sufficient numbers to study, while neutrophils were known to be one of the main and first components in pus and could be obtained from bandages at the nearby hospital. The problem was, however, washing the cells off the bandages without damaging th |
https://en.wikipedia.org/wiki/Navigation%20function | Navigation function usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.
Potential functions as navigation functions
Potential functions assume that the environment or work space is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot only needs to follow the negative gradient of the surface.
We can formalize this concept mathematically as follows: Let $X$ be the state space of all possible configurations of a robot. Let $X_g \subset X$ denote the goal region of the state space.
Then a potential function $\phi : X \to [0, \infty)$ is called a (feasible) navigation function if
$\phi(x) = \infty$ if and only if no point in $X_g$ is reachable from $x$.
For every reachable state $x \notin X_g$, the local operator produces a state $x'$ for which $\phi(x') < \phi(x)$.
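A minimal Python sketch of the idea (the quadratic attractor, inverse-distance repulsors, and numerical gradient below are illustrative choices, not the article's definitions):

```python
import numpy as np

def potential(q, goal, obstacles, k_att=1.0, k_rep=0.5):
    """Attractive well at the goal plus repulsive peaks at point obstacles."""
    u = 0.5 * k_att * np.sum((q - goal) ** 2)
    for obs in obstacles:
        u += k_rep / max(np.linalg.norm(q - obs), 1e-6)
    return u

def follow_negative_gradient(q, goal, obstacles, step=0.05, iters=500):
    """Descend the potential numerically; returns the path taken."""
    path = [q.copy()]
    for _ in range(iters):
        grad = np.zeros_like(q)
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = 1e-4
            # Central-difference estimate of one gradient component.
            grad[i] = (potential(q + dq, goal, obstacles) -
                       potential(q - dq, goal, obstacles)) / 2e-4
        q = q - step * grad
        path.append(q.copy())
        if np.linalg.norm(q - goal) < 1e-2:
            break
    return np.array(path)

path = follow_negative_gradient(
    q=np.array([0.0, 0.0]),
    goal=np.array([5.0, 5.0]),
    obstacles=[np.array([2.5, 2.4])],
)
print(path[-1])  # near the goal unless trapped in a local minimum
```

Plain potential fields of this kind can trap the robot in spurious local minima; navigation functions are precisely potential functions constructed so that the goal is the only minimum.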
Probabilistic navigation function
Probabilistic navigation function is an extension of the classical navigation function for static stochastic scenarios. The function is defined by a permitted collision probability, which limits the risk during motion. The Minkowski sum used for obstacles in the classical definition is replaced with a convolution of the geometries and the probability density functions of locations. Denoting the target position by $x_d$, the probabilistic navigation function is defined as:

$\varphi(x) = \dfrac{d^2(x, x_d)}{\left[ d^{2K}(x, x_d) + \beta(x) \right]^{1/K}}$

where $K$ is a predefined constant, as in the classical navigation function, which ensures the Morse nature of the function; $d(x, x_d)$ is the distance to the target position $x_d$; and $\beta(x)$ takes into account all obstacles, defined as

$\beta(x) = \prod_{i=0}^{N} \beta_i(x)$

where $\beta_i(x)$ is based on the probability for a collision at location $x$. The probability for a collision is limited by a predetermined value $\Delta$, meaning:

$p_i(x) \le \Delta$

and,

$\beta_i(x) = \Delta - p_i(x)$

where $p_i(x)$ is the probability to collide with the i-th obstacle.
A map is said to be a |
https://en.wikipedia.org/wiki/De-essing | De-essing (also desibilizing) is any technique intended to reduce or eliminate the excessive prominence of sibilant consonants, such as the sounds normally represented in English by "s", "z", "ch", "j" and "sh", in recordings of the human voice. Sibilance lies in frequencies anywhere between 2 and 10 kHz, depending on the individual voice.
Causes
Excess sibilance can be caused by compression, microphone choice and technique, and even simply the way a person's mouth anatomy is shaped. Ess sound frequencies can be irritating to the ear, especially with earbuds or headphones, and interfere with an otherwise modulated and pleasant audio stream.
Process of de-essing
De-essing is a dynamic audio editing process, only working when the level of the signal in the sibilant range (the ess sound) exceeds a set threshold. De-essing temporarily reduces the level of high-frequency content in the signal when a sibilant ess sound is present. De-essing differs from equalization, which is a static change in level among many frequencies. However, equalization of the ess frequencies alone can be manipulated to reduce the level of sibilance.
There are several time- and frequency-based algorithms that can reduce sibilance or de-ess the sound. Time-domain approaches, such as bandpass filters, are better suited to real-time applications such as live radio because they place less demand on the digital signal processor. Playback or offline applications can incorporate methods based on the fast Fourier transform (FFT).
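To illustrate the band-split dynamic approach, a minimal Python sketch using SciPy (the crossover frequency, threshold, ratio, and window length are arbitrary example values):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, fs, split_hz=5000.0, threshold=0.05, ratio=4.0, win=256):
    """Split off the sibilant band, compress it when its envelope
    exceeds the threshold, and recombine with the low band."""
    sos_hi = butter(4, split_hz, btype="highpass", fs=fs, output="sos")
    sos_lo = butter(4, split_hz, btype="lowpass", fs=fs, output="sos")
    hi, lo = sosfilt(sos_hi, x), sosfilt(sos_lo, x)

    # Short-term RMS envelope of the high band.
    env = np.sqrt(np.convolve(hi**2, np.ones(win) / win, mode="same"))
    # Gain reduction applied only where the envelope exceeds the threshold.
    over = np.maximum(env / threshold, 1.0)
    gain = over ** (1.0 / ratio - 1.0)
    return lo + hi * gain

# Example: tame a synthetic "ess" (7 kHz burst) riding on a 200 Hz tone.
fs = 44100
t = np.arange(fs) / fs
voice = 0.3 * np.sin(2 * np.pi * 200 * t)
voice[10000:14000] += 0.3 * np.sin(2 * np.pi * 7000 * t[10000:14000])
out = deess(voice, fs)
```

Because only the high band is attenuated, and only while its level exceeds the threshold, the rest of the signal passes through unchanged, which is the defining difference from static equalization.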
Using a dedicated de-essing plugin
In modern digital audio production, the most commonly used tool for reducing sibilance is a de-esser plugin. A dynamic equalizer can be used to achieve the same effect as a de-esser; however, plugin manufacturers have tailored these tools to operate efficiently within the mid-high to high frequencies.
A de-essing plugin will compress the desired signal according to the amplitude of the selected frequency as it passes over a preset threshold |
https://en.wikipedia.org/wiki/Stephan%20Luckhaus | Stephan Luckhaus is a German mathematician who is a professor at the University of Leipzig working in pure and applied analysis. His PhD was obtained in 1978 under the supervision of Willi Jäger at the University of Heidelberg. He was elected to the German Academy of Sciences Leopoldina in 2007.
See also
Richards equation |
https://en.wikipedia.org/wiki/Fernando%20Zalamea | Fernando Zalamea Traba (born 28 February 1959 in Bogotá) is a Colombian mathematician, essayist, critic, philosopher and popularizer, known for his contributions to the philosophy of mathematics and as the creator of the synthetic philosophy of mathematics. He is the author of around twenty books and is one of the world's leading experts on the mathematical and philosophical work of Alexander Grothendieck, as well as on the logical work of Charles S. Peirce.
Currently, he is a full professor in the Department of Mathematics of the National University of Colombia, where he has established a mathematical school, primarily through his ongoing seminar on the epistemology, history and philosophy of mathematics, which he has conducted for eleven years at the university. He is also known for his creative, critical, and constructive teaching of mathematics. Zalamea has supervised approximately 50 thesis projects at the undergraduate, master's and doctoral levels in various fields, including mathematics, philosophy, logic, category theory, semiology, medicine, and culture. Since 2018, he has been an honorary member of the Colombian Academy of Exact, Physical and Natural Sciences. In 2016, he was recognized as one of the 100 most outstanding contemporary interdisciplinary global minds by "100 Global Minds, the most daring cross-disciplinary thinkers in the world", being the only Latin American included in this recognition. |
https://en.wikipedia.org/wiki/Pasquale%20del%20Pezzo | Pasquale del Pezzo, Duke of Caianello and Marquis of Campodisola (2 May 1859 – 20 June 1936), was an Italian mathematician.
He was born in Berlin (where his father was a representative of the Neapolitan king) on 2 May 1859. He died in Naples on 20 June 1936. His wife was the Swedish writer Anne Charlotte Leffler, sister of the great mathematician Gösta Mittag-Leffler (1846–1927).
At the University of Naples, he received first a law degree in 1880 and then a mathematics degree in 1882. He became a pre-eminent professor at that university, teaching projective geometry, and remained there as rector, faculty president, etc.
He was mayor of Naples from 1914 to 1917. Starting in 1919 he became a Senator of the Kingdom of Italy until his death.
He is remembered particularly for first describing what became known as a del Pezzo surface. |
https://en.wikipedia.org/wiki/Discrete%20Morse%20theory | Discrete Morse theory is a combinatorial adaptation of Morse theory developed by Robin Forman. The theory has various practical applications in diverse fields of applied mathematics and computer science, such as configuration spaces, homology computation, denoising, mesh compression, and topological data analysis.
Notation regarding CW complexes
Let $X$ be a CW complex and denote by $\mathcal{X}$ its set of cells. Define the incidence function $\kappa : \mathcal{X} \times \mathcal{X} \to \mathbb{Z}$ in the following way: given two cells $\sigma$ and $\tau$ in $\mathcal{X}$, let $\kappa(\sigma, \tau)$ be the degree of the attaching map from the boundary of $\sigma$ to $\tau$. The boundary operator $\partial$ is the endomorphism of the free abelian group generated by $\mathcal{X}$ defined by

$\partial \sigma = \sum_{\tau \in \mathcal{X}} \kappa(\sigma, \tau)\, \tau.$

It is a defining property of boundary operators that $\partial \circ \partial = 0$. In more axiomatic definitions one can find the requirement that

$\forall \sigma, \tau' \in \mathcal{X} : \quad \sum_{\tau \in \mathcal{X}} \kappa(\sigma, \tau)\, \kappa(\tau, \tau') = 0,$

which is a consequence of the above definition of the boundary operator and the requirement that $\partial \circ \partial = 0$.
Discrete Morse functions
A real-valued function $f\colon \mathcal{X} \to \mathbb{R}$ is a discrete Morse function if it satisfies the following two properties:
For any cell $\sigma$, the number of cells $\tau$ in the boundary of $\sigma$ which satisfy $f(\tau) \geq f(\sigma)$ is at most one.
For any cell $\sigma$, the number of cells $\tau$ containing $\sigma$ in their boundary which satisfy $f(\tau) \leq f(\sigma)$ is at most one.
It can be shown that the cardinalities in the two conditions cannot both be one simultaneously for a fixed cell $\sigma$, provided that $X$ is a regular CW complex. In this case, each cell $\sigma$ can be paired with at most one exceptional cell $\tau$: either a boundary cell with larger $f$ value, or a co-boundary cell with smaller $f$ value. The cells which have no pairs, i.e., whose function values are strictly higher than their boundary cells and strictly lower than their co-boundary cells, are called critical cells. Thus, a discrete Morse function partitions the CW complex into three distinct cell collections, $\mathcal{X} = \mathcal{A} \sqcup \mathcal{K} \sqcup \mathcal{Q}$, where:
$\mathcal{A}$ denotes the critical cells which are unpaired,
$\mathcal{K}$ denotes cells which are paired with boundary cells, and
$\mathcal{Q}$ denotes cells which are paired with co-boundary cells.
By construction, there is a bijection of sets between $\mathcal{K}$ and $\mathcal{Q}$. |
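The two conditions above can be checked mechanically. Below is a minimal Python sketch (an illustration written for this entry, not part of the theory's literature): it encodes a filled triangle as a small regular CW complex, verifies Forman's two conditions for a given f, and lists the critical cells. The data-structure conventions and all names are invented for the example.

```python
# Illustrative check of the discrete Morse conditions on a filled
# triangle: three vertices, three edges, one 2-cell.
# boundary[c] lists the cells appearing in the boundary of c.
boundary = {
    "a": [], "b": [], "c": [],
    "ab": ["a", "b"], "bc": ["b", "c"], "ac": ["a", "c"],
    "abc": ["ab", "bc", "ac"],
}

f = {"a": 0, "b": 1, "c": 2, "ab": 1, "bc": 4, "ac": 5, "abc": 5}

def is_discrete_morse(boundary, f):
    for cell in boundary:
        # Condition 1: at most one boundary cell with f >= f(cell).
        up = [t for t in boundary[cell] if f[t] >= f[cell]]
        # Condition 2: at most one co-boundary cell with f <= f(cell).
        down = [s for s in boundary if cell in boundary[s] and f[s] <= f[cell]]
        if len(up) > 1 or len(down) > 1:
            return False
    return True

def critical_cells(boundary, f):
    """Cells strictly above their boundary and below their co-boundary."""
    crit = []
    for cell in boundary:
        paired_down = any(f[t] >= f[cell] for t in boundary[cell])
        paired_up = any(f[s] <= f[cell]
                        for s in boundary if cell in boundary[s])
        if not paired_down and not paired_up:
            crit.append(cell)
    return crit

print(is_discrete_morse(boundary, f))  # True for this choice of f
print(critical_cells(boundary, f))     # ['a', 'c', 'bc']
```

Note that the critical cells found (two vertices, one edge, no 2-cell) are consistent with the Euler characteristic of a disk, 2 − 1 + 0 = 1.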
https://en.wikipedia.org/wiki/IPadOS%2017 | iPadOS 17 is the fifth and current major release of the iPadOS operating system developed by Apple for its iPad line of tablet computers. The successor to iPadOS 16, it was announced at the company's Worldwide Developers Conference (WWDC) on June 5, 2023 and was released on September 18, 2023 along with iOS 17.
iPadOS 17 drops support for the first-generation iPad Pro and the fifth-generation iPad, making it the first iPadOS release to be exclusive to iPads with Apple Pencil support, as well as the first version of iPadOS to drop support for an iPad Pro.
The first public beta was released on July 12, 2023 and the final version was released on September 18, 2023.
Features
Lock screen
The lock screen has been redesigned to match the appearance of iOS 16 and beyond.
PDF document handling
iPadOS can now identify PDF form fields for quicker text input.
Siri
Users can now simply say "Siri" instead of "Hey Siri" to activate Siri by voice.
Health App
The Apple Health app is now available on iPads as well as on iPhones.
Notes App
The Notes app now supports real-time collaboration between users in PDF documents.
Automatic verification codes
Adds support for one-time verification codes in the Mail app.
Adds feature to automatically delete verification codes.
Compatibility (supported devices)
iPadOS 17 requires an A10 chip or newer, which means it drops support for iPad models with A9 and A9X chips, officially marking the end of support for non-Apple Pencil compatible iPads. This also marks the third time Apple has dropped support for 64-bit devices.
Those using A10 or A10X SoC have limited support.
Those using A12, A12X, A12Z, or A13 SoC get additional features that are unavailable on older models.
Those using A14 or A15 SoC have almost full support.
Those using M1 or M2 SoC get full support.
iPad (6th generation)
iPad (7th generation)
iPad (8th generation)
iPad (9th generation)
iPad (10th generation)
iPad Air (3rd generation)
iPad Air |
https://en.wikipedia.org/wiki/Parametron | Parametron is a logic circuit element invented by Eiichi Goto in 1954. The parametron is essentially a resonant circuit with a nonlinear reactive element which oscillates at half the driving frequency. The oscillation can be made to represent a binary digit by the choice between two stationary phases π radians (180 degrees) apart.
Parametrons were used in early Japanese computers from 1954 through the early 1960s. A prototype parametron-based computer, the PC-1, was built at the University of Tokyo in 1958. Parametrons were adopted because they were reliable and inexpensive, but they were ultimately surpassed by the faster transistor.
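The phase-encoding principle is easy to illustrate numerically. The sketch below is my own illustration with made-up parameters, not a circuit-accurate model of Goto's device: it integrates a damped, parametrically pumped Duffing-type oscillator, a common toy model for subharmonic oscillation, from two slightly different initial conditions. Both runs settle into steady oscillation at half the pump frequency, with phases π apart, which is exactly the property the parametron uses to store one bit.

```python
# Toy model of parametron phase bistability (illustrative only):
#   x'' + 2*g*x' + w0^2*(1 + h*sin(2*w0*t))*x + b*x^3 = 0
# Pumping the resonator at twice its natural frequency sustains an
# oscillation at w0 whose steady phase takes one of two values pi apart.
import numpy as np
from scipy.integrate import solve_ivp

w0, g, h, b = 2 * np.pi, 0.05, 0.12, 2.0  # pump gain h*w0/4 exceeds damping g

def rhs(t, y):
    x, v = y
    return [v, -2 * g * v - w0**2 * (1 + h * np.sin(2 * w0 * t)) * x - b * x**3]

t_end = 200.0
t_eval = np.linspace(180.0, t_end, 2000)  # sample only the steady state
run_a = solve_ivp(rhs, (0, t_end), [+1e-3, 0.0],
                  t_eval=t_eval, rtol=1e-8, atol=1e-10)
run_b = solve_ivp(rhs, (0, t_end), [-1e-3, 0.0],
                  t_eval=t_eval, rtol=1e-8, atol=1e-10)

# The equation is symmetric under x -> -x, so the two runs settle into
# mirror-image oscillations: the same w0 signal shifted in phase by pi.
xa, xb = run_a.y[0], run_b.y[0]
corr = np.dot(xa, xb) / (np.linalg.norm(xa) * np.linalg.norm(xb))
print(f"steady-state amplitude: {np.abs(xa).max():.2f}")
print(f"correlation of the two runs: {corr:+.2f}  (-1 => phases pi apart)")
```

The cubic term stands in for the nonlinear reactance: it saturates the parametric growth so the oscillation settles at a finite amplitude in one of the two phase states.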
See also
Quantum flux parametron
Eiichi Goto
MUSASINO-1
Magnetic amplifier
Magnetic logic
Parametric oscillator |
https://en.wikipedia.org/wiki/Anion-conducting%20channelrhodopsin | Anion-conducting channelrhodopsins are light-gated ion channels that open in response to light and let negatively charged ions (such as chloride) enter a cell. All channelrhodopsins use retinal as light-sensitive pigment, but they differ in their ion selectivity. Anion-conducting channelrhodopsins are used as tools to manipulate brain activity in mice, fruit flies and other model organisms (Optogenetics). Neurons expressing anion-conducting channelrhodopsins are silenced when illuminated with light, an effect that has been used to investigate information processing in the brain. For example, suppressing dendritic calcium spikes in specific neurons with light reduced the ability of mice to perceive a light touch to a whisker. Studying how the behavior of an animal changes when specific neurons are silenced allows scientists to determine the role of these neurons in the complex circuits controlling behavior.
The first anion-conducting channelrhodopsins were engineered from the cation-conducting light-gated channel Channelrhodopsin-2 by removing negatively charged amino acids from the channel pore (Fig. 1). As the main anion of extracellular fluid is chloride (Cl−), anion-conducting channelrhodopsins are also known as “chloride-conducting channelrhodopsins” (ChloCs). Naturally occurring anion-conducting channelrhodopsins (ACRs) were subsequently identified in cryptophyte algae. The crystal structure of the natural GtACR1 has recently been solved, paving the way for further protein engineering.
Variants
Applications
Anion-conducting channelrhodopsins (ACRs) have been used as optogenetic tools to inhibit neuronal activation. When expressed in nerve cells, ACRs act as light-gated chloride channels. Their effect on the activity of the neuron is comparable to GABAA receptors, ligand-gated chloride channels found in inhibitory synapses: As the chloride concentration in mature neurons is very low, illumination results in an inward flux of negatively charged ions, clampin |
https://en.wikipedia.org/wiki/History%20of%20network%20traffic%20models | Design of robust and reliable networks and network services relies on an understanding of the traffic characteristics of the network. Throughout history, different models of network traffic have been developed and used for evaluating existing and proposed networks and services.
Demands on computer networks are not entirely predictable. Performance modeling is necessary for deciding the quality of service (QoS) level. Performance models, in turn, require accurate traffic models that can capture the statistical characteristics of the actual traffic on the network. Many traffic models have been developed based on traffic measurement data. If the underlying traffic models do not efficiently capture the characteristics of the actual traffic, the result may be under-estimation or over-estimation of the performance of the network, impairing its design. Traffic models are hence a core component of any performance evaluation of networks, and they need to be very accurate.
“Teletraffic theory is the application of mathematics to the measurement, modeling, and control of traffic in telecommunications networks. The aim of traffic modeling is to find stochastic processes to represent the behavior of traffic. Working at the Copenhagen Telephone Company in the 1910s, A. K. Erlang famously characterized telephone traffic at the call level by certain probability distributions for arrivals of new calls and their holding times. Erlang applied the traffic models to estimate the telephone switch capacity needed to achieve a given call blocking probability. The Erlang blocking formulas had tremendous practical interest for public carriers because telephone facilities (switching and transmission) involved considerable investments. Over several decades, Erlang’s work stimulated the use of queuing theory, and applied probability in general, to engineer the public switched telephone network. Teletraffic theory for packet networks has seen considerable p |
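For concreteness, Erlang's blocking result referred to in the passage above is still the textbook starting point for sizing telephone facilities. Below is a minimal Python sketch (my own illustration; the traffic figures are invented) that evaluates the Erlang B formula with the standard numerically stable recurrence and uses it to dimension a trunk group for a target blocking probability.

```python
# Erlang B: blocking probability for m servers offered A erlangs of
# traffic, computed with the stable recurrence
#   B(A, 0) = 1,   B(A, m) = A*B(A, m-1) / (m + A*B(A, m-1)).

def erlang_b(traffic_erlangs: float, servers: int) -> float:
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic_erlangs * b / (m + traffic_erlangs * b)
    return b

def circuits_needed(traffic_erlangs: float, target_blocking: float) -> int:
    """Smallest trunk-group size meeting the blocking target."""
    m = 1
    while erlang_b(traffic_erlangs, m) > target_blocking:
        m += 1
    return m

# Hypothetical example: 50 erlangs of offered traffic, 1% blocking goal.
print(erlang_b(50.0, 60))           # blocking probability with 60 circuits
print(circuits_needed(50.0, 0.01))  # 64 circuits suffice for B <= 1%
```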
https://en.wikipedia.org/wiki/Echites%20panduratus | Echites panduratus (common name: loroco) is a climbing vine with edible flowers, widespread in El Salvador, Guatemala, and other countries in Central America as well as parts of Mexico.
The name "loroco" is used throughout Mesoamerica to refer to the species.
Description
Echites panduratus is an herbaceous vine with oblong-elliptical to broadly ovate leaves, 1.5–8 cm broad; inflorescences are generally somewhat shorter than the leaves, with 8–18 flowers, the pedicels 4–6 mm long; bracts ovate; calyx lobes ovate, acute or obtuse, 2–3 mm long; corolla white within, greenish outside.
Range
Echites panduratus ranges from northeastern Mexico to Costa Rica.
Uses
Echites panduratus is an important source of food in Guatemala and El Salvador. The plant's buds and flowers are used for cooking in a variety of ways, including in pupusas. |
https://en.wikipedia.org/wiki/Andrew%20Clarke%20%28cricketer%2C%20born%201961%29 | Andrew Russell Clarke (born 23 December 1961) is a former English cricketer. Clarke was a right-handed batsman who bowled leg break. He was born in Patcham, Sussex. A late starter to county cricket, not making his debut for Sussex until he was 26, Clarke played for Sussex for 3 seasons. He later played Minor counties cricket for Buckinghamshire and Norfolk, before retiring in 2003.
Sussex
Clarke initially played a Second XI Championship fixture for Sussex in 1981 against the Second XI of Hampshire, but made no further appearances for the Sussex Second XI for some while following that fixture. He did though play club cricket in Brighton, while working as an insurance underwriter. In 1987, however, he had trials at Sussex, where he again played for the Sussex Second XI. The trial turned out to be a success for Clarke, with Sussex signing him, following which he took a sabbatical from his job. His late start in county cricket, at the age of 26, was all the more remarkable considering that his style of bowling was by then virtually extinct in England.
His first-class debut followed in the 1988 County Championship against Somerset, with his List A debut also coming in 1988 against the same opposition in the Refuge Assurance League. Clarke played 3 seasons for Sussex, appearing regularly in the Sussex team for first-class matches before falling out of favour during the 1989 and 1990 seasons. He made a total of 26 first-class appearances in the 2 seasons in which he appeared in first-class cricket, with 21 coming in his debut season. A tailender, Clarke scored 406 runs at a batting average of 14.50. He scored a single half-century, making 68 in a rearguard action against Hampshire in 1988. Clarke's primary role was as a bowler, taking 53 first-class wickets at a respectable bowling average of 35.32. He twice took a five-wicket haul, with the best of these coming against Hampshire in 1988, the match in which his highest |
https://en.wikipedia.org/wiki/Lyotropic%20liquid%20crystal | Lyotropic liquid crystals result when fat-loving and water-loving chemical compounds known as amphiphiles dissolve into a solution that behaves both like a liquid and a solid crystal. This liquid crystalline mesophase includes everyday mixtures like soap and water.
To break the word down, "lyo" and "tropic" mean, respectively, "dissolve" and "change." Historically, the term was used to describe the common behavior of materials composed of amphiphilic molecules upon the addition of a solvent. Such molecules comprise a water-loving hydrophilic head-group (which may be ionic or non-ionic) attached to a water-hating, hydrophobic group.
The micro-phase segregation of two incompatible components on a nanometer scale results in different types of solvent-induced extended anisotropic arrangements, depending on the volume balance between the hydrophilic part and the hydrophobic part. These, in turn, generate the long-range order of the phases, with the solvent molecules filling the space around the compounds to provide fluidity to the system.
In contrast to thermotropic liquid crystals, lyotropic liquid crystals therefore have an additional degree of freedom: the concentration, which enables them to induce a variety of different phases. As the concentration of amphiphilic molecules is increased, several different types of lyotropic liquid crystal structure occur in solution. Each of these types has a different extent of molecular ordering within the solvent matrix, from spherical micelles to larger cylinders, aligned cylinders, and even bilayered and multiwalled aggregates.
Types of lyotropic systems
Examples of amphiphilic compounds are the salts of fatty acids, phospholipids. Many simple amphiphiles are used as detergents. A mixture of soap and water is an everyday example of a lyotropic liquid crystal.
Biological structures such as fibrous proteins showing relatively long and well-defined hydrophobic and hydrophilic "blocks" of amino acids can also show ly |
https://en.wikipedia.org/wiki/Primitive%20node | The primitive node (or primitive knot) is the organizer for gastrulation in most amniote embryos. In birds it is known as Hensen's node, and in amphibians it is known as the Spemann-Mangold organizer. It is induced by the Nieuwkoop center in amphibians, or by the posterior marginal zone in amniotes including birds.
Diversity
In birds the organizer is known as Hensen's node, named after its discoverer Victor Hensen.
In other amniotes it is known as the primitive node.
In amphibians, it is known as the Spemann-Mangold organizer, named after Hans Spemann and Hilde Mangold, who first identified the organizer in 1924.
In fish it is known as the embryonic shield.
All of these structures are currently considered homologous. This view is supported by the common expression of several genes, including goosecoid, Cnot, noggin, and nodal, and by their shared, strong axis-inducing properties upon transplantation. Cell fate studies have revealed that the overall temporal sequence in which groups of endomesodermal cells internalize along the frog blastopore and the amniote primitive streak is also surprisingly similar: the first cells, which involute around the amphibian blastopore lip in the organizer region and immigrate through Hensen's node, contribute to foregut endoderm and the prechordal plate. Cells involuting further laterally in the blastopore, or entering via Hensen's node and the anterior primitive streak, contribute to gut, notochord and somites. Gastrulation then continues along the ventroposterior blastopore lip and posterior streak region, from where cells contribute to ventral and posterior mesoderm. Adding to this, Brachyury and caudal homologues are expressed circumferentially around the blastopore lips in the frog, and along the primitive streak in chick and mouse. This suggests that, despite their different morphology, the amniote primitive streak and the amphibian blastopore are homologous structures that have evolved from one and the same precursor structure |
https://en.wikipedia.org/wiki/Dixon%27s%20factorization%20method | In number theory, Dixon's factorization method (also Dixon's random squares method or Dixon's algorithm) is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial.
The algorithm was designed by John D. Dixon, a mathematician at Carleton University, and was published in 1981.
Basic idea
Dixon's method is based on finding a congruence of squares modulo the integer N which is intended to factor. Fermat's factorization method finds such a congruence by selecting random or pseudo-random x values and hoping that the integer x2 mod N is a perfect square (in the integers):
For example, if $N = 84923$, then (by starting at 292, the first number greater than $\sqrt{N}$, and counting up) $505^2 \bmod 84923$ is 256, the square of 16. So $(505 - 16)(505 + 16) \equiv 0 \pmod{84923}$. Computing the greatest common divisor of $505 - 16$ and $N$ using Euclid's algorithm gives 163, which is a factor of $N$.
In practice, selecting random x values will take an impractically long time to find a congruence of squares, since there are only $\sqrt{N}$ squares less than $N$.
Dixon's method replaces the condition "is the square of an integer" with the much weaker one "has only small prime factors"; for example, there are 292 squares smaller than 84923; 662 numbers smaller than 84923 whose prime factors are only 2, 3, 5 or 7; and 4767 whose prime factors are all less than 30. (Such numbers are called B-smooth with respect to some bound B.)
If there are many numbers $x_1, \ldots, x_n$ whose squares $x_i^2 \bmod N$ can be factorized as $\prod_j p_j^{a_{ij}}$ over a fixed set of small primes $p_j$, linear algebra modulo 2 on the matrix $[a_{ij}]$ will give a subset of the $x_i$ whose squares combine to a product of small primes to an even power; that is, a subset of the $x_i$ whose squares multiply to the square of a (hopefully different) number mod $N$.
Method
Suppose the composite number N is being factored. Bound B is chosen, and the factor base is identif |
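The complete procedure is short enough to sketch. The following Python program is an illustration written for this entry, not reference code: it implements the method just outlined by drawing random x values, keeping the B-smooth squares, finding a linear dependency of the exponent vectors over GF(2), and taking a gcd. The bound B = 30 matches the example above; all helper names are invented.

```python
# Illustrative implementation of Dixon's random squares method.
import math
import random

def primes_up_to(bound):
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def smooth_exponents(value, base):
    """Exponent vector of value over the factor base, or None."""
    exps = []
    for p in base:
        e = 0
        while value % p == 0:
            value //= p
            e += 1
        exps.append(e)
    return exps if value == 1 else None

def dependency(parity_rows):
    """Indices of rows XOR-ing to zero over GF(2), or None."""
    basis = {}  # pivot bit -> (reduced row, combination bitmask)
    for i, row in enumerate(parity_rows):
        combo = 1 << i
        while row:
            pivot = row.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (row, combo)
                break
            brow, bcombo = basis[pivot]
            row ^= brow
            combo ^= bcombo
        else:  # row reduced to zero: a dependency was found
            return [j for j in range(i + 1) if (combo >> j) & 1]
    return None

def dixon(N, B=30, rng=random.Random(1)):
    base = primes_up_to(B)
    while True:
        relations = []  # pairs (x, exponents) with x^2 mod N being B-smooth
        while len(relations) <= len(base):
            x = rng.randrange(math.isqrt(N) + 1, N)
            exps = smooth_exponents(x * x % N, base)
            if exps is not None:
                relations.append((x, exps))
        rows = [sum((e & 1) << k for k, e in enumerate(exps))
                for _, exps in relations]
        subset = dependency(rows)
        if subset is None:
            continue
        a = math.prod(relations[j][0] for j in subset) % N
        total = [sum(relations[j][1][k] for j in subset)
                 for k in range(len(base))]
        b = math.prod(p ** (e // 2) for p, e in zip(base, total)) % N
        d = math.gcd(a - b, N)
        if 1 < d < N:
            return d  # nontrivial factor; else retry with new relations

print(dixon(84923))  # 163 or 521, since 84923 = 163 * 521
```

When the gcd comes out trivial (a ≡ ±b mod N), the sketch simply collects a fresh batch of relations; each dependency yields a nontrivial factor with probability at least one half.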
https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann%20law | The Stefan–Boltzmann law, also known as Stefan's law, describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature. It is named for Josef Stefan, who empirically derived the relationship, and Ludwig Boltzmann who derived the law theoretically.
For an ideal absorber/emitter or black body, the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance) is directly proportional to the fourth power of the black body's temperature, $T$: $M = \sigma T^4$.
The constant of proportionality, $\sigma$, is called the Stefan–Boltzmann constant. It has the value $\sigma = 5.670374419 \times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}$.
In the general case, the Stefan–Boltzmann law for radiant exitance takes the form $M = \varepsilon\,\sigma T^4$,
where $\varepsilon$ is the emissivity of the matter doing the emitting. The emissivity is generally between zero and one, although some exotic materials may have an emissivity greater than one. An emissivity of one corresponds to a black body.
Detailed explanation
The radiant exitance (previously called radiant emittance), $M$, has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre ($\mathrm{J\,s^{-1}\,m^{-2}}$), or equivalently, watts per square metre ($\mathrm{W\,m^{-2}}$). The SI unit for absolute temperature, $T$, is the kelvin (K).
To find the total power $P$ radiated from an object, multiply the radiant exitance by the object's surface area $A$: $P = A \cdot M$.
Matter that does not absorb all incident radiation emits less total energy than a black body. Emissions are reduced by a factor $\varepsilon$, where the emissivity, $\varepsilon$, is a material property which, for most matter, satisfies $0 \leq \varepsilon \leq 1$. Emissivity can in general depend on wavelength, direction, and polarization. However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity, which reflects emissions as totaled over all wavelengths, directions, and polarizations.
The form of the Stefan–Boltzmann law that includes emissivity is $M = \varepsilon\,\sigma T^4$. |
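As a worked example of the formulas above, the sketch below (my own illustration, with made-up object parameters) evaluates the radiant exitance and total radiated power for a small sphere, first as a black body and then with an assumed emissivity.

```python
# Stefan-Boltzmann law: M = epsilon * sigma * T^4, total power P = M * A.
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(T_kelvin: float, emissivity: float = 1.0) -> float:
    """Power emitted per unit area, W/m^2."""
    return emissivity * SIGMA * T_kelvin ** 4

def radiated_power(T_kelvin: float, area_m2: float,
                   emissivity: float = 1.0) -> float:
    """Total radiated power, W: exitance times surface area."""
    return radiant_exitance(T_kelvin, emissivity) * area_m2

# Hypothetical example: a sphere of radius 5 cm at 500 K.
area = 4 * math.pi * 0.05 ** 2           # sphere surface area, m^2
print(radiated_power(500.0, area))        # black body: about 111 W
print(radiated_power(500.0, area, 0.6))   # grey body with emissivity 0.6
```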
https://en.wikipedia.org/wiki/English%20Electric%20System%204 | The English Electric (later ICL) System 4 was a mainframe computer announced in 1965. It was derived from the RCA Spectra 70 range, itself a variant of the IBM System 360 architecture.
The models in the range included the System 4-10 (cancelled), 4-30 (1967), 4-50 (1967, practically the same as the RCA 70/45), 4-70 (1968, designed by English Electric) and 4-75. ICL documentation also mentions a model 4-40. This was a slugged version of the 4-50, introduced when the 4-30 (intended to be the volume seller) was found to be underpowered and had to be withdrawn. The 4-10 was introduced as a satellite computer, but demand was very low, so it was withdrawn. Only the 4-50 and 4-70, and their successors, the 4-52 and 4-72, sold in any numbers. A slugged 4-72 (the 4-62) was introduced for sale in Eastern Europe.
The System 4-50 and 4-70 were intended for real-time applications, as they had four processor states, each with its own set of general-purpose registers (GPRs). Although some states did not have all 16 GPRs, the design nevertheless avoided having to save registers when switching between processor states. At the lowest level (P1) was the user state. The instructions available in this state were the non-privileged instructions of the IBM System 360. Intermediate levels dealt with various hardware interrupts. State P2 was the Interrupt Response State, which performed tasks determined by the Interrupt Control State P3 (the next-highest processor state). The highest state, P4, was the emergency state, initiated in the event of a power failure or a machine check. In the case of power failure, the processor saved the volatile registers before shutting itself down in an orderly fashion. This task was completed within one millisecond between the onset of the power failure and the removal of power from the machine. For a machine check, an indication of the failure was given to the operator.
In processor states P1 and P2, 16 GPRs were available; in State P3, 6 GPRs were availab |
https://en.wikipedia.org/wiki/Operation%20Wolf | Operation Wolf is a light gun shooter arcade game developed by Taito and released in 1987. It was ported to many home systems.
The game was critically and commercially successful, becoming one of the highest-grossing arcade games of 1988 and winning the Golden Joystick Award for Game of the Year. Operation Wolf popularized military-themed first-person light gun rail shooters and inspired numerous clones, imitators, and others in the genre over the next decade. It spawned four sequels: Operation Thunderbolt (1988), Operation Wolf 3 (1994), Operation Tiger (1998), and Operation Wolf Returns: First Mission (2023).
Gameplay
Assuming the role of Special Forces Operative Roy Adams, the player attempts to rescue five hostages who are being held captive in enemy territory. The game is viewed from a first-person perspective, and is on rails, with the screen scrolling horizontally through the landscape. The game has six stages to advance the story. For example, after the jungle stage is completed, Adams interrogates an enemy soldier and learns the location of the concentration camp and hostages. Each stage has unique objectives and effects on gameplay after completion, all based on rescuing hostages. Game over screens vary depending on situations, such as the player's death or failure to rescue all hostages. Continuing the game restarts the stage. The Nintendo Entertainment System version has multiple endings depending on the number of rescued hostages.
The arcade cabinet has an optical controller resembling an Uzi submachine gun which the player can swivel and elevate, and which vibrates to simulate recoil of gunfire. Pulling the trigger allows fully automatic fire, and pressing the button near the muzzle launches a grenade with a wide blast radius against multiple targets.
To complete each stage, the player must shoot a required number of soldiers and vehicles (trucks, boats, helicopters, armored transports), as indicated by an on-screen counter. The limited ammunition and grenades c |
https://en.wikipedia.org/wiki/Centre%20for%20Quantum%20Computation | The Centre for Quantum Computation (CQC) is an alliance of quantum information research groups at the University of Oxford. It was founded by Artur Ekert in 1998.
Until recently, the CQC also included research groups at the University of Cambridge, but now the Cambridge groups operate as an independent entity called the Cambridge Centre for Quantum Information and Foundations (CQIF).
Research
The CQC conducts theoretical and experimental research into quantum computing, quantum cryptography and other forms of quantum information processing, into the implications of the quantum theory of information for physics itself, and into foundational and conceptual questions in quantum theory and quantum information theory.
Groups
Initially the CQC was based at the Clarendon Laboratory, but it has now grown to span several departments at the University of Oxford:
Physics
Atom-photon physics, group led by Axel Kuhn.
Ion trapping, group led by Andrew Steane and David Lucas.
Nuclear magnetic resonance, group led by Jonathan A. Jones.
Quantum spin dynamics, group led by Arzhang Ardavan and John Morton (group spans physics and materials).
Quantum theory, group led by Dieter Jaksch.
Ultracold quantum matter, group led by Christopher Foot.
Ultrafast quantum optics, group led by Ian Walmsley.
Materials
Photonic nanomaterials, group led by Jason Smith.
Quantum and nanotechnology theory, group led by Simon Benjamin.
Quantum spin dynamics, group led by John Morton and Arzhang Ardavan (group spans physics and materials).
Computer Science
Quantum Group, led by Samson Abramsky and Bob Coecke.
Mathematics
Mathematical physics, group led by Artur Ekert.
Origins
The centre has its origins in the early 1980s when the computer industry began to worry about the limits of computing. In 1981, Oxford physicist David Deutsch attended a party in Texas given by the famous American physicist John Wheeler who had invited a number of scientists interested in the foundations of computin |
https://en.wikipedia.org/wiki/Roger%20Slack | Charles Roger Slack (22 April 1937 – 24 October 2016) was a British-born plant biologist and biochemist who lived and worked in Australia (1962–1970) and New Zealand (1970–2000). In 1966, jointly with Marshall Hatch, he discovered C4 photosynthesis (also known as the Hatch Slack Pathway).
Biography
Slack was born on 22 April 1937 in Ashton-under-Lyne, Lancashire, England; the first and only child of Albert and Eva Slack. He studied biochemistry at the University of Nottingham, where he graduated with a Bachelor of Science (Honours) in 1958, and a PhD in 1962. He married Pam Shaw in March 1963, and had two children.
From 1962, Slack worked as a biochemist at the David North Plant Research Centre in Brisbane, Queensland, Australia (funded by the Colonial Sugar Refining Co. Ltd). In 1970, he joined the Department of Scientific and Industrial Research in New Zealand. From 1989 until his retirement in 2000, Slack was a senior scientist at the newly formed Crown Research Institute for Crop & Food Research in Palmerston North.
Slack died in Palmerston North in 2016.
Roger Slack Award
In 2007 the New Zealand Society of Plant Biologists renamed their annual award after Slack. The award is made to society members to recognise an outstanding contribution to the study of plant biology. It was renamed in recognition of his outstanding contribution as a plant biologist and biochemist in New Zealand, his role in the discovery of C4 photosynthesis (also known as the Hatch Slack Pathway), and his contribution as an early member of the New Zealand Society of Plant Biologists.
Honours
1970: Peter Goldacre Award from the Australian Society of Plant Scientists (previously called the Australian Society of Plant Physiologists).
1980: Charles F Kettering Award from the American Society of Plant Physiologists, shared with Hugo Kortschak and Marshall (Hal) Davidson Hatch.
1981: Rank Prize for Nutrition, shared with Hugo Kortschak and Marshall (Hal) Davidson Hatch.
1983: Elected |
https://en.wikipedia.org/wiki/%C4%8Cech%20complex | In algebraic topology and topological data analysis, the Čech complex is an abstract simplicial complex constructed from a point cloud in any metric space which is meant to capture topological information about the point cloud or the distribution it is drawn from. Given a finite point cloud $X$ and an $\varepsilon > 0$, we construct the Čech complex $\check{C}_\varepsilon(X)$ as follows: take the elements of $X$ as the vertex set of $\check{C}_\varepsilon(X)$. Then, for each $\sigma \subseteq X$, let $\sigma \in \check{C}_\varepsilon(X)$ if the set of $\varepsilon$-balls centered at points of $\sigma$ has a nonempty intersection. In other words, the Čech complex is the nerve of the set of $\varepsilon$-balls centered at points of $X$. By the nerve lemma, the Čech complex is homotopy equivalent to the union of the balls, also known as the Offset Filtration.
Relation to Vietoris–Rips complex
The Čech complex is a subcomplex of the Vietoris–Rips complex. While the Čech complex is more computationally expensive than the Vietoris–Rips complex, since one must check for higher-order intersections of the balls in the complex, the nerve theorem guarantees that the Čech complex is homotopy equivalent to the union of the balls in the complex; the Vietoris–Rips complex may not be.
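The difference can be made concrete with three points at the corners of a unit equilateral triangle: for a radius just under the circumradius 1/√3 ≈ 0.577, the three ε-balls intersect pairwise but share no common point, so the Vietoris–Rips complex contains the filled triangle while the Čech complex does not. The Python sketch below is my own illustration; the brute-force minimum enclosing disk is exact here because a smallest enclosing disk in the plane is always determined by two or three of the points.

```python
# Cech vs Vietoris-Rips on three points, using balls of radius eps:
#   Rips:  include a simplex iff all pairwise distances are <= 2*eps.
#   Cech:  include a simplex iff its smallest enclosing ball has
#          radius <= eps (the eps-balls then share a common point).
import itertools
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def enclosing_radius(pts):
    """Smallest enclosing disk radius for up to 3 planar points."""
    if len(pts) == 1:
        return 0.0
    # Candidate 1: disk whose diameter is the farthest pair.
    a, b = max(itertools.combinations(pts, 2), key=lambda s: dist(*s))
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    r = dist(a, b) / 2
    if all(dist(center, p) <= r + 1e-12 for p in pts):
        return r
    # Candidate 2: circumradius of the triangle, R = abc / (4 * area).
    a, b, c = (dist(pts[0], pts[1]), dist(pts[1], pts[2]),
               dist(pts[0], pts[2]))
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4 * area)

points = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]  # side length 1
eps = 0.55  # between 1/2 (pairwise overlap) and 1/sqrt(3) (common point)

rips_triangle = all(dist(p, q) <= 2 * eps
                    for p, q in itertools.combinations(points, 2))
cech_triangle = enclosing_radius(points) <= eps
print(f"Rips contains the 2-simplex: {rips_triangle}")  # True
print(f"Cech contains the 2-simplex: {cech_triangle}")  # False
```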
See also
Vietoris–Rips complex
Topological data analysis
Čech cohomology
Computational geometry
Abstract simplicial complex
Simplicial complex
Simplicial homology |
https://en.wikipedia.org/wiki/WinFiles | WinFiles, formerly Windows95.com, was an Internet download directory website. It was founded by Steve Jenkins in 1994.
CNET buyout
On February 24, 1999, CNET agreed to pay WinFiles owner Jenesys LLC US$5.75 million, with an additional $5.75 million payment 18 months after the closing of the deal, for a total of $11.5 million. |
https://en.wikipedia.org/wiki/IRE%20%28unit%29 | The IRE unit is used in the measurement of composite video signals. Its name is derived from the initials of the Institute of Radio Engineers.
A value of 100 IRE is defined to be +714 mV in an analog NTSC video signal. A value of 0 IRE corresponds to the voltage value of 0 mV, the signal value during the blanking period. The sync pulse is normally 40 IRE below this 0 IRE value, so the total range covered from peak to trough of an all white signal would be 140 IRE.
Video signals use the "IRE" unit instead of DC voltages to describe levels and amplitudes. Based on a standard 1 Vpp NTSC composite-video signal that swings from -286 mV (sync tip) to +714 mV (peak video), a 140 IRE peak-to-peak convention is established. Thus, one NTSC IRE unit is 7.143 mV (1/140 V), where -40 IRE is equivalent to -285.7 mV, and +100 IRE is equivalent to +714.3 mV. 0 IRE is equivalent to 0 V. The black level is equivalent to 53.57 mV (7.5 IRE).
The PAL video signal is slightly different in that it swings from -300 mV to +700 mV, instead. Thus, one PAL IRE unit is 7 mV, where -43 IRE is equivalent to -300 mV at the sync tip, and +100 IRE is equivalent to +700 mV at the peak video level. Black level is the same as the blanking level 0 mV (0 IRE).
The reason the IRE is a relative measurement (a percentage) is that a video signal may have any overall amplitude. This unit is used in ITU-R BT.470, which defines PAL, NTSC and SECAM: |
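The conversions above reduce to a single scale factor once a standard's IRE span and peak-to-peak voltage are fixed. A small Python sketch (my own illustration of the figures quoted in the text):

```python
# IRE <-> millivolt conversion for composite video, using the values
# from the text: NTSC spans 140 IRE over 1000 mV (1 IRE = 1000/140 mV,
# about 7.143 mV); PAL uses 1 IRE = 7 mV with +100 IRE at +700 mV.
MV_PER_IRE = {"NTSC": 1000.0 / 140.0, "PAL": 7.0}

def ire_to_mv(ire: float, standard: str = "NTSC") -> float:
    return ire * MV_PER_IRE[standard]

print(ire_to_mv(100, "NTSC"))  # +714.3 mV, peak video
print(ire_to_mv(-40, "NTSC"))  # -285.7 mV, sync tip
print(ire_to_mv(7.5, "NTSC"))  # ~53.6 mV, NTSC black (setup) level
print(ire_to_mv(100, "PAL"))   # +700.0 mV, PAL peak video
```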
https://en.wikipedia.org/wiki/Fire%20OS | Fire OS is a mobile operating system based on the Android Open Source Project (AOSP). It is developed by Amazon for their devices. Fire OS includes proprietary software, a customized user interface primarily centered on content consumption, and heavy ties to content available from Amazon's storefronts and services.
History
Amazon began referring to the Android derivative as Fire OS with its third iteration of Fire tablets. Unlike previous Fire models, whose operating system was described as "based on" Android, Fire OS 3.0 was described as "compatible with" Android.
Fire OS 5
Based on Android 5.1 "Lollipop", it added an updated interface. The home screen has a traditional application grid and pages for content types, as opposed to the previous carousel interface. It also introduced On Deck, a function that automatically moves content out of offline storage to maintain storage space for new content; the Word Runner speed reading tool; and screen color filters. Parental controls were enhanced with a new web browser for FreeTime mode featuring a curated selection of content appropriate for children, and an Activity Center for monitoring children's usage. It removed support for device encryption, which an Amazon spokesperson stated was an enterprise-oriented feature that was underused. In March 2016, after the removal was publicized and criticized in the wake of the FBI–Apple encryption dispute, Amazon announced it would restore the feature in a future patch.
Fire OS 6
Based on Android 7.1.2 "Nougat", its main changes and additions include:
Adoptable storage, allowing users to format and use their SD card as internal storage
Doze/App standby, aiming to improve battery life by forcing devices to sleep when not actively used, adding restrictions to apps that would normally continue to run background processes
MediaTek exploits (2019)
In early 2019, security exploits for six Fire Tablet models and one Fire TV model were discovered that could allow temporary root |
https://en.wikipedia.org/wiki/Penrose%20transform | In theoretical physics, the Penrose transform, introduced by , is a complex analogue of the Radon transform that relates massless fields on spacetime, or more precisely the space of solutions to massless field equations to sheaf cohomology groups on complex projective space. The projective space in question is the twistor space, a geometrical space naturally associated to the original spacetime, and the twistor transform is also geometrically natural in the sense of integral geometry. The Penrose transform is a major component of classical twistor theory.
Overview
Abstractly, the Penrose transform operates on a double fibration of a space $Y$ over two spaces $X$ and $Z$,
$$Z \xleftarrow{\;\eta\;} Y \xrightarrow{\;\tau\;} X.$$
In the classical Penrose transform, $Y$ is the spin bundle, $X$ is a compactified and complexified form of Minkowski space (which as a complex manifold is $\operatorname{Gr}_2(\mathbb{C}^4)$) and $Z$ is the twistor space (which is $\mathbb{CP}^3$). More generally, examples come from double fibrations of the form
$$G/H_1 \longleftarrow G/(H_1 \cap H_2) \longrightarrow G/H_2,$$
where $G$ is a complex semisimple Lie group and $H_1$ and $H_2$ are parabolic subgroups.
The Penrose transform operates in two stages. First, one pulls back the sheaf cohomology groups $H^r(Z,\mathcal{F})$ to the sheaf cohomology $H^r(Y,\eta^{-1}\mathcal{F})$ on $Y$; in many cases where the Penrose transform is of interest, this pullback turns out to be an isomorphism. One then pushes the resulting cohomology classes down to $X$; that is, one investigates the direct image of a cohomology class by means of the Leray spectral sequence. The resulting direct image is then interpreted in terms of differential equations. In the case of the classical Penrose transform, the resulting differential equations are precisely the massless field equations for a given spin.
Example
The classical example is given as follows
The "twistor space" Z is complex projective 3-space CP3, which is also the Grassmannian Gr1(C4) of lines in 4-dimensional complex space.
X = Gr2(C4), the Grassmannian of 2-planes in 4-dimensional complex space. This is a compactification of complex Minkowski space.
Y is the flag |
https://en.wikipedia.org/wiki/Archaeopteris%20macilenta | Archaeopteris macilenta is distinguished from other species of the genus by leaves which are divided into narrow segments at their tips. Sporangia were borne on different parts of the branches with ordinary foliage leaves. Archaeopteris macilenta leaves and fertile shoots are attached to wood which when permineralized is called Callixylon newberryi. Archaeopteris is retained in the class Progymnospermopsida which includes plants with gymnospermous anatomy and pteridophytic reproduction.
History
Fossilized remains of Archaeopteris macilenta were initially discovered in 1958 in the continental beds of the upper Devonian in eastern New York and have since been located in other floodplain localities of the Catskill Delta. It is often referred to as one of the earliest true ferns, which bears significance: Archaeopteris made up 90% of forests during the late Devonian, which accelerated the increase of oxygen during the last 15 million years of the Devonian. These were also the first long-lived perennial plants.
Description
The trunk of Archaeopteris macilenta has been found to have a diameter of 1 m and an estimated height of 30 m, which may have contributed to its early success. Root systems rarely went deeper than 10 to 20 cm, but depths in excess of 1 m have been reported for this tree. Moreover, its roots exhibited perennial growth and the repeated production of lateral rootlets. The enhanced penetration of soils by its root system appears to have had a profound impact on pedogenesis (the development of soils) during the Late Devonian.
Despite being a fern with sporangia, Archaeopteris resembled modern conifers and has been found to grow similarly, with woody strength built in rings to support weight and height, protective bark that shields the xylem, and extra wood at the base of the branch to prevent breakage. Carluccio, Hueber, and Banks (1966) concluded, on the basis of internal structure, that the laminar appendages of the 'fronds' were helically |
https://en.wikipedia.org/wiki/Pocket%20%28service%29 | Pocket, previously known as Read It Later, is a social bookmarking service for storing, sharing and discovering web bookmarks. Released in 2007, the service was originally only for desktop and laptop computers and is now available for macOS, Windows, iOS, Android, Windows Phone, BlackBerry, Kobo eReaders, and web browsers.
History
Pocket was introduced in August 2007 as a Mozilla Firefox browser extension named Read It Later by Nathan (Nate) Weiner. Once his product was used by millions of people, he moved his office to Silicon Valley and four other people joined the Read It Later team. Weiner's intention was for the application to be like a TiVo directory for web content and to give users access to that content on any device.
Read It Later obtained venture capital investments of US$2.5 million in 2011 and $5.0 million in 2012. The 2011 funding came from Foundation Capital, Baseline Ventures, Google Ventures, Founder Collective and unnamed angel investors. The company rejected an acquisition offer by Evernote after showing concerns that Evernote intended to shut down the Read It Later service and amalgamate its functionality into Evernote's main service.
Initially the Read It Later app was available in a free version and a paid version that included additional features. After the rebranding to Pocket, all paid features were made available in a free and advertisement-free app. In May 2014, a paid subscription service called Pocket Premium was introduced, adding server-side storage of articles and more powerful search tools.
In June 2015, Pocket was included in Firefox, via a toolbar button and a link to a user's Pocket list in the bookmarks menu. The integration was controversial: users raised concerns about the direct integration of a proprietary service into an open-source application, and about the fact that it could not be completely disabled without editing advanced settings, unlike other third-party extensions. A Mozilla spokesperson stated that the feature was meant t |
https://en.wikipedia.org/wiki/Eukaryotic%20chromosome%20fine%20structure | Eukaryotic chromosome fine structure refers to the structure of sequences for eukaryotic chromosomes. Some fine sequences are included in more than one class, so the classification listed is not intended to be completely separate.
Chromosomal characteristics
Some sequences are required for a properly functioning chromosome:
Centromere: Used during cell division as the attachment point for the spindle fibers.
Telomere: Used to maintain chromosomal integrity by capping off the ends of the linear chromosomes. This region is a microsatellite, but its function is more specific than a simple tandem repeat.
Throughout the eukaryotic kingdom, the overall structure of chromosome ends is conserved and is characterized by the telomeric tract - a series of short G-rich repeats. This is succeeded by an extensive subtelomeric region consisting of various types and lengths of repeats - the telomere associated sequences (TAS). These regions are generally low in gene density, low in transcription, low in recombination, late replicating, are involved in protecting the end from degradation and end-to-end fusions and in completing replication. The subtelomeric repeats can rescue chromosome ends when telomerase fails, buffer subtelomerically located genes against transcriptional silencing and protect the genome from deleterious rearrangements due to ectopic recombination. They may also be involved in fillers for increasing chromosome size to some minimum threshold level necessary for chromosome stability; act as barriers against transcriptional silencing; provide a location for the adaptive amplification of genes; and be involved in secondary mechanism of telomere maintenance via recombination when telomerase activity is absent.
Structural sequences
Other sequences are involved in replication or in maintaining the physical structure of the chromosome during interphase.
Ori, or Origin: Origins of replication.
MAR: Matrix attachment regions, where the DNA attaches to the nuclear matrix.
Prote |
https://en.wikipedia.org/wiki/Uptane | Uptane is a Linux Foundation / Joint Development Foundation hosted software framework designed to ensure that valid, current software updates are installed in adversarial environments. It establishes a process of checks and balances on a vehicle's electronic control units (ECUs) that can ensure the authenticity of incoming software updates. Uptane is designed for "compromise-resilience", that is, to limit the impact of a compromised repository, an insider attack, a leaked signing key, or similar attacks. It can be incorporated into most existing software update technologies, but offers particular support for over-the-air programming or OTA programming strategies originating from The Update Framework.
History
Uptane was developed by a team of engineers at New York University Tandon School of Engineering in Brooklyn, NY, the University of Michigan Transportation Research Institute in Ann Arbor, MI, and the Southwest Research Institute in San Antonio, TX. It was developed as open source software under a grant from the U.S. Department of Homeland Security.
In 2018, the Uptane Alliance, a non-profit organization, was formed under the aegis of IEEE-ISTO to oversee the first formal release of a standard. The first standard volume, entitled IEEE-ISTO 6100.1.0.0 Uptane Standard for Design and Implementation, was released on July 31, 2019. Uptane was recognized in 2017 by Popular Science as one of that year’s top security innovations.
As of 2020, multiple implementations of Uptane are available, both through open source projects such as the Linux Foundation’s Automotive Grade Linux, and through third party commercial suppliers, such as Advanced Telematic Systems (ATS), now part of Here Technologies, and Airbiquity. There is also a reference implementation meant to aid adopters implementing Uptane. |
https://en.wikipedia.org/wiki/Nemesis%202%20%28MSX%29 | Nemesis 2 is a side-scrolling shoot 'em up video game released for the MSX computer in 1987 by Konami. The game is a sequel to Nemesis, the MSX version of Gradius, but is unrelated to the arcade game Gradius II (which used the Roman numeral 'II'). This version was ported to the X68000 computer under the name Nemesis '90 Kai, with some graphical and aural enhancements.
In a departure from other games in the series, instead of controlling the Vic Viper, the player pilots a ship called Metalion. Unlike other titles, this game has a heavier focus on story, which is told through cut-scenes. The gameplay is mostly unchanged from the rest of the series, though some power-ups temporarily give the ship enhancements. Also, as each boss is being defeated, flying the Metalion into its location gives access to a mini-level; successfully clearing a mini-level grants a new permanent upgrade.
Plot
The Director General of Space Science Agency Dr. Venom was exiled to Planet Sard for a failed coup d'état. In the year 6665, he escapes and invades Planet Nemesis and the seven planets it controls with the help of Bacterion. The Nemesis High Council sends James Burton, ex-pilot of the Vic Viper, to pilot Metalion and attack Dr. Venom and the Bacterion invaders. The game takes place during the year 6666.
Nemesis '90 Kai
This X68000 port is essentially an enhanced remake of Nemesis 2 with graphical quality on par with Gradius III.
It includes two new stages exclusive to this version of the game, and four new bosses (two of which replace the rematches fought in the MSX version.)
Some people still prefer the original for its charm and color scheme.
Ports
Aside from being remade as Nemesis '90 Kai, Nemesis 2 was also ported to mobile phones in 2006 and Sony PSP in 2007 as part of the Salamander Portable collection.
Gradius 2 was re-released for Wii's Virtual Console in 2009, for Project EGG in 2015, and for Wii U Virtual Console in 2016 in Japan. |
https://en.wikipedia.org/wiki/Class%20%28biology%29 | In biological classification, class () is a taxonomic rank, as well as a taxonomic unit, a taxon, in that rank. It is a group of related taxonomic orders. Other well-known ranks in descending order of size are life, domain, kingdom, phylum, order, family, genus, and species, with class ranking between phylum and order.
History
The class as a distinct rank of biological classification having its own distinctive name (and not just called a top-level genus (genus summum)) was first introduced by the French botanist Joseph Pitton de Tournefort in his classification of plants that appeared in his Eléments de botanique, 1694.
Insofar as a general definition of a class is available, it has historically been conceived as embracing taxa that combine a distinct grade of organization—i.e. a 'level of complexity', measured in terms of how differentiated their organ systems are into distinct regions or sub-organs—with a distinct type of construction, which is to say a particular layout of organ systems. This said, the composition of each class is ultimately determined by the subjective judgment of taxonomists.
In the first edition of his Systema Naturae (1735), Carl Linnaeus divided all three of his kingdoms of Nature (minerals, plants, and animals) into classes. Only in the animal kingdom are Linnaeus's classes similar to the classes used today; his classes and orders of plants were never intended to represent natural groups, but rather to provide a convenient "artificial key" according to his Systema Sexuale, largely based on the arrangement of flowers. In botany, classes are now rarely discussed. Since the first publication of the APG system in 1998, which proposed a taxonomy of the flowering plants up to the level of orders, many sources have preferred to treat ranks higher than orders as informal clades. Where formal ranks have been assigned, the ranks have been reduced to a very much lower level, e.g. class Equisetopsida for the land plants, with the major divisions wi |
https://en.wikipedia.org/wiki/Ben%20Green%20%28mathematician%29 | Ben Joseph Green FRS (born 27 February 1977) is a British mathematician, specialising in combinatorics and number theory. He is the Waynflete Professor of Pure Mathematics at the University of Oxford.
Early life and education
Ben Green was born on 27 February 1977 in Bristol, England. He studied at local schools in Bristol, Bishop Road Primary School and Fairfield Grammar School, competing in the International Mathematical Olympiad in 1994 and 1995. He entered Trinity College, Cambridge in 1995 and completed his BA in mathematics in 1998, winning the Senior Wrangler title. He stayed on for Part III and earned his doctorate under the supervision of Timothy Gowers, with a thesis entitled Topics in arithmetic combinatorics (2003). During his PhD he spent a year as a visiting student at Princeton University. He was a research Fellow at Trinity College, Cambridge between 2001 and 2005, before becoming a Professor of Mathematics at the University of Bristol from January 2005 to September 2006 and then the first Herchel Smith Professor of Pure Mathematics at the University of Cambridge from September 2006 to August 2013. He became the Waynflete Professor of Pure Mathematics at the University of Oxford on 1 August 2013. He was also a Research Fellow of the Clay Mathematics Institute and held various positions at institutes such as Princeton University, University of British Columbia, and Massachusetts Institute of Technology.
Mathematics
The majority of Green's research is in the fields of analytic number theory and additive combinatorics, but he also has results in harmonic analysis and in group theory. His best known theorem, proved jointly with his frequent collaborator Terence Tao, states that there exist arbitrarily long arithmetic progressions in the prime numbers: this is now known as the Green–Tao theorem.
Amongst Green's early results in additive combinatorics are an improvement of a result of Jean Bourgain of the size of arithmetic progressions in sumsets, a |
https://en.wikipedia.org/wiki/Propensity%20score%20matching | In the statistical analysis of observational data, propensity score matching (PSM) is a statistical matching technique that attempts to estimate the effect of a treatment, policy, or other intervention by accounting for the covariates that predict receiving the treatment. PSM attempts to reduce the bias due to confounding variables that could be found in an estimate of the treatment effect obtained from simply comparing outcomes among units that received the treatment versus those that did not. Paul R. Rosenbaum and Donald Rubin introduced the technique in 1983.
The possibility of bias arises because a difference in the treatment outcome (such as the average treatment effect) between treated and untreated groups may be caused by a factor that predicts treatment rather than the treatment itself. In randomized experiments, the randomization enables unbiased estimation of treatment effects; for each covariate, randomization implies that treatment-groups will be balanced on average, by the law of large numbers. Unfortunately, for observational studies, the assignment of treatments to research subjects is typically not random. Matching attempts to reduce the treatment assignment bias, and mimic randomization, by creating a sample of units that received the treatment that is comparable on all observed covariates to a sample of units that did not receive the treatment.
The "propensity" describes how likely a unit is to have been treated, given its covariate values. The stronger the confounding of treatment and covariates, and hence the stronger the bias in the analysis of the naive treatment effect, the better the covariates predict whether a unit is treated or not. By having units with similar propensity scores in both treatment and control, such confounding is reduced.
For example, one may be interested to know the consequences of smoking. An observational study is required since it is unethical to randomly assign people to the treatment 'smoking.' The treatment effe |
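A minimal end-to-end sketch of the technique follows (my own illustration on synthetic data with invented numbers; logistic-regression propensity scores and one-nearest-neighbour matching are the textbook choices, not a prescription).

```python
# Propensity score matching on synthetic data: a confounder z drives
# both treatment assignment and the outcome, so the naive difference in
# means is biased; matching treated units to controls with similar
# propensity scores recovers the true effect (2.0 here) far more closely.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                           # observed confounder
treated = rng.random(n) < 1 / (1 + np.exp(-z))   # z predicts treatment
y = 2.0 * treated + 3.0 * z + rng.normal(size=n) # true effect = 2.0

naive = y[treated].mean() - y[~treated].mean()

# Step 1: estimate propensity scores P(treated | z).
ps = (LogisticRegression()
      .fit(z.reshape(-1, 1), treated)
      .predict_proba(z.reshape(-1, 1))[:, 1])

# Step 2: match each treated unit to the control with the closest
# propensity score (1-nearest-neighbour matching with replacement).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nearest = c_idx[np.abs(ps[c_idx][None, :]
                       - ps[t_idx][:, None]).argmin(axis=1)]

att = (y[t_idx] - y[nearest]).mean()  # average effect on the treated
print(f"naive difference: {naive:.2f} (biased upward by confounding)")
print(f"matched estimate: {att:.2f} (close to the true effect 2.0)")
```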
https://en.wikipedia.org/wiki/BioDigital | BioDigital is a New York-based biomedical visualization company, often referred to as a "Google Earth for the Human Body". BioDigital offers an interactive 3D software platform that enables individuals and businesses to explore and visualize health information. Its flagship product, the BioDigital Human, is a "searchable, customizable map of the human body".
The BioDigital Human Platform has over a million users, including doctors, medical students, and yoga instructors. New York University School of Medicine has used the software to teach medical students about anatomy.
The company also offers a set of APIs that allow developers to use the company's imaging technology.
History
The company started as a consulting business in 2002, founded on the premise that 3D technology could be leveraged to make medical topics more comprehensible, accessible, and compelling. In 2011, the company expanded their technology to reach the greater public, compiling all visualizations into a proprietary web-based virtual model of the human body, the BioDigital Human. The BioDigital Human was one of the early systems to leverage WebGL, which allows any supported browser to render 3D models in webpages without plugins.
In 2013, the company received a round of funding led by FirstMark Capital, with participation from New York University's NYU Innovation Fund and a group of angel investors.
Awards and recognition
The BioDigital Human was featured at TEDMED 2012.
In 2013, BioDigital was named the winner of SXSW's Classic Interactive Award.
The company also received an ON for Learning Award from Common Sense Media in March 2014.
BioDigital was honored as a Best Health Digital Resource by the Health Information Resource Center's Web Health Awards in June 2014.
In 2015, BioDigital Human received a Webby Award in the category of Best Health Website.
In 2015, BioDigital was a Silver Finalist for the Edison Awards in the field of Research & Education.
Partnerships
BioDigital has partnered with |
https://en.wikipedia.org/wiki/Notch%20tensile%20strength | The notch tensile strength (NTS) of a material is the value given by performing a standard tensile strength test on a notched specimen of the material. The ratio between the NTS and the tensile strength is called the notch strength ratio (NSR).
See also
Charpy impact test |
https://en.wikipedia.org/wiki/Time%20geography | Time geography or time-space geography is an evolving transdisciplinary perspective on spatial and temporal processes and events such as social interaction, ecological interaction, social and environmental change, and biographies of individuals. Time geography "is not a subject area per se", but rather an integrative ontological framework and visual language in which space and time are basic dimensions of analysis of dynamic processes. Time geography was originally developed by human geographers, but today it is applied in multiple fields related to transportation, regional planning, geography, anthropology, time-use research, ecology, environmental science, and public health. According to Swedish geographer Bo Lenntorp: "It is a basic approach, and every researcher can connect it to theoretical considerations in her or his own way."
Origins
The Swedish geographer Torsten Hägerstrand created time geography in the mid-1960s based on ideas he had developed during his earlier empirical research on human migration patterns in Sweden. He sought "some way of finding out the workings of large socio-environmental mechanisms" using "a physical approach involving the study of how events occur in a time-space framework". Hägerstrand was inspired in part by conceptual advances in spacetime physics and by the philosophy of physicalism.
Hägerstrand's earliest formulation of time geography informally described its key ontological features: "In time-space the individual describes a path" within a situational context; "life paths become captured within a net of constraints, some of which are imposed by physiological and physical necessities and some imposed by private and common decisions". "It would be impossible to offer a comprehensive taxonomy of constraints seen as time-space phenomena", Hägerstrand said, but he "tentatively described" three important classes of constraints:
capability constraints — limitations on the activity of individuals because of their biological struc |
https://en.wikipedia.org/wiki/Evolution%20of%20biological%20complexity | The evolution of biological complexity is one important outcome of the process of evolution. Evolution has produced some remarkably complex organisms – although the actual level of complexity is very hard to define or measure accurately in biology, with properties such as gene content, the number of cell types or morphology all proposed as possible metrics.
Many biologists used to believe that evolution was progressive (orthogenesis) and had a direction that led towards so-called "higher organisms", despite a lack of evidence for this viewpoint. This idea of "progression" introduced the terms "high animals" and "low animals" in evolution. Many now regard this as misleading, with natural selection having no intrinsic direction and that organisms selected for either increased or decreased complexity in response to local environmental conditions. Although there has been an increase in the maximum level of complexity over the history of life, there has always been a large majority of small and simple organisms and the most common level of complexity appears to have remained relatively constant.
Selection for simplicity and complexity
Usually organisms that have a higher rate of reproduction than their competitors have an evolutionary advantage. Consequently, organisms can evolve to become simpler and thus multiply faster and produce more offspring, as they require fewer resources to reproduce. Good examples are parasites such as Plasmodium – the parasite responsible for malaria – and mycoplasma; these organisms often dispense with traits that are made unnecessary through parasitism on a host.
A lineage can also dispense with complexity when a particular complex trait merely provides no selective advantage in a particular environment. Losing the trait need not confer a selective advantage; the trait may simply be lost through the accumulation of mutations if its loss does not confer an immediate selective disadvantage. For example, a parasitic organism may dispense |
https://en.wikipedia.org/wiki/Glycerol%20kinase%20deficiency | Glycerol kinase deficiency (GKD) is an X-linked recessive enzyme defect that is heterozygous in nature. Three clinically distinct forms of this deficiency have been proposed, namely infantile, juvenile, and adult. National Institutes of Health and its Office of Rare Diseases Research branch classifies GKD as a rare disease, known to affect fewer than 200,000 individuals in the United States. The responsible gene lies in a region containing genes in which deletions can cause Duchenne muscular dystrophy and adrenal hypoplasia congenita. Combinations of these three genetic defects including GKD are addressed medically as Complex GKD.
Signs and symptoms
Glycerol kinase deficiency causes the condition known as hyperglycerolemia, an accumulation of glycerol in the blood and urine. This excess of glycerol in bodily fluids can lead to many other, potentially dangerous symptoms. Common symptoms include vomiting and lethargy. These tend to be the only symptoms, if any, present in adult GKD, which has been found to present with fewer symptoms than infantile or juvenile GKD. When GKD is accompanied by Duchenne muscular dystrophy and adrenal hypoplasia congenita, also caused by mutations in the Xp21 region of the X chromosome, the symptoms can become much more severe. Symptoms visible at or shortly after birth include:
cryptorchidism
strabismus
seizures
Some other symptoms that become more noticeable with time would be:
metabolic acidosis
hypoglycemia
adrenal cortex insufficiency
learning disabilities
osteoporosis
myopathy
Many of the physically visible symptoms, such as cryptorchidism, strabismus, learning disabilities, and myopathy, can also have a psychological effect, because they set the affected person apart from peers without GKD. Cryptorchidism, the failure of one or both testes to descend into the scrotum, has been known to lead to sexual identity confusion among young boys because it is such a major physiological anomaly. Strabismus is |
https://en.wikipedia.org/wiki/Triptofordin%20C-2 | Triptofordin C-2 is an antiviral chemical compound isolated from Tripterygium wilfordii. |
https://en.wikipedia.org/wiki/Lorentz%20covariance | In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. It has also been described as "the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space".
Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. Lorentz covariance has two distinct, but closely related meanings:
A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. According to the representation theory of the Lorentz group, these quantities are built out of scalars, four-vectors, four-tensors, and spinors. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation).
An equation is said to be Lorentz covariant if it can be written in terms of Lorentz covariant quantities (confusingly, some use the term invariant here). The key property of such equations is that if they hold in one inertial frame, then they hold in any inertial frame; this follows from the result that if all the components of a tensor vanish in one frame, they vanish in every frame. This condition is a requirement according to the principle of relativity; i.e., all non-gravitational laws must make the same predictions for identical experiments taking place at the same spacetime event in two different inertial frames of reference.
On manifolds, the words covariant and contravariant refer to how objects transform under general coordinate transformations. Both covariant and contravariant four-vectors can be Lorentz covariant quantiti |
https://en.wikipedia.org/wiki/Cuneiform%20%28programming%20language%29 | Cuneiform is an open-source workflow language for large-scale scientific data analysis.
It is a statically typed functional programming language promoting parallel computing. It features a versatile foreign function interface allowing users to integrate software from many external programming languages. At the organizational level Cuneiform provides facilities like conditional branching and general recursion, making it Turing-complete. In this, Cuneiform attempts to close the gap between scientific workflow systems like Taverna, KNIME, or Galaxy and large-scale data analysis programming models like MapReduce or Pig Latin, while offering the generality of a functional programming language.
Cuneiform is implemented in distributed Erlang. If run in distributed mode it drives a POSIX-compliant distributed file system like Gluster or Ceph (or a FUSE integration of some other file system, e.g., HDFS). Alternatively, Cuneiform scripts can be executed on top of HTCondor or Hadoop.
Cuneiform is influenced by the work of Peter Kelly who proposes functional programming as a model for scientific workflow execution.
In this, Cuneiform is distinct from related workflow languages based on dataflow programming like Swift.
External software integration
External tools and libraries (e.g., R or Python libraries) are integrated via a foreign function interface. In this it resembles, e.g., KNIME which allows the use of external software through snippet nodes, or Taverna which offers BeanShell services for integrating Java software. By defining a task in a foreign language it is possible to use the API of an external tool or library. This way, tools can be integrated directly without the need of writing a wrapper or reimplementing the tool.
Currently supported foreign programming languages are:
Bash
Elixir
Erlang
Java
JavaScript
MATLAB
GNU Octave
Perl
Python
R
Racket
Foreign language support for AWK and gnuplot is a planned addition.
Type System
Cuneiform provides |
https://en.wikipedia.org/wiki/Tuberculocide | A tuberculocide is a substance or process that inactivates or destroys Mycobacterium tuberculosis, the bacterium that causes tuberculosis.
History
In 1955, Bergsmann studied dairin as a tuberculocide.
In 1976, Sachse studied peracetic acid as a tuberculocide.
In 1998, Wang and Ding studied diterpenoids from the roots of Euphorbia ebracteolata (one of the Euphorbia genus) as a tuberculocide.
In 2010, Zhang et al. reported that Euphorbia fischeriana had been used especially in Asia as a tuberculocide.
In 2011, Nde et al. studied three oxidative disinfectants as tuberculocides. |
https://en.wikipedia.org/wiki/Na%C5%9Fide%20G%C3%B6zde%20Durmu%C5%9F | Naside Gözde Durmuş (born 1985, Izmir) is a Turkish scientist and geneticist. She is currently Assistant Professor (Research) of Radiology at Stanford University. Her research focuses on nanotechnology and micro-technology applications to pressing global health problems such as cancer and antibiotic resistance. In 2015, MIT Technology Review listed her under the category of pioneers in the magazine's list of 35 Innovators Under 35.
Biography
Durmuş was born in 1985 in Izmir, Turkey. In 2003, she started her undergraduate studies at the Middle East Technical University, specializing in molecular biology and genetics. She later obtained a Fulbright scholarship and moved to the United States for graduate study, earning a Master's in Engineering from Boston University in 2009 and a Ph.D. in Biomedical Engineering from Brown University in May 2013.
In 2014, Durmuş took a position as a post-doctoral researcher at Stanford University, working with Ronald W. Davis at the Stanford University Genome Technology Center and the Stanford University School of Medicine; she is currently an Assistant Professor there. In 2015, she was recognized among the "Top 35 Innovators Under 35" (TR35), as a pioneer in biotechnology and medicine, by MIT Technology Review.
Career
Her work focuses on developing low-cost nanotechnology tools for the diagnosis and treatment of disease. One example is a fast method for measuring the physical features of cells by levitating them in a magnetic field, which makes it possible to determine in a short period of time how a microbe responds to a given drug and to distinguish cancerous cells from healthy ones.
https://en.wikipedia.org/wiki/OSSEC | OSSEC (Open Source HIDS SECurity) is a free, open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris and Windows. OSSEC has a centralized, cross-platform architecture allowing multiple systems to be easily monitored and managed. OSSEC has a log analysis engine that is able to correlate and analyze logs from multiple devices and formats.
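As a rough illustration of what log-based correlation means in practice, here is a toy Python sketch (not OSSEC code; the log lines and alert threshold are invented) that counts failed logins per source address across several hosts:

```python
# Toy illustration of cross-host log correlation: count failed-login
# events per source IP and alert once a threshold is crossed.
import re
from collections import Counter

LOGS = [
    "host1 sshd: Failed password for root from 10.0.0.5",
    "host2 sshd: Failed password for admin from 10.0.0.5",
    "host1 sshd: Failed password for root from 10.0.0.5",
]

failures = Counter()
for line in LOGS:
    match = re.search(r"Failed password for \S+ from (\S+)", line)
    if match:
        failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= 3:  # threshold before triggering an active response
        print(f"ALERT: {count} failed logins from {ip} across multiple hosts")
```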
History
In June 2008, the OSSEC project and all the copyrights owned by Daniel B. Cid, the project leader, were acquired by Third Brigade, Inc. They promised to continue to contribute to the open source community and to extend commercial support and training to the OSSEC open source community.
In May 2009, Trend Micro acquired Third Brigade and the OSSEC project, with promises to keep it open source and free.
In 2018, Trend released the domain name and source code to the OSSEC Foundation.
The OSSEC project is currently maintained by Atomicorp, which stewards the free and open-source version and also offers a commercial version.
Software components
OSSEC consists of a main application, an agent, and a web interface.
Manager (or server), which is required for distributed network or stand-alone installations.
Agent, a small program installed on the systems to be monitored.
Agentless mode, can be used to monitor firewalls, routers, and even Unix systems.
OSSEC Features
Log-based Intrusion Detection (LID): actively monitors and analyzes data from multiple log data points in real time.
Rootkit and Malware Detection: process- and file-level analysis to detect malicious applications and rootkits.
Active Response: responds to attacks and changes on the system in real time through multiple mechanisms, including firewall policies, integration with third parties such as CDNs, and support por |
https://en.wikipedia.org/wiki/Sahi%20%28software%29 | Sahi Pro is a test automation software for desktop applications, mobile applications and web applications. Sahi was conceived as an open-source product in 2005, with a specific focus not on test automation management tools for web 2.0 technologies but as a test automation tool geared towards testers. Sahi Pro is shipped as proprietary licensed software. The open-source version includes a basic tool set sufficient for most testing purposes (record on all browsers, playback on all browsers, HTML playback reports, JUnit-style playback reports, suites and batch runs, parallel playback of tests), whereas the Pro version includes further features such as test distribution and report customization.
Sahi Open Source is written in Java and JavaScript and has been hosted on SourceForge since October 2005. It is released under the Apache License 2.0. Sahi Pro is currently at version 9.0.0 and is hosted on the Sahi Pro website.
Technical details
Sahi runs as a proxy server and the browser's proxy settings are configured to point to Sahi's proxy. Sahi then injects JavaScript event handlers into web pages, which allows it to record and playback events on the browser. Using a proxy makes Sahi independent of the browser used. |
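The injection idea is easy to see in miniature. The sketch below is not Sahi's implementation, just a Python toy showing the general technique: a local proxy that fetches the requested page and splices a script tag into HTML responses before they reach the browser (plain HTTP only; the hostname, port, and script URL are made up):

```python
# Minimal response-rewriting proxy: inject a script tag into HTML pages,
# the way Sahi injects its JavaScript event handlers.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

INJECTED = b"<script src='http://localhost:9999/recorder.js'></script>"

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser configured to use a proxy sends the absolute URL.
        with urlopen(self.path) as upstream:
            body = upstream.read()
            ctype = upstream.headers.get("Content-Type", "")
        if "text/html" in ctype:
            body = body.replace(b"</head>", INJECTED + b"</head>", 1)
        self.send_response(200)
        self.send_header("Content-Type", ctype or "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9999), InjectingProxy).serve_forever()
```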
https://en.wikipedia.org/wiki/Combination%20antibiotic | A combination antibiotic is one in which two ingredients are added together for additional therapeutic effect. One or both ingredients may be antibiotics.
Antibiotic combinations are increasingly important because of antimicrobial resistance: individual antibiotics that used to be effective are no longer so, and in the absence of new classes of antibiotic, combinations allow older antibiotics to continue to be used. In particular, they may be required to treat multiresistant organisms, such as carbapenem-resistant Enterobacteriaceae. Some combinations are more likely to result in successful treatment of an infection.
Uses
Antibiotics are used in combination for a number of reasons:
to treat multiresistant organisms, such as carbapenem-resistant Enterobacteriaceae.
because a person may be infected with more than one microbe simultaneously, for example infections of the abdominal cavity after bowel perforation.
because antibiotics used together may act synergistically to increase the efficacy of both.
because antibiotics used together may have a broader spectrum than each antibiotic used individually.
Examples
Examples of combinations include:
Amoxicillin/clavulanic acid, which combines the beta-lactam amoxicillin with the suicide inhibitor clavulanic acid; the clavulanic acid helps the amoxicillin overcome the action of beta-lactamase
Trimethoprim/sulfamethoxazole
Research
Research into combination antibiotics is ongoing. |
https://en.wikipedia.org/wiki/Plasmolysis | Plasmolysis is the process in which cells lose water in a hypertonic solution. The reverse process, deplasmolysis or cytolysis, can occur if the cell is placed in a hypotonic solution, resulting in a lower external osmotic pressure and a net flow of water into the cell. Through observation of plasmolysis and deplasmolysis, it is possible to determine the tonicity of the cell's environment as well as the rate at which solute molecules cross the cellular membrane.
Etymology
The term plasmolysis is derived from the Greek plasma, meaning 'something formed or moulded', and lysis, meaning 'loosening'.
Turgidity
A plant cell in hypotonic solution will absorb water by endosmosis, so that the increased volume of water in the cell increases pressure, pushing the protoplasm against the cell wall, a condition known as turgor. Turgor makes plant cells push against each other in the same way and is the main method of support in non-woody plant tissue. Plant cell walls resist further water entry after a certain point, known as full turgor, which stops plant cells from bursting as animal cells do in the same conditions. This is also the reason that plants stand upright: without the stiffness of its cells, a plant would collapse under its own weight. Turgor pressure allows plants to stay firm and erect, and plants without turgor pressure (described as flaccid) wilt. A cell begins to lose turgor pressure only when there are no air spaces surrounding it and the external solution eventually exerts a greater osmotic pressure than that of the cell. Vacuoles play a role in turgor pressure when water leaves the cell due to hyperosmotic solutions containing solutes such as mannitol, sorbitol, and sucrose.
Plasmolysis
If a plant cell is placed in a hypertonic solution, the plant cell loses water and hence turgor pressure by plasmolysis: pressure decreases to the point where the protoplasm of the cell peels away from the cell wall, leaving gaps between the cell wall and the membrane and making the p |
https://en.wikipedia.org/wiki/Compartmentalization%20%28information%20security%29 | Compartmentalization, in information security, whether public or private, is the limiting of access to information to persons or other entities on a need-to-know basis to perform certain tasks.
It originated in the handling of classified information in military and intelligence applications. It dates back to antiquity, and was successfully used to keep the secret of Greek fire.
The basis for compartmentalization is the idea that, if fewer people know the details of a mission or task, the risk or likelihood that such information will be compromised or fall into the hands of the opposition is decreased. Hence, varying levels of clearance within organizations exist. Yet, even if someone has the highest clearance, certain "compartmentalized" information, identified by codewords referring to particular types of secret information, may still be restricted to certain operators, even with a lower overall security clearance. Information marked this way is said to be codeword–classified. One famous example of this was the Ultra secret, where documents were marked "Top Secret Ultra": "Top Secret" marked its security level, and the "Ultra" keyword further restricted its readership to only those cleared to read "Ultra" documents.
Compartmentalization is now also used in commercial security engineering as a technique to protect information such as medical records.
Example
An example of compartmentalization was the Manhattan Project. Personnel at Oak Ridge constructed and operated centrifuges to isolate uranium-235 from naturally occurring uranium, but most did not know exactly what they were doing. Those that knew did not know why they were doing it. Parts of the weapon were separately designed by teams who did not know how the parts interacted.
See also
Information sensitivity
Principle of least privilege
Read into
Sensitive compartmented information |
https://en.wikipedia.org/wiki/Testbed | A testbed (also spelled test bed) is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computing tools, and new technologies.
The term is used across many disciplines to describe experimental research and new product development platforms and environments. They range from hands-on prototype development in manufacturing industries, such as automobile prototypes (known as "mules") and aircraft engines or systems, to intellectual-property refinement in fields such as computer software development, shielded from the hazards of live testing.
Software development
In software development, testbedding is a method of testing a particular module (function, class, or library) in an isolated fashion. It may be used as a proof of concept, or when a new module is tested in isolation from the program or system it will later be added to. A skeleton framework is implemented around the module so that it behaves as if it were already part of the larger program.
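A minimal sketch of the idea in Python (all names are illustrative): the skeleton stands in for the larger system so the isolated module can be exercised as if it were already integrated:

```python
# Skeleton framework wrapped around the module under test so it behaves
# as if it were already part of the larger program.
def discount(price: float, rate: float) -> float:
    """The isolated module under test."""
    return price * (1 - rate)

class SkeletonPricingSystem:
    """Stand-in for the larger system the module will later join."""
    def checkout(self, price: float) -> float:
        return discount(price, rate=0.1)

if __name__ == "__main__":
    bed = SkeletonPricingSystem()
    assert abs(bed.checkout(100.0) - 90.0) < 1e-9
    print("module behaves correctly inside the testbed")
```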
A typical testbed could include software, hardware, and networking components. In software development, the specified hardware and software environment can be set up as a testbed for the application under test. In this context, a testbed is also known as the test environment made of:
Testing hardware equipment (test bench, optical table, custom testing rig, dummy equipment that simulates an actual product or its counterpart, and environmental-conditioning means such as showers, heaters, fans, vacuum chambers, and anechoic chambers).
Computing equipment (processing units, data centers, in-line FPGA, environment simulation equipment).
Testing software (DAQ / oscilloscopes, visualisation and testing software, environment software to feed dummy equipment with data).
Testbeds are also pages on the Internet where the public can test CSS or HTML they have created and preview the results, for example:
The Arena web browser was created by the World Wide Web Consortium (W3C) and CE |
https://en.wikipedia.org/wiki/DSQI | DSQI (design structure quality index) is an architectural design metric used to evaluate a computer program's design structure and the efficiency of its modules. The metric was developed by the United States Air Force Systems Command.
The result of DSQI calculations is a number between 0 and 1. The closer to 1, the higher the quality. It is best used on a comparison basis, i.e., with previous successful projects. |
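The published definitions build DSQI from specific design parameters; as a rough sketch of the metric's shape only (the factors and weights below are placeholders, not the Air Force's published parameters), it is a weighted sum of normalized design factors:

```python
# Illustrative DSQI-style score: a weighted sum of normalized design
# factors, each in [0, 1], with weights summing to 1, so the result is
# itself in [0, 1] (closer to 1 means higher quality).
def dsqi(factors, weights):
    assert len(factors) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(f * w for f, w in zip(factors, weights))

# Compare a new design against a previous successful project.
current  = dsqi([0.90, 0.75, 0.80], [0.4, 0.3, 0.3])  # hypothetical scores
baseline = dsqi([0.85, 0.80, 0.70], [0.4, 0.3, 0.3])
print(f"current={current:.3f}  baseline={baseline:.3f}")
```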
https://en.wikipedia.org/wiki/Synthetic%20differential%20geometry | In mathematics, synthetic differential geometry is a formalization of the theory of differential geometry in the language of topos theory. There are several insights that allow for such a reformulation. The first is that most of the analytic data for describing the class of smooth manifolds can be encoded into certain fibre bundles on manifolds: namely bundles of jets (see also jet bundle). The second insight is that the operation of assigning a bundle of jets to a smooth manifold is functorial in nature. The third insight is that over a certain category, these are representable functors. Furthermore, their representatives are related to the algebras of dual numbers, so that smooth infinitesimal analysis may be used.
Synthetic differential geometry can serve as a platform for formulating certain otherwise obscure or confusing notions from differential geometry. For example, what it means for a construction to be natural (or invariant) has a particularly simple expression, even though the formulation in classical differential geometry may be quite difficult.
Further reading
John Lane Bell, Two Approaches to Modelling the Universe: Synthetic Differential Geometry and Frame-Valued Sets (PDF file)
F.W. Lawvere, Outline of synthetic differential geometry (PDF file)
Anders Kock, Synthetic Differential Geometry (PDF file), Cambridge University Press, 2nd Edition, 2006.
R. Lavendhomme, Basic Concepts of Synthetic Differential Geometry, Springer-Verlag, 1996.
Michael Shulman, Synthetic Differential Geometry
Ryszard Paweł Kostecki, Differential Geometry in Toposes
Differential geometry |
https://en.wikipedia.org/wiki/Power%20sum%20symmetric%20polynomial | In mathematics, specifically in commutative algebra, the power sum symmetric polynomials are a type of basic building block for symmetric polynomials, in the sense that every symmetric polynomial with rational coefficients can be expressed as a sum and difference of products of power sum symmetric polynomials with rational coefficients. However, not every symmetric polynomial with integral coefficients is generated by integral combinations of products of power-sum polynomials: they are a generating set over the rationals, but not over the integers.
Definition
The power sum symmetric polynomial of degree k in variables x1, ..., xn, written pk for k = 0, 1, 2, ..., is the sum of all kth powers of the variables. Formally,

$$p_k(x_1, \ldots, x_n) = \sum_{i=1}^{n} x_i^k .$$
The first few of these polynomials are

$$p_0 = 1 + 1 + \cdots + 1 = n, \qquad p_1 = x_1 + x_2 + \cdots + x_n, \qquad p_2 = x_1^2 + x_2^2 + \cdots + x_n^2 .$$
Thus, for each nonnegative integer $k$, there exists exactly one power sum symmetric polynomial of degree $k$ in $n$ variables.
The polynomial ring formed by taking all integral linear combinations of products of the power sum symmetric polynomials is a commutative ring.
Examples
The following lists the power sum symmetric polynomials of positive degrees up to n for the first three positive values of $n$. In every case, $p_n$ is one of the polynomials. The list goes up to degree n because the power sum symmetric polynomials of degrees 1 to n are basic in the sense of the theorem stated below.
For n = 1: $p_1 = x_1 .$
For n = 2: $p_1 = x_1 + x_2 , \quad p_2 = x_1^2 + x_2^2 .$
For n = 3: $p_1 = x_1 + x_2 + x_3 , \quad p_2 = x_1^2 + x_2^2 + x_3^2 , \quad p_3 = x_1^3 + x_2^3 + x_3^3 .$
Properties
The set of power sum symmetric polynomials of degrees 1, 2, ..., n in n variables generates the ring of symmetric polynomials in n variables. More specifically:
Theorem. The ring of symmetric polynomials with rational coefficients equals the rational polynomial ring $\mathbb{Q}[p_1, \ldots, p_n]$. The same is true if the coefficients are taken in any field of characteristic 0.
However, this is not true if the coefficients must be integers. For example, for n = 2, the symmetric polynomial

$$x_1 x_2$$

has the expression

$$x_1 x_2 = \tfrac{1}{2}\left(p_1^2 - p_2\right),$$

which involves fractions. According to the theorem this is the only way to represent in |
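The n = 2 identity above is easy to verify mechanically; a quick SymPy check:

```python
# Verify the n = 2 example: x1*x2 equals (p1**2 - p2)/2, an expression
# that necessarily involves the fraction 1/2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p1 = x1 + x2
p2 = x1**2 + x2**2
print(sp.simplify((p1**2 - p2) / 2 - x1 * x2))  # prints 0
```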
https://en.wikipedia.org/wiki/Deutsche%20Mark | The Deutsche Mark (; English: German mark), abbreviated "DM" or "D-Mark" (), was the official currency of West Germany from 1948 until 1990 and later the unified Germany from 1990 until the adoption of the euro in 2002. In English, it was typically called the "Deutschmark" (). One Deutsche Mark was divided into 100 pfennigs.
It was first issued under Allied occupation in 1948 to replace the Reichsmark and served as the Federal Republic of Germany's official currency from its founding the following year. On 31 December 1998, the Council of the European Union fixed the irrevocable exchange rate, effective 1 January 1999, for German mark to euros as DM 1.95583 = €1. In 1999, the Deutsche Mark was replaced by the euro; its coins and banknotes remained in circulation, defined in terms of euros, until the introduction of euro notes and coins on 1 January 2002. The Deutsche Mark ceased to be legal tender immediately upon the introduction of the euro—in contrast to the other eurozone states, where the euro and legacy currency circulated side by side for up to two months. Mark coins and banknotes continued to be accepted as valid forms of payment in Germany until 1 March 2002.
The Deutsche Bundesbank has guaranteed that all German marks in cash form may be changed into euros indefinitely, and one may do so in person at any branch of the Bundesbank in Germany. Banknotes and coins can even be sent to the Bundesbank by mail. In 2012, it was estimated that as many as 13.2 billion marks were in circulation, with one poll from 2011 showing a narrow majority of Germans favouring the currency's restoration (although only a minority believed this would bring any economic benefit). Newer polls indicate that only a minority of Germans is supportive of a reintroduction of the Deutsche Mark.
History
Before 1871
A mark had been the currency of Germany since its original unification in 1871. Before that time, the different German states issued a variety of different currencies, the mo |
https://en.wikipedia.org/wiki/Walter%20Schottky%20Prize | The Walter Schottky Prize is a scientific prize awarded by the German Physical Society for outstanding research work of young academics in the field of solid-state physics. Since 1973 the prize has generally been awarded annually. The prize money of 10,000 euros is contributed by Infineon Technologies AG and Robert Bosch GmbH. The prize is named after Walter Schottky, a physicist and pioneer of electronics.
List of recipients
Source: German Physical Society
2021: Andreas Hüttel
2020: Zhe Wang
2019: Eva Vera Benckiser
2018: Sascha Schäfer
2017: Helmut Schultheiss
2016: Ermin Malic
2015: Frank Pollmann, Andreas Schnyder
2014: Sven Höfling
2013: Claus Ropers
2012: Alex Greilich
2011: not awarded
2010: Thomas Seyller
2009: Florian Marquardt
2008: Fedor Jelezko
2007: Jonathan Finley
2006: Manfred Fiebig
2005: Wolfgang Belzig
2004: Markus Morgenstern
2003: Jurgen Smet
2002: Harald Reichert
2001: Manfred Bayer
2000: Clemens Bechinger
1999: Thomas Herrmannsdörfer
1998: Achim Wixforth
1997: Christoph Geibel
1996: Bo Persson
1995: Jochen Feldmann
1994: Paul Müller
1993: Gertrud Zwicknagl
1992: Kurt Kremer
1991: Christian Thomsen
1990: Hermann Grabert, Helmut Wipf
1989: Ulrich Eckern, Gerd Schön, Wilhelm Zwerger
1988: Martin Stutzmann
1987: Bernd Ewen, Dieter Richter
1986: Gerhard Abstreiter
1985: Hans Werner Diehl, Siegfried Dietrich
1984: Gottfried Döhler
1983: Klaus Sattler
1982: Volker Dohm, Reinhard Folk
1981: Klaus von Klitzing
1980: Klaus Funke
1979: Heiner Müller-Krumbhaar
1978: Bernhard Authier, Horst Fischer
1977: Siegfried Hunklinger
1976: Franz Wegner
1975: Karl-Heinz Zschauer
1974: Andreas Otto
1973: Peter Ehrhart
See also
List of physics awards |
https://en.wikipedia.org/wiki/Patch%20test | A patch test is a diagnostic method used to determine which specific substances cause allergic inflammation of a patient's skin.
Patch testing helps identify which substances may be causing a delayed-type allergic reaction in a patient, and may identify allergens not identified by blood testing or skin prick testing. It is intended to produce a local allergic reaction on a small area of the patient's back, where the diluted chemicals are placed.
The chemicals included in the patch test kit are the offenders in approximately 85–90 percent of cases of allergic contact eczema, and include chemicals present in metals (e.g., nickel), rubber, leather, formaldehyde, lanolin, fragrances, toiletries, hair dyes, medicines, pharmaceutical items, food, drink, preservatives, and other additives.
Mechanism
A patch test relies on the principle of a type IV hypersensitivity reaction.
The first step in becoming allergic is sensitization. When skin is exposed to an allergen, the antigen-presenting cells (APCs) – Langerhans cells or dermal dendritic cells – phagocytize the substance, break it down into smaller components, and present them on their surface bound to major histocompatibility complex class II (MHC-II) molecules. The APC then travels to a lymph node, where it presents the displayed allergen to a CD4+ T-cell, or T-helper cell. The T-cell undergoes clonal expansion, and some clones of the newly formed antigen-specific sensitized T-cells travel back to the site of antigen exposure.
When the skin is again exposed to the antigen, the memory T-cells in the skin recognize it and produce cytokines (chemical signals), which cause more T-cells to migrate from blood vessels. This starts a complex immune cascade leading to skin inflammation, itching, and the typical rash of contact dermatitis. In general, it takes 2–4 days for a response in patch testing to develop. The patch test is simply the induction of contact dermatitis in a small area.
Process
Application of the patch t |
https://en.wikipedia.org/wiki/STANAG%203910 | STANAG 3910 High Speed Data Transmission Under STANAG 3838 or Fibre Optic Equivalent Control is a protocol defined in a NATO Standardization Agreement for the transfer of data, principally intended for use in avionic systems. STANAG 3910 allows a 1 Mb/s STANAG 3838 / MIL-STD-1553B / MoD Def Stan 00-18 Pt 2 (3838/1553B) data bus to be augmented with a 20 Mb/s high-speed (HS) bus, which is referred to in the standard as the HS channel: the 3838/1553B bus in an implementation of STANAG 3910 is then referred to as the low-speed (LS) channel. Either or both channels may be multiply redundant, and may use either electrical or optical media. Where the channels use redundant media, these are individually referred to as buses by the standard.
History
The original STANAG 3910, i.e. the NATO standard, reached, at least, draft version 1.8, before work on it was abandoned in the early 1990s in favour of its publication through non-military standardization organizations: the foreword to Rev. 1.7 of the STANAG from March 1990 stated "The main body of this document is identical to the proposed Rev 1.7 of prEN 3910". Following this, several provisional, green-paper versions, prEN 3910 P1 & P2, were produced by working-group C2-GT9 of the Association Europeene des Constructeurs de Materiel Aerospatial (AECMA) (now ASD-STAN), before its development also ceased in 1996-7 (following the withdrawal of the French delegation, who held the chair of AECMA C2-GT9 at the time). As a result, the standard remains (as of Aug. 2013) in green paper form: the latest draft version is prEN3910-001 Issue P1, the front sheet of which states, 'This "Aerospace Series" Prestandard has been drawn up under the responsibility of AECMA (The European Association of Aerospace Industries). It is published on green paper for the needs of AECMA-Members.' However, despite this disclaimer, the document is offered for sale by ASD-STAN, currently (August 2013) at €382.64.
Utilisation
The incomplete nature of the s |
https://en.wikipedia.org/wiki/Arnaud%20Balard | Arnaud Balard (born 1971) is a French deafblind artist. In 2009, Balard wrote a manifesto outlining his philosophy of Surdism, an artistic, philosophical, and cultural movement celebrating deaf culture and deaf arts (including cinema, theater, literature, and visual arts). He is also known for designing the Sign Union flag, an image intended to represent global unity for deaf and deafblind people.
Early life and education
Arnaud Balard was born in 1971 in Toulouse. Balard was born deaf; as an adult, he developed tunnel vision and was diagnosed with deteriorating vision caused by Usher syndrome in 1999.
From age three to seven, Balard lived with an aunt to attend deaf school. Since he was able to learn French, he attended mainstream schools from age eight to eighteen. He attended Ecole Sainte Therese and Collège Privé Sainte-geneviève Saint-joseph in Rodez. His education was conducted through oral French; he did not learn French Sign Language until he was an adult.
He attended Université de Toulouse-Le Mirail from 1992 to 1993, before transferring to the École MJM Graphic Design, studying there from 1993 to 1995. From 1995 to 1998, Balard attended the European Academy of Art in Brittany on the Rennes campus, graduating with the congratulations of the jury. In 2000, he moved to Brussels and studied at l'École nationale supérieure des arts visuels de La Cambre.
Activism and artwork
Balard writes, teaches classes and workshops, and creates artwork that promotes deaf culture. In 2009, he wrote a 22-page manifesto coining the word "Surdism" (from the French word for deaf, sourd) and arguing for the importance of deaf arts and greater acceptance of deaf identity. Balard was unaware at the time of the similar De'VIA arts movement which had originated in the United States in 1989, but Surdism differs from De'VIA in its inclusion of arts beyond visual arts, such as theater, cinema, and literary works. The Surdism manifesto urges the use of creative works to challenge he |
https://en.wikipedia.org/wiki/Noncommutative%20signal-flow%20graph | In automata theory and control theory, branches of mathematics, theoretical computer science and systems engineering, a noncommutative signal-flow graph is a tool for modeling interconnected systems and state machines by mapping the edges of a directed graph to a ring or semiring.
A single edge weight might represent an array of impulse responses of a complex system, or a character from an alphabet picked off the input tape of a finite automaton, while the graph might represent the flow of information or state transitions.
As diverse as these applications are, they share much of the same underlying theory.
Definition
Consider n equations involving n + 1 variables {x0, x1, ..., xn}:

$$x_j = \sum_{i=0}^{n} a_{ij}\, x_i , \qquad j = 1, \ldots, n,$$

with $a_{ij}$ elements in a ring or semiring R. The free variable x0 corresponds to a source vertex v0, thus having no defining equation. Each equation corresponds to a fragment of a directed graph G = (V, E).
The edge weights define a function f from E to R. Finally, fix an output vertex vm. A signal-flow graph is the collection of this data S = (G = (V, E), v0, vm ∈ V, f : E → R). The equations may not have a solution, but when they do,

$$x_m = T\, x_0 ,$$

with T an element of R called the gain.
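Since the edge weights live in a noncommutative (semi)ring, the order of multiplication matters when computing T. A small numeric sketch in Python (the graph shape and matrix weights are invented for illustration, using the left-multiplication convention of the equations above):

```python
# Edge weights drawn from the noncommutative ring of 2x2 real matrices,
# for the graph: source v0 -> v1 -> v2, with a feedback edge v2 -> v1.
import numpy as np

I = np.eye(2)
a = np.array([[1.0, 2.0], [0.0, 1.0]])  # weight on edge v0 -> v1
b = np.array([[0.0, 1.0], [1.0, 0.0]])  # weight on edge v1 -> v2
c = np.array([[0.5, 0.0], [0.0, 0.5]])  # feedback weight on edge v2 -> v1

# Equations: x1 = a x0 + c x2 and x2 = b x1. Eliminating x1 gives
# x2 = b a x0 + b c x2, hence the gain T with x2 = T x0:
T = np.linalg.inv(I - b @ c) @ b @ a
print(T)
```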
Successive Elimination
Return Loop Method
There exist several noncommutative generalizations of Mason's rule. The most common is the return loop method (sometimes called the forward return loop method (FRL), having a dual backward return loop method (BRL)). The first rigorous proof is attributed to Riegle, so it is sometimes called Riegle's rule.
As with Mason's rule, these gain expressions combine terms in a graph-theoretic manner (loop-gains, path products, etc.). They are known to hold over an arbitrary noncommutative ring and over the semiring of regular expressions.
Formal Description
The method starts by enumerating all paths from input to output, indexed by j ∈ J. We use the following definitions:
The j-th path product is (by abuse of notation) a tuple of kj |
https://en.wikipedia.org/wiki/Pelado | In Mexican society, pelado is "a term said to have been invented to describe a certain class of urban 'bum' in Mexico in the 1920s."
It was used, however, much earlier. Lewis Garrard used it in his book, "Wah-to-yah and the Taos Trail," his first-hand account of crossing the Plains to Taos, published in 1850. He wrote: "This hos has feelin's hyar," slapping his breast, "for poor human natur in any fix, but for these palous (pelados) he doesn't care a cuss."
Historical background
Mexico has a long tradition of urban poverty, beginning with the léperos, a term referring to shiftless vagrants of various racial categories in the colonial hierarchical racial system, the sociedad de castas. They included mestizos, natives, and poor whites (españoles). Léperos were viewed as unrespectable people (el pueblo bajo) by polite society (la gente culta), who judged them as being morally and biologically inferior.
Léperos supported themselves as they could through petty commerce or begging, but many resorted to crime. A study of crime in eighteenth-century Mexico City based on arrest records indicates that they were "neither marginal types nor dregs of the lower classes. They consisted of both men and women; they were not particularly young; they were not mainly single and rootless; they were not merely Indian and casta; and they were not largely unskilled." None of the popular stereotypes of a young rootless, unskilled male is borne out by the arrest records. "The dangerous class existed only in the collective mind of the colonial elite."
They established a thieves' market across from the viceregal palace, which was later moved to the Tepito area of the working-class Colonia Guerrero. They spent much of their time in taverns, leading to the official promotion of theater as an alternative.
Initially, many of these plays were organized by the church, but the people soon set up their own theaters, where the humor of the taverns survived. The rowdy, often illegal stagings were |
https://en.wikipedia.org/wiki/Halocline | In oceanography, a halocline (from Greek hals, halos 'salt' and klinein 'to slope') is a cline, a subtype of chemocline caused by a strong, vertical salinity gradient within a body of water. Because salinity (in concert with temperature) affects the density of seawater, it can play a role in its vertical stratification. Increasing salinity by one kg/m3 results in an increase of seawater density of around 0.7 kg/m3.
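That rule of thumb makes the density contrast across a halocline easy to estimate; a trivial illustration:

```python
# Rough density change from the relation quoted above: ~0.7 kg/m^3 of
# added seawater density per 1 kg/m^3 of added salinity.
def density_increase(delta_salinity_kg_m3: float) -> float:
    return 0.7 * delta_salinity_kg_m3  # kg/m^3

# Water 2 kg/m^3 saltier below a halocline is ~1.4 kg/m^3 denser.
print(density_increase(2.0))
```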
Description
In the midlatitudes, an excess of evaporation over precipitation leads to surface waters being saltier than deep waters. In such regions, the vertical stratification is due to surface waters being warmer than deep waters and the halocline is destabilizing. Such regions may be prone to salt fingering, a process which results in the preferential mixing of salinity.
In certain high latitude regions (such as the Arctic Ocean, Bering Sea, and the Southern Ocean) the surface waters are actually colder than the deep waters and the halocline is responsible for maintaining water column stability, isolating the surface waters from the deep waters. In these regions, the halocline is important in allowing for the formation of sea ice, and limiting the escape of carbon dioxide to the atmosphere.
Haloclines are also found in fjords, and poorly mixed estuaries where fresh water is deposited at the ocean surface.
A halocline can be easily created and observed in a drinking glass or other clear vessel. If fresh water is slowly poured over a quantity of salt water, using a spoon held horizontally at water-level to prevent mixing, a hazy interface layer, the halocline, will soon be visible due to the varying index of refraction across the boundary.
A halocline is most commonly confused with a thermocline – a thermocline is an area within a body of water that marks a drastic change in temperature. A halocline can coincide with a thermocline and form a pycnocline.
Haloclines are common in water-filled limestone caves near the ocean. Less dense fresh wat |
https://en.wikipedia.org/wiki/Shale%20gouge%20ratio | Shale Gouge Ratio (typically abbreviated to SGR) is a mathematical algorithm that aims to predict the fault rock types for simple fault zones developed in sedimentary sequences dominated by sandstone and shale.
The parameter is widely used in the oil and gas exploration and production industries to enable quantitative predictions to be made regarding the hydrodynamic behavior of faults.
Definition
At any point on a fault surface, the shale gouge ratio is equal to the net shale/clay content of the rocks that have slipped past that point.
The SGR algorithm assumes complete mixing of the wall-rock components in any particular 'throw interval'. The parameter is a measure of the 'upscaled' composition of the fault zone.
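Under those assumptions the computation amounts to a thickness-weighted average of clay content over the slipped interval. A minimal Python sketch (the bed thicknesses and clay fractions are invented for illustration):

```python
# Thickness-weighted clay content of the interval that has slipped past
# a point on the fault, expressed as a percentage (SGR).
def shale_gouge_ratio(beds):
    """beds: iterable of (thickness_m, clay_fraction) pairs whose
    thicknesses sum to the fault throw at the point of interest."""
    throw = sum(t for t, _ in beds)
    clay = sum(t * v for t, v in beds)
    return 100.0 * clay / throw

beds = [(10.0, 0.15), (5.0, 0.80), (15.0, 0.25)]  # 30 m of throw
print(f"SGR = {shale_gouge_ratio(beds):.1f}%")    # SGR = 30.8%
```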
Application to hydrocarbon exploration
Hydrocarbon exploration involves identifying and defining accumulations of hydrocarbons that are trapped in subsurface structures. These structures are often segmented by faults. For a thorough trap evaluation, it is necessary to predict whether the fault is sealing or leaking to hydrocarbons and also to provide an estimate of how 'strong' the fault seal might be. The 'strength' of a fault seal can be quantified in terms of subsurface pressure, arising from the buoyancy forces within the hydrocarbon column, that the fault can support before it starts to leak. When acting on a fault zone this subsurface pressure is termed capillary threshold pressure.
For faults developed in sandstone and shale sequences, the first order control on capillary threshold pressure is likely to be the composition, in particular the shale or clay content, of the fault-zone material. SGR is used to estimate the shale content of the fault zone.
In general, fault zones with higher clay content, equivalent to higher SGR values, can support higher capillary threshold pressures. On a broader scale, other factors also exert a control on the threshold pressure, such as depth of the rock sequence at the time of faulting, and the maxim |
https://en.wikipedia.org/wiki/Hernan%20Lopez-Schier | Hernán López-Schier is a developmental biologist and neuroscientist known for his work on sensory biology and organ regeneration. He is an associate faculty member at the Graduate School of Quantitative Biology at the Ludwig Maximilian University in Munich, Germany, and a visiting professor at New York University Abu Dhabi, UAE.
Education
Lopez-Schier trained in molecular genetics at the Rockefeller University in New York City. In 1997, he moved to the University of Cambridge to pursue a PhD in genetics and cell biology at the Gurdon Institute under the mentorship of Daniel St Johnston, FRS.
Career
After his PhD, Lopez-Schier joined the group of Kavli Prize laureate A. James Hudspeth as a postdoctoral fellow, funded by the Wellcome Trust and by the Life Sciences Research Foundation as a Howard Hughes Medical Institute fellow. In 2007, he moved to the Centre for Genomic Regulation in Barcelona, Spain, as a junior group leader, funded by the Spanish Ministry of Science and Innovation and an award from the European Research Council. From 2011 to 2016 he chaired a Research Networking Programme of the European Science Foundation. Between 2012 and 2023 he was a research unit director at the Helmholtz Zentrum München and a member of the Graduate School of Systemic Neurosciences at the Ludwig Maximilian University in Munich, Germany. Other academic positions include Visiting Scholar at the National Institute of Genetics in Mishima, Japan, in 2008 and 2019, Visiting Scientist at the Janelia Research Campus in 2014, Visiting Professor at Harvard University, and member of the Professional Development Committee of the Society for Neuroscience from 2020 to 2023. He has served on scientific review panels for Austrian, French, Spanish, Portuguese, and British government agencies. Since 2022, he has been a member of the Project Evaluation Committee of the EMBL Imaging Centre. In 2019, he was appointed to the Scientific Advisory Board of the biotechnology company Sensorion.
His group has made a signific |
https://en.wikipedia.org/wiki/Cyrillic%20Projector | The Cyrillic Projector is a sculpture created by American artist Jim Sanborn in the early 1990s, and purchased by the University of North Carolina at Charlotte in 1997. It is currently installed between the campus's Friday and Fretwell Buildings.
An encrypted trilogy
The encrypted sculpture Cyrillic Projector is part of an encrypted family of three intricate puzzle-sculptures by Sanborn, the other two named Kryptos and Antipodes. The Kryptos sculpture (located at CIA headquarters in Langley, Virginia) has text which is duplicated on Antipodes. Antipodes has two sides — one with the Latin alphabet and one with Cyrillic. The Latin side is similar to Kryptos. The Cyrillic side is similar to the Cyrillic Projector.
Solution
The encrypted text of the Cyrillic Projector was first reportedly solved by Frank Corr in early July 2003, followed by an equivalent decryption by Mike Bales in September of the same year. Both decryptions yielded text in Russian. The first English translation of the text was led by Elonka Dunin.
The sculpture includes two messages. The first is a Russian text that explains the use of psychological control to develop and maintain potential sources of information. The second is a partial quote about the Soviet dissident and Nobel Peace Prize-winning scientist Andrei Sakharov. The text is from a classified KGB memo detailing concerns that his report at the 1982 Pugwash conference was going to be used by the U.S. for anti-Soviet propaganda purposes.
Notes |
https://en.wikipedia.org/wiki/Epitranscriptomic%20sequencing | In epitranscriptomic sequencing, most methods focus on either (1) enrichment and purification of the modified RNA molecules before running on the RNA sequencer, or (2) improving or modifying bioinformatics analysis pipelines to call the modification peaks. Most methods have been adapted and optimized for mRNA molecules, except for modified bisulfite sequencing for profiling 5-methylcytidine which was optimized for tRNAs and rRNAs.
There are seven major classes of chemical modifications found in RNA molecules: N6-methyladenosine, 2'-O-methylation, N6,2'-O-dimethyladenosine, 5-methylcytidine, 5-hydroxylmethylcytidine, inosine, and pseudouridine. Various sequencing methods have been developed to profile each type of modification. The scale, resolution, sensitivity, and limitations associated with each method and the corresponding bioinformatics tools used will be discussed.
Methods for profiling N6-methyladenosine
Methylation of adenosine does not affect its ability to base-pair with thymidine or uracil, so N6-methyladenosine (m6A) cannot be detected using standard sequencing or hybridization methods. This modification is marked by methylation of the adenosine base at the nitrogen-6 position. It is abundant in polyA+ mRNA and is also found in tRNA, rRNA, snRNA, and long ncRNA.
m6A-seq and MeRIP-seq
In 2012, the first two methods for m6A sequencing were published, enabling transcriptome-wide profiling of m6A in mammalian cells. These two techniques, called m6A-seq and MeRIP-seq (m6A-specific methylated RNA immunoprecipitation), were also the first methods to allow sequencing of any type of RNA modification. They were able to detect 10,000 m6A peaks in the mammalian transcriptome; the peaks were found to be enriched in 3'UTR regions, near STOP codons, and within long exons.
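Schematically, peak calling in such immunoprecipitation-based protocols boils down to comparing IP read counts against an input control; the Python sketch below illustrates only that core idea (it is not the published m6A-seq/MeRIP-seq pipeline, and all counts are invented):

```python
# Keep windows whose immunoprecipitated (IP) read count is strongly
# enriched over the input control; pseudocounts avoid division by zero.
def call_peaks(ip_counts, input_counts, min_fold=4.0, pseudo=1.0):
    """Return indices of windows enriched in the IP library."""
    peaks = []
    for i, (ip, inp) in enumerate(zip(ip_counts, input_counts)):
        if (ip + pseudo) / (inp + pseudo) >= min_fold:
            peaks.append(i)
    return peaks

ip_reads    = [3, 5, 60, 72, 4, 2]  # reads per window, IP library
input_reads = [4, 6, 10,  9, 5, 3]  # reads per window, input library
print(call_peaks(ip_reads, input_reads))  # -> [2, 3], candidate peaks
```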
The two methods were optimized to detect methylation peaks in poly(A)+ mRNA, but the protocol could be adapted to profile any type of RNA. Collected RNA sample is fragmented into ~1 |
https://en.wikipedia.org/wiki/Radical%20polynomial | In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if
$F[x_1, \ldots, x_n]$ is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial

$$r^2 = x_1^2 + x_2^2 + \cdots + x_n^2 .$$
Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group.
The ring of radical polynomials is a graded subalgebra of the ring of all polynomials.
The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials. |
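A standard two-variable illustration of the separation-of-variables statement (this worked example is added for illustration, with $r^2 = x^2 + y^2$): the monomial $x^2$ splits into a radical part and a harmonic part.

```latex
% x^2 as (radical polynomial) + (harmonic polynomial) in two variables;
% the second summand is harmonic since its Laplacian vanishes.
x^{2} \;=\; \tfrac{1}{2}\,(x^{2} + y^{2}) \;+\; \tfrac{1}{2}\,(x^{2} - y^{2}),
\qquad \Delta\,(x^{2} - y^{2}) \;=\; 2 - 2 \;=\; 0 .
```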
https://en.wikipedia.org/wiki/AI%20Song%20Contest | The AI Song Contest () is an international music competition for songs that have been composed using artificial intelligence (AI). The inaugural edition took place on 12 May 2020 and was organised by the Dutch public broadcaster VPRO, in collaboration with NPO 3FM and NPO Innovation. Since 2021, the contest has been held as part of an annual conference organised by the Belgian technology hub Wallifornia MusicTech.
Format
The format of the competition was created by the Dutch programme creator Karen van Dijk (VPRO) and was inspired by the Eurovision Song Contest. Participating teams are tasked with the composition of a song using artificial intelligence. Each submission is then evaluated by a jury, which assesses the use of AI in the songwriting process, and by the public, which assesses the quality of the song through online ratings. The winner of the contest is the entry with the highest overall score.
Unlike the Eurovision Song Contest, countries can be represented by multiple teams. While the 2020 edition only allowed teams from "Eurovision countries" to compete, this rule was dropped in 2021 to allow teams from outside Europe and Australia to enter as well. In addition, entries would no longer be judged for their suitability for the Eurovision Song Contest, and the maximum song length was extended from three to four minutes. In 2022, a semi-final was introduced in which the jury selected fifteen entries to advance to the final.
Past editions
Awards and nominations
See also
Algorithmic composition
Computer music
Music and artificial intelligence
Pop music automation |
https://en.wikipedia.org/wiki/MPEG%20LA | MPEG LA is an American company based in Denver, Colorado that licenses patent pools covering essential patents required for use of the MPEG-2, MPEG-4, IEEE 1394, VC-1, ATSC, MVC, MPEG-2 Systems, AVC/H.264 and HEVC standards.
Via Licensing Corp acquired MPEG LA in April 2023 and formed a new patent pool administration company called Via Licensing Alliance.
History
MPEG LA started operations in July 1997, immediately after receiving a Department of Justice Business Review Letter. During formation of the MPEG-2 standard, a working group of companies that participated in its formation recognized that the biggest challenge to adoption was efficient access to essential patents owned by many patent owners. That ultimately led a group of MPEG-2 patent owners to form MPEG LA, which in turn created the first modern-day patent pool as a solution. The majority of patents underlying MPEG-2 technology were owned by three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents).
In June 2012, MPEG LA announced a call for patents essential to the High Efficiency Video Coding (HEVC) standard.
In September 2012, MPEG LA launched Librassay, which makes diagnostic patent rights from some of the world's leading research institutions available to everyone through a single license. Organizations which have included patents in Librassay include Johns Hopkins University; Ludwig Institute for Cancer Research; Memorial Sloan Kettering Cancer Center; National Institutes of Health (NIH); Partners HealthCare; The Board of Trustees of the Leland Stanford Junior University; The Trustees of the University of Pennsylvania; The University of California, San Francisco; and Wisconsin Alumni Research Foundation (WARF).
On September 29, 2014, the MPEG LA announced their HEVC license which covers the patents from 23 companies. The license is US$0.20 per HEVC product after the first 100,000 units each year with an annual cap. The licen |
https://en.wikipedia.org/wiki/Undecidable%20problem | In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether an arbitrary program eventually halts when run.
Background
A decision problem is a question which, for every input in some infinite set of inputs, requires a "yes" or "no" answer. Those inputs can be numbers (for example, the decision problem "is the input a prime number?") or values of some other kind, such as strings of a formal language.
The formal representation of a decision problem is a subset of the natural numbers. For decision problems on natural numbers, the set consists of those numbers that the decision problem answers "yes" to. For example, the decision problem "is the input even?" is formalized as the set of even numbers. A decision problem whose input consists of strings or more complex values is formalized as the set of numbers that, via a specific Gödel numbering, correspond to inputs that satisfy the decision problem's criteria.
A decision problem A is called decidable or effectively solvable if the formalized set of A is a recursive set. Otherwise, A is called undecidable. A problem is called partially decidable, semi-decidable, solvable, or provable if A is a recursively enumerable set.
Example: the halting problem in computability theory
In computability theory, the halting problem is a decision problem which can be stated as follows:
Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.
Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
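The proof is the classic diagonalization, which can be sketched in Python-flavoured pseudocode: assume a total, always-correct halts() exists and derive a contradiction (the oracle below is hypothetical by construction):

```python
# Sketch of the diagonalization argument. If a total, always-correct
# halts(prog, data) existed, the program below would halt if and only if
# it does not halt on its own source -- a contradiction, so no such
# halts() can be implemented.
def halts(prog, data) -> bool:
    """Hypothetical halting oracle; provably cannot be written."""
    raise NotImplementedError

def paradox(prog):
    if halts(prog, prog):   # if prog would halt on itself...
        while True:         # ...then loop forever
            pass
    # ...otherwise halt immediately

# paradox(paradox) halts  <=>  halts(paradox, paradox) is False,
# contradicting the assumed correctness of halts().
```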
Relationship wit |
https://en.wikipedia.org/wiki/Digital%20Visual%20Interface | Digital Visual Interface (DVI) is a video display interface developed by the Digital Display Working Group (DDWG). The digital interface is used to connect a video source, such as a video display controller, to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of uncompressed digital video content.
Featuring support for analog connections, DVI devices manufactured as DVI-I are compatible with the analog VGA interface by including VGA pins, while DVI-D devices are digital-only. This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets and DVD players.
History
An earlier attempt to promulgate an updated standard to the analog VGA connector was made by the Video Electronics Standards Association (VESA) in 1994 and 1995, with the Enhanced Video Connector (EVC), which was intended to consolidate cables between the computer and monitor. EVC used a 35-pin Molex MicroCross connector and carried analog video (input and output), analog stereo audio (input and output), and data (via USB and FireWire). At the same time, with the increasing availability of digital flat-panel displays, the priority shifted to digital video transmission, which would remove the extra analog/digital conversion steps required for VGA and EVC; the EVC connector was reused by VESA, which released the P&D standard in 1997. P&D offered single-link TMDS digital video with, as an option, analog video output and data (USB and FireWire), using a 35-pin MicroCross connector similar to EVC; the analog audio and video input lines from EVC were repurposed to carry digital video for P&D.
Because P&D was a physically large, expensive connector, a consortium of companies developed th |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.