https://en.wikipedia.org/wiki/Cladococcus%20viminalis | Cladococcus viminalis is a species of radiolarians.
Anatomy
This radiolarian, like all other radiolarians, can produce extremely complex silica tests of pores and spines that are laid down in an obvious geometric shape. The spines give this species buoyancy, while the pores provide a way for cell material called pseudopodia to escape this species's body. The pseudopodia engulf any food that gets trapped on the spines and carry it to the middle of the cell to be digested. This species is also a polycystine radiolarian.
Habitat
This species can be found in the Mediterranean Sea, near the surface of the ocean, in the fjords of Norway, and in the central Pacific Ocean. It is one of the most commonly fossilized radiolarians, frequently discovered in limestone and chalk rocks. |
https://en.wikipedia.org/wiki/Alterococcus | Alterococcus is a genus of bacteria from the family of Opitutaceae with one species Alterococcus agarolyticus. |
https://en.wikipedia.org/wiki/Leabra | Leabra stands for local, error-driven and associative, biologically realistic algorithm. It is a model of learning which is a balance between Hebbian and error-driven learning with other network-derived characteristics. This model is used to mathematically predict outcomes based on inputs and previous learning influences. This model is heavily influenced by and contributes to neural network designs and models. This algorithm is the default algorithm in emergent (successor of PDP++) when making a new project, and is extensively used in various simulations.
Hebbian learning is performed using conditional principal components analysis (CPCA) algorithm with correction factor for sparse expected activity levels.
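As an illustrative sketch (not the emergent implementation), the core CPCA weight update moves each weight toward the sender's activity, gated by the receiver's activity: dw_ij = lrate * y_j * (x_i - w_ij).

```python
def cpca_update(w, x, y, lrate=0.01):
    """One CPCA Hebbian step: dw[i][j] = lrate * y[j] * (x[i] - w[i][j]).

    w: n_in x n_out weight matrix (list of rows), x: sender activities,
    y: receiver activities.  Weights of active receiving units move toward
    the current input pattern, estimating the conditional expectation of
    the input given that the receiver is active.
    """
    return [[wij + lrate * yj * (xi - wij) for wij, yj in zip(row, y)]
            for row, xi in zip(w, x)]
```

With repeated presentations of a fixed input, an active unit's incoming weights converge to that input pattern, while units with zero activity are untouched; the correction factor mentioned above rescales these weights for sparse expected activity levels.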
Error-driven learning is performed using GeneRec, which is a generalization of the recirculation algorithm, and approximates Almeida–Pineda recurrent backpropagation. The symmetric, midpoint version of GeneRec is used, which is equivalent to the contrastive Hebbian learning algorithm (CHL). See O'Reilly (1996; Neural Computation) for more details.
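A minimal sketch of the CHL form of the update, contrasting an expectation ("minus") phase with an outcome ("plus") phase: dw_ij = lrate * (x+_i * y+_j - x-_i * y-_j).

```python
def chl_update(w, x_minus, y_minus, x_plus, y_plus, lrate=0.01):
    """Contrastive Hebbian step: dw[i][j] = lrate * (x+_i*y+_j - x-_i*y-_j).

    x/y are sender/receiver activations in the expectation (minus) and
    outcome (plus) phases; learning stops once the two phases agree.
    """
    return [[wij + lrate * (xp * yp - xm * ym)
             for wij, yp, ym in zip(row, y_plus, y_minus)]
            for row, xp, xm in zip(w, x_plus, x_minus)]
```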
The activation function is a point-neuron approximation with both discrete spiking and continuous rate-code output.
Layer or unit-group level inhibition can be computed directly using a k-winners-take-all (KWTA) function, producing sparse distributed representations.
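A simplified sketch of the idea (actual Leabra places an inhibitory conductance between the threshold conductances of the k-th and (k+1)-th most excited units; here a threshold acts directly on net inputs):

```python
def kwta(net_inputs, k):
    """Basic k-winners-take-all: a shared inhibition threshold is placed
    between the k-th and (k+1)-th strongest net inputs, so at most k
    units remain active -- yielding a sparse activity pattern."""
    if k >= len(net_inputs):
        return [1.0] * len(net_inputs)
    ranked = sorted(net_inputs, reverse=True)
    threshold = (ranked[k - 1] + ranked[k]) / 2.0
    return [1.0 if n > threshold else 0.0 for n in net_inputs]
```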
The net input is computed as an average, not a sum, over connections, based on normalized, sigmoidally transformed weight values, which are subject to scaling on a connection-group level to alter relative contributions. Automatic scaling is performed to compensate for differences in expected activity level in the different projections.
Documentation about this algorithm can be found in the book Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain, published by MIT Press, and in the Emergent documentation.
Overview of the Leabra algorithm
The pseudocode for Leabra is given here, showing exactly how th |
https://en.wikipedia.org/wiki/Christiaan%20Eijkman | Christiaan Eijkman ( , , ; 11 August 1858 – 5 November 1930) was a Dutch physician and professor of physiology whose demonstration that beriberi is caused by poor diet led to the discovery of antineuritic vitamins (thiamine). Together with Sir Frederick Hopkins, he received the Nobel Prize for Physiology or Medicine in 1929 for the discovery of vitamins.
Biography
Early life and education
Christiaan Eijkman was born on 11 August 1858, at Nijkerk, Netherlands as the seventh child of Christiaan Eijkman, the headmaster of a local school, and Johanna Alida Pool. His elder brother Johann Frederik Eijkman (1851–1915) was a chemist.
A year later, in 1859, the Eijkman family moved to Zaandam, where his father was appointed head of a newly founded school for advanced elementary education. It was here that Christiaan and his brothers received their early education. In 1875, after taking his preliminary examinations, Eijkman became a student at the Military Medical School of the University of Amsterdam, where he was trained as a medical officer for the Netherlands Indies Army, passing through all his examinations with honours.
From 1879 to 1881, he was an assistant of T. Place, Professor of Physiology, during which time he wrote his thesis On Polarization of the Nerves, which gained him his doctoral degree, with honours, on 13 July 1883.
Career
In 1883, Eijkman left the Netherlands for the Dutch East Indies, where he was made medical officer of health, first in Semarang, then at Tjilatjap, a small village on the south coast of Java, and at Padang Sidempoean in northern Sumatra. It was at Tjilatjap that he caught malaria, which so impaired his health that in 1885 he had to return to Europe on sick leave.
For Eijkman this was to prove a lucky event, as it enabled him to work in E. Forster's laboratory in Amsterdam, and also in Robert Koch's bacteriological laboratory in Berlin; here he came into contact with C. A. Pekelharing and C. Winkler, who were v |
https://en.wikipedia.org/wiki/RNA%20recognition%20motif | RNA recognition motif, RNP-1 is a putative RNA-binding domain of about 90 amino acids that is known to bind single-stranded RNAs. It is found in many eukaryotic proteins.
The largest group of single strand RNA-binding protein is the eukaryotic RNA recognition motif (RRM) family that contains an eight amino acid RNP-1 consensus sequence.
RRM proteins have a variety of RNA binding preferences and functions, and include heterogeneous nuclear ribonucleoproteins (hnRNPs), proteins implicated in regulation of alternative splicing (SR, U2AF2, Sxl), protein components of small nuclear ribonucleoproteins (U1 and U2 snRNPs), and proteins that regulate RNA stability and translation (PABP, La, Hu). The RRM in heterodimeric splicing factor U2 snRNP auxiliary factor appears to have two RRM-like domains with specialised features for protein recognition. The motif also appears in a few single stranded DNA binding proteins.
The typical RRM consists of four anti-parallel beta-strands and two alpha-helices arranged in a beta-alpha-beta-beta-alpha-beta fold with side chains that stack with RNA bases. A third helix is present during RNA binding in some cases. The RRM is reviewed in a number of publications.
Human proteins containing this domain
A2BP1; ACF; BOLL; BRUNOL4; BRUNOL5; BRUNOL6; CCBL2; CGI-96;
CIRBP; CNOT4; CPEB2; CPEB3; CPEB4; CPSF7; CSTF2; CSTF2T;
CUGBP1; CUGBP2; D10S102; DAZ1; DAZ2; DAZ3; DAZ4; DAZAP1;
DAZL; DNAJC17; DND1; EIF3S4; EIF3S9; EIF4B; EIF4H; ELAVL1;
ELAVL2; ELAVL3; ELAVL4; ENOX1; ENOX2; EWSR1; FUS; FUSIP1;
G3BP; G3BP1; G3BP2; GRSF1; HNRNPL; HNRPA0; HNRPA1; HNRPA2B1;
HNRPA3; HNRPAB; HNRPC; HNRPCL1; HNRPD; HNRPDL; HNRPF; HNRPH1;
HNRPH2; HNRPH3; HNRPL; HNRPLL; HNRPM; HNRPR; HRNBP1; HSU53209;
HTATSF1; IGF2BP1; IGF2BP2; IGF2 |
https://en.wikipedia.org/wiki/Google%20Safe%20Browsing | Google Safe Browsing is a service from Google that warns users when they attempt to navigate to a dangerous website or download dangerous files. Safe Browsing also notifies webmasters when their websites are compromised by malicious actors and helps them diagnose and resolve the problem. This protection works across Google products and is claimed to “power safer browsing experiences across the Internet”. It lists URLs for web resources that contain malware or phishing content. Browsers like Google Chrome, Safari, Firefox, Vivaldi, Brave and GNOME Web use these lists from Google Safe Browsing to check pages against potential threats. Google also provides a public API for the service.
Google provides information to Internet service providers by sending email alerts to autonomous system operators regarding threats hosted on their networks. As of September 2017, over 3 billion Internet devices are protected by the service. Alternatives are offered by both Tencent and Yandex.
Clients protected
Web browsers: Google Chrome, Safari, Firefox, Vivaldi, Brave and GNOME Web.
Android: Google Play Protect, Verify Apps API
Google Messages
Google Search
Google AdSense
Gmail
Instagram
Privacy
Google maintains the Safe Browsing Lookup API, which has a privacy drawback: "The URLs to be looked up are not hashed so the server knows which URLs the API users have looked up". The Safe Browsing Update API, on the other hand, compares 32-bit hash prefixes of the URL to preserve privacy. The Chrome, Firefox and Safari browsers use the latter.
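A sketch of the prefix idea (the real Update API first canonicalizes the URL and generates several host/path suffix expressions; only the hashing and truncation step is shown here):

```python
import hashlib

def hash_prefix(url_expression, n_bytes=4):
    """SHA-256 hash of a URL expression, truncated to a short prefix.

    The client compares such 32-bit (4-byte) prefixes against a locally
    stored list; only on a prefix match is a full-hash lookup sent to the
    server, so the server does not learn most URLs the user visits.
    """
    return hashlib.sha256(url_expression.encode("utf-8")).digest()[:n_bytes]
```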
Safe Browsing also stores a mandatory preferences cookie on the computer.
Google Safe Browsing "conducts client-side checks. If a website looks suspicious, it sends a subset of likely phishing and social engineering terms found on the page to Google to obtain additional information available from Google's servers on whether the website should be considered malicious". Logs, which include an IP address and one or more cookies, are kept |
https://en.wikipedia.org/wiki/Nitrogen-15%20tracing | Nitrogen-15 (15N) tracing is a technique to study the nitrogen cycle using the heavier, stable nitrogen isotope 15N. Despite the different weights, 15N is involved in the same chemical reactions as the more abundant 14N and is therefore used to trace and quantify conversions of one nitrogen compound to another. 15N tracing is applied in biogeochemistry, soil science, environmental science, environmental microbiology and small molecule activation research.
Applications
15N tracing allows researchers to distinguish specific nitrogen conversions from a network of simultaneous reactions; e.g. ammonium can at the same time be oxidised by autotrophic microorganisms, produced by mineralisation of organic matter, produced by dissimilatory nitrate reduction and assimilated by microbes and plants. In this case, quantifying the absolute amounts of ammonium does not explain how it is produced or consumed. However, the conversion of one 15N labelled compound to another can directly be linked through the isotopic signature.
15N tracing has been applied to quantify rates of nitrogen transformations in soil and to distinguish the sources of the greenhouse gas nitrous oxide under different environmental conditions.
Methodical approaches
The two main approaches are natural abundance and enrichment techniques.
Natural abundance techniques
Natural abundance techniques can be applied without artificial disturbance. The natural 15N abundances are expressed in delta (δ) notation relative to the 15N concentration in the air. Due to enzymatic discrimination, natural 15N abundances change slightly in microbially mediated reactions in soil. Apart from δ values, the site preference of 15N and 14N (isotopomers) for the inner or outer position within the nitrous oxide molecule has been used to determine its sources (nitrification or denitrification).
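A small illustration of the delta notation, assuming the commonly used 15N/14N ratio of atmospheric N2 (approximately 0.0036765) as the reference standard:

```python
R_AIR = 0.0036765  # approximate 15N/14N isotope ratio of atmospheric N2

def delta_15n(r_sample):
    """delta-15N in per mil relative to air: (R_sample / R_air - 1) * 1000."""
    return (r_sample / R_AIR - 1.0) * 1000.0
```

A sample whose 15N/14N ratio is 0.5% above that of air thus has a delta-15N of +5 per mil; enzymatic discrimination in soil typically shifts these values by only a few per mil.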
Enrichment techniques
When nitrogen substrates are artificially enriched (labeled) with 15N, the product of a reaction can directly be lin |
https://en.wikipedia.org/wiki/Respiratory%20quotient | The respiratory quotient (RQ or respiratory coefficient) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry. It is measured using a respirometer. The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7, for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity.
It can be used in the alveolar gas equation.
Calculation
The respiratory quotient (RQ) is the ratio:
RQ = CO2 eliminated / O2 consumed
where the term "eliminated" refers to carbon dioxide (CO2) removed from the body.
In this calculation, the CO2 and O2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles, or else volumes of gas at standard temperature and pressure.
Many metabolized substances are compounds containing only the elements carbon, hydrogen, and oxygen. Examples include fatty acids, glycerol, carbohydrates, deamination products, and ethanol. For complete oxidation of such compounds, the chemical equation is
CxHyOz + (x + y/4 - z/2) O2 → x CO2 + (y/2) H2O,
and thus metabolism of this compound gives an RQ of x/(x + y/4 - z/2).
For glucose, with the molecular formula C6H12O6, the complete oxidation equation is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. Thus, the RQ = 6 CO2 / 6 O2 = 1.
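The general formula can be wrapped in a small helper; glucose and palmitic acid below are standard worked examples:

```python
def rq(x, y, z):
    """Respiratory quotient for complete oxidation of CxHyOz:
    RQ = x / (x + y/4 - z/2), i.e. moles CO2 produced per mole O2 consumed."""
    o2_consumed = x + y / 4.0 - z / 2.0
    return x / o2_consumed
```

For glucose (C6H12O6) this gives exactly 1; for palmitic acid (C16H32O2) it gives 16/23 ≈ 0.70, matching the approximate value of 0.7 quoted for lipids.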
Fo |
https://en.wikipedia.org/wiki/Final%20Fantasy%20%28video%20game%29 | Final Fantasy is a fantasy role-playing video game developed and published by Square in 1987. It is the first game in Square's Final Fantasy series, created by Hironobu Sakaguchi. Originally released for the NES, Final Fantasy was remade for several video game consoles and is frequently packaged with Final Fantasy II in video game collections. The first Final Fantasy story follows four youths called the Warriors of Light, who each carry one of their world's four elemental crystals which have been darkened by the four Elemental Fiends. Together, they quest to defeat these evil forces, restore light to the crystals, and save their world.
Final Fantasy was originally conceived under the working title Fighting Fantasy, but trademark issues and dire circumstances surrounding Square as well as Sakaguchi himself prompted the name to be changed. The game was a great commercial success, received generally positive reviews, and spawned many successful sequels and supplementary titles in the form of the Final Fantasy series. The original is now regarded as one of the most influential and successful role-playing games on the Nintendo Entertainment System, playing a major role in popularizing the genre. Critical praise focused on the game's graphics, while criticism targeted the time spent wandering in search of random battle encounters to raise the player's experience level. By March 2003, all versions of Final Fantasy had sold a combined two million copies worldwide.
Gameplay
Final Fantasy has four basic game modes: an overworld map, town and dungeon maps, a battle screen, and a menu screen. The overworld map is a scaled-down version of the game's fictional world, which the player uses to direct characters to various locations. The primary means of travel across the overworld is by foot; a ship, a canoe, and an airship become available as the player progresses. With the exception of some battles in preset locations or with bosses, enemies are randomly encountered on field maps and on the |
https://en.wikipedia.org/wiki/Phase-space%20formulation | The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.
The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe Moyal, each building on earlier ideas by Hermann Weyl and Eugene Wigner.
The chief advantage of the phase-space formulation is that it makes quantum mechanics appear as similar to Hamiltonian mechanics as possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of the Hilbert space". This formulation is statistical in nature and offers logical connections between quantum mechanics and classical statistical mechanics, enabling a natural comparison between the two (see classical limit). Quantum mechanics in phase space is often favored in certain quantum optics applications (see optical phase space), or in the study of decoherence and a range of specialized technical problems, though otherwise the formalism is less commonly employed in practical situations.
The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as Kontsevich's deformation-quantization (see Kontsevich quantization formula) and noncommutative geometry.
Phase-space distribution
The phase-space distribution of a quantum state is a quasiprobability distribution. In the phase-space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.
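The best-known such quasiprobability distribution is the Wigner function; for a pure state with wave function ψ(x) it reads

```latex
W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
         \psi^{*}(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy ,
```

which is real and normalized but may take negative values, which is why it is a quasiprobability rather than a true probability density.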
There are several different ways |
https://en.wikipedia.org/wiki/Metacomputing | Metacomputing is all computing and computing-oriented activity which involves computing knowledge (science and technology) utilized for the research, development and application of different types of computing. It may also deal with numerous types of computing applications, such as: industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranet and other territorially distributed computer networks for special purposes.
Uses
In computer science
Metacomputing, as the computing of computing, includes the organization of large computer networks, the choice of design criteria (for example, a peer-to-peer or centralized solution) and the development of metacomputing software (middleware, metaprogramming). In specific domains, the concept of metacomputing describes software meta-layers: networked platforms for the development of user-oriented calculations, for example in computational physics and bioinformatics.
Here, serious scientific problems of systems/networks complexity emerge, not only related to domain-dependent complexities but focused on systemic meta-complexity of computer network infrastructures.
Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems are functional as fifth-generation computer languages which require the use of an underlying metaprocessor software operating system in order to be operative. Typically metacomputing occurs in an interpreted or real-time compiling system since the changing nature of information in processing results may result in an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform).
In socio-cognitive engineering
From the human and social perspectives, metacomputing is especially focused on: |
https://en.wikipedia.org/wiki/Desulfonatronovibrio%20thiodismutans | Desulfonatronovibrio thiodismutans is a species of haloalkaliphilic sulfate-reducing bacteria. It is able to grow lithotrophically by dismutation of thiosulfate and sulfite. |
https://en.wikipedia.org/wiki/AMELY | Amelogenin, Y isoform is a protein that in humans is encoded by the AMELY gene. AMELY is located on the Y chromosome and encodes a form of amelogenin. Amelogenin is an extracellular matrix protein involved in biomineralization during tooth enamel development.
Clinical significance
Mutations in the related AMELX gene on the X chromosome cause X-linked amelogenesis imperfecta. |
https://en.wikipedia.org/wiki/Forest%20floor | The forest floor, also called detritus or duff, is the part of a forest ecosystem that mediates between the living, aboveground portion of the forest and the mineral soil, principally composed of dead and decaying plant matter such as rotting wood and shed leaves. In some countries, like Canada, forest floor refers to L, F and H organic horizons. It hosts a wide variety of decomposers and predators, including invertebrates, fungi, algae, bacteria, and archaea.
The forest floor serves as a bridge between the above ground living vegetation and the soil, and thus is a crucial component in nutrient transfer through the biogeochemical cycle. Leaf litter and other plant litter transmits nutrients from plants to the soil. The plant litter of the forest floor (or L horizon) prevents erosion, conserves moisture, and provides nutrients to the entire ecosystem. The F horizon consists of plant material in which decomposition is apparent, but the origins of plant residues are still distinguishable. The H horizon consists of well-decomposed plant material so that plant residues are not recognizable, with the exception of some roots or wood.
The nature of the distinction between organisms "in" the soil and components "of" the soil is disputed, with some questioning whether such a distinction exists at all. The majority of carbon storage and biomass production in forests occurs below ground. Despite this, conservation policy and scientific study tends to neglect the below-ground portion of the forest ecosystem. As a crucial part of soil and the below-ground ecosystem, the forest floor profoundly impacts the entire forest.
Much of the energy and carbon fixed by forests is periodically added to the forest floor through litterfall, and a substantial portion of the nutrient requirements of forest ecosystems is supplied by decomposition of organic matter in the forest floor and soil surface. Decomposers, such as arthropods and fungi, are necessary for the transformation of dead orga |
https://en.wikipedia.org/wiki/Fluentd | Fluentd is a cross-platform open-source data collection software project originally developed at Treasure Data. It is written primarily in the Ruby programming language.
Overview
Fluentd was positioned for "big data", semi- or unstructured data sets. It analyzes event logs, application logs, and clickstreams. According to Suonsyrjä and Mikkonen, the "core idea of Fluentd is to be the unifying layer between different types of log inputs and outputs." Fluentd is available on Linux, macOS, and Windows.
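As an illustration of that unifying-layer role, a minimal Fluentd configuration (the path and tag here are hypothetical examples) tails a JSON-formatted log file and routes the parsed events to standard output:

```
# input: follow a JSON-formatted application log (path and tag are examples)
<source>
  @type tail
  path /var/log/app.log
  pos_file /var/log/app.log.pos
  tag app.access
  <parse>
    @type json
  </parse>
</source>

# output: print every event whose tag matches app.** to stdout
<match app.**>
  @type stdout
</match>
```

Swapping the `<match>` block for a different output plugin redirects the same event stream to another destination without touching the input side.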
History
Fluentd was created by Sadayuki Furuhashi as a project of the Mountain View-based firm Treasure Data. Written primarily in Ruby, its source code was released as open-source software in October 2011.
The company announced $5 million of funding in 2013.
Treasure Data was then sold to Arm Ltd. in 2018.
Users
Fluentd was one of the data collection tools recommended by Amazon Web Services in 2013, when it was said to be similar to Apache Flume or Scribe. Google Cloud Platform's BigQuery recommends Fluentd as the default real-time data-ingestion tool, and uses Google's customized version of Fluentd, called google-fluentd, as a default logging agent.
Fluent Bit
Fluent Bit is a log processor and log forwarder developed as a CNCF sub-project under the umbrella of the Fluentd project. Fluentd is written in C and Ruby and built as a Ruby gem, so it consumes a certain amount of memory. Fluent Bit, by contrast, is written only in C and has no dependencies, so its memory usage is much lower than Fluentd's, making it easy to run in embedded Linux and container environments. |
https://en.wikipedia.org/wiki/Distinctness%20of%20image | Distinctness of image (DOI) is a quantification of the deviation of the direction of light propagation from the regular direction by scattering during transmission or reflection. DOI is sensitive to even subtle scattering effects; the more light is being scattered out of the regular direction the more the initially sharp (well defined) image is blurred (that is, small details are lost). In polluted air it is the sum of all particles of various dimensions (dust, aerosols, vapor, etc.) that induces haze.
DOI is measured to characterize the visual appearance of polished high-gloss surfaces, such as automotive finishes and mirrors, beyond the capabilities of gloss measurement alone.
Other appearance phenomena are: gloss, haze, and orange peel. Various categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables. In this system, DOI is connected to the variable called diffusivity.
Reflected Image Quality (RIQ) vs. DOI
DOI is not sensitive to low amounts of orange peel on highest quality surfaces.
RIQ has more proportionate response to orange peel on a wider range of surface finishes.
RIQ works well in differentiating low gloss surfaces with different specular/diffuse components.
Parameters that affect RIQ
Substrate alignment (horizontal/vertical)
Coating formulation
Substrate
Application technique |
https://en.wikipedia.org/wiki/Network%20synthesis%20filters | In signal processing, network synthesis filters are filters designed by the network synthesis method. The method has produced several important classes of filter including the Butterworth filter, the Chebyshev filter and the Elliptic filter. It was originally intended to be applied to the design of passive linear analogue filters but its results can also be applied to implementations in active filters and digital filters. The essence of the method is to obtain the component values of the filter from a given rational function representing the desired transfer function.
Description of method
The method can be viewed as the inverse problem of network analysis. Network analysis starts with a network and by applying the various electric circuit theorems predicts the response of the network. Network synthesis on the other hand, starts with a desired response and its methods produce a network that outputs, or approximates to, that response.
Network synthesis was originally intended to produce filters of the kind formerly described as wave filters but now usually just called filters. That is, filters whose purpose is to pass waves of certain frequencies while rejecting waves of other frequencies. Network synthesis starts out with a specification for the transfer function of the filter, H(s), as a function of complex frequency, s. This is used to generate an expression for the input impedance of the filter (the driving point impedance) which then, by a process of continued fraction or partial fraction expansions results in the required values of the filter components. In a digital implementation of a filter, H(s) can be implemented directly.
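A sketch of the continued-fraction (Cauer) step under simplifying assumptions: coefficients are exact, the numerator degree exceeds the denominator degree by one at every stage, and terminations are ignored. Each extracted quotient term c·s corresponds to one reactive ladder element:

```python
def cauer_expand(num, den, tol=1e-12):
    """Continued-fraction expansion of num(s)/den(s), given as coefficient
    lists with the highest power first.  Repeatedly extracts a quotient
    term c*s and inverts the remainder, collecting one ladder element
    value per step."""
    elements = []
    while len(num) > 1 and den:
        c = num[0] / den[0]                 # quotient term c*s
        elements.append(c)
        # remainder = num - c*s*den; its leading coefficient cancels
        rem = [a - c * b for a, b in zip(num[1:], den[1:])] + [num[-1]]
        while rem and abs(rem[0]) < tol:    # strip cancelled leading terms
            rem = rem[1:]
        num, den = den, rem                 # invert for the next division
    return elements
```

Applied to the ratio of the odd part (s^3 + 2s) to the even part (2s^2 + 1) of the third-order Butterworth polynomial s^3 + 2s^2 + 2s + 1, it yields the normalized element values 1/2, 4/3 and 3/2 of the singly terminated ladder.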
The advantages of the method are best understood by comparing it to the filter design methodology that was used before it, the image method. The image method considers the characteristics of an individual filter section in an infinite chain (ladder topology) of identical sections. The filters produced by this method suffer f |
https://en.wikipedia.org/wiki/Reverse%20correlation%20technique | The reverse correlation technique is a data-driven study method used primarily in psychological and neurophysiological research. The method earned its name from its origins in neurophysiology, where cross-correlations between white-noise stimuli and sparsely occurring neuronal spikes could be computed more quickly by computing them only for the segments preceding the spikes. The term has since been adopted in psychological experiments that usually do not analyze the temporal dimension but also present noise to human participants. In contrast to the original meaning, the term here reflects that the standard psychological practice of presenting stimuli of defined categories to the participants is "reversed": instead, the participant's mental representations of categories are estimated from interactions between the presented noise and the behavioral responses. It is used to create composite pictures of individual and/or group mental representations of various items (e.g. faces, bodies, and the self) that depict characteristics of said items (e.g. trustworthiness and self-body image). This technique is helpful when evaluating the mental representations of those with and without mental illnesses.
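A toy, self-contained sketch of the psychological variant: a simulated observer says "yes" when random noise correlates with a hidden internal template, and averaging the noise by response recovers the template's shape. All names and parameters here are illustrative, not a real experimental protocol.

```python
import random

def classification_image(n_trials=4000, n_pixels=16, seed=1):
    """Reverse correlation on a simulated observer.

    Returns mean(noise | 'yes') - mean(noise | 'no'), which should be
    positive where the hidden template is positive and negative where
    it is negative."""
    rng = random.Random(seed)
    # hidden template the simulated observer matches the noise against
    template = [1.0 if i < n_pixels // 2 else -1.0 for i in range(n_pixels)]
    sums = {True: [0.0] * n_pixels, False: [0.0] * n_pixels}
    counts = {True: 0, False: 0}
    for _ in range(n_trials):
        noise = [rng.gauss(0.0, 1.0) for _ in range(n_pixels)]
        says_yes = sum(n * t for n, t in zip(noise, template)) > 0.0
        counts[says_yes] += 1
        for i, n in enumerate(noise):
            sums[says_yes][i] += n
    return [sums[True][i] / counts[True] - sums[False][i] / counts[False]
            for i in range(n_pixels)]
```

Even though each individual noise field is unstructured, the response-conditioned average converges on the observer's internal template, which is the core logic of a classification-image experiment.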
Terms
This technique utilizes the spike-triggered average to determine which areas of signal and noise in an image are valuable for the given research question. Signal is the information used to produce objects of value that help explain and connect the world around us. Noise is commonly defined as unwanted signal that obscures the information the signal is trying to present; most importantly for reverse correlation studies, noise is randomly varying information. To determine the areas of importance using reverse correlation, noise is applied to a base image and then evaluated by observers. A base image is any image devoid of noise that relates to the research question. A base image with noise superimposed on top is the stimulus that is presented to and evaluated by participan |
https://en.wikipedia.org/wiki/Paraneoplastic%20syndrome | A paraneoplastic syndrome is a syndrome (a set of signs and symptoms) that is the consequence of a tumor in the body (usually a cancerous one). It is specifically due to the production of chemical signaling molecules (such as hormones or cytokines) by tumor cells or by an immune response against the tumor. Unlike a mass effect, it is not due to the local presence of cancer cells.
Paraneoplastic syndromes are typical among middle-aged to older people, and they most commonly occur with cancers of the lung, breast, ovaries or lymphatic system (a lymphoma). Sometimes, the symptoms of paraneoplastic syndromes show before the diagnosis of a malignancy, which has been hypothesized to relate to the disease pathogenesis. In this paradigm, tumor cells express tissue-restricted antigens (e.g., neuronal proteins), triggering an anti-tumor immune response which may be partially or, rarely, completely effective in suppressing tumor growth and symptoms. Patients then come to clinical attention when this tumor immune response breaks immune tolerance and begins to attack the normal tissue expressing that (e.g., neuronal) protein.
The abbreviation PNS is sometimes used for paraneoplastic syndrome, although it is used more often to refer to the peripheral nervous system.
Signs and symptoms
Symptomatic features of paraneoplastic syndrome manifest in four ways: endocrine, neurological, mucocutaneous, and hematological. The most common presentation is a fever (release of endogenous pyrogens often related to lymphokines or tissue pyrogens), but the overall picture often includes several clinical features that can mimic more common benign conditions.
Endocrine
The following diseases manifest by means of endocrine dysfunction: Cushing syndrome, syndrome of inappropriate antidiuretic hormone, hypercalcemia, hypoglycemia, carcinoid syndrome, and hyperaldosteronism.
Neurological
The following diseases manifest by means of neurological dysfunction: Lambert–Ea |
https://en.wikipedia.org/wiki/Arzak | Arzak is a restaurant in San Sebastián, Spain. It features New Basque Cuisine. In 2008, Arzak's owner and chef, Juan Mari Arzak, was awarded the Universal Basque award for "adapting gastronomy, one of the most important traditions of the Basque Country, to the new times and making of it one of the most innovative of the world".
Description
Arzak serves modernized Basque cuisine, inspired by cooking techniques and presentation found in nouvelle cuisine. The restaurant is inside an old multiple-storied brick building, with the restaurant on the main floor. Above the restaurant is a wine cellar with over 100,000 bottles of wine, as well as a test kitchen. The test kitchen, established in 2000, is where the Arzaks come up with new recipes and methods. It houses over 1500 ingredients that can potentially serve as inspiration for a dish. Assisted by chefs Igor Zalacain and Xabi Gutiérrez, the Arzak test kitchen comes up with 40 new dishes a year, stemming from recipes that are created and refined over a period of three to six months.
History
The building was constructed as a house in 1897 by the current owner's grandparents (José Maria Arzak Etxabe and Escolastica Lete), who turned it into a wine inn and tavern. At the time, the village of Alza was separate from San Sebastián. When the next generation took over (Juan Ramon Arzak and Francisca Arratibel), it was turned into a restaurant. Kitchen duties are now shared between Juan Mari Arzak and his daughter Elena Arzak.
Awards and honors
1989–present, Michelin Guide Three Stars
2003–present, Restaurant (magazine) S.Pellegrino World's 50 Best Restaurants
2013, 8th best restaurant in the world, S. Pellegrino |
https://en.wikipedia.org/wiki/Leopold%20B.%20Felsen | Leopold B. Felsen (born in Munich in 1924; died in the US September 24, 2005) was an electrical engineer and physicist known for studies of electromagnetism and wave-based disciplines. He had to flee Germany at 16 because of the Nazis. He made fundamental contributions to electromagnetic field analysis.
Academic life
Leopold B. Felsen was a professor at Polytechnic University of New York and at Boston University College of Engineering, an IEEE life fellow and a fellow of both the Acoustical Society of America and the Optical Society of America. He earned bachelor's, master's and doctoral degrees from what was then the Polytechnic Institute of Brooklyn.
Awards
In 1991 he won the IEEE Heinrich Hertz Medal.
Publications
Leopold B. Felsen and Nathan Marcuvitz, Radiation and Scattering of Waves (1994), 888 pages. |
https://en.wikipedia.org/wiki/Isoantibodies | Isoantibodies, also known as alloantibodies, are antibodies produced by an individual against isoantigens produced by members of the same species. In the case of the species Homo sapiens, for example, there are a significant number of antigens that differ between individuals. When antigens from one individual are introduced into the body of another who lacks them, these isoantibodies immediately bind to and destroy them.
One common example is the isohaemagglutinins, which are responsible for blood transfusion reactions. This usage may differ from the term 'natural' antibodies, or simply 'antibodies', as the former seem to arise from genetic control without apparent antigenic stimulation whereas the latter arise due to antigenic stimulation.
Isoantigens
An isoantigen is a protein or other substance, such as a histocompatibility or red blood cell antigen, that is present in only some members of a species and therefore able to stimulate isoantibody production in other members of the same species who lack it. When injected into another animal, isoantigens trigger an immune response aimed at eliminating them. An isoantigen can thus be thought of as an antigen that is present in some members of a species but not common to all of them: if it is presented to a member of the same species that lacks it, it will be recognized as foreign. Isoantigens are the products of polymorphic genes.
Production of isohaemagglutinins
Isoantibodies are seen in people with different blood groups. The anti-A or anti-B isoantibodies or both (also called isohaemagglutinins) are produced by an individual against the antigens (A or B) on the RBCs of other blood groups. In a person with A blood group, the plasma will contain isoantibodies against B antigens, so immediately after transfusion of blood from B group the anti-B isohemagglutinins agglutinate the foreign red blood cells.
Anti-A and anti-B antibodies (called isohaemagglutinins), which are not present in human |
https://en.wikipedia.org/wiki/Line%20Mode%20Browser | The Line Mode Browser (also known as LMB, WWWLib, or just www) is the second web browser ever created.
The browser was the first demonstrated to be portable to several different operating systems.
Operated from a simple command-line interface, it could be widely used on many computers and computer terminals throughout the Internet.
The browser was developed starting in 1990, and then supported by the World Wide Web Consortium (W3C) as an example and test application for the libwww library.
History
One of the fundamental concepts of the "World Wide Web" projects at CERN was "universal readership". In 1990, Tim Berners-Lee had already written the first browser, WorldWideWeb (later renamed to Nexus), but that program only worked on the proprietary software of NeXT computers, which were in limited use. Berners-Lee and his team could not port the WorldWideWeb application with its features—including the graphical WYSIWYG editor— to the more widely deployed X Window System, since they had no experience in programming it. The team recruited Nicola Pellow, a math student intern working at CERN, to write a "passive browser" so basic that it could run on most computers of that time.
The name "Line Mode Browser" refers to the fact that, to ensure compatibility with the earliest computer terminals such as Teletype machines, the program displayed text only (no images) and had only line-by-line text input (no cursor positioning).
Development started in November 1990 and the browser was demonstrated in December 1990.
The development environment used resources from the PRIAM project, a French language acronym for "PRojet Interdivisionnaire d'Assistance aux Microprocesseurs", a project to standardise microprocessor development across CERN.
The short development time produced software in a simplified dialect of the C programming language. The official standard ANSI C was not yet available on all platforms.
The Line Mode Browser was released to a limited audience on VAX, RS/6000 a |
https://en.wikipedia.org/wiki/Open%20Control%20Architecture | The Open Control Architecture (OCA) is a communications protocol architecture for control, monitoring, and connection management of networked audio and video devices. Such networks are referred to as "media networks".
The official specification of OCA is the Audio Engineering Society (AES) standard known as AES70-2015, or just AES70.
AES70 is an open standard that may be used freely, without licenses, fees, or organization memberships.
Applicability
AES70 is intended to support media networks that combine devices from diverse manufacturers. Targeted for professional applications, AES70 is suitable for media networks of 2 to 10,000 devices, including networks with mission-critical and/or life-safety roles.
AES70 is for device control, monitoring, and connection management only. It does not provide transport of media program material. However, AES70 is designed to work with virtually any media transport scheme, as the application requires.
AES70's parts are separable and may be used independently. For example, a device may implement AES70 connection management, but use other means for operational control and monitoring.
AES70 is termed an "architecture" because it provides the basis for definition of multiple control protocols. These protocols all share a common programming model, but vary in signalling detail, depending on the form of the underlying data transport mechanism. An AES70 application will use whichever AES70 protocol is appropriate for the communications method available.
Background
OCA, the architecture of AES70, was developed by the OCA Alliance, a trade association, beginning in 2011. OCA was based on an existing control protocol named OCP, which had been created by Bosch Communications Systems in 2009 and 2010. OCP was in turn based on an embryonic control protocol standard named AES-24, developed by the AES in the early 1990s.
From the outset, it was the intention of all involved to have OCA rendered into an open public standar |
https://en.wikipedia.org/wiki/Integument | In biology, an integument is the tissue surrounding an organism's body or an organ within, such as skin, a husk, shell, germ or rind.
Etymology
The term is derived from integumentum, which is Latin for "a covering". In a transferred, or figurative, sense it could mean a cloak or a disguise. In English, "integument" is a fairly modern word, its origin having been traced back to the early seventeenth century; it refers to a material or layer with which anything is enclosed, clothed, or covered in the sense of "clad" or "coated", as with a skin or husk.
Botanical usage
In botany, the term "integument" may be used as it is in zoology, referring to the covering of an organ. When the context indicates nothing to the contrary, the word commonly refers to an envelope covering the nucellus of the ovule. The integument may consist of one layer (unitegmic) or two layers (bitegmic), each of which consisting of two or more layers of cells. The integument is perforated by a pore, the micropyle, through which the pollen tube can enter. It may develop into the testa, or seed coat.
Zoological usage
The integument of an organ in zoology typically would comprise membranes of connective tissue such as those around a kidney or liver. In referring to the integument of an animal, the usual sense is its skin and its derivatives: the integumentary system, where "integumentary" is a synonym of "cutaneous".
In arthropods, the integument, or external "skin", consists of a single layer of epithelial ectoderm from which arises the cuticle, an outer covering of chitin, the rigidity of which varies according to its chemical composition.
Derivative terms and sundry usages
Derivative terms include various adjectival forms such as integumentary (e.g. system), integumental (e.g. integumental glands, "peltate glands, the integument being raised like a bladder due to abundant secretion") and integumented (as opposed to bare).
Other illustrative examples of usage occur in the following articles:
Conne |
https://en.wikipedia.org/wiki/Triangular%20function | A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window.
Definitions
The most common definition is as a piecewise function:
Equivalently, it may be defined as the convolution of two identical unit rectangular functions:
The triangular function can also be represented as the product of the rectangular and absolute value functions:
Note that some authors instead define the triangle function to have a base of width 1 instead of width 2:
In its most general form a triangular function is any linear B-spline:
Whereas the definition at the top is a special case
where , , and .
A linear B-spline is the same as a continuous piecewise linear function , and this general triangle function is useful to formally define as
where for all integer .
The piecewise linear function passes through every point expressed as coordinates with ordered pair , that is,
.
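As a concrete sketch (assuming the common unit definition above, height 1 and base 2, so that tri(x) = max(1 − |x|, 0); the function names are illustrative), the equivalence between the piecewise definition and the convolution of two identical unit rectangular functions can be checked numerically:

```python
def tri(x):
    """Unit triangular function: height 1, base 2 (the common definition)."""
    return max(1.0 - abs(x), 0.0)

def rect(x):
    """Unit rectangular function: width 1, centered at 0."""
    return 1.0 if abs(x) <= 0.5 else 0.0

def conv_rect_rect(x, n=20000):
    """Midpoint-rule approximation of (rect * rect)(x) over t in [-1, 1]."""
    dt = 2.0 / n
    return sum(rect(-1.0 + (i + 0.5) * dt) * rect(x - (-1.0 + (i + 0.5) * dt))
               for i in range(n)) * dt

# The convolution of two unit rects reproduces the triangular function:
for x in (-0.75, 0.0, 0.3, 0.9):
    assert abs(tri(x) - conv_rect_rect(x)) < 1e-3
```

The same scaffold extends to the B-spline view: convolving rect with tri yields the quadratic B-spline, and so on.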
Scaling
For any parameter :
Fourier transform
The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function:
where is the normalized sinc function.
See also
Källén function, also known as triangle function
Tent map
Triangular distribution
Triangle wave, a piecewise linear periodic function |
https://en.wikipedia.org/wiki/Live%20event%20support | Live event support includes staging, scenery, mechanicals, sound, lighting, video, special effects, transport, packaging, communications, costume and makeup for live performance events including theater, music, dance, and opera. They all share the same goal: to convince live audience members that there is no better place they could be at the moment. This is achieved by establishing a bond between performer and audience. Live performance events have a long tradition of visual scenery, lighting, and costume, and a shorter history of visual projection and sound amplification.
Visual support
Live event visual amplification
Introduction
Live event visual amplification is the display of live and pre-recorded images as a part of a live stage event. Visual amplification began when films, projected onto a stage, added characters or background information to a production.
35 mm motion picture projectors became available in 1910, but which theatre or opera company first used a movie in a stage production is not known. In 1935, less costly 16 mm film equipment allowed many other performance groups and school theaters to use motion pictures in productions.
In 1970, closed circuit video cameras and videocassette machines became available and live event visual amplification came of age. For the first time, live closeups of stage performers could be displayed in real time. These systems also made it possible to show pre-recorded videos that added information and visual intensity to a live event.
One of the first video touring systems was created by video designer TJ McHose in 1975 for the rock band The Tubes using black and white television monitors. In 1978, TJ McHose designed a touring color video system that enlarged performers at the Kool Jazz Festivals in sports stadiums across the United States.
Live event visual reinforcement
Introduction
Live event visual reinforcement is the addition of projected lighting effects and images onto any type of per |
https://en.wikipedia.org/wiki/Permission%20%28philosophy%29 | Permission, in philosophy, is the attribute of a person whose performance of a specific action, otherwise ethically wrong or dubious, would thereby involve no ethical fault. The term "permission" is more commonly used to refer to consent. Consent is the legal embodiment of the concept, in which approval is given to another party.
Permissions depend on norms or institutions.
Many permissions and obligations are complementary to each other, and deontic logic is a tool sometimes used in reasoning about such relationships.
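As an illustrative toy model of that complementarity (all world names and propositions below are invented), standard deontic logic evaluates obligation and permission over a set of "deontically ideal" worlds, with permission as the dual of obligation: something is permitted exactly when its negation is not obligatory.

```python
# Toy Kripke-style model of standard deontic logic.
# All world names and propositions are invented for illustration.
ideal = {"w1", "w2"}             # the deontically ideal (norm-compliant) worlds
holds = {                        # which propositions hold at which worlds
    "pay_tax":   {"w1", "w2"},
    "park_here": {"w2", "w3"},
}

def obligatory(p):
    """O(p): p holds in every ideal world."""
    return all(w in holds[p] for w in ideal)

def permitted(p):
    """P(p): p holds in at least one ideal world (the dual of obligation)."""
    return any(w in holds[p] for w in ideal)

assert obligatory("pay_tax")                    # holds in all ideal worlds
assert permitted("park_here") and not obligatory("park_here")
```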
Further reading
Alexy, Robert, Theorie der Grundrechte, Suhrkamp, Frankfurt a. M.: 1985. Translation: A theory of constitutional rights, Oxford University Press, Oxford: 2002.
Raz, Joseph, Practical reason and norms, Oxford University, Oxford: 1975.
von Wright, G. H., Norm and action. A logical enquiry, Routledge & Kegan Paul, London: 1963.
Concepts in ethics
Deontic logic |
https://en.wikipedia.org/wiki/Rhomboid%20fossa | The rhomboid fossa is a rhombus-shaped depression that is the anterior part of the fourth ventricle. Its anterior wall, formed by the back of the pons and the medulla oblongata, constitutes the floor of the fourth ventricle.
It is covered by a thin layer of grey matter continuous with that of the spinal cord; superficial to this is a thin lamina of neuroglia which constitutes the ependyma of the ventricle and supports a layer of ciliated epithelium.
Parts
The fossa consists of three parts, superior, intermediate, and inferior:
The superior part
The superior part is triangular in shape and limited laterally by the superior cerebellar peduncle; its apex, directed upward, is continuous with the cerebral aqueduct; its base is represented by an imaginary line at the level of the upper ends of the superior foveae.
The intermediate part
The intermediate part extends from this level to that of the horizontal portions of the taeniae of the ventricle; it is narrow above where it is limited laterally by the middle peduncle, but widens below and is prolonged into the lateral recesses of the ventricle.
The inferior part
The inferior part is triangular, and its downwardly directed apex, named the calamus scriptorius (as it is shaped like a writing quill nib), is continuous with the central canal of the closed part of the medulla oblongata.
The sulcus limitans forms the lateral boundary of the medial eminence.
Features
In the superior part of the rhomboid fossa, the sulcus limitans corresponds with the lateral limit of the fossa and presents a bluish-gray area, the locus coeruleus, which owes its color to an underlying patch of deeply pigmented nerve cells, termed the substantia ferruginea.
At the level of the facial colliculus the sulcus limitans widens into a flattened depression, the superior fovea, and in the inferior part of the fossa appears as a distinct dimple, the inferior fovea.
Lateral to the foveæ is a rounded elevation named the area acustica, which extends into the lateral recess |
https://en.wikipedia.org/wiki/Leyland%20number | In number theory, a Leyland number is a number of the form
where x and y are integers greater than 1. They are named after the mathematician Paul Leyland. The first few Leyland numbers are
8, 17, 32, 54, 57, 100, 145, 177, 320, 368, 512, 593, 945, 1124 .
The requirement that x and y both be greater than 1 is important, since without it every positive integer would be a Leyland number of the form x^1 + 1^x. Also, because of the commutative property of addition, the condition x ≥ y is usually added to avoid double-covering the set of Leyland numbers (so we have 1 < y ≤ x).
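A short sketch (the function name is ours) that enumerates Leyland numbers x^y + y^x with 1 < y ≤ x, reproducing the opening list:

```python
def leyland_numbers(limit):
    """Enumerate Leyland numbers x**y + y**x with 1 < y <= x, up to limit."""
    found = set()
    x = 2
    # For a given x, the smallest candidate uses y = 2, so we can stop
    # once that value already exceeds the limit.
    while x ** 2 + 2 ** x <= limit:
        for y in range(2, x + 1):          # y <= x avoids double counting
            n = x ** y + y ** x
            if n <= limit:
                found.add(n)
        x += 1
    return sorted(found)

print(leyland_numbers(1200))
# → [8, 17, 32, 54, 57, 100, 145, 177, 320, 368, 512, 593, 945, 1124]
```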
Leyland primes
A Leyland prime is a Leyland number that is also a prime. The first such primes are:
17, 593, 32993, 2097593, 8589935681, 59604644783353249, 523347633027360537213687137, 43143988327398957279342419750374600193, ...
corresponding to
3^2+2^3, 9^2+2^9, 15^2+2^15, 21^2+2^21, 33^2+2^33, 24^5+5^24, 56^3+3^56, 32^15+15^32.
One can also fix the value of y and consider the sequence of x values that gives Leyland primes, for example x^2 + 2^x is prime for x = 3, 9, 15, 21, 33, 2007, 2127, 3759, ... ().
By November 2012, the largest Leyland number that had been proven to be prime was 5122^6753 + 6753^5122 with 25050 digits. From January 2011 to April 2011, it was the largest prime whose primality was proved by elliptic curve primality proving. In December 2012, this was improved by proving the primality of the two numbers 3110^63 + 63^3110 (5596 digits) and 8656^2929 + 2929^8656 (30008 digits), the latter of which surpassed the previous record. There are many larger known probable primes such as 314738^9 + 9^314738, but it is hard to prove primality of large Leyland numbers. Paul Leyland writes on his website: "More recently still, it was realized that numbers of this form are ideal test cases for general purpose primality proving programs. They have a simple algebraic description but no obvious cyclotomic properties which special purpose algorithms can exploit."
There is a project called XYYXF to factor composite |
https://en.wikipedia.org/wiki/Immune%20tolerance | Immune tolerance, or immunological tolerance, or immunotolerance, is a state of unresponsiveness of the immune system to substances or tissue that would otherwise have the capacity to elicit an immune response in a given organism. It is induced by prior exposure to that specific antigen and contrasts with conventional immune-mediated elimination of foreign antigens (see Immune response). Tolerance is classified into central tolerance or peripheral tolerance depending on where the state is originally induced—in the thymus and bone marrow (central) or in other tissues and lymph nodes (peripheral). The mechanisms by which these forms of tolerance are established are distinct, but the resulting effect is similar.
Immune tolerance is important for normal physiology. Central tolerance is the main way the immune system learns to discriminate self from non-self. Peripheral tolerance is key to preventing over-reactivity of the immune system to various environmental entities (allergens, gut microbes, etc.). Deficits in central or peripheral tolerance also cause autoimmune disease, resulting in syndromes such as systemic lupus erythematosus, rheumatoid arthritis, type 1 diabetes, autoimmune polyendocrine syndrome type 1 (APS-1), and immunodysregulation polyendocrinopathy enteropathy X-linked syndrome (IPEX), and potentially contribute to asthma, allergy, and inflammatory bowel disease. And immune tolerance in pregnancy is what allows a mother animal to gestate a genetically distinct offspring with an alloimmune response muted enough to prevent miscarriage.
Tolerance, however, also has its negative tradeoffs. It allows for some pathogenic microbes to successfully infect a host and avoid elimination. In addition, inducing peripheral tolerance in the local microenvironment is a common survival strategy for a number of tumors that prevents their elimination by the host immune system.
Historical background
The phenomenon of immune tolerance was first described by Ray D. Owen |
https://en.wikipedia.org/wiki/Erdal%20%C4%B0n%C3%B6n%C3%BC | Erdal İnönü (6 June 1926 – 31 October 2007) was a Turkish theoretical physicist and politician, who served as the interim prime minister of Turkey between 16 May and 25 June 1993. He also served as the deputy prime minister of Turkey from 1991 to 1993 and as the minister of foreign affairs from March to October 1995. He served as the leader of the Social Democracy Party (SODEP) from 1983 to 1985 and later the Social Democratic Populist Party (SHP) from 1986 to 1993. He was the son of the second president of Turkey, İsmet İnönü.
İnönü initially founded the SODEP in 1983 with the intention of contesting the 1983 general election. However, the National Security Council that had been established following the 1980 military coup banned İnönü from standing for office. Standing down as chairman in order to be replaced by a politician that could seek office, İnönü was succeeded by Cezmi Kartay. However, SODEP was banned completely from contesting the election, resulting in İnönü taking over as leader for a second time shortly after. His party contested the 1984 local elections and came second with 23.4% of the vote. SODEP merged with the People's Party in 1985 and İnönü became the leader of the new Social Democratic Populist Party (SHP) in 1986. In the 1986 parliamentary by-elections, the SHP came third with 22.7% of the vote and İnönü was elected as a member of Parliament for İzmir. He was the only successful SHP candidate.
Following the 1991 general election, the SHP formed a coalition with Süleyman Demirel's True Path Party (DYP) and İnönü became Deputy Prime Minister. He briefly served as the acting prime minister in 1993 after Demirel was elected president. After the DYP elected Tansu Çiller as their leader and she formed a government, İnönü continued as deputy prime minister until he resigned as party leader in 1993. He later served as foreign minister in 1995 until he stepped down as an MP in the 1995 general election.
Early life
He graduated from the Physics De |
https://en.wikipedia.org/wiki/Zebra%20Imaging | Zebra Imaging was a company that developed 3D digital holographic images, hologram imagers and interactive 3D displays for government and commercial uses. The company offered digital holograms that are autostereoscopic (no glasses or goggles required), full-parallax (viewing of the image from viewpoints above and below as well as from side to side) and in monochrome or full-color. It also developed a 3D Dynamic Display, capable of rendering holograms in real time; design work with 3D programs such as SketchUp and 123D Catch could be viewed on a holographic display while actively being edited.
History
Zebra Imaging was founded in 1996 by graduates of the Massachusetts Institute of Technology’s Media Laboratory. Its technology was based in part on work done at the MIT Media Laboratory’s Spatial Imaging Group under the direction of the late holography pioneer, Prof. Stephen Benton.
In 2017, Zebra Imaging announced the sale of its 3D Holographic Print Assets to HoloTech Switzerland AG.
Technology
The company was granted over 40 patents, with others pending in the U.S. and abroad. Zebra Imaging's 3D digital holographic technology presented multiple perspectives of an image simultaneously and independently to all viewers. The imagery in the static holograms could be subdivided into channels to show short animations or peel-away and overlay views. From 2005, Zebra Imaging developed dynamic (motion-capable) 3D display technology, supported in part by DARPA (Defense Advanced Research Projects Agency). In November 2011, Zebra's dynamic ZScape display was selected among Time Magazine's "50 Most Important Inventions of 2011".
Funding
In 2006, it completed a $5.9 million Series C round of funding led by Voyager Capital.
In 2013, the company raised $3.9 million financing collected from 11 investors.
See also
Digital holography
Hogel
Holography
List of companies based in Austin, Texas
MIT Media Lab
Volumetric display |
https://en.wikipedia.org/wiki/Kaymak | Kaymak, sarshir, or qashta/ashta ( ; or ; ), is a creamy dairy food similar to clotted cream, made from the milk of water buffalo, cows, sheep, or goats in Central Asia, some Balkan countries, some Caucasus countries, the countries of the Levant, Turkic regions, Iran and Iraq. In Poland, the name refers to a confection similar to dulce de leche instead.
The traditional method of making kaymak is to boil the raw milk slowly, then simmer it for two hours over a very low heat. After the heat source is shut off, the cream is skimmed and left to chill (and mildly ferment) for several hours or days. Kaymak has a high percentage of milk fat, typically about 60%. It has a thick, creamy consistency (not entirely compact, because of milk protein fibers) and a rich taste.
Etymology
The word kaymak has Central Asian Turkic origins, possibly formed from the verb , which means 'melt' and 'molding of metal' in Turkic. The first written record of the word kaymak is in the well-known book of Mahmud al-Kashgari, . The word remains as in Mongolian, which refers to a fried clotted cream, and with small variations in Turkic languages as in Azerbaijani, in Uzbek, in Kazakh and Shor, in Kyrgyz, in Turkish, in Turkmen, () in Georgian, () in Greek, кајмак (kajmak) in Serbo-Croatian, and caimac in Romanian.
This dairy food is called sarshir in Iran, a word meaning 'top of the milk': the name refers to the layer of fat that forms on top of the milk after it is boiled.
Turkey
Shops in Turkey have been devoted to kaymak production and consumption for centuries. Kaymak is mainly consumed today for breakfast along with the traditional Turkish breakfast. One type of kaymak is found in the Afyonkarahisar region where the water buffalo are fed from the residue of poppy seeds pressed for oil. Kaymak is traditionally eaten with baklava and other Turkish desserts, fruit preserve and honey or as a filling in pancakes.
Balkans
Known as , it is almost always made at home |
https://en.wikipedia.org/wiki/Bicyclic%20molecule | A bicyclic molecule () is a molecule that features two joined rings. Bicyclic structures occur widely, for example in many biologically important molecules like α-thujene and camphor. A bicyclic compound can be carbocyclic (all of the ring atoms are carbons), or heterocyclic (the rings' atoms consist of at least two elements), like DABCO. Moreover, the two rings can both be aliphatic (e.g. decalin and norbornane), or can be aromatic (e.g. naphthalene), or a combination of aliphatic and aromatic (e.g. tetralin).
Three modes of ring junction are possible for a bicyclic compound:
In spiro compounds, the two rings share only one single atom, the spiro atom, which is usually a quaternary carbon. An example of a spirocyclic compound is the photochromic switch spiropyran.
In fused/condensed bicyclic compounds, two rings share two adjacent atoms. In other words, the rings share one covalent bond, i.e. the bridgehead atoms are directly connected (e.g. α-thujene and decalin).
In bridged bicyclic compounds, the two rings share three or more atoms, separating the two bridgehead atoms by a bridge containing at least one atom. For example, norbornane, also known as bicyclo[2.2.1]heptane, can be viewed as a pair of cyclopentane rings each sharing three of their five carbon atoms. Camphor is a more elaborate example.
Nomenclature
Bicyclic molecules are described by IUPAC nomenclature. The root of the compound name depends on the total number of atoms in all rings together, possibly followed by a suffix denoting the functional group with the highest priority. Numbering of the carbon chain always begins at one bridgehead atom (where the rings meet) and follows the carbon chain along the longest path, to the next bridgehead atom. Then numbering is continued along the second longest path and so on. Fused and bridged bicyclic compounds get the prefix bicyclo, whereas spirocyclic compounds get the prefix spiro. In between the prefix and the suffix, a pair of brackets with numerals |
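The counting behind these names (total ring atoms = the three bridge lengths plus the two bridgeheads, with the bridge sizes cited in decreasing order between the brackets) can be sketched for simple saturated carbocycles; the helper function and its small alkane-root table are illustrative assumptions, not a full IUPAC implementation:

```python
# Alkane name roots for the ring sizes handled here (simple cases only).
ALKANE_ROOT = {5: "pent", 6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def bicyclo_name(bridges):
    """von Baeyer name of a saturated bicyclic hydrocarbon, given the number
    of atoms in each of the three bridges between the two bridgehead atoms."""
    a, b, c = sorted(bridges, reverse=True)   # bridges in decreasing order
    total = a + b + c + 2                     # bridge atoms + 2 bridgeheads
    return "bicyclo[{}.{}.{}]{}ane".format(a, b, c, ALKANE_ROOT[total])

print(bicyclo_name((2, 2, 1)))   # → bicyclo[2.2.1]heptane (norbornane)
print(bicyclo_name((0, 4, 4)))   # → bicyclo[4.4.0]decane (decalin: fused, bridge of 0)
```

Note how a fused bicyclic is just the special case where the shortest bridge has zero atoms.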
https://en.wikipedia.org/wiki/4th%20meridian%20west | The meridian 4° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, the Atlantic Ocean, Europe, Africa, the Southern Ocean, and Antarctica to the South Pole.
The 4th meridian west forms a great circle with the 176th meridian east.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 4th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="125" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Scotland
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | North Sea
| style="background:#b0e0e6;" | Dornoch Firth
|-
|
! scope="row" |
| Scotland
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | North Sea
| style="background:#b0e0e6;" | Moray Firth
|-
|
! scope="row" |
| Scotland — passing through Motherwell, just east of Glasgow (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Irish Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Wales — passing just west of Swansea (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bristol Channel
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| England — passing just east of Plymouth (at )
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | English Channel
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" | Bay of Biscay
|-
|
! scope="row" |
|
|-
| style="background:#b0e0e6;" |
! scope="row" st |
https://en.wikipedia.org/wiki/WebSocket | WebSocket is a computer communications protocol, providing simultaneous two-way communication channels over a single Transmission Control Protocol (TCP) connection. The WebSocket protocol was standardized by the IETF as in 2011. The current specification allowing web applications to use this protocol is known as WebSockets. It is a living standard maintained by the WHATWG and a successor to The WebSocket API from the W3C.
WebSocket is distinct from the Hypertext Transfer Protocol (HTTP) used to serve most webpages. Both protocols are located at layer 7 in the OSI model and depend on TCP at layer 4. Although they are different, states that WebSocket "is designed to work over HTTP ports 443 and 80 as well as to support HTTP proxies and intermediaries", thus making it compatible with HTTP. To achieve compatibility, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol.
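For example, the server's side of that Upgrade handshake computes the Sec-WebSocket-Accept response header from the client's Sec-WebSocket-Key by appending a fixed GUID, hashing with SHA-1, and base64-encoding the digest; the key/accept pair below is the worked example given in RFC 6455:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value: base64(SHA-1(key + GUID))."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Worked example from RFC 6455 (section 1.3):
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A client checks the returned header against the same computation to confirm the server actually speaks WebSocket rather than plain HTTP.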
The WebSocket protocol enables full-duplex interaction between a web browser (or other client application) and a web server with lower overhead than half-duplex alternatives such as HTTP polling, facilitating real-time data transfer from and to the server. This is made possible by providing a standardized way for the server to send content to the client without being first requested by the client, and allowing messages to be passed back and forth while keeping the connection open. In this way, a two-way ongoing conversation can take place between the client and the server. The communications are usually done over TCP port number 443 (or 80 in the case of unsecured connections), which is beneficial for environments that block non-web Internet connections using a firewall. Additionally, WebSocket enables streams of messages on top of TCP. TCP alone deals with streams of bytes with no inherent concept of a message. Similar two-way browser–server communications have been achieved in non-standardized ways using stopgap technologies such as Comet or |
https://en.wikipedia.org/wiki/CableACE%20Award | The CableACE Award (earlier known as the ACE Awards; ACE was an acronym for "Award for Cable Excellence") is a defunct award that was given by what was then the National Cable Television Association from 1978 to 1997 to honor excellence in American cable television programming. The trophy itself was shaped as a glass spade, alluding to the Ace of spades.
History
The CableACE was created to serve as the cable industry's counterpart to broadcast television's Primetime Emmy Awards, which refused to honor cable programming until the 40th ceremony in 1988. For much of its existence, the ceremony aired as a simulcast on as many as twelve cable networks in some years. In its last few years, the ceremony was carried solely by one network, usually Lifetime or TBS.
In 1992, the award's official name was changed from ACE to CableACE, agreeing to do so to reduce confusion with the American Cinema Editors (ACE) society.
By 1997, the Emmys had reached a tipping point: cable programming had grown in critical acclaim to parity with broadcast programming, a position that held only briefly before cable programming began to dominate the categories of the Primetime Emmys.
Few attended the national CableACE Awards ceremony in November 1997, and the CableACE show had a low 0.6 rating on TNT, compared with a 1.2 rating the year before, while the Emmys had a 13.5 rating that year. Smaller cable networks called for the CableACEs to be saved as their only real forum for recognition.
In April 1998, members of the NCTA chose to end the CableACEs.
Judging
Professionals in the television industry were randomly selected to be judges. A Universal City hotel would be selected, where several rooms would be rented for the day. Individual rooms would be designated for each award category. Judges were discouraged from leaving the rooms at any time during the day-long judging. There were usually eight to 12 judges for each category. Dependi |
https://en.wikipedia.org/wiki/De%20analysi%20per%20aequationes%20numero%20terminorum%20infinitas | De analysi per aequationes numero terminorum infinitas (or On analysis by infinite series, On Analysis by Equations with an infinite number of terms, or On the Analysis by means of equations of an infinite number of terms) is a mathematical work by Isaac Newton.
Creation
The work was composed in 1669, probably during the middle of that year, from ideas Newton had acquired during the period 1665–1666. Newton wrote
The explication was written to remedy apparent weaknesses in the logarithmic series [the infinite series for log(1 + x)], which had become widely known through Nicolaus Mercator's publication, and, through the encouragement of Isaac Barrow in 1669, to establish Newton's prior authorship of a general method of infinite series. The manuscript was circulated amongst scholars in 1669, including John Collins, a mathematics intelligencer for a group of British and continental mathematicians. Collins's relationship with Newton as informant proved instrumental in securing Newton recognition and contact with John Wallis at the Royal Society.
Both Cambridge University Press and the Royal Society declined to publish the treatise. It was instead published in London in 1711 by William Jones, and again in 1744, as Methodus fluxionum et serierum infinitarum cum eisudem applicatione ad curvarum geometriam in Opuscula mathematica, philosophica et philologica, printed by Marcum-Michaelem Bousquet and at that time edited by Johann Castillioneus.
Content
The exponential series was discovered by Newton and is contained within the Analysis. The treatise also contains the sine series and cosine series and arc series, the logarithmic series, and the binomial series.
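In modern notation (an illustrative restatement; Newton's own notation differs), the logarithmic and sine series in question are:

```latex
\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots, \qquad
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots,
```

with analogous expansions for the cosine, the arc (inverse-sine) series, and the binomial series \((1+x)^r = 1 + rx + \tfrac{r(r-1)}{2!}x^2 + \cdots\).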
See also
Newton's method |
https://en.wikipedia.org/wiki/Evasive%20Boolean%20function | In mathematics, an evasive Boolean function ƒ (of n variables) is a Boolean function for which every decision tree algorithm has worst-case running time of exactly n; that is, no matter the order in which variables are queried, some input forces the algorithm to examine all n of them.
Examples
An example of a non-evasive Boolean function
The following is a Boolean function on the three variables x, y, z (it returns y when x is true and z otherwise):
f(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)
(where ∧ is the bitwise "and", ∨ is the bitwise "or", and ¬ is the bitwise "not").
This function is not evasive, because there is a decision tree that solves it by checking exactly two variables: The algorithm first checks the value of x. If x is true, the algorithm checks the value of y and returns it.
If x is false, the algorithm checks the value of z and returns it.
A simple example of an evasive Boolean function
Consider this simple "and" function on three variables:
f(x, y, z) = x ∧ y ∧ z
A worst-case input (for every algorithm) is 1, 1, 1. In every order we choose to check the variables, we have to check all of them. (Note that in general there could be a different worst-case input for every decision tree algorithm.) Hence the functions: "and", "or" (on n variables) are evasive.
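Given a function as a full truth table, the decision-tree complexity D(f) (the number of queries an optimal algorithm needs in the worst case) can be computed by brute-force recursion; the sketch below (names illustrative) confirms that the three-variable "and" is evasive while the function of the first example is not:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dt_depth(table):
    """Decision-tree complexity of a Boolean function given as a truth table.

    table[k] is the value of f on the input whose bits are the binary
    digits of k (bit i of k is variable i).
    """
    if len(set(table)) == 1:            # constant function: no queries needed
        return 0
    n = len(table).bit_length() - 1     # number of variables
    best = n
    for i in range(n):                  # try querying variable i first
        zero = tuple(t for k, t in enumerate(table) if not (k >> i) & 1)
        one = tuple(t for k, t in enumerate(table) if (k >> i) & 1)
        best = min(best, 1 + max(dt_depth(zero), dt_depth(one)))
    return best

# three-variable "and": evasive, so D = 3
and3 = tuple(int(k == 7) for k in range(8))
# (x and y) or (not x and z): non-evasive, D = 2
mux = tuple((k >> 1) & 1 if k & 1 else (k >> 2) & 1 for k in range(8))
```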
Binary zero-sum games
For the case of binary zero-sum games, every evaluation function is evasive.
In every zero-sum game, the value of the game is achieved by the minimax algorithm (player 1 tries to maximize the profit, and player 2 tries to minimize the cost).
In the binary case, the max function equals the bitwise "or", and the min function equals the bitwise "and".
A decision tree for this game will be of this form:
every leaf will have value in {0, 1}.
every internal node is labelled with one of {"and", "or"}
For every such tree with n leaves, the running time in the worst case is n (meaning that the algorithm must check all the leaves):
We will exhibit an adversary that produces a worst-case input – for every leaf that the algorithm checks, the adversary will answer 0 if the leaf's parent i |
https://en.wikipedia.org/wiki/Radio%20jamming | Radio jamming is the deliberate blocking of or interference with wireless communications. In some cases, jammers work by the transmission of radio signals that disrupt telecommunications by decreasing the signal-to-noise ratio.
The concept can be used in wireless data networks to disrupt information flow. It is a common form of censorship in totalitarian countries, in order to prevent foreign radio stations in border areas from reaching the country.
Jamming is usually distinguished from interference that can occur due to device malfunctions or other accidental circumstances. Devices that simply cause interference are regulated differently. Unintentional "jamming" occurs when an operator transmits on a busy frequency without first checking whether it is in use, or without being able to hear stations using the frequency. Another form of unintentional jamming occurs when equipment accidentally radiates a signal, such as a cable television plant that accidentally emits on an aircraft emergency frequency.
Distinction between "jamming" and "interference"
Originally the terms were used interchangeably but nowadays most radio users use the term "jamming" to describe the deliberate use of radio noise or signals in an attempt to disrupt communications (or prevent listening to broadcasts) whereas the term "interference" is used to describe unintentional forms of disruption (which are far more common). However, the distinction is still not universally applied. For inadvertent disruptions, see electromagnetic compatibility.
Method
Intentional communications jamming is usually aimed at radio signals to disrupt control of a battle. A transmitter, tuned to the same frequency as the opponents' receiving equipment and with the same type of modulation, can, with enough power, override any signal at the receiver. Digital wireless jamming for signals such as Bluetooth and WiFi is possible with very low power.
The most common types of this form of signal jamming are random noise, ra |
https://en.wikipedia.org/wiki/Tinbergen%27s%20four%20questions | Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour, also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include the ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function |
https://en.wikipedia.org/wiki/Membranous%20glomerulonephritis | Membranous glomerulonephritis (MGN) is a slowly progressive disease of the kidney affecting mostly people between the ages of 30 and 50 years, usually white people (i.e., those of European, Middle Eastern, or North African ancestry).
It is the second most common cause of nephrotic syndrome in adults, with focal segmental glomerulosclerosis (FSGS) recently becoming the most common.
Signs and symptoms
Most people will present as nephrotic syndrome, with the triad of albuminuria, edema and low serum albumin (with or without kidney failure). High blood pressure and high cholesterol are often also present. Others may not have symptoms and may be picked up on screening, with urinalysis finding high amounts of protein loss in the urine. A definitive diagnosis of membranous nephropathy requires a kidney biopsy, though given the very high specificity of anti-PLA2R antibody positivity this can sometimes be avoided in patients with nephrotic syndrome and preserved kidney function.
Causes
Traditional definitions split membranous nephropathy into 'primary/idiopathic' or 'secondary'. It is likely that the field will instead move to a novel classification based on the specific autoantigen detected, though given the current lack of clinical assays (other than for PLA2R autoantibodies) this may still be several years off.
Primary/idiopathic
Traditionally 85% of MGN cases are classified as primary membranous glomerulonephritis—that is to say, the cause of the disease is idiopathic (of unknown origin or cause). This can also be referred to as idiopathic membranous nephropathy.
Antibodies to the M-type phospholipase A2 receptor are responsible for around 60% of cases of membranous nephropathy. Testing for these anti-PLA2R antibodies has revolutionised the diagnosis and treatment of this disease in antibody-positive patients, and tracking titre levels over time allows clinicians to predict the risk of disease progression and the chance of spontaneous remission. There is little secondary disease association with P |
https://en.wikipedia.org/wiki/Sony%20NEX-5T | The Sony α NEX-5T is a mid-range rangefinder-styled digital mirrorless interchangeable lens camera announced by Sony on 27 August 2013.
See also
Sony NEX-5
Sony NEX-7
List of Sony E-mount cameras
List of smallest mirrorless cameras |
https://en.wikipedia.org/wiki/Globin | The globins are a superfamily of heme-containing globular proteins, involved in binding and/or transporting oxygen. These proteins all incorporate the globin fold, a series of eight alpha helical segments. Two prominent members include myoglobin and hemoglobin. Both of these proteins reversibly bind oxygen via a heme prosthetic group. They are widely distributed in many organisms.
Structure
Globin superfamily members share a common three-dimensional fold. This 'globin fold' typically consists of eight alpha helices, although some proteins have additional helix extensions at their termini. Since the globin fold contains only helices, it is classified as an all-alpha protein fold.
The globin fold is found in its namesake globin families as well as in phycocyanins. Because myoglobin was the first protein whose structure was solved, the globin fold was the first protein fold discovered.
Helix packaging
The eight helices of the globin fold core share significant nonlocal structure, unlike other structural motifs in which amino acids close to each other in primary sequence are also close in space. The helices pack together at an average angle of about 50 degrees, significantly steeper than other helical packings such as the helix bundle. The exact angle of helix packing depends on the sequence of the protein, because packing is mediated by the sterics and hydrophobic interactions of the amino acid side chains near the helix interfaces.
Evolution
Globins evolved from a common ancestor and can be divided into three lineages:
Family M (for myoglobin-like) or F (for FHb-like), which has a typical 3/3 fold.
Subfamily FHb, for flavohaemoglobins. Chimeric.
Subfamily SDgb, for single-domain globins (not to be confused with SSDgb).
Family S (for sensor-like), again with a 3/3 fold.
Subfamily GCS, for Globin-coupled sensors. Chimeric.
Subfamily PGb, for protoglobins. Single-domain.
Subfamily SSDgb, for sensor single-domain globins.
Family T (for truncated), with a 2/2 |
https://en.wikipedia.org/wiki/Steiner%20tree%20problem | In combinatorial mathematics, the Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization. While Steiner tree problems may be formulated in a number of settings, they all require an optimal interconnect for a given set of objects and a predefined objective function. One well-known variant, which is often used synonymously with the term Steiner tree problem, is the Steiner tree problem in graphs. Given an undirected graph with non-negative edge weights and a subset of vertices, usually referred to as terminals, the Steiner tree problem in graphs requires a tree of minimum weight that contains all terminals (but may include additional vertices) and minimizes the total weight of its edges. Further well-known variants are the Euclidean Steiner tree problem and the rectilinear minimum Steiner tree problem.
The Steiner tree problem in graphs can be seen as a generalization of two other famous combinatorial optimization problems: the (non-negative) shortest path problem and the minimum spanning tree problem. If a Steiner tree problem in graphs contains exactly two terminals, it reduces to finding the shortest path. If, on the other hand, all vertices are terminals, the Steiner tree problem in graphs is equivalent to the minimum spanning tree. However, while both the non-negative shortest path and the minimum spanning tree problem are solvable in polynomial time, no such solution is known for the Steiner tree problem. Its decision variant, asking whether a given input has a tree of weight less than some given threshold, is NP-complete, which implies that the optimization variant, asking for the minimum-weight tree in a given graph, is NP-hard. In fact, the decision variant was among Karp's original 21 NP-complete problems. The Steiner tree problem in graphs has applications in circuit layout or network design. However, practical applications usually require variations, givi |
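Although the general problem is NP-hard, the graph variant admits a classic 2-approximation: compute the metric closure of the terminals (all-pairs shortest paths among them) and return a minimum spanning tree of the resulting complete graph. A sketch under the assumption that the graph is connected and given as an adjacency map (function names are illustrative); it returns only the weight of the approximate tree:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted graph {node: [(nbr, w)]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def steiner_2approx_weight(adj, terminals):
    """Weight of a Steiner tree at most twice the optimum (metric-closure MST)."""
    terminals = list(terminals)
    d = {t: dijkstra(adj, t) for t in terminals}
    in_tree, total = {terminals[0]}, 0
    while len(in_tree) < len(terminals):        # Prim's algorithm on terminals
        w, v = min((d[u][t], t) for u in in_tree for t in terminals
                   if t not in in_tree)
        total += w
        in_tree.add(v)
    return total

# star graph: three terminals around one Steiner vertex "c";
# the optimum is 3 (use "c"), the approximation yields 4 <= 2 * 3
adj = {"c": [("a", 1), ("b", 1), ("d", 1)],
       "a": [("c", 1)], "b": [("c", 1)], "d": [("c", 1)]}
```

With exactly two terminals the same routine degenerates to the shortest-path distance, mirroring the reduction described above.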
https://en.wikipedia.org/wiki/Introduction%20to%20Quantum%20Mechanics%20%28book%29 | Introduction to Quantum Mechanics, often called Griffiths, is an introductory textbook on quantum mechanics by David J. Griffiths. The book is considered a standard undergraduate textbook in the subject. Originally published by Pearson Education in 1995 with a second edition in 2005, Cambridge University Press (CUP) reprinted the second edition in 2017. In 2018, CUP released a third edition of the book with Darrell F. Schroeter as co-author; this edition is known as Griffiths and Schroeter.
Content (3rd edition)
Part I: Theory
Chapter 1: The Wave Function
Chapter 2: Time-independent Schrödinger Equation
Chapter 3: Formalism
Chapter 4: Quantum Mechanics in Three Dimensions
Chapter 5: Identical Particles
Chapter 6: Symmetries and Conservation Laws
Part II: Applications
Chapter 7: Time-independent Perturbation Theory
Chapter 8: The Variational Principle
Chapter 9: The WKB Approximation
Chapter 10: Scattering
Chapter 11: Quantum Dynamics
Chapter 12: Afterword
Appendix: Linear Algebra
Index
Reception
The book was reviewed by John R. Taylor, among others. It has also been recommended in other, more advanced, textbooks on the subject.
According to physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory, Griffiths' Quantum Mechanics covers all materials needed for questions on quantum mechanics and atomic physics in the Physics Graduate Record Examinations (Physics GRE).
Publication history
See also
Introduction to Electrodynamics by the same author
List of textbooks in electromagnetism
List of textbooks on classical mechanics and quantum mechanics |
https://en.wikipedia.org/wiki/Bulb%20of%20vestibule | In female anatomy, the vestibular bulbs, bulbs of the vestibule or clitoral bulbs are two elongated masses of erectile tissue typically described as being situated on either side of the vaginal opening. They are united to each other in front by a narrow median band. Some research indicates that they do not surround the vaginal opening, and are more closely related to the clitoris than to the vestibule.
Structure
Research indicates that the vestibular bulbs are more closely related to the clitoris than to the vestibule because of the similarity of the trabecular and erectile tissue within the clitoris and bulbs, and the absence of trabecular tissue in other genital organs, with the erectile tissue's trabecular nature allowing engorgement and expansion during sexual arousal. Ginger et al. state that although a number of texts report that they surround the vaginal opening, this does not appear to be the case and tunica albuginea does not envelop the erectile tissue of the bulb.
The vestibular bulbs are homologous to the bulb of penis and the adjoining part of the corpus spongiosum of the male and consist of two elongated masses of erectile tissue. Their posterior ends are expanded and are in contact with the greater vestibular glands; their anterior ends are tapered and joined to one another by the pars intermedia; their deep surfaces are in contact with the inferior fascia of the urogenital diaphragm; superficially, they are covered by the bulbospongiosus.
Physiology
During the response to sexual arousal the bulbs fill with blood, which then becomes trapped, causing erection. As the clitoral bulbs fill with blood, they tightly cuff the vaginal opening, causing the vulva to expand outward. This puts pressure on nearby structures that include the corpus cavernosum of clitoris and crus of clitoris, inducing pleasure.
The blood inside the bulb's erectile tissue is released to the circulatory system by the spasms of orgasm, but if orgasm does not occur, the blood will |
https://en.wikipedia.org/wiki/Alphamagic%20square | An alphamagic square is a magic square that remains magic when its numbers are replaced by the number of letters occurring in the name of each number. Hence 3 would be replaced by 5, the number of letters in "three". Since different languages will have a different number of letters for the spelling of the same number, alphamagic squares are language-dependent. Alphamagic squares were invented by Lee Sallows in 1986.
Example
The example below is alphamagic. To find out if a magic square is also an alphamagic square, convert it into the array of corresponding number words. For example,
converts to ...
Counting the letters in each number word generates the following square which turns out to also be magic:
If the generated array is also a magic square, the original square is alphamagic. In 2017 British computer scientist Chris Patuzzo discovered several doubly alphamagic squares in which the generated square is in turn an alphamagic square.
The above example enjoys another special property: the nine numbers in the lower square are consecutive. This prompted Martin Gardner to describe it as "Surely the most fantastic magic square ever discovered."
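The square usually cited as Sallows's example (assumed here, since the arrays are not shown above) can be verified mechanically: spell each entry, count the letters, and test both squares for the magic property:

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def letters(n):
    """Letter count of the English name of n (0 <= n < 100, no 'and')."""
    if n < 20:
        return len(ONES[n])
    return len(TENS[n // 10]) + (len(ONES[n % 10]) if n % 10 else 0)

def is_magic(sq):
    """True if every row, column, and diagonal of a 3x3 square sums alike."""
    lines = [list(row) for row in sq] + [list(col) for col in zip(*sq)]
    lines.append([sq[i][i] for i in range(3)])
    lines.append([sq[i][2 - i] for i in range(3)])
    return len({sum(line) for line in lines}) == 1

def is_alphamagic(sq):
    """True if both the square and its letter-count square are magic."""
    return is_magic(sq) and is_magic([[letters(x) for x in row] for row in sq])

sallows = [[5, 22, 18], [28, 15, 2], [12, 8, 25]]
```

For this square the letter-count entries are the consecutive numbers 3 through 11, the special property noted above.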
A geometric alphamagic square
Sallows has produced a still more magical version—a square which is both geomagic and alphamagic. In the square shown in Figure 1, any three shapes in a straight line—including the diagonals—tile the cross; thus the square is geomagic. The number of letters in the number names printed on any three shapes in a straight line sum to forty five; thus the square is alphamagic.
Other languages
The Universal Book of Mathematics provides the following information about Alphamagic Squares:
A surprisingly large number of 3 × 3 alphamagic squares exist—in English and in other languages. French allows just one 3 × 3 alphamagic square involving numbers up to 200, but a further 255 squares if the size of the entries is increased to 300. For entries less than 100, none occurs in Danish or in |
https://en.wikipedia.org/wiki/LED%20wallpaper | LED wallpaper is the integration of light-emitting diodes into flat substrates suitable to be applied to walls for interior decoration purposes.
The experimentation on the combination of light sources and wall covering surfaces has been largely fostered by the progressive miniaturisation of low-voltage lighting technology, such as LEDs and OLEDs, suitable to be incorporated into low-thickness materials to be applied onto interior walls.
The new possibilities offered by these developments have prompted some designers and companies to research and develop proprietary LED wallpaper technologies, some of which are currently available for commercial purchase. Other solutions mainly exist as prototypes or are in the process of being further refined.
The first use of the term LED wallpaper is found in the book Wallpaper by Lachlan Blackley, describing the work of textile designer Maria Yaschuk, who designed a flexible solution to incorporate LEDs into digitally printed wall covering material in 2004. This definition is currently used by companies such as Meystyle and designer Ingo Maurer in relation to LED wall covering materials included in their catalogues. Other similar concepts are light-emitting wallpaper used by Lomox and luminous textile used by Philips.
Meystyle
Meystyle claim to have been the first company to integrate light-emitting diodes into wallpaper so that it can be hung like a traditional wall covering. Maria Yaschuk, co-founder of the company together with sister Ekaterina Yaschuk, presented the first prototype of LED wallpaper as part of her graduation project for the MA degree in Textile Futures at CSM in 2004. The concept was subsequently developed with her sister Ekaterina into a series of designs exhibited in 2007 under the name Wire Geometrics at the National Glass Centre in Sunderland. The same year Maria and Ekaterina went on to commercialise their product under the company name Meystyle LED Wallpaper & Fabric.
Meystyle uses digital printing |
https://en.wikipedia.org/wiki/Dynamic%20texture | Dynamic texture (sometimes referred to as temporal texture) is texture with motion, found in videos of sea waves, fire, smoke, wavy trees, etc. Dynamic texture has a spatially repetitive pattern that varies over time. Modeling and analyzing dynamic texture is a topic of image processing and pattern recognition in computer vision.
Features extracted to describe the dynamic texture can be used for image-sequence classification, segmentation, recognition, and retrieval. Compared with the texture found in static images, analyzing dynamic texture is a challenging problem. It is important that the features extracted from dynamic texture combine motion and appearance description, and that they are invariant to transformations such as rotation, translation, and illumination.
Analysis methods of dynamic texture
The methods of dynamic texture recognition can be categorized as follows:
Methods based on optical flow: by applying optical flow to the dynamic texture, velocity with direction and magnitude can be detected and used to recognize the dynamic texture. Due to the simplicity of its computation, this is currently the most popular approach.
Methods computing geometric properties: these methods track the surfaces of motion trajectories in the spatiotemporal domain.
Methods based on local spatiotemporal filtering: these methods analyze local spatiotemporal patterns, their orientation and energy, and employ them as features for classification.
Methods based on global spatiotemporal transform: these methods characterize motion at different scales using wavelets, which can decompose the motion into local and global components.
Model-based methods: these methods aim at generating a model that describes the motion by a set of parameters.
Applications
- Segmenting image sequences of natural scenes. This helps to differentiate between streets and the grass alongside them, which can be used in navigation applications.
- |
https://en.wikipedia.org/wiki/SKI%20combinator%20calculus | The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel and Haskell Curry.
All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators).
Notation
Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)).
Informal description
Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed.
The evaluation operation is defined as follows:
(x, y, and z represent expressions made from the functions S, K, and I, and set values):
I returns its argument:
Ix = x
K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to |
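The three reduction rules can be captured in a tiny normal-order interpreter, representing applications as nested pairs (the representation and function names are illustrative, not a standard library):

```python
def step(t):
    """Perform one leftmost (normal-order) reduction step; return (term, changed)."""
    if isinstance(t, str):                      # a bare combinator: no redex
        return t, False
    f, x = t
    if f == "I":                                # I x -> x
        return x, True
    if isinstance(f, tuple) and f[0] == "K":    # (K x) y -> x
        return f[1], True
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == "S"):                # ((S x) y) z -> (x z)(y z)
        return ((f[0][1], x), (f[1], x)), True
    nf, changed = step(f)                       # otherwise reduce inside
    if changed:
        return (nf, x), True
    nx, changed = step(x)
    return ((f, nx) if changed else t), changed

def normalize(t, limit=1000):
    """Reduce until no redex remains (not all SKI terms terminate)."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form found within the step limit")

# S K K behaves like I: ((S K) K) y reduces to y
term = ((("S", "K"), "K"), "y")
```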
https://en.wikipedia.org/wiki/Dirichlet%27s%20ellipsoidal%20problem | In astrophysics, Dirichlet's ellipsoidal problem, named after Peter Gustav Lejeune Dirichlet, asks under what conditions there can exist an ellipsoidal configuration at all times of a homogeneous rotating fluid mass in which the motion, in an inertial frame, is a linear function of the coordinates. Dirichlet's basic idea was to reduce Euler equations to a system of ordinary differential equations such that the position of a fluid particle in a homogeneous ellipsoid at any time is a linear and homogeneous function of initial position of the fluid particle, using Lagrangian framework instead of the Eulerian framework.
History
In the winter of 1856–57, Dirichlet found some solutions of Euler equations and he presented those in his lectures on partial differential equations in July 1857 and published the results in the same month. His work was left unfinished at his sudden death in 1859, but his notes were collated and published by Richard Dedekind posthumously in 1860.
Bernhard Riemann said, "In his posthumous paper, edited for publication by Dedekind, Dirichlet has opened up, in a most remarkable way, an entirely new avenue for investigations on the motion of a self-gravitating homogeneous ellipsoid. The further development of his beautiful discovery has a particular interest to the mathematician even apart from its relevance to the forms of heavenly bodies which initially instigated these investigations."
Riemann–Lebovitz formulation
Dirichlet's problem was generalized by Bernhard Riemann in 1860 and formulated in modern form by Norman R. Lebovitz in 1965. Let a1(t), a2(t), a3(t) be the semi-axes of the ellipsoid, which vary with time. Since the ellipsoid is homogeneous, the constancy of mass requires the constancy of the volume of the ellipsoid,
same as the initial volume. Consider an inertial frame and a rotating frame , with being the linear transformation such that and it is clear that is orthogonal, i.e., . We can define an anti-symmetric matrix with this,
where we can write th |
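In modern notation (an illustrative restatement of the standard setup; the symbol names are assumptions, since the source's formulas are not shown), the constraints read:

```latex
a_1(t)\,a_2(t)\,a_3(t) = a_1(0)\,a_2(0)\,a_3(0), \qquad
\mathbf{L}^{\mathsf{T}}\mathbf{L} = \mathbf{I}, \qquad
\boldsymbol{\Lambda} = \mathbf{L}^{\mathsf{T}}\,\frac{d\mathbf{L}}{dt},
\quad \boldsymbol{\Lambda}^{\mathsf{T}} = -\boldsymbol{\Lambda},
```

where L is the orthogonal transformation between the inertial and rotating frames; differentiating L^T L = I shows that Λ is anti-symmetric.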
https://en.wikipedia.org/wiki/Fundamental%20theorems%20of%20welfare%20economics | There are two fundamental theorems of welfare economics. The first states that in economic equilibrium, a set of complete markets, with complete information, and in perfect competition, will be Pareto optimal (in the sense that no further exchange would make one person better off without making another worse off). The requirements for perfect competition are these:
There are no externalities and each actor has perfect information.
Firms and consumers take prices as given (no economic actor or group of actors has market power).
The theorem is sometimes seen as an analytical confirmation of Adam Smith's "invisible hand" principle, namely that competitive markets ensure an efficient allocation of resources. However, there is no guarantee that the Pareto optimal market outcome is socially desirable, as there are many possible Pareto efficient allocations of resources differing in their desirability (e.g. one person may own everything and everyone else nothing).
The second theorem states that any Pareto optimum can be supported as a competitive equilibrium for some initial set of endowments. The implication is that any desired Pareto optimal outcome can be supported; Pareto efficiency can be achieved with any redistribution of initial wealth. However, attempts to correct the distribution may introduce distortions, and so full optimality may not be attainable with redistribution.
The theorems can be visualized graphically for a simple pure exchange economy by means of the Edgeworth box diagram.
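The first theorem can be illustrated numerically for a two-consumer Cobb-Douglas exchange economy (the names and the specific utilities below are illustrative assumptions): at the market-clearing price, each consumer's marginal rate of substitution equals the price ratio, so no mutually beneficial trade remains:

```python
def equilibrium_price(alphas, endowments, lo=1e-9, hi=1e9, iters=200):
    """Price of good x (good y is the numeraire) clearing the x market for
    consumers with Cobb-Douglas utilities u_i = x**a_i * y**(1 - a_i)."""
    def excess_x(p):
        total = 0.0
        for a, (ex, ey) in zip(alphas, endowments):
            income = p * ex + ey
            total += a * income / p - ex     # Cobb-Douglas demand minus endowment
        return total
    for _ in range(iters):                   # bisection: excess demand falls in p
        mid = (lo + hi) / 2
        if excess_x(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mrs(a, x, y):
    """Marginal rate of substitution under u = x**a * y**(1 - a)."""
    return (a / (1 - a)) * (y / x)

alphas, endowments = [0.5, 0.25], [(1.0, 0.0), (0.0, 1.0)]
p = equilibrium_price(alphas, endowments)
```

At the resulting price both consumers' MRSs equal p, which is exactly the tangency condition for Pareto optimality in the Edgeworth box.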
History of the fundamental theorems
Adam Smith (1776)
In a discussion of import tariffs Adam Smith wrote that:
Every individual necessarily labours to render the annual revenue of the society as great as he can... He is in this, as in many other ways, led by an invisible hand to promote an end which was no part of his intention... By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.
Note that Smi |
https://en.wikipedia.org/wiki/XNU | XNU (X is Not Unix) is the computer operating system (OS) kernel developed at Apple Inc. since December 1996 for use in the Mac OS X (now macOS) operating system and released as free and open-source software as part of the Darwin OS, which in addition to macOS is also the basis for the Apple TV Software, iOS, iPadOS, watchOS, visionOS, and tvOS OSes.
Originally developed by NeXT for the NeXTSTEP operating system, XNU was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives, along with an application programming interface (API) in Objective-C for writing drivers named Driver Kit.
After Apple acquired NeXT, the kernel was updated with code derived from OSFMK 7.3 from OSF, and the FreeBSD project, and the Driver Kit was replaced with new API on a restricted subset of C++ (based on Embedded C++) named I/O Kit.
Kernel design
XNU is a hybrid kernel, containing features of both monolithic kernels and microkernels and attempting to make the best use of both technologies: the message-passing ability of microkernels enables greater modularity and lets larger portions of the OS benefit from memory protection, while the speed of monolithic kernels is retained for certain critical tasks.
XNU runs on ARM64 and x86-64 processors, in both single-processor and symmetric multiprocessing (SMP) configurations. PowerPC support was removed as of the version in Mac OS X Snow Leopard. Support for IA-32 was removed as of the version in Mac OS X Lion; support for 32-bit ARM was removed in a later version.
Mach
The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3. OSFMK 7.3 is a microkernel that includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel.
O |
https://en.wikipedia.org/wiki/En%20Rathathin%20Rathame | En Rathathin Rathame () is a 1989 Indian Tamil-language science fiction action film directed by K. Vijayan, and finished by his son Sundar K. Vijayan, following his father's death. The film is the Tamil debut of Bollywood actress Meenakshi Seshadri. It is a remake of the Hindi film Mr. India (1987).
Plot
A group of royals is deeply disappointed by the advent of democracy in India, under which kingdoms and kings no longer exist. To win back their provinces and monarchical rule, they take Narendra Bhupathy, a physically challenged king, and his son Ilayaraja Bhupathy as their leaders. They collude with anti-social elements from foreign nations, obtaining weapons from them to create chaos in the country.
Natesan is a poor auto-rickshaw driver who takes care of orphaned children at his home. He manages to feed and educate the children with the meagre amount he earns and does not accept any help. Meenakshi is a press reporter for a magazine named Bakirangam, which has a very poor circulation. To improve its circulation, the magazine's editor insists that she interview a rapist who is capable of describing a rape point by point. Meenakshi is afraid of the assignment, but accepts as she has no other option. Natesan visits the Bakirangam office to place an advertisement offering his home for rent, and he is directed to Meenakshi's cabin. Meenakshi mistakes him for the rapist, and at one point she shouts for help. Angered, Natesan explains who he is and why he has come, and Meenakshi apologises for her misunderstanding. Meenakshi asks to rent his home, but demands that no children be present, as they would disturb her work. Natesan lies that no children live in the house, as he needs the money. Meenakshi visits the house and is irritated by the mess, but Natesan manages to collect the advance from her. Meenakshi finds there are a lot of children in the home and demands back the money she gave. But Natesan says that since he did not give a receipt for the amount she gave as advance, she ca |
https://en.wikipedia.org/wiki/Murinus%20Cornelius%20Piepers | Marinus Cornelius Piepers (1836 – 1919 in The Hague) was a Dutch entomologist and lawyer.
Piepers studied law in Leiden and obtained his doctorate in 1859. He worked for the administration of justice until 1894. In 1879, he was appointed Advocate General at the Supreme Court of the Dutch East Indies. In 1890 he became vice-president.
Piepers spent three decades from 1863 in the Dutch East Indies (now Indonesia) collecting and observing butterfly behaviour and other organisms. He recorded his data in two books, Mimikry, Selektion und Darwinismus (1903) and Noch einmal Mimikry, Selektion und Darwinismus (1907). Piepers was an opponent of Darwinism and was influenced by the research of Theodor Eimer. He specialised in Lepidoptera and Coleoptera, especially of the Dutch East Indies. His collection is conserved in Naturalis in Leiden. Piepers was a philosophical vitalist.
Selected publications
Mimikry, Selektion und Darwinismus (1903)
[https://archive.org/details/rhopaloceraofjav02piep/page/n7/mode/2up The Rhopalocera of Java] with Pieter Cornelius Tobias Snellen and Hans Fruhstorfer. The Hague, M. Nijhoff, 1909–18. Four volumes. |
https://en.wikipedia.org/wiki/Tournament%20of%20the%20Towns | The Tournament of the Towns (International Mathematics Tournament of the Towns, Турнир Городов, Международный Математический Турнир Городов) is an international mathematical competition for school students originating in Russia.
The contest was created by mathematician Nikolay Konstantinov and has participants from over 100 cities in many different countries.
Organization
There are two rounds in this contest: Fall (October) and Spring (February–March) of the same academic year.
Both have an O-Level (Basic) paper and an A-Level (Advanced) paper separated by 1–2 weeks.
The O-Level contains around 5 questions and the A-Level contains around 7 questions.
The duration of the exams is 5 hours for both Levels.
The A-Level problems are more difficult than O-Level but have a greater maximum score.
Participating students are divided into two divisions:
Junior (usually grades 7–10) and Senior (two last school grades, usually grades 11–12).
To account for age differences inside each division, students in different grades have different loadings (coefficients). A contestant's final score is his/her highest score from the four exams. Writing all four exams is recommended but not required.
Different towns are given handicaps to account for differences in population. A town's score is the average of the scores of its N best students, where its population is N hundred thousand. It is also worth noting that the minimum value of N is 5.
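The scoring rules above can be sketched in code. The function names and sample scores below are illustrative, not official tournament data; the handling of towns with fewer than N students is an assumption.

```python
# Illustrative sketch of the Tournament of the Towns scoring rules.

def contestant_score(exam_scores, coefficient=1.0):
    """Final score: the best of the (up to four) exam scores, times the
    grade loading (coefficient) for the contestant's grade."""
    return coefficient * max(exam_scores)

def town_score(population, student_scores):
    """Average of the town's N best students, where N is the population
    in hundreds of thousands, with a minimum of 5."""
    n = max(5, round(population / 100_000))
    best = sorted(student_scores, reverse=True)[:n]
    return sum(best) / n

# A town of 300,000 still averages over N = 5 students (the minimum).
scores = [30, 25, 24, 18, 12, 7, 3]
print(town_score(300_000, scores))  # (30+25+24+18+12)/5 = 21.8
```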
Philosophy
Tournament of the Towns differs from many other similar competitions by its philosophy, relying much more upon ingenuity than drill. First, problems are difficult (especially at A-Level in the Senior division, where they are comparable with those at the International Mathematical Olympiad but more ingenious and less technical). Second, it allows the participants to choose problems they like: for each paper, the participant's score is the sum of his/her 3 best answers.
The problems are mostly combinatorial, with the |
https://en.wikipedia.org/wiki/Non-catalytic%20tyrosine-phosphorylated%20receptor | Non-catalytic tyrosine-phosphorylated receptors (NTRs), also called immunoreceptors or Src-family kinase-dependent receptors, are a group of cell surface receptors expressed by leukocytes that are important for cell migration, for the recognition of abnormal cells or structures, and for the initiation of an immune response. These transmembrane receptors are not grouped into the NTR family based on sequence homology, but because they share a conserved signalling pathway utilizing the same signalling motifs. A signalling cascade is initiated when the receptors bind their respective ligand, resulting in cell activation. For this, tyrosine residues in the cytoplasmic tails of the receptors must be phosphorylated, hence the receptors are referred to as tyrosine-phosphorylated receptors. They are called non-catalytic receptors because they have no intrinsic tyrosine kinase activity and cannot phosphorylate their own tyrosine residues; phosphorylation is mediated by additionally recruited kinases. A prominent member of this receptor family is the T-cell receptor.
Features and classification
Members of the non-catalytic tyrosine-phosphorylated receptor family share several common features.
The most prominent feature is the presence of conserved signalling motifs containing tyrosine residues, such as immunoreceptor tyrosine-based activation motifs (ITAMs), in the cytoplasmic tail of the receptors. The receptor signalling pathway is initiated by ligand binding to the extracellular domains of the receptor. Upon binding, the tyrosine residues in the signalling motifs are phosphorylated by membrane-associated tyrosine kinases; the receptors themselves have no intrinsic tyrosine kinase activity. The phosphorylated NTRs, in turn, initiate specific intracellular signalling cascades. The signalling cascade is down-regulated by dephosphorylation by protein tyrosine phosphatases. Additional characteristics of the receptor family are a rather small (< 20 nm) extracellular domain and t |
https://en.wikipedia.org/wiki/Encyclopaedistics | Encyclopaedistics, or encyclopaedics, as a discipline is the academic scholarship of encyclopaedias as sources of encyclopaedic knowledge and as cultural objects; in this sense, the discipline is also known as "encyclopaedia studies" and can be termed "theoretical encyclopaediography" by analogy with theoretical lexicography. Encyclopaedistics as a practical activity (profession or business), also called "encyclopaedic practice" or "encyclopedism", is the process of assembling encyclopaedias available to the public for sale or for free (encyclopaedia publishing, or practical encyclopaediography). In this sense, it is the art or craft of writing, compiling, and editing paper or online encyclopaedias. As a practical activity, encyclopaedistics originated in the Middle Ages in connection with the development of compendiums based on alphabetical structuring (e.g. the first edition of Polyanthea by Dominicus Nanus Mirabellius). Encyclopaedistics is often defined as "the art and science of selecting and disseminating the information most significant to mankind".
Field of study
Encyclopaedistics is a specialized aspect of information science and communication science. At the same time, encyclopaedistics is also considered one of the scholarly disciplines seen as auxiliary to historical research (auxiliary sciences of history). Third, encyclopaedics is a domain of philosophy (Romanticism). The term is associated with German philosophers of the 18th century, such as Novalis and Friedrich Schlegel, who sought to create a "Scientific Bible", a book both real and ideal, as the quintessence of human education (enlightenment).
In any case, the most popular topics in encyclopaedia studies concern the history of the organization of encyclopaedic knowledge, the determination and selection of encyclopaedic knowledge, glossary composition, the current state of encyclopaedic activity, features of making encyclopaedias and encyclopaedic articles, and the usage, role and significance |
https://en.wikipedia.org/wiki/ACIN1 | Apoptotic chromatin condensation inducer in the nucleus is a protein that in humans is encoded by the ACIN1 gene. |
https://en.wikipedia.org/wiki/Otis%20Boykin | Otis Frank Boykin (August 29, 1920March 26, 1982) was an American inventor and engineer. His inventions include electrical resistors used in computing, missile guidance, and pacemakers.
Early life and education
Otis Boykin was born on August 29, 1920, in Dallas, Texas. His father, Walter B. Boykin, was a carpenter, and later became a preacher. His mother, Sarah, was a maid, who died of heart failure when Otis was a year old. This inspired him to help improve the pacemaker. Boykin attended Booker T. Washington High School in Dallas, where he was the valedictorian, graduating in 1938. He attended Fisk University on a scholarship, worked as a laboratory assistant at the university's nearby aerospace laboratory, and left in 1941.
Boykin then moved to Chicago, where he found work as a clerk at Electro Manufacturing Company. He was subsequently hired as a laboratory assistant for the Majestic Radio and Television Corporation; at that company, he rose to become foreman of their factory. By 1944, he was working for the P.J. Nilsen Research Labs.
In 1946–1947, he studied at Illinois Institute of Technology, but dropped out after two years; some sources say it was because he could not afford his tuition, but he later stated he left for an employment opportunity and did not have time to return to finish his degree. One of his mentors was Dr. Denton Deere, an engineer and inventor with his own laboratory. Another mentor was Dr. Hal F. Fruth, with whom he collaborated on several experiments, including a more effective way to test automatic pilot control units in airplanes. The two men later went into business, opening an electronics research lab in the late 1940s.
In the 1950s, Boykin and Fruth worked together at the Monson Manufacturing Corporation; Boykin was the company's chief engineer. In the early 1960s, Boykin was a senior project engineer at the Chicago Telephone Supply Corporation, later known as CTS Labs. It was here that he did much of his pacemaker research. But |
https://en.wikipedia.org/wiki/Voice%20over%20WLAN | Voice over Wireless LAN (VoWLAN), also Voice over WiFi (VoWiFi), is the use of a wireless broadband network according to the IEEE 802.11 standards for the purpose of vocal conversation. In essence, it is Voice over IP (VoIP) over a Wi-Fi network. In most cases, the Wi-Fi network and voice components supporting the voice system are privately owned.
VoWLAN can be conducted over any Internet-accessible device, including a laptop, a PDA, or dedicated VoWLAN units which look and function like DECT handsets and cellphones. Just as with IP-DECT, VoWLAN's main advantages to consumers are cheaper local and international calls, free calls to other VoWLAN units, and simplified integrated billing from both phone and Internet service providers.
Although VoWLAN and 3G have certain feature similarities, VoWLAN is different in the sense that it uses a wireless internet network (typically 802.11) rather than a cellular network. Both VoWLAN and 3G are used in different ways, although with a femtocell the two can deliver similar service to users and can be considered alternatives.
Applications
For a single-location organisation, it enables use of an existing Wi-Fi network for low-cost (or no-cost) VoIP (hence VoWLAN) communication, in a manner similar to land mobile radio or walkie-talkie systems with push-to-talk and emergency broadcast channels. VoWLAN is also used across multiple locations by mobile workers such as delivery drivers; these workers take advantage of 3G-type services, whereby a cellular company provides data access between the handheld device and the company's back-end network.
Benefits
A voice over WLAN system offers several benefits to organizations, such as hospitals and warehouses. Such advantages include increased mobility and cost savings. For instance, nurses and doctors within a hospital can maintain voice communications at any time at less cost, compared to cellular service.
Types
as an extension to cellular network using Generic Access Network or Unlicensed Mob |
https://en.wikipedia.org/wiki/Quark%20model | In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks that give rise to the quantum numbers of the hadrons. The quark model underlies "flavor SU(3)", or the Eightfold Way, the successful classification scheme organizing the large number of lighter hadrons that were being discovered starting in the 1950s and continuing through the 1960s. It received experimental verification beginning in the late 1960s and is a valid effective classification of them to date. The model was independently proposed by physicists Murray Gell-Mann, who dubbed them "quarks" in a concise paper, and George Zweig, who suggested "aces" in a longer manuscript. André Petermann also touched upon the central ideas from 1963 to 1965, without as much quantitative substantiation. Today, the model has essentially been absorbed as a component of the established quantum field theory of strong and electroweak particle interactions, dubbed the Standard Model.
Hadrons are not really "elementary", and can be regarded as bound states of their "valence quarks" and antiquarks, which give rise to the quantum numbers of the hadrons. These quantum numbers are labels identifying the hadrons, and are of two kinds. One set comes from the Poincaré symmetry—JPC, where J, P and C stand for the total angular momentum, P-symmetry, and C-symmetry, respectively.
The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet.
All quarks are assigned a baryon number of 1/3. Up, charm and top quarks have an electric charge of +2/3, while the down, strange, and bottom quarks have an electric charge of −1/3. Antiquarks have the opposite quantum numbers. Quarks are spin-1/2 particles, and thus fermions. Each quark |
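The additive quantum-number bookkeeping described above can be sketched as follows. The encoding (lowercase for quarks, uppercase for antiquarks) is an illustrative convention; exact fractions keep the arithmetic lossless.

```python
# Sketch: additive electric charge and baryon number of hadrons from
# their valence-quark content. Standard quark assignments; the string
# encoding of antiquarks is a convention chosen for this example.
from fractions import Fraction as F

# (electric charge, baryon number) per quark flavour
QUARKS = {
    'u': (F(2, 3), F(1, 3)), 'c': (F(2, 3), F(1, 3)), 't': (F(2, 3), F(1, 3)),
    'd': (F(-1, 3), F(1, 3)), 's': (F(-1, 3), F(1, 3)), 'b': (F(-1, 3), F(1, 3)),
}

def quantum_numbers(content):
    """Sum charge and baryon number over valence quarks; an uppercase
    letter denotes the antiquark, which carries opposite quantum numbers."""
    q = b = F(0)
    for flavour in content:
        charge, baryon = QUARKS[flavour.lower()]
        sign = -1 if flavour.isupper() else 1
        q += sign * charge
        b += sign * baryon
    return q, b

print(quantum_numbers('uud'))   # proton: charge +1, baryon number 1
print(quantum_numbers('udd'))   # neutron: charge 0, baryon number 1
print(quantum_numbers('uD'))    # pi+ meson (u anti-d): charge +1, baryon number 0
```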
https://en.wikipedia.org/wiki/Laplace%20expansion%20%28potential%29 | In physics, the Laplace expansion of potentials that are directly proportional to the inverse of the distance ($1/r$), such as Newton's gravitational potential or Coulomb's electrostatic potential, expresses them in terms of the spherical Legendre polynomials. In quantum mechanical calculations on atoms the expansion is used in the evaluation of integrals of the inter-electronic repulsion.
The Laplace expansion is in fact the expansion of the inverse distance between two points. Let the points have position vectors $\mathbf{r}$ and $\mathbf{r}'$; then the Laplace expansion is

$$\frac{1}{|\mathbf{r}-\mathbf{r}'|} = \sum_{\ell=0}^{\infty} \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} (-1)^{m} \frac{r_{<}^{\,\ell}}{r_{>}^{\,\ell+1}}\, Y_{\ell}^{-m}(\theta,\varphi)\, Y_{\ell}^{m}(\theta',\varphi').$$

Here $\mathbf{r}$ has the spherical polar coordinates $(r,\theta,\varphi)$ and $\mathbf{r}'$ has $(r',\theta',\varphi')$. Further, $r_<$ is min(r, r′) and $r_>$ is max(r, r′). The function $Y_{\ell}^{m}$ is a normalized spherical harmonic function. The expansion takes a simpler form when written in terms of the solid harmonics, the regular $R_{\ell}^{m}$ (homogeneous polynomials of degree $\ell$) and the irregular $I_{\ell}^{m}$:

$$\frac{1}{|\mathbf{r}-\mathbf{r}'|} = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} (-1)^{m}\, I_{\ell}^{-m}(\mathbf{r})\, R_{\ell}^{m}(\mathbf{r}'), \qquad r > r'.$$
Derivation
The derivation of this expansion is simple. By the law of cosines,

$$\frac{1}{|\mathbf{r}-\mathbf{r}'|} = \frac{1}{\sqrt{r^{2} + (r')^{2} - 2 r r' \cos\gamma}} = \frac{1}{r_{>}\sqrt{1 + h^{2} - 2h\cos\gamma}}, \qquad h \equiv \frac{r_{<}}{r_{>}},$$

where $\gamma$ is the angle between the position vectors. We find here the generating function of the Legendre polynomials $P_{\ell}(\cos\gamma)$:

$$\frac{1}{\sqrt{1 + h^{2} - 2h\cos\gamma}} = \sum_{\ell=0}^{\infty} h^{\ell}\, P_{\ell}(\cos\gamma).$$

Use of the spherical harmonic addition theorem

$$P_{\ell}(\cos\gamma) = \frac{4\pi}{2\ell+1} \sum_{m=-\ell}^{\ell} (-1)^{m}\, Y_{\ell}^{-m}(\theta,\varphi)\, Y_{\ell}^{m}(\theta',\varphi')$$

gives the desired result.
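As a numerical sanity check (not part of the original article), the Legendre-polynomial form of the expansion, 1/|r − r′| = Σℓ (r<ℓ / r>ℓ+1) Pℓ(cos γ), can be verified directly; the geometry below is an arbitrary choice, and the Legendre polynomials are generated by Bonnet's recurrence.

```python
# Verify the Legendre form of the Laplace expansion against the direct
# law-of-cosines distance, for one arbitrary pair of points.
import math

def legendre(l, x):
    """P_l(x) via Bonnet's recurrence: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def inverse_distance_series(r, rp, gamma, lmax=60):
    """Partial sum of sum_l (r_<^l / r_>^(l+1)) P_l(cos gamma)."""
    r_small, r_big = min(r, rp), max(r, rp)
    return sum(r_small ** l / r_big ** (l + 1) * legendre(l, math.cos(gamma))
               for l in range(lmax + 1))

r, rp, gamma = 1.0, 0.5, 0.8          # arbitrary test geometry
direct = 1.0 / math.sqrt(r * r + rp * rp - 2 * r * rp * math.cos(gamma))
series = inverse_distance_series(r, rp, gamma)
print(abs(direct - series))           # tiny: terms decay like (r_</r_>)^l = 0.5^l
```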
Neumann expansion
A similar equation has been derived by John von Neumann that allows expression of in prolate spheroidal coordinates as a series:
where and are associated Legendre functions of the first and second kind, respectively, defined such that they are real for . In analogy to the spherical coordinate case above, the relative sizes of the radial coordinates are important, as and . |
https://en.wikipedia.org/wiki/Columbia%20%28supercomputer%29 | Columbia was a supercomputer built by Silicon Graphics (SGI) for the National Aeronautics and Space Administration (NASA), installed in 2004 at the NASA Advanced Supercomputing (NAS) facility located at Moffett Field in California. Named in honor of the crew who died in the Space Shuttle Columbia disaster, it increased NASA's supercomputing capacity ten-fold for the agency's science, aeronautics and exploration programs.
Missions run on Columbia include high-fidelity simulations of the Space Shuttle vehicle and launch systems, hurricane track prediction, global ocean circulation, and the physics of supernova detonations.
History
Columbia debuted as the second most powerful supercomputer on the TOP500 list in November 2004 at a LINPACK rating of 51.87 teraflops, or 51.87 trillion floating point calculations per second. By June 2007 it had dropped to 13th.
It was originally composed of 20 interconnected SGI Altix 3700 512-processor multi-rack systems running SUSE Linux Enterprise, using Intel Itanium 2 Montecito and Montvale processors. In 2006, NASA and SGI added four new Altix 4700 nodes containing 256 dual-core processors, which decreased the physical footprint and the power cost of the supercomputer. The nodes were connected with InfiniBand single and double data rate (SDR and DDR) cabling with transfer speeds of up to 10 gigabits per second.
The SGI Altix platform was selected due to a positive experience with Kalpana, a single-node Altix 512-CPU system built and operated by NASA and SGI and named after Columbia astronaut Kalpana Chawla, the first Indian-born woman to fly in space. Kalpana was later integrated into the Columbia supercomputer system as the first node of twenty.
At its peak, Columbia had a total of 10,240 processors and 20 terabytes of memory, 440 terabytes of online storage, and 10 petabytes of archival tape storage. The Project Columbia team, composed mostly of computer scientists and engineers from NAS, SGI, and Intel, were awarded the Gov |
https://en.wikipedia.org/wiki/Management%20of%20multiple%20sclerosis | Multiple sclerosis (MS) is a chronic inflammatory demyelinating disease that affects the central nervous system (CNS). Several therapies for it exist, although there is no known cure.
The most common initial course of the disease is the relapsing-remitting subtype, which is characterized by unpredictable attacks (relapses) followed by periods of relative remission with no new signs of disease activity. After some years, many of the people who have this subtype begin to experience neurologic decline without acute relapses. When this happens it is called secondary progressive multiple sclerosis. Other, less common, courses of the disease are the primary progressive (decline from the beginning without attacks) and the progressive-relapsing (steady neurologic decline and superimposed attacks). Different therapies are used for patients experiencing acute attacks, for patients who have the relapsing-remitting subtype, for patients who have the progressive subtypes, for patients without a diagnosis of MS who have a demyelinating event, and for managing the various consequences of MS.
The primary aims of therapy are returning function after an attack, preventing new attacks, and preventing disability. As with any medical treatment, medications used in the management of MS may have several adverse effects, and many possible therapies are still under investigation. At the same time, different alternative treatments are pursued by many people, despite little support from comparable, replicated scientific studies. Stem cell therapy is being studied.
This article focuses on therapies for standard MS; borderline forms of MS have particular treatments that are excluded.
Acute attacks
Administration of high doses of intravenous corticosteroids, such as methylprednisolone, is the routine therapy for acute relapses. This is administered over a period of three to five days, and has a well-established efficacy in promoting a faster recovery from disability afte |
https://en.wikipedia.org/wiki/Madhava%27s%20sine%20table | Madhava's sine table is the table of trigonometric sines constructed by the 14th century Kerala mathematician-astronomer Madhava of Sangamagrama (c. 1340 – c. 1425). The table lists the trigonometric sines of the twenty-four angles from 3.75° to 90° in steps of 3.75° (1/24 of a right angle, 90°). The table is encoded in the letters of the Devanagari alphabet using the Katapayadi system, giving entries the appearance of the verses of a poem in Sanskrit.
Madhava's original work containing the sine table has not been found. The table is reproduced in the Aryabhatiyabhashya of Nilakantha Somayaji (1444–1544) and also in the Yuktidipika/Laghuvivrti commentary of Tantrasamgraha by Sankara Variar (c. 1500–1560).
The table
The image below gives Madhava's sine table in Devanagari as reproduced in Cultural foundations of mathematics by C.K. Raju. The first twelve lines constitute the entries in the table. The last word in the thirteenth line indicates that these are "as told by Madhava".
Values in Madhava's table
To understand the meaning of the values tabulated by Madhava, consider some angle whose measure is A. Consider a circle of unit radius and center O. Let the arc PQ of the circle subtend an angle A at the center O. Drop the perpendicular QR from Q to OP; then the length of the line segment RQ is the value of the trigonometric sine of the angle A. Let PS be an arc of the circle whose length is equal to the length of the segment RQ. For various angles A, Madhava's table gives the measures of the corresponding angles POS in arcminutes, arcseconds and sixtieths of an arcsecond.
As an example, let A be an angle whose measure is 22.50°. In Madhava's table, the entry corresponding to 22.50° is the measure in arcminutes, arcseconds and sixtieths of an arcsecond of the angle whose radian measure is the value of sin 22.50°, which is 0.3826834;
multiply 0.3826834 radians by 180/π to convert to 21.92614 degrees, which is
1315 arcminutes 34 arcseconds 07 sixtieths of an arcsecond. |
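The conversion in the worked example can be sketched in code: a table entry is sin(A) expressed as an arc of the unit circle measured in arcminutes, arcseconds, and sixtieths of an arcsecond. The function name is illustrative, and the simple rounding used for the final digit is an approximation, not Madhava's own procedure.

```python
# Sketch: reproduce the sexagesimal encoding of Madhava's table entries.
# sin(A) radians * (180*60/pi) gives the arc length in arcminutes.
import math

def madhava_entry(degrees):
    total = math.sin(math.radians(degrees)) * (180 * 60 / math.pi)  # arcminutes
    minutes = int(total)
    seconds = int((total - minutes) * 60)
    sixtieths = round(((total - minutes) * 60 - seconds) * 60)
    return minutes, seconds, sixtieths

print(madhava_entry(22.5))   # (1315, 34, 7), matching the worked example above
print(madhava_entry(90.0))   # (3437, 44, 48): the radius in arcminutes
```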
https://en.wikipedia.org/wiki/Specialized%20pro-resolving%20mediators | Specialized pro-resolving mediators (SPM, also termed specialized proresolving mediators) are a large and growing class of cell signaling molecules formed in cells by the metabolism of polyunsaturated fatty acids (PUFA) by one or a combination of lipoxygenase, cyclooxygenase, and cytochrome P450 monooxygenase enzymes. Pre-clinical studies, primarily in animal models and human tissues, implicate SPM in orchestrating the resolution of inflammation. Prominent members include the resolvins and protectins.
SPM join the long list of other physiological agents which tend to limit inflammation, including glucocorticoids, interleukin 10 (an anti-inflammatory cytokine), interleukin 1 receptor antagonist (an inhibitor of the action of the pro-inflammatory cytokine interleukin 1), annexin A1 (an inhibitor of the formation of pro-inflammatory metabolites of polyunsaturated fatty acids), and the gaseous mediators carbon monoxide, nitric oxide, and hydrogen sulfide.
The absolute as well as relative roles of the SPM along with other physiological anti-inflammatory agents in resolving human inflammatory responses remain to be defined precisely. However, studies suggest that synthetic SPM that are resistant to being metabolically inactivated hold promise of being clinically useful pharmacological tools for preventing and resolving a wide range of pathological inflammatory responses along with the tissue destruction and morbidity that these responses cause. Based on animal model studies, the inflammation-based diseases which may be treated by such metabolically resistant SPM analogs include not only pathological and tissue damaging responses to invading pathogens but also a wide array of pathological conditions in which inflammation is a contributing factor such as allergic inflammatory diseases (e.g. asthma, rhinitis), autoimmune diseases (e.g. rheumatoid arthritis, systemic lupus erythematosus), psoriasis, atherosclerosis disease leading to heart attacks and st |
https://en.wikipedia.org/wiki/Embedded%20HTTP%20server | An embedded HTTP server is an HTTP server used in an embedded system.
The HTTP server is usually implemented as a software component of an application (embedded) system that controls and/or monitors a machine with mechanical and/or electrical parts.
The HTTP server implements the HTTP protocol in order to allow communications with one or more local or remote users using a browser. The aim is to let users interact with information provided by the embedded system (user interface, data monitoring, data logging, data configuration, etc.) over the network, without using the traditional peripherals required for local user interfaces (display, keyboard, etc.).
In some cases the functionalities provided via HTTP server allow also program-to-program communications, e.g. to retrieve data logged about the monitored machine, etc.
Usages
Examples of usage within an embedded application include:
to provide a thin client interface for a traditional application;
to provide indexing, reporting, and debugging tools during the development stage;
to implement a protocol for the distribution and acquisition of information to be displayed in the regular interface — possibly a web service, and possibly using XML as the data format;
to develop a web application.
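As a concrete illustration of the thin-client pattern above, here is a minimal sketch using Python's standard-library HTTP server. The /status endpoint, its JSON fields, and the port are hypothetical choices, not taken from any particular embedded product.

```python
# Minimal embedded-style HTTP server sketch (Python stdlib only):
# exposes hypothetical device status as JSON for a browser or another
# program to consume.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_status():
    """Stand-in for reading sensors or registers on the real device."""
    return {"temperature_c": 41.5, "uptime_s": 12345, "state": "running"}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(read_status()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Serve until interrupted; port 8080 is an arbitrary choice.
    HTTPServer(("", 8080), StatusHandler).serve_forever()
```

On a real device, `read_status` would be replaced by whatever sensor or register access the platform provides, and a production embeddable server would also address the footprint and robustness requirements discussed below.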
Advantages
There are a few advantages to using HTTP to perform the above:
HTTP is a well studied cross-platform protocol and there are mature implementations freely available;
HTTP is seldom blocked by firewalls and intranet routers;
HTTP clients (e.g. web browsers) are readily available with all modern computers;
there is a growing tendency of using embedded HTTP servers in applications that parallels the rising trends of home-networking and ubiquitous computing.
Typical requirements
Natural limitations of the platforms where an embedded HTTP server runs contribute to the list of non-functional requirements of the embedded, or more precisely embeddable, HTTP server. Some of these requirements are the followin |
https://en.wikipedia.org/wiki/Oxygen-evolving%20complex | The oxygen-evolving complex (OEC), also known as the water-splitting complex, is a water-oxidizing enzyme involved in the photo-oxidation of water during the light reactions of photosynthesis. OEC is surrounded by 4 core proteins of photosystem II at the membrane-lumen interface. The mechanism for splitting water involves absorption of three photons before the fourth provides sufficient energy for water oxidation. Based on a widely accepted theory from 1970 by Kok, the complex can exist in 5 states, denoted S0 to S4, with S0 the most reduced and S4 the most oxidized. Photons trapped by photosystem II move the system from state S0 to S4. S4 is unstable and reacts with water producing free oxygen. For the complex to reset to the lowest state, S0, it uses 2 water molecules to pull out 4 electrons.
The OEC active site contains a cluster of manganese and calcium, with the formula Mn4Ca1OxCl1–2(HCO3)y. This cluster is coordinated by D1 and CP43 subunits and stabilized by peripheral membrane proteins. Other characteristics of it have been reviewed.
Currently, the mechanism of the complex is not completely understood, nor are the roles of Ca2+, Cl−, and the membrane proteins surrounding the metal cluster. Much of what is known has been collected from flash photolysis experiments, electron paramagnetic resonance (EPR), and X-ray spectroscopy. |
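The Kok S-state cycle described above can be sketched as a toy state machine. This is a deliberate simplification for illustration: water binding, proton release, and back-reactions are omitted, and the function name is an arbitrary choice.

```python
# Toy sketch of the Kok S-state cycle: each absorbed photon advances
# the cluster one state (S0 -> S1 -> S2 -> S3 -> S4); the unstable S4
# releases O2 and resets to S0.
def kok_cycle(photons):
    state, oxygen = 0, 0
    for _ in range(photons):
        state += 1
        if state == 4:        # S4 is unstable: oxidise water, reset to S0
            oxygen += 1
            state = 0
    return state, oxygen

print(kok_cycle(4))   # (0, 1): four photons yield one O2 and reset to S0
print(kok_cycle(10))  # (2, 2): two full cycles, then left in S2
```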
https://en.wikipedia.org/wiki/Journal%20of%20Chemometrics | The Journal of Chemometrics is a monthly peer-reviewed scientific journal published since 1987 by John Wiley & Sons. It publishes original scientific papers, reviews, and short communications on fundamental and applied aspects of chemometrics. The current editor-in-chief is Age K. Smilde (University of Amsterdam).
Abstracting and indexing
Journal of Chemometrics is abstracted and indexed in:
Chemical Abstracts Service
Scopus
Web of Science
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.467, ranking it 27th out of 64 journals in the category "Instruments & Instrumentation", 35th out of 63 journals in the category "Automation & Control Systems", 30th out of 125 journals in the category "Statistics & Probability", 40th out of 108 journals in the category "Mathematics, Interdisciplinary Applications", and 54th out of 87 journals in the category "Chemistry, Analytical".
Highest cited papers
Selectivity, local rank, three-way data analysis and ambiguity in multivariate curve resolution, Volume 9, Issue 1, Jan-Feb 1995, Pages: 31–58, Tauler R, Smilde A, Kowalski B. Cited 370 times.
Genetic algorithms as a strategy for feature-selection, Volume 6, Issue 5, Sep-Oct 1992, Pages: 267–281, Leardi R, Boggia R, Terrile M. Cited 296 times.
Multiway calibration. Multilinear PLS, Volume 10, Issue 1, Jan-Feb 1996, Pages: 47–61, Bro R. Cited 290 times. |
https://en.wikipedia.org/wiki/Impulse%20response | In signal processing and control theory, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse (δ(t)). More generally, an impulse response is the reaction of any dynamic system to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system).
In all these cases, the dynamic system and its impulse response may be actual physical objects, or may be mathematical systems of equations describing such objects.
Since the impulse function contains all frequencies (the Fourier transform of the Dirac delta function is flat, reflecting its infinite frequency bandwidth), the impulse response defines the response of a linear time-invariant system for all frequencies.
Mathematical considerations
Mathematically, how the impulse is described depends on whether the system is modeled in discrete or continuous time. The impulse can be modeled as a Dirac delta function for continuous-time systems, or as the Kronecker delta for discrete-time systems. The Dirac delta represents the limiting case of a pulse made very short in time while maintaining its area or integral (thus giving an infinitely high peak). While this is impossible in any real system, it is a useful idealisation. In Fourier analysis theory, such an impulse comprises equal portions of all possible excitation frequencies, which makes it a convenient test probe.
Any system in a large class known as linear, time-invariant (LTI) is completely characterized by its impulse response. That is, for any input, the output can be calculated in terms of the input and the impulse response. (See LTI system theory.) The impulse response of a linear transformation is the image of Dirac's delta function under the transformation, analogous t |
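The claim that an LTI system is completely characterized by its impulse response can be illustrated in discrete time: probe the system with a Kronecker delta, then reproduce its output for an arbitrary input via the convolution sum. A minimal sketch (the 3-tap moving average is a hypothetical example system):

```python
def moving_average(x):
    """An example LTI system: 3-point moving average."""
    return [(x[n] + (x[n-1] if n >= 1 else 0) + (x[n-2] if n >= 2 else 0)) / 3
            for n in range(len(x))]

def convolve(x, h):
    """Direct convolution sum: y[n] = sum_k x[k] * h[n-k]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

impulse = [1.0, 0, 0, 0, 0, 0]        # Kronecker delta
h = moving_average(impulse)            # measured impulse response
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]     # arbitrary input

# The system's output equals input convolved with impulse response
# (compared with a tolerance to allow for floating-point rounding).
print(all(abs(a - b) < 1e-9
          for a, b in zip(moving_average(x), convolve(x, h))))  # → True
```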
https://en.wikipedia.org/wiki/Triviality%20%28mathematics%29 | In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which distinguishes from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove.
Whether a situation is trivial depends on who considers it: it may be obviously true to someone with sufficient knowledge or experience of it, yet hard to understand for someone encountering it for the first time. There can also be disagreement about how quickly and easily a problem must be recognized for it to count as trivial. Triviality is therefore not a universally agreed property in mathematics and logic.
Trivial and nontrivial solutions
In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others:
Empty set: the set containing no members
Trivial group: the mathematical group containing only the identity element
Trivial ring: a ring defined on a singleton set
"Trivial" can also be used to describe solutions to an equation that have a very simple structure, but which for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation dy/dx = y,
where y = y(x) is a function whose derivative is y′. The trivial solution is the zero function y(x) = 0,
while a nontrivial solution is the exponential function y(x) = e^x.
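The trivial and nontrivial solutions just mentioned can be checked numerically. A minimal sketch, assuming the differential equation in question is dy/dx = y (the helper name and test points are illustrative):

```python
import math

def satisfies_dy_dx_equals_y(y, xs, h=1e-6, tol=1e-4):
    """Check dy/dx = y at sample points via a central-difference derivative."""
    return all(abs((y(x + h) - y(x - h)) / (2 * h) - y(x)) < tol for x in xs)

xs = [0.0, 0.5, 1.0, 2.0]
print(satisfies_dy_dx_equals_y(lambda x: 0.0, xs))    # trivial solution → True
print(satisfies_dy_dx_equals_y(math.exp, xs))         # nontrivial solution → True
print(satisfies_dy_dx_equals_y(lambda x: x * x, xs))  # not a solution → False
```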
The differential equation with boundary conditions is important in mathematics and |
https://en.wikipedia.org/wiki/Airline%20codes | This is a list of airline codes. The table lists IATA's two-character airline designators, ICAO's three-character airline designators and the airline call signs (telephony designator). Historical assignments are also included.
IATA airline designator
IATA airline designators, sometimes called IATA reservation codes, are two-character codes assigned by the International Air Transport Association (IATA) to the world's airlines. The standard is described in IATA's Standard Schedules Information Manual and the codes themselves are described in IATA's Airline Coding Directory. (Both are published semiannually.)
The IATA codes were originally based on the ICAO designators issued in 1947 as two-letter airline identification codes (see the section below). IATA expanded the two-character system to codes consisting of a letter and a digit (or vice versa), e.g. EasyJet's U2, after ICAO introduced its current three-letter system in 1982; until then, only combinations of letters were used.
Airline designator codes follow the format xx(a), i.e., two alphanumeric characters (letters or digits) followed by an optional letter. Although the IATA standard provides for three-character airline designators, IATA has not used the optional third character in any assigned code. This is because some legacy computer systems, especially the "central reservations systems", have failed to comply with the standard, notwithstanding the fact that it has been in place for twenty years. The codes issued to date comply with IATA Resolution 762, which provides for only two characters. These codes thus comply with the current airline designator standard, but use only a limited subset of its possible range.
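The xx(a) format described above can be expressed as a quick validity check. An illustrative sketch only (real assignments are governed by IATA resolutions, not by pattern matching):

```python
import re

# Two alphanumeric characters followed by an optional third letter: xx(a).
DESIGNATOR = re.compile(r"^[A-Z0-9]{2}[A-Z]?$")

for code in ["BA", "U2", "B6", "BAW", "B", "U22"]:
    print(code, bool(DESIGNATOR.match(code)))
# "B" fails (too short); "U22" fails (the optional third character must be a letter).
```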
There are three types of designator: unique, numeric/alpha, and controlled duplicate (explained below).
IATA airline designators are used to identify an airline for commercial purposes in reservations, timetables, tickets, tariffs, air waybills and in telecommunications.
A fli |
https://en.wikipedia.org/wiki/PolymiRTS | Polymorphism in microRNA Target Site (PolymiRTS) is a database of naturally occurring DNA variations in putative microRNA target sites.
See also
MicroRNA
List of miRNA target prediction tools |
https://en.wikipedia.org/wiki/Urobilin | Urobilin or urochrome is the chemical primarily responsible for the yellow color of urine. It is a linear tetrapyrrole compound that, along with the related colorless compound urobilinogen, is a degradation product of the cyclic tetrapyrrole heme.
Metabolism
Urobilin is generated from the degradation of heme, which is first degraded through biliverdin to bilirubin. Bilirubin is then excreted as bile, which is further degraded by microbes present in the large intestine to urobilinogen. Some of this remains in the large intestine, and its conversion to stercobilin gives feces their brown color. Some is reabsorbed into the bloodstream and then delivered to kidney. When urobilinogen is exposed to air, it is oxidized to urobilin, giving urine its yellow color.
Importance
Many urine tests (urinalysis) monitor the amount of urobilin in urine, as its levels can give insight on the effectiveness of urinary tract function. Normally, urine would appear as either light yellow or colorless. A lack of water intake, for example following sleep or dehydration, reduces the water content of urine, thereby concentrating urobilin and producing a darker color of urine. Obstructive jaundice reduces biliary bilirubin excretion, which is then excreted directly from the blood stream into the urine, giving a dark-colored urine but with a paradoxically low urobilin concentration, no urobilinogen, and usually with correspondingly pale faeces. Darker urine can also be due to other chemicals, such as various ingested dietary components or drugs, porphyrins in patients with porphyria, and homogentisate in patients with alkaptonuria.
See also
Bile pigment
Bilirubin
Biliverdin
Heme
Stercobilin |
https://en.wikipedia.org/wiki/Welding%20of%20advanced%20thermoplastic%20composites | Advanced thermoplastic composites (ACM) consist of high-strength fibres held together by a thermoplastic matrix. They are becoming more widely used in the aerospace, marine, automotive and energy industries, owing to decreasing cost and superior strength-to-weight ratios compared with metallic parts. Advanced thermoplastic composites offer excellent damage tolerance, corrosion resistance, high fracture toughness, high impact resistance, good fatigue resistance, low storage cost, and an indefinite shelf life. Thermoplastic composites can also be formed and reformed, repaired, and fusion welded.
Fusion bonding fundamentals
Fusion bonding is a category of techniques for welding thermoplastic composites. It requires the melting of the joint interface, which decreases the viscosity of the polymer and allows for intermolecular diffusion. These polymer chains then diffuse across the joint interface and become entangled, giving the joint its strength.
Welding techniques
There are many welding techniques that can be used to fusion bond thermoplastic composites. These different techniques can be broken down into three classifications for their ways of generating heat; frictional heating, external heating and electromagnetic heating. Some of these techniques can be very limited and only used for specific joints and geometries.
Friction welding
Friction welding is best used for parts that are small and flat. The welding equipment is often expensive, but produces high-quality welds.
Linear vibration welding
Two flat parts are brought together under pressure, with one fixed in place and the other vibrating back and forth parallel to the joint. Frictional heat is generated until the polymers soften or melt. Once the desired temperature is reached, the vibration stops, the polymer solidifies and a weld joint is formed. The two most important welding parameters affecting mechanical performance are welding pressure and time. De |
https://en.wikipedia.org/wiki/KCNA4 | Potassium voltage-gated channel subfamily A member 4 also known as Kv1.4 is a protein that in humans is encoded by the KCNA4 gene. It contributes to the cardiac transient outward potassium current (Ito1), the main contributing current to the repolarizing phase 1 of the cardiac action potential.
Description
Potassium channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes - shaker, shaw, shab, and shal - have been identified in Drosophila, and each has been shown to have human homolog(s). This gene encodes a member of the potassium channel, voltage-gated, shaker-related subfamily. This member contains six membrane-spanning domains with a shaker-type repeat in the fourth segment. It belongs to the A-type potassium current class, the members of which may be important in the regulation of the fast repolarizing phase of action potentials in heart and thus may influence the duration of cardiac action potential. The coding region of this gene is intronless, and the gene is clustered with genes KCNA3 and KCNA10 on chromosome 1 in humans.
KCNA4 (Kv1.4) contains a tandem inactivation domain at the N terminus. It is composed of two subdomains. Inactivation domain 1 (ID1, residues 1-38) consists of a flexible N terminus anchored at a 5-turn helix, and is thought to work by occluding the ion pathway, as is the case with a classical ball domain. Inactivation domain 2 (ID2, residues 40-50) is a 2.5 turn helix with a high proportion of hydrophobic residues that probably serves to attach ID1 to the cytoplasmic face of the channel. In this way, it can promote rapid access of ID1 to the receptor site in the open channel. ID1 and ID2 function together to bring about fast inacti |
https://en.wikipedia.org/wiki/List%20of%20artificial%20intelligence%20artists | Many notable artificial intelligence artists have created a wide variety of artificial intelligence art from the 1960s to today. These include:
20th century
Harold Cohen, active from 1960s to 2010s. Cohen's work is primarily with AARON, a series of computer programs that autonomously create original images.
Eric Millikin, active from 1980s to present. Millikin's work includes AI-generated virtual reality, video art, poetry, music, and performance art, on topics such as animal rights, climate change, anti-racism, witchcraft, and the occult.
Karl Sims, active from 1980s to present. Sims is best known for using particle systems and artificial life in computer animation.
21st century
Sougwen Chung, active from 2010s to present. Chung's work includes performances with a robotic arm that uses AI to attempt to draw in a manner similar to Chung.
Stephanie Dinkins, active from 2010s to present. Dinkins' work includes recordings of conversations with an artificially intelligent robot that resembles a black woman, discussing topics such as race and the nature of being.
Jake Elwes, active from 2010s to present. Their practice is the exploration of artificial intelligence, queer theory and technical biases.
Libby Heaney, active from 2010s to present. Heaney's practice includes work with chatbots.
Mario Klingemann, active from 2010s to present. Klingemann's works examine creativity, culture, and perception through machine learning and artificial intelligence.
Mauro Martino, active from 2010s to present. Martino's work includes design, data visualization and infographics.
Trevor Paglen, active from 2000s to present. Paglen's practice includes work in photography and geography, on topics like mass surveillance and data collection.
Anna Ridler, active from 2010s to present. Ridler works with collections of information, including self-generated data sets, often working with floral photography. |
https://en.wikipedia.org/wiki/Dichlorooctylisothiazolinone | Dichlorooctylisothiazolinone, DCOIT or DCOI, is the organic compound with the formula SC(Cl)=C(Cl)C(O)NC8H17. It is a white solid that melts near room temperature. It is an isothiazolinone, a class of heterocyclic compounds used as biocides. DCOIT has attracted attention as an antifouling compound. It is a replacement for organotin compounds, which have been largely banned for causing environmental damage. DCOIT, however, is itself controversial.
Safety
Isothiazolinones are highly bioactive and have attracted scrutiny for causing contact dermatitis. |
https://en.wikipedia.org/wiki/Applied%20Physics%20A | Applied Physics A: Materials Science and Processing is a peer-reviewed scientific journal that is published monthly by Springer Science+Business Media. The editor-in-chief is Thomas Lippert (Paul Scherrer Institute). This publication is complemented by Applied Physics B (Lasers & Optics).
History
The journal Applied Physics was conceived and founded in 1972 by Helmut K. V. Lotsch at Springer-Verlag Berlin Heidelberg New York. Lotsch edited the journal up to volume 25 and thereafter split it into two parts, A26 (Solids and Surfaces) and B26 (Photophysics and Laser Chemistry). He continued his editorship up to volumes A61 and B61. Starting in 1995, the two journals were continued under separate editorships.
Aims and scope
Applied Physics A covers theoretical and experimental research in applied physics, including surfaces, thin films, the condensed phase of materials, nanostructured materials, applications of nanotechnology, and techniques for advanced processing and characterization. Coverage also includes the characterization and evaluation of materials, optical and electronic materials, production and process engineering, interfaces (surfaces and thin films), corrosion, and coatings.
Publishing formats include articles pertaining to original research, reviews, and rapid communications. Invited papers are also included on a regular basis and collected in special issues.
Abstracting and indexing
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.584. |
https://en.wikipedia.org/wiki/Bacterial%20genome | Bacterial genomes are generally smaller and less variant in size among species when compared with genomes of eukaryotes. Bacterial genomes can range in size anywhere from about 130 kbp to over 14 Mbp. A study that included, but was not limited to, 478 bacterial genomes, concluded that as genome size increases, the number of genes increases at a disproportionately slower rate in eukaryotes than in non-eukaryotes. Thus, the proportion of non-coding DNA goes up with genome size more quickly in non-bacteria than in bacteria. This is consistent with the fact that most eukaryotic nuclear DNA is non-gene coding, while the majority of prokaryotic, viral, and organellar genes are coding.
Genome sequences are now available from 50 different bacterial phyla and 11 different archaeal phyla. Second-generation sequencing has yielded many draft genomes (close to 90% of bacterial genomes in GenBank are currently incomplete); third-generation sequencing might eventually yield a complete genome in a few hours. The genome sequences reveal much diversity in bacteria. Analysis of over 2000 Escherichia coli genomes reveals an E. coli core genome of about 3100 gene families and a total of about 89,000 different gene families. Genome sequences show that parasitic bacteria have 500–1200 genes, free-living bacteria have 1500–7500 genes, and archaea have 1500–2700 genes. A striking discovery by Cole et al. described massive amounts of gene decay when comparing the leprosy bacillus to ancestral bacteria. Studies have since shown that several bacteria have smaller genome sizes than their ancestors did. Over the years, researchers have proposed several theories to explain the general trend of bacterial genome decay and the relatively small size of bacterial genomes. Compelling evidence indicates that the apparent degradation of bacterial genomes is due to a deletional bias.
Methods and techniques
As of 2014, there are over 30,000 sequenced bacterial genomes publicly available and thousands of m |
https://en.wikipedia.org/wiki/Collection%20No.%201 | Collection #1 is the name of a set of email addresses and passwords that appeared on the dark web around January 2019. The database contains over 773 million unique email addresses and 21 million unique passwords, resulting in more than 2.7 billion email/password pairs. The list, reviewed by computer security experts, contains exposed addresses and passwords from over 2000 previous data breaches, as well as an estimated 140 million new email addresses and 10 million new passwords from previously unknown sources, collectively making it the largest data breach on the Internet.
Collection #1 was discovered by security researcher Troy Hunt, founder of "Have I Been Pwned?", a website that lets users check whether their email addresses or passwords have appeared in a known data breach. The database had been briefly posted to Mega in January 2019, with links to it posted on a popular hacking forum. Hunt found that the offering contained 87 gigabytes of data across 12,000 files. More worryingly, the passwords were available in plaintext rather than in hashed form, implying that the database's creators had successfully cracked the password hashes, exploiting weak implementations of hashing algorithms. Security researchers noted that, unlike other username/password lists usually sold on the dark web, Collection #1 was temporarily available at no cost and could therefore be used by a larger number of malicious agents, primarily for credential stuffing.
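"Have I Been Pwned?" lets users check a password without ever transmitting it, via a k-anonymity range query: only the first five hex characters of the password's SHA-1 hash are sent, and the service returns candidate hash suffixes to compare locally. A sketch of the client-side part (the endpoint URL reflects the service's public documentation and is an assumption here; no network call is made):

```python
import hashlib

def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 hash into the 5-char prefix sent to the
    service and the 35-char suffix kept locally for comparison."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"  # assumed endpoint
    return prefix, suffix, url

prefix, suffix, url = hibp_range_query_parts("password")
print(prefix)  # → 5BAA6  (SHA-1 of "password" starts 5BAA61E4...)
```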
By January 30, 2019, security researchers observed that similar sets of data, named Collections #2 through #5, have been seen for sale on the dark web. Collections #2-5 included over 845 gigabytes of data, with a total of 25 billion email/password records. Security researchers at Hasso Plattner Institute estimated that Collections #2-5, after removing duplicates, has about three times as mu |
https://en.wikipedia.org/wiki/Behavioral%20momentum | Behavioral momentum is a theory in quantitative analysis of behavior and is a behavioral metaphor based on physical momentum. It describes the general relation between resistance to change (persistence of behavior) and the rate of reinforcement obtained in a given situation.
B. F. Skinner (1938) proposed that all behavior is based on a fundamental unit of behavior called the discriminated operant. The discriminated operant, also known as the three-term contingency, has three components: an antecedent discriminative stimulus, a response, and a reinforcing or punishing consequence. The organism responds in the presence of the stimulus because past responses in the presence of that stimulus have produced reinforcement.
Resistance to change
According to behavioral momentum theory, there are two separable factors that independently govern the rate with which a discriminated operant occurs and the persistence of that response in the face of disruptions such as punishment, extinction, or the differential reinforcement of alternative behaviors. (see Nevin & Grace, 2000, for a review). First, the positive contingency between the response and a reinforcing consequence controls response rates (i.e., a response–reinforcer relation) by shaping a particular pattern of responding. This is governed by the relative law of effect (i.e., the matching law; Herrnstein, 1970). Secondly, the Pavlovian relation between surrounding, or context, stimuli and the rate or magnitude (but not both) of reinforcement obtained in the context (i.e., a stimulus–reinforcer relation) governs the resistance of the behavior to operations such as extinction. Resistance to change is assessed by measuring responding during operations such as extinction or satiation that tend to disrupt the behavior and comparing these measurements to stable, pre-disruption response rates.
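The matching law cited above (Herrnstein, 1970) is commonly stated as follows; the notation is assumed here, not taken from the source. For two concurrently available responses with rates B1 and B2 and obtained reinforcement rates R1 and R2, the relative response rate matches the relative reinforcement rate:

```latex
\frac{B_1}{B_1 + B_2} \;=\; \frac{R_1}{R_1 + R_2}
```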
Resistance to disruption has been considered a better measure of response strength than a simple measure of response rate.(Nevin, 1974) |
https://en.wikipedia.org/wiki/Biological%20Weapons%20Act%201974 | The Biological Weapons Act 1974 (citation 1974 c.6) is an Act of the Parliament of the United Kingdom, passed during the reign of Queen Elizabeth II on 8 February 1974, with the long title "An Act to prohibit the development, production, acquisition and possession of certain biological agents and toxins and of biological weapons."
The Act makes illegal the development, production, acquisition or retention of biological weapons, as well as any weapon delivery systems for the deployment of biological weapons. It also forbids the transfer of biological weapons between persons and establishes the penalty for the offences created by the Act: a maximum sentence of life imprisonment.
The Act applies to anyone within the United Kingdom and to British citizens abroad; however, a person must be within the United Kingdom, the Isle of Man or one of the British colonies to be arrested for the offence.
It also gives Customs and Excise officers the power to seize biological weapons entering or leaving the United Kingdom, or being transported to the UK by British citizens in other countries.
A fourth section says that anyone may be charged (as with other crimes) with aiding and abetting or conspiring to transport biological weapons.
The Act extends to the entire United Kingdom. |
https://en.wikipedia.org/wiki/Vorsetuzumab%20mafodotin | Vorsetuzumab mafodotin (SGN-75) is an antibody-drug conjugate (ADC) directed to the protein CD70 designed for the treatment of cancer. It is a humanized monoclonal antibody, vorsetuzumab, conjugated with noncleavable monomethyl auristatin F (MMAF), a cytotoxic agent.
This drug was developed by Seattle Genetics, Inc. The drug completed phase I clinical trials for renal cell carcinoma, but development was discontinued in 2013.
No reason was given, but Seattle Genetics planned to start clinical trials of SGN-CD70A in 2014. |
https://en.wikipedia.org/wiki/Functional%20integration | Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions. Functional integrals arise in probability, in the study of partial differential equations, and in the path integral approach to the quantum mechanics of particles and fields.
In an ordinary integral (in the sense of Lebesgue integration) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research.
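The limiting procedure just described can be written schematically for a functional integral; this is a standard heuristic form, with notation assumed here. Discretize the domain into points x_1, …, x_n and integrate over the function's value at each point:

```latex
\int F[f]\,\mathcal{D}[f]
  \;=\; \lim_{n \to \infty} N_n \int \cdots \int
  F\bigl(f(x_1), \ldots, f(x_n)\bigr)\, df(x_1) \cdots df(x_n)
```

where N_n is a normalization constant; making this limit rigorous is precisely the difficulty referred to above.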
Functional integration was developed by Percy John Daniell in an article of 1919 and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion. They developed a rigorous method (now known as the Wiener measure) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral, useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties.
Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum elec |
https://en.wikipedia.org/wiki/Bob%20Dobkin | Robert C. Dobkin (born 1943 in Philadelphia) is an American electrical engineer, co-founder of Linear Technology Corporation, and veteran linear (analog) integrated circuit (IC) designer.
Career
Dobkin studied Electrical Engineering at MIT but did not complete a degree. After early positions, e.g. at GE Reentry Systems, he joined Philbrick Nexus in Massachusetts, working on IC development with Bob Pease. He joined National Semiconductor (NSC) in January 1969, resigned as Director of Advanced Circuit Development at NSC in July 1981, and co-founded Linear Technology with Robert H. Swanson in the same year.
Dobkin continued to serve as the company's Chief Technical Officer through its acquisition by Analog Devices in 2016. He was on the board of directors of Spectra7 Microsystems Inc. from 2013 until his retirement in 2021.
Dobkin holds more than 100 patents in the field of analog circuits.
In 2021 Dobkin was charged by the SEC with insider trading for sharing non-public information regarding the merger of Analog Devices and Linear Technology. Dobkin was found guilty and ordered to pay a fine of $252,092.16 and barred from serving as an officer or director of a publicly traded company.
Works
LM118, first high speed operational amplifier.
LM199, heated buried-Zener voltage reference, and its improved successor, the LTZ1000.
LM317, first variable three-pin voltage regulator.
LT1083, first low-dropout regulator.
LT3080, three terminal adjustable regulator with a current source reference. |
https://en.wikipedia.org/wiki/Astrophysics%20Source%20Code%20Library | The Astrophysics Source Code Library (ASCL) is an online registry of scientist-written software used in astronomy or astrophysics research. The primary objective of the ASCL is to make the software used in research available for examination to improve the transparency of research.
Entries in the ASCL are indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science's Data Citation Index and because each code is assigned a unique ascl ID, software can be cited in a journal paper even when there is no citable paper describing the code. Web of Science and ADS indexing makes research software more discoverable. Additionally, ADS can link some papers which use codes to the code entries, which makes it easier to examine the computational methods used. ADS also tracks citations for software (assuming the citations are formatted correctly), which can help research software authors for whom citations are an important measure.
Entries in the ASCL include the name, description, author of the code, ascl ID, and either a link to a download site for the software or an attached archive file for the software so the code can be downloaded directly from the ASCL. A link to a paper describing or using the software is usually included as well to demonstrate that the software has been used in refereed research.
History
Established in 1999 by Robert J. Nemiroff and John Wallin
Migrated to APOD discussion forum Starship Asterisk* in 2010
Advisory committee formed in 2011
ADS started indexing entries in 2012
New database and site in production in 2014
Advisory Committee
Peter Teuben, University of Maryland, Chair
Bruce Berriman, California Institute of Technology
Jessica Mink, Center for Astrophysics Harvard & Smithsonian
Robert J. Nemiroff, Michigan Technological University
Rein Warmels, European Southern Observatory
Lior Shamir, Lawrence Technological University
Keith Shortridge, Australian Astronomical Observatory
John Wallin, Middle Tennessee State University
|
https://en.wikipedia.org/wiki/Herapathite | Herapathite, or iodoquinine sulfate, is a chemical compound whose crystals are dichroic and thus can be used for polarizing light.
It was discovered in 1852 by William Bird Herapath, a Bristol surgeon and chemist. One of his pupils found that adding iodine to the urine of a dog that had been fed quinine produced unusual green crystals. Herapath noticed while studying the crystals under a microscope that they appeared to polarize light.
In the 1930s, a process was invented to grow single herapathite crystals large enough to be sandwiched between two sheets of glass to create a polarizing filter; these were sold under the Bernotar name by Carl Zeiss. Herapathite can be formed by precipitation: quinine sulfate is dissolved in acetic acid and iodine tincture is added.
Herapathite's dichroic properties came to the attention of Sir David Brewster, and were later used by Edwin H. Land in 1929 to construct the first type of Polaroid sheet polarizer. He did this by embedding herapathite crystals in a polymer instead of growing a single large crystal.
Structurally, herapathite consists of quinine (in a cationic doubly-protonated ammonium form), sulfate counterions, and triiodide units, all as a hydrate. They combine as 4C20H26N2O2•3SO4•2I3•6H2O, or sometimes in other ratios and with higher polyiodides. |
https://en.wikipedia.org/wiki/ChimerDB | ChimerDB in computational biology is a database of fusion sequences.
ChimerDB currently consists of three searchable datasets.
ChimerKB is a curated knowledge base of 1,066 fusion genes sourced from publicly available scientific literature.
ChimerPub provides continuously updated descriptions of fusion genes, text-mined from publications.
ChimerSeq is a database of RNA-seq data of fusion sequences downloaded from the TCGA data portal.
See also
ECgene
Fusion gene |
https://en.wikipedia.org/wiki/Activist%20ageing | In the field of ageing studies, activist ageing refers to activism and research that empower the elderly. This approach investigates how ageing is imagined (mostly in Western societies), how ageism operates, and how elders respond to exclusion.
Many elders, and especially women, are involved in organizations that aim to effect social change, on issues related to ageing or more generally. Retirement engenders a form of social exclusion; in this context, becoming an activist or a volunteer expresses one's agency and participation in social change outside the market system. Instead of assuming a passive role, they act. As elder rights activists and members of community organizations, they try to prevent elder abuse, raise awareness, and build resources and networks. Activist ageing is different from active ageing.
Organizations
Respecting Elders: Communities Against Abuse (RECAA): Aims to raise awareness of elder mistreatment within ethnocultural communities, using strategies such as forum theatre (acting out situations from the lives of elders to stimulate discussion). They participate in events such as World Elder Abuse Awareness Day.
Raging Grannies: Created in 1987 in British Columbia, these social justice activists have since spread to other countries.
CARP (Canada): An organization that defends the needs of older Canadians and offers benefits to its members.
Seniors Action Quebec: Aims to address the living conditions of English-speaking seniors in Quebec and to provide its members with information for active and healthy ageing.
Ageivism - The Ideology of Activist Ageing
For many years, action around the rights of older persons and the social activism of older adults was not anchored in a single ideological framework. Only in recent years have attempts begun to frame the global elder rights movement within an ideology. One such attempt is the development of Ageivism as an ideology of ageing.
Ageivism is defined as follows:
"Ageivism" refers to |
https://en.wikipedia.org/wiki/Bestrophin%201 | Bestrophin-1 (Best1) is a protein that, in humans, is encoded by the BEST1 gene (PDB IDs 5T5N/4RDQ).
The bestrophin family of proteins comprises four evolutionarily related genes (BEST1, BEST2, BEST3, and BEST4) that code for integral membrane proteins. This family was first identified in humans by linking a BEST1 mutation with Best vitelliform macular dystrophy (BVMD). Mutations in the BEST1 gene have been identified as the primary cause for at least five different degenerative retinal diseases.
The bestrophins are an ancient family of structurally conserved proteins that have been identified in nearly every organism studied from bacteria to humans. In humans, they function as calcium-activated anion channels, each of which has a unique tissue distribution throughout the body. Specifically, the BEST1 gene on chromosome 11q13 encodes the Bestrophin-1 protein in humans whose expression is highest in the retina.
Structure
Gene
The bestrophin genes share a conserved gene structure, with almost identical sizes of the 8 RFP-TM domain-encoding exons and highly conserved exon-intron boundaries. Each of the four bestrophin genes has a unique 3-prime end of variable length.
BEST1 has been shown by two independent studies to be regulated by Microphthalmia-associated transcription factor.
Protein
Bestrophin-1 is an integral membrane protein found primarily in the retinal pigment epithelium (RPE) of the eye. Within the RPE layer, it is mainly located on the basolateral plasma membrane. Protein crystallization structures indicate this protein's primary ion channel function as well as its calcium regulatory capabilities. Bestrophin-1 consists of 585 amino acids and both N- and the C-termini are located within the cell.
The structure of Best1 consists of five identical subunits that each span the membrane four times and form a continuous, funnel-shaped pore via the second transmembrane domain containing a high content of aromatic residues, including an invariant arg-phe |
https://en.wikipedia.org/wiki/Jason%20Mars | Jason O. Mars (born May 27, 1983) is an American computer scientist, author, and entrepreneur. He is best known for his research into computer architecture and artificial intelligence, particularly in the design and deployment of conversational AI. The best-selling author of Breaking Bots: Inventing a New Voice in the AI Revolution, he has been involved in multiple AI initiatives and startups over the course of his career, including ZeroShotBot, Jaseci, Clinc, Myca, and ImpactfulAI.
Mars holds a PhD in Computer Science from the University of Virginia (UVA), and is currently employed as an Associate Professor of Computer Science and Engineering at the University of Michigan (U-M). He is also acting co-director of U-M's Clarity Lab alongside his wife, Professor Lingjia Tang. There, Mars helps direct advanced research in artificial intelligence, large-scale computing, and coding. Among the lab's most notable projects is the open source Sirius, later rebranded as Lucida.
A virtual assistant capable of understanding both visual and auditory queries, Lucida was intended by Mars and his colleagues as a sandbox that would help programmers explore the complexities of speech recognition. Mars also hoped that it would act as a foundation for the development of hardware better-suited for conversational AI. The project was supported by Google, the Defense Advanced Research Projects Agency (DARPA), and the National Science Foundation.
Jason Mars was one of ten individuals celebrated at the 28th Annual Caribbean American Heritage (CARAH) Awards. Mars received the Vanguard Award from the Institute of Caribbean Studies for his technological impact and "contributions to America and the world." Other winners include Pfizer Principal Scientist for Viral Vaccines Vidia Roopchand and Grammy-winning songwriter Gordon Chambers. Past honorees of the CARAH Awards include former United States Attorney General Eric Holder, former United States Ambassador to the United Nations Andrew Youn |
https://en.wikipedia.org/wiki/Android%20Studio | Android Studio is the official integrated development environment (IDE) for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. It is available for download on Windows, macOS, and Linux-based operating systems. It is a replacement for the Eclipse Android Development Tools (E-ADT) as the primary IDE for native Android application development.
Android Studio was announced on May 16, 2013, at the Google I/O conference. It was in early access preview stage starting from version 0.1 in May 2013, then entered beta stage starting from version 0.8 which was released in June 2014. The first stable build was released in December 2014, starting from version 1.0. At the end of 2015, Google dropped support for Eclipse ADT, making Android Studio the only officially supported IDE for Android development.
On May 7, 2019, Kotlin replaced Java as Google's preferred language for Android app development. Java is still supported, as is C++.
Features
The following features are provided in the current stable version:
Gradle-based build support
Android-specific refactoring and quick fixes
Lint tools to catch performance, usability, version compatibility and other problems
ProGuard integration and app-signing capabilities
Template-based wizards to create common Android designs and components
A rich layout editor that allows users to drag and drop UI components, with the option to preview layouts on multiple screen configurations
Support for building Android Wear apps
Built-in support for Google Cloud Platform, enabling integration with Firebase Cloud Messaging (formerly Google Cloud Messaging) and Google App Engine
Android Virtual Device (emulator) to run and debug apps within Android Studio
Android Studio supports all the same programming languages as IntelliJ (and CLion), e.g. Java, C++, and more with extensions, such as Go; and Android Studio 3.0 or later supports Kotlin, and "all Java 7 language features
https://en.wikipedia.org/wiki/Symmetric%20difference | In mathematics, the symmetric difference of two sets, also known as the disjunctive union and set sum, is the set of elements which are in either of the sets, but not in their intersection. For example, the symmetric difference of the sets {1, 2, 3} and {3, 4} is {1, 2, 4}.
The symmetric difference of the sets A and B is commonly denoted by A △ B, A ⊖ B, or A ⊕ B.
It can be viewed as a form of addition modulo 2.
The power set of any set becomes an abelian group under the operation of symmetric difference, with the empty set as the neutral element of the group and every element in this group being its own inverse. The power set of any set becomes a Boolean ring, with symmetric difference as the addition of the ring and intersection as the multiplication of the ring.
Properties
The symmetric difference is equivalent to the union of both relative complements, that is: A △ B = (A ∖ B) ∪ (B ∖ A).
The symmetric difference can also be expressed using the XOR operation ⊕ on the predicates describing the two sets in set-builder notation: A △ B = {x : (x ∈ A) ⊕ (x ∈ B)}.
The same fact can be stated as the indicator function (denoted here by χ) of the symmetric difference being the XOR (or addition mod 2) of the indicator functions of its two arguments: χ(A △ B) = χ(A) ⊕ χ(B), or using the Iverson bracket notation, [x ∈ A △ B] = [x ∈ A] ⊕ [x ∈ B].
The symmetric difference can also be expressed as the union of the two sets, minus their intersection: A △ B = (A ∪ B) ∖ (A ∩ B).
In particular, A △ B ⊆ A ∪ B; the equality in this non-strict inclusion occurs if and only if A and B are disjoint sets. Furthermore, denoting D = A △ B and I = A ∩ B, then D and I are always disjoint, so D and I partition A ∪ B. Consequently, assuming intersection and symmetric difference as primitive operations, the union of two sets can be well defined in terms of symmetric difference by the right-hand side of the equality A ∪ B = (A △ B) △ (A ∩ B).
The symmetric difference is commutative and associative: A △ B = B △ A, and (A △ B) △ C = A △ (B △ C).
The empty set is neutral, and every set is its own inverse: A △ ∅ = A, and A △ A = ∅.
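These identities can be checked directly with Python's built-in set type, whose ^ operator computes the symmetric difference (a brief illustration; the sample sets are chosen arbitrarily):

```python
# Symmetric difference via Python's set operator ^.
A, B, C = {1, 2, 3}, {3, 4}, {2, 4, 5}

# Elements in either set but not in their intersection.
assert A ^ B == {1, 2, 4}
assert A ^ B == (A - B) | (B - A)      # union of both relative complements
assert A ^ B == (A | B) - (A & B)      # union minus intersection

# Commutativity and associativity.
assert A ^ B == B ^ A
assert (A ^ B) ^ C == A ^ (B ^ C)

# The empty set is neutral; every set is its own inverse.
assert A ^ set() == A
assert A ^ A == set()

# Union recovered from symmetric difference and intersection.
assert A | B == (A ^ B) ^ (A & B)
```

This self-inverse behavior is exactly why the power set forms an abelian group under ^, mirroring XOR on bit vectors.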
Thus, the power set of any set X becomes an abelian group under the symmetric difference operation. (More generally, any field of sets forms a group with |
https://en.wikipedia.org/wiki/Mass%20segregation%20%28astronomy%29 | In astronomy, dynamical mass segregation is the process by which heavier members of a gravitationally bound system, such as a star cluster, tend to move toward the center, while lighter members tend to move farther away from the center.
Equipartition of kinetic energy
During a close encounter of two members of the cluster, the members exchange both energy and momentum. Although energy can be exchanged in either direction, there is a statistical tendency for the kinetic energy of the two members to equalize during an encounter; this statistical phenomenon is called equipartition, and is similar to the fact that the expected kinetic energies of the molecules of a gas are all the same at a given temperature.
Since kinetic energy is proportional to mass times the square of the speed, equipartition requires the less massive members of a cluster to be moving faster. The more massive members will thus tend to sink into lower orbits (that is, orbits closer to the center of the cluster), while the less massive members will tend to rise to higher orbits.
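The speed consequence of equipartition can be made concrete with a short check (a sketch; the two masses and the shared energy are arbitrary assumptions, not values from the article). Since E = (1/2) m v² is equal for both populations, the speed ratio is √(m_heavy / m_light):

```python
import math

def equipartition_speed(mass: float, kinetic_energy: float) -> float:
    """Speed implied by a fixed kinetic energy E = (1/2) m v^2."""
    return math.sqrt(2.0 * kinetic_energy / mass)

E = 1.0                                  # shared kinetic energy (arbitrary units)
v_light = equipartition_speed(0.5, E)    # 0.5 solar masses (assumed)
v_heavy = equipartition_speed(5.0, E)    # 5 solar masses (assumed)

# Less massive members move faster, by the factor sqrt(m_heavy / m_light).
assert math.isclose(v_light / v_heavy, math.sqrt(5.0 / 0.5))
```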
The time it takes for the kinetic energies of the cluster members to roughly equalize is called the relaxation time of the cluster. A relaxation time-scale assuming energy is exchanged through two-body interactions was approximated in the textbook by Binney & Tremaine as t_relax ≈ (N / (8 ln N)) t_cross,
where N is the number of stars in the cluster and t_cross is the typical time it takes for a star to cross the cluster. This is on the order of 100 million years for a typical globular cluster with a radius of 10 parsecs consisting of 100 thousand stars. The most massive stars in a cluster can segregate more rapidly than the less massive stars. This time-scale can be approximated using a toy model developed by Lyman Spitzer of a cluster where stars have only two possible masses (m1 and m2, with m2 > m1). In this case, the more massive stars (mass m2) will segregate in the time t_seg ≈ (m1/m2) t_relax.
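As a numerical illustration of these time-scales (a sketch: the function names, the assumed 1 Myr crossing time, and the sample stellar masses are assumptions, not values from the article), the approximations above can be evaluated in Python:

```python
import math

def relaxation_time(n_stars: float, t_cross: float) -> float:
    """Two-body relaxation time, t_relax ~ N / (8 ln N) * t_cross
    (the Binney & Tremaine approximation quoted in the text)."""
    return n_stars / (8.0 * math.log(n_stars)) * t_cross

def segregation_time(m_light: float, m_heavy: float, t_relax: float) -> float:
    """Spitzer's two-mass toy model: the heavier stars (m_heavy > m_light)
    segregate on the shorter time-scale (m_light / m_heavy) * t_relax."""
    return (m_light / m_heavy) * t_relax

# Illustrative numbers: 1e5 stars (as in the text) and an assumed
# crossing time of 1 Myr.
t_rx = relaxation_time(1e5, t_cross=1.0)   # in Myr
t_sg = segregation_time(0.5, 5.0, t_rx)    # 0.5 vs 5 solar masses (assumed)
print(f"t_relax ~ {t_rx:.0f} Myr")         # roughly a thousand crossing times
print(f"t_seg   ~ {t_sg:.0f} Myr")         # heavier stars segregate 10x faster here
```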
Outward segregation of white dwarfs was observed in the globular cluster 47 Tucanae in an HST study of the re
https://en.wikipedia.org/wiki/Potassium%20sodium%20tartrate | Potassium sodium tartrate tetrahydrate, also known as Rochelle salt, is a double salt of tartaric acid first prepared (in about 1675) by an apothecary, Pierre Seignette, of La Rochelle, France. Potassium sodium tartrate and monopotassium phosphate were the first materials discovered to exhibit piezoelectricity. This property led to its extensive use in "crystal" gramophone (phono) pick-ups, microphones and earpieces during the post-World War II consumer electronics boom of the mid-20th century. Such transducers had an exceptionally high output with typical pick-up cartridge outputs as much as 2 volts or more. Rochelle salt is deliquescent so any transducers based on the material deteriorated if stored in damp conditions.
It has been used medicinally as a laxative. It has also been used in the process of silvering mirrors. It is an ingredient of Fehling's solution (reagent for reducing sugars). It is used in electroplating, in electronics and piezoelectricity, and as a combustion accelerator in cigarette paper (similar to an oxidizer in pyrotechnics).
In organic synthesis, it is used in aqueous workups to break up emulsions, particularly for reactions in which an aluminium-based hydride reagent was used. Potassium sodium tartrate is also important in the food industry.
It is a common precipitant in protein crystallography and is also an ingredient in the Biuret reagent which is used to measure protein concentration. This ingredient maintains cupric ions in solution at an alkaline pH.
Preparation
The starting material is tartar with a minimum tartaric acid content of 68%. This is first dissolved in water or in the mother liquor of a previous batch. It is then basified with hot saturated sodium hydroxide solution to pH 8, decolorized with activated charcoal, and chemically purified before being filtered. The filtrate is evaporated to 42 °Bé at 100 °C, and passed to granulators in which Seignette's salt crystallizes on slow cooling. The salt is separated from the mo
https://en.wikipedia.org/wiki/Arcus%20II%3A%20Silent%20Symphony | Arcus II: Silent Symphony is a computer game developed and released in Japan by Wolf Team. Narumi Kakinouchi, co-creator of Vampire Princess Miyu, was the art director for this game. The music for the game was composed by Masaaki Uno, Motoi Sakuraba, and Yasunori Shiono.
See also
Arcus Odyssey |