text | source |
|---|---|
In native methods, the letters of the language are mapped to the numeral keys and displayed on the screen according to the probabilities of those letters in that language. Additional letters can be accessed by using a special key. When a word is partially typed, options are presented from which the user can make a selection. | https://en.wikipedia.org/wiki/Indic_computing |
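The keypad layout and word-completion ideas above can be sketched as follows; the letter ranking and word list are hypothetical, not taken from any real input method.

```python
# Illustrative sketch: place the most probable letters on the first keys and
# suggest completions for a partially typed word. Data below is hypothetical.
FREQ_ORDER = ["a", "n", "i", "t"]        # letters, most frequent first (made up)
LEXICON = ["nadi", "nagar", "naman"]     # hypothetical word list

def key_layout(letters, per_key=2):
    """Group letters onto numeral keys, most frequent letters on the first keys."""
    return [letters[i:i + per_key] for i in range(0, len(letters), per_key)]

def suggest(prefix, lexicon):
    """Present the options that match a partially typed word."""
    return [w for w in lexicon if w.startswith(prefix)]

print(key_layout(FREQ_ORDER))   # [['a', 'n'], ['i', 't']]
print(suggest("nag", LEXICON))  # ['nagar']
```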
In native vocabulary, the fricatives [θ] and [ð] are allophones of a single phoneme /θ/. [θ] is used morpheme-initially, as in þak ('roof'), and before a voiceless consonant, as in maðkur ('worm'). [ð] is used intervocalically, as in iða ('vortex'), and word-finally, as in bað ('bath'), although it is devoiced to [θ] before a pause. Some loanwords (mostly from Classical Greek) have introduced the phone [θ] in intervocalic environments, as in Aþena ('Athens'). The phone [θ] is actually a laminal voiceless alveolar non-sibilant fricative. The corresponding voiced phone [ð] is similar, but is apical rather than laminal (Ladefoged & Maddieson 1996). | https://en.wikipedia.org/wiki/Icelandic_phonology |
In natura (Latin for "in nature") is a phrase used to describe conditions present in a non-laboratory environment, to differentiate them from in vivo (experiments on live organisms in a laboratory) and ex vivo (experiments on cultivated cells isolated from multicellular organisms) conditions. | https://en.wikipedia.org/wiki/In_natura |
In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. In addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias. Similarly, a protocol may refer to the procedural methods of health organizations, commercial laboratories, manufacturing plants, etc., to ensure their activities (e.g., blood testing at a hospital, testing of certified reference materials at a calibration laboratory, and manufacturing of transmission gears at a facility) are consistent with a specific standard, encouraging safe use and accurate results. Finally, in the field of social science, a protocol may also refer to a "descriptive record" of observed events or a "sequence of behavior" of one or more organisms, recorded during or immediately after an activity (e.g., how an infant reacts to certain stimuli or how gorillas behave in their natural habitat) to better identify "consistent patterns and cause-effect relationships." These protocols may take the form of hand-written journals or electronically documented media, including video and audio capture. | https://en.wikipedia.org/wiki/Clinical_trial_protocol |
In natural bidding systems most notrump (NT) bids are made with balanced hands and within a narrowly defined high card point (HCP) range. In these systems, such as Acol and Standard American, NT bids are limit bids and therefore are not forcing. Bearing in mind the need to bid only to the optimum contract and no higher, bids above game are made only in specific circumstances, one of which is to alert partner to the fact that a slam may be possible, inviting partner to take part in the decision-making process. Before looking at the detail, it is necessary to understand that bridge theory and practice suggest that the HCP method of hand evaluation, together with common sense concerning balance and cover in all suits, is the best for deciding the level of NT contracts, thus: 25+ HCP is sufficient for game in 3NT; 33+ HCP should yield 12 tricks; 37+ HCP will probably produce a grand slam. Assuming a weak NT bidding system, for example Acol, this is how quantitative bids work: an opening bid of 1NT shows 12, 13 or 14 HCP. | https://en.wikipedia.org/wiki/Quantitative_notrump_bids |
If responder has 21 HCP, then a small slam looks certain (21 + 12 opener's minimum = 33) and should be bid. If responder has 18 HCP or less, then even a small slam is not possible (18 + 14 opener's maximum = no more than 32). If responder has 19 or 20 HCP, then a small slam is a possibility but more information is needed about opener's hand before it should be bid. This is where a quantitative bid should be made. | https://en.wikipedia.org/wiki/Quantitative_notrump_bids |
A bid of 4NT "invites" opener to: bid 6NT with a maximum holding of 14 HCP (19 + 14 = 33, which is sufficient); pass with a minimum 12 HCP (20 + 12 = only 32); or, with partnership agreement, bid 5NT holding 13 HCP, asking partner to bid 6NT with 20 HCP and to pass holding 19 HCP. An opening bid of 2NT shows 20, 21 or 22 HCP. If responder has 13 HCP, then a small slam looks certain (13 + 20 opener's minimum = 33) and should be bid. If responder has 11 or 12 HCP, then a small slam is a possibility but more information is needed about opener's hand before it should be bid. | https://en.wikipedia.org/wiki/Quantitative_notrump_bids |
This is where a quantitative bid should be made. A bid of 4NT "invites" opener to: bid 6NT with a maximum holding of 22 HCP (11 + 22 = 33, which is sufficient); pass with a minimum 20 HCP (11 + 20 = only 31); or, with partnership agreement, bid 5NT holding 21 HCP, asking partner to bid 6NT with 12 HCP and to pass holding 11 HCP. When responder is even stronger and is considering whether a small or grand slam is better (and only these two options), then the initiating bid is 5NT (not 4NT). Similar bids can be made using a strong notrump bidding system, for example Standard American, by adjusting the HCP count accordingly. | https://en.wikipedia.org/wiki/Quantitative_notrump_bids |
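The HCP arithmetic above can be sketched as a small decision helper; the function simply restates the 33-HCP small-slam target opposite a weak (12-14 HCP) 1NT opening.

```python
def slam_decision(responder_hcp, nt_min=12, nt_max=14, target=33):
    """Classify small-slam prospects opposite a limited NT opening.

    Defaults model a weak (12-14 HCP) 1NT; the 33-HCP target is the
    combined strength that should yield 12 tricks.
    """
    worst = responder_hcp + nt_min   # combined HCP if opener is minimum
    best = responder_hcp + nt_max    # combined HCP if opener is maximum
    if worst >= target:
        return "bid 6NT"             # slam certain even opposite a minimum
    if best < target:
        return "stop in game"        # slam impossible even opposite a maximum
    return "invite with 4NT"         # quantitative: ask opener to judge

print(slam_decision(21))  # bid 6NT          (21 + 12 = 33)
print(slam_decision(18))  # stop in game     (18 + 14 = 32)
print(slam_decision(19))  # invite with 4NT
```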
In natural conditions, free radicals are characterised by an extremely short lifespan, so in order to capture the EPR signal, an external molecule with a stable free radical must be delivered, usually by injection into the animal's body. There are two main classes of spin probes used for imaging: nitroxide and triarylmethyl (TAM, trityl) radicals. Nitroxide radicals are sensitive to oxygen concentration, pH, thiol concentrations, viscosity and polarity. | https://en.wikipedia.org/wiki/Electron_resonance_imaging |
The issue with this type of spin probe is its fast reduction, which sometimes leads to loss of the EPR signal. Triarylmethyl radicals are characterised by a far longer lifespan and an increased stability towards reducing and oxidising biological agents. | https://en.wikipedia.org/wiki/Electron_resonance_imaging |
They are well suited to measuring oxygen concentration, pH, thiol concentrations, inorganic phosphate and redox status. Although the aforementioned spin probes are the most popular choice, many others can be used in ERI. One example is melanin, a polymeric pigment that contains a mixture of eumelanin and pheomelanin. This is the only substance that occurs in natural conditions and allows for the registration of the EPR signal without the need to deliver extraneous spin probes. | https://en.wikipedia.org/wiki/Electron_resonance_imaging |
In natural deduction, judgments have the shape $A_1, A_2, \ldots, A_n \vdash B$, where the $A_i$'s and $B$ are again formulae and $n \geq 0$. Permutations of the $A_i$'s are immaterial. In other words, a judgment consists of a list (possibly empty) of formulae on the left-hand side of a turnstile symbol "$\vdash$", with a single formula on the right-hand side. | https://en.wikipedia.org/wiki/Sequent_calculus |
The theorems are those formulae $B$ such that $\vdash B$ (with an empty left-hand side) is the conclusion of a valid proof. (In some presentations of natural deduction, the $A_i$'s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.) The standard semantics of a judgment in natural deduction is that it asserts that whenever $A_1$, $A_2$, etc., are all true, $B$ will also be true. The judgments $A_1, \ldots, A_n \vdash B$ and $\vdash (A_1 \land \cdots \land A_n) \rightarrow B$ are equivalent in the strong sense that a proof of either one may be extended to a proof of the other. | https://en.wikipedia.org/wiki/Sequent_calculus |
In natural disasters and other emergencies, the portability of bucket latrines can make them a useful part of an appropriate emergency response, especially where pit latrines cannot be isolated from floodwater or groundwater (potentially leading to groundwater pollution) and where the contents can be safely disposed into sanitary systems, taking measures to avoid contact with the contents. Different organizations give advice on how to build bucket toilets in case of emergency. The Twin Bucket Emergency Toilet system (a two-bucket system), for example, was developed in Christchurch, New Zealand, following the infrastructure-destroying earthquake there in 2011. The system has been endorsed by the Portland Bureau of Emergency Management. It is promoted by the volunteer advocacy group PHLUSH (Public Hygiene Lets Us Stay Human) for reasons of safety, affordability, and matching ecological sanitation principles. | https://en.wikipedia.org/wiki/Bucket_toilet |
In natural ecosystems, populations naturally expand until they reach the carrying capacity of the environment; if the resources on which they depend are exhausted, they naturally collapse. According to the animal rights movement, calling this 'overpopulation' is more an ethical question than a scientific fact. Animal rights organisations are commonly critical of ecological systems and wildlife management. Animal rights activists and locals earning income from commercial hunts counter that scientists are outsiders who do not know wildlife issues, and that any slaughter of animals is evil. Various case studies indicate that the use of cattle as 'natural grazers' in many European nature parks, owing to the absence of hunting, culling or natural predators (such as wolves), may cause overpopulation because the cattle do not migrate. | https://en.wikipedia.org/wiki/Animal_overpopulation |
This has the effect of reducing plant biodiversity, as the cattle consume native plants. Because such cattle populations begin to starve and die in the winter as available forage drops, animal rights activists have advocated supplemental feeding, which has the effect of exacerbating the ecological effects, causing nitrification and eutrophication due to excess faeces, deforestation as trees are destroyed, and biodiversity loss. Despite the ecological effects of overpopulation, wildlife managers may want such high populations in order to satisfy public enjoyment of seeing wild animals. Others contend that introducing large predators such as lynx and wolves may have similar economic benefits, even if tourists rarely actually catch glimpses of such creatures. In regard to population size, most of the methods used give estimates that vary in accuracy relative to the actual size and density of the population. Criticisms of these methods generally concern their efficacy. | https://en.wikipedia.org/wiki/Animal_overpopulation |
In natural ecosystems, the greatest utilization of carbon is the uptake of carbon in photosynthesis, and the second greatest is the release of carbon in cellular respiration. Minute changes to these two fluxes can have a large effect on the carbon dioxide in the atmosphere. These two processes have a significant effect on the atmospheric carbon dioxide concentration, making their correct functioning essential to sustaining life. Without carbon dioxide, plants would not be able to carry out photosynthesis, in turn not producing oxygen, affecting all forms of life on Earth. | https://en.wikipedia.org/wiki/Ecosystem_respiration |
Without the presence of ecosystem respiration throughout Earth's systems, it is safe to say the basic idea of "life" would be lost. Prior to these processes, in Earth's early years of formation, the air and oceans were anoxic. An anoxic environment is one without the presence of oxygen, consisting mainly of anaerobic microbes. | https://en.wikipedia.org/wiki/Ecosystem_respiration |
The evolution of oxygenic photosynthesis in the atmosphere amplified the productivity of the biosphere, increasing biodiversity. With the presence of photosynthesis providing oxygen to the atmosphere, respiration soon evolved to provide the necessary components photosynthesis demanded to function. This coevolution of photosynthesis and respiration processes has led us to the biodiverse and fruitful ecosystems we know today. | https://en.wikipedia.org/wiki/Ecosystem_respiration |
In the natural environment, crystal lattices of quartz and/or feldspar are bombarded with radiation released from a radiogenic source, such as in-situ radioactive decay. As the crystals are irradiated, charges are stored in their crystallographic defects. The charge-trapping process involves atomic-scale ionic substitution of both electrons and holes within the crystal lattices of quartz and feldspar. | https://en.wikipedia.org/wiki/Optically_stimulated_luminescence_thermochronometry |
The electron diffusion happens in response to ionizing radiation as the mineral cools below its closure temperature. If quartz or feldspar grains are exposed to a natural light source such as the sun, the trapped charges will be evicted in the form of luminescence. This natural process is called bleaching. Any other process that heats up the sample will also cause the trapped electrons to escape from the crystal lattice, a process known as thermal bleaching. Optical bleaching of the mineral leads to the eviction of trapped charges, hence careful sampling and handling must be followed to avoid using a bleached sample for OSL thermochronometry. To artificially produce luminescence in the laboratory for luminescence study of the mineral, these two processes are adopted. | https://en.wikipedia.org/wiki/Optically_stimulated_luminescence_thermochronometry |
In natural evolution and artificial evolution (e.g. artificial life and evolutionary computation) the fitness (or performance, or objective measure) of a schema is rescaled to give its effective fitness, which takes into account crossover and mutation. Effective fitness is used in evolutionary computation to understand population dynamics. While a biological fitness function only looks at reproductive success, an effective fitness function tries to encompass factors that must be fulfilled for survival at the population level. In homogeneous populations, reproductive fitness and effective fitness are equal. | https://en.wikipedia.org/wiki/Effective_fitness |
When a population moves away from homogeneity a higher effective fitness is reached for the recessive genotype. This advantage will decrease while the population moves toward an equilibrium. The deviation from this equilibrium displays how close the population is to achieving a steady state. | https://en.wikipedia.org/wiki/Effective_fitness |
When this equilibrium is reached, the maximum effective fitness of the population is achieved. Problem solving with evolutionary computation is realized with a cost function. If cost functions are applied to swarm optimization, they are called fitness functions. Strategies like reinforcement learning and NEAT neuroevolution create a fitness landscape which describes the reproductive success of cellular automata. The effective fitness function models the number of fit offspring and is used in calculations that include evolutionary processes, such as mutation and crossover, that are important at the population level. The effective fitness model is superior to its predecessor, the standard reproductive fitness model. | https://en.wikipedia.org/wiki/Effective_fitness |
It advances the qualitative and quantitative understanding of evolutionary concepts like bloat, self-adaptation, and evolutionary robustness. While reproductive fitness only looks at pure selection, effective fitness describes the flow of a population and natural selection by taking genetic operators into account. A normal fitness function fits a problem, while an effective fitness function is an assumption about whether the objective was reached. The difference is important for designing fitness functions with algorithms like novelty search, in which the objective of the agents is unknown. In the case of bacteria, effective fitness could include production of toxins and the rate of mutation of different plasmids, which are mostly stochastically determined. | https://en.wikipedia.org/wiki/Effective_fitness |
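One schema-theorem-style formulation of this idea (a sketch only; the parameter values below are illustrative and other definitions of effective fitness exist) discounts raw fitness by the expected disruption a schema suffers from crossover and mutation:

```python
def effective_fitness(raw_fitness, p_crossover, p_mutation,
                      defining_length, order, chrom_length):
    """Sketch: raw fitness discounted by expected disruption of a schema.

    Crossover disrupts a schema roughly in proportion to its defining length;
    mutation disrupts it roughly in proportion to its order (number of fixed
    positions). All parameters here are illustrative.
    """
    disruption = (p_crossover * defining_length / (chrom_length - 1)
                  + order * p_mutation)
    return raw_fitness * (1.0 - disruption)

# A schema of order 2 and defining length 3 on an 11-bit chromosome:
print(effective_fitness(raw_fitness=1.0, p_crossover=0.7, p_mutation=0.01,
                        defining_length=3, order=2, chrom_length=11))
```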
In natural gas furnaces, water heaters, and room heating systems, a safety cut-off switch is normally included so that the gas supply to the pilot and heating system is shut off by an electrically operated valve if the pilot light goes out. This cut-off switch usually detects the pilot light in one of several ways. One is a flame rectification device. Another is a sensor filled with mercury used to detect the heat of the pilot light: contraction of the mercury results in sufficient pressure to operate an electrical switch that interrupts the flow of electricity and shuts off the gas valve when the pilot light goes out. | https://en.wikipedia.org/wiki/Pilot_lamp |
A photoresistor is used to detect the light from the pilot lamp. When the pilot light goes out, electrical circuitry connected to the photoresistor shuts off the gas valve. Use of a pilot generator or a thermocouple in the flame provides heating appliance safety as it generates enough electric current from the burning flame to hold the gas valve open. | https://en.wikipedia.org/wiki/Pilot_lamp |
If the pilot light goes out, the pilot generator cools off and the current stops, closing the gas valve. Other units use a non-electrical approach, where the pilot heats a bimetallic element or a gas-filled tube to exert mechanical pressure to keep the gas valve open. If the pilot fails, the valve closes. | https://en.wikipedia.org/wiki/Pilot_lamp |
To restart the system, the valve must be held open manually and the pilot lit, and then the valve must be held open until the element heats up enough to hold the valve open. Non-electrical schemes are appropriate for systems that do not use electricity. The above methods are examples of the use of "fail-safe" safety protection. | https://en.wikipedia.org/wiki/Pilot_lamp |
In natural gas pricing, the Canadian definition is that 1,000,000 Btu ≡ 1.054615 GJ. The energy content (high or low heating value) of a volume of natural gas varies with the composition of the natural gas, which means there is no universal conversion factor for energy to volume. 1 cubic foot (28 litres) of average natural gas yields ≈ 1,030 Btu (between 1,010 Btu and 1,070 Btu, depending on quality, when burned). As a coarse approximation, 1,000 cubic feet (28 m3) of natural gas yields ≈ 1,000,000 Btu ≈ 1 GJ. For natural gas price conversion, 1,000 m3 ≈ 36.9 million Btu and 1,000,000 Btu ≈ 27.1 m3. | https://en.wikipedia.org/wiki/British_thermal_unit |
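The quoted figures can be wrapped into small conversion helpers; all values are approximate, since the energy content of gas varies with its composition.

```python
# Conversion constants taken from the figures quoted above (approximate).
GJ_PER_MMBTU = 1.054615      # Canadian pricing definition: 1,000,000 Btu = 1.054615 GJ
BTU_PER_CUBIC_FOOT = 1030    # average-quality gas (1,010-1,070 depending on quality)
MMBTU_PER_1000_M3 = 36.9     # price-conversion rule of thumb

def mmbtu_to_gj(mmbtu):
    """Convert millions of Btu to gigajoules."""
    return mmbtu * GJ_PER_MMBTU

def cubic_feet_to_btu(cubic_feet):
    """Approximate energy yield of a volume of average natural gas."""
    return cubic_feet * BTU_PER_CUBIC_FOOT

def m3_to_mmbtu(m3):
    """Approximate price-conversion from cubic metres to millions of Btu."""
    return m3 * MMBTU_PER_1000_M3 / 1000

print(mmbtu_to_gj(1))            # 1.054615
print(cubic_feet_to_btu(1000))   # 1030000 Btu, i.e. roughly 1 MMBtu ≈ 1 GJ
print(m3_to_mmbtu(27.1))         # close to 1 MMBtu, matching 1 MMBtu ≈ 27.1 m3
```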
In natural habitats, the species has an omnivorous diet composed of plants, algae and various prey including small fish, crustaceans, insects and worms. The fish can protrude its jaw 4.2% of its standard length, allowing it to have a varied diet. Inferior social status and associated stress can affect digestive function in convict cichlids. | https://en.wikipedia.org/wiki/Convict_cichlid |
In natural language applications, libraries are a way to cope with thousands of details involved in syntax, lexicon, and inflection. The GF Resource Grammar Library is the standard library for Grammatical Framework. It covers the morphology and basic syntax for an increasing number of languages, currently including Afrikaans, Amharic (partial), Arabic (partial), Basque (partial), Bulgarian, Catalan, Chinese, Czech (partial), Danish, Dutch, English, Estonian, Finnish, French, German, Greek ancient (partial), Greek modern, Hebrew (fragments), Hindi, Hungarian (partial), Interlingua, Italian, Japanese, Korean (partial), Latin (partial), Latvian, Maltese, Mongolian, Nepali, Norwegian bokmål, Norwegian nynorsk, Persian, Polish, Punjabi, Romanian, Russian, Sindhi, Slovak (partial), Slovene (partial), Somali (partial), Spanish, Swahili (fragments), Swedish, Thai, Turkish (fragments), and Urdu. In addition, 14 languages have WordNet lexicon and large-scale parsing extensions. Full API documentation of the library can be found at the RGL Synopsis page. The RGL status document gives the languages currently available in the GF Resource Grammar Library, including their maturity. | https://en.wikipedia.org/wiki/Grammatical_Framework |
In natural language processing (NLP), a text graph is a graph representation of a text item (document, passage or sentence). It is typically created as a preprocessing step to support NLP tasks such as text condensation, term disambiguation, (topic-based) text summarization, relation extraction and textual entailment. | https://en.wikipedia.org/wiki/Text_graph |
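One simple way to build a text graph, sketched here under the assumption of a co-occurrence-based construction (the passage above does not prescribe a particular construction), is to connect tokens that appear within the same sliding window:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_graph(tokens, window=2):
    """Undirected text graph: nodes are tokens, edge weights count how often
    two distinct tokens appear within the same sliding window."""
    edges = defaultdict(int)
    for i in range(len(tokens) - window + 1):
        # each unordered pair of distinct tokens in the window forms an edge
        for a, b in combinations(sorted(set(tokens[i:i + window])), 2):
            edges[(a, b)] += 1
    return dict(edges)

print(cooccurrence_graph("the cat sat on the mat".split()))
```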
In natural language processing (NLP), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers. Methods to generate this mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, explainable knowledge base method, and explicit representation in terms of the context in which words appear. Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing and sentiment analysis. | https://en.wikipedia.org/wiki/Word_embedding |
In natural language processing a w-shingling is a set of unique shingles (therefore n-grams), each of which is composed of contiguous subsequences of tokens within a document, which can then be used to ascertain the similarity between documents. The symbol w denotes the number of tokens in each shingle selected, or solved for. The document "a rose is a rose is a rose" can therefore be maximally tokenized as follows: (a, rose, is, a, rose, is, a, rose). The set of all contiguous sequences of 4 tokens (thus w = 4, i.e. 4-grams) is { (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is), (a,rose,is,a), (rose,is,a,rose) }, which can then be reduced, or maximally shingled in this particular instance, to { (a,rose,is,a), (rose,is,a,rose), (is,a,rose,is) }. | https://en.wikipedia.org/wiki/W-shingling |
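The shingling described above can be sketched directly; using a set automatically performs the reduction to unique shingles, and document similarity can then be taken as the Jaccard index of two shingle sets.

```python
def shingles(tokens, w):
    """The set of unique contiguous w-token shingles in a tokenized document."""
    return {tuple(tokens[i:i + w]) for i in range(len(tokens) - w + 1)}

def resemblance(doc_a, doc_b, w):
    """Jaccard similarity between two documents' shingle sets."""
    sa, sb = shingles(doc_a, w), shingles(doc_b, w)
    return len(sa & sb) / len(sa | sb)

doc = "a rose is a rose is a rose".split()
print(shingles(doc, 4))
# the three unique 4-shingles from the worked example above
```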
In natural language processing and information retrieval, explicit semantic analysis (ESA) is a vectoral representation of text (individual words or entire documents) that uses a document corpus as a knowledge base. Specifically, in ESA, a word is represented as a column vector in the tf–idf matrix of the text corpus and a document (string of words) is represented as the centroid of the vectors representing its words. Typically, the text corpus is English Wikipedia, though other corpora including the Open Directory Project have been used. ESA was designed by Evgeniy Gabrilovich and Shaul Markovitch as a means of improving text categorization and has been used by this pair of researchers to compute what they refer to as "semantic relatedness" by means of cosine similarity between the aforementioned vectors, collectively interpreted as a space of "concepts explicitly defined and described by humans", where Wikipedia articles (or ODP entries, or otherwise titles of documents in the knowledge base corpus) are equated with concepts. The name "explicit semantic analysis" contrasts with latent semantic analysis (LSA), because the use of a knowledge base makes it possible to assign human-readable labels to the concepts that make up the vector space. | https://en.wikipedia.org/wiki/Explicit_semantic_analysis |
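A toy sketch of the ESA representation follows; the tf-idf numbers and "concepts" are made up for illustration, whereas a real system would derive them from a corpus such as Wikipedia.

```python
import numpy as np

# Toy knowledge base: a tf-idf matrix with one row per "concept" (in real ESA,
# a Wikipedia article) and one column per word. All numbers are made up.
vocab = {"cat": 0, "dog": 1, "pet": 2}
tfidf = np.array([[0.9, 0.0, 0.4],    # concept "Cat"
                  [0.0, 0.8, 0.5],    # concept "Dog"
                  [0.1, 0.1, 0.9]])   # concept "Pet"

def esa_vector(words):
    """A document is the centroid of the concept vectors of its words."""
    cols = [tfidf[:, vocab[w]] for w in words if w in vocab]
    return np.mean(cols, axis=0)

def relatedness(doc_a, doc_b):
    """Semantic relatedness as cosine similarity in concept space."""
    a, b = esa_vector(doc_a), esa_vector(doc_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(relatedness(["cat", "pet"], ["dog", "pet"]))  # high but below 1.0
```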
In natural language processing, Brown clustering or IBM clustering is a form of hierarchical clustering of words based on the contexts in which they occur, proposed by Peter Brown, William A. Brown, Vincent Della Pietra, Peter de Souza, Jennifer Lai, and Robert Mercer of IBM in the context of language modeling. The intuition behind the method is that a class-based language model (also called a cluster n-gram model), i.e. one where probabilities of words are based on the classes (clusters) of previous words, is used to address the data sparsity problem inherent in language modeling. The method has been successfully used to improve parsing, domain adaptation, and named-entity recognition. Jurafsky and Martin give the example of a flight reservation system that needs to estimate the likelihood of the bigram "to Shanghai", without having seen this in a training set. The system can obtain a good estimate if it can cluster "Shanghai" with other city names, then make its estimate based on the likelihood of phrases such as "to London", "to Beijing" and "to Denver". | https://en.wikipedia.org/wiki/Brown_clustering |
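The class-based estimate behind this example can be sketched as follows. The clusters and probabilities here are hypothetical; real Brown clustering learns the clusters from data by greedy mutual-information merges.

```python
# Toy class-based bigram estimate in the spirit of a cluster n-gram model:
#   P(w_i | w_{i-1}) ≈ P(class(w_i) | class(w_{i-1})) * P(w_i | class(w_i))
# All clusters and probabilities below are hypothetical.
cluster = {"to": "FUNC", "London": "CITY", "Beijing": "CITY", "Shanghai": "CITY"}
p_class_bigram = {("FUNC", "CITY"): 0.3}               # P(CITY | FUNC)
p_word_given_class = {"London": 0.2, "Shanghai": 0.1}  # P(word | CITY)

def class_bigram_prob(prev, word):
    """Estimate P(word | prev) through the words' classes."""
    cc = p_class_bigram.get((cluster[prev], cluster[word]), 0.0)
    return cc * p_word_given_class.get(word, 0.0)

# "to Shanghai" gets a nonzero estimate even if that exact bigram was never
# seen, because "Shanghai" shares a class with "London" and "Beijing".
print(class_bigram_prob("to", "Shanghai"))   # 0.3 * 0.1
```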
In natural language processing, latent Dirichlet allocation (LDA) is a Bayesian network (and, therefore, a generative statistical model) that explains a set of observations through unobserved groups, where each group explains why some parts of the data are similar. LDA is an example of a Bayesian topic model: observations (e.g., words) are collected into documents, and each word's presence is attributable to one of the document's topics. Each document will contain a small number of topics. | https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation |
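A toy sketch of LDA's generative story follows; the topic distributions and hyperparameters are illustrative inputs, not values learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each document draws a topic mixture from a Dirichlet prior; each word first
# draws a topic from that mixture, then a word from that topic's distribution.
vocab = ["gene", "dna", "ball", "goal"]
topics = np.array([[0.5, 0.5, 0.0, 0.0],    # a "biology" topic
                   [0.0, 0.0, 0.5, 0.5]])   # a "sports" topic
alpha = np.array([0.5, 0.5])                # Dirichlet hyperparameter

def generate_document(n_words):
    theta = rng.dirichlet(alpha)                  # per-document topic mixture
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)      # topic assignment for this word
        words.append(str(rng.choice(vocab, p=topics[z])))
    return words

print(generate_document(6))
```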
In natural language processing, a corpus is a set of sentences or texts, and a language model is a probability distribution over entire sentences or texts. Consequently, in NLP, the more commonly used measure is perplexity per word, defined as $2^{-\frac{1}{N}\sum_{i=1}^{n}\log_2 p(s_i)}$, where $s_1, \ldots, s_n$ are the $n$ sentences in the corpus and $N$ is the number of words in the corpus. This normalizes the perplexity by the length of the text, allowing for more meaningful comparisons between different texts or models. Suppose the average sentence $s_i$ in the corpus has a probability of $2^{-190}$ according to the language model. | https://en.wikipedia.org/wiki/Perplexity |
This would give a model perplexity of $2^{190}$ per sentence. However, it is more common to normalize for sentence length. Thus, if the test sample's sentences comprised a total of 1,000 words and could be coded using 7.95 bits per word, one could report a model perplexity of $2^{7.95} = 247$ per word. In other words, the model is as confused on the test data as if it had to choose uniformly and independently among 247 possibilities for each word. | https://en.wikipedia.org/wiki/Perplexity |
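The per-word normalization works out as follows; the sketch assumes the base-2 log probabilities of whole sentences are given.

```python
def perplexity_per_word(sentence_log2_probs, total_words):
    """Perplexity per word from base-2 log probabilities of whole sentences.

    Equivalent to 2 raised to the average number of bits needed per word.
    """
    bits_per_word = -sum(sentence_log2_probs) / total_words
    return 2 ** bits_per_word

# A single sentence with probability 2^-190, not normalized by length,
# gives a perplexity of 2^190 per sentence:
print(perplexity_per_word([-190.0], total_words=1))
# 1,000 words coded at 7.95 bits per word give about 247 per word:
print(round(perplexity_per_word([-7950.0], total_words=1000)))   # 247
```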
In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". Depending on whether the output contradicts the prompt or not, hallucinations can be divided into closed-domain and open-domain, respectively. Hallucination was shown to be a statistically inevitable byproduct of any imperfect generative model that is trained to maximize training likelihood, such as GPT-3, and requires active learning (such as reinforcement learning from human feedback) to be avoided. Errors in encoding and decoding between text and representations can cause hallucinations. AI training to produce diverse responses can also lead to hallucination. | https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) |
Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the labeled data purportedly being "summarized". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on a sequence of previous words (including the words it has itself previously generated during the same conversation), causing a cascade of possible hallucination as the response grows longer. | https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) |
By 2022, outlets such as The New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems. In August 2022, Meta warned during its release of BlenderBot 3 that the system was prone to "hallucinations", which Meta defined as "confident statements that are not true". On 15 November 2022, Meta unveiled a demo of Galactica, designed to "store, combine and reason about scientific knowledge". | https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) |
Content generated by Galactica came with the warning "Outputs may be unreliable! Language Models are prone to hallucinate text." In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. | https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) |
Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy. There are several reasons for natural language models to hallucinate data. For example: hallucination from data, where there are divergences in the source content (which often happens with large training data sets); and hallucination from training, where hallucination occurs even with little divergence in the data set and instead derives from the way the model is trained. Many factors can contribute to this latter type of hallucination, such as an erroneous decoding from the transformer, a bias from the historical sequences that the model previously generated, and a bias generated from the way the model encodes its knowledge in its parameters. | https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) |
In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary. The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important. For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'. | https://en.wikipedia.org/wiki/One-hot_encoding |
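A minimal sketch of the encoding described above:

```python
def one_hot(word, vocabulary):
    """1 x N vector: all zeros except a single 1 at the word's own index."""
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(word)] = 1
    return vec

vocabulary = ["laugh", "laughter", "rose"]
print(one_hot("laughter", vocabulary))   # [0, 1, 0]
```

Because every vector has exactly one 1 and is equidistant from every other, no word is numerically "bigger" or more important than any other.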
In natural language processing, a sentence embedding is a numeric representation of a sentence in the form of a vector of real numbers which encodes meaningful semantic information. State-of-the-art embeddings are based on the learned hidden-layer representation of dedicated sentence transformer models. BERT pioneered an approach involving the use of a dedicated token prepended to the beginning of each sentence inputted into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice, however, BERT's sentence embedding with this token achieves poor performance, often worse than simply averaging non-contextual word embeddings. SBERT later achieved superior sentence embedding performance by fine-tuning BERT's token embeddings through the use of a siamese neural network architecture on the SNLI dataset. | https://en.wikipedia.org/wiki/Sentence_embedding |
Other approaches are loosely based on the idea of distributional semantics applied to sentences. Skip-Thought trains an encoder-decoder structure for the task of predicting neighboring sentences, though this has been shown to achieve worse performance than approaches such as InferSent or SBERT. | https://en.wikipedia.org/wiki/Sentence_embedding |
An alternative direction is to aggregate word embeddings, such as those returned by Word2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW). However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE), which demonstrated performance improvements in downstream text classification tasks. | https://en.wikipedia.org/wiki/Sentence_embedding |
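The simple averaging approach (CBOW-style) mentioned above can be sketched as follows. The toy 3-dimensional word vectors are invented for illustration; a real system would use vectors learned by Word2vec or a similar model:

```python
# Sketch of a CBOW-style sentence embedding: average the word vectors
# of the sentence. The tiny 3-d vectors below are invented.

def average_embedding(sentence, word_vectors):
    """Average the vectors of the known words in `sentence`."""
    vecs = [word_vectors[w] for w in sentence.split() if w in word_vectors]
    dim = len(next(iter(word_vectors.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

word_vectors = {
    "the": [0.1, 0.0, 0.2],
    "cat": [0.9, 0.3, 0.1],
    "sat": [0.2, 0.8, 0.4],
}
print(average_embedding("the cat sat", word_vectors))
```

More elaborate schemes such as VLAWE replace the plain mean with quantization-based aggregation, but the interface (words in, one fixed-size vector out) is the same.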
In natural language processing, dependency-based parsing can be formulated as an ASP problem. The following code parses the Latin sentence "Puella pulchra in villa linguam latinam discit", "the pretty girl is learning Latin in the villa". The syntax tree is expressed by the arc predicates which represent the dependencies between the words of the sentence. The computed structure is a linearly ordered rooted tree. | https://en.wikipedia.org/wiki/Answer-set_programming |
In natural language processing, deterministic parsing refers to parsing algorithms that do not backtrack. LR-parsers are an example. (This meaning of the words "deterministic" and "non-deterministic" differs from that used to describe nondeterministic algorithms.) | https://en.wikipedia.org/wiki/Deterministic_parsing |
The deterministic behavior is desired and expected in compiling programming languages. In natural language processing, it was thought for a long time that deterministic parsing is impossible due to ambiguity inherent in natural languages (many sentences have more than one plausible parse). Thus, non-deterministic approaches such as the chart parser had to be applied. However, Mitch Marcus proposed in 1978 the Parsifal parser that was able to deal with ambiguities while still keeping the deterministic behavior. | https://en.wikipedia.org/wiki/Deterministic_parsing |
In natural language processing, entity linking, also referred to as named-entity linking (NEL), named-entity disambiguation (NED), named-entity recognition and disambiguation (NERD) or named-entity normalization (NEN) is the task of assigning a unique identity to entities (such as famous individuals, locations, or companies) mentioned in text. For example, given the sentence "Paris is the capital of France", the idea is to determine that "Paris" refers to the city of Paris and not to Paris Hilton or any other entity that could be referred to as "Paris". Entity linking is different from named-entity recognition (NER) in that NER identifies the occurrence of a named entity in text but it does not identify which specific entity it is (see Differences from other techniques). | https://en.wikipedia.org/wiki/Entity_Linking |
In natural language processing, it is often necessary to compare tree structures (e.g. parse trees) for similarity. Such comparisons can be performed by computing dot products of vectors of features of the trees, but these vectors tend to be very large: NLP techniques have come to a point where a simple dependency relation over two words is encoded with a vector of several million features. It can be impractical to represent complex structures such as trees with feature vectors. Well-designed kernels allow computing similarity over trees without explicitly computing the feature vectors of these trees. Moreover, kernel methods have been widely used in machine learning tasks (e.g. SVM), and thus many algorithms work natively with kernels, or have an extension that handles kernelization. An example application is classification of sentences, such as different types of questions. | https://en.wikipedia.org/wiki/Tree_kernel |
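As a hedged illustration of the idea of computing tree similarity without materializing feature vectors, here is a naive kernel that counts pairs of identical subtrees. Practical tree kernels use dynamic programming for efficiency; the tuple encoding of parse trees below is our own convention:

```python
# Naive tree kernel: count pairs of identical subtrees between two
# trees. Trees are (label, child, child, ...) tuples; leaves are
# single-element tuples. Encoding and names are illustrative.

def subtrees(tree):
    """Yield every subtree of a (label, child, ...) tuple tree."""
    yield tree
    for child in tree[1:]:
        yield from subtrees(child)

def tree_kernel(t1, t2):
    """Number of pairs of identical subtrees shared by t1 and t2."""
    return sum(1 for a in subtrees(t1) for b in subtrees(t2) if a == b)

# Two toy parse trees sharing the noun phrase "the dog"
t_np = ("NP", ("D", ("the",)), ("N", ("dog",)))
t1 = ("S", t_np, ("VP", ("V", ("barks",))))
t2 = ("S", t_np, ("VP", ("V", ("runs",))))
print(tree_kernel(t1, t2))  # 5: the five subtrees under the shared NP
```

The kernel value is exactly the dot product of the (implicit) subtree-indicator feature vectors, which is what makes it usable inside kernelized learners such as SVMs.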
In natural language processing, language identification or language guessing is the problem of determining which natural language given content is in. Computational approaches to this problem view it as a special case of text categorization, solved with various statistical methods. | https://en.wikipedia.org/wiki/Language_identification |
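A minimal sketch of the text-categorization view: build character-trigram profiles per language and pick the best-overlapping one. The tiny training sentences and function names are invented for illustration; real systems train on large corpora and use stronger statistical models:

```python
# Language guessing as text categorization via character-trigram
# overlap. Training texts are toy examples, invented for illustration.

from collections import Counter

def trigram_profile(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def guess_language(text, profiles):
    """Return the language whose trigram profile overlaps the text most."""
    target = trigram_profile(text)
    def overlap(profile):
        return sum(min(target[g], profile[g]) for g in target)
    return max(profiles, key=lambda lang: overlap(profiles[lang]))

profiles = {
    "english": trigram_profile("the quick brown fox jumps over the lazy dog"),
    "german": trigram_profile("der schnelle braune fuchs springt ueber den hund"),
}
print(guess_language("the dog jumps", profiles))  # english
```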
In natural language processing, linguistics, and neighboring fields, Linguistic Linked Open Data (LLOD) describes a method and an interdisciplinary community concerned with creating, sharing, and (re-)using language resources in accordance with Linked Data principles. The Linguistic Linked Open Data Cloud was conceived and is being maintained by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation, and has since been a focal point of activity for several W3C community groups, research projects, and infrastructure efforts. | https://en.wikipedia.org/wiki/Linguistic_Linked_Open_Data |
In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically maximized using maximum a posteriori (MAP) estimation, must be learned using an iterative procedure. | https://en.wikipedia.org/wiki/Multinomial_logistic_regression |
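The "counting" contrast described above can be made concrete: training a naive Bayes classifier is just tallying feature/class co-occurrences, with no iterative optimization. The toy documents, labels, and function names below are invented for illustration, and Laplace smoothing is added so unseen words do not zero out a class:

```python
# Naive Bayes training as pure counting: tally class frequencies and
# per-class word frequencies, then score with smoothed log-probabilities.
# Documents and labels are toy data, invented for illustration.

from collections import defaultdict
import math

def train_naive_bayes(documents):
    class_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for words, label in documents:
        class_counts[label] += 1
        for w in words:
            feature_counts[label][w] += 1
    return class_counts, feature_counts

def predict(words, class_counts, feature_counts, vocab_size):
    total = sum(class_counts.values())
    def log_score(label):
        n = sum(feature_counts[label].values())
        score = math.log(class_counts[label] / total)
        for w in words:
            # Laplace smoothing: add 1 so unseen words don't zero the score
            score += math.log((feature_counts[label][w] + 1) / (n + vocab_size))
        return score
    return max(class_counts, key=log_score)

docs = [(["win", "cash", "now"], "spam"),
        (["meeting", "at", "noon"], "ham"),
        (["cash", "prize"], "spam")]
vocab = {w for words, _ in docs for w in words}
cc, fc = train_naive_bayes(docs)
print(predict(["cash", "now"], cc, fc, len(vocab)))  # spam
```

A multinomial logistic regression model over the same data would instead fit its weights by repeated gradient or MAP updates, which is the slower iterative procedure the passage refers to.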
In natural language processing, open information extraction (OIE) is the task of generating a structured, machine-readable representation of the information in text, usually in the form of triples or n-ary propositions. | https://en.wikipedia.org/wiki/Open_information_extraction |
In natural language processing, semantic compression is a process of compacting a lexicon used to build a textual document (or a set of documents) by reducing language heterogeneity, while maintaining text semantics. As a result, the same ideas can be represented using a smaller set of words. In most applications, semantic compression is a lossy compression, that is, increased prolixity does not compensate for the lexical compression, and an original document cannot be reconstructed in a reverse process. | https://en.wikipedia.org/wiki/Semantic_compression |
In natural language processing, semantic role labeling (also called shallow semantic parsing or slot-filling) is the process that assigns labels to words or phrases in a sentence that indicate their semantic role in the sentence, such as that of an agent, goal, or result. It serves to find the meaning of the sentence. To do this, it detects the arguments associated with the predicate or verb of a sentence and how they are classified into their specific roles. A common example is the sentence "Mary sold the book to John." The agent is "Mary," the predicate is "sold" (or rather, "to sell"), the theme is "the book," and the recipient is "John." Another example is how "the book belongs to me" would need two labels such as "possessed" and "possessor" and "the book was sold to John" would need two other labels such as theme and recipient, despite these two clauses being similar to "subject" and "object" functions. | https://en.wikipedia.org/wiki/Semantic_role_labelling |
In natural language processing, textual entailment (TE), also known as natural language inference (NLI), is a directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text. | https://en.wikipedia.org/wiki/Textual_entailment |
In natural language, relative sentences combined with coordinations can introduce ambiguity: A customer inserts a card that is valid and opens an account. In ACE the sentence has the unequivocal meaning that the customer opens an account, as reflected by the paraphrase: A card is valid. A customer inserts the card. The customer opens an account. To express the alternative—though not very realistic—meaning that the card opens an account, the relative pronoun that must be repeated, thus yielding a coordination of relative sentences: A customer inserts a card that is valid and that opens an account. This sentence is unambiguously equivalent in meaning to the paraphrase: A card is valid. The card opens an account. A customer inserts the card. | https://en.wikipedia.org/wiki/Attempto_Controlled_English |
In natural languages, an indicative conditional is a conditional sentence such as "If Leona is at home, she isn't in Paris", whose grammatical form restricts it to discussing what could be true. Indicatives are typically defined in opposition to counterfactual conditionals, which have extra grammatical marking which allows them to discuss eventualities which are no longer possible. Indicatives are a major topic of research in philosophy of language, philosophical logic, and linguistics. Open questions include which logical operation indicatives denote, how such denotations could be composed from their grammatical form, and the implications of those denotations for areas including metaphysics, psychology of reasoning, and philosophy of mathematics. | https://en.wikipedia.org/wiki/Indicative_conditional |
In natural languages, the meaning of a complex spoken sentence can be understood by decomposing it into smaller lexical segments (roughly, the words of the language), associating a meaning to each segment, and combining those meanings according to the grammar rules of the language. Though lexical recognition is not thought to be used by infants in their first year, due to their highly limited vocabularies, it is one of the major processes involved in speech segmentation for adults. Three main models of lexical recognition exist in current research: first, whole-word access, which argues that words have a whole-word representation in the lexicon; second, decomposition, which argues that morphologically complex words are broken down into their morphemes (roots, stems, inflections, etc.) and then interpreted; and third, the view that whole-word and decomposition models are both used, but that the whole-word model provides some computational advantages and is therefore dominant in lexical recognition. To give an example, in a whole-word model, the word "cats" might be stored and searched for by letter, first "c", then "ca", "cat", and finally "cats". The same word, in a decompositional model, would likely be stored under the root word "cat" and could be searched for after removing the "s" suffix. | https://en.wikipedia.org/wiki/Speech_segmentation |
"Falling", similarly, would be stored as "fall" and suffixed with the "ing" inflection. Though proponents of the decompositional model recognize that a morpheme-by-morpheme analysis may require significantly more computation, they argue that the unpacking of morphological information is necessary for other processes (such as syntactic structure) which may occur parallel to lexical searches. As a whole, research into systems of human lexical recognition is limited due to little experimental evidence that fully discriminates between the three main models. In any case, lexical recognition likely contributes significantly to speech segmentation through the contextual clues it provides, given that it is a heavily probabilistic system—based on the statistical likelihood of certain words or constituents occurring together. For example, one can imagine a situation where a person might say "I bought my dog at a ____ shop" and the missing word's vowel is pronounced as in "net", "sweat", or "pet". | https://en.wikipedia.org/wiki/Speech_segmentation |
While the probability of "netshop" is extremely low, since "netshop" isn't currently a compound or phrase in English, and "sweatshop" also seems contextually improbable, "pet shop" is a good fit because it is a common phrase and is also related to the word "dog". Moreover, an utterance can have different meanings depending on how it is split into words. A popular example, often quoted in the field, is the phrase "How to wreck a nice beach", which sounds very similar to "How to recognize speech". As this example shows, proper lexical segmentation depends on context and semantics which draws on the whole of human knowledge and experience, and would thus require advanced pattern recognition and artificial intelligence technologies to be implemented on a computer. | https://en.wikipedia.org/wiki/Speech_segmentation |
Lexical recognition is of particular value in the field of computer speech recognition, since the ability to build and search a network of semantically connected ideas would greatly increase the effectiveness of speech-recognition software. Statistical models can be used to segment and align recorded speech to words or phones. Applications include automatic lip-synch timing for cartoon animation, follow-the-bouncing-ball video sub-titling, and linguistic research. Automatic segmentation and alignment software is commercially available. | https://en.wikipedia.org/wiki/Speech_segmentation |
In natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. Nuclear fission is the process of splitting a nucleus into roughly equal parts, and releasing energy and neutrons in the process. If these neutrons are captured by another unstable nucleus, they can fission as well, leading to a chain reaction. The average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. Values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and therefore is referred to as a self-sustaining chain reaction. | https://en.wikipedia.org/wiki/Nuclear_technology |
A mass of fissile material large enough (and in a suitable configuration) to induce a self-sustaining chain reaction is called a critical mass. When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. | https://en.wikipedia.org/wiki/Nuclear_technology |
When discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb — a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945 at Hiroshima and Nagasaki. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. | https://en.wikipedia.org/wiki/Nuclear_technology |
In 1951, the Experimental Breeder Reactor No. 1 (EBR-1) in Arco, Idaho became the first nuclear fission power plant to produce electricity, ushering in the "Atomic Age" of more intensive human energy use. However, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. Today, this type of fission is commonly used to generate electricity. | https://en.wikipedia.org/wiki/Nuclear_technology |
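The multiplication factor k described earlier lends itself to a one-line illustration: each neutron generation multiplies the population by k, so k > 1 grows without bound, k = 1 is exactly self-sustaining, and k < 1 dies out. The numbers below are invented:

```python
# Illustrative arithmetic for the multiplication factor k: each
# generation multiplies the neutron population by k. Values are invented.

def neutron_population(initial, k, generations):
    return initial * k ** generations

print(neutron_population(1000, 1.0, 10))   # critical: unchanged
print(neutron_population(1000, 1.02, 10))  # supercritical: growing
print(neutron_population(1000, 0.98, 10))  # subcritical: dying out
```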
In natural optical activity, the difference in absorption between LCP light and RCP light is caused by the asymmetry of the molecules (i.e. chiral molecules). Because of the handedness of the molecule, the absorption of LCP light differs from that of RCP light. However, in MCD in the presence of a magnetic field, LCP and RCP no longer interact equivalently with the absorbing medium. Thus, magnetic optical activity does not bear the same direct relation to molecular stereochemistry that natural optical activity does. | https://en.wikipedia.org/wiki/Magnetic_circular_dichroism |
Natural CD is thus much rarer than MCD, which does not strictly require the target molecule to be chiral. Although there is much overlap in the requirements and use of instruments, ordinary CD instruments are usually optimized for operation in the ultraviolet, approximately 170–300 nm, while MCD instruments are typically required to operate in the visible to near infrared, approximately 300–2000 nm. The physical processes that lead to MCD are substantively different from those of CD. However, like CD, it is dependent on the differential absorption of left and right hand circularly polarized light. MCD will only exist at a given wavelength if the studied sample has an optical absorption at that wavelength. This is distinctly different from the related phenomenon of optical rotatory dispersion (ORD), which can be observed at wavelengths far from any absorption band. | https://en.wikipedia.org/wiki/Magnetic_circular_dichroism |
In natural philosophy and the philosophy of science, medieval philosophers were mainly influenced by Aristotle. However, from the fourteenth century onward, the increasing use of mathematical reasoning in natural philosophy prepared the way for the rise of science in the early modern period. The more mathematical reasoning techniques of William Heytesbury and William of Ockham are indicative of this trend. Other contributors to natural philosophy are Albert of Saxony, John Buridan, and Nicholas of Autrecourt. See also the article on the Continuity thesis, the hypothesis that there was no radical discontinuity between the intellectual development of the Middle Ages and the developments in the Renaissance and early modern period. | https://en.wikipedia.org/wiki/Medieval_logic |
In natural photosynthesis, photosynthetic organisms produce energy-rich organic molecules from water and carbon dioxide by using solar radiation. Therefore, the process of photosynthesis removes carbon dioxide, a greenhouse gas, from the air. Artificial photosynthesis, as performed by the Bionic Leaf, is approximately 10 times more efficient than natural photosynthesis. Using a catalyst, the Bionic Leaf can remove excess carbon dioxide in the air and convert that to useful alcohol fuels, like isopropanol and isobutanol. The efficiency of the Bionic Leaf's artificial photosynthesis is the result of bypassing obstacles in natural photosynthesis by virtue of its artificiality. | https://en.wikipedia.org/wiki/Bionic_Leaf |
In natural systems, there are numerous energy conversion bottlenecks that limit the overall efficiency of photosynthesis. As a result, most plants do not exceed 1% efficiency and even microalgae grown in bioreactors do not exceed 3%. | https://en.wikipedia.org/wiki/Bionic_Leaf |
Existing artificial photosynthetic solar-to-fuels cycles may exceed natural efficiencies but cannot complete the cycle via carbon fixation. When the catalysts of the Bionic Leaf are coupled with the bacterium Ralstonia eutropha, this results in a hybrid system capable of carbon dioxide fixation. This system can store more than half of its input energy as products of carbon dioxide fixation. Overall, the hybrid design allows for artificial photosynthesis with efficiencies rivaling that of natural photosynthesis. | https://en.wikipedia.org/wiki/Bionic_Leaf |
In natural populations, genetic drift and natural selection do not act in isolation; both phenomena are always at play, together with mutation and migration. Neutral evolution is the product of both mutation and drift, not of drift alone. Similarly, even when selection overwhelms genetic drift, it can only act on variation that mutation provides. While natural selection has a direction, guiding evolution towards heritable adaptations to the current environment, genetic drift has no direction and is guided only by the mathematics of chance. | https://en.wikipedia.org/wiki/Wright-Fisher_model |
As a result, drift acts upon the genotypic frequencies within a population without regard to their phenotypic effects. In contrast, selection favors the spread of alleles whose phenotypic effects increase survival and/or reproduction of their carriers, lowers the frequencies of alleles that cause unfavorable traits, and ignores those that are neutral. The law of large numbers predicts that when the absolute number of copies of the allele is small (e.g., in small populations), the magnitude of drift on allele frequencies per generation is larger. The magnitude of drift is large enough to overwhelm selection at any allele frequency when the selection coefficient is less than 1 divided by the effective population size. | https://en.wikipedia.org/wiki/Wright-Fisher_model |
Non-adaptive evolution resulting from the product of mutation and genetic drift is therefore considered to be a consequential mechanism of evolutionary change primarily within small, isolated populations. The mathematics of genetic drift depend on the effective population size, but it is not clear how this is related to the actual number of individuals in a population. Genetic linkage to other genes that are under selection can reduce the effective population size experienced by a neutral allele. | https://en.wikipedia.org/wiki/Wright-Fisher_model |
With a higher recombination rate, linkage decreases and with it this local effect on effective population size. This effect is visible in molecular data as a correlation between local recombination rate and genetic diversity, and negative correlation between gene density and diversity at noncoding DNA regions. Stochasticity associated with linkage to other genes that are under selection is not the same as sampling error, and is sometimes known as genetic draft in order to distinguish it from genetic drift. Low allele frequency makes alleles more vulnerable to being eliminated by random chance, even overriding the influence of natural selection. For example, while disadvantageous mutations are usually eliminated quickly within the population, new advantageous mutations are almost as vulnerable to loss through genetic drift as are neutral mutations. Not until the allele frequency for the advantageous mutation reaches a certain threshold will genetic drift have no effect. | https://en.wikipedia.org/wiki/Wright-Fisher_model |
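The directionless character of drift can be illustrated with a minimal Wright-Fisher-style simulation: each generation, the allele count is binomially resampled from the previous frequency, so the frequency wanders with no preferred direction until the allele is lost or fixed. The population size, seed, and function name below are our own choices for illustration:

```python
# Minimal Wright-Fisher sketch of genetic drift: binomial resampling
# of allele counts each generation. Parameters are invented.

import random

def wright_fisher(freq, pop_size, generations, rng):
    """Resample allele frequency by binomial sampling each generation."""
    for _ in range(generations):
        count = sum(1 for _ in range(2 * pop_size) if rng.random() < freq)
        freq = count / (2 * pop_size)
        if freq in (0.0, 1.0):  # allele lost or fixed: drift stops
            break
    return freq

rng = random.Random(42)
final = wright_fisher(0.5, pop_size=100, generations=200, rng=rng)
print(final)  # somewhere in [0, 1]; drift alone has no direction
```

With a selection coefficient below 1 divided by the effective population size (here 1/100), this sampling noise is exactly what the text says can overwhelm selection.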
In natural resources management and environmental policy more generally, demand management refers to policies to control consumer demand for environmentally sensitive or harmful goods such as water and energy. Within manufacturing firms the term is used to describe the activities of demand forecasting, planning, and order fulfillment. In the environmental context demand management is increasingly taken seriously to reduce the economy's throughput of scarce resources for which market pricing does not reflect true costs. Examples include metering of municipal water, and carbon taxes on gasoline. | https://en.wikipedia.org/wiki/Demand_Management |
In natural science and signal processing, an artifact or artefact is any error in the perception or representation of any information introduced by the involved equipment or technique(s). | https://en.wikipedia.org/wiki/Artifact_(error) |
In natural science, impossibility assertions (like other assertions) come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. Two examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy, and exceeding the speed of light, which violates the implications of special relativity. Another is the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle. | https://en.wikipedia.org/wiki/Impossibility_proof |
There is also Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. | https://en.wikipedia.org/wiki/Impossibility_proof |
In natural science, subaerial (literally "under the air") has been used since 1833, notably in geology and botany, to describe features and events occurring or formed on or near the Earth's land surface. They are thus exposed to Earth's atmosphere. This may be contrasted with subaqueous events or features located below a water surface, submarine events or features located below a sea surface, subterranean events or features located below ground, or subglacial events or features located below glacial ice such as ice sheets. | https://en.wikipedia.org/wiki/Subaerial |
In natural selection, negative selection or purifying selection is the selective removal of alleles that are deleterious. This can result in stabilising selection through the purging of deleterious genetic polymorphisms that arise through random mutations. Purging of deleterious alleles can be achieved on the population genetics level, with as little as a single point mutation being the unit of selection. In such a case, carriers of the harmful point mutation have fewer offspring each generation, reducing the frequency of the mutation in the gene pool. In the case of strong negative selection on a locus, the purging of deleterious variants will result in the occasional removal of linked variation, producing a decrease in the level of variation surrounding the locus under selection. | https://en.wikipedia.org/wiki/Purifying_selection |
The incidental purging of non-deleterious alleles due to such spatial proximity to deleterious alleles is called background selection. This effect increases with lower mutation rate but decreases with higher recombination rate. Purifying selection can be split into purging by non-random mating (assortative mating) and purging by genetic drift. Purging by genetic drift can remove primarily deeply recessive alleles, whereas natural selection can remove any type of deleterious alleles. | https://en.wikipedia.org/wiki/Purifying_selection |
In natural settings, the black vulture eats mainly carrion. In areas populated by humans, it may scavenge at garbage dumps, but also takes eggs, fruit (both ripe and rotting), fish, dung and ripe/decomposing plant material and can kill or injure newborn or incapacitated mammals. Like other vultures, it plays an important role in the ecosystem by disposing of carrion which would otherwise be a breeding ground for disease. The black vulture locates food either by sight or by following New World vultures of the genus Cathartes to carcasses. | https://en.wikipedia.org/wiki/Black_vulture |
These vultures—the turkey vulture, the lesser yellow-headed vulture, and the greater yellow-headed vulture—forage by detecting the scent of ethyl mercaptan, a gas produced by the beginnings of decay in dead animals. Their heightened ability to detect odors allows them to search for carrion below the forest canopy. The black vulture is aggressive when feeding, and may chase the slightly larger turkey vulture from carcasses. The black vulture also occasionally feeds on livestock or deer. | https://en.wikipedia.org/wiki/Black_vulture |
It is the only species of New World vulture which preys on cattle. It occasionally harasses cows which are giving birth, but primarily preys on newborn calves, as well as lambs and piglets. In its first few weeks, a calf will allow vultures to approach it. | https://en.wikipedia.org/wiki/Black_vulture |
The vultures swarm the calf in a group, then peck at the calf's eyes, or at the nose or the tongue. The calf then goes into shock and is killed by the vultures. Black vultures have sometimes been observed removing and eating ticks from resting capybaras and Baird's tapir (Tapirus bairdii). These vultures are known to kill baby herons and seabirds on nesting colonies, and feed on domestic ducks, small birds, skunks, opossums, other small mammals, lizards, small snakes, young turtles and insects. | https://en.wikipedia.org/wiki/Black_vulture |
Like other birds with scavenging habits, the black vulture presents resistance to pathogenic microorganisms and their toxins. Many mechanisms may explain this resistance. Anti-microbial agents may be secreted by the liver or gastric epithelium, or produced by microorganisms of the normal microbiota of the species. | https://en.wikipedia.org/wiki/Black_vulture |
In natural systems, secondary minerals may form as a byproduct of bacterial metal reduction. Commonly observed secondary minerals produced during experimental bio-reduction by dissimilatory metal reducers include magnetite, siderite, green rust, vivianite, and hydrous Fe(II)-carbonate. | https://en.wikipedia.org/wiki/Dissimilatory_metal-reducing_bacteria |
In natural time domain each event is characterized by two terms, the "natural time" χ, and the energy Qk. χ is defined as k/N, where k is a natural number (the k-th event) and N is the total number of events in the time sequence of data. A related term, pk, is the ratio Qk / Qtotal, which describes the fractional energy released. The term κ1 is the variance in natural time: \kappa_1 = \sum_{k=1}^{N} p_k \chi_k^2 - \bigl(\sum_{k=1}^{N} p_k \chi_k\bigr)^2, where \chi_k = k/N and p_k = Q_k / \sum_{n=1}^{N} Q_n. | https://en.wikipedia.org/wiki/Natural_time_analysis |
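The definition of κ1 above translates directly into code; the event energies Q_k below are invented for illustration:

```python
# Compute the natural-time variance kappa_1 from a sequence of event
# energies Q_k, following chi_k = k/N and p_k = Q_k / sum(Q).
# The energy values are invented for illustration.

def kappa1(energies):
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]
    chi = [(k + 1) / n for k in range(n)]
    mean = sum(pk * xk for pk, xk in zip(p, chi))
    mean_sq = sum(pk * xk ** 2 for pk, xk in zip(p, chi))
    return mean_sq - mean ** 2

# With uniform energies, p_k is uniform and kappa_1 is just the
# variance of chi_k.
print(kappa1([1.0, 1.0, 1.0, 1.0]))  # 0.078125
```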
In natural units where c = 1, the energy–momentum equation reduces to E^2 = p^2 + m_0^2. In particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c−1, and mass in units of eV·c−2. | https://en.wikipedia.org/wiki/Relativistic_energy-momentum_equation |
In electromagnetism, and because of relativistic invariance, it is useful to have the electric field E and the magnetic field B in the same unit (Gauss), using the cgs (Gaussian) system of units, where energy is given in units of erg, mass in grams (g), and momentum in g·cm·s−1. Energy may also in theory be expressed in units of grams, though in practice it requires a large amount of energy to be equivalent to masses in this range. For example, the first atomic bomb liberated about 1 gram of heat, and the largest thermonuclear bombs have generated a kilogram or more of heat. Energies of thermonuclear bombs are usually given in tens of kilotons and megatons referring to the energy liberated by exploding that amount of trinitrotoluene (TNT). | https://en.wikipedia.org/wiki/Relativistic_energy-momentum_equation |
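A quick numeric check of the natural-unit relation E^2 = p^2 + m_0^2 (c = 1), with eV-based values; the electron rest mass is the standard ≈0.511 MeV, and the momentum value is illustrative:

```python
# Relativistic energy in natural units (c = 1): E = sqrt(p^2 + m0^2).
# The momentum value below is invented for illustration.

import math

def total_energy(p, m0):
    """Total relativistic energy for momentum p and rest mass m0."""
    return math.sqrt(p * p + m0 * m0)

m_electron = 0.511e6   # eV (electron rest mass energy)
p = 1.0e6              # eV (momentum, illustrative)
print(total_energy(p, m_electron))
```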
In natural units, the Dirac equation may be written as (i\gamma^{\mu}\partial_{\mu} - m)\psi = 0, where \psi is a Dirac spinor. Switching to Feynman slash notation, the Dirac equation is (i\partial\!\!\!/ - m)\psi = 0. | https://en.wikipedia.org/wiki/Dirac_matrices |
In naturally ventilated buildings, occupants take numerous actions to keep themselves comfortable when the indoor conditions drift towards discomfort. Operating windows and fans, adjusting blinds/shades, changing clothing, and consuming food and drinks are some of the common adaptive strategies. Among these, adjusting windows is the most common. Those occupants who take these sorts of actions tend to feel cooler at warmer temperatures than those who do not. The behavioral actions significantly influence energy simulation inputs, and researchers are developing behavior models to improve the accuracy of simulation results. For example, there are many window-opening models that have been developed to date, but there is no consensus over the factors that trigger window opening. People might adapt to seasonal heat by becoming more nocturnal, doing physical activity and even conducting business at night. | https://en.wikipedia.org/wiki/PMV/PPD_model |