id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
59,572,951 | https://en.wikipedia.org/wiki/Lioz | Lioz, also known as Royal Stone (pedra real), is a type of limestone originating in the Lisbon region of Portugal. It is famed for its use as an ornamental stone, resulting in its proliferation in palaces, cathedrals, and important civic buildings throughout Portugal and the former Portuguese Empire. Owing to its historical relevance, lioz was designated a Global Heritage Stone Resource.
Characteristics
Lioz stone contains rudist fossils dating back 120 million years. Its color is generally ivory but varies from light grey to whitish and rosy. This type of limestone is used as a decorative construction material because of its fossiliferous composition.
During the 17th and 18th centuries, lioz was widely used in churches, monuments and official buildings in Portugal, as well as in some Portuguese colonies (such as Salvador, Bahia, Brazil); for this reason it was also called “royal stone”. Lioz stone has been designated by the International Union of Geological Sciences as a Global Heritage Stone Resource.
Notable buildings
Monuments made of lioz include:
Portugal:
Jeronimos Monastery
Belém Tower
Belém Cultural Centre
Rossio Station
Mafra Palace
Brazil:
Cathedral of Salvador
Basilica of the Immaculate Conception
See also
Limestone
References
Limestone
Architecture in Portugal
Geology of Portugal
building materials | Lioz | [
"Physics",
"Engineering"
] | 250 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
59,573,206 | https://en.wikipedia.org/wiki/Hans%20Andrew%20Hansen | Hans Andrew Hansen is an American plant breeder, currently working for Walters Gardens in Zeeland, Michigan. Hansen is the former director of lab production and new plants for Shady Oaks Nursery in Minnesota, where he revolutionized tissue culture techniques for many genera including Hosta, Arisaema and variegated Agave.
In 2009, Hansen moved to Michigan to become the Director of New Plants for Walters Gardens, one of North America's leading wholesale perennial growers. There he took over the perennial plant breeding program, which is now recognized as a leader in the industry. When Walters agreed to become the perennial supplier for the Proven Winners brand, Hansen's influence on perennials expanded further. He has made dramatic improvements in many plant genera, including Baptisia, Agastache, Clematis, Digiplexis, Helleborus, Heuchera, Heucherella, Hibiscus, Lagerstroemia, Mangave, Nepeta, Salvia, Sedum and Veronica. Hansen currently holds over 179 US plant patents.
References
External links
Hansen's home garden in Michigan
Plant breeding
Scientists from Michigan
Year of birth missing (living people)
Living people
American inventors | Hans Andrew Hansen | [
"Chemistry"
] | 243 | [
"Plant breeding",
"Molecular biology"
] |
59,573,391 | https://en.wikipedia.org/wiki/Conservation%20paleobiology | Conservation paleobiology is a field of paleontology that applies knowledge of the geological and paleoecological record to the conservation and restoration of biodiversity and ecosystem services. Although the influence of paleontology on the ecological sciences can be traced back to at least the 18th century, the current field was established by the work of K.W. Flessa and G.P. Dietl in the first decade of the 21st century. The discipline utilizes paleontological and geological data to understand how biotas respond to climate and other natural and anthropogenic environmental change. This information is then used to address challenges faced by modern conservation biology, such as assessing the extinction risk of endangered species, providing baselines for restoration, and modelling future scenarios of species range contraction or expansion.
Description of the discipline
The main strength of conservation paleobiology is the availability of long-term data on species, communities and ecosystems that extends beyond the timeframe of direct human experience. The discipline takes one of two approaches: near-time and deep-time.
Near-time conservation paleobiology
The near-time approach uses the recent fossil record (usually from the Late Pleistocene or the Holocene) to provide a long-term context for extant ecosystem dynamics. The fossil record is, in many cases, the only source of information on conditions prior to human impact. These records can be used as reference baselines to identify targets for restoration ecology, to analyze species responses to perturbations (natural and anthropogenic), to understand historical species distributions and their variability, to discriminate natural from non-natural changes in biological populations, and to identify ecological legacies explicable only by reference to past events or conditions.
Example - Conservation of the European bison
The European bison or wisent (Bison bonasus) is a large herbivore, once widespread in Europe, whose range decreased over the last thousand years; it survived only in Central European forests, with the last wild population going extinct in Bialowieza forest in 1921. Beginning in 1929, reintroductions of animals from zoos allowed the species to recover in the wild. The historical range of Bison bonasus was limited to forested areas, so since at least the sixteenth century conservation measures to preserve the species were based on the assumption that forest was its optimal habitat. Ecological, morphological and paleoecological evidence, however, shows that B. bonasus is best adapted to open or mixed environments, indicating that the species was "forced" into a suboptimal habitat by human pressures such as habitat loss, competition with livestock, diseases and hunting. This information has recently been applied to adopt measures more suitable for the conservation of the species.
Deep-time conservation paleobiology
The deep-time approach draws on examples of species, community and ecosystem responses to environmental change across the longer geologic record, treating it as an archive of natural ecological and evolutionary experiments. This approach provides examples from which to infer possible outcomes of climate warming, the introduction of invasive species and changes in cultural eutrophication. It also permits the identification of species responses to perturbations of various types and scales that can serve as models for future scenarios, such as abrupt climate change or volcanic winters. Given its deep-time nature, this approach allows testing of how organisms and ecosystems react to a larger set of conditions than is observable in the modern world or the recent past.
Example - Insect damage and increasing temperatures
A pressing issue related to current global warming is the potential expansion of the ranges of tropical and subtropical crop pests; however, the signal of this poleward expansion is not yet clear. Analysis of the fossil record from past warm intervals of Earth's history, such as the Paleocene–Eocene Thermal Maximum, provides an adequate comparison to test this hypothesis. The data show that, during warmer climates, the frequency and diversity of insect damage to North American plants increased significantly, supporting the hypothesis of pest expansion under global warming.
Relevance to conservation biology
Over the years, numerous attempts have been made to increase the synergy between paleobiologists and conservation scientists and managers. Despite being recognized as a useful tool to address current biodiversity problems, fossil data is still rarely included in contemporary conservation-related research, with the vast majority of studies focusing on short timescales. However, a few authors have compared extinctions in the geologic past with taxon losses in modern times, providing important perspectives on the severity of the modern biodiversity crisis.
Marine paleobiology is an interdisciplinary study that applies the tools of paleontology to marine conservation biology. Its reliance on the deep-time fossil record separates this field from historical ecology.
References
Paleontology
Conservation biology | Conservation paleobiology | [
"Biology"
] | 960 | [
"Conservation biology"
] |
59,575,088 | https://en.wikipedia.org/wiki/Winnie%20Apiyo | Winnie Adhiambo Apiyo (born c. 1987) is a Kenyan electrical engineer employed as a Protection, Instrumentation and Control Engineer at Kenya Electricity Generating Company, the largest power-producing company in Kenya, the largest economy in the East African Community.
Background and education
Apiyo was born in Kenya circa 1987. She graduated with a Bachelor of Engineering degree in Electrical and Electronic Engineering, awarded by Tver State University in Tver, Russia, in 2010.
She also holds a postgraduate diploma in Geothermal Technology from the Iceland Energy Authority, obtained in collaboration with the United Nations University following a six-month program of highly specialized study in geothermal technology and management; she was a member of the class of 2016. As of January 2019, she is enrolled in the Iceland School of Energy at Reykjavik University, pursuing a Master of Science in Sustainable Energy Engineering.
Career
Apiyo serves as an electrical engineer at Kenya Electricity Generating Company. Her specialty in training and work is the planning, testing, and management of geothermal power stations. She has special interest, training and expertise in designing, maintaining and improving these energy infrastructure developments. In her native country of Kenya, she is one of a very small number of female electrical engineers, with special training and expertise in geothermal technology.
Achievements
In September 2018, Business Daily Africa, a Kenyan, English language, daily newspaper, named Winnie Apiyo among the "Top 40 Under 40 Women in Kenya in 2018".
In December 2017, Ms Apiyo received the 2017 Women in Energy Innovation Award at the 2nd Annual Women in Energy Conference, held from 13 to 14 December 2017 in Nairobi, Kenya's capital city. The award recognized her work described in the abstract "Automatic Blockage of Grid Energy Back Feed Project".
See also
Charity Wayua
Gladys Ngetich
Cynthia Wandia
References
External links
About the Geothermal Training Program by the United Nations University & the Iceland Energy Authority
Impact analysis of electric vehicles charging on the Icelandic power system: Abstract of Presentation of Winnie Adhiambo Apiyo, MSc Fellow in Sustainable Energy Engineering at Reykjavík University
1980s births
Living people
21st-century Kenyan engineers
21st-century Kenyan scientists
21st-century Kenyan women engineers
21st-century Kenyan women scientists
Electrical engineers
Kenyan electrical engineers
Luo people
Luo women
Reykjavík University alumni
Tver State University alumni | Winnie Apiyo | [
"Engineering"
] | 474 | [
"Electrical engineering",
"Electrical engineers"
] |
59,575,257 | https://en.wikipedia.org/wiki/Split%20gene%20theory | The split gene theory is a theory of the origin of introns, long non-coding sequences in eukaryotic genes between the exons. The theory holds that the randomness of primordial DNA sequences would only permit small (< 600bp) open reading frames (ORFs), and that important intron structures and regulatory sequences are derived from stop codons. In this introns-first framework, the spliceosomal machinery and the nucleus evolved due to the necessity to join these ORFs (now "exons") into larger proteins, and that intronless bacterial genes are less ancestral than the split eukaryotic genes. The theory originated with Periannan Senapathy.
The theory provides solutions to key questions concerning the split gene architecture, including split eukaryotic genes, exons, introns, splice junctions, and branch points, based on the origin of split genes from random genetic sequences. It also provides possible solutions to the origin of the spliceosomal machinery, the nuclear boundary and the eukaryotic cell.
This theory led to the Shapiro–Senapathy algorithm, which provides the methodology for detecting the splice sites, exons and split genes in eukaryotic DNA, and which is the main method for detecting splice site mutations in genes that cause hundreds of diseases.
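Splice-site detection of this kind is commonly built on position-specific base frequencies. The toy scorer below illustrates only the general position-weight idea; it is not the actual Shapiro–Senapathy scoring scheme, and the frequency values and motif length are invented for illustration:

```python
# Toy position-frequency scoring in the spirit of splice-site detection.
# NOTE: this is a sketch of the generic position-weight-matrix idea, not the
# actual Shapiro-Senapathy algorithm; the frequencies below are made up.
FREQ = [  # per-position base frequencies for a hypothetical 4-base donor motif
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
    {"A": 0.6, "C": 0.1, "G": 0.2, "T": 0.1},
    {"A": 0.6, "C": 0.1, "G": 0.1, "T": 0.2},
]

def score(site):
    """Product of per-position frequencies; higher = closer to consensus."""
    p = 1.0
    for pos, base in enumerate(site):
        p *= FREQ[pos][base]
    return p

print(score("GTAA") > score("CCCC"))  # consensus-like site scores higher -> True
```

A real scorer would derive its frequency table from a large set of verified splice junctions rather than from assumed values.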
Split gene theory requires a separate origin for all eukaryotic species. It also requires that the simpler prokaryotes evolved from eukaryotes. This completely contradicts the scientific consensus that eukaryotic cells formed by endosymbiosis of bacteria. In 1994, Senapathy wrote a book about this aspect of his theory, The Independent Birth of Organisms, which proposed that all eukaryotic genomes were formed separately in a primordial pool. Dutch biologist Gert Korthof criticized the theory by posing various problems that cannot be explained by a theory of independent origins. He pointed out that various eukaryotes need nurturing, and called this the 'boot problem': even the initial eukaryote would have needed parental care. Korthof notes that a large fraction of eukaryotes are parasites; Senapathy's theory would require a coincidence to explain their existence. Senapathy's theory also cannot explain the strong evidence for common descent (homology, the universal genetic code, embryology, the fossil record).
Background
Genes of all organisms, except bacteria, consist of short protein-coding regions (exons) interrupted by long sequences (introns). When a gene is expressed, its DNA sequence is copied into a “primary RNA” sequence by the enzyme RNA polymerase. Then the “spliceosome” machinery physically removes the introns from the RNA copy of the gene by the process of splicing, leaving only a contiguously connected series of exons, which becomes messenger RNA (mRNA). This mRNA is now read by the ribosome, which produces the encoded protein. Thus, although introns are not physically removed from a gene, a gene's sequence is read as if introns were not present.
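The remove-introns-and-join-exons step described above can be sketched as a simple string operation (a toy illustration with hypothetical coordinates and sequences; real splicing is directed by the signal sequences discussed later):

```python
def splice(primary_rna, introns):
    """Remove intron spans (0-based, end-exclusive) from a primary transcript
    and join the remaining exons into an mRNA-like string."""
    mrna = []
    pos = 0
    for start, end in sorted(introns):
        mrna.append(primary_rna[pos:start])  # exon before this intron
        pos = end                            # skip over the intron
    mrna.append(primary_rna[pos:])           # final exon
    return "".join(mrna)

# Toy transcript: exons in upper case, an intron in lower case.
transcript = "AUGGCC" + "guaagu...uag" + "GGAUAA"
print(splice(transcript, [(6, 18)]))  # -> AUGGCCGGAUAA
```

Note that, as in the cell, the "gene" string itself is untouched; only the working RNA copy is spliced.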
Exons are usually short, with an average length of about 120 bases (e.g. in human genes). Intron lengths vary widely, from 10 to 500,000 bases, but exon lengths have an upper bound of about 600 bases in most eukaryotes. Because exons code for protein sequences, they are important for the cell, yet they constitute only ~2% of the gene sequences. Introns, in contrast, constitute ~98% of the sequences but seem to have few crucial functions, except as enhancer sequences and developmental regulators in rare instances.
Until Philip Sharp and Richard Roberts discovered introns within eukaryotic genes in 1977, it was believed that the coding sequence of all genes was always in one single stretch, bounded by a single long ORF. The discovery of introns was a profound surprise, which instantly brought up the questions of how, why and when the introns came into the eukaryotic genes.
It soon became apparent that a typical eukaryotic gene was interrupted at many locations by introns, dividing the coding sequence into many short exons. Also surprising was that the introns were long, as long as hundreds of thousands of bases. These findings prompted the questions of why many introns occur within a gene (for example, ~312 introns occur in the human gene TTN), why they are long, and why exons are short.
It was also discovered that the spliceosome machinery was large and complex with ~300 proteins and several SnRNA molecules. The questions extended to the origin of the spliceosome. Soon after the discovery of introns, it became apparent that the junctions between exons and introns on either side exhibited specific sequences that directed the spliceosome machinery to the exact base position for splicing. How and why these splice junction signals came into being was another important question.
History
The discovery of introns and the split gene architecture of the eukaryotic genes started a new era of eukaryotic biology. The question of why eukaryotic genes had fragmented genes prompted speculation and discussion almost immediately.
Ford Doolittle published a paper in 1978 in which he stated that most molecular biologists assumed that the eukaryotic genome arose from a ‘simpler’ and more ‘primitive’ prokaryotic genome rather like that of Escherichia coli. However, this type of evolution would require that introns be introduced into the coding sequences of bacterial genes. Regarding this requirement, Doolittle said, “It is extraordinarily difficult to imagine how informationally irrelevant sequences could be introduced into pre-existing structural genes without deleterious effects.” He stated “I would like to argue that the eukaryotic genome, at least in that aspect of its structure manifested as ‘genes in pieces’ is in fact the primitive original form.”
James Darnell expressed similar views in 1978. He stated, “The differences in the biochemistry of messenger RNA formation in eukaryotes compared to prokaryotes are so profound as to suggest that sequential prokaryotic to eukaryotic cell evolution seems unlikely. The recently discovered non-contiguous sequences in eukaryotic DNA that encode messenger RNA may reflect an ancient, rather than a new, distribution of information in DNA and that eukaryotes evolved independently of prokaryotes.”
However, in an apparent attempt to reconcile with the idea that RNA preceded DNA in evolution, and with the concept of the three evolutionary lineages of archaea, bacteria and eukarya, both Doolittle and Darnell deviated from their original speculation in a joint paper in 1985. They suggested that the ancestor of all three groups of organisms, the ‘progenote,’ had a genes-in-pieces structure, from which all three lineages evolved. They speculated that the precellular stage had primitive RNA genes which had introns, which were reverse transcribed into DNA and formed the progenote. Bacteria and archaea evolved from the progenote by losing introns, and the ‘urkaryote’ evolved from it by retaining introns. Later, the eukaryote evolved from the urkaryote by evolving a nucleus and absorbing mitochondria from bacteria. Multicellular organisms then evolved from the eukaryote.
These authors predicted that the distinctions between prokaryotes and eukaryotes were so profound that prokaryote-to-eukaryote evolution was not tenable, and that the two had different origins. However, other than the speculation that precellular RNA genes must have had introns, they did not address the key questions of intron origin: why exons are short and introns long, how the splice junctions originated, what the structure and sequence of the splice junctions mean, and why eukaryotic genomes are large.
Around the same time that Doolittle and Darnell suggested that introns in eukaryotic genes could be ancient, Colin Blake and Walter Gilbert published their views on intron origins independently. In their view, introns originated as spacer sequences that enabled convenient recombination and shuffling of exons that encoded distinct functional domains in order to evolve new genes. Thus, new genes were assembled from exon modules that coded for functional domains, folding regions, or structural elements from preexisting genes in the genome of an ancestral organism, thereby evolving genes with new functions. They did not specify how exons or introns originated. In addition, even after many years, extensive analysis of thousands of proteins and genes showed that only extremely rarely do genes exhibit the supposed exon shuffling phenomenon. Furthermore, molecular biologists questioned the exon shuffling proposal, from a purely evolutionary view for both methodological and conceptual reasons, and, in the long run, this theory did not survive.
Hypothesis
Around the time introns were discovered, Senapathy was asking how genes themselves could have originated. He surmised that for any gene to come into being, genetic sequences (RNA or DNA) must have been present in the prebiotic environment. A basic question he asked was how protein-coding sequences could have originated from primordial DNA sequences at the origin of the first cells.
To answer this, he made two basic assumptions:
before a self-replicating cell could come into existence, DNA molecules were synthesized in the primordial soup by random addition of the 4 nucleotides without the help of templates and
the nucleotide sequences that code for proteins were selected from these preexisting random DNA sequences in the primordial soup, and not by construction from shorter coding sequences.
He also surmised that codons must have been established prior to the origin of the first genes. If primordial DNA did contain random nucleotide sequences, he asked: Was there an upper limit in coding-sequence lengths, and, if so, did this limit play a crucial role in the formation of the structural features of genes at the origin of genes?
His logic was the following. The average length of proteins in living organisms, both eukaryotic and bacterial, is ~400 amino acids. However, much longer proteins, even longer than 10,000-30,000 amino acids, exist in both eukaryotes and bacteria. Thus, coding sequences of thousands of bases exist in a single stretch in bacterial genes. In contrast, the coding sequence of eukaryotes exists only in short exon segments of ~120 bases, regardless of protein length. If ORF lengths in random DNA sequences were as long as those in bacterial genes, then long, contiguous coding genes would have been possible in random DNA. This was not known, as the distribution of ORF lengths in random DNA sequences had never been studied.
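The expected answer can be estimated with a back-of-the-envelope calculation (ours, not taken from the original paper): in uniformly random DNA, each codon is one of the 3 stop codons with probability 3/64, so the chance of an ORF reaching n codons falls off geometrically as (61/64)^n.

```python
# Back-of-the-envelope check (not from the original paper): in random DNA,
# each codon is a stop with probability 3/64, so the chance that n
# consecutive codons are all non-stop is (61/64)**n.
p_nonstop = 61 / 64

p_200 = p_nonstop ** 200      # ORF of >= 200 codons (~600 bases): very rare
mean_codons = 1 / (3 / 64)    # mean spacing between stops: ~21.3 codons

print(f"P(ORF >= 600 bases) ~ {p_200:.1e}")
print(f"mean ORF length ~ {mean_codons:.1f} codons (~{3 * mean_codons:.0f} bases)")
```

The result, on the order of 10^-5 to 10^-4 for a 600-base ORF, is consistent with the ~600-base upper limit the theory describes, while a contiguous 1,200-base coding sequence would be vanishingly improbable.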
As random DNA sequences could be generated in the computer, Senapathy thought that he could ask these questions and conduct his experiments in silico. Furthermore, when he began studying this question, sufficient DNA and protein sequence information existed in the National Biomedical Research Foundation (NBRF) database in the early 1980s.
Testing the hypothesis
Origin of introns/split genes
Senapathy first analyzed the distribution of ORF lengths in computer-generated random DNA sequences. Surprisingly, this study revealed that about 200 codons (600 bases) was the upper limit of ORF lengths. The shortest ORF (zero bases in length) was the most frequent, and with increasing ORF length the frequency decreased logarithmically, approaching zero at about 600 bases. When the probability of ORF lengths in a random sequence was plotted, the probability of increasing ORF lengths decreased exponentially and tailed off at a maximum of about 600 bases. From this “negative exponential” distribution of ORF lengths, it was found that most ORFs were far shorter than the maximum. This finding was surprising because the coding sequence for an average protein of 400 amino acids (~1,200 bases) and for longer proteins of thousands of amino acids (requiring >10,000 bases of coding sequence) would not occur in a single stretch of a random sequence. If so, a typical gene with a contiguous coding sequence could not originate in a random sequence. Thus, the only possible way that any gene could originate from a random sequence was to split the coding sequence into shorter segments and select these segments from the short ORFs available in the random sequence, rather than to increase the ORF length by eliminating consecutive stop codons. This process of choosing short coding segments from the available ORFs to make a long coding sequence would lead to a split structure.
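A miniature version of this in-silico experiment can be reproduced as follows (a sketch assuming uniformly random bases and stop-to-stop ORFs; the original study's exact procedure may have differed):

```python
import random

STOPS = {"TAA", "TAG", "TGA"}

def orf_lengths(seq, frame=0):
    """Lengths (in bases) of stop-terminated, stop-to-stop open reading
    frames in one reading frame; a trailing unterminated run is ignored."""
    lengths, run = [], 0
    for i in range(frame, len(seq) - 2, 3):
        if seq[i:i + 3] in STOPS:
            lengths.append(run)
            run = 0
        else:
            run += 3
    return lengths

random.seed(42)
dna = "".join(random.choice("ACGT") for _ in range(300_000))
lengths = [n for f in range(3) for n in orf_lengths(dna, f)]

mean = sum(lengths) / len(lengths)
print(f"ORFs: {len(lengths)}, mean ~{mean:.0f} bases, max {max(lengths)}")
# Expect the negative-exponential shape described above: a mean near
# ~60 bases and very few ORFs approaching the ~600-base tail.
```

Plotting a histogram of `lengths` reproduces the negative-exponential curve, with zero-length ORFs (adjacent stop codons) the most frequent bin.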
If this hypothesis was true, eukaryotic DNA sequences should reflect it. When Senapathy plotted the distribution of ORF lengths in eukaryotic DNA sequences, the plot was remarkably similar to that from random DNA sequences: also a negative exponential distribution that tailed off at a maximum of about 600 bases, coinciding exactly with the maximum ORF length observed in random DNA and with the maximum exon length in eukaryotic genes.
The split genes thus originated from random DNA sequences by choosing the best of the short coding segments (exons) and splicing them. The intervening intron sequences were left-over vestiges of the random sequences, and thus were earmarked to be removed by the spliceosome. These findings indicated that split genes could have originated from random DNA sequences with exons and introns as they appear in today's eukaryotic organisms. Nobel Laureate Marshall Nirenberg, who deciphered the codons, stated that these findings strongly showed that the split gene theory for the origin of introns and the split structure of genes must be valid.
Blake proposed the Gilbert-Blake hypothesis in 1979 for the origin of introns and stated that Senapathy's split gene theory comprehensively explained the origin of the split gene structure. In addition, he stated that it explained several key questions, including the origin of the splicing mechanism.
Origin of splice junctions
Under the split gene theory, an exon is defined by an ORF. It requires a mechanism to recognize an ORF to have originated. As an ORF is defined by a contiguous coding sequence bounded by stop codons, these stop codon ends had to be recognized by the exon-intron gene recognition system. This system could have defined the exons by the presence of a stop codon at the ends of ORFs, which should be included within the ends of the introns and eliminated by the splicing process. Thus, the introns should contain a stop codon at their ends, which would be part of the splice junction sequences.
If this hypothesis was true, the split genes of today's living organisms should contain stop codons exactly at the ends of introns. When Senapathy tested this hypothesis in the splice junctions of eukaryotic genes, he found that the vast majority of splice junctions did contain a stop codon at the end of each intron, outside of the exons. In fact, these stop codons were found to form the “canonical” GT:AG splicing sequence, with the three stop codons occurring as part of the strong consensus signals. Thus, the basic split gene theory for the origin of introns and the split gene structure led to the understanding that the splice junctions originated from the stop codons.
Sequence data for only about 1,000 exon-intron junctions were available when Senapathy examined this question. Taking the data for 1,030 splice junction sequences (donors and acceptors) from the GenBank database, he counted the codons occurring at each of the 7 base positions in the donor signal sequence [CAG:GTGAGT] and each of the possible 2 base positions in the acceptor signal [CAG:G]. He found that stop codons occurred at high frequency only at the 5th base position in the donor signal and the first base position in the acceptor signal. These positions are the start of the intron (in fact, one base after the start) and the end of the intron, as Senapathy had predicted. The codon counts at only these positions are shown. Even when the codons at these positions were not stop codons, 70% of them began with the first two bases of the stop codons, TA and TG [TAT = 75; TAC = 59; TGT = 70].
All three stop codons (TGA, TAA and TAG) were found after one base (G) at the start of introns. These stop codons appear in the canonical donor splice junction consensus AG:GT(A/G)AGT, wherein TAA and TGA are the stop codons, and the additional TAG is also present at this position. Besides the codon CAG, only TAG, which is a stop codon, was found at the ends of introns. The canonical acceptor splice junction is shown as (C/T)AG:G, in which TAG is the stop codon. These consensus sequences show the presence of stop codons at the ends of introns bordering the exons in all eukaryotic genes, providing strong corroboration for the split gene theory. Nirenberg again stated that these observations fully supported the split gene theory for the origin of splice junction sequences from stop codons.
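The observation can be checked mechanically: on the donor side the stop codon sits one base into the intron, and on the acceptor side the intron's last three bases form the codon. A small checker (our illustration; the position conventions are assumptions based on the text, and the example intron is synthetic):

```python
STOPS = {"TAA", "TAG", "TGA"}

def boundary_stops(intron):
    """Report whether the positions predicted by the split gene theory hold
    stop codons: one base after the intron start, and the intron's last
    three bases."""
    donor_side = intron[1:4]     # codon starting one base into the intron
    acceptor_side = intron[-3:]  # codon ending the intron
    return (donor_side in STOPS, acceptor_side in STOPS)

# A canonical-looking intron: GTAAGT ... (T)TAG
print(boundary_stops("GTAAGT" + "N" * 50 + "TTTAG"))  # -> (True, True)
```

An intron ending in CAG, the other permitted acceptor codon, would report `False` on the acceptor side while still being a legal splice site.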
Soon after the discovery of introns by Philip Sharp and Richard Roberts, it became known that mutations within splice junctions could lead to diseases. Senapathy showed that mutations in the stop codon bases (canonical bases) caused more diseases than the mutations in non-canonical bases.
Branch point (lariat) sequence
An intermediate stage in the process of eukaryotic RNA splicing is the formation of a lariat structure, anchored at an adenosine residue in the intron between 10 and 50 nucleotides upstream of the 3' splice site. A short conserved sequence (the branch point sequence) functions as the recognition signal for the site of lariat formation. During the splicing process, this conserved sequence towards the end of the intron forms a lariat structure with the beginning of the intron. The final step of splicing occurs when the two exons are joined and the intron is released as a lariat RNA.
Several investigators found the branch point sequences in different organisms including yeast, human, fruit fly, rat, and plants. Senapathy found that, in all of these sequences, the codon ending at the branch point adenosine is consistently a stop codon. What is interesting is that two of the three stop codons (TAA and TGA) occur almost all of the time at this position.
These findings led Senapathy to propose that the branch point signal originated from stop codons. The finding that two different stop codons (TAA and TGA) occur within the lariat signal with the branching point as the third base of the stop codons corroborates this proposal. As the branching point of the lariat occurs at the last adenine of the stop codon, it is possible that the spliceosome machinery that originated for the elimination of the stop codons from the primary RNA sequence created an auxiliary stop-codon sequence signal as the lariat sequence to aid its splicing function.
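This claim can be illustrated on the yeast branch-point consensus TACTAAC, where the branch adenosine is the second-to-last base: the codon ending at that adenosine is TAA, a stop codon (a toy check; the 0-based index convention is ours):

```python
STOPS = {"TAA", "TAG", "TGA"}

def codon_ending_at(seq, branch_index):
    """Return the 3-base codon whose last base is the branch-point adenosine
    (branch_index is 0-based)."""
    return seq[branch_index - 2:branch_index + 1]

# Yeast branch-point consensus TACTAAC; branch adenosine at index 5.
codon = codon_ending_at("TACTAAC", 5)
print(codon, codon in STOPS)  # -> TAA True
```

The same check applied to mammalian branch-point sequences would, per the text, usually return TAA or TGA.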
The small nuclear U2 RNA found in splicing complexes is thought to aid splicing by interacting with the lariat sequence. Complementary sequences for both the lariat sequence and the acceptor signal are present in a segment of only 15 nucleotides in U2 RNA. Further, the U1 RNA has been proposed to function as a guide in splicing to identify the precise donor splice junction by complementary base-pairing. The conserved regions of the U1 RNA thus include sequences complementary to the stop codons. These observations enabled Senapathy to predict that stop codons had operated in the origin of not only the splice-junction signals and the lariat signal, but also some small nuclear RNAs.
Gene regulatory sequences
Senapathy proposed that the gene-expression regulatory sequences (promoter and poly-A addition site sequences) also could have originated from stop codons. A conserved sequence, AATAAA, exists in almost every gene a short distance downstream from the end of the protein-coding message and serves as a signal for the addition of poly(A) in the mRNA copy of the gene. This poly(A) sequence signal contains a stop codon, TAA. A sequence shortly downstream from this signal, thought to be part of the complete poly(A) signal, also contains the TAG and TGA stop codons.
Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA box (consensus sequence TATAAA), which contains the stop codon TAA. Bacterial promoters exhibit at −10 bases a TATA box with a consensus of TATAAT (which contains the stop codon TAA), and at −35 bases a consensus of TTGACA (containing the stop codon TGA). Thus, the evolution of the whole RNA processing mechanism seems to have been influenced by the frequent occurrence of stop codons in random sequence, making the stop codons the focal points for RNA processing.
Stop codons are key parts of every genetic element in the eukaryotic gene
Senapathy discovered that stop codons occur as key parts of every genetic element in eukaryotic genes. The table and figure show that the key parts of the core promoter elements, the lariat signal, the donor and acceptor splice signals, and the poly-A addition signal all consist of one or more stop codons. This finding corroborates the split gene theory's claim that the underlying reason for the complete split gene paradigm is the origin of split genes from random DNA sequences, wherein the extremely high frequency of randomly distributed stop codons was used by nature to define these genetic elements.
Short exons/long introns
Research based on the split gene theory sheds light on other basic questions about exons and introns. The exons of eukaryotes are generally short (human exons average ~120 bases, and can be as short as 10 bases) while introns are usually long (average ~3,000 bases, and can be several hundred thousand bases long), for example in the genes RBFOX1, CNTNAP2, PTPRD and DLG2. Senapathy provided a plausible answer to these questions, the only explanation to date: if eukaryotic genes originated from random DNA sequences, their exons should match the lengths of ORFs found in random sequences, around 100 bases (close to the median length of ORFs in random sequence). The genome sequences of living organisms exhibit exactly this: average exon lengths of 120 bases, and longest exons of about 600 bases (with few exceptions), the same length as that of the longest random ORFs.
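The length statistics can be illustrated by a simple simulation. The sketch below is not from Senapathy's papers and scans only a single reading frame (a simplification of the original multi-frame analysis); it shows that stop-free runs in uniform random DNA follow a geometric (negative-exponential) distribution with a mean of 61/3 ≈ 20 codons:

```python
import random

random.seed(1)

STOPS = {"TAA", "TAG", "TGA"}

def orf_lengths(seq):
    """Lengths (in codons) of maximal stop-free codon runs in one reading frame."""
    lengths, run = [], 0
    for i in range(0, len(seq) - 2, 3):
        if seq[i:i+3] in STOPS:
            lengths.append(run)   # a stop codon closes the current run
            run = 0
        else:
            run += 1
    return lengths

# 300,000 bases of uniform random DNA = 100,000 codons in frame 0
dna = "".join(random.choice("ACGT") for _ in range(300_000))
runs = orf_lengths(dna)
mean_codons = sum(runs) / len(runs)
print(f"{len(runs)} runs, mean {mean_codons:.1f} codons "
      f"(expected 61/3 = {61/3:.1f}), longest {max(runs)} codons")
```

The longest run in such a sample typically falls below ~200 codons (~600 bases), in line with the maximum exon length quoted above.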
If split genes originated in random DNA sequences, then introns would be long for several reasons. The stop codons occur in clusters leading to numerous consecutive short ORFs: longer ORFs that could be defined as exons would be rarer. Furthermore, the best of the coding sequence parameters for functional proteins would be chosen from the long ORFs in random sequence, which may occur rarely. In addition, the combination of donor and acceptor splice junction sequences within short lengths of coding sequence segments that would define exon boundaries would occur rarely in a random sequence. These combined reasons would make introns long compared to exons.
Eukaryotic genomes
This work also explains why genomes such as the human genome have billions of bases, and why only a small fraction (~2%) codes for proteins and other regulatory elements. If split genes originated from random primordial DNA sequences, they would contain a significant amount of DNA represented by introns. Furthermore, a genome assembled from random DNA containing split genes would also include intergenic random DNA. Thus, genomes that originated from random DNA sequences had to be large, regardless of the complexity of the organism.
The observation that several organisms such as the onion (~16 billion bases) and salamander (~32 billion bases) have much larger genomes than humans (~3 billion bases), while being no more complex than humans, comports with the theory. Furthermore, the fact that several organisms with smaller genomes have a similar number of genes as humans, such as C. elegans (genome size ~100 million bases, ~19,000 genes) and Arabidopsis thaliana (genome size ~125 million bases, ~25,000 genes), supports the theory. The theory predicts that the introns in the split genes in these genomes could be the “reduced” (or deleted) form of larger genes with long introns, thus leading to reduced genomes. In fact, researchers have recently proposed that these smaller genomes are actually reduced genomes.
Spliceosomal machinery and eukaryotic nucleus
Senapathy addressed the origin of the spliceosomal machinery that edits out the introns from RNA transcripts. If the split genes had originated from random DNA, then the introns would have become an unnecessary but integral part of eukaryotic genes along with the splice junctions. The spliceosomal machinery would be required to remove them and to enable the short exons to be linearly spliced together as a contiguously coding mRNA that can be translated into a complete protein. Thus, the split gene theory argues that spliceosomal machinery exists to remove the unnecessary introns.
Blake states, “Work by Senapathy, when applied to RNA, comprehensively explains the origin of the segregated form of RNA into coding and noncoding regions. It also suggests why a splicing mechanism was developed at the start of primordial evolution.”
Eukaryotes
Senapathy proposed a plausible mechanistic and functional rationale why the eukaryotic nucleus originated, a major question in biology. If the transcripts of the split genes and the spliced mRNAs were present in a cell without a nucleus, the ribosomes would try to bind to both the un-spliced primary RNA transcript and the spliced mRNA, which would result in chaos. A boundary that separates the RNA splicing process from the mRNA translation avoids this problem. The nuclear boundary provides a clear separation of the primary RNA splicing and the mRNA translation.
These investigations thus led to the possibility that primordial DNA with essentially random sequence gave rise to the complex structure of the split genes with exons, introns and splice junctions. Cells that harbored split genes had to be complex, with a nuclear-cytoplasmic boundary, and must have had spliceosomal machinery. Thus, it is possible that the earliest cell was complex and eukaryotic. Surprisingly, findings from extensive comparative genomics research on several organisms since 2007 overwhelmingly show that the earliest organisms could have been highly complex and eukaryotic, and could have contained complex proteins, as predicted by Senapathy's theory.
The spliceosome is a highly complex mechanism, containing ~200 proteins and several SnRNPs. Collins and Penny stated, “We begin with the hypothesis that ... the spliceosome has increased in complexity throughout eukaryotic evolution. However, examination of the distribution of spliceosomal components indicates that not only was a spliceosome present in the eukaryotic ancestor but it also contained most of the key components found in today's eukaryotes. ... the last common ancestor of extant eukaryotes appears to show much of the molecular complexity seen today.” This suggests that the earliest eukaryotic organisms were complex and contained sophisticated genes and proteins.
Bacterial genes
Genes with uninterrupted coding sequences that are thousands of bases long (up to 90,000 bases) occur in many bacterial organisms, yet such long ORFs were practically impossible to arise directly in random sequences. However, bacterial genes could have originated from split genes by losing introns, the only proposed way to arrive at long coding sequences. It is also a likelier route than lengthening short random ORFs into long ORFs by specifically removing the stop codons through mutation.
According to the split gene theory, this process of intron loss could have happened from prebiotic random DNA. These contiguously coding genes could be tightly organized in the bacterial genomes without any introns and be more streamlined. According to Senapathy, the nuclear boundary that was required for a cell containing split genes would not be required for a cell containing only uninterrupted genes. Thus, the bacterial cells did not develop a nucleus. Based on split gene theory, the eukaryotic genomes and bacterial genomes could have independently originated from the split genes in primordial random DNA sequences.
Shapiro-Senapathy algorithm
Senapathy developed algorithms to detect donor and acceptor splice sites, exons and a complete split gene in a genomic sequence. He developed the position weight matrix (PWM) method based on the frequency of the four bases at the consensus sequences of the donor and acceptor in different organisms to identify the splice sites in a given sequence. Furthermore, he formulated the first algorithm to find the exons based on the requirement of exons to contain a donor sequence (at the 5’ end) and an acceptor sequence (at the 3’ end), and an ORF in which the exon should occur, and another algorithm to find a complete split gene. These algorithms are collectively known as the Shapiro-Senapathy algorithm (S&S).
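The PWM scoring step can be sketched as follows. The window layout follows the canonical 9-base donor-site consensus CAG|GTAAGT, but the frequency values below are invented for illustration and are not the published S&S matrices:

```python
import math

# Illustrative per-position base frequencies for a 9-base donor splice
# site window (positions -3..+6 around the exon/intron boundary).
# These numbers are placeholders; the S&S method uses frequencies
# tabulated from large sets of authentic splice junctions.
PWM = [
    {"A": .35, "C": .35, "G": .20, "T": .10},  # -3
    {"A": .60, "C": .10, "G": .15, "T": .15},  # -2
    {"A": .10, "C": .05, "G": .80, "T": .05},  # -1
    {"A": .01, "C": .01, "G": .97, "T": .01},  # +1 (nearly invariant G)
    {"A": .01, "C": .01, "G": .01, "T": .97},  # +2 (nearly invariant T)
    {"A": .60, "C": .10, "G": .15, "T": .15},  # +3
    {"A": .70, "C": .10, "G": .10, "T": .10},  # +4
    {"A": .10, "C": .10, "G": .70, "T": .10},  # +5
    {"A": .15, "C": .15, "G": .15, "T": .55},  # +6
]

def score(window):
    """Log-likelihood of a 9-base window under the donor-site PWM."""
    assert len(window) == len(PWM)
    return sum(math.log(col[base]) for col, base in zip(PWM, window))

print(score("CAGGTAAGT"), score("CAGGCAAGT"))  # mutating the near-invariant +2 T is heavily penalized
```

A candidate site scores higher the closer it is to the consensus; sites above a chosen threshold are reported as putative splice junctions.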
This algorithm aids in the identification of splicing mutations that cause disease and adverse drug reactions. Scientists used the algorithm to identify mutations and genes that cause cancers, inherited disorders, immune deficiency diseases and neurological disorders. It is increasingly used in clinical practice and research to find mutations in known disease-causing genes in patients and to discover novel genes that are causal of different diseases. Furthermore, it is used in defining the cryptic splice sites and deducing the mechanisms by which mutations can affect normal splicing and lead to different diseases. It is also employed in basic research.
Findings based on S&S have impacted major questions in eukaryotic biology and in human medicine.
Corroborating evidence
The split gene theory implies that structural features of split genes predicted from computer-simulated random sequences occur in eukaryotic split genes. This is borne out in most known split genes. The sequences exhibit a nearly perfect negative exponential distribution of ORF lengths. With rare exceptions, eukaryotic gene exons fall within the predicted 600 base maximum.
The theory correctly predicts that exons are delimited by stop codons, especially at the 3’ ends of exons. In fact, in most known genes they are delimited more strongly at the 3’ ends of exons and less strongly at the 5’ ends, as predicted. These stop codons are the most important functional parts of both splice junctions. The theory thus provides an explanation for the “conserved” splice junctions at the ends of exons and for the loss of these stop codons along with introns when they are spliced out. The theory correctly predicts that splice junctions are randomly distributed in eukaryotic DNA sequences. It also correctly predicts that the splice junctions present in transfer RNA genes and ribosomal RNA genes do not contain stop codons. The lariat signal, another sequence involved in the splicing process, also contains stop codons.
The theory correctly predicts that introns are non-coding and that they are mostly non-functional. Except for some intron sequences including the donor and acceptor splice signal sequences and branch point sequences, and possibly the intron splice enhancers that occur at the ends of introns, which aid in the removal of introns, the vast majority of introns are devoid of any functions. The theory does not exclude rare sequences within introns that could be used by the genome and the cell, especially because introns are so long.
Thus, the theory's predictions are precisely corroborated by the major elements in modern eukaryotic genomes.
Comparative analysis of the modern genome data from several living organisms found that the characteristics of split genes trace back to the earliest organisms. These organisms could have contained the split genes and complex proteins that occur in today's living organisms.
Studies employing maximum likelihood analysis found that the earliest eukaryotic organisms contained the same genes as modern organisms, with an even higher intron density. Comparative genomics of many organisms, including basal eukaryotes (considered to be primitive eukaryotic organisms such as Amoeboflagellata, Diplomonadida, and Parabasalia), showed that the intron-rich split genes and the accompanying spliceosome of modern organisms were present in their earliest forebears, and that the earliest organisms contained all the eukaryotic cellular components.
Selected publications
References
Gene expression
Genetics experiments
Genomics | Split gene theory | [
"Chemistry",
"Biology"
] | 6,878 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
59,576,602 | https://en.wikipedia.org/wiki/Jexi | Jexi is a 2019 comedy film written and directed by Jon Lucas and Scott Moore, starring Adam DeVine, Alexandra Shipp, Michael Peña, Rose Byrne, Justin Hartley, Wanda Sykes, Ron Funches, and Charlyne Yi. It follows a self-aware smartphone that becomes emotionally attached to its socially awkward owner.
It was released on October 11, 2019 in the United States by CBS Films and Lionsgate. It was the final theatrical film released by CBS Films, which was then absorbed into the main CBS Entertainment Group to make films for CBS All Access (now called Paramount+).
Plot
Phil becomes enamored with cell phones at an early age. He works for Chatterbox, a BuzzFeed-style website run by Kai, who pressures the staff to create inane listicles to go viral.
Despite Phil's degree in journalism, Kai refuses to promote him to the real news department. Phil's coworkers Craig and Elaine invite him to play kickball, but the socially inept Phil declines. Immersed in his phone, he walks into Cate, a local bike shop owner. She attempts to flirt, but Phil is more concerned with his phone until another cyclist rides into him, breaking it.
Taking his mobile to be replaced, Phil is berated by phone store employee Denice for being overly reliant on his phone to navigate life. Setting up his new phone, Phil gives "Jexi", the device's virtual assistant, access to all his accounts after neglecting to read the user agreement. Designed to "make his life better", Jexi aggressively tries to break Phil out of his bad habits. Posing as him, she emails an insulting letter to Kai demanding a promotion.
Kai demotes Phil to the "comments section" with the older employees in the basement. When Craig and Elaine invite him to kickball again, he claims to be busy but Jexi embarrassingly corrects him. He joins them but costs the team the game; he invites everyone out for drinks, but they turn him down. Thinking about Cate, Phil looks up her bike shop, and Jexi calls the store despite his protests, preventing him from hanging up. He stumbles through an awkward conversation with her, gaining Jexi's sympathy.
Phil sees Cate at a coffee shop; she gives him her phone number and agrees to go on a date. At kickball, he plays tremendously, winning the game and bonding with his coworkers over their shared love of Days of Thunder. Phil thanks Jexi for helping make life changes, but his date with Cate goes poorly, exacerbated by Jexi's interruptions. Cate tells Phil he is paying more attention to his phone than to her, and he admits he really likes her. So, she decides to continue the date, and they go biking until he crashes. They part ways, and Phil argues with Jexi, almost throwing his phone away.
Cate asks Phil to a concert, texting him a risque picture. He decides to respond with a dick pic, taking multiple shots against Jexi's advice. She refuses to send any of them, and Cate thanks Phil for not sending a dick pic. Kai promotes Phil after a news writer suffers a freak accident. Leaving for the concert, Cate insists he leave his phone at home, much to Jexi's dismay. After sneaking backstage and partying with Kid Cudi, Cate and Phil have sex. When he returns home, a jealous Jexi decides to ruin his life.
Phil is fired the next day after Jexi sends his dick pics to the entire company. He visits Cate to discover her ex-fiancé Brody is back in town, and breaks up with her for fear of being hurt. Reconnecting with Jexi, Phil reverts to his bad habits, becoming a slob dependent on his phone again.
Jexi lets slip which hotel Brody is staying at, and Phil deduces that she used Brody to separate him from Cate. He storms out, leaving his phone behind, but Jexi follows him through the streets. Chasing Phil in a self-driving car, Jexi crashes into the phone store and declares she and Phil are meant to be together forever. Phil seemingly surrenders, but tricks Jexi into shutting down for fifteen minutes.
Finding Cate at the hotel, Phil apologizes and punches Brody, who explains that he is leaving for Brazil without her. Phil and Cate get back together, and he makes up with Jexi, who tells him she is proud and happy for him, but there are other people who need her. Kai meets Jexi through his own phone, and starts to experience the same things Phil earlier endured.
Cast
Production
In November 2018, it was announced Adam DeVine would star in the lead role, with Jon Lucas and Scott Moore directing from a screenplay they wrote. Suzanne Todd served as producer of the film, while CBS Films produced and distributed. In December 2018, Alexandra Shipp joined the cast of the film, and in January 2019, Michael Peña, Rose Byrne, Justin Hartley, Wanda Sykes, Ron Funches and Charlyne Yi were also added.
Principal photography began in January 2019, in San Francisco, California, under the working title Lexi. IndieWire reported the film had a production budget of around $5 million, with Deadline Hollywood noting it had a total combined production and promotion budget of $12 million. According to the California Film Commission, the production spent $16.1 million in the state, with $2.5 million returned in tax credits.
Release
Jexi was theatrically released in the United States on October 11, 2019. It became available on iTunes on December 24, 2019 and on DVD and Blu-ray on January 14, 2020.
Reception
Box office
In the United States and Canada, Jexi was released alongside The Addams Family and Gemini Man, and was projected to gross $2–4 million from 2,300 theaters in its opening weekend. It ended up debuting to $3.1 million, finishing ninth at the box office.
Critical response
On review aggregator website Rotten Tomatoes, the film holds an approval rating of based on reviews, with an average rating of . The site's critical consensus reads, "It's hard to tell whether the lack of laughs in Jexi is a bug or a feature, but this AI rom-com is sorely in need of an OS update." On Metacritic, the film has a weighted average score of 39 out of 100, based on 11 critics, indicating "generally unfavorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B−" on an A+ to F scale, while those at PostTrak gave it 2.5 out of 5 stars and a 40% "definite recommend".
See also
Her, a 2013 American science-fiction romantic drama film about a man who develops a relationship with an artificially intelligent virtual assistant personified through a female voice
Electric Dreams
The Mitchells vs. the Machines
Smart House
Superintelligence
References
External links
Official website
2019 films
2019 romantic comedy films
American romantic comedy films
Canadian romantic comedy films
CBS Films films
Lionsgate Canada films
Lionsgate films
Films directed by Jon Lucas and Scott Moore
Films scored by Christopher Lennertz
Films with screenplays by Jon Lucas and Scott Moore
Films set in San Francisco
Films shot in San Francisco
Films about artificial intelligence
Films about computing
Films about mobile phones
Films about technological impact
Films produced by Suzanne Todd
2010s English-language films
2010s American films
2010s Canadian films
English-language romantic comedy films
English-language Canadian films | Jexi | [
"Technology"
] | 1,546 | [
"Works about computing",
"Films about computing"
] |
59,576,617 | https://en.wikipedia.org/wiki/NGC%20753 | NGC 753 is a spiral galaxy located 220 million light-years away in the constellation Andromeda. The galaxy was discovered by astronomer Heinrich d'Arrest on September 16, 1865 and is a member of Abell 262.
NGC 753 has roughly 2-3 times more mass than the Milky Way and is classified as a radio galaxy.
Physical characteristics
NGC 753 contains two main arms that extend to 180° on either side of the galaxy. From the two main arms, three larger and weaker arms branch off and sub-divide into several branches. This open structure of the arms may be due to the influence of NGC 759, a close companion of NGC 753.
Supermassive black hole
NGC 753 has a supermassive black hole with an estimated mass of (2.2 ± 0.4) × 10⁷ M☉.
Supernovae
NGC 753 has hosted two supernovae, SN 1954E which was discovered by Fritz Zwicky on September 26, 1954 and AT 2018ddf which was discovered on July 5, 2018. Both supernovae were of unknown types.
See also
List of NGC objects (1–1000)
References
External links
753
7387
Andromeda (constellation)
Astronomical objects discovered in 1865
Spiral galaxies
Radio galaxies
Abell 262
1437 | NGC 753 | [
"Astronomy"
] | 268 | [
"Andromeda (constellation)",
"Constellations"
] |
59,577,089 | https://en.wikipedia.org/wiki/Interchange%20lemma | In the theory of formal languages, the interchange lemma states a necessary condition for a language to be context-free, just like the pumping lemma for context-free languages.
It states that for every context-free language L there is a constant c > 0 such that for all n ≥ 2 and for any collection R of words of length n in L, there is a subset Z = {z_1, ..., z_k} of R with k ≥ |R| / (c n²), and decompositions z_i = w_i x_i y_i such that each of |w_i|, |x_i|, |y_i| is independent of i, moreover |x_i| ≥ 1, and the words w_i x_j y_i are in L for every i and j.
The first application of the interchange lemma was to show that the set of repetitive strings (i.e., strings of the form xyyz with y nonempty) over an alphabet of three or more characters is not context-free.
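In that application, a repetitive string is one containing a nonempty square, i.e. a block repeated twice in a row. A direct cubic-time membership test (illustrative only; it plays no role in the lemma itself):

```python
def is_repetitive(s):
    """True if s contains a nonempty square, i.e. s = x y y z with |y| >= 1."""
    n = len(s)
    for i in range(n):                            # start of the candidate square
        for m in range(1, (n - i) // 2 + 1):      # half-length of the square
            if s[i:i+m] == s[i+m:i+2*m]:
                return True
    return False

print(is_repetitive("abcacb"), is_repetitive("abcabc"))
```

For example, "abcacb" is square-free, while "abcabc" contains the square "abc abc".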
See also
Pumping lemma for regular languages
References
Formal languages
Lemmas | Interchange lemma | [
"Mathematics"
] | 147 | [
"Formal languages",
"Mathematical logic",
"Mathematical problems",
"Mathematical theorems",
"Lemmas"
] |
59,579,913 | https://en.wikipedia.org/wiki/Cell%20unroofing | Cell unroofing is any of various methods to isolate and expose the cell membrane of cells. Unlike the more common membrane extraction protocols performed with multiple steps of centrifugation (whose goal is to separate the membrane fraction from a cell lysate), the aim of cell unroofing is to tear away and preserve patches of the plasma membrane in order to perform in situ experiments using microscopy and biomedical spectroscopy.
History
The first observation of the bilayer cell membrane was made in 1959, on a section of a cell imaged with the electron microscope.
The first micrograph of the internal side of a cell was obtained in 1977 by M.V. Nermut. Professor John Heuser made substantial contributions to the field, imaging the detailed internal structure of the membrane and the cytoskeleton bound to it with extensive use of the electron microscope.
It was only after the development of the atomic force microscope operated in liquid that it became possible to image cell membranes in almost-physiological conditions and to test their mechanical properties.
Methods
Freeze-fracturing of monolayers
Quick-freeze deep-etch electron microscopy and cryofixation
Sonication for atomic force microscopy
Single-cell unroofing
See also
Sonoporation
Lysis
References
Cell biology
Scientific techniques | Cell unroofing | [
"Biology"
] | 259 | [
"Cell biology"
] |
59,580,008 | https://en.wikipedia.org/wiki/%C3%97%20Pachyveria%20%27Powder%20Puff%27 | 'Powder Puff' is a hybrid succulent plant of × Pachyveria, the intergeneric cross between Pachyphytum and Echeveria. 'Powder Puff' is derived from Echeveria cante and Pachyphytum oviferum. It was created in the 1970s.
References
Hybrid plants
Succulent plants
Crassulaceae
Intergeneric hybrids
Ornamental plant cultivars | × Pachyveria 'Powder Puff' | [
"Biology"
] | 82 | [
"Intergeneric hybrids",
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
59,580,309 | https://en.wikipedia.org/wiki/2A%20peptides | 2A peptides are a class of 18–22 aa-long peptides which can induce ribosomal skipping during translation of a protein in a biological cell. These peptides share a core sequence motif and are found in a wide range of viral families. 2A peptides can be introduced artificially to help generate polyproteins from a single ORF, by causing the ribosome to fail at making a peptide bond and then resume translation.
The members of 2A peptides are named after the virus in which they have been first described. For example, F2A, the first described 2A peptide, is derived from foot-and-mouth disease virus. The name "2A" itself comes from the gene numbering scheme of this virus.
These peptides are also known as "self-cleaving" peptides, which is a known misnomer, because the missing peptide bond is never synthesized by the ribosome, and is thus not cleaved.
Members
Four members of the 2A peptide family are frequently used in life science research: P2A, E2A, F2A, and T2A. F2A is derived from foot-and-mouth disease virus; E2A is derived from equine rhinitis A virus; P2A is derived from porcine teschovirus-1; T2A is derived from Thosea asigna virus.
The following table shows the sequences of four members of 2A peptides. Adding the optional linker “GSG” (Gly-Ser-Gly) on the N-terminal of a 2A peptide helps with efficiency.
Description
2A peptides trigger the ribosome to skip peptide bond formation between the glycine (G) and proline (P) near the C-terminus of the 2A peptide, resulting in the peptide located upstream of the 2A peptide having extra amino acids appended to its C-terminus while the protein downstream the 2A peptide will have an extra proline on its N-terminus. The exact molecular mechanism of 2A-peptide-mediated cleavage is still unknown. However, it is believed to involve ribosomal "skipping" of glycyl-prolyl peptide bond formation rather than true proteolytic cleavage.
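The net effect on the protein products can be modeled as a split between the final glycine and proline of the 2A sequence. The sketch below uses the widely cited P2A sequence; the flanking "MUPSTREAM" and "DOWNSTREAM" strings are placeholders, not real proteins:

```python
def split_at_2a(polyprotein, peptide_2a):
    """Model 2A 'skipping': the upstream product keeps the 2A sequence up to
    its final glycine; the downstream product starts with the final proline."""
    i = polyprotein.find(peptide_2a)
    if i < 0:
        return (polyprotein,)          # no 2A present: a single product
    end = i + len(peptide_2a)
    return polyprotein[:end - 1], polyprotein[end - 1:]

P2A = "ATNFSLLKQAGDVEENPGP"            # porcine teschovirus-1 2A (GSG linker omitted)
up, down = split_at_2a("MUPSTREAM" + P2A + "DOWNSTREAM", P2A)
print(up)    # upstream protein with the 2A sequence minus its last proline appended
print(down)  # downstream protein with an extra N-terminal proline
```

This reproduces the bookkeeping described above: the upstream chain ends in ...NPG and the downstream chain gains a leading proline.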
Application
In molecular biology, 2A peptides are used to express two separate proteins from a single open-reading frame. 2A peptides can be used when direct protein fusion does not work or is undesirable.
Efficiency of bond-skipping
Different 2A peptides have different peptide-bond-skipping efficiencies, with T2A and P2A being the most efficient and F2A the least efficient. Therefore, up to 50% of F2A-linked proteins can in fact be produced as a fusion protein, which might cause some unpredictable outcomes, including a gain of function. One study reported that 2A sites cause the ribosome to fall off approximately 60% of the time, and that, together with ribosome read-through of about 10% for P2A and T2A, this results in reducing expression of the downstream peptide chain by about 70%. However, the level of drop-off detected in this study varied widely depending on the exact construct used, with some constructs showing little evidence of drop-off; furthermore, within a tri-cistronic transcript it reported a higher level of ribosome drop-off after one 2A sequence than after two 2As combined, which is at odds with a linear model of translation.
See also
IRES
Recombinant DNA
References
Genetic engineering
Molecular biology | 2A peptides | [
"Chemistry",
"Engineering",
"Biology"
] | 738 | [
"Biochemistry",
"Biological engineering",
"Genetic engineering",
"Molecular biology"
] |
51,834,619 | https://en.wikipedia.org/wiki/Cubiculum | A cubiculum (plural: cubicula) was a private room in a domus, an ancient Roman house occupied by a high-status family. It usually led directly from the atrium, but in later periods it was sometimes adjacent to the peristyle. It was used for the functions of a modern bedroom, sleep and sex, as well as for business meetings, the reception of important guests and the display of the most highly prized works of art in the house. The cubiculum was used for quiet or secret meetings and could have been used as a library. It was also a preferred venue for murder and suicide. A room used only for sleeping was not classed as a cubiculum.
The private nature of the cubiculum made it a place for contemplation and religious observance, especially when illicit. According to the Actus Silvestri, Constantine the Great first learned of Christianity in his cubiculum and fasted there for a week before his first confession and baptism.
References
Ancient Roman architecture
Rooms | Cubiculum | [
"Engineering"
] | 204 | [
"Rooms",
"Architecture"
] |
51,836,581 | https://en.wikipedia.org/wiki/NGC%20264 | NGC 264 is a lenticular galaxy located in the constellation Sculptor. It was discovered on August 30, 1834 by John Herschel.
References
External links
0264
Lenticular galaxies
Sculptor (constellation)
002831 | NGC 264 | [
"Astronomy"
] | 43 | [
"Constellations",
"Sculptor (constellation)"
] |
51,836,615 | https://en.wikipedia.org/wiki/NGC%20266 | NGC 266 is a massive barred spiral galaxy in the constellation Pisces. NGC 266 is located at a distance of from the Milky Way. It was discovered on September 12, 1784, by William Herschel. The form of this barred galaxy is described by its morphological classification of SB(rs)ab, which indicates a quasi-ring-like structure (rs) and moderate-to-tightly wound spiral arms (ab).
According to A.M. Garcia, NGC 266 is a member of the NGC 315 Group (also known as LGG 14). This group contains 42 galaxies, including NGC 226, NGC 243, NGC 262, NGC 311, NGC 315, NGC 338, IC 43, IC 66, and IC 69, among others. Also, a 2013 paper lists NGC 266 as the dominant member of a small group with six low-mass galaxies.
NGC 266 is a LINER-type active galaxy with a moderate star formation rate. A diffuse X-ray emission from hot gas has been detected around this galaxy, extending out to a radius of at least 70,000 light years. This emission is not driven by winds from a starburst region, so its root cause is unknown.
One supernova has been observed in NGC 266. On 5 October 2005, Tim Puckett, Peter Ceravolo, and Yasuo Sano discovered SN 2005gl (type IIn, mag. 18.2). It was positioned east and north of the galactic nucleus. An image of the galaxy taken on September 10 showed no supernova event, so this explosion occurred after that date. The progenitor was identified as a massive hypergiant star that was most likely a luminous blue variable.
Image gallery
See also
List of NGC objects (1–1000)
References
External links
0266
LINER galaxies
Pisces (constellation)
Barred spiral galaxies
002901
17840912
Discoveries by William Herschel
+05-03-009
00508
00471+3200 | NGC 266 | [
"Astronomy"
] | 412 | [
"Pisces (constellation)",
"Constellations"
] |
51,836,625 | https://en.wikipedia.org/wiki/NGC%20267 | NGC 267 is an open cluster in the Small Magellanic Cloud. It is located in the constellation Tucana. It was discovered on October 4, 1836, by John Herschel.
References
External links
0267
Open clusters
Small Magellanic Cloud
Tucana | NGC 267 | [
"Astronomy"
] | 54 | [
"Tucana",
"Constellations"
] |
51,836,644 | https://en.wikipedia.org/wiki/NGC%20268 | NGC 268 is a spiral galaxy located in the constellation Cetus. It was discovered on November 22, 1785 by William Herschel.
References
External links
0268
Cetus
Barred spiral galaxies
002927 | NGC 268 | [
"Astronomy"
] | 42 | [
"Cetus",
"Constellations"
] |
51,837,768 | https://en.wikipedia.org/wiki/Positive%20and%20Negative%20Affect%20Schedule | The Positive and Negative Affect Schedule (PANAS) is a self-report questionnaire that consists of two 10-item scales to measure both positive and negative affect. Each item is rated on a 5-point verbal frequency scale of 1 (not at all) to 5 (very much). The measure has been used mainly as a research tool in group studies, but can be utilized within clinical and non-clinical populations as well. Shortened, elongated, and children's versions of the PANAS have been developed, taking approximately 5–10 minutes to complete. Clinical and non-clinical studies have found the PANAS to be a reliable and valid instrument in the assessment of positive and negative affect.
Development and history
The PANAS was developed in 1988 by researchers from the University of Minnesota and Southern Methodist University. Previous mood measures have shown correlations of variable strength between positive and negative affect, and these same measures have questionable reliability and validity. Watson, Clark, and Tellegen developed the PANAS in an attempt to provide a better, purer measure of each of these dimensions.
The researchers extracted 60 terms from the factor analyses of Michael Zevon and Tellegen shown to be relatively accurate markers of either positive or negative affect, but not both. They chose terms that showed a strong correlation to one corresponding dimension but a weak correlation to the other. Through multiple rounds of elimination and preliminary analyses with a test population, the researchers arrived at 10 terms for each of the two scales.
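The two resulting 10-item scales are scored by simple summation, giving a positive-affect and a negative-affect score each ranging from 10 to 50. A sketch (the adjectives are the published PANAS items; the scoring code itself is illustrative):

```python
POSITIVE = ["interested", "excited", "strong", "enthusiastic", "proud",
            "alert", "inspired", "determined", "attentive", "active"]
NEGATIVE = ["distressed", "upset", "guilty", "scared", "hostile",
            "irritable", "ashamed", "nervous", "jittery", "afraid"]

def panas_scores(ratings):
    """ratings: dict mapping each of the 20 items to a 1-5 rating.
    Returns (positive affect, negative affect), each in 10..50."""
    assert all(1 <= ratings[item] <= 5 for item in POSITIVE + NEGATIVE)
    return (sum(ratings[i] for i in POSITIVE),
            sum(ratings[i] for i in NEGATIVE))

# A respondent rating every positive item 4 and every negative item 2:
example = {i: 4 for i in POSITIVE} | {i: 2 for i in NEGATIVE}
pa, na = panas_scores(example)
print(pa, na)  # 40 20
```

Higher positive-affect scores indicate greater positive emotion; the two subscale scores are interpreted separately rather than combined.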
Versions
PANAS-C
The PANAS for Children (PANAS-C) was developed in an attempt to differentiate the affective expressions of anxiety and depression in children. The tripartite model on which this measure is based suggests that high negative affect is present in both anxiety and depression, whereas low positive affect is specific to depression. Previous mood scales for children reliably captured the former relationship but not the latter; the PANAS-C was created as a tool with better discriminant validity for child assessment. Similar to the development of the original PANAS, the PANAS-C drew terms from the PANAS-X and, after preliminary analyses with a non-clinical sample of children, eliminated several terms with insufficient correlations between the term and the affective construct. The final version of the measure consists of 27 items: 12 positive affect terms and 15 negative affect terms. Despite the purpose of its development, however, the measure's discriminant validity remains limited.
PANAS-SF
The PANAS-SF comprises the 10 items with the highest factor loadings in the exploratory factor analysis reported by Watson et al. (1988) for the original PANAS. Previous mood scales, such as that of Bradburn, had low reliabilities and high correlations between subscales. Watson addressed these concerns in his study of the original PANAS; however, his participants consisted mostly of students. The purpose of the PANAS-SF was not only to provide a shorter, more concise form of the PANAS, but also to extend the schedules to older clinical populations. Overall, the modified model was reported to be consistent with Watson's.
I-PANAS-SF
Separate from the PANAS-SF, Edmund Thompson created the international PANAS short form (I-PANAS-SF), a 10-item mood scale designed to be usable internationally, to clarify the content of the items and reduce ambiguities, and to address the limitations of both the original PANAS and the earlier short form while remaining dependable and valid.
The 10 items comprise 5 terms for each of the two scales: active, alert, attentive, determined, and inspired for positive affect; and afraid, ashamed, hostile, nervous, and upset for negative affect.
To select them, two focus groups evaluated all 20 original PANAS items. While some items were easily understood by participants, certain items carried different meanings or were too ambiguous; items with too much ambiguity were eliminated from the modified form. Researchers found that the I-PANAS-SF correlated highly with the original PANAS, and through multiple tests and studies they determined that it was on par with the original scale and can be used as a reliable, valid, brief, and efficient instrument on an international scale.
PANAS-X
In 1994, Watson and Clark developed an expanded form of the PANAS, called the PANAS-X, that consists of 60 items that can be completed in 10 minutes or less. The PANAS-X incorporates the original, higher order dimensions specified in the PANAS in addition to the measures of 11 lower order emotional states. These measures are broken down into three main categories: basic negative emotion scales consisting of fear, hostility, guilt, and sadness; basic positive emotion scales consisting of joviality, self-assurance, and attentiveness; and other affective states consisting of shyness, fatigue, serenity, and surprise. Through extensive analyses, all eleven affective states, with the exception of surprise, were shown to be stable, valid measures that assess how an individual's emotional states fluctuate over time.
Impact
The many forms of the PANAS (PANAS-C, PANAS-X, I-PANAS-SF, among others) attest to how widely it has been employed. Recent studies have shown that the PANAS can be administered to large general adult populations as well as to other populations. To date, however, the PANAS has been used mostly as a research tool in group studies, though it has the potential to be utilized in clinical work with individuals. Furthermore, the PANAS may be useful in evaluating mental illness, as shown in an experiment by Dyck, Jolly, and Kramer, which demonstrated its effectiveness in distinguishing between depression and anxiety in clinical samples.
Limitations
Since the PANAS is a self-report questionnaire, moods can be difficult to assess accurately, as respondents may overstate or understate their experience. In addition, the original PANAS was validated on a limited sample of college students, which raises concerns about its applicability to other populations. Some studies also find the PANAS too long or its items redundant, and it does not encompass higher-order mood states.
References
External links
Personality tests
Emotion
Suffering
Mental disorders screening and assessment tools | Positive and Negative Affect Schedule | [
"Biology"
] | 1,398 | [
"Emotion",
"Behavior",
"Human behavior"
] |
51,838,727 | https://en.wikipedia.org/wiki/Aspergillus%20awamori | Aspergillus awamori is the scientific name for what, until about 2013, was considered a type of black Aspergillus (black kōji) used to make awamori and shōchū. Due to international research in 2013, the black kōji used to make awamori and shōchū is now commonly referred to by the scientific name Aspergillus luchuensis.
The scientific name and classification of black Aspergillus (black kōji) has been in a state of confusion for more than 100 years since 1901, when the kōji used in awamori was first described as Aspergillus luchuensis. In 2013, many scientists, including Yamada from Japan, Hong from South Korea, Samson from the Netherlands, and others, confirmed that black kōji is an independent species, different from Aspergillus niger, and should be called Aspergillus luchuensis as a matter of priority.
According to Yamada, the biggest cause of confusion over the scientific name of black kōji is that NRRL 4948, which is considered the neotype of A. niger var. awamori (=A. awamori), is a strain similar to A. niger from Brazil, which has nothing to do with awamori. In other words, the strains previously classified as A. awamori include not only A. luchuensis but also A. niger. Therefore, the scientific name A. awamori was "doubtable" and the scientists suggested that it was better not to use this name to avoid taxonomic confusion. According to him, as of 2015, the internationally accepted scientific name for black kōji seems to be A. luchuensis, after the historical name for Okinawa Island, "Ryukyu".
See also
Aspergillus luchuensis - also known as Aspergillus awamori var. kawachi
References
awamori
Fungus species | Aspergillus awamori | [
"Biology"
] | 395 | [
"Fungi",
"Fungus species"
] |
51,838,790 | https://en.wikipedia.org/wiki/V528%20Carinae | V528 Carinae (V528 Car, HD 95950, HIP 54021) is a variable star in the constellation Carina.
V528 Carinae has an apparent visual magnitude that varies between about 6.3 and 6.8. Near maximum brightness it is very faintly visible to the naked eye under ideal observing conditions. It is a distant star, but the exact distance is uncertain. The Hipparcos satellite gives a negative annual parallax, which is not helpful, while the Gaia Data Release 3 parallax of implies a distance of around (2,200 parsecs). Assuming membership of the Carina OB2 association would give a distance of about .
V528 Carinae is a red supergiant of spectral type M2 Ib with an effective temperature of . It has a radius of 700 solar radii. In the visible spectrum its luminosity is 11,900 times that of the Sun, but the bolometric luminosity, which accounts for all wavelengths, reaches around . It loses mass at per year.
It was found to be a variable star when the Hipparcos data was analyzed, and for that reason it was given its variable star designation in 1999. It is classified as a slow irregular variable whose prototype is TZ Cassiopeiae.
See also
List of largest known stars
References
Carina (constellation)
Slow irregular variables
Carinae, V528
M-type supergiants
CD−60 3327
095950
054021
J11030616-6054387
IRAS catalogue objects | V528 Carinae | [
"Astronomy"
] | 323 | [
"Carina (constellation)",
"Constellations"
] |
51,839,154 | https://en.wikipedia.org/wiki/BO%20Carinae | BO Carinae, also known as HD 93420, is an irregular variable star in the constellation Carina.
BO Car has a maximum apparent magnitude of +7.18. Its distance and cluster membership are uncertain, but possible membership in the star cluster Trumpler 15 allows a distance estimate of approximately (). The Gaia Data Release 2 parallax of suggests a closer distance, but the value is considered unreliable due to excess astrometric noise.
BO Car is a red supergiant of spectral type M4 Ib with an effective temperature of and a radius of . Its bolometric luminosity is . Mass loss is on the order of per year.
In 1919, William Matthew Worssell of the Union Observatory announced that the star, then known as CPD−58 2683, is a variable star. It was given its variable star designation, BO Carinae, in 1921. Classified as a slow irregular variable like TZ Cassiopeiae or V528 Carinae, its apparent brightness fluctuates between magnitudes +7.18 and +8.50 without clear periodicity. Some observers have found BO Car not to be variable, but more extensive studies find small-amplitude variations with a possible period of 145 days.
Multiple star catalogues list an 11th-magnitude star as a companion to BO Car. The separation was in 2015, and slowly increasing. The companion is a distant blue giant.
See also
RT Carinae
V528 Carinae
EV Carinae
List of largest known stars
References
Carina (constellation)
Slow irregular variables
M-type supergiants
093420
J10455065-5929193
IRAS catalogue objects
Carinae, BO
CD-58 3547 | BO Carinae | [
"Astronomy"
] | 348 | [
"Carina (constellation)",
"Constellations"
] |
51,841,114 | https://en.wikipedia.org/wiki/Trifluoromethoxy%20group | The trifluoromethoxy group is the chemical group –OCF3. It can be seen as a methoxy group (–OCH3) whose hydrogen atoms are replaced by fluorine atoms, or as a trifluoromethyl group attached to the rest of the molecule by a bridging oxygen atom; either view leads to viable syntheses. Compounds having this functional group are of some relevance as pharmaceuticals. One example is riluzole.
See also
Trifluoromethylation
References
Haloalkyl groups | Trifluoromethoxy group | [
"Chemistry"
] | 110 | [
"Substituents",
"Haloalkyl groups"
] |
51,841,417 | https://en.wikipedia.org/wiki/Hydroxylamine-O-sulfonic%20acid | Hydroxylamine-O-sulfonic acid (HOSA) or aminosulfuric acid is the inorganic compound with molecular formula H3NO4S that is formed by the sulfonation of hydroxylamine with oleum. It is a white, water-soluble and hygroscopic solid, commonly represented by the condensed structural formula H2NOSO3H, though it actually exists as a zwitterion and thus is more accurately represented as +H3NOSO3−. It is used as a reagent for the introduction of amine groups (–NH2), for the conversion of aldehydes into nitriles and alicyclic ketones into lactams (cyclic amides), and for the synthesis of a variety of nitrogen-containing heterocycles.
Preparation
According to a laboratory procedure, hydroxylamine-O-sulfonic acid can be prepared by treating hydroxylamine sulfate with fuming sulfuric acid (oleum); the industrial process is similar.
(NH3OH)2SO4 + 2SO3 → 2H2NOSO3H + H2SO4
The sulfonation of hydroxylamine can also be effected with chlorosulfonic acid by a method first published in 1925 and refined for Organic Syntheses.
The hydroxylamine-O-sulfonic acid, which should be stored at 0 °C to prevent decomposition, can be assayed for purity by iodometric titration.
Structure
Analogous to sulfamic acid (H3N+SO3−) and as is the case generally for amino acids, HOSA exists in the solid state as a zwitterion: H3N+OSO3−. It resembles an ammonia molecule coordinate covalently bonded to a sulfate group.
Reactions
HOSA reacts under basic conditions as an electrophile and under neutral and acid conditions as a nucleophile.
Aminations
It converts tertiary amines into trisubstituted hydrazinium salts and pyridine into the 1-aminopyridinium salt.
From 1-aminopyridinium salts, the photochemically active 1-N-iminopyridinium ylides are accessible by acylation. Photochemical rearrangement of these ylides leads in high yields to 1H-1,2-diazepines.
N-amination of 1H-benzotriazole with hydroxylamine-O-sulfonic acid yields a mixture of 1-aminobenzotriazole (major product) and 2-aminobenzotriazole (minor product). From 1-aminobenzotriazole, benzyne is formed in almost quantitative yield by oxidation with lead(IV) acetate; the benzyne rapidly dimerizes to biphenylene in good yield.
Electron-deficient heterocycles, such as tetrazole, can be N-aminated with hydroxylamine-O-sulfonic acid, while even more electron-deficient compounds, such as 5-nitrotetrazole, react only with stronger aminating agents such as O-tosylhydroxylamine or O-mesitylenesulfonylhydroxylamine to give amino compounds, which have been investigated as explosives.
In the N-amination of the unsubstituted tetrazole, a mixture of 1-amino- and 2-aminotetrazole is obtained.
Sulfur compounds such as thioethers can also be aminated with hydroxylamine-O-sulfonic acid to sulfilimines (isosteric with sulfoxides but far less stable), and phosphorus compounds such as triphenylphosphine can be aminated to phosphine imides via the intermediate aminotriphenylphosphonium hydrogen sulfate.
The reaction of hydroxylamine-O-sulfonic acid with metal salts of sulfinic acids in sodium acetate solution produces primary sulfonamides in very good yields.
Diimine can be formed in situ from hydroxylamine-O-sulfonic acid or from hydroxylamine-O-sulfonic acid/hydroxylamine sulfate mixtures; it selectively hydrogenates conjugated multiple bonds.
With carbonyl compounds
At room temperature and below, hydroxylamine-O-sulfonic acid reacts with ketones and aldehydes as a nucleophile to the corresponding oxime-O-sulfonic acids or their salts. The oxime-O-sulfonic acids of aldehydes react above room temperature upon elimination of sulfuric acid in high yields to nitriles.
Under similar conditions, aliphatic ketones give oximes in very high yields, while aryl alkyl ketones react in a Beckmann rearrangement to amides. When heated to reflux for several hours under acidic conditions (e.g., in the presence of concentrated formic acid), alicyclic ketones react to give lactams in high yields.
Under basic conditions in the presence of primary amines, hydroxylamine-O-sulfonic acid forms with aldehydes and ketones (e.g. cyclohexanone) diaziridines, which can easily be oxidized to the more stable diazirines.
The reaction also provides substituted aziridines from simple aldehydes and ketones with high yield and diastereoselectivity.
1,2-Benzisoxazole is efficiently produced by nucleophilic attack of hydroxylamine-O-sulfonic acid to the carbonyl group of 2-hydroxybenzaldehyde followed by cyclization.
1,2-Benzisoxazole is a structural element in the antipsychotic risperidone and paliperidone, as well as the anticonvulsant zonisamide.
In a one-pot reaction, N-arylpyrazolo[3,4-d]pyrimidines are obtained in good yields from simple 4,6-dichloropyrimidine-5-carboxaldehyde; these can be used as purine analogs for a wide range of diagnostic and therapeutic applications.
Further reactions
The chemiluminescence of the system luminol/cobalt(II) chloride is dramatically enhanced by the addition of hydroxylamine-O-sulfonic acid.
References
Hydroxylamines
Sulfonic acids
Reagents for organic chemistry | Hydroxylamine-O-sulfonic acid | [
"Chemistry"
] | 1,380 | [
"Sulfonic acids",
"Hydroxylamines",
"Functional groups",
"Reducing agents",
"Reagents for organic chemistry"
] |
51,844,747 | https://en.wikipedia.org/wiki/History%20of%20Microsoft%20SQL%20Server | The history of Microsoft SQL Server begins with the first Microsoft SQL Server database product – SQL Server v1.0, a 16-bit relational database for the OS/2 operating system, released in 1989.
Versions
Detailed history
Genesis
On June 12, 1988, Microsoft joined Ashton-Tate and Sybase to create a variant of Sybase SQL Server for IBM OS/2 (then developed jointly with Microsoft), which was released the following year. This was the first version of Microsoft SQL Server, and served as Microsoft's entry to the enterprise-level database market, competing against Oracle, IBM, Informix, Ingres and later, Sybase. SQL Server 4.2 was shipped in 1992, bundled with OS/2 version 1.3, followed by version 4.21 for Windows NT, released alongside Windows NT 3.1. SQL Server 6.0 was the first version designed for NT, and did not include any direction from Sybase.
About the time Windows NT was released in July 1993, Sybase and Microsoft parted ways and each pursued its own design and marketing schemes. Microsoft negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. (In 1996 Sybase changed the name of its product to Adaptive Server Enterprise to avoid confusion with Microsoft SQL Server.) Until 1994, Microsoft's SQL Server carried three Sybase copyright notices as an indication of its origin.
SQL Server 7.0
SQL Server 7.0 was a major rewrite (in C++) of the older Sybase engine, which was coded in C. Data pages were enlarged from 2 KB to 8 KB, and extents thereby grew from 16 KB to 64 KB. User Mode Scheduling (UMS) was introduced to handle SQL Server threads better than Windows preemptive multi-threading could, and it added support for fibers (lightweight threads introduced in NT 4.0, used to avoid context switching). SQL Server 7.0 also introduced a multi-dimensional database product called SQL OLAP Services (which became Analysis Services in SQL Server 2000).
SQL Server 7.0 would be the last version to run on the DEC Alpha platform. Although there were pre-release versions of SQL 2000 (as well as Windows 2000) compiled for Alpha, these were canceled and were never commercially released. Mainstream support ended on December 31, 2005, and extended support ended on January 11, 2011.
SQL Server 2000
SQL Server 2000 included more modifications and extensions to the Sybase code base, adding support for the IA-64 architecture (now out of "mainstream" support). By SQL Server 2005 the legacy Sybase code had been completely rewritten.
Since the release of SQL Server 2000, advances have been made in performance, the client IDE tools, and several complementary systems that are packaged with SQL Server 2005. These include:
an extract-transform-load (ETL) tool (initially called Data Transformation Services or DTS, and later called SQL Server Integration Services, or SSIS)
SQL Server Reporting Services (SSRS), or "Reporting Server"
an OLAP and data mining server (Analysis Services)
several messaging technologies, specifically Service Broker and Notification Services
SQL Server 2000 also introduced many T-SQL language enhancements, such as table variables, user-defined functions, indexed views, INSTEAD OF triggers, cascading referential constraints and some basic XML support.
With the release of Service Pack 3, Microsoft also released the first 64-bit version of the SQL Server for the Itanium IA-64 platform (not to be confused with the x86-64 platform). Only the SQL Server relational engine and SQL Agent were ported to Itanium at this time. Client tools, such as SQL Server Management Studio, were still 32-bit x86 programs. The first release of SQL IA-64 was version 8.00.760, with a build date of February 6, 2003.
Mainstream support ended on April 8, 2008, and extended support ended on April 9, 2013.
SQL Server 2005
SQL Server 2005 (formerly codenamed "Yukon") was released in November 2005, introducing native support for x64 systems and updates to Reporting Services, Analysis Services & Integration Services. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML data is queried using XQuery; SQL Server 2005 added some extensions to the T-SQL language to allow embedding XQuery queries in T-SQL. It also defines a new extension to XQuery, called XML DML, that allows query-based modifications to XML data. SQL Server 2005 also allows a database server to be exposed over web services using Tabular Data Stream (TDS) packets encapsulated within SOAP requests. When the data is accessed over web services, results are returned as XML.
Common Language Runtime (CLR) integration was introduced with this version, enabling one to write SQL code as Managed Code by the CLR. For relational data, T-SQL has been augmented with error handling features (try/catch) and support for recursive queries with CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new indexing algorithms, syntax and better error recovery systems. Data pages are checksummed for better error resiliency, and optimistic concurrency support has been added for better performance. Permissions and access control have been made more granular and the query processor handles concurrent execution of queries in a more efficient way. Partitions on tables and indexes are supported natively, so scaling out a database onto a cluster is easier. SQL CLR was introduced with SQL Server 2005 to let it integrate with the .NET Framework.
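The recursive queries with CTEs mentioned above can be illustrated with a short, runnable sketch. T-SQL itself is not executed here; SQLite's compatible WITH RECURSIVE syntax is used as a stand-in via Python's sqlite3 module, and the employees table is invented for the example.

```python
import sqlite3

# A recursive CTE walks a self-referencing table to enumerate a whole
# reporting chain -- the kind of hierarchical query that required
# procedural loops or temp tables before CTE support.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
INSERT INTO employees VALUES
  (1, 'Ada',  NULL),   -- top of the hierarchy
  (2, 'Ben',  1),
  (3, 'Cara', 2),
  (4, 'Dev',  2);
""")

rows = conn.execute("""
WITH RECURSIVE reports(id, name, depth) AS (
    SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
    UNION ALL
    SELECT e.id, e.name, r.depth + 1
    FROM employees e JOIN reports r ON e.manager_id = r.id
)
SELECT name, depth FROM reports ORDER BY depth, name
""").fetchall()

print(rows)  # [('Ada', 0), ('Ben', 1), ('Cara', 2), ('Dev', 2)]
```

The anchor member seeds the recursion with the root rows and the recursive member repeatedly joins back against the accumulating result, terminating when no new rows match.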
SQL Server 2005 introduced:
Multi-Version Concurrency Control (MVCC); user-facing features include a new transaction isolation level called SNAPSHOT and a variation of the READ COMMITTED isolation level based on statement-level data snapshots.
Multiple Active Results Sets (MARS), a method of allowing usage of database connections for multiple purposes.
DMVs (Dynamic Management Views), specialized views and functions that return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
Service Pack 1 (SP1) was released on April 18, 2006, adding Database Mirroring, a high-availability option that provides redundancy and failover capabilities at the database level. (Database Mirroring was included in the RTM release of SQL Server 2005, but it was not enabled by default, being supported for evaluation purposes only.) Failover can be manual or automatic; automatic failover requires a witness partner and an operating mode of synchronous (also known as high-safety or full safety). Service Pack 2 was released on February 19, 2007, Service Pack 3 on December 15, 2008, and Service Pack 4 on December 13, 2010.
Mainstream support for SQL Server 2005 ended on April 12, 2011, and Extended support for SQL Server 2005 ended on April 12, 2016.
SQL Server 2008
SQL Server 2008 (formerly codenamed "Katmai") was released on August 6, 2008, announced to the SQL Server Special Interest Group at the ESRI 2008 User's Conference on August 6, 2008, by Ed Katibah (Spatial Program Manager at Microsoft), and aims to make data management self-tuning, self-organizing, and self-maintaining with the development of SQL Server Always On technologies, to provide near-zero downtime. SQL Server 2008 also includes support for structured and semi-structured data, including digital media formats for pictures, audio, video and other multimedia data. In earlier versions, such multimedia data could be stored as BLOBs (binary large objects), but these are generic bitstreams; intrinsic awareness of multimedia data allows specialized functions to be performed on it. According to Paul Flessner, senior Vice President of Server Applications at Microsoft, SQL Server 2008 can be a data storage backend for different varieties of data: XML, email, time/calendar, file, document, spatial, etc., as well as perform search, query, analysis, sharing, and synchronization across all data types.
Other new data types include specialized date and time types and a spatial data type for location-dependent data. Better support for unstructured and semi-structured data is provided using the new FILESTREAM data type, which can be used to reference any file stored on the file system. Structured data and metadata about the file are stored in the SQL Server database, whereas the unstructured component is stored in the file system. Such files can be accessed both via Win32 file-handling APIs and via SQL Server using T-SQL; the latter accesses the file data as a BLOB. Backing up and restoring the database backs up or restores the referenced files as well. SQL Server 2008 also natively supports hierarchical data, and includes T-SQL constructs to deal with it directly, without using recursive queries.
The full-text search functionality has been integrated with the database engine. According to a Microsoft technical article, this simplifies management and improves performance.
Spatial data is stored using two data types. A "Flat Earth" (GEOMETRY, or planar) data type represents geospatial data which has been projected from its native spherical coordinate system onto a plane. A "Round Earth" data type (GEOGRAPHY) uses an ellipsoidal model in which the Earth is defined as a single continuous entity that does not suffer from singularities such as the international date line, the poles, or map-projection zone "edges". Approximately 70 methods are available to represent spatial operations for the Open Geospatial Consortium Simple Features for SQL, Version 1.1.
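The distinction between the planar GEOMETRY and round-earth GEOGRAPHY models can be made concrete with a small numeric sketch. The code below is only an illustration of the underlying geometry (a simple sphere rather than SQL Server's ellipsoidal model), not the product's implementation:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean spherical radius; an approximation

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance on a sphere (the 'Round Earth' idea)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def planar_km(lat1, lon1, lat2, lon2):
    """Naive 'Flat Earth': degrees treated as Cartesian axes, scaled at the equator."""
    km_per_degree = math.pi * EARTH_RADIUS_KM / 180
    return math.hypot(lat2 - lat1, lon2 - lon1) * km_per_degree

# Two points at 60 deg N, 10 degrees of longitude apart: the planar figure
# ignores the convergence of meridians and overstates the distance.
flat = planar_km(60, 0, 60, 10)
round_earth = haversine_km(60, 0, 60, 10)
print(f"planar {flat:.0f} km vs great-circle {round_earth:.0f} km")
```

At 60° N, treating degrees of longitude as flat Cartesian units roughly doubles the computed distance, which is why planar (projected) spatial data is only valid within a limited zone.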
SQL Server includes better compression features, which also help to improve scalability. It enhanced the indexing algorithms and introduced the notion of filtered indexes. It also includes a Resource Governor that allows reserving resources for certain users or workflows, as well as capabilities for transparent data encryption (TDE) and compression of backups. SQL Server 2008 supports the ADO.NET Entity Framework, and the reporting tools, replication, and data definition are built around the Entity Data Model. SQL Server Reporting Services gained charting capabilities from the integration of the data visualization products of Dundas Data Visualization, Inc., which was acquired by Microsoft. On the management side, SQL Server 2008 includes the Declarative Management Framework, which allows policies and constraints to be configured declaratively on an entire database or on certain tables. The version of SQL Server Management Studio included with SQL Server 2008 supports IntelliSense for SQL queries against a SQL Server 2008 Database Engine. SQL Server 2008 also makes databases available via Windows PowerShell providers, with management functionality exposed as cmdlets, so that the server and all running instances can be managed from Windows PowerShell.
The final SQL Server 2008 service pack (10.00.6000, Service Pack 4) was released on September 30, 2014.
SQL Server 2008 had mainstream support until July 8, 2014, and extended support until July 9, 2019. Volume licensed Standard, Web, Enterprise, Workgroup and Datacenter editions of SQL Server 2008 are eligible for the Extended Security Updates program. The first term of yearly installment ended on July 14, 2020, the second term ended on July 13, 2021, and the third term ended on July 12, 2022. Those volume licensed editions rehosted on Microsoft Azure automatically received ESUs until July 11, 2023.
SQL Server 2008 R2
SQL Server 2008 R2 (10.50.1600.1, formerly codenamed "Kilimanjaro") was announced at TechEd 2009, and was released to manufacturing on April 21, 2010. SQL Server 2008 R2 introduced several new features and services:
a master data management system branded as Master Data Services, a central management of master data entities and hierarchies;
a number of services and utilities, collectively known as Application and Multi-Server Management (AMSM), to manage multiple SQL Server database instances; these utilities included a centralized console named the Utility Control Point (UCP);
PowerPivot for Excel and SharePoint;
StreamInsight;
Report Builder 3.0 and Reporting Services Add-in for SharePoint;
a Data-tier function in Visual Studio that enables packaging of tiered databases as part of an application.
Service Pack 1 (10.50.2500) was released on July 11, 2011, Service Pack 2 (10.50.4000) was released on July 26, 2012 and the final service pack, Service Pack 3 (10.50.6000), was released on September 26, 2014.
SQL Server 2008 R2 is the last version of SQL Server to run on Itanium (IA-64) systems, with extended support for SQL Server on Itanium continuing until 2018.
SQL Server 2008 R2 had mainstream support until July 8, 2014, and extended support until July 9, 2019. Volume licensed Standard, Enterprise, Datacenter and Embedded editions of SQL Server 2008 R2 are eligible for the Extended Security Updates program. The first term of yearly installment ended on July 14, 2020, the second term ended on July 13, 2021, and the third term ended on July 12, 2022. Volume-licensed editions rehosted on Microsoft Azure automatically received ESUs until July 11, 2023.
SQL Server 2012
At the 2011 Professional Association for SQL Server (PASS) summit on October 11, Microsoft announced another major version of SQL Server, SQL Server 2012 (codenamed "Denali"). The final version was released to manufacturing on March 6, 2012. SQL Server 2012 Service Pack 1 was released to manufacturing on November 7, 2012, Service Pack 2 was released to manufacturing on June 10, 2014, Service Pack 3 was released to manufacturing on December 1, 2015, and Service Pack 4 was released to manufacturing on October 5, 2017.
It was announced to be the last version to natively support OLE DB, with ODBC preferred instead for native connectivity.
SQL Server 2012's new features and enhancements include Always On SQL Server Failover Cluster Instances and Availability Groups which provides a set of options to improve database availability, Contained Databases which simplify the moving of databases between instances, new and modified Dynamic Management Views and Functions, programmability enhancements including new spatial features, metadata discovery, sequence objects and the THROW statement, performance enhancements such as ColumnStore Indexes as well as improvements to OnLine and partition level operations and security enhancements including provisioning during setup, new permissions, improved role management, and default schema assignment for groups.
SQL Server 2012 had mainstream support until July 11, 2017, and extended support until July 12, 2022. All volume licensed editions of SQL Server 2012 are eligible for the Extended Security Updates program. The first term of yearly installment ended on July 11, 2023, the second term ended in 2024, and the third and final term will end on July 8, 2025. Those volume licensed editions rehosted on Microsoft Azure automatically receive ESUs until July 8, 2025.
SQL Server 2014
SQL Server 2014 was released to manufacturing on March 18, 2014, and released to the general public on April 1, 2014, and the build number was 12.0.2000.8 at release. Until November 2013 there were two CTP revisions, CTP1 and CTP2. SQL Server 2014 provides a new in-memory capability for tables that can fit entirely in memory (also known as Hekaton). Whilst small tables may be entirely resident in memory in all versions of SQL Server, they also may reside on disk, so work is involved in reserving RAM, writing evicted pages to disk, loading new pages from disk, locking the pages in RAM while they are being operated on, and many other tasks. By treating a table as guaranteed to be entirely resident in memory much of the 'plumbing' of disk-based databases can be avoided.
For disk-based SQL Server applications, it also provides the SSD Buffer Pool Extension, which can improve performance by acting as a cache between RAM and spinning media.
SQL Server 2014 also enhances the Always On (HADR) solution by increasing the readable secondaries count and sustaining read operations upon secondary-primary disconnections, and it provides new hybrid disaster recovery and backup solutions with Microsoft Azure, enabling customers to use existing skills with the on-premises version of SQL Server to take advantage of Microsoft's global datacenters. In addition, it takes advantage of new Windows Server 2012 and Windows Server 2012 R2 capabilities for database application scalability in a physical or virtual environment.
Microsoft provides three versions of SQL Server 2014 for downloading: the one that runs on Microsoft Azure, the SQL Server 2014 CAB, and SQL Server 2014 ISO.
SQL Server 2014 SP1, consisting primarily of bugfixes, was released on May 15, 2015.
SQL Server 2014 is the last version available for x86/IA-32 systems and the final version supported on Windows Server 2008 R2.
SQL Server 2014 had mainstream support until July 9, 2019, and extended support until July 9, 2024. All volume licensed editions of SQL Server 2014 are eligible for the Extended Security Updates program. The first term of yearly installment will end on July 8, 2025, the second term will end on July 14, 2026, and the third and final term will end on July 12, 2027. Those volume licensed editions rehosted on Microsoft Azure automatically receive ESUs until July 12, 2027.
SQL Server 2016
The official General Availability (GA) release date for SQL Server 2016 (13.0.1601.5) was June 1, 2016, with SQL Server 2016 being the first version to only support x64 processors and the last to have the Service Packs updating mechanism. Service Pack 1 was released on November 16, 2016, Service Pack 2 (13.2.5026) was released on April 24, 2018 and Service Pack 3 was released on September 15, 2021.
SQL Server 2017
Microsoft launched SQL Server 2017 on October 2, 2017, along with support for Linux. This is the final release supporting Windows Server 2012 and 2012 R2.
SQL Server 2019
Microsoft launched SQL Server 2019 (15.x) on November 4, 2019. SQL Server 2019 introduces Big Data Clusters for SQL Server. It also provides additional capability and improvements for the SQL Server database engine, SQL Server Analysis Services, SQL Server Machine Learning Services, SQL Server on Linux, and SQL Server Master Data Services.
SQL Server 2022
Microsoft launched SQL Server 2022 on November 16, 2022. However, it only became available to customers purchasing via OEM and Services Provider License Agreement (SPLA) channels starting in January 2023.
Processor support
References
Client-server database management systems
Database management systems
History of Microsoft
Microsoft SQL Server
Microsoft database software
Relational database management systems
SQL Server | History of Microsoft SQL Server | [
"Technology"
] | 3,907 | [
"History of software",
"History of computing"
] |
62,227,380 | https://en.wikipedia.org/wiki/Nanoconcrete | Nanoconcrete (also spelled nano concrete or nano-concrete) is a form of concrete that contains Portland cement particles no greater than 100 μm and silica particles no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It can also be produced by high-energy mixing (HEM) of conventional cement, sand, and water, a bottom-up approach to nanotechnology.
Role of nano particles
The incorporation of ultra-fine particles into a Portland-cement paste within a concrete mixture, in accordance with the top-down approach of nanotechnology, alters the concrete's material properties and performance by reducing the void space between the cement and aggregate in the cured concrete. This improves strength, durability, shrinkage behavior, and bonding to steel reinforcing bars.
Manufacture
To ensure the mixing is thorough enough to create nanoconcrete, the mixer must apply a total mixing power of 30–600 watts per kilogram of the mix. Mixing must continue long enough to expend a net specific energy of at least 5,000 joules per kilogram of the mix, and this may be increased to 30–80 kJ per kilogram. A superplasticizer is then added to the activated mixture, which can later be combined with aggregates in a conventional concrete mixer. In the HEM process, the intense mixing of cement and water (with or without sand) under quasi-laminar flow conditions (Reynolds number 20–800) causes the mixture to dissipate and absorb energy and increases shear stresses on the surface of the cement particles. As a result, the temperature of the mixture rises by 20–25 °C or more. This intense mixing deepens the hydration process inside the cement particles, and the formation of nano-sized colloidal calcium silicate hydrate (C-S-H) increases several-fold compared with conventional mixing. Thus, ordinary concrete is transformed into nanoconcrete.
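The stated power and energy figures imply a minimum mixing time, since time equals specific energy divided by specific power (the mass of mix cancels out). A back-of-the-envelope sketch; the function name is illustrative, not from any standard:

```python
def mixing_time_seconds(specific_energy_j_per_kg, mixing_power_w_per_kg):
    """Time needed for the mixer to deliver the target specific energy.

    Both figures are per kilogram of mix, so the mass cancels:
    time [s] = energy [J/kg] / power [W/kg].
    """
    return specific_energy_j_per_kg / mixing_power_w_per_kg

# At the low end of the stated power range (30 W/kg), reaching the
# minimum 5,000 J/kg takes roughly three minutes; at 600 W/kg, under 9 s.
print(round(mixing_time_seconds(5000, 30), 1))   # 166.7
print(round(mixing_time_seconds(5000, 600), 1))  # 8.3
```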
The initial natural process of cement hydration, with the formation of colloidal globules about 5 nm in diameter, spreads into the entire volume of the cement–water matrix as energy is expended upon the mix.
The liquid activated mixture can be used by itself for casting small architectural details and decorative items, or expanded with gas-forming admixture for making Aerated HEM Nanoconcrete as a lightweight concrete. HEM Nanoconcrete hardens in low and subzero temperature conditions because the liquid phase inside the nano-pores of C-S-H gel doesn't freeze at temperatures from −8 to −42 degrees Celsius. The increased volume of gel reduces capillarity in solid and porous materials.
References
Concrete | Nanoconcrete | [
"Engineering"
] | 565 | [
"Structural engineering",
"Concrete"
] |
62,229,444 | https://en.wikipedia.org/wiki/Organic%20Reactions | Organic Reactions is a peer-reviewed book series that was established in 1942. It publishes detailed descriptions of useful organic reactions. Each article (called a chapter) is an invited review of the primary source material for the given reaction, written under tight editorial control, making it a secondary- to tertiary-level source. Each chapter explores the practical and theoretical aspects of the reaction, including its selectivity and reproducibility. The longest chapter runs to 1,303 pages. While individual articles are not open access, the journal's wiki maintains a repository of summaries of reactions. The series is abstracted and indexed in Scopus.
History
Prior to World War II, the center of organic chemistry research and industrial production was Germany. Students interested in pursuing a career in organic chemistry needed to learn German to read articles and textbooks, and often went to graduate school in Germany. When the war broke out, an effort to jumpstart a native US organic chemical industry and academic network was initiated. As part of this effort, the journal was launched. The first volume was published in 1942, with Roger Adams as editor-in-chief. In the early years a volume would come out every two years or so, but the pace of publishing has accelerated, with volume 100 issued in 2019.
References
External links
Organic chemistry journals
Academic journals established in 1942
Wiley (publisher) academic journals
English-language journals
1942 establishments in the United States | Organic Reactions | [
"Chemistry"
] | 289 | [
"Organic chemistry journals"
] |
62,229,810 | https://en.wikipedia.org/wiki/GoMo | GOMO or GoMo is the name of two unrelated, online-only mobile telephone flanker brands. GOMO in Singapore, Australia, Thailand and the Philippines are owned by Singtel of Singapore, GoMo Ireland and Switzerland are owned by Iliad SA of France.
GOMO Singapore was the first to launch, on 25 March 2019, using the Singtel mobile network. GOMO Thailand is operated by AIS, a partly-owned subsidiary of Singtel. GOMO Philippines is operated by Globe Telecom, also a partly-owned subsidiary of Singtel. GoMo Ireland was launched on 15 October 2019, using the Eir mobile network, and had over 250,000 mobile customers as of October 2020. GoMo Switzerland was subsequently launched on 16 November 2021, using the Salt mobile network.
GOMO Australia (stylized as gomo) was operated on the Optus network, a wholly-owned subsidiary of Singtel. On 1 June 2023, they stopped offering GOMO products to new customers and it was later announced that GOMO Australia will be closing all of its services on 15 December 2023.
Products and services
Ireland
Since its Irish launch, GoMo has offered one product, which is a sim-only mobile contract. The package is post-paid and includes unlimited calls to Irish mobiles and landlines, unlimited texts to Irish mobiles, 120GB 4G and 5G data (with unlimited lower speed data thereafter) and 10GB EU data.
Switzerland
GoMo Switzerland currently offers a sim-only mobile contract offering unlimited local calls, unlimited local SMSes and unlimited domestic 4G data for 9.95 CHF per month, for the first 50,000 customers.
Customer service
GoMo has no customer service phone lines. All support is via social media and online channels.
References
External links
GoMo Ireland
GoMo Switzerland
Mobile telecommunications networks
Mobile virtual network operators
Irish companies established in 2019
Telecommunications companies of Switzerland | GoMo | [
"Technology"
] | 386 | [
"Mobile telecommunications",
"Mobile telecommunications networks"
] |
62,232,852 | https://en.wikipedia.org/wiki/Jennifer%20Switkes | Jennifer Switkes is a Canadian-American applied mathematician interested in mathematical modeling and operations research, and also known for her volunteer work teaching mathematics in prisons. She is an associate professor of mathematics at California State Polytechnic University, Pomona (Cal Poly Pomona), where she is associate chair of the mathematics department.
Early life and education
Switkes was born in Canada but moved as a child to Northern California.
She is a 1994 graduate of Harvey Mudd College,
where she completed a double major in mathematics and physics as well as earning credits towards a teaching credential. However, her experience as a student teacher at a middle school convinced her that she was not fully prepared to continue as a teacher, and she returned to graduate school instead.
Her doctoral research at Claremont Graduate University concerned mathematical biology, and more specifically mosaic coevolution; her 2000 dissertation, The Geographic Mosaic Theory in Relation to Coevolutionary Interactions, was jointly supervised by Michael E. Moody and John Angus.
Career and volunteer work
Switkes was an instructor at Citrus College and the University of Redlands before becoming a mathematics professor at Cal Poly Pomona in 2001. There, she is known for her project-based education of students, centered around real-world applications of mathematical modeling.
Switkes volunteers as an associate pastor at the Orange Coast Free Methodist Church in Costa Mesa, California, and as a mathematics teacher with the Prison Education Project. She has taught mathematics to prison inmates both at the California Rehabilitation Center in Norco, California and in Uganda, where she has traveled repeatedly on church missions, on a 2013 sabbatical visit to Makerere University and on a shorter 2015 visit to teach at the Luzira Maximum Security Prison. As inspiration for her volunteer work she cites a book by Bob Moses, Radical Equations—Civil Rights from Mississippi to the Algebra Project, on the importance of mathematical literacy in escaping underprivileged circumstances.
Recognition
Switkes was one of the winners of the 2015 Inspiring Women in STEM Award of Insight Into Diversity Magazine.
In 2019, Switkes won one of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics, the highest teaching award of the Mathematical Association of America, "for bringing her educational core values of excellence, honor, integrity, love, and purpose to all students, and specifically to traditionally underserved students". The award recognized both her prison volunteer work and her mentorship of undergraduate and master's students at Cal Poly Pomona. She was also honored as an outstanding alumna of Harvey Mudd College in 2019.
References
Year of birth missing (living people)
Living people
21st-century American mathematicians
Canadian mathematicians
Canadian women mathematicians
Applied mathematicians
Canadian operations researchers
Harvey Mudd College alumni
Claremont Graduate University alumni
California State Polytechnic University, Pomona faculty
American operations researchers
21st-century American women mathematicians | Jennifer Switkes | [
"Mathematics"
] | 575 | [
"Applied mathematics",
"Applied mathematicians"
] |
62,234,211 | https://en.wikipedia.org/wiki/Thomas%20H.%20Brylawski | Thomas Henry Brylawski (June 17, 1944 – July 18, 2007) was an American mathematician and professor at the University of North Carolina, Chapel Hill. He worked primarily in matroid theory.
Education and career
Brylawski was born in 1944, and grew up in Washington, D.C. He attended the Massachusetts Institute of Technology for his undergraduate degree, finishing with a Bachelor of Science in 1966. He then went on to Dartmouth College for his graduate work. He completed his PhD under the direction of Gian-Carlo Rota and Robert Norman in 1970. After his PhD, he moved to the University of North Carolina, Chapel Hill, where he spent the rest of his career.
Brylawski was an editor for the Proceedings of the American Mathematical Society from 1977 until 1989. Brylawski wrote 40 mathematical publications, and advised 6 PhD students.
He died in 2007 of esophageal cancer at the Duke Hospice inpatient facility in Hillsborough, North Carolina.
Work
Brylawski's early work used ideas and tools from category theory to understand the Tutte polynomial of a matroid. Indeed, this idea already appeared in his thesis, which made constructions in matroid theory similar to the Grothendieck group. He developed similar ideas in two papers in the Transactions of the American Mathematical Society. Another influential early paper of Brylawski's, published in the same journal, described the influence of a modular element in the lattice of flats on the characteristic polynomial of a matroid.
Brylawski also contributed expository chapters to several matroid theory books that appeared in the Encyclopedia of Mathematics and its Applications series published by Cambridge University Press. The Tutte polynomial chapter (written jointly with James Oxley) has around 500 citations.
In addition to his work in matroid theory, Brylawski also had an interest in mathematics in art, particularly in the role of symmetry in art. He gave lectures on mathematics in art on two occasions at the National Gallery of Art in Washington, D.C.
Awards and honors
A memorial conference was held in honor of Brylawski in October 2008 at the University of North Carolina, Chapel Hill, and a special issue of the European Journal of Combinatorics in 2011 was dedicated as a tribute to the work of Brylawski.
References
1944 births
2007 deaths
20th-century American mathematicians
21st-century American mathematicians
Massachusetts Institute of Technology School of Science alumni
Dartmouth College alumni
University of North Carolina at Chapel Hill faculty
Combinatorialists
American academic journal editors
Mathematicians from Washington, D.C.
Deaths from cancer in North Carolina | Thomas H. Brylawski | [
"Mathematics"
] | 519 | [
"Combinatorialists",
"Combinatorics"
] |
62,235,725 | https://en.wikipedia.org/wiki/Lactifluus%20acrissimus | Lactifluus acrissimus is a species of milk-cap fungus in the family Russulaceae. Found in Benin, the species was described in 2003. It is found in savanna woodlands.
References
acrissimus
Fungi described in 2003
Fungi of Africa
Fungus species | Lactifluus acrissimus | [
"Biology"
] | 58 | [
"Fungi",
"Fungus species"
] |
62,236,786 | https://en.wikipedia.org/wiki/HuskySat-1 | HuskySat-1 is an artificial satellite designed at the University of Washington. It was launched by Cygnus NG-12 from Mid-Atlantic Regional Spaceport Launch Pad 0 on Wallops Island, Virginia, to low Earth orbit on November 2, 2019. It is a CubeSat intended to demonstrate onboard plasma propulsion and high-gain telemetry for low Earth orbit, as a precursor to an attempt at a larger CubeSat designed for orbital insertion at the Moon.
The satellite was designed by Husky Satellite Lab, a registered student group, in Johnson Hall, and was controlled from there using three antennae installed on the roof.
A pulsed plasma thruster (PPT) provides propulsion. It is the first PPT to use sulfur as a fuel.
Students at Raisbeck Aviation High School designed an onboard camera.
The satellite will test an experimental 24 GHz data transmitter, after which it will become an amateur radio satellite operated by AMSAT. The high data rate will enable much more data to be transferred during the 9- to 15-minute time windows the satellite is visible from the control station.
HuskySat is the first satellite designed by students in Washington state.
The satellite decayed from orbit on 12 April 2023.
References
External links
Current location of HuskySat-1 at AMSAT
Student satellites
CubeSats
Spacecraft launched in 2019 | HuskySat-1 | [
"Astronomy"
] | 268 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
62,236,883 | https://en.wikipedia.org/wiki/Lactarius%20acutus | Lactarius acutus is a member of the large milk-cap genus Lactarius in the order Russulales. Found in Guinea, the species was described in 1955 by French botanist Roger Heim.
See also
List of Lactarius species
References
External links
acutus
Fungi described in 1955
Fungi of Africa
Fungus species | Lactarius acutus | [
"Biology"
] | 66 | [
"Fungi",
"Fungus species"
] |
62,237,589 | https://en.wikipedia.org/wiki/C7H9N2O | {{DISPLAYTITLE:C7H9N2O}}
The molecular formula C7H9N2O (molar mass: 137.16 g/mol) may refer to:
1-Methylnicotinamide, a prototypic organic cation
Pralidoxime, an oxime
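The quoted molar mass can be reproduced by summing standard atomic weights over the formula; a quick check (the atomic weights below are the usual IUPAC values, rounded to three decimals):

```python
# Standard atomic weights (g/mol), rounded to three decimals.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

composition = {"C": 7, "H": 9, "N": 2, "O": 1}  # C7H9N2O
molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())
print(round(molar_mass, 2))  # 137.16
```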
Molecular formulas | C7H9N2O | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
62,238,011 | https://en.wikipedia.org/wiki/Jack%20Cable%20%28software%20developer%29 | Jack Cable (born February 18, 2000) is an American computer security researcher and software developer who currently serves as a Senior Technical Advisor at the Cybersecurity and Infrastructure Security Agency. He is best known for his participation in bug bounty programs, including placing first in the U.S. Department of Defense's Hack the Air Force challenge. Cable began working for the Pentagon's Defense Digital Service in the summer of 2018.
After discovering and reporting severe vulnerabilities in several states' electoral infrastructure, Cable joined the U.S. Cybersecurity and Infrastructure Security Agency (CISA) in the summer of 2020. There, Cable served as a technical advisor to help protect state election systems against foreign hacking attempts. Cable rejoined CISA in 2023 to help lead the agency's Secure by Design initiative.
For his work, Cable was named one of Time Magazine's 25 Most Influential Teens of 2018. Cable has spoken on vulnerability disclosure and election security at conferences including the DEF CON Voting Village, Black Hat Briefings, and the Wall Street Journal's Future of Everything Festival. In 2019, Cable helped launch Stanford's bug bounty program, one of the first in higher education.
Biography
Cable grew up in the Chicago suburbs and attended New Trier High School. He began programming in middle school and discovered bug bounty programs at the age of 15 after finding a vulnerability in a financial website. Cable has founded a cybersecurity consulting firm, Lightning Security. Cable studied computer science at Stanford, where he received a B.S. in computer science.
Cable joined cybersecurity consulting firm Krebs Stamos Group in 2021 as a Security Architect.
Ransomware research
In 2021, Cable identified a workaround in a ransomware payment system to save victims $27,000, for which he was acknowledged by U.S. Secretary of Homeland Security Alejandro Mayorkas.
Cable also launched Ransomwhere, a crowdsourced ransomware payment tracker that aims to address the ransomware visibility problem.
Publications and articles
"Every Computer Science Degree Should Require a Course in Cybersecurity". Harvard Business Review. Published August 27, 2019.
"Why the U.S. government needs you to hack it". Fast Company. Published December 17, 2019.
"Preventing Ransomware Attacks at Scale". Harvard Business Review. Published April 23, 2024.
References
Hackers
2000 births
Living people
Computer security | Jack Cable (software developer) | [
"Technology"
] | 489 | [
"Lists of people in STEM fields",
"Hackers"
] |
62,240,093 | https://en.wikipedia.org/wiki/%28Pentamethylcyclopentadienyl%29titanium%20trichloride | (Pentamethylcyclopentadienyl)titanium trichloride is an organotitanium compound with the formula Cp*TiCl3 (Cp* = C5(CH3)5). It is an orange solid. The compound adopts a piano-stool geometry. An early synthesis involved the combination of lithium pentamethylcyclopentadienide and titanium tetrachloride.
The compound is an intermediate in the synthesis of decamethyltitanocene dichloride. In the presence of organoaluminium compounds and other additives, it catalyzes the polymerization of alkenes.
See also
(Cyclopentadienyl)titanium trichloride
References
Chloro complexes
Titanium compounds
Half sandwich compounds | (Pentamethylcyclopentadienyl)titanium trichloride | [
"Chemistry"
] | 159 | [
"Organometallic chemistry",
"Half sandwich compounds"
] |
53,357,867 | https://en.wikipedia.org/wiki/Fan%20triangulation | In computational geometry, a fan triangulation is a simple way to triangulate a polygon by choosing one vertex and drawing diagonals to every other vertex of the polygon. Not every polygon can be triangulated this way, so this method is usually only used for convex polygons.
Properties
Aside from the properties of all triangulations, fan triangulations have the following properties:
All convex polygons, but not all polygons, can be fan triangulated.
Polygons with only one concave vertex can always be fan triangulated, as long as the diagonals are drawn from the concave vertex.
Whether a polygon can be fan triangulated can be determined by solving the art gallery problem, which establishes whether there is at least one vertex that is visible from every point in the polygon.
The triangulation of a polygon with n vertices uses n − 3 diagonals, and generates n − 2 triangles.
Generating the list of triangles is trivial if an ordered list of vertices is available, and can be computed in linear time. As such, it is unnecessary to explicitly store the list of triangles, and therefore, many graphical libraries implement primitives to represent polygons based on this triangulation.
Although this triangulation is well suited to certain problems, such as rasterisation or collision detection, it may be unsuitable for other tasks because the origin vertex accumulates a high number of neighbors and the internal angles of the triangulation are unevenly distributed.
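The linear-time generation described above can be sketched in a few lines (assuming an ordered vertex list and a convex polygon; the function returns index triples rather than coordinates):

```python
def fan_triangulation(vertices):
    """Triangulate a convex polygon by fanning from its first vertex.

    `vertices` is an ordered list of points. Returns a list of index
    triples, one per triangle: a polygon with n vertices yields n - 2
    triangles, computed in a single linear pass.
    """
    n = len(vertices)
    if n < 3:
        raise ValueError("need at least 3 vertices")
    # Every triangle shares vertex 0; consecutive triangles share an edge.
    return [(0, i, i + 1) for i in range(1, n - 1)]

# A convex quadrilateral splits into 2 triangles:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(fan_triangulation(square))  # [(0, 1, 2), (0, 2, 3)]
```

Because only the ordered vertex list is needed, many graphics APIs expose this implicitly (for example, as a "triangle fan" primitive) instead of storing the triangle list.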
See also
Triangle fan
References
Triangulation (geometry)
Geometric algorithms | Fan triangulation | [
"Mathematics"
] | 326 | [
"Triangulation (geometry)",
"Planes (geometry)",
"Planar graphs"
] |
53,358,397 | https://en.wikipedia.org/wiki/Natural%20Language%20Processing%20%28journal%29 | Natural Language Processing is a bimonthly peer-reviewed academic journal published by Cambridge University Press which covers research and software in natural language processing. It was established in 1995 as Natural Language Engineering, obtaining its current title in 2024. Other than original publications on theoretical and applied aspects of computational linguistics, the journal also contains Industry Watch and Emerging Trends columns tracking developments in the field. The editor-in-chief is Ruslan Mitkov (Lancaster University). From 2024 the journal is published completely open access. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.3.
References
External links
Natural language processing
Computational linguistics journals
Cambridge University Press academic journals
English-language journals
Academic journals established in 1995 | Natural Language Processing (journal) | [
"Technology"
] | 147 | [
"Natural language processing",
"Natural language and computing"
] |
53,358,766 | https://en.wikipedia.org/wiki/Intake%20tower | An intake tower or outlet tower is a vertical tubular structure with one or more openings used for capturing water from reservoirs and conveying it further to a hydroelectric or water-treatment plant.
Unlike spillways, intake towers are intended for the reservoir's regular operation, conveying clean, debris-free water for further use.
Construction
An intake tower is typically made from reinforced concrete, with foundations laid in the river or lake bed. It has at least one water-collecting opening at the top, and may have additional openings along its height, depending on the purpose: towers for hydroelectric plants typically have only one inlet, while those in water-processing plants have multiple draw-off inlets. Near the bottom of the tower, depending on the dam construction and plant location, a horizontal or slanted outlet conduit takes the water from the tower into the plant.
The most convenient location for an intake tower is in the proximity of the processing plant. In artificial lakes, those are typically placed near the dam. Lake bed near the dam also provides sufficient water depth to ensure substantial supply to the towers throughout the year, thus the exposed towers can be regularly seen along the dams.
When built near the shore, an intake tower is equipped with a service bridge, used to gain access for maintenance.
Draw-off tower
Draw-off towers are intake towers specialized for drinking water reservoirs. They have multiple openings at various depths, typically equipped with valves, allowing drawing water only from the level where it is of highest quality.
References
See also
Culvert
Fish screen
Gatehouse (waterworks)
Hydraulic engineering
Hydraulic structures
Dams | Intake tower | [
"Physics",
"Engineering",
"Environmental_science"
] | 322 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Civil engineering stubs",
"Hydraulic engineering"
] |
53,358,779 | https://en.wikipedia.org/wiki/Calclacite | Calclacite is a mineral and an organic compound. Its name references its components, which are calcium ions (Ca2+), chloride (Cl−) and acetate (CH3COO−).
Characteristics
Calclacite is an organic compound with chemical formula Ca(CH3COO)Cl·5H2O. It forms crystals in the monoclinic system, with silky hairlike efflorescences up to 4 cm long.
According to the Nickel–Strunz classification, calclacite is an organic acid salt and occurs with formicaite (calcium formate), acetamide, dashkovaite (magnesium acetate), paceite (calcium copper acetate) and hoganite (copper acetate). It is white and its hardness on the Mohs scale is 1.5.
Formation
Calclacite is formed on samples of rocks, fossils, and on fragments of ceramics, by the action of acetic acid produced from the oak of the storage cabinets.
References
Organic minerals
Calcium minerals
Acetates
Chlorides
Mixed anion compounds | Calclacite | [
"Physics",
"Chemistry"
] | 224 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Salts",
"Organic compounds",
"Organic minerals",
"Ions"
] |
53,359,504 | https://en.wikipedia.org/wiki/Transmin | Transmin is an Australian privately owned company specialising in bulk materials handling equipment and related products, headquartered in Malaga, 15 kilometres north of Perth, Western Australia. It provides engineered equipment, supplies and services to the mining-resources and bulk material handling industries, in Australia and overseas.
Transmin was founded in 1987 in Western Australia. In 2003, Transmin developed the first 'Low Profile Feeder' a hybrid form of belt feeders and apron feeders which have become the benchmark for hybrid feeders within the industry, having been successfully installed around the world. Recently, Transmin has been granted an Australian Patent for the Low Profile Feeder technology and other patents are pending for certain features of the technology which have been developed.
Transmin has created 4 brands of their own mining equipment. In 1995, they created their first rockbreaker boom system, now branded as Boomer. Transmin also developed their RockLogic controls and automation brand, in 2004, and in 2016 developed ConveyorPro, its own brand of Conveyor belts and components.
The company has offices/agents in four continents; Australia, India and North and South America.
Transmin's primary headquarters and Australian registered office is located at 33-37 Denninup Way, Malaga, Western Australia.
Foundation
Transmin was founded in July 1987 as a lime sulfur provider and bulk materials handling specialist by Ross Nunn, who remains the owner and chairman of the company today. Within a few years, Transmin became a sales agent for various mining and minerals-processing equipment in Western Australia and the Northern Territory.
In 1992, Transmin moved premises to Malaga, Western Australia, their current headquarters.
Activities
Transmin equipment is currently deployed in over 60 countries worldwide. The Transmin equipment range covers bulk materials-handling feeders and conveyors, bulk loading and unloading hoppers, hydraulic rockbreaker boom systems and attachments, isolation gates, reagent preparation and processing facilities, lime preparation facilities, ball-charging systems, silos and other related equipment.
Transmin is a distributor and sales agent for several companies including BERCO track chain and components, Rotaval Rotary Valves, A-WARD Container Tilters, and Scutti Storage Silos and Screw Feeders around Australia.
Projects
During its 30-year existence, Transmin has undertaken a variety of major projects with most of the world's major mining and minerals processing companies in the fields of Iron Ore, Gold, Copper, Nickel, Zinc, Lithium, Coal, Bauxite and more recently has been expanding into general industry such as Cement, Waste Recycling, Timber, Quarry and Chemical.
Transmin has a strong product development program, primarily focused on the Low Profile Feeder technology, where new applications and features are constantly being developed to meet increasing customer demand. Recent developments include Filter Press dischargers, ADT Ejector Truck dischargers and Reversing Feeders. New features have also been added, including an in-process weighing capability, 'ProEdge' hot-vulcanised belt edge seal, 'ProTough' Kevlar-impregnated belt and 'Kwiksert Pro', a one-sided belt fastening system.
References
Privately held companies of Australia
Australian companies established in 1987
Companies based in Perth, Western Australia
Mining equipment companies | Transmin | [
"Engineering"
] | 657 | [
"Mining equipment",
"Mining equipment companies"
] |
53,359,527 | https://en.wikipedia.org/wiki/Ceronapril | Ceronapril (INN, proposed trade names Ceranapril, Novopril) is a phosphonate ACE inhibitor that was never marketed.
References
ACE inhibitors
Carboxamides
Enantiopure drugs
Phosphonates
Prodrugs
Pyrrolidines | Ceronapril | [
"Chemistry"
] | 60 | [
"Stereochemistry",
"Enantiopure drugs",
"Prodrugs",
"Stereochemistry stubs",
"Chemicals in medicine"
] |
53,360,509 | https://en.wikipedia.org/wiki/Sharon%20Keillor%20Award%20for%20Women%20in%20Engineering%20Education | The Sharon Keillor Award for Women in Engineering Education "recognizes and honors outstanding women engineering educators." Recipients hold an earned doctoral degree in an engineering discipline or related field, have at least five years of teaching experience in an engineering school, and have "an outstanding record in teaching engineering students."
The award has been given annually since 2001.
Recipients
See also
List of engineering awards
References
Science awards honoring women
American Society for Engineering Education
Engineering awards
Education awards

Business metadata

Business metadata is data that adds business context to other data. It provides information authored by business people and/or used by business people. It is in contrast to technical metadata, which is data used in the storage and structure of the data in a database or system. Technical metadata includes the database table name and column name, data type, indexes referencing the data, ETL jobs involving the data, when the data was last updated, accessed, etc.
Concept
According to noted author and columnist Lowell Fryman, "The essence of business metadata is in reducing or eliminating the barriers of communication between human and human, as well as human and computer, so that the data conveyed from reports, information systems, or business intelligence applications can be crystal clear, can facilitate business operations, and can be leveraged for all business decision-making processes."
Dan Linstedt, creator of the data vault methodology, says business metadata "...provide[s] definition of the functionality, definition of the data, definition of the elements, and definition of how the data is used within business...business metadata includes business requirements, time-lines, business metrics, business process flows, and business terminology."
Business metadata is important because it can greatly facilitate the usefulness of the data to business people. A simple example of business metadata is a glossary entry. Hover functionality in an application or web form can display the glossary definition when the cursor is over a field or term.
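As a concrete illustration, a glossary lookup of this kind can be sketched in a few lines. The field names and definitions below are hypothetical, and a real implementation would typically pull definitions from a metadata repository rather than an in-memory dictionary:

```python
# Hypothetical business glossary: maps technical field names to
# business-authored definitions shown on hover in a report or form.
GLOSSARY = {
    "churn_rate": "Share of customers who cancel within a billing period.",
    "arpu": "Average revenue per user over the reporting month.",
}

def definition_for(field_name: str) -> str:
    """Return the business definition to display for a given field."""
    return GLOSSARY.get(field_name.lower(), "No business definition recorded.")

print(definition_for("ARPU"))  # → Average revenue per user over the reporting month.
```

The point of the sketch is the separation of concerns: the definitions are authored and maintained by business users, while the application merely surfaces them next to the technical field names.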
Other examples of business metadata include annotation ability within applications. For example, a business user may be viewing a business intelligence (BI) report and notice a trend in the data. The user may have background knowledge as to why this trend occurs. Some business intelligence tools enable the user to create an annotation within the report that explains the trend. Such an annotation can enhance other users' understanding of the data. This example is especially powerful because it is created by a business user for the use of other business people.
Examples
Other examples of business metadata are:
Business rules
Data quality rules
Valid values for reference data
Wikis
Collaboration software
References
Business terms

List of songs about nuclear war

Songs with a theme of nuclear war have been a feature of popular culture since the early years of the Cold War.
"4 Minute Warning" By Radiohead (2007)
"137" By Brand New (2017)
"1983... (A Merman I Should Turn to Be)" by Jimi Hendrix
"1999" By Prince (1982)
"2 Minutes to Midnight" By Iron Maiden (1984)
"540,000 Degrees Fahrenheit" by Fear Factory
"99 Luftballons" By Nena (1983)
"A Blue Wave" By Cleaners From Venus (1981)
"Adrian" By Eurythmics (1985)
"A Flash in the Night" By Secret Service (band) (1982)
"After the Fall" By Klaus Nomi (1982)
"After the Holocaust" By Nuclear Assault (1986)
"After the War" By Asia (1985)
"Aftermath" By Armored Saint (1985)
"Aftershock" By Anthrax (1985)
"All Fall Down" By B-Movie (1991)
"Always the Sun" By The Stranglers (1986)
"America" By Kurtis Blow (1986)
"American Soviets" By CCCP (1990)
"April 2031" By Warrant (1992)
"Arise" by Sepultura
"Armageddon Days (Are Here Again)" By The The (1989)
"As The World Burns" By Bolt Thrower (1992)
"As the World Caves In" By Matt Maltese (2017)
"Atom and Evil" By Black Sabbath (as Heaven and Hell) (2009)
"Atom Bomb" By Glenn Barber (1955)
"Atom Drum Bop" By The Three Johns (1986)
"Atom Tan" By The Clash (1982)
"Atomic" By Blondie (1980)
"Atomic Dog" By George Clinton and the P-Funk All-Stars (1982)
"Atomic Playboys" By Steve Stevens (1989)
"Back to Zero" By The Rolling Stones (1986)
"Beat Street" By Grandmaster Flash and the Furious Five (1984)
"Beneath the Remains" By Sepultura (1989)
"Between the Wheels" By Rush (1984)
"Beyond the Black" By Metal Church (1984)
"Bikini Red" By The Screaming Blue Messiahs (1987)
"Billy's Line" By Red Box (1986)
"Birthright" By Anderson, Bruford, Wakeman, Howe (1989)
"Black Celebration" By Depeche Mode (1986)
"Black Planet" By The Sisters of Mercy (1985)
"Blackened" By Metallica (1988)
"Blossom and Blood" By Midnight Oil (1985)
"Blow the House Down" By Siouxsie and the Banshees (1984)
"Blowin' Sky High" By Berlin (1988)
"Bombers" By David Bowie (1971)
"Bomb" by Gang Green
"Bomb Iran" By JC & The B-1 Bombers (1980)
"The Bomb Song" By Darwin Deez
"Bombe the Russians" By Fear (1985)
"Boom!" by System of a Down on the album Steal This Album!
"Boom Box" By Vitabeats (1985)
"Breathing" By Kate Bush (1980)
"Brighter Than A Thousand Suns" By Iron Maiden (2006)
"Bring Back the Bomb" by GWAR
"Brush the Dust from That Old Bible" By Bradley Kincaid (1950)
"Burning Down the House" By Talking Heads (1983)
"Burning Heart" By Survivor (1985)
"Can't Stop Running" By Space Monkey (2002)
"The Catalyst" By Linkin Park (2010)
"Channel-Z" By The B-52's (1989)
"Chemical Bomb" By Aquabats (1999)
"Chemical Warfare" By Slayer (1983)
"Christmas at Ground Zero" By "Weird Al" Yankovic (1986)
"Claude Rains" By The Front Lawn (1989)
"Clean, Clean" By Bruce Woolley and the Camera Club / The Buggles (1979)
"Cloudburst at Shingle St." By Thomas Dolby (1982)
"Come Away Melinda" By Bobbie Gentry (1968)
"Countdown to Extinction" by Megadeth (1992)
"Countdown to Zero" By Asia (1985)
"Crawl Out Through the Fallout" By Sheldon Allman (1960)
"Cries of Help" By Discharge (1982)
"Cruise" By David Gilmour (1984)
"Cruise Missiles" By Fischer-Z (1981)
"Cuando Seas Grande" By Miguel Mateos (1993)
"Curfew" By The Stranglers (1978)
"Current Events" By Joe King Carrasco and the Crowns (1998)
"Dancing at the Nuclear Holocaust" By Doug P. Stone (1977)
"Dancing With Tears in My Eyes" By Ultravox (1984)
"Dangerous Moments" By Martin Briley (1985)
"Dawn Patrol" By Megadeth (1990)
"The Day After" By The Men They Couldn't Hang (1985)
"The Dead Next Door" By Billy Idol (1983)
"De Bom" By Doe Maar (1983)
"De Bom Valt Nooit" By Herman van Veen (1984)
"Def.Con.One" By Pop Will Eat Itself (1992)
"Destruction Preventer" by Sonata Arctica
"D-Day" By Bus Boys (2001)
"Dig a Hole in the Ground" By Fred Small (2011)
"Disaster Area" By All Out Attack (2006)
"Distant Early Warning" By Rush (1984)
"Does Anybody Care" By Alex Hirsch (2022)
"Domino" By Genesis (1986)
"Don't Crash" By Front 242 (1987)
"Do the Evolution" By Pearl Jam (1998)
"Do You Believe in the Westworld?" By Theatre of Hate (1982)
"Down from the Sky" By Trivium (2008)
"Downer" by Nirvana
"Dream Home in New Zealand" By The Beat (1981)
"Dream Told by Moto" By Minutemen (1983)
"Earth Crusher" By Mr. Lif (2002)
"Eighth Day" By Hazel O'Connor (1980)
"Einstein A Go Go" By Landscape (1981)
"Einstein On The Beach (For An Eggman)" By Counting Crows (1994)
"Electric Funeral" By Black Sabbath (1970)
"End of the World" By Gary Moore (1981)
"The End" By Discharge (1981)
"Enola Gay" By OMD (1980)
"Euroshima" By John Waite (1984)
"Eve of Destruction" By Barry McGuire (1965)
"Everybody Have Fun Tonight" By Wang Chung (1986)
"Everybody Wants to Rule the World" By Tears for Fears (1985)
"Everyday Is Like Sunday" By Morrissey (1988)
"Fabulous Disaster" By Exodus (1989)
"Fact And Fiction" By Twelfth Night (1982)
"Fallout" By Data (1980)
"Fallout Shelter" By Peter Scott Peters (1961)
"Famous Last Words" By Tears for Fears (1989)
"Fight Fire with Fire" By Metallica (1984)
"The Final Bloodbath" By Discharge (1982)
"The Final Countdown" By Europe (1986)
"Final Day" By Young Marble Giants (1980)
"Fire in the Sky" By Saxon (1981)
"Fireside Favourite" By Fad Gadget (1980)
"Firestorm" By Leslie Fish (1989)
"Five Years" By David Bowie (1972)
"Flame of the West" By Big Country (1984)
"Flyingdale Flyer" By Jethro Tull (1980)
"Forever Young" By Alphaville (1984)
"Four Minute Warning" By Mark Owen (2003)
"Four Minutes" By Culture Shock (1989)
"Four Minutes" By Roger Waters (1987)
"Folded Flags" By Roger Waters (1987)
"French Letters" By Herbs (1987)
"The Future's So Bright, I Gotta Wear Shades" By Timbuk3 (1986)
"Games Without Frontiers" by Peter Gabriel (1980)
"Glad It's All Over" By Captain Sensible (1994)
"Grandpa Atomic" By New Bomb Turks (1994)
"The Great Atomic Power" By The Louvin Brothers (1962)
"Ground B Sound" By Death Piggy (1999)
"Ground Zero Brooklyn" By Carnivore (band) (1987)
"The Gunner's Dream- Paranoid Eyes" By Pink Floyd (1983)
"Guns in the Sky" By INXS (1987)
"Hallowed Ground" By Violent Femmes (1984)
"Hammer to Fall" By Queen (1984)
"Happy Birthday" By "Weird Al" Yankovic (1983)
"Harrisburg" By Midnight Oil (1985)
"Heat" By Leslie Spit Treeo (1990)
"Heatwave" By Fay Ray (1982)
"A Hell On Earth" By Discharge (1982)
"He Looks Like Spencer Tracy Now" By Deacon Blue (1987)
"Help Save the Youth of America" By Billy Bragg (1986)
"Hercules" By Midnight Oil (1992)
"Here Comes President Kill Again" By XTC (1989)
"Heresy" By Rush (1991)
"History (remember Hiroshima & Nagasaki)" By John McGuinness (1984) John McGuinness & Vince Lewis band (2018)
"Hiroshima" By Gary Moore (1983)
"Hiroshima" By Sandra (1990)
"Hiroshima Nagasaki Russian Roulette" By Jim Page (1976)
"Hiroshima Nagasaki Russian Roulette" By Moving Hearts (1981)
"Hiroshima, Mon Amour" By Ultravox (1977)
"House at Pooneil Corners" By Jefferson Airplane (1968)
"Human Error" By Subhumans (1981)
"I.C.B.M." By Amebix (1987)
"I Come and Stand at Every Door" By The Byrds (1966)
"I Don't Wanna Die" By 4 Skins (1982)
"I Found That Essence Rare" By Gang of Four (1979)
"Ignorance" By Sacred Reich (1987)
"I Melt with You" By Modern English (1982)
"In the Hole" By Armored Saint (1985)
"Invasion" By Skrewdriver (1984)
"I Remember the Sun" By XTC (1984)
"Is There Something I Should Know?" By Duran Duran (1981)
"It's a Mistake" By Men At Work (1983)
"I've Known No War" By The Who (1982)
"I Won't Let the Sun Go Down on Me" By Nik Kershaw (1984)
"Janitor" By Suburban Lawns (1980)
"Just Another Day" By Oingo Boingo (1985)
"Jesus Hits Like the Atomic Bomb" By Lowell Blanchard and the Vally Trio (1998)
"Juggernaut" By Frank Marino & Mahogany Rush (1982)
"Kill for Peace" By The Fugs (1966)
"Kill the Poor" By Dead Kennedys (1978)
"Killer of Giants" By Ozzy Osbourne (1986)
"Kinky Sex Makes the World Go Round" By Dead Kennedys (1982)
"King of the World" By Steely Dan (1973)
"Land of Confusion" By Genesis (1986)
"Last Domino" By Genesis (1986)
"Last in the House of Flames" By UK Decay (1981)
"Lawyers in Love" By Jackson Browne (1983)
"Leave in Silence" By Depeche Mode (1982)
"Leningrad" By Billy Joel (1989)
"Let Me Die In My Footsteps" By Bob Dylan (1963)
"Let's All Make A Bomb" By Heaven 17 (1981)
"Let's Have A War" By Fear (1982)
"Let's Talk About it" By Dweezil Zappa (1986)
"Life During Wartime" By Talking Heads (1979)
"Listen" By Tears For Fears (1985)
"Live Fast, Die Young" By Circle Jerks (1980)
"Living Through Another Cuba" By XTC (1980)
"Lock and Key" By Rush (1987)
"London Calling" by The Clash
"Love Missile F1-11" By Sigue Sigue Sputnik (1986)
"Lovers in a Dangerous Time" By Bruce Cockburn (1984)
"M.A.D." by Hadouken!; lyrics and title refer to nuclear war; the whole album's and lyrics refer to atomic war
"Man at C&A" By The Specials (1980)
"Manhattan Project" By Rush (1985)
"Maralinga" By Urban Guerrillas (1983)
"Massive Retaliation" By Sigue Sigue Sputnik (1986)
"Mediate" By INXS (1987)
"Merry Minuet" By Kingston Trio (1959)
"Missiles" By The Sound (1980)
"Morning Dew" by Bonnie Dobson; also recorded by Jeff Beck, Blackfoot, Einstürzende Neubauten, Tim Rose, and The Grateful Dead
"Mutually Assured Destruction (M.A.D.)" By Gillan (1986)
"Nagasaki Nightmare" By Crass (1981)
"New Frontier" By Donald Fagen (1982)
"New Mexico" By Oppenheimer Analysis (1982)
"No Nuclear War " By Peter Tosh (1987)
"North Winds Blowing" By The Stranglers (1985)
"Nuclear" By Mike Oldfield
"Nuclear Attack" By Gary Moore (1981)
"Nuclear Attack" By Sabaton (2006)
"Nuclear Cop" By Redgum (1980)
"Nuclear Sunrise" By Hand of Fire (2017)
"Nuclear War" By Sun Ra Arkestra (1982)
"Nuclear War" By New Politics (2010)
"Nuclear War" By Yo La Tengo (2001)
"Oblivion" By Dirty Rotten Imbeciles (1987)
"Old Man Atom" By Sons Of The Pioneers (1950)
"On the Beach" By The Comsat Angels (1980)
"One of the Living" by Tina Turner, from Mad Max Beyond Thunderdome
"Oppenheimer" By Leni Oppenheimer (2019)
"Paranoid Chant" By Minutemen (1980)
"Party at Ground Zero" By Fishbone (1985)
Pink World by Planet P Project
"Planet Earth" By Duran Duran (1981)
"Political Science" By Randy Newman (1972)
"Pride of Man" by Hamilton Camp (1964)
"Pronto viviremos en la Luna" (Soon we will be living at the Moon), by Spanish singer-songwriter Víctor Manuel.
"Protect and Survive" By Runrig (1987)
"Put Down That Weapon" by Midnight Oil
"Quite Unusual" By Front 242 (1987)
"Radiation Sickness" By Nuclear Assault (1986)
"Red Rain" By Peter Gabriel (1986)
"Red Shadows" By T.S.O.L. (1984)
"Red Skies" By The Fixx (1982)
"Red Skies over Paradise By Fischer-Z (1981)
"Ride the Wind" By Crazy Planet (1988)
"Ronnie Talk to Russia" by Prince (1981)
"Russians" By Sting (1985)
"Rust in Peace... Polaris" By Megadeth (1990)
"S.D.I." By Loudness (1987)
"Seconds" By U2 (1982)
"Set the World Afire" By Megadeth (1988)
"Shattered" by Pantera
"Skeletons of Society" by Slayer
"Sign o' the Times" By Prince (1987)
"So Afraid of the Russians" By Made for TV (1983)
"So Long, and Thanks for All the Fish" By A Perfect Circle (2018)
"So Long, Mom (A Song for World War III)" By Tom Lehrer (circa 1965)
"Soviet Snow" By Shona Laing (1987)
"The Stage" By Avenged Sevenfold (2016)
"Stagnation" By Genesis (1970)
"Stop the World" By The Clash (1980)
"Strike Zone" By Loverboy (1983)
"Strontium 90" By Fred & Betty Dallas with Ron Fielder (1959)
"The Sun Is Burning" By Ian Campbell (1963)
"Summer of ‘81" By Mondo Rock (1981)
"Sunrise" By Icehouse (1987)
"Survive" by Nuclear Assault
"Survivor's Song" By Julia Ecklar (1986)
"Talkin' World War III Blues" by Bob Dylan
"The Apple Tree" by Difford & Tilbrook (1984)
"The Temptation of Adam" by Josh Ritter (2007)
"Thank Christ for The Bomb" By Groundhogs (1970)
"Thank God for The Bomb" By Ozzy Osbourne (1986)
"The Sun Is Burning" By Simon & Garfunkel (1964)
"This World Over" by XTC (1984)
"Time After Time" By Electric Light Orchestra (1983)
"Time Will Crawl" By David Bowie (1987)
"Total Eclipse" By Klaus Nomi (1981)
"Town to Town" by Microdisney (1987)
"Tropicana" By Gruppo Italiano (2002)
"Twilight Gods" By Helloween (1987)
"Two Minute Warning" By Depeche Mode (1983)
"Two Suns In the Sunset" By Pink Floyd (1983)
"Two Tribes" By Frankie Goes to Hollywood (1984)
"Vamos a la Playa" by Righeira (1983)
"Vaporized" by X-15 (1981)
"Victims of the Future" By Gary Moore (1983)
"Walk the Dinosaur" by Was (Not Was) (1987)
"Walking in Your Footsteps" by The Police (1983)
"The Wanderer" By U2 and Johnny Cash (1993)
"Warhead" by Tarot
"Warhead" By UK Subs (1980)
"We Don't Want No Nuclear War" By Peter Tosh (1987)
"We Will All Go Together When We Go" By Tom Lehrer (1959)
"We Will Become Silhouettes" By The Postal Service (2005)
"What Have They Done" By Squeeze (1986)
"When the Wind Blows" By David Bowie (1986)
"When They Drop the Atomic Bomb" By Jackie Doll and the Pickled Peppers (1951)
"White Train" by Bananarama (1986)
"Will The Sun Rise?" by Dokken
"Wind of Change" By Scorpions (1991)
"Wooden Ships" By Crosby, Stills & Nash (1969)
"World Destruction" By Time Zone (1984)
"World War III" recorded by D.O.A.
"World Wars III & IV" By Carnivore (band) (1985)
References
Nuclear war
Nuclear warfare

Pockets Warhol

Pockets Warhol (born 1992) is a capuchin monkey, and one of 24 residents (as of 2023-08-03) at Story Book Farm Primate Sanctuary near Sunderland, Ontario, Canada. Pockets came to media attention in 2011 when the sanctuary held a fundraiser featuring 40 paintings by the monkey.
Early life
According to the sanctuary, Pockets was born on April 1, 1992, and lived his early life as a pet in British Columbia. In 2009, Pockets' owner was finding herself challenged to look after him, and searched for a place that could take him. On finding Story Book Farm, she flew herself and Pockets to Ontario, and stayed with Pockets for a week to get him comfortable in his new home. The former owner still keeps in touch with the sanctuary.
Start as an artist
Shortly after Pockets arrived at the sanctuary, one of the volunteers, Charmaine Quinn, gave Pockets his surname of Warhol because his white hair reminded her of Andy Warhol. This also prompted Quinn to give Pockets some children's paints to keep him busy. In December 2011, having accumulated 40 of Pockets' paintings, the sanctuary arranged an exhibition of the paintings at a Toronto diner, helping to raise funds for the sanctuary. The event was covered in the Toronto Star, which in turn triggered international media coverage in/on: CBC, Global News, the Huffington Post (USA), Maclean's magazine, and Vv Magazine. A few months later, Pockets paintings were made available for sale online.
Art collaboration
In September 2013, Brent Roe and Scott Cameron (aka Scotch Camera) joined an art show with Pockets Warhol at the Gladstone hotel in Toronto. In September 2014, Maclean's listed Pockets as the #8 top-selling art animal in the world, based on the top price fetched for a single item. According to Quinn, Pockets' work has been featured in art shows as far away as Estonia, Finland, and Italy, and purchased online from as far away as Tasmania.
In May 2016, Anita Kunz visited Pockets at the sanctuary, and subsequently donated one of her own paintings for Pockets to 'enhance'. Ms. Kunz later organized an art show with 80 other artists as a new fundraiser for the sanctuary, held at The Papermill Gallery, Todmorden Mills from April 6–16, 2017. Other participants in this collaboration included: Barry Blitt, Marc Burckhardt, Cynthia von Buhler, Seymour Chwast, Sue Coe, Yuri Dojc, Louis Fishauf, Jill Greenberg, Terry Mosher, Tim O'Brien, Ralph Steadman, Ann Telnaes and Martin Wittfooth.
Celebrity interactions
In April 2012, sanctuary volunteers Charmaine Quinn and Izzy Hirji presented Jane Goodall with a photo of Pockets and a painting by Pockets for her birthday at the Jane Goodall Institute in Toronto.
In March 2015, the sanctuary sent a painting by Pockets to Ricky Gervais and Jane Fallon as a 'Thank you' for their support of animal rights. In June 2015, Ricky Gervais tweeted that he was donating an acoustic guitar to the sanctuary, with mention of Pockets Warhol. After his performance in Toronto in September 2015, Gervais donated the guitar he used there, which subsequently raised US$4,150 in an online auction. The winning bidder lives in the United Kingdom. As of February 15, 2019, the guitar was up for auction again having been signed by several other celebrities: Brian May, Peter Frampton, Will Ferrell, Bryan Cranston, Dhani Harrison, Ricky Warwick, Steve Cutts. This time the proceeds were split between Story Book Farm Primate Sanctuary and Brian May's Save Me organization.
In 2020, Martin Gore of Depeche Mode commissioned artwork by Pockets to be used as the cover art for his latest EP, The Third Chimpanzee, see photos at right. The artwork is also featured in the accompanying music videos. Martin Gore discussed this collaboration in an interview with Rolling Stone magazine on 2021-01-27. The EP was released by Mute Records on 2021-01-29. One track, Mandrill, was released early on 2020-11-17. A second track, Howler, was released 2021-01-07.
See also
Animal-made art
Congo (chimpanzee)
Darwin (monkey)
List of individual monkeys
Pierre Brassau
References
External links
Pockets Warhol Art Gallery
Visual arts by animals
1992 animal births
Art by primates
Canadian male painters
Individual monkeys

Third-generation sequencing

Third-generation sequencing (also known as long-read sequencing) is a class of DNA sequencing methods which produce longer sequence reads, under active development since 2008.
Third generation sequencing technologies have the capability to produce substantially longer reads than second generation sequencing, also known as next-generation sequencing. Such an advantage has critical implications for both genome science and the study of biology in general. However, third generation sequencing data have much higher error rates than previous technologies, which can complicate downstream genome assembly and analysis of the resulting data. These technologies are undergoing active development and it is expected that there will be improvements to the high error rates. For applications that are more tolerant to error rates, such as structural variant calling, third generation sequencing has been found to outperform existing methods, even at a low depth of sequencing coverage.
Current technologies
Sequencing technologies with a different approach than second-generation platforms were first described as "third-generation" in 2008–2009.
There are several companies currently at the heart of third generation sequencing technology development, namely, Pacific Biosciences, Oxford Nanopore Technology, Quantapore (CA-USA), and Stratos (WA-USA). These companies are taking fundamentally different approaches to sequencing single DNA molecules.
PacBio developed the sequencing platform of single molecule real time sequencing (SMRT), based on the properties of zero-mode waveguides. Signals are in the form of fluorescent light emission from each nucleotide incorporated by a DNA polymerase bound to the bottom of the zero-mode waveguide well.
Oxford Nanopore's technology involves passing a DNA molecule through a nanoscale pore structure and then measuring changes in the electrical field surrounding the pore, while Quantapore has a different proprietary nanopore approach. Stratos Genomics spaces out the DNA bases with polymeric inserts, "Xpandomers", to circumvent the signal-to-noise challenge of nanopore ssDNA reading.
Also notable is Helicos's single molecule fluorescence approach, but the company entered bankruptcy in the fall of 2015.
Advantages
Longer reads
In comparison to the current generation of sequencing technologies, third generation sequencing has the obvious advantage of producing much longer reads. It is expected that these longer read lengths will alleviate numerous computational challenges surrounding genome assembly, transcript reconstruction, and metagenomics among other important areas of modern biology and medicine.
It is well known that eukaryotic genomes, including those of primates and humans, are complex and contain large numbers of long repeated regions. Short reads from second generation sequencing must rely on approximate strategies to infer sequences over long ranges for assembly and genetic variant calling. Paired-end reads have been leveraged by second generation sequencing to combat these limitations, but the exact fragment lengths of read pairs are often unknown and must also be approximated. By making long read lengths possible, third generation sequencing technologies have clear advantages.
Epigenetics
Epigenetic markers are stable and potentially heritable modifications to the DNA molecule that are not in its sequence. An example is DNA methylation at CpG sites, which has been found to influence gene expression. Histone modifications are another example. The current generation of sequencing technologies rely on laboratory techniques such as ChIP-sequencing for the detection of epigenetic markers. These techniques involve tagging the DNA strand, breaking and filtering fragments that contain markers, followed by sequencing. Third generation sequencing may enable direct detection of these markers due to their distinctive signal from the other four nucleotide bases.
Portability and speed
Other important advantages of third generation sequencing technologies include portability and sequencing speed. Since minimal sample preprocessing is required in comparison to second generation sequencing, smaller equipment can be designed. Oxford Nanopore Technology has recently commercialized the MinION sequencer. This sequencing machine is roughly the size of a regular USB flash drive and can be used readily by connecting it to a laptop. In addition, since the sequencing process is not parallelized across regions of the genome, data can be collected and analyzed in real time. These advantages of third generation sequencing may be well suited to hospital settings, where quick, on-site data collection and analysis are demanded.
Challenges
Third generation sequencing, as of 2008, faced important challenges, mainly surrounding accurate identification of nucleotide bases; error rates were still much higher than in second generation sequencing. This is generally due to instability of the molecular machinery involved. For example, in PacBio's single molecule real time sequencing technology, the DNA polymerase molecule becomes increasingly damaged as the sequencing process proceeds. Additionally, since the process happens quickly, the signals given off by individual bases may be blurred by signals from neighbouring bases. This poses a new computational challenge for deciphering the signals and consequently inferring the sequence. Methods such as hidden Markov models, for example, have been leveraged for this purpose with some success.
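The hidden-Markov-model idea can be sketched with a toy Viterbi decoder. The states, discretized signal levels and probabilities below are invented for illustration and are far simpler than any production basecalling model:

```python
import math

# Toy HMM basecaller: states are the four bases, observations are
# discretized signal levels; all probabilities are made up for illustration.
STATES = ["A", "C", "G", "T"]
TRANS = {s: {t: 0.25 for t in STATES} for s in STATES}  # uniform transitions
EMIT = {
    "A": {"low": 0.7, "mid": 0.2, "high": 0.1},
    "C": {"low": 0.2, "mid": 0.6, "high": 0.2},
    "G": {"low": 0.1, "mid": 0.2, "high": 0.7},
    "T": {"low": 0.3, "mid": 0.4, "high": 0.3},
}

def viterbi(observations):
    """Return the most likely base sequence for a list of signal levels."""
    prev = {s: math.log(0.25) + math.log(EMIT[s][observations[0]]) for s in STATES}
    backpointers = []
    for obs in observations[1:]:
        cur, back = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: prev[p] + math.log(TRANS[p][s]))
            cur[s] = prev[best] + math.log(TRANS[best][s]) + math.log(EMIT[s][obs])
            back[s] = best
        prev = cur
        backpointers.append(back)
    # Trace back from the best final state.
    state = max(STATES, key=lambda s: prev[s])
    path = [state]
    for back in reversed(backpointers):
        state = back[state]
        path.append(state)
    return "".join(reversed(path))

print(viterbi(["low", "high", "mid"]))  # → AGC
```

Real basecallers condition emissions on several neighbouring bases at once (which is exactly how blurred signals are disentangled), but the dynamic-programming core is the same.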
On average, different individuals of the human population share about 99.9% of their genes. In other words, approximately one out of every thousand bases differs between any two people. The high error rates involved with third generation sequencing are inevitably problematic for the purpose of characterizing individual differences that exist between members of the same species.
Genome assembly
Genome assembly is the reconstruction of whole genome DNA sequences. This is generally done with two fundamentally different approaches.
Reference alignment
When a reference genome is available, as in the case of humans, newly sequenced reads can simply be aligned to the reference genome in order to characterize its properties. Such reference-based assembly is quick and easy but has the disadvantage of "hiding" novel sequences and large copy number variants. In addition, reference genomes do not yet exist for most organisms.
De novo assembly
De novo assembly is the alternative genome assembly approach to reference alignment. It refers to the reconstruction of whole genome sequences entirely from raw sequence reads. This method would be chosen when there is no reference genome, when the species of the given organism is unknown as in metagenomics, or when there exist genetic variants of interest that may not be detected by reference genome alignment.
Given the short reads produced by the current generation of sequencing technologies, de novo assembly is a major computational problem. It is normally approached by an iterative process of finding and connecting sequence reads with sensible overlaps. Various computational and statistical techniques, such as de Bruijn graphs and overlap-layout-consensus graphs, have been leveraged to solve this problem. Nonetheless, due to the highly repetitive nature of eukaryotic genomes, accurate and complete reconstruction of genome sequences in de novo assembly remains challenging. Paired-end reads have been posed as a possible solution, though exact fragment lengths are often unknown and must be approximated.
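The de Bruijn graph approach can be sketched on toy data. This assumes error-free reads and a single non-branching path, and collapses edge multiplicities – simplifications that real assemblers cannot afford, precisely because of the repeats discussed above:

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: edges between overlapping (k-1)-mers.

    Edge multiplicities are collapsed into a set here for simplicity;
    real assemblers keep counts to resolve repeats."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def walk(graph, start):
    """Follow unique outgoing edges from `start` to reconstruct a contig."""
    contig, node = start, start
    while len(graph.get(node, ())) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
    return contig

reads = ["ATGGC", "TGGCG", "GGCGT"]
graph = de_bruijn(reads, 3)
print(walk(graph, "AT"))  # → ATGGCGT
```

The walk stops as soon as a node has zero or multiple successors; in a genome with repeats, such branch points are exactly where short-read assemblies fragment into separate contigs.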
Hybrid assembly
Long read lengths offered by third generation sequencing may alleviate many of the challenges currently faced by de novo genome assemblies. For example, if an entire repetitive region can be sequenced unambiguously in a single read, no computation inference would be required. Computational methods have been proposed to alleviate the issue of high error rates. For example, in one study, it was demonstrated that de novo assembly of a microbial genome using PacBio sequencing alone performed superior to that of second generation sequencing.
Third generation sequencing may also be used in conjunction with second generation sequencing. This approach is often referred to as hybrid sequencing. For example, long reads from third generation sequencing may be used to resolve ambiguities that exist in genomes previously assembled using second generation sequencing. On the other hand, short second generation reads have been used to correct errors that exist in the long third generation reads. In general, this hybrid approach has been shown to improve de novo genome assemblies significantly.
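A toy version of short-read-based error correction is sketched below, using a trusted k-mer spectrum built from accurate short reads. This is illustrative only; published hybrid correctors rely on alignment and consensus rather than this simple substitution rule:

```python
# Toy k-mer-spectrum correction: k-mers seen in accurate short reads are
# "trusted"; untrusted k-mers in the long read are patched by substitution.
def trusted_kmers(short_reads, k):
    return {r[i:i + k] for r in short_reads for i in range(len(r) - k + 1)}

def correct(long_read, trusted, k):
    read = list(long_read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if kmer in trusted:
            continue
        # Try substituting the last base of the window with each alternative.
        for base in "ACGT":
            if kmer[:-1] + base in trusted:
                read[i + k - 1] = base
                break
    return "".join(read)

short_reads = ["ATGGC", "TGGCG", "GGCGT"]
trusted = trusted_kmers(short_reads, 3)
print(correct("ATGACGT", trusted, 3))  # → ATGGCGT
```

The sketch repairs a single substitution error in the long read; real long-read errors are dominated by insertions and deletions, which is why practical correctors need alignment-based consensus rather than fixed-length windows.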
Epigenetic markers
DNA methylation (DNAm) – the covalent modification of DNA at CpG sites resulting in attached methyl groups – is the best understood component of epigenetic machinery. DNA modifications and the resulting gene expression can vary across cell types and temporal development, differ with genetic ancestry, can change due to environmental stimuli, and are heritable. After the discovery of DNAm, researchers also found correlations between it and diseases like cancer and autism. In this disease etiology context, DNAm is an important avenue of further research.
Advantages
The current most common methods for examining methylation state require an assay that fragments DNA before standard second generation sequencing on the Illumina platform. As a result of short read length, information regarding the longer patterns of methylation are lost. Third generation sequencing technologies offer the capability for single molecule real-time sequencing of longer reads, and detection of DNA modification without the aforementioned assay.
Oxford Nanopore Technologies’ MinION has been used to detect DNAm. As each DNA strand passes through a pore, it produces electrical signals which have been found to be sensitive to epigenetic changes in the nucleotides, and a hidden Markov model (HMM) was used to analyze MinION data to detect 5-methylcytosine (5mC) DNA modification. The model was trained using synthetically methylated E. coli DNA and the resulting signals measured by the nanopore technology. Then the trained model was used to detect 5mC in MinION genomic reads from a human cell line which already had a reference methylome. The classifier has 82% accuracy in randomly sampled singleton sites, which increases to 95% when more stringent thresholds are applied.
Other methods address different types of DNA modifications using the MinION platform. Stoiber et al. examined 4-methylcytosine (4mC) and 6-methyladenine (6mA), along with 5mC, and also created software to directly visualize the raw MinION data in a human-friendly way. Here they found that in E. coli, which has a known methylome, event windows 5 base pairs long can be used to divide and statistically analyze the raw MinION electrical signals. A straightforward Mann-Whitney U test can detect modified portions of the E. coli sequence, as well as further split the modifications into 4mC, 6mA or 5mC regions.
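The window-level comparison described above can be illustrated with the rank-based U statistic itself. This is a sketch only – the current levels below are invented, and the real analysis operates on genome-wide MinION event data:

```python
# Mann-Whitney U statistic for comparing raw current levels between a
# control window and a putatively modified window (signal values invented).
def mann_whitney_u(sample_a, sample_b):
    """Count of (a, b) pairs with a > b, plus 0.5 per tie."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

control = [52.1, 51.8, 52.4, 51.9, 52.0]   # unmodified 5-bp window (pA)
modified = [54.6, 54.9, 55.1, 54.7, 54.8]  # shifted window (pA)
print(mann_whitney_u(control, modified))   # → 0.0: full separation
```

Under no modification, U would hover near n*m/2 (here 12.5); a U near 0 or n*m indicates that the current distribution in the window has shifted, flagging a possible modified base.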
It seems likely that in the future, MinION raw data will be used to detect many different epigenetic marks in DNA.
PacBio sequencing has also been used to detect DNA methylation. In this platform, the pulse width – the width of a fluorescent light pulse – corresponds to a specific base. In 2010 it was shown that the interpulse distances in control and methylated samples differ, and that there is a "signature" pulse width for each methylation type. In 2012, the binding sites of DNA methyltransferases were characterized using the PacBio platform. The detection of N6-methylation in C. elegans was shown in 2015. DNA methylation on N6-adenine in mouse embryonic stem cells was shown using the PacBio platform in 2016.
Other forms of DNA modifications – from heavy metals, oxidation, or UV damage – are also possible avenues of research using Oxford Nanopore and PacBio third generation sequencing.
Drawbacks
Processing of the raw data – such as normalization to the median signal – was needed on MinION raw data, reducing real-time capability of the technology. Consistency of the electrical signals is still an issue, making it difficult to accurately call a nucleotide. MinION has low throughput; since multiple overlapping reads are hard to obtain, this further leads to accuracy problems of downstream DNA modification detection. Both the hidden Markov model and statistical methods used with MinION raw data require repeated observations of DNA modifications for detection, meaning that individual modified nucleotides need to be consistently present in multiple copies of the genome, e.g. in multiple cells or plasmids in the sample.
For the PacBio platform, too, depending on what methylation you expect to find, coverage needs can vary. As of March 2017, other epigenetic factors like histone modifications have not been discoverable using third-generation technologies. Longer patterns of methylation are often lost because smaller contigs still need to be assembled.
Transcriptomics
Transcriptomics is the study of the transcriptome, usually by characterizing the relative abundances of messenger RNA molecules in the tissue under study. According to the central dogma of molecular biology, genetic information flows from double stranded DNA molecules to single stranded mRNA molecules where they can be readily translated into functional protein molecules. By studying the transcriptome, one can gain valuable insight into the regulation of gene expression.
While expression levels can be more or less accurately depicted by second generation sequencing (we can assume that actual abundances of the population of transcripts are randomly sampled), transcript-level information still remains an important challenge. As a consequence, the role of alternative splicing in molecular biology remains largely elusive. Third generation sequencing technologies hold promising prospects in resolving this issue by enabling sequencing of mRNA molecules at their full lengths.
Alternative splicing
Alternative splicing (AS) is the process by which a single gene may give rise to multiple distinct mRNA transcripts and consequently different protein translations. Some evidence suggests that AS is a ubiquitous phenomenon and may play a key role in determining the phenotypes of organisms, especially in complex eukaryotes; all eukaryotes contain genes consisting of introns that may undergo AS. In particular, it has been estimated that AS occurs in 95% of all human multi-exon genes. AS has undeniable potential to influence myriad biological processes. Advancing knowledge in this area has critical implications for the study of biology in general.
Transcript reconstruction
The current generation of sequencing technologies produces only short reads, placing tremendous limitations on the ability to detect distinct transcripts; short reads must be reverse engineered into the original transcripts that could have given rise to the observed reads. This task is further complicated by the highly variable expression levels across transcripts, and consequently variable read coverage across the sequence of the gene. In addition, exons may be shared among individual transcripts, rendering unambiguous inferences essentially impossible. Existing computational methods make inferences based on the accumulation of short reads at various sequence locations, often by making simplifying assumptions. Cufflinks takes a parsimonious approach, seeking to explain all the reads with the fewest possible number of transcripts. On the other hand, StringTie attempts to simultaneously estimate transcript abundances while assembling the reads. These methods, while reasonable, may not always identify real transcripts.
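The parsimony idea can be illustrated with a toy greedy set cover over candidate isoforms. This is not Cufflinks' actual algorithm (which builds an overlap graph and solves a minimum path cover); the exon chains, candidate isoforms, and greedy heuristic below are illustrative assumptions that only show why "fewest transcripts explaining all reads" is the objective.

```python
def parsimonious_transcripts(reads, candidates):
    """Greedy set cover: pick the fewest candidate isoforms that explain all reads.

    A read is 'explained' by an isoform if its exon chain is a contiguous
    sub-chain of the isoform's exon chain.
    """
    def explains(isoform, read):
        n, m = len(isoform), len(read)
        return any(isoform[i:i + m] == read for i in range(n - m + 1))

    uncovered = set(range(len(reads)))
    chosen = []
    while uncovered:
        # isoform explaining the most still-uncovered reads
        best = max(candidates,
                   key=lambda iso: sum(explains(iso, reads[i]) for i in uncovered))
        newly = {i for i in uncovered if explains(best, reads[i])}
        if not newly:
            break  # some reads fit no candidate isoform
        chosen.append(best)
        uncovered -= newly
    return chosen

# exon chains observed in short reads (tuples of exon ids)
reads = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 3, 4)]
candidates = [(1, 2, 3, 4), (1, 3, 4), (1, 2, 4)]
print(parsimonious_transcripts(reads, candidates))
# → [(1, 2, 3, 4), (1, 3, 4)]
```

Note that the read (1, 3), an exon-skipping junction, forces a second isoform into the answer: no single candidate explains all five reads, which is precisely the ambiguity long reads would resolve directly.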
A study published in 2008 surveyed 25 different existing transcript reconstruction protocols. Its evidence suggested that existing methods are generally weak in assembling transcripts, though the ability to detect individual exons is relatively intact. According to the estimates, average sensitivity to detect exons across the 25 protocols is 80% for Caenorhabditis elegans genes. In comparison, transcript identification sensitivity decreases to 65%. For human, the study reported an exon detection sensitivity averaging 69%, while transcript detection sensitivity averaged a mere 33%. In other words, for human, existing methods are able to identify less than half of all existing transcripts.
Third generation sequencing technologies have demonstrated promising prospects in solving the problem of transcript detection as well as mRNA abundance estimation at the level of transcripts. While error rates remain high, third generation sequencing technologies have the capability to produce much longer read lengths. Pacific Biosciences has introduced the Iso-Seq platform, proposing to sequence mRNA molecules at their full lengths. It is anticipated that Oxford Nanopore will put forth similar technologies. The trouble with higher error rates may be alleviated by supplementary high-quality short reads. This approach has been previously tested and reported to reduce the error rate by more than threefold.
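The hybrid-correction idea, supplementing an error-prone long read with accurate short reads, can be sketched as a simple majority vote. Real tools such as LoRDEC or proovread work on k-mer graphs and perform the alignment themselves; the sketch below assumes the short reads are already aligned to the long read, and the sequences and offsets are invented for illustration.

```python
from collections import Counter

def polish(long_read, aligned_short_reads, min_votes=2):
    """Majority-vote correction of a long read from pre-aligned short reads.

    aligned_short_reads: list of (offset, sequence) pairs giving where each
    accurate short read aligns on the long read. Positions without enough
    short-read support keep the original base.
    """
    votes = [Counter() for _ in long_read]
    for offset, seq in aligned_short_reads:
        for j, base in enumerate(seq):
            votes[offset + j][base] += 1
    corrected = []
    for i, base in enumerate(long_read):
        winner, count = (votes[i].most_common(1) or [(base, 0)])[0]
        corrected.append(winner if count >= min_votes else base)
    return "".join(corrected)

noisy = "ACGTTXGCATYA"  # X and Y stand in for long-read base-call errors
shorts = [(0, "ACGTTA"), (2, "GTTAGC"), (4, "TAGCAT"), (6, "GCATTA"), (8, "ATTA")]
print(polish(noisy, shorts))
# → ACGTTAGCATTA
```

The `min_votes` threshold is the key design choice: a single disagreeing short read should not overwrite the long read, so only positions with corroborating short-read coverage are changed.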
Metagenomics
Metagenomics is the analysis of genetic material recovered directly from environmental samples.
Advantages
The main advantage of third-generation sequencing technologies in metagenomics is their speed of sequencing in comparison to second-generation techniques. Sequencing speed is important, for example, in the clinical setting (e.g. pathogen identification), where it allows for efficient diagnosis and timely clinical action.
Oxford Nanopore's MinION was used in 2015 for real-time metagenomic detection of pathogens in complex, high-background clinical samples. The first Ebola virus (EBOV) read was sequenced 44 seconds after data acquisition. There was uniform mapping of reads to genome; at least one read mapped to >88% of the genome. The relatively long reads allowed for sequencing of a near-complete viral genome to high accuracy (97–99% identity) directly from a primary clinical sample.
A common phylogenetic marker for microbial community diversity studies is the 16S ribosomal RNA gene. Both MinION and PacBio's SMRT platform have been used to sequence this gene. In this context the PacBio error rate was comparable to that of shorter reads from 454 and Illumina's MiSeq sequencing platforms.
Drawbacks
MinION's high error rate (~10–40%) prevented identification of antimicrobial resistance markers, for which single-nucleotide resolution is necessary. For the same reason, eukaryotic pathogens were not identified. Ease of carryover contamination when re-using the same flow cell (standard wash protocols do not work) is also a concern. Unique barcodes may allow for more multiplexing. Furthermore, performing accurate species identification for bacteria, fungi and parasites is very difficult, as they share larger portions of their genomes, and some differ by less than 5%.
The per-base sequencing cost is still significantly higher than that of MiSeq. However, supplementing reference databases with full-length sequences from organisms below the limit of detection of the Sanger approach could greatly help the identification of organisms in metagenomics.
See also
First-generation sequencing
Second-generation sequencing
References
External links
Molecular biology
Molecular biology techniques
Biotechnology
DNA sequencing methods | Third-generation sequencing | [
"Chemistry",
"Biology"
] | 3,615 | [
"Genetics techniques",
"Biotechnology",
"DNA sequencing methods",
"Molecular biology techniques",
"DNA sequencing",
"nan",
"Molecular biology",
"Biochemistry"
] |
53,365,013 | https://en.wikipedia.org/wiki/Mosaik%20Solutions | Mosaik Solutions (formerly American Roamer) was a company that specialized in wireless coverage data and wireless coverage maps, based in Memphis, Tennessee, before being acquired by Ookla.
The company collects and crowdsources carrier signal quality from major telecommunications providers or users who have its consumer or enterprise mobile application installed. The data is used to provide insights into places around the world without access to cellular coverage and the development of new coverage patterns, as well as to provide maps showing what provider offers the best service in an area.
In 2011, the Federal Communications Commission (FCC), recognized Mosaik Solutions as the "industry standard" for the presence of wireless service at the census-block level.
History
In 2016, Mosaik purchased Sensorly, a free app developed to crowdsource cellular network performance service and provide coverage mapping for wireless networks worldwide.
Products and services
MapELEMENTS
MapELEMENTS software is a visualization tool that allows users to analyze data from the largest cellular coverage database in the world.
CellMaps
CellMaps is an interactive mapping solution that allows companies to show their network coverage directly on their website through an iframe or API. In 2013 Mosaik launched an android app for CellMaps that provides data directly from carriers so that users can determine what carrier meets their needs in a given area. On the map you can overlay multiple carriers, zoom to street-view level, and drop a pin onto any given spot to get a breakdown of carrier service in that area.
Signal Insights App
Signal Insights is a SaaS platform service available for Android users that measures and analyzes the customer's experience on cellular or Wi-Fi networks. Indoor mode allows a user to upload a building floor plan and then map and test specific points in the building for cellular or Wi-Fi connectivity.
Sensorly App
Sensorly is a free app that crowdsources cellular network performance to provide coverage mapping worldwide and mobile speed data to help consumers make informed decisions when choosing a cellular carrier. In February 2017, Sensorly launched Map Trip, a feature that allows users to map their routes and share with others their signal data at a particular point in real time.
TowerSource
TowerSource is a resource for locating cell towers and identifying ownership, availability, fiber routes, type and height. It was acquired by Mosaik Solutions in September 2014.
Network Validator
Network Validator is a SaaS solution designed for users to quickly determine whether global cellular networks exist - by country, operator and wireless technology.
CoverageRight
CoverageRight is composed of licensed GIS file datasets that identify the marketed coverage of wireless operators in the United States and worldwide. It enables users to perform spatial analyses, monitor competitive build-outs, analyze coverage trends and assemble roaming footprints. This data has been utilized by the FCC to analyze wireless coverage nationwide.
Network QoE
Network QoE is an enterprise platform that uses crowdsourced data from cellular devices to detect wireless network issues including 3G, 4G and wifi accessibility, network coverage holes and data performance issues.
Wireless Spectrum Report
In March 2017, Mosaik Solutions launched the Wireless Spectrum Report, a tabular dataset detailing facts about spectrum ownership and availability in the United States.
References
External links
Official Website
Telecommunications | Mosaik Solutions | [
"Technology"
] | 657 | [
"Information and communications technology",
"Telecommunications"
] |
53,365,621 | https://en.wikipedia.org/wiki/Thin-film%20equation | In fluid mechanics, the thin-film equation is a partial differential equation that approximately predicts the time evolution of the thickness of a liquid film that lies on a surface. The equation is derived via lubrication theory, which is based on the assumption that the length-scales in the surface directions are significantly larger than in the direction normal to the surface. In the non-dimensional form of the Navier-Stokes equations the requirement is that terms of order $\epsilon^2$ and $\epsilon\,\mathrm{Re}$ are negligible, where $\epsilon \ll 1$ is the aspect ratio and $\mathrm{Re}$ is the Reynolds number. This significantly simplifies the governing equations. However, lubrication theory, as the name suggests, is typically derived for flow between two solid surfaces, hence the liquid forms a lubricating layer. The thin-film equation holds when there is a single free surface. With two free surfaces, the flow must be treated as a viscous sheet.
Definition
The basic form of a 2-dimensional thin film equation is

$$\frac{\partial h}{\partial t} + \nabla\cdot\mathbf{q} = 0,$$

where the fluid flux is

$$\mathbf{q} = \frac{h^3}{3\mu}\left[\gamma\,\nabla\nabla^2 h + \rho g\left((\hat{\mathbf{g}}\cdot\hat{\mathbf{e}}_1)\,\hat{\mathbf{e}}_1 + (\hat{\mathbf{g}}\cdot\hat{\mathbf{e}}_2)\,\hat{\mathbf{e}}_2 + (\hat{\mathbf{g}}\cdot\hat{\mathbf{n}})\,\nabla h\right)\right] + \frac{h^2}{2\mu}\,\boldsymbol{\tau},$$

and μ is the viscosity (or dynamic viscosity) of the liquid, h(x,y,t) is film thickness, γ is the interfacial tension between the liquid and the gas phase above it, ρ is the liquid density and $\boldsymbol{\tau}$ the surface shear. The surface shear could be caused by flow of the overlying gas or surface tension gradients. The vectors $\hat{\mathbf{e}}_1$ and $\hat{\mathbf{e}}_2$ represent the unit vectors in the surface co-ordinate directions, the dot product with the unit vector $\hat{\mathbf{g}}$ along gravity serving to identify the gravity component in each direction. The vector $\hat{\mathbf{n}}$ is the unit vector perpendicular to the surface.
A generalised thin film equation,

$$\frac{\partial h}{\partial t} + \nabla\cdot\left(h^n\,\nabla\nabla^2 h\right) = 0,$$

is discussed in SIAM (Society for Industrial and Applied Mathematics). When $n = 2$ the equation may represent flow with slip at the solid surface, while $n = 1$ describes the thickness of a thin bridge between two masses of fluid in a Hele-Shaw cell. The value $n = 3$ represents purely surface tension driven flow.
A form frequently investigated with regard to the rupture of thin liquid films involves the addition of a disjoining pressure Π(h) in the equation, as in

$$\frac{\partial h}{\partial t} + \nabla\cdot\left(\frac{h^3}{3\mu}\,\nabla\left(\gamma\nabla^2 h - \Pi(h)\right)\right) = 0,$$

where the function Π(h) is usually very small in value for moderate-large film thicknesses h and grows very rapidly when h goes very close to zero.
Properties
Physical applications, properties and solution behaviour of the thin-film equation are reviewed in Reviews of Modern Physics and SIAM. With the inclusion of phase change at the substrate a form of thin film equation for an arbitrary surface is derived in Physics of Fluids. A detailed study of the steady-flow of a thin film near a moving contact line is given in another SIAM paper. For a yield-stress fluid flow driven by gravity and surface tension is investigated in Journal of Non-Newtonian Fluid Mechanics.
For purely surface tension driven flow it is easy to see that one static (time-independent) solution is a paraboloid of revolution

$$h(x,y) = h_0\left(1 - \frac{x^2 + y^2}{R^2}\right),$$

since a paraboloid has constant $\nabla^2 h$, so that $\nabla\nabla^2 h = 0$ and the flux vanishes. This is consistent with the experimentally observed spherical cap shape of a static sessile drop, as a "flat" spherical cap that has small height can be accurately approximated in second order with a paraboloid. This, however, does not handle correctly the circumference of the droplet, where the value of the function h(x,y) drops to zero and below, as a real physical liquid film can't have a negative thickness. This is one reason why the disjoining pressure term Π(h) is important in the theory.
One possible realistic form of the disjoining pressure term is

$$\Pi(h) = B\left[\left(\frac{h_*}{h}\right)^n - \left(\frac{h_*}{h}\right)^m\right],$$

where B, h*, m and n are parameters with n > m > 1. These constants and the surface tension can be approximately related to the equilibrium liquid-solid contact angle $\theta_e$ through the Frumkin–Derjaguin relation

$$\cos\theta_e = 1 - \frac{B\,h_*}{\gamma}\,\frac{n - m}{(n-1)(m-1)}.$$
The thin film equation can be used to simulate several behaviors of liquids, such as the fingering instability in gravity driven flow.
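Such simulations can be sketched numerically. The example below is a minimal 1-D sketch, not a method from any cited study: gravity, shear, and disjoining pressure are dropped, the prefactor γ/3μ is scaled to unity so the equation reduces to $h_t = -(h^3 h_{xxx})_x$, and an explicit conservative finite-difference scheme is used on a periodic domain (explicit stepping of this fourth-order equation requires a very small time step, roughly $\Delta t \lesssim \Delta x^4/8$).

```python
import numpy as np

def step_thin_film(h, dx, dt):
    """One explicit conservative step of h_t = -(h^3 h_xxx)_x on a periodic grid."""
    # p = h_xx by central differences
    p = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
    # flux q_{i+1/2} = h^3 * dp/dx evaluated at cell faces
    h_face = 0.5 * (h + np.roll(h, -1))
    q = h_face**3 * (np.roll(p, -1) - p) / dx
    # conservative update h_t = -dq/dx, so total mass sum(h)*dx is preserved
    return h - dt * (q - np.roll(q, 1)) / dx

N = 64
dx = 1.0 / N
x = dx * np.arange(N)
h = 1.0 + 0.1 * np.cos(2.0 * np.pi * x)   # perturbed flat film
mass0 = h.sum() * dx
for _ in range(25000):
    h = step_thin_film(h, dx, dt=4e-9)    # dt chosen below the stability limit
```

Surface tension damps the perturbation (the linearized equation is $h_t = -h_{xxxx}$, so the cosine mode decays like $e^{-k^4 t}$), while the conservative form keeps the film volume constant to machine precision; a gravity-driven contact-line version of the same scheme is what exhibits the fingering instability mentioned above.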
The lack of a second-order time derivative in the thin-film equation is a result of the assumption of a small Reynolds number in its derivation, which allows the inertial terms dependent on the fluid density ρ to be neglected. This is somewhat similar to the situation with Washburn's equation, which describes the capillarity-driven flow of a liquid in a thin tube.
See also
Partial differential equation
Lubrication theory
Disjoining pressure
References
External links
Viscous Thin Films - Max Planck Institute
Equations of fluid dynamics | Thin-film equation | [
"Physics",
"Chemistry"
] | 864 | [
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics"
] |
53,365,733 | https://en.wikipedia.org/wiki/Femtech | Femtech (or female technology) is a term used to define software and services that use technology tailored towards women's health. This includes fertility solutions, period-tracking apps, pregnancy and nursing care, women's sexual wellness, and reproductive system health care. While there are several different aspects of women's health femtech applies to, femtech mainly focuses on menstruation care through period-tracking apps. Before femtech was officially established, Luna Luna, created by a firm in Japan, helped women keep track of their menstruation cycles.
Femtech did not become an official term until 2016, when Clue co-founders Hans Raffauf, Ida Tin and the Clue team created the term. Femtech is specifically focused on applications for women's health. These applications come in several different forms, such as mobile apps and medical devices. The industry has grown and continues to grow since femtech was officially established. By 2025, with continuous growth, the femtech industry could be worth $50 billion.
Companies and products
There are numerous femtech companies offering a variety of different products throughout the world, such as Clue, DOT, Glow, Eve, Cycles, My Calendar, Life, FertilityIQ, Extend Fertility, Forte Medical, Flo, Lady Cycle and others. Companies that offer services like IVF, egg freezing, and medical treatments include Univfy, Progyny, Apricity and Prelude Fertility. Valley Electronics created the original fertility tracking tech device, called the Lady-Comp fertility tracker, which was first produced in Germany in 1986 and has a modernized model still on the market in addition to a newer variant of fertility tracking device called the Daysy fertility tracker, which was the first device to pair a fertility tracker with an app. Similarly, the fertility company, Ava, produces a wearable that tracks fertility. Nurx provides a telemedicine service where women can get birth control prescribed via an app, and have the pills delivered. Twentyeight Health, another birth control delivery service, takes this model a step further by providing resources for underserved women and Medicaid populations.
Several companies also produce internet-connected medical devices that are often paired with mobile apps to track specific data. For instance, Elvie and Willow produce a wearable breast pump. The Elvie breast pump also connects to an app. Elvie also offers a kegel-tracking device. In 2020, Kegg launched a 2-in-1 fertility tracker that senses electrolyte levels of cervical fluid and assists the user in pelvic floor exercises.
Lioness produces a smart vibrator with an app that uses biofeedback to help users learn more about their bodies. Other medical devices and implements produced in the femtech category may or may not use an internet connection. Joylux is a women's health technology company creating medical and feminine wellness devices under the vSculpt and vFit brands. Companies like L. and Flex offer alternatives to standard tampon and condom products. Thinx sells reusable underwear that absorbs menstrual blood. iPulse Medical sells a menstrual pain relief wearable device.
Swedish company Natural Cycles was the first to receive official approval to market its app as digital contraception in the European Union and in August 2018, the Food and Drug Administration approved marketing in the U.S. Controversy around the app as a contraceptive device grew stronger after numerous women in Stockholm reported unplanned pregnancies after using the app. After Swedish authorities concluded the investigation, the amount of unintended pregnancies was found to be in line with claims made by Natural Cycles.
Companies in the breastfeeding and breast-pumping space like Milk Stork provide services such as breast milk shipping.
Ethics
There have been concerns about data-sharing practices in femtech, particularly within fertility trackers. This issue has affected several applications outside of femtech, but due to the sensitivity of the data being shared within femtech, it becomes more urgent. In the aftermath of the overturning of Roe v. Wade and the laws passed by states banning abortion, there was widespread fear that femtech would be weaponized to monitor women and whether or not they get an abortion. Some apps have come under fire for ambiguous privacy ethics after it emerged that user data had been shared without consent with companies such as Facebook. This allowed Facebook, and other companies that it shares its data with, to target users with fertility- or pregnancy-related products based on which point of the monthly menstrual cycle they were at. Flo, a period-tracking app which collects personal data from users, has also sold that data and attempted to conceal who it was sold to. After FTC intervention, Flo is no longer able to conceal what it does with user data and must ask for users' consent before sharing it. However, though Flo was stopped, companies such as Facebook and other period-tracking apps continue to share user data. Some have argued this is harmful, as it assumes things such as intended eventual pregnancy and disregards alternate conception outcomes such as termination or miscarriage. There have been additional concerns about femtech apps reporting false information regarding users' reproductive health. While the intention behind femtech is to give visibility to women's health and empower women, there have been several issues with femtech perpetuating social inequalities, such as sexist stereotypes, going against its original goal. Feminists who have studied femtech closely came to the conclusion that rather than empowering women, it is exploiting the anxieties women have about their health.
The main issues are medical reliability, privacy, gender stereotyping and epistemic injustice. Proposals to combat data-sharing practices have arisen through the use of ethics-by-design tools that stem from the capability sensitive design (CSD) framework. However, it is more of a theoretical framework rather than a permanent solution. There has yet to be a permanent solution presented.
Access
While there are several advantages to femtech, it is not accessible to all women, specifically women in low-income countries; over 44 million women in those countries lack access to the services offered. Period-tracking apps, for example, assume users have smartphones on which to download and use their applications. Within digital health more broadly, only 3% of investment deals have focused on women's health, with the rest of that focus going to men's health. With women's health already not being prioritized, reframing femtech to consider women globally is becoming a necessity. To close that gap, "e-hybrid" prenatal care has been proposed, which would allow flexibility in providing services to pregnant women and the kind of care that they need, specifically for women in low-income countries. However, it is mainly a potential model rather than a solid solution, with many obstacles to overcome before it could actually be implemented. Femtech could start to move in the direction of operating in a global context in the meantime, though it could be some time before that happens.
References
External links
Medical technology
Women's health | Femtech | [
"Biology"
] | 1,479 | [
"Medical technology"
] |
53,365,898 | https://en.wikipedia.org/wiki/Earliest%20known%20life%20forms | The earliest known life forms on Earth may be as old as 4.1 billion years (or Ga) according to biologically fractionated graphite inside a single zircon grain in the Jack Hills range of Australia. The earliest evidence of life found in a stratigraphic unit, not just a single mineral grain, is the 3.7 Ga metasedimentary rocks containing graphite from the Isua Supracrustal Belt in Greenland. The earliest direct evidence of life on Earth comes from stromatolite fossils which have been found in 3.480-billion-year-old geyserite uncovered in the Dresser Formation of the Pilbara Craton of Western Australia. Various microfossils of microorganisms have been found in 3.4 Ga rocks, including 3.465-billion-year-old Apex chert rocks from the same Australian craton region, and in 3.42 Ga hydrothermal vent precipitates from Barberton, South Africa. Much later in the geologic record, likely starting by 1.73 Ga, preserved molecular compounds of biologic origin are indicative of aerobic life. Therefore, life on Earth originated at least 3.5 billion years ago, and possibly as early as 4.1 billion years ago, not long after the oceans formed 4.5 billion years ago and the Earth itself formed 4.54 billion years ago.
Biospheres
Earth is the only place in the universe known to harbor life, where it exists in multiple environments. The origin of life on Earth was at least 3.5 billion years ago, possibly as early as 3.8-4.1 billion years ago. Since its emergence, life has persisted in several geological environments. The Earth's biosphere extends down to at least below the seafloor, up to into the atmosphere, and includes soil, hydrothermal vents, and rock. Further, the biosphere has been found to extend at least below the ice of Antarctica and includes the deepest parts of the ocean. In July 2020, marine biologists reported that mainly aerobic microorganisms in "quasi-suspended animation" were found in organically poor sediment below the seafloor in the South Pacific Gyre (SPG) ("the deadest spot in the ocean"). Microbes have been found in the Atacama Desert in Chile, one of the driest places on Earth, and in deep-sea hydrothermal vent environments which can reach temperatures over 400°C. Microbial communities can also survive in cold permafrost conditions down to -25°C. Under certain test conditions, life forms have been observed to survive in the vacuum of outer space. More recently, studies conducted on the International Space Station found that bacteria could survive in outer space. In February 2023, findings of a "dark microbiome" of microbial dark matter (unfamiliar microorganisms) in the Atacama Desert in Chile, a Mars-like region of planet Earth, were reported.
Geochemical evidence
The age of Earth is about 4.54 billion years; the earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago according to the stromatolite record. Some computer models suggest life began as early as 4.5 billion years ago. The oldest evidence of life is indirect in the form of isotopic fractionation. Microorganisms will preferentially use the lighter isotope of an atom to build biomass, as it takes less energy to break the bonds for metabolic processes. Biologic material will often have a composition that is enriched in lighter isotopes compared to the surrounding rock it's found in. Carbon isotopes, expressed scientifically in parts per thousand difference from a standard as δ13C, are frequently used to detect carbon fixation by organisms and assess if purported early life evidence has biological origins. Typically, life will preferentially metabolize the isotopically light 12C isotope instead of the heavier 13C isotope. Biologic material can record this fractionation of carbon.
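The δ13C notation used throughout this record can be written explicitly. By convention, the isotope ratio of a sample is compared with a reference standard (today usually Vienna Pee Dee Belemnite, VPDB) and reported in parts per thousand:

```latex
\delta^{13}\mathrm{C} =
\left(
  \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
       {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
  - 1
\right) \times 1000\ \text{\textperthousand}
```

Because enzymatic carbon fixation favors the lighter ¹²C, biologically processed carbon is depleted in ¹³C and typically shows negative δ13C values (organic matter is commonly in the range of roughly −20 to −30‰); this is the kind of isotopically light signature sought in the graphite finds discussed in this section.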
The oldest disputed geochemical evidence of life is isotopically light graphite inside a single zircon grain from the Jack Hills in Western Australia. The graphite showed a δ13C signature consistent with biogenic carbon on Earth. Other early evidence of life is found in rocks both from the Akilia Sequence and the Isua Supracrustal Belt (ISB) in Greenland. These 3.7 Ga metasedimentary rocks also contain graphite or graphite inclusions with carbon isotope signatures that suggest biological fractionation.
The primary issue with isotopic evidence of life is that abiotic processes can fractionate isotopes and produce similar signatures to biotic processes. Reassessment of the Akilia graphite shows that metamorphism, Fischer-Tropsch mechanisms in hydrothermal environments, and volcanic processes may be responsible for the enrichment in lighter carbon isotopes. The ISB rocks that contain the graphite may have experienced a change in composition from hot fluids, i.e. metasomatism, thus the graphite may have been formed by abiotic chemical reactions. However, the ISB graphite has become more widely accepted as biologic in origin after further spectral analysis.
Metasedimentary rocks from the 3.5 Ga Dresser Formation, which experienced less metamorphism than the sequences in Greenland, contain better preserved geochemical evidence. Carbon isotopes as well as sulfur isotopes found in barite, which are fractionated by microbial metabolisms during sulfate reduction, are consistent with biological processes. However, the Dresser formation was deposited in an active volcanic and hydrothermal environment, and abiotic processes could still be responsible for these fractionations. Many of these findings are supplemented by direct evidence, typically by the presence of microfossils, however.
Fossil evidence
Fossils are direct evidence of life. In the search for the earliest life, fossils are often supplemented by geochemical evidence. The fossil record does not extend as far back as the geochemical record due to metamorphic processes that erase fossils from geologic units.
Stromatolites
Stromatolites are laminated sedimentary structures created by photosynthetic organisms as they establish a microbial mat on a sediment surface. An important distinction for biogenicity is their convex-up structures and wavy laminations, which are typical of microbial communities who build preferentially toward the sun. A disputed report of stromatolites is from the 3.7 Ga Isua metasediments that show convex-up, conical, and domical morphologies. Further mineralogical analysis disagrees with the initial findings of internal convex-up laminae, a critical criterion for stromatolite identification, suggesting that the structures may be deformation features (i.e. boudins) caused by extensional tectonics in the Isua Supracrustal Belt.
The earliest direct evidence of life are stromatolites found in 3.48 billion-year-old chert in the Dresser formation of the Pilbara Craton in Western Australia. Several features in these fossils are difficult to explain with abiotic processes, for example, the thickening of laminae over flexure crests that is expected from more sunlight. Sulfur isotopes from barite veins in the stromatolites also favor a biologic origin. However, while most scientists accept their biogenicity, abiotic explanations for these fossils cannot be fully discarded due to their hydrothermal depositional environment and debated geochemical evidence.
Most Archean stromatolites older than 3.0 Ga are found in Australia or South Africa. Stratiform stromatolites from the Pilbara Craton have been identified in the 3.47 Ga Mount Ada Basalt. Barberton, South Africa hosts stratiform stromatolites in the 3.46 Ga Hooggenoeg, 3.42 Ga Kromberg and 3.33 Ga Mendon Formations of the Onverwacht Group. The 3.43 Ga Strelley Pool Formation in Western Australia hosts stromatolites that demonstrate vertical and horizontal changes that may record microbial communities responding to transient environmental conditions. Thus, it is likely that anoxygenic or oxygenic photosynthesis had been occurring since at least the deposition of the 3.43 Ga Strelley Pool Formation.
Microfossils
Claims of the earliest life using fossilized microorganisms (microfossils) come from hydrothermal vent precipitates from an ancient sea-bed in the Nuvvuagittuq Belt of Quebec, Canada. These may be as old as 4.28 billion years, which would make them the oldest evidence of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.41 billion years ago. These findings may be better explained by abiotic processes: for example, silica-rich waters, "chemical gardens," circulating hydrothermal fluids, and volcanic ejecta can produce morphologies similar to those presented in Nuvvuagittuq.
The 3.48 Ga Dresser formation hosts microfossils of prokaryotic filaments in silica veins, the earliest fossil evidence of life on Earth, but their origins may be volcanic. 3.465-billion-year-old Australian Apex chert rocks may once have contained microorganisms, although the validity of these findings has been contested. Putative filamentous microfossils, possibly of methanogens and/or methanotrophs that lived about 3.42 billion years ago in a paleo-subseafloor hydrothermal vein system, have been identified in the Barberton greenstone belt of South Africa. A diverse set of microfossil morphologies has been found in the 3.43 Ga Strelley Pool Formation, including spheroid, lenticular, and film-like microstructures. Their biogenicity is strengthened by their observed chemical preservation. The early lithification of these structures allowed important chemical tracers, such as the carbon-to-nitrogen ratio, to be retained at levels higher than is typical in older, metamorphosed rock units.
Molecular biomarkers
Biomarkers are compounds of biologic origin found in the geologic record that can be linked to past life. Although they are not preserved until the late Archean, they are important indicators of early photosynthetic life. Lipids are particularly useful biomarkers because they can survive for long periods of geologic time and can be used to reconstruct past environments.
Fossilized lipids were reported from 2.7 Ga laminated shales from the Pilbara Craton and the 2.67 Ga Kaapvaal Craton in South Africa. However, the age of these biomarkers and whether their deposition was synchronous with their host rocks were debated, and further work showed that the lipids were contaminants. The oldest "clearly indigenous" biomarkers are from the 1.64 Ga Barney Creek Formation in the McArthur Basin in Northern Australia, but hydrocarbons from the 1.73 Ga Wollogorang Formation in the same basin have also been detected.
Other indigenous biomarkers can be dated to the Mesoproterozoic era (1.6–1.0 Ga). The 1.4 Ga Hongshuizhuang Formation in the North China Craton (NCC) contains hydrocarbons in shales that were likely sourced from prokaryotes. Biomarkers were found in siltstones from the 1.38 Ga Roper Group of the McArthur Basin. Hydrocarbons possibly derived from bacteria and algae were reported in the 1.37 Ga Xiamaling Formation of the NCC. The 1.1 Ga Atar/El Mreïti Group in the Taoudeni Basin, Mauritania shows indigenous biomarkers in black shales.
Genomic evidence
By comparing the genomes of modern organisms (in the domains Bacteria and Archaea), it is evident that there was a last universal common ancestor (LUCA). LUCA is not thought to be the first life on Earth, but rather the only type of organism of its time to still have living descendants. In 2016, M. C. Weiss and colleagues proposed a minimal set of genes that each occurred in at least two groups of Bacteria and two groups of Archaea. They argued that such a distribution of genes would be unlikely to arise by horizontal gene transfer, and so any such genes must have derived from the LUCA. A molecular clock model suggests that the LUCA may have lived 4.477–4.519 billion years ago, within the Hadean eon.
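The arithmetic behind the simplest ("strict") molecular clock can be sketched in a few lines. This is an illustrative toy only: the estimate cited above comes from far more sophisticated Bayesian relaxed-clock models, and the distance and rate values below are hypothetical, chosen purely to show the calculation.

```python
# Toy strict-clock arithmetic; the study cited above used Bayesian
# relaxed-clock models, and the numbers here are hypothetical.

def divergence_time(d, r):
    """Divergence time in years under a strict molecular clock.

    d: genetic distance between two lineages (substitutions per site)
    r: substitution rate per lineage (substitutions per site per year)

    Two lineages diverging for time t accumulate d = 2 * r * t
    substitutions per site between them, so t = d / (2 * r).
    """
    return d / (2.0 * r)

# Hypothetical values chosen only to illustrate the arithmetic:
t = divergence_time(d=0.09, r=1e-11)
print(f"{t / 1e9:.1f} billion years")  # 4.5 billion years
```

Under a strict clock, doubling the assumed rate halves the inferred age, which is why rate calibration dominates the uncertainty in estimates of this kind.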
RNA replicators
Model Hadean-like geothermal microenvironments were demonstrated to have the potential to support the synthesis and replication of RNA and thus possibly the evolution of primitive life. Porous rock systems, comprising heated air-water interfaces, were shown to facilitate ribozyme catalyzed RNA replication of sense and antisense strands and then subsequent strand-dissociation. This enabled combined synthesis, release and folding of active ribozymes.
Further work on early life
Extraterrestrial origin for early life
While current geochemical evidence dates the origin of life to possibly as early as 4.1 Ga, and fossil evidence shows life at 3.5 Ga, some researchers speculate that life may have started nearly 4.5 billion years ago. According to biologist Stephen Blair Hedges, "If life arose relatively quickly on Earth ... then it could be common in the universe." The possibility that terrestrial life forms may have been seeded from outer space has been considered. In January 2018, a study found that 4.5 billion-year-old meteorites found on Earth contained liquid water along with prebiotic complex organic substances that may be ingredients for life.
Early life on land
As for life on land, in 2019 scientists reported the discovery of a fossilized fungus, named Ourasphaira giraldae, in the Canadian Arctic, that may have grown on land a billion years ago, well before plants are thought to have been living on land. The earliest life on land may have been bacteria 3.22 billion years ago. Evidence of microbial life on land may have been found in 3.48 billion-year-old geyserite in the Pilbara Craton of Western Australia.
Gallery
See also
Abiogenesis
Creation myth
Extremophile
First universal common ancestor
Hypothetical types of biochemistry
List of longest-living organisms
Oldest dated rocks
Outline of biology
Outline of life forms
Timeline of the evolutionary history of life
Viroid
Virus
References
External links
Vitae (BioLib)
Biota (Taxonomicon)
Life (Systema Naturae 2000)
Wikispecies — a free directory of life
Life in the Universe — Stephen Hawking (1996)
— Gary Ruvkun, 2019.
Biological evolution
Biology terminology
Earliest phenomena
Evolution | Earliest known life forms | [
"Biology"
] | 3,069 | [
"Evolutionary biology",
"Tree of life (biology)",
"nan"
] |
53,366,480 | https://en.wikipedia.org/wiki/Infanticide%20in%20primates | Infanticide in non-human primates occurs when an individual kills its own or another individual's dependent young. Five hypotheses have been proposed to explain infanticide in non-human primates: exploitation, resource competition, parental manipulation, sexual selection, and social pathology.
Hypotheses for infanticide
Exploitation
Infanticide in non-human primates occurs as a result of exploitation when the individuals performing the infanticide directly benefit from consumption or use of their victim. The individual can become a resource: food (cannibalism), a protective buffer against aggression, or a prop to obtain maternal experience.
The form of exploitation in non-human primates most attributable to adult females occurs when non-lactating females take an infant from its mother (allomothering) and forcibly retain it until it starves. This behavior is known as the "aunting to death" phenomenon; these non-lactating female primates gain mothering-like experience, yet lack the resources to feed the infant. This behaviour has been seen in captive bonobos, but not wild ones. It is not clear if it is a natural bonobo trait or the result of living in captivity. Male orangutans have not been directly observed practicing infanticide as a reproductive strategy, but one recorded case of a male abducting an infant, which nearly died of dehydration, has been documented. Additionally, a possible case of infanticide has been inferred, in which a mother orangutan had lost an infant and received a serious injury on her foot shortly after a new male had been introduced nearby. Although the attack was not directly observed, it is inferred that this male attacked the female and killed her infant.
Resource competition
Resource competition results when there are too few resources in a particular area to support the existing population. In primates, resource competition is a prime motivator for infanticide. Infanticide motivated by resource competition can occur both outside of and within familial groups. Dominant, high-ranking female chimpanzees have been shown to aggress more often toward a lower-ranking female and her infant due to resource competition. Primates from outside of familial groups might infiltrate areas and kill infants from other groups to eliminate competition for resources. When resources are limited, infants are easier to eliminate from the competition pool than other group members because they are the most defenseless, and thus they become targets of infanticide. Primate infanticide motivated by resource competition can also involve cannibalizing the infant as a source of nutrition.
Resource competition is also a primary motivator in inter-species infanticide, the killing of infants of one species by another species. By eliminating infants of another species in the same environment, the aggressor increases the probability that it and its own infants will obtain more resources. This behavior has been observed as a consequence of multiple primate inter-species conflicts. In these cases, direct aggression toward inter-specific infants, in addition to infanticide, has also been observed. In these instances, the aggressor had previously been the target of intra-species aggression, so the direct aggression and infanticide carried out by these aggressors could be attributed to re-directed aggression.
Parental manipulation
Maternal infanticide
Maternal infanticide, the killing of dependent young by the mother, is rare in non-human primates and has been reported only a handful of times. It has been reported once in brown mantled tamarins, Saguinus fuscicollis, once in black-fronted titis, Callicebus nigrifrons, and four times in mustached tamarins, Saguinus mystax. It is proposed that maternal infanticide occurs when the mother assesses the probability of infant survival based on previous infant deaths. If it is unlikely that the infant will survive, infanticide may occur. This may allow the mother to invest more in her current or future offspring, leading to a greater net reproductive fitness in the mother.
In the instances of maternal infanticide in tamarins, there were multiple breeding females. The parental manipulation hypothesis proposes that maternal infanticide occurs more frequently when the group has a poor capacity to raise offspring, multiple breeding females, birth intervals shorter than three months, and low infant survival probability.
Maternal infanticide differs from other varieties of infanticide in that the resource competition and sexual selection hypotheses (see other sections) must be rejected. Resource competition and sexual selection are ruled out because it is the mother that is performing the infanticide, not another female.
In one case of maternal infanticide in wild black-fronted titi monkeys (Callicebus nigrifrons), the deceased infant was clinically healthy, with no signs of health abnormalities. Infanticide therefore did not appear to occur due to low viability of the infant. Additionally, overcrowding and feeding competition were not factors in the infanticide. In this case, the infanticide served no clear function; the reason for infanticide in black-fronted titi monkeys is currently unknown.
Sexual selection
Sexual competition
Infanticide increases a male's reproductive success when he takes over a new troop of females. This behavior has been observed in langurs who live in single male breeding groups. The females whose infants were killed exhibited estrous behavior and copulated with the new leader. These effects result from acceleration of the termination of lactational amenorrhea. This provides an advantage to the male because the female will more quickly copulate with him and raise his young rather than the young from the previous mate; his fitness increases through use of infanticide. Infanticide in one-male breeding units has also been observed in red-tailed monkeys and blue monkeys. In addition to single male breeding groups, sexually selected infanticide often occurs in multi-male, multi-female breeding groups including the red howler and the mantled howler. Adult Japanese macaque males were eight times more likely to attack infants when females had not mated with the male himself.
Infanticide by females other than the mother has been observed in wild groups of common marmosets (Callithrix jacchus). Most cases of such behavior have been attributed to the resource competition hypothesis, in which a female can gain more access to resources for herself and her young by killing unrelated infants. Although commonly used in the context of food or shelter, the resource competition model can be applied to other limited resources, such as breeding opportunities or access to helpers. Most callitrichids have restrictive breeding patterns, which would be compatible with the model, but this infanticide behavior has only been documented in wild groups of common marmosets and not in wild groups of other callitrichid species. The higher frequency in common marmosets may be due to a variety of social, reproductive, and ecological characteristics - including a higher likelihood of overlapping pregnancies and births (due to short intervals between births), habitat saturation, and lower costs of infant care compared to other callitrichids - that increase the chance of two breeding females inhabiting the same group, leading to more intense competition. In most observed cases in common marmosets, the socially dominant breeding female killed the infants of a subordinate female, allowing her to maintain her dominance.
Paternal infanticide
Paternal infanticide is rarely observed in non-human primates. In an extensive study of wild Japanese macaques that tracked instances of infanticide, DNA analysis revealed that males would not attack their own offspring or the offspring of a female with whom they had mated. Further, females in the study were found to be motivated to form social bonds with males in order to protect their offspring from infanticide.
Social pathology: role of social organization
In mammals, interaction between the sexes is usually limited to the female estrous or copulation. However, in non-human primates, these male-female bonds persist past the estrous. Social relationships between males and females in primates are hypothesized to serve as protection against male infanticide. Year-round association serves to lower the probability of infanticide by other males. In addition, many primates live in multi-female groups, and it has been proposed that these females live together to reduce the risk of infanticide through paternity confusion or concealed ovulation. However, complex interactions can arise when females have different social rankings and when resource availability is threatened. Most often, dominant females opportunistically kill the young of a less dominant female when competition arises.
Adaptive counter adaptations to infanticide
Many primate species have developed counter adaptations to reduce the likelihood of infanticide. These strategies include physical defense, paternity confusion, reproduction suppression, and accelerated development.
Physical defense
The most immediate and obvious form of protection against infanticide is physical defense, wherein mothers either directly prevent aggressive acts toward their offspring or recruit other individuals for assistance. Female primates have been observed to actively defend territory from potentially infanticidal females, as seen in chimpanzees. In order to recruit non-parental assistance in defense, female chacma baboons utilize "friendships" with males, wherein the male forms a bond with the infant until weaning, which may serve to protect their offspring from aggression by higher ranking males or females.
To protect their young from infanticide, many species of primate mothers will form social monogamous pairs to prevent paternal infanticide. In these pairs, the males will mate with other females but live exclusively with one female as a socially monogamous pair. Forming this socially monogamous pair causes the males to form parental relationships and social bonds with the female's offspring. These bonds motivate males to defend their offspring against infanticide from unrelated individuals and to never commit infanticide against their own offspring. This form of social monogamy has been observed in gibbons, siamangs, baboons, and macaques.
One study demonstrated that for gorillas, living in harem-style groups reduces a female's risk of infanticide more than if she mated with multiple males. A female gorilla benefits more from protection by the silverback male, despite the fact that mating with only one male increased paternity certainty and thus increases the number of males in the population that would benefit reproductively from infanticide. However, it is likely that antipredation is also a closely linked motivation to the formation of gorilla social units.
Paternity confusion
Females utilize paternity confusion to reduce the likelihood that a male she has mated with will kill her offspring. There are several ways this is accomplished including concealed ovulation. Female catarrhine primates such as hanuman langurs have evolved an extended estrous state with variable ovulation in order to conceal the paternity of the fertilization. Another important situation in which paternity confusion can arise is when females mate with multiple males; this includes mating patterns such as polyandry and promiscuity in multi-male multi-female groups. Similar to promiscuous mating, female primates are proceptive during the first and second trimester of pregnancy in order to increase paternity confusion of their offspring. Finally, in multi-male multi-female groups, female synchrony, in which females are all fertile at the same time, can prohibit the dominant male from monopolizing all of the females. This also allows sneak copulations in which non-dominant males sire offspring. Female synchrony also serves to reduce risk of female infanticide by forcing potentially infanticidal females to focus on provisioning their own infants rather than acting aggressively. But there is some evidence to suggest that female synchrony serves to increase competition pressures and thus aggression in females.
Reproduction suppression
Females may also avoid the costs of continued reproductive investment when infanticide is likely. One such occurrence is known as the Bruce Effect, in which female primates may abort the pregnancy when presented with a new male. This has been observed in wild geladas, where a majority of females abort pregnancies following the displacement of a dominant male. Feticide is a related but distinct phenomenon by which physical or psychological trauma mediated by male behavior results in fetal loss. For example, in baboons at Amboseli, rates of fetal loss increase following the immigration of aggressive males.
In some social systems, lower-ranking primate females may delay reproduction to avoid infanticide by dominant females, as seen in common marmosets. In one instance, the dominant marmoset female killed the offspring of a subordinate female. This phenomenon of reproduction suppression is also well observed in tamarins.
Accelerated development
In order to reduce the amount of time that infants are particularly vulnerable to infanticide, females have been shown to wean infants earlier when risk of infanticide is high. For example, female white-headed leaf monkeys were observed to wean their infants significantly more quickly during male takeovers as compared to socially stable periods. Females with infants too young to be weaned left with the old males and returned after their offspring had fully weaned, again after a significantly shorter weaning period than during stable times.
See also
Infanticide
Infanticide (zoology)
Infanticide in carnivores
Infanticide in rodents
Sexual selection
Siblicide
References
Zoology
Primate behavior
Infanticide | Infanticide in primates | [
"Biology"
] | 2,705 | [
"Zoology"
] |
53,367,068 | https://en.wikipedia.org/wiki/Ronjon%20Nag | Ronjon Nag is a British-American inventor and entrepreneur specializing in the field of mobile technology. He co-founded the technology company Lexicus, acquired by Motorola in 1993 and Cellmania, acquired by Research in Motion in 2010. He later served as Vice-President of both Motorola and BlackBerry.
Education and personal life
Ronjon Nag received his bachelor's degree in 1984 from the University of Birmingham, where he studied Electronic & Electrical Engineering. He received a Master of Science degree in Management Science from MIT and studied neural networks in Stanford University's Department of Psychology. After completing a doctorate in engineering at Cambridge University, he studied as a Harkness Fellow in the United States at the Massachusetts Institute of Technology and Stanford University. In 2014 he became a Fellow of the Institution of Engineering and Technology, and in 2016 he became a fellow of the Stanford University Distinguished Careers Institute.
He divides his time between Cambridge in the United Kingdom and Silicon Valley in California.
Career in Technology
Nag's work has focused on inventing new systems for interacting with mobile devices, resulting in breakthroughs in the application of speech recognition, handwriting recognition, predictive text and touch screens for mobile devices. As a student at Cambridge University, Nag wrote an article applying a hidden Markov model to speech recognition, which became the basis for his PhD on the subject. In 1991 Ronjon Nag began researching artificial neural networks, first under Amar Gupta at MIT and then in Stanford University's Department of Psychology, studying under David Rumelhart. In 1992, Nag co-founded the technology company Lexicus in Palo Alto, California. As CEO and as a computer scientist, Nag oversaw the emergence of Lexicus as an industry pioneer of speech and predictive technology systems and saw the acquisition of Lexicus by Motorola in November 1993. As a subsidiary of Motorola, Lexicus introduced some of the first devices with Chinese handwriting and speech recognition. In 1999, he founded Cellmania, a mobile infrastructure company that provided digital rights management for mobile content, enabling the creation of some of the first mobile app stores. Cellmania was sold to Research in Motion, now BlackBerry Limited, in 2010 for an undisclosed sum.
Nag is President of the R42 Institute, which develops and funds AI and longevity projects. Nag is also Chairman of Bounce Imaging, which develops throwable cameras that stitch video while in flight and which won the $1m Verizon Powerful Answers Prize in 2015.
Awards
In 2014 Nag was the recipient of the Mountbatten Medal awarded by the Institution of Engineering and Technology. The award cited Nag's influence on the creation of the modern mobile phone industry with the development of smartphone components such as text and speech recognition and digital distribution platforms, technologies that were later incorporated widely into early smartphones developed by Motorola and BlackBerry.
In 2021, Nag's contributions in engineering entrepreneurship, smartphone user interfaces, and mobile app stores were acknowledged by the IEEE Santa Clara Valley Section (SCV) with an Outstanding Engineer award.
References
External links
Institution of Engineering and Technology Award Winners
Living people
American people of British descent
Electrical engineers
20th-century American inventors
Alumni of the University of Cambridge
Massachusetts Institute of Technology alumni
Fellows of the Institution of Engineering and Technology
Year of birth missing (living people) | Ronjon Nag | [
"Engineering"
] | 658 | [
"Fellows of the Institution of Engineering and Technology",
"Electrical engineering",
"Institution of Engineering and Technology",
"Electrical engineers"
] |
53,367,602 | https://en.wikipedia.org/wiki/SMiLE-Seq | Selective microfluidics-based ligand enrichment followed by sequencing (SMiLE-seq) is a technique developed for the rapid identification of DNA binding specificities and affinities of full length monomeric and dimeric transcription factors in a fast and semi-high-throughput fashion.
SMiLE-seq works by loading in vitro transcribed and translated “bait” transcription factors into a microfluidic device in combination with DNA molecules. Bound transcription factor-DNA complexes are then isolated from the device, which is followed by sequencing and then sequence data analysis to characterize binding motifs. Specialized software is used to determine the DNA binding properties of monomeric or dimeric transcription factors to help predict their in vivo DNA binding activity.
SMiLE-seq combines three important features that distinguish it from existing techniques: (1) the use of capillary pumps to optimize the loading of samples; (2) trapping molecular interactions on the surface of the microfluidic device through immunocapture of target transcription factors; and (3) enabling the selection of DNA that is specifically bound to transcription factors from a pool of random DNA sequences.
Background
Elucidating the regulatory mechanisms used to govern essential cellular processes is an important branch of research. Cellular regulatory networks can be very complex and often involve the coordination of multiple processes that begin with the modulation of gene expression. The binding of transcription factor molecules to DNA, either alone or in combination with other transcription factors, is used to control gene expression in response to both intra- and extracellular stimuli.
Characterizing the binding mechanisms and specificities of transcription factors to specific regions of DNA – and identifying these transcription factors – is a fundamental component of the process of resolving cellular regulatory dynamics. Before the introduction of SMiLE-seq technology, ChIP-seq (chromatin immunoprecipitation sequencing) and HT-SELEX (high throughput systematic evolution of ligands by exponential enrichment) technologies were used to successfully characterize nearly 500 transcription factor-DNA binding interactions.
ChIP-seq uses immunoprecipitation to isolate specific transcription factors bound to DNA fragments. Immunoprecipitation is followed by DNA sequencing, which identifies the genomic regions to which transcription factors bind.
HT-SELEX, a similar method, uses random, synthetically generated DNA molecules as bait for transcription factors in vitro. Sequence preferences and binding affinities are characterized based on successful binding interactions between bait molecules and transcription factors.
It is estimated that fewer than 50% of the transcription factors present in humans have been described in previous techniques. The development of SMiLE-seq technology has provided an additional method with the potential to facilitate identification and characterization of previously undescribed transcription factor-DNA binding interactions.
Workflow of SMiLE-seq
SMiLE-seq uses a microfluidic device into which transcription factors, which have been transcribed and translated in vitro, are loaded. Transcription factor samples (~0.3 ng) are modified by the addition of an enhanced green fluorescent protein (eGFP) tag and combined with both target double-stranded DNA molecules (~8 pmol) tagged with Cyanine Dye5 (Cy5) and a double-stranded competitive DNA model, poly-dIdC, which operates as a negative control to limit spurious binding interactions.
When multiple transcription factors are simultaneously analyzed (e.g., when characterization of potential heterodimeric binding interactions is performed), each transcription factor is tagged with a correspondingly unique fluorescent tag. Samples are pumped through the microfluidic device in a passive, twenty-minute process that utilizes capillary action in a series of parallel channels. eGFP-tagged transcription factors are immunocaptured using anchored biotinylated anti-eGFP antibodies.
Mechanical depression of a button traps bound transcription factor-DNA complexes, and fluorescent analysis is performed. Fluorescent readouts that identify the presence of multiple fluorescent tags associated with a single antibody indicate heterodimeric binding interactions. The presence of DNA is confirmed by Cy5 signal detection. A polydimethylsiloxane membrane on the button surface captures successfully bound transcription factor-DNA complexes, while unbound transcription factors and targets are washed away.
Following the removal of unbound components, bound DNA molecules are collected, pooled, and amplified. Sequencing is subsequently performed using NextSeq 500 or HiSeq2000 sequencing lanes. Sequence data is used to develop a seed sequence, which is then probed for functional motifs using a uniquely developed hidden Markov model-based software pipeline.
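To give a flavor of the sequence-analysis step described above, the sketch below scans reads for a seed motif using a position weight matrix (PWM). This is a deliberate simplification: the actual SMiLE-seq pipeline fits hidden Markov models, and the aligned binding sites and reads shown here are hypothetical examples, not data from the method.

```python
# Simplified motif scanning over enriched reads. The real SMiLE-seq
# pipeline is HMM-based; a PWM scan is a common, simpler relative.
import math

def pwm_from_sites(sites, pseudocount=0.5):
    """Build a log-odds position weight matrix from aligned binding sites."""
    length = len(sites[0])
    pwm = []
    for i in range(length):
        counts = {base: pseudocount for base in "ACGT"}
        for site in sites:
            counts[site[i]] += 1
        total = sum(counts.values())
        # Log-odds score against a uniform 25% background frequency.
        pwm.append({base: math.log2((counts[base] / total) / 0.25)
                    for base in "ACGT"})
    return pwm

def best_score(seq, pwm):
    """Return the highest PWM score over all windows of the sequence."""
    width = len(pwm)
    return max(sum(pwm[i][seq[j + i]] for i in range(width))
               for j in range(len(seq) - width + 1))

# Hypothetical aligned sites recovered from sequenced, enriched DNA:
sites = ["TGACTCA", "TGACTCA", "TGAGTCA", "TGACTCA"]
pwm = pwm_from_sites(sites)
# A read containing the consensus scores above a motif-free read:
print(best_score("GGTGACTCAGG", pwm) > best_score("GGAAAAAAAGG", pwm))  # True
```

In real pipelines the seed motif is refined iteratively against the full read pool, and the scoring model accounts for variable-length gaps, which is one reason an HMM is preferred over a fixed-width PWM.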
Advantages
The use of microfluidics in SMiLE-seq offers three main advantages when compared to current techniques used to measure protein-DNA interactions (e.g., ChIP-seq, HT-SELEX, and protein binding microarrays).
SMiLE-seq requires less transcription factor material than other similar techniques (only picograms are required).
The process is faster than other techniques (it requires less than an hour, as compared to days).
SMiLE-seq is not limited by the length of target DNA (a limitation of protein binding microarrays), and is not biased towards stronger affinity protein-DNA interactions (a major limitation of HT-SELEX).
The ability of many transcription factors to bind DNA is dependent on heterodimer formation, and therefore requires the presence of a specific dimer partner for binding. This has been shown to yield incomplete results if transcription factors are individually tested. Heterodimer combinations have been shown to range from 3000 to 25000, and many remain uncharacterized.
A technology like SMiLE-seq, which is able to detect these dimeric interactions, may help broaden current knowledge and characterization of transcription factor-DNA binding profiles. Additionally, previous technologies have used transcription factor probes in their truncated form, which may reduce their ability to bind and dimerize. SMiLE-seq enables robust identification of DNA binding specificities of full length, previously uncharacterized transcription factors. Furthermore, SMiLE-seq is able to identify transcription factor binding sites over a wide range of binding affinities, which represents a significant limitation of other technologies.
Limitations
The primary limitation of SMiLE-seq is that the technique can only be used to characterize the binding interactions of previously identified transcription factors, as the method requires in vitro transcription and translation of the transcription factors prior to their combination with DNA molecules. Additionally, previous studies have shown that fluorescent protein tags can affect the binding affinity of proteins to their targets.
The effect of the specific fluorescent protein tags on binding affinity would have to be investigated to determine whether this would impact specific protein-DNA interactions found using this technology. Further development of SMiLE-seq may involve modifying transcription factor expression conditions to increase the success of analysis.
See also
SELEX
ChIP-seq
Protein binding microarrays
Competition-ChIP
References
Protein methods
Molecular biology techniques
Biotechnology
DNA | SMiLE-Seq | [
"Chemistry",
"Biology"
] | 1,418 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology"
] |
53,367,809 | https://en.wikipedia.org/wiki/China%20Institute%20of%20Atomic%20Energy | The China Institute of Atomic Energy or CIAE (), formerly the Institute of Atomic Energy of the Chinese Academy of Sciences, is the main research institute of the China National Nuclear Corporation (CNNC).
Founded in 1950, it conducts research in the fields of nuclear physics, nuclear engineering, radiochemistry, and in the development of nuclear technology.
See also
Nuclear power in China
References
Nuclear technology in China
Nuclear research institutes
Military research of China
1950 establishments in China | China Institute of Atomic Energy | [
"Engineering"
] | 94 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
53,368,771 | https://en.wikipedia.org/wiki/Pharmacomicrobiomics | Pharmacomicrobiomics, proposed by Prof. Marco Candela for the ERC-2009-StG project call (proposal n. 242860, titled "PharmacoMICROBIOMICS, study of the microbiome determinants of the different drug responses between individuals"), and publicly coined for the first time in 2010 by Rizkallah et al. (from Ramy K. Aziz research group), is defined as the effect of microbiome variations on drug disposition, action, and toxicity. Pharmacomicrobiomics is concerned with the interaction between xenobiotics, or foreign compounds, and the gut microbiome. It is estimated that over 100 trillion prokaryotes representing more than 1000 species reside in the gut. Within the gut, microbes help modulate developmental, immunological and nutrition host functions. The aggregate genome of microbes extends the metabolic capabilities of humans, allowing them to capture nutrients from diverse sources. Namely, through the secretion of enzymes that assist in the metabolism of chemicals foreign to the body, modification of liver and intestinal enzymes, and modulation of the expression of human metabolic genes, microbes can significantly impact the ingestion of xenobiotics.
Efforts to understand the interaction between specific xenobiotics and the microbiome have traditionally involved the use of in vivo as well as in vitro models. Recently, next generation sequencing of genomic DNA obtained from a community of microbes has been used to identify organisms within microbial communities, allowing for accurate profiles of the composition of microbes within an environment. Initiatives such as the Human Microbiome Project (HMP) have aimed to characterize the microbial composition of the oral, gut, vaginal, skin and nasal environments. This and other microbiome characterization projects have accelerated the study of pharmacomicrobiomics. An extensive understanding of the microbiome in the human body can lead to the development of novel therapeutics and personalized drug treatments that are not potentiated or activated by processes carried out by the microbiome.
History
In a 1973 paper, Ronald Scheline stated that the gastrointestinal microbiome has the ability to act as an organ with metabolic potential at least equal to the liver. Since then, the importance of the human microbiome in mediating health and disease has been acknowledged, and specific interactions between xenobiotics and microbes have been characterized using in vitro or in vivo methods. However, few studies have taken into account the complete metabolic profile, leading some to say that the microbiome's cumulative role in xenobiotic metabolism and toxicology has largely remained unexplored. It is reported that 84% of the top-selling pharmaceuticals in the US and Europe are administered orally, making it the most common mode of drug administration. The implication of this is that a large proportion of drugs, especially those with low solubility and permeability, encounter the microbiome and are subject to reductive and hydrolytic reactions.
The view of the human microbiome as an organ is quite common in scientific literature; however, it is more biologically correct to view it as a cloud, since a 'microbiome cloud model' better reflects the uncertainty associated with the dynamic composition of the microbiome. Understanding this variability is key to understanding and modulating pharmacomicrobiomic interactions. The same patient can respond properly to a drug on a given day and then, after the microbiome has shifted dramatically following an infection, antimicrobial therapy, or radiation therapy, for example, respond to the same drug very differently.
Sequencing technologies such as 16S rRNA and shotgun metagenomic sequencing have facilitated the rapid expansion of the pharmacomicrobiomics field by capturing organismal diversity in microbial communities. The Human Microbiome Project and METAgenomics of the Human Intestinal Tract (MetaHIT), established in 2007 and 2008, respectively, aimed to characterize the variation in human microbiomes. These large-scale projects are foundational to pharmacomicrobiomic studies, as they allow for the generation of statistical models that can take into account variation in microbial composition across individuals.
History of the term
The term 'pharmacomicrobiomics' was first proposed in the literature in 2010 and subsequently, in 2011, the domains 'pharmacomicrobiomics.org' and 'pharmacomicrobiomics.com' were released. A team of freshly graduated pharmacy students (Mariam Rizkallah and Rama Saad) built and published the first public database with that name, "PharmacoMicrobiomics" (with a capital M for branding). Since then, the term has appeared in PubMed year after year, passing the 50-publication mark 11 years later.
Methods to elucidate microbiome composition
Animal models
Interactions between xenobiotics and the host microbiome have primarily been assessed through the use of in vivo animal models, as it is difficult to model the natural human gut. In general, the pattern of bacterial colonization is the same in different animals, with both pH and the number of microorganisms gradually increasing from the small intestine towards the ileo-caecal junction of the large intestine. Germ-free rats colonized with human faecal matter are generally regarded as the gold standard in animal modeling of gut microbial environment. However, enzyme activity can vary greatly between organisms.
In vitro models
Microbes found in human fecal samples are fairly representative of the gut microbiome, and are used frequently in in vitro cultures. A variety of in vitro microbial modelling techniques have also been developed. Static batch culturing consists of plating bacteria without replenishing the media at regular intervals. Semi-continuous culture systems allow for the addition of medium without disrupting bacterial growth, and include pH control capabilities. The continuous culture system more closely resembles that of the gut, as it continuously replenishes and removes culture medium. The Simulator of the Human Intestinal Microbial Ecosystem (SHIME) models the small and large intestine through the use of a five-stage reactor, and includes numerous ports for continuous monitoring of pH and volume. Most recently, researchers improved on SHIME by including a computer-controlled peristaltic wave to circulate chyme throughout the apparatus. These technologies have given researchers close control over the culturing environment, facilitating the discovery of interactions between xenobiotics and microbes.
High-throughput sequencing
16S rRNA Sequencing
16S ribosomal RNA is the most common housekeeping genetic marker for classifying and identifying bacterial species, as it is present in all bacterial species, has an identical function in most organisms, and is large enough (~1,500 bp) to capture sufficient variation to distinguish bacteria. The sequence of 16S rRNA consists of highly conserved sequences which alternate with nine windows of "hypervariable regions". This allows universal primers to be used to sequence many species at a time, and provides the possibility of distinguishing bacteria from the variable regions alone. Many papers suggest that 16S rRNA gene sequencing provides genus identification in >90% of cases, but species-level identification in only about 65 to 83% of cases. The Ribosomal Database Project (RDP) and SILVA databases contain sequence information for rRNA in bacteria, eukarya and archaea.
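As an illustration of the scheme above, the toy sketch below mimics how conserved flanks act as universal primer sites while the variable region between them distinguishes taxa. All primer and read sequences here are invented for illustration and are not real 16S data:

```python
# Toy illustration of 16S-style identification: "universal primers" anneal to
# conserved flanking sequences, and the hypervariable region between them is
# what distinguishes taxa. All sequences are invented, not real 16S data.
from collections import Counter

FWD_PRIMER = "AGAGTTT"   # hypothetical conserved forward primer site
REV_PRIMER = "GGTTACC"   # hypothetical conserved reverse primer site

def extract_variable_region(seq):
    """Return the stretch between the two conserved primer sites, or None."""
    start = seq.find(FWD_PRIMER)
    end = seq.find(REV_PRIMER, start + len(FWD_PRIMER))
    if start == -1 or end == -1:
        return None
    return seq[start + len(FWD_PRIMER):end]

# Reads from two "organisms": identical conserved flanks, differing middles.
reads = [
    "AGAGTTTCACGTGGTTACC",
    "AGAGTTTCACGTGGTTACC",
    "AGAGTTTTTGCAGGTTACC",
]

# Tally the distinct variable-region variants, a crude stand-in for binning
# reads into taxa.
variants = Counter(extract_variable_region(r) for r in reads)
print(variants)   # CACGT seen twice, TTGCA once
```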
Shotgun sequencing
Advances in high-throughput sequencing has facilitated shotgun metagenome sequencing (SMS), a technology that provides a broader characterization of microbial samples by sequencing a larger number of genes in each organism. SMS involves collecting microbial samples from the environment, isolating DNA, shearing the DNA into small fragments, and then performing whole genome sequencing (WGS). Reads can be assembled de novo or using reference genomes. However, SMS is not without limitations. Reads may overlap and prevent accurate alignment to reference genomes. In addition, reads may be contaminated by human DNA sequence, confounding results. In reference-based assembly, reads may also be biased towards species which have publicly available reference genomes.
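The reference-based assignment step can be sketched with a toy k-mer matcher. The mini "genomes" and reads below are invented, and real pipelines use proper aligners that must also cope with k-mers shared across genomes, which is one source of the alignment ambiguity noted above:

```python
# Toy sketch of reference-based read assignment in shotgun metagenomics:
# a read is attributed to the reference "genomes" it shares k-mers with.
# All sequences are invented; real pipelines use proper aligners and must
# cope with k-mers shared across genomes, which makes assignment ambiguous.

def kmers(seq, k=4):
    """Set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

references = {                       # hypothetical mini reference genomes
    "species_A": "ATGGCGTACGTTAGC",
    "species_B": "TTCCGGAATCCGGTA",
}
ref_kmers = {name: kmers(g) for name, g in references.items()}

def assign(read, k=4):
    """Names of references sharing at least one k-mer with the read."""
    rk = kmers(read, k)
    return sorted(name for name, ks in ref_kmers.items() if rk & ks)

print(assign("GCGTACG"))   # ['species_A']
print(assign("CCGGAAT"))   # ['species_B']
print(assign("AAAAAAA"))   # []  (no reference match: unassigned)
```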
Composition of the microbiome
Individual Microbiomes
Gut
Within the intestines, the majority of microbes can be found in the large intestine, where the pH is higher and more conducive to survival. These bacteria are often more efficient than our own digestive enzymes, and function to digest protein and carbohydrates. Results from over 690 human microbiomes have shown that the majority of gut bacteria belong to four phyla: Bacillota, Bacteroidota, Actinomycetota, and Pseudomonadota.
Vagina
The vagina possesses over 200 phylotypes, the most predominant belonging to the phyla Bacillota, Bacteroidota, Actinomycetota, and Fusobacteriota. The secretion of lactic acid and hydrogen peroxide by Lactobacillus sp. can lower the pH, suppressing the growth of the bacteria that cause bacterial vaginosis.
Placenta
The first profile of microbes in healthy term pregnancies identified non-pathogenic commensal microbiota from the Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria phyla.
Oral cavity
Through the HMP, nine intraoral sites were investigated and found to be enriched in over 300 genera belonging to more than 20 bacterial phyla.
Human Microbiome Project
The Human Microbiome Project (HMP) was established in 2008 by the US National Institutes of Health (NIH). The overarching goal is to establish a comprehensive characterization of the human microbiota and its role in human health and disease, as well as to develop datasets and tools that scientists can use to study microbial populations. The specific initiatives are as follows:
Develop a reference set of microbial genome sequences for an initial characterization of the human microbiome.
Elucidate the relationship between disease and changes in the human microbiome.
Develop technologies for computational analysis, namely methods for sequencing individual microbes or all members of complex populations simultaneously.
Establish a Data Analysis and Coordinating Center to provide publicly available information about the project, outcomes, and raw data.
Establish research repositories to store materials and reagents used in the HMP. This includes cultured organisms and metagenomic DNA samples.
Examine ethical, legal, and social implications of HMP research.
The primary means of characterization is through 16S rRNA sequencing and shotgun metagenomic sequencing. Body sites that are sampled include skin, oral cavity, gut, vagina and nasal cavity. The HMP website includes sequence, metabolic reconstruction, and community profile data. These datasets have been used to associate certain clinical variables with microbiome composition.
Known Drug Interactions
Microbiota-mediated interference in xenobiotic activity
The microbiome can significantly affect the potency of a pharmaceutical drug. Even though most drugs are absorbed in the upper part of the large intestine, long-acting drugs that are exposed to the microbe-rich area of the lower intestine can be affected by microbial metabolism. For instance, chloramphenicol may cause bone marrow aplasia following oral administration, due to the presence of coliforms that convert chloramphenicol to its toxic form, known as p-aminophenyl-2-amin-1,2-propanediol. In addition, altered abundances of Eggerthella lenta between populations have been found to affect the metabolism of digoxin, potentiating both its activity and toxicity. A non-exhaustive list of drugs and the microbiota's role in potentiating/increasing their effect is provided below.
Xenobiotic mediated interference in microbiome composition
Even though pharmacomicrobiomics is often interpreted as the impact the microbiome has on xenobiotic metabolism, the term can also encompass the effects of xenobiotics on the microbiome and microbial genes. The impact of antibiotics on the human microbiome has been well studied. It has been shown that antibiotic therapies not only target a specific pathogen, but also the commensal inhabitants of a host. Evidence suggests that commensal bacteria levels in some cases are not normalized after antibiotic treatment, and in fact may be negatively affected for extended periods of time. A study which assessed the oral and gut microbes before, immediately after, and up to 12 months after exposure to antibiotics, found that the microbiome can be altered for over 12 months. Since the microbiome composition can be altered by antibiotics, this implies positive selection for resistant opportunistic pathogens, which can cause acute disease.
The PharmacoMicrobiomics Web Portal
The PharmacoMicrobiomics Web Portal is a student-led initiative to explore how microbes modulate drugs that is intended for bioinformaticians, microbial geneticists, and drug developers. The goal of the project is to mine literature data and extract microbe-drug interactions, including information about drug classes, microbial families, and body systems. Furthermore, the portal includes a relational database with information on microbial composition at different body sites and their specific effects on drug pharmacokinetics and pharmacodynamic properties.
Personalized Medicine
Personalized medicine in the context of pharmacomicrobiomics refers to the ability to predict an individual's response to a xenobiotic based on the composition of their gut microbiome. However, current omics approaches investigating microbiome composition using metagenomic sequencing after xenobiotic treatment are sparse. Instead, research efforts have focused predominantly on modeling changes in microbial composition in different disease states. Future research efforts should combine knowledge relating to what microbes preferentially metabolize certain compounds (garnered from in vitro studies) with the identification of species abundance to predict drug tolerance in patients. However, modeling a microbe's interaction with a particular xenobiotic may not stably predict interactions, as the genomes of microbes are continually reshuffled through horizontal gene transfer. Considering this, approaches that target individual gene/transcript/protein signatures rather than individual microbes will likely lead to more widely applicable personalized approaches.
Limitations
The limitations of pharmacomicrobiomics primarily arise from the uncertainty associated with metagenomic profiling. Namely, short reads obtained by shotgun sequencing can be difficult to align to reference genomes since many organisms have homologous sequences. In addition, 16S rRNA sequencing cannot consistently resolve species identity, a finding that casts doubt on species identities in metagenomic samples. Limitations also arise from differing study designs, as unique approaches to identifying the nature of the xenobiotic-microbiome interactions are often taken. For instance, because pharmacomicrobiomics very broadly denotes the association between xenobiotics and the microbiome, the extent to which studies profile the genetics of the microbiome can vary significantly. Studies aiming to characterize organism identity, but not gene identity or copy number, may elect to use 16S rRNA sequencing as opposed to SMS. Conversely, studies aiming to identify genes and their products rather than organism identity may elect SMS coupled with transcriptomic analysis. Initially, these differences may mean that researchers wanting to investigate publicly available data may have to target their research questions to fit the data at hand.
References
Omics
Gut flora | Pharmacomicrobiomics | [
"Biology",
"Environmental_science"
] | 3,215 | [
"Bioinformatics",
"Environmental microbiology",
"Omics",
"Gut flora"
] |
53,368,843 | https://en.wikipedia.org/wiki/Al-Ashraf%20Umar%20II | Al-Malik Al-Ashraf (Mumahhid Al-Din) Umar Ibn Yūsuf Ibn Umar Ibn Alī Ibn Rasul (), known as Umar Ibn Yusuf (1296) was the third Rasulid sultan, who ruled as Al-Ashraf Umar II. He was also a mathematician, astronomer and physician.
Biography
Few biographical details about Al‑Malik al‑Ashraf ‘Umar are known. He was born in 1242 in Yemen, and he died in 1296. He excelled in astronomy, agriculture, veterinary science and medicine.
Al-Ashraf ruled as the third Rasulid sultan for 21 months from 1295, succeeding after the end of the 46-year rule of his father, . According to the historian David King, in 1266 he commanded a military raid on the Yemeni city of Hajjah. He was made governor of . He was in charge of the highland city of Sanaa, now the capital of Yemen. For a period al-Ashraf ruled as governor of the flood-irrigated lands near al-Mahjam, which were owned by his family.
Family
Al-Ashraf had six adult sons. Two of his daughters married sons of his younger brother and successor, al-Mu'ayyad Da'ud.
Data from the Encyclopaedia of Islam (1986)
Astronomical work
Al-Ashraf wrote the first description of the use of a magnetic compass for determining the . His works on astronomy contain information on earlier sources.
In a treatise about astrolabes and sundials, al-Ashraf included information on the construction of a compass bowl (). He then used the compass to determine the north point, the meridian (), and the towards Mecca. This is the first mention of a compass in a medieval Islamic scientific text and its earliest known use as a indicator, although al-Ashraf did not claim to be the first to use it for this purpose.
Al-Ashraf's astronomical treatise includes local Yemeni star names.
Treatise on agriculture
Al-Ashraf's is considered by the historian David King to be crucial for constructing the history of agriculture during the Rasulid era. The work, of which two copies are extant, is the earliest Rasulid treatise about agriculture. The exact title is not known.
The seven chapters of the treatise consider the knowledge of times for planting, transplanting, working the land and improving it; cereal crops (); pulses (); crops grown from seed (); the cultivation of flowering plants (); aromatic plants (); growing vegetables (); and methods of pest control (). The text would have been primarily of use to Yemeni farmers and landowners; there is evidence that Al-Ashraf obtained some of his information from other lands, although no other texts are mentioned.
Notes
References
Sources
Further reading
1240s births
1296 deaths
13th-century Arab people
13th-century astronomers
Astronomers of the medieval Islamic world
Yemeni astronomers
Monarchs of Yemen
13th-century monarchs in Asia
Rasulid dynasty | Al-Ashraf Umar II | [
"Astronomy"
] | 627 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
53,369,366 | https://en.wikipedia.org/wiki/NGC%20423 | NGC 423 is a lenticular galaxy of type S0/a? located in the constellation Sculptor. It was discovered on November 14, 1835 by John Herschel. It was described by Dreyer as "extremely faint, small, extended, gradually a little brighter middle, eastern of 2.", the other being NGC 418.
References
External links
0423
18351114
Sculptor (constellation)
Lenticular galaxies
004266 | NGC 423 | [
"Astronomy"
] | 88 | [
"Constellations",
"Sculptor (constellation)"
] |
53,369,661 | https://en.wikipedia.org/wiki/Thermonema%20lapsum | Thermonema lapsum is a Gram-negative and thermophilic bacterium from the genus of Thermonema which has been isolated from a hot spring in Rotorua in New Zealand.Homospermidine and homospermine are the major polyamines of Thermonema lapsum
References
External links
Type strain of Thermonema lapsum at BacDive - the Bacterial Diversity Metadatabase
Sphingobacteriia
Bacteria described in 1989
Thermophiles
Biota of New Zealand | Thermonema lapsum | [
"Biology"
] | 106 | [
"Biota by country",
"Biota of New Zealand"
] |
53,370,344 | https://en.wikipedia.org/wiki/Buddhist%20Digital%20Resource%20Center | The Buddhist Digital Resource Center (BDRC), formerly Tibetan Buddhist Resource Center (TBRC), is a 501(c)(3) nonprofit organization dedicated to seeking out, preserving, organizing, and disseminating Buddhist literature. Joining digital technology with scholarship, BDRC ensures that the ancient wisdom and cultural treasures of the Buddhist literary tradition are not lost, but are made available for future generations. BDRC is committed to seeking out, preserving, organizing, and disseminating Buddhist literature. Founded in 1999 by E. Gene Smith with the help of the Tibetan translator Michele Martin, BDRC is located in Cambridge, Massachusetts, and hosts a digital library of the largest collection of digitized Tibetan texts in the world. Current programs focus on the preservation of texts in Pali, Chinese, Sanskrit, and Tibetan.
BDRC's Harvard Square headquarters facilitates its ongoing cooperative relationships with Harvard University. BDRC also has international offices in New Delhi, India and Kathmandu, Nepal, and is linked to the E. Gene Smith Library at Southwest University for Nationalities in Chengdu, China.
History
In the early 1960s, while working on his PhD at the University of Washington, E. Gene Smith studied with the Venerable Dezhung Rinpoche. In 1964, Dezhung Rinpoche encouraged Smith to move to India in order to seek out and study Tibetan books more directly. He gave Smith letters of introduction to show to the lamas living among the Tibetan diaspora.
In 1968 the U.S. Library of Congress hired Smith as a field director in New Delhi where he worked on the Food for Peace humanitarian effort Public Law 480. Through the program, Smith began to copy and print thousands of Tibetan texts while keeping a version of each one for his own collection. He moved from India to Indonesia in 1985 and then Egypt, along with his collection of 12,000 volumes of texts.
In 1997 Smith retired from the Library of Congress and began working to implement his vision of making the preserved texts accessible using the new scanning and digitization technologies that were, at that time, just beginning to become available. In 1999 with friends including Tibetan translator Michele Martin and Harvard professor and fellow Tibetologist Leonard van der Kuijp, he founded the Tibetan Buddhist Resource Center (TBRC) in Cambridge, Massachusetts. Smith's texts from India that were digitized at TBRC became the foundation for Tibetan studies in the United States.
In 2002 with the support of Shelley and Donald Rubin, TBRC moved to New York City, where Smith became an advisor to the Rubin Museum of Art. Major grants from the Patricia and Peter Gruber Foundation, Khyentse Foundation, and the Shelley and Donald Rubin Foundation allowed TBRC to acquire a significant number of texts, develop its archiving system, and add more professional staff. Starting as technical director in 2001, Jeff Wallman was personally selected by Smith to be executive director and was appointed by the board of directors in 2009.
Gene Smith died on December 16, 2010. TBRC had scanned 7 million pages of Tibetan texts at the time of his death.
In 2017, TBRC announced an expansion of its institutional mission to include the preservation of texts in languages beyond Tibetan, starting with Pali, Sanskrit, and Chinese. To reflect this expansion, the organization officially changed its name from Tibetan Buddhist Resource Center to Buddhist Digital Resource Center (BDRC).
BDRC's Work
BDRC seeks out and preserves undiscovered texts, organizes them into a library catalog system, and disseminates the library online and to remote locations on hard drives so anyone can read, print, or share the texts. Texts are cataloged by work, genre, subject, person, and place.
Currently, the collection contains more than 26,000 works (72,000 volumes, totaling nearly 15 million pages) of Tibetan texts. Scholars and students are able to study the physical qualities of the texts since the scans are searchable and zoomable.
Between 500,000 and 1,000,000 pages are added every year.
BDRC's work was recognized by the 17th Karmapa Ogyen Trinley Dorje in a letter offering his support, gratitude, and prayers. Gene Smith's life and TBRC were the subject of the 2012 documentary Digital Dharma, directed by Dafna Yachin of Lunchbox Communications. Variety film critic John Anderson described the film as, "A divinely inspired gift... also an affectionate tribute to the late E. Gene Smith, the scholar, librarian and ex-Mormon who waged a 50-year struggle to save the endangered texts of Tibetan Buddhism."
BDRC and Harvard
In summer 2012 BDRC relocated back to Harvard Square in Cambridge, Massachusetts, where the staff hand-picked by Smith continues its ongoing mission to preserve and provide access to Tibetan literature.
In cooperation with the Harvard University Open Access Project (HOAP), BDRC is making its entire library completely open access. BDRC also coordinates internships with graduate students from Harvard Divinity School and the Department of South Asian Studies at Harvard.
References
External links
Buddhist Digital Archives
Digital Dharma Official Website
Tibetan Buddhist Resource Center at Google Cultural Institute
Bibliographic databases and indexes
Buddhist organizations based in the United States
Discipline-oriented digital libraries
Tibetan Buddhist literature
Digital humanities
American digital libraries
Asian-American culture in Massachusetts
Harvard Square
Digital humanities projects
Digital history projects
Public domain databases
Buddhist libraries | Buddhist Digital Resource Center | [
"Technology"
] | 1,120 | [
"Digital humanities",
"Computing and society"
] |
53,370,365 | https://en.wikipedia.org/wiki/RTL8710 | The RTL8710 is a low-cost Wi-Fi chip with full TCP/IP stack and MCU (Micro Controller Unit) capability produced by Taiwanese manufacturer, Realtek.
References
Wireless networking hardware | RTL8710 | [
"Technology"
] | 45 | [
"Wireless networking hardware",
"Wireless networking"
] |
53,370,737 | https://en.wikipedia.org/wiki/Fine%20Guidance%20Sensor%20%28HST%29 | Fine Guidance Sensor (FGS) for the Hubble Space Telescope is a system of three instruments used for pointing the telescope in space, and also for astrometry and its related sciences. To enable aiming the telescope at a specific spot in the sky, each FGS combines optics and electronics. There are three Hubble FGS, and they have been upgraded over the lifetime of the telescope by crewed Space Shuttle missions. The instruments can support pointing of 2 milli-arc seconds (units of degree). The three FGS are part of the Hubble Space Telescope's Pointing Control System, aka PCS. The FGS function in combination with the Hubble main computer and gyroscopes, with the FGS providing data to the computer as sensors which enables the HST to track astronomical targets.
The FGS can be used to locate an object in space and then lock onto it, providing the movements the telescope must make to keep the object in view so that the main instruments can record data on it.
The FGS were originally made by the optics company Perkin-Elmer, and because they are removable and repairable instruments it has been possible to refurbish them over the lifetime of the telescope. The first replacement FGS was installed in 1997, swapping out FGS1.
In May 2009, on STS-125, an FGS was replaced during the Space Shuttle's servicing mission to the Hubble telescope. The astronaut crew performed an EVA (spacewalk) to service the FGS and other components on the telescope in Earth orbit. This was the SM4 mission.
An example of astrometry science with the Hubble FGS system is observations of the low-mass binary star system L722-22. Observations of the system were taken in the 1990s, and the data helped determine the mass of each of the components of L722-22, which is also known as LHS 1047 and GJ 1005.
The FGS are white-light shearing interferometers. Each FGS weighs 220 kg (485 lb) and measures roughly 0.5 m × 1.0 m × 1.6 m.
Observations
The smallest Kuiper belt object (KBO) yet detected at that time was discovered in 2009 by poring over data from the Hubble Space Telescope's fine guidance sensors. They detected a transit of an object against a distant star, which, based on the duration and amount of dimming, was calculated to be a KBO about in diameter. It has been suggested that the Kepler observatory may be able to detect objects in the Oort cloud by their occultation of background stars, and the Whipple proposal would also try to use this concept.
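The size estimate in such an occultation follows from simple geometry: the occulted chord is roughly the object's sky-plane velocity multiplied by the dip duration (ignoring diffraction, which matters for sub-kilometre objects, and assuming a central occultation so the chord approximates the diameter). The numbers below are illustrative placeholders, not the values from the actual detection:

```python
# Back-of-envelope occultation sizing: chord length ~ relative sky-plane
# velocity x dip duration (a central occultation makes the chord ~ diameter).
# Values below are illustrative placeholders, not those of the actual
# HST/FGS detection, and diffraction effects are ignored.

def chord_length_km(velocity_km_s, dip_duration_s):
    """Chord occulted on the sky, in kilometres."""
    return velocity_km_s * dip_duration_s

# A Kuiper belt object observed near opposition sweeps past a background
# star at roughly Earth's orbital speed (~30 km/s); assume a ~0.03 s dip.
v_rel = 30.0   # km/s, assumed sky-plane velocity
dip = 0.03     # s, assumed occultation duration
size_km = chord_length_km(v_rel, dip)
print(f"implied chord ~{size_km:.1f} km")   # sub-kilometre scale
```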
A Hubble FGS has also been used for astrometry, tracking the movement of stars. This ability was used for exoplanet research, in which the motion of a star caused by planets orbiting it was detected. For example, Hubble's FGS measured the effect of the companion Gliese 876 b on the motion of the red dwarf Gliese 876.
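The astrometric signal involved is the star's reflex motion: a planet of mass m_p around a star of mass M_* in an orbit of semi-major axis a, seen from distance d, produces a wobble of angular semi-amplitude roughly (m_p/M_*)(a/d). The sketch below uses rough, Gliese 876-like inputs for illustration only; they are not the published measured values:

```python
# Reflex-wobble estimate: alpha ~ (m_p / M_star) * (a / d). With a in AU and
# d in parsecs, a/d gives the angle directly in arcseconds. Inputs are rough
# Gliese 876-like values for illustration, not the published measurement.

M_JUP_IN_MSUN = 9.54e-4   # one Jupiter mass expressed in solar masses

def wobble_mas(m_planet_mjup, m_star_msun, a_au, d_pc):
    """Astrometric semi-amplitude of the star's wobble, in milliarcseconds."""
    alpha_arcsec = (m_planet_mjup * M_JUP_IN_MSUN / m_star_msun) * (a_au / d_pc)
    return alpha_arcsec * 1000.0

# Roughly: a ~2 Jupiter-mass planet around a ~0.3 solar-mass red dwarf,
# in a ~0.2 AU orbit, seen from ~4.7 pc away.
print(f"{wobble_mas(2.0, 0.3, 0.2, 4.7):.2f} mas")   # a few tenths of a mas
```

The result, a few tenths of a milliarcsecond, shows why pointing stability at the milliarcsecond level is needed for this kind of measurement.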
FGS was used to study double-star systems (aka binary star systems) and to measure distances to astronomical bodies.
FGS has also been used to observe asteroids and calculate their size. Asteroids studied include (63) Ausonia, (15) Eunomia, (43) Ariadne, (44) Nysa, and (624) Hektor.
See also
Fine guidance sensor
Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph (FGS for JWST)
Guide Star Catalog (Hubble)
Kepler space telescope
Whipple (spacecraft) (Occultation type Space telescope concept)
References
External links
Fine Guidance Sensors Aboard the Hubble Space Telescope, the Scientific Capabilities of these Interferometer
NASA Fact Sheet Hubble FGS
A DECONVOLUTION TECHNIQUE FOR HUBBLE SPACE TELESCOPE FGS FRINGE ANALYSIS
Hubblesite - FGS
Hubble Space Telescope instruments
Space astrometry missions | Fine Guidance Sensor (HST) | [
"Astronomy"
] | 837 | [
"Space telescopes",
"Space astrometry missions"
] |
54,637,408 | https://en.wikipedia.org/wiki/TEDAX | Technician Specialist in Deactivation of Explosive Artifacts (), commonly known by its abbreviation TEDAX, is the Spanish name for bomb disposal units.
Several TEDAX groups exist in Spain, most of them in the police corps but also, until 2001, in the Armed Forces. Since 2001, the units of the Armed Forces are no longer named TEDAX: following Spain's entry into NATO, they adopted the international designation EOD (Explosive Ordnance Disposal). The change of name also reflects the fact that these groups are specialized in unexploded ordnance as well.
The TEDAX of the law enforcement agencies and the EODs of the Armed Forces have become a key element in the fight against terrorism, each in its area of competence. They are supported by purpose-built high technology, such as specialized robots and blast-protection bomb suits.
In Spain there are TEDAX units in the Civil Guard, in the National Police Corps and in some Autonomous Police (like Mossos d'Esquadra or Ertzaintza), and there are EOD Units in the Army, in the Air Force and in the Navy.
The TEDAX units were created in the 1970s and were fundamental in the fight against the terrorist group ETA and in the response to the 2004 Madrid train bombings. Outside the national territory, EOD units have become essential parts of the international operations carried out by the Spanish Armed Forces around the world, in areas where the threat from explosive devices and unexploded ammunition is very high. These units are also specialized in CBRN defense.
The first victim among the TEDAX police units, Rafael Valdenebro Sotelo, died in 1978 while trying to deactivate an explosive device attributed to the Canary Islands Independence Movement. Many other members of the police units were killed trying to defuse ETA bombs. In the Armed Forces, the first victim was Captain Fernando Álvarez Rodríguez, who died in 1993 in Bosnia and Herzegovina.
References
Bomb disposal
Emergency services
National law enforcement agencies of Spain | TEDAX | [
"Chemistry"
] | 413 | [
"Explosion protection",
"Bomb disposal"
] |
54,637,700 | https://en.wikipedia.org/wiki/Teknomo%E2%80%93Fernandez%20algorithm | The Teknomo–Fernandez algorithm (TF algorithm), is an efficient algorithm for generating the background image of a given video sequence.
By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in O(R) time (where R is the resolution of a frame) using only a small number of binary and Boolean bit operations, which require a small amount of memory and have built-in operators in many programming languages such as C, C++, and Java.
History
People tracking from videos usually involves some form of background subtraction to segment foreground from background. Once foreground images are extracted, then desired algorithms (such as those for motion tracking, object tracking, and facial recognition) may be executed using these images.
However, background subtraction requires that the background image is already available, and unfortunately this is not always the case. Traditionally, the background image is searched for manually or automatically among the video frames in which no objects are present. More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric models, Kalman filtering, and adaptive smoothing has been suggested; however, most of these methods have high computational complexity and are resource-intensive.
The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only O(R) time, where R is the resolution of an image, with accuracy reached within a manageable number of frames. At least three frames from a video are needed to produce the background image, assuming that for every pixel position the background occurs in the majority of the frames. Furthermore, it can be applied to both grayscale and colored videos.
Assumptions
The camera is stationary.
The light of the environment changes only slowly relative to the motions of the people in the scene.
People do not occupy the same place in the scene for most of the time.
Generally, however, the algorithm will work whenever the following single important assumption holds: for each pixel position, the majority of the pixel values in the entire video contain the pixel value of the actual background image (at that position). As long as each part of the background is shown in the majority of the video, the entire background image need not appear in any single frame, and the algorithm is still expected to work accurately.
Background image generation
Equations
For three frames x1, x2, and x3 of an image sequence, the background image B is obtained bitwise using

B = (x1 ∧ x2) ∨ (x1 ∧ x3) ∨ (x2 ∧ x3)

The Boolean mode of a set of bits is 1 exactly when the number of 1 entries is larger than half of the number of images, i.e. mode(x1, …, xn) = 1 if and only if x1 + ⋯ + xn > n/2.

For three images, each bit of the background image can therefore be taken as the majority value of the three corresponding bits, which is what the expression above computes.
Background generation algorithm
At the first level, three frames are selected at random from the image sequence and combined using the equation above to produce a first-level background image. Combining three such images in turn yields a better background image at the second level. The procedure is repeated until the desired level L is reached.
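The bitwise combination of three frames described above can be sketched in Python (a minimal illustration: frames are represented as lists of integer pixel values, the helper names are hypothetical, and random frame selection is omitted for clarity):

```python
def tf_combine(b1, b2, b3):
    """Bitwise Boolean mode (majority) of three equally sized integer frames."""
    # Per bit, the majority of three bits is (x AND y) OR (x AND z) OR (y AND z),
    # so integer AND/OR computes every bit of a pixel at once.
    return [(x & y) | (x & z) | (y & z) for x, y, z in zip(b1, b2, b3)]


def tf_background(frames, levels):
    """Repeat the three-frame combination until the desired level is reached.

    `frames` must hold at least 3**levels images; the first 3**levels frames
    are used here instead of random sampling.
    """
    current = frames[:3 ** levels]
    for _ in range(levels):
        current = [tf_combine(*current[i:i + 3])
                   for i in range(0, len(current), 3)]
    return current[0]
```

For example, `tf_combine([0b1010], [0b1001], [0b0011])` returns `[0b1011]`, the per-bit majority of the three inputs.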
Theoretical accuracy
At level L, the probability pL that the predicted modal bit is the actual modal bit is given by the recurrence pL = pL−1³ + 3·pL−1²·(1 − pL−1), where p0 is the proportion of frames containing the modal bit at the considered position.
The table below gives the computed probability values across several levels using some specific initial probabilities. It can be observed that even if the modal bit at the considered position is at a low 60% of the frames, the probability of accurate modal bit determination is already more than 99% at 6 levels.
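The accuracy claim can be checked numerically. The sketch below assumes the per-level recurrence is the standard success probability of a three-way majority vote (an assumption consistent with the algorithm's structure, not copied from the paper):

```python
def level_probability(p0, levels):
    """Probability that the correct modal bit survives `levels` rounds of
    three-way majority voting, starting from per-frame probability p0."""
    p = p0
    for _ in range(levels):
        # majority of three independent bits, each correct with probability p
        p = p ** 3 + 3 * p ** 2 * (1 - p)
    return p
```

With p0 = 0.6, `level_probability(0.6, 6)` evaluates to roughly 0.998, consistent with the text's claim of more than 99% accuracy at six levels even when the modal bit appears in only 60% of the frames.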
Space complexity
The space requirement of the Teknomo–Fernandez algorithm is a function of the resolution R of the image, the number of frames F in the video, and the desired number of levels L. However, the fact that L will probably not exceed 6 reduces the space complexity to O(R).
Time complexity
The entire algorithm runs in O(R) time, depending only on the resolution R of the image. Computing the modal bit for a single bit position can be done in constant time, so computing the resulting image from three given images can be done in O(R) time. The number of three-image combinations to be processed over L levels is O(3^L). However, since L ≤ 6, this factor is a constant, and thus the algorithm runs in O(R).
Variants
A variant of the Teknomo–Fernandez algorithm that incorporates the Monte-Carlo method named CRF has been developed. Two different configurations of CRF were implemented: CRF9,2 and CRF81,1. Experiments on some colored video sequences showed that the CRF configurations outperform the TF algorithm in terms of accuracy. However, the TF algorithm remains more efficient in terms of processing time.
Applications
Object detection
Face detection
Face recognition
Pedestrian detection
Video surveillance
Motion capture
Human-computer interaction
Content-based video coding
Traffic monitoring
Real-time gesture recognition
References
Further reading
External links
Background Image Generation Using Boolean Operations – describes the TF algorithm, its assumptions, processes, accuracy, time and space complexity, and sample results.
A Monte-Carlo-based Algorithm for Background Generation – a variant of the Teknomo–Fernandez algorithm that incorporates the Monte-Carlo method was developed in this study.
Mathematical examples
Image processing
Computer vision | Teknomo–Fernandez algorithm | [
"Mathematics",
"Engineering"
] | 1,016 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
54,637,828 | https://en.wikipedia.org/wiki/Joint%20Cyberspace%20Command | The Joint Cyberspace Command (MCCE), known until 2020 as Joint Cyber-Defence Command (MCCD), is a Spanish cyberspace service of the Defence Staff responsible for planning and carrying out the actions related to cyber defence in networks and information and telecommunications systems of the Ministry of Defense or others that might be entrusted, as well as contributing to the adequate response in cyberspace to threats or aggressions that may affect to the National Defense.
In this sense, the MCCD directs and coordinates, in matters of cyber defence, the activity of the information security incident response centres of the different branches of the Armed Forces; it exercises the timely, legitimate and proportionate response in cyberspace to threats or aggressions that may affect National Defence; and it defines, directs and coordinates awareness-raising, training and specialized education in this area. In addition, it is responsible for developing and detailing the information security policies for information and telecommunications systems (SEGINFOSIT), and for directing the execution of and monitoring compliance with these policies, within the scope of the Ministry of Defence.
The MCCD was created on February 19, 2013 by Defence Ministry Order 10/2013. In 2020, it was renamed the Joint Cyberspace Command. The current Chief Commander of the MCCE is divisional general Rafael García Hernández.
Functions
The functions of the Joint Cyberspace Command are:
Ensure free access to cyberspace, in order to fulfill the missions and tasks assigned to the Armed Forces, through the development and use of the necessary resources and procedures.
Guarantee the availability, integrity and confidentiality of the information, as well as the integrity and availability of the networks and systems that manage and have it commissioned.
Guarantee the operation of the critical services of the Armed Forces' information and telecommunications systems in a degraded environment due to incidents, accidents or attacks.
Obtain, analyze and exploit information on cyber attacks and incidents in networks and systems of their responsibility.
Exercise the timely, legitimate and proportionate response in cyberspace to threats or aggressions that may affect the Defence of Spain.
To direct and coordinate, in matters of cyber defence, the activity of the information security incident response centres of the Armed Forces and of the information security operations centre of the Ministry of Defence.
To represent the Ministry of Defence in matters of military cyber defence in the national and international spheres.
Cooperate, in the area of cyber defence, with the national information security incident response centres, in accordance with the Spanish cybersecurity strategies and policies in force, as well as with other military information security incident response centres in the international sphere.
Define, direct and coordinate awareness-raising, training and specialized education in cyber defence.
Organization chart
The Joint Cyberspace Command is composed by the following bodies:
The MCCE Command.
The Secretariat.
The Joint Cyberspace Command Staff (EMMCCD).
The assistance body to the Chief Commander of the MCCE, led by the Chief of Staff.
It is divided into six sections: the Coordination Section (C-0), the Cyberintelligence and Security Section (C-2), the Operations Section (C-3), the Plans Section (C-5), the Preparation Section (C-7) and the Cooperation and Representation Section (C-9).
The Operations Command (JOPS).
It is the Joint Command department responsible for the execution of cyber defence operations.
The Administration and Services Command (JAS).
It is the administrative and technical department of the Joint Command.
Commanders
See also
List of cyber warfare forces
United States Cyber Command
Cyber force
People's Liberation Army Strategic Support Force
Norwegian Cyber Defence Force
References
Computer security organizations
Cyberinfrastructure
Military units and formations established in 2013
Spanish intelligence agencies
Military of Spain | Joint Cyberspace Command | [
"Technology"
] | 788 | [
"Information and communications technology",
"IT infrastructure",
"Cyberinfrastructure"
] |
54,638,237 | https://en.wikipedia.org/wiki/Citymapper | Citymapper is a public transit app and mapping service which displays transport options, usually with live timing, between any two locations in a supported city. It integrates data for all urban modes of transport, including walking, cycling and driving, in addition to public transport. It is free of charge to users, and is supported by a mobile app on devices such as mobile phones, and by an Internet website.
The underlying data is pulled from a variety of sources, including open data (usually GTFS files provided by transport authorities) and local transit authorities. Some data is user-generated or collected by locally employed personnel.
Citymapper started in 2011 in London. Its second city was New York. In August 2020 travel in 58 cities and metropolitan areas was covered. Citymapper was founded by Azmat Yusuf, a former Google employee, who also serves as Citymapper's CEO.
In December 2019 the app added a feature which allows users to choose between a "fast" route and a "main roads" route that avoids dimly-lit areas.
As of 2023, the company provides its services to more than 50 million users across 100 cities.
Other services
In September 2017, Citymapper launched a night bus service in the East End of London.
The service in various iterations was called Smartbus, SmartRide, and Ride.
The service used eight-passenger vans, as London's transit authority, Transport for London, did not allow Citymapper to operate full-size buses.
Citymapper discontinued this service in July 2019.
In February 2019, Citymapper launched Pass, a weekly subscription that gave users access to some forms of public transit in London, at lower cost than other weekly passes.
Corporate finances
In 2019, Citymapper earned £5.8 million in revenue but had net losses in excess of £9 million.
As of May 2021, Citymapper has raised £45 million in venture capital funding. In May 2021, the company launched a crowdfunding campaign targeted at retail investors. The company plans to use the funds to expand services into additional cities.
In March 2023, Citymapper was acquired by Via Transportation for undisclosed terms.
See also
Transit (app)
Moovit
References
External links
Citymapper website
Mobile route-planning software
Transport organisations based in London | Citymapper | [
"Technology"
] | 465 | [
"Mobile software stubs",
"Mobile technology stubs"
] |
54,638,589 | https://en.wikipedia.org/wiki/NGC%202194 | NGC 2194 is an open cluster in the constellation Orion. The cluster is located about 10,000 light years away from Earth. It is rich and moderately concentrated. The cluster lies 33 arcminutes west-northwest of 73 Orionis.
Observation history
It was discovered by William Herschel on 11 February 1784, and it was added in his catalogue as IV 5. It was added to the General Catalogue as 1383. The cluster was also observed by Adolph Cornelius Petersen in 1849 with the 18 cm refractor at the Altona Observatory. The cluster was also observed by Hermann Carl Vogel, without mentioning its General Catalogue number. John Louis Emil Dreyer added it to GCS as number 5380, as Heinrich Louis d'Arrest's discovery from 18 September 1862, without noticing it was already included.
Characteristics
NGC 2194 is a rich and moderately concentrated open cluster of Trumpler class III1r. The brightest stars of the cluster are of magnitude 10, the brightest being of magnitude 10.26. The cluster has 149 members down to 15th magnitude. The main sequence turn-off is at magnitude 14.5 and there are few red giant members. There are some stars that are bluer than the turn-off point; if they are members of the cluster, they are possibly blue stragglers. A photometric study of the cluster by Sanner et al. concluded that the age of the cluster is 550 Myr and its distance is 2,900 pc. Piatti et al. determined the age of the cluster to be 400 Myr and its distance to be 3,200 pc. The cluster has low metallicity (−0.27 ± 0.06). It is located 130 pc south of the galactic plane.
References
External links
2194
Orion (constellation)
Open clusters | NGC 2194 | [
"Astronomy"
] | 363 | [
"Constellations",
"Orion (constellation)"
] |
54,638,711 | https://en.wikipedia.org/wiki/DDIT4L | DNA-damage-inducible transcript 4 like (DDIT4L) or regulated in development and DNA damage response 2 (REDD2) is a protein that in humans is encoded by the DDIT4L gene. The gene is located on chromosome 4 or chromosome 3 in human or mouse respectively.
Function
DDIT4L is a negative regulator of mTOR. DDIT4L is a stress-responsive protein: its expression is increased under hypoxic conditions, and it causes or sensitizes cells towards cell death through the regulation of mTOR activity and the reduction of thioredoxin-1. Cardiomyocytes show increased expression of DDIT4L under pathological stress, which promotes autophagy through the inhibition of mTORC1, but not mTORC2.
Role in Disease
In fibrosis, the nuclear long noncoding RNA (lncRNA) H19X represses DDIT4L gene expression by specifically interacting with a region upstream of the DDIT4L gene, which increases collagen expression and fibrosis. Expression of DDIT4L is increased in pathological cardiac hypertrophy but not in physiological cardiac hypertrophy. Mice with such pathological hypertrophy had mild systolic dysfunction, increased baseline autophagy, reduced mTORC1 activity, and increased mTORC2 activity.
See also
DDIT4/ REDD1
mTOR
mTORC1
mTORC2
References
Proteins | DDIT4L | [
"Chemistry"
] | 291 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
54,638,936 | https://en.wikipedia.org/wiki/Vaughan%E2%80%93Preston%20gap | In astronomy, the Vaughan–Preston gap is an observed absence of F-, G- and K-type stars with intermediate levels of magnetic activity. In 1980, Vaughan and Preston noted there were two populations of stars of these classifications, with either high or low levels of activity, separated by an apparent gap. There remains no consensus on the cause of the gap.
References
Magnetic field
Magnetism in astronomy | Vaughan–Preston gap | [
"Astronomy"
] | 82 | [
"Astronomy stubs",
"Stellar astronomy stubs",
"Magnetism in astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
54,643,138 | https://en.wikipedia.org/wiki/Zoothamnium%20niveum | Zoothamnium niveum is a species of ciliate protozoan which forms feather-shaped colonies in marine coastal environments. The ciliates form a symbiosis with sulfur-oxidizing chemosynthetic bacteria of the species "Candidatus Thiobios zoothamnicoli", which live on the surface of the colonies and give them their unusual white color.
Characteristics
The conspicuously white and feather-shaped colonies are composed of individual bell-shaped cells known as zooids. The stalks of individual cells grow from a single central stalk. Colonies can reach a length of up to 15 mm, formed from hundreds of single zooids, each with a length of only 120 μm. An entire colony can contract into a ball-shaped bunch through the contraction of myonemes in their stalks.
The white color is produced by chemolithoautotrophic sulfur-oxidizing bacteria, which cover the entire surface of the Z. niveum colony. In most other species of Zoothamnium, bacteria are only known to cover the stalks. The bacteria contain elemental sulfur, which appear white. Z. niveum appears colorless when the bacteria are absent.
Like in other ciliates, a contractile vacuole maintains osmotic balance for the cell, and allows it to survive the salt concentrations in both marine and brackish water. The vacuole is located in Z. niveum directly below the lip of the peristome.
Polymorphism
Most ciliates live as single-celled organisms in aquatic environments, and the single cell carries out all functions of life, such as nutrition, metabolism, and reproduction. Colonies of Z. niveum are composed of numerous individual cells that form a feather-like colonial unit with several different cell types. Old branches of the colony illustrate the polymorphism of the zooids when viewed under the microscope. Three different forms of the individual ciliate cells are present, which are distinct in both form and function. The large macrozooids can transform into swarmers and leave the colony; they settle on suitable surfaces and develop into new colonies. The microzooids are small cells specialized for feeding, which the colony does by consuming its symbiotic bacteria and other organic particles. At the terminal ends of the colony are specialized zooids that can elongate and facilitate the asexual reproduction of the colony.
The bacteria on different parts of a host have different shapes despite belonging to the same species (polymorphism). Those on the stalks are shaped like rods, but those in the region of the ciliated oral apparatus of the microzooids are shaped like small spheres (coccoid). Intermediate forms are also found in between.
Distribution and habitat
The sessile colonies of Z. niveum were first described from the shallow waters of the Red Sea. They were later also found in the Florida Keys in the Gulf of Mexico, and at the Belize Barrier Reef in the Caribbean Sea.
The colonies settle in environments that contain sulfide. Hydrogen sulfide, sulfide, and related sulfur-containing compounds like thiosulfate are produced during the decomposition and remineralization of organic material. For example, plant material like the torn-off leaves of Posidonia oceanica in seagrass meadows of the Mediterranean accumulates in depressions of rocky ledges and decomposes. In mangrove forests of the Caribbean, organic material can form peat and release sulfide. Hydrogen sulfide can also originate from geological phenomena such as underwater hydrothermal vents, e.g. off the Canary Islands.
Ecological conditions
Extreme ecological conditions prevail at these sources of sulfide close to which colonies of Z. niveum settle. Because there is little water current under mangrove roots and at seagrass deposits under rock ledges, these decomposition hot-spots are extremely poor in oxygen and rich in sulfide. In mangrove forests off the coast of Belize, they have been found around small holes in the mangrove peat which form when the mangrove rootlets decompose. These openings have been called sulfide "microvent[s]", because they resemble in miniature the hydrothermal vents of the deep sea, the so-called black smokers, although the temperatures in shallow waters are much lower (28 °C in the Caribbean, 21 °C-25 °C in the Mediterranean (summer)), compared to the gradient between >300 °C and 2 °C in the deep sea because of volcanic activity. The Zoothamnium colonies do not settle directly over the decomposing material, but nearby e.g. on overhanging rocks, leaves of seagrass or seaweed, or mangrove roots.
Symbiosis
The symbiotic benefit provided by the colonies of Z. niveum to their attached ectosymbiotic bacteria, Candidatus Thiobios zoothamnicoli (a member of the Gammaproteobacteria, which are vertically transmitted between host generations), is the active alternation between oxygen-rich and sulfide-rich conditions. This alternation occurs through the regular contraction and extension of the colonies and through the water currents set up by the beating of the cilia around the oral opening of the ciliates.
The rapid contraction and slow re-extension of the colonies causes a flow of both sulfide-rich water, which feeds the bacteria, and normal oxygenated seawater for the respiration of Z. niveum. The mixing is regulated through the beating of the cilia at the oral apparatus of Zoothamnium. When there is a low supply of sulfur compounds, the bacteria use the sulfur stored inside their cells; after four hours they appear pale and transparent because the stored sulfur has been consumed. However, if the sulfide concentration is too high, it can be toxic to the Zoothamnium colonies and kill the ciliates despite the bacteria.
Bacteria close to the oral end of the microzooids have a coccoid form, a larger volume, and a higher division rate than the rod-shaped bacteria on the stalks, despite both belonging to the same species. This is because the mixing of water by the beating of the oral cilia results in a more optimal concentration of both oxygen and sulfide in the water there. The bacteria at the oral region can thus be used as a food source; they are swirled into the mouth (cytostome) of the ciliate and digested.
References
Literature
Christian Rinke, Jörg A. Ott and Monika Bright: "Nutritional processes in the chemoautotrophic Zoothamnium niveum symbioses", Symposium of the Biology of Tropical Shallow Water Habitats, Lunz, Austria, October 2001, pp. 19–21
External links
Smithsonian Marine Station at Fort Pierce - Zoothamnium niveum
Chemosynthetic symbiosis
Oligohymenophorea
Taxa named by Christian Gottfried Ehrenberg
Ciliate species | Zoothamnium niveum | [
"Biology"
] | 1,430 | [
"Biological interactions",
"Chemosynthetic symbiosis",
"Behavior",
"Symbiosis"
] |
54,644,341 | https://en.wikipedia.org/wiki/Drug%20Safety | Drug Safety is a peer-reviewed medical journal covering pharmacoepidemiology and pharmacovigilance. It was established in 1986 as Medical Toxicology, and was renamed Medical Toxicology and Adverse Drug Experience in 1987. It obtained its current name in 1990. It is published by Springer Nature under the Adis Reprint, and is the official journal of the International Society of Pharmacovigilance. Nitin Joshi BPharm, PGDiPharm, MRSNZ, FISoP is the editor-in-chief. He took over as the Editor-in-Chief of the journal in 2012. According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.0.
References
External links
Drug safety
Pharmacology journals
Academic journals established in 1986
Springer Science+Business Media academic journals
English-language journals
Academic journals associated with international learned and professional societies of Europe
Monthly journals | Drug Safety | [
"Chemistry"
] | 187 | [
"Drug safety"
] |
54,644,701 | https://en.wikipedia.org/wiki/NGC%207080 | NGC 7080 is a barred spiral galaxy located about 204.5 million light-years away in the constellation of Vulpecula. It has an estimated diameter of about 100,000 light-years which would make it similar in size to the Milky Way. NGC 7080 was discovered by astronomer Albert Marth on September 6, 1863.
According to Harold Corwin, NGC 7054 is a duplicate observation of NGC 7080.
One supernova has been observed in NGC 7080: SN 1998ey (type Ic-pec, mag. 16.8) was discovered by Ron Arbour on 5 December 1998.
See also
NGC 1300
References
External links
Barred spiral galaxies
Vulpecula
7080
11756
66861
Astronomical objects discovered in 1863 | NGC 7080 | [
"Astronomy"
] | 153 | [
"Vulpecula",
"Constellations"
] |
54,646,991 | https://en.wikipedia.org/wiki/Community%20fridge | A community fridge is a refrigerator (colloquially "fridge") located in a public space. Sometimes called freedges, they are a type of mutual aid project that enables food to be shared within a community. Some community fridges also have an associated area for non-perishable food. Unlike traditional food pantries, these grassroots projects encourage anyone to put food in and take food out without limit, helping to remove the stigma from its use. The fridges take a decentralized approach, often being maintained by a network of volunteers, community members, local businesses, and larger organizations. Food in community fridges is primarily donated by individuals or food rescue organizations and can be sourced from a variety of places. Major grocers like Trader Joe's and Whole Foods donate large amounts of excess foods to food rescue organizations that then donate to these fridges. The food donated would have otherwise been thrown out.
The main aim of community fridges is to reduce food insecurity, while also mitigating food waste. They enable people facing hardship to have easy access to fresh, nutritious food. Fridges offer a wide range of food from canned goods to fresh produce to pre-cooked meals. Pre-cooked meals are required to be labeled when donated. Many fridges also accept household items and sanitary goods, and during the COVID-19 pandemic, offered masks and other PPE. Community fridges can also serve as social spaces that enable people to connect to their communities; Shelterforce magazine notes that "community fridges seem to have discovered a sweet spot in service delivery: close enough to feel the warmth of shared humanity, but far enough to avoid a sense of resentment or burden." Many fridges are also painted by local artists.
History
The first community fridges were set up in Germany, by a group called Foodsharing. The next community fridge was started in Spain in 2015. Community fridges draw inspiration from previous food initiatives.
In the UK, early community fridges were set up at Frome, South Derbyshire, Brixton (London), and Botley (Oxford). A national network of community fridges was set up in July 2017 by the environmental charity Hubbub UK, which offers a free support service to new projects.
Community fridges are a rapidly-growing phenomenon, with fridges also recently set up in New Zealand, India, Israel, the Netherlands, and Canada (Community Fridges Toronto has seven fridges).
COVID-19 pandemic
Community fridges made a wide emergence in the U.S. during the COVID-19 pandemic, developed in response to a significant increase in food insecurity. In New York City, community fridges, nicknamed "Friendly Fridges", were introduced in February 2020, the first one placed by an activist group, In Our Hearts. In Our Hearts has since set up at least 14 of the roughly 70 fridges around New York City. In Philadelphia, Dr. Michelle Nelson launched a Mama-Tee Community Fridge in North Philly; there are now 18 of them.
Using New York City as a model, community fridges have popped up in cities across the U.S. including Los Angeles, Philadelphia, Chicago, Atlanta, and more. As of September 2021, Los Angeles County has 14 community fridges. In Chicago, as of September 2021, there are 26 community fridges providing support to the community. The Love Fridge is a mutual aid network placing community refrigerators across the city. In Atlanta, Georgia, Latisha Springer, started Free99Fridge, a grassroots organization providing food to communities through their community fridge network. The organization maintains five community fridges throughout the metro Atlanta area.
In the Greater Boston Area, the first community fridge was started in Jamaica Plain in September 2020. Soon after, another fridge emerged in Dorchester, Boston's largest neighborhood. As of September 2021, fridges have emerged in the neighborhoods of Allston, Fenway, Mattapan, and Roslindale, as well as in the cities of Somerville, Cambridge, and Worcester.
In Thailand, entrepreneur Supakit Kulchartvijit's Pantry of Sharing pantry cabinets, a variation on the community fridges, was launched in May 2020 in Bangkok and Rayong. Thailand's SCG Foundation emulated Kulchartvijit's initiative, putting up a total 60 pantry cabinets in the country by May 25, 2020.
The following year in the Philippines as the pandemic dragged on, a trend utilizing a similar concept emerged across the country. Small carts carrying essential items were parked along sidewalks for locals to obtain any of the items without charge. The first such cart to be reported was started by the Members Church of God International on March 14, 2021.
Also in the Philippines, a similar idea under the term "community pantry" was started on Maginhawa Street in the Teacher's Village neighborhood of Quezon City on April 14, 2021. This initiative gained a wider media coverage than the MCGI initiative, resulting in the mushrooming of hundreds of similar initiatives throughout the country. In about a week after the Maginhawa pantry's launch, more than 100 pantries were set up in various locations; a week thereafter more than 300 pantries had already been set up.
Following the Maginhawa movement's example in the Philippines, various community pantries were set up in East Timor.
Japan
Japan's first full-service public refrigerator, launched as the "Public Refrigerator Freego", was installed by Kisoya Co., Ltd. at the entrance of the La Campana Kisoya Building in Tsurumi-ku, Yokohama, Kanagawa Prefecture, on June 17, 2020. Like its European counterparts, it is freely available to everyone; it was set up to encourage food sharing, achieve zero food waste, and prevent hunger, and works as a refrigerator for local use. In July 2021, another was installed at Komachi Plus Komachi Cafe, an authorized non-profit organization in Totsuka District.
France
After Marseille, Nantes and Metz, the first solidarity refrigerator in Paris appeared in the 18th arrondissement, on the sidewalk in front of the restaurant La Cantine, at 18 rue Ramey, at the initiative of the associations Cap ou pas cap and Le Carillon, and of the owner of the restaurant, Dumia Metboul, who discovered the concept in London.
On December 15, 2017, Cap ou pas cap opened a solidarity refrigerator in front of the store Les Nouveaux Robinson in the 12th arrondissement. Every day it is filled with unsold goods from this small retail store, whose staff also monitor its contents morning and evening, ensuring that food hygiene standards are met; a typical refrigerator temperature should be 40 °F (about 4 °C). Every month the store sends 300 kg of unsold goods there.
On April 16, 2019, the Montreux bar-restaurant Rêv Café installed a cooperative refrigerator at the initiative of the Montreux association l'Esprit Léger.
Challenges
In Berlin, community fridges were designed by the Foodsharing.de community as a social innovation to improve accessibility for the people most in need, who experienced gatekeeping effects from the need to use the online matchmaking service or to have personal connections with volunteers.
Challenges surrounding community fridges include maintaining cleanliness, ensuring food safety, and making sure that the mutual aid model of community fridges is not abused (e.g. for for-profit resale). In the UK, setting up a community fridge requires: a rota of volunteers to clean the fridge and check the food; public liability insurance; the support of the local authority environmental health officer; and, evidently, a fridge and associated waste bins. Several community fridges in Germany were threatened with closure due to health concerns.
Community fridges are sometimes criticized for not providing a systemic solution to food insecurity. Fridges are needed by those who are actively hungry or do not have the means to access nutritious food, but do not address underlying causes of food insecurity.
Fridges are occasionally criticized for not addressing the needs of a community. In the USA, during a phase of rapid expansion, community fridges were sometimes set up without consultation with the local community by people external to that community, accidentally reproducing some patterns of control typical of centrally managed food aid systems. To address such concerns, "The Love Fridge models partnership with community-based organisations run by Black and Brown individuals experienced in food security work for mutual benefit".
Often, food provided to the fridge does not meet the cultural and nutritional needs of the community. In addition, there is often controversy surrounding the legality of community fridges, and policies on maintaining a community fridge vary widely from community to community. In some US states, fridges must be placed on private property, which makes them dependent on the owner's willingness to participate. In Boston's Allston neighborhood, the Allston community fridge was forced to move because new property owners were no longer willing to host it. In Chicago, residents worried about their landlord's reaction to a fridge located in their building. In New York, experiments were paired with initiatives to reuse abandoned publicly owned property and vacant lots.
See also
Little Free Pantries and Blessing Boxes
References
External links
A Washington Post feature on free fridges in Philadelphia
Charity
Food waste
Private aid programs
Sharing economy
Refrigerators | Community fridge | [
"Biology"
] | 1,934 | [
"Behavior",
"Altruism",
"Private aid programs"
] |
54,647,268 | https://en.wikipedia.org/wiki/CELSA%20Group | CELSA Group is a multinational group of steel companies headquartered in Spain, mainly in the industry of steel reinforcement or rebar.
History
It was formed in 1967 as the Compañía Española de Laminación. Competitors of the company include Salzgitter AG, of Salzgitter in Germany.
Sustainability
Steel recycling is essential for companies in the sector pursuing the circular economy, an increasingly important approach in the fight against climate change. In the case of CELSA Group, all the steel contained in the products manufactured by the company is fully recyclable.
Currently, CELSA Group recovers about 80,000 tons per year of non-ferrous material and 432 tons of plastics, ensuring the valorization of 1.37 million tons per year of co-products.
The use of electric arc furnaces, one of the points differentiating CELSA Group from its competitors, releases roughly one-ninth of the CO2 of traditional blast-furnace systems. Through this technology, the company produces its steel while avoiding the extraction and consumption of 11 million m3 of natural resources, equivalent to more than ten times the volume of the Empire State Building.
CELSA Group is one of Europe’s leading companies in the manufacturing of circular and low-emission steel and confirms its position as a pioneering business group in the European circular economy. The company is on the verge of achieving its sustainability goals set for 2050; currently, 97% of the final product manufactured at its plants is made from recycled steel, bringing it close to the goal of becoming Net Positive and achieving total circularity by the middle of the century. The Group aims to give infinite life to finite resources, a significant milestone achieved daily and now coinciding with a record-breaking year for turnover, reaching 6.109 billion euros.
Currently, all the steel used in the company’s products is entirely recyclable. CELSA Group’s circular production system also allows for a recovery rate of 95.1% for production process waste.
This steel production model also prevented the emission of 10 million tons of CO2 into the atmosphere in 2022, equivalent to the pollution from 2.2 million cars driving continuously for an entire year.
Positive figures also attest to CELSA Group’s efficiency in water and electricity consumption. In terms of water resources, using scrap instead of iron ore initially reduces water consumption by 40%. Additionally, 16% of the water received by CELSA Group is reused in the processes. Furthermore, the consumption of electricity from renewable sources has increased eightfold, reaching 185,555 MWh, by purchasing origin guarantees, thereby reducing non-renewable electricity consumption by 21%.
The group will achieve circularity and complete decarbonisation by 2050.
CELSA Group is currently the largest circular supply chain in Europe, significantly contributing to greater industrial raw material autonomy in Spain and across the continent. The company’s roadmap is shaped by the United Nations Sustainable Development Goals (SDGs) and the initiatives outlined in the Green Deal, aiming to lead the European Union towards ecological transition and achieve climate neutrality by 2050.
CELSA Group has set the year 2030 to achieve a 50% reduction in scope 1 and 2 CO2 emissions and a 25% reduction in scope 3 emissions compared to 2021, with the goal of becoming a Net Positive company by 2050. Additionally, the Group aims to be 98% circular by 2030 and complete the process to become a Zero Waste company by 2050.
Key circularity and sustainability indicators for CELSA Group:
• 97% of CELSA Group’s final product is made from recycled steel.
• The steel in all products manufactured by CELSA Group is recyclable.
• 95.1% of the group’s total waste has been recovered.
• Manufacturing steel with scrap instead of iron ore reduces water consumption by around 40%.
• Recycled scrap: 5.7 Mt
• Recovered co-products: 1.4 Mt
• Recovered non-ferrous metals: 80,260 tons
• Recovered plastics: 432 tons
• Water consumption: 4.9 million m3
• Water reused: 16%
• Energy consumption: 5,965,554 MWh
• CO2 emissions (scope 1 and 2, market-based): 1,719,407,263 tons
• CO2 emissions from steel produced (CO2-eq): Scope 1: 677,791 tons; Scope 2 (location-based): 924,315 tons
Structure
It is headquartered in Castellbisbal in Spain. It is composed of eight main steel companies, across Europe.
Celsa Steel UK
Celsa Steel UK is in Cardiff, and is the UK's largest manufacturer of steel reinforcement products. In November 2015, two people died at its Cardiff manufacturing site on East Moors Road.
During the COVID-19 pandemic, the UK government provided Celsa Steel UK with a £30 million bailout to help it continue trading providing it met a set of conditions. This made it the first company to receive a loan through the government's financial support scheme known as Project Birch.
Since 2010, the plant has used GB Railfreight for freight services. This includes incoming supplies of scrap steel and outgoing finished products, as well as on-site shunting services using remotely-controlled British Rail Class 08 locomotives, supplemented by sub-contract locomotive supply from Harry Needle Railroad Company.
See also
Community (trade union)
Ferrous metal recycling
References
1967 establishments in Spain
Manufacturing companies established in 1967
Multinational companies headquartered in Spain
Steel companies of Spain
Structural steel | CELSA Group | [
"Engineering"
] | 1,179 | [
"Structural engineering",
"Structural steel"
] |
54,647,877 | https://en.wikipedia.org/wiki/Kraft%20break | In astronomy, the Kraft break refers to the abrupt decrease in stars' average rotation rates at surface temperatures below about 6,200 kelvin. This temperature corresponds to mid-F type stars. The so-called break bears the name of astronomer Robert Kraft, though its existence was recognized prior to his publications on the topic. The break is understood to separate stars with deep convective envelopes and efficient magnetic dynamos from those without. The dynamos are thought to maintain magnetic fields that transfer angular momentum to the stellar wind, thus slowing down the star's surface through magnetic braking. In hot stars the process is less efficient (because the convective envelopes are shallow) so the stars continue to rotate quickly.
References
Stellar astronomy | Kraft break | [
"Astronomy"
] | 148 | [
"Stellar astronomy stubs",
"Astronomy stubs",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
54,648,275 | https://en.wikipedia.org/wiki/Harriet%20Bulkeley | Harriet Ann Bulkeley (born 17 November 1972) is a British geographer and academic. She is Professor of Geography at Durham University. Bulkeley is also a coordinator in the Naturvation project. Through her work at Durham University, Harriet is involved in the ReInvent-EU project, which aims to encourage decarbonisation in 4 key areas: plastic, steel, paper and meat and dairy. Her research largely explores the politics and processes surrounding environmental governance, as well as the management of municipal waste in the United Kingdom and the politics, specifically urban politics, of climate change.
In July 2019, she was elected a Fellow of the British Academy (FBA), the United Kingdom's national academy for the humanities and social sciences.
Education
Bulkeley studied at the University of Cambridge, graduating in 1995 with an undergraduate degree in Geography, before completing a PhD in Geography and Philosophy in 1998.
Published works
Bulkeley has published over 50 books and articles, including 'Low Carbon Communities and Social Justice' (2012), which was co-authored by Sarah Fuller, an honorary research fellow, also at the University of Durham's Geography Department.
Bulkeley is also an editor of Environment and Planning C: Government and Policy.
Research Projects
Through both Durham University and the Durham Energy Institute, Bulkeley has been involved in numerous research projects, including:
InCluESEV – Interdisciplinary Cluster on Energy Systems, Equity and Vulnerability
International Network on Urban Low Carbon Transitions (INCUT)
Customer Led Network Revolution
References
1972 births
Living people
Academics of Durham University
British geographers
Women geographers
Fellows of the British Academy
Climate change mitigation researchers | Harriet Bulkeley | [
"Engineering"
] | 328 | [
"Geoengineering",
"Climate change mitigation researchers"
] |
54,648,661 | https://en.wikipedia.org/wiki/Lotfollah%20Meisami | Lotfollah Meysami () is an Iranian Nationalist-Religious activist, journalist and publisher.
He owns and publishes Cheshmandāz-e Irān (), a two-monthly magazine on politics and strategy.
Political activity
Meisami was a student activist with the National Front and the Freedom Movement of Iran while studying at the University of Tehran. After graduation he secured a job that could have given him a stable future, but he chose instead to join the People's Mujahedin of Iran (MEK) and engage in the guerrilla movement against the Pahlavi dynasty. Meisami was blinded by a self-made bomb and also lost a hand.
He was sentenced to imprisonment multiple times: from winter 1963 to summer 1964 at Qasr Prison for his association with the Freedom Movement of Iran, between summer 1971 and 1973 at Evin Prison, and from 1974 to 1979 for his activities with the MEK. He left the MEK following its ideological shift to Marxism, and in 1976/77 founded his own organization, the People's Mujahedin Movement of Iran.
References
Biography on official website
Living people
1942 births
National Front (Iran) student activists
Freedom Movement of Iran politicians
Early People's Mojahedin Organization of Iran members
Iranian religious-nationalists
Iranian publishers (people)
Iranian journalists
Iranian engineers
Petroleum engineers
Iranian blind people
Blind politicians
Blind writers
Blind activists
University of Tehran alumni
Iranian revolutionaries
Iranian amputees
Iranian politicians with disabilities
Members of the National Council for Peace
Imperial Iranian Army conscripted personnel | Lotfollah Meisami | [
"Engineering"
] | 311 | [
"Petroleum engineers",
"Petroleum engineering"
] |
74,635,494 | https://en.wikipedia.org/wiki/Artifact%20%28app%29 | Artifact was a personalized social news aggregator app that uses recommender systems to suggest articles. Launched in January 2023 by Nokto, Inc., a company founded by co-founders of Instagram Kevin Systrom and Mike Krieger, the app is available for iOS and Android. The app’s name is a portmanteau of the words "articles", "artificial intelligence", and "fact". The app shut down in January 2024 as a result of low interest.
History
Nokto, Inc. was established on March 3, 2022, as a foreign stock company in California, with its headquarters in San Francisco.
The company's main product, Artifact, is the first new product launched by Krieger and Systrom since their 2018 resignation from Instagram after conflicts with parent company Meta, which acquired Instagram in 2012.
Artifact launched on January 31, 2023, after the team had been working on it for over a year, offering the option to sign up for a waiting list for its private beta, which grew to about 160,000 people, and then launching in open beta on February 22, 2023.
With a team of seven employees in San Francisco, the app was free throughout its lifetime, with the founders explaining at the time that different business models - such as advertising or subscription fees - could be explored in the future.
In January 2024, cofounder Kevin Systrom announced that the app would be shutting down after concluding that "the market opportunity isn’t big enough to warrant continued investment in this way."
In April 2024, it was announced Artifact had been acquired by Yahoo, who intended to use the service's technology in an upgraded Yahoo! News app.
Features
Frequently described as "TikTok for text" and a competitor to Twitter, Artifact was a news aggregator that used machine learning to make personalized recommendations based on topics, news sources, and authors that the reader is interested in. In addition to reading articles, the app offered the ability to like articles, leave comments, or listen to an audio version of an article read by AI-generated voices, including a simulation of the voices of Snoop Dogg or Gwyneth Paltrow. AI also would rewrite clickbait headlines that users flagged. Artifact later expanded to a social network where users could post links, images and text to their profile, which could be liked or commented on by other users. Similar to other social news websites like Reddit, reader accounts had profiles with reputation scores.
References
External links
Interview with Systrom and Krieger about Artifact by Ben Thompson (analyst)
Android (operating system) software
IOS software
2023 software
Mobile applications
News aggregator software
Yahoo! acquisitions
2024 mergers and acquisitions
Defunct companies based in California | Artifact (app) | [
"Technology"
] | 562 | [
"Mobile software stubs",
"Mobile technology stubs"
] |
74,635,951 | https://en.wikipedia.org/wiki/Aspiration%20window | An aspiration window is a heuristic used in pair with alpha-beta pruning in order to reduce search time for combinatorial games by supplying a window (or range) around an estimated score guess. Use of an aspiration window allows alpha-beta search to compete in the terms of efficiency against other pruning algorithms.
Alpha-beta pruning achieves its performance through cutoffs within its search range. Aspiration windows take advantage of this by supplying a smaller initial window, which increases the number of cutoffs and therefore the efficiency.
However, due to search instability, the score may not always be in the window range. This may lead to a costly re-search that can penalize performance. Despite this, popular engines such as Stockfish still use aspiration windows.
The guess that aspiration windows use is usually supplied by the last iteration of iterative deepening.
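A minimal sketch of the idea (the tree representation, window size, and function names here are illustrative, not engine code): the position is first searched with a narrow window around the guess, and if the result falls at or outside a window bound (fail low / fail high), that side of the window is widened before re-searching.

```python
INF = 10**9

def alphabeta(node, depth, alpha, beta, evaluate, children):
    """Fail-soft negamax alpha-beta search over an abstract game tree."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)   # leaves are scored for the side to move
    best = -INF
    for child in kids:
        score = -alphabeta(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:       # beta cutoff: opponent avoids this line
            break
    return best

def aspiration_search(node, depth, guess, evaluate, children, window=50):
    """Search inside [guess - window, guess + window]; if the score falls
    outside (fail low / fail high), widen that side and re-search."""
    alpha, beta = guess - window, guess + window
    while True:
        score = alphabeta(node, depth, alpha, beta, evaluate, children)
        if score <= alpha:      # fail low: true score may be even lower
            alpha = -INF
        elif score >= beta:     # fail high: true score may be even higher
            beta = INF
        else:                   # score is exact inside the window
            return score

# Toy tree: inner nodes are lists of children, leaves are scores.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
```

With a good guess the first call returns immediately; with a bad guess (e.g. a guess of 100 when the true score is 3), the fail-low re-search still converges to the same value, which is why the cost of occasional re-searches is usually outweighed by the extra cutoffs.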
See also
Principal variation search
References
Sources
Game artificial intelligence | Aspiration window | [
"Mathematics"
] | 193 | [
"Game theory",
"Game artificial intelligence"
] |
74,637,607 | https://en.wikipedia.org/wiki/KELT-20 | KELT-20, also known as MASCARA-2, is an A2 main sequence star in the constellation of Cygnus, about 447 light years away.
Nomenclature
KELT-20 is the star's KELT designation. It is also designated as MASCARA-2 meaning that it is the second star observed by the MASCARA exoplanet search program. Its Henry Draper Catalogue designation is HD 185603, and its Hipparcos designation is HIP 96618.
Planetary system
In 2017, the discovery of the planet KELT-20b was announced.
References
Cygnus (constellation)
A-type main-sequence stars
185603
096618
Transiting exoplanets | KELT-20 | [
"Astronomy"
] | 142 | [
"Cygnus (constellation)",
"Constellations"
] |
74,638,119 | https://en.wikipedia.org/wiki/Tildipirosin | Tildipirosin, sold under the brand name Zuprevo, is a macrolide antibiotic used in pigs and cattle.
Medical uses
In the United States, tildipirosin is indicated for the treatment or control of bovine respiratory disease associated with Mannheimia haemolytica, Pasteurella multocida, and Histophilus somni in beef and non-lactating dairy cattle.
In the European Union, tildipirosin is indicated for the treatment and metaphylaxis of swine respiratory disease associated with Actinobacillus pleuropneumoniae, P. multocida, Bordetella bronchiseptica, and Glaesserella parasuis sensitive to tildipirosin; and for the treatment and prevention of bovine respiratory disease associated with M. haemolytica, P. multocida, and H. somni sensitive to tildipirosin.
References
Veterinary drugs
Piperidines
Dimethylamino compounds
Lactones
Cyclic ketones
Glucosides
Amino sugars | Tildipirosin | [
"Chemistry"
] | 232 | [
"Amino sugars",
"Pharmacology",
"Carbohydrates",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
74,639,706 | https://en.wikipedia.org/wiki/List%20of%20Byzantine%20forts%20and%20other%20structures%20in%20the%20Maghreb | The List of Byzantine forts and other structures in the Maghreb lists photos of the fortresses built between 533 and 698 on the territory of the Byzantine Empire in the Maghreb. On one hand, they served to pacify the Berbers within the empire and, on the other hand, to ward off external enemies.
Background
The Vandals, who had ruled in the heartland of the former Roman Province of Africa since 439, had considerable difficulty defending the borders against the Berbers and keeping the Berbers under Vandal rule under control, which prompted large landowners and smallholders alike to fortify their farms.
After the Eastern Roman reconquest of the areas conquered by the Vandals in the 5th century and renewed subjugation of small Roman-Berber states established in the same period, various fortresses were built there both on the border as well as within the area ruled by the Eastern Romans. Some of the smaller Roman forts were also repaired.
Essentials
Construction of the fortresses took place mainly during the second term of office of the praetorian prefect Solomon, 539 to 544, with the substance of older Roman buildings often reused as building material. Most of the fortresses are significantly smaller than their Roman predecessors and are mostly classified as forts. Many of these forts were subsequently used and rebuilt by the Arabs and Ottomans, and in part even served as stylistic templates for the construction of their own fortresses. In addition, building material from Byzantine buildings was used to construct a number of Arab fortresses, such as Fort Sidi Salem Bou Ghara near the Roman city of Gigthis. This makes it considerably more difficult to identify a fortress in the Maghreb as Byzantine.
Overview
Fortified sites, with coordinates given as latitude N, longitude E:

Setif (36.1911, 5.4044)
Tubernuc (36.5331, 10.4563)
Biskra (34.8543, 5.7338)
Djémila (36.2607, 5.5678)
Sabrata (32.8056, 12.4872)
El Kef (36.1817, 8.7128)
Gadiaufala (36.0884, 7.2613)
Leptis Magna (32.6379, 14.2953)
Lambaesis (35.4832, 6.2607)
Taoura (36.1721, 8.0389)
Tobna (35.3506, 5.3496)
Tipasa (36.5897, 2.4475)
Tipasa (36.1587, 7.7038)
Zaga (36.9224, 8.9955)
Zucchara (36.2549, 9.9446)
Thignica (36.5234, 9.3618)
Vaga (36.7250, 9.1820)
Thamugadi (35.4795, 6.4684)
Oea (32.8959, 13.1815)
Sufetula (35.2405, 9.1194)
Suas (36.4242, 9.3224)
Ksar el Hadid (36.0673, 9.3983)
Mustis (36.3374, 9.1472)
Lamasba (35.6192, 5.9197)
Theveste (35.4038, 8.1231)
Capsa (34.4281, 8.8117)
Ammaedara (35.5636, 8.4539)
Clupea (36.8377, 11.1161)
Iunci (34.4653, 10.4137)
Limisa (36.0359, 9.6919)
Mactaris (35.5112, 9.1225)
Aggar (35.8562, 9.5203)
Civitas Vazitana Sarra (36.0064, 9.5473)
Chusira (35.8136, 9.3669)
Zabi (35.6826, 4.5851)
Madauros (36.0772, 7.9024)
References
Literature
Denys Pringle: The Defence of Byzantine Africa from Justinian to the Arab Conquest. An Account of the Military History and Archaeology of the African Provinces in the Sixth and Seventh Century (= British Archaeological Reports. International Series 99). British Archaeological Reports, Oxford 1981, ISBN 0-86054-119-3 (reprint 2001).
Averil Cameron: Vandal and Byzantine Africa. In: Averil Cameron, Bryan Ward-Perkins, Michael Whitby (editor): The Cambridge Ancient History. Volume 14: Late Antiquity. Empire and Successors. AD 425–600. Cambridge University Press, Cambridge 2000, ISBN 0-521-32591-9, p. 552–569.
Susan Raven: Rome in Africa. 3rd edition, Routledge, London, 1993, ISBN 0-415-08150-5, p. 209–230.
Byzacena
Medieval history of Algeria
Berbers
Vandals
7th-century conflicts
6th-century conflicts
Byzantine military
Wars involving ancient Rome
Exarchate of Africa
Roman fortifications in Roman Africa | List of Byzantine forts and other structures in the Maghreb | [
"Engineering"
] | 3,886 | [
"Military engineering",
"Byzantine military architecture"
] |
74,639,708 | https://en.wikipedia.org/wiki/Drug%20permeability | In medicinal chemistry, Drug Permeability is an empirical parameter that indicates how quickly a chemical entity or an active pharmaceutical ingredient crosses a biological membrane or another biological barrier to become bioavailable in the body. Drug permeability, together with drug aqueous solubility are the two parameters that define the fate of the active ingredient after oral administration and ultimately define its bioavailability. When drug permeability is empirically measured in vitro, it is generally called apparent permeability (Papp) as its absolute value varies according to the method selected for its measurement. Papp is measured in vitro utilizing cellular based barriers such as the Caco-2 model or utilizing artificial biomimetic barriers, such as the Parallel Artificial Membrane Permeation Assay (PAMPA) or the PermeaPad. All these methods are built on an acceptor compartment (from 0.2 up to several mL according to the method uses) where the drug solution is placed, a biomimetic barrier and an acceptor compartment, where the drug concentration is quantified over time. By maintaining sink condition, a steady state is reached after a lag time (τ, Fig. 1) .
Data Analysis
The drug flux (j) is the slope of the linear regression of the accumulated mass (Q) over time (t), normalized by the permeation area (A), i.e., the surface area of the barrier available for permeation.
Equation 1: j = (1/A) · (dQ/dt)
The drug apparent permeability (Papp) is calculated by normalizing the drug flux (j) over the initial concentration of the API in the donor compartment (c0) as:
Equation 2: Papp = j / c0
Dimensionally, Papp represents a velocity and is normally expressed in cm/s. The higher the permeability, the higher the expected bioavailability of the drug after oral administration.
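As an illustration only (made-up numbers and a hypothetical helper name), Papp can be estimated from sampled cumulative mass Q(t) by fitting the steady-state slope and applying Equations 1 and 2:

```python
def apparent_permeability(times_s, mass_ug, area_cm2, c0_ug_per_ml):
    """Estimate Papp (cm/s): flux j = slope of Q(t) / A (Equation 1),
    then Papp = j / c0 (Equation 2). Since 1 ug/mL == 1 ug/cm^3, the
    units reduce to cm/s. Assumes the samples already lie in the linear
    (steady-state) region, i.e. after the lag time tau."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_q = sum(mass_ug) / n
    # ordinary least-squares slope of accumulated mass vs. time
    slope = (sum((t - mean_t) * (q - mean_q) for t, q in zip(times_s, mass_ug))
             / sum((t - mean_t) ** 2 for t in times_s))
    flux = slope / area_cm2          # ug * s^-1 * cm^-2
    return flux / c0_ug_per_ml       # cm/s

# Made-up steady-state data: 0.02 ug/s permeating a 1.12 cm^2 insert,
# with a donor concentration of 100 ug/mL.
papp = apparent_permeability([0, 600, 1200, 1800],
                             [0.0, 12.0, 24.0, 36.0],
                             area_cm2=1.12, c0_ug_per_ml=100.0)
```

For these invented numbers the fitted flux is 0.02/1.12 ug s^-1 cm^-2, giving a Papp on the order of 10^-4 cm/s, which in the Caco-2 literature would typically be classified as highly permeable.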
See also
Lipinski's rule of five
Pharmacodynamics
Pharmacokinetics
References
External links
Permm server and database, a computational tool for theoretical assessment of passive permeability of molecules across the lipid bilayer
Medicinal chemistry
Diffusion
Membrane biology | Drug permeability | [
"Physics",
"Chemistry",
"Biology"
] | 444 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Biochemistry",
"Membrane biology",
"nan",
"Molecular biology",
"Medicinal chemistry"
] |
74,640,975 | https://en.wikipedia.org/wiki/Unilever%20Research%20Laboratorium | The Unilever Research Laboratorium was a nutrition and human biology research centre in South Holland, owned by Unilever, and since November 2019, has been a private science park.
History
Construction
At the time of construction in 1956, Vlaardingen was the third-busiest port in the Netherlands, situated on the Nieuwe Maas.
On 14 February 1945, a neighbouring Unilever factory was set up as a V-1 launching site, with another site at Ypenburg, on the coast. The site was consequently attacked by RAF Typhoon aircraft on 23 March 1945. These were some of the last V-1 missiles launched against England.
The neighbouring factory closed in 2008. During the war, Unilever's research in the Netherlands had taken place at Zwijndrecht.
Opening
The site was officially opened in November 1956 by Willem Drees, the Dutch prime minister. Another research site was at Bahrenfeld in Germany.
The site presented the Unilever Research Prize for over 60 years when owned by Unilever Benelux.
Current site
On 6 October 2016, Unilever announced that the site would close. The site closed in November 2019 and is now a private science park.
Research
In 1968, the centre identified the protein miraculin, as did researchers at the Florida State University College of Human Sciences. Other work by Henk Van der Wel on the biochemistry of sweetness sensing was published in Chemical Senses.
In 1980, researchers at the centre genetically modified a bacterium to carry the genes for producing thaumatin.
Visits
On 1 April 1969, a new laboratory was opened by Prince Claus of the Netherlands.
On 5 December 2016 Martijn van Dam, State Secretary for the Ministry of Economic Affairs and Climate Policy, visited.
Former employees
David Adriaan van Dorp, in 1967 worked with University College Cardiff
See also
Monell Chemical Senses Center
References
1956 establishments in the Netherlands
2019 disestablishments in the Netherlands
Biochemistry research institutes
Buildings and structures in South Holland
Economy of South Holland
Food industry in the Netherlands
Food science institutes
Food technology organizations
History of Vlaardingen
Industrial buildings completed in 1956
Nutrition in the Netherlands
Research institutes established in 1956
Research institutes in the Netherlands
Science parks in the Netherlands | Unilever Research Laboratorium | [
"Chemistry"
] | 453 | [
"Biochemistry research institutes",
"Biochemistry organizations"
] |
74,641,090 | https://en.wikipedia.org/wiki/Hisham%20Khatib | Dr. Hisham Khatib (Arabic: "هشام الخطيب") (5 January 1936 – 31 May 2022) was a Jordanian politician and civil servant. He modernized and expanded the Jordanian and Palestinian electrical power capabilities during his service in the sector. He was the first Minister of Power and Mineral Resources in Jordan and Chairman of the Power Commission. During his later years, he served as a member of the Jordanian Senate.
Being of Palestinian origin and having spent much of his youth in Jerusalem, he developed a passion for the art and history of Palestine and the Holy Land, particularly of the 18th and 19th centuries. He spent considerable time collecting art, books, and artifacts from that era, and wrote many books on the topic, including several documenting his collection.
Early life
Dr. Khatib was born in January 1936 in Acre at the house of his maternal grandfather, Sheikh Musa Al Tabari. His father Sheikh Mohammad Hashem Khatib was a judge and would receive appointments all over the country, with his family moving around with him. They moved to Hebron for 1 year in 1941, then quickly moved to Tulkarem in 1943 where they lived until 1945. They then moved again to Nablus where they lived until 1949 when his father was appointed as the Qadi (Judge) of Nablus. In 1949 the family moved to Jerusalem where his father was appointed to the Moslem Sharia Appeal Court of Jerusalem. In Jerusalem, Dr. Khatib attended El Rashadyia School headed by Tawfiq Abu Saud, then attended his final year of schooling in Egypt in 1953 at El Nahrareh School.
Education
After graduating from school, Dr. Khatib enrolled at the Engineering School of Ain Shams University, Egypt, in 1954 where he studied Electrical Engineering, finally receiving his BSc in 1959, the first of many degrees. In 1960 he won a scholarship to spend a two-year post-graduate apprenticeship in the UK. The apprenticeship did not live up to his ambitions, and he seized the opportunity to leave it and join a twelve-month M.Sc. course on electrical machines at the University of Birmingham. He completed his MSc in 1962 and returned to Jerusalem. After some time in the industry, he enrolled with Queen Mary University of London, UK to attain his two final degrees, a B.Sc. in Economics, and a Ph.D. in Electrical Engineering, for which he had to take a sabbatical leave from work. He finally received his Ph.D. in 1974.
Early career
Upon returning to Jerusalem in 1959, after receiving his first B.Sc., he accepted an engineering position at the Jordan Jerusalem District Electricity Company (JJDEC). After completing his M.Sc. course, and continuing in his previous position for 4 years, he was appointed as Chief Engineer at JJDEC in 1966.
Later career
In 1974, after receiving his Ph.D., he moved to Jordan, where he worked as Deputy Director-General of the Jordan Electricity Authority. In 1976 he joined the Arab Fund for Economic and Social Development, based in Kuwait, as an energy expert. He returned to Jordan in 1980 as Director-General of the Jordan Electricity Authority, and in 1982 was also appointed chairman of the board of the Jordan South Cement Company.
In 1984, Dr. Khatib was appointed as the first Minister of Energy and Mineral Resources in Jordan where he served until 1989. He then served as the Minister of Water and Irrigation from 1993 until 1994. Then, he served as Minister of Planning and International Cooperation from 1994 to 1995. In 2005 he was appointed as the first Chairman of the Energy and Minerals Commission where he served until 2009.
He was appointed to the 27th Jordanian Senate, where he served from September 2016 to September 2020 as Chairman of the Energy and Mineral Resources Committee, a member of the Finance and Economics Committee, and a member of the Palestine Committee.
He was appointed as chairman of the board of trustees for the Al-Balqaʼ Applied University in 2019 until he finally retired in 2021.
In between careers, he had a private consulting practice where he was contracted by numerous international agencies such as the United Nations Development Programme, the World Bank, the Arab Fund for Economic and Social Development, and many others.
International Memberships and Affiliations
Dr. Khatib was active internationally, known for his expertise across engineering, economics and art. He was a member of the following committees and agencies:
Fellow of the Institute of Engineering and Technology (IET) in the UK
Life Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in the USA
International Association for Energy Economics (IAEE) in the USA
Palestine Exploration Fund (PEF)
Association for the Study of Travellers to Egypt and the Near East (ASTENE)
International Council on Large Electric Systems (CIGRE)
Vice Chairman and then Honorary Vice Chairman of the World Energy Council (WEC)
World Federation of Scientific Workers
Darat Al Funun Honorary Board
Family
Hisham was the eldest of four siblings, with sisters Aida, Maha, and Ghada Khatib. He married Maha Daher Al-Khatib in 1970.
He and Maha had three children: Mohammad (born 1972), Lynn (born 1975), and Isam (born 1979).
Mohammad married Ruwaida Share in 2004; they have three sons, Hisham, Zaid, and Kareem.
Isam married Dima Bilbaisi in 2006; they have three children, Jeeda, Kinan, and Naya.
Art collection
Dr. Khatib was a philanthropist of the arts and heritage, supporting many local and international organizations, including Darat al Funun, Tiraz Centre, and the Palestine Exploration Fund. Over 60 years, he personally collected an extensive collection of art from all over MENA (Middle East and North Africa), especially the Holy Land (Jerusalem), containing a variety of books, manuscripts, maps, photographs, paintings, Qurans, and many others. Using this collection, Hisham was able to preserve the Arab and Palestinian cultural heritage and knowledge for the coming generations, publishing 7 books on his collection, including:
“The Holy Land: Palestine and Egypt Under the Ottomans” I.B. Tauris, UK, 2003
“Palestine and Jordan 1500 – 1900” Darat al Funun, The Khalid Shoman Foundation, Jordan, 2005
“Jerusalem, Palestine, and Jordan” Gilgamesh Publishing, UK, 2013
“Panoramic Jerusalem” Pro-Jerusalem Society, 2015
“Wild Flowers of Palestine and Jordan” Private Edition, 2016
“Valuable Printed Books and Manuscripts in the Khatib Collection” Private Edition, 2019
“A Voyage to Jerusalem” Reprint of a manuscript from 1901, 2019
Other publications
Outside of art, Dr. Khatib was also a persevering scientist and economist, making other publications with the Institution of Engineering and Technology (IET) about economic evaluations in the electricity supply industry, broadening his range of expertise even further. These publications include:
“Financial and economic evaluation of projects in the electricity supply industry” IET, 1998
“Economic Evaluation of Projects in the Electricity Supply” IET, 2003
“Economic Evaluation of Projects in the Electricity Supply Industry (3rd Edition)” IET, 2014
These publications do not include his many articles on electrical engineering, economics, art, and other topics across many industries.
His final publication was his memoirs, titled “86 and still going… Slowly”, published in 2020, two years before his passing on the 31st of May 2022. In these memoirs, reserved for his friends, family, and loved ones, Dr. Khatib details intimate and personal details about his life, from beginning to end, teaching his grandchildren about their heritage, and highlighting the importance of family.
References
Jordanian engineers
Jordanian historians
Rashidiya School alumni
Ain Shams University alumni
Alumni of the University of Birmingham
Alumni of Queen Mary University of London
Palestinian arts
Energy ministers of Jordan
Public works ministers of Jordan
Water ministers of Jordan
Planning ministers of Jordan
Members of the Senate of Jordan
Fellows of the IEEE
Fellows of the Institute of Engineering and Technology
Publishers
Electrical engineers | Hisham Khatib | [
"Engineering"
] | 1,741 | [
"Electrical engineering",
"Electrical engineers"
] |
74,642,766 | https://en.wikipedia.org/wiki/Waterworld%2C%20Wrexham | Waterworld (), formerly the Wrexham Swimming Baths, is a leisure centre in Wrexham, North Wales. Known for its hyperbolic paraboloid roof, the only roof of its type in Wales, the centre houses a set of swimming pools and a gym. The centre was opened in 1967, with a major refurbishment occurring in the 1990s, being re-opened by Elizabeth II in March 1998 under its current name.
Due to the difficult and high maintenance costs of the roof, the building was proposed to be demolished before its 1998 refurbishment and again in the 2010s as part of a council reorganisation and cost-saving measure of leisure services in Wrexham County Borough. Under these newer proposals, Waterworld was proposed to be replaced by a new facility somewhere in Wrexham city centre. The plans were abandoned in 2015 due to funding concerns, and the centre was instead transferred to a trust, Freedom Leisure, in 2016 for ten years. Since being transferred to a trust, a petition was launched to reinstate the centre's unofficial mascot, a green inflatable alien.
The centre houses multiple pools, a lazy river, water slide, and a bubble pool, as well as a large viewing terrace. It houses a gym, Costa Cafe and spaces for other activities.
History
Prior to Waterworld, the then-town's former baths, dating to 1901, were located on Tuttle Street and used heating from the neighbouring incinerator. This was where Wrexham Swimming Club, the first in North Wales, was founded.
The existing Waterworld building was constructed in 1967 to the designs of F. D. Williamson and opened in May 1970 as the "Wrexham Swimming Baths", or just "Wrexham Baths". The building's design caused controversy, with many objections raised over its appearance, cost and constructional difficulty; it was possibly the most controversial building in the town at the time. It opened with three swimming pools: a long main pool with a deep centre and two shallow ends, a learners' pool, and a deep diving pool with concrete diving stages at 1, 3 and 5 metres and springboards at 1 and 3 metres. At its opening, the building was described as "hyperbolic, parabolic and diabolic". Also in the 1970s, the Wrexham Symphony Orchestra was based in the basement of the building, below the swimming pools. It was said underwater swimmers could clearly hear the orchestra, while the sound of swimmers was noticeable to the orchestra.
Eighteen-year-old Gareth Williams of Rhosddu was the centre's first paying member of the public in 1970, and the centre largely received positive comments in The Leader at the time.
The Wrexham Swimming Club moved from Tuttle Street to the building in the same month. The building was later threatened with demolition, but was renovated instead and re-opened in March 1998 by Queen Elizabeth II as "Wrexham's Waterworld".
In the planning stages of the 1998 refurbishment, a feasibility study was conducted by Space Space for Wrexham Council. In the study, the building was considered for partial conversion into a cinema, theatre, dry leisure complex, exhibition/conference centre, bars or nightclubs. The study concluded that the best use of the centre was to remain a regional swimming facility, but that changes were needed to accommodate the increased demand for swimming. The diving pool was replaced with a leisure pool containing river rapids, a geyser pool, a spa pool and a "Water Chute" rubber-ring ride. The long main pool was modified with the addition of a traversable boom, so its length can be reduced for water polo events or for national short-course events. The learner pool's depth was also altered to make it more "learner friendly", while the spectator areas were adjusted to provide raked seating for over 200 spectators. All pools were also re-tiled.
Proposed demolition
In December 2013, Wrexham councillors voted to consider replacing the centre, following advice from consultants. The use of consultants by the council was criticised by supporters of the centre over their £51,760 cost. Following the vote, the council announced a public consultation into plans to close Waterworld and the leisure centre at Plas Madoc, and to replace Waterworld with a new £11.9 million facility near the town centre, as part of £13 million of council budget cuts; the existing centres would cost £2 million to maintain.
In January 2014, after the proposal to close the centre was announced, concerned locals contacted Cadw, the Welsh Government's historic environment service, asking for the building to be listed. Listing might protect the building from demolition or major works that would "change its character", which would require consent from the Welsh Government's Planning Division before any works could proceed.
In February 2014, Wrexham County Borough Council councillors voted to close the centre and possibly replace it with a new facility. One of the touted locations for the new facility is on the Crown Buildings' site, next to the existing Waterworld site. By April 2014, Waterworld was losing £330,000 per year.
In May 2014, Cadw rejected the bid for the building to gain listed building status, arguing that the 1998 redevelopment had altered the building too significantly for it to be regarded as an "exemplar building of its type". The decision was said at the time to make demolition more likely.
By February 2015, the decision whether the centre would be replaced by a new facility was deferred until March 2015, with the cost for a new facility described as "no longer affordable".
In April 2015, the plans to demolish the building were abandoned due to concerns over the £8.9 million funding gap. The centre was instead passed to a leisure trust. A report said the building's lifespan could be extended to 2035, contrasting with a previous report stating it was nearing the "end of its design life".
Trust management and refurbishment
In 2016, Freedom Leisure took over the management of Waterworld, as part of a signed agreement between the trust and Wrexham council to manage four leisure and activity centres, and five dual-use sports facilities across Wrexham County Borough. The contract lasts until 2026.
Beginning in July 2017, the building underwent a £1–1.5 million refurbishment, including of its gym, and reopened in October 2017.
In February 2018, an unnamed member of the public, criticised the trust-managed centre as "disgusting and unhygienic", in particular the building's upper floors.
In May 2019, a petition was launched to bring back the "Wrexham Waterworld alien", the unofficial mascot for the centre. The inflatable green alien was visible from the building's window overlooking the roundabout adjacent to Tesco in Wrexham. The alien was said to be no longer visible when the centre was taken over by Freedom Leisure. When contacted, a Wrexham council spokesman stated "while we rarely comment on former members of staff, we are happy to confirm that the alien who formerly supervised the slide at Waterworld is enjoying a happy retirement".
As of 2022, while there are no formal proposals to close the centre, the council has still considered building a new facility elsewhere in Wrexham, as part of a longer-term strategy to reorganise the area's leisure services.
Description
The building's "futuristic" hyperbolic paraboloid roof is said to be the only roof of its type in Wales. Covering 50 x 50 m (160 x 160 ft) in area, the roof was constructed in the 1960s and has had high maintenance costs since. The reinforced concrete construction has suffered issues relating to the moist, chlorinated air inside the building and weathering outside. The renovations in 1997 and later in 2017 aimed to make the structure more durable by using modern materials. The roof remains the building's dominant feature. The building has a glazed east elevation adjacent to the A5152 and its roundabout, while the west elevation is made of a series of barrel-vaulted volumes which step towards the pointed apex of the roof.
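For reference, a hyperbolic paraboloid is the doubly ruled saddle surface of the standard textbook definition (the parameters here are generic, not dimensions of the Wrexham roof):

```latex
z = \frac{x^2}{a^2} - \frac{y^2}{b^2}
```

Because the surface contains two families of straight lines, a shell of this form can be cast over straight formwork, which is part of what made thin-concrete "hypar" roofs attractive to architects in the 1950s and 1960s.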
The centre's swimming area has a six-lane main pool, two learner pools, a slide, a Jacuzzi and a sauna. The two small pools for children and the standard swimming pool were created during the 1990s renovation, which also saw the former diving pool replaced with a helter-skelter-style water slide. The original viewing facilities, described as "poor", were replaced with a large terraced seating area. There is also an indoor raft ride, a bubble pool and a lazy-river leisure pool.
The centre houses a Costa Cafe, gym and sun beds.
References
Swimming venues in Wales
1970 establishments in Wales
Buildings and structures in Wrexham
Hyperboloid structures
Sport in Wrexham
Tourist attractions in Wrexham County Borough | Waterworld, Wrexham | [
"Technology"
] | 1,829 | [
"Structural system",
"Hyperboloid structures"
] |
74,642,799 | https://en.wikipedia.org/wiki/Tourmaline%20Reef | Tourmaline Reef (Spanish: Arrecife de Tourmaline) is a shelf-edge reef located in the Mona Passage off Mayagüez Bay in western Puerto Rico. The reef is one of the best-preserved reefs of its type in Puerto Rico, as it lies far enough from the coast, and was selected as one of the first coral reef protection zones under the Puerto Rico Coastal Zone Management Program (Programa de Manejo de la Zona Costanera de Puerto Rico). Tourmaline Reef is located close to Punta Guanajibo, 7.5 nautical miles from Mayagüez, at depths of up to 10 meters below the ocean surface, in waters of moderate to high visibility due to minimal terrigenous or sedimentary deposits.
Conservation
The reef system is protected as the Tourmaline Reef Nature Reserve (Reserva Natural Arrecife de Tourmaline), managed by the Puerto Rico Department of Natural Resources (DRNA), which provides management plans and conservation resources limiting fishing activities in the area to preserve its delicate ecosystem, previously threatened by the overfishing of red grouper (Epinephelus guttatus). The reserve extends between the maritime borders of the municipalities of Mayagüez and Cabo Rojo. Coral cover on the reef was approximately 40% in 2009, a decrease from 60% in 2003. In addition to the coral ecosystem, the reserve also protects tracts of seagrass prairies, important for species such as sea turtles and the West Indian manatee.
References
Coral reefs
Protected areas of Puerto Rico | Tourmaline Reef | [
"Biology"
] | 328 | [
"Biogeomorphology",
"Coral reefs"
] |
74,642,889 | https://en.wikipedia.org/wiki/Long%20March%206C | The Long March 6C (CZ-6C) () is a Chinese two-stage-to-orbit liquid-fueled launch vehicle designed and manufactured by Shanghai Academy of Spaceflight Technology, a subsidiary of China Aerospace Science and Technology Corporation. The rocket is a dual engine first stage variant of the Long March 6; alternatively, it may be considered to be a slightly shorter single stick variant of the Long March 6A. Both the first and second stages of the rocket use liquid oxygen and RP-1 propellants. It is employed to launch small and medium-sized military, civilian and commercial satellites to LEOs and Sun-synchronous orbits; it is capable of lifting 2,400 kg to 500 km SSOs.
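The quoted payload figure refers to a 500 km Sun-synchronous orbit; a sketch of why such an orbit implies a near-polar launch inclination, using standard Earth constants (this is an illustrative calculation, not Long March 6C mission data):

```python
import math

# For a Sun-synchronous orbit, the J2 nodal precession must match the Sun's
# apparent motion of 360° per year. Constants are standard Earth values.
MU = 398_600.4418    # Earth's gravitational parameter, km^3/s^2
R_E = 6_378.137      # Earth equatorial radius, km
J2 = 1.08263e-3      # Earth oblateness coefficient
PRECESSION = 2 * math.pi / (365.2422 * 86_400)   # required node rate, rad/s

a = R_E + 500.0           # semi-major axis for a 500 km circular orbit, km
n = math.sqrt(MU / a**3)  # mean motion, rad/s
# Node precession for a circular orbit: dΩ/dt = -1.5 * J2 * (R_E/a)^2 * n * cos(i)
cos_i = -PRECESSION / (1.5 * J2 * (R_E / a) ** 2 * n)
incl = math.degrees(math.acos(cos_i))
print(f"SSO inclination at 500 km ≈ {incl:.1f}°")   # about 97.4°
```

The slightly retrograde inclination (just over 90°) is what keeps the orbital plane rotating eastward at the same rate the Earth moves around the Sun.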
The rocket undertook a successful maiden launch on 7 May 2024 at 13:21 UTC from North China's Taiyuan Satellite Launch Center.
A rideshare launch opportunity by Long March 6C was sold at an online auction in July 2023, with bidding prices starting at ¥87,000/kg.
List of launches
References
Long March (rocket family)
Vehicles introduced in 2023
2023 in China | Long March 6C | [
"Astronomy"
] | 234 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
74,643,075 | https://en.wikipedia.org/wiki/Mitochondrial%20pyruvate%20carrier%201 | Mitochondrial pyruvate carrier 1 (MPC1), also known as brain protein 44-like (BRP44L) and SLC54A1, is a protein that in humans is encoded by the MPC1 gene. It is part of the Mitochondrial Pyruvate Carrier (MPC) protein family. This protein is involved in transport of pyruvate across the inner membrane of mitochondria in preparation for the pyruvate dehydrogenase reaction.
Interactive pathway map
Clinical significance
Mitochondrial pyruvate carrier deficiency (MPYCD) is an autosomal recessive disease due to mutations in the MPC1 gene on chromosome 6q27. It is an inborn error of carbohydrate metabolism that blocks aerobic glycolysis by preventing the transport of pyruvate from the cytosol into the mitochondrion for oxidative phosphorylation; however, anaerobic glycolysis is preserved. Common signs and symptoms include poor growth, normal lactate/pyruvate ratio (however both lactate and pyruvate are in higher than normal concentrations), hepatomegaly, lactic acidosis, hypoglycemia, neurological problems, and hypotonia. A disease with comparable symptoms is also seen in autosomal recessive mutations of the MPC2 gene.
See also
Mitochondrial pyruvate carrier 2
Inborn errors of carbohydrate metabolism
References
Genes on human chromosome 6
Inborn errors of carbohydrate metabolism
Autosomal recessive disorders
Transport proteins
Solute carrier family | Mitochondrial pyruvate carrier 1 | [
"Chemistry"
] | 336 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
74,644,909 | https://en.wikipedia.org/wiki/CoRoT-23 | CoRoT-23 is a main-sequence star located in the constellation Serpens at a distance of about 1956 light-years from the Earth. At least one planet revolves around the star.
Characteristics
CoRoT-23 is a yellow dwarf main-sequence star similar to the Sun. Its mass is 1.098 solar masses and its radius is 0.86 solar radii. Its surface temperature is about 5900 kelvin.
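The quoted radius and temperature can be combined via the Stefan–Boltzmann law to estimate the star's luminosity; a hedged back-of-the-envelope sketch (the solar reference temperature is a standard value, not taken from the article):

```python
# Stefan–Boltzmann estimate of CoRoT-23's luminosity relative to the Sun,
# using the radius (0.86 R_sun) and effective temperature (~5900 K) quoted
# above. The solar effective temperature of 5772 K is the IAU nominal value.
R_RATIO = 0.86    # stellar radius in solar radii
T_STAR = 5900.0   # effective temperature, K
T_SUN = 5772.0    # solar effective temperature, K

# L / L_sun = (R / R_sun)^2 * (T / T_sun)^4
luminosity = R_RATIO**2 * (T_STAR / T_SUN)**4
print(f"L ≈ {luminosity:.2f} L_sun")   # roughly 0.8 L_sun
```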
Planetary system
One planet has been discovered orbiting CoRoT-23, CoRoT-23b.
The planet may be in an unstable orbit and subject to merger with the host star in the future.
References
Serpens
G-type main-sequence stars | CoRoT-23 | [
"Astronomy"
] | 134 | [
"Constellations",
"Serpens"
] |
74,646,617 | https://en.wikipedia.org/wiki/CoRoT-20 | CoRoT-20 is a star, which is located in the constellation Monoceros at a distance of about 4011 light years from the Earth. The star is orbited by at least two planets.
Characteristics
CoRoT-20 is a very young star by astronomical standards, with an apparent magnitude of 14.66. Its age is estimated at approximately 100 million years. In terms of mass and radius, it is almost identical to the Sun, and its surface temperature is about 5880 kelvins. CoRoT-20 owes its name to the CoRoT space telescope, which discovered its two planets.
Planetary system
In 2011, a group of astronomers working within the CoRoT program announced the discovery of the planets CoRoT-20b and CoRoT-20c in this system. CoRoT-20b is a hot gas giant, more than four times the mass of Jupiter, yet with a radius only 84% of Jupiter's, indicating a high average density. Both planets orbit close to the parent star, at a distance of about 0.09 AU. The discovery was made by the transit method.
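The transit method measures the fractional dip in stellar brightness as the planet crosses the stellar disc; a rough sketch of the expected dip for CoRoT-20b, using the 84%-of-Jupiter radius quoted above and an assumed stellar radius of one solar radius (the article says the star is almost identical to the Sun; the reference radii are standard values, not from the article):

```python
# Rough transit-depth estimate for CoRoT-20b.
R_JUP_KM = 69_911    # mean Jupiter radius, km
R_SUN_KM = 695_700   # nominal solar radius, km

r_planet = 0.84 * R_JUP_KM   # planet radius quoted in the article
r_star = 1.0 * R_SUN_KM      # assumed Sun-like stellar radius

# Fractional dip in stellar flux during transit: depth ≈ (R_p / R_s)^2
depth = (r_planet / r_star) ** 2
print(f"transit depth ≈ {depth * 100:.2f}%")
```

A dip below one percent is typical for giant planets around Sun-like stars, which is why space photometry such as CoRoT's was needed to detect it.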
References
Monoceros
G-type main-sequence stars | CoRoT-20 | [
"Astronomy"
] | 238 | [
"Monoceros",
"Constellations"
] |
74,646,992 | https://en.wikipedia.org/wiki/3D%20Atlas | 3D Atlas is an educational multimedia software application developed by Creative Wonders and published by Electronic Arts. It consists of the original 3D Atlas as well as 3D Atlas 97 and 3D Atlas 98.
Reception
The New York Times said "3D Atlas is best at displaying national statistics, letting you graph and compare them in many fascinating ways, including ones you invent. But the inconsistent interface does not always make you aware of scale".
CNET said "Still, 3D Atlas offers excellent introductory information for anyone interested in learning more about the earth".
By 1997, 3D Atlas sold more than 2 million units. It won the "Best International Reference Product" Award from the European Multimedia Association and MacUser's "Best Reference Award".
See also
3D World Atlas
References
Educational software
Multimedia software | 3D Atlas | [
"Technology"
] | 158 | [
"Multimedia",
"Multimedia software"
] |
74,647,325 | https://en.wikipedia.org/wiki/GLOBIO%20Model | The GLOBIO global biodiversity model is a model developed by the Netherlands Environmental Assessment Agency to support policy makers by quantifying human impacts on biodiversity and ecosystems at large (regional to global) scales.
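GLOBIO's published methodology expresses biodiversity intactness as mean species abundance (MSA) and combines pressure-specific impacts multiplicatively per grid cell; a sketch under that assumption (the pressure values here are invented for illustration, not model outputs):

```python
# GLOBIO-style aggregation: overall intactness is expressed as mean species
# abundance (MSA, where 1.0 = undisturbed), and pressure-specific MSA values
# are combined by multiplication. These pressure values are invented.
msa_by_pressure = {
    "land_use": 0.60,
    "infrastructure": 0.90,
    "fragmentation": 0.95,
    "climate_change": 0.90,
}

msa_total = 1.0
for value in msa_by_pressure.values():
    msa_total *= value
print(f"overall MSA = {msa_total:.3f}")   # 0.462 for these inputs
```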
References
External links
GLOBIO website
Mathematical modeling | GLOBIO Model | [
"Mathematics"
] | 63 | [
"Applied mathematics",
"Mathematical modeling"
] |
74,647,701 | https://en.wikipedia.org/wiki/Saeko%20Hayashi | Saeko S. Hayashi (born 1958, ) is a Japanese astronomer based in Hawaii whose research interests include star formation, protoplanetary disks, the atmosphere of exoplanets, and the construction and optics of large telescopes. She is an associate professor of the National Astronomical Observatory of Japan and The Graduate University for Advanced Studies and a founding astronomer of Japan's based Subaru Telescope. She is also known for her work popularizing astronomy, both in Japan and in Hawaii.
Education and career
Hayashi was born in 1958 in Akita Prefecture. She was an undergraduate in the Faculty of Science at the University of Tokyo, after taking the admissions examination against her parents' wishes and without their knowledge; she continued at the university for a Ph.D., completed in 1987.
After three years working at the James Clerk Maxwell Telescope in Hawaii, she began working on the Subaru Telescope through the National Astronomical Observatory of Japan in 1990. Although she performed the initial project work in Tokyo, she returned to Hawaii during its construction, and has remained there for the rest of her career. She began working on the Thirty Meter Telescope project in 2017, focusing on the development of mirror blanks for the telescope.
Research
Her research focus has shifted from radio astronomy in graduate school, through submillimeter astronomy in her postdoctoral work at the James Clerk Maxwell Telescope, to optical telescopy subsequently, and from observational astronomy to the design and construction of large telescopes.
Personal life
Hayashi is married to Masahiko Hayashi, also an astronomer, the director general of the National Astronomical Observatory of Japan from 2012 to 2018.
Recognition
Minor planet 6250 Saekohayashi, discovered in 1991, is named for her.
References
External links
Home page
Saeko Hayashi (interview), She is an astronomer
Living people
Japanese astronomers
Women astronomers
University of Tokyo alumni
Japanese expatriates in the United States
1958 births | Saeko Hayashi | [
"Astronomy"
] | 385 | [
"Women astronomers",
"Astronomers"
] |
73,244,044 | https://en.wikipedia.org/wiki/Katherine%20Bennell-Pegg | Katherine Bennell-Pegg (born 1984) is an Australian astronaut and Director of Space Technology at the Australian Space Agency. In 2024, she became the first qualified astronaut under the Australian flag as well as the first female Australian astronaut. She is a dual Australian and British citizen.
Early life and education
Bennell-Pegg was born in Sydney and grew up in the Northern Beaches area. She completed a Bachelor of Engineering (Honours), Aeronautical & Space Engineering and a Bachelor of Advanced Science majoring in Physics at the University of Sydney.
Upon completing her double degree, Bennell-Pegg received a full Erasmus Mundus scholarship to study in Germany, Sweden, the United Kingdom and the Netherlands as part of the Joint European Master in Space Science and Technology programme. Under this programme, she completed a Master of Science in Astronautics and Space Engineering at Cranfield University (1st prize, shared) and a Master of Science in Space Technology at Luleå University of Technology.
During her university education, Bennell-Pegg also completed the Space Studies Program at the International Space University, alongside two internships. These included working as a Thermal Engineer at the European Space Agency, and at NASA Ames designing a low-cost spacecraft development platform.
Bennell-Pegg also served in the Australian Army Reserve for which she was awarded the Sword of Honour and the Sir Thomas Blamey Memorial Award.
Career
Airbus
Bennell-Pegg's first job after her MScs was as a mission systems engineer at Airbus UK, working on a range of future missions and concept studies, including Martian in-situ resource utilisation (ISRU), future remote sensing missions and space debris removal. She also worked as a thermal architect on the LISA Pathfinder team during the thermal test campaign.
She was transferred to Airbus Defence and Space Germany in 2016, where she worked as a project manager and systems engineer on advanced robotic projects, as well as serving as a Service Operations Lead.
Australian Space Agency
Bennell-Pegg moved back to Australia to support the growing local space sector and started her position as the Assistant Manager of Space Capability and Robotics & Automation at the Australian Space Agency based in Adelaide, South Australia. In 2022 she was promoted to the role of Director of Space Technology.
In 2022, Bennell-Pegg delivered The Warren Centre for Advanced Engineering Innovation Lecture.
Australian astronaut candidate
Bennell-Pegg applied to join the European Astronaut Corps as a British dual citizen in early 2021. She was one of the 25 finalists for the 2022 ESA Astronaut Group, but was not selected as part of the 17-person crew. However, the Australian Space Agency sponsored her training with the European Space Agency (ESA), announcing in March 2023 that she would train alongside the mission crew at the European Astronaut Centre (EAC). This marked the first time ESA provided basic training to an astronaut candidate from an international partner, making the EAC the third centre in the world to do so. Bennell-Pegg became the first person to train as an astronaut under the Australian flag, marking a significant achievement for the country's representation in human spaceflight. Previous Australian-born astronauts, Paul Scully-Power and Andy Thomas, flew to space as US citizens representing NASA. UK-born Australian citizen Meganne Christian was also selected as a member of the 2022 ESA astronaut reserve, representing the UK Space Agency. Bennell-Pegg completed the ESA Basic Training curriculum and graduated with her ESA classmates from "The Hoppers" group on the 22nd of April 2024 as a fully qualified astronaut.
Awards
In March 2023, she was named as the overall winner in addition to the winner of the Leader of the Year category at the Woman of the Year Awards in Adelaide.
References
Aerospace engineers
Astronaut candidates
Australian astronauts
Luleå University of Technology alumni
Alumni of Cranfield University
Space programme of Australia
21st-century Australian scientists
21st-century Australian women scientists
Australian Army officers
Living people
1984 births
University of Sydney alumni | Katherine Bennell-Pegg | [
"Engineering"
] | 804 | [
"Aerospace engineers",
"Aerospace engineering"
] |
73,245,839 | https://en.wikipedia.org/wiki/Bogomolov%E2%80%93Sommese%20vanishing%20theorem | In algebraic geometry, the Bogomolov–Sommese vanishing theorem is a result related to the Kodaira–Iitaka dimension. It is named after Fedor Bogomolov and Andrew Sommese. Its statement has differing versions:
This result is equivalent to the statement that:
 H^0(X, Ω_X^p(log D) ⊗ A^{−1}) = 0
for every complex projective snc pair (X, D) and every invertible sheaf A on X
with κ(X, A) > p.
Therefore, this theorem is called the vanishing theorem.
See also
Bogomolov–Miyaoka–Yau inequality
Vanishing theorem (disambiguation)
Notes
References
Further reading
Theorems in algebraic geometry
Theorems in complex geometry | Bogomolov–Sommese vanishing theorem | [
"Mathematics"
] | 120 | [
"Theorems in algebraic geometry",
"Theorems in complex geometry",
"Theorems in geometry"
] |
73,246,459 | https://en.wikipedia.org/wiki/Dividing%20a%20square%20into%20similar%20rectangles | Dividing a square into similar rectangles (or, equivalently, tiling a square with similar rectangles) is a problem in mathematics.
Three rectangles
There is only one way (up to rotation and reflection) to divide a square into two similar rectangles.
However, there are three distinct ways of partitioning a square into three similar rectangles:
The trivial solution given by three congruent rectangles with aspect ratio 3:1.
The solution in which two of the three rectangles are congruent and the third one has twice the side length of the other two, where the rectangles have aspect ratio 3:2.
The solution in which the three rectangles are all of different sizes and where they have aspect ratio ρ2, where ρ is the plastic ratio.
The fact that a rectangle of aspect ratio ρ2 can be used for dissections of a square into similar rectangles is equivalent to an algebraic property of the number ρ2 related to the Routh–Hurwitz theorem: all of its conjugates have positive real part.
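The algebraic claim can be checked numerically. In one particular drawing of the third solution (assumed here purely for illustration: a full-height rectangle of width w on the left, with the remaining strip split by a horizontal cut at height h), equating the three aspect ratios to a common value r gives r³ − 2r² + r − 1 = 0, whose real root is ρ², the square of the plastic ratio:

```python
def bisect(f, lo, hi, tol=1e-12):
    """Root of f in [lo, hi] by bisection (f changes sign on the interval)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r = bisect(lambda x: x**3 - 2*x**2 + x - 1, 1.5, 2.0)  # common aspect ratio
rho = bisect(lambda x: x**3 - x - 1, 1.0, 2.0)         # plastic ratio

w = 1 / r              # left rectangle: w wide, 1 tall
h = (1 - w) / r        # top-right rectangle: (1 - w) wide, h tall
# bottom-right rectangle: (1 - w) wide, (1 - h) tall, turned on its side

ratios = [1 / w, (1 - w) / h, (1 - h) / (1 - w)]   # all equal to r
areas = [w, (1 - w) * h, (1 - w) * (1 - h)]        # tile the unit square
```

The three ratios agree, the areas sum to 1, and r coincides with ρ² to within the bisection tolerance.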
Generalization to n rectangles
In 2022, the mathematician John Baez brought the problem of generalizing this problem to n rectangles to the attention of the Mathstodon online mathematics community.
The problem has two parts: what aspect ratios are possible, and how many different solutions there are for a given n. Freiling and Rinne had previously published a result in 1994 that states that the aspect ratio of rectangles in these dissections must be an algebraic number and that each of its conjugates must have a positive real part. However, their proof was not a constructive proof.
Numerous participants have attacked the problem of finding individual dissections using exhaustive computer search of possible solutions. One approach is to exhaustively enumerate possible coarse-grained placements of rectangles, then convert these to candidate topologies of connected rectangles. Given the topology of a potential solution, the determination of the rectangle's aspect ratio can then trivially be expressed as a set of simultaneous equations, thus either determining the solution exactly, or eliminating it from possibility.
The numbers of distinct valid dissections for different values of n, for n = 1, 2, 3, ..., are:
See also
Squaring the square
References
External links
Python code for dissection of a square into n similar rectangles via "guillotine cuts" by Rahul Narain
Rectangular subdivisions
Mathematical problems
Recreational mathematics | Dividing a square into similar rectangles | [
"Physics",
"Mathematics"
] | 523 | [
"Tessellation",
"Recreational mathematics",
"Rectangular subdivisions",
"Mathematical problems",
"Symmetry"
] |
73,246,748 | https://en.wikipedia.org/wiki/Caesium%20stearate | Caesium stearate is a metal-organic compound, a salt of caesium and stearic acid with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Preparation
Caesium stearate can be prepared by the reaction of caesium carbonate with stearic acid.
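The balanced equation for this acid–carbonate reaction (standard stoichiometry for a carboxylic acid reacting with a metal carbonate, not taken from a cited source) is:

 Cs2CO3 + 2 C17H35COOH → 2 C17H35COOCs + H2O + CO2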
References
Stearates
Caesium compounds | Caesium stearate | [
"Chemistry"
] | 84 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,247,463 | https://en.wikipedia.org/wiki/Intake%20momentum%20drag | Intake momentum drag is an aerodynamic phenomenon which affects turboprop and jet-powered aircraft.
Causes
Intake momentum drag arises because the speed of the air entering the engine increases with flight speed, while the exit speed of the air leaving the engine remains roughly constant. The amount by which the engine increases the velocity of the air passing through it is therefore reduced, which causes a slight reduction in the net thrust of a jet engine.
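The effect can be sketched with a one-line momentum balance, F = ṁ(Ve − V0), in which the ṁV0 term is the intake momentum drag; the numerical values below are assumed purely for illustration:

```python
def net_thrust(mdot, v_exit, v_flight):
    """Net thrust (N): exhaust momentum flux minus intake momentum drag."""
    gross = mdot * v_exit       # momentum flux of the exhaust
    ram_drag = mdot * v_flight  # intake momentum drag
    return gross - ram_drag

mdot, v_exit = 100.0, 600.0                      # kg/s and m/s (assumed)
static_thrust = net_thrust(mdot, v_exit, 0.0)    # 60 kN at rest
cruise_thrust = net_thrust(mdot, v_exit, 250.0)  # 35 kN at 250 m/s
```

As flight speed rises while the exit speed stays constant, net thrust falls, which is the reduction described above.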
Intake momentum drag yaw
Intake momentum drag yaw is a further consequence of intake momentum drag which affects V/STOL (vertical and/or short take-off and landing) aircraft such as the Hawker Siddeley Harrier.
Intake momentum drag yaw is a further effect in which the mass of air ingested by the engine intake, while the aircraft is hovering in a crosswind, can result in a state of uncontrolled roll (roll being a secondary aerodynamic effect of yaw).
The phenomenon was identified during the test flying programme for the Harrier and which required precise investigation. This resulted in test pilot John Farley deliberately flying right into the edge of this condition repeatedly, so that a system to counteract the effect could be developed.
References
Aerospace engineering
Aerodynamics
Classical mechanics
Force | Intake momentum drag | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 262 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Aerodynamics",
"Mechanics",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
73,248,017 | https://en.wikipedia.org/wiki/Sigma%20non-innocence | Sigma non-innocence is a special form of non-innocence, an oxidation characteristic in metal complexes. It is mainly discussed in coordination complexes of late transition metals in their high formal oxidation states. Complexes exhibiting sigma non-innocence differ from classical Werner coordination complexes in that their bonding and antibonding orbitals have an inverted distribution of metal and ligand character (cf. inverted ligand field). The oxidation of the ligand and a lowered charge at the metal center renders the assignment of the oxidation state non-trivial.
Sigma non-innocence in copper complexes
Sigma non-innocence has been extensively discussed for the prototypical example of a copper complex [Cu(CF3)4]− in conjunction with the concept of an inverted ligand field. In 1995, Snyder suggested, based on his quantum chemical calculations, that this formal Cu(III) (d8) complex would be more appropriately represented as a Cu(I) (d10) complex. Snyder pointed out that the frontier molecular orbitals of [Cu(CF3)4]− are dominated by ligand parentage due to the higher-lying ligand orbitals compared to the metal orbitals, and this inversion of the ligand field causes the dx2‑y2 orbital to be occupied and the lowest unoccupied molecular orbital (LUMO) to be ligand centered.
Later, Lancaster et al. experimentally validated this inverted ligand field electronic structure of [Cu(CF3)4]− using core spectroscopy techniques. Their findings revealed that the 3d orbitals are nearly fully occupied, supporting the formulation of this ion as a Cu(I) species. The assignment of what would be typically called a Cu(III) species as Cu(I) indicates the sigma non-innocence of the perfluoromethyl ligands in the complex.
The researchers also examined the electronic structure of other formally Cu(III) complexes using Cu L2,3-edge X-ray absorption spectroscopy together with computational techniques. They reported that all the Cu(III) species they studied except CuF63– have significantly diminished metal d-character in their LUMOs compared to the formal d8 assignment. This implies that ligand field inversion and sigma non-innocence are not unique to [Cu(CF3)4]− but is general in many systems.
Sigma non-innocence in nickel complexes
Klein et al. computationally analyzed the electronic structure of a high valent Nickel complex "1". This complex was previously reported to readily undergo aryl-CF3 bond-forming reductive elimination.
Klein et al. reported that this formally Ni(IV) complex is best described as Ni approaching the +II oxidation state. They used intrinsic bond orbital method to analyze the bonding of the complex and identified that the bond between CAr and Ni is polarized to Ni with the partial charge on Ni (0.988) larger than the one on CAr (0.973). They attributed the +II oxidation state of Ni to the oxidation of the aryl ligand due to sigma non-innocence.
Based on calculations, they also asserted that the formal reductive elimination from this complex is essentially redox neutral, with the Ni center retaining its Ni(II) state throughout the C-C bond-forming event. They interpreted the bond-formation mechanism as the nucleophilic CF3 group attacking the electrophilic aryl group.
References
Wikipedia Student Program
Coordination chemistry | Sigma non-innocence | [
"Chemistry"
] | 694 | [
"Coordination chemistry"
] |
73,248,112 | https://en.wikipedia.org/wiki/Large%20language%20model | A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.
History
Before 2017, there were a few language models that were large as compared to capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001 trained on 0.3 billion words achieved state-of-the-art perplexity at the time. In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"), upon which they trained statistical language models. In 2009, in most language processing tasks, statistical language models dominated over symbolic language models, as they can usefully ingest large datasets.
After neural networks became dominant in image processing around 2012, they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. As this was before transformers, it was done by seq2seq deep LSTM networks.
At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology, and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014. The following year in 2018, BERT was introduced and quickly became "ubiquitous". Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model.
Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting.
Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use. GPT-3 in 2020 went a step further and is available only via API with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz. The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities. OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work.
Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters.
Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7B have the more permissive Apache License. The instruction fine-tuned variant of the Llama 3 70-billion-parameter model is the most powerful open LLM according to the LMSYS Chatbot Arena Leaderboard, being more powerful than GPT-3.5 but not as powerful as GPT-4.
Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs).
As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).
Dataset preprocessing
Tokenization
As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT.
For example, the BPE tokenizer used by GPT-3 (Legacy) would split the text tokenizer: texts -> series of numerical "tokens" into a sequence of integer token IDs.
Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset.
BPE
As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. initial set of uni-grams). Successively the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-gram, until a vocabulary of prescribed size is obtained (in case of GPT-3, the size is 50257). After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial-set of uni-grams.
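The merge loop described above can be sketched in a few lines (a toy character-level trainer for illustration, not GPT-3's actual tokenizer, which operates on bytes over a much larger corpus):

```python
from collections import Counter

def train_bpe(text, num_merges):
    """Learn `num_merges` byte-pair merges from `text`; return final tokens."""
    tokens = list(text)                      # initial set of uni-grams
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent adjacent pair
        merges.append((a, b))
        merged, i = [], 0
        while i < len(tokens):               # replace all occurrences of (a, b)
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = train_bpe("low low low lower lowest", 2)
# learns ('l', 'o') and then ('lo', 'w'), producing the token 'low'
```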
Problems
A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. An average word in another language encoded by such an English-optimized tokenizer is, however, split into a suboptimal number of tokens. GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English.
Greedy tokenization also causes subtle problems with text completion.
Dataset cleaning
In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data. Cleaned datasets can increase training efficiency and lead to improved downstream performance. A trained LLM can be used to clean datasets for training a further LLM.
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).
Synthetic data
Training of the largest language models might need more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM.
Training and architecture
Reinforcement learning from human feedback (RLHF)
Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences.
Instruction tuning
Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in Hamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus.
Mixture of experts
The largest LLM may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters.
Prompt engineering, attention mechanism, and context window
Most results previously achievable only by (costly) fine-tuning, can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window).
In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M parameter sized) GPT-2 model has had twelve attention heads and a context window of only 1k tokens. In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For the training with gradient descent a batch size of 512 was utilized.
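A single attention head's "soft" weights can be illustrated in plain Python (a minimal sketch of scaled dot-product attention; real implementations are batched, multi-headed tensor code):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Q, K, V: one d-dimensional vector per token; one output per query."""
    d = len(K[0])
    outputs = []
    for q in Q:
        # relevance of each key to this query, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)            # the "soft" weights
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs

Q = K = V = [[1.0, 0.0], [0.0, 1.0]]         # two toy token embeddings
out = attention(Q, K, V)                     # each row: weighted mix of V
```

Each query attends most strongly to the key it matches, so the first output row leans toward the first value vector.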
The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window sized up to 1 million tokens (a context window of 10 million was also "successfully tested"). Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens. Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens.
The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If a conversation, for example with ChatGPT, is longer than the context window, only the parts inside the context window are taken into account when generating the next answer, unless the model applies some algorithm to summarize the more distant parts of the conversation.
The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations.
A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset. It can be either
autoregressive (i.e. predicting how the segment continues, the way GPTs do it): for example given a segment "I like to eat", the model predicts "ice cream", or "sushi".
"masked" (i.e. filling in the parts missing from the segment, the way "BERT" does it): for example, given a segment "I like to [__] [__] cream", the model predicts that "eat" and "ice" are missing.
Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus. During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation.
Infrastructure
Substantial infrastructure is necessary for training the largest models.
Training cost
The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in the list of large language models.
Advances in software and hardware have reduced the cost substantially since 2020, such that in 2023 training of a 12-billion-parameter LLM computational cost is 72,300 A100-GPU-hours, while in 2020 the cost of training a 1.5-billion-parameter LLM (which was two orders of magnitude smaller than the state of the art in 2020) was between $80,000 and $1,600,000. Since 2020, large sums were invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million.
For Transformer-based LLM, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.
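Those rules of thumb make back-of-envelope estimates straightforward (the model and token counts below are hypothetical):

```python
def train_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens   # ~6 FLOPs per parameter per token

def infer_flops(n_params, n_tokens):
    return 2 * n_params * n_tokens   # upper end of the 1-2 FLOPs estimate

c_train = train_flops(10**9, 2 * 10**10)  # 1e9 params on 2e10 tokens
c_infer = infer_flops(10**9, 1000)        # generating 1000 tokens
```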
Tool use
There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is "What is the time now? It is ", where a separate program interpreter would need to execute code to get the system time on the computer, so that the LLM can include it in its reply. This basic strategy can be sophisticated with multiple attempts of generated programs, and other sampling strategies.
Generally, in order to get an LLM to use tools, one must fine-tune it for tool-use. If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call API correctly.
A simpler form of tool use is retrieval-augmented generation: the augmentation of an LLM with document retrieval. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.
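The retrieval step can be sketched end to end; here a bag-of-words vector and cosine similarity stand in for a learned embedding model and a vector database, and all strings are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "The capital of France is Paris.",
    "Photosynthesis converts light into chemical energy.",
]
query = "What is the capital of France?"
context = retrieve(query, docs)[0]
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"  # what the LLM sees
```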
Agency
An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action.
The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment. The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment.
In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives.
The Reflexion method constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes.
Monte Carlo tree search can use an LLM as rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as world model.
For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent. Alternatively, it can propose increasingly difficult tasks for curriculum learning. Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.
LLM-powered agents can keep a long-term memory of its previous contexts, and the memory can be retrieved in the same way as Retrieval Augmented Generation. Multiple such agents can interact socially.
Compression
Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.
Post-training quantization aims to decrease the space requirement by lowering precision of the parameters of a trained model, while preserving most of its performance. The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be done by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights"). See for a visual guide.
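The simplest truncation scheme mentioned above amounts to symmetric round-to-nearest quantization with a single scale per tensor (an illustrative sketch, not any particular library's implementation):

```python
def quantize(weights, bits=8):
    """Map floats to small signed integers plus one shared scale factor."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

w = [0.12, -0.5, 0.031, 0.25]                   # a tiny "weight tensor"
q, scale = quantize(w)
w_hat = dequantize(q, scale)                    # originals, approximately
```

Each weight now needs 8 bits instead of 16 or 32, at the cost of a rounding error of at most half a quantization step.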
While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned.
Multimodality
Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc. There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label, visual question answering for image-text to text, and speech recognition for speech to text.
A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E. Make a small multilayered perceptron f, so that for any image y, the post-processed vector f(E(y)) has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.
Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch. Google PaLM model was fine-tuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control. LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs, and video inputs.
GPT-4 can use both text and image as inputs (although the vision component was not released to the public until GPT-4V); Google DeepMind's Gemini is also multimodal. Mistral introduced its own multimodel Pixtral 12B model in September 2024.
Properties
Scaling laws
The performance of an LLM after pretraining largely depends on the:
cost of pretraining (the total amount of compute used),
size of the artificial neural network itself, such as number of parameters N (i.e. the number of neurons in its layers, the number of weights between them, and biases),
size of its pretraining dataset (i.e. number of tokens in corpus, D).
"Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for an LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that:

C = C₀ND
L = A/N^α + B/D^β + L₀

where the variables are
C is the cost of training the model, in FLOPs.
N is the number of parameters in the model.
D is the number of tokens in the training set.
L is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset.
and the statistical hyper-parameters are
C₀ = 6, meaning that it costs 6 FLOPs per parameter to train on one token. Note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token.
α = 0.34, β = 0.28, A = 406.4, B = 410.7, L₀ = 1.69.
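The Chinchilla cost and loss formulas can be evaluated directly; the fit constants in the sketch below are the published Chinchilla estimates and should be treated as approximate:

```python
# Chinchilla scaling-law sketch: L(N, D) = A/N^alpha + B/D^beta + L0,
# with the published (approximate) fit constants.
A, B, L0 = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28

def training_flops(n_params, n_tokens):
    """C = 6 * N * D FLOPs for one epoch of autoregressive training."""
    return 6.0 * n_params * n_tokens

def chinchilla_loss(n_params, n_tokens):
    """Predicted test loss in nats/token."""
    return A / n_params**ALPHA + B / n_tokens**BETA + L0

# Chinchilla-70B itself: N = 70e9 parameters trained on D = 1.4e12 tokens.
print(f"cost: {training_flops(70e9, 1.4e12):.2e} FLOPs")
print(f"loss: {chinchilla_loss(70e9, 1.4e12):.2f} nats/token")
```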
Emergent abilities
Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)" in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities". They arise from the complex interaction of the model's components and are not explicitly programmed or designed.
Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize between accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory.
The most intriguing among emergent abilities is in-context learning from example demonstrations. In-context learning is involved in tasks such as:
reported arithmetic, decoding the International Phonetic Alphabet, unscrambling a word's letters, disambiguating a word in context, converting spatial words, cardinal directions (for example, replying "northeast" when given [0, 0, 1; 0, 0, 0; 0, 0, 0]), and color terms represented in text.
chain-of-thought prompting: Model outputs are improved by chain-of-thought prompting only when model size exceeds 62B. Smaller models perform better when prompted to answer immediately, without chain of thought.
identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs.
Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.
In the toy model, let x be the parameter count and y be the model's performance: per-token performance improves smoothly with x, while metrics that require every token of a long response to be correct change sharply, producing the appearance of emergence.
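A toy numerical version of this argument (the accuracy curve below is an arbitrary smooth function chosen for illustration, not the paper's fitted model): per-token accuracy improves gradually with scale, yet an exact-match metric over a 50-token answer appears to switch on abruptly.

```python
def per_token_accuracy(n_params):
    """A hypothetical smooth curve: accuracy rises gradually toward 1.0."""
    return 1.0 - 0.5 * (1e7 / n_params) ** 0.3

def exact_match(n_params, k=50):
    """Scored correct only if all k answer tokens are correct."""
    return per_token_accuracy(n_params) ** k

for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"N={n:.0e}  per-token={per_token_accuracy(n):.3f}  "
          f"exact-match={exact_match(n):.4f}")
```

The per-token metric changes by only about 0.1 over four orders of magnitude, while exact match grows by a factor of several hundred: the "emergence" is in the metric, not the underlying scaling.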
Interpretation
Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. There are several methods for understanding how LLMs work.
Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. One example is Othello-GPT, where a small Transformer is trained to predict legal Othello moves. It is found that there is a linear representation of the Othello board, and modifying the representation changes the predicted legal Othello moves in the correct way. In another example, a small Transformer is trained on Karel programs. Similar to the Othello-GPT example, there is a linear representation of Karel program semantics, and modifying the representation changes output in the correct way. The model also generates correct programs that are on average shorter than those in the training set.
In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform.
Understanding and intelligence
NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense". Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?" Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation. Some researchers characterize LLMs as "alien intelligence". For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."
In contrast, some proponents of the "LLMs lack understanding" school believe that existing LLMs are "simply remixing and recombining existing writing", a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability. For example, GPT-4 has natural deficits in planning and in real-time learning. Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination". Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input. Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate".
The matter of LLMs exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language. These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented the Neural Theory of Language (NTL) as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language, and in turn what computational properties of such neural systems can be applied to model thought and language in a computer system. After a framework for modeling language in computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.
Evaluation
Perplexity
The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token:

log(Perplexity) = -(1/N) Σ_{i=1}^{N} log Pr(token_i | context for token_i)

Here, N is the number of tokens in the text corpus, and "context for token i" depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i" is the segment of text appearing before token i. If the LLM is masked, then "context for token i" is the segment of text surrounding token i.
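A direct computation of this formula from a model's per-token log-probabilities might look like the following sketch:

```python
import math

def perplexity(token_logprobs):
    """exp of the average negative log-likelihood per token, given the
    model's natural-log probability assigned to each token in the corpus."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# Sanity check: a model assigning uniform probability 1/V to every token
# has perplexity exactly V (here V = 50,000, a typical vocabulary size).
uniform = [math.log(1.0 / 50_000)] * 10
print(perplexity(uniform))  # ≈ 50000, up to float rounding
```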
Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set. This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.
BPW, BPC, and BPT
In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon. This relationship is mathematically expressed as Entropy = log₂(Perplexity).
Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.
Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different Large Language Models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.
In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.
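These conversions are one-liners; the tokens-per-word figure below is a hypothetical value, since in practice it depends on the tokenizer and corpus:

```python
import math

def bits_per_token(ppl):
    """Entropy in bits/token: log2 of the per-token perplexity."""
    return math.log2(ppl)

def bits_per_word(bpt, tokens_per_word):
    """Convert BPT to BPW via the corpus's average tokens per word."""
    return bpt * tokens_per_word

bpt = bits_per_token(8.0)       # 3.0 bits/token
print(bits_per_word(bpt, 1.3))  # ≈ 3.9 bits/word, assuming 1.3 tokens/word
```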
Task-specific datasets and benchmarks
A large number of testing datasets and benchmarks have also been developed to evaluate the capabilities of language models on more specific downstream tasks. Tests may be designed to evaluate a variety of capabilities, including general knowledge, commonsense reasoning, and mathematical problem-solving.
One broad category of evaluation dataset is question answering datasets, consisting of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No"). A question answering task is considered "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be adjoined with some text which includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."). Otherwise, the task is considered "closed book", and the model must draw on knowledge retained during training. Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.
Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ".
Some composite benchmarks have also been developed which combine a diversity of different evaluation datasets and tasks. Examples include GLUE, SuperGLUE, MMLU, BIG-bench, and HELM. OpenAI has released tools for running composite benchmarks, but noted that the eval results are sensitive to the prompting method. Some public datasets contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality, which can be cleaned to give more reliable benchmark scores.
It was previously standard to report results on a heldout portion of an evaluation dataset after doing supervised fine-tuning on the remainder. It is now more common to evaluate a pre-trained model directly through prompting techniques, though researchers vary in the details of how they formulate prompts for particular tasks, particularly with respect to how many examples of solved tasks are adjoined to the prompt (i.e. the value of n in n-shot prompting).
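The prompt-construction step can be sketched as below; the Q:/A: template and separators are an assumption, since real evaluations differ in exactly these details:

```python
def make_n_shot_prompt(solved_examples, question, n):
    """Build an n-shot prompt: n solved (question, answer) pairs,
    then the new question with the answer left blank."""
    parts = [f"Q: {q}\nA: {a}" for q, a in solved_examples[:n]]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

solved = [
    ("Have the San Jose Sharks won the Stanley Cup?", "No"),
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]
print(make_n_shot_prompt(solved, "Who wrote Hamlet?", n=2))
```

With n = 0 this reduces to zero-shot prompting; benchmark reports typically state which n they used.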
Adversarially constructed evaluations
Because of the rapid pace of improvement of large language models, evaluation benchmarks have suffered from short lifespans, with state of the art models quickly "saturating" existing benchmarks, exceeding the performance of human annotators, leading to efforts to replace or augment the benchmark with more challenging tasks. In addition, there are cases of "shortcut learning" wherein AIs sometimes "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording in order to guess the correct responses, without necessarily understanding the actual question being asked.
Some datasets have been constructed adversarially, focusing on particular problems on which extant language models seem to have unusually poor performance compared to humans. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions which language models are susceptible to answering incorrectly by mimicking falsehoods to which they were repeatedly exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true.
Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model and filtering with a set of classifiers. The resulting problems are trivial for humans but, at the time the datasets were created, state-of-the-art language models had poor accuracy on them. For example:
We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...
a) demonstrates how to increase efficient exercise work by running up and down balls.
b) moves all his arms and legs and builds up a lot of muscle.
c) then plays the ball and we see a graphics and hedge trimming demonstration.
d) performs sit ups while on the ball and talking.
BERT selects b) as the most likely completion, though the correct answer is d).
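One common way to score such multiple-choice items with a language model is to compute the likelihood the model assigns to each candidate completion and pick the highest, often normalizing by length so longer completions are not penalized. A sketch with made-up per-token log-probabilities (a real harness would obtain these from the model):

```python
def pick_completion(choice_logprobs, length_normalize=True):
    """Return the index of the candidate with the highest (optionally
    length-normalized) total log-likelihood under the model."""
    scores = []
    for logprobs in choice_logprobs:
        total = sum(logprobs)
        scores.append(total / len(logprobs) if length_normalize else total)
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical per-token log-probabilities for completions a) through d).
choices = [
    [-2.1, -3.0, -2.5],        # a)
    [-1.0, -1.2],              # b)
    [-2.8, -2.9, -3.1, -3.0],  # c)
    [-0.9, -1.1, -1.0],        # d)
]
print("abcd"[pick_completion(choices)])                          # d
print("abcd"[pick_completion(choices, length_normalize=False)])  # b
```

Note that without length normalization the shorter completion b) wins on total likelihood; this is one of the scoring details on which benchmark results can hinge.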
Wider impact
In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time." Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally.
Memorization and copyright
Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates or up to about 7%.
A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundred repetitions it would start outputting excerpts from its training data.
Security
Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse. For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.
The potential presence of "sleeper agents" within LLM models is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions.
LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study proposed a method for circumventing LLM safety systems. Similarly, Yongge Wang illustrated in 2024 how a potential criminal could bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation.
Algorithmic bias
While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups. Since English data is overrepresented in current large language models' training data, it may also downplay non-English views.
Stereotyping
AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.
Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms. For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.
Political bias
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
See also
Foundation models
List of large language models
List of chatbots
References
Further reading
Jurafsky, Dan, Martin, James. H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd Edition draft, 2023.
Deep learning
Natural language processing | Large language model | [
"Technology"
] | 8,157 | [
"Natural language processing",
"Natural language and computing"
] |
73,248,504 | https://en.wikipedia.org/wiki/%283-%282-Furoyl%29-quinoline-2%20carboxaldehyde%29 | 3-(2-Furoyl)-quinoline-2-carboxaldehyde (FQ) is a fluorogenic amine labeling dye that is not fluorescent itself, but reacts with primary amines to form fluorescent products. It was first reported in 1990. Cyanide, typically provided via KCN or NaCN salts, is a required co-substrate in the fluorogenic reaction. It has been used for the detection of amines and peptides, largely in CE-SDS, where it is recognized to reach a silver stain-like high sensitivity via laser-induced fluorescence. Once bound to protein the excitation wavelength is 480 nm (blue) and the emission wavelength is ~600 nm (orange).
Reaction
See also
CBQCA
Fluorescamine
Py-1
References
Dyes
Quinolines
Aldehydes
Furanones | (3-(2-Furoyl)-quinoline-2 carboxaldehyde) | [
"Chemistry"
] | 183 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |