id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
71,785,549 | https://en.wikipedia.org/wiki/Muellerella%20erratica | Muellerella erratica is a species of lichenicolous fungus in the family Verrucariaceae. It has been reported from numerous countries, including India. Known hosts, on whose thalli it grows, include Lecidea lapicida and Lecanora species.
References
Verrucariales
Fungi described in 1855
Fungi of India
Lichenicolous fungi
Taxa named by Abramo Bartolommeo Massalongo
Fungus species | Muellerella erratica | [
"Biology"
] | 87 | [
"Fungi",
"Fungus species"
] |
71,786,086 | https://en.wikipedia.org/wiki/TrES-5b | TrES-5b is a Hot Jupiter-class extrasolar planet discovered by the transit method. Located about 1,100 light-years from Earth in the constellation Cygnus, it orbits the G-type star GSC 03949-00967 in the planetary system TrES-5.
See also
Trans-Atlantic Exoplanet Survey
TrES-2b
Libra (constellation)
Kepler-407b
References
Cygnus (constellation)
Exoplanets discovered in 2011
Transiting exoplanets
Hot Jupiters | TrES-5b | [
"Astronomy"
] | 101 | [
"Cygnus (constellation)",
"Constellations"
] |
56,140,139 | https://en.wikipedia.org/wiki/List%20object | In category theory, an abstract branch of mathematics, and in its applications to logic and theoretical computer science, a list object is an abstract definition of a list, that is, a finite ordered sequence.
Formal definition
Let C be a category with finite products and a terminal object 1.
A list object over an object A of C is:
an object L_A,
a morphism o : 1 → L_A, and
a morphism s : A × L_A → L_A
such that for any object B of C with maps b : 1 → B and t : A × B → B, there exists a unique f : L_A → B such that the following diagram commutes:
where ⟨id, f⟩ denotes the arrow induced by the universal property of the product when applied to id (the identity on A) and f. The notation A* (à la Kleene star) is sometimes used to denote lists over A.
Equivalent definitions
In a category with a terminal object 1, binary coproducts (denoted by +), and binary products (denoted by ×), a list object over A can be defined as the initial algebra of the endofunctor that acts on objects by X ↦ 1 + (A × X) and on arrows by f ↦ id_1 + (id_A × f).
Examples
In Set, the category of sets, list objects over a set A are simply finite lists with elements drawn from A. In this case, o picks out the empty list and s corresponds to adding an element to the head of the list.
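The Set-level description can be sketched in code. In the sketch below, ordinary Python lists stand in for lists over A; the names nil, cons, and fold are illustrative choices (not from the source), with fold constructing the unique mediating morphism f guaranteed by the universal property.

```python
def nil():
    # o : 1 -> L_A, picks out the empty list
    return []

def cons(a, xs):
    # s : A x L_A -> L_A, adds an element at the head of the list
    return [a] + xs

def fold(b, t):
    # Given b : 1 -> B and t : A x B -> B, return the unique f : L_A -> B
    # satisfying f(nil()) == b() and f(cons(a, xs)) == t(a, f(xs)).
    def f(xs):
        if not xs:
            return b()
        return t(xs[0], f(xs[1:]))
    return f
```

For instance, fold(lambda: 0, lambda a, n: n + 1) is the unique map from lists into the natural numbers that computes length, matching the length morphism described under Properties.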
In the calculus of inductive constructions or similar type theories with inductive types (or heuristically, even strongly typed functional languages such as Haskell), lists are types defined by two constructors, nil and cons, which correspond to o and s, respectively. The recursion principle for lists guarantees they have the expected universal property.
Properties
Like all constructions defined by a universal property, lists over an object A are unique up to canonical isomorphism.
The object L_1 (lists over the terminal object) has the universal property of a natural number object. In any category with lists, one can define the length of a list over A to be the unique morphism l : L_A → L_1 which makes the following diagram commute:
References
See also
Natural number object
F-algebra
Initial algebra
Objects (category theory)
Topos theory | List object | [
"Mathematics"
] | 449 | [
"Objects (category theory)",
"Mathematical structures",
"Category theory",
"Topos theory"
] |
56,141,758 | https://en.wikipedia.org/wiki/Sabine%20Brunswicker | Sabine Brunswicker is a Full Professor for Digital Innovation at Purdue University, West Lafayette, United States, and the Founder and Director of the Research Center for Open Digital Innovation (RCODI). She is a computational social scientist with a particular focus on open digital innovation who engages with an interdisciplinary group of researchers to predict individual and collective outcomes in open digital innovation. She has written numerous research papers and book chapters on Open Innovation, and is an internationally recognized authority in the field. She chaired a session titled "Open innovation as a driver of business and economic transformation" at a World Economic Forum workshop in 2014. She is known for pioneering Purdue IronHacks, an iterative hacking initiative (www.ironhacks.com) that encourages experiential learning at Purdue University.
Education and career
Brunswicker holds a Bachelor of Science in Engineering and Management Sciences from Technische Universität Darmstadt, Darmstadt, Germany, a Master of Commerce from the University of New South Wales, Australia, a Master of Science in Engineering and Management Sciences from the University of Technology, Darmstadt, Germany, and a PhD in Engineering Sciences with highest honors from the University of Stuttgart, Germany. In 2012, her doctoral dissertation won the Best Dissertation Award from the publisher John Wiley & Sons and the International Society for Professional Innovation Management (ISPIM).
She is currently a full professor and director of Research Center for Open Digital Innovation (RCODI) at Purdue University. She is also an adjunct professor of Digital Innovation in the School of Information Systems at Queensland University of Technology, Brisbane Australia. Until 2016, she was Visiting Professor for Digital Innovation at ESADE Business School. Prior to joining Purdue she was Head of Open Innovation at the Fraunhofer Institute for Industrial Engineering in Stuttgart, Germany.
She has won numerous awards throughout her professional career, including the runner-up Emerging Scholar Award at the World Open Innovation Conference (ESADE Business School, Spain), the John P. Lisack Early-Career Engagement Award from Purdue Polytechnic Institute, and recognition as a top researcher in 2012 from the Fraunhofer Society for her accomplishments in the area of open innovation.
Selected publications (book chapters)
Kremser, W., Pentland, B., & Brunswicker, S. (2019). The Continuous Transformation of Interdependence in Networks of Routines. In Book Series: Research in Sociology of Organizations. Emerald Insight, invited publication.
Brunswicker, S., Majchrzak, A., Almirall, E., & Tee, R. (2016). Co-creating value from open data: from incentivizing developers to inducing co-creation in open data ecosystems. In S. Nambisan (Ed.): Open Innovation and Innovation Networks (Vol. 1). World Scientific Publishing.
Brunswicker, S. (2016). Managing open innovation in small and medium-sized firms in the tourism sector. In W. Egger, I. Gula, & D. Walcher (Eds.), Open tourism: Open innovation, crowdsourcing, and collaborative consumption challenging the tourism industry. Berlin: Springer.
Bagherzadeh, M., & Brunswicker, S. (2016). Governance of Knowledge Flows in Open Exploration: The Role of Behavioral Control. In Das, T.K. (Ed.), Decision Making in Behavioral Strategy (DMBS), Information Age Publishing (IAP).
Brunswicker, S., & Johnson, J. (2015). From governmental open data toward governmental open innovation (GOI). In D. Archibugi & A. Filippetti (Eds.), The handbook of global science, technology, and innovation (1 ed., pp. 504–524). New Jersey: John Wiley & Sons, Ltd.
Brunswicker, S., & van de Vrande, V. (2014). Exploring open innovation in small and medium-sized enterprises. In H. Chesbrough, W. Vanhaverbeke, & J. West (Eds.), New frontiers in open innovation (1 ed., pp. 135–156). Oxford, United Kingdom: Oxford University Press.
Selected publications (refereed journals)
Brunswicker, S., & Chesbrough, H. (2018). The Adoption of Open Innovation in Large Firms: Practices, Measures, and Risks. Research Technology Management.
Brunswicker, S., Bilgram, V., & Fueller, J. (2017). Taming wicked civic challenges with an innovative crowd. Business Horizons, 60(2).
Bogers, M., Zobel, A.-K., Afuah, A., Almirall, E., Brunswicker, S., Dahlander, L., ... Magnussen, M. (2017). The Open Innovation Landscape: Established Perspectives and Emerging Themes Across Different Levels of Analysis. Industry & Innovation, 24(1), 8–40.
Brunswicker, S., Matei, S. A., Zentner, M., Zentner, L., & Klimeck, G. (2017). Creating impact in the digital space: digital practice dependency in communities of digital scientific innovations. Scientometrics, 110(1), 417–426.
Brunswicker, S., & Vanhaverbeke, W. (2015). Open innovation in small and medium-sized enterprises (SMEs): External knowledge sourcing strategies and internal organizational facilitators. Journal of Small Business Management, 53(4), 1241-1263.
Brunswicker, S., Bertino, E., & Matei, S. (2015). Big data for open digital innovation – A research roadmap. Big Data Research, 2(2), 53-58.
Chesbrough, H., & Brunswicker, S. (2014). A Fad or a Phenomenon? The Adoption of Open Innovation Practices in Large Firms. Research Technology Management, 57(2), 16–25.
Koch, G., Füller, J., & Brunswicker, S. (2011). Online crowdsourcing in the public sector: How to design open government platforms. Online Communities and Social Computing, 6778, 203-212. doi:10.1007/978-3-642-21796-8_22
Brunswicker, S., & Hutschek, U. (2010). Crossing horizons: Leveraging cross-industry innovation search in the front-end of the innovation process. International Journal of Innovation Management, 14(04), 683-702.
Research grants
08/2016 to 07/2017 Balancing the Grid through Energy Monitoring Systems: Information Visualization for Collective Awareness; Deans Graduate Assistant Award for Outstanding Research Proposals; $20,000; Principal Investigator
08/2016 to 10/2016 Biomedical Big Data Hacking for Civic Health Awareness; NIH grant; $2,000; Co-Principal Investigator (with Bethany McGowan as Principal Investigator)
07/2015 to 06/2017 Creating Impact from Governmental Open Data (OD): Innovation Process Transparency in OD Contest Design; NSF grant; Science of Science and Innovation Policy (SciSIP); 24-month grant; $238,641.29; Principal Investigator (with Ann Majchrzak, USC, as Co-Principal Investigator)
06/2015 – 05/2017 Red Hat® Doctoral Researcher on Open Innovation Communities; Donation received from Red Hat Inc., Raleigh, North Carolina $100,000; Principal Investigator
04/2015 to 12/2016 Managing Open Innovation in Large Firms: Case Study Analysis; Sponsored Research Project; Sponsor: Accenture High Performance Institute, Chicago; $63,531.32; Principal Investigator
01/2015 to 08/2015 Conceptualization of the Social and Innovation Opportunities of Data Analysis; NSF grant; CIF21 DIBBs; 7-month grant; $99,718; Co-Principal Investigator (with Mike Zentner, Principal Investigator, Purdue University)
07/2014 – 11/2015 Global Open Innovation Executive Survey 2015; Sponsored Research Grant; Sponsor: University of California, Berkeley, Garwood Center for Corporate Innovation; $25,000; Principal Investigator
03/2015 – Present Open Innovation Community Research; Donation received from Landcare Research, Gerald Street, Lincoln, New Zealand 7608; $10,000 Principal Investigator (with Jeremiah Johnson and Ann Majchrzak)
2014 Research on CyberInfrastructures and Behavioral analytics, Purdue Internal Funding through nanoHUB.org NCN supported RCODI discovery efforts of a doctoral student with $18,201. Principal Investigator
2014-2015 Exploratory research in the social sciences; $50,000; Executive Office of the Vice President for Research (EVPR); Co-Principal Investigator (with Sorin Matei, Communications, and Gerhard Klimeck, ECE); $22,496
2014 Open Strategies; Internal Funding from PCRD (Purdue Center for Regional Development); $10,066; Principal Investigator
References
Living people
21st-century German social scientists
University of New South Wales alumni
University of Stuttgart alumni
Purdue University faculty
Academic staff of Queensland University of Technology
Year of birth missing (living people)
Technische Universität Darmstadt alumni
Information systems researchers | Sabine Brunswicker | [
"Technology"
] | 1,922 | [
"Information systems",
"Information systems researchers"
] |
56,141,841 | https://en.wikipedia.org/wiki/Stains-all | Stains-all is a carbocyanine dye, which stains anionic proteins, nucleic acids, anionic polysaccharides and other anionic molecules.
Properties
Stains-all is metachromatic and changes its color depending on the molecules it contacts. The detection limit for phosphoproteins is below 1 ng after one hour of staining, and for anionic polysaccharides between 10 and 500 ng. Highly anionic proteins are stained blue, proteoglycans purple and anionic proteins pink. RNA is stained bluish-purple with a detection limit of 90 ng, and DNA is stained blue with a detection limit of 3 ng.
Stains-all is light-sensitive; therefore staining is performed in the absence of light and photographed immediately. Staining of proteins can be improved by a subsequent silver stain. The analogue Ethyl-Stains-all has similar properties to Stains-all, with differences in solubility and staining behaviour.
Applications
Stains-all stains nucleic acids, anionic proteins, anionic polysaccharides such as alginate and pectinate, hyaluronic acid and dermatan sulfate, heparin, heparan sulfate and chondroitin sulfate. It is used in SDS-PAGE, agarose gel electrophoresis and histologic staining, e.g. staining of growth lines in bones.
References
Thiazole dyes
Biochemistry methods
Histology | Stains-all | [
"Chemistry",
"Biology"
] | 313 | [
"Biochemistry methods",
"Histology",
"Biochemistry",
"Microscopy"
] |
56,142,051 | https://en.wikipedia.org/wiki/Pecking | Pecking is the action of a bird using its beak to search for food or otherwise investigate an object or area by tapping it. Pecking can also be used by a bird to attack or fight another bird.
Pecking is frequently observed in chickens and other poultry, and in pigeons. Pecking is typically accomplished by movement of the neck.
Certain birds, particularly woodpeckers, engage in a specialized kind of pecking, using their beak to drill holes in trees in order to find insects under the bark. Woodpeckers also engage in a kind of pecking called drumming, a less-forceful type of pecking that serves to establish territory and attract mates. Woodpeckers drum on various reverberatory structures on buildings such as gutters, downspouts, chimneys, vents and aluminium sheeting.
The phrase "pecking order", referring to a hierarchical system of social organization, was coined by Thorleif Schjelderup-Ebbe in 1921, in reference to the expression of dominance in chickens by behaviors including pecking. Schjelderup-Ebbe noted in his 1924 German-language article that "defense and aggression in the hen is accomplished with the beak". This emphasis on pecking led many subsequent studies on fowl behaviour to use it as a primary observation; however, it has been noted that roosters tend to leap and use their claws in conflicts.
References
See also
Feather pecking
Toe pecking
Vent pecking
Bird behavior | Pecking | [
"Biology"
] | 301 | [
"Behavior by type of animal",
"Behavior",
"Bird behavior"
] |
56,142,183 | https://en.wikipedia.org/wiki/Paraphrasing%20%28computational%20linguistics%29 | Paraphrase or paraphrasing in computational linguistics is the natural language processing task of detecting and generating paraphrases. Applications of paraphrasing are varied including information retrieval, question answering, text summarization, and plagiarism detection. Paraphrasing is also useful in the evaluation of machine translation, as well as semantic parsing and generation of new samples to expand existing corpora.
Paraphrase generation
Multiple sequence alignment
Barzilay and Lee proposed a method to generate paraphrases through the usage of monolingual parallel corpora, namely news articles covering the same event on the same day. Training consists of using multi-sequence alignment to generate sentence-level paraphrases from an unannotated corpus. This is done by
finding recurring patterns in each individual corpus, i.e. "X (injured/wounded) Y people, Z seriously", where X, Y, and Z are variables
finding pairings between such patterns that represent paraphrases, i.e. "X (injured/wounded) Y people, Z seriously" and "Y were (wounded/hurt) by X, among them Z were in serious condition"
This is achieved by first clustering similar sentences together using n-gram overlap. Recurring patterns are found within clusters by using multi-sequence alignment. The positions of argument words are then determined by finding areas of high variability within each cluster, i.e. between words shared by more than 50% of a cluster's sentences. Pairings between patterns are then found by comparing similar variable words between different corpora. Finally, new paraphrases can be generated by choosing a matching cluster for a source sentence, then substituting the source sentence's arguments into any number of patterns in the cluster.
Phrase-based Machine Translation
Paraphrase can also be generated through the use of phrase-based translation as proposed by Bannard and Callison-Burch. The chief concept consists of aligning phrases in a pivot language to produce potential paraphrases in the original language. For example, the phrase "under control" in an English sentence is aligned with the phrase "unter kontrolle" in its German counterpart. The phrase "unter kontrolle" is then found in another German sentence with the aligned English phrase being "in check," a paraphrase of "under control."
The probability distribution can be modeled as Pr(e2 | e1), the probability that phrase e2 is a paraphrase of e1, which is equivalent to Pr(e2 | f) Pr(f | e1) summed over all f, the potential phrase translations in the pivot language. Additionally, the sentence S is added as a prior to add context to the paraphrase. Thus the optimal paraphrase, ê2, can be modeled as:
ê2 = arg max over e2 ≠ e1 of Pr(e2 | e1) = arg max over e2 ≠ e1 of the sum over f of Pr(e2 | f) Pr(f | e1)
Pr(e2 | f) and Pr(f | e1) can be approximated by simply taking their frequencies. Adding S as a prior is modeled by calculating the probability of forming S when e1 is substituted with e2.
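The pivot-based scoring formula above can be sketched directly. The toy phrase tables below are invented for illustration (following the "under control" / "unter kontrolle" example), not real translation probabilities.

```python
from collections import defaultdict

def paraphrase_prob(e1, p_f_given_e, p_e_given_f):
    # Pr(e2 | e1) = sum over pivot phrases f of Pr(e2 | f) * Pr(f | e1)
    scores = defaultdict(float)
    for f, pf in p_f_given_e.get(e1, {}).items():
        for e2, pe2 in p_e_given_f.get(f, {}).items():
            if e2 != e1:          # a phrase is not its own paraphrase
                scores[e2] += pe2 * pf
    return dict(scores)

# Toy tables: English -> German pivot, and German -> English back
p_f_given_e = {"under control": {"unter kontrolle": 1.0}}
p_e_given_f = {"unter kontrolle": {"under control": 0.8, "in check": 0.2}}

best = max(paraphrase_prob("under control", p_f_given_e, p_e_given_f).items(),
           key=lambda kv: kv[1])
```

With these tables, "in check" is returned as the highest-scoring paraphrase of "under control", mirroring the aligned-phrase example in the text.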
Long short-term memory
There has been success in using long short-term memory (LSTM) models to generate paraphrases. In short, the model consists of an encoder and decoder component, both implemented using variations of a stacked residual LSTM. First, the encoding LSTM takes a one-hot encoding of all the words in a sentence as input and produces a final hidden vector, which can represent the input sentence. The decoding LSTM takes the hidden vector as input and generates a new sentence, terminating in an end-of-sentence token. The encoder and decoder are trained to take a phrase and reproduce the one-hot distribution of a corresponding paraphrase by minimizing perplexity using simple stochastic gradient descent. New paraphrases are generated by inputting a new phrase to the encoder and passing the output to the decoder.
Transformers
With the introduction of Transformer models, paraphrase generation approaches improved their ability to generate text by scaling neural network parameters and heavily parallelizing training through feed-forward layers. These models are so fluent in generating text that human experts cannot identify if an example was human-authored or machine-generated. Transformer-based paraphrase generation relies on autoencoding, autoregressive, or sequence-to-sequence methods. Autoencoder models predict word replacement candidates with a one-hot distribution over the vocabulary, while autoregressive and seq2seq models generate new text based on the source predicting one word at a time. More advanced efforts also exist to make paraphrasing controllable according to predefined quality dimensions, such as semantic preservation or lexical diversity. Many Transformer-based paraphrase generation methods rely on unsupervised learning to leverage large amounts of training data and scale their methods.
Paraphrase recognition
Recursive Autoencoders
Paraphrase recognition has been attempted by Socher et al. through the use of recursive autoencoders. The main concept is to produce a vector representation of a sentence and its components by recursively applying an autoencoder. Since paraphrases should have similar vector representations, the resulting vectors are processed and then fed as input into a neural network for classification.
Given a sentence W with m words, the autoencoder is designed to take two n-dimensional word embeddings as input and produce an n-dimensional vector as output. The same autoencoder is applied to every pair of words in W to produce ⌊m/2⌋ vectors. The autoencoder is then applied recursively with the new vectors as inputs until a single vector is produced. Given an odd number of inputs, the first vector is forwarded as-is to the next level of recursion. The autoencoder is trained to reproduce every vector in the full recursion tree, including the initial word embeddings.
Given two sentences W1 and W2 of length 4 and 3 respectively, the autoencoders would produce 7 and 5 vector representations, including the initial word embeddings. The Euclidean distance is then taken between every combination of vectors in W1 and W2 to produce a similarity matrix S. S is then subject to a dynamic min-pooling layer to produce a fixed-size matrix. Since the matrices S are not uniform in size among all potential sentence pairs, S is split into roughly even sections. The output is then normalized to have mean 0 and standard deviation 1 and is fed into a fully connected layer with a softmax output. The dynamic pooling to softmax model is trained using pairs of known paraphrases.
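The distance-matrix and dynamic min-pooling steps can be sketched in a few lines. The 2×2 output size and the even-grid splitting rule below are illustrative assumptions; the original model pools to a larger fixed size.

```python
import math

def distance_matrix(vs1, vs2):
    # Euclidean distance between every pair of vectors from the two sentences
    return [[math.dist(a, b) for b in vs2] for a in vs1]

def dynamic_min_pool(S, out=2):
    # Split S into a roughly even out x out grid of cells and keep the
    # minimum distance in each cell, yielding a fixed out x out matrix.
    # Assumes len(S) >= out and len(S[0]) >= out.
    rows, cols = len(S), len(S[0])
    def edges(n):
        return [(k * n) // out for k in range(out + 1)]
    re_, ce = edges(rows), edges(cols)
    return [[min(S[r][c]
                 for r in range(re_[i], re_[i + 1])
                 for c in range(ce[j], ce[j + 1]))
             for j in range(out)]
            for i in range(out)]
```

Because the grid edges are computed proportionally, sentence pairs of any length are pooled down to the same fixed shape before the normalization and softmax layers.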
Skip-thought vectors
Skip-thought vectors are an attempt to create a vector representation of the semantic meaning of a sentence, similarly to the skip-gram model. Skip-thought vectors are produced through the use of a skip-thought model, which consists of three key components: an encoder and two decoders. Given a corpus of documents, the skip-thought model is trained to take a sentence as input and encode it into a skip-thought vector. The skip-thought vector is used as input for both decoders; one attempts to reproduce the previous sentence and the other the following sentence in its entirety. The encoder and decoder can be implemented through the use of a recurrent neural network (RNN) or an LSTM.
Since paraphrases carry the same semantic meaning between one another, they should have similar skip-thought vectors. Thus a simple logistic regression can be trained to good performance with the absolute difference and component-wise product of two skip-thought vectors as input.
Transformers
Similar to how Transformer models influenced paraphrase generation, their application in identifying paraphrases showed great success. Models such as BERT can be adapted with a binary classification layer and trained end-to-end on identification tasks. Transformers achieve strong results when transferring between domains and paraphrasing techniques compared to more traditional machine learning methods such as logistic regression. Other successful methods based on the Transformer architecture include using adversarial learning and meta-learning.
Evaluation
Multiple methods can be used to evaluate paraphrases. Since paraphrase recognition can be posed as a classification problem, most standard evaluation metrics, such as accuracy, F1 score, or an ROC curve, perform relatively well. However, F1 scores are difficult to calculate because of the trouble of producing a complete list of paraphrases for a given phrase, and because good paraphrases are dependent upon context. A metric designed to counter these problems is ParaMetric. ParaMetric aims to calculate the precision and recall of an automatic paraphrase system by comparing the automatic alignment of paraphrases to a manual alignment of similar phrases. Since ParaMetric simply rates the quality of phrase alignment, it can also be used to rate paraphrase generation systems, assuming they use phrase alignment as part of their generation process. A notable drawback of ParaMetric is the large and exhaustive set of manual alignments that must be created before a rating can be produced.
The evaluation of paraphrase generation has similar difficulties as the evaluation of machine translation. The quality of a paraphrase depends on its context, whether it is being used as a summary, and how it is generated, among other factors. Additionally, a good paraphrase usually is lexically dissimilar from its source phrase. The simplest method used to evaluate paraphrase generation would be through the use of human judges. Unfortunately, evaluation through human judges tends to be time-consuming. Automated approaches to evaluation prove to be challenging as it is essentially a problem as difficult as paraphrase recognition. While originally used to evaluate machine translations, bilingual evaluation understudy (BLEU) has been used successfully to evaluate paraphrase generation models as well. However, paraphrases often have several lexically different but equally valid solutions, hurting BLEU and other similar evaluation metrics.
Metrics specifically designed to evaluate paraphrase generation include paraphrase in n-gram change (PINC) and paraphrase evaluation metric (PEM), along with the aforementioned ParaMetric. PINC is designed to be used with BLEU and to help cover its inadequacies. Since BLEU has difficulty measuring lexical dissimilarity, PINC is a measurement of the lack of n-gram overlap between a source sentence and a candidate paraphrase. It is essentially the Jaccard distance between the sentences, excluding n-grams that appear in the source sentence, in order to maintain some semantic equivalence. PEM, on the other hand, attempts to evaluate the "adequacy, fluency, and lexical dissimilarity" of paraphrases by returning a single-value heuristic calculated using n-gram overlap in a pivot language. However, a large drawback of PEM is that it must be trained using large, in-domain parallel corpora and human judges; in effect, it amounts to training a paraphrase recognition system in order to evaluate a paraphrase generation system.
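PINC is straightforward to sketch: average, over n-gram orders 1..N, the fraction of candidate n-grams that do not appear in the source. The max_n=4 default and whitespace tokenization below are illustrative assumptions.

```python
def ngrams(tokens, n):
    # Set of n-grams (as tuples) over a token list
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def pinc(source, candidate, max_n=4):
    # PINC = mean over n of (1 - |ngrams_n(cand) & ngrams_n(src)| / |ngrams_n(cand)|)
    src, cand = source.split(), candidate.split()
    scores = []
    for n in range(1, max_n + 1):
        c = ngrams(cand, n)
        if not c:
            break  # candidate shorter than n tokens
        s = ngrams(src, n)
        scores.append(1 - len(c & s) / len(c))
    return sum(scores) / len(scores) if scores else 0.0
```

A verbatim copy scores 0 (no n-gram change) while a fully rewritten candidate scores 1, so PINC rewards exactly the lexical dissimilarity that BLEU penalizes.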
The Quora Question Pairs Dataset, which contains hundreds of thousands of duplicate questions, has become a common dataset for the evaluation of paraphrase detectors. Consistently reliable paraphrase detectors have all used the Transformer architecture and have all relied on large amounts of pre-training with more general data before fine-tuning on the question pairs.
See also
References
External links
Microsoft Research Paraphrase Corpus - a dataset consisting of 5800 pairs of sentences extracted from news articles annotated to note whether a pair captures semantic equivalence
Paraphrase Database (PPDB) - A searchable database containing millions of paraphrases in 16 different languages
Computational linguistics
Machine learning | Paraphrasing (computational linguistics) | [
"Technology",
"Engineering"
] | 2,353 | [
"Artificial intelligence engineering",
"Natural language and computing",
"Computational linguistics",
"Machine learning"
] |
56,143,831 | https://en.wikipedia.org/wiki/Alcove%20%28architecture%29 | In architecture, an alcove is a small recessed section of a room or an arched opening (as in a wall).
The section is partially enclosed by such vertical elements as walls, pillars and balustrades.
Etymology
The word alcove originates from the Arabic al-qubba (القبة), from al-, 'the', and qubbah, 'vault', through the Spanish alcoba.
See also
Niche (architecture)
Mihrab
Box-bed
Tokonoma
Setback (architecture)
References
External links
Architectural elements | Alcove (architecture) | [
"Technology",
"Engineering"
] | 116 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
56,144,090 | https://en.wikipedia.org/wiki/Drosophila%20circadian%20rhythm | Drosophila circadian rhythm is a daily 24-hour cycle of rest and activity in the fruit flies of the genus Drosophila. The biological process was discovered and is best understood in the species Drosophila melanogaster. Many behaviors are under circadian control including eclosion, locomotor activity, feeding, and mating. Locomotor activity is maximum at dawn and dusk, while eclosion is at dawn.
Biological rhythms were first studied in Drosophila pseudoobscura. Drosophila circadian rhythms have paved the way for understanding circadian behaviour and diseases related to sleep-wake conditions in other animals, including humans, because circadian clocks are fundamentally similar across species. The Drosophila circadian rhythm was discovered in 1935 by the German zoologists Hans Kalmus and Erwin Bünning. The American biologist Colin S. Pittendrigh performed an important experiment in 1954, which established that the circadian rhythm is driven by a biological clock. The genetics was first understood in 1971, when Seymour Benzer and Ronald J. Konopka reported that mutations in specific genes change or stop the circadian behaviour. They discovered the gene called period (per), mutations of which alter the circadian rhythm. It was the first gene known to control behaviour. A decade later, Konopka, Jeffrey C. Hall, Michael Rosbash, and Michael W. Young discovered further genes, including timeless (tim), Clock (Clk), cycle (cyc), and cry. These genes and their protein products play a key role in the circadian clock. The research conducted in Benzer's lab is narrated in Time, Love, Memory by Jonathan Weiner.
For their contributions, Hall, Rosbash and Young received the Nobel Prize in Physiology or Medicine in 2017.
History
During the process of eclosion, by which an adult fly emerges from the pupa, Drosophila exhibits regular locomotor activity (detected by vibration) in 8-10 hour intervals beginning just before dawn. The existence of this circadian rhythm was independently discovered in D. melanogaster in 1935 by two German zoologists: Hans Kalmus at the Zoological Institute of the German University in Prague (now Charles University), and Erwin Bünning at the Botanical Institute of the University of Jena. Kalmus discovered in 1938 that a region of the brain is responsible for the circadian activity. Kalmus and Bünning were of the opinion that temperature was the main factor, but it was soon realized that the circadian rhythm could remain unchanged across different temperatures. In 1954, Colin S. Pittendrigh at Princeton University discovered the importance of light-dark conditions in D. pseudoobscura. He demonstrated that the eclosion rhythm was delayed but not stopped when temperature was decreased. He concluded that temperature influenced only the peak hour of the rhythm and was not the principal factor. It was then known that the circadian rhythm was controlled by a biological clock, but the nature of the clock was then a mystery.
After almost two decades, the existence of the circadian clock was discovered by Seymour Benzer and his student Ronald J. Konopka at the California Institute of Technology. They discovered that mutations in the X chromosome of D. melanogaster could produce abnormal circadian activities. When a specific part of the chromosome was absent (inactivated), there was no circadian rhythm; in one mutation (called perS, "S" for short or shortened) the rhythm was shortened to ~19 hours; whereas in another mutation (perL, "L" for long or lengthened) the rhythm was extended to ~29 hours, as opposed to the normal 24-hour rhythm. They published the discovery in 1971. They named the gene locus period (per for short), as it controls the period of the rhythm. Some scientists objected, however, arguing that genes could not control behaviours as complex as circadian activities.
Another circadian behavior in Drosophila is courtship between the male and female during mating. Courtship involves a song accompanied by a ritual locomotory dance in males. The main flight activity generally takes place in the morning, with another peak before sunset. The courtship song is produced by the male's wing vibration and consists of pulses of tone produced at intervals of approximately 34 msec in D. melanogaster (48 msec in D. simulans). In 1980, Jeffrey C. Hall and his student Charalambos P. Kyriacou, at Brandeis University in Waltham, discovered that courtship activity is also controlled by the per gene. In 1984, Konopka, Hall, Michael Rosbash and their team reported in two papers that the per locus is the centre of the circadian rhythm, and that loss of per stops circadian activity. At the same time, Michael W. Young's team at the Rockefeller University reported similar effects of per, and that the gene covers a 7.1-kilobase (kb) interval on the X chromosome and encodes a 4.5-kb poly(A)+ RNA. In 1986, they sequenced the entire DNA fragment and found that the gene encodes the 4.5-kb RNA, which produces a protein, a proteoglycan, composed of 1,127 amino acids. At the same time, Rosbash's team showed that the PER protein is absent in per mutants. In 1994, Young and his team discovered the gene timeless (tim), which influences the activity of per. In 1998, they discovered doubletime (dbt), which regulates the amount of PER protein.
In 1990, Konopka, Rosbash, and colleagues identified a new gene called Clock (Clk), which is vital for the circadian period. In 1998, they found a new gene, cycle (cyc), which acts together with Clk. In late 1998, Hall and Rosbash's team discovered cryb, a mutation of a gene conferring sensitivity to blue light. They simultaneously identified the protein CRY as the main light-sensitive (photoreceptor) system. The activity of cry is under circadian regulation and is influenced by other genes such as per, tim, Clk, and cyc. The gene product CRY is a major photoreceptor protein belonging to a class of flavoproteins called cryptochromes, which are also present in bacteria and plants. In 1998, Hall and Jae H. Park isolated a gene encoding a neuropeptide named pigment dispersing factor (PDF), after one of the roles it plays in crustaceans. In 1999, they discovered that pdf is expressed by the ventral lateral neuron clusters (LNv), indicating that PDF is the major circadian neurotransmitter and that the LNv neurons are the principal circadian pacemakers. In 2001, Young and his team demonstrated that Shaggy (SGG), an ortholog of glycogen synthase kinase-3 (GSK-3), is an enzyme that regulates TIM maturation and accumulation in the early night by causing its phosphorylation.
Hall, Rosbash, and Young shared the 2017 Nobel Prize in Physiology or Medicine “for their discoveries of molecular mechanisms controlling the circadian rhythm”.
Mechanism
In Drosophila there are two distinct components of the circadian clock: the clock neurons and the clock genes. They act in concert to produce the 24-hour cycle of rest and activity. Light is the source of activation of the clocks. The compound eyes, ocelli, and Hofbauer-Buchner eyelets (HB eyelets) are the direct external photoreceptor organs, but the circadian clock can run in constant darkness. Nonetheless, the photoreceptors are required for measuring day length and detecting moonlight. The compound eyes are important for differentiating long days from constant light, and for the normal masking effects of light, such as the induction of activity by light and its inhibition by darkness. There are two distinct activity peaks, termed the M (morning) peak, happening at dawn, and the E (evening) peak, at dusk; they track the different day lengths in different seasons of the year. The light-sensitive proteins in the eye, the rhodopsins (rhodopsin 1 and 6), are crucial in activating the M and E oscillations. When environmental light is detected, approximately 150 clock neurons (of about 100,000 neurons in the Drosophila brain) regulate the circadian rhythm. The clock neurons are located in distinct clusters in the central brain. The best-understood clock neurons are the large and small lateral ventral neurons (l-LNvs and s-LNvs) at the base of the optic lobe. These neurons produce pigment dispersing factor (PDF), a neuropeptide that acts as a circadian neuromodulator between different clock neurons.
The Drosophila circadian clock keeps time via daily fluctuations of clock proteins, which interact in a transcription-translation feedback loop. The core clock mechanism consists of two interdependent feedback loops, namely the PER/TIM loop and the CLK/CYC loop. The CLK/CYC loop operates during the day, when both the Clock (CLK) and Cycle (CYC) proteins are produced. CLK/CYC heterodimers act as transcription factors and initiate the transcription of the per and tim genes by binding to a promoter element called the E box, around mid-day. The DNA is transcribed to produce PER mRNA and TIM mRNA, and the PER and TIM proteins are synthesized in the cytoplasm, exhibiting a smooth increase in levels over the day. Their RNA levels peak early in the evening, and their protein levels peak around daybreak. But their protein levels are kept constantly low until dusk, because daylight also activates the doubletime (dbt) gene. DBT protein induces post-translational modifications, namely phosphorylation and turnover, of monomeric PER proteins. As PER is translated in the cytoplasm, it is actively phosphorylated by DBT (casein kinase 1ε) and casein kinase 2 (encoded by And and Tik) as a prelude to premature degradation. The actual degradation proceeds through the ubiquitin-proteasome pathway and is carried out by a ubiquitin ligase called Slimb (supernumerary limbs). At the same time, TIM is phosphorylated by Shaggy, whose activity declines after sunset. DBT gradually disappears, and the withdrawal of DBT allows PER molecules to be stabilized by physical association with TIM. Hence, maximum production of PER and TIM occurs at dusk. At the same time, CLK/CYC also directly activates vri and Pdp1 (the gene for PAR domain protein 1). VRI accumulates first, 3-6 hours earlier, and starts to repress Clk; the incoming PDP1 then competes by activating Clk.
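In skeleton form, this transcription-translation loop is a delayed negative feedback: CLK/CYC drives per/tim transcription, the proteins accumulate in the cytoplasm and enter the nucleus, and nuclear PER/TIM shuts the transcription off. A minimal Goodwin-style sketch of such a loop is shown below; all variable names, the Hill-type repression term, and every rate constant are illustrative assumptions, not measured Drosophila parameters.

```python
# Goodwin-type negative-feedback loop: mRNA -> cytoplasmic protein ->
# nuclear protein, with nuclear protein repressing transcription.
# Integrated with forward Euler; parameters are purely illustrative.
def simulate(steps=60000, dt=0.01):
    m, p, n = 0.1, 0.1, 0.1   # per/tim mRNA, cytoplasmic protein, nuclear protein
    traj = []
    for _ in range(steps):
        dm = 1.0 / (1.0 + n**12) - 0.1 * m   # transcription, repressed by nuclear protein
        dp = 1.0 * m - 0.1 * p               # translation and turnover
        dn = 1.0 * p - 0.1 * n               # nuclear entry and degradation
        m, p, n = m + dm * dt, p + dp * dt, n + dn * dt
        traj.append(m)
    return traj
```

With a sufficiently steep repression term and comparable degradation rates, such a three-stage loop can sustain oscillations; quantitative clock models add TIM-dependent stabilization of PER, phosphorylation delays, and light input.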
PER/TIM dimers accumulate in the early night and, several hours later, translocate in an orchestrated fashion into the nucleus, where they bind to CLK/CYC dimers. Bound PER completely stops the transcriptional activity of CLK and CYC.
In the early morning, the appearance of light causes the PER and TIM proteins to break down through a network of transcriptional activation and repression. First, light activates the cry gene in the clock neurons. Although CRY is produced deep inside the brain, it is sensitive to UV and blue light, and thus it readily signals the onset of light to the brain cells. It irreversibly and directly binds to TIM, causing TIM to break down through proteasome-dependent, ubiquitin-mediated degradation. CRY's photolyase homology domain serves for light detection and phototransduction, whereas its carboxyl-terminal domain regulates CRY stability, the CRY-TIM interaction, and circadian photosensitivity. The ubiquitination and subsequent degradation are aided by another protein, JET. The PER/TIM dimer thus dissociates, and the unbound PER becomes unstable; PER undergoes progressive phosphorylation and ultimately degradation. The absence of PER and TIM then allows activation of the clk and cyc genes, and the clock is reset to commence the next circadian cycle.
References
Circadian rhythm
circadian | Drosophila circadian rhythm | [
"Biology"
] | 2,564 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
56,145,576 | https://en.wikipedia.org/wiki/Vestronidase%20alfa | Vestronidase alfa, sold under brand name Mepsevii, is a medication for the treatment of Sly syndrome. It is a recombinant form of the human enzyme beta-glucuronidase. It was approved in the United States in November 2017, to treat children and adults with an inherited metabolic condition called mucopolysaccharidosis type VII (MPS VII), also known as Sly syndrome. MPS VII is an extremely rare, progressive condition that affects most tissues and organs.
The most common side effects after treatment with vestronidase alfa include infusion site reactions, diarrhea, rash (urticaria) and anaphylaxis (sudden, severe allergic reaction).
The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. It was approved for use in the European Union in August 2018.
Medical uses
Mepsevii is indicated for the treatment of non-neurological manifestations of Mucopolysaccharidosis VII (MPS VII; Sly syndrome).
History
The safety and efficacy of vestronidase alfa were established in a clinical trial and expanded access protocols enrolling a total of 23 participants ranging from five months to 25 years of age. Participants received treatment with vestronidase alfa at doses up to 4 mg/kg once every two weeks for up to 164 weeks. Efficacy was primarily assessed via the six-minute walk test in ten participants who could perform the test. After 24 weeks of treatment, the mean difference in distance walked relative to placebo was 18 meters. Additional follow-up for up to 120 weeks suggested continued improvement in three participants and stabilization in the others. Two participants in the vestronidase alfa development program experienced marked improvement in pulmonary function. Overall, the results observed would not have been anticipated in the absence of treatment. The effect of vestronidase alfa on the central nervous system manifestations of MPS VII has not been determined.
The FDA approved vestronidase alfa-vjbk based primarily on evidence from one clinical trial (NCT02230566) of 12 participants with mucopolysaccharidosis VII. The trial was conducted at four sites in the United States.
The benefit and side effects of vestronidase alfa were based primarily on one trial. Participants were randomly assigned to four groups. Three groups of participants received placebo treatment before starting vestronidase alfa treatment, and one group received vestronidase alfa only. Vestronidase alfa or placebo was given once every two weeks as an intravenous (IV) infusion. Neither participants nor healthcare providers knew which treatment was given until after the trial was completed.
The benefit of 24 weeks of vestronidase alfa treatment was primarily evaluated by the 6-minute walking test (6MWT) and compared to placebo treatment in ten participants who could perform the test. The 6MWT measured the distance a patient could walk on a flat surface in 6 minutes. An additional follow-up using 6MWT was done for up to 120 weeks.
The application for vestronidase alfa was granted fast track designation, orphan drug designation, and a rare pediatric disease priority review voucher. This was the twelfth rare pediatric disease priority review voucher issued.
The US Food and Drug Administration (FDA) granted approval of Mepsevii to Ultragenyx Pharmaceutical, Inc., and required the manufacturer to conduct a post-marketing study to evaluate the long-term safety of the product.
References
External links
Orphan drugs
Recombinant proteins | Vestronidase alfa | [
"Biology"
] | 729 | [
"Recombinant proteins",
"Biotechnology products"
] |
56,146,819 | https://en.wikipedia.org/wiki/List%20of%20sexually%20transmitted%20infections%20by%20prevalence | Sexually transmitted infections (STIs), also referred to as sexually transmitted diseases (STDs), are infections that are commonly spread by sexual activity, especially vaginal intercourse, anal sex and oral sex. The most prevalent STIs may be carried by a significant fraction of the human population.
References
Sexually transmitted diseases and infections
Infections with a predominantly sexual mode of transmission
Papillomavirus
Infectious disease-related lists | List of sexually transmitted infections by prevalence | [
"Biology"
] | 84 | [
"Viruses",
"Papillomavirus"
] |
56,147,496 | https://en.wikipedia.org/wiki/Give%20a%20dog%20a%20bad%20name%20and%20hang%20him | Give a dog a bad name and hang him is an English proverb. Its meaning is that if a person's reputation has been besmirched, then he will suffer difficulty and hardship. A similar proverb is he that has an ill name is half hanged.
The proverb dates back to the 18th century or before. In 1706, John Stevens recorded it as "Give a Dog an ill name and his work is done". In 1721, James Kelly had it as a Scottish proverb – "Give a Dog an ill Name, and he'll soon be hanged. Spoken of those who raise an ill Name on a Man on purpose to prevent his Advancement." In Virginia, it appeared as an old saying in the Norfolk Herald in 1803 – "give a dog a bad name and hang him".
The observation is due to negativity bias – that people are apt to think poorly of others on weak evidence. This is then reinforced by confirmation bias as people give more weight to evidence that supports a preconception than evidence which contradicts it.
See also
Character assassination
Scapegoat
References
1700s neologisms
18th-century quotations
Harassment and bullying
Defamation
English proverbs
Metaphors referring to dogs | Give a dog a bad name and hang him | [
"Biology"
] | 251 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
56,149,116 | https://en.wikipedia.org/wiki/Ronald%20G.%20Amundson | Ronald Amundson is an American environmental scientist who is currently Professor at University of California, Berkeley.
Early life and education
Amundson received his PhD from the University of California, Riverside in 1984. For this work, he had analyzed the carbon and oxygen stable isotopic ratios of calcium carbonate within the soils in the San Joaquin Valley.
Research
His interests are isotope biogeochemistry, environmental history and ethics, ecosystems, pedology, and soils. His most-cited paper, "Rapid exchange between soil carbon and atmospheric carbon dioxide driven by temperature change", has been cited 759 times according to Google Scholar.
Awards and honors
2019 - Elected Fellow of the American Geophysical Union
Selected publications
References
University of California, Berkeley College of Natural Resources faculty
American environmental scientists
Living people
Year of birth missing (living people)
Fellows of the American Geophysical Union | Ronald G. Amundson | [
"Environmental_science"
] | 170 | [
"American environmental scientists",
"Environmental scientists"
] |
74,592,933 | https://en.wikipedia.org/wiki/Polar%20BEAR | Polar BEAR, short for Polar Beacon Experiment and Auroral Research, was a 1986 U.S. military space mission. Also known as STP P87-1 or STP P87-A, the craft was built for the Air Force by Johns Hopkins University's Applied Physics Laboratory (APL).
To save money, the spacecraft was built on the Transit-O satellite retrieved from the National Air and Space Museum, where it had been on display for almost a decade. It was launched on November 13, 1986, from Vandenberg AFB. Its science mission was to investigate communications interference caused by solar flares and auroral activity, continuing the work of the previous HILAT ("High Latitude") mission.
References
External links
- from Johns Hopkins Applied Physics Laboratory
Spacecraft launched in 1986
Military satellites of the United States
Derelict satellites orbiting Earth | Polar BEAR | [
"Astronomy"
] | 174 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
74,593,188 | https://en.wikipedia.org/wiki/Smoldyn | Smoldyn is an open-source software application for cell-scale biochemical simulations. It uses particle-based simulation, meaning that it simulates each molecule of interest individually, in order to capture natural stochasticity and yield nanometer-scale spatial resolution. Simulated molecules diffuse, react, are confined by surfaces, and bind to membranes in similar manners as in real biochemical systems.
History
Smoldyn was initially released in 2003 as a simulator that represented chemical reactions between diffusing particles in rectilinear volumes. Further development added support for surfaces, multi-scale simulation, molecules with excluded volume, rule-based modeling, and C/C++ and Python APIs. Smoldyn development has been funded by a postdoctoral NSF grant awarded to Steve Andrews, a US DOE contract awarded to Adam Arkin, a grant from the Computational Research Laboratories (Pune, India) awarded to Upinder Bhalla, a MITRE contract and several NIH grants awarded to Roger Brent, and a Simons Foundation grant awarded to Steve Andrews.
Development team
Smoldyn has been developed primarily by Steve Andrews, over the course of multiple research and teaching positions. Other contributors have included Nathan Addy, Martin Robinson, and Diliwar Singh.
Features
Smoldyn is primarily a tool for biophysics and systems biology research. It focuses on spatial scales between nanometers and microns. The following feature descriptions are drawn from the Smoldyn documentation.
Model definition: Models are entered as text files that describe the system. This includes: lists of molecule species, their diffusion coefficients, and their chemical reactions; lists of surfaces and their interactions with molecules; initial molecule and surface locations; and actions that a "virtual experimenter" carries out during the simulation.
Real-time graphics: Smoldyn displays the simulated system to a graphics window as the simulation runs.
Simulated behaviors: Smoldyn's simulated behaviors focus on molecular diffusion, interaction with surfaces, and interactions with each other. This enables simulation of: molecular diffusion and drift, chemical reactions, excluded volume interactions, macromolecular crowding, allosteric interactions, surface adsorption and desorption, partial transmission through surfaces, on-surface diffusion, and long-range intermolecular forces.
Accuracy: Smoldyn development has focused strongly on quantitative accuracy. Tests have been run and published to show that diffusion, chemical reactions, surface interactions, excluded volume interactions, and on-surface diffusion simulate with high quantitative accuracy, typically with substantially less than 1% error.
Rule-based modeling: Smoldyn supports two types of rule-based modeling. It reads the BNGL language, which it parses with the BioNetGen software. It also supports a method that is based on wildcard characters.
Multi-scale simulation: Because particle-based simulation is computationally intensive, Smoldyn also supports simulation using a spatial version of the Gillespie algorithm. These algorithms are linked together to enable both to be used in a single simulation.
C/C++ and Python APIs: All of Smoldyn's functions can be accessed through either a C/C++ or a Python API.
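As a concrete illustration of the text-file model definition described above, the sketch below uses statement names drawn from the Smoldyn documentation (dimensionality, species and diffusion coefficients, time stepping, boundaries, initial molecule placement, and one bimolecular reaction). It is a hypothetical example, and exact syntax and parameters should be checked against the Smoldyn user manual.

```
# Hypothetical minimal Smoldyn model file: A and B diffuse in a
# 100 x 100 x 100 box and react to form C
dim 3
species A B C

difc A 1
difc B 1
difc C 0.5

time_start 0
time_stop 100
time_step 0.01

boundaries x 0 100
boundaries y 0 100
boundaries z 0 100

mol 1000 A u u u     # 1000 A molecules at uniformly random positions
mol 1000 B u u u

reaction fwd A + B -> C 10

end_file
```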
GPU acceleration
Smoldyn has twice been refactored to run on GPUs, each time offering approximately 200-fold speed improvements. However, neither GPU version supports the full range of features available in the CPU version, and neither is currently supported.
See also
List of systems biology modeling software
References
External links
Systems biology
Free simulation software | Smoldyn | [
"Biology"
] | 714 | [
"Systems biology"
] |
74,594,353 | https://en.wikipedia.org/wiki/Diphenyl%20sulfide | Diphenyl sulfide is an organosulfur compound with the chemical formula , often abbreviated as , where Ph stands for phenyl. It is a colorless liquid with an unpleasant odor. Diphenyl sulfide is an aromatic sulfide. The molecule consists of two phenyl groups attached to a sulfur atom.
Synthesis, reactions, occurrence
Many methods exist for the preparation of diphenyl sulfide. It arises by a Friedel-Crafts-like reaction of sulfur monochloride and benzene. Diphenyl sulfide and its analogues can also be produced by coupling reactions using metal catalysts. It can also be prepared by reduction of diphenyl sulfone.
Diphenyl sulfide is a product of the photodegradation of the fungicide edifenphos.
Diphenyl sulfide is a precursor to triarylsulfonium salts, which are used as photoinitiators. The compound can be oxidized to the sulfoxide with hydrogen peroxide.
References
Aromatic compounds
Cyclic compounds
Organosulfur compounds
Sulfur compounds
Thioethers | Diphenyl sulfide | [
"Chemistry"
] | 225 | [
"Organic compounds",
"Aromatic compounds",
"Organosulfur compounds"
] |
74,595,119 | https://en.wikipedia.org/wiki/Mustard%20cake | Mustard cake is the residue obtained after extraction of oil from mustard, which is used as organic fertilizer. Mustard cake powder is excellent organic fertilizer containing food ingredients and even catalysts for herbaceous plants (fruit, flower and vegetable plants). Mustard cake are very useful as feed for the livestock and cattle.
Effectivity
Mustard cake powder is an all-purpose and harmless fertilizer, as it contains no ingredients other than mustard. It can be used either mixed into the soil or as a liquid organic fertilizer.
It meets plants' needs for nitrogen, potassium and various macro and micro elements, and helps flowers, fruits and plants grow to the right size.
It is a natural and eco-friendly organic fertilizer.
It provides phosphorus to plants.
How to use
Mustard cake powder can be used in two ways.
Direct use
Mustard cake must be finely ground before it is applied directly to the soil. The powder can be applied at a distance of half a meter from the base of the plant.
Liquid form
Mustard cake powder needs to be soaked in water for a week to decompose. After a week, the resulting liquid should be diluted with fresh water at a ratio of about 1:10 and applied to the soil near the base of the plant.
References
Fertilizers | Mustard cake | [
"Chemistry"
] | 256 | [
"Fertilizers",
"Soil chemistry"
] |
74,596,521 | https://en.wikipedia.org/wiki/CodeWeek | EU Codeweek (also stylized as CodeWeek) is an initiative started in 2013 by the European Union to increase basic programming knowledge among children and young people.
History
Codeweek was launched in 2013 by Neelie Kroes – at the time Vice President of the European Commission – as part of a broader European digital agenda. With the increase in the number of devices running software, there is a growing need for programmers, and the organization wants to introduce children to computer language at a young age through this initiative.
Codeweek is a week during which free events are organized in schools, libraries, and other locations across Europe to teach more children and young people the basics of programming. In 2022, eighty European countries participated.
Participating countries
As of 2024, 45 countries take part in Code Week, which features an annual competition ranking countries by the ratio of activities to population size. The participating countries are:
Albania
Argentina
Armenia
Austria
Belgium
Bosnia and Herzegovina
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
France
Georgia
Germany
Greece
Hungary
Ireland
Italy
Jordan
Kenya
Latvia
Lebanon
Lithuania
Malta
Moldova
Monaco
Netherlands
North Macedonia
Pakistan
Palestine
Poland
Portugal
Romania
Serbia
Slovakia
Slovenia
Spain
Sweden
Thailand
Tunisia
Türkiye (Turkey)
Ukraine
United Kingdom
United States
References
Computer science education | CodeWeek | [
"Technology"
] | 243 | [
"Computer science education",
"Computer science"
] |
74,596,929 | https://en.wikipedia.org/wiki/List%20of%20insecticides | This is a list of insecticides. These are chemical compounds which have been registered as insecticides. Biological insecticides are not included. The names on the list are the ISO common names. A complete list of pesticide common names is published by the BCPC. The University of Hertfordshire maintains a database of the chemical and biological properties of these materials, including their brand names and the countries and dates where and when they have been introduced. The industry-sponsored Insecticide Resistance Action Committee (IRAC) advises on the use of insecticides in crop protection and classifies the available compounds according to their chemical classes and mechanism of action so as to manage the risks of pesticide resistance developing. The 2024 IRAC poster of insecticide modes of action includes the majority of chemicals listed below. The pesticide manual provides much information on pesticides.
Many of the insecticides in the list are not in use. The developer of a pesticide applies for a common name when they intend to sell it, but some nevertheless do not reach the market. Many insecticides have been banned or otherwise withdrawn from the market over the decades.
Brand names and common names
Pesticides are sold under their brand names. The purchased pesticide is a mixture (formulation) of the active ingredient, which is the pesticide itself, and inert ingredients, such as emulsifiers, or surfactants. Only the common names of the active ingredients are shown in this list. These are approved by ISO committee (TC81). Common names are used by the authorities, in scientific literature and on the labelling information of purchasable pesticides.
0-9
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
X
Y
Z
See also
Biopesticide
Federal Insecticide, Fungicide, and Rodenticide Act
List of fungicides
List of herbicides
References
Insecticide
Chemistry-related lists | List of insecticides | [
"Chemistry"
] | 391 | [
"Lists of chemical compounds",
"nan"
] |
74,597,445 | https://en.wikipedia.org/wiki/Chain%20reactions%20in%20living%20organisms | Chain reaction in chemistry and physics is a process that produces products capable of initiating subsequent processes of a similar nature. It is a self-sustaining sequence in which the resulting products continue to propagate further reactions. Examples of chain reactions in living organisms are lipid peroxidation in cell membranes and propagation of excitation of neurons in epilepsy.
Lipid peroxidation in cell membranes
Nonenzymatic peroxidation occurs through the action of reactive oxygen species (ROS), specifically hydroxyl (HO•) and hydroperoxyl (HO2•) radicals, which initiate the oxidation of polyunsaturated fatty acids. Other initiators of lipid peroxidation include ozone (O3), nitric oxide (NO), nitrogen dioxide (NO2), and sulfur dioxide. The process of nonenzymatic peroxidation can be divided into three phases: initiation, propagation, and termination.
During the initiation phase, fatty acid radicals are generated, which can propagate peroxidation to other molecules. This occurs when a free radical removes a hydrogen atom from a fatty acid, resulting in a lipid radical (L•) with an unpaired electron.
In the propagation phase, the lipid radical reacts with oxygen (O2) or a transition metal, forming a peroxyl radical (LOO•). This peroxyl radical continues the chain reaction by reacting with a new unsaturated fatty acid, producing a new lipid radical (L•) and lipid hydroperoxide (LOOH). These primary products can further decompose into secondary products.
The termination phase involves the reaction of a radical with an antioxidant molecule, such as α-tocopherol (vitamin E), which inhibits the propagation of the chain reaction and thus terminates peroxidation. Termination can also occur through the reaction between a lipid radical and a lipid peroxyl radical, or the combination of two lipid peroxyl radicals, resulting in stable nonreactive molecules. Isotope-reinforced lipids, which become part of the membrane when consumed in the diet, also inhibit peroxidation.
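The initiation, propagation, and termination phases can be caricatured with a toy mass-action model in which an antioxidant provides the termination sink. Everything below (the species, the fixed radical source, and every rate constant) is an illustrative assumption, not measured chemistry; the point is only that a termination pathway suppresses the chain's product yield.

```python
# Toy kinetics of a radical chain: initiation makes L*, oxygen converts it
# to LOO*, propagation consumes lipid (LH) and yields hydroperoxide (LOOH)
# while regenerating L*, and an antioxidant quenches LOO*.
def peroxidation(antioxidant=0.0, steps=20000, dt=0.001):
    LH, L, LOO, LOOH = 1.0, 0.0, 0.0, 0.0   # lipid, L*, LOO*, LOOH
    k_init, k_ox, k_prop, k_term = 0.01, 50.0, 5.0, 100.0
    for _ in range(steps):
        init = k_init * LH                   # initiation (radical source held constant)
        ox   = k_ox * L                      # L* + O2 -> LOO*  (O2 in excess)
        prop = k_prop * LOO * LH             # propagation: LOO* + LH -> LOOH + L*
        term = k_term * antioxidant * LOO    # termination by antioxidant
        LH   += dt * (-init - prop)
        L    += dt * (init - ox + prop)
        LOO  += dt * (ox - prop - term)
        LOOH += dt * prop
    return LOOH
```

Running the model with and without the antioxidant term shows the chain-breaking effect: the hydroperoxide yield drops sharply when termination competes with propagation.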
Propagation of excitation of neurons in epilepsy
Epilepsy is a neurological condition marked by recurring seizures, which arise when the brain's electrical activity becomes unbalanced. Seizures disrupt the normal electrical patterns in the brain, causing sudden, synchronized bursts of electrical energy. As a result, individuals may experience temporary changes in consciousness, movements, or sensations.
Glutamate excitotoxicity is thought to play an important role in the initiation and maintenance of epileptic seizures. The seizure-induced high flux of glutamate overstimulates glutamate receptors, triggering a chain reaction of excitation in glutamatergic networks.
References
Chemical reactions
Biochemistry | Chain reactions in living organisms | [
"Chemistry",
"Biology"
] | 592 | [
"Biochemistry",
"Chemical reaction stubs",
"nan"
] |
74,597,943 | https://en.wikipedia.org/wiki/List%20of%20herbicides | This is a list of herbicides. These are chemical compounds which have been registered as herbicides. The names on the list are the ISO common name for the active ingredient which is formulated into the branded product sold to end-users. The University of Hertfordshire maintains a database of the chemical and biological properties of these materials, including their brand names and the countries and dates where and when they have been introduced. The industry-sponsored Herbicide Resistance Action Committee (HRAC) advises on the use of herbicides in crop protection and classifies the available compounds according to their chemical structures and mechanism of action so as to manage the risks of pesticide resistance developing. The 2024 HRAC poster of herbicide modes of action includes the majority of chemicals listed below.
The Weed Science Society of America also classifies herbicides by their mechanism of action, using a system aligned with the HRAC classification.
0-9
A
B
C
D
E
F
G
H
I
K
L
M
N
O
P
Q
R
S
T
V
X
Z
See also
List of fungicides
List of insecticides
References
herbicides
Herbicides | List of herbicides | [
"Chemistry",
"Biology"
] | 218 | [
"Herbicides",
"Biocides",
"Lists of chemical compounds",
"nan"
] |
74,598,694 | https://en.wikipedia.org/wiki/Reciprocal%20human%20machine%20learning | Reciprocal Human Machine Learning (RHML) is an interdisciplinary approach to designing human-AI interaction systems. RHML aims to enable continual learning between humans and machine learning models by having them learn from each other. This approach keeps the human expert "in the loop" to oversee and enhance machine learning performance and simultaneously support the human expert continue learning.
Background
RHML emerged in the context of the rise of big data analytics and artificial intelligence for intelligent tasks like sense-making and decision-making. As machine learning advanced to take on more roles, researchers realized fully autonomous systems had limitations and needed human guidance.
RHML extends the concept of human-in-the-loop systems by promoting reciprocal learning. Humans learn from their interactions with machine learning models, staying up-to-date on evolving technology. The models also learn from human feedback and oversight. This amplification of learning on both sides is a key focus of RHML.
The approach draws on theories of learning in dyads from education and psychology. It also builds on human-computer interaction and human-centered design principles. Implementing RHML requires developing specialized tools and interfaces tailored to the application.
Applications
RHML has been explored across diverse domains including:
Cybersecurity - Software to enable reciprocal learning between experts and AI models for social media threat detection.
Organizational decision-making - RHML to structure collaboration between humans and AI systems.
Workplace training - Using RHML for workers to learn from AI technologies on the job.
Open science - Using human and AI collaboration to promote open science.
Production and logistics - turning workers and intelligent machines into teammates.
RHML maintains human oversight and control over AI systems, while enabling cutting-edge machine learning performance. This collaborative approach highlights the importance of keeping the human expert involved in the loop.
References
Machine learning
Human–computer interaction | Reciprocal human machine learning | [
"Engineering"
] | 374 | [
"Artificial intelligence engineering",
"Human–computer interaction",
"Human–machine interaction",
"Machine learning"
] |
74,599,226 | https://en.wikipedia.org/wiki/International%20Academy%20of%20Wood%20Science | The International Academy of Wood Science (IAWS) is an international academy and a non-profit assembly of wood scientists, recognizing all fields of wood science with their associated technological domains and securing a worldwide representation.
Since June 2023, the academy has been represented by Dr. Stavros Avramidis, a Greek-Canadian professor and wood scientist who serves as the 19th President of the IAWS, and by Dr. Ingo Burgert, a Swiss wood engineer who is the elected vice-president for the period 2023–2026.
History
The academy was first established on June 2, 1966, at the Centre Technique du Bois in Paris.
The development and establishment of the International Academy of Wood Science involved many people, but the key person who had the idea of creating a wood academy was Franz Gustav Kollmann, Professor of Wood Research and Technology at the University of Munich, Germany. He was also the first elected President of the academy, serving from 1966 to 1972.
Since 1967, the official scientific journal of the IAWS has been Wood Science and Technology.
Objectives
The academy has the objective of promoting at the international level the concerted development of wood science and its standing by recognizing meritorious wood scientists by their election as Fellows thereby honouring distinguished achievements in the science of wood, and by promoting a high standard of research and publication. In addition, the academy holds annual plenary meetings, including business meetings and technical sessions, in the form of international scientific conferences.
Fellows of the IAWS are wood scientists elected for being actively engaged in wood research in the broadest sense; their election is evidence of high scientific standards. New Fellows are nominated and evaluated by Fellows, and the executive committee determines, based on those evaluations, the number of nominees to be accepted as Fellows each year.
The tasks of the Fellows of the IAWS are to:
Promote high level wood research and technology
Present wood research and science at international/national meetings of its own and of other organizations
Focus attention on the importance of wood research and science to governments, parliaments, industries, associations, press etc.
Promote the publication of research of IAWS members in the Quarterly Review
Executive committee
The executive committee of the IAWS consists of the following officers:
President: Dr. Stavros Avramidis, Department of Wood Science, University of British Columbia, Canada
Vice President: Dr. Ingo Burgert, Institute for Building Materials, ETH Zurich, Switzerland
Secretary/Bulletin editor: Dr. Rupert Wimmer, Natural Materials Technology, BOKU University, Vienna, Austria
Treasurer: Dr. Howard Rosen, Forest Products USDA Forest Service, Arlington, USA
Past President: Dr. Yoon Soo Kim, Wood Science & Engineering, Chonnam National University, Gwangju, South Korea
Chair of the Academy Board: Dr. Katarina Čufar, Department of Wood Science and Technology, University of Ljubljana, Slovenia
Distinguished members
Josef Gierer (1919–), member and recipient of Anselme Payen Award
Kyosti Vilho Sarkanen (1921–1990), member and recipient of Anselme Payen Award
Walter Liese (1926–2023), member
Erich Adler (1905–1985), member and recipient of Anselme Payen Award
Alfred J. Stamm (1897–1985), member
Pieter Baas (1944–2024), member
Edmone Roffael (1939–2021), member
Gerd Wegener (1945–), member
Roger M. Rowell (1939–), member
Raymond A. Young (1945–), member
Antonio Pizzi (1946–), member
Peter Niemz (1950–), member
John Ralph (1954–), member and recipient of Anselme Payen Award
Alfred Teischinger (1954–), member
Thomas Rosenau (1963–), member and recipient of Anselme Payen Award
Holger Militz (1960–), member and recipient of Schweighofer Prize
Rupert Wimmer (1960–), member
Publications
Wood Science and Technology
IAWS Bulletin
References
External links
Official website
Bulletins
History of science organizations
Wood sciences | International Academy of Wood Science | [
"Materials_science",
"Engineering"
] | 848 | [
"Wood sciences",
"Materials science"
] |
74,599,690 | https://en.wikipedia.org/wiki/Citadel%20of%20Safed | The Citadel of Safed is a ruined fortress on the peak of the mountain on which the modern city of Safed stands. Fortifications existed at the site during the late Second Temple period and under the Roman Empire, but most of the surviving remains date from the Crusader, Mamluk and Ottoman periods. The citadel was an important administrative center of the Crusaders. It was severely damaged in the strong earthquake that struck Safed in 1837. Over the last few decades, extensive archaeological excavations at the site have revealed remains and finds from all periods of the citadel's existence. Today the citadel attracts tourists for its historical value as well as for the view: its elevated position overlooks the surroundings of Safed, from the Meron mountain massif in the west to the Sea of Galilee in the east, the Lower Galilee in the south, and the Naftali and Hermon mountains in the north. The citadel garden was established in 1950, designed by landscape architect Shlomo Oren Weinberg, and a memorial monument was erected there to the Safed residents who lost their lives in the 1948 Palestine War.
Ancient history
In excavations conducted in the area of Giv'at HaMitzuda, evidence of a Canaanite settlement from the Bronze Age was discovered. Burial caves characteristic of the period were exposed, indicating that burial practices at the site spanned several centuries. Remnants were also found from the Iron Age, the period of the Israelites' settlement in the land according to the biblical narrative. While the city of Safed lies within the borders of the inheritance of the tribe of Naphtali, it is not mentioned in the Bible, at least not by that name.
First Jewish Roman War
In anticipation of the Great Revolt, the commander of the Galilee, Yosef ben Matityahu, decided to build a number of strong fortresses at strategically important locations in the Galilee. In his work The Jewish War, Yosef recounts that he fortified 'Sela Akbara, Safed, Yavne'el, and Meron'; Safed is likely the fortress in question. Yosef chose to build the fortress on the mountain adjacent to Safed, a peak rising 834 meters above sea level and overlooking its surroundings with steep slopes. In this way he aimed to protect the Jewish population from the Roman forces expected to attempt to conquer the Galilee first.
Crusader Period
Prior to the Crusader period, the tower was known as Burj Yatim and was described by the thirteenth-century Muslim historian Ibn Shaddad as standing above a "flourishing village." Although Ibn Shaddad ascribes the tower's creation to the Knights Templar, it was most likely built during the early Muslim period, before the order's foundation. The Frankish chronicler William of Tyre noted this tower, or burgus, in 1157, referring to it as 'Sephet' or 'Castrum Sephet.' A fortress was first constructed on the site in 1102 and expanded in 1140 under the orders of Fulk of Anjou, King of Jerusalem. In 1168 King Amalric I purchased the castle, reinforced its defenses, and transferred the stronghold to the care of the Knights Templar, who held it for two decades.
In 1187, Salah ad-Din advanced into the Galilee and annihilated the Crusader army in the Battle of Hattin. Nevertheless, the Templars continued to hold the fortress for an additional year and a half, until December 1188, when, after an extended siege and the retreat of the Crusaders, the fortress finally fell to Muslim forces. The fortress remained under Ayyubid control for approximately fifty years. Al-Mu'azzam Isa, emir of Damascus, ordered the castle destroyed during the 1218–1219 siege of Damietta to prevent it from falling into Crusader hands.
In 1240, the Crusaders returned to Safed under diplomatic agreements between Theobald I of Navarre and al-Salih Ismail, then emir of Damascus. They rebuilt and transformed the fortress into one of the largest Crusader citadels in the Middle East. The Marseille bishop Benoît d'Alignan, who wrote De constructione castri Saphet, a detailed report on the fortress and its siege in 1264, described a citadel sprawling over an area of about forty dunams, with a length of 252 meters and a width of 112 meters, surrounded by a double wall for comprehensive defense. The circumference of the inner wall reached 580 meters, with a height of 28 meters, while the outer wall had a circumference of approximately 850 meters and a height of 28 meters. Between the walls, a trench was excavated to a depth of 15–18 meters and a width of 13 meters (the outer trench currently crosses Jerusalem Street). Seven solid towers, 26 meters high, were built into the outer wall to safeguard the citadel. The fortress itself included numerous structures, such as walls with fixed arrow slits, a suspended gate tower, vaulted halls, numerous rooms, and wells.
The construction of the fortified citadel by the Templars lasted two and a half years and cost over a million gold coins.
The Templars made effective use of the fortress. The historian Abu al-Fida reports in his Compendium of Human History that eighty knights, servants, and fifteen commanders were stationed in the citadel, each commander with fifty men under his orders, as well as laborers. At the time of its capture, the fortress housed a thousand knights. Estimates suggest that around 1,700 people resided there in peacetime and approximately 2,200 during wartime.
Historian Shams al-Din al-Uthmani wrote in 1372: "The fortress of Safed was among the most robust of the Frankish fortifications and was the one most closely tied to the Muslims. The Templars resided in it, knights like real eagles, ready to launch raids on cities from Damascus to Daria," (likely Deiraya in the southern Hebron hills) "and its surroundings, and from Jerusalem to Karak," (east of the Jordan) "and its region."
From the Crusader period, impressive remnants have been uncovered: a hall with a Gothic vault, similar to those found at Montfort and other places; columns adorned with floral motifs, resembling those found in other Crusader fortresses in the Middle East; a straight wall made of hewn stones, 25 meters long and 2 meters wide, with arrow slits and loopholes; an octagonal wall, 6 meters long and 3.2 meters wide; wide stones with engraved figures of a gargoyle and a sundial; paved passages with drainage channels; a well plastered with lime to prevent water leakage, and more.
Mamluk period
In 1266, the army of the Mamluk Sultan Baybars besieged the fortress of Safed. On July 23, after six weeks of siege, the Mamluks successfully captured the fortress and slaughtered its defenders. The conquerors, who turned Safed into the capital of the Galilee district, feared the remaining Crusaders in Acre and also the Mongols who, at that time, had seized substantial parts of Asia. These concerns prompted the Mamluks to initiate an impressive reconstruction of the fortress.
The additions made by the Mamluks to the fortress of Safed are described in detail in al-Uthmani's book. According to him, two giant towers were added to the fortress; one was a round victory tower, resembling a chess piece, in the southern part of the fortress. Its dimensions were enormous by any standard: 60 meters in height and 35 meters in diameter. Inside the tower, a massive well was dug, supplying drinking water to thousands of soldiers in the fortress. The remnants of the tower and the well now lie beneath the monument at the summit of the fortress.
The second tower built by Baybars was a solid gate tower, measuring 15 by 20 meters, in the southwest of the fortress. Remnants of this tower also survive to this day.
The Arab geographer al-Dimashqi, who wrote his book "Selected Times and Wonders on Land and Sea" around the year 1300, also described the Mamluk fortress.
From the Mamluk period, numerous additional remnants have been discovered, shedding light on the extent of their investment in the fortress: utility buildings connected by a rectangular corridor; a paved access road to the gate tower, 24 meters in length and 7–8 meters in width; arches and archivolts; large square stones, one of which bears the carved image of a roaring lion, likely a symbol of Sultan Baybars; a circular well, 10.5 meters deep and 10 meters in diameter; many pottery artifacts, some locally made and some imported from various places; Mamluk and Venetian coins; and more. Facilities suitable for artisanal activity have also been discovered, dating to the late Mamluk or early Ottoman period, as well as ceramic artifacts from the sixteenth century.
It is assumed that significant Mamluk structures were destroyed in the earthquake that struck the region in 1303. The fortress remained abandoned for a long period, but in 1475, the Mamluk Sultan Qaitbay ordered its renovation and restoration.
For further reading
Denys Pringle, Safad, in Secular buildings in the Crusader Kingdom of Jerusalem: an archaeological Gazetteer, Cambridge University Press, (1997), pp. 91–92
See also
Safed
Crusader castles
Mamluk architecture
Gothic architecture
References
Bibliography
Safed
Crusader castles
Mamluk castles
Fortifications
Ancient sites in Israel | Citadel of Safed | [
"Engineering"
] | 1,980 | [
"Fortifications",
"Military engineering"
] |
74,600,482 | https://en.wikipedia.org/wiki/Insect%20pheromones | Insect pheromones are chemical messenger substances (semiochemicals) that serve communication between individuals of an insect species. They thus differ from kairomones, messenger substances that transmit information to organisms of other species. Insects produce pheromones in special glands and release them into the environment. At the pheromone receptors of the recipient's sensory cells, they produce a nerve stimulus even at very low concentrations, which ultimately leads to a behavioral response. Intraspecific communication via these substances takes place in a variety of ways and serves, among other things, to find sexual partners, to maintain harmony in colonies of social insects, to mark territories, and to locate nest sites and food sources.
In 1959, the German biochemist and Nobel Prize winner Adolf Butenandt identified and synthesized the unsaturated fatty alcohol bombycol, the sex pheromone of the domestic silk moth (Bombyx mori), as the first known insect pheromone. The sex pheromones of female butterflies are mostly mono- or bis-olefinic fatty acids or their esters, fatty alcohols, their esters or the corresponding aldehydes. Male butterflies use a wide range of chemicals as sex pheromones, for example pyrrolizidine alkaloids, terpenes and aromatic compounds such as benzaldehyde.
Research into the chemical communication of insects is expanding our understanding of how they locate food sources or egg-laying sites. For example, beekeepers use an artificially produced Nasanov pheromone containing terpenes such as geraniol and citral to attract bees to an unused hive. Agriculture and forestry use insect pheromones commercially for pest control, employing insect traps and mating disruption to prevent egg laying. Insect pheromones are also expected to contribute in this way to the control of insect-borne infectious diseases such as malaria, dengue fever and African trypanosomiasis.
Etymology and classification
Adolf Butenandt and Peter Karlson proposed the term pheromones in 1959 for substances that serve intraspecific communication. The definition of the term was given in the same year by Karlson and the Swiss zoologist Martin Lüscher: "Substances released externally by one individual that elicit specific responses in another individual of the same species."
– Peter Karlson, Martin Lüscher, 1959
The word pheromone is formed from the ancient Greek elements φέρειν phérein, 'to carry, to convey', and ὁρμᾶν hormān, 'to drive, to excite'. According to Karlson and Lüscher, the goal was to coin an internationally understandable scientific term for a class of substances based on a clear definition; it was to be a short word that could be spoken in many languages. The ending -mone served as a suffix, as it occurs in the words hormone, kairomone, and allomone, thus emphasizing their relationship. The term pheromone replaced the term ectohormone (or homoiohormone), which Albrecht Bethe had already proposed in 1932 with the same definition. Bethe's term was not accepted because, according to Butenandt, the terms ecto and hormone were mutually exclusive: the mechanism of action of a pheromone does not correspond to that of a hormone absorbed into the circulatory system of another individual, and the term was therefore considered misleading.
Intraspecific pheromones are classified within the group of semiochemicals, messenger substances that serve communication between organisms, alongside the interspecific allelochemicals (allomones, kairomones and synomones).
Karlson further divided insect pheromones, according to their mode of reception, into those acting via the sense of smell and those acting orally. In 1963, Edward O. Wilson, who had discovered ant trail pheromones the year before, and William H. Bossert introduced the concepts of releaser and primer pheromones. Releaser pheromones, which are usually perceived olfactorily, cause an immediately observable behavioral reaction, whereas primer pheromones, which often act orally, trigger physiological changes in the recipient. Primer pheromones, for example, suppress the formation of ovaries in worker bees.
Often, pheromones are defined according to their behavior-triggering function. In addition to the well-known sex attractants, they act, among other things, as aggregation pheromones, dispersion pheromones, alarm pheromones, tracking pheromones, marker pheromones, brood recognition pheromones, egg-laying pheromones, recruitment pheromones, or as caste recognition agents.
Vincent Dethier divided insect pheromones into six categories according to their general behavior-triggering effects. These include arrestants, which are normally only perceptible at short distances and cause an insect in motion to stop, and locomotor stimulants, which increase the insect's speed or decrease the number of directional changes. Attractants trigger an oriented movement toward the odor source, whereas repellents trigger an escape movement away from it. Feeding and oviposition stimulants trigger feeding or egg laying, respectively. Deterrents, on the other hand, inhibit feeding or oviposition.
Functionally defined insect pheromones often contain mixtures of different components in precisely defined proportions. These so-called pheromone cocktails often contain substances of different categories with near and far orientation functions. For example, the aggregation pheromone cocktail of the German cockroach Blattella germanica contains both substances that act as attractants and substances that act as arrestants.
In part, insect pheromones are named after the site of their biological production. Males of various moth species, such as the banana butterfly, possess so-called androconial organs in the abdomen that release pheromones; these insect pheromones are accordingly called androconial pheromones. The queens of the Western honey bee produce the queen bee pheromone in their mandibular glands, which is why it is often referred to in English as the queen mandibular gland pheromone.
History
First discoveries
In 1609, the English beekeeper Charles Butler observed that the sting of a bee released a liquid that attracted other bees, which then began to sting en masse. Butler thus demonstrated for the first time the effect of an alarm pheromone of bees, which was identified as isoamyl acetate in the 1960s.
As early as 1690, Sir John Ray suspected that peppered moths attracted male conspecifics by means of a scent: "It emerged out of a stick-shaped geometer caterpillar: it was a female and came out from its chrysalis shut up in my cage: the windows were open in the room or closet where it was kept, and two male moths flying round were caught by my wife who by a lucky chance came into the room in the night: they were attracted, as it seems to me, by the scent of the female and came in from outside."
"It developed from a stick-shaped birch moth caterpillar: it was a female, and came out of her chrysalis, which was shut up in my cage: the windows were open in the room or chamber where she was kept, and two male moths flying about were caught by my wife, who by a lucky chance was in the room that night: they were attracted, it seems to me, by the scent of the female, and came in from outside."
– John Ray
The French entomologist Jean-Henri Fabre also reported in the mid-19th century on experiments with Saturnia and oak eggar moths in which females trapped in wire cages attracted hundreds of males within a few days at specific times. In experiments with marked domestic silk moths, 40% of males found their way to a trapped female from a distance of four kilometers, and 26% from eleven kilometers.
In many insect species, researchers long puzzled over the mechanism of mate finding: neither visual nor acoustic stimuli could explain Fabre's experiments, nor how male moths located females ready to mate with such certainty. Theories of attraction by infrared or other radiation were not confirmed. The organization of eusociality remained equally inexplicable for a long time. The writer and bee researcher Maurice Maeterlinck speculated about the "spirit of the hive" without being able to determine its essence in more detail.
Definitions of Bethe
At the beginning of the 20th century, Ernest Starling discovered hormones as the first biological messenger substances. In 1932, the neurophysiologist Albrecht Bethe, who at that time headed the Institute of Animal Physiology at the Goethe University Frankfurt, published an article on an expanded hormone concept in which he distinguished between endohormones and ectohormones. According to this, endohormones act in the producing organism itself and correspond to the classical hormone definition. In contrast, the organism releases ectohormones externally and transfers them to other individuals. As an example, Bethe cited the effect of the lactation hormone, which is released by a fetus to its mother and causes the growth of the mammary gland and subsequently the secretion of milk. He also proposed this concept for chemical communication among insects: "In bees, for example, the workers (i.e. not the mothers) are able to raise a sexually capable queen from an egg or a young larva [...] by special food and transfer of secretions of their salivary glands. There can hardly be any doubt (although it has not been proven) that ectohormones of the salivary gland secretions play the main role in this redifferentiation."
– Albrecht Bethe[6]
Bethe further divided the ectohormones into homoiohormones, which – according to today's definition of a pheromone – act on individuals of the same species, and alloiohormones, which act on individuals of a different species. He thus coined the precursor of the term allelochemicals.
Works from Butenandt
Adolf Butenandt also suspected that communication among insects was based on chemical messengers, and in the 1940s he began a project to identify the sex attractant of the domestic silk moth (Bombyx mori), a butterfly originally native to China and belonging to the family Bombycidae, whose breeding and keeping were well understood from sericulture. Only after almost 20 years of work did he succeed in extracting and purifying, from more than 500,000 insects, a substance that he later named bombycol.
By elemental analysis, Butenandt determined the chemical formula of the substance to be C16H30O. Infrared spectroscopic studies indicated the presence of conjugated double bonds. Using methods common at the time, such as catalytic hydrogenation, melting point determination, and oxidative degradation by potassium permanganate, Butenandt showed that the substance sought was an unsaturated fatty alcohol, (10E,12Z)-hexadeca-10,12-dien-1-ol.
Butenandt then synthesized bombycol from vernolic acid [(12R,13S)-epoxy-9-cis-octadecenoic acid] in several steps via diol formation, its cleavage to the aldehyde, double-bond isomerization, and a Wittig reaction. He synthesized the four possible stereoisomers and tested them for their biological activity; only one isomer showed the same activity as the extract. Butenandt thus provided evidence that communication among insects has a chemical basis: "By extraction and condensation experiments, however, it has been convincingly shown that a material principle must be present which is secreted by the female butterflies from scent organs of the last abdominal segments and perceived by the males with their antennae."
– Adolf Butenandt
Primer and releaser pheromones
Towards the end of the 1950s, Edward O. Wilson defined substances that trigger the alarm and digging behavior of ants as chemical releasers. In 1961, the British biochemist Robert Kenneth Callow identified another pheromone, the queen bee pheromone, as the compound (E)-9-oxodec-2-enoic acid, or 9-ODA for short. The effect of this pheromone was obviously of a different nature than that of the alarm pheromones, since it acted long-term on the physiology of the recipients.
In 1963, Wilson, who the year before had discovered the trail pheromones of ants, and William H. Bossert introduced the terms releaser and primer pheromones to distinguish the behavior-controlling effect of, for example, sex attractants from that of pheromones which act on the endocrine system of the receiver.
Modern research directions
As extraction methods and analytical chemistry were enormously refined over the years, chemists and biologists identified numerous other pheromones. By 1978, the detection of the second component of the pheromone cocktail of Bombyx mori, the aldehyde bombykal [(10E,12Z)-hexadecadienal], required only an extract of 460 glands, from which 15 nanograms of the aldehyde were isolated.
In addition to studying the function and reception of pheromones and their chemical identification, scientists extensively investigated the biochemistry of pheromone production. In 1984, Ashok Raina and Jerome Klun discovered that the production of the female sex attractant of the owlet moths Helicoverpa zea is controlled by hormonal substances called pheromone biosynthesis-activating neuropeptides (PBANs) in the brains of female moths. Other modern research focuses on the study of insect pheromone reception by means of the sense of smell and taste, genetic factors, and evolutionary biology issues, such as the coevolution of female sex pheromone production and reception in the male.
Combating disease vectors such as Anopheles mosquitoes is another focus of research. According to World Health Organization estimates, there were about 207 million malaria infections in 2012, with 627,000 deaths. Culex mosquitoes transmit the causative agents of filariasis and the West Nile virus. Traps equipped with oviposition pheromones offer one way to contain these populations. To optimize them, the odorant-binding proteins in the antennae of females, which play a critical role in detecting oviposition sites, are being intensively studied.
Production
Insect pheromones are often derivatives of fatty acids, such as saturated and unsaturated hydrocarbons, fatty alcohols, esters and aldehydes, but also isoprenoids and other compounds. Pheromones are often not pure substances but so-called pheromone cocktails consisting of various components. Often only one specific enantiomer of a compound triggers a behavioral reaction, while the other enantiomer triggers no reaction or a different one.
Sometimes the biosynthesis of the pheromone occurs only when the biochemical precursors in the form of certain alkaloids have been ingested from food plants. In this case, the sex pheromone simultaneously signals the presence of food sources.
Due to the potential commercial application in crop protection, the intensity of the study of pheromones increased greatly after Butenandt's discovery, leading to the development of highly sensitive analytical methods and the widespread application of chemo-, regio-, and stereoselective syntheses in organic chemistry.
Biosynthesis
Insect pheromones are produced by a variety of exocrine glands, consisting primarily of modified epidermal cells, at various sites on the insect body. For example, the abdominal glands of the female silk moth release, in addition to the sex pheromone bombycol, traces of its (E,E)-isomer as well as the analogous (E,Z)-aldehyde bombykal. Suitable surface geometries in the vicinity of the glands, such as grooved pore plates, may favor effective evaporation of the released pheromone. Honey bees possess 15 glands with which they produce and release a number of different substances, thus maintaining a complex pheromone-based communication system. Males of various butterfly species possess so-called androconial organs in the abdomen with which they disseminate pheromones, while other moths release them via scent scales or scent brushes on their forewings or at the end of the abdomen. The scent brushes and scent scales serve to increase the surface area and facilitate the evaporation of the pheromones.
Rather than evolving a completely unique set of enzymes for pheromone biosynthesis, insects often modify normal metabolites into pheromones with high regio-, chemo-, (E/Z)-, diastereo-, or enantioselectivity and in well-defined ratios. Biosynthesis of insect pheromones occurs either de novo following the scheme of fatty acid synthesis, by successive addition of malonyl-CoA to an initial acetyl unit, or by uptake of precursors from the diet. Many butterflies exploit this biosynthetic machinery to produce a specific mixture of derivatives of simple fatty acids. The evolution of the enzyme Δ11-desaturase, combined with chain-shortening reactions, allows them to produce a variety of unsaturated acetates, aldehydes, and alcohols in various combinations.
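The combinatorial character of this scheme (a chain length from chain shortening, a double-bond position and geometry from a desaturase, and a terminal functional group) can be sketched in a few lines. The numbers and the shorthand notation below (e.g. "Z11-16:OAc" for a (Z)-11-hexadecenyl acetate) are illustrative assumptions, not a catalog of real pheromones of any species.

```python
from itertools import product

# Hypothetical building blocks of a moth-type pheromone component:
chain_lengths = [12, 14, 16]          # carbons remaining after chain shortening
double_bonds = ["Z9", "Z11", "E11"]   # double-bond position and geometry (examples)
groups = ["ol", "al", "OAc"]          # alcohol, aldehyde, acetate

# Each combination of the three choices yields a distinct candidate component.
components = [f"{db}-{n}:{g}" for n, db, g in product(chain_lengths, double_bonds, groups)]
print(len(components))   # 3 * 3 * 3 = 27 combinations
print(components[0])     # 'Z9-12:ol'
```

Even three options per step already give 27 distinct products, which illustrates how a small enzyme repertoire can generate the species-specific variety of blends described above.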
Dehydrogenation of the carbon chain and reduction of the acid function to the alcohol occur, if necessary, through special enzyme systems. Further steps can be oxidation to the aldehyde or acetylation to the acetate. In Bombyx mori, pheromone biosynthesis is activated diurnally by a neurohormone, the so-called pheromone biosynthesis-activating neuropeptide (PBAN).
The hormonal mechanisms of pheromone production vary considerably from species to species. Juvenile hormones, for example, control pheromone production in the owlet moth Mythimna unipuncta. These are produced in the mostly paired corpora allata located behind the brain and released into the hemolymph, where they bind to certain transport proteins. If the corpora allata are removed, the females do not produce pheromones; juvenile hormones thus interfere rather indirectly with the circadian release of PBAN.
Male butterflies of the Danainae family sometimes wound caterpillars that have ingested alkaloids from milkweeds with tiny claws on their feet to ingest the exuding fluid, a behavior described as kleptopharmacophagy. The moths use the ingested alkaloids for defense against predators and to produce sex pheromones.
Pheromones from plant ingredients
Male fire-coloured beetles of the species Neopyrochroa flabellata and also various other beetle species use the terpenoid cantharidin as a sex pheromone and aphrodisiac pheromone, respectively. This isoprenoid is ingested by Neopyrochroa flabellata with food and transferred to females and subsequently to the brood during the mating act. Females test the content of a gland on the head of the male before mating. The cantharidin acts as a feeding toxin and renders the eggs unpalatable to predators; females therefore prefer males with a high cantharidin content.[47]
Moths such as Utetheisa ornatrix and Tirumala limniace ingest pyrrolizidine alkaloids as caterpillars from food plants such as Crotalaria, Heliotropium, or Achillea ageratum; the adult male converts them by oxidation into pheromones such as hydroxydanaidal. As in the fire-coloured beetle, the alkaloids, which are potent feeding poisons acting against predators such as spiders, ants, or lacewings, are transferred to females and eggs. Adult monarch butterflies likewise ingest plant secondary metabolites to increase their pheromonal attractiveness.
The uptake of pheromone precursors from plants is also known in certain species of orchid bees and peacock flies. Male orchid bees collect a mixture of terpenoids from orchids and use it as an aggregation pheromone to form leks for mating. Sometimes the plant constituents control the development of the pheromone glands of male butterflies.
Laboratory synthesis
Karl Ziegler and Günther Otto Schenck succeeded in synthesizing cantharidin as early as 1941. The preparation of pheromones requires highly chemo-, regio-, and stereoselective syntheses. In the 1970s, asymmetric synthesis using the SAMP method made it possible to prepare various pheromones in enantiomerically pure form. Chemists have also used asymmetric epoxidation, asymmetric dihydroxylation, biocatalysis, olefin metathesis, and many other stereoselective reactions to synthesize pheromones. The Wittig reaction is well suited to the synthesis of pheromones with (Z)-olefinic double bonds.
Genetically modified tobacco plants can also produce sex pheromone precursors. The fatty alcohols obtained from them by extraction are subsequently acetylated to give the respective target sex pheromones. This semisynthetic route yields insect pheromones in relatively large quantities and with high purity.
Properties
Chemical communication between living beings by means of pheromones follows the same principles as technical data transmission. A transmitter, for example the gland of a female insect, emits the signal in the form of a chemical substance. Both the chemical structure of the molecules and their quantitative ratio determine the information content and form a species-specific code. The physical properties of the substances, such as vapor pressure, determine whether the molecules function as short- or long-range signals.
The insect pheromone is transmitted by direct contact or via a medium such as water or air. The receiver, for example the pheromone receptors in the antenna of a male insect, takes up the substance, which then triggers a behavioral response. (The term antenna originally referred to the feelers of insects and was only later adopted in engineering.) Insect pheromones are highly species-specific: they elicit the intended behavioral response only in conspecifics, not in individuals of other species. Although the chemical compounds that act as sex pheromones in butterflies may be the same in different species, the composition of the pheromone cocktail differs between species. In addition, pheromone cocktails often contain substances that act as behavioral inhibitors for other species, for example by significantly reducing the rate at which males of foreign species approach a calling female.
Physico-chemical properties
The pheromones are usually produced as liquids and are either transmitted by direct contact or released into the environment as liquid or vapor. They can be of high or low volatility, and their diffusivity significantly affects their function. Alarm pheromones are often highly volatile so that they spread quickly by diffusion; they are therefore typically short-chain substances with relatively high vapor pressure and low structural complexity, since there is no strong requirement for species-specific coding as with sex pheromones. Sex pheromones are more complex than most alarm pheromones but have a lower molar mass than marking pheromones, which indicate an area over long periods.
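The link between diffusivity and signaling range can be illustrated with a rough estimate. In still air, the root-mean-square displacement of a diffusing molecule along one axis grows as the square root of 2Dt; the diffusion coefficients below are assumed order-of-magnitude values for a light, volatile alarm substance and a heavier marking substance, not measured data.

```python
import math

def rms_displacement_cm(D_cm2_per_s, t_s):
    """Root-mean-square displacement along one axis after time t
    for a molecule diffusing in still air with coefficient D."""
    return math.sqrt(2.0 * D_cm2_per_s * t_s)

# Assumed, order-of-magnitude diffusion coefficients in air:
alarm = 0.08    # cm^2/s, small volatile molecule (hypothetical value)
marker = 0.02   # cm^2/s, heavier, less volatile compound (hypothetical value)

for name, D in [("alarm pheromone", alarm), ("marking pheromone", marker)]:
    for t in (1, 60, 3600):
        print(f"{name}: after {t:>5} s spread is about {rms_displacement_cm(D, t):6.1f} cm")
```

Even for the volatile substance, pure diffusion covers only centimeters to meters; in the field, wind transport dominates over longer distances, which is one reason long-range sex pheromones are tracked in wind-borne plumes rather than by diffusion alone.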
In flying insects, such as butterflies and moths, the pheromone molecule must not be too large, otherwise its vapor pressure and volatility are too low. Accordingly, over 200 identified sex pheromones of lepidopteran species are mono- and bis-olefinic fatty aldehydes, fatty alcohols, and their acetates with chains of 10 to 18 carbon atoms.
Depending on the function, there are different emission and reception scenarios. Ants, for example, emit alarm pheromones intermittently or continuously in the usually windless environment of the anthill. Trace pheromones are emitted by an ant as a moving source. Silkmoth sex pheromones are emitted in discrete scent threads in an air stream.
Male monarch butterflies do not emit volatile pheromones but pheromone-laden nanoparticles, called pheromone transfer particles, with which they transfer arrestant or aphrodisiac pheromones to females. The males position these particles on their brush hairs and scatter them during the courtship flight. The nanoparticles adhere to the pheromone-receptor-bearing antennae of the females, where they slowly release the pheromones, producing a long-lasting stimulus.
Females of the tiger moth Pyrrharctia isabella (Arctiinae) emit an aerosol consisting exclusively of sex pheromone droplets. The amount of pheromone released is much greater than in other known female moths. This apparent wastefulness is explained by the short time an adult has to find a mate during the brief Arctic spring.
Recipients usually perceive pheromones in an environment characterized by the presence of many other chemicals. To ensure specific perception, the pheromone chemical must either be so complex that it does not occur more than once in nature, or the correct ratio of several individual components must trigger the stimulus. However, it has been shown that only in exceptional cases does a single substance convey the message. Often, a mixture of substances must be present in very precise proportions that, in addition to the chemical structure of the individual pheromones, determine the informational content of the pheromone cocktail.
The chemical structure of pheromones is directly related to their signaling function and signaling environment. Pheromones released into the air often have a carbon chain of 5 to 20 atoms and a molar mass of about 80 to 300 g·mol−1. With a carbon chain of fewer than five carbon atoms, the number of possible isomers is small and targeted species-specific coding is difficult; with longer carbon chains, the number of possible isomers increases rapidly.
Periplanon B, the sex pheromone of the American cockroach, is an example of a complex single substance to which males respond at extremely low levels of 10−5 nanograms.
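That threshold can be translated into a molecule count with the Avogadro constant. The molecular formula C15H20O3 (molar mass about 248.3 g/mol) used below is the commonly cited one for periplanone B and is an assumption here, not stated in the text.

```python
AVOGADRO = 6.02214076e23   # molecules per mole

def molecules_from_mass(mass_g, molar_mass_g_per_mol):
    """Number of molecules contained in a given mass of a pure compound."""
    return mass_g / molar_mass_g_per_mol * AVOGADRO

molar_mass_periplanone_b = 248.3   # g/mol, assuming the formula C15H20O3
threshold_mass_g = 1e-5 * 1e-9     # 10^-5 ng expressed in grams

n = molecules_from_mass(threshold_mass_g, molar_mass_periplanone_b)
print(f"{n:.2e} molecules")        # on the order of 10^7 molecules
```

So even this "extremely low" stimulus amount still contains tens of millions of molecules, which is consistent with single-molecule receptor sensitivity described below.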
Biological properties
The sex pheromone cocktail emitted by a female insect spreads downwind. In the recipient male, the molecules strike the antennae, where reception of the pheromones takes place by means of olfactory cells on the olfactory hairs or sensilla. The antennae adsorb about 30% of the pheromone molecules contained in an airstream. The remaining molecules hit the outer exoskeleton where they are enzymatically degraded.
The pheromone molecules first reach the cuticle of the olfactory hairs and diffuse through pores into a pore cone and from there into tubules. From there, the molecules diffuse further to the dendritic membrane. This membrane carries receptors that, upon binding a pheromone, open ion channels and thereby change the membrane potential, generating a receptor potential that leads to perception. Even a single pheromone molecule can trigger a nerve impulse.[59] However, the recognition of a specific pheromone cocktail requires a certain level of excitation of different cell types of varying specificity. The characteristic excitations arriving from the different receptors are thought to be combined in the central nervous system into an excitation pattern. If this pattern, which depends on the quantitative ratio of the received pheromone molecules, matches the coding of an innate behavioral program, a corresponding behavioral response is triggered, such as upwind flight toward the pheromone source.
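The matching of a received component ratio against an innate template, as described above, can be sketched as a simple tolerance check. All component names, template values, and the tolerance below are hypothetical illustrations, not data for any real species.

```python
def normalize(responses):
    """Scale receptor responses so they sum to 1 (a ratio pattern)."""
    total = sum(responses.values())
    return {k: v / total for k, v in responses.items()}

def matches_template(responses, template, tolerance=0.1):
    """True if every component's relative share lies within `tolerance`
    of the innate template pattern."""
    pattern = normalize(responses)
    return all(abs(pattern[k] - template[k]) <= tolerance for k in template)

# Hypothetical innate template: a 90:10 blend of two components
template = {"comp_A": 0.9, "comp_B": 0.1}

own_species   = {"comp_A": 45.0, "comp_B": 5.0}    # 90:10 ratio
other_species = {"comp_A": 25.0, "comp_B": 25.0}   # 50:50 ratio

print(matches_template(own_species, template))     # True  -> upwind flight
print(matches_template(other_species, template))   # False -> no response
```

The point of the sketch is that absolute amounts drop out after normalization; only the ratio of components decides whether the innate behavioral program fires.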
Pheromone species
According to their effect, two classes of pheromones are distinguished: primer and releaser pheromones. Under certain conditions, a pheromone can act as both.
Releaser pheromones
Releaser pheromones have a brief, immediate behavior-controlling effect. The first pheromone discovered, bombykol, is an example. In addition to the well-known sex pheromones, releaser pheromones typically include aggregation pheromones, dispersion pheromones, alarm pheromones, trail pheromones, and marking pheromones.
Aggregation pheromones
Aggregation pheromones are produced by both sexes and serve the sex-nonspecific attraction of individuals of the same species. They are known, for example, in bark beetles and other beetle species, dipterans, hemipterans, and grasshoppers. Insects use aggregation pheromones for defense against predators, in mate choice, and to overcome host plant resistance by mass attack. A group of individuals at a site is referred to as an aggregation, regardless of sex. Aggregation pheromones, in addition to sex attractants, play a significant role in the development of pheromone traps for selective pest control.
Studies using electroantennogram techniques showed that aggregation pheromones elicited relatively high receptor potentials in a sex-nonspecific manner, whereas sex pheromones elicited high receptor potentials in only one sex. Aggregation pheromones may therefore be evolutionary precursors of sex pheromones.
Sex pheromones
Sex pheromones signal the female animal's readiness to mate. Male animals also emit pheromones; these carry information about sex and genotype. Many insects release sex pheromones; some moth species can perceive a pheromone at a distance of up to 10 kilometers. In the male silkmoth, the sensory cell response begins at a concentration of about 1000 molecules per cubic centimeter of air. Once a certain concentration threshold is exceeded, the scent signal of a female initially triggers an oriented upwind flight in the silkmoth male. In other species, such as the codling moth, the male instead tests the stereochemical purity of the attractant molecule: as soon as a small admixture of another stereoisomer is present in the pheromone cocktail, the approach to the source stops, the other stereoisomer acting as a repellent. In addition to the main components, some species release so-called short-range components in small amounts, which influence the behavioral response at close range.
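A number density of about 1000 molecules per cubic centimeter corresponds to an extraordinarily dilute concentration in chemical terms. The short conversion below uses only the Avogadro constant.

```python
AVOGADRO = 6.02214076e23   # molecules per mole

molecules_per_cm3 = 1000.0
mol_per_cm3 = molecules_per_cm3 / AVOGADRO
mol_per_liter = mol_per_cm3 * 1000.0   # 1 L = 1000 cm^3

print(f"{mol_per_liter:.2e} mol/L")    # roughly 1.7e-18 mol/L, i.e. attomolar
```

A detection threshold in the attomolar range is far beyond what routine analytical chemistry achieves, which underlines the sensitivity of the silkmoth antenna.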
Foraging honey bees spread the scent of (Z)-11-eicosen-1-ol. European beewolves are guided by this scent when preying on honey bees. Male beewolves use this compound, and thus the females' existing sensory preference for bee scent, as part of their sex pheromone cocktail to attract females.
Aphrodisiac pheromones
Aphrodisiac pheromones stimulate the readiness to mate. The spiroacetal olean, for example, is the aphrodisiac pheromone of the olive fruit fly (Bactrocera oleae). Only the (R)-enantiomer is effective on males; the (S)-enantiomer has no effect on them. The female produces the racemate, responds to both (R)- and (S)-olean, and thereby also stimulates herself.
So-called anti-aphrodisiacs have exactly the opposite effect. Bedbug nymphs protect themselves against mating attempts by male bedbugs with such a pheromone, a specific mixture of the aldehydes (E)-2-hexenal, (E)-2-octenal, and 4-oxo-(E)-2-hexenal. Male bedbugs pierce the abdomen of sexually mature females directly and inject their sperm there (traumatic insemination); for nymphs, such an injury can be fatal.
Alarm pheromones
Some insect species emit alarm pheromones when attacked, triggering either flight or increased aggression. In honey bees, for example, two alarm pheromone mixtures are known. One is released by the Koshevnikov gland near the sting and contains more than 40 different compounds, among them isoamyl acetate, whose effect Butler had already described, as well as butyl acetate, 1-hexanol, 1-butanol, 1-octanol, hexyl acetate, octyl acetate, and 2-nonanol. These components have low molar masses, are volatile, and are the least specific of all pheromones. The alarm pheromone is released when a bee stings another animal, attracting other bees and inciting them to attack. Smoke suppresses the effect of alarm pheromones, which beekeepers exploit.
The other alarm pheromone of the honey bee contains mainly 2-heptanone, another volatile substance, released by the mandibular glands. This component has a repellent effect on predatory insects. The alarm pheromone cocktail of the bedbug contains unsaturated hexenals and octenals, which are perceived as the characteristic sweetish odor of bedbug-infested rooms.
Marking and dispersion pheromones
Certain insects, such as the cherry fruit fly, mark their oviposition sites in such a way that other females of the same species avoid the site and lay their eggs elsewhere to avoid competition for food among the offspring. Territorial social insects, such as colonies of ants, also mark territories they claim with pheromones.
The marking pheromones include the dispersion pheromones, with which, for example, bark beetles prevent overcolonization of a tree. The females and nymphs of the German cockroach transmit dispersion pheromones in direct contact via their saliva. In the nymphal stage, these serve to deter adult cockroaches and thus protect against cannibalism; in adults, they prevent overcrowding of a habitat.
Trail pheromones
Trail pheromones are known mainly from insects living in colonies, which mark their paths with substances of low volatility such as higher-molecular-weight hydrocarbons. Ants in particular often mark the path from a food source to the nest in this way. As long as the food source exists, the trail is renewed; when it dries up, the ants spray the trail pheromone over with a repellent pheromone. In 1921, the U.S. naturalist Charles William Beebe described the ant mill, a phenomenon that trail pheromones can trigger in army ants: if the animals are separated from the main trail of the colony, the blind ants follow the pheromone trails of the ants running in front of them, circling until complete exhaustion or death without finding their way back to the colony.
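The renew-or-decay dynamic of such a trail can be captured in a two-parameter update rule: in each time step a fraction of the pheromone evaporates, and a fresh deposit is added only while ants still travel to the food source. The evaporation rate and deposit amount below are arbitrary illustrative values, not measurements.

```python
def trail_strength(steps, evaporation=0.05, deposit=1.0, renewed_until=50):
    """Per-step trail update: evaporate a fraction, then add a deposit
    while ants still renew the trail (i.e. the food source exists)."""
    s, history = 0.0, []
    for t in range(steps):
        s *= (1.0 - evaporation)   # evaporation happens every step
        if t < renewed_until:      # deposits only while food lasts
            s += deposit
        history.append(s)
    return history

h = trail_strength(150)
print(f"strength while renewed: {h[49]:.1f}")          # near deposit/evaporation = 20
print(f"100 steps after food dries up: {h[149]:.2f}")  # decayed close to zero
```

While renewed, the trail approaches a steady state of deposit divided by evaporation rate; once deposits stop, the remaining signal decays geometrically, so abandoned trails fade on their own even without the repellent overspray mentioned above.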
Recruitment pheromones
Recruitment pheromones are widely used as an element of chemical communication in social insects and have been demonstrated for bees, termites, and ants. These pheromones are used by insects to stimulate other members of the colony to forage at a food source. Bumblebees perform a dance similar to the waggle dance primarily to distribute recruitment pheromones.
Primer pheromones
The order Hymenoptera contains the largest group of eusocial insects, including many bees (especially of the subfamily Apinae), ants, and some wasps (Vespidae), especially of the subfamily Vespinae. Characteristic features often include a reproductive queen and castes of specialized workers and soldiers. Termites form the second major group of eusocial insects; their colonies are divided into distinct castes, with a queen and king as reproductive individuals, workers, and soldiers that defend the colony. Primer pheromones have a major influence on the organization of hymenopteran and termite colonies. These pheromones act on the hormonal system of the recipient; they often intervene in metabolism via a signaling cascade or activate proteins that can bind to DNA. In contrast to releaser pheromones, primer pheromones are less well studied; for a long time, only a single primer pheromone, 9-ODA, was known.
Primer pheromones of bees
A well-known example of primer pheromones are the queen bee pheromones, which control social behavior, comb maintenance, swarming, and ovary development in worker bees. The components are carboxylic acids and aromatic compounds. (E)-9-oxodec-2-enoic acid (9-ODA), for example, suppresses the rearing of further queens and inhibits ovary development in worker bees. It is also a potent sex pheromone for drones on the nuptial flight.
Brood recognition pheromones are emitted by larvae and pupae and discourage worker bees from leaving the hive while there are still offspring to be cared for. They also suppress the formation of ovaries in worker bees. The pheromones consist of a mixture of ten fatty acid esters, including glyceryl 1,2-dioleate-3-palmitate. Worker pupae contain 2 to 5, drone pupae about 10, and queen pupae about 30 micrograms of the pheromone.
Older, foraging worker bees release ethyl oleate, which inhibits the development of nurse bees and makes them care for brood longer. The oleic acid ester acts as a primer pheromone, stabilizing the ratio of brood-caring to foraging bees. The foragers produce it from nectar mixed with traces of ethanol and feed it to the nurse bees; this delays their development until the number of older foragers, and with it the nurse bees' exposure to ethyl oleate, decreases.
Caste determinant pheromones
The termite Reticulitermes flavipes uses terpenes such as γ-cadinene and γ-cadinenal as caste-stimulating or caste-inhibiting primer pheromones. These assist the juvenile hormone in determining the position of totipotent workers in the caste system. In ants, female larvae remain bipotent for some time and can develop into either queens or workers. At some point in larval development, continued feeding decides the fate of the larva: if the juvenile hormone titer rises above a certain threshold, queens develop; otherwise, workers. Feeding of the larvae is in turn controlled by a primer pheromone of the queen.
Application
In the 19th century, the gypsy moth Lymantria dispar escaped from the entomologist Étienne Léopold Trouvelot in Massachusetts and by the mid-20th century had spread throughout the United States, becoming one of the most feared pests. As early as 1898, Edward Forbush and Charles Fernald attempted to control the gypsy moth population by luring males into traps baited with live females. The United States Department of Agriculture continued these attempts in the 1930s, using extracts of female abdominal tips to attract male moths. Especially since the first syntheses, the application of insect pheromones in pest control has been studied intensively, with the aim of developing environmentally friendly methods of controlling population dynamics.
In pest control, the use of pheromones in attractant traps is common practice. The insects are either attracted and killed with an insecticide, trapped physically, or counted for monitoring. Bark beetles, for example, are attracted with aggregation pheromones and trapped; the attractant is normally released when a beetle bores into spruce wood, signaling that the tree can be colonized, and the bark beetle trap has become an important control tool. However, attractant traps pose the problem that the pheromone may act as a kairomone and thus also attract predatory insects; by reducing the population of the bark beetle's natural predators, the pheromone trap then has a counterproductive effect. Monitoring with attractant traps, such as window traps, is used for the quantitative detection of pests so that they can be controlled more specifically with insecticides depending on the activity detected. In addition, such traps are used in the identification of new species.
Trap trees work on the same principle as attractant traps: the bark beetles of the initial infestation attract further conspecifics with aggregation pheromones. Windthrown trees, which can be equipped with pheromone dispensers to enhance the attraction, are well suited as trap trees. Trees prepared in this way divert approaching bark beetles from the stand and bind them to controllable stems. The use of trap trees requires regular inspection: when larval galleries appear, the trees are debarked so that larvae and pupae dry out; if necessary, the infested tree is treated with insecticides or burned to prevent the escape of the next generation.
Oviposition-deterring marking pheromones are widespread in the insect world, and various experiments have demonstrated that population dynamics can be controlled with them. Application of the oviposition-deterring marking pheromone of the cherry fruit fly Rhagoletis cerasi, which cannot be controlled with yellow sticky boards, for example, reduced infestation of cherries by 90%.
Another application is the confusion method, or mating disruption, in which a high concentration of artificially produced pheromone is applied. This makes it impossible for the males to follow the pheromone plumes of the females, hindering the reproduction of the pest. The confusion method acts species-specifically; it is usually successful against the target species if enough dispensers are applied, but in some cases related species occupy the ecological niche that is freed.
Bees use the Nasonov pheromone, which contains terpenes such as geraniol and citral, to guide worker bees back to the hive. Beekeepers use a synthetic version to attract bees to an unused hive; the method is also suitable for trapping Africanized honey bees in trap boxes.
Toxicology
Toxicological studies have been carried out mainly in connection with the approval of pheromone traps and dispensers. A health hazard cannot be assessed in general terms because of the great chemical diversity of pheromones, but it is usually ruled out because only small amounts are emitted. In higher doses, however, orally ingested substances such as cantharidin can in rare cases cause death.
Detection and identification
Commercial application in crop protection intensified the study of pheromones and led to the development of highly sensitive analytical methods. The identification of a pheromone proceeds in several steps. First, an extract of the pheromone is obtained. This is done by the method already used by Butenandt: extracting glands or whole animals with a readily volatile solvent, ideally at the time of peak pheromone production. Alternatively, the pheromone is adsorbed from the gas phase onto activated carbon and an extract is obtained with a small amount of solvent. For very small traces, solid-phase microextraction is suitable. For identification, the extracts or solid-phase microextraction samples are analyzed by gas chromatography-mass spectrometry.
The electroantennogram technique is suitable for studying the biological activity of insect pheromones. An electrode inserted into the main stem of the antenna and another into an antennal branch measure the change in electrical voltage as a function of the concentration of pheromone molecules impinging on the antenna, which are carried to it in a defined manner by an air stream.[58] By varying the pheromone molecule, the influence of particular functional groups interacting with the chiral elements of the receptors can be determined.
The coupling of gas chromatography and electroantennogram allows the biological activity of the compounds present in an extract to be verified. The shape of the electroantennogram depends on the fragrance component in the air stream, and the amplitude increases with the concentration and flow velocity of the air.
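The saturating growth of the EAG amplitude with concentration is often summarized by a Hill-type dose-response curve. The sketch below uses made-up parameters to show the qualitative behavior, monotonic growth toward a maximum amplitude; it is not a fit to real measurements.

```python
def eag_amplitude(conc, a_max=2.0, k=1e-3, n=1.0):
    """Hill-type dose-response: EAG amplitude (arbitrary mV scale) as a
    function of pheromone concentration (arbitrary units). All parameter
    values are illustrative, not measured."""
    return a_max * conc**n / (conc**n + k**n)

for c in (1e-5, 1e-4, 1e-3, 1e-2, 1e-1):
    print(f"conc {c:.0e} -> amplitude {eag_amplitude(c):.3f} mV")
```

The amplitude rises steeply near the half-saturation concentration k and flattens toward a_max, reproducing the observation that the EAG response grows with concentration but eventually saturates.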
References
Bibliography
Edward O. Wilson, W. H. Bossert (1963): Chemical communication among animals. In: Recent Progress in Hormone Research, vol. 19, pp. 673–716.
Hans Jürgen Bestmann, Otto Vostrowsky (1993): Chemische Informationssysteme der Natur: Insektenpheromone. In: Chemie in unserer Zeit, vol. 27, no. 3, pp. 127–133.
Stefan Schulz: The Chemistry of Pheromones and Other Semiochemicals II. Springer, 2005.
R. T. Cardé, A. K. Minks: Insect Pheromone Research: New Directions. Springer, 1997.
External links
Insect pheromone database, pherobase.com. Retrieved November 15, 2013.
Insects
Pheromones
Neurotransmitters
Fruit production and deforestation

Fruit production is a major driver of deforestation around the world. In tropical countries, forests are often cleared to plant fruit trees, such as bananas, pineapples, and mangoes. This deforestation has a number of negative environmental impacts, including biodiversity loss, ecosystem disruption, and land degradation.
Background
The deforestation of tropical forests for fruit production has a number of negative environmental impacts. First, it is leading to the loss of biodiversity. Tropical forests are home to a wide variety of plant and animal species, many of which are found nowhere else in the world. When forests are cleared, these species are often displaced or killed.
Second, the deforestation of tropical forests is disrupting ecosystems. Forests play a vital role in regulating the environment. They help to absorb rainwater, prevent flooding, and mitigate climate change. When forests are cleared, these important ecosystem services are disrupted.
Third, the deforestation of tropical forests is leading to land degradation. When forests are cleared, the soil is often left exposed to erosion. This can lead to land degradation, which can make it difficult to grow crops.
The deforestation of tropical forests for fruit production is a serious problem with a number of negative environmental impacts. It is important to find ways to produce fruits without further destroying forests. Some possible solutions include:
Promoting sustainable fruit production practices: Sustainable fruit production practices can help to reduce the environmental impact of fruit production. These practices include using shade-tolerant crops, planting trees on farms, and using integrated pest management.
Conserving forests: It is important to conserve forests so that they can continue to provide important ecosystem services. This can be done by creating protected areas, enforcing laws against illegal logging, and supporting sustainable forest management practices.
Changing consumer behavior: Consumers can play a role in reducing the demand for fruits that are produced at the expense of forests. They can do this by choosing fruits that are produced in a sustainable way and by supporting organizations that are working to conserve forests.
Fruit production
In tropical countries, forests are often cleared to plant fruit trees such as bananas, pineapples, and mangoes. The resulting deforestation has several negative environmental impacts:
Biodiversity loss: Forests are home to a wide variety of plant and animal species, many of which are found nowhere else in the world. Deforestation is threatening these species with extinction.
Ecosystem disruption: Forests play a vital role in regulating the environment. They help to absorb rainwater, prevent flooding, and mitigate climate change. Deforestation can disrupt these important ecosystem services.
Land degradation: When forests are cleared, the soil is often left exposed to erosion. This can lead to land degradation, which can make it difficult to grow crops.
Deforestation
Deforestation is the clearing of forests to make way for other land uses, such as agriculture, logging, and mining. It is a major problem around the world, and it is having a number of negative environmental impacts.
Agricultural expansion: Agricultural expansion is the process of increasing the amount of land that is used for agriculture. This can be done by clearing forests, converting grasslands to cropland, or draining wetlands. Agricultural expansion is often driven by the need to produce more food to meet the demands of a growing population. Agricultural expansion can have a number of negative environmental impacts, including:
Deforestation: Clearing forests for agriculture releases carbon dioxide into the atmosphere, contributes to climate change, and destroys wildlife habitat.
Habitat loss: Agricultural expansion can lead to the loss of habitat for plants and animals. This can have a negative impact on biodiversity.
Water pollution: Agricultural runoff can pollute rivers and lakes with fertilizers and pesticides. This can harm aquatic life and make water unsafe for drinking and swimming.
Soil erosion: When land is cleared for agriculture, it can be easily eroded by wind and water. This can lead to the loss of topsoil and the degradation of land.
Desertification: Agricultural expansion can contribute to desertification, which is the process of land becoming dry and barren. This can be caused by a number of factors, including climate change, overgrazing, and deforestation.

Agricultural expansion can also have a number of social impacts, including:
Land conflicts: Agricultural expansion can lead to conflicts between farmers and other land users, such as indigenous peoples and conservationists.
Displacement of people: Agricultural expansion can displace people from their homes and land. This can have a negative impact on their livelihoods and well-being.
Increasing inequality: Agricultural expansion can lead to increasing inequality, as those who own land benefit from the expansion, while those who do not own land may be displaced or lose access to natural resources.

There are a number of ways to mitigate the negative environmental impacts of agricultural expansion, including:
Conservation agriculture: Conservation agriculture is a set of farming practices that help to protect the environment. These practices include crop rotation, cover cropping, and minimum tillage.
Reforestation: Reforestation is the process of planting trees on land that has been cleared for agriculture. This helps to absorb carbon dioxide from the atmosphere and to restore wildlife habitat.
Integrated pest management: Integrated pest management is a system of pest control that uses a variety of methods, such as crop rotation, biological control, and natural enemies, to reduce the need for pesticides.
Water conservation: Water conservation is the practice of using less water in agriculture. This can be done by using drip irrigation, planting drought-tolerant crops, and mulching.
Sustainable land use: Sustainable land use is the practice of using land in a way that meets the needs of the present without compromising the ability of future generations to meet their own needs. This can be done by using practices that protect the environment, such as conservation agriculture and reforestation.

Agricultural expansion is a complex issue with a number of environmental and social impacts. By taking steps to mitigate the negative impacts of agricultural expansion, we can help ensure a more sustainable future for food production.
Logging: Logging is another major driver of deforestation. It is the process of cutting down trees for timber or other purposes, and it is a major industry in many parts of the world, essential for providing wood for construction, furniture, and other products. There are two main types of logging:
Clear-cutting: This is the practice of cutting down all of the trees in an area. Clear-cutting is often used to create large areas of land for agriculture or development.
Selective cutting: This is the practice of cutting down only certain trees, such as those that are mature or diseased. Selective cutting is often used to manage forests and protect wildlife habitat.
Logging can have a number of environmental impacts, including:
Deforestation: When forests are logged, it can lead to deforestation, which is the loss of forest cover. Deforestation can have a number of negative impacts, such as climate change, soil erosion, and habitat loss.
Water pollution: Logging can also lead to water pollution, as runoff from logging sites can carry sediment and pollutants into streams and rivers. This can harm aquatic life and make water unsafe for drinking and swimming.
Air pollution: Logging can also contribute to air pollution, as the burning of trees releases smoke and other pollutants into the air. This can harm human health and contribute to climate change.
Habitat loss: Logging can lead to the loss of habitat for plants and animals. This can have a negative impact on biodiversity.
Soil erosion: Logging can also lead to soil erosion, as the removal of trees can make the soil more vulnerable to wind and water. This can lead to the loss of topsoil and the degradation of land.
Logging can also have a number of social impacts, including:
Land conflicts: Logging can lead to conflicts between loggers and other land users, such as indigenous peoples and conservationists.
Displacement of people: Logging can displace people from their homes and land. This can have a negative impact on their livelihoods and well-being.
Increasing inequality: Logging can lead to increasing inequality, as those who own land benefit from the logging, while those who do not own land may be displaced or lose access to natural resources.
There are a number of ways to mitigate the negative environmental impacts of logging, including:
Sustainable forest management: Sustainable forest management is the practice of managing forests in a way that meets the needs of the present without compromising the ability of future generations to meet their own needs. This can be done by using practices that protect the environment, such as selective cutting and replanting trees.
Reforestation: Reforestation is the process of planting trees on land that has been logged. This helps to absorb carbon dioxide from the atmosphere and to restore wildlife habitat.
Water conservation: Water conservation is the practice of using less water in logging operations. This can be done by using drip irrigation and by mulching.
Efficient machinery: Using efficient machinery can help to reduce the amount of damage caused by logging operations.
Public education: Public education can help to raise awareness of the environmental impacts of logging and to encourage sustainable forest management practices.
Logging is a complex issue with a number of environmental and social impacts. By taking steps to mitigate the negative impacts of logging, we can help to ensure a more sustainable future for forest management.
Mining: Mining can also lead to deforestation, as trees are cleared to access mineral resources. Mining is the process of extracting minerals from the Earth's crust. It is a major industry in many parts of the world, and it is essential for providing raw materials for a variety of products, such as metals, minerals, and fuels.
There are many different types of mining, but some of the most common include:
Surface mining: This is the process of extracting minerals from the Earth's surface. It is often used for coal, sand, and gravel.
Underground mining: This is the process of extracting minerals from below the Earth's surface. It is often used for metals, such as copper and gold.
Placer mining: This is the process of extracting minerals that have been deposited by rivers or streams. It is often used for gold and diamonds.
Offshore mining: This is the process of extracting minerals from the seabed. It is often used for oil and gas.
Mining can have a number of environmental impacts, including:
Deforestation: Mining can lead to deforestation, as trees are often cleared to access mineral deposits. This can have a number of negative impacts, such as climate change, soil erosion, and habitat loss.
Water pollution: Mining can also lead to water pollution, as runoff from mining sites can carry sediment and pollutants into streams and rivers. This can harm aquatic life and make water unsafe for drinking and swimming.
Air pollution: Mining can also contribute to air pollution, as the burning of fuel and the use of explosives can release smoke and other pollutants into the air. This can harm human health and contribute to climate change.
Habitat loss: Mining can lead to the loss of habitat for plants and animals. This can have a negative impact on biodiversity.
Soil erosion: Mining can also lead to soil erosion, as the removal of vegetation can make the soil more vulnerable to wind and water. This can lead to the loss of topsoil and the degradation of land.
Mining can also have a number of social impacts, including:
Land conflicts: Mining can lead to conflicts between miners and other land users, such as indigenous peoples and conservationists.
Displacement of people: Mining can displace people from their homes and land. This can have a negative impact on their livelihoods and well-being.
Increasing inequality: Mining can lead to increasing inequality, as those who own land benefit from the mining, while those who do not own land may be displaced or lose access to natural resources.
There are a number of ways to mitigate the negative environmental impacts of mining, including:
Reclamation: Reclamation is the process of restoring land that has been mined. This can be done by replanting trees, restoring water quality, and creating wildlife habitat.
Wastewater treatment: Wastewater treatment can help to reduce the amount of pollutants that are released from mining operations.
Efficient machinery: Using efficient machinery can help to reduce the amount of damage caused by mining operations.
Public education: Public education can help to raise awareness of the environmental impacts of mining and to encourage sustainable mining practices.
Mining is a complex issue with a number of environmental and social impacts. By taking steps to mitigate the negative impacts of mining, we can help to ensure a more sustainable future for mineral extraction.
This deforestation is disrupting ecosystems and contributing to climate change.
Land conversion
Land conversion is the change in the primary use of land from one type to another. For example, agricultural land may be converted to urban land, or forest land may be converted to pasture land. Land conversion can have a significant impact on the environment, the economy, and society.
There are many different types of land conversion, but some of the most common include:
Deforestation: This is the clearing of forests for agricultural, urban, or industrial development. Deforestation can have a number of negative environmental impacts, including soil erosion, flooding, and climate change.
Urbanization: This is the growth of cities and towns. Urbanization can lead to the conversion of agricultural land, forests, and wetlands to urban uses. This can have a number of negative environmental impacts, including air pollution, water pollution, and loss of biodiversity.
Intensive agriculture: This is the practice of using large amounts of inputs, such as fertilizers and pesticides, to produce high yields of crops. Intensive agriculture can lead to the degradation of soil and water resources, and the loss of biodiversity.
Mining: This is the extraction of minerals from the ground. Mining can lead to the destruction of forests, wetlands, and other habitats. It can also pollute water resources and the air.
Land conversion can have a number of negative environmental impacts, including:
Soil erosion: This is the removal of topsoil by wind or water. Soil erosion can lead to flooding, water pollution, and the loss of agricultural productivity.
Water pollution: This is the contamination of water by human activities. Water pollution can lead to the death of fish and other aquatic organisms, and the spread of disease.
Loss of biodiversity: This is the decline in the number and variety of plant and animal species. Loss of biodiversity can have a number of negative impacts, including the disruption of food chains and the loss of ecosystem services.
Land conversion can also have a number of negative economic impacts, including:
Decreased agricultural productivity: This can lead to higher food prices and food insecurity.
Increased unemployment: This can occur when people are displaced from their land due to land conversion.
Loss of tourism revenue: This can occur when land conversion destroys natural attractions.
Land conversion can also have a number of negative social impacts, including:
Conflicts between different groups: This can occur when different groups have different interests in the land, such as farmers, developers, and conservationists.
Displacement of people: This can occur when people are forced to leave their land due to land conversion.
Loss of cultural heritage: This can occur when land conversion destroys archaeological sites and other cultural landmarks.
Land conversion is a complex issue with a wide range of environmental, economic, and social impacts. It is important to weigh the benefits and costs of land conversion carefully before making a decision about whether or not to proceed.
Here are some ways to mitigate the negative impacts of land conversion:
Planning: Careful planning can help minimize the negative impacts of land conversion. This includes identifying the potential impacts of land conversion, and developing strategies to mitigate those impacts.
Rehabilitation: Land that has been converted can be rehabilitated to restore its environmental functions. This can involve planting trees, restoring wetlands, and reintroducing native species.
Sustainable land use: Sustainable land use practices can help to reduce the need for land conversion. This includes practices such as crop rotation, conservation tillage, and integrated pest management.
By taking these steps, we can help minimize the negative impacts of land conversion and protect our natural resources.
Sustainable farming
Sustainable farming is the practice of producing food and other agricultural products in a way that does not deplete natural resources or harm the environment. It is a way of farming that meets the needs of the present without compromising the ability of future generations to meet their own needs.
Sustainable farming practices include:
Crop rotation: This is the practice of planting different crops in the same field each year. This helps to maintain soil fertility and prevent pests and diseases.
Conservation tillage: This is the practice of minimizing soil disturbance during cultivation. This helps to reduce soil erosion and improve water infiltration.
Integrated pest management: This is a system of pest control that uses a variety of methods, such as crop rotation, biological control, and natural enemies, to reduce the need for pesticides.
Water conservation: This is the practice of using water efficiently in agriculture. This can be done by using drip irrigation, planting drought-tolerant crops, and mulching.
Regenerative agriculture: This is a system of farming that aims to improve soil health and biodiversity. It includes practices such as cover cropping, composting, and livestock grazing.
Sustainable farming can provide a number of benefits, including:
Improved soil health: Sustainable farming practices can help to improve soil health by increasing organic matter content, improving water infiltration, and reducing erosion.
Reduced pollution: Sustainable farming practices can help to reduce pollution by reducing the use of pesticides and fertilizers, and by improving water quality.
Increased biodiversity: Sustainable farming practices can help to increase biodiversity by providing habitat for wildlife and pollinators.
Improved climate resilience: Sustainable farming practices can help to improve climate resilience by reducing greenhouse gas emissions and increasing carbon sequestration.
Economic viability: Sustainable farming can be economically viable for farmers. There are a number of government programs and private businesses that provide financial support for sustainable farming practices.
Sustainable farming is an important way to meet the challenges of the 21st century, such as climate change, food security, and environmental degradation. By adopting sustainable farming practices, farmers can help protect our natural resources and ensure a sustainable future for food production.
Here are some of the challenges of sustainable farming:
Cost: Sustainable farming practices can be more expensive than conventional farming practices. This is because they often require more labor and time.
Acceptance: There is still a lack of acceptance of sustainable farming practices among some farmers. This is due to a number of factors, such as the perception that these practices are not as productive as conventional farming practices.
Regulation: There is a lack of government regulation to support sustainable farming practices. This makes it difficult for farmers to adopt these practices.
Despite these challenges, there is a growing movement towards sustainable farming. There are a number of organizations that are working to promote sustainable farming practices and to provide support to farmers who are adopting these practices. With continued effort, sustainable farming can become the norm in the future.
Food security
Food security is defined as "access by all people at all times to enough safe and nutritious food for an active and healthy life." It is a multidimensional concept that includes the availability of food, access to food, utilization of food, and stability of food supply.
Availability of food: This refers to the quantity of food that is produced or available for import. It is influenced by factors such as agricultural production, climate change, and trade policies.
Access to food: This refers to the ability of people to obtain food, either through their own production or through purchase. It is influenced by factors such as income, prices, and access to markets.
Utilization of food: This refers to the ability of people to use the food they consume to meet their nutritional needs. It is influenced by factors such as food preparation, cooking practices, and health status.
Stability of food supply: This refers to the ability to maintain a consistent food supply over time, even in the face of shocks and stresses such as climate change, conflict, and economic instability.
Food insecurity can occur at the individual, household, national, or global level. It can be caused by a number of factors, including:
Natural disasters: such as droughts, floods, and pests can damage crops and livestock, leading to food shortages.
Conflict: War and civil unrest can disrupt food production and distribution, leading to food insecurity.
Economic instability: High food prices, unemployment, and poverty can make it difficult for people to afford food.
Climate change: Climate change is expected to lead to more extreme weather events, such as droughts and floods, which could disrupt food production and distribution.
Population growth: The world's population is expected to grow by 2 billion people by 2050, which will put a strain on the global food supply.
Food insecurity can have a number of negative consequences, including:
Malnutrition: People who are food insecure are more likely to be malnourished, which can lead to health problems such as stunting, wasting, and micronutrient deficiencies.
Poverty: Food insecurity can contribute to poverty, as people who are unable to afford food may be forced to spend less on other necessities, such as housing and healthcare.
Instability: Food insecurity can lead to social unrest and instability, as people may resort to violence or protests in order to obtain food.
There are a number of things that can be done to improve food security, including:
Investing in agriculture: This includes providing farmers with access to land, water, and technology, as well as supporting research and development of new crops and agricultural practices.
Improving access to food: This includes providing food assistance to the poor and vulnerable, and promoting policies that make food more affordable.
Addressing the root causes of food insecurity: This includes addressing climate change, conflict, and economic instability.
Promoting sustainable agriculture: This includes practices that protect the environment and natural resources, such as crop rotation, conservation tillage, and integrated pest management.
Food security is a complex issue, but it is one that is essential to ensuring the well-being of people around the world. By taking action to improve food security, we can help to create a more just and sustainable world.
Carbon emissions
Carbon emissions are the release of carbon dioxide and other carbon-containing gases into the atmosphere. These gases trap heat, which contributes to climate change.
The main sources of carbon emissions are:
Fossil fuel combustion: This is the burning of coal, oil, and natural gas for electricity, heat, and transportation.
Deforestation: When trees are cut down, the carbon they store is released into the atmosphere.
Agriculture: This includes the production of food, feed, and fiber. Livestock production is a major source of methane, a greenhouse gas that is more potent than carbon dioxide.
Industrial processes: This includes the production of cement, steel, and other products.
Waste management: This includes the disposal of garbage and sewage.
Carbon emissions have a number of negative impacts on the environment, including:
Climate change: Carbon dioxide is the main greenhouse gas that traps heat in the atmosphere. As carbon emissions increase, the Earth's temperature is rising, which is causing a number of problems, such as more extreme weather events, rising sea levels, and melting glaciers.
Ocean acidification: Carbon dioxide dissolves in water and forms carbonic acid, which makes the ocean more acidic. This is harmful to marine life, as it can damage their shells and skeletons.
Air pollution: Carbon emissions contribute to air pollution, which can cause respiratory problems and other health problems.
Water pollution: Carbon emissions can also pollute water, making it unsafe to drink and swim in.
There are a number of things that can be done to reduce carbon emissions, including:
Shifting to renewable energy: This includes using solar, wind, and other renewable sources of energy to generate electricity.
Improving energy efficiency: This means using less energy to power our homes, businesses, and transportation systems.
Reducing deforestation: This can be done by planting trees and protecting existing forests.
Changing agricultural practices: This includes reducing the use of fertilizer and manure, which release nitrous oxide and methane into the atmosphere.
Improving waste management: This includes recycling and composting, which can help to reduce the amount of waste that ends up in landfills.
Reducing carbon emissions is essential to tackling climate change and protecting our planet. By taking action to reduce our emissions, we can help to create a more sustainable future for ourselves and generations to come.
Carbon emissions are a major contributor to climate change. Deforestation is a major source of carbon emissions, as trees absorb carbon dioxide from the atmosphere. When forests are cleared, this carbon dioxide is released into the atmosphere, contributing to climate change.
Ecosystem services: Ecosystem services are the benefits that humans derive from ecosystems. They include things like clean air and water, food, and climate regulation. Deforestation can disrupt ecosystem services, making it more difficult for humans to meet their needs.
Conservation
Conservation is the protection of natural resources and the environment. It is a broad term that encompasses a wide range of activities, from protecting endangered species to reducing pollution.
There are many different reasons why conservation is important. Some of the most important reasons include:
To protect biodiversity: Biodiversity is the variety of life on Earth. It includes the variety of plants, animals, and microorganisms, as well as the variety of ecosystems. Biodiversity is essential for the health of the planet, as it provides us with food, clean air and water, and other essential services.
To prevent extinction: Extinction is the permanent loss of a species. It is a natural process, but it is happening at an alarming rate today due to human activities. Conservation is essential to prevent extinction and to protect the plants and animals that are still alive.
To reduce pollution: Pollution is the contamination of the environment by harmful substances. It can come from a variety of sources, such as factories, cars, and farms. Pollution can harm human health, wildlife, and ecosystems. Conservation can help to reduce pollution by reducing our reliance on harmful substances and by finding more sustainable ways to produce and consume goods and services.
To mitigate climate change: Climate change is the long-term shift in the average weather patterns that define Earth's local, regional, and global climates. Conservation can help to mitigate climate change by reducing our emissions of greenhouse gases and by protecting forests, which absorb carbon dioxide from the atmosphere.
There are many different ways to conserve natural resources and the environment. Some of the most common ways include:
Reducing our consumption: One of the best ways to conserve natural resources is to reduce our consumption. This means buying less stuff, using less energy, and wasting less food.
Recycling and composting: Recycling and composting are two great ways to reduce waste and conserve resources. Recycling helps to reduce the amount of waste that goes to landfills, while composting helps to reduce the amount of food waste that goes to landfills.
Using renewable energy: Renewable energy is energy that comes from sources that are naturally replenished, such as solar, wind, and hydroelectric power. Using renewable energy helps to reduce our reliance on fossil fuels and to conserve natural resources.
Protecting endangered species: Endangered species are plants and animals that are at risk of extinction. Conservation can help to protect endangered species by creating protected areas, reducing pollution, and educating the public about the importance of conservation.
Sustainable development: Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. Conservation is essential for sustainable development, as it helps to ensure that we use natural resources wisely and that we protect the environment for future generations.
Conservation is an important issue that affects everyone on Earth. By taking action to conserve natural resources and the environment, we can help to ensure a healthy and sustainable future for ourselves and generations to come.
References
Horticulture
Intensive farming
Deforestation | Fruit production and deforestation | [
"Chemistry"
] | 5,690 | [
"Eutrophication",
"Intensive farming"
] |
74,604,626 | https://en.wikipedia.org/wiki/Unknown%3A%20Cosmic%20Time%20Machine | Unknown: Cosmic Time Machine is an episode of the four-episode 2023 Netflix documentary series Unknown, about NASA's development and launch of the James Webb Space Telescope. The episode follows the development of the telescope from the 1990s through its 2021 launch and 2022 deployment, and the unveiling of some of the first images from the new space telescope.
The episode was directed by Shai Gal and released on Netflix on July 24, 2023.
Reception
Decider gave the episode a mostly positive review, praising its scope and presentation. Jeff Foust of The Space Review described it as "extraordinary".
References
External links
Official websites: NASA / STScI / ESA / French
James Webb Space Telescope
Netflix original documentary films
Works about NASA
2023 documentary films
Documentary films about the space program of the United States | Unknown: Cosmic Time Machine | [
"Astronomy"
] | 163 | [
"Space telescopes",
"James Webb Space Telescope"
] |
74,604,869 | https://en.wikipedia.org/wiki/KIF18B | Kinesin family member 18B (KIF18B), a member of the kinesin-8 family, is a human protein encoded by the KIF18B gene. It is part of the kinesin family of motor proteins.
References
Motor proteins | KIF18B | [
"Chemistry"
] | 54 | [
"Molecular machines",
"Protein stubs",
"Motor proteins",
"Biochemistry stubs"
] |
74,605,246 | https://en.wikipedia.org/wiki/KIF21B | Kinesin family member 21B (KIF21B), a member of the kinesin-4 family, is a human protein encoded by the KIF21B gene. It is part of the kinesin family of motor proteins.
References | KIF21B | [
"Chemistry"
] | 52 | [
"Biochemistry stubs",
"Protein stubs"
] |
74,605,265 | https://en.wikipedia.org/wiki/KIF26A | Kinesin family member 26A (KIF26A), a member of the kinesin-11 family, is a human protein encoded by the KIF26A gene. It is part of the kinesin family of motor proteins.
References | KIF26A | [
"Chemistry"
] | 52 | [
"Biochemistry stubs",
"Protein stubs"
] |
74,605,567 | https://en.wikipedia.org/wiki/Krylo-SV | Krylo-SV is a Russian Roscosmos project to develop a reusable space launch vehicle.
History
The conceptual design was approved in May 2019.
By mid 2023, drop tests of a technology demonstrator were planned for September 2023.
Design
The first stage will deploy a wing and use a jet engine for recovery to allow reuse.
See also
Energia (rocket)
References
Russian spacecraft
Reusable launch systems
Roscosmos | Krylo-SV | [
"Astronomy"
] | 95 | [
"Rocketry stubs",
"Astronomy stubs"
] |
76,121,062 | https://en.wikipedia.org/wiki/Namespace%20security | Namespace security is a digital security discipline that refers to the practices and technologies employed to protect the names and identifiers within a digital namespace from unauthorized access, manipulation, or misuse. It involves ensuring the integrity and security of domain names and other digital identifiers within networked environments, such as the Internet's Domain Name System (DNS), software development namespaces and containerization platforms. Effective namespace security is crucial for maintaining the reliability and trustworthiness of brands and their digital services and for preventing cyber threats including impersonation, domain name hijacking or spoofing of digital identifiers like domain names and social media handles.
Namespace security in the Domain Name System
In the digital age, the significance of namespace security has been magnified because the internet is predominantly navigated through the Domain Name System, a collection of namespaces rooted in the DNS root zone managed by the Internet Assigned Numbers Authority (IANA). This includes top-level domains (TLDs) such as .com and .net, as well as domain names such as google.com and IBM.com.
These digital namespaces and the identifiers they contain are fundamental to maintaining the integrity and security of the internet and its stakeholders. If these identifiers can not be trusted, it erodes the foundational trust in the internet itself. The DNS functions as the internet's phone book, translating human-friendly domain names into IP addresses that computers use to identify each other on the network.
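The hierarchy described above can be made concrete with a small sketch. The following illustrative Python fragment (not part of any DNS implementation) decomposes a fully qualified domain name into the chain of zones from the IANA-managed root down to the full name:

```python
def namespace_chain(domain: str) -> list[str]:
    """List the chain of DNS zones from the root down to the full name.

    Example: 'www.example.com' lies inside the 'com.' top-level domain,
    which in turn lies inside the root zone '.'.
    """
    labels = domain.rstrip(".").split(".")
    chain = ["."]  # the IANA-managed root zone sits at the top
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

print(namespace_chain("www.example.com"))
# ['.', 'com.', 'example.com.', 'www.example.com.']
```

Each element of the chain corresponds to a zone whose operator must be trustworthy for the name as a whole to be trusted.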
Given its role in internet architecture, securing digital namespaces and identifiers from domain name hijacking, DNS hijacking, DNS spoofing, and other forms of cyber attacks is imperative for the safety of users and the reliability of internet services.
Good namespace security contributes to preventing corporate identity theft and to preserving the trust and confidence of stakeholders. The management and lifecycle oversight of these digital identifiers are essential for mitigating risks associated with cybersecurity vulnerabilities and operational disruptions.
Breach examples in DNS Namespace security
Namespace security breaches within the DNS happen regularly on the internet and can, in some scenarios, have catastrophic consequences. Examples of such breaches include:
Namespace security in private namespaces
Namespace security within private namespaces, such as those on social media platforms like Twitter (now X), Facebook, and TikTok, plays a critical role in safeguarding users' digital identities and the integrity of digital interactions. These platforms utilize unique identifiers, commonly known as usernames or handles, to distinguish between millions of users within their private namespaces. Ensuring the security of these namespaces involves preventing unauthorized access, impersonation, and other forms of cyber threats that could compromise user privacy, spread misinformation, or facilitate other malicious activities.
Platforms such as Twitter/X and Facebook implement various security measures, including multi-factor authentication (MFA), rigorous password policies, and automated systems to detect suspicious activities. These measures help to protect users' accounts from being compromised and prevent unauthorized parties from hijacking or misusing identifiers within these private namespaces.
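To make the multi-factor authentication step concrete, here is a minimal sketch of the time-based one-time password (TOTP) scheme of RFC 6238, a commonly used second factor; the secret and timestamps below are the RFC's published test values, not anything tied to a particular platform:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    counter = timestamp // step            # number of elapsed time steps
    msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code depends on both a shared secret and the current time, a stolen password alone is not enough to pass the check.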
Breach examples in namespace security
An illustrative example of a breach in namespace security occurred with the hacking of the United States Securities and Exchange Commission's (SEC) X account. The account was compromised due to the apparent lack of two-factor authentication (2FA), a basic but critical layer of security that requires a second form of verification in addition to the password. This incident highlights the vulnerability of digital identifiers to cyber threats and underscores the importance of employing robust security measures to protect identifiers against unauthorized access.
The security of private namespaces is vital for protecting digital identities and the overall integrity of online platforms. As cyber threats continue to evolve, so too must the strategies and technologies employed to defend against them. The incident involving the SEC's Twitter account is a stark reminder of the ongoing need for vigilance and robust security practices in the digital age.
See also
Cyber-security regulation
Cyber security standards
Cybersecurity Information Sharing Act
List of data breaches
References
Computer security procedures
Cyberwarfare
Politics and technology
Data laws
Computer law | Namespace security | [
"Technology",
"Engineering"
] | 878 | [
"Cybersecurity engineering",
"Computer law",
"Computing and society",
"Computer security procedures"
] |
76,121,942 | https://en.wikipedia.org/wiki/Neural%20network | A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural network.
In neuroscience, a biological neural network is a physical structure found in brains and complex nervous systems – a population of nerve cells connected by synapses.
In machine learning, an artificial neural network is a mathematical model used to approximate nonlinear functions. Artificial neural networks are used to solve artificial intelligence problems.
In biology
In the context of biology, a neural network is a population of biological neurons chemically connected to each other by synapses. A given neuron can be connected to hundreds of thousands of synapses.
Each neuron sends and receives electrochemical signals called action potentials to its connected neighbors. A neuron can serve an excitatory role, amplifying and propagating signals it receives, or an inhibitory role, suppressing signals instead.
Populations of interconnected neurons that are smaller than neural networks are called neural circuits. Very large interconnected networks are called large scale brain networks, and many of these together form brains and nervous systems.
Signals generated by neural networks in the brain eventually travel through the nervous system and across neuromuscular junctions to muscle cells, where they cause contraction and thereby motion.
In machine learning
In machine learning, a neural network is an artificial mathematical model used to approximate nonlinear functions. While early artificial neural networks were physical machines, today they are almost always implemented in software.
Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (the hidden layers) to the final layer (the output layer).
The "signal" input to each neuron is a number, specifically a linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network is trained by modifying these weights through empirical risk minimization or backpropagation in order to fit some preexisting dataset.
Neural networks are used to solve problems in artificial intelligence, and have thereby found applications in many disciplines, including predictive modeling, adaptive control, facial recognition, handwriting recognition, general game playing, and generative AI.
History
The theoretical base for contemporary neural networks was independently proposed by Alexander Bain in 1873 and William James in 1890. Both posited that human thought emerged from interactions among large numbers of neurons inside the brain. In 1949, Donald Hebb described Hebbian learning, the idea that neural networks can change and learn over time by strengthening a synapse every time a signal travels along it.
Artificial neural networks were originally used to model biological neural networks starting in the 1930s under the approach of connectionism. However, beginning with the McCulloch–Pitts model of the artificial neuron, proposed by Warren McCulloch and Walter Pitts in 1943, and Frank Rosenblatt's perceptron, a simple artificial neural network first implemented in hardware in 1957,
artificial neural networks became increasingly used for machine learning applications instead, and increasingly different from their biological counterparts.
See also
Emergence
Biological cybernetics
Biologically-inspired computing
References
Broad-concept articles | Neural network | [
"Engineering"
] | 692 | [
"Artificial intelligence engineering",
"Neural networks"
] |
76,122,044 | https://en.wikipedia.org/wiki/SmithKline%20Beecham%20Clinical%20Laboratories | SmithKline Beecham Clinical Laboratories (SBCL) was an American-based medical laboratory company that was acquired by Quest Diagnostics in 1999 for $1.3 billion.
Controversies
In 1989, SBCL had to pay a $1.5 million fine for illegal laboratory referral kickbacks.
In 1997, Operation LabScam forced SBCL to agree to pay a $325 million settlement for billing Medicare and Medicaid for tests that physicians were misled into believing were free, violating the 1863 False Claims Act.
In 1998, a phlebotomist at an SBCL facility in Palo Alto, California was exposed as reusing needles to save money. As a result, over 3,600 patients had to receive testing and counseling for HIV and hepatitis. The incident led to phlebotomy licensure in California.
References
Life sciences industry | SmithKline Beecham Clinical Laboratories | [
"Biology"
] | 172 | [
"Life sciences industry"
] |
76,122,400 | https://en.wikipedia.org/wiki/Astrobee%20%28robot%29 | Astrobee is a robotic system developed by the US space agency NASA to assist astronauts aboard the International Space Station (ISS). Astrobee consists of three 12.5-inch cube-shaped robots named Honey, Queen, and Bumble, along with software and a docking station for recharging. Astrobee was created to perform some routine tasks on the ISS, allowing astronauts to focus on tasks that require human skills.
Overview
Astrobee can operate either autonomously or under remote control by astronauts, flight controllers, or ground researchers. The robots are equipped with cameras and sensors to navigate the microgravity environment and perform tasks such as inventory management, experiment documentation, and cargo movement. The robots utilize an electric fan to push air through 12 nozzles, enabling free flight within the space station.
Astrobee was designed to improve upon the design of the Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) which were already aboard the ISS.
Each robot is a 12.5-inch cube with a perching arm that allows it to grasp handrails for energy conservation, to manipulate items, and assist astronauts.
History
The docking station launched on November 17, 2018, aboard Northrop Grumman's Cygnus NG-10 mission and was installed on February 15, 2019, in the Japanese Experiment Module.
Free-flying robots Bumble and Honey launched on April 17, 2019, via the NG-11 mission. The third robot, Queen, and three perching arms were launched on July 25, 2019, aboard SpaceX's SpX-18 mission. Honey later returned to Earth aboard SpX-23 for maintenance and was returned to the space station aboard NG-19.
Additional information
Astrobee, like SPHERES before it, is a part of NASA's initiative to advance guest research on the ISS. Using the robots, researchers on Earth are able to access most of the space station without the need for astronaut interaction. In addition to research opportunities, Astrobee has been used for educational purposes, with teams of students using them to complete challenges similar to tasks robots may be used for in the future.
The propulsion system of each Astrobee robot relies on a pair of impellers that pressurize air inside the robot. This pressurized air can then be vented through 12 different nozzles, allowing the robot to rotate or translate in any direction without the need for external moving parts or pressurized gas canisters. Astrobee is equipped with multiple cameras, a touch screen, a laser pointer, lights, and a 'Terminate Button' that, when pressed, quickly shuts down the propulsion and payload systems while keeping the main processors operational for communication with ground control.
The onboard sensing and computing capabilities enable Astrobee to operate autonomously, and its flight software, based on ROS, is upgradable in-orbit. The robot's modular design additionally allows for expanded capabilities in the future.
Future contributions
Astrobee's modular design allows guest scientists to conduct diverse experiments to help develop technology for future space missions. The system is expected to play a crucial role in NASA's lunar exploration plans and other deep space missions, potentially serving as caretakers for spacecraft like the Lunar Gateway during crew absences.
References
External links
NASA Astrobee Page
NASA Ames Research Center
Space robots
NASA vehicles | Astrobee (robot) | [
"Astronomy"
] | 674 | [
"Outer space",
"Space robots"
] |
76,122,969 | https://en.wikipedia.org/wiki/Action%20principles | Action principles lie at the heart of fundamental physics, from classical mechanics through quantum mechanics, particle physics, and general relativity. Action principles start with an energy function called a Lagrangian describing the physical system. The accumulated value of this energy function between two states of the system is called the action. Action principles apply the calculus of variations to the action. The action depends on the energy function, and the energy function depends on the position, motion, and interactions in the system: variation of the action allows the derivation of the equations of motion without vectors or forces.
Several distinct action principles differ in the constraints on their initial and final conditions.
The names of action principles have evolved over time and differ in details of the endpoints of the paths and the nature of the variation. Quantum action principles generalize and justify the older classical principles. Action principles are the basis for Feynman's version of quantum mechanics, general relativity and quantum field theory.
The action principles have applications as broad as physics, including many problems in classical mechanics but especially in modern problems of quantum mechanics and general relativity. These applications built up over two centuries as the power of the method grew along with its mathematical development.
This article introduces the action principle concepts and summarizes other articles with more details on concepts and specific principles.
Common concepts
Action principles are "integral" approaches rather than the "differential" approach of Newtonian mechanics. The core ideas are based on energy, paths, an energy function called the Lagrangian along paths, and selection of a path according to the "action", a continuous sum or integral of the Lagrangian along the path.
Energy, not force
Introductory study of mechanics, the science of interacting objects, typically begins with Newton's laws based on the concept of force, defined by the acceleration it causes when applied to mass: F = ma. This approach to mechanics focuses on a single point in space and time, attempting to answer the question: "What happens next?". Mechanics based on action principles begins with the concept of action, an energy tradeoff between kinetic energy and potential energy, defined by the physics of the problem. These approaches answer questions relating starting and ending points: Which trajectory will place a basketball in the hoop? If we launch a rocket to the Moon today, how can it land there in 5 days? The Newtonian and action-principle forms are equivalent, and either one can solve the same problems, but selecting the appropriate form will make solutions much easier.
The energy function in the action principles is not the total energy (conserved in an isolated system), but the Lagrangian, the difference between kinetic and potential energy. The kinetic energy combines the energy of motion for all the objects in the system; the potential energy depends upon the instantaneous position of the objects and drives the motion of the objects. The motion of the objects places them in new positions with new potential energy values, giving a new value for the Lagrangian.
Using energy rather than force gives immediate advantages as a basis for mechanics. Force mechanics involves 3-dimensional vector calculus, with 3 space and 3 momentum coordinates for each object in the scenario; energy is a scalar magnitude combining information from all objects, giving an immediate simplification in many cases. The components of force vary with coordinate systems; the energy value is the same in all coordinate systems. Force requires an inertial frame of reference; once velocities approach the speed of light, special relativity profoundly affects mechanics based on forces. In action principles, relativity merely requires a different Lagrangian: the principle itself is independent of coordinate systems.
Paths, not points
The explanatory diagrams in force-based mechanics usually focus on a single point, like the center of momentum, and show vectors of forces and velocities. The explanatory diagrams of action-based mechanics have two points with actual and possible paths connecting them. These diagrammatic conventions reiterate the different strong points of each method.
Depending on the action principle, the two points connected by paths in a diagram may represent two particle positions at different times, or the two points may represent values in a configuration space or in a phase space. The mathematical technology and terminology of action principles can be learned by thinking in terms of physical space, then applied in the more powerful and general abstract spaces.
Action along a path
Action principles assign a number, the action, to each possible path between two points. This number is computed by adding an energy value for each small section of the path multiplied by the time spent in that section:
action S = ∫_{t₁}^{t₂} (T − V) dt,
where the form of the kinetic (T) and potential (V) energy expressions depend upon the physics problem, and their value at each point on the path depends upon the relative coordinates corresponding to that point. The energy function L = T − V is called a Lagrangian; in simple problems it is the kinetic energy minus the potential energy of the system.
Path variation
A system moving between two points takes one particular path; other similar paths are not taken. Each path corresponds to a value of the action.
An action principle predicts or explains that the particular path taken has a stationary value for the system's action: similar paths near the one taken have very similar action value. This variation in the action value is key to the action principles.
The symbol δ is used to indicate the path variations, so an action principle appears mathematically as
δS = 0,
meaning that at the stationary point, the variation of the action S with some fixed constraints is zero.
For action principles, the stationary point may be a minimum or a saddle point, but not a maximum. Elliptical planetary orbits provide a simple example of two paths with equal action, one in each direction around the orbit; neither can be the minimum or "least action". The path variation implied by δ is not the same as a differential like dt. The action integral depends on the coordinates of the objects, and these coordinates depend upon the path taken. Thus the action integral is a functional, a function of a function.
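The stationary-action claim can be checked numerically. The following Python sketch is illustrative and not from the article: a ball moving under uniform gravity between fixed endpoints y(0) = y(T) = 0, where the true parabolic path accumulates less action than sinusoidally wiggled alternatives. The specific numbers (m = 1, g = 9.8, T = 2) are arbitrary choices for the example.

```python
import math

def action(y, dt, m=1.0, g=9.8):
    # Discrete action: sum of (kinetic - potential) energy over each time step.
    S = 0.0
    for i in range(len(y) - 1):
        v = (y[i + 1] - y[i]) / dt        # velocity on this segment
        y_mid = 0.5 * (y[i] + y[i + 1])   # height on this segment
        S += (0.5 * m * v * v - m * g * y_mid) * dt
    return S

T, n = 2.0, 400
dt = T / n
t = [i * dt for i in range(n + 1)]

# True free-fall path between y(0) = 0 and y(T) = 0 under gravity.
true_path = [0.5 * 9.8 * ti * (T - ti) for ti in t]
S_true = action(true_path, dt)

# Wiggle the path while keeping the endpoints fixed: the action only grows.
wiggled = [yi + 0.5 * math.sin(math.pi * ti / T) for yi, ti in zip(true_path, t)]
S_wiggled = action(wiggled, dt)
print(S_true < S_wiggled)  # True: the physical path is stationary (here, minimal)
```

Repeating the comparison with other endpoint-preserving perturbations gives the same result, which is the discrete analogue of δS = 0.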
Conservation principles
An important result known as Noether's theorem states that every continuous symmetry of a Lagrangian implies a conserved quantity, and conversely. For example, a Lagrangian independent of time corresponds to a system with conserved energy; spatial translation independence implies momentum conservation; angular rotation invariance implies angular momentum conservation.
These examples are global symmetries, where the independence is itself independent of space or time; more general local symmetries having a functional dependence on space or time lead to gauge theory. The observed conservation of isospin was used by Yang Chen-Ning and Robert Mills in 1953 to construct a gauge theory for mesons, leading some decades later to modern particle physics theory.
Distinct principles
Action principles apply to a wide variety of physical problems, including all of fundamental physics. The only major exceptions are cases involving friction or when only the initial position and velocities are given. Different action principles have different meaning for the variations; each specific application of an action principle requires a specific Lagrangian describing the physics. A common name for any or all of these principles is "the principle of least action". For a discussion of the names and historical origin of these principles see action principle names.
Fixed endpoints with conserved energy
When total energy and the endpoints are fixed, Maupertuis's least action principle applies. For example, to score points in basketball the ball must leave the shooter's hand and go through the hoop, but the time of the flight is not constrained. Maupertuis's least action principle is written mathematically as the stationary condition
δW = 0
on the abbreviated action
W = ∫ p · dq
(sometimes written S₀), where p are the particle momenta or the conjugate momenta of generalized coordinates, defined by the equation
p_k = ∂L/∂q̇_k,
where L(q, q̇, t) is the Lagrangian. Some textbooks write δW as ΔW, to emphasize that the variation used in this form of the action principle differs from Hamilton's variation. Here the total energy is fixed during the variation, but not the time, the reverse of the constraints on Hamilton's principle. Consequently, the same path and end points take different times and energies in the two forms. The solutions in the case of this form of Maupertuis's principle are orbits: functions relating coordinates to each other in which time is simply an index or a parameter.
Time-independent potentials; no forces
For a time-invariant system, the action S relates simply to the abbreviated action W on the stationary path as
S = W − E(t₂ − t₁)
for energy E and time difference Δt = t₂ − t₁. For a rigid body with no net force, the actions are identical, and the variational principles become equivalent to Fermat's principle of least time:
δ(t₂ − t₁) = 0
Fixed events
When the physics problem gives the two endpoints as a position and a time, that is as events, Hamilton's action principle applies. For example, imagine planning a trip to the Moon. During your voyage the Moon will continue its orbit around the Earth: it's a moving target. Hamilton's principle for objects at positions q(t) is written mathematically as
δS = 0, where S = ∫_{t₁}^{t₂} L(q, q̇, t) dt.
The constraint Δt = t₂ − t₁ means that we only consider paths taking the same time, as well as connecting the same two points q(t₁) and q(t₂). The Lagrangian L = T − V is the difference between kinetic energy and potential energy at each point on the path. Solution of the resulting equations gives the world line q(t). Starting with Hamilton's principle, the local differential Euler–Lagrange equation can be derived for systems of fixed energy. The action S in Hamilton's principle is the Legendre transformation of the action W in Maupertuis's principle.
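For the harmonic oscillator, the Euler–Lagrange equation obtained from Hamilton's principle is m·q̈ + k·q = 0, and a known solution can be checked against it numerically. A minimal Python sketch, illustrative only; the values of m and k are arbitrary:

```python
import math

m, k = 2.0, 8.0
omega = math.sqrt(k / m)  # angular frequency, here 2.0

def q(t):
    # Known solution of the harmonic oscillator m*q'' + k*q = 0.
    return math.cos(omega * t)

def el_residual(t, h=1e-4):
    # Euler-Lagrange residual for L = (1/2)*m*q'^2 - (1/2)*k*q^2:
    # d/dt(m*q') - (-k*q) = m*q'' + k*q, with q'' from central differences.
    qdd = (q(t + h) - 2.0 * q(t) + q(t - h)) / (h * h)
    return m * qdd + k * q(t)

residuals = [abs(el_residual(0.3 * i)) for i in range(10)]
print(max(residuals))  # ~0, up to finite-difference error
```

The residual m·q̈ + k·q vanishes along the solution up to finite-difference error, confirming that the path satisfies the equation of motion derived from the variational principle.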
Classical field theory
The concepts and many of the methods useful for particle mechanics also apply to continuous fields. The action integral runs over a Lagrangian density, but the concepts are so close that the density is often simply called the Lagrangian.
Quantum action principles
For quantum mechanics, the action principles have significant advantages: only one mechanical postulate is needed; if a covariant Lagrangian is used in the action, the result is relativistically correct; and they transition clearly to classical equivalents.
Both Richard Feynman and Julian Schwinger developed quantum action principles based on early work by Paul Dirac. Feynman's integral method was not a variational principle but reduces to the classical least action principle; it led to his Feynman diagrams. Schwinger's differential approach relates infinitesimal amplitude changes to infinitesimal action changes.
Feynman's action principle
When quantum effects are important, new action principles are needed. Instead of a particle following a path, quantum mechanics defines a probability amplitude ψ(x, t) at one point x and time t related to a probability amplitude at a different point later in time:
ψ(x′, t′) = ∫ e^{iS/ħ} ψ(x, t) dx,
where S is the classical action.
Instead of a single path with stationary action, all possible paths add (the integral over x), weighted by a complex probability amplitude e^{iS/ħ}. The phase of the amplitude is given by the action divided by the Planck constant or quantum of action: S/ħ. When the action of a particle is much larger than ħ, S/ħ ≫ 1, the phase changes rapidly along the path: the amplitude averages to a small number.
Thus the Planck constant sets the boundary between classical and quantum mechanics.
All of the paths contribute in the quantum action principle. At the end point, where the paths meet, the paths with similar phases add, and those with phases differing by π subtract. Close to the path expected from classical physics, phases tend to align; the tendency is stronger for more massive objects that have larger values of action. In the classical limit, one path dominates: the path of stationary action.
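The phase cancellation described above can be illustrated numerically. In this toy Python sketch (an illustration under stated assumptions, not from the article), nearby paths are labeled by a single number ε with action S(ε) = ε² about the stationary path at ε = 0. When the quantum of action is large relative to the action, the phases align and contributions add; when it is small, the phases oscillate rapidly and average to nearly nothing:

```python
import cmath

def mean_amplitude(hbar, n=200001, width=1.0):
    # Average exp(i*S/hbar) over paths labeled by eps in [-width, width],
    # with S(eps) = eps**2 (quadratic about the stationary path at eps = 0).
    total = 0.0 + 0.0j
    for k in range(n):
        eps = -width + 2.0 * width * k / (n - 1)
        total += cmath.exp(1j * eps * eps / hbar)
    return abs(total / n)

aligned = mean_amplitude(10.0)     # action << hbar: phases align, magnitude near 1
cancelled = mean_amplitude(0.001)  # action >> hbar: phases cancel, magnitude near 0
print(aligned, cancelled)
```

Only the neighborhood of the stationary path survives the cancellation, which is the stationary-phase mechanism by which classical behavior emerges from the sum over paths.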
Schwinger's action principle
Schwinger's approach relates variations in the transition amplitudes (q₂ t₂ | q₁ t₁) to variations in an action matrix element:
δ(q₂ t₂ | q₁ t₁) = (i/ħ) (q₂ t₂ | δS | q₁ t₁),
where the action operator is
S = ∫_{t₁}^{t₂} L dt.
The Schwinger form makes analysis of variation of the Lagrangian itself, for example, variation in potential source strength, especially transparent.
The optico-mechanical analogy
For every path, the action integral builds in value from zero at the starting point to its final value at the end. Any nearby path has similar values at similar distances from the starting point. Lines or surfaces of constant partial action value can be drawn across the paths, creating a wave-like view of the action. Analysis like this connects particle-like rays of geometrical optics with the wavefronts of Huygens–Fresnel principle.
Applications
Action principles are applied to derive differential equations like the Euler–Lagrange equations or as direct applications to physical problems.
Classical mechanics
Action principles can be directly applied to many problems in classical mechanics, e.g. the shape of elastic rods under load,
the shape of a liquid between two vertical plates (a capillary),
or the motion of a pendulum when its support is in motion.
Chemistry
Quantum action principles are used in the quantum theory of atoms in molecules (QTAIM), a way of decomposing the computed electron density of molecules into atoms, providing insight into chemical bonding.
General relativity
Inspired by Einstein's work on general relativity, the renowned mathematician David Hilbert applied the principle of least action to derive the field equations of general relativity. His action, now known as the Einstein–Hilbert action,
S = (1/2κ) ∫ R √(−g) d⁴x,
contained a relativistically invariant volume element √(−g) d⁴x and the Ricci scalar curvature R. The scale factor κ is the Einstein gravitational constant.
Other applications
The action principle is so central in modern physics and mathematics that it is widely applied including in thermodynamics, fluid mechanics, the theory of relativity, quantum mechanics, particle physics, and string theory.
History
The action principle is preceded by earlier ideas in optics. In ancient Greece, Euclid wrote in his Catoptrica that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection. Hero of Alexandria later showed that this path has the shortest length and least time.
Building on the early work of Pierre Louis Maupertuis, Leonhard Euler, and Joseph-Louis Lagrange defining versions of the principle of least action,
William Rowan Hamilton and in tandem Carl Gustav Jacob Jacobi developed a variational form for classical mechanics known as the Hamilton–Jacobi equation.
In 1915, David Hilbert applied the variational principle to derive Albert Einstein's equations of general relativity.
In 1933, the physicist Paul Dirac demonstrated how this principle can be used in quantum calculations by discerning the quantum mechanical underpinning of the principle in the quantum interference of amplitudes. Subsequently Julian Schwinger and Richard Feynman independently applied this principle in quantum electrodynamics.
References
Dynamics (mechanics)
Classical mechanics | Action principles | [
"Physics"
] | 2,939 | [
"Physical phenomena",
"Classical mechanics",
"Motion (physics)",
"Mechanics",
"Dynamics (mechanics)"
] |
76,124,117 | https://en.wikipedia.org/wiki/Phyllotopsis%20rhodophyllus | Phyllotopsis rhodophyllus is a species of fungus in the family Phyllotopsidaceae.
References
Fungi described in 1936
Inedible fungi
Fungus species
Agaricales | Phyllotopsis rhodophyllus | [
"Biology"
] | 41 | [
"Fungi",
"Fungus species"
] |
76,124,124 | https://en.wikipedia.org/wiki/Phyllotopsis%20ealaensis | Phyllotopsis ealaensis is a species of fungus in the family Phyllotopsidaceae.
References
Fungi described in 1971
Inedible fungi
Fungus species
Agaricales | Phyllotopsis ealaensis | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
76,124,384 | https://en.wikipedia.org/wiki/UTI%20vaccine | A UTI vaccine is a vaccine used for prevention of recurrent urinary tract infections (UTIs). A number of UTI vaccines have been developed and/or marketed. These include Uromune (MV-140; sublingual spray), UroVaxom (OM-89, OM-8980; oral tablet), Solco-Urovac (Strovac; vaginal suppository or intramuscular injection), ExPEC4V (V10, JNJ-63871860; intramuscular injection), and SEQ-400 (route unspecified).
Background
Recurrent UTIs are common in women, causing significant physical and emotional distress. While antibiotics are widely used to treat them, the recurrence of UTIs poses a significant challenge. Long-term antibiotic use not only poses health risks but also contributes to the growing problem of antibiotic resistance, making effective treatment more challenging.
Uromune (MV-140)
Uromune (developmental code name MV-140) is a vaccine developed to treat recurrent UTIs that is made from heat-inactivated bacteria suspended in glycerol, sodium chloride, artificial pineapple flavoring, and water. It contains specific strains of four types of bacteria: Escherichia coli, Klebsiella pneumoniae, Enterococcus faecalis, and Proteus vulgaris.
This vaccine is taken by spraying it under the tongue twice a day for three months. It is currently being tested in clinical trials. Studies suggest that MV-140 works by stimulating the immune system to produce antibodies and activate certain immune cells, which help protect against UTIs.
In a study involving 89 individuals with a history of urinary tract infections (UTIs), participants were instructed to use two sprays of the vaccine daily for three months. Preliminary results presented at the European Association of Urology Congress in Paris revealed that nine years later, 54 percent of the participants remained free from UTIs. Women in the study remained UTI-free for approximately 4.5 years on average, while men experienced around 3.5 years without UTIs. Dr. Bob Yang, who co-led the study and serves as a consultant urologist at the Royal Berkshire NHS Foundation Trust in the United Kingdom, noted that before receiving the vaccine, all participants had struggled with recurrent UTIs, which can be challenging to treat.
Regulatory status
UTI vaccines are approved for medical use in some countries or are available via special-access programs. However, UTI vaccines remain in the experimental stage of development in much of the world and remain to be approved by the Food and Drug Administration (FDA) in the United States.
See also
Methenamine
LACTIN-V
TOL-463
References
Experimental urogynecological drugs
Urinary system
Vaccines | UTI vaccine | [
"Biology"
] | 584 | [
"Vaccination",
"Vaccines"
] |
76,125,137 | https://en.wikipedia.org/wiki/2010%20California%20contrail%20incident | On the evening of Monday, November 8, 2010, an unusually conspicuous contrail appeared about 35 miles west of Los Angeles, California in the vicinity of Catalina Island. News footage of the event from a KCBS helicopter led to intense media coverage and speculation about a potential military missile launch, with many reporters and experts discussing the contrail and theorizing about its source.
Coverage continued for several days. The Pentagon released a statement on November 9 that it could not identify the source of the vapor trail, but both the North American Aerospace Defense Command (NORAD) and U.S. Northern Command stated it was not a foreign military launch. On November 9, the FAA also issued a statement that it had not approved any commercial space launches in the area for the prior day. On November 10, some 30 hours after the "mystery missile" first gained press attention, a Pentagon spokesman stated there was "no evidence to suggest" the plume was anything but an aircraft contrail. Some experts, however, held that the vapor trail could not be identified as an aircraft contrail with total certainty, and others stated it was a missile.
While some uncertainty over the vapor trail's origin persisted, the incident came to be seen as an example of news outlets being "captives of their sources" and irresponsibly pushing unverified theses; it was also interpreted as a lesson in the importance of exploring alternative hypotheses that fit available data. U.S. federal and military authorities were also criticized for giving a series of "inconclusive" answers about the event and allowing the issue, in the words of one commentator, to "fester for days" without a clear resolution.
Background
Contrails, short for "condensation trails," are linear cloud formations produced by aircraft exhaust or air pressure changes, usually at commercial cruising altitudes several miles above the ground. Contrails often only last for minutes, but can last for hours and expand to several miles across, coming to resemble naturally formed cirrus or altocumulus clouds. A contrail from an airplane flying towards an observer can create the illusion of a vertically moving object, as happened with a contrail off the coast of San Clemente, California on December 31, 2009, which some observers mistook for a missile launch. The San Clemente "New Year's Eve Contrail" was a horizontal trail at about 32,000 feet, or six miles, in altitude, that appeared to be oriented vertically due to the ground-level perspective from which it was observed and photographed. There are also historic examples of observers mistaking aircraft contrails for other phenomena, especially when contrails were still uncommon, including incidents in Galveston, Texas on October 27, 1951; in several areas of Iowa on April 15, 1950; and throughout the San Francisco Peninsula on January 11, 1950, when a B-50 Superfortress bomber flying at 35,000 feet caused many residents to call police stations to report a "burning plane," "meteors," and "flying saucers."
While a number of experts concluded the 2010 "mystery missile" was simply a common aircraft contrail, other experts held that while an airplane was the most likely source, a missile launch could not be entirely ruled out based on existing evidence. San Nicolas Island, approximately 75 miles west of Los Angeles and the site of a U.S. military installation, has hosted a number of secret operations. The 2010 incident occurred not far from San Nicolas, leading some experts to speculate about a connection between the vapor trail and activity on the island. On Friday, November 5, 2010, several days prior to the contrail incident, Vandenberg Air Force Base launched a Delta II rocket carrying a Thales Alenia Space-Italia COSMO SkyMed satellite, but a sergeant from Vandenberg informed CBS News 8 that there had been no subsequent launches.
Incident and response
At around 5:00 p.m. Pacific Time on Monday, November 8, 2010, a helicopter from the KCBS news station recorded the vapor trail of what was described as a "missile" about 35 miles west of Los Angeles, California and somewhat north of Catalina Island. It was later characterized as a "large vertical column set against the bright orange sky at sunset" and was clearly visible from the Los Angeles area. Scott Diener, the news director at KCBS, stated that the experts interviewed by his station on "Tuesday night and Wednesday morning had leaned toward the missile theory" to explain the vapor plume. News anchors continued to cover the event and, by the end of Tuesday, it had attracted international press attention, being described as a "mystery missile" or "vapor trail reminiscent of a missile launch."
Several experts argued that the plume was simply a jet contrail, yet others disagreed, and U.S. government sources did not immediately reach a public conclusion about the vapor trail or its source. Pentagon spokesman Colonel David Lapan said that any missile test in the area was "implausible" due to the close proximity of the sighting to Los Angeles International Airport. Col. Lapan also stated that there were no known airspace closures or notifications to mariners at the time of the incident as would be expected for a missile test. Robert Ellsworth, former U.S. Ambassador to NATO and former Deputy Secretary of Defense, stated to CBS News 8 that it did not appear to be a Tomahawk missile but, stressing it was only a theory, also remarked: "It could be a test firing of an intercontinental ballistic missile from an underwater submarine, to demonstrate mainly to Asia, that we can do that." Ellsworth further said of the vapor trail: "It's spectacular...It takes people's breath away," and described the projectile as "a big missile." Doug Richardson, editor of Jane's Missiles and Rockets, said, "It's a solid propellant missile, you can tell from the efflux [smoke] but they're not showing enough of the tape to show whether it's staging [jettisoning its sections]." Richardson theorized it might be a standard interceptor missile of the type used by U.S. Navy Aegis guided-missile cruisers.
The United States Northern Command and the North American Aerospace Defense Command released a statement in response to the sighting, saying, "At this time, we can confirm that there is no threat to our nation and from all indications this was not a launch by a foreign military." The Pentagon released a statement on November 9 declaring that it could not identify the source of the vapor trail. Col. Lapan stated that officials were "still trying to find out what the contrail off the coast of southern California was caused by," but that currently, "all indications are that it was not a DoD activity." The Pentagon determined that there was no "scheduled or inadvertent" missile launches off the coast of California on the night of November 8. Adm. Gary Roughead, Chief of Naval Operations, stated to The Washington Post that it "wasn't a Navy missile," yet declined to offer more detail.
The website ContrailScience.com produced a widely circulated report that explained how an airplane contrail moving directly toward a viewer has the appearance of rising vertically. The website referenced the December 31, 2009 San Clemente "New Year's Eve Contrail," a horizontal contrail which some observers thought was a vertical missile launch.
John E. Pike, the director of GlobalSecurity.org, stated that the flying object producing the vapor trail was not a missile because it was too slow, and described it as "obviously an airplane contrail." On Tuesday, Pike said that what the KCBS crew recorded was "clearly an airplane contrail. It's an optical illusion that looks like it's going up, whereas in reality it's going towards the camera. The tip of the contrail is moving far too slowly to be a rocket. When it's illuminated by the sunset, you can see hundreds of miles of it...all the way to the horizon." Light at the head of the contrail that was initially speculated to be an exhaust "flame" was later interpreted as sun reflecting from an aircraft exterior.

As reported on November 10 by CNN, an unnamed official from the U.S. Northern Command stated the vapor trail may have been caused by a plane, and likened it to observers mistaking the New Year's Eve contrail for a missile. FAA spokesman Ian Gregor stated: "The FAA ran radar replays of a large area west of Los Angeles based on media reports of the possible missile launch at approximately 5 p.m. (PT) on Monday. The radar replays did not reveal any fast moving, unidentified targets in that area...The FAA did not receive reports...of unusual sightings from pilots who were flying in the area on Monday afternoon." Eventually, on November 10, about 30 hours after the contrail first gained press attention, a Pentagon spokesman stated “there is no evidence to suggest that this is anything else other than a condensation trail from an aircraft.” U.S. Airways flight 808 from Honolulu, Hawaii—a Boeing 757-200—emerged as a candidate for the contrail's source. UPS flight 902—a McDonnell Douglas/Boeing MD-11—was also raised as a possibility.
Eventually, several news outlets came to report that the vapor trail was simply an airplane contrail or optical illusion, as various experts had argued. The public and news reaction to the event was characterized by CNN as the "mystery missile mania." Nonetheless, some experts continued to hold that the plume could not be conclusively identified. In a November 14, 2010 article, the New York Times quoted Theodore A. Postol, a former Pentagon science adviser and professor at the Massachusetts Institute of Technology, who stated that while he was inclined toward the jet explanation, he could not "rule out a missile launch." On November 16, Space.com published an article discussing an image taken by the NASA/NOAA GOES 11 satellite on November 8 that reportedly showed the "mystery" contrail visible as a "horizontal white streak." The NASA website itself also discussed the GOES 11 imagery in an article. Patrick Minnis, a contrail expert in the Science Directorate at the NASA Langley Research Center, said he first "assumed it was a missile" but after research concluded that an aircraft was the "most likely" source of the contrail.
Aftermath
In the November 14 New York Times article, several days after the event, the incident was cast as an example of how news outlets can be "captives of their sources" and irresponsibly push unproven theses; it was also interpreted as a lesson in the importance of exploring alternative hypotheses. Federal and military authorities in the United States also faced criticism in the aftermath of the event, particularly for the "inconclusive" nature of the answers provided to the public, and a perceived delay in resolving the issue.
Time magazine ranked the incident as number two on its "Top 10 Oddball News Stories" list for 2010, remarking how "within hours, the footage had been picked up by every major news network," but also observing that "as quickly as the mystery missile arrived on the scene...it was set aside." ABC News also mentioned the incident in a 2012 story about test missile launches from Fort Wingate and the White Sands Missile Range, recalling the 2010 "'mystery missile'...spotted off the coast of Southern California by a TV news helicopter." The Telegraph referred to the incident in a July 2017 article, stating how in "November 2010, The Pentagon was left baffled by what was reported to be a 'mystery missile launch' off the coast of California." Live Science also mentioned the event in a 2017 article.
Former Director of Israel Missile Defense, Uzi Rubin, cited the incident in a September 2014 briefing on Israeli Air Defense held in Washington, D.C. and broadcast by C-SPAN. Aviation Week Network reported on the briefing, describing how Rubin used "the November 2010 'mystery missile launch' seen from California" as an example of foreshortened perspective, to illustrate how observers can mistake the direction that a rocket is traveling.
See also
Contrail
Twilight phenomenon
Channel Islands (California)
Robert Ellsworth
Battle of Los Angeles
Notes
References
External links
Wall Street Journal video of the "Mystery Missile"
Photos sent to ABC News, taken by Richard Warren
Contrail Science page, with raw download of Warren’s ABC photos
Fox News coverage of the "mystery missile" November 10, 2010
"Missile Launched Off Calif. Coast" CBS video
"Mystery Missile, Not A Missile" CBS video
"Who fired the 'mystery missile?" Channel 4 video
"Mystery missile launch caught on camera off California coast" WMAR-2 ABC video
"Mystery Missile" News 5 Cleveland ABC video
Michio Kaku on ABC News: "Professor Explains Mystery Plume"
November 10, 2010 Wikinews article on the incident
2010 in aviation
2010 in California
Santa Catalina Island (California)
Atmospheric optical phenomena
November 2010 events in the United States
Aviation accidents and incidents in the United States in 2010
2010 in American politics
Los Angeles
Boeing 757
North American Aerospace Defense Command
US Airways Group
Channel Islands of California
Honolulu
Los Angeles International Airport | 2010 California contrail incident | [
"Physics"
] | 2,731 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
76,125,865 | https://en.wikipedia.org/wiki/Lifileucel | Lifileucel, sold under the brand name Amtagvi, is an adoptive T cell therapy used for the treatment of melanoma.
Specifically, lifileucel is a tumor-derived T cell immunotherapy composed of a recipient's own T cells. A portion of the recipient's tumor tissue is removed during a surgical procedure prior to treatment. The recipient's T cells (the tumor-infiltrating lymphocytes) are separated from the tumor tissue, multiplied and then infused into the patient in a single dose. T cells are a type of cell that helps the immune system fight cancer and infections.
Lifileucel is the first tumor-derived T cell immunotherapy approved by the US Food and Drug Administration (FDA). It was approved for medical use in the United States in February 2024.
Medical uses
Lifileucel is indicated for the treatment of adults with unresectable (unable to be removed with surgery) or metastatic (spread to other parts of the body) melanoma previously treated with other therapies (a PD-1 blocking antibody, and if BRAF V600 mutation positive, a BRAF inhibitor with or without a MEK inhibitor).
Side effects
The most common adverse reactions include chills, fever, fatigue, tachycardia (abnormally fast heart rate), diarrhea, febrile neutropenia (fever associated with a low level of certain white blood cells), edema (swelling due to buildup of fluid in body tissues), rash, hypotension, hair loss, infection, hypoxia (abnormally low oxygen levels in the body) and feeling short of breath.
History
The safety and effectiveness of lifileucel was evaluated in a global, multicenter, multicohort, clinical study including adult participants with unresectable or metastatic melanoma who had previously been treated with at least one systemic therapy, including a PD-1 blocking antibody, and if positive for the BRAF V600 mutation, a BRAF inhibitor or BRAF inhibitor with an MEK inhibitor. Effectiveness was measured via the objective response rate to treatment and duration of response (measured from the date of confirmed initial objective response to the date of progression, death from any cause, starting a new anti-cancer treatment or discontinuation from follow-up, whichever came first).
The US Food and Drug Administration (FDA) approved Lifileucel through the accelerated approval pathway and granted the application orphan drug, regenerative medicine advanced therapy, fast track, and priority review designations under the brand name Amtagvi to Iovance Biotherapeutics.
Society and culture
Legal status
Lifileucel was approved for medical use in the United States in February 2024.
Names
Lifileucel is the international nonproprietary name.
References
External links
Cancer treatments
Orphan drugs | Lifileucel | [
"Biology"
] | 606 | [
"Cell therapies"
] |
76,126,769 | https://en.wikipedia.org/wiki/MRTX1133 | MRTX1133 is an investigational drug that targets the G12D mutation in KRAS dependent cancers. It is currently in a phase 1/2 clinical trial for the treatment of solid tumors. MRTX1133 is considered to be harmful from direct skin or eye exposure other than transient irritation. It may cause irritation of the respiratory system if inhaled.
See also
Adagrasib
RMC-9805
References
Antineoplastic drugs
Pyridopyrimidines
Pyrrolizidines
Fluoroarenes
Ethynyl compounds
Naphthols
Aromatic ethers | MRTX1133 | [
"Chemistry"
] | 121 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
76,126,836 | https://en.wikipedia.org/wiki/Hellings-Downs%20curve | The Hellings-Downs curve (also known as the Hellings and Downs curve) is a theoretical tool used to establish the telltale signature that a galactic-scale pulsar timing array has detected gravitational waves, typically of wavelengths . The method entails searching for spatial correlations of the timing residuals from pairs of pulsars and comparing the data with the Hellings-Downs curve. When the data fit exceeds the standard 5 sigma threshold, the pulsar timing array can declare detection of gravitational waves. More precisely, the Hellings-Downs curve is the expected correlations of the timing residuals from pairs of pulsars as a function of their angular separation on the sky as seen from Earth. This theoretical correlation function assumes Einstein's general relativity and a gravitational wave background that is isotropic.
Pulsar timing array residuals
Albert Einstein's theory of general relativity predicts that a mass will deform spacetime causing gravitational waves to emanate outward from the source. These gravitational waves will affect the travel time of any light that interacts with them. A pulsar timing residual is the difference between the expected time of arrival and the observed time of arrival of light from pulsars. Because pulsars flash with such a consistent rhythm, it is hypothesised that if a gravitational wave is present, a specific pattern may be observed in the timing residuals from pairs of pulsars. The Hellings-Downs curve is used to infer the presence of gravitational waves by finding patterns of angular correlations in the timing residual data of different pulsar pairings. More precisely, the expected correlations on the vertical axis of the Hellings-Downs curve are the expected values of pulsar-pairs correlations averaged over all pulsar-pairs with the same angular separation and over gravitational-wave sources very far away with noninterfering random phases. Pulsar timing residuals are measured using pulsar timing arrays.
History
Not long after the first suggestions of pulsars being used for gravitational wave detection in the late 1970s, Donald Backer discovered the first millisecond pulsar in 1982. The following year, Ron Hellings and George Downs published the foundations of the Hellings-Downs curve in their 1983 paper "Upper Limits on the Isotropic Gravitational Radiation Background from Pulsar Timing Analysis". Donald Backer would later go on to become one of the founders of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).
Examples in the scientific literature
In 2023, NANOGrav used pulsar timing array data collected over 15 years in their latest publications supporting the existence of a gravitational wave background. A total of 2,211 millisecond pulsar pair combinations (67 individual pulsars) were used by the NANOGrav team to construct their Hellings-Downs plot comparison. The NANOGrav team wrote that "The observation of Hellings–Downs correlations points to the gravitational-wave origin of this signal." The Hellings-Downs curve has also been referred to as the "smoking gun" or "fingerprint" of the gravitational-wave background. These examples highlight the critical role that the Hellings-Downs curve plays in contemporary gravitational wave research.
Equation of the Hellings-Downs curve
Reardon et al. (2023) from the Parkes pulsar timing array team give the following equation for the Hellings-Downs curve, which in the literature is also called the overlap reduction function:

\Gamma(\zeta_{ab}) = \frac{3}{2} x_{ab} \ln x_{ab} - \frac{x_{ab}}{4} + \frac{1}{2} + \frac{1}{2}\delta_{ab}

where:

x_{ab} = \frac{1 - \cos\zeta_{ab}}{2},

\delta_{ab} is the Kronecker delta function

\zeta_{ab} represents the angle of separation between the two pulsars a and b as seen from Earth

\Gamma(\zeta_{ab}) is the expected angular correlation function.
This curve assumes an isotropic gravitational wave background that obeys Einstein's general relativity. It is valid for "long-arm" detectors like pulsar timing arrays, where the wavelengths of typical gravitational waves are much shorter than the "long-arm" distance between Earth and typical pulsars.
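The correlation can be evaluated numerically. The sketch below assumes the standard closed form Γ(ζ) = (3/2)x ln x − x/4 + 1/2 with x = (1 − cos ζ)/2, plus a Kronecker-delta term of 1/2 when a pulsar is paired with itself; the function and variable names are illustrative, not from any pulsar-timing software package:

```python
import math

def hellings_downs(zeta, same_pulsar=False):
    """Expected angular correlation for a pulsar pair separated by
    angle zeta (radians), assuming an isotropic gravitational-wave
    background under general relativity."""
    x = (1.0 - math.cos(zeta)) / 2.0
    # x * ln(x) -> 0 as x -> 0 (pulsars in the same sky direction)
    x_ln_x = x * math.log(x) if x > 0.0 else 0.0
    gamma = 1.5 * x_ln_x - x / 4.0 + 0.5
    if same_pulsar:
        gamma += 0.5  # Kronecker-delta autocorrelation term
    return gamma

# Sample the curve at a few angular separations on the sky
for deg in (0.0, 49.2, 90.0, 180.0):
    print(f"{deg:6.1f} deg -> {hellings_downs(math.radians(deg)):+.4f}")
```

The sampled values show the curve's characteristic shape: it starts at +0.5 for coincident sky directions, crosses zero near 49.2°, dips negative around 90°, and recovers to +0.25 at 180°.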
References
External links
Pulsars
Functions of space and time
Equations of astronomy | Hellings-Downs curve | [
"Physics",
"Astronomy"
] | 828 | [
"Concepts in astronomy",
"Spacetime",
"Functions of space and time",
"Equations of astronomy"
] |
76,128,123 | https://en.wikipedia.org/wiki/Driver%20safety%20arms%20race | The driver safety arms race is phenomenon whereby car drivers are incentivized to buy larger auto-vehicles in order to protect themselves against other large auto-vehicles. This has a spiralling effect whereby cars get increasingly larger, which has adverse overall effects on traffic safety. It is an example of a prisoners' dilemma, as it can be individually rational to attain larger vehicles while having adverse outcomes on all traffic users.
References
Road safety
Moral psychology
Social psychology | Driver safety arms race | [
"Physics"
] | 92 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
76,128,461 | https://en.wikipedia.org/wiki/Tr%E1%BA%A7n%20%C4%90%E1%BB%A9c%20Thi%E1%BB%87p | Trần Đức Thiệp (; ) (born 1949) is a Vietnamese nuclear scientist and former deputy director of Vietnam Academy of Science and Technology. In 1992, he accidentally put his hand over an electron beam, leading to amputation of his right hand and International Atomic Energy Agency's investigation of safety practices of Hanoi's Institute for Nuclear Science and Technology.
Early life and career
Thiệp was born in Yên Hồ ward, Đức Thọ district, Hà Tĩnh province, with seven other siblings. His mother was in bad health making her unable to make income, and his father died when Thiệp was in fourth grade. To alleviate the financial situation, Thiệp frequented Vinh for summer break work. After graduating high school, he enrolled in Sofia University in Bulgaria, initially intending to graduate in geodesy, but he changed his discipline to physics and refined to nuclear physics in his second year in university. He cited Tzvetan Bonchev, a Bulgarian professor in nuclear physics, to be his inspiration for pursuing the discipline.
After graduating with a Master of Science in 1977, Thiệp was invited to be an assistant at the Sofia University's physics department. One year later, he returned to Vietnam and offered a position in the Nuclear Physics Department, Vietnam Institute of Physics. In this time, Vietnam was under a central planned economy, with multiple U.S. sanctions and in a state of general poverty. To quote from the Dân Trí magazine: "During the late 70s and early 80s, [...], the nuclear physics discipline was forgotten without investment [from the government], and there were no state-level projects or projects for the nuclear field." The nuclear physics department would later be spun off into the Institute for Nuclear Science and Technology as part of Vietnam Atomic Energy Commission.
Radiation accident
In 1982, the Soviet Union gifted the Institute two experimental microtron particle accelerators that had been used for ten years: a neutron generator and an electron accelerator. They are also the first two particle accelerators operated in Southeast Asia.
On 17 November 1992 Thiệp was employed as the director of the Vietnam National Centre for Scientific Research in Hanoi. During a routine task, he placed his hands into a particle accelerator to adjust a sample of gold ore. This adjustment would usually be done using compressed air, but Thiệp entered the room and adjusted the samples by hand. At the same time, his colleagues, mistakenly believing he had left the room to wash his hands with soap in a sink placed outside the containment room, switched the machine on. Thiệp was exposed to a beam current of 6 μA for between two and four minutes. As a result, he suffered severe tissue necrosis in his hands, requiring specialist treatment in Paris, and ultimately had to have his right hand amputated. Thiệp lost the fourth and fifth fingers on his left hand, which subsequently suffered chronic stiffness and radiation-induced fibrosis. He returned to work at the facility in Hanoi in 1994, after more than 600 days of treatment for acute radiation injuries.
In their report, the IAEA acknowledged the pressures that Vietnamese scientists, working in a developing country, were under. Nevertheless, they recommended better regulation of radiation safety and sweeping recommendations to all irradiation facilities regarding safety systems improvements, including automatic warning signals; emergency cut-out buttons; door interlock systems; a search and lock-up fail-safe system; area radiation monitoring and CCTV to ensure the irradiation room was empty before being switched on.
Later academic career
Thiệp's research centered around the Mössbauer effect and nuclear reaction mechanics. He had held a professorship at Russia's Joint Institute for Nuclear Research.
See also
List of civilian radiation accidents
Notes
References
Further reading
What if you put your hand in a particle accelerator? - a detailed account of the Hanoi radiation incident
1949 births
20th-century Vietnamese scientists
21st-century Vietnamese scientists
Living people
People from Hà Tĩnh province
Nuclear physicists
Vietnamese amputees | Trần Đức Thiệp | [
"Physics"
] | 811 | [
"Nuclear physicists",
"Nuclear physics"
] |
76,128,534 | https://en.wikipedia.org/wiki/WY%20Geminorum | WY Geminorum is a binary star system in the northern constellation of Gemini, abbreviated WY Gem. It has an apparent visual magnitude that ranges from 7.26 down to 7.51, which is too faint to be readily viewed with the naked eye. This system is located at a distance of approximately 6,300 light years from the Sun based on parallax measurements, and is receding with a radial velocity of 19.5 km/s.
In 1922, M. L. Humason found that the spectrum of HD 42474 matches that of an M-type star, but with the peculiarity of numerous bright lines. It appears similar to the spectrum of VV Cephei (HR 8383), an eclipsing binary. P. Swings and O. Struve in 1941 discovered emission lines in the ultraviolet spectrum. The presence of a B-type stellar spectrum from a companion star was confirmed by W. P. Bidelman in 1954. The primary spectrum matches an M-type supergiant star. A. Cowley in 1970 found evidence of an atmospheric eclipse of the companion. In 1981, A. Buzzoni determined the primary to be a semi-regular variable of the SRb type with a period of 169 days.
This is a double-lined spectroscopic binary star system with an approximate orbital period of and a high eccentricity (ovalness) of 0.61. The primary component is an M-type supergiant star with a stellar classification of M2Ia or b. The companion is most likely a hotter B-type main-sequence star with a class of B2V. The pair are classified as a VV Cephei-type star system. The companion may be accreting matter from the supergiant around the time of periastron passage, resulting in the formation of an intermittent accretion disk orbiting the hotter star. Radio emission has been detected, which is most likely coming from an ionized region in the stellar wind of the supergiant.
References
M-type supergiants
B-type main-sequence stars
Semiregular variable stars
Spectroscopic binaries
Astronomical radio sources
Gemini (constellation)
BD=+23 1243
042474
029425
Geminorum, WY | WY Geminorum | [
"Astronomy"
] | 466 | [
"Gemini (constellation)",
"Astronomical radio sources",
"Astronomical events",
"Constellations",
"Astronomical objects"
] |
76,129,348 | https://en.wikipedia.org/wiki/Emadike%20Shoreline%20project | The Emadike Shoreline project is an ecological intervention effort of the Federal Government of Nigeria to address the coastal and flooding challenges at Emadike and Epebu communities in the Ogbia Local Government Area of Bayelsa State, Nigeria. The project, which was executed through the Ecological Project Office of the Federal Government of Nigeria, involved an 850m shoreline protection with the use of sheet pile method to address ocean surge, while also sand filling up to 16 hectares of land as a reclamation strategy to arrest flood erosion in the community.
History of the Project
The Emadike Shoreline project was awarded in 2022 under the President Muhammadu Buhari administration of the Federal Government of Nigeria. The project was awarded to help address the perennial flooding issues facing the communities over the years.
References
Projects in Africa
Flood control | Emadike Shoreline project | [
"Chemistry",
"Engineering"
] | 162 | [
"Flood control",
"Environmental engineering"
] |
76,130,790 | https://en.wikipedia.org/wiki/Eliyahu%20Tamler | Eliyahu Tamler (simetimes rendered as Temler, ; August 25, 1919 – April 29, 1948) was a senior commander in the Irgun underground paramilitary group (also known as Etzel), who fought in the 1947–1948 civil war in Mandatory Palestine.
Biography
Eliyahu Tamler was born in 1919 in Zastavna, Kingdom of Romania (in present-day Ukraine), as Eduard Samuel Tamler, to Abraham and Sabina Tamler. His father died when he was 13 years old. He studied at a school in his hometown and at a gymnasium in the neighboring city of Cernăuți. His father was a Zionist activist. Eliyahu enrolled at the University of Cernăuți, but didn't graduate.
In 1939, he immigrated to Eretz Israel, then Mandate Palestine, as an immigrant on the ship Parita, which sailed from the port of Constanța on July 12, carrying 857 Beitar activists, and after 42 days it arrived at the coast of Tel Aviv. Upon his arrival, Tamler joined the Beitar groups operating in Mandate Palestine. He later became part of the Irgun underground and was active in its ranks. In 1942, Tamler was arrested by the British Mandate authorities in Haifa and sent to a detention camp near Mizra, from where he was released in March 1943. After his release, he was appointed commander of the Irgun's operational activity in the Petah Tikva District and later in Tel Aviv District. As part of this role, he planned and carried out a large number of actions against the British authorities, such as the capture of a truck carrying explosives en route to the Migdal Tzedek quarries, the blowing up of communication pillars in the Petah Tikva area (1945), the capture of weapons from the British Army's warehouses in Sarafand, the attack on the immigration and customs offices in Haifa (February 12, 1944), the blowing up of oil pipelines (May 1945), the capture of explosives from the Solel Boneh Company warehouses (August 1945), the attack on the Haifa police building (September 29, 1945) and the attack on the Lydda railway station during the Night of the Trains (1 November 1945).
On April 2, 1946, he was caught by the British carrying a weapon, sentenced to seven years in prison and detained in the Jerusalem Central Prison. On February 20, 1948, together with 11 other Irgun and Lehi prisoners, he managed to dig a tunnel and escape from prison. After his escape, he was appointed to a central role in the Irgun's national headquarters.
On April 29, 1948, Tamler was one of the Etzel commanders in the Battle of Jaffa. During this operation, he was hit by a British bombshell and killed. Menachem Begin, who knew him closely, praised him, saying that "Eliyahu is gone - there was nobody better in the organization than him". He was buried in the Nahalat Yitzhak Cemetery.
References
1919 births
1948 deaths
Bukovina Jews
Israeli military personnel killed in the 1948 Arab–Israeli War
Escapees from British military detention
Romanian escapees
Romanian Zionists
Romanian people of Ukrainian-Jewish descent
Romanian emigrants to Mandatory Palestine
Irgun members
Betar members
People from Chernivtsi Oblast
People convicted by British military courts
People convicted of illegal possession of weapons
Prisoners and detainees of Mandatory Palestine
Deaths from explosion
Members of Aliyah Bet | Eliyahu Tamler | [
"Chemistry"
] | 710 | [
"Deaths from explosion",
"Explosions"
] |
76,130,972 | https://en.wikipedia.org/wiki/Play%20equity | Play equity is the concept of ensuring all children have equitable access to play opportunities, sports programs and healthy movement. Youth sports, as well as structured and unstructured play, can contribute to the physical, emotional, social and academic development of young people. Play equity relates to the ongoing effort of providing opportunities and pathways to these benefits by removing barriers to access that can include economics, gender, ability, or where children reside.
This concept is predicated on the relationship between positive youth development and physical activity, commonly referred to in the United States as sports-based youth development. Play equity broadly encompasses the issue of a lack of access to resources – including facilities, trained coaches, organized programs and equipment – in underrepresented or lower-income communities.
Origins
The term “play equity” was conceptualized and introduced by LA84 Foundation President & CEO Renata Simril, who proposed that the lack of access to play for youth be confronted as a social justice issue.
Simril also set in motion an effort to address the disparities in access through collective impact by collaborating with government entities, pro sports teams, philanthropy, educational institutions, youth sports organizations and health providers to ensure children's access to play for their lifelong well-being.
The term play equity has since been adopted as part of growing movement to provide more youth with the opportunity to realize the benefits of sport and play for their future success and well-being, by organizations in youth sports, government, pro sports, the national media and others.
In the United Nations Rights of the Child, Article 31 declares that play is the right of all children. It recognizes "the right of the child to rest and leisure, to engage in play and recreational activities appropriate to the age of the child and to participate freely in cultural life and the arts."
History
Youth ages 6-18 from low-income homes in the United States quit sports because of the financial costs at six times the rate of kids from high-income homes.
In the United States, participation studies find that kids from households that earn below $25,000 are five times less likely to participate in sports than children from more affluent homes. Black and Latino youth are twice as likely to reside in areas with subpar park space per capita. And 80 percent of young people, many in poor communities, do not meet federal guidelines for daily physical activity.
Advocates addressing youth sports and the barriers to access have witnessed the trend grow significantly over the last decade, with participation rates for healthy levels of activity for kids continuing to fall.
Part of the rise of the gap between children from less affluent households and those from families with higher incomes is the rise of youth sports as a lucrative industry while at the same time budget cuts and changing priorities at some public schools have also limited traditional physical education classes and school-based organized sports.
In this environment where schools have been forced to defund sports enrichment programs, profits for the youth sports industry in the United States have been projected to reach $77.6 billion by 2026. Many parents cannot afford private team registrations, travel, apparel, equipment and other expenses related to sports participation.
Work and applications
Those working to achieve play equity broadly seek to eliminate the barriers to participation in quality youth sports, play and healthy movement experiences by kids in low-income communities, so every child, regardless of their social or economic circumstances, or their ability, have access to a positive youth sports experience.
An initial step was the LA84 Foundation establishing the Play Equity Fund, a 501(c)(3) public charity, in 2014 with the mission to address the growing inequities in youth sport, to drive social change and create more collaboration regionally and nationally.
In 2020 following the murder of George Floyd, the Play Equity Fund created a partnership with 11 pro sports teams in Los Angeles and Orange County, titled the Alliance, where the teams merged their resources in a five-year effort to aid underserved Black and Latino children through sports. In 2023, Angel City Football Club began playing as an NWSL club in Los Angeles and joined the Alliance to bring the collaboration to 12 teams in Southern California.
This model of pro sports organizations supporting play equity was followed in Seattle in 2023 with the Seattle Alliance For Play Equity.
In 2022, the Play Equity Fund and Nike partnered to address play equity for girls in the communities of Boyle Heights and Watts. By 2023, the partnership had supported 1,000 girls, trained 130 coaches and awarded $770,000 in grants to 13 partner organizations in Boyle Heights and Watts.
The King County Play Equity Coalition was established in Seattle, a network of organizations dedicated to challenging systems to center physical activity as a key part of health and youth development. Milwaukee has established a play equity-based model for their fields and neighborhood sports facilities.
The Philadelphia Youth Sports Collaborative also uses a play equity model to bring all children high-quality, sports-based youth development programs in schools and neighborhoods.
References
Play (activity)
Children's health | Play equity | [
"Biology"
] | 1,008 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
76,131,781 | https://en.wikipedia.org/wiki/Exidia%20umbrinella | Exidia umbrinella is a species of fungus in the family Auriculariaceae. Basidiocarps (fruit bodies) are gelatinous, orange-brown, and turbinate (top-shaped). It grows on dead attached twigs and branches of conifers in Europe.
Taxonomy
The species was originally found growing on larch and fir in Hungary and Italy and was described in 1900 by Italian mycologist Giacomo Bresadola.
Description
Exidia umbrinella forms yellowish to reddish brown, gelatinous fruit bodies that are button-shaped at first, becoming more pendulous with age, and around 2.5 cm (1 in) across. The fruit bodies typically grow gregariously, but do not normally coalesce. The upper, spore-bearing surface is smooth becoming slightly furrowed. Fruit bodies are attached to the wood at a point, sometimes with a short stem. The spore print is white.
Microscopic characters
The microscopic characters are typical of the genus Exidia. The basidia are ellipsoid and septate. The spores are weakly allantoid (sausage-shaped), 11 to 18 by 3 to 5 μm.
Similar species
Fruit bodies of Exidia repanda and E. recisa are similar, but occur on broad leaved trees. Fruit bodies of E. saccharina are similarly coloured and occur on conifers, but typically coalesce to form large, irregular clumps.
Habitat and distribution
Exidia umbrinella is a wood-rotting species, typically found on dead attached twigs and branches. It was originally described from fir (Abies species) and larch (Larix species), but has also been reported on spruce and pine (Picea and Pinus species). It has been recorded from Andorra, Austria, France, Germany, Hungary, Italy, Liechtenstein, North Macedonia, Poland, and Ukraine.
References
Fungi described in 1900
Taxa named by Giacomo Bresadola
Fungi of Europe
Auriculariales
Fungus species | Exidia umbrinella | [
"Biology"
] | 416 | [
"Fungi",
"Fungus species"
] |
77,559,041 | https://en.wikipedia.org/wiki/Siemens%20locomotive%20of%201879 | The Siemens locomotive of 1879 was the first electric passenger train. It was first presented at the Berlin Trade Exhibition 1879 by Werner von Siemens and transported exhibition visitors on an oval track.
History
Werner von Siemens, ennobled in 1888, developed an electric generator, based on the dynamo-electric principle, which he patented in 1866. On January 17, 1867, he gave a lecture before the Berlin Academy of Sciences, in which he provided the first scientific description of the dynamo-electric principle. Based on these foundations, electric motors were built, initially used in stationary applications.
Siemens then tried to use the electric motor in vehicles and had his designer Hemming Wesslau design an electric locomotive for the Senftenberger Stadtgrube Marie III. However, the proposed design from July 1878, which aimed to drive the locomotive with two rubber discs on an iron band in the center, and subsequent adjustments did not convince the mine owner Carl Westphal. He doubted the reliability, was concerned about the investment costs, and could not reach an agreement with Siemens. No locomotive was built from these initial designs.
Siemens had the design of his electric locomotive further revised. Now the axles were driven via a gear transmission, and the iron band was used for power transmission. Based on a design from October 1878, a locomotive was built to be presented to the public at the Berlin Trade Exhibition 1879. This exhibition took place on the ULAP site at the Lehrter Bahnhof. A circular track about 300 meters long was built in the machine hall near the entrance. The small locomotive had three open passenger cars, each for six people. At the exhibition opening, Werner Siemens personally presented his development on May 31, 1879.
The electric railway was a highlight and a major attraction during the four-month-long Berlin Trade Exhibition. By September 30, 1879, a total of 86,398 people had been transported at a speed of 7 km/h.
The locomotive was operated with a direct current of 150 V. The electrical energy was transmitted via the two rails and an insulated iron band positioned in between. It had a rotor mounted along the longitudinal axis and side-mounted excitation coils. The track had a track gauge of 490 mm.
In the following years, this electric exhibition railway was also demonstrated in other cities. Interest was so great that Siemens had to build several similar railways. Exhibition locations included Brussels (for the 50th anniversary of Belgian independence in the Parc du Cinquantenaire 1880), London Sydenham (Crystal Palace 1880), Frankfurt am Main (General Patent and Sample Protection Exhibition in the Palmengarten 1881), Copenhagen (Tivoli Park 1882), and Moscow (All-Russian Industrial and Handicraft Exhibition 1882).
An original preserved locomotive is part of the collection at the Deutsches Museum in Munich. The German Museum of Technology in Berlin has a replica of the locomotive on display.
References
History
History of electrical engineering | Siemens locomotive of 1879 | [
"Engineering"
] | 598 | [
"Electrical engineering",
"History of electrical engineering"
] |
77,560,587 | https://en.wikipedia.org/wiki/NGC%205875 | NGC 5875 is a spiral galaxy in the constellation of Boötes. Its velocity with respect to the cosmic microwave background is 3585 ± 6 km/s, which corresponds to a Hubble distance of 52.87 ± 3.70 Mpc (∼173 million light-years). It was discovered by German-British astronomer William Herschel on 1 May 1788.
The SIMBAD database lists NGC 5875 as a Seyfert II Galaxy, i.e. it has a quasar-like nucleus with very high surface brightness whose spectrum reveals strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable.
Two supernovae have been observed in NGC 5875:
SN 2022oqm (type Ic-pec, mag 17.3) was discovered by the Zwicky Transient Facility on 11 July 2022. This supernova has been described as one of the brightest calcium-rich supernovae known.
SN 2023ldh (type IIn, mag 20.7423) was discovered by the Zwicky Transient Facility on 28 May 2023.
See also
List of NGC objects (5001–6000)
References
External links
5875
054095
+09-25-027
09745
15077+5243
Boötes
Astronomical objects discovered in 1788
Discoveries by William Herschel
Unbarred spiral galaxies | NGC 5875 | [
"Astronomy"
] | 288 | [
"Boötes",
"Constellations"
] |
77,561,753 | https://en.wikipedia.org/wiki/Fascial%20Net%20Plastination%20Project | The Fascial Net Plastination Project is an anatomical research initiative established in 2018 aimed at plastinating and studying the human fascial network. The collaboration was initiated by Robert Schleip as a joint effort between Body Worlds, Fascia Research Group, and the Fascia Research Society. The project focuses on preserving the fascia, a complex connective tissue network that plays a crucial role in the human body's structure and function.
One outcome of this three-year project is the creation of the world's first 3-D representation of the fascial network of a whole human body, named FR:EIA (Fascia Revealed: Educating Interconnected Anatomy), which is on display at the Body Worlds museum in Berlin, Germany.
Origination and objectives
The project was conceived to provide a comprehensive and tangible understanding of the fascial system through plastination. This technique, developed by Gunther von Hagens, involves replacing water and fat in biological tissues with polymers to create durable, lifelike specimens. The specific goals of the project include:
Enhancing Educational Outreach: By creating detailed and durable plastinated specimens of the fascial net, the project aims to elevate the anatomical education of medical professionals and the general public.
Advancing Research: Detailed anatomical studies of plastinated fascia specimens facilitate a deeper understanding of its structure and function.
Public Exhibitions: Specimens from the project are displayed in Body Worlds exhibitions worldwide, providing an unprecedented view of the human fascial system.
Background
The fascia is a band or sheet of connective tissue, primarily collagen, that supports and surrounds muscles, bones, nerves, and blood vessels. It extends from head to toe without interruption. Recent studies have highlighted the fascia's significance in movement, stability, and overall bodily function, debunking the previous notion of fascia being merely passive tissue.
Plastination is a technique used in anatomy to preserve bodies or body parts, first developed by Gunther von Hagens in 1977. The process involves replacing water and fat in tissues with plastics, resulting in specimens that can be touched, do not smell or decay, and retain most properties of the original sample.
The standard process of plastination consists of four steps: fixation, dehydration, forced impregnation in a vacuum, and hardening. Fixation, often using a formaldehyde-based solution, serves to prevent decomposition and maintain the shape of the specimen. This can be particularly useful for time-consuming dissections intended to highlight specific anatomical features.
Following fixation, the specimen is submerged in a bath of acetone at temperatures ranging from −20 to −30 °C. The acetone, which is renewed multiple times over six weeks, replaces the water in the cells. In the third step, the specimen is placed in a bath of liquid polymer, such as silicone rubber, polyester, or epoxy resin. Under a partial vacuum, the acetone is vaporized and replaced by the liquid polymer, filling the cells with plastic.
The final step involves curing the plastic with gas, heat, or ultraviolet light to harden the specimen. The resulting plastinates, which can range from entire human bodies to small pieces of animal organs, can be further manipulated and positioned before the polymer chains are fully hardened.
Overview
In January 2018, the Fascia Research Society, Somatics Academy, the Plastinarium, and Body Worlds embarked on a collaborative journey to create the world's first 3D representation of the fascial network of a whole human body via plastination. Directed by fascia research scientist Robert Schleip and professor of anatomy Carla Stecco, with the assistance of clinical anatomist John Sharkey and support from several other experts, the project took place in Guben, Germany, at the renowned Plastinarium under the direction of Dr. Vladimir Chereminskiy.
The project was supported by a Scientific Advisory Board consisting of Vladimir Chereminskiy, Gil Hedley, Thomas W. Myers, John Sharkey, Robert Schleip, Carla Stecco, Jaap Van der Wal, Gunther von Hagens, Angelina Whalley.
The first ten plastinated specimens from this project, demonstrating the fascial architecture of selected body regions, were exhibited for the first time at the Fifth International Fascia Research Congress in Berlin, Germany, on November 14 and 15, 2018, in an exhibition titled "Fascia in a NEW LIGHT: The Exhibition."
Phase one
The first phase began in January 2018 with a team of scientists, academics, and anatomy enthusiasts. Several formalin-fixed specimens were dissected to illustrate fascial structures from superficial fascia/subcutaneous tissues, including the abdomen, arm, and lower limb. Additionally, several deep fascia structures were dissected, such as the fascia lata, a 5 cm cross-section of the thigh, a 5 cm cross-section of the leg, the fibrous pericardium with the respiratory diaphragm, and the lumbodorsal fascia.
These specimens went through the first two stages of plastination; soaking in high and low temperature baths to replace water with acetone and dissolve fats, followed by another bath to replace acetone with plastic polymer. These stages typically take up to six months depending on the size of the specimen.
Phase two
In June 2018, the team returned to Guben to position the specimens. Now infused with silicone rubber, the specimens were still supple and could be positioned back into their original shapes. The team created forms to support the soft specimens so they could undergo the final stage of gas curing to harden them into durable plastinates ready for exhibition.
During this phase, additional dissections were undertaken, including a second attempt at the lumbodorsal fascia, a 10 cm cross-section of the abdomen, the deep fascia of the arm, and an anterior prosection of the pelvis.
Phase three
The third phase aimed to create a full-body fascia plastinate for exhibition at the Sixth International Fascia Research Congress in Montreal, Canada, in 2021. This phase involved complex decisions on how best to dissect and display the fascial structures in a meaningful way. This plastinate has now become a major highlight at the Body Worlds museum in Berlin.
A collection of ten plastinated specimens from this project showing fascial architecture of selected human body regions was given as a long-term loan to the University of Padova in Italy in 2023, where the collection is currently displayed for the purpose of educating medical students at the entrance hall of the Department of Neuroscience.
Techniques and methodologies
The project employs advanced plastination techniques to preserve the intricate details of the fascial network. This involves a meticulous process where water and lipids in biological tissues are replaced with curable polymers like silicone, epoxy, or polyester, resulting in odorless, durable, and anatomically precise specimens. These plastinates are then used for educational and research purposes, showcasing the complexity and functionality of fascia.
Scientific significance
The plastination of the fascial net has significant implications for both medical research and education. It allows for detailed examination of the fascia's role in musculoskeletal health, its contribution to proprioception, and its involvement in various medical conditions. The project has provided critical insights into how fascia affects movement, stability, and overall physical health, thus influencing treatment approaches in physiotherapy, sports medicine, and surgery.
Controversies and ethical considerations
Plastination, while groundbreaking, has not been without controversy. Ethical concerns have been raised regarding the sourcing of bodies for plastination and the display of human remains in public exhibitions. The Fascial Net Plastination Project however, adheres to strict ethical guidelines, ensuring that all specimens are sourced from legally and ethically approved donations, with explicit consent from donors or their families.
Presentation and reception
The project was prominently featured at the 2021 Fascia Research Congress. This presentation included detailed discussions on the techniques used, the scientific findings from the plastinated specimens, and their applications in medical education and research. The project received considerable attention from the scientific community for its innovative approach to studying fascia and its potential to revolutionize anatomical science.
FR:EIA was officially unveiled on November 24, 2021, during a webinar with more than 1,000 participants, marking the start of its permanent display at the Body Worlds museum in Berlin, Germany.
Impact and future directions
The Fascial Net Plastination Project has already made significant contributions to the field of fascia research. By providing a durable and detailed representation of the fascial network, it has enhanced the understanding of this critical component of human anatomy. Future directions for the project include expanding the range of specimens, refining plastination techniques, and fostering international collaborations to further explore the clinical implications of fascia.
External links
The Plastination Project
FR:EIA at Body Worlds Museum
References
Anatomy
Medical education
Anatomical preservation
Exhibitions
Human anatomy
Biological techniques and tools
Educational projects
Science in society
Medical research
2018 establishments | Fascial Net Plastination Project | [
"Biology"
] | 1,874 | [
"Anatomy",
"nan"
] |
77,562,364 | https://en.wikipedia.org/wiki/NGC%204734 | NGC 4734 is a spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background is 7835 ± 23 km/s, which corresponds to a Hubble distance of 115.56 ± 8.10 Mpc (∼377 million light-years). It was discovered by British astronomer John Herschel on 7 April 1828.
The SIMBAD database lists NGC 4734 as a LINER-type active galaxy nucleus, i.e. a galaxy whose nucleus has an emission spectrum characterized by broad lines of weakly ionized atoms.
One supernova has been observed in NGC 4734: SN 2024gvc (type Ic, mag 19.7178) was discovered by the Zwicky Transient Facility on 17 April 2024.
See also
List of NGC objects (4001–5000)
References
External links
4734
043525
+01-33-019
07998
12486+0507
Virgo (constellation)
18280407
Discoveries by John Herschel
Spiral galaxies | NGC 4734 | [
"Astronomy"
] | 207 | [
"Virgo (constellation)",
"Constellations"
] |
77,565,726 | https://en.wikipedia.org/wiki/Exidia%20aeruginosa | Exidia aeruginosa is a species of fungus in the family Auriculariaceae. Basidiocarps (fruit bodies) are gelatinous, brown to bluish green, and occur on twigs. The species is currently known only from Jamaica.
References
Auriculariales
Fungus species
Fungi described in 2006
Fungi of the Caribbean | Exidia aeruginosa | [
"Biology"
] | 71 | [
"Fungi",
"Fungus species"
] |
77,566,185 | https://en.wikipedia.org/wiki/NGC%207466 | NGC 7466 is a spiral galaxy in the constellation of Pegasus. Its velocity with respect to the cosmic microwave background is 7160 ± 25 km/s, which corresponds to a Hubble distance of 105.60 ± 7.40 Mpc (∼344 million light-years). It was discovered by French astronomer Édouard Stephan on 20 September 1873. It was independently rediscovered by the French astronomer Guillaume Bigourdan on 19 November 1895 and listed as IC 5281 in the Index Catalogue.
NGC 7466 is listed as a Seyfert II Galaxy, i.e. it has a quasar-like nucleus with very high surface brightnesses whose spectra reveal strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable.
NGC 7466 is a galaxy with a nucleus that has excessive amounts of ultraviolet emissions, and is thus listed in the Markarian Galaxy Catalog as Mrk 1127.
One supernova has been observed in NGC 7466: SN 2023uu (type Ia, mag 20.1) was discovered by The Young Supernova Experiment (YSE) on 15 January 2023.
See also
List of NGC objects (7001–7840)
References
External links
7466
IC objects
070299
+04-54-017
12319
22596+2647
Pegasus (constellation)
18730920
Discoveries by Édouard Stephan
Spiral galaxies
Markarian galaxies
Seyfert galaxies | NGC 7466 | [
"Astronomy"
] | 296 | [
"Pegasus (constellation)",
"Constellations"
] |
77,566,678 | https://en.wikipedia.org/wiki/NGC%202283 | NGC 2283 is a barred spiral galaxy in the constellation of Canis Major. Its velocity with respect to the cosmic microwave background is 994 ± 11 km/s, which corresponds to a Hubble distance of 14.66 ± 1.04 Mpc (∼48 million light-years). It was discovered by German-British astronomer William Herschel on 6 February 1785.
NGC 2283 forms a physical pair with the galaxy IC 2171, collectively named RR 140, with an optical separation between them.
SIMBAD lists NGC 2283 as an active galaxy nucleus candidate.
One supernova has been observed in NGC 2283: SN 2023axu (type II, mag 15.64) was discovered by the Distance Less Than 40 Mpc Survey (DLT40) on 28 January 2023.
Image gallery
See also
List of NGC objects (2001–3000)
References
External links
2283
019562
-03-18-002
557-13
06436-1809
Canis Major
17850206
Discoveries by William Herschel
Barred spiral galaxies | NGC 2283 | [
"Astronomy"
] | 217 | [
"Canis Major",
"Constellations"
] |
77,566,712 | https://en.wikipedia.org/wiki/Foslevodopa | Foslevodopa is a medication which acts as a prodrug for levodopa, originally invented in the 1980s but not developed for medical use at that time. It is approved for use in a subcutaneous infusion as a fixed-dose combination with foscarbidopa for the treatment of Parkinson's disease, under the brand name Vyalev.
References
Amino acids
Antiparkinsonian agents
Catecholamines
Dopamine agonists
Monoamine precursors
Organophosphates
Prodrugs | Foslevodopa | [
"Chemistry"
] | 111 | [
"Amino acids",
"Biomolecules by chemical classification",
"Chemicals in medicine",
"Prodrugs"
] |
77,566,729 | https://en.wikipedia.org/wiki/Foscarbidopa | Foscarbidopa is a medication which acts as a prodrug for carbidopa. It is used in a subcutaneous infusion as a fixed-dose combination with foslevodopa for the treatment of Parkinson's disease, under the brand name Vyalev.
References
Prodrugs
Organophosphates
Hydrazines
Amino acids | Foscarbidopa | [
"Chemistry"
] | 76 | [
"Amino acids",
"Biomolecules by chemical classification",
"Chemicals in medicine",
"Prodrugs"
] |
77,567,102 | https://en.wikipedia.org/wiki/Red%20Data%20Book%20of%20the%20Republic%20of%20Bulgaria | Red Data Book of the Republic of Bulgaria () consists of detailed publications that catalog the status of endangered and threatened species in the country. These books play an important role in the conservation of biodiversity by identifying species at risk of extinction and documenting their current status. They provide thorough information on various species, as well as their habitats, threats, and the conservation measures needed to protect them. Additionally, they serve as educational resources, raising awareness about the importance of protecting Bulgaria's biodiversity. Compiled by scientists, researchers, and conservationists, the Red Data Books are used by environmental organizations, policymakers, and academics to support and implement conservation initiatives.
History of Red Data Books
The history of the Red Books can be traced back to the International Union for Conservation of Nature (IUCN), which conceived the idea in the 1960s. The Red Data Book was created as a comprehensive tool to assess and document species at risk of extinction. It aimed to provide detailed information on species’ populations, distribution, and the threats they faced.
Initially, the Red Data Book focused on global species, serving as a resource for identifying conservation priorities and guiding protective measures. Over time, the concept expanded to include national versions, focusing on the specific needs and conditions of individual countries. These national Red Data Books adapted the global framework to local contexts, offering understandings of regional species and their conservation status.
See also
Geography of Bulgaria
List of protected areas of Bulgaria
List of mammals of Bulgaria
List of birds of Bulgaria
List of reptiles of Bulgaria
List of amphibians of Bulgaria
References
Ecology literature
Conservation biology
Books about Bulgaria
Bulgarian books
IUCN Red List | Red Data Book of the Republic of Bulgaria | [
"Biology"
] | 320 | [
"Conservation biology"
] |
77,567,130 | https://en.wikipedia.org/wiki/NGC%205936 | NGC 5936 is a barred spiral galaxy in the constellation of Serpens. Its velocity with respect to the cosmic microwave background is 4131 ± 11 km/s, which corresponds to a Hubble distance of 60.93 ± 4.27 Mpc (∼199 million light-years). It was discovered by German-British astronomer William Herschel on 12 April 1784.
NGC 5936 is listed as a luminous infrared galaxy (LIRG), and as a field galaxy, i.e. one that does not belong to a larger galaxy group or cluster and hence is gravitationally alone.
Supernovae
Two supernovae have been observed in NGC 5936:
SN 2013dh (type Ia, mag 18) was discovered by the Lick Observatory Supernova Search (LOSS) on 12 June 2013.
SN 2023awp (type IIn, mag 19.58) was discovered by the Zwicky Transient Facility on 27 January 2023.
See also
List of NGC objects (5001–6000)
References
External links
5936
055255
+02-39-030
09867
15276+1309
Serpens
17840412
Discoveries by William Herschel
Barred spiral galaxies
Luminous infrared galaxies
Field galaxies | NGC 5936 | [
"Astronomy"
] | 251 | [
"Constellations",
"Serpens"
] |
77,567,287 | https://en.wikipedia.org/wiki/Lichenostigma%20svandae | Lichenostigma svandae is a species of lichenicolous (lichen-dwelling) fungus in the family Phaeococcomycetaceae. It was described as a new species in 2007 by Jan Vondrák and Jaroslav Šoun. The authors collected the type specimen from a limestone hill in the protected area Karadagskyj Zapovednik (Feodosia, Crimea) at an elevation of about ; it was growing on the thalli and apothecia (fruiting bodies) crustose lichen species Acarospora cervina, which was itself growing on sun-exposed limestone rock. The species epithet honours bus driver
Jaroslav Švanda, who drove the bus to the excursion where the type was collected.
References
Arthoniomycetes
Fungus species
Fungi described in 2007
Fungi of Europe
Lichenicolous fungi
Taxa named by Jan Vondrák | Lichenostigma svandae | [
"Biology"
] | 186 | [
"Fungi",
"Fungus species"
] |
77,567,392 | https://en.wikipedia.org/wiki/Low-gravity%20process%20engineering | Low-gravity process engineering is a specialized field that focuses on the design, development, and optimization of industrial processes and manufacturing techniques in environments with reduced gravitational forces. This discipline encompasses a wide range of applications, from microgravity conditions experienced in Earth orbit to the partial gravity environments found on celestial bodies such as the Moon and Mars.
As humanity extends its reach beyond Earth, the ability to efficiently produce materials, manage fluids, and conduct chemical processes in reduced gravity becomes crucial for sustained space missions and potential colonization efforts. Furthermore, the unique conditions of microgravity offer opportunities for novel materials and pharmaceuticals that cannot be easily produced on Earth, potentially leading to groundbreaking advancements in various industries.
The historical context of low-gravity research dates back to the early days of space exploration. Initial experiments conducted during the Mercury and Gemini programs in the 1960s provided the first insights into fluid behavior in microgravity. Subsequent missions, including Skylab and the Space Shuttle program, expanded our understanding of materials processing and fluid dynamics in space. The advent of the International Space Station (ISS) in the late 1990s marked a significant milestone, providing a permanent microgravity laboratory for continuous research and development in low-gravity process engineering.
Fundamentals of low-gravity environments
Low-gravity environments, encompassing both microgravity and reduced gravity conditions, exhibit unique characteristics that significantly alter physical phenomena compared to Earth's gravitational field. These environments are typically characterized by gravitational accelerations ranging from about 10⁻⁶ g to 10⁻³ g, where g represents Earth's standard gravitational acceleration of approximately 9.81 m/s².
Microgravity, often experienced in orbiting spacecraft, is characterized by the near absence of perceptible weight. In contrast, reduced gravity conditions, such as those on the Moon (about 0.17 g) or Mars (about 0.38 g), maintain a fractional gravitational pull relative to Earth.
These environments differ markedly from Earth's gravity in several key aspects:
Absence of natural convection: On Earth, density differences in fluids due to temperature gradients drive natural convection. In microgravity, this effect is negligible, leading to diffusion-dominated heat and mass transfer.
Surface tension dominance: Without the overwhelming force of gravity, surface tension becomes a dominant force in fluid behavior, significantly affecting liquid spreading and containment.
Particle suspension: In low-gravity environments, particles in fluids remain suspended for extended periods, as sedimentation and buoyancy effects are minimal.
Effects of low-gravity conditions on various physical processes
Fluid dynamics
In microgravity, fluid behavior is primarily governed by surface tension, viscous forces, and inertia. This leads to phenomena such as large stable liquid bridges, spherical droplet formation, and capillary flow dominance. The absence of buoyancy-driven convection alters mixing processes and phase separations, necessitating alternative methods for fluid management in space applications.
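The crossover from gravity-dominated to surface-tension-dominated fluid behavior is conventionally gauged by the Bond number, Bo = ρ g L² / σ. A rough sketch, using assumed illustrative properties for water at room temperature (not values from the source):

```python
def bond_number(rho, g, length, sigma):
    """Bond number: ratio of gravitational to surface-tension forces.
    Bo >> 1 means gravity dominates; Bo << 1 means surface tension dominates."""
    return rho * g * length**2 / sigma

# Illustrative (assumed) properties of water at room temperature:
RHO_WATER = 1000.0   # kg/m^3, density
SIGMA_WATER = 0.072  # N/m, water-air surface tension

L = 0.01  # m, characteristic length of a 1 cm droplet
bo_earth = bond_number(RHO_WATER, 9.81, L, SIGMA_WATER)
bo_micro = bond_number(RHO_WATER, 9.81e-6, L, SIGMA_WATER)  # ~1e-6 g

print(f"{bo_earth:.1f}")   # ~13.6: gravity flattens a 1 cm droplet on Earth
print(f"{bo_micro:.2e}")   # ~1.36e-05: surface tension pulls it spherical in orbit
```

This is why centimetre-scale liquid bridges and near-perfect spherical droplets, impossible on Earth, are stable in microgravity.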
Heat transfer
The lack of natural convection in microgravity significantly impacts heat transfer processes. Conduction and radiation become the primary modes of heat transfer, while forced convection must be induced artificially. This alteration affects cooling systems, boiling processes, and thermal management in spacecraft and space-based manufacturing.
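Whether buoyancy-driven convection occurs at all can be estimated with the Rayleigh number, Ra = g β ΔT L³ / (ν α); for a fluid layer heated from below, convection sets in only above a critical value of roughly 1708. A sketch with assumed illustrative properties for air near room temperature (not values from the source):

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Rayleigh number for a fluid layer with temperature difference delta_t."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Illustrative (assumed) properties of air near room temperature:
BETA = 1 / 300     # 1/K, thermal expansion coefficient
NU = 1.5e-5        # m^2/s, kinematic viscosity
ALPHA = 2.0e-5     # m^2/s, thermal diffusivity
RA_CRITICAL = 1708  # approximate onset of convection, layer heated from below

# A 10 cm air layer with a 10 K temperature difference:
ra_earth = rayleigh_number(9.81, BETA, 10, 0.1, NU, ALPHA)
ra_micro = rayleigh_number(9.81e-6, BETA, 10, 0.1, NU, ALPHA)  # ~1e-6 g

print(ra_earth > RA_CRITICAL)  # True: convection stirs the layer on Earth
print(ra_micro > RA_CRITICAL)  # False: heat moves only by conduction/radiation
```

The six-order-of-magnitude drop in g pushes Ra far below the critical value, which is why orbital thermal management must rely on conduction, radiation, and artificially forced convection.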
Material behavior
Low-gravity environments offer unique conditions for materials processing. The absence of buoyancy-driven convection and sedimentation allows for more uniform crystal growth and the formation of novel alloys and composites. Additionally, the reduced mechanical stresses in microgravity can lead to changes in material properties and behavior, influencing fields such as materials science and pharmaceutical research.
Challenges
Low-gravity process engineering faces a number of challenges that require innovative solutions and adaptations of terrestrial technologies. These challenges stem from the unique physical phenomena observed in microgravity and reduced gravity environments.
Fluid management issues
The absence of buoyancy and the dominance of surface tension in low-gravity environments significantly alter fluid behavior, presenting several challenges:
Liquid-gas separation: Without buoyancy, separating liquids and gases becomes difficult, affecting processes such as fuel management and life support systems.
Capillary effects: Surface tension dominance leads to unexpected fluid migrations and containment issues, requiring specialized designs for fluid handling systems.
Bubble formation and coalescence: In microgravity, bubbles tend to persist and coalesce more readily, potentially disrupting fluid processes and heat transfer mechanisms.
Heat transfer limitations
The lack of natural convection in low-gravity environments poses significant challenges for heat transfer processes:
Reduced convective heat transfer: Without buoyancy-driven flows, heat transfer becomes primarily dependent on conduction and radiation, potentially leading to localized hot spots and thermal management issues.
Boiling and condensation: These phase change processes behave differently in microgravity, affecting cooling systems and thermal management strategies.
Temperature gradients: The absence of natural mixing can result in sharp temperature gradients, impacting reaction kinetics and material processing.
Material handling and containment difficulties
Low-gravity environments present unique challenges in manipulating and containing materials:
Particle behavior: Without settling due to gravity, particles tend to remain suspended and disperse differently, affecting filtration, separation, and mixing processes.
Liquid containment: Surface tension effects can cause liquids to adhere unexpectedly to container walls, complicating storage and transfer operations.
Phase separation: The lack of density-driven separation makes it challenging to separate immiscible fluids or different phases of materials.
Equipment design considerations
Designing equipment for low-gravity operations requires addressing several unique factors:
Mass and volume constraints: Space missions have strict limitations on payload mass and volume, necessitating compact and lightweight designs.
Automation and remote operation: Many processes must be designed for autonomous or remote operation due to limited human presence in space environments.
Reliability and redundancy: The inaccessibility of space environments demands highly reliable systems with built-in redundancies to mitigate potential failures.
Microgravity-specific mechanisms: Equipment must often incorporate novel mechanisms to replace gravity-dependent functions, such as pumps for fluid transport or centrifuges for separation processes.
Multi-functionality: Due to resource constraints, equipment is often designed to serve multiple purposes, increasing complexity but reducing overall payload requirements.
Addressing these challenges requires interdisciplinary approaches, combining insights from fluid dynamics, heat transfer, materials science, and aerospace engineering. As research in low-gravity process engineering progresses, new solutions and technologies continue to emerge, expanding the possibilities for space-based manufacturing and resource utilization.
Key areas
Fluid processing
Multiphase flow behavior in microgravity differs substantially from terrestrial conditions. The absence of buoyancy-driven phase separation leads to complex flow patterns and phase distributions. These phenomena affect heat transfer, mass transport, and chemical reactions in multiphase systems, necessitating novel approaches to fluid management in space.
Boiling and condensation processes are fundamentally altered in microgravity. The lack of buoyancy affects bubble dynamics, heat transfer coefficients, and critical heat flux. Understanding these changes is crucial for designing efficient thermal management systems for spacecraft and space habitats.
Capillary flow and wetting phenomena become dominant in low-gravity environments. Surface tension forces drive fluid behavior, leading to unexpected liquid migrations and containment challenges. These effects are particularly important in the design of fuel tanks, life support systems, and fluid handling equipment for space applications.
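The dominance of surface tension over gravity can be made quantitative with the Bond number, Bo = ρgL²/σ. The following Python sketch is illustrative only: the water-like fluid properties and the 10⁻⁶ g residual-acceleration level are assumptions, not values from the article.

```python
# Illustrative sketch: the Bond number Bo = rho*g*L^2/sigma compares
# gravitational to surface-tension forces. Bo >> 1 means gravity-dominated,
# Bo << 1 means capillary-dominated behaviour.

def bond_number(rho, g, length, sigma):
    """Bond number for a fluid interface with characteristic size `length` (m)."""
    return rho * g * length ** 2 / sigma

# Assumed, water-like properties at ~20 degrees C
RHO_WATER = 998.0     # density, kg/m^3
SIGMA_WATER = 0.0728  # surface tension, N/m

# A 1 cm water interface on Earth vs. in residual microgravity (~1e-6 g)
bo_earth = bond_number(RHO_WATER, 9.81, 0.01, SIGMA_WATER)
bo_micro = bond_number(RHO_WATER, 9.81e-6, 0.01, SIGMA_WATER)

print(f"Bo on Earth:        {bo_earth:.1f}")  # > 1: gravity dominates
print(f"Bo in microgravity: {bo_micro:.1e}")  # << 1: surface tension dominates
```

For the 1 cm interface, Bo drops from roughly 13 on Earth to about 10⁻⁵ in microgravity, which is why capillary effects dominate fuel-tank and fluid-handling design.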
Materials processing
Materials processing in space offers unique opportunities for producing novel materials and improving existing manufacturing techniques.
Crystal growth in space benefits from the absence of gravity-induced convection and sedimentation. This environment allows for the growth of larger, more perfect crystals with fewer defects. Space-grown crystals have applications in electronics, optics, and pharmaceutical research.
Metallurgy and alloy formation in microgravity can result in materials with unique properties. The absence of buoyancy-driven convection allows for more uniform mixing of molten metals and the creation of novel alloys and composites that are difficult or impossible to produce on Earth.
Additive manufacturing in low-gravity environments presents both challenges and opportunities. While the absence of gravity can affect material deposition and layer adhesion, it also allows for the creation of complex structures without the need for support materials. This technology has potential applications in on-demand manufacturing of spare parts and tools for long-duration space missions.
Biotechnology applications
Microgravity conditions offer unique advantages for various biotechnology applications.
Protein crystallization in space often results in larger, more well-ordered crystals compared to those grown on Earth. These high-quality crystals are valuable for structural biology studies and drug design. The microgravity environment reduces sedimentation and convection, allowing for more uniform crystal growth.
Cell culturing and tissue engineering benefit from the reduced mechanical stresses in microgravity. This environment allows for three-dimensional cell growth and the formation of tissue-like structures that more closely resemble in vivo conditions. Such studies contribute to our understanding of cellular biology and may lead to advancements in regenerative medicine.
Pharmaceutical production in space has the potential to yield purer drugs with improved efficacy. The absence of convection and sedimentation can lead to more uniform crystallization and particle formation, potentially enhancing drug properties.
Chemical engineering processes
Chemical engineering processes in microgravity often exhibit different behaviors compared to their terrestrial counterparts.
Reaction kinetics in microgravity can be altered due to the absence of buoyancy-driven convection. This can lead to more uniform reaction conditions and potentially different reaction rates or product distributions.
Separation processes, such as distillation and extraction, face unique challenges in low-gravity environments. The lack of buoyancy affects phase separation and mass transfer, requiring novel approaches to achieve efficient separations. These challenges have led to the development of alternative separation technologies for space applications.
Catalysis in space presents opportunities for studying fundamental catalytic processes without the interfering effects of gravity. The absence of natural convection and sedimentation can lead to more uniform catalyst distributions and potentially different reaction pathways. This research may contribute to the development of more efficient catalysts for both space and terrestrial applications.
Experimental platforms and simulation techniques
The study of low-gravity processes requires specialized platforms and techniques to simulate or create microgravity conditions. These methods range from ground-based facilities to orbital laboratories and computational simulations.
Drop towers and parabolic flights
Drop towers provide short-duration microgravity environments by allowing experiments to free-fall in evacuated shafts. These facilities typically offer 2–10 seconds of high-quality microgravity. Notable examples include NASA's Glenn Research Center 2.2-Second Drop Tower and the 146-meter ZARM Drop Tower in Bremen, Germany.
Parabolic flights, often referred to as "vomit comets," create repeated periods of microgravity lasting 20–25 seconds by flying aircraft in parabolic arcs. These flights allow researchers to conduct hands-on experiments and test equipment destined for space missions.
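The microgravity durations quoted above follow from simple free-fall kinematics, t = √(2h/g). A minimal sketch; the drop distances used below are approximate assumptions (usable drop lengths are shorter than total tower heights).

```python
import math

def free_fall_time(height_m, g=9.81):
    """Duration of free fall from rest over a drop distance `height_m` (metres)."""
    return math.sqrt(2.0 * height_m / g)

# Approximate drop distances (assumed for illustration)
for name, h in [("NASA GRC 2.2-Second Drop Tower (~24 m drop)", 24.0),
                ("ZARM drop tower, Bremen (~110 m drop)", 110.0)]:
    print(f"{name}: ~{free_fall_time(h):.1f} s of microgravity")
```

A ~24 m drop yields about 2.2 s and a ~110 m drop about 4.7 s, consistent with the facility names.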
Sounding rockets and suborbital flights
Sounding rockets offer extended microgravity durations ranging from 3 to 14 minutes, depending on the rocket's apogee. These platforms are particularly useful for experiments requiring longer microgravity exposure than drop towers or parabolic flights can provide.
Suborbital flights, such as those planned by commercial spaceflight companies, present new opportunities for microgravity research. These flights can offer several minutes of microgravity time and the potential for frequent, cost-effective access to space-like conditions.
International space station facilities
The International Space Station serves as a permanent microgravity laboratory, offering long-duration experiments in various scientific disciplines. Key research facilities on the ISS include:
Fluid Science Laboratory (FSL): Designed for studying fluid physics in microgravity.
Materials Science Laboratory (MSL): Used for materials research and processing experiments.
Microgravity Science Glovebox (MSG): A multipurpose facility for conducting a wide range of microgravity experiments.
These facilities enable researchers to conduct complex, long-term studies in a true microgravity environment, advancing our understanding of fundamental physical processes and developing new technologies for space exploration.
Computational fluid dynamics for low-gravity simulations
Computational Fluid Dynamics (CFD) plays a crucial role in predicting and analyzing fluid behavior in low-gravity environments. CFD simulations complement experimental research by:
Providing insights into phenomena difficult to observe experimentally.
Allowing parametric studies across a wide range of conditions.
Aiding in the design and optimization of space-based systems.
CFD models for low-gravity applications often require modifications to account for the dominance of surface tension forces and the absence of buoyancy-driven flows. Validation of these models typically involves comparison with experimental data from microgravity platforms.
As computational power increases, CFD simulations are becoming increasingly sophisticated, enabling more accurate predictions of complex multiphase flows and heat transfer processes in microgravity.
References
Material handling
Crystallography | Low-gravity process engineering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,523 | [
"Material handling",
"Materials science",
"Crystallography",
"Materials",
"Condensed matter physics",
"Matter"
] |
77,568,086 | https://en.wikipedia.org/wiki/1-Iodohexane | 1-Iodohexane is a chemical compound from the group of aliphatic saturated halogenated hydrocarbons. The chemical formula is . It is a colorless liquid.
Synthesis
1-Iodohexane can be obtained by treating 1-bromohexane with potassium iodide.
The compound can also be prepared by treating 1-hexanol with iodine and triphenylphosphine.
Physical properties
1-Iodohexane is a flammable, difficult to ignite, light-sensitive liquid that is practically insoluble in water. Copper is usually added to the compound as a stabilizer.
Uses
The compound is used as an alkylating agent in organic synthesis. Also, it is used as an intermediate in the production of other chemical compounds such as tetradecane.
See also
1-Bromohexane
1-Chlorohexane
1-Fluorohexane
References
Iodoalkanes
Alkylating agents | 1-Iodohexane | [
"Chemistry"
] | 208 | [
"Alkylating agents",
"Reagents for organic chemistry"
] |
77,568,785 | https://en.wikipedia.org/wiki/List%20of%20Swedish%20counties%20by%20life%20expectancy |
Statistics Sweden
Average values for 5-year periods. By default the table is sorted by 2019-2023.
Data source: Statistics Sweden (SCB).
Maps of division of Sweden into counties and NUTS-2 regions:
AB: Stockholm County
AC: Västerbotten County
BD: Norrbotten County
C: Uppsala County
D: Södermanland County
E: Östergötland County
F: Jönköping County
G: Kronoberg County
H: Kalmar County
I: Gotland County
K: Blekinge County
M: Skåne County
N: Halland County
O: Västra Götaland County
S: Värmland County
T: Örebro County
U: Västmanland County
W: Dalarna County
X: Gävleborg County
Y: Västernorrland County
Z: Jämtland County
Eurostat (2019—2022)
By default, the table is sorted by 2022.
Data source: Eurostat
Global Data Lab (2019–2022)
Data source: Global Data Lab
Charts
See also
List of countries by life expectancy
List of European countries by life expectancy
Counties of Sweden
Demographics of Sweden
References
Health in Sweden
Demographics of Sweden
Sweden, life expectancy
Sweden
Provinces of Sweden
Provinces by life expectancy
Sweden | List of Swedish counties by life expectancy | [
"Biology"
] | 260 | [
"Senescence",
"Life expectancy"
] |
77,571,993 | https://en.wikipedia.org/wiki/Pleistophora%20hyphessobryconis | Pleistophora hyphessobryconis is a unicellular, spore-forming protozoan parasite. It is the causative agent of "neon tetra disease." P. hyphessobryconis is a known pathogen of the zebra danio, a model organism used in many research laboratories.
Disease
The primary host of Pleistophora hyphessobryconis is the neon tetra; however, this parasite demonstrates a broad range of host specificity and has been isolated from numerous species of aquarium fish. P. hyphessobryconis primarily infects the skeletal muscle with no involvement of smooth or cardiac muscle. There is minimal inflammation until the spores are released from their intracellular environment. Spores have been observed to spread to other organs within macrophages of infected fish. Infections are characterized with a high parasitic load with over half of the myofibers containing the parasite in some fish.
Treatment
Diagnosis of Pleistophora hyphessobryconis infection is made through visual microscopy of damaged tissue. The Lilly-Twort and Accustain Gram stain procedures are effective at visualizing spores within infected tissue. Some veterinary diagnostic laboratories offer PCR testing for this parasite.
There are no treatment options available. Fish infected with P. hyphessobryconis should be removed from the enclosure to prevent the spread of the disease.
References
Microsporidia
Parasitic fungi
Fungus species
Fungi described in 1941 | Pleistophora hyphessobryconis | [
"Biology"
] | 300 | [
"Fungi",
"Fungus species"
] |
77,572,591 | https://en.wikipedia.org/wiki/NGC%205630 | NGC 5630 is a barred spiral galaxy in the constellation of Boötes. Its velocity with respect to the cosmic microwave background is 2826 ± 11 km/s, which corresponds to a Hubble distance of 41.68 ± 2.92 Mpc (∼136 million light-years). It was discovered by German-British astronomer William Herschel on 9 April 1787.
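The quoted Hubble distance is simply the CMB-frame recession velocity divided by the Hubble constant, d = v/H₀. A quick Python check; the H₀ value (~67.8 km/s/Mpc) is an assumption chosen to reproduce the article's figures, not a value stated in the article.

```python
# Hubble's law: d = v / H0. H0 ~ 67.8 km/s/Mpc is assumed here; it reproduces
# the distance quoted above for NGC 5630.

MPC_TO_MLY = 3.2616  # million light-years per megaparsec

def hubble_distance_mpc(v_km_s, h0_km_s_mpc=67.8):
    """Hubble distance in Mpc for a CMB-frame recession velocity in km/s."""
    return v_km_s / h0_km_s_mpc

d_mpc = hubble_distance_mpc(2826)  # NGC 5630's velocity from the article
print(f"{d_mpc:.2f} Mpc = {d_mpc * MPC_TO_MLY:.0f} million light-years")  # ~41.68 Mpc, ~136 Mly
```

The same conversion (1 Mpc ≈ 3.26 million light-years) applies to the other galaxy distances in these entries.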
NGC 5630 is listed as a field galaxy, i.e. one that does not belong to a larger galaxy group or cluster and hence is gravitationally isolated.
Supernovae
Three supernovae have been observed in NGC 5630:
SN 2005dp (type II, mag. 16) was discovered by Kōichi Itagaki on 27 August 2005.
SN 2006am (type IIn, mag. 18.5) was discovered by the Lick Observatory Supernova Search (LOSS) on 22 February 2006.
SN 2023zdx (Type II-P, mag. 17) was discovered by ATLAS on 8 December 2023.
See also
List of NGC objects (5001–6000)
References
External links
5630
051635
+07-30-014
09270
14256+4128
Boötes
17870409
Discoveries by William Herschel
Barred spiral galaxies
Field galaxies | NGC 5630 | [
"Astronomy"
] | 257 | [
"Boötes",
"Constellations"
] |
77,572,628 | https://en.wikipedia.org/wiki/Main%20group%20organometallic%20chemistry | Main group organometallic chemistry concerns the preparation and properties of main-group elements directly bonded to carbon. The inventory is large. The compounds exhibit a wide range of properties, including ones that are water-stable and others that are pyrophoric. Many are very useful themselves, as chemical reagents, or as catalysts.
Classification and structure
Main group organometallic compounds are typically categorized according to their position in the periodic table. Another possible classification scheme organizes these compounds according to the nature of the organic substituent: alkyls, aryls, vinyls, etc.
Most homoleptic organo-main group compounds adopt a characteristic oxidation state: RLi, R2Be, R3B/R3Al, R4Si, R3P, R2S. Members where the simplest stoichiometry violates the octet rule often aggregate by formation of bridging alkyl groups. When an alkyl group bridges two main group elements, the bonding is described as a three-center two-electron bond. This pattern is seen for dimethyl beryllium and trimethylaluminium. In the case of methyl lithium, the methyl group can be shared by (i.e. bonded to) three Li centers. These bonding aspects influence the structures: trimethylaluminium, dimethyl beryllium, and methyl lithium are dimers, polymers, and clusters, respectively.
Academic research often seeks exceptions to conventional stoichiometries and oxidation states, often by use of bulky ligands. Examples include (pentamethylcyclopentadienyl)aluminium(I) (Al(I)), stannylenes (Sn(II)), and diphosphenes (P(I)).
Reactivity
Organic derivatives of the electropositive alkali metals and alkaline earth metals tend to be highly reactive toward electrophiles, e.g. oxygen and water. Organic derivatives of the less electropositive main group elements are often robust.
Like their counterparts lacking organic substituents, the halide and alkoxide derivatives of the later organo-main-group compounds tend to hydrolyze. Organophosphorus compounds and silanes exhibit this pattern.
Applications
Consisting of the terrestrially most abundant elements, main group organometallic compounds have many and often large-scale uses. Commercially important examples include:
siloxanes, which are used as coatings, surfactants, and caulks.
alkyl lithium compounds, which are used as initiators of polymerization reactions (i.e. the polymerization of low cis polybutadiene) and as reagents
ethyl aluminum chlorides are important in catalyzing polymer production, as in Ziegler Natta catalyst
tertiary phosphines, which are used as ligands in homogeneous catalysis.
History
Main group organometallic chemistry is sometimes thought to start with publications on the organoarsenic compound called "Cadet's fuming liquid". This derivative of dimethylarsine is easily prepared and hence the subject of an early discovery. A major development was the popularization of Salvarsan, another organoarsenic compound at the beginning of the 1910s. Although ultimately a failed therapeutic, its use ushered in the field of chemotherapy.
An early step in main group organometallic chemistry (if one considers Zn to be a main group element) involved the synthesis of the organozinc compound diethyl zinc by Frankland.
Tetraethyllead is the focus of an infamous episode involving main group organometallic compounds. It was widely used as a fuel additive for much of the 20th century, until it was found to be a chronic toxin.
References
Organometallic chemistry
Inorganic chemistry | Main group organometallic chemistry | [
"Chemistry"
] | 782 | [
"Organometallic chemistry",
"nan"
] |
77,573,773 | https://en.wikipedia.org/wiki/2-Chloronicotinic%20acid | 2-Chloronicotinic acid (2-CNA) is a halogenated derivative of nicotinic acid that is used as an intermediate in the production of a variety of bioactive compounds, including boscalid and diflufenican. It can be synthesized by chlorination of the N-oxide of nicotinic acid or related nicotinyl compounds, by substitution of the hydroxyl group of 2-hydroxynicotinic acid, or by a tandem reaction involving cyclization of various acrolein derivatives.
References
Aromatic acids
Chloropyridines | 2-Chloronicotinic acid | [
"Chemistry"
] | 125 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
77,576,650 | https://en.wikipedia.org/wiki/NGC%205784 | NGC 5784 is a lenticular galaxy in the constellation of Boötes. Its velocity with respect to the cosmic microwave background is 5493 ± 18 km/s, which corresponds to a Hubble distance of 81.01 ± 5.68 Mpc (∼264 million light-years). It was discovered by German-British astronomer William Herschel on 9 April 1787.
Supernovae
Two supernovae have been observed in NGC 5784:
SN 2018mef (type Ia, mag. 17.52) was discovered by the Zwicky Transient Facility on 7 June 2018.
SN 2023bch (type Ia, mag. 15.4) was discovered by ASAS-SN on 30 January 2023.
NGC 5739 Group
According to Abraham Mahtessian, NGC 5784 is part of the seven member NGC 5739 group (also known as [M98j] 234). The other six galaxies are: NGC 5598, NGC 5603, NGC 5696, NGC 5739, NGC 5787, and NGC 5860.
See also
List of NGC objects (5001–6000)
References
External links
5784
053265
+07-31-006
09592
14524+4245
Boötes
17870409
Discoveries by William Herschel
Lenticular galaxies | NGC 5784 | [
"Astronomy"
] | 276 | [
"Boötes",
"Constellations"
] |
77,577,860 | https://en.wikipedia.org/wiki/NGC%202708 | NGC 2708 is a spiral galaxy in the constellation of Hydra. Its velocity with respect to the cosmic microwave background is 2315 ± 22 km/s, which corresponds to a Hubble distance of 34.15 ± 2.41 Mpc (∼111 million light-years). It was discovered by German-British astronomer William Herschel on 6 January 1785. This galaxy was also observed by British astronomer John Herschel on 12 March 1826, and later listed as NGC 2727.
The SIMBAD database lists NGC 2708 as a Seyfert II galaxy, i.e. a galaxy with a quasar-like nucleus with very high surface brightnesses whose spectra reveal strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable.
One supernova has been observed in NGC 2708: SN 2023bee (type Ia, mag. 17.2621) was discovered by the Distance Less Than 40 Mpc Survey (DLT40) on 1 February 2023.
NGC 2708 Group
According to A.M. Garcia, NGC 2708 is the namesake of the four member NGC 2708 group (also known as LGG 164). The other three galaxies are: NGC 2695, NGC 2699, and NGC 2706.
See also
List of NGC objects (2001–3000)
References
External links
2708
025097
+00-23-015
08535-0309
Hydra (constellation)
17850106
Discoveries by William Herschel
Spiral galaxies
Seyfert galaxies | NGC 2708 | [
"Astronomy"
] | 318 | [
"Hydra (constellation)",
"Constellations"
] |
77,578,778 | https://en.wikipedia.org/wiki/NGC%204330 | NGC 4330 is a spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background is 1898 ± 24 km/s, which corresponds to a Hubble distance of 27.99 ± 1.99 Mpc (∼91.3 million light-years). However, a dozen non-redshift measurements give a much closer distance of 19.642 ± 1.559 Mpc (∼64.1 million light-years). The galaxy was discovered by Irish engineer Bindon Stoney on 14 April 1852.
One supernova has been observed in NGC 4330: SN 2024phz (type II, mag. 17.669) was discovered by ATLAS on 11 July 2024.
M87 Group and Virgo Cluster
According to A.M. Garcia, NGC 4330 is a member of the M87 group (also known as LGG 289). This group contains at least 96 members.
NGC 4330 is also listed as catalog number VCC 0630, a member of the Virgo Cluster.
See also
List of NGC objects (4001–5000)
References
External links
4330
040201
+02-32-020
07456
12207+1138
Virgo (constellation)
18520414
Discoveries by Bindon Blood Stoney
Spiral galaxies
Virgo Cluster | NGC 4330 | [
"Astronomy"
] | 274 | [
"Virgo (constellation)",
"Constellations"
] |
77,579,504 | https://en.wikipedia.org/wiki/Jane%20F.%20Desforges%20Distinguished%20Teacher%20Award | Jane F. Desforges Distinguished Teacher Award is an annual award of the American College of Physicians (ACP). The award is given to a Fellow or Master of the ACP who has "demonstrated the ennobling qualities of a great teacher as judged by the accomplishments of former students". (In some years, there have been two winners.) It was originally established as the Distinguished Teacher Award on April 5, 1968, by the ACP board of regents. In 2007, it was renamed to honor Jane F. Desforges, the first woman to receive the award, in 1988.
If the winner has not previously been elected a Master of the ACP, they are so recognized.
Past recipients
The following members of the ACP have been selected to receive the Jane F. Desforges Distinguished Teacher Award:
1968 – George L. Engel
1969 – Eugene A. Stead Jr.
1970 – Duncan Archibald Graham, Tinsley R. Harrison
1971 – George C. Griffith
1972 – Thomas M. Durant
1973 – Thomas Hale Ham, Robert F. Loeb
1974 – William S. Middleton
1975 – Franz J. Ingelfinger
1976 – Francis C. Wood
1977 – A. McGehee Harvey
1978 – William B. Castle
1979 – Gerald Klatskin
1980 – Donald W. Seldin
1981 – Jack D. Myers
1982 – Louis Weinstein
1983 – C. Lockard Conley
1984 – W. Proctor Harvey
1985 – J. Willis Hurst
1986 – Saul J. Farber
1987 – Richard V. Ebert
1988 – Jane F. Desforges
1989 – Morton N. Swartz
1990 – Paul B. Beeson
1991 – Telfer B. Reynolds
1992 – Norman G. Levinsky, Theodore E. Woodward
1993 – Robert G. Petersdorf
1994 – Robert A. Kreisberg
1995 – Daniel D. Federman
1996 – Faith T. Fitzgerald
1997 – Walter Lester Henry
1998 – Alvan R. Feinstein
1999 – Marvin Turck
2000 – Thomas E. Andreoli
2001 – Norton J. Greenberger
2002 – George A. Sarosi
2003 – Charles C. J. Carpenter
2004 – Herbert L. Fred
2005 – Edward C. Lynch
2006 – Shahbudin H. Rahimtoola
2007 – David B. Hellmann
2008 – Matthew A. Araoye
2009 – J. Michael Criley
2010 – Anthony J. Grieco
2011 – Ruth-Marie E. Fincher
2012 – Eugene C. Corbett Jr.
2013 – Bennett Lorber
2014 – Frank M. Calia
2015 – John F. Fisher
2016 – Joyce P. Doyle, William L. Morgan Jr.
2017 – John E. Bennett, Angeline A. Lazarus
2018 – Auguste H. Fortin VI, Lisa K. Moores
2019 – Lewis E. Braverman, Jeffrey G. Wiese
2020 – David V. O'Dell, Douglas S. Paauw
2021 – Robert M. Centor, Joseph P. Cleary
2022 – Thomas M. De Fer
2023 – Lawrence Loo, Martin Samuels
2024 – Princy N. Kumar
References
American College of Physicians
American education awards
Medicine awards | Jane F. Desforges Distinguished Teacher Award | [
"Technology"
] | 644 | [
"Science and technology awards",
"Medicine awards"
] |
77,579,885 | https://en.wikipedia.org/wiki/NGC%203689 | NGC 3689 is an intermediate spiral galaxy in the constellation of Leo. Its velocity with respect to the cosmic microwave background is 3049 ± 22 km/s, which corresponds to a Hubble distance of 44.97 ± 3.16 Mpc (∼147 million light-years). However, 16 non-redshift measurements give a closer distance of 39.350 ± 2.088 Mpc (∼128 million light-years). The galaxy was discovered by German-British astronomer William Herschel on 6 April 1785.
According to the SIMBAD database, NGC 3689 is a radio galaxy.
The SAGA Astronomical Survey for the search for satellite galaxies orbiting another galaxy confirmed the presence of two satellite galaxies for NGC 3689.
One calcium-rich supernova has been observed in NGC 3689: AT 2024mxe (type Gap, mag. 17.7) was discovered by GOTO on 26 June 2024.
See also
List of NGC objects (3001–4000)
References
External links
3689
035294
+04-27-037
06467
Leo (constellation)
17850406
Discoveries by William Herschel
Intermediate spiral galaxies
Radio galaxies | NGC 3689 | [
"Astronomy"
] | 242 | [
"Leo (constellation)",
"Constellations"
] |
60,963,732 | https://en.wikipedia.org/wiki/C24H31N3O2 |
The molecular formula C24H31N3O2 (molar mass: 393.52 g/mol, exact mass: 393.2416 u) may refer to:
Adamantyl-THPINACA
1B-LSD
1P-ETH-LAD | C24H31N3O2 | [
"Chemistry"
] | 74 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
60,963,831 | https://en.wikipedia.org/wiki/William%20Barnes%20%28entomologist%29 | William David Barnes (September 3, 1860 – May 1, 1930, Decatur, Illinois) was an American entomologist and surgeon. He was the son of William A. and Eleanor Sawyer Barnes. He graduated from Decatur High School in 1877. He then spent a year at Illinois State University, followed by a year at the University of Illinois at Urbana–Champaign. In 1879, he entered Harvard Medical School and graduated in 1886. While at Harvard, he met naturalist Louis Agassiz and his love of Lepidoptera grew. Agassiz taught him how to preserve and classify the butterflies. He completed an internship at Boston City Hospital and then studied abroad in Heidelberg, Munich, and Vienna. In 1890, Dr. Barnes came home to Decatur and opened his medical practice. That same year he married Charlotte L. Gillette. The couple had two children.
He was one of the founders of Decatur Memorial Hospital. Barnes served as its president until his death. During that time he donated his time, talent and around $200,000.
From 1910 to 1919 James Halliday McDunnough produced, with Barnes credited as co-author, an impressive volume of research on the taxonomy of North American Lepidoptera, including the first four volumes of the privately published Contributions to the Natural History of the Lepidoptera of North America, the 1917 Check List of the Lepidoptera of Boreal America, Illustrations of the North American Species of the Genus Catocala, and numerous journal articles, 67 papers in all.
In 1913, Barnes became a Fellow of the Entomological Society of America.
At his death in 1930, his collection of butterflies was regarded as the largest and finest in the world with possibly 10,000 species and an estimated 473,000 specimens, even receiving praise in the 1936 National Geographic magazine. As a result of Barnes' collection, hundreds of species new to science were discovered and described. A few months after his death, the U.S. government bought Barnes' collection for $50,000; the collection was to be housed in the Smithsonian Institution. It is the largest collection by number of specimens in the history of the Smithsonian.
References
1860 births
1930 deaths
American lepidopterists
People from Decatur, Illinois
Illinois State University alumni
University of Illinois Urbana-Champaign alumni
Harvard Medical School alumni
Physicians from Illinois
Taxon authorities
Fellows of the Entomological Society of America
Taxa named by William Barnes (entomologist)
American surgeons | William Barnes (entomologist) | [
"Biology"
] | 492 | [
"Taxon authorities",
"Taxonomy (biology)"
] |
60,964,611 | https://en.wikipedia.org/wiki/Plasmodium%20helical%20interspersed%20subtelomeric%20protein | The Plasmodium helical interspersed subtelomeric proteins (PHIST) or ring-infected erythrocyte surface antigens (RESA) are a family of protein domains found in the malaria-causing Plasmodium species. It was initially identified as a short four-helical conserved region in the single-domain export proteins, but the identification of this part associated with a DnaJ domain in P. falciparum RESA (named after the ring stage of the parasite) has led to its reclassification as the RESA N-terminal domain. This domain has been classified into three subfamilies, PHISTa, PHISTb, and PHISTc.
The PHIST proteins are exported to the cytoplasm of the infected erythrocyte. The human malaria parasites P. falciparum and P. vivax have shown a lineage-specific expansion of proteins with this domain. Of the two PHIST genes in the mouse parasite P. berghei, only one is required for infection. The PHIST domain folds into three long helices (forming a bundle) and two smaller N-terminal helices, and is monomeric in solution. It binds PfEMP1 ATS C-terminus and plays a role in "knob" formation.
RESA
The full RESA protein in P. falciparum also contains a few other domains, namely the DnaJ domain and the DnaJ-associated X domain. A part of the X-domain (RESA/P13830, residues 663–670) appears to bind and reinforce the spectrin cytoskeleton so that each erythrocyte hosts only one parasite.
P. falciparum isolate 3D7 encodes three RESA-family proteins: RESA-1 (P13830//PF3D7_0102200), RESA-2 (M91672.1//PF3D7_1149500), and RESA-3 (/PF3D7_1149200). RESA-2 is usually considered a transcribed pseudogene due to a premature stop codon. However, a missense mutation T1526G or T1526C in RESA-2 that removes this stop codon is commonly found. It is associated with increased severity of disease.
Notes
References
Protein domains
Helical interspersed subtelomeric protein
Antigens
Apicomplexan proteins | Plasmodium helical interspersed subtelomeric protein | [
"Chemistry",
"Biology"
] | 516 | [
"Antigens",
"Protein domains",
"Biomolecules",
"Protein classification"
] |
60,965,378 | https://en.wikipedia.org/wiki/Maxwell%E2%80%93Lodge%20effect | The Maxwell-Lodge effect is a phenomenon of electromagnetic induction in which an electric charge, near a solenoid in which current changes slowly, feels an electromotive force (e.m.f.) even if the magnetic field is practically static inside and null outside. It can be considered a classical analogue of the quantum mechanical Aharonov–Bohm effect, where instead the field is exactly static inside and null outside.
The term appeared in the scientific literature in a 2008 article, referring to an article of 1889 by physicist Oliver Lodge.
Description
Consider an infinite solenoid (ideal solenoid) with n turns per unit length, through which a current i flows. The magnetic field inside the solenoid is

B = μ0 n i,   (1)

while the field outside the solenoid is null.
From the second and third Maxwell's equations,

∇ · B = 0,   ∇ × E = −∂B/∂t,

and from the definitions of the magnetic vector potential A (B = ∇ × A) and of the electric potential V, it stems that

E = −∇V − ∂A/∂t,

which without electric charges reduces to

E = −∂A/∂t.   (2)
Resuming the original definition of Maxwell on the potential vector, according to which A is a vector whose circuitation along a closed curve ℓ is equal to the flux of B through a surface S having the above curve as its edge, i.e.

∮ℓ A · dl = ∫S B · dS,

we can calculate the induced e.m.f., as Lodge did in his 1889 article, considering the closed line ℓ around the solenoid, for convenience a circumference, and the surface S having ℓ as border. Assuming R the radius of the solenoid and r > R the radius of ℓ, the surface crossing ℓ is subjected to a magnetic flux Φ = πR²B, which is equal to the circuitation of A: 2πr A(r) = πR²B. From that stems

A(r) = μ0 n i R² / (2r).
From (2) we have that the e.m.f. is null for A constant, which means, due to (1), at constant current i.
On the other hand, if the current changes, A must also change, producing electromagnetic waves in the surrounding space that can induce an e.m.f. outside the solenoid.
But if the current changes very slowly, one finds oneself in an almost stationary situation in which the radiative effects are negligible and therefore, excluding ∇V, the only possible cause of the e.m.f. is −∂A/∂t.
It is possible to make calculations without referring to the field A. Indeed, in the framework of Maxwell equations as written above:
∮_C E·dl = −(d/dt) ∫_S B·dA,
B being negligible outside the solenoid. Thus
e.m.f. = −π r0² μ0 n (di/dt).
This doesn't avoid the problem that B is practically null in places where the e.m.f. manifests itself.
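The steps above can be collected into a single worked calculation (a sketch in standard notation, with r_0 the solenoid radius and R ≥ r_0 the radius of the test loop; the final expression for the field magnitude at radius R is added for illustration and is not stated explicitly in the text):

```latex
\begin{align*}
B_{\text{inside}} &= \mu_0 n i,
&\Phi &= \pi r_0^2\,\mu_0 n i,\\
\mathcal{E} &= \oint_C \vec E \cdot d\vec l = -\frac{d\Phi}{dt}
             = -\pi r_0^2\,\mu_0 n\,\frac{di}{dt},
&\bigl|\vec E(R)\bigr| &= \frac{|\mathcal{E}|}{2\pi R}
             = \frac{\mu_0 n r_0^2}{2R}\left|\frac{di}{dt}\right|.
\end{align*}
```

By symmetry the induced field is purely azimuthal, so a slowly varying current produces a measurable e.m.f. on a loop where B itself is null.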
Interpretation
Bearing in mind that the concept of field was introduced into physics to ensure that actions on objects are always local, i.e. by contact (direct and mediated by a field) and not by remote action, as Albert Einstein feared in the EPR paradox, the result of the Maxwell-Lodge effect, like the Aharonov-Bohm effect, seems contradictory. In fact, even though the magnetic field is zero outside the solenoid and the electromagnetic radiation is negligible, a test charge experiences the presence of an electric field.
The question arises as to how the information on the presence of the magnetic field from inside the solenoid reaches the electric charge. In terms of the fields E and B the explanation is very simple: the variation of B inside the solenoid produces an electric field both inside and outside the solenoid, in the same way in which a charge distribution produces an electric field both inside and outside the distribution. In this sense the information from inside and outside is mediated by the electric field, which must be continuous over all space due to the Maxwell equations and their boundary conditions.
From the calculations it seems evident that the source can be considered either the variation of the potential vector A, if you choose to introduce it, or that of the magnetic field B, if you do not want to use the potential vector, which in the classical context has always been considered a mathematical aid, unlike the quantum case, in which Richard Feynman proclaimed its existence as a physical reality.
See also
Principle of locality
References
Further reading
Electromagnetism
1889 introductions | Maxwell–Lodge effect | [
"Physics"
] | 808 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
60,966,862 | https://en.wikipedia.org/wiki/Skimmianine | Skimmianine is a furoquinoline alkaloid found in Skimmia japonica, a flowering plant in family Rutaceae that is native to Japan and China. It is also a strong acetylcholinesterase (AChE) inhibitor.
Biosynthesis
The biosynthesis of skimmianine starts from anthranilic acid, which is very abundant in the family Rutaceae. Anthranilic acid is activated as anthraniloyl-CoA, the starter unit, whose side chain is extended with malonyl-CoA via Claisen condensation. Next, a lactam is formed through cyclization, generating a heterocyclic system and leading the dienol tautomer to adopt the 4-hydroxyquinolone tautomer, which is 4-hydroxy-2-quinolone.
With the quinolone formed, alkylation occurs at the C-3 position by introduction of dimethylallyl diphosphate. Another key step is cyclization on the dimethylallyl side chain, forming a new five-membered heterocyclic ring and giving the intermediate platydesmine. Platydesmine then undergoes an oxidative cleavage reaction, losing an isopropyl unit to form dictamine. Finally, skimmianine is formed through the hydroxylation of dictamine.
References
Quinoline alkaloids
Furans
Plant toxins
Methoxy compounds
Heterocyclic compounds with 3 rings | Skimmianine | [
"Chemistry"
] | 325 | [
"Quinoline alkaloids",
"Alkaloids by chemical classification",
"Chemical ecology",
"Plant toxins"
] |
60,968,714 | https://en.wikipedia.org/wiki/Blood%E2%80%93spinal%20cord%20barrier | The blood–spinal cord barrier (BSCB) is a semipermeable anatomical interface that consists of the specialized small blood vessels that surround the spinal cord. While similar to the blood–brain barrier in function and morphology, it is physiologically independent and has several distinct characteristics. The BSCB is involved in many disorders affecting the central nervous system, including neurodegenerative diseases, pain disorders, and traumatic spinal cord injury. In conjunction with the blood–brain barrier, the BSCB contributes to the difficulty in delivering drugs to the central nervous system, which makes drug targeting of the BSCB an important goal in pharmaceutical research.
Anatomy and physiology
The primary function of the BSCB is to protect the spinal cord from potentially toxic substances within the blood while still delivering necessary molecules to maintain spinal cord activities. The BSCB consists of endothelial cells, a basal membrane, pericytes, and astrocyte endfeet. The endothelial cells have highly exclusionary tight junctions that prohibit most molecules from passing between cells and form specialized capillaries that, unlike capillaries in the periphery, lack fenestrations, have more mitochondria, and contain limited endocytic vesicles. The lack of fenestrations and endocytic vesicles reflects the restricted transcellular flow, while the increased number of mitochondria contributes to a high metabolic rate in these endothelial cells. Surrounding the BSCB capillaries is a basal membrane (also called the basal lamina) that contains pericytes. The basal membrane is formed and maintained by all cell types in the BSCB and contributes to the cytoskeletal morphology of the endothelial cells, which affects the integrity of tight junctions and, by extension, the BSCB. Pericytes are microcirculatory cells that, in the BSCB, regulate the proliferation, migration, and differentiation of endothelial cells. Finally, astrocytes within spinal cord tissue extend endfoot processes that surround the outer surface of the capillaries. Astrocytes are critically important for developing and maintaining the neuroprotective mechanisms of BSCB endothelial cells. They release secretory compounds that influence the phenotype of endothelial cells and express aquaporin and potassium channels that help regulate ion concentration and fluid volume within the spinal cord.
The anatomy of the BSCB is very similar to the anatomy of the blood–brain barrier (BBB); however many key differences exist between the two that affect both the maintenance of healthy tissue and development of pathophysiology within the central nervous system (CNS). In general, the BSCB is more permeable than the BBB, largely due to the relatively low expression of tight junction proteins like ZO-1 and occludin. For example, mannitol, an osmotic diuretic, crosses BSCB endothelium more readily than it does BBB endothelium. Cytokines are also able to pass through the BSCB with more ease, making it more vulnerable to disruption and inflammation relative to the BBB. This vulnerability to disruption leaves the spinal cord susceptible to toxins that can inflict tissue damage. Susceptibility to toxins is already increased in BSCB endothelium through the relative downregulation of the critical efflux protein p-glycoprotein, thus slowing elimination of the toxins that penetrate the barrier. Overall, the differences between the BSCB and the BBB result in vulnerability to certain diseases and disorders affecting the CNS.
Implication in disease
Research shows that the BSCB plays a significant role in the development and progression of neurodegenerative diseases, spinal cord injury, pain conditions, and other disorders that affect the CNS. Because of its function as a protective barrier for the spinal cord, disruption of the BSCB exposes spinal cord tissue to inflammatory signals, pathogens, and toxins.
Spinal cord injury
Spinal cord injuries (SCIs) are debilitating incurable conditions that frequently result in lifelong disability and dysfunction. They occur when a mechanical force is applied to the spinal column, most often from motor vehicle accidents and falls. This impact disrupts the capillaries around the spinal cord and initiates many pathophysiological cascades by allowing the molecules and cells within the bloodstream to enter nervous tissue. Within five minutes following the initial injury, normally impermeable blood components like the large molecule albumin or red blood cells can be detected in spinal cord tissue. Because of this sudden infiltration of normally absent particles, an inflammatory response is triggered by astrocytes and microglia that extends the initial injury into previously uncompromised segments of cord. This secondary injury can contribute heavily to long term loss of function and disability.
Because the disruption of the BSCB is a major exacerbating factor in SCIs, reestablishment of normal function is critical to reducing the severity of injury outcomes. Some studies in rats with induced SCIs have shown that reestablishment can occur as early as 14 days following the initial injury; however, other studies have shown the BSCB can be compromised as long as 56 days after injury. Depending on the type and severity of the injury inducing force, the time it takes the BSCB to reestablish itself varies. For example, if the initial injury includes excessive separation of the basement membrane from the blood vessels, a space is created that can accommodate more immune cells and thus propagate damaging inflammation. Additionally, widening of the tight junctions adhering endothelial cells and oxidative stress are both associated with a long-term increase in BSCB permeability. Inflammation, compromised tight junctions, and oxidative stress are all contributors to a severe complication of SCI called post-traumatic syringomyelia. In post-traumatic syringomyelia, the structural and functional loss of integrity of the BSCB encourages the development of fluid filled cysts within the spinal cord, causing pain and worsening disability. As such, the sooner the BSCB is restored, the better the prognosis is for someone after a SCI.
Neurodegenerative disease
BSCB disruption has been found as a predecessor and possible initiator for certain neurodegenerative diseases, though the mechanisms by which this occurs are not fully understood.
Amyotrophic lateral sclerosis (ALS) is a chronic, progressive, and ultimately fatal neurodegenerative disease that is caused by the death of motor neurons in the brain and spinal cord. These motor neurons control voluntary movements, so their degradation and death lead to symptoms like difficulty moving, weakness, trouble speaking and swallowing, muscle cramping, and muscle twitching that worsen over time. Only 5-10% of ALS cases have an identifiable cause, most often a mutation in the gene SOD1. Regardless of whether the cause is idiopathic or genetic, ALS is associated with significant dysfunction of the endothelium and basement membrane of the CNS, with the BSCB being more affected than the BBB. Patients with ALS have been found to have decreased expression of certain proteins, like ZO-1 or claudin-5, that weaken the integrity of the tight junctions. The membranes of endothelial cells have increased permeability, and this may contribute more to pathology than the tight junctions being compromised. Furthermore, studies with rat models have shown that the initial phases of ALS result in a loss of 10-15% of the BSCB capillary length, edema, and collapsed capillary beds. These vasculature changes preceded motor neuron death, indicating that BSCB breakdown may facilitate tissue damage.
With the endothelium compromised, proteins and antibodies can infiltrate the cerebrospinal fluid (CSF). Proteins like albumin and antibodies like IgG are too large to pass through the BSCB on their own, so their presence in CSF indicates a leaky endothelium. These blood compounds are toxic to nervous tissue and precipitate neuroinflammation, edema, and eventual motor neuron death.
Multiple sclerosis (MS), a neurodegenerative, autoimmune disease that deteriorates the myelin sheath coating axons and causes permanent damage to nerves, has also been associated with capillary and barrier dysfunction. Many studies support the idea that CNS barrier disruption precedes MS, particularly through altered tight junction protein expression. Weakened tight junctions cause fragility in the barrier that can lead to immune system invasion, demyelination, and axonal damage. The BSCB is more susceptible to immune invasion because tight junctions are weak by default, relative to the BBB. Neutrophils are among the first immune cells found in early MS pathology; through their role as potent modulators of dendritic cell recruitment and function, they initiate an immune response that ultimately results in T-cell activation and infiltration. As such, decreased population of neutrophils correlates strongly with decreased MS pathology, indicating that strengthening of BSCB tight junctions may be a useful therapeutic target.
Neuropathic pain
Neuropathy is a type of severe chronic pain that is caused by disease and dysfunction of the somatosensory system that allows for perception of touch, pain, vibration, and other sensations. Many conditions can cause neuropathy, including diabetes, HIV, SCI, transverse myelitis, MS, amputation, and peripheral nerve injury. At its core, neuropathy occurs when the central excitatory and inhibitory nerves in the somatosensory system are altered in some way, as this leads to dysfunctional sensory signaling. Changes in the pain signaling pathway are triggered through neuroimmune interactions, such as the release of proinflammatory mediators into the spinal cord that are known to promote central sensitization (wherein sensory nerves are activated more easily) and hyperalgesia (increased sensitivity to pain). The BSCB is the main anatomical structure that regulates the interaction between the immune system and the nervous system, so its role in neuropathy is of pharmacological interest.
While it does not occur in all cases, some people who experience peripheral nerve injury develop neuropathy through the signaling induced by the injury. It results in increased BSCB permeability that allows white blood cells to migrate into the spinal cord, where they can release mediators that alter function. This forms a positive feedback loop: an injury causes cytokines, a type of signaling molecule, to be released, which induces BSCB disruption that allows white blood cell migration, followed by the white blood cells releasing more cytokines within the spinal cord, keeping the BSCB hyperpermeable. It has been found that blocking the signaling of certain cytokines in cases of neuropathy attenuates the BSCB permeability, decreases the amount of infiltrating immune cells, and ultimately decreases hyperalgesia.
Current treatment strategies and challenges of drug delivery
Similar to the BBB, the majority of small molecule drugs and virtually all large molecule and biological drugs cannot penetrate the BSCB, making treatment of spinal cord diseases difficult. Furthermore, drugs that do pass through the barrier may be subjected to efflux by specialized efflux pumps such as p-glycoprotein and breast-cancer resistance protein, which prevents concentrations of therapeutic relevance from being reached. Because of this, the majority of nervous system disorders have limited treatments that may involve invasive means of administration.
There are two major pharmacological goals regarding drug delivery to the CNS: the first is finding ways to repair CNS tissue (including the BBB and BSCB) after it has been damaged, and the second is creating methods of drug delivery that improve therapy. If a drug is capable of crossing the BBB or BSCB, then it can be administered orally, rectally, intravenously, etc. However, for drugs that cannot cross the BBB/BSCB, there must be either a method of bypassing the barriers or the creation of a drug delivery system that allows penetration. Methods of bypassing the BSCB directly include intrathecal injection (into the spinal canal), intracerebroventricular injection (into the ventricles of the brain, which produce CSF that flows down the CNS), epidural injection (into the epidural space), and depot or implant placement, such as a placement of a catheter.
Regarding current treatment strategies for CNS diseases, there is unfortunately no cure for most damaging afflictions of the spinal cord. Treatment is generally aimed at reducing symptoms and slowing progression of disease. In MS, treatment centers around immunomodulatory drugs that can cross the BBB/BSCB. Corticosteroids are a common choice, as they can modulate immune function and decrease symptom severity during an attack. Some anti-cancer drugs, like mitoxantrone and cyclophosphamide, have immunosuppressant effects that slow disease progression. In ALS, only two drugs are approved to slow progression and they are incapable of reversing symptoms once they arise: masitinib, which prevents abnormal glial cells from activating and proliferating, and ibudilast, which inhibits cytokine release from activated microglia. Both drugs have been shown to slow degeneration and improve overall survival time, though it is of utmost importance to begin treatment as early as possible. Antiepileptics are sometimes given off-label for ALS, since hyperexcitability is thought to contribute to motor neuron death; no antiepileptic has been approved for this use, though. Neuropathic conditions can be managed pharmacologically with anticonvulsants, antidepressants, and opioids; however, other treatments involving neuromodulation (deep brain stimulation, motor cortex stimulation, spinal cord stimulation, etc.) have shown efficacy in reducing pain.
Current research
Most pharmaceutical research involving the BSCB aims to find efficient methods of overcoming its barrier mechanism to improve drug delivery. This can include directly bypassing the barrier, such as by placing a drug-eluting depot or injecting directly into the CNS, preventing efflux, such as with p-glycoprotein inhibitors, or by enhancing drug penetration.
Disruption
The BBB/BSCB can be deliberately disrupted to increase paracellular transport of drugs to the CNS. An established method of accomplishing this is through osmotic disruption. CSF is nearly isosmotic to blood, so if a hyperosmotic agent, such as mannitol, is administered intravascularly, the osmolarity gap between the blood and CSF widens, resulting in fluid flow from CNS tissue into blood vessels. Hyperosmotic agents are already in clinical use as diuretics, though mannitol is also approved to treat increased intracranial pressure, as it draws fluid out of the brain and thus decreases the volume-induced pressure. However, hyperosmolarity may also be of use in drug delivery since it causes the membranes of endothelial cells to shrink, resulting in tight junctions stretching and paracellular gaps widening. Widened paracellular gaps can facilitate passage of larger molecules that otherwise would be unable to cross the BBB/BSCB. This method is not commonly used due to adverse effects such as dehydration and, in some cases, exacerbation of injury.
Nanoparticles
Nanoparticles are composed of nano-sized particles that have widespread medical application. There are different types of nanoparticle, including polymeric (natural and artificial), liposomal/lipid-based, and inorganic. The nanoparticle type most relevant to treating CNS disease is the lipid-based nanoparticle, as the lipophilicity of the particle allows for higher permeability in the BBB/BSCB. The goal of nanoparticles in this context is to successfully enter CNS tissue and release their drug cargo safely. They gain entry in two major ways: adsorption-mediated transport and receptor-mediated transport. In adsorption-mediated transport, the nanoparticle is covered in positive charges that interact with the negative charges found on the endothelium. In receptor-mediated transport, nanoparticles are covered in ligands that bind receptors found on the BBB/BSCB, so that when it encounters these receptors, endocytosis is triggered, and the nanoparticle is ferried across the endothelium. There are ongoing issues with nanoparticles as a whole, particularly regarding accumulation in the targeted region and cytotoxicity; however, nanoparticles have been found to be of considerable use to medical research and hold promise as a method of drug delivery to the CNS.
See also
References
External links
PubMed The blood–spinal cord barrier: morphology and clinical implications.
Central nervous system
Circulatory system | Blood–spinal cord barrier | [
"Biology"
] | 3,517 | [
"Organ systems",
"Circulatory system"
] |
60,968,880 | https://en.wikipedia.org/wiki/Weak%20supervision | Weak supervision (also known as semi-supervised learning) is a paradigm in machine learning, the relevance and notability of which increased with the advent of large language models due to large amount of data required to train them. It is characterized by using a combination of a small amount of human-labeled data (exclusively used in more expensive and time-consuming supervised learning paradigm), followed by a large amount of unlabeled data (used exclusively in unsupervised learning paradigm). In other words, the desired output values are provided only for a subset of the training data. The remaining data is unlabeled or imprecisely labeled. Intuitively, it can be seen as an exam and labeled data as sample problems that the teacher solves for the class as an aid in solving another set of problems. In the transductive setting, these unsolved problems act as exam questions. In the inductive setting, they become practice problems of the sort that will make up the exam.
Problem
The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.
Technique
More formally, semi-supervised learning assumes that a set of l independently identically distributed examples x_1, …, x_l with corresponding labels y_1, …, y_l, together with u unlabeled examples x_{l+1}, …, x_{l+u}, are processed. Semi-supervised learning combines this information to surpass the classification performance that can be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning.
Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data x_{l+1}, …, x_{l+u} only. The goal of inductive learning is to infer the correct mapping from X to Y.
It is unnecessary (and, according to Vapnik's principle, imprudent) to perform transductive learning by way of inferring a classification rule over the entire input space; however, in practice, algorithms formally designed for transduction or induction are often used interchangeably.
Assumptions
In order to make any use of unlabeled data, some relationship to the underlying distribution of data must exist. Semi-supervised learning algorithms make use of at least one of the following assumptions:
Continuity / smoothness assumption
Points that are close to each other are more likely to share a label. This is also generally assumed in supervised learning and yields a preference for geometrically simple decision boundaries. In the case of semi-supervised learning, the smoothness assumption additionally yields a preference for decision boundaries in low-density regions, so few points are close to each other but in different classes.
Cluster assumption
The data tend to form discrete clusters, and points in the same cluster are more likely to share a label (although data that shares a label may spread across multiple clusters). This is a special case of the smoothness assumption and gives rise to feature learning with clustering algorithms.
Manifold assumption
The data lie approximately on a manifold of much lower dimension than the input space. In this case learning the manifold using both the labeled and unlabeled data can avoid the curse of dimensionality. Then learning can proceed using distances and densities defined on the manifold.
The manifold assumption is practical when high-dimensional data are generated by some process that may be hard to model directly, but which has only a few degrees of freedom. For instance, human voice is controlled by a few vocal folds, and images of various facial expressions are controlled by a few muscles. In these cases, it is better to consider distances and smoothness in the natural space of the generating problem, rather than in the space of all possible acoustic waves or images, respectively.
History
The heuristic approach of self-training (also known as self-learning or self-labeling) is historically the oldest approach to semi-supervised learning, with examples of applications starting in the 1960s.
The transductive learning framework was formally introduced by Vladimir Vapnik in the 1970s. Interest in inductive learning using generative models also began in the 1970s. A probably approximately correct learning bound for semi-supervised learning of a Gaussian mixture was demonstrated by Ratsaby and Venkatesh in 1995.
Methods
Generative models
Generative approaches to statistical learning first seek to estimate p(x|y), the distribution of data points belonging to each class. The probability p(y|x) that a given point x has label y is then proportional to p(x|y) p(y) by Bayes' rule. Semi-supervised learning with generative models can be viewed either as an extension of supervised learning (classification plus information about p(x)) or as an extension of unsupervised learning (clustering plus some labels).
Generative models assume that the distributions take some particular form p(x|y, θ) parameterized by the vector θ. If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone.
However, if the assumptions are correct, then the unlabeled data necessarily improves performance.
The unlabeled data are distributed according to a mixture of individual-class distributions. In order to learn the mixture distribution from the unlabeled data, it must be identifiable, that is, different parameters must yield different summed distributions. Gaussian mixture distributions are identifiable and commonly used for generative models.
The parameterized joint distribution can be written as p(x, y | θ) = p(y | θ) p(x | y, θ) by using the chain rule. Each parameter vector θ is associated with a decision function f_θ(x) = argmax_y p(y | x, θ).
The parameter θ is then chosen based on fit to both the labeled and unlabeled data, weighted by λ:
argmax_θ ( log p({x_i, y_i}_{i=1}^{l} | θ) + λ log p({x_i}_{i=l+1}^{l+u} | θ) ).
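As a concrete illustration of fitting a generative model's parameters to labeled and unlabeled data jointly, here is a minimal sketch (not from the source; all names are invented) of EM for a two-class one-dimensional Gaussian mixture with shared unit variance, in which labeled points keep hard class responsibilities, unlabeled points receive soft ones, and the unlabeled weight λ is implicitly fixed to 1 for simplicity:

```python
import numpy as np

def ssl_gmm_1d(x_lab, y_lab, x_unlab, n_iter=50):
    """EM for a 2-class 1-D Gaussian mixture with shared unit variance.

    Labeled points keep hard responsibilities; unlabeled points get soft
    posterior responsibilities in the E-step.
    """
    mu = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])
    pi = np.array([0.5, 0.5])
    x = np.concatenate([x_lab, x_unlab])
    resp = np.zeros((len(x), 2))
    resp[np.arange(len(x_lab)), y_lab] = 1.0      # hard labels stay fixed
    for _ in range(n_iter):
        # E-step: posterior class probabilities for unlabeled points only.
        lik = pi * np.exp(-0.5 * (x_unlab[:, None] - mu[None, :]) ** 2)
        resp[len(x_lab):] = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and class means from all points.
        pi = resp.mean(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu, pi

x_lab = np.array([-2.0, 2.0])
y_lab = np.array([0, 1])
x_unlab = np.array([-2.5, -1.8, -2.2, 1.9, 2.4, 2.1])
mu, pi = ssl_gmm_1d(x_lab, y_lab, x_unlab)
```

With one labeled point per class, the unlabeled cluster structure pulls the estimated means toward the true cluster centers near ±2, illustrating how unlabeled data sharpens the fit when the model assumptions hold.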
Low-density separation
Another major class of methods attempts to place boundaries in regions with few data points (labeled or unlabeled). One of the most commonly used algorithms is the transductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well). Whereas support vector machines for supervised learning seek a decision boundary with maximal margin over the labeled data, the goal of TSVM is a labeling of the unlabeled data such that the decision boundary has maximal margin over all of the data. In addition to the standard hinge loss max(0, 1 − y f(x)) for labeled data, a loss function max(0, 1 − |f(x)|) is introduced over the unlabeled data by letting ŷ = sign f(x). TSVM then selects f*(x) = h*(x) + b from a reproducing kernel Hilbert space H by minimizing the regularized empirical risk:
f* = argmin_f [ Σ_{i=1}^{l} max(0, 1 − y_i f(x_i)) + λ1 ‖h‖²_H + λ2 Σ_{i=l+1}^{l+u} max(0, 1 − |f(x_i)|) ].
An exact solution is intractable due to the non-convex term max(0, 1 − |f(x)|), so research focuses on useful approximations.
Other approaches that implement low-density separation include Gaussian process models, information regularization, and entropy minimization (of which TSVM is a special case).
Laplacian regularization
Laplacian regularization has been historically approached through graph-Laplacian.
Graph-based methods for semi-supervised learning use a graph representation of the data, with a node for each labeled and unlabeled example. The graph may be constructed using domain knowledge or similarity of examples; two common methods are to connect each data point to its k nearest neighbors or to examples within some distance ε. The weight W_ij of an edge between x_i and x_j is then set to exp(−‖x_i − x_j‖² / ε).
Within the framework of manifold regularization, the graph serves as a proxy for the manifold. A term is added to the standard Tikhonov regularization problem to enforce smoothness of the solution relative to the manifold (in the intrinsic space of the problem) as well as relative to the ambient input space. The minimization problem becomes
argmin_{f∈H} [ (1/l) Σ_{i=1}^{l} V(f(x_i), y_i) + λ_A ‖f‖²_H + λ_I ∫_M ‖∇_M f(x)‖² dp(x) ],
where H is a reproducing kernel Hilbert space and M is the manifold on which the data lie. The regularization parameters λ_A and λ_I control smoothness in the ambient and intrinsic spaces respectively. The graph is used to approximate the intrinsic regularization term. Defining the graph Laplacian L = D − W, where D_ii = Σ_{j=1}^{l+u} W_ij and f is the vector (f(x_1), …, f(x_{l+u})), we have
f^T L f = (1/2) Σ_{i,j=1}^{l+u} W_ij (f_i − f_j)² ≈ ∫_M ‖∇_M f(x)‖² dp(x).
The graph-based approach to Laplacian regularization is to relate it to the finite difference method.
The Laplacian can also be used to extend the supervised learning algorithms: regularized least squares and support vector machines (SVM) to semi-supervised versions Laplacian regularized least squares and Laplacian SVM.
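The graph construction and its quadratic smoothness penalty can be checked numerically; below is a minimal numpy sketch (Gaussian edge weights, all names invented for the example) that verifies f^T L f equals ½ Σ_ij W_ij (f_i − f_j)²:

```python
import numpy as np

def graph_laplacian(X, eps=1.0):
    """Dense graph Laplacian L = D - W with W_ij = exp(-||x_i - x_j||^2 / eps)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / eps)
    np.fill_diagonal(W, 0.0)            # no self-loops
    D = np.diag(W.sum(axis=1))
    return D - W, W

X = np.array([[0.0], [0.1], [3.0]])     # two close points, one far away
L, W = graph_laplacian(X)
f = np.array([1.0, 1.0, -1.0])          # a candidate label function

# Smoothness penalty used as the intrinsic regularizer:
quad = f @ L @ f
explicit = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                     for i in range(3) for j in range(3))
```

Because the two nearby points share a label and the distant point carries most of the remaining disagreement through tiny edge weights, the penalty is small, which is exactly the preference for label functions that vary slowly along densely connected regions.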
Heuristic approaches
Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples may inform a choice of representation, distance metric, or kernel for the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples. In this vein, some methods learn a low-dimensional representation using the supervised data and then apply either low-density separation or graph-based methods to the learned representation. Iteratively refining the representation and then performing semi-supervised learning on said representation may further improve performance.
Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm. Generally only the labels the classifier is most confident in are added at each step. In natural language processing, a common self-training algorithm is the Yarowsky algorithm for problems like word sense disambiguation, accent restoration, and spelling correction.
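The self-training wrapper described above can be sketched in a few lines of pure numpy; this minimal example uses an invented nearest-centroid base learner (all names are hypothetical, not a standard API) and pseudo-labels only the points whose confidence margin clears a threshold each round:

```python
import numpy as np

def fit_centroids(X, y):
    # Deliberately simple base learner: one centroid per class.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_margin(X, classes, centroids):
    # Label = nearest centroid; confidence = gap to the second-nearest.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    labels = classes[order[:, 0]]
    margin = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
    return labels, margin

def self_train(X_lab, y_lab, X_unlab, threshold=1.0, max_iter=10):
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        classes, centroids = fit_centroids(X_lab, y_lab)
        labels, margin = predict_margin(X_unlab, classes, centroids)
        confident = margin >= threshold
        if not confident.any():
            break
        # Promote only the most confident pseudo-labels into the labeled pool.
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, labels[confident]])
        X_unlab = X_unlab[~confident]
    return fit_centroids(X_lab, y_lab)

X_lab = np.array([[0.0, 0.0], [5.0, 5.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.4, 0.1], [4.9, 5.2], [0.2, 0.3], [5.1, 4.8]])
classes, centroids = self_train(X_lab, y_lab, X_unlab)
pred, _ = predict_margin(np.array([[0.3, 0.3], [5.0, 5.0]]), classes, centroids)
```

The confidence threshold is the key design choice: adding only high-margin pseudo-labels limits the error propagation that makes naive self-training brittle.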
Co-training is an extension of self-training in which multiple classifiers are trained on different (ideally disjoint) sets of features and generate labeled examples for one another.
In human cognition
Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data. More natural learning problems may also be viewed as instances of semi-supervised learning. Much of human concept learning involves a small amount of direct instruction (e.g. parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback).
Human infants are sensitive to the structure of unlabeled natural categories such as images of dogs and cats or male and female faces. Infants and children take into account not only unlabeled examples, but the sampling process from which labeled examples arise.
See also
PU learning
References
Sources
External links
Manifold Regularization A freely available MATLAB implementation of the graph-based semi-supervised algorithms Laplacian support vector machines and Laplacian regularized least squares.
KEEL: A software tool to assess evolutionary algorithms for Data Mining problems (regression, classification, clustering, pattern mining and so on) KEEL module for semi-supervised learning.
Semi-Supervised Learning Software
Semi-Supervised learning — scikit-learn documentation Semi-supervised learning in scikit-learn.
Machine learning | Weak supervision | [
"Engineering"
] | 2,281 | [
"Artificial intelligence engineering",
"Machine learning"
] |
60,971,511 | https://en.wikipedia.org/wiki/Psychic%20equivalence | Psychic equivalence describes a mind-state where no distinction is drawn between the contents of the mind and the external world – where what is thought in the mind is assumed to be automatically true.
Origins
Psychic equivalence is a primitive mind-state which precedes in infancy the capacity for mentalization, that is, for reflection upon both inner and outer worlds. In psychic equivalence mode, if the child thinks there is a monster in the closet it believes there really is a monster in the closet; if the inner world feels harmonious, the world outside is also harmonious.
Psychic equivalence is thus a self-convinced form of concrete understanding of the world, one that blocks all curiosity about alternative mind-views.
In later life
Psychic equivalence reappears in later life in the course of dreams, delusions, and traumatic flashbacks. It involves a temporary loss of awareness of the difference between external reality and the contents of the mind: thus in post traumatic stress disorder the individual is convinced (perhaps years later) they are actually back in the situation of the original trauma, a complete loss of perspective.
Where a defensive false self has been built up since infancy to defend against the anxieties of psychic equivalence, subsequent collapse of the narcissistic structure can lead to the re-emergence of the terrifying impingement by reality of psychic equivalence.
See also
References
External links
psychic equivalence
Developmental psychology
Psychological concepts
Psychodynamics
Mindfulness (psychology)
Borderline personality disorder | Psychic equivalence | [
"Biology"
] | 293 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
60,972,435 | https://en.wikipedia.org/wiki/Metaphenomics | Metaphenomics studies the phenome of plants or other organisms by means of meta-analysis. Main goal is to establish dose-response relationships of a wide range of phenotypic traits for a large set of a-biotic environmental factors.
Rationale
A popular way to study the effect of the environment on plants is to set up experiments where subgroups of individuals of a species of interest are exposed to different levels of one environmental factor (e.g. light, CO2), while all other factors are kept similar. These studies have yielded a lot of insight into the way plants respond to the environment, but their results may be challenging to integrate by means of a classical meta-analysis. One of the reasons is that phenotypic traits often respond to the environment in a non-linear way. Rather than evaluating the difference between ‘low-CO2’ and ‘high-CO2’ grown plants, it is better to derive dose-response curves that take into account the levels at which experiments were carried out. Metaphenomics uses a method to calculate dose-response curves from a variety of experiments, and is applicable to any phenotypic trait and many environmental variables.
Method
Core of the method used in metaphenomics is to scale all phenotypic data for a given species or genotype across all the levels of the environmental variable of interest (say CO2) to the value they have at a reference value of that environmental variable (for example, a CO2 concentration of 400 ppm). In this way, inherent variation among species or genotypes in the trait of interest is removed, as for all experiments and species, the scaled value at 400 ppm will be 1.0. Subsequently, general dose-response curves can be derived by fitting mathematical equations to the data.
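The scaling step described above can be sketched as follows. Each experiment is assumed to report trait values at several dose levels spanning the reference (400 ppm CO2 here); the linear interpolation at the reference dose and all function names are illustrative assumptions, not the method's prescribed implementation:

```python
# Sketch of the metaphenomics scaling step: express all trait observations
# of one experiment relative to the (interpolated) value at a reference
# level of the environmental variable, so the scaled value there is 1.0.

def scale_to_reference(doses, values, ref=400.0):
    pairs = sorted(zip(doses, values))
    ref_val = None
    # Linearly interpolate the trait value at the reference dose.
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        if d0 <= ref <= d1:
            t = (ref - d0) / (d1 - d0) if d1 != d0 else 0.0
            ref_val = v0 + t * (v1 - v0)
            break
    if ref_val is None:
        raise ValueError("reference dose outside measured range")
    return [(d, v / ref_val) for d, v in pairs]
```

After this normalization, data from different species and experiments can be pooled and a general dose-response curve fitted to the scaled values.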
Outcome
The results are generally a family of curves in which dose-response curves for one phenotypic trait are compared across a range of different environmental variables, or in which many different phenotypic traits are analysed for their response to one environmental factor. This provides a simple and quantitative overview of the many ways plants or other organisms respond to their environment.
See also
Dose-response relationship
Meta-analyses
Phenome
References
Plants
Meta-analysis
Phenomics | Metaphenomics | [
"Biology"
] | 458 | [
"Plants"
] |
60,974,262 | https://en.wikipedia.org/wiki/Breeding%20blanket | The tritium breeding blanket (also known as a fusion blanket, lithium blanket or simply blanket), is a key part of many proposed fusion reactor designs. It serves several purposes; primarily it is to produce (or "breed") further tritium fuel for the nuclear fusion reaction, which owing to the scarcity of tritium would not be available in sufficient quantities, through the reaction of neutrons with lithium in the blanket. The blanket may also act as a cooling mechanism, absorbing the energy from the neutrons produced by the reaction between deuterium and tritium ("D-T"), and further serves as shielding, preventing the high-energy neutrons from escaping to the area outside the reactor and protecting the more radiation-susceptible portions, such as ohmic or superconducting magnets, from damage.
Of these three duties, it is only the breeding portion that cannot be replaced by other means. For instance, a large quantity of water makes an excellent cooling system and neutron shield, as in the case of a conventional nuclear reactor. However, tritium is not a naturally occurring resource, and thus is difficult to obtain in sufficient quantity to run a reactor through other means, so if commercial fusion using the D-T cycle is to be achieved, successful breeding of the tritium in commercial quantities is a requirement.
ITER runs a major effort in blanket design and will test a number of potential solutions. Concepts for the breeder blanket include helium-cooled lithium lead (HCLL), helium-cooled pebble bed (HCPB), and water-cooled lithium lead (WCLL) methods. Six different tritium breeding systems, known as Test Blanket Modules (TBM) will be tested in ITER.
Some breeding blanket designs are based on lithium containing ceramics, with a focus on lithium titanate and lithium orthosilicate. These materials, mostly in a pebble form, are used to produce and extract tritium and helium; must withstand high mechanical and thermal loads; and should not become excessively radioactive upon completion of their useful service life.
To date no large-scale breeding system has been attempted, and it is an open question whether such a system is possible to create.
A fast breeder reactor uses a blanket of uranium or thorium.
References
External links
Nuclear fusion
Lithium | Breeding blanket | [
"Physics",
"Chemistry"
] | 470 | [
"Nuclear fission",
"Physical quantities",
"Nuclear power",
"Plasma physics",
"Fusion power",
"Power (physics)",
"Nuclear energy",
"Nuclear physics",
"Nuclear fusion",
"Radioactivity"
] |
60,975,285 | https://en.wikipedia.org/wiki/Biyagama%20Water%20Treatment%20Plant | The Biyagama Water Treatment Plant or BWTP is a water treatment facility located at the bank of Kelani River, in Biyagama, Sri Lanka. At a daily output capacity of , it is the second largest water treatment facility in the country. The plant provides drinking water to approximately one million people, in Wattala, Ja-Ela, Kelaniya, Biyagama, Ragama, Kandana, Kadawatha, Kiribathgoda, Seeduwa and Ganemulla, within the Gampaha District.
Construction of the facility began on 22 October 2008. It was ceremonially inaugurated by President Mahinda Rajapakse on 22 July 2013. The facility uses raw water from the Kelani River. The maximum water loss during the purification process is 5%, due to raw water transmission, sludge de-watering and backwash.
Extension
The Kelani Right Bank Water Supply Project - Phase 2 is a project to significantly increase the production to . The expansion project is ongoing as of June 2019, by Maga Engineering and Suez.
References
External links
Water treatment facilities
Industrial buildings completed in 2013
Water supply and sanitation in Sri Lanka
2013 establishments in Sri Lanka
Buildings and structures in Gampaha District | Biyagama Water Treatment Plant | [
"Chemistry"
] | 254 | [
"Water treatment",
"Water treatment facilities"
] |
60,976,257 | https://en.wikipedia.org/wiki/Eighth%20power | In arithmetic and algebra, the eighth power of a number n is the result of multiplying eight instances of n together. So:
.
Eighth powers are also formed by multiplying a number by its seventh power, or the fourth power of a number by itself.
The sequence of eighth powers of integers is:
0, 1, 256, 6561, 65536, 390625, 1679616, 5764801, 16777216, 43046721, 100000000, 214358881, 429981696, 815730721, 1475789056, 2562890625, 4294967296, 6975757441, 11019960576, 16983563041, 25600000000, 37822859361, 54875873536, 78310985281, 110075314176, 152587890625 ...
In the archaic notation of Robert Recorde, the eighth power of a number was called the "zenzizenzizenzic".
Algebra and number theory
Polynomial equations of degree 8 are octic equations. These have the form ax⁸ + bx⁷ + cx⁶ + dx⁵ + ex⁴ + fx³ + gx² + hx + k = 0, with a ≠ 0.
The smallest known eighth power that can be written as a sum of eight eighth powers is
The sum of the reciprocals of the nonzero eighth powers is the Riemann zeta function evaluated at 8, which can be expressed in terms of the eighth power of pi:
ζ(8) = 1 + 1/2⁸ + 1/3⁸ + 1/4⁸ + ⋯ = π⁸/9450 ≈ 1.00408
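The value ζ(8) = π⁸/9450 can be checked numerically; a minimal sketch in Python (the cutoff of 1,000 terms is an arbitrary choice, ample given how fast the series converges):

```python
import math

# Partial sum of the reciprocals of the nonzero eighth powers.
partial = sum(1.0 / n**8 for n in range(1, 1000))

# Closed form: zeta(8) = pi**8 / 9450.
closed_form = math.pi**8 / 9450
```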
This is an example of a more general expression for evaluating the Riemann zeta function at positive even integers, in terms of the Bernoulli numbers: ζ(2n) = (−1)ⁿ⁺¹ B₂ₙ (2π)²ⁿ / (2 (2n)!).
Physics
In aeroacoustics, Lighthill's eighth power law states that the power of the sound created by a turbulent motion, far from the turbulence, is proportional to the eighth power of the characteristic turbulent velocity.
The ordered phase of the two-dimensional Ising model exhibits an inverse eighth power dependence of the order parameter upon the reduced temperature.
The Casimir–Polder force between two molecules decays as the inverse eighth power of the distance between them.
See also
Seventh power
Sixth power
Fifth power (algebra)
Fourth power
Cube (algebra)
Square number
References
Integers
Number theory
Elementary arithmetic
Integer sequences
Unary operations
Figurate numbers | Eighth power | [
"Mathematics"
] | 473 | [
"Sequences and series",
"Figurate numbers",
"Functions and mappings",
"Integer sequences",
"Elementary arithmetic",
"Unary operations",
"Mathematical structures",
"Discrete mathematics",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Elementary mathematics",
"Arithmeti... |
60,977,793 | https://en.wikipedia.org/wiki/Magnetotropism | Magnetotropism is the movement or plant growth in response to the stimulus provided by the magnetic field in plants (specifically agricultural plants) around the world. As a natural environmental factor in the Earth, variations of magnetic field level causes many biological effects, including germination rate, flowering time, photosynthesis, biomass accumulation, activation of cryptochrome, and shoot growth.
Biological effects
As an adaptive behavior, magnetotropism is recognized as a possible means of improving agricultural success, studied using the well-characterized plant model Arabidopsis thaliana, a typical small plant native to Europe and Asia with well-known genomic functions. In 2012, Xu et al. conducted a near-null magnetic field experiment under white light and long-day conditions, using homemade equipment that combined three pairs of Helmholtz coils in the vertical, north–south, and east–west directions to compensate the field to near null. Xu noted that under the near-null magnetic field, Arabidopsis thaliana delays flowering by altering the transcription levels of three cryptochrome-related florigen genes: PHYB, CO, and FT. Arabidopsis thaliana also developed longer hypocotyls under white light in the near-null magnetic field than in the standard geomagnetic field, in both dark and white-light conditions. Furthermore, biomass accumulation is reduced in the near-null magnetic field while Arabidopsis thaliana switches from vegetative to reproductive growth. More recently, Agliassa conducted a similar experiment continuing Xu et al.'s work and found that Arabidopsis thaliana delays flowering while shortening stem length and reducing leaf size. This expression pattern shows that the near-null magnetic field causes downregulation of several flowering genes, including the FT gene in the meristem and leaves, which is cryptochrome related.
Physiological mechanism
Although preliminary experiments have shown a wide range of effects of the magnetic field, the mechanism has not yet been elucidated. Given that the delay of flowering under blue light involves downregulation of cryptochrome-related genes in the near-null magnetic field, cryptochrome is considered a potential magneto-sensor for several reasons. Based on the radical pair model, cryptochrome would be the magneto-sensor in light-dependent magnetoreception, since cryptochrome plays significant roles in plant behavior, including blue-light reception and regulation, de-etiolation, circadian rhythm, and photolyase activity. In the photoactivation process, blue light hits cryptochrome and a photon is accepted by the flavin, while a tryptophan simultaneously receives a photon from another tryptophan donor. Due to the geomagnetic field, this combination would rotate from the south pole to the north pole of the Earth and convert the two single photons back to their inactive resting states under aerobic conditions.
Based on these behavioral changes under variations of the magnetic field, many plant scientists have paid attention to cryptochrome as a candidate magneto-sensory receptor. So far, the interactions between signals and magnetoreceptor molecules have not been discovered, leaving room for future research; understanding magnetotropism would be significant for applications such as agriculture and for ecology.
References
Tropism
Botany | Magnetotropism | [
"Biology"
] | 698 | [
"Plants",
"Botany"
] |
70,329,684 | https://en.wikipedia.org/wiki/Ciltacabtagene%20autoleucel | Ciltacabtagene autoleucel, sold under the brand name Carvykti, is an anti-cancer medication used to treat multiple myeloma. Ciltacabtagene autoleucel is a BCMA (B-cell maturation antigen)-directed genetically modified autologous chimeric antigen receptor (CAR) T-cell therapy. Each dose is customized using the recipient's own T-cells, which are collected and genetically modified, and infused back into the recipient.
The most common adverse reactions include pyrexia, cytokine release syndrome, hypogammaglobulinemia, musculoskeletal pain, fatigue, infections, diarrhea, nausea, encephalopathy, headache, coagulopathy, constipation, and vomiting. Additional common side effects include neutropenia (low levels of neutrophils), lymphopenia and leucopenia (low levels of lymphocytes or other white blood cells), anemia (low levels of red blood cells), thrombocytopenia (low levels of blood platelets), hypotension (low blood pressure), pain of the muscles and bones, high level of liver enzymes, upper respiratory tract infection (nose and throat infection), diarrhea, hypokalemia (low level of potassium), hypocalcemia (low levels of calcium), hypophosphatemia (low levels of phosphate in the blood), nausea, headache, cough, tachycardia (rapid heartbeat), encephalopathy (a brain disorder), edema (fluid retention), decreased appetite, chills, fever, tiredness, as well as cytokine release syndrome (a potentially life-threatening condition that can cause fever, vomiting, shortness of breath, pain and low blood pressure).
Ciltacabtagene autoleucel was approved for medical use in the United States in February 2022, and in the European Union in May 2022.
Medical uses
Ciltacabtagene autoleucel is indicated for the treatment of adults with relapsed or refractory multiple myeloma after one or more prior lines of therapy, including a proteasome inhibitor and an immunomodulatory agent, and are refractory to lenalidomide.
Adverse effects
In April 2024, the FDA label boxed warning was expanded to include T cell malignancies.
History
The safety and efficacy of ciltacabtagene autoleucel were evaluated in CARTITUDE-1 (NCT03548207), an open label, multicenter clinical trial evaluating ciltacabtagene autoleucel in 97 participants with relapsed or refractory multiple myeloma who received at least three prior lines of therapy which included a proteasome inhibitor, an immunomodulatory agent, and an anti-CD38 monoclonal antibody and who had disease progression on or after the last chemotherapy regimen; 82% had received four or more prior lines of antimyeloma therapy.
The U.S. Food and Drug Administration (FDA) granted the application for ciltacabtagene autoleucel priority review, breakthrough therapy, and orphan drug designations.
Society and culture
Legal status
In March 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a conditional marketing authorization for the medicinal product Carvykti, intended for the treatment of adults with relapsed and refractory multiple myeloma. The applicant for this medicinal product is Janssen-Cilag International NV. Ciltacabtagene autoleucel was approved for medical use in the European Union in May 2022.
Names
Ciltacabtagene autoleucel is the international nonproprietary name.
References
Antineoplastic drugs
Approved gene therapies
CAR T-cell therapy
Orphan drugs | Ciltacabtagene autoleucel | [
"Biology"
] | 845 | [
"Cell therapies",
"CAR T-cell therapy"
] |
70,329,762 | https://en.wikipedia.org/wiki/Moral%20equality%20of%20combatants | The moral equality of combatants (MEC) or moral equality of soldiers is the principle that soldiers fighting on both sides of a war are equally honorable, unless they commit war crimes, regardless of whether they fight for a just cause. MEC is a key element underpinning international humanitarian law (IHL)—which applies the rules of war equally to both sides—and traditional just war theory. According to philosopher Henrik Syse, MEC presents a serious quandary because "it makes as little practical sense to ascribe blame to individual soldiers for the cause of the war in which they fight as it makes theoretical sense to hold the fighters on the two sides to be fully morally equal". The moral equality of combatants has been cited in relation to the Israeli–Palestinian conflict or the U.S.-led wars in Iraq and Afghanistan.
Traditional view
MEC as a formal doctrine was articulated in Just and Unjust Wars (1977) by Michael Walzer, although earlier just war theorists such as Augustine and Aquinas argued that soldiers should obey their leaders when fighting. There is dispute over whether early modern just war theory promoted MEC. A full crystallization of MEC could only occur after both jus ad bellum and jus in bello were developed. Proponents of MEC argue that individual soldiers are not well-placed to determine the justness of a war. Walzer, for example, argues that the entire responsibility for an unjust war is borne by military and civilian leaders who choose to go to war, rather than individual soldiers who have little say in the matter.
MEC is one of the underpinnings of international humanitarian law (IHL), which applies equally to both sides regardless of the justice of their cause. In IHL, this principle is known as equality of belligerents. This contradicts the legal principle of ex injuria jus non oritur that no one should be able to derive benefit from their illegal action. British jurist Hersch Lauterpacht articulated the pragmatic basis of belligerent equality, stating: "it is impossible to visualize the conduct of hostilities in which one side would be bound by rules of warfare without benefiting from them and the other side would benefit from them without being bound by them". International law scholar Eliav Lieblich states that the moral responsibility of soldiers who participate in unjust wars is "one of the stickiest problems in the ethics of war".
Revisionist challenge
There is no equivalent to MEC in peacetime circumstances. In 2006, philosopher Jeff McMahan began to contest MEC, arguing that soldiers fighting an unjust or illegal war are not morally equal to those fighting in self-defense. Although revisionists do not favor criminal prosecution of individual soldiers who fight in an unjust war, they argue that individual soldiers should assess the legality or morality of the war they are asked to fight, and refuse if it is an illegal or unjust war. According to the revisionist view, a soldier or officer who knows or strongly suspects that their side is fighting an unjust war has a moral obligation not to fight, unless refusing would entail capital punishment or some other extreme consequence.
Opponents of MEC—sometimes grouped under the label of revisionist just war theory—nevertheless generally support the belligerent equality principle of IHL on pragmatic grounds. In his 2018 book The Crime of Aggression, Humanity, and the Soldier, law scholar Tom Dannenbaum was one of the first to propose legal reforms based on rejection of MEC. Dannenbaum argued that soldiers who refuse to fight illegal wars should be allowed selective conscientious objection and be accepted as refugees if they have to flee their country. He also argued that soldiers fighting against a war of aggression should be recognized as victims in postwar reparations processes.
Public opinion
A 2019 study found that the majority of Americans endorse the revisionist view on MEC and many are even willing to allow a war crime against noncombatants to go unpunished when committed by soldiers who are fighting a just war. Responding to the study, Walzer cautioned that differently phrased questions might have led to different results.
References
Sources
Further reading
Military ethics
Just war theory | Moral equality of combatants | [
"Biology"
] | 863 | [
"Just war theory",
"Behavior",
"Aggression"
] |
70,330,392 | https://en.wikipedia.org/wiki/Topographic%20Rossby%20waves | Topographic Rossby waves are geophysical waves that form due to bottom irregularities. For ocean dynamics, the bottom irregularities are on the ocean floor such as the mid-ocean ridge. For atmospheric dynamics, the other primary branch of geophysical fluid dynamics, the bottom irregularities are found on land, for example in the form of mountains. Topographic Rossby waves are one of two types of geophysical waves named after the meteorologist Carl-Gustaf Rossby. The other type of Rossby waves are called planetary Rossby waves and have a different physical origin. Planetary Rossby waves form due to the changing Coriolis parameter over the earth. Rossby waves are quasi-geostrophic, dispersive waves. This means that not only the Coriolis force and the pressure-gradient force influence the flow, as in geostrophic flow, but also inertia.
Physical derivation
This section describes the mathematically simplest situation where topographic Rossby waves form: a uniform bottom slope.
Shallow water equations
A coordinate system is defined with x in eastward direction, y in northward direction and z as the distance from the earth's surface. The coordinates are measured from a certain reference coordinate on the earth's surface with a reference latitude and a mean reference layer thickness . The derivation begins with the shallow water equations:
where
In the equation above, friction (viscous drag and kinematic viscosity) is neglected. Furthermore, a constant Coriolis parameter is assumed ("f-plane approximation"). The first and the second equation of the shallow water equations are respectively called the zonal and meridional momentum equations, and the third equation is the continuity equation. The shallow water equations assume a homogeneous and barotropic fluid.
Linearization
For simplicity, the analysis is restricted to a weak, uniform bottom slope aligned with the y-axis, which enables a better comparison with the results for planetary Rossby waves. The mean layer thickness of the undisturbed fluid is then defined as
where is the slope of bottom, the topographic parameter and the horizontal length scale of the motion. The restriction on the topographic parameter guarantees that there is a weak bottom irregularity. The local and instantaneous fluid thickness can be written as
Utilizing this expression in the continuity equation of the shallow water equations yields
The set of equations is made linear to obtain a set of equations that is easier to solve analytically. This is done by assuming a Rossby number Ro (= advection / Coriolis force), which is much smaller than the temporal Rossby number RoT (= inertia / Coriolis force). Furthermore, the length scale of is assumed to be much smaller than the thickness of the fluid . Finally, the condition on the topographic parameter is used and the following set of linear equations is obtained:
Quasi-geostrophic approximation
Next, the quasi-geostrophic approximation Ro, RoT 1 is made, such that
where and are the geostrophic flow components and and are the ageostrophic flow components with and . Substituting these expressions for and in the previously acquired set of equations, yields:
Neglecting terms where small component terms ( and ) are multiplied, the expressions obtained are:
Substituting the components of the ageostrophic velocity in the continuity equation the following result is obtained:
in which R, the (barotropic) Rossby radius of deformation, is defined as R = √(gH₀)/f₀, with g the gravitational acceleration, H₀ the mean layer thickness, and f₀ the Coriolis parameter.
Dispersion relation
Taking for a plane monochromatic wave of the form
with the amplitude, and the wavenumber in x- and y- direction respectively, the angular frequency of the wave, and a phase factor, the following dispersion relation for topographic Rossby waves is obtained:
If there is no bottom slope (), the expression above yields no waves, but a steady and geostrophic flow. This is the reason why these waves are called topographic Rossby waves.
The maximum frequency of the topographic Rossby waves is
which is attained for and . If the forcing creates waves with frequencies above this threshold, no Rossby waves are generated. This situation rarely happens, unless is very small. In all other cases exceeds and the theory breaks down. The reason for this is that the assumed conditions: and RoT are no longer valid. The shallow water equations used as a starting point also allow for other types of waves such as Kelvin waves and inertia-gravity waves (Poincaré waves). However, these do not appear in the obtained results because of the quasi-geostrophic assumption which is used to obtain this result. In wave dynamics this is called filtering.
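A hedged numerical sketch of this maximum-frequency property, assuming the dispersion relation takes the standard quasi-geostrophic form ω = β_t kx / (kx² + ky² + 1/R²) with a topographic beta β_t = α₀f₀/H₀ (parameter names assumed, since the formulas are not shown above); under that assumption, the maximum frequency β_t R/2 is attained at ky = 0, kx = 1/R:

```python
# Assumed standard form of the topographic Rossby wave dispersion relation.
def omega(kx, ky, beta_t, R):
    return beta_t * kx / (kx**2 + ky**2 + 1.0 / R**2)

beta_t = 2.0e-11   # illustrative topographic beta, 1/(m*s)
R = 50e3           # illustrative Rossby radius of deformation, m

# Scan zonal wavenumbers at ky = 0 (which maximizes the frequency) and
# compare the numerical maximum to the analytical value beta_t * R / 2.
w_max = max(omega(i * 1e-7, 0.0, beta_t, R) for i in range(1, 2001))
w_analytical = beta_t * R / 2
```

Any forcing at frequencies above this w_max cannot excite topographic Rossby waves, as stated above.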
Phase speed
The phase speed of the waves along the isobaths (lines of equal depth, here the x-direction) is
which means that on the northern hemisphere the waves propagate with the shallow side at their right and on the southern hemisphere with the shallow side at their left. The equation of shows that the phase speed varies with wavenumber so the waves are dispersive. The maximum of is
which is the speed of very long waves (). The phase speed in the y-direction is
which means that can have any sign. The phase speed is given by
from which it can be seen that as . This implies that the maximum of is the maximum of .
Analogy between topographic and planetary Rossby waves
Planetary and topographic Rossby waves are the same in the sense that, if the term is exchanged for in the expressions above, where is the beta-parameter or Rossby parameter, the expression of planetary Rossby waves is obtained. The reason for this similarity is that for the nonlinear shallow water equations for a frictionless, homogeneous flow the potential vorticity q is conserved:
with ζ being the relative vorticity, which is twice the rotation speed of fluid elements about the z-axis, and is mathematically defined as ζ = ∂v/∂x − ∂u/∂y (u and v being the horizontal velocity components),
with an anticlockwise rotation about the z-axis. On a beta-plane and for a linearly sloping bottom in the meridional direction, the potential vorticity becomes
.
In the derivations above it was assumed that
so
where a Taylor expansion was used on the denominator and the dots indicate higher order terms. Only keeping the largest terms and neglecting the rest, the following result is obtained:
Consequently, the analogy that appears in potential vorticity is that and play the same role in the potential vorticity equation. Rewriting these terms a bit differently, this boils down to the earlier seen and , which demonstrates the similarity between planetary and topographic Rossby waves. The equation for potential vorticity shows that planetary and topographic Rossby waves exist because of a background gradient in potential vorticity.
The analogy between planetary and topographic Rossby waves is exploited in laboratory experiments that study geophysical flows to include the beta effect which is the change of the Coriolis parameter over the earth. The water vessels used in those experiments are far too small for the Coriolis parameter to vary significantly. The beta effect can be mimicked to a certain degree in these experiments by using a tank with a sloping bottom. The substitution of the beta effect by a sloping bottom is only valid for a gentle slope, slow fluid motions and in the absence of stratification.
Conceptual explanation
As shown in the last section, Rossby waves are formed because potential vorticity must be conserved. When the surface has a slope, the thickness of the fluid layer is not constant. The conservation of the potential vorticity forces the relative vorticity or the Coriolis parameter to change. Since the Coriolis parameter is constant at a given latitude, the relative vorticity must change. In the figure a fluid moves to a shallower environment, where is smaller, causing the fluid to form a crest. When the height is smaller, the relative vorticity must also be smaller. In the figure, this becomes a negative relative vorticity (on the northern hemisphere a clockwise spin) shown with the rounded arrows. On the southern hemisphere this is an anticlockwise spin, because the Coriolis parameter is negative on the southern hemisphere. If a fluid moves to a deeper environment, the opposite is true. The fluid parcel on the original depth is sandwiched between two fluid parcels with one of them having a positive relative vorticity and the other one a negative relative vorticity. This causes a movement of the fluid parcel to the left in the figure. In general, the displacement causes a wave pattern that propagates with the shallower side to the right on the northern hemisphere and to the left on the southern hemisphere.
Measurements of topographic Rossby waves on earth
From 1 January 1965 until 1 January 1968, the Buoy Project at the Woods Hole Oceanographic Institution dropped buoys on the western side of the North Atlantic to measure velocities. The data have several gaps because some of the buoys went missing; still, the project managed to measure topographic Rossby waves at 500 meters depth. Several other research projects have confirmed that there are indeed topographic Rossby waves in the North Atlantic.
In 1988, barotropic planetary Rossby waves were reported in the Northwest Pacific basin. Further research in 2017 concluded that these were not planetary Rossby waves but topographic Rossby waves.
In 2021, research in the South China Sea confirmed that topographic Rossby waves exist.
In 2016, research in the Eastern Mediterranean showed that topographic Rossby waves are generated south of Crete by lateral shifts of a mesoscale circulation structure over the sloping bottom at 4000 m (https://doi.org/10.1016/j.dsr2.2019.07.008).
References
Geophysics
Oceans | Topographic Rossby waves | [
"Physics"
] | 1,962 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
70,330,820 | https://en.wikipedia.org/wiki/Phalaenopsis%20%C3%97%20amphitrite | Phalaenopsis × amphitrite is a species of orchid native to the Philippines. It is a natural hybrid of Phalaenopsis sanderiana and Phalaenopsis stuartiana.
History
Like all known herbarium specimens from Friedrich Wilhelm Ludwig Kraenzlin, the type specimen was destroyed in Berlin during the Second World War.
Conservation
This species appears to be infrequent and uncommon.
References
amphitrite
Orchid hybrids
Hybrid plants
Plant nothospecies
Interspecific plant hybrids
Plants described in 1892
Orchids of the Philippines | Phalaenopsis × amphitrite | [
"Biology"
] | 111 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
70,331,004 | https://en.wikipedia.org/wiki/Phalaenopsis%20%C3%97%20gersenii | Phalaenopsis × gersenii is a species of orchid native to Borneo and Sumatra. It is a natural hybrid of Phalaenopsis violacea and Phalaenopsis sumatrana. It is named after Gerrit Jan Gersen (1826-1877). He was a Dutch official, who was deployed to the Dutch East Indies, where he also was active as a plant collector of the Malesian region.
Taxonomy
Phalaenopsis × singuliflora has been viewed as a synonym of Phalaenopsis × gersenii. That natural hybrid, however, involves Phalaenopsis bellina instead of Phalaenopsis violacea.
References
gersenii
Orchid hybrids
Hybrid plants
Plant nothospecies
Interspecific plant hybrids
Plants described in 1892
Orchids of Borneo
Orchids of Sumatra
Orchids of Indonesia | Phalaenopsis × gersenii | [
"Biology"
] | 173 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
70,331,391 | https://en.wikipedia.org/wiki/Mikhail%20Shchadov | Mikhail Shchadov (; 14 November 1927 – 13 November 2011) was a Russian engineer who served as the minister of coal industry between 15 December 1985 and 24 August 1991, being the last Soviet minister to hold the post.
Early life and education
Shchadov was born in the village of Kamenka, Irkutsk Oblast on 14 November 1927. He graduated from Tomsk Polytechnic Institute, majoring in mining engineering in 1953. He also graduated from the All-Union Financial and Economic Correspondence Institute with a degree in economics in 1965. The same year, he also graduated from the Higher Party School which was attached to the central committee of the Communist Party.
Career
Shchadov began his career at 15, working in a mine in Cheremkhovo as a site manager. He then became the chief engineer and head of a mine. Next, he was named the manager of a trust, Mamslyuda, in 1961, a post he held until 1963. From 1966, he worked at the Vostsibugol plant as deputy head and then head of the plant. He was made the general director of the Vostsibugol Production Association. In 1977, he was appointed deputy minister of the coal industry, and in 1981 first deputy minister. He was named the coal industry minister on 15 December 1985, replacing Boris F. Bratchenko in the post. Shchadov served in the cabinet led by Nikolai Ryzhkov.
Shchadov's term was extended in March 1989. Just four days after this the mine workers started a large-scale strike. He remained in office until 24 August 1991 when he was fired due to his support for the coup against Mikhail Gorbachev. However, two months later in October 1991 Shchadov was appointed chairman of the board of the Credit Bank of Moscow.
Party career
Shchadov joined the Communist Party in 1947. He became a deputy at the Supreme Soviet in the 11th convocation and was a member of the Communist Party's central committee from 1986 to 1990.
In popular culture
Shchadov was portrayed in the 2019 TV miniseries Chernobyl, but the show lowered his age and depth of experience in the coal industry for dramatic purposes.
References
20th-century Russian engineers
1927 births
2011 deaths
Members of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union
Eleventh convocation members of the Supreme Soviet of the Soviet Union
Members of the Supreme Soviet of Russia
Mining engineers
People from Irkutsk Oblast
People's commissars and ministers of the Soviet Union
Soviet economists
Soviet engineers
Tomsk Polytechnic University alumni | Mikhail Shchadov | [
"Engineering"
] | 530 | [
"Mining engineering",
"Mining engineers"
] |
70,333,387 | https://en.wikipedia.org/wiki/DivestOS | DivestOS was an open-source Android operating system. It was a soft fork of LineageOS that aimed to increase security and privacy, with support for end-of-life devices. It removed many proprietary blobs and came with open-source apps pre-installed.
DivestOS builds were signed with release-keys so bootloaders may be re-locked on supported devices. An automated CVE patcher was used to patch the kernels against many known vulnerabilities.
DivestOS included few default applications. F-Droid was included, as well as Hypatia, a "real-time malware scanner" and Carrion, a robocall blocker.
History
The DivestOS project began in 2014, with the first properly signed builds being released in 2015.
Public release of DivestOS was announced on F-Droid forums in June 2020.
In December 2024, Tavi announced the discontinuation of DivestOS and its apps.
Supported devices
DivestOS primarily supported devices that had been supported by LineageOS.
Reception
In February 2022, TechTracker.in said DivestOS is one of few custom ROMs focusing on security and privacy, with monthly and incremental updates.
GNU/Linux.ch Linux and Freie Software News called DivestOS "relatively new and ambitious" and said it supports many devices, both newer and older.
DevsJournal called DivestOS 18.1 one of the best custom ROMs for the One Plus One phone.
DivestOS' Hypatia malware scanner for Android, and how to use their F-Droid repository, was reviewed by Gadget Hacks in March 2021.
In March 2023, the 2022 Free Software Foundation Award for Outstanding New Free Software Contributor went to Tad (SkewedZeppelin), chief developer of the DivestOS project.
See also
Comparison of mobile operating systems
List of custom Android distributions
Security-focused operating system
Tor Phone
References and notes
External links
Android (operating system) forks
Computing platforms
Custom Android firmware
Embedded Linux distributions
Free mobile software
Free and open-source Android software
Linux distributions
Linux distributions without systemd
Mobile Linux
Mobile operating systems
Smartphones
Software forks | DivestOS | [
"Technology"
] | 452 | [
"Computing platforms"
] |
70,333,497 | https://en.wikipedia.org/wiki/Brisavirus | Brisavirus (isolate LC KY052047) is a species of Redondoviridae in the genus Torbevirus. Brisa- is from the Spanish word for "breeze", which refers to the isolation of these viruses from the human respiratory tract. It was discovered in a throat swab from a male traveler who presented with fever, enlarged adenoids, flushed skin and myalgia after testing negative for other viruses. Brisavirus, like other viruses in the family Redondoviridae, is present and putatively replicates in the oro-respiratory tract. It is associated with critical illness and periodontitis in patients.
Genome
Brisavirus has a CRESS DNA genome with 3 inversely oriented open reading frames that encode the capsid and replication-associated proteins, separated by a small and a larger intergenic region. It replicates by rolling-circle replication like other CRESS viruses. Using metagenomic analysis, researchers found that the encoded proteins were most similar to porcine stool-associated circular virus 5 isolate CP33 (PoSCV5).
References
DNA viruses | Brisavirus | [
"Biology"
] | 223 | [
"Virus stubs",
"Viruses",
"DNA viruses"
] |
70,333,843 | https://en.wikipedia.org/wiki/Doug%20%28tuber%29 | Doug, also known as Dug, is a tuber in the Cucurbitaceae family that was grown by Colin and Donna Craig-Brown near Hamilton in New Zealand. Weighing roughly , it was thought to be the largest potato on record for a period after its discovery, topping the record holder at the time. However, genetic testing revealed that Doug is not, in fact, a potato.
Background
On 30 August 2021, while the Craig-Browns were weeding their garden near Hamilton, Colin's hoe struck what he initially thought was a fungal growth or a sweet potato beneath the surface; he discarded these ideas after realising the object's size. The couple dug around the object. Colin extracted it with a garden fork, scratched its skin, tasted it, and decided that it was a potato. The couple weighed it and named it Doug, after the word dug.
Doug grew in popularity locally and on Facebook, where the couple occasionally posted photographs of it. At the suggestion of friends, the Craig-Browns submitted an application for the tuber, which was being kept in a freezer at the time, to Guinness World Records, in the category of largest potato. Doug was verified as being a potato by several gardening experts, but doubts persisted. In March 2022, the Craig-Browns' application was declined after genetic testing conducted by Plant & Food Research confirmed that Doug was the "tuber of a type of gourd". Chris Claridge, who assisted in the genetic testing, suggested that the tuber may have grown as the result of infection or disease.
Colin Craig-Brown had previously stated that he planned to turn Doug into vodka once the tuber's popularity died down.
References
Tubers
Gardening in New Zealand
Cucurbitaceae
Individual plants | Doug (tuber) | [
"Biology"
] | 360 | [
"Individual organisms",
"Plants",
"Individual plants"
] |
70,334,254 | https://en.wikipedia.org/wiki/Neodymium%28III%29%20hydride | Neodymium(III) hydride is an inorganic compound of neodymium and hydrogen with the chemical formula NdH3. In this compound, the neodymium atom is in the +3 oxidation state and the hydrogen atoms are in the −1 oxidation state. It is highly reactive.
Preparation
Neodymium(III) hydride can be produced by directly reacting neodymium and hydrogen gas:
2Nd + 3H2 → 2NdH3
It can also be made by hydrogenating neodymium(II) hydride.
Properties
Neodymium hydride is a blue crystal of the hexagonal system, with unit cell parameters a=0.385 nm, c=0.688 nm.
It reacts with water to form neodymium hydroxide and hydrogen gas:
NdH3 + 3 H2O → Nd(OH)3 + 3 H2
See also
Neodymium
Hydrogen
Lanthanide
References
Neodymium(III) compounds
Metal hydrides
Pyrophoric materials | Neodymium(III) hydride | [
"Chemistry",
"Technology"
] | 209 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
70,334,691 | https://en.wikipedia.org/wiki/Martin%20Hetzer | Martin Hetzer (born in Vienna, Austria) is an Austrian-born molecular biologist and President of the Institute of Science and Technology Austria (ISTA). He is holder of the Jesse and Caryl Philips Foundation Chair in Molecular Cell Biology. His research focuses on fundamental aspects of organismal aging with a special focus on the heart and central nervous system. His laboratory has also made important contributions in the area of cancer research and cell differentiation.
Career
Hetzer received his Ph.D. in biochemistry and genetics from the University of Vienna, Austria, and completed postdoctoral work in the lab of Iain Mattaj at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. He joined the faculty at the Salk Institute in La Jolla as an assistant professor in 2004 and became full professor in 2011. From 2016 he was Salk's Senior Vice President and Chief Science Officer, until he moved back to Austria in 2023, becoming the second President of the Institute of Science and Technology Austria (ISTA), succeeding Thomas A. Henzinger.
Awards
Hetzer has received a number of awards including a Pew Scholar Award, an Early Life Scientist Award from the American Society of Cell Biology, a Senior Scholar Award for Aging from the Ellison Medical Foundation, a Senior Scholar Award from the American Cancer Society, a Royal Society Research Merit Award, the 2013 Glenn Award for Research in Biological Mechanisms of Aging and the 2015 NIH Director's Transformative Research Award as well as the Keck Foundation Research Award.
References
Molecular biologists
Living people
Year of birth missing (living people)
Salk Institute for Biological Studies people
University of Vienna alumni | Martin Hetzer | [
"Chemistry"
] | 329 | [
"Molecular biologists",
"Biochemists",
"Molecular biology"
] |
70,334,831 | https://en.wikipedia.org/wiki/Susan%20D.%20Richardson | Susan D. Richardson is the Arthur Sease Williams Professor of Chemistry at the University of South Carolina. Richardson's research primarily focuses on emerging environmental contaminants, particularly those affecting drinking water systems and including disinfection by-products (DBPs) that can occur in water purification systems. She is a member of the National Academy of Engineering.
Education
She earned her bachelor's degree in chemistry and mathematics at Georgia College & State University. Additionally, she completed her Ph.D. in chemistry at Emory University, under the direction of Fred Menger. She received an honorary doctorate from Cape Breton University.
Career and awards
Prior to joining the faculty at University of South Carolina, Richardson worked at the National Exposure Research Laboratory of U.S. Environmental Protection Agency for 25 years, first as a postdoctoral fellow and then a research chemist.
Richardson has been the recipient of numerous awards, including (among others): American Chemical Society Award for Creative Advances in Environmental Science and Technology (2008), Fellow of the American Chemical Society (2016); Fellow of the American Association for the Advancement of Sciences (2019); the Herty Medal (2020).
Richardson served on a number of board positions with the American Society for Mass Spectrometry: Treasurer (2002-2004), Vice President for Programs (2018-2020), and President (2020-2022). In 2023, she was named one of the top ten "Connectors and Interdisciplinarians" in the Power List by the Analytical Scientist. In 2024, she was ranked #2 in the "Plant Protectors" field of the Analytical Scientist Power List.
References
American women chemists
Living people
Year of birth missing (living people)
Georgia College & State University alumni
Emory University alumni
University of South Carolina faculty
21st-century American chemists
Mass spectrometrists
21st-century American women scientists | Susan D. Richardson | [
"Physics",
"Chemistry"
] | 381 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
70,334,857 | https://en.wikipedia.org/wiki/Notexin | Notexin is a toxin produced by the tiger snake (Notechis scutatus). It is a myotoxic and presynaptically neurotoxic phospholipase A2 (PLA2). PLA2s are enzymes that hydrolyze the ester bond between the glycerol backbone and the fatty acid tail at the 2-position of phospholipids.
History
The name notexin comes from the fact that this toxin was first found to be the major component in the venom of the tiger snake. The name notexin is thus a combination of the genus name Notechis and the word toxin. The tiger snake was first described by Wilhelm Peters in 1861. The toxin was first purified more than a hundred years later in 1972 by Karlsson et al. This prompted more research into notexin.
Structure
Notexin is a single polypeptide chain of 119 amino acid residues that is cross-linked by 7 disulfide bridges.
X-ray diffraction has been used to determine the crystal structure of notexin, leading to the conclusion that notexin belongs to either the P3121 or P3221 space group with lattice parameters a = b = 74.6 Å, c = 49.0 Å and γ = 120°. The data were collected at a resolution of 2.0 Å with an R-factor of 16.5%. For protein structures this R-factor is usually around 20%, indicating that the crystal structure of notexin is relatively well defined.
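As a quick consistency check on the quoted lattice parameters (the formula is the standard hexagonal-cell volume, not something stated in the article; the numbers are simply the values quoted above), the unit-cell volume follows from V = (√3/2)·a²·c:

```python
import math

def hexagonal_cell_volume(a: float, c: float) -> float:
    """Volume of a hexagonal/trigonal unit cell: V = (sqrt(3)/2) * a^2 * c."""
    return (math.sqrt(3) / 2) * a * a * c

# Lattice parameters reported for notexin crystals, in angstroms.
a, c = 74.6, 49.0
volume = hexagonal_cell_volume(a, c)
print(f"{volume:.0f} A^3")  # on the order of 2.4e5 cubic angstroms
```

The resulting volume (~2.4 × 10⁵ Å³) is in the range typical for a small-protein crystal cell.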
The supramolecular structure of notexin is very similar to that of other PLA2s. Both notexin and many other PLA2s contain four characteristic main helices (the αA, αB, αC and αE helices) and a short carboxyl-terminal helix in their secondary structure. The active site also appears similar enough to that of other PLA2s that their model-building studies can be used when discussing enzymatic properties. Notexin does deviate significantly from other PLA2s in its main-chain length and in its conformation at the 69th amino acid residue.
The active site of notexin contains His-48. This residue is in close contact with the carboxylate oxygens of an Asp-99 residue, which is also present in notexin. In most PLA2s the wall of the active site is lined with hydrophobic residues. When a lone pair on the oxygen of water attacks the ester, the His-48 residue facilitates a proton transfer, and the substrate's carbonyl oxygen is possibly fixated and stabilized by the positively charged NH groups of the PLA2.
Mechanism of action
Notexin is generally lethal to rats if it enters the bloodstream. This lethal effect is the result of a presynaptic blockade of transmission across neuromuscular junctions of the breathing muscles, causing asphyxiation. It has also been shown to have myotoxic effects upon intravenous injection. Myotoxic effects generally entail muscle necrosis.
It was proposed (Dixon et al., 1996) that this myotoxicity of notexin is the result of notexin binding to the sarcolemma, causing hypercontraction and thereby muscle necrosis as a result of the membrane between places of hypercontraction rupturing. The presynaptic activity is, however, much more potent, at least in mice.
Notexin causes an indirect reduction or complete end of the release of acetylcholine in the affected nerve terminals. This acetylcholine normally causes an action potential and thereby muscle contraction. It was found that this reduction of acetylcholine release was caused by an impaired recycling of synaptic vesicles as a reduction in the content of synaptic vesicles and abnormally large vesicles were observed in the affected tissues. This was followed by shrinking of the nerve terminals and the amount of vesicles in these terminals decreasing.
The exact way of interaction with the cell is unknown, but it is suggested that notexin, like other PLA2s, interacts with high-affinity specific protein receptors or low-affinity lipid domains of muscle cells and motor neurons. Interaction of notexin with the plasma membrane results in the hydrolysis of the phospholipids in the cell membrane. A study showed that without the PLA2 activity, notexin also has membrane damaging effects, suggesting that notexin has multiple mechanisms to damage the cell membrane.
Cell membranes become permeable for ions and cause an influx of Ca2+ from the extracellular medium. In muscle cells the influx of Ca2+ causes hypercontraction of myofilaments, which can cause mechanical damage to the plasma membrane. The mitochondria will take up Ca2+, eventually leading to a reduced mitochondrial functionality. The high Ca2+ concentration in the cytosol activates Ca2+-dependent proteinases, calpains, and the endogenous Ca2+-dependent phospholipase A2. The calpains degrade the cytoskeletal components of the cell and Ca2+-dependent phospholipase A2 hydrolyses the cell membrane, which leads to further cell degradation and bigger influx of Ca2+. At a certain point the damage is irreversible and necrosis of the cell occurs.
In neurons the influx of calcium causes the release of the ready-to-release synaptic vesicles and the reserve pool of synaptic vesicles. Research showed that neurons, after treatment with notexin, had strongly reduced numbers of synaptic vesicles. These results seem to indicate that notexin inhibits the endocytosis of new synaptic vesicles, besides the exocytosis as a result of the Ca2+ influx. Like in muscle cells, the Ca2+ influx in neurons also leads to reduced mitochondrial functionality and the activation of calpains and endogenous Ca2+-dependent PLA2s. This leads to the same structural damage as in muscle cells.
Experimental research has also shown that notexin has nephrotoxic effects in mice. A study showed dose-dependent renal tubular and glomerular damage within 24 hours.
Immune response
Not much is known about the metabolism of notexin. However, studies have shown that the toxin can be made ineffective by specific antibodies. In one study, mice became resistant to notexin, similar isoforms of the toxin, and other venoms from the same origin. This was done by exposing the mice to non-detoxified notexin. It was found that the C-terminal part of the notexin peptide chain is the binding site for these antibodies, identifying it as an antigenic domain. Another study showed that at least some of the antibodies able to block the effects of notexin do so by cross-neutralization. Certain antibodies have also been shown to have different affinities for different notexin isoforms. These isoforms occur in snakes from different geographical locations; notexin antibodies therefore do not necessarily bind all notexin isoforms with the same affinity.
Symptoms
Notexin causes pain at the site of the bite followed by excessive salivation, weakness, drowsiness, difficulty breathing, decreased blood pressure and paralysis of lips, larynx, tongue and facial muscles. Possibly, blurring of vision, ptosis, headaches and convulsions may also occur.
Toxicity
There has been no research on the effects of notexin in humans; however, it is known that injection of the toxin is followed by muscle damage and myoglobinuria. Data show that the tiger snake is one of the major causes of snakebite in Australia and the second most common cause of death from snakebite.
Several results on the toxic effects of notexin in rodents have been reported. Injection of 1 to 2 μg of pure toxin into a rat soleus muscle destroys all of the muscle fibers. In mice the LD50 is 0.214 mg/kg when applied subcutaneously and 0.04 mg/kg when applied intravenously.
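To illustrate what these per-kilogram values mean in absolute terms (a back-of-the-envelope sketch; the 20 g mouse body mass is an assumption, not a figure from the article):

```python
def ld50_dose_ug(ld50_mg_per_kg: float, body_mass_kg: float) -> float:
    """Absolute LD50 dose in micrograms for a given body mass."""
    return ld50_mg_per_kg * body_mass_kg * 1000  # mg -> ug

mouse_mass_kg = 0.020  # assumed typical laboratory mouse mass
for route, ld50 in [("subcutaneous", 0.214), ("intravenous", 0.04)]:
    print(f"{route}: {ld50_dose_ug(ld50, mouse_mass_kg):.2f} ug")
# subcutaneous: 4.28 ug, intravenous: 0.80 ug
```

So well under 5 μg of pure toxin would suffice to kill half of a group of 20 g mice, consistent with notexin's high potency.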
There has also been research into the functional and morphological properties of regrowing mouse extensor digitorum longus (EDL) muscles after a notexin injection. Three days after injection there was complete fiber breakdown and loss of functional capacity. After ten days the muscles were made up entirely of regrowing fibers.
Treatment
There is no notexin antivenom available on the market. There are two general tiger snake antivenoms available that potentially could work. There are no studies found on the effectiveness of the tiger snake antivenoms on notexin.
References
Neurotoxins
Snake toxins
Acetylcholine release inhibitors | Notexin | [
"Chemistry"
] | 1,839 | [
"Neurochemistry",
"Neurotoxins"
] |