| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
37,740,436 | https://en.wikipedia.org/wiki/Sonali%20Mukherjee | Sonali Mukherjee is a woman from Dhanbad, India, whose face was permanently disfigured by an acid attack in 2003 when she was 18. Her family has spent all their savings on her treatment.
Early life
Mukherjee was born in Dhanbad. She was a National Cadet Corps cadet, which she had to quit after her attack.
Incident
In 2003, about one and a half months before the incident, three alleged assailants, Tapas Mitra and his two friends Sanjay Paswan and Bhrahmadev Hajra, told her that she was a ghamandi (arrogant) person and that they would teach her a lesson. Her father later complained to the families of the three men. On 22 April, while she was asleep on the roof of her house, she was attacked with acid and left with a burnt face and other severe injuries. Her sister was also injured in the incident.
Aftermath
The perpetrators were sentenced to nine years in jail, but were granted bail when they appealed to the High Court. Mukherjee's family approached the courts and various other authorities for justice, including the Chief Minister of Jharkhand and multiple MPs, but she received "aashwasan [assurances] ... nothing else".
Chandidas Mukherjee, Sonali's father, later stated in an interview: "We appealed in the high court... Nothing happened. They were sent to jail, but were released soon after. Now, they are busy enjoying their lives. The law against acid attackers needs to be made tougher. Otherwise, we will have many more Sonalis".
In February 2014, the State Government of Jharkhand appointed Sonali Mukherjee as Grade III clerk in the welfare department of the Bokaro deputy commissioner's office.
Appearance in Kaun Banega Crorepati
Mukherjee drew global attention when she appealed for euthanasia. Her wish to meet Amitabh Bachchan on the sets of Kaun Banega Crorepati, season 6 was granted in 2012. Accompanied by Lara Dutta in the game, the pair won a cash prize.
References
External links
Sonali Mukherjee's Petition for Justice
Violence against women in India
Women's rights in Asia
Acid attack victims
Chemical weapons attacks
People from Dhanbad
Living people
Year of birth missing (living people) | Sonali Mukherjee | Chemistry | 466 |
862,494 | https://en.wikipedia.org/wiki/Gamma%20camera | A gamma camera (γ-camera), also called a scintillation camera or Anger camera, is a device used to image gamma radiation emitting radioisotopes, a technique known as scintigraphy. The applications of scintigraphy include early drug development and nuclear medical imaging to view and analyse images of the human body or the distribution of medically injected, inhaled, or ingested radionuclides emitting gamma rays.
Imaging techniques
Scintigraphy ("scint") is the use of gamma cameras to capture emitted radiation from internal radioisotopes to create two-dimensional images.
SPECT (single photon emission computed tomography) imaging, as used in nuclear cardiac stress testing, is performed using gamma cameras. Usually one, two or three detectors, or heads, are slowly rotated around the patient.
Construction
A gamma camera consists of one or more flat crystal planes (or detectors) optically coupled to an array of photomultiplier tubes in an assembly known as a "head", mounted on a gantry. The gantry is connected to a computer system that both controls the operation of the camera and acquires and stores images. The construction of a gamma camera is sometimes known as a compartmental radiation construction.
The system accumulates events, or counts, of gamma photons that are absorbed by the crystal in the camera. Usually a large flat crystal of sodium iodide with thallium doping NaI(Tl) in a light-sealed housing is used. The highly efficient capture method of this combination for detecting gamma rays was discovered in 1944 by Sir Samuel Curran whilst he was working on the Manhattan Project at the University of California at Berkeley. Nobel prize-winning physicist Robert Hofstadter also worked on the technique in 1948.
The crystal scintillates in response to incident gamma radiation. When a gamma photon leaves the patient (who has been injected with a radioactive pharmaceutical), it knocks an electron loose from an iodine atom in the crystal, and a faint flash of light is produced when the dislocated electron again finds a minimal energy state. The initial phenomenon of the excited electron is similar to the photoelectric effect and (particularly with gamma rays) the Compton effect. After the flash of light is produced, it is detected. Photomultiplier tubes (PMTs) behind the crystal detect the fluorescent flashes (events) and a computer sums the counts. The computer reconstructs and displays a two dimensional image of the relative spatial count density on a monitor. This reconstructed image reflects the distribution and relative concentration of radioactive tracer elements present in the organs and tissues imaged.
Signal processing
Hal Anger developed the first gamma camera in 1957. His original design, frequently called the Anger camera, is still widely used today. The Anger camera uses sets of vacuum tube photomultipliers (PMT). Generally each tube has an exposed face of about in diameter and the tubes are arranged in hexagon configurations, behind the absorbing crystal. The electronic circuit connecting the photodetectors is wired so as to reflect the relative coincidence of light fluorescence as sensed by the members of the hexagon detector array. All the PMTs simultaneously detect the (presumed) same flash of light to varying degrees, depending on their position from the actual individual event. Thus the spatial location of each single flash of fluorescence is reflected as a pattern of voltages within the interconnecting circuit array.
The location of the interaction between the gamma ray and the crystal can be determined by processing the voltage signals from the photomultipliers; in simple terms, the location can be found by weighting the position of each photomultiplier tube by the strength of its signal, and then calculating a mean position from the weighted positions. The total sum of the voltages from each photomultiplier, measured by a pulse height analyzer is proportional to the energy of the gamma ray interaction, thus allowing discrimination between different isotopes or between scattered and direct photons.
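The weighted-mean position estimate described above can be illustrated numerically. The following is a minimal sketch, not the circuitry or software of any particular camera; the PMT coordinates and pulse heights are invented for the example:

```python
# Sketch of Anger-logic position estimation for a single scintillation event.
# Each PMT contributes its known (x, y) position weighted by its signal strength;
# the event position is the signal-weighted mean, and the summed signal is
# proportional to the deposited energy used for pulse-height discrimination.

pmt_positions = [(-5.0, 0.0), (0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (0.0, -5.0)]  # cm (hypothetical layout)
pmt_signals = [0.10, 0.55, 0.20, 0.08, 0.07]  # relative pulse heights for one event

total = sum(pmt_signals)  # proportional to the gamma-ray energy deposited
x = sum(px * s for (px, py), s in zip(pmt_positions, pmt_signals)) / total
y = sum(py * s for (px, py), s in zip(pmt_positions, pmt_signals)) / total

print(f"estimated event position: ({x:.2f} cm, {y:.2f} cm); summed signal: {total:.2f}")
```

In a real camera the same weighted sum is carried out in analog circuitry or digital signal processing for every event, and an energy window is applied to the summed signal before the event is added to the image.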
Spatial resolution
In order to obtain spatial information about the gamma-ray emissions from an imaging subject (e.g. a person's heart muscle cells which have absorbed an intravenous injected radioactive, usually thallium-201 or technetium-99m, medicinal imaging agent) a method of correlating the detected photons with their point of origin is required.
The conventional method is to place a collimator over the detection crystal/PMT array. The collimator consists of a thick sheet of lead with thousands of adjacent holes through it. There are three types of collimator: low-energy, medium-energy, and high-energy. Moving from low-energy to high-energy designs, the hole size, sheet thickness and septa between the holes all increase. For a fixed septal thickness, collimator resolution worsens as efficiency increases and as the distance of the source from the collimator increases. A pulse-height analyser applies an energy window, characterised by its full width at half maximum, to select which detected photons contribute to the final image.
The individual holes limit photons which can be detected by the crystal to a cone shape; the point of the cone is at the midline center of any given hole and extends from the collimator surface outward. However, the collimator is also one of the sources of blurring within the image; lead does not totally attenuate incident gamma photons, so there can be some crosstalk between holes.
Unlike a lens, as used in visible light cameras, the collimator attenuates most (>99%) of incident photons and thus greatly limits the sensitivity of the camera system. Large amounts of radiation must be present so as to provide enough exposure for the camera system to detect sufficient scintillation dots to form a picture.
Other methods of image localization (pinhole, rotating slat collimator with CZT) have been proposed and tested; however, none have entered widespread routine clinical use.
The best current camera system designs can differentiate two separate point sources of gamma photons separated by 6 to 12 mm, depending on the distance from the collimator, the type of collimator and the radionuclide. Spatial resolution decreases rapidly at increasing distances from the camera face. This limits the spatial accuracy of the computer image: it is a fuzzy image made up of many dots of detected but not precisely located scintillation. This is a major limitation for heart muscle imaging systems; the thickest normal heart muscle in the left ventricle is about 1.2 cm and most of the left ventricle muscle is about 0.8 cm, always moving and much of it beyond 5 cm from the collimator face. To help compensate, better imaging systems limit scintillation counting to a portion of the heart contraction cycle, a technique called gating, but this further limits system sensitivity.
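The rapid loss of resolution with distance follows from the collimator geometry. The sketch below uses the standard textbook approximation for the geometric resolution of a parallel-hole collimator; the hole dimensions are assumed, illustrative values, not figures taken from this article:

```python
# Textbook approximation of parallel-hole collimator geometric resolution:
# R ~ d * (L + z) / L, where d is the hole diameter, L the hole length and
# z the source-to-collimator distance (septal penetration is ignored).
def geometric_resolution_mm(hole_diameter_mm, hole_length_mm, source_distance_mm):
    return hole_diameter_mm * (hole_length_mm + source_distance_mm) / hole_length_mm

for z in (0, 50, 100, 150):  # source distance from the collimator face, in mm
    print(f"{z:3d} mm -> {geometric_resolution_mm(1.5, 25.0, z):4.1f} mm")
# 1.5 mm at the face, 7.5 mm at 100 mm: resolution worsens roughly linearly with
# distance, consistent with the blurring described for structures far from the face.
```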
See also
Nuclear medicine
Scintigraphy
References
Further reading
H. Anger. A new instrument for mapping gamma-ray emitters. Biology and Medicine Quarterly Report UCRL, 1957, 3653: 38. (University of California Radiation Laboratory, Berkeley)
External links
Nuclear medicine
Image sensors
Medical physics
American inventions
Gamma rays
Articles containing video clips | Gamma camera | Physics | 1,481 |
8,036,853 | https://en.wikipedia.org/wiki/Molecular%20cytogenetics | Molecular cytogenetics combines two disciplines, molecular biology and cytogenetics, and involves the analysis of chromosome structure to help distinguish normal and cancer-causing cells. Human cytogenetics began in 1956 when it was discovered that normal human cells contain 46 chromosomes. However, the first microscopic observations of chromosomes were reported by Arnold, Flemming, and Hansemann in the late 1800s. Their work was ignored for decades until the actual chromosome number in humans was discovered as 46. In 1879, Arnold examined sarcoma and carcinoma cells having very large nuclei. Today, the study of molecular cytogenetics can be useful in diagnosing and treating various malignancies such as hematological malignancies, brain tumors, and other precursors of cancer. The field is overall focused on studying the evolution of chromosomes, more specifically the number, structure, function, and origin of chromosome abnormalities. It includes a series of techniques referred to as fluorescence in situ hybridization, or FISH, in which DNA probes are labeled with different colored fluorescent tags to visualize one or more specific regions of the genome. Introduced in the 1980s, FISH uses probes with complementary base sequences to locate the presence or absence of the specific DNA regions. FISH can either be performed as a direct approach to metaphase chromosomes or interphase nuclei. Alternatively, an indirect approach can be taken in which the entire genome can be assessed for copy number changes using virtual karyotyping. Virtual karyotypes are generated from arrays made of thousands to millions of probes, and computational tools are used to recreate the genome in silico.
Common techniques
Fluorescence in situ hybridization (FISH)
Fluorescence in situ hybridization (FISH) maps out single-copy or repetitive DNA sequences through localization labeling of specific nucleic acids. The technique utilizes different DNA probes labeled with fluorescent tags that bind to one or more specific regions of the genome. It labels individual chromosomes at every stage of cell division to display structural and numerical abnormalities that may arise throughout the cycle. The probe can be locus-specific, centromeric, telomeric, or whole-chromosomal, and the technique is typically performed on interphase cells and paraffin-embedded tissues. Signals from the fluorescent tags can be seen with microscopy, and mutations can be detected by comparing these signals to those of healthy cells. For hybridization to occur, the DNA must first be denatured using heat or chemicals to break its hydrogen bonds; the fluorescent probes then form new hydrogen bonds with their complementary bases, and the hybridized regions can be detected through microscopy. FISH allows one to visualize different parts of the chromosome at different stages of the cell cycle. It can be performed directly on metaphase chromosomes or interphase nuclei. Alternatively, an indirect approach can be taken in which the entire genome is assessed for copy number changes using virtual karyotyping, in which virtual karyotypes are generated from microarrays made of thousands to millions of probes and computational tools are used to recreate the genome in silico.
Comparative genomic hybridization (CGH)
Comparative genomic hybridization (CGH), derived from FISH, is used to compare variations in copy number between a biological sample and a reference. CGH was originally developed to observe chromosomal aberrations in tumour cells. This method uses two genomes, a sample and a control, which are labeled fluorescently to distinguish them. In CGH, DNA is isolated from a tumour sample and biotin is attached. Another labelling protein, digoxigenin, is attached to the reference DNA sample. The labelled DNA samples are co-hybridized to probes during cell division, which is the most informative time for observing copy number variation. CGH creates a map that shows the relative abundance of DNA and chromosome number. By comparing the fluorescence in a sample to that in a reference, CGH can point to gains or losses of chromosomal regions. CGH differs from FISH because it does not require a specific target or previous knowledge of the genetic region being analyzed. CGH can also scan an entire genome relatively quickly for various chromosome imbalances, which is helpful in patients with underlying genetic issues and when an official diagnosis is not known. This often occurs with hematological cancers.
Array comparative genomic hybridization (aCGH)
Array comparative genomic hybridization (aCGH) allows CGH to be performed without cell culture and isolation. Instead, it is performed on glass slides containing small DNA fragments. Removing the cell culture and isolation step dramatically simplifies and expedites the process. Using similar principles to CGH, the sample DNA is isolated and fluorescently labelled, then co-hybridized to single-stranded probes to generate signals. Thousands of these signals can be detected at once, a process referred to as parallel screening. Fluorescence ratios between the sample and reference signals are measured, representing the average difference between the amount of each. This shows whether there is more or less sample DNA than expected relative to the reference.
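In practice the fluorescence ratio at each probe is usually summarized as a log2 ratio of sample to reference intensity: values near zero indicate balanced copy number, while sustained positive or negative values suggest gains or losses. The snippet below is a minimal illustration with invented intensities and an arbitrary threshold, not the analysis pipeline of any particular platform:

```python
import math

# Hypothetical background-corrected intensities for five probes along a chromosome arm.
sample_intensity    = [510.0, 495.0, 1020.0, 980.0, 250.0]
reference_intensity = [500.0, 505.0,  500.0, 490.0, 500.0]

for i, (s, r) in enumerate(zip(sample_intensity, reference_intensity)):
    log2_ratio = math.log2(s / r)
    if log2_ratio > 0.3:        # arbitrary illustrative threshold
        call = "possible gain"
    elif log2_ratio < -0.3:
        call = "possible loss"
    else:
        call = "balanced"
    print(f"probe {i}: log2 ratio {log2_ratio:+.2f} ({call})")
```

Real analyses normalize the ratios and segment them across many adjacent probes before calling copy-number changes.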
Applications
FISH allows the study of cytogenetics in pre- and postnatal samples and is also widely used in cytogenetic testing for cancer. While cytogenetics is the study of chromosomes and their structure, cytogenetic testing involves the analysis of cells in the blood, tissue, bone marrow, or fluid to identify changes in the chromosomes of an individual. This was often done through karyotyping, and is now done with FISH. This method is commonly used to detect chromosomal deletions or translocations often associated with cancer. FISH is also used for melanocytic lesions, helping to distinguish atypical melanocytic lesions from malignant melanoma.
Cancer cells often accumulate complex chromosomal structural changes such as loss, duplication, inversion or movement of a segment. When using FISH, any changes to a chromosome will be made visible through discrepancies between fluorescent-labelled cancer chromosomes and healthy chromosomes. The findings of these cytogenetic experiments can shed light on the genetic causes for the cancer and can locate potential therapeutic targets.
Molecular cytogenetics can also be used as a diagnostic tool for congenital syndromes in which the underlying genetic causes of the disease are unknown. Analysis of a patient's chromosome structure can reveal causative changes. New molecular biology methods developed in the past two decades such as next generation sequencing and RNA-seq have largely replaced molecular cytogenetics in diagnostics, but recently the use of derivatives of FISH such as multicolour FISH and multicolour banding (mBAND) has been growing in medical applications.
Cancer projects
One current project involving molecular cytogenetics is the Cancer Genome Characterization Initiative (CGCI), which carries out genomic research on rare cancers. The CGCI is a group interested in describing the genetic abnormalities of some rare cancers by employing advanced sequencing of genomes, exomes, and transcriptomes, which may ultimately play a role in cancer pathogenesis. Currently, the CGCI has elucidated some previously undetermined genetic alterations in medulloblastoma and B-cell non-Hodgkin lymphoma. The next steps for the CGCI are to identify genomic alterations in HIV+ tumors and in Burkitt's lymphoma.
Some high-throughput sequencing techniques that are used by the CGCI include: whole genome sequencing, transcriptome sequencing, ChIP-sequencing, and the Illumina Infinium MethylationEPIC BeadChip.
References
External links
Cytogenetics Resources
Human Cytogenetics - Chromosomes and Karyotypes
Association for Genetic Technologists
Association of Clinical Cytogeneticists
Cytogenetics - Technologies, markets and companies
Genetics | Molecular cytogenetics | Biology | 1,679 |
71,272,853 | https://en.wikipedia.org/wiki/Kaniadakis%20logistic%20distribution | The Kaniadakis Logistic distribution (also known as the κ-Logistic distribution) is a generalized version of the Logistic distribution associated with the Kaniadakis statistics. It is one example of a Kaniadakis distribution. The κ-Logistic probability distribution describes the population kinetics behavior of bosonic () or fermionic () character.
Definitions
Probability density function
The Kaniadakis κ-Logistic distribution is a four-parameter family of continuous statistical distributions, which is part of a class of statistical distributions emerging from the Kaniadakis κ-statistics. This distribution has the following probability density function:
valid for , where is the entropic index associated with the Kaniadakis entropy, is the rate parameter, , and is the shape parameter.
The Logistic distribution is recovered in the classical limit κ → 0.
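The expressions above (not reproduced here) are built from the Kaniadakis κ-deformed exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0. The short numerical check below sketches only this underlying deformed function and its classical limit, not the full four-parameter density:

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis κ-deformed exponential; tends to exp(x) as κ → 0."""
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

x = 1.5
for kappa in (0.5, 0.1, 0.01, 0.001):
    print(f"κ = {kappa:5.3f}: exp_κ(x) = {kappa_exp(x, kappa):.4f}  (exp(x) = {math.exp(x):.4f})")
# The κ-exponential approaches exp(1.5) = 4.4817 as κ shrinks, illustrating the
# classical limit κ → 0 in which the κ-Logistic reduces to the ordinary Logistic distribution.
```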
Cumulative distribution function
The cumulative distribution function of κ-Logistic is given by
valid for . The cumulative Logistic distribution is recovered in the classical limit .
Survival and hazard functions
The survival distribution function of κ-Logistic distribution is given by
valid for . The survival Logistic distribution is recovered in the classical limit .
The hazard function associated with the κ-Logistic distribution is obtained by the solution of the following evolution equation:with , where is the hazard function:
The cumulative Kaniadakis κ-Logistic distribution is related to the hazard function by the following expression:
where is the cumulative hazard function. The cumulative hazard function of the Logistic distribution is recovered in the classical limit .
Related distributions
The survival function of the κ-Logistic distribution represents the κ-deformation of the Fermi-Dirac function, and becomes a Fermi-Dirac distribution in the classical limit .
The κ-Logistic distribution is a generalization of the κ-Weibull distribution when .
A κ-Logistic distribution corresponds to a Half-Logistic distribution when , and .
The ordinary Logistic distribution is a particular case of a κ-Logistic distribution, when .
Applications
The κ-Logistic distribution has been applied in several areas, such as:
In quantum statistics, the survival function of the κ-Logistic distribution represents the most general expression of the Fermi-Dirac function, reducing to the Fermi-Dirac distribution in the limit .
See also
Giorgio Kaniadakis
Kaniadakis statistics
Kaniadakis distribution
Kaniadakis κ-Exponential distribution
Kaniadakis κ-Gaussian distribution
Kaniadakis κ-Gamma distribution
Kaniadakis κ-Weibull distribution
Kaniadakis κ-Erlang distribution
References
External links
Kaniadakis Statistics on arXiv.org
Probability distributions
Mathematical and quantitative methods (economics) | Kaniadakis logistic distribution | Mathematics | 547 |
4,661,664 | https://en.wikipedia.org/wiki/Radiation%20intelligence | Unintentional radiation intelligence, or RINT, is military intelligence gathered and produced from unintentional radiation, such as that induced by electrical wiring, usually of computers, data connections and electricity networks.
See also
TEMPEST
References
Military intelligence
Radiation
Measurement and signature intelligence | Radiation intelligence | Physics,Chemistry | 53 |
9,202,889 | https://en.wikipedia.org/wiki/THC-O-phosphate | THC-O-phosphate is a water-soluble organophosphate ester derivative of tetrahydrocannabinol (THC), which functions as a metabolic prodrug for THC itself. It was invented in 1978 in an attempt to get around the poor water solubility of THC and make it easier to inject for the purposes of animal research into its pharmacology and mechanism of action. The main disadvantage of THC phosphate ester is the slow rate of hydrolysis of the ester link, resulting in delayed onset of action and lower potency than the parent drug. Pharmacologically, it is comparable to the action of psilocybin as a metabolic prodrug for psilocin.
THC phosphate ester is made by reacting THC with phosphoryl chloride using pyridine as a solvent, followed by quenching with water to produce the phosphate ester. In the original research the less active but more stable isomer Δ8-THC was used, but the same reaction scheme could be used to make the phosphate ester of the more active isomer Δ9-THC.
See also
THC-O-acetate
THC hemisuccinate
THC morpholinylbutyrate
References
Benzochromenes
Cannabinoids
Phosphate esters
Prodrugs | THC-O-phosphate | Chemistry | 284 |
5,883,751 | https://en.wikipedia.org/wiki/Robi%20%28company%29 | Robi Axiata PLC. (d/b/a Robi) is the second largest mobile network operator in Bangladesh. Axiata of Malaysia holds a major controlling stake of 61.82% in the company, while Bharti Airtel of India holds 28.18% and investors in DSE and CSE hold 10%. Robi first commenced operation in 1997 as Telekom Malaysia International (Bangladesh) with the brand name 'AKTEL'. In 2010, the company was re-branded to 'Robi' and changed its name to Robi Axiata Limited. In line with government rules, the name was changed to Robi Axiata PLC in 2024, as Robi is a public limited company listed on the stock market. Robi Axiata has spectrum on the GSM 900, 1800 and 2100 MHz bands. On 16 November 2016, Airtel Bangladesh was merged into Robi as a product brand of Robi, with Robi Axiata PLC being the licensee of the Airtel brand in Bangladesh only. Having completed the merger, Robi emerged as the second largest mobile phone operator in Bangladesh.
History
Robi Axiata PLC started as a joint venture company between Telekom Malaysia and AK Khan and Company. It was formerly known as Telekom Malaysia International Bangladesh Limited which commenced operations in Bangladesh in 1997 with the brand name 'AKTEL'. In 2007, AK Khan and Company exited the business by selling its 30% stake to Japan's NTT Docomo for US$350 million.
In 2009, then AKTEL, now Robi Axiata, was the first operator to introduce GPRS and 3.5G services in the country.
On 28 March 2010, 'AKTEL' was rebranded as 'Robi' which means "sun" in Bengali. It also took the logo of parent company Axiata Group which itself also went through a major rebranding in 2009. In 2013, after five years of presence, Docomo reduced its ownership to 8% for Axiata to take 92%.
On 28 January 2016, it was announced that Robi Axiata and Airtel Bangladesh would merge in Q1 2016. The combined entity would be called Robi and would serve about 40 million subscribers across both networks. Axiata Group would own 68.3% of the shares, Bharti Group 25%, and NTT Docomo 6.31%. Robi and Airtel were finally merged on 16 November 2016, and Robi continued as the merged company. Later, in 2020, after a decade with Robi, NTT Docomo decided to leave Bangladesh by selling its remaining stake in Robi Axiata PLC to Bharti International.
In August 2021, CEO Mahtab Uddin Ahmed stepped down as Robi CEO. Company CFO M Riyaaz Rasheed stepped in as acting CEO in addition to his current duties.
In October 2022, Robi Axiata PLC appointed Rajeev Sethi as the company's chief executive officer, replacing M Riyaaz Rasheed, who had been serving as acting chief executive officer since August 2021.
Robi won the GSMA Glomo award for Best Mobile Innovation for Education and Learning in the "Connected Life Awards" category at the Mobile World Congress (MWC) 2017.
Numbering scheme
Robi uses the following numbering scheme for its subscribers:
+880 18 NNNNNNNN (ROBI brand)
+880 16 NNNNNNNN (AIRTEL brand)
Where, +880 is the International subscriber dialing code for Bangladesh.
18 and 16 are the access codes assigned to Robi by the Government of Bangladesh. When +88 is omitted, a 0 is used in its place to indicate a local call, so 018 and 016 are the general access codes.
NNNNNNNN is the eight-digit subscriber number.
After merger with Airtel, besides '018', Robi owns '016' number series too.
In 2018, mobile number portability was introduced, allowing users to port to any operator without changing their number.
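The prefix rules above can be expressed as a simple pattern check. The helper below is written purely for illustration (the regular expression and function are not taken from any operator documentation), and with number portability a prefix no longer guarantees which operator currently serves the number:

```python
import re

# Accepts the international form +880 18/16 XXXXXXXX or the local form 018/016 XXXXXXXX.
ROBI_AIRTEL_PATTERN = re.compile(r"^(?:\+880|0)(18|16)\d{8}$")

def robi_brand(msisdn):
    """Return the brand historically associated with the prefix, or None for other prefixes."""
    match = ROBI_AIRTEL_PATTERN.match(msisdn.replace(" ", "").replace("-", ""))
    if not match:
        return None
    return "Robi" if match.group(1) == "18" else "Airtel"

print(robi_brand("+8801812345678"))  # Robi
print(robi_brand("016-1234-5678"))   # Airtel (local form)
print(robi_brand("+8801712345678"))  # None: 017 is assigned to a different operator
```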
Network
Robi is currently the second largest mobile operator in Bangladesh in terms of the total number of mobile towers, or BTS. As of August 2024, Robi has a total of 18,473 mobile network towers (BTS) across the country.
Spectrums
The Robi/Airtel network is also GPRS/EDGE/3G-enabled, with a growing 4G network, allowing internet access within its coverage area. Its total spectrum holding is 124.00 MHz, of which 104.00 MHz is currently being used by Robi/Airtel; the remaining 20 MHz will be added from June 2025.
Frequencies used on the Robi/Airtel network in Bangladesh:
Band 8 (900 MHz): 9 MHz of spectrum; protocols: EDGE/LTE/LTE-A; carrier aggregation: 2CC, 3CC, 4CC, 5CC, 6CC
Band 3 (1800 MHz): 20 MHz of spectrum; protocols: EDGE/LTE/LTE-A; carrier aggregation: 2CC, 3CC, 4CC, 5CC, 6CC
Band 1 (2100 MHz): 15 MHz of spectrum; protocols: LTE/LTE-A; carrier aggregation: 2CC, 3CC, 4CC, 5CC, 6CC
Band 41 (2600 MHz): 60+20 MHz of spectrum; protocols: LTE/LTE-A; carrier aggregation: 2CC, 3CC, 4CC, 5CC, 6CC
Services
Binge
Binge is a Bangladeshi subscription video on-demand over-the-top streaming platform and original programming production company. It is owned by RedDot Digital, a subsidiary of Robi Axiata. It was launched on 21 May 2020 through a Facebook event. The platform primarily offers and distributes live television channels, series and films licensed to Binge, as well as Binge Originals. The service also hosts content from other providers, content add-ons and live sporting events.
Binge entered the international market on 25 June 2021 by launching its service in Malaysia in cooperation with Celcom. In March 2022, Binge expanded its availability to more than 120 countries. Currently operating worldwide, Binge offers both free and premium subscription options.
At the very beginning, Robi Axiata had decided to use Wowza's Streaming Engine, but later designed its own streaming platform that works with the Streaming Engine and can be configured for best performance. On 21 May 2020, the streaming platform was launched under the name "Binge".
Binge combines IPTV and digital entertainment services into a single online streaming site. It is the first Bangladeshi Google-certified online video streaming service that can be used on any Android device, including phones and televisions.
Content
Binge mainly focuses on Bangla-language content. It has two categories of content: premium and free. A subscription is required to watch premium content.
Binge is now available in more than 120 countries around the world. It offers two subscription plans for its global audience; users can avail of the service on either a monthly or a yearly subscription.
It subsequently logged one lakh (100,000) subscriptions in just over three months.
Binge is currently maintained by RedDot Digital in partnership with Genex Infosys; both the Binge Android smart device and the streaming platform were developed by Genex Infosys.
See also
Telecommunications in Bangladesh
List of companies of Bangladesh
Grameenphone
Banglalink
Airtel Bangladesh
Internet in Bangladesh
List of telecommunications companies of Bangladesh
List of media companies of Bangladesh
Mass media in Bangladesh
Ministry of Information and Broadcasting (Bangladesh)
References
External links
official website
Binge (Streaming Service)
Axiata
Mobile phone companies of Bangladesh
NTT Docomo
Companies established in 1997
Telecommunications companies of Bangladesh
Bharti Airtel | Robi (company) | Technology | 1,649 |
23,905,432 | https://en.wikipedia.org/wiki/C8H10O3 |
The molecular formula C8H10O3 (molar mass: 154.16 g/mol, exact mass: 154.0629938 u) may refer to:
Hydroxytyrosol
Methacrylic anhydride
Syringol
Vanillyl alcohol
Terrein
Molecular formulas | C8H10O3 | Physics,Chemistry | 78 |
20,994,326 | https://en.wikipedia.org/wiki/Peripheral%20DMA%20controller | A peripheral DMA controller (PDC) is a feature found in modern microcontrollers. It is typically a FIFO with automated control features for servicing on-chip peripheral modules of the microcontroller, such as UARTs.
This takes a large burden from the operating system and reduces the number of interrupts required to service and control these type of functions.
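A rough model of the saving: with a plain UART the CPU is interrupted for every byte, whereas a PDC is programmed with a buffer address and a byte count and interrupts only when the whole transfer completes. The figures and buffer size below are purely illustrative assumptions, not the behaviour of any specific vendor's controller:

```python
# Illustrative comparison of interrupt load with and without a peripheral DMA controller.
# Assumption: without a PDC the UART raises one interrupt per byte; with a PDC the
# controller moves each byte itself and raises one interrupt per completed buffer.

bytes_received = 4096
pdc_buffer_size = 256  # bytes per programmed transfer (hypothetical value)

interrupts_without_pdc = bytes_received                      # one interrupt per byte
interrupts_with_pdc = -(-bytes_received // pdc_buffer_size)  # one per full buffer (ceiling division)

print(interrupts_without_pdc, interrupts_with_pdc)  # 4096 vs 16
```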
See also
Direct memory access (DMA)
Autonomous peripheral operation
References
Integrated circuits | Peripheral DMA controller | Technology,Engineering | 91 |
38,103,199 | https://en.wikipedia.org/wiki/Gabriel%20Wagner | Gabriel Wagner (c. 1660 – c. 1717) was a radical German philosopher and materialist who wrote under the nom-de-plume Realis de Vienna. A follower of Spinoza and acquaintance of Leibniz, Wagner did not believe that the universe or bible were divine creations, and sought to extricate philosophy and science from the influence of theology. Wagner also held radical political views critical of the nobility and monarchy. After failing to establish lasting careers in cities throughout German-speaking Europe, Wagner died in or shortly after 1717.
Life
Wagner studied under scholar Christian Thomasius in Leipzig, and in 1691 published a philosophical tract critical of Thomasius, "Discourse and Doubts on Christian Thomasius' Introduction to Courtly Philosophy". The tract satirically dubbed Thomasius the "German Socrates" and attracted attention within philosophical circles, including from Leibniz, who sought to contact Wagner. In the same year, after a dispute over rent, Wagner was expelled from university and imprisoned. Following his release, Wagner traveled in 1693 to Halle, where as a result of his increasingly libertine views he wholly broke with Thomasius, who by contrast was becoming more conservative. Moving to Berlin later in 1693 and then to Vienna, Wagner was in 1696 given a temporary position in Hamburg, which he lost due to his novel and sometimes polemical philosophical positions.
Receiving support from Leibniz, Wagner worked for a time at the Herzog August Library in Wolfenbüttel; and maintained his contact with Leibniz. Leibniz wrote to Wagner in 1696, describing his admiration for Aristotle and opposing contemporary attacks on him, despite his view that Aristotle had discovered only a small portion of the discipline.
Opposing his former mentor Thomasius' belief in the soul, Wagner published another text in 1707, "Critique of Thomasian views on the nature of the soul." Theologian Johann Joachim Lange accused Wagner of Spinozist sympathies in 1710, and Wagner replied to these criticisms in the same year.
The last record of Wagner is found in Göttingen in 1717, where he came into conflict with historian of philosophy Christoph August Heumann. Wagner presumably died shortly thereafter.
Philosophy
Wagner believed that both education and philosophy should be modernized and focus on mathematics, physics and medicine, but not theology. In this regard he held that Germany had made more progress, while French, Italian and Spanish thinkers were overly influenced by followers of Aristotle, Galen and Ptolemy. Believing in intellectual freedom, Wagner was an admirer of German philosopher and professor Nicolaus Hieronymus Gundling, who favored "atheistic" classical Greek philosophy.
As articulated in his 1707 critique of Thomasius, Wagner did not believe in a soul, in divine providence, in the divinity of the bible, or in divine creation. He instead advocated reason, the most "godly" aspect of humankind, as a means of eradicating superstition. Wagner therefore celebrated advances in science facilitated by Descartes and even considered himself a Cartesian, though he disagreed with the latter's Christian metaphysical beliefs and even sought to undermine them. Deeply influenced by Spinoza, Wagner placed even greater emphasis on the importance of experimentation and empiricism in developing knowledge.
Wagner held radical political beliefs, advocating a restructuring of society according to more egalitarian principles and calling for greater emphasis on administration, education and culture. Reform of educational institutions was a particular concern of his writing. Wagner contended that aristocracy by birth was inferior to intellectual achievement. He also believed that Germany's fragmentary political system resulted in a weak and mismanaged government. In these beliefs Wagner was influenced by, but disagreed with, political thinkers such as Hugo Grotius, Thomas Hobbes and Niccolò Machiavelli. Much of Wagner's political and philosophical system was oriented, ultimately, towards securing religious, intellectual, and personal freedom, a project of the Enlightenment as a whole.
Legacy
Wagner is known for his longstanding correspondence with Leibniz, and his erudition and innovative understanding of philosophy and natural sciences during his time, according to historian Cornelio Fabro.
Historian Jonathan Israel writes that Wagner is an important materialist philosopher of the late 17th and early 18th centuries, and an example of both radical philosophy and atheism produced by the growing university system of the period. Historian Frederick Beiser writes that Wagner and his fellow materialists in Germany, though they were less numerous than those found in France and England, developed mechanistic explanations for human behavior and raised fears of spreading religious skepticism.
See also
Baruch Spinoza
Christian Thomasius
Matthias Knutzen
Nicolaus Hieronymus Gundling
René Descartes
Gottfried Wilhelm Leibniz
Jonathan Israel
Eclecticism
Notes
Sources
1660 births
1717 deaths
Enlightenment philosophers
People from the Electorate of Saxony
Leipzig University alumni
Materialists
Rationalists
Metaphysicians
17th-century German philosophers
18th-century German philosophers
German philosophers of religion
German male writers | Gabriel Wagner | Physics | 994 |
41,806,743 | https://en.wikipedia.org/wiki/23%20Cygni | 23 Cygni is a single, blue-white hued star in the northern constellation Cygnus. It is a faint star, visible to the naked eye, with an apparent visual magnitude of 5.14. The distance to this star, as estimated from its annual parallax shift of , is about 550 light years. It is moving closer to the Earth with a heliocentric radial velocity of −32 km/s, and is expected to come as near as in around 5.6 million years. At that distance, the current star would be of magnitude 2.24.
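The magnitude quoted for the closest approach follows from the inverse-square dimming of light, expressed through the distance modulus relation m2 - m1 = 5 log10(d2/d1). Treating the two magnitudes given above as inputs, the implied closest-approach distance can be backed out as a consistency check (the exact figure is not reproduced in this text):

```python
m_now, d_now_ly = 5.14, 550.0   # current apparent magnitude and distance, from the article
m_close = 2.24                  # predicted apparent magnitude at closest approach

# m_close - m_now = 5 * log10(d_close / d_now)  =>  d_close = d_now * 10 ** ((m_close - m_now) / 5)
d_close_ly = d_now_ly * 10 ** ((m_close - m_now) / 5.0)
print(f"implied closest-approach distance: about {d_close_ly:.0f} light-years")  # roughly 145 ly
```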
This is an ordinary B-type main-sequence star of spectral type B5V, a star that is generating energy through hydrogen fusion at its core. It is roughly 26 million years old with 4.7 times the mass of the Sun and 4.3 times the Sun's radius. The star has a high rate of spin, having a projected rotational velocity of 145 km/s. It is radiating 612 times the Sun's luminosity from its photosphere at an effective temperature of 14,893 K.
References
B-type main-sequence stars
Cygnus (constellation)
Durchmusterung objects
Cygni, 23
188665
097870
7608 | 23 Cygni | Astronomy | 260 |
3,766,560 | https://en.wikipedia.org/wiki/Entropy%20of%20activation | In chemical kinetics, the entropy of activation of a reaction is one of the two parameters (along with the enthalpy of activation) that are typically obtained from the temperature dependence of a reaction rate constant, when these data are analyzed using the Eyring equation of the transition state theory. The standard entropy of activation is symbolized ΔS‡ and equals the change in entropy when the reactants change from their initial state to the activated complex or transition state (Δ = change, S = entropy, ‡ = activation).
Importance
Entropy of activation determines the preexponential factor of the Arrhenius equation for temperature dependence of reaction rates. The relationship depends on the molecularity of the reaction:
for reactions in solution and unimolecular gas reactions:
A = (e k_B T / h) exp(ΔS‡ / R),
while for bimolecular gas reactions:
A = (e² k_B T / h) exp(ΔS‡ / R).
In these equations e is the base of natural logarithms, h is the Planck constant, k_B is the Boltzmann constant and T the absolute temperature. R is the ideal gas constant. The additional factor of e in the bimolecular case is needed because of the pressure dependence of the reaction rate.
The value of ΔS‡ provides clues about the molecularity of the rate determining step in a reaction, i.e. the number of molecules that enter this step. Positive values suggest that entropy increases upon achieving the transition state, which often indicates a dissociative mechanism in which the activated complex is loosely bound and about to dissociate. Negative values for ΔS‡ indicate that entropy decreases on forming the transition state, which often indicates an associative mechanism in which two reaction partners form a single activated complex.
Derivation
It is possible to obtain the entropy of activation using the Eyring equation. This equation is of the form
k = (κ k_B T / h) exp(ΔS‡ / R) exp(-ΔH‡ / (R T))
where:
k = reaction rate constant
T = absolute temperature
ΔH‡ = enthalpy of activation
R = gas constant
κ = transmission coefficient
k_B = Boltzmann constant = R/N_A, N_A = Avogadro constant
h = Planck constant
ΔS‡ = entropy of activation
This equation can be rearranged into the linear form
ln(k/T) = -(ΔH‡/R)(1/T) + ln(κ k_B/h) + ΔS‡/R
The plot of ln(k/T) versus 1/T gives a straight line with slope -ΔH‡/R, from which the enthalpy of activation can be derived, and with intercept ln(κ k_B/h) + ΔS‡/R, from which the entropy of activation is derived.
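A worked numerical example of the fit just described: synthetic rate constants are generated from assumed activation parameters (with κ taken as 1) and then recovered by a straight-line fit of ln(k/T) against 1/T. All numbers are invented for illustration:

```python
import math

R  = 8.314           # gas constant, J mol^-1 K^-1
kB = 1.380649e-23    # Boltzmann constant, J K^-1
h  = 6.62607015e-34  # Planck constant, J s

# Synthetic data from assumed activation parameters (illustrative values only, κ = 1).
dH_true = 85_000.0   # J/mol
dS_true = -40.0      # J/(mol K)
temps = [290.0, 300.0, 310.0, 320.0, 330.0]
ks = [(kB * T / h) * math.exp(dS_true / R) * math.exp(-dH_true / (R * T)) for T in temps]

# Linear least-squares fit of ln(k/T) versus 1/T.
xs = [1.0 / T for T in temps]
ys = [math.log(k / T) for k, T in zip(ks, temps)]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

dH_fit = -slope * R                          # slope = -ΔH‡/R
dS_fit = (intercept - math.log(kB / h)) * R  # intercept = ln(kB/h) + ΔS‡/R
print(f"ΔH‡ = {dH_fit / 1000:.1f} kJ/mol, ΔS‡ = {dS_fit:.1f} J/(mol K)")  # recovers 85.0 and -40.0
```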
References
Chemical kinetics | Entropy of activation | Chemistry | 421 |
11,392,845 | https://en.wikipedia.org/wiki/Wildlife%20of%20Egypt | The wildlife of Egypt is composed of the flora and fauna of this country in northeastern Africa and southwestern Asia, and is substantial and varied. Apart from the fertile Nile Valley, which bisects the country from south to north, the majority of Egypt's landscape is desert, with a few scattered oases. It has long coastlines on the Mediterranean Sea, the Gulf of Suez, the Gulf of Aqaba and the Red Sea. Each geographic region has a diversity of plants and animals each adapted to its own particular habitat.
Geography
Egypt is bordered by the Mediterranean Sea to the north, Libya to the west and Sudan to the south. To the east lies the Red Sea, and the Sinai Peninsula, the Asian part of the country, which is bordered by the Gaza Strip and Israel. Egypt is a transcontinental nation, providing a land bridge between Africa and Asia. This is traversed by the Suez Canal which connects the Mediterranean Sea with the Indian Ocean by way of the Red Sea. This results in the flora and fauna having influences from both Africa and Asia, and the marine life from both the Atlantic / Mediterranean Sea and the Red Sea / Indian Ocean.
The River Nile enters Egypt as it flows through Lake Nasser, formed by the building of the Aswan Dam. In its lower reaches, the river is about wide and the alluvial plain about wide. The annual flooding of the Nile no longer occurs and the fertility of the Nile Valley is now maintained by irrigation rather than the deposition of silt. Much of the Nile is bordered by flat land but in some places there are low cliffs. Where the river flows into the Mediterranean, there is an extensive fan-shaped delta area with channels, lakes and salt marshes.
To the west of the Nile lies the Western Desert, occupying about two-thirds of the area of the country. It consists largely of high stony and sandy plains with rocky plateaux in places. In the extreme southwest of the country on the border with Libya and Sudan, is Jebel Uweinat, a mountainous region and in the northwest lies the Qattara Depression, a large area of land some below sea level. Another depression, the Faiyum Oasis lies south west of Cairo and is connected to the Nile by a channel. To the east of the Nile lies the much smaller Eastern Desert, a high mountain ridge running parallel with the Red Sea, seamed with wadis on either flank. At the border with Sudan this rises to the rocky massif of Gebel Elba. The Sinai Peninsula is a mountainous area, deeply cleft by canyon-like wadis that flow towards the Gulf of Aqaba, the Gulf of Suez and the Mediterranean Sea.
In general, Egypt is a very dry country. The Western Desert receives only occasional rainfall, the winters being mild and the summers very hot. The Eastern Desert receives some precipitation in the south in the form of orographic rainfall from winds that have crossed the Red Sea; this may cause torrential flows in the wadis. The winters here are mild and the summers hot, and Gebel Elba is cooler and wetter than other parts. The northern areas of the country, particularly close to the coast, receive some precipitation from Mediterranean weather systems.
Flora
The Nile is the lifeline of Egypt, the land bordering the river being rendered fertile by the irrigation it receives. Crops grown in the Nile Valley include cotton, cereals, sugarcane, beans, oil seed crops and peanuts. Date palms grow here as well as sycamore, carob and Acacia. Fruit trees are planted here and eucalyptus has been introduced. The rich delta soil is used for the cultivation of grapes, vegetables and flowers. The papyrus reeds that used to line the river are now restricted to the far south of the country, as are the crocodiles and hippopotamuses that also used to be plentiful.
Large parts of the Western Desert are completely devoid of vegetation. The plants that do grow are adapted to the arid conditions and tend to be small and wiry, have small, leathery leaves, long shallow roots to exploit any available water, prickles or thorns to deter herbivores, and sometimes thick stems or leaves to store water. They include acacia trees, palms, succulents, spiny shrubs, and grasses. Some plants adopt an ephemeral life style, sprouting or springing into life when rain falls, rapidly reaching the flowering stage and producing long-lived durable seed. In depressions in the Western Desert, some plant communities are dominated by Zygophyllum album, Nitraria retusa and Tamarix nilotica. In the Siwa Oasis there are small lakes, reedbeds dominated by Phragmites australis and Typha domingensis, and saltmarshes with Arthrocnemum macrostachyum, Juncus rigidus, Alhagi maurorum, Cladium mariscus and Cressa cretica.
In the mountains of the Eastern Desert grows the tree Balanites aegyptiaca, the open patchy woodland being remnants of forests that used to cover this region. In the Gulf of Suez coastal area the rainfall is supplemented by condensation from clouds. Water may ooze from cracks, flow down runnels and collect in potholes. Here mosses, ferns and various vascular plants grow, and Ficus pseudosycamorus and stunted date palms grow from cracks.
The flora of the Sinai Peninsula mountains is very varied and is largely of Irano-Turanian origin. Here soil and plant litter accumulates in crevices and depressions in the rock and provides anchorage for roots. The most common plant is Artemisia inculta, and rocky slopes support shrubs, semi-shrubs and trees.
Fauna
At one time Egypt had a cooler, wetter climate than it has today; ancient tomb paintings show giraffes, hippopotamuses, crocodiles and ostriches, and the petroglyphs at Silwa Bahari on the upper Nile, between Luxor and Aswan, show African bush elephants, white rhinoceroses, gerenuk and more ostriches, a fauna akin to that of present-day East Africa. Nor does the country have many endemic species, these being limited to the Egyptian weasel, pallid gerbil, Mackilligin's gerbil (this may possibly extend into the Sudan), Flower's shrew, Nile Delta toad, and two butterflies, the Sinai baton blue and Satyrium jebelia.
Mammals of the Western Desert have been depleted over the years and the addax and scimitar oryx are no longer found there, and the Atlas lion has probably gone as well. The remaining mammals include the rhim gazelle, dorcas gazelle, Barbary sheep, Rüppell's fox, lesser Egyptian jerboa and Giza gerbil. Notable birds from this desert include the spotted sandgrouse, greater hoopoe-lark and white-crowned wheatear.
The Eastern Desert has a quite different range of fauna and has much in common with the Sinai Peninsula, showing the importance of the broad Nile in separating the two desert regions. Here are found the striped hyena, Nubian ibex, bushy-tailed jird, golden spiny mouse, Blanford's fox and Rüppell's fox. The sand partridge, streaked scrub warbler, mourning wheatear and white-crowned wheatear are typical of this region. The high rocky mountains of Gebel Elba in the south have a distinctive range of animals including the aardwolf, striped polecat, and common genet, and there may still be African wild ass in this area.
Birds are abundant in Egypt, especially in the Nile Valley and the Delta region. Birds of prey include vultures, eagles, hawks, falcons and owls. Other large birds include storks, flamingoes, herons, egrets, pelicans, quail, sunbirds and golden orioles. About four hundred and eighty species of bird have been recorded, the globally endangered ones being the red-breasted goose, white-headed duck, Balearic shearwater, Egyptian vulture, Rüppell's vulture, sociable lapwing, slender-billed curlew, saker falcon and yellow-breasted bunting. Egypt is on a major bird migratory route between Eurasia and East Africa and around two hundred species of migrants pass through twice a year.
About thirty species of snake occur in Egypt, about half of them venomous. These include the Egyptian cobra, false smooth snake and horned viper. There are also numerous species of lizards. Above the Aswan Dam, the shores of Lake Nasser are largely barren, but the lake does support the last remaining Nile crocodiles and African softshell turtle in Egypt.
Over one hundred species of fish live in the Nile and the Delta region. Egypt also has a large aquaculture industry producing tilapia in semi-intensive pond systems.
References
Biota of Egypt
Egypt | Wildlife of Egypt | Biology | 1,850 |
4,057,311 | https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27informatique%20et%20de%20math%C3%A9matiques%20appliqu%C3%A9es%20de%20Grenoble | The École nationale supérieure d'informatique et de mathématiques appliquées, or Ensimag, is a prestigious French grande école located in Grenoble, France. Ensimag is part of the Institut polytechnique de Grenoble (Grenoble INP). The school specializes in computer science, applied mathematics and telecommunications.
Students are usually admitted to Ensimag competitively following two years of undergraduate studies in classes préparatoires aux grandes écoles. Studies at Ensimag are of three years' duration and lead to the French degree of "Diplôme National d'Ingénieur" (equivalent to a master's degree).
Ensimag was founded in 1960 by French mathematician Jean Kuntzmann. About 250 students graduate from Ensimag each year in its different degrees, and the school counts more than 5500 alumni worldwide.
Ensimag graduate specializations
Ensimag's curriculum offers a variety of compulsory and elective advanced courses, making up specific profiles.
Most of the common core courses are taught in the first year and the first semester of the second year, allowing students to acquire the basics in applied mathematics and informatics. Students then choose a graduate specialization.
International Master's programs
Master of Science in Informatics at Grenoble (MoSIG)
Since September 2008, an English-language joint degree program with the University of Grenoble provides a highly competitive, two-year graduate Master's degree program.
Master in Communication Systems Engineering
Offered jointly by Ensimag and Politecnico di Torino (Italy), this four-semester course aims to train engineers to specialize in the design and management of communication systems, ranging from simple point-to-point transmissions to diversified telecommunications networks.
Research at Ensimag
Ensimag students can perform research work as part of their curriculum in second year, as well as a second-year internship and their end of studies project in a research laboratory. 15% of Ensimag graduates choose to pursue a Ph.D.
Rankings
The school is one of the top French engineering institutions. In the field of computer science, Ensimag was ranked first in France by Codingame; its standing is also reflected in the position of its students in the national admission examinations, in the companies hiring its graduates, and in rankings by specialized media.
References
External links
(fr) The official Ensimag website
(en) The official Ensimag website
(en) Ensimag English-language Master's degree programs
Informatique et de mathématiques appliquées de Grenoble
Grenoble Tech Ensimag
Universities and colleges established in 1960
1960 establishments in France | École nationale supérieure d'informatique et de mathématiques appliquées de Grenoble | Technology | 605 |
9,668,061 | https://en.wikipedia.org/wiki/Legion%20%28taxonomy%29 | The legion, in biological classification, is a non-obligatory taxonomic rank within the Linnaean hierarchy sometimes used in zoology.
Taxonomic rank
In zoological taxonomy, the legion is:
subordinate to the class
superordinate to the cohort.
consists of a group of related orders
Legions may be grouped into superlegions or subdivided into sublegions, and these again into infralegions.
Use in zoology
Legions and their super/sub/infra groups have been employed in some classifications of birds and mammals. Full use is made of all of these (along with cohorts and supercohorts) in, for example, McKenna and Bell's classification of mammals.
See also
Linnaean taxonomy
Mammal classification
References
Biology terminology
Taxa by rank | Legion (taxonomy) | Biology | 156 |
65,627,586 | https://en.wikipedia.org/wiki/N%C3%BCzhet%20G%C3%B6kdo%C4%9Fan | Hatice Nüzhet Gökdoğan (; 14 August 1910 – 24 April 2003) was a Turkish astronomer, mathematician and academic. After studying mathematics and astronomy in France as a young adult, Gökdoğan joined the faculty of Istanbul University in 1934 and completed her PhD. She was elected Dean of the university's Faculty of Science in 1954, becoming the first Turkish woman to serve as a university dean, and she was later made Chair of the astronomy department, significantly expanding her department's capacity and working to improve national and international collaboration between astronomers.
Gökdoğan co-founded the Turkish Mathematical Society, the Turkish Astronomy Association and the Turkish University Women's Association. She was Turkey's first national representative at the International Astronomical Union (IAU), and has been credited as Turkey's first female astronomer.
Early life and education
Nüzhet Gökdoğan was born on 14 August 1910 in Istanbul (then Constantinople). Her mother was named Nebihe Hanım, while her father was Mehmet Zihni Toydemir, a major general.
In her late teens, Gökdoğan received a scholarship to study in France; she enrolled in the University of Lyon and in 1932 she completed her undergraduate degree in mathematics. She had a strong interest in astronomy and subsequently studied physics at the University of Paris, where she received a Diplome d'Etudes Superieures. She then completed an internship at the Paris Observatory.
Career
Returning to Turkey in 1934, Gökdoğan applied to work at the Kandilli Observatory, but was turned down because the director did not want a woman working there. She instead joined Istanbul University as a faculty member in the Astronomy Department. She was the first woman member of the school's faculty of science. She completed her PhD three years later, submitting a dissertation entitled Contribution aux recherches sur l'existence d'une matière obscure interstellaire homogène autour du soleil (Contribution to research on the existence of homogeneous interstellar dark matter around the sun). Gökdoğan's dissertation was recorded as the first doctoral thesis completed at Istanbul University's faculty of science.
In 1948, Gökdoğan was made full professor at the university, and also co-founded the Turkish Mathematical Society. She served as president of the Turkish Union of Soroptimists in the early 1950s. Upon being elected Dean of Istanbul University's science faculty in 1954, Gökdoğan became the first Turkish woman to serve as a university dean. She was a founding member of the Turkish Astronomy Association that same year, and she served as president of the association for the next two decades. In 1958, she was appointed Chair of the Astronomy Department at Istanbul University, and she held the role for the rest of her time as a faculty member. Gökdoğan worked hard to expand her department, gradually increasing the number of staff from 5 to 18, and she developed a number of new collaborative programs with observatories in France, Italy and Switzerland. She wrote introductory textbooks on astronomy and spectroscopy for students in Turkish high schools. She also co-founded the Turkish University Women's Association, and served as its president more than once.
Gökdoğan was a member of the International Astronomical Union (IAU), and in 1961 she became the first national representative of Turkey to the IAU. In August 1961, she represented Turkey as a delegate at an IAU conference in Berkeley, California. During her time as a member in the IAU, she participated in two of its commissions on "theory of stellar atmospheres" and "solar radiation and structure".
She organized a number of national and international astronomy symposiums in Turkey. One of these events in the late 1970s was credited with solidifying broader interest in building a new national observatory.
Gökdoğan retired from Istanbul University in 1980.
Personal life
Gökdoğan married Mukbil Gökdoğan, who was an architecture professor and former minister of public works. They had two children, both of whom grew up to become university professors. Mukbil died in 1992.
Gökdoğan died on 24 April 2003.
Legacy
A Google Doodle was published on 14 August 2023 celebrating the 113th birthday of Gökdoğan.
References
1910 births
2003 deaths
Turkish astronomers
Academic staff of Istanbul University
Istanbul University alumni
University of Lyon alumni
University of Paris alumni
20th-century Turkish mathematicians
20th-century Turkish women scientists
20th-century Turkish scientists
Scientists from Istanbul
Women astronomers
Women mathematicians
20th-century astronomers
Turkish expatriates in France | Nüzhet Gökdoğan | Astronomy | 932 |
63,539,530 | https://en.wikipedia.org/wiki/Distyly | Distyly is a breeding system in plants that is characterized by two separate flower morphs, where individual plants produce flowers that have either long styles and short stamens (L-morph flowers) or short styles and long stamens (S-morph flowers). However, distyly can refer to any plant that shows some degree of self-incompatibility and has two morphs if at least one of the following characteristics is true: there is a difference in style length, filament length, pollen size or shape, or the surface of the stigma. Specifically, these plants exhibit intra-morph self-incompatibility: flowers of the same style morph are incompatible. Distylous species that do not exhibit true self-incompatibility generally show a bias towards inter-morph crosses, meaning they exhibit higher success rates when reproducing with an individual of the opposite morph.
Distyly is a type of heterostyly in which a plant demonstrates reciprocal herkogamy.
Background
The first scientific account of distyly can be found in Stephan Bejthe's book Caroli Clusii Atrebatis Rariorum aliquot stirpium. Bejthe describes the two floral morphs of Primula veris. Charles Darwin popularized distyly with his account of it in his book The Different Forms of Flowers on Plants of the Same Species. Darwin's book represents the first account of intramorphic self-incompatibility in distylous plants and focuses on garden experiments in which he looks at seed set of different distylous Primula. Darwin names the two floral morphs S- and L-morph, moving away from the vernacular names Pin (for L-morph) and Thrum (for S-morph), which he states were initially assigned by florists.
Distylous species have been identified in 28 families of angiosperms, likely evolving independently in each family. This means the system has evolved at least 28 times, though it has been suggested the system has evolved multiple times within some families. Since distyly has evolved more than once, it is considered a case of convergent evolution.
Reciprocal herkogamy
Reciprocal herkogamy likely evolved to prevent the pollen of the same flower from landing on its own stigma. This in turn promotes outcrossing.
In a study of Primula veris, it was found that pin flowers exhibit higher rates of self-pollination and capture more pollen than the thrum morph. Different pollinators show varying levels of success when pollinating the different Primula morphs: the head or proboscis length of a pollinator is positively correlated with the uptake of pollen from long-styled flowers and negatively correlated with pollen uptake from short-styled flowers. The opposite is true for pollinators with smaller heads, such as bees, which take up more pollen from short-styled morphs than from long-styled ones. This differentiation in pollinators allows the plants to reduce levels of intra-morph pollination.
Models of evolution
There are two main hypothetical models for the order in which the traits of distyly evolved, the 'selfing avoidance model' and the 'pollen transfer model'.
The selfing avoidance model suggests self-incompatibility (SI) evolved first, followed by the morphological difference. It was suggested that the male component of SI would evolve first via a recessive mutation, followed by female characteristics via a dominant mutation, and finally male morphological differences would evolve via a third mutation.
The pollen transfer model argues that morphological differences evolved first, and if a species is facing inbreeding depression, it may evolve SI. This model can be used to explain the presence of reciprocal herkogamy in self-compatible species.
Genetic control of distyly
A supergene, called the self-incompatibility (or S-) locus, is responsible for the occurrence of distyly. The S-locus is composed of three tightly linked genes (S-genes) which segregate as a single unit.
Traditionally it was hypothesized that one S-gene controls all female aspects of distyly, one gene controls the male morphological aspects, and one gene determines the male mating type. While this hypothesis appears to be true in Turnera, it is not true in Primula or Linum. The S-morph is hemizygous for the S-locus, and the L-morph does not have an allelic counterpart. The hemizygous nature of the S-locus has been shown in Primula, Gelsemium, Linum, Fagopyrum, Turnera, Nymphoides, and Chrysojasminum.
The presence of the S-locus results in changes to gene expression between the two floral morphs, as has been demonstrated using transcriptomic analyses of Lithospermum multiflorum, Primula veris, Primula oreodoxa, Primula vulgaris, Turnera subulata, and Forsythia suspensa.
The S-locus of Chrysojasminum
In Chrysojasminum, the S-locus is composed of two S-genes, BZR1 and GA2ox. GA2ox is hypothesized to be involved in establishing self-incompatibility.
The S-locus of Fagopyrum
The S-morph of Fagopyrum contains a ~2.8 Mb hemizygous region which likely represents the S-locus, as it contains S-ELF4, which establishes female morphology and mating type.
The S-locus of Gelsemium
In Gelsemium, the S-locus is composed of four genes: GeCYP, GeFRS6, and GeGA3OX are hemizygous, while TAF2 appears to be allelic, with a truncated copy in the L-morph. GeCYP appears to share a last common ancestor with (i.e., be an ortholog of) the Primula S-gene CYPT. It is currently hypothesized that the four S-genes in Gelsemium were inherited as a group rather than separately. This is the only known case of the S-genes being inherited as a group rather than individually.
The S-locus of Linum
In Linum the S-locus is composed of nine genes: two are LtTSS1 and LtWDR-44, while the other seven are unnamed and of unknown function. LtTSS1 is hypothesized to regulate style length in the S-morph. Synonymous substitution analysis of three of the S-genes suggests the S-locus in Linum evolved in a step-by-step manner, though only three of the nine genes were analyzed.
The S-locus of Nymphoides
The S-locus of Nymphoides contains three genes: NinS1, NinKHZ2, and NinBAS1. NinBAS1 is only expressed in the style and is hypothesized to be involved in the regulation of brassinosteroids; NinS1 is only expressed in the stamen; NinKHZ2 is expressed in both stamen and style. Similar to other S-loci, the Nymphoides S-locus appears to have evolved via stepwise duplication events.
The S-locus of Primula
In Primula the S-locus is composed of five genes: CYPT (or CYP734A50), GLOT (or GLOBOSA2), KFBT, PUMT, and CCMT. The supergene evolved in a step-by-step manner, meaning each S-gene duplicated and moved to the pre-S-locus independently of the others. Synonymous substitution analysis of the S-genes suggests the oldest S-gene in Primula is likely KFBT, which likely duplicated about 104 million years ago, followed by CYPT (42.7 MYA), GLOT (37.4 MYA), and CCMT (10.3 MYA). It is unknown when PUMT evolved, as it does not have a paralog within the Primula genome.
Of the five S-genes, two have been characterized. CYPT, a cytochrome P450 family member, is the female morphology gene and the female self-incompatibility gene, meaning it promotes rejection of self pollen. CYPT likely produces these phenotypes via inactivation of brassinosteroids. Inactivation of brassinosteroids in the S-morph by CYPT represses cell elongation in the style by repressing expression of PIN5, ultimately producing the short pistil phenotype. GLOT, a MADS-box family member, is the male morphology gene, as it promotes corolla tube growth under the stamen. It is unknown how the other three S-genes contribute to distyly in Primula.
The S-locus of Turnera
In Turnera the S-locus is composed of three genes, BAHD, SPH1, and YUC6. BAHD is likely an acyltransferase involved in inactivation of brassinosteroids; it is both the female morphology and female self-incompatibility gene. YUC6 is likely involved in auxin biosynthesis based on homology; it is the male self-incompatibility gene and establishes pollen size dimorphisms. SPH1 is likely involved in filament elongation based on short filament mutant analysis.
List of families with distylous species
References
Plant reproduction
Plant morphology
Pollination
Genetics
Evolution | Distyly | Biology | 2,011 |
78,142,178 | https://en.wikipedia.org/wiki/Meluadrine | Meluadrine, also known as meluadrine tartrate (developmental code name HSR-81) in the case of the tartrate salt, is a sympathomimetic and β2-adrenergic receptor agonist which was studied as a tocolytic drug but was never marketed. It was first described in the literature by 1994. The drug is also known as (R)-4-hydroxytulobuterol and is an active metabolite of tulobuterol.
References
Abandoned drugs
Beta2-adrenergic agonists
Chloroarenes
Enantiopure drugs
Human drug metabolites
Phenylethanolamines
Sympathomimetics
Tert-butyl compounds | Meluadrine | Chemistry | 161 |
5,783,949 | https://en.wikipedia.org/wiki/Extremally%20disconnected%20space | In mathematics, an extremally disconnected space is a topological space in which the closure of every open set is open. (The term "extremally disconnected" is correct, even though the word "extremally" does not appear in most dictionaries, and is sometimes mistaken by spellcheckers for the homophone extremely disconnected.)
An extremally disconnected space that is also compact and Hausdorff is sometimes called a Stonean space. This is not the same as a Stone space, which is a totally disconnected compact Hausdorff space. Every Stonean space is a Stone space, but not vice versa. In the duality between Stone spaces and Boolean algebras, the Stonean spaces correspond to the complete Boolean algebras.
An extremally disconnected first-countable collectionwise Hausdorff space must be discrete. In particular, for metric spaces, the property of being extremally disconnected (the closure of every open set is open) is equivalent to the property of being discrete (every set is open).
Examples and non-examples
Every discrete space is extremally disconnected. Every indiscrete space is both extremally disconnected and connected.
The Stone–Čech compactification of a discrete space is extremally disconnected.
The spectrum of an abelian von Neumann algebra is extremally disconnected.
Any commutative AW*-algebra is isomorphic to C(X), for some space X which is extremally disconnected, compact and Hausdorff.
Any infinite space with the cofinite topology is both extremally disconnected and connected. More generally, every hyperconnected space is extremally disconnected.
The space on three points with base provides a finite example of a space that is both extremally disconnected and connected. Another example is given by the Sierpinski space, since it is finite, connected, and hyperconnected.
The following spaces are not extremally disconnected:
The Cantor set is not extremally disconnected. However, it is totally disconnected.
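The defining property can be checked directly on small finite examples such as those above. The following Python sketch (illustrative only, not part of the original article) verifies that the closure of every open set is open in the Sierpinski space:

from itertools import chain

X = frozenset({'a', 'b'})
opens = {frozenset(), frozenset({'a'}), X}   # the Sierpinski topology on {a, b}

def closure(A):
    # Smallest closed set containing A: the complement of the union of all open sets disjoint from A
    interior_of_complement = frozenset(chain.from_iterable(U for U in opens if U.isdisjoint(A)))
    return X - interior_of_complement

print(all(closure(U) in opens for U in opens))   # True: the space is extremally disconnected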
Equivalent characterizations
A theorem due to A. M. Gleason (1958) says that the projective objects of the category of compact Hausdorff spaces are exactly the extremally disconnected compact Hausdorff spaces. A simplified proof of this fact has also been given.
A compact Hausdorff space is extremally disconnected if and only if it is a retract of the Stone–Čech compactification of a discrete space.
Applications
One proof of the Riesz–Markov–Kakutani representation theorem proceeds by reducing it to the case of extremally disconnected spaces, in which case the representation theorem can be proved by elementary means.
See also
Totally disconnected space
References
Properties of topological spaces | Extremally disconnected space | Mathematics | 558 |
443,101 | https://en.wikipedia.org/wiki/Finite%20impulse%20response | In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N + 1 samples (from first nonzero element through last nonzero element) before it then settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
Definition
For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the N + 1 most recent input values:
y[n] = b_0·x[n] + b_1·x[n − 1] + ... + b_N·x[n − N],
where:
x[n] is the input signal,
y[n] is the output signal,
N is the filter order; an Nth-order filter has N + 1 terms on the right-hand side,
b_i is the value of the impulse response at the i-th instant, for 0 ≤ i ≤ N, of an Nth-order FIR filter. If the filter is a direct form FIR filter then b_i is also a coefficient of the filter.
This computation is also known as discrete convolution.
The x[n − i] terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th order/6-tap filter, for instance.
The impulse response of the filter as defined is nonzero over a finite duration. Including zeros, the impulse response is the infinite sequence h[n] = b_n for 0 ≤ n ≤ N, and h[n] = 0 otherwise. If an FIR filter is non-causal, the range of nonzero values in its impulse response can start before n = 0, with the defining formula appropriately generalized.
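As an illustration of the weighted-sum (direct-form) computation above, here is a minimal Python sketch; the function name and example values are illustrative only and not from any particular implementation:

def fir_filter(b, x):
    """Apply an FIR filter with coefficients b to the input sequence x (causal, zero initial state)."""
    N = len(b) - 1                      # filter order: N + 1 taps
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i in range(N + 1):
            if n - i >= 0:              # x[n - i] is zero before the input starts
                acc += b[i] * x[n - i]
        y.append(acc)
    return y

# Example: 3-tap moving average applied to a step input
print(fir_filter([1/3, 1/3, 1/3], [1, 1, 1, 1]))  # [0.333..., 0.666..., 1.0, 1.0]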
Properties
An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:
Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
Are inherently stable, since the output is a sum of a finite number of finite multiples of the input values, so can be no greater than the sum of the absolute values of the coefficients, Σ|b_i|, times the largest value appearing in the input.
Can easily be designed to be linear phase by making the coefficient sequence symmetric. This property is sometimes desired for phase-sensitive applications, for example data communications, seismology, crossover filters, and mastering.
The main disadvantage of FIR filters is that considerably more computation power in a general purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR for many applications.
Frequency response
The filter's effect on the sequence x[n] is described in the frequency domain by the convolution theorem: F{x ∗ h} = F{x}·F{h} and y = F⁻¹{F{x}·F{h}},
where operators F and F⁻¹ respectively denote the discrete-time Fourier transform (DTFT) and its inverse. Therefore, the complex-valued, multiplicative function F{h} is the filter's frequency response. It is defined by a Fourier series: H_2π(ω) = Σ_n h[n]·e^(−iωn), where the added subscript denotes 2π-periodicity. Here ω represents frequency in normalized units (radians per sample). The function H(f), with f = ω/2π in units of cycles per sample, has a periodicity of 1 and is favored by many filter design applications. The value f = 1/2, called the Nyquist frequency, corresponds to ω = π. When the x[n] sequence has a known sampling rate of f_s samples per second, ordinary frequency in hertz is related to normalized frequency by f·f_s cycles per second (Hz). Conversely, if one wants to design a filter for ordinary frequencies f_1, f_2, etc., using an application that expects cycles per sample, one would enter f_1/f_s, f_2/f_s, etc.
The frequency response can also be expressed in terms of the Z-transform of the filter impulse response, evaluated on the unit circle: H_2π(ω) = H_z(e^(iω)), where H_z(z) = Σ_n h[n]·z^(−n).
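A minimal Python sketch of evaluating this Fourier series numerically on a grid of normalized frequencies (illustrative only; assumes NumPy is available):

import numpy as np

def freq_response(h, num_points=512):
    w = np.linspace(0, np.pi, num_points)          # normalized frequency, radians per sample
    n = np.arange(len(h))
    H = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])
    return w, H

h = np.array([1/3, 1/3, 1/3])                      # 3-tap moving average
w, H = freq_response(h)
print(abs(H[0]))                                   # gain of about 1 at DC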
Filter design
FIR filters are designed by finding the coefficients and filter order that meet certain specifications, which can be in the time domain (e.g. a matched filter) or the frequency domain (most common). Matched filters perform a cross-correlation between the input signal and a known pulse shape. The FIR convolution is a cross-correlation between the input signal and a time-reversed copy of the impulse response. Therefore, the matched filter's impulse response is "designed" by sampling the known pulse-shape and using those samples in reverse order as the coefficients of the filter.
When a particular frequency response is desired, several different design methods are common:
Window design method
Frequency sampling method
Least MSE (mean square error) method
Parks–McClellan method (also known as the equiripple, optimal, or minimax method). The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only coefficients can be used. This method is particularly easy in practice since at least one text includes a program that takes the desired filter and N, and returns the optimum coefficients.
Equiripple FIR filters can be designed using the DFT algorithms as well. The algorithm is iterative in nature. The DFT of an initial filter design is computed using the FFT algorithm (if an initial estimate is not available, h[n]=delta[n] can be used). In the Fourier domain, or DFT domain, the frequency response is corrected according to the desired specs, and the inverse DFT is then computed. In the time-domain, only the first N coefficients are kept (the other coefficients are set to zero). The process is then repeated iteratively: the DFT is computed once again, correction applied in the frequency domain and so on.
Software packages such as MATLAB, GNU Octave, Scilab, and SciPy provide convenient ways to apply these different methods.
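As an example of such software support, the following Python sketch uses SciPy's remez function for an equiripple low-pass design. It assumes a recent SciPy version in which remez and freqz accept the keyword arguments shown; the tap count and band edges are illustrative values only.

from scipy import signal

numtaps = 51                      # filter length (order 50)
# Low-pass: passband 0-0.1, stopband 0.2-0.5 (frequencies in cycles/sample, fs = 1)
b = signal.remez(numtaps, [0.0, 0.1, 0.2, 0.5], desired=[1, 0], fs=1.0)

w, H = signal.freqz(b, worN=1024)  # inspect the resulting frequency response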
Window design method
In the window design method, one first designs an ideal IIR filter and then truncates the infinite impulse response by multiplying it with a finite length window function. The result is a finite impulse response filter whose frequency response is modified from that of the IIR filter. Multiplying the infinite impulse by the window function in the time domain results in the frequency response of the IIR being convolved with the Fourier transform (or DTFT) of the window function. If the window's main lobe is narrow, the composite frequency response remains close to that of the ideal IIR filter.
The ideal response is often rectangular, and the corresponding IIR is a sinc function. The result of the frequency domain convolution is that the edges of the rectangle are tapered, and ripples appear in the passband and stopband. Working backward, one can specify the slope (or width) of the tapered region (transition band) and the height of the ripples, and thereby derive the frequency-domain parameters of an appropriate window function. Continuing backward to an impulse response can be done by iterating a filter design program to find the minimum filter order. Another method is to restrict the solution set to the parametric family of Kaiser windows, which provides closed form relationships between the time-domain and frequency domain parameters. In general, that method will not achieve the minimum possible filter order, but it is particularly convenient for automated applications that require dynamic, on-the-fly, filter design.
The window design method is also advantageous for creating efficient half-band filters, because the corresponding sinc function is zero at every other sample point (except the center one). The product with the window function does not alter the zeros, so almost half of the coefficients of the final impulse response are zero. An appropriate implementation of the FIR calculations can exploit that property to double the filter's efficiency.
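A minimal Python sketch of the window design method described above (illustrative values; assumes NumPy): sample the ideal low-pass (sinc) impulse response, truncate it to the desired length, and taper it with a Hamming window.

import numpy as np

numtaps = 51                 # odd length gives a symmetric, linear-phase filter
fc = 0.1                     # cutoff in cycles/sample (0 < fc < 0.5)
n = np.arange(numtaps) - (numtaps - 1) / 2
h_ideal = 2 * fc * np.sinc(2 * fc * n)     # ideal low-pass impulse response
h = h_ideal * np.hamming(numtaps)          # taper with a window to control ripple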
Least mean square error (MSE) method
Goal: To design an FIR filter in the MSE sense, we minimize the mean square error between the filter we obtained and the desired filter.
MSE = (1/f_s) ∫ from −f_s/2 to f_s/2 of |H(F) − H_d(F)|² dF, where f_s is the sampling frequency, H(F) is the spectrum of the filter we obtained, and H_d(F) is the spectrum of the desired filter. Method: Given an N-point FIR filter h[n] and the desired frequency response H_d(F).
Step 1: Suppose h[n] is even symmetric. Then the discrete-time Fourier transform of h[n] can be written as a real cosine series.
Step 2: Calculate mean square error.
Therefore,
Step 3: Minimize the mean square error by doing partial derivative of MSE with respect to
After organization, we have
Step 4: Change back to the presentation of
and
In addition, we can treat the importance of the passband and stopband differently according to our needs by adding a weighting function, W(F).
Then, the MSE error becomes MSE = (1/f_s) ∫ from −f_s/2 to f_s/2 of W(F)·|H(F) − H_d(F)|² dF.
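A least-squares design of this kind, including a per-band weighting, is available in SciPy as firls. The following sketch is illustrative only and assumes a SciPy version in which firls accepts the fs keyword; the band edges, desired gains, and weights are example values.

from scipy import signal

numtaps = 51   # firls requires an odd number of taps
b = signal.firls(numtaps,
                 bands=[0.0, 0.1, 0.2, 0.5],   # passband 0-0.1, stopband 0.2-0.5 (cycles/sample)
                 desired=[1, 1, 0, 0],          # desired amplitude at each band edge
                 weight=[1, 10],                # weight stopband error 10x more than passband
                 fs=1.0)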
Moving average example
A moving average filter is a very simple FIR filter. It is sometimes called a boxcar filter, especially when followed by decimation, or a sinc-in-frequency. The filter coefficients, b_i, are found via the following equation: b_i = 1/(N + 1) for i = 0, 1, ..., N.
To provide a more specific example, we select the filter order: N = 2.
The impulse response of the resulting filter is h[n] = (1/3)·δ[n] + (1/3)·δ[n − 1] + (1/3)·δ[n − 2]. The block diagram on the right shows the second-order moving-average filter discussed below. The transfer function is H(z) = (1/3)·(1 + z⁻¹ + z⁻²) = (1/3)·(z² + z + 1)/z². The next figure shows the corresponding pole–zero diagram. Zero frequency (DC) corresponds to (1, 0), positive frequencies advancing counterclockwise around the circle to the Nyquist frequency at (−1, 0). Two poles are located at the origin, and two zeros are located at z = −1/2 + i·√3/2 and z = −1/2 − i·√3/2.
The frequency response, in terms of normalized frequency ω, is H(ω) = (1/3)·(1 + 2·cos ω)·e^(−iω).
The magnitude and phase components of H(ω) are plotted in the figure. But plots like these can also be generated by doing a discrete Fourier transform (DFT) of the impulse response.
And because of symmetry, filter design or viewing software often displays only the [0, π] region. The magnitude plot indicates that the moving-average filter passes low frequencies with a gain near 1 and attenuates high frequencies, and is thus a crude low-pass filter. The phase plot is linear except for discontinuities at the two frequencies where the magnitude goes to zero. The size of the discontinuities is π, representing a sign reversal. They do not affect the property of linear phase, as illustrated in the final figure.
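A minimal Python sketch reproducing the quantities discussed in this example (illustrative only; assumes NumPy):

import numpy as np

b = np.array([1/3, 1/3, 1/3])                   # b_i = 1/(N + 1) with N = 2
zeros = np.roots(b)                              # roots of (1/3)(z^2 + z + 1)
w = np.linspace(0, np.pi, 512)
H = (1/3) * (1 + 2*np.cos(w)) * np.exp(-1j*w)    # closed-form frequency response
print(zeros)                                     # approximately -0.5 +/- 0.866j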
See also
Cascaded integrator–comb filter
Compact support
Digital delay line
Electronic filter
Filter (signal processing)
Filter design
FIR transfer function
Infinite impulse response (IIR) filter
Z-transform (specifically Linear constant-coefficient difference equation)
Notes
References
Digital signal processing
Filter theory | Finite impulse response | Engineering | 2,189 |
1,989,020 | https://en.wikipedia.org/wiki/Indeterminate%20growth | In biology and botany, indeterminate growth is growth that is not terminated, in contrast to determinate growth that stops once a genetically predetermined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape and is most productive for a single, larger harvest, then either tapers off with minimal new growth or fruit or dies.
Inflorescences
In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts), an indeterminate type (such as a raceme) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or development of fruits and its appearance is of growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened.
Not all plants produce indeterminate inflorescences however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus; another type is exemplified by Allium; and yet others, by Daucus.
Animals
In zoology, indeterminate growth refers to the condition where animals grow rapidly when young, and continue to grow after reaching adulthood although at a slower pace. It is common in fish, amphibians, reptiles, and many molluscs. The term also refers to the pattern of hair growth sometimes seen in humans and a few domestic breeds, where hair continues to grow in length until it is cut.
Mushrooms
Some mushrooms – notably Cantharellus californicus – also exhibit indeterminate growth.
See also
Determinate cultivar
References
Plant morphology
Developmental biology | Indeterminate growth | Biology | 529 |
76,899,013 | https://en.wikipedia.org/wiki/Lipoprotein%20rotamase%20A | Lipoprotein rotamase A (SlrA), also known as peptidyl prolyl isomerase A (PpiA), functions as a molecular chaperone that operates within the Streptococcus pneumoniae cell membrane-cell wall interface as well as outside the bacteria. SlrA shares homology with the cyclophilin-type peptidyl-prolyl isomerases (PPIases). PPIases accelerate the folding of proteins by catalyzing the cis-trans isomer conversions of peptide bonds in the amino acid proline.
Structure
SlrA is a 29 kDa, 267-amino-acid-long membrane-bound lipoprotein. It is encoded by the S. pneumoniae gene, SP_0771, located at position 729,840–730,643 on the complementary strand. The structure of SlrA is predicted to contain an eight-strand β-bundle and two associated α-helices, similar to the PPIase domains of cyclophilins.
Lipidated forms of SlrA occur in all sequenced streptococcal genomes with the homologs sharing 60-70% amino acid sequence identity. SlrA also shares homology with other Gram-positive cyclophilins such as the membrane-bound PpiA in Lactococcus lactis.
Function
As a PPIase, SlrA functions at the rate-limiting step of protein folding of secreted proteins. The identity of the proteins folded by SlrA and SlrA homologs is still under investigation, but the roles of these proteins can be hypothesized based on the phenotypes observed in mutants without SlrA. The SlrA homologs in Streptococcus mutans and Streptococcus gordonii, PpiA, also display anti-phagocytic activity in their respective bacteria. SlrA has been implicated in S. pneumoniae colonization, competence, cell wall integrity, and adhesion to human cells derived from the upper and lower respiratory tract. It is hypothesized that SlrA acts as a protein-folding chaperone for client proteins involved in those key processes. Additionally, SlrA has been shown to indirectly contribute to S. pneumoniae anti-phagocytic activity.
References
Lipoproteins | Lipoprotein rotamase A | Chemistry | 488 |
44,078,902 | https://en.wikipedia.org/wiki/Upstream%20contamination | Upstream contamination by floating particles is a counterintuitive phenomenon in fluid dynamics. When pouring water from a higher container to a lower one, particles floating in the latter can climb upstream into the upper container. A definitive explanation is still lacking: experimental and computational evidence indicates that the contamination is chiefly driven by surface tension gradients, however the phenomenon is also affected by the dynamics of swirling flows that remain to be fully investigated.
Origins
The phenomenon was observed in 2008 by the Argentine Sebastian Bianchini during mate tea preparation, while studying physics at the University of Havana.
It rapidly attracted the interest of professor Alejandro Lage-Castellanos, who performed, with Bianchini, a series of controlled experiments. Later on, professor Ernesto Altshuler completed the trio in Havana, which resulted in Bianchini's diploma thesis and a short original paper posted on the arXiv preprint server and mentioned as a surprising fact in some online journals.
Bianchini's Diploma thesis showed that the phenomenon could be reproduced in a controlled laboratory setting using mate leaves or chalk powder as contaminants, and that temperature gradients (hot in the top, cold in the bottom) were not necessary to generate the effect. The research also showed that surface tension was key to the explanation through the Marangoni effect. This was suggested by two facts: (a) both mate and chalk lowered the surface tension of water, and (b) if an industrial surfactant was added on the upper reservoir, the upstream motion of particles would stop.
Confirmation
After a talk by Lage-Castellanos at the First Workshop on Complex Matter Physics in Havana (MarchCOMeeting'2012), professor Troy Shinbrot of Rutgers University became interested in the subject. Together with student Theo Siu, Cuban results were confirmed and expanded with new experiments and numerical simulations at Rutgers, which resulted in a joint peer-reviewed paper.
See also
List of unsolved problems in physics
References
Fluid dynamics
Physical paradoxes
Physical phenomena
External links | Upstream contamination | Physics,Chemistry,Engineering | 409 |
5,067,669 | https://en.wikipedia.org/wiki/Potassium%20sulfide | Potassium sulfide is an inorganic compound with the formula K2S. The colourless solid is rarely encountered, because it reacts readily with water, a reaction that affords potassium hydrosulfide (KSH) and potassium hydroxide (KOH). Most commonly, the term potassium sulfide refers loosely to this mixture, not the anhydrous solid.
Structure
It adopts "antifluorite structure," which means that the small K+ ions occupy the tetrahedral (F−) sites in fluorite, and the larger S2− centers occupy the eight-coordinate sites. Li2S, Na2S, and Rb2S crystallize similarly.
Synthesis and reactions
It can be produced by heating K2SO4 with carbon (coke):
K2SO4 + 4 C → K2S + 4 CO
In the laboratory, pure K2S may be prepared by the reaction of potassium and sulfur in anhydrous ammonia.
Sulfide is highly basic, consequently K2S completely and irreversibly hydrolyzes in water according to the following equation:
K2S + H2O → KOH + KSH
For many purposes, this reaction is inconsequential since the mixture of SH− and OH− behaves as a source of S2−. Other alkali metal sulfides behave similarly.
Use in fireworks
Potassium sulfides are formed when black powder is burned and are important intermediates in many pyrotechnic effects, such as senko hanabi and some glitter formulations.
See also
Liver of sulfur
References
Potassium compounds
Sulfides
Inorganic compounds
Fluorite crystal structure | Potassium sulfide | Chemistry | 338 |
1,751,486 | https://en.wikipedia.org/wiki/Human%20ecosystem | Human ecosystems are human-dominated ecosystems of the Anthropocene era that are viewed as complex cybernetic systems by conceptual models increasingly used by ecological anthropologists and other scholars to examine the ecological aspects of human communities in a way that integrates multiple factors such as economics, sociopolitical organization, psychological factors, and physical factors related to the environment.
A human ecosystem has three central organizing concepts: human environed unit (an individual or group of individuals), environment, interactions and transactions between and within the components. The total environment includes three conceptually distinct, but interrelated environments: the natural, human constructed, and human behavioral. These environments furnish the resources and conditions necessary for life and constitute a life-support system.
Further reading
Basso, Keith 1996 “Wisdom Sits in Places: Landscape and Language among the Western Apache.” Albuquerque: University of New Mexico Press.
Douglas, Mary 1999 “Implicit Meanings: Selected Essays in Anthropology.” London and New York: Routledge, Taylor & Francis Group.
Nadasdy, Paul 2003 “Hunters and Bureaucrats: Power, Knowledge, and Aboriginal-State Relations in the Southwest Yukon.” Vancouver and Toronto: UBC Press.
References
See also
Media ecosystem
Urban ecosystem
Total human ecosystem
Anthropology
Ecosystems
Environmental sociology
Social systems concepts
Systems biology | Human ecosystem | Biology,Environmental_science | 262 |
55,254,460 | https://en.wikipedia.org/wiki/Schools%20Consent%20Project | The Schools Consent Project is a charity organisation based in the UK which delivers sexual education workshops focusing on the topic of consent. It was founded in 2014, delivering its first workshop in March 2015.
Pupils aged 11–18 are taken through topics such as harassment, revenge porn and sexting.
The organisation makes use of pro-bono and voluntary contributions of expertise from lawyers and law students.
References
External links
Educational charities based in the United Kingdom
Sexual harassment in the United Kingdom
Sexology organizations
2015 establishments in the United Kingdom | Schools Consent Project | Biology | 103 |
71,961,383 | https://en.wikipedia.org/wiki/HD%20101782 | HD 101782, also known as HR 4507, is a yellowish-orange hued star located in the southern circumpolar constellation of Chamaeleon. It has an apparent magnitude of 6.33, placing it near the limit for naked eye visibility. Based on parallax measurements from Gaia DR3, the object is estimated to be 356 light years away from the Solar System. It appears to be receding with a heliocentric radial velocity of . De Medeiros found the radial velocity to be variable, suggesting that it may be a spectroscopic binary. Eggen (1989) lists it as a member of the young disk population.
HD 101782 has a stellar classification of K0 III, indicating that it is an evolved red giant. It is currently on the horizontal branch (HB), fusing helium at its core. The star is located on the cool end of the red clump, a region on the HR diagram with metal-rich HB stars. It has double the mass of the Sun but has expanded to 10.1 times the Sun's girth. It radiates 55 times the luminosity of the Sun from its photosphere at an effective temperature of . It has an iron abundance 110% that of the Sun, placing it at solar metallicity. Like most giants it spins slowly, having a projected rotational velocity lower than .
TYC 9507-3649-1 is a 10th magnitude optical companion located away along a position angle of 139°. This companion was first noticed by Sir John Herschel in 1837.
References
K-type giants
Horizontal-branch stars
Double stars
101782
Chamaeleon
056996
4507
CD-82 00224
Chamaeleontis, 33 | HD 101782 | Astronomy | 366 |
12,799,896 | https://en.wikipedia.org/wiki/Geochemical%20Society | The Geochemical Society is a nonprofit scientific organization founded to encourage the application of chemistry to solve problems involving geology and cosmology. The society promotes understanding of geochemistry through the annual Goldschmidt Conference, publication of a peer-reviewed journal and electronic newsletter, awards programs recognizing significant accomplishments in the field, and student development programs. The society's offices are located on the campus of the Carnegie Institution for Science in Washington, DC.
Organization and meetings
The Geochemical Society was founded in 1955 at a meeting of the Geological Society of America. Its first president was Earl Ingerson, and dues started at two dollars per year. In 1990, it was incorporated as a 501(c)(3) nonprofit organization.
In 1988, the Geochemical Society created the Goldschmidt Conferences in honor of the geochemist Victor Goldschmidt (1888–1947), "considered to be the founder of modern geochemistry and crystal chemistry". It was soon joined by the European Association of Geochemistry, and at the 2014 meeting the two organizations signed a Memorandum of Understanding for the governance and trademark protection of the meeting. The conference is one of the world's largest devoted to geochemistry. The society's board of directors holds its annual meeting during the conference.
Membership
The Geochemical Society has nearly 4,000 members from more than 70 countries. Most members are students, researchers, and faculty in geochemistry-related fields, although anyone with an interest in geochemistry may join. Membership runs by calendar year, and dues are US$35 for professionals, US$15 for students, and US$20 for seniors. Membership includes a subscription to Elements magazine and also offers discounts on Geochemical Society publications and Mineralogical Society of America publications, as well as conference registration discounts at the Goldschmidt Conference, Fall AGU, and the annual GSA conference.
Publications
The Geochemical Society publishes, co-publishes, or sponsors the following:
Geochimica et Cosmochimica Acta (GCA) – peer-reviewed journal with 24 issues per year, co-sponsored with the Meteoritical Society.
Elements: An International Magazine of Mineralogy, Geochemistry, and Petrology – 6 issues per year
Geochemical News – electronic newsletter published weekly
Special Publications Series – published at various times
Reviews in Mineralogy and Geochemistry (RiMG) – peer-reviewed multi-author volumes on topics approved by the governing councils of the Geochemical Society and the Mineralogical Society of America.
Geochemistry, Geophysics, Geosystems (G-cubed) – online journal with peer-reviewed original research papers. 12 issues per year published in collaboration with the American Geophysical Union.
Awards
The Geochemical Society presents the following annual awards:
V. M. Goldschmidt Award – the society's highest honor, it is awarded for major achievements in geochemistry or cosmochemistry.
F.W. Clarke Medal – named after Frank Wigglesworth Clarke (1847–1931), a chemist who determined the composition of the Earth's crust, it goes to an early-career scientist for an outstanding contribution to geochemistry or cosmochemistry.
C.C. Patterson Medal – named after Clair Cameron Patterson (1922–1995), who developed uranium–lead dating, it recognizes an innovative breakthrough in environmental geochemistry, particularly one of value to society.
Alfred Treibs Medal – Named after Alfred E. Treibs (1899–1983), whose papers on porphyrins were the beginning of the field of organic geochemistry, it is awarded by the Organic Geochemistry Division (OGD) for major achievements in organic geochemistry. The OGD also presents an annual Best Paper Award for a publication in the previous year.
Geochemical Fellows – Starting in 1996, the Geochemical Society and the European Association of Geochemistry (EAG) bestow this honor on outstanding scientists who have made a major contribution to the field of geochemistry. Holders of the Goldschmidt and Treibs medals, as well as the Urey Medal of the EAG, are automatically inducted.
The Distinguished Service Award, which recognizes outstanding service to the Society or the geochemical community, is not awarded every year.
The Geochemical Society sponsors a special lecture at the annual meeting of the Geological Society of America. Called the F. Earl Ingerson Lecture Series, it honors the first president of the Geochemical Society. At the Goldschmidt Conference, the Paul W. Gast Lecture is awarded to a mid-career scientist (under 45 years old) in honor of the first Goldschmidt medalist.
References
External links
Geochemical Society homepage
Geochemistry organizations
Geology societies
Non-profit organizations based in Washington, D.C.
Scientific organizations based in the United States
Scientific organizations established in 1955 | Geochemical Society | Chemistry | 986 |
65,826,995 | https://en.wikipedia.org/wiki/Dominguez%20Butte | Dominguez Butte is a 4,476-foot (1,364 meter) elevation sandstone summit located south of Lake Powell, in San Juan County of southern Utah. It is situated on Navajo Nation land, northeast of the town of Page, and towers over 700 feet above the surrounding terrain as a landmark of the area. Dominguez Butte has a brief appearance in the 1968 film Planet of the Apes, when a spaceship crash lands in Lake Powell.
Geology
Dominguez Butte is a butte composed primarily of Entrada Sandstone, similar to Padres Butte to the north, and Boundary Butte to the south. The Entrada Sandstone overlays Carmel Formation, and below that Page Sandstone at lake level. Above the Entrada layers is Romana Sandstone capped by Morrison Formation. It is located in the southern edge of the Great Basin Desert on the Colorado Plateau. Precipitation runoff from this feature drains into the Colorado River watershed.
History
Francisco Atanasio Domínguez (1740–1805) was a Franciscan missionary and explorer who led the 1776 Domínguez–Escalante expedition. Guided by local Native Americans, the expedition attempted to cross the Colorado River at Lee's Ferry, but found it too difficult. A second ford of the Colorado River, named the Crossing of the Fathers, was successfully made two miles north of Dominguez Butte on November 7, 1776. The descent to the crossing was so treacherous that they had to carve steps into the stone to ensure the livestock could make it down to the river. Today, this ford lies beneath Lake Powell.
This butte's name was officially adopted in 1976 by the U.S. Board on Geographic Names to commemorate Atanasio Domínguez.
Gallery
Climate
According to the Köppen climate classification system, Dominguez Butte is located in an arid climate zone with hot, very dry summers, and chilly winters with very little snow. Spring and fall are the most favorable seasons to visit.
See also
Colorado Plateau
List of rock formations in the United States
References
External links
Weather forecast: Dominguez Butte
Colorado Plateau
Landforms of San Juan County, Utah
Geography of the Navajo Nation
Glen Canyon National Recreation Area
Lake Powell
Buttes of Utah
One-thousanders of the United States
Sandstone formations of the United States | Dominguez Butte | Engineering | 457 |
74,788,131 | https://en.wikipedia.org/wiki/Sylvicola%20dubius | Sylvicola dubius is a species of wood gnat in the genus Sylvicola. The species is predominantly found in southeastern Australia, but can also be found in New Zealand, southwestern Australia and East Timor.
Taxonomy
The species was first described by French entomologist Pierre-Justin-Marie Macquart in 1850, who named the species Chrysopyla dubius.
Behaviour
The species is known to thrive on fallen apples.
Distribution
The species is found in south-eastern Australia, south-western Australia, Tasmania, Lord Howe Island, New Zealand and in East Timor.
Gallery
References
Anisopodidae
Biota of Timor-Leste
Diptera of Australasia
Diptera of New Zealand
Insects described in 1850
Insects of Australia
Taxa named by Pierre-Justin-Marie Macquart | Sylvicola dubius | Biology | 162 |
908,032 | https://en.wikipedia.org/wiki/Hair%20iron | A hair iron or hair tong is a tool used to change the arrangement of the hair using heat. There are three general kinds: curling irons, used to make the hair curl; straightening irons, commonly called straighteners or flat irons, used to straighten the hair; and crimping irons, used to create crimps of the desired size in the hair.
Most models have electric heating; cordless curling irons or flat irons typically use butane, and some flat irons use batteries that can last up to 30 minutes for straightening. Overuse of these tools can cause severe damage to hair.
Types of hair irons
Curling iron
Curling irons, also known as curling tongs, create waves or curls in hair using a variety of different methods. There are many different types of modern curling irons, which can vary by diameter, material, and shape of barrel and the type of handle. The barrel's diameter can be anywhere from to . Smaller barrels typically create spiral curls or ringlets, and larger barrels are used to give shape and volume to a hairstyle.
Curling irons are typically made of ceramic, metal, Teflon, titanium, tourmaline. The barrel's shape can either be a cone, reverse cone, or cylinder, and the iron can have brush attachments or double and triple barrels.
The curling iron can also have either a clipless, Marcel, or spring-loaded handle. Spring-loaded handles are the most popular and use a spring to work the barrel's clamp. When using a Marcel handle, one applies pressure to the clamp. Clipless wands have no clamp: the user simply wraps hair around a rod. Most clipless curling irons come with a Kevlar glove to avoid burns.
Straightening irons
Straightening irons, straighteners, or flat irons work by breaking down the positive hydrogen bonds found in the hair's cortex, which cause hair to open, bend and become curly. Once the bonds are broken, hair is prevented from holding its original, natural form, though the hydrogen bonds can re-form if exposed to moisture. Straightening irons use mainly ceramic material for their plates. Low-end straighteners use a single layer of ceramic coating on the plates, whereas high-end straighteners use multiple layers or even 100% ceramic material. Some straightening irons are fitted with an automatic shut off feature to prevent fire accidents.
Early hair straightening systems relied on harsh chemicals that tended to damage the hair. In the 1870s, the French hairdresser Marcel Grateau introduced heated metal hair care implements such as hot combs to straighten hair. Madame C.J. Walker used combs with wider teeth and popularized their use together with her system of chemical scalp preparation and straightening lotions. Her mentor Annie Malone is sometimes said to have patented the hot comb. Heated metal implements slide more easily through the hair, reducing damage and dryness. Women in the 1960s sometimes used clothing irons to straighten their hair.
In 1909, Isaac K. Shero patented the first hair straightener composed of two flat irons that are heated and pressed together.
Ceramic and electrical straighteners were introduced later, allowing adjustment of heat settings and straightener size. A ceramic hair straightener brush was patented in 2013. Sharon Rabi released the first straightening brush in 2015 under the DAFNI brand name. The ceramic straightening brush has a larger surface area than a traditional flat iron.
Crimping irons
Crimping irons or crimpers work by crimping hair in sawtooth style. The look is similar to the crimps left after taking out small braids. Crimping irons come in different sizes with different sized ridges on the paddles. Larger ridges produce larger crimps in the hair and smaller ridges produce smaller crimps. Crimped hair was very popular in the 1980s and 1990s.
See also
Hot comb
Hair dryer
Hair roller
References
Hairdressing
Home appliances | Hair iron | Physics,Technology | 809 |
46,982 | https://en.wikipedia.org/wiki/Transmission%20system | In telecommunications, a transmission system is a system that transmits a signal from one place to another. The signal can be an electrical, optical or radio signal. The goal of a transmission system is to transmit data accurately and efficiently from point A to point B over a distance, using a variety of technologies such as copper cable and fiber-optic cables, satellite links, and wireless communication technologies.
The International Telecommunication Union (ITU) and the European Telecommunications Standards Institute (ETSI) define a transmission system as the interface and medium through which peer physical layer entities transfer bits. It encompasses all the components and technologies involved in transmitting digital data from one location to another, including modems, cables, and other networking equipment.
Some transmission systems contain multipliers, which amplify a signal prior to re-transmission, or regenerators, which attempt to reconstruct and re-shape the coded message before re-transmission.
One of the most widely used transmission system technologies in the Internet and the public switched telephone network (PSTN) is synchronous optical networking (SONET).
A transmission system is also the medium through which data is transmitted from one point to another. Examples of common transmission systems people use every day are the internet, mobile networks, cordless connections, etc.
Digital transmission system
The ITU defines a digital transmission system as a system that uses digital signals to transmit information. In a digital transmission system, the data is first converted into a digital format and then transmitted over a communication channel. The digital format provides a number of benefits over analog transmission systems, including improved signal quality, reduced noise and interference, and increased data accuracy.
The ITU defines a digital transmission system (DTS) as follows: "A specific means of providing a digital section." The ITU sets global standards for digital transmission systems, including the encoding and decoding methods used, the data rates and transmission speeds, and the types of communication channels used. These standards ensure that digital transmission systems are compatible and interoperable with each other, regardless of the type of data being transmitted or the geographical location of the sender and receiver.
Basic components of a DTS
Point-to-point links are communication systems between two endpoints, usually a sender (transmitter) and a receiver.
System performance analysis:
Link power budget is a power loss model for a point-to-point link.
Rise time budget is analysis method used to measure the amount of dispersion which is present in a link.
Line coding is the process of transforming data into digital signals for transmission over a point-to-point link. Can include a binary data source, multiplexer, and line coder (a minimal encoding sketch follows this list).
Non-return-to-zero (NRZ)
Return-to-zero (RZ)
Phase-encoded (PE)
Block codes
Error correction techniques are used to detect and correct errors that occur during transmission.
Automatic repeat request (ARQ)
Forward error correction (FEC)
Noise effects on system performance can be minimized by using signal conditioning techniques such as signal amplification and filtering.
These techniques are used to improve signal-to-noise ratio, which helps to maintain the integrity of the signal during transmission.
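A minimal Python sketch of the two simplest line codes mentioned above, NRZ and RZ, mapping bits to sample levels (illustrative only; real systems define pulse shapes and timing in much more detail):

def nrz(bits):
    # NRZ-L: 1 -> +1 for the whole bit period, 0 -> -1 (two samples per bit)
    return [level for b in bits for level in ([+1, +1] if b else [-1, -1])]

def rz(bits):
    # Unipolar RZ: 1 -> pulse in the first half of the bit period, return to 0 in the second half
    return [level for b in bits for level in ([+1, 0] if b else [0, 0])]

print(nrz([1, 0, 1, 1]))  # [1, 1, -1, -1, 1, 1, 1, 1]
print(rz([1, 0, 1, 1]))   # [1, 0, 0, 0, 1, 0, 1, 0]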
See also
Signal transmission
Communications satellite
Communications system
Submarine communications cable – a cable on the sea bed
References
Telecommunications systems | Transmission system | Technology | 665 |
33,819,140 | https://en.wikipedia.org/wiki/Vaginal%20microbicide | A vaginal microbicide is a microbicide for vaginal use, generally as protection against the contraction of a sexually transmitted infection during vaginal sexual intercourse. Vaginal microbicides are topical gels or creams inserted into the vagina.
Target market
Researchers have investigated who has interest in using a vaginal microbicide. Condoms are highly effective in preventing the transmission of infection, but worldwide, the decision to use condoms is more often a decision made by males than females. A vaginal microbicide which could prevent sexual transmission of infection would further empower women to influence the result of their sexual encounters. The demographic interested in using the produce included women with the following characteristics:
use condoms to prevent infection
have previously had a sexually transmitted infection
have a sexual partner who had another sexual partner in the past year
minority group
low income
unmarried and not cohabiting
no steady sexual partner
The number of women interested in using such a product has been characterized as being significant enough to merit product development and marketing.
Characteristics
The ideal vaginal microbicide would have the following characteristics: provide protection against infection, not require application at the time of intercourse, and not harm natural tissue. Of these, not harming natural tissue has been the most troublesome aspect of development.
For HIV
Studies of vaginal microbicides for HIV prevention increased rapidly from 2011 to 2013, due mostly to the observation that antiretroviral drugs designed for HIV treatment sometimes also achieve preexposure prophylaxis and significantly reduce HIV risk. Several unrelated chemical mechanisms have been proposed for vaginal microbicides targeting HIV. One obstacle to effective research is that trials may involve social harms for trial participants, although one 2019 study found these social harms to be relatively small. There is also often a self-reporting bias in condom and vaginal microbicide use in trials, suggesting the need for vaginal applicator staining to confirm whether the microbicides were effectively applied.
Surfactants
The first vaginal microbicide studied was nonoxynol-9, which acted as a surfactant.
Blocking HIV binding
PRO 2000, carrageenan, and cellulose sulphate have been studied as microbicides to block HIV binding.
Topical antiretrovirals
Tenofovir has been studied as a topical antiretroviral. One example of a tenofovir study is CAPRISA 004 in 2010, finding its use reduced HIV infection risk by 39% overall.
See also
Rectal microbicide
Microbicides for sexually transmitted diseases
References
External links
Tips to Maintaining a Healthy Vagina
Microbicides
Prevention of HIV/AIDS
Sexually transmitted diseases and infections
Vagina | Vaginal microbicide | Biology | 542 |
23,981,239 | https://en.wikipedia.org/wiki/C20H32O2 | The molecular formula C20H32O2 (molar mass: 304.46 g/mol, exact mass: 304.24023) may refer to:
Arachidonic acid, a fatty acid
Copalic acid, a diterpenoid
(C6)-CP 47,497
5α-Dihydronorethandrolone
Drostanolone, an anabolic steroid
Eicosatetraenoic acid, a type of fatty acid
Mestanolone, a steroid hormone
Mesterolone, a steroid
Methandriol, an androstenediol | C20H32O2 | Chemistry | 140 |
1,258,983 | https://en.wikipedia.org/wiki/Dry%20quicksand | Dry quicksand is loose sand whose bulk density is reduced by blowing air through it and which yields easily to weight or pressure. It acts similarly to normal quicksand, but it does not contain any water and does not operate on the same principle. Dry quicksand can also be a resulting phenomenon of contractive dilatancy.
Historically, the existence of dry quicksand was doubted, and the reports of humans and complete caravans being lost in dry quicksand were considered to be folklore. In 2004, it was created in the laboratory, but it is still not clear what its actual prevalence in nature is.
Scientific research
Writing in Nature, physicist Detlef Lohse and coworkers of University of Twente in Enschede, Netherlands allowed air to flow through very fine sand (typical grain diameter was about 40 micrometers) in a container with a perforated base. They then turned the air stream off before the start of the experiment and allowed the sand to settle: the packing fraction of this sand was only 41% (compared to 55–60% for untreated sand).
Lohse found that a weighted table tennis ball (radius 2 cm, mass 133 g), when released from just above the surface of the sand, would sink to about five diameters. Lohse also observed a "straight jet of sand [shooting] violently into the air after about 100 ms". Objects are known to make a splash when they hit sand, but this type of jet had never been described before.
Lohse concluded that:
In nature, dry quicksands may evolve from the sedimentation of very fine sand after it has been blown into the air and, if large enough, might be a threat to humans. Indeed, reports that travellers and whole vehicles have been swallowed instantly may even turn out to be credible in the light of our results.
During the planning of the Project Apollo Moon missions, dry quicksand on the Moon was considered as a potential danger to the missions. The successful landings of the unmanned Surveyor probes a few years earlier and their observations of a solid, rocky surface largely discounted this possibility, however. The large plates at the end of legs of the Apollo Lunar Module were designed to reduce this danger, but the astronauts did not encounter dry quicksand.
See also
Fech fech
Fluidization
Dilatancy (granular material)
Kekexili: Mountain Patrol (film that features dry quicksand)
References
External links
Pictures of the quicksand experiment by Lohse et al. .
Links to video of the quicksand experiment by Lohse et al. .
Sediments
Geological hazards
Granularity of materials
Soil mechanics | Dry quicksand | Physics,Chemistry | 564
30,942,109 | https://en.wikipedia.org/wiki/Short-path%20distillation | Short-path distillation is a distillation technique that involves the distillate traveling a short distance, often only a few centimeters, and is normally done at reduced pressure. Short-path distillation systems often have a variety of names depending on the manufacturer of the system and what compounds are being distilled within them. A classic example would be a distillation involving the distillate traveling from one glass bulb to another, without the need for a condenser separating the two chambers. This technique is often used for compounds which are unstable at high temperatures or to purify small amounts of compound. The advantage is that the heating temperature can be considerably lower at reduced pressure than the boiling point of the liquid at standard pressure, and the distillate only has to travel a short distance before condensing. A short path ensures that little compound is lost on the sides of the apparatus. The Kugelrohr is a kind of a short path distillation apparatus which can contain multiple chambers to collect distillate fractions. To increase the evaporation rate without increasing temperature there are several modern techniques that increase the surface area of the liquid such as thin film, wiped film or 'wiper' film, and rolled film all of which involve mechanically spreading a film of the liquid over a large surface.
See also
Fragrance extraction
References
Distillation
Separation processes
Laboratory techniques | Short-path distillation | Chemistry | 287 |
15,369,660 | https://en.wikipedia.org/wiki/Ceramic%20flux | Fluxes are substances, usually oxides, used in glasses, glazes and ceramic bodies to lower the high melting point of the main glass forming constituents, usually silica and alumina. A ceramic flux functions by promoting partial or complete liquefaction. The most commonly used fluxing oxides in a ceramic glaze contain lead, sodium, potassium, lithium, calcium, magnesium, barium, zinc, strontium, and manganese. These are introduced to the raw glaze as compounds, for example lead as lead oxide. Boron is considered by many to be a glass former rather than a flux.
Some oxides, such as calcium oxide, flux significantly only at high temperature. Lead oxide is the traditional low temperature flux used for crystal glass, but it is now avoided because it is toxic even in small quantities. It is being replaced by other substances, especially boron and zinc oxides.
In clay bodies a flux creates a limited and controlled amount of glass, which works to cement crystalline phases together. Fluxes play a key role in the vitrification of clay bodies by lowering the overall melting point. The most common fluxes used in clay bodies are potassium oxide and sodium oxide which are found in feldspars. A predominant flux in glazes is calcium oxide which is usually obtained from limestone. The two most common feldspars in the ceramic industry are potash feldspar (orthoclase) and soda feldspar (albite).
Common oxides
Commonly used fluxing oxides include lead oxide, sodium oxide, potassium oxide, lithium oxide, calcium oxide, magnesium oxide, barium oxide, zinc oxide, strontium oxide and manganese oxide.
See also
Flux (metallurgy)
Loss on ignition
Secondary flux
References
Ceramic materials | Ceramic flux | Engineering | 336 |
56,894,853 | https://en.wikipedia.org/wiki/Maharram%20Mammadyarov | Maharram Ali oghlu Mammadyarov (; 17 October 1924 – 2 January 2022) was an Azerbaijani scientist, doctor of chemistry, real member of Azerbaijan National Academy of Sciences.
Biography
Maharram Mammadyarov was born in Yayji, Julfa District, Nakhichivan ASSR, Azerbaijan SSR. In 1941, he graduated from Nakhchivan Pedagogical Technical School. He served in the Soviet Army during World War II. Mammadyarov graduated from Azerbaijan State University in 1949 and obtained his PhD from Leningrad Technical University in 1953. From 1953 to 1955 he worked as scientific secretary at the Institute of Chemistry of the Academy of Sciences of the Azerbaijan Soviet Socialist Republic. From 1955 to 1959 he was a Senior Research Fellow at the Institute of Organic Chemistry named after N. Zelinski of the USSR Academy of Sciences. From 1959 to 1969 Mammadyarov worked at the Institute of Petrochemical Processes named after Y.H. Mammadaliyev. From 1973 to 1979 he was the head of the Nakhchivan regional scientific center of ANAS. For his work on the use of carbon dioxide in industry, he was awarded the State Prize of the Republic of Azerbaijan. From 1975 to 1978 Mammadyarov worked as a teacher at the Nakhchivan State Pedagogical Institute (now Nakhchivan State University). From 1981 to 1994 he worked as a department head, and from 1994 to 2002 as director of the Microbiology Institute of ANAS. From 1969 he worked at the Institute of Petrochemical Processes named after Y.H. Mammadaliyev as head of the Synthesis and Technology of Synthetic Fats laboratory, a post he held until his death in 2022.
Mammadyarov died on 2 January 2022, at the age of 97. He was the father of Minister of Foreign Affairs of Azerbaijan, Elmar Mammadyarov.
Awards
1st degree order of the Great Patriotic War
State Prize of the Republic of Azerbaijan (1980)
Y. Mammadaliyev medal (1995)
Shohrat Order (2005)
"OGS" golden medal and diploma (2006)
Honorary title of Honored Scientist (2009)
Y. Mammadaliyev prize (2014)
Sharaf Order (2014)
References
1924 births
2022 deaths
Soviet organic chemists
Academic staff of Nakhchivan State University
Azerbaijani chemists
20th-century chemists
People from the Nakhchivan Autonomous Republic
Soviet military personnel of World War II | Maharram Mammadyarov | Chemistry | 511 |
30,647 | https://en.wikipedia.org/wiki/Tidal%20acceleration | Tidal acceleration is an effect of the tidal forces between an orbiting natural satellite (e.g. the Moon) and the primary planet that it orbits (e.g. Earth). The acceleration causes a gradual recession of a satellite in a prograde orbit (satellite moving to a higher orbit, away from the primary body), and a corresponding slowdown of the primary's rotation. The process eventually leads to tidal locking, usually of the smaller body first, and later the larger body (e.g. theoretically with Earth in 50 billion years). The Earth–Moon system is the best-studied case.
The similar process of tidal deceleration occurs for satellites that have an orbital period that is shorter than the primary's rotational period, or that orbit in a retrograde direction.
The naming is somewhat confusing, because the average speed of the satellite relative to the body it orbits is decreased as a result of tidal acceleration, and increased as a result of tidal deceleration. This conundrum occurs because a positive acceleration at one instant causes the satellite to loop farther outward during the next half orbit, decreasing its average speed. A continuing positive acceleration causes the satellite to spiral outward with a decreasing speed and angular rate, resulting in a negative acceleration of angle. A continuing negative acceleration has the opposite effect.
Earth–Moon system
Discovery history of the secular acceleration
Edmond Halley was the first to suggest, in 1695, that the mean motion of the Moon was apparently getting faster, by comparison with ancient eclipse observations, but he gave no data. (It was not yet known in Halley's time that what is actually occurring includes a slowing-down of Earth's rate of rotation: see also Ephemeris time – History. When measured as a function of mean solar time rather than uniform time, the effect appears as a positive acceleration.) In 1749 Richard Dunthorne confirmed Halley's suspicion after re-examining ancient records, and produced the first quantitative estimate for the size of this apparent effect: a centurial rate of +10″ (arcseconds) in lunar longitude, which is a surprisingly accurate result for its time, not differing greatly from values assessed later, e.g. in 1786 by de Lalande, and to compare with values from about 10″ to nearly 13″ being derived about a century later.
Pierre-Simon Laplace produced in 1786 a theoretical analysis giving a basis on which the Moon's mean motion should accelerate in response to perturbational changes in the eccentricity of the orbit of Earth around the Sun. Laplace's initial computation accounted for the whole effect, thus seeming to tie up the theory neatly with both modern and ancient observations.
However, in 1854, John Couch Adams caused the question to be re-opened by finding an error in Laplace's computations: it turned out that only about half of the Moon's apparent acceleration could be accounted for on Laplace's basis by the change in Earth's orbital eccentricity. Adams' finding provoked a sharp astronomical controversy that lasted some years, but the correctness of his result, agreed upon by other mathematical astronomers including C. E. Delaunay, was eventually accepted. The question depended on correct analysis of the lunar motions, and received a further complication with another discovery, around the same time, that another significant long-term perturbation that had been calculated for the Moon (supposedly due to the action of Venus) was also in error, was found on re-examination to be almost negligible, and practically had to disappear from the theory. A part of the answer was suggested independently in the 1860s by Delaunay and by William Ferrel: tidal retardation of Earth's rotation rate was lengthening the unit of time and causing a lunar acceleration that was only apparent.
It took some time for the astronomical community to accept the reality and the scale of tidal effects. But eventually it became clear that three effects are involved, when measured in terms of mean solar time. Beside the effects of perturbational changes in Earth's orbital eccentricity, as found by Laplace and corrected by Adams, there are two tidal effects (a combination first suggested by Emmanuel Liais). First there is a real retardation of the Moon's angular rate of orbital motion, due to tidal exchange of angular momentum between Earth and Moon. This increases the Moon's angular momentum around Earth (and moves the Moon to a higher orbit with a lower orbital speed). Secondly, there is an apparent increase in the Moon's angular rate of orbital motion (when measured in terms of mean solar time). This arises from Earth's loss of angular momentum and the consequent increase in length of day.
Effects of Moon's gravity
The plane of the Moon's orbit around Earth lies close to the plane of Earth's orbit around the Sun (the ecliptic), rather than in the plane of the Earth's rotation (the equator) as is usually the case with planetary satellites. The mass of the Moon is sufficiently large, and it is sufficiently close, to raise tides in the matter of Earth. Foremost among such matter, the water of the oceans bulges out both towards and away from the Moon. If the material of the Earth responded immediately, there would be a bulge directly toward and away from the Moon. In the solid Earth tides, there is a delayed response due to the dissipation of tidal energy. The case for the oceans is more complicated, but there is also a delay associated with the dissipation of energy since the Earth rotates at a faster rate than the Moon's orbital angular velocity. This lunitidal interval in the responses causes the tidal bulge to be carried forward. Consequently, the line through the two bulges is tilted with respect to the Earth-Moon direction exerting torque between the Earth and the Moon. This torque boosts the Moon in its orbit and slows the rotation of Earth.
As a result of this process, the mean solar day, which by definition contains 86,400 seconds of mean solar time, is actually getting longer when measured in SI seconds with stable atomic clocks. (The SI second, when adopted, was already a little shorter than the current value of the second of mean solar time.) The small difference accumulates over time, which leads to an increasing difference between our clock time (Universal Time) on the one hand, and International Atomic Time and ephemeris time on the other hand: see ΔT. This led to the introduction of the leap second in 1972 to compensate for differences in the bases for time standardization.
In addition to the effect of the ocean tides, there is also a tidal acceleration due to flexing of Earth's crust, but this accounts for only about 4% of the total effect when expressed in terms of heat dissipation.
If other effects were ignored, tidal acceleration would continue until the rotational period of Earth matched the orbital period of the Moon. At that time, the Moon would always be overhead of a single fixed place on Earth. Such a situation already exists in the Pluto–Charon system. However, the slowdown of Earth's rotation is not occurring fast enough for the rotation to lengthen to a month before other effects make this irrelevant: about 1 to 1.5 billion years from now, the continual increase of the Sun's radiation will likely cause Earth's oceans to vaporize, removing the bulk of the tidal friction and acceleration. Even without this, the slowdown to a month-long day would still not have been completed by 4.5 billion years from now when the Sun will probably evolve into a red giant and likely destroy both Earth and the Moon.
Tidal acceleration is one of the few examples in the dynamics of the Solar System of a so-called secular perturbation of an orbit, i.e. a perturbation that continuously increases with time and is not periodic. Up to a high order of approximation, mutual gravitational perturbations between major or minor planets only cause periodic variations in their orbits, that is, parameters oscillate between maximum and minimum values. The tidal effect gives rise to a quadratic term in the equations, which leads to unbounded growth. In the mathematical theories of the planetary orbits that form the basis of ephemerides, quadratic and higher order secular terms do occur, but these are mostly Taylor expansions of very long time periodic terms. The reason that tidal effects are different is that unlike distant gravitational perturbations, friction is an essential part of tidal acceleration, and leads to permanent loss of energy from the dynamic system in the form of heat. In other words, we do not have a Hamiltonian system here.
Angular momentum and energy
The gravitational torque between the Moon and the tidal bulge of Earth causes the Moon to be constantly promoted to a slightly higher orbit and Earth to be decelerated in its rotation. As in any physical process within an isolated system, total energy and angular momentum are conserved. Effectively, energy and angular momentum are transferred from the rotation of Earth to the orbital motion of the Moon (however, most of the energy lost by Earth (−3.78 TW) is converted to heat by frictional losses in the oceans and their interaction with the solid Earth, and only about 1/30th (+0.121 TW) is transferred to the Moon). The Moon moves farther away from Earth (+38.30±0.08 mm/yr), so its potential energy, which is still negative (in Earth's gravity well), increases, i.e. becomes less negative. It stays in orbit, and from Kepler's 3rd law it follows that its average angular velocity actually decreases, so the tidal action on the Moon actually causes an angular deceleration, i.e. a negative acceleration (−25.97±0.05″/century²) of its revolution around Earth. The actual speed of the Moon also decreases. Although its kinetic energy decreases, its potential energy increases by a larger amount, i.e. Ep = −2Ek (virial theorem).
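The Kepler's-third-law bookkeeping in the previous paragraph can be made concrete with a short calculation. Only the measured recession rate is taken from the text above; the mean Earth–Moon distance and the length of the sidereal month are rounded reference values.

```python
a = 384_400e3                     # m, mean Earth-Moon distance (rounded)
da_per_century = 100 * 38.30e-3   # m, from the measured +38.30 mm/yr
month_s = 27.321661 * 86400       # s, sidereal month (reference value)

# Kepler's third law: P^2 ~ a^3, so dP/P = (3/2) * da/a
dP = 1.5 * (da_per_century / a) * month_s
print(f"fractional distance gain per century: {da_per_century / a:.2e}")
print(f"the month lengthens by roughly {dP * 1e3:.0f} ms per century")
```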
The rotational angular momentum of Earth decreases and consequently the length of the day increases. The net tide raised on Earth by the Moon is dragged ahead of the Moon by Earth's much faster rotation. Tidal friction is required to drag and maintain the bulge ahead of the Moon, and it dissipates the excess energy of the exchange of rotational and orbital energy between Earth and the Moon as heat. If the friction and heat dissipation were not present, the Moon's gravitational force on the tidal bulge would rapidly (within two days) bring the tide back into synchronization with the Moon, and the Moon would no longer recede. Most of the dissipation occurs in a turbulent bottom boundary layer in shallow seas such as the European Shelf around the British Isles, the Patagonian Shelf off Argentina, and the Bering Sea.
The dissipation of energy by tidal friction averages about 3.64 terawatts of the 3.78 terawatts extracted, of which 2.5 terawatts are from the principal M2 lunar tidal component and the remainder from other components, both lunar and solar.
An equilibrium tidal bulge does not really exist on Earth because the continents do not allow this mathematical solution to take place. Oceanic tides actually rotate around the ocean basins as vast gyres around several amphidromic points where no tide exists. The Moon pulls on each individual undulation as Earth rotates—some undulations are ahead of the Moon, others are behind it, whereas still others are on either side. The "bulges" that actually do exist for the Moon to pull on (and which pull on the Moon) are the net result of integrating the actual undulations over all the world's oceans.
Historical evidence
This mechanism has been working for 4.5 billion years, since oceans first formed on Earth, but less so at times when much or most of the water was ice. There is geological and paleontological evidence that Earth rotated faster and that the Moon was closer to Earth in the remote past. Tidal rhythmites are alternating layers of sand and silt laid down offshore from estuaries having great tidal flows. Daily, monthly and seasonal cycles can be found in the deposits. This geological record is consistent with these conditions 620 million years ago: the day was 21.9±0.4 hours, and there were 13.1±0.1 synodic months/year and 400±7 solar days/year. The average recession rate of the Moon between then and now has been 2.17±0.31 cm/year, which is about half the present rate. The present high rate may be due to near resonance between natural ocean frequencies and tidal frequencies.
Analysis of layering in fossil mollusc shells from 70 million years ago, in the Late Cretaceous period, shows that there were 372 days a year, and thus that the day was about 23.5 hours long then.
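The day length quoted for the Late Cretaceous follows from simple arithmetic, assuming the duration of the year has remained essentially unchanged:

```python
hours_per_year = 365.25 * 24      # present-day year, assumed constant in duration
days_per_year_cretaceous = 372    # from the shell-layering analysis
print(f"{hours_per_year / days_per_year_cretaceous:.1f} hours per day")   # ~23.6 h
```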
Quantitative description of the Earth–Moon case
The motion of the Moon can be followed with an accuracy of a few centimeters by lunar laser ranging (LLR). Laser pulses are bounced off corner-cube prism retroreflectors on the surface of the Moon, emplaced during the Apollo missions of 1969 to 1972 and by Lunokhod 1 in 1970 and Lunokhod 2 in 1973. Measuring the return time of the pulse yields a very accurate measure of the distance. These measurements are fitted to the equations of motion. This yields numerical values for the Moon's secular deceleration, i.e. negative acceleration, in longitude and the rate of change of the semimajor axis of the Earth–Moon ellipse. From the period 1970–2015, the results are:
−25.97 ± 0.05 arcsecond/century² in ecliptic longitude
+38.30 ± 0.08 mm/yr in the mean Earth–Moon distance
This is consistent with results from satellite laser ranging (SLR), a similar technique applied to artificial satellites orbiting Earth, which yields a model for the gravitational field of Earth, including that of the tides. The model accurately predicts the changes in the motion of the Moon.
Finally, ancient observations of solar eclipses give fairly accurate positions for the Moon at those moments. Studies of these observations give results consistent with the value quoted above.
The other consequence of tidal acceleration is the deceleration of the rotation of Earth. The rotation of Earth is somewhat erratic on all time scales (from hours to centuries) due to various causes. The small tidal effect cannot be observed in a short period, but the cumulative effect on Earth's rotation as measured with a stable clock (ephemeris time, International Atomic Time) of a shortfall of even a few milliseconds every day becomes readily noticeable in a few centuries. Since some event in the remote past, more days and hours have passed (as measured in full rotations of Earth) (Universal Time) than would be measured by stable clocks calibrated to the present, longer length of the day (ephemeris time). This is known as ΔT. Recent values can be obtained from the International Earth Rotation and Reference Systems Service (IERS). A table of the actual length of the day in the past few centuries is also available.
From the observed change in the Moon's orbit, the corresponding change in the length of the day can be computed (where "cy" means "century"):
+2.4 ms/d/century or +88 s/cy² or +66 ns/d².
However, from historical records over the past 2700 years the following average value is found:
+1.72 ± 0.03 ms/d/century or +63 s/cy² or +47 ns/d². (i.e. an accelerating cause is responsible for −0.7 ms/d/century)
By twice integrating over the time, the corresponding cumulative value is a parabola having a coefficient of T² (time in centuries squared) of (1/2) × 63 s/cy²:
ΔT = (1/2) × 63 s/cy² × T² ≈ +31 s/cy² × T².
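Used as a rule of thumb, the parabola gives the cumulative clock difference for epochs a few centuries or millennia away. The sketch below simply evaluates the formula; the common choice of a reference epoch near 1820 (where the 86,400 SI-second day fits best) is an assumption of the example, not part of the formula itself.

```python
def delta_t_seconds(centuries_from_epoch: float, coeff: float = 31.0) -> float:
    """Cumulative Delta-T in seconds from the parabolic rule above."""
    return coeff * centuries_from_epoch ** 2

for T in (1, 5, 10, 20):           # 20 centuries before ~1820 is classical antiquity
    dt = delta_t_seconds(T)
    print(f"T = {T:2d} cy  ->  Delta-T ~ {dt:6.0f} s  ({dt / 3600:.1f} h)")
```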
Opposing the tidal deceleration of Earth is a mechanism that is in fact accelerating the rotation. Earth is not a sphere, but rather an ellipsoid that is flattened at the poles. SLR has shown that this flattening is decreasing. The explanation is that during the ice age large masses of ice collected at the poles, and depressed the underlying rocks. The ice mass started disappearing over 10000 years ago, but Earth's crust is still not in hydrostatic equilibrium and is still rebounding (the relaxation time is estimated to be about 4000 years). As a consequence, the polar diameter of Earth increases, and the equatorial diameter decreases (Earth's volume must remain the same). This means that mass moves closer to the rotation axis of Earth, and that Earth's moment of inertia decreases. This process alone leads to an increase of the rotation rate (phenomenon of a spinning figure skater who spins ever faster as they retract their arms). From the observed change in the moment of inertia the acceleration of rotation can be computed: the average value over the historical period must have been about −0.6 ms/century. This largely explains the historical observations.
Other cases of tidal acceleration
Most natural satellites of the planets undergo tidal acceleration to some degree (usually small), except for the two classes of tidally decelerated bodies. In most cases, however, the effect is small enough that even after billions of years most satellites will not actually be lost. The effect is probably most pronounced for Mars's second moon Deimos, which may become an Earth-crossing asteroid after it leaks out of Mars's grip.
The effect also arises between the different components of a binary star system, where the mutual tidal interaction influences the stars' rotation and orbit over long timescales.
Tidal deceleration
This comes in two varieties: satellites whose orbital period is shorter than the primary's rotational period (Mars's inner moon Phobos is the best-known example), and satellites that orbit in a retrograde direction (such as Neptune's moon Triton). In both cases the tidal torque drains angular momentum from the satellite's orbit, so the satellite gradually spirals inward.
Mercury and Venus are believed to have no satellites chiefly because any hypothetical satellite would have suffered deceleration long ago and crashed into the planets due to the very slow rotation speeds of both planets; in addition, Venus also has retrograde rotation.
See also
Tidal locking
Tidal force
Tides
Tidal heating
References
External links
The Recession of the Moon and the Age of the Earth-Moon System
Tidal Heating as Described by University of Washington Professor Toby Smith
Acceleration
Geodesy
Orbits
Orbit of the Moon | Tidal acceleration | Mathematics | 3,855 |
5,178,835 | https://en.wikipedia.org/wiki/PROP%20%28category%20theory%29 | In category theory, a branch of mathematics, a PROP is a symmetric strict monoidal category whose objects are the natural numbers n, identified with finite sets of the corresponding cardinality, and whose tensor product is given on objects by the addition on numbers. Because of “symmetric”, for each n, the symmetric group on n letters is given as a subgroup of the automorphism group of n. The name PROP is an abbreviation of "PROduct and Permutation category".
The notion was introduced by Adams and Mac Lane; the topological version of it was later given by Boardman and Vogt.
Following them, J. P. May then introduced the term “operad”, which is a particular kind of PROP, for the object which Boardman and Vogt called the "category of operators in standard form".
There are the following inclusions of full subcategories:
where the first category is the category of (symmetric) operads.
Examples and variants
An important elementary class of PROPs are the sets of all matrices (regardless of number of rows and columns) over some fixed ring R. More concretely, these matrices are the morphisms of the PROP; the objects can be taken as either the sets R^n (sets of vectors) or just as the plain natural numbers n (since objects do not have to be sets with some structure). In this example:
Composition of morphisms is ordinary matrix multiplication.
The identity morphism of an object n (or R^n) is the identity matrix with side n.
The product acts on objects like addition (m + n, or R^m and R^n giving R^(m+n)) and on morphisms like the operation of constructing block diagonal matrices: A ⊗ B is the block matrix with A in the upper left corner, B in the lower right corner, and zeros elsewhere.
The compatibility of composition and product thus boils down to
(A1 ⊗ B1)(A2 ⊗ B2) = (A1A2) ⊗ (B1B2); a small numerical check of this identity is sketched after this list.
As an edge case, matrices with no rows (0 × n matrices) or no columns (n × 0 matrices) are allowed, and with respect to multiplication count as being zero matrices. The identity of the object 0 is the 0 × 0 matrix.
The permutations in the PROP are the permutation matrices. Thus the left action of a permutation on a matrix (morphism of this PROP) is to permute the rows, whereas the right action is to permute the columns.
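The following minimal NumPy sketch (an illustration only, not part of the formal definition) checks the block-diagonal compatibility identity on randomly chosen integer matrices:

```python
import numpy as np

def oplus(A, B):
    """Block-diagonal sum: A in the upper left, B in the lower right."""
    Z1 = np.zeros((A.shape[0], B.shape[1]), dtype=int)
    Z2 = np.zeros((B.shape[0], A.shape[1]), dtype=int)
    return np.block([[A, Z1], [Z2, B]])

rng = np.random.default_rng(0)
A1, A2 = rng.integers(-3, 4, (2, 3)), rng.integers(-3, 4, (3, 2))
B1, B2 = rng.integers(-3, 4, (1, 2)), rng.integers(-3, 4, (2, 3))

lhs = oplus(A1, B1) @ oplus(A2, B2)
rhs = oplus(A1 @ A2, B1 @ B2)
print(np.array_equal(lhs, rhs))   # True
```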
There are also PROPs of matrices where the product is the Kronecker product, but in that class of PROPs the matrices must all be of the form d^m × d^n (sides are all powers of some common base d); these are the coordinate counterparts of appropriate symmetric monoidal categories of vector spaces under tensor product.
Further examples of PROPs:
the discrete category of natural numbers,
the category FinSet of natural numbers and functions between them,
the category Bij of natural numbers and bijections,
the category Inj of natural numbers and injections.
If the requirement “symmetric” is dropped, then one gets the notion of a PRO category. If “symmetric” is replaced by braided, then one gets the notion of a PROB category.
the category BijBraid of natural numbers, equipped with the braid group Bn as the automorphisms of each n (and no other morphisms), is a PROB but not a PROP.
the augmented simplex category of natural numbers and order-preserving functions is an example of a PRO that is not even a PROB.
Algebras of a PRO
An algebra of a PRO P in a monoidal category C is a strict monoidal functor from P to C. Every PRO P and category C give rise to a category of algebras whose objects are the algebras of P in C and whose morphisms are the natural transformations between them.
For example:
an algebra of the discrete category of natural numbers is just an object of C,
an algebra of FinSet is a commutative monoid object of C,
an algebra of the augmented simplex category is a monoid object in C.
More precisely, what we mean here by "the algebras of the augmented simplex category in C are the monoid objects in C", for example, is that the category of algebras of the augmented simplex category in C is equivalent to the category of monoids in C.
See also
Lawvere theory
Permutation category
References
Monoidal categories | PROP (category theory) | Mathematics | 786 |
36,426,069 | https://en.wikipedia.org/wiki/Hasse%20invariant%20of%20an%20algebra | In mathematics, the Hasse invariant of an algebra is an invariant attached to a Brauer class of algebras over a field. The concept is named after Helmut Hasse. The invariant plays a role in local class field theory.
Local fields
Let K be a local field with valuation v and D a K-algebra. We may assume D is a division algebra with centre K of degree n. The valuation v can be extended to D, for example by extending it compatibly to each commutative subfield of D: the value group of this valuation is (1/n)Z.
There is a commutative subfield L of D which is unramified over K, and D splits over L. The field L is not unique but all such extensions are conjugate by the Skolem–Noether theorem, which further shows that any automorphism of L is induced by a conjugation in D. Take γ in D such that conjugation by γ induces the Frobenius automorphism of L/K and let v(γ) = k/n. Then k/n modulo 1 is the Hasse invariant of D. It depends only on the Brauer class of D.
The Hasse invariant is thus a map from the Brauer group of a local field K to the divisible group Q/Z. Every class in the Brauer group is represented by a class in the Brauer group of an unramified extension L/K of degree n, which by the Grunwald–Wang theorem and the Albert–Brauer–Hasse–Noether theorem we may take to be a cyclic algebra (L,φ,π^k) for some k mod n, where φ is the Frobenius map and π is a uniformiser. The invariant map attaches the element k/n mod 1 to the class. This exhibits the invariant map as a homomorphism
invL/K : Br(L/K) → (1/n)Z/Z ⊂ Q/Z.
The invariant map extends to Br(K) by representing each class by some element of Br(L/K) as above.
For a non-Archimedean local field, the invariant map is a group isomorphism.
In the case of the field R of real numbers, there are two Brauer classes, represented by the algebra R itself and the quaternion algebra H. It is convenient to assign invariant zero to the class of R and invariant 1/2 modulo 1 to the quaternion class.
In the case of the field C of complex numbers, the only Brauer class is the trivial one, with invariant zero.
Global fields
For a global field K and a central simple algebra D over K, for each valuation v of K we can consider the extension of scalars Dv = D ⊗ Kv. The extension Dv splits for all but finitely many v, so that the local invariant of Dv is almost always zero. The Brauer group Br(K) fits into an exact sequence
0 → Br(K) → ⊕v∈S Br(Kv) → Q/Z → 0,
where S is the set of all valuations of K and the right arrow is the sum of the local invariants. The injectivity of the left arrow is the content of the Albert–Brauer–Hasse–Noether theorem. Exactness in the middle term is a deep fact from global class field theory.
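As an illustration of the sum condition encoded by this sequence, consider the Hamilton quaternion algebra (-1,-1) over Q. The facts used below (it is ramified exactly at v = 2 and v = infinity, with local invariant 1/2 at each of those places) are standard but are assumptions of the example rather than statements made above.

```python
from fractions import Fraction

# local invariants of (-1,-1)/Q at a few places (all other places give 0)
local_invariants = {"2": Fraction(1, 2), "infinity": Fraction(1, 2),
                    "3": Fraction(0), "5": Fraction(0), "7": Fraction(0)}

total = sum(local_invariants.values()) % 1    # addition carried out in Q/Z
print(total)        # 0, as required by exactness at the middle term
```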
References
Further reading
Field (mathematics)
Algebraic number theory | Hasse invariant of an algebra | Mathematics | 680 |
29,031,119 | https://en.wikipedia.org/wiki/Almud | The almud is a unit of measurement of volume used in France, Spain and in parts of the Americas that were colonized by each country. The word comes from the Arabic "al-múdd". The exact value of the almud differed from region to region, and also varied according to the nature of the measured good. In Portugal the name almude was used, and its values were much larger than the Spanish ones. It is still used in rural Mexico, Panama, Chile and other countries. An almud is a box with internal marks indicating different measurements.
It was also used to name a given area of land, that area corresponding to how much could be sown with the quantity of grain contained in an almud.
Iberian Spain: 4.625 liters
Canary Islands, at Las Palmas: 5.50 liters
Argentina
Córdoba: 18.08 liters
Corrientes: 21.49 liters
Mendoza: 9.31 liters
Belize: 5.683 liters
Chile: 8.08 liters
Mexico: 7.568 liters
Philippines: 1.76 liters
Puerto Rico: 20 liters
United States, New Mexico: 412.71 cubic inches, approximately 6.76 liters.
As unit of mass
In some South American countries an almud was a unit of mass.
Bolivia
Tarata, Cochabamba: 7.36 kg.
Arampampa, Potosí: 4.14 kg.
Buena Vista, Santa Cruz: 14.72 kg.
Ecuador: 12.88 kg.
Venezuela: varied between 9 and 50 kg.
See also
Spanish customary units
References
Units of volume | Almud | Mathematics | 336 |
13,365,973 | https://en.wikipedia.org/wiki/Nathaniel%20Kleitman | Nathaniel Kleitman (April 26, 1895 – August 13, 1999) was an American physiologist and sleep researcher who served as Professor Emeritus in Physiology at the University of Chicago. He is recognized as the father of modern sleep research, and is the author of the seminal 1939 book Sleep and Wakefulness.
Biography
Early life
Nathaniel Kleitman was born in Chișinău, also known as Kishinev, the capital of the province of Bessarabia (now Moldova), in 1895 to a Jewish family. He was deeply interested in consciousness and reasoned that he could gain insight into consciousness by studying the unconsciousness of sleep. Pogroms drove him to Palestine, and in 1915 he emigrated to the United States as a result of World War I. At the age of twenty, he landed in New York City penniless; by 1923, at age twenty-eight, he had worked his way through City College of New York and earned a PhD from the University of Chicago's Department of Physiology. His thesis was "Studies on the physiology of sleep." Soon after, in 1925, he joined the faculty there. An early sponsor of Kleitman's sleep research was the Wander Company, which manufactured Ovaltine and hoped to promote it as a remedy for insomnia.
REM sleep
Eugene Aserinsky, one of Kleitman's graduate students, decided to hook sleepers up to an early version of an electroencephalogram machine, which scribbled its traces across rolls of paper each night. In the process, Aserinsky noticed that several times each night the sleepers went through periods when their eyes darted wildly back and forth. Kleitman insisted that the experiment be repeated yet again, this time on his daughter, Esther. In 1953, he and Aserinsky introduced the world to "rapid-eye movement," or REM sleep. Kleitman and Aserinsky demonstrated that REM sleep was correlated with dreaming and brain activity. Another of Kleitman's graduate students, William C. Dement, who was a professor of psychiatry at the Stanford medical school, described this as the year that "the study of sleep became a true scientific field."
Rest activity cycle
Kleitman made countless additional contributions to the field of sleep research and was especially interested in "rest-activity" cycles, leading to many fundamental findings on circadian and ultradian rhythms. Kleitman proposed the existence of a Basic rest activity cycle, or BRAC, during both sleep and wakefulness.
Other experiments
Renowned for his personal and experimental rigor, he conducted well-known sleep studies underground in Mammoth Cave, Kentucky and lesser-known studies underwater in submarines during World War II and above the Arctic Circle.
See also
Chronotype
References
External links
Guide to the Nathaniel Kleitman Papers 1896-2001 at the University of Chicago Special Collections Research Center
1895 births
1999 deaths
Scientists from Chișinău
People from Kishinyovsky Uyezd
Moldovan Jews
Emigrants from the Russian Empire to the Ottoman Empire
Emigrants from the Russian Empire to the United States
American people of Moldovan-Jewish descent
American physiologists
Sleep researchers
American men centenarians
Chronobiologists
University of Chicago alumni
University of Chicago faculty
Jewish neuroscientists
American neuroscientists
Jewish centenarians | Nathaniel Kleitman | Biology | 674 |
57,483,702 | https://en.wikipedia.org/wiki/Hydrogen%20ditelluride | Hydrogen ditelluride or ditellane is an unstable hydrogen dichalcogenide containing two tellurium atoms per molecule, with the structure H–Te–Te–H (formula H2Te2). Hydrogen ditelluride is interesting to theorists because its molecule is simple yet asymmetric (with no centre of symmetry) and is predicted to be one of the easiest molecules in which to detect parity violation, in which the left-handed molecule has properties differing from those of the right-handed one due to the effects of the weak force.
Production
Hydrogen ditelluride can possibly be formed at the tellurium cathode during electrolysis in acid. When electrolysed in alkaline solutions, a tellurium cathode produces ditelluride ions as well as a red polytelluride. The greatest amount of ditelluride is made when the pH is over 12.
Apart from its speculative detection in electrolysis, ditellane has been detected in the gas phase produced from di-sec-butylditellane.
Properties
Hydrogen ditelluride has been investigated theoretically, with various properties predicted. The molecule is twisted, with C2 symmetry. There are two enantiomers. Hydrogen ditelluride is one of the simplest possible unsymmetrical molecules; any simpler molecule would not have the required low symmetry. The equilibrium geometry (not counting zero point energy or vibrational energy) has bond lengths of 2.879 Å between the tellurium atoms and 1.678 Å between hydrogen and tellurium. The H–Te–Te bond angle is 94.93°. The dihedral angle of lowest energy between the two Te–H bonds (the angle between the two H–Te–Te planes) is 89.32°. The trans configuration is higher in energy (3.71 kcal/mol), and the cis would be even higher (4.69 kcal/mol).
Being chiral, the molecule is predicted to show evidence of parity violation, though this may suffer interference from stereomutation tunneling, in which the P enantiomer and M enantiomer spontaneously convert into each other by quantum tunneling. The parity violation effect on energy comes about from virtual Z boson exchanges between the nucleus and the electrons. It is proportional to the cube of the atomic number, so it is stronger in tellurium molecules than in analogues built from elements higher up in the periodic table (O, S, Se). Because of parity violation, the energies of the two enantiomers differ, and the difference is likely to be larger in this molecule than in most molecules, so an effort is underway to observe this so-far undetected effect. The tunneling effect is reduced by higher masses, so the deuterium form will show less tunneling. In a torsional vibrational mode, the molecule can twist back and forth, storing energy. Seven different quantum vibration levels are predicted below the energy needed to jump to the other enantiomer. The levels are numbered vt = 0 up to 6. The sixth level is predicted to be split into two energy levels because of quantum tunneling. The parity violation energy is calculated to be about 90 Hz when expressed as a frequency.
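To put the roughly 90 Hz figure in perspective, it can be converted into other common energy units (the physical constants below are rounded CODATA values):

```python
h = 6.62607015e-34     # J*s
c_cm = 2.99792458e10   # cm/s
N_A = 6.02214076e23    # 1/mol
eV = 1.602176634e-19   # J

nu = 90.0              # Hz, parity-violating splitting quoted above
E = h * nu
print(f"{E:.2e} J  =  {E / eV:.2e} eV  =  {E * N_A:.2e} J/mol  =  {nu / c_cm:.1e} cm^-1")
```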
The different vibrational modes of the molecule are the symmetric Te–H stretch, the symmetric bend, the torsion, the Te–Te stretch, the asymmetric Te–H stretch and the asymmetric bend. The time to tunnel between enantiomers is only 0.6 ms for the normal isotopologue, but is about 66,000 seconds (18 h 20 min) for the tritium isotopomer.
Related
There are organic derivatives, in which the hydrogen is replaced by organic groups. One example is bis(2,4,6-tributylphenyl)ditellane. Others are diphenyl ditelluride and 1,2-bis(cyclohexylmethyl)ditellane. A ligand -TeTeH is known in some transition metal complexes. IUPAC nomenclature calls this "ditellanido".
References
Tellurium compounds
Binary compounds
Hydrogen compounds
Asymmetry | Hydrogen ditelluride | Physics | 805 |
7,709,376 | https://en.wikipedia.org/wiki/Excitation%20filter | An excitation filter is a high quality optical-glass filter commonly used in fluorescence microscopy and spectroscopic applications for selection of the excitation wavelength of light from a light source. Most excitation filters select light of relatively short wavelengths from an excitation light source, as only those wavelengths would carry enough energy to cause the object the microscope is examining to fluoresce sufficiently. The excitation filters used may come in two main types — short pass filters and band pass filters. Variations of these filters exist in the form of notch filters or deep blocking filters (commonly employed as emission filters). Other forms of excitation filters include the use of monochromators, wedge prisms coupled with a narrow slit (for selection of the excitation light) and the use of holographic diffraction gratings, etc. [for beam diffraction of white laser light into the required excitation wavelength (selected for by a narrow slit)].
An excitation filter is commonly packaged with an emission filter and a dichroic beam splitter in a cube so that the group is inserted together into the microscope. The dichroic beam splitter controls which wavelengths of light go to their respective filter.
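A filter cube only works if the three components fit together spectrally: the excitation passband must lie below the dichroic cutoff and the emission passband above it. The sketch below encodes that simple check; the wavelengths are hypothetical values loosely in the range used for green-emitting fluorophores, not a real filter set.

```python
def cube_is_consistent(excitation_band, dichroic_cutoff_nm, emission_band):
    """True if excitation passband < dichroic cutoff < emission passband (nm)."""
    return excitation_band[1] < dichroic_cutoff_nm < emission_band[0]

print(cube_is_consistent((450, 490), 495, (500, 550)))   # True
print(cube_is_consistent((450, 510), 495, (500, 550)))   # False: excitation overlaps the dichroic
```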
References
Optical filters | Excitation filter | Chemistry | 260 |
3,430,399 | https://en.wikipedia.org/wiki/Robert%20Maillart | Robert Maillart (6 February 1872 – 5 April 1940) was a Swiss civil engineer who revolutionized the use of structural reinforced concrete with such designs as the three-hinged arch and the deck-stiffened arch for bridges, and the beamless floor slab and mushroom ceiling for industrial buildings. His Salginatobel (1929–1930) and Schwandbach (1933) bridges changed the aesthetics and engineering of bridge construction dramatically and influenced decades of architects and engineers after him. In 1991 the Salginatobel Bridge was declared an International Historic Civil Engineering Landmark by the American Society of Civil Engineers.
Early life and education
Robert Maillart was born on 6 February 1872 in Bern, Switzerland. He studied structural engineering at the Federal Institute of Technology (ETH) in Zurich from 1890 to 1894, where lectures by Wilhelm Ritter on graphical statics formed part of the curriculum. Maillart did not excel in academic theories, but understood the necessity of making assumptions and visualizing when analyzing a structure. A traditional method prior to the 1900s was to use shapes that could be analyzed easily using mathematics.
This overuse of mathematics annoyed Maillart, who greatly preferred to stand back and use common sense to predict full-scale performance. He rarely tested his bridges prior to construction; only upon completion would he verify that a bridge was adequate, and he often did so by crossing it himself. This attitude towards bridge design and construction is what led to his innovative designs.
Career
Maillart returned to Bern to work for three years with Pümpin & Herzog (1894–1896). He next worked for two years with the city of Zurich, then for a few years with a private firm there.
By 1902, Maillart established his own firm, Maillart & Cie. In 1912 he moved his family with him to Russia while he managed construction of major projects for large factories and warehouses in Kharkov, Riga and St. Petersburg, as Russia was industrializing, with the help of Swiss investments. Unaware of the outbreak of World War I, Maillart was caught in the country with his family. In 1916 his wife died, and in 1917 the Communist Revolution and nationalizing of assets caused him to lose his projects and bonds. When the widower Maillart and his three children returned to Switzerland, he was penniless and heavily in debt to Swiss banks. After that he had to work for other firms, but the best of his designs were still to come. By 1920 he moved to an engineering office in Geneva, which later had offices in Bern and Zurich.
Development and use of reinforced concrete
The first use of concrete as a major bridge construction material was in 1856. It was used to form a multiple-arch structure on the Grand Maître Aqueduct in France. The concrete was cast in its crudest form, a huge mass without reinforcement. Later in the nineteenth century, engineers explored the possibilities of reinforced concrete as a structural material. They found that the concrete carried compressive forces, while steel bars carried the tension forces. This made concrete a better material for structures.
Joseph Monier, from France, is credited with being the first to understand the principles of reinforced concrete. He embedded an iron-wire mesh into concrete. He was a gardener, not a licensed engineer, and sold his patents to contractors who built the first generation of reinforced concrete bridges in Europe. He also perfected the technique of pre-stressing concrete, which leaves permanent compressive stresses in concrete arches.
By the early twentieth century, reinforced concrete became an acceptable substitute in construction for all previous structural materials, such as stone, wood, and steel. People such as Monier had developed useful techniques for design and construction, but no one had created new forms that showed the full aesthetic nature of reinforced concrete.
Robert Maillart had an intuition and genius that exploited the aesthetic of concrete. He designed three-hinged arches in which the deck and the arch ribs were combined, to produce closely integrated structures that evolved into stiffened arches of very thin reinforced concrete and concrete slabs. The Salginatobel Bridge (1930) and Schwandbach Bridge (1933) are classic examples of Maillart's three-hinged arch bridges and deck-stiffened arch bridges, respectively. They have been recognized for their elegance and their influence on the later design and engineering of bridges.
These designs went beyond the common boundaries of concrete design in Maillart's time. Both of the bridges mentioned above are great examples of Maillart's ability to simplify design in order to allow for maximum use of materials and to incorporate the natural beauty of the structure's environment. Selected from among 19 entrants in a design competition in part because of the low cost of his proposal, Maillart began construction of the Salginatobel Bridge in Schiers, Switzerland in 1929; it opened on 13 August 1930.
Maillart is also known for his revolutionary column design in a number of buildings. He constructed his first mushroom ceiling for a warehouse in Zurich, treating the concrete floor as a slab rather than reinforcing it with beams. One of his most famous designs is that of the columns in the water filtration plant in Rorschach, Switzerland. Maillart decided to abandon standard methods in order to create "the more rational and more beautiful European method of building". Maillart's design of the columns included flaring the tops to reduce the bending moment in the beams between the columns. With the flare, the columns formed slight arches to transfer the loads from the ceiling beams to the columns.
Maillart also flared the bottom of the columns to reduce the pressure (force per area) exerted at any one point on the soil foundation. By flaring the bottoms of the columns, the load was distributed over a larger area, thereby reducing the pressure on the soil foundation.
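The pressure argument is just load divided by bearing area, as the toy numbers below illustrate (they are invented for the example and are not Maillart's actual design values):

```python
def bearing_pressure_kpa(load_kn: float, base_side_m: float) -> float:
    """Uniform bearing pressure of a square base: load / area (kN/m^2 = kPa)."""
    return load_kn / base_side_m ** 2

load = 800.0                        # kN, assumed column load
for side in (0.5, 1.0, 1.5):        # straight shaft vs. progressively flared bases
    print(f"{side:.1f} m square base -> {bearing_pressure_kpa(load, side):7.1f} kPa")
```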
Many of his predecessors had modeled structures of this kind in wood and steel, but Maillart was revolutionary in being the first to use concrete. He chose concrete because it could support a large mound of earthen material placed for insulation against freezing. Since concrete performs very well in compression, it was the perfect material to support a large, unmoving mass of earth.
His technique was used to build the Ponte Del Ciolo (Ciolo's Bridge), which is located at Ciolo in Apulia.
Legacy and honors
1936, elected as Fellow to Royal Institute of British Architects (RIBA)
1947, an exhibit on Robert Maillart at the Museum of Modern Art in New York featured his bridges and design work
Salginatobel Bridge was designated a Swiss heritage site of national significance.
1991, the American Society of Civil Engineers declared the Salginatobel Bridge an International Historic Civil Engineering Landmark.
2001, the British trade journal, Bridge – Design and Engineering, voted Maillart's Salginatobel Bridge "the most beautiful bridge of the century".
Analytical methods
By the second half of the nineteenth century, major advances in design theory, graphic statics, and knowledge of material strengths had been achieved. As the nineteenth century neared its end, the major factor contributing to the need for scientific design of bridges was the railroads. Engineers had to know the precise levels of stresses in bridge members, in order to accommodate the impact of trains. The first design solution was obtained by Squire Whipple in 1847. His major breakthrough was that truss members could be analyzed as a system of forces in equilibrium. This system, known as the "method of joints," permits the determination of stresses in all members of a truss if two forces are known. The next advance in design was the "method of sections," developed by Wilhelm Ritter in 1862. Ritter simplified the calculations of forces by developing a very simple formula for determining the forces in the members intersected by a cross-section. A third advance was a better method of graphical analysis, developed independently by James Clerk Maxwell (UK) and Karl Culmann (Switzerland).
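A minimal worked example of the method of joints, using a generic textbook joint rather than any of Maillart's structures: a pin joint carries a downward load P and is held by one horizontal member and one diagonal member at 45 degrees, giving two equilibrium equations in two unknown member forces.

```python
import numpy as np

P = 10.0                         # kN, downward load at the joint (assumed)
theta = np.radians(45.0)

# Unknowns x = [F_horizontal, F_diagonal], tension taken as positive.
# Sum of horizontal forces: F_horizontal + F_diagonal*cos(theta) = 0
# Sum of vertical forces:   F_diagonal*sin(theta) - P            = 0
A = np.array([[1.0, np.cos(theta)],
              [0.0, np.sin(theta)]])
b = np.array([0.0, P])
F_h, F_d = np.linalg.solve(A, b)
print(f"diagonal: {F_d:.1f} kN (tension), horizontal: {F_h:.1f} kN (compression)")
```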
Robert Maillart learned the analytical methods of his era, but he was most influenced by the principles developed by his mentor, Wilhelm Ritter, mentioned above. Maillart studied under Ritter, who had three basic principles of design. The first of these was to value calculations based on simple analysis, so that appropriate assumptions could be made based on common sense. The second was to consider carefully the construction process of the structure, not just the final product. The last principle was to test a structure always with full-scale load tests. All these principles are an adaptation of the available techniques, but with an emphasis on the careful study of previously built structures.
At the time of Maillart and Ritter, other designers preferred that their designs evolve from previously successful structures and designs. German engineers and scientists had developed elaborate mathematical techniques, and were confident that they did not need practical load tests of their designs developed using those techniques. However, these techniques did not encourage designers to think of unusual shapes, because those shapes could not be completely analyzed using the available mathematical techniques. Ritter's principles did allow for uncommon shapes.
Bridges
Tavanasa Bridge
Arve Bridge
Zuoz Bridge
Stauffacher Bridge
Salginatobel Bridge
Schwandbach Bridge
Bohlbach Bridge
Rossgraben Bridge
Traubach Bridge
Vessy Bridge
See also
Hugh John Flemming Bridge (1960), Canada
Mike O'Callaghan – Pat Tillman Memorial Bridge (2010), United States
References
Sources
ASCE, Notable Engineers - Robert Maillart, History and Heritage of Civil Engineering, undated
Bill, Max, Robert Maillart Bridges and Constructions, Verlag für Architektur, Zurich, 1949
Billington, David P., Robert Maillart’s Bridges: The Art of Engineering, Princeton University Press, 1978
Billington, David P., Robert Maillart and the Art of Reinforced Concrete, Architectural History Foundation, 1991
Billington, David P., Robert Maillart: Builder, Designer, and Artist, Cambridge University Press, 1997
Billington, David P., The Art of Structural Design: A Swiss Legacy, Princeton University Press, 2003
DeLony, E., Context for World Heritage Bridges, ICOMOS and TICCIH, 1996
Molgaard, John, "The Engineering Profession", lecture to Faculty of Engineering and Applied Science, Memorial University of Newfoundland, 1995
Laffranchi, Massimo and Peter Marti. "Robert Maillart's curved concrete arch bridges", Journal of Structural Engineering 123.10 (1997): 1280 Academic Search Elite. 8 February 2007
Fausto Giovannardi "Robert Maillart e l'emancipazione del cemento armato", Fausto Giovannardi, Borgo San Lorenzo, 2007.
External links
"Maillart's Bridges" documentary by Heinz Emigholz
Structurae web page with list of works
Swiss civil engineers
1872 births
1940 deaths
Bridge engineers
Concrete pioneers
Structural engineers
20th-century Swiss engineers | Robert Maillart | Engineering | 2,234 |
27,380,847 | https://en.wikipedia.org/wiki/Color%20killer | The color killer is an electronic stage in color TV receiver sets which acts as a cutting circuit to cut off color processing when the TV set receives a monochrome signal.
Monochromatic transmission
When a receiver is tuned to a monochrome transmission, the displayed scene should have no color components. Hardware failure in the color killer stage may cause false color pattern display even during monochrome transmission.
In normal color reception, high frequency luminance is mistaken for color, causing relatively invisible false color patterns. The reason for this invisibility is due to a key feature of NTSC/PAL, chroma/luminance frequency interleaving, where these false patterns are in complementary colors for adjacent video frames, allowing the human eye to average out the false color patterns. If, during a monochrome transmission, a color killer failure allows color processing activation when it should not, a chroma subcarrier in the color processing stages is regenerated with no reference, giving that subcarrier enough frequency error that the chroma/luminance interleaving feature of NTSC/PAL no longer works, making the false color patterns, overlaying the otherwise monochrome picture, much more visible to the human eye.
Also, when the color killer fails during a monochrome transmission, external noise caused by a weak signal shows up as colored confetti interference.
Color transmission
In a color TV waveform, a reference pulse, called the burst, is transmitted along the back porch portion of the video signal. If the transmitted signal is monochromatic, then the burst is not transmitted. The color killer is actually a muting circuit in the chroma section which supervises the burst and turns off the color processing if no burst is received (i.e. when the received signal is monochromatic.) The main purpose of the color burst in the first place is a reference for the receiver to regenerate the chroma subcarrier, which in turn is utilized to demodulate the color difference signals.
High frequency external interference caused by poor reception conditions causes colored confetti interference overlaying the picture.
Equation
In NTSC and PAL transmissions, the color TV signal can be represented as:
E = EY + a1·U·sin(ωt) + a2·V·cos(ωt)
In this equation a1 and a2 are attenuation factors, EY is the luminance signal, U and V are the so-called color difference signals and ω is the angular frequency of the color carrier. The color carrier frequency is within the luminance bandwidth.
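The decision logic of a color killer can be caricatured in a few lines: measure the burst amplitude during the back porch and enable the chroma path only if it exceeds a threshold. The sketch below is schematic, not a description of any actual receiver; the threshold, sampling rate and subcarrier frequency are assumed example values.

```python
import numpy as np

def chroma_enabled(back_porch_samples: np.ndarray, threshold: float = 0.05) -> bool:
    """Enable colour processing only if a burst of sufficient amplitude is present."""
    burst_amplitude = (back_porch_samples.max() - back_porch_samples.min()) / 2
    return burst_amplitude > threshold

t = np.arange(0, 2.5e-6, 1e-8)                         # ~2.5 us back-porch window
f_sc = 4.43e6                                          # Hz, PAL-like subcarrier
burst_present = 0.15 * np.sin(2 * np.pi * f_sc * t)    # colour transmission
burst_absent = 0.01 * np.random.default_rng(0).normal(size=t.size)   # noise only

print(chroma_enabled(burst_present))   # True  -> colour decoding enabled
print(chroma_enabled(burst_absent))    # False -> chroma muted (killer active)
```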
Color eraser (Mehikon)
In the 1970s, the Israeli government considered the import of color televisions frivolous and a luxury which would increase social gaps. Therefore, the government ordered the Israel Broadcasting Authority to cease broadcasting in color. As it was impractical to remove the chrominance signal from programs previously recorded in color, this was accomplished by simply omitting the burst phase signal from the broadcast. The "damaged" signal triggered the "color killer" mechanism in color television sets, which prevented the appearance of color pictures. This method was named Mehikon ("eraser").
Shortly after the introduction of the "Color eraser", special TV sets equipped with Anti-Mehikon ("anti-eraser") devices were offered. These devices reconstructed the burst phase signal according to several known standards. The viewer had to adjust a knob until the picture on the screen appeared in natural colors. According to a report in Yediot Aharonoth from January 1979, viewers had to perform adjustments every 15 minutes on average in normal conditions, or up to 10 times an hour when special problems occurred, in order to restore colors if the picture suddenly turned black and white.
Based on information from owners of appliance stores, the report estimated that 90% of those who purchased color television sets also purchased the Anti-Mehikon device, which added about 5–10% to the price of the television.
Eventually, the Mehikon idea was proven futile, and the Israeli television stopped using it in 1980, allowing freely receivable color transmission.
Notes
References
Television technology
Color
he:אנטי-מחיקון | Color killer | Technology | 838 |
28,182 | https://en.wikipedia.org/wiki/Social%20epistemology | Social epistemology refers to a broad set of approaches that can be taken in epistemology (the study of knowledge) that construes human knowledge as a collective achievement. Another way of characterizing social epistemology is as the evaluation of the social dimensions of knowledge or information.
As a field of inquiry in analytic philosophy, social epistemology deals with questions about knowledge in social contexts, meaning those in which knowledge attributions cannot be explained by examining individuals in isolation from one another. The most common topics discussed in contemporary social epistemology are testimony (e.g. "When does a belief that x is true which resulted from being told 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?"). Social epistemology also examines the social justification of belief.
One of the enduring difficulties with defining "social epistemology" that arises is the attempt to determine what the word "knowledge" means in this context. There is also a challenge in arriving at a definition of "social" which satisfies academics from different disciplines. Social epistemologists may exist working in many of the disciplines of the humanities and social sciences, most commonly in philosophy and sociology. In addition to marking a distinct movement in traditional and analytic epistemology, social epistemology is associated with the interdisciplinary field of science and technology studies (STS).
History of the term
The consideration of the social dimensions of knowledge in relation to philosophy started around 380 BCE with Plato's dialogue Charmides. In this dialogue Socrates asks whether anyone is capable of examining whether another man's claim to know something is true; he questions the degree of certainty a non-specialist in a field can have about another person's claim to be a specialist in that same field. Charmides also explored the tendency of the utopian vision of social relations to degenerate into dystopian fantasy. Since the exploration of dependence on authoritative figures constitutes a part of the study of social epistemology, this confirms that the idea existed in people's minds long before it was given its label.
In 1936, Karl Mannheim turned Karl Marx's theory of ideology (which interpreted the "social" aspect in epistemology to be of a political or sociological nature) into an analysis of how human society develops and functions in this respect. In particular, this Marxist analysis prompted Mannheim to write Ideology and Utopia, which investigated the classical sociology of knowledge and the construct of ideology.
The term “social epistemology” was first coined by the library scientists Margaret Egan and Jesse Shera in a Library Quarterly paper at the University of Chicago Graduate Library School in the 1950s. The term was used by Robert K. Merton in a 1972 article in the American Journal of Sociology and then by Steven Shapin in 1979. However, it was not until the 1980s that the current sense of “social epistemology” began to emerge.
The rise of social epistemology
In the 1980s, there was a powerful growth of interest amongst philosophers in topics such as epistemic value of testimony, the nature and function of expertise, proper distribution of cognitive labor and resources among individuals in the communities and the status of group reasoning and knowledge.
In 1987, the philosophical journal Synthese published a special issue on social epistemology which included two authors who have since taken this branch of epistemology in two divergent directions: Alvin Goldman and Steve Fuller. Fuller founded a journal called Social Epistemology: A journal of knowledge, culture, and policy in 1987 and published his first book, Social Epistemology, in 1988. Goldman's Knowledge in a Social World came out in 1999. Goldman advocates a type of epistemology which is sometimes called “veritistic epistemology” because of its large emphasis on truth. This type of epistemology is sometimes seen to side with “essentialism” as opposed to “multiculturalism”, but Goldman has argued that this association between veritistic epistemology and essentialism is not necessary. He describes social epistemology as knowledge derived from one's interactions with another person, group, or society.
Goldman looks into one of the two strategies of the socialization of epistemology. This strategy evaluates the social factors that impact knowledge formed on true belief. Fuller, in contrast, prefers the second strategy, which defines knowledge influenced by social factors as collectively accepted belief. The difference between the two can be illustrated with examples: the first strategy means analyzing how one's degree of wealth (a social factor) influences what information one determines to be valid, whereas the second strategy evaluates how wealth influences the knowledge one acquires from the beliefs of the society in which one lives.
Fuller's position supports the conceptualization of social epistemology as a critique of context, particularly in his approach to the "knowledge society" and the "university" as integral contexts of modern learning. This has been said to articulate a reformulation of the Duhem–Quine thesis, which covers the underdetermination of theory by data; the problem of context then assumes the form "knowledge is determined by its context". In 2012, on the occasion of the 25th anniversary of Social Epistemology, Fuller reflected upon the history and the prospects of the field, including the need for social epistemology to re-connect with the larger issues of knowledge production first identified by Charles Sanders Peirce as “cognitive economy” and nowadays often pursued by library and information science. As for the “analytic social epistemology”, to which Goldman has been a significant contributor, Fuller concludes that it has “failed to make significant progress owing, in part, to a minimal understanding of actual knowledge practices, a minimised role for philosophers in ongoing inquiry, and a focus on maintaining the status quo of epistemology as a field.”
Kuhn, Foucault, and the sociology of scientific knowledge
The basic view of knowledge that motivated the emergence of social epistemology as it is perceived today can be traced to the work of Thomas Kuhn and Michel Foucault, which gained acknowledgment at the end of the 1960s. Both brought historical concerns directly to bear on problems long associated with the philosophy of science. Perhaps the most notable issue here was the nature of truth, which both Kuhn and Foucault described as a relative and contingent notion. On this background, ongoing work in the sociology of scientific knowledge (SSK) and the history and philosophy of science (HPS) was able to assert its epistemological consequences, leading most notably to the establishment of the strong programme at the University of Edinburgh. In terms of the two strands of social epistemology, Fuller is more sensitive and receptive to this historical trajectory (if not always in agreement) than Goldman, whose “veritistic” social epistemology can be reasonably read as a systematic rejection of the more extreme claims associated with Kuhn and Foucault.
Social epistemology as a field
In the standard sense of the term today, social epistemology is a field within analytic philosophy. It focuses on the social aspects of how knowledge is created and disseminated. What precisely these social aspects are, and whether they have beneficial or detrimental effects upon the possibilities to create, acquire and spread knowledge is a subject of continuous debate. The most common topics discussed in contemporary social epistemology are testimony (e.g. "When does a belief that 'x is true' which resulted from being told that 'x is true' constitute knowledge?"), peer disagreement (e.g. "When and how should I revise my beliefs in light of other people holding beliefs that contradict mine?"), and group epistemology (e.g. "What does it mean to attribute knowledge to groups rather than individuals, and when are such knowledge attributions appropriate?").
Within the field, "the social" is approached in two complementary and not mutually exclusive ways: "the social" character of knowledge can either be approached through inquiries in inter-individual epistemic relations or through inquiries focusing on epistemic communities. The inter-individual approach typically focuses on issues such as testimony, epistemic trust as a form of trust placed by one individual in another, epistemic dependence, epistemic authority, etc. The community approach typically focuses on issues such as community standards of justification, community procedures of critique, diversity, epistemic justice, and collective knowledge.
Social epistemology as a field within analytic philosophy has close ties to, and often overlaps with, the philosophy of science. While parts of the field engage in abstract, normative considerations of knowledge creation and dissemination, other parts of the field are "naturalized epistemology" in the sense that they draw on empirically gained insights, whether from natural-science research in, e.g., cognitive psychology, or from qualitative or quantitative social science research. (For the notion of "naturalized epistemology" see Willard Van Orman Quine.) And while parts of the field are concerned with analytic considerations of a rather general character, case-based and domain-specific inquiries into, e.g., knowledge creation in collaborative scientific practice, knowledge exchange on online platforms, or knowledge gained in learning institutions play an increasing role.
Important academic journals for social epistemology as a field within analytic philosophy are, e.g., Episteme, Social Epistemology, and Synthese. However, major works within this field are also published in journals that predominantly address philosophers of science and psychology or in interdisciplinary journals which focus on particular domains of inquiry (such as, e.g., Ethics and Information Technology).
Major philosophers who influenced social epistemology
Plato in his dialogue Charmides
John Locke in Problem of Testimony
David Hume in Problem of Testimony
Thomas Reid in Problem of Testimony
Karl Marx in interrelating Ideology and Knowledge.
Karl Mannheim, who concentrated on the social conditioning of knowledge, reasoning that a knowledge claim's validity is restricted by the social conditions under which the claim was initially made.
Miranda Fricker in Problem of Testimony
Present and future concerns
Both varieties of social epistemology remain largely "academic" or "theoretical" projects. Yet both emphasize the social significance of knowledge and therefore the cultural value of social epistemology itself. A range of journals publishing social epistemology welcome papers that include a policy dimension.
More practical applications of social epistemology can be found in the areas of library science, academic publishing, guidelines for scientific authorship and collaboration, knowledge policy and debates over the role of the Internet in knowledge transmission and creation.
Social epistemology is still considered a relatively new addition to philosophy, with its problems and theories still fresh and in rapid movement. Of increasing importance are developments of social epistemology within transdisciplinarity, as manifested by media ecology.
See also
Bayesian epistemology
Collaborative intelligence
Collective intelligence
Distributed cognition
Double hermeneutic
Epistemic democracy
Epistemology
Feminist epistemology
Group cognition
Intersubjectivity
Knowledge falsification
Shared intentionality
Situated cognition
Sociology of knowledge
Social constructionism
Social philosophy
Reflexivity (social theory)
Media ecology
Notes
References
Berlin, James A. Rhetorics, Poetics, and Cultures: Refiguring College English Studies, Indiana: Parlor Press, 2003.
Egan, Margaret and Jesse Shera. 1952. "Foundations of a Theory of Bibliography." Library Quarterly 44:125-37.
Goldman, Alvin; Blanchard, Thomas (2016-01-01). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). Metaphysics Research Lab, Stanford University.
Goldman, Alvin. "Social Epistemology". stanford.library.sydney.edu.au. Retrieved 2017-02-22.
Longino, Helen. 1990. Science as Social Knowledge. Princeton: Princeton University Press.
Longino, Helen. 2001. The Fate of Knowledge. Princeton: Princeton University Press.
Remedios, Francis. 2003. Legitimizing Scientific Knowledge: An Introduction to Steve Fuller’s Social Epistemology. Lexington Books.
Rimkutė, Audronė (2014-09-28). "The Problem of Social Knowledge in Contemporary Social Epistemology: Two Approaches". Problemos (in Lithuanian). 0 (65): 4–19. doi:10.15388/Problemos.2004.65.6645. ISSN 1392-1126.
Schmitt, Frederick F. 1994. Socializing Epistemology. Rowman & Littlefield.
Schmitt, Frederick F.; Scholz, Oliver R. (2010-02-01). "Introduction: The History of Social Epistemology". Episteme. 7 (1): 1–6. doi:10.3366/E174236000900077X. ISSN 1750-0117.
Solomon, Miriam. 2001. Social Empiricism. Cambridge: MIT Press.
Further reading
"What Is Social Epistemology? A Smorgasbord of projects", in Pathways to Knowledge: Private and Public, Oxford University Press, Pg:182-204,
"Relativism, Rationalism and the Sociology of Knowledge", Barry Barnes and David Bloor, in Rationality and Relativism, Pg:22
Social Epistemology, Steve Fuller, Indiana University Press, p. 3.
External links
The journal Social Epistemology
Interdisciplinary subfields of sociology
Epistemology
Philosophy of science
Social philosophy | Social epistemology | Technology | 2,861 |
691,839 | https://en.wikipedia.org/wiki/W.%20G.%20Unruh | William George Unruh (; born August 28, 1945) is a Canadian physicist at the University of British Columbia, Vancouver who described the hypothetical Unruh effect in 1976.
Early life and education
Unruh was born into a Mennonite family in Winnipeg, Manitoba. His parents were Benjamin Unruh, a refugee from Russia, and Anna Janzen, who was born in Canada. He obtained his B.Sc. from the University of Manitoba in 1967, followed by an M.A. (1969) and Ph.D. (1971) from Princeton University, New Jersey, under the direction of John Archibald Wheeler.
Areas of research
Unruh has made seminal contributions to our understanding of gravity, black holes, cosmology, and quantum fields in curved spaces, including the discovery of what is now known as the Unruh effect. Unruh has contributed to the foundations of quantum mechanics in areas such as decoherence and the question of time in quantum mechanics. He has helped to clarify the meaning of nonlocality in a quantum context, in particular that quantum nonlocality does not follow from Bell's theorem and that ultimately quantum mechanics is a local theory. Unruh is also one of the main critics of the Afshar experiment.
Unruh is also interested in music and teaches the Physics of Music.
Unruh effect
The Unruh effect, described by Unruh in 1976, is the prediction that an accelerating observer will observe black-body radiation where an inertial observer would observe none. In other words, the accelerating observer will find itself in a warm background, the temperature of which is proportional to the acceleration. The same quantum state of a field, which is taken to be the ground state for observers in inertial systems, is seen as a thermal state for the uniformly accelerated observer. The Unruh effect therefore means that the very notion of the quantum vacuum depends on the path of the observer through spacetime.
The Unruh effect can be expressed in a simple equation giving the equivalent energy kT of a uniformly accelerating particle (with a being the constant acceleration), as:

kT = ħa / (2πc),

where ħ is the reduced Planck constant and c is the speed of light.
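As a rough worked example of this relation (the constants are standard values rounded for illustration, and the snippet itself is only a sketch, not part of the original article), the corresponding Unruh temperature T = ħa/(2πck_B) for an acceleration equal to Earth's surface gravity is of the order of 10⁻²⁰ K, far too small to measure directly:

import math

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
c = 2.99792458e8         # speed of light (m/s)
k_B = 1.380649e-23       # Boltzmann constant (J/K)

def unruh_temperature(a):
    """Temperature seen by a uniformly accelerated observer: T = hbar*a / (2*pi*c*k_B)."""
    return hbar * a / (2 * math.pi * c * k_B)

# For an acceleration equal to Earth's surface gravity (about 9.81 m/s^2)
# the temperature is roughly 4e-20 K, i.e. utterly negligible.
print(unruh_temperature(9.81))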
References
External links
University of British Columbia Physics Dept. page
Dr. Unruh's course webpage - PHYS 200 Introduction to Relativity and Quanta
UBC Theoretical Physics Homepage - a web server run by Unruh
1945 births
Living people
Canadian physicists
Canadian Mennonites
Fellows of the Royal Society
People from Winnipeg
Princeton University alumni
Relativity theorists
Academic staff of the University of British Columbia
University of Manitoba alumni
Fellows of the American Physical Society | W. G. Unruh | Physics | 527 |
480,658 | https://en.wikipedia.org/wiki/List%20of%20web%20service%20specifications | There are a variety of specifications associated with web services. These specifications are in varying degrees of maturity and are maintained or supported by various standards bodies and entities. These specifications are the basic web services framework established by first-generation standards represented by WSDL, SOAP, and UDDI. Specifications may complement, overlap, and compete with each other. Web service specifications are occasionally referred to collectively as "WS-*", though there is not a single managed set of specifications that this consistently refers to, nor a recognized owning body across them all.
Web service standards listings
These sites contain documents and links about the different Web services standards identified on this page.
IBM Developerworks: Standard and Web Service
innoQ's WS-Standard Overview
MSDN .NET Developer Centre: Web Service Specification Index Page
OASIS Standards and Other Approved Work
Open Grid Forum Final Document
XML CoverPage
W3C's Web Services Activity
XML specification
XML (eXtensible Markup Language)
XML Namespaces
XML Schema
XPath
XQuery
XML Information Set
XInclude
XML Pointer
Messaging specification
SOAP (formerly known as Simple Object Access Protocol)
SOAP-over-UDP
SOAP Message Transmission Optimization Mechanism
WS-Notification
WS-BaseNotification
WS-Topics
WS-BrokeredNotification
WS-Addressing
WS-Transfer
WS-Eventing
WS-Enumeration
WS-MakeConnection
Metadata exchange specification
JSON-WSP
WS-Policy
WS-PolicyAssertions
WS-PolicyAttachment
WS-Discovery
WS-Inspection
WS-MetadataExchange
Universal Description Discovery and Integration (UDDI)
WSDL 2.0 Core
WSDL 2.0 SOAP Binding
Web Services Semantics (WSDL-S)
WS-Resource Framework (WSRF)
Security specification
WS-Security
XML Signature
XML Encryption
XML Key Management (XKMS)
WS-SecureConversation
WS-SecurityPolicy
WS-Trust
WS-Federation
WS-Federation Active Requestor Profile
WS-Federation Passive Requestor Profile
Web Services Security Kerberos Binding
Web Single Sign-On Interoperability Profile
Web Single Sign-On Metadata Exchange Protocol
Security Assertion Markup Language (SAML)
XACML
Privacy
P3P
Reliable messaging specifications
WS-ReliableMessaging
WS-Reliability
WS-RM Policy Assertion
Resource specifications
Web Services Resource Framework
WS-Resource
WS-BaseFaults
WS-ServiceGroup
WS-ResourceProperties
WS-ResourceLifetime
WS-Transfer
WS-Fragment
Resource Representation SOAP Header Block
Web services interoperability (WS-I) specification
These specifications provide additional information to improve interoperability between vendor implementations.
WS-I Basic Profile
WS-I Basic Security Profile
Simple Soap Binding Profile
Business process specifications
WS-BPEL
WS-CDL
Web Service Choreography Interface (WSCI)
WS-Choreography
XML Process Definition Language
Web Services Conversation Language (WSCL)
Transaction specifications
WS-BusinessActivity
WS-AtomicTransaction
WS-Coordination
WS-CAF
WS-Transaction
WS-Context
WS-CF
WS-TXM
Management specifications
WS-Management
WS-Management Catalog
WS-ResourceTransfer
WSDM
Presentation-oriented specification
Web Services for Remote Portlets
Draft specifications
WS-Provisioning – Describes the APIs and schemas necessary to facilitate interoperability between provisioning systems in a consistent manner using Web services
Other
Devices Profile for Web Services (DPWS)
ebXML
Standardization
ISO/IEC 19784-2:2007 Information technology -- Biometric application programming interface -- Part 2: Biometric archive function provider interface
ISO 19133:2005 Geographic information -- Location-based services -- Tracking and navigation
ISO/IEC 20000-1:2005 Information technology -- Service management -- Part 1: Specification
ISO/IEC 20000-2:2005 Information technology -- Service management -- Part 2: Code of practice
ISO/IEC 24824-2:2006 Information technology -- Generic applications of ASN.1: Fast Web Services
ISO/IEC 25437:2006 Information technology -- Telecommunications and information exchange between systems -- WS-Session -- Web Services for Application Session Services
See also
Web service
References
Specifications
Web service specifications | List of web service specifications | Technology | 897 |
14,427,502 | https://en.wikipedia.org/wiki/GPR27 | Probable G-protein coupled receptor 27 is a protein that in humans is encoded by the GPR27 gene.
See also
SREB
References
G protein-coupled receptors | GPR27 | Chemistry | 34 |
26,608,799 | https://en.wikipedia.org/wiki/C21H24N2O3 | {{DISPLAYTITLE:C21H24N2O3}}
The molecular formula C21H24N2O3 may refer to:
Ajmalicine
16-Hydroxytabersonine
Lochnericine
Preakuammicine
Raucaffrinoline
Vobasine
Molecular formulas | C21H24N2O3 | Physics,Chemistry | 65 |
67,944,487 | https://en.wikipedia.org/wiki/Knowledge%20graph%20embedding | In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.
Definition
A knowledge graph is a collection of entities E, relations R, and facts F. A fact is a triple (h, r, t) that denotes a link r between the head h and the tail t of the triple. Another notation that is often used in the literature to represent a triple (or fact) is <head, relation, tail>. This notation is called the resource description framework (RDF). A knowledge graph represents the knowledge related to a specific domain; leveraging this structured representation, it is possible to infer new knowledge from it after some refinement steps. However, using knowledge graphs in real-world applications also means dealing with the sparsity of the data and with computational inefficiency.
The embedding of a knowledge graph translates each entity and relation of the knowledge graph into a vector of a given dimension d, called the embedding dimension. In the general case, the embedding dimensions for the entities and for the relations can differ. The collection of embedding vectors for all the entities and relations in the knowledge graph can then be used for downstream tasks.
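As an informal illustration of these definitions (a minimal sketch, not from the article: the entity and relation names, the dimension, and the use of NumPy are assumptions made for the example), a knowledge graph can be stored as a set of (head, relation, tail) triples and its embedding as one d-dimensional vector per entity and per relation:

import numpy as np

# A toy knowledge graph: facts stored as (head, relation, tail) triples.
facts = [
    ("Obama", "president_of", "USA"),
    ("USA", "located_in", "North_America"),
]

entities = sorted({h for h, _, _ in facts} | {t for _, _, t in facts})
relations = sorted({r for _, r, _ in facts})

d = 8  # embedding dimension (shared by entities and relations in this sketch)
rng = np.random.default_rng(0)

# The embedding of the knowledge graph: one d-dimensional vector per entity and relation.
entity_emb = {e: rng.normal(size=d) for e in entities}
relation_emb = {r: rng.normal(size=d) for r in relations}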
A knowledge graph embedding is characterized by four different aspects:
Representation space: The low-dimensional space in which the entities and relations are represented.
Scoring function: A measure of the goodness of a triple embedded representation.
Encoding models: The modality in which the embedded representation of the entities and relations interact with each other.
Additional information: Any additional information coming from the knowledge graph that can enrich the embedded representation. Usually, an ad hoc scoring function is integrated into the general scoring function for each additional information.
Embedding procedure
All the different knowledge graph embedding models follow roughly the same procedure to learn the semantic meaning of the facts. First of all, to learn an embedded representation of a knowledge graph, the embedding vectors of the entities and relations are initialized to random values. Then, starting from a training set, the algorithm continuously optimizes the embeddings until a stop condition is reached. Usually, the stop condition is given by overfitting over the training set. For each iteration, a batch of size b is sampled from the training set, and for each triple in the batch a random corrupted fact is sampled, i.e., a triple that does not represent a true fact in the knowledge graph. The corruption of a triple involves substituting the head or the tail (or both) of the triple with another entity that makes the fact false. The original triple and the corrupted triple are added to the training batch, and then the embeddings are updated, optimizing a scoring function. At the end of the algorithm, the learned embeddings should have extracted the semantic meaning from the triples and should correctly predict unseen true facts in the knowledge graph.
Pseudocode
The following is the pseudocode for the general embedding procedure.
algorithm Compute entity and relation embeddings is
    input: The training set S,
           entity set E,
           relation set R,
           embedding dimension k
    output: Entity and relation embeddings
    initialization: the entity (E) and relation (R) embedding vectors are randomly initialized
    while stop condition do
        S_batch ← sample(S, b)    // from the training set S, randomly sample a batch of size b
        for each (h, r, t) in S_batch do
            (h′, r, t′) ← sample a corrupted fact of triple (h, r, t)
            T_batch ← T_batch ∪ {((h, r, t), (h′, r, t′))}
        end for
        Update embeddings by minimizing the loss function over T_batch
    end while
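The following is a minimal NumPy sketch of this procedure (not from the article: the TransE-style score function, the L1 distance, the margin, the learning rate, and the batch size are illustrative assumptions, chosen only to make the loop concrete):

import numpy as np

rng = np.random.default_rng(0)

def corrupt(triple, entities):
    """Corrupt a fact by replacing its head or its tail with a random entity."""
    h, r, t = triple
    e = entities[rng.integers(len(entities))]
    return (e, r, t) if rng.random() < 0.5 else (h, r, e)

def train(facts, entities, relations, d=50, epochs=100, b=32, lr=0.01, margin=1.0):
    E = {e: rng.normal(scale=0.1, size=d) for e in entities}   # entity embeddings
    R = {r: rng.normal(scale=0.1, size=d) for r in relations}  # relation embeddings

    def score(h, r, t):            # TransE-style distance: smaller = more plausible
        return np.abs(E[h] + R[r] - E[t]).sum()

    for _ in range(epochs):
        idx = rng.choice(len(facts), size=min(b, len(facts)), replace=False)
        for pos in (facts[i] for i in idx):
            neg = corrupt(pos, entities)
            if margin + score(*pos) - score(*neg) > 0:   # margin-based ranking loss is active
                h, r, t = pos
                g = np.sign(E[h] + R[r] - E[t])          # subgradient of the L1 distance
                E[h] -= lr * g; R[r] -= lr * g; E[t] += lr * g
                h2, r2, t2 = neg
                g2 = np.sign(E[h2] + R[r2] - E[t2])
                E[h2] += lr * g2; R[r2] += lr * g2; E[t2] -= lr * g2
    return E, R

With a list of triples such as the toy facts sketched earlier, train(facts, entities, relations) returns dictionaries of learned vectors that can then be scored against unseen triples.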
Performance indicators
These indexes are often used to measure the embedding quality of a model. The simplicity of the indexes makes them very suitable for evaluating the performance of an embedding algorithm even on a large scale. Given Q as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR.
Hits@K
Hits@K or, in short, H@K is a performance index that measures the probability of finding the correct prediction in the first K model predictions. Usually, K = 10 is used. Hits@K reflects the accuracy of an embedding model in predicting the missing element of a triple correctly.
Hits@K = |{q ∈ Q : rank(q) ≤ K}| / |Q|
Larger values mean better predictive performances.
Mean rank (MR)
Mean rank is the average ranking position of the items predicted by the model among all the possible items.
The smaller the value, the better the model.
Mean reciprocal rank (MRR)
Mean reciprocal rank measures how highly the correctly predicted triples are ranked. If the first predicted triple is correct, 1 is added; if the second is correct, 1/2 is added; and so on, so that each prediction contributes the reciprocal of its rank (MRR = (1/|Q|) Σ 1/rank(q)).
Mean reciprocal rank is generally used to quantify the effect of search algorithms.
The larger the index, the better the model.
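A minimal sketch of these three indicators (an assumption of this sketch is that the model's output for each test query has already been reduced to the rank of the correct answer, with rank 1 being best):

def hits_at_k(ranks, k=10):
    """Fraction of queries whose correct answer appears in the top k predictions."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_rank(ranks):
    """Average ranking position of the correct answers (lower is better)."""
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    """Average of 1/rank over all queries (higher is better)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 50, 7]            # example ranks of the correct triples
print(hits_at_k(ranks, k=10))       # 0.8
print(mean_rank(ranks))             # 12.6
print(mean_reciprocal_rank(ranks))  # ~0.40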
Applications
Machine learning tasks
Knowledge graph completion (KGC) is a collection of techniques to infer knowledge from an embedded knowledge graph representation. In particular, this technique completes a triple inferring the missing entity or relation. The corresponding sub-tasks are named link or entity prediction (i.e., guessing an entity from the embedding given the other entity of the triple and the relation), and relation prediction (i.e., forecasting the most plausible relation that connects two entities).
Triple Classification is a binary classification problem. Given a triple, the trained model evaluates the plausibility of the triple using the embedding to determine if a triple is true or false. The decision is made with the model score function and a given threshold. Clustering is another application that leverages the embedded representation of a sparse knowledge graph to condense the representation of similar semantic entities close in a 2D space.
Real world applications
The use of knowledge graph embedding is increasingly pervasive in many applications. In the case of recommender systems, the use of knowledge graph embedding can overcome the limitations of the usual reinforcement learning. Training this kind of recommender system requires a huge amount of information from the users; however, knowledge graph techniques can address this issue by using a graph already constructed over a prior knowledge of the item correlation and using the embedding to infer from it the recommendation.
Drug repurposing is the use of an already approved drug, but for a therapeutic purpose different from the one for which it was initially designed. It is possible to use the task of link prediction to infer a new connection between an already existing drug and a disease by using a biomedical knowledge graph built leveraging the availability of massive literature and biomedical databases.
Knowledge graph embedding can also be used in the domain of social politics.
Models
Given a collection of triples (or facts) F, the knowledge graph embedding model produces, for each entity and relation present in the knowledge graph, a continuous vector representation. For a triple (h, r, t), the corresponding embedding consists of the vectors h and t of dimension d (the embedding dimension for the entities) and r of dimension k (the embedding dimension for the relations). The score function of a given model is denoted by f_r(h, t) and measures the distance of the embedding of the head from the embedding of the tail given the embedding of the relation, or in other words, it quantifies the plausibility of the embedded representation of a given fact.
Rossi et al. propose a taxonomy of the embedding models and identify three main families of models: tensor decomposition models, geometric models, and deep learning models.
Tensor decomposition model
The tensor decomposition models are a family of knowledge graph embedding models that use a multi-dimensional matrix (a tensor) to represent the knowledge graph, which is only partially knowable because a knowledge graph never describes a particular domain exhaustively. In particular, these models use a three-way (3D) tensor, which is then factorized into low-dimensional vectors that are the entity and relation embeddings. The third-order tensor is a suitable methodology to represent a knowledge graph because it records only the existence or the absence of a relation between entities, and for this reason it is simple, and there is no need to know the network structure a priori, making this class of embedding models light and easy to train, even though they suffer from high dimensionality and sparsity of the data.
Bilinear models
This family of models uses a linear equation to embed the connection between the entities through a relation. In particular, the embedded representation of the relations is a bidimensional matrix. These models, during the embedding procedure, only use the single facts to compute the embedded representation and ignore the other associations to the same entity or relation.
DistMult: Since the embedding matrix of the relation is a diagonal matrix, the scoring function cannot distinguish asymmetric facts (a scoring sketch for DistMult and ComplEx follows this list).
ComplEx: As DistMult uses a diagonal matrix to represent the relations embedding but adds a representation in the complex vector space and the hermitian product, it can distinguish symmetric and asymmetric facts. This approach is scalable to a large knowledge graph in terms of time and space cost.
ANALOGY: This model encodes in the embedding the analogical structure of the knowledge graph to simulate inductive reasoning. Using a differentiable objective function, ANALOGY has good theoretical generality and computational scalability. It is proven that the embedding produced by ANALOGY fully recovers the embedding of DistMult, ComplEx, and HolE.
SimplE: This model is an improvement of canonical polyadic decomposition (CP), in which an embedding vector for the relation and two independent embedding vectors for each entity are learned, depending on whether the entity appears as head or tail in the knowledge graph fact. SimplE resolves the problem of the independent learning of the two entity embeddings by using an inverse relation and averaging the CP scores of (h, r, t) and (t, r⁻¹, h). In this way, SimplE captures the relation between entities whether they appear in the role of subject or object inside a fact, and it is able to embed asymmetric relations.
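As referenced above, the following is a minimal sketch of the scoring functions of two of these models (the embedding values are random placeholders; in a real system they would be learned by gradient descent). DistMult scores a triple as a trilinear product with a diagonal relation matrix, while ComplEx does the same in the complex domain, which is what lets it distinguish asymmetric facts.

import numpy as np

def distmult_score(h, r, t):
    """DistMult: sum_i h_i * r_i * t_i (symmetric in h and t)."""
    return np.sum(h * r * t)

def complex_score(h, r, t):
    """ComplEx: Re(sum_i h_i * r_i * conj(t_i)); complex-valued embeddings
    break the symmetry, so asymmetric relations can be represented."""
    return np.real(np.sum(h * r * np.conj(t)))

d = 4
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=d) for _ in range(3))
print(distmult_score(h, r, t), distmult_score(t, r, h))      # identical: symmetric

hc, rc, tc = (rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(3))
print(complex_score(hc, rc, tc), complex_score(tc, rc, hc))  # generally different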
Non-bilinear models
HolE: HolE uses circular correlation to create an embedded representation of the knowledge graph, which can be seen as a compression of the matrix product, but is more computationally efficient and scalable while keeping the capability to express asymmetric relations, since circular correlation is not commutative. HolE links holographic and complex embeddings since, when used together with the Fourier transform, it can be seen as a special case of ComplEx.
TuckER: TuckER sees the knowledge graph as a tensor that can be decomposed using the Tucker decomposition into a collection of vectors, i.e., the embeddings of entities and relations, with a shared core. The weights of the core tensor are learned together with the embeddings and represent the level of interaction of the entries. Each entity and relation has its own embedding dimension, and the size of the core tensor is determined by the shape of the entities and relations that interact. The embeddings of the subject and object of a fact are computed in the same way, making TuckER fully expressive, and other embedding models such as RESCAL, DistMult, ComplEx, and SimplE can be expressed as special formulations of TuckER.
MEI: MEI introduces the multi-partition embedding interaction technique with the block term tensor format, which is a generalization of CP decomposition and Tucker decomposition. It divides the embedding vector into multiple partitions and learns the local interaction patterns from data instead of using fixed special patterns as in ComplEx or SimplE models. This enables MEI to achieve optimal efficiency—expressiveness trade-off, not just being fully expressive. Previous models such as TuckER, RESCAL, DistMult, ComplEx, and SimplE are suboptimal restricted special cases of MEI.
MEIM: MEIM goes beyond the block term tensor format to introduce an independent core tensor for ensemble boosting effects and soft orthogonality for max-rank relational mapping, in addition to multi-partition embedding interaction. MEIM generalizes several previous models such as MEI and its subsumed models, RotatE, and QuatE. MEIM improves expressiveness while still being highly efficient in practice, helping it achieve good results using fairly small model sizes.
Geometric models
The geometric space defined by this family of models encodes the relation as a geometric transformation between the head and tail of a fact. For this reason, to compute the embedding of the tail, it is necessary to apply a transformation to the head embedding, and a distance function is used to measure the goodness of the embedding or to score the reliability of a fact.
Geometric models are similar to the tensor decomposition model, but the main difference between the two is that they have to preserve the applicability of the transformation in the geometric space in which it is defined.
Pure translational models
This class of models is inspired by the idea of translation invariance introduced in word2vec. A pure translational model relies on the fact that the embedding vectors of the entities are close to each other after applying a proper relational translation in the geometric space in which they are defined. In other words, given a fact, when the embedding of the head is added to the embedding of the relation, the expected result should be the embedding of the tail. The closeness of the entity embeddings is given by some distance measure and quantifies the reliability of a fact.
TransE: This model uses a scoring function that forces the embeddings to satisfy a simple vector sum equation for each fact in which they appear: h + r ≈ t (see the sketch after this list). The embedding will be exact if each entity and relation appears in only one fact; for this reason, in practice it does not represent one-to-many, many-to-one, and asymmetric relations well.
TransH: It is an evolution of TransE introducing a hyperplane as geometric space to solve the problem of representing correctly the types of relations. In TransH, each relation has a different embedded representation, on a different hyperplane, based on which entities it interacts with. Therefore, to compute, for example, the score function of a fact, the embedded representation of the head and tail need to be projected using a relational projection matrix on the correct hyperplane of the relation.
TransR: TransR is an evolution of TransH because it uses two different spaces to represent the embedded representations of the entities and the relations, completely separating the semantic space of entities from that of relations. TransR also uses a relational projection matrix to map the embeddings of the entities into the relation space.
TransD: In TransR, the head and the tail of a fact could belong to two different types of entities; for example, in the fact (Obama, president_of, USA), Obama and USA are two entities, but one is a person and the other is a country. The matrix multiplication used by TransR to compute the projection is also an expensive procedure. In this context, TransD employs two vectors for each entity-relation pair to compute a dynamic mapping that substitutes the projection matrix while reducing the dimensional complexity. The first vector is used to represent the semantic meaning of the entities and relations, the second one to compute the mapping matrix.
TransA: All the translational models define a score function in their representation space, but they oversimplify this metric loss. Since the vector representation of the entities and relations is not perfect, a pure translation h + r could be distant from t, and a spherical equipotential Euclidean distance makes it hard to distinguish which is the closest entity. TransA instead introduces an adaptive Mahalanobis distance to weight the embedding dimensions, together with elliptical surfaces, to remove the ambiguity.
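As referenced above, a minimal sketch of the two simplest translational scores (not from the article: the embeddings are random placeholders, w_r stands for the unit normal of TransH's relation-specific hyperplane, and the negated L2 distance is used as plausibility):

import numpy as np

def transe_score(h, r, t):
    """TransE: plausibility is the (negated) distance || h + r - t ||."""
    return -np.linalg.norm(h + r - t)

def transh_score(h, r, t, w_r):
    """TransH: project h and t onto the hyperplane with unit normal w_r,
    then apply the same translational criterion as TransE."""
    h_p = h - np.dot(w_r, h) * w_r
    t_p = t - np.dot(w_r, t) * w_r
    return -np.linalg.norm(h_p + r - t_p)

d = 4
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=d) for _ in range(3))
w_r = rng.normal(size=d)
w_r /= np.linalg.norm(w_r)        # TransH requires a unit normal vector
print(transe_score(h, r, t), transh_score(h, r, t, w_r))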
Translational models with additional embeddings
It is possible to associate additional information to each element in the knowledge graph and their common representation facts. Each entity and relation can be enriched with text descriptions, weights, constraints, and others in order to improve the overall description of the domain with a knowledge graph. During the embedding of the knowledge graph, this information can be used to learn specialized embeddings for these characteristics together with the usual embedded representation of entities and relations, with the cost of learning a more significant number of vectors.
STransE: This model is the result of the combination of TransE and of the structure embedding, in such a way that it is able to better represent one-to-many, many-to-one, and many-to-many relations. To do so, the model involves two additional independent matrices for each embedded relation in the KG, one associated with the head and one with the tail. In other words, given a fact (h, r, t), before applying the vector translation, the head h is multiplied by the first relation-specific matrix and the tail t by the second.
CrossE: Crossover interactions can be used for related information selection, and can be very useful for the embedding procedure. Crossover interactions provide two distinct contributions to information selection: interactions from relations to entities and interactions from entities to relations. This means that a relation, e.g., 'president_of', automatically selects the types of entities that connect the subject to the object of a fact. In a similar way, the entity of a fact indirectly determines which inference path has to be chosen to predict the object of a related triple. To do so, CrossE learns an additional interaction matrix and uses the element-wise product to compute the interaction between h and r. Even though CrossE does not rely on a neural network architecture, it has been shown that this methodology can be encoded in such an architecture.
Roto-translational models
This family of models employs, in addition to or in place of a translation, a rotation-like transformation.
TorusE: The regularization term of TransE makes the entity embeddings build a spherical space, and consequently the translation properties of the geometric space are lost. To address this problem, TorusE leverages the use of a compact Lie group, in this specific case an n-dimensional torus, and avoids the use of regularization. TorusE defines distance functions that substitute the L1 and L2 norms of TransE.
RotatE: RotatE is inspired by Euler's identity and involves the use of the Hadamard product to represent a relation as a rotation from the head to the tail in the complex space. For each element of the triple, the complex part of the embedding describes a counterclockwise rotation with respect to an axis, which can be described with Euler's identity, whereas the modulus of each element of the relation vector is 1. It is shown that the model is capable of embedding symmetric, asymmetric, inversion, and composition relations from the knowledge graph.
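A minimal sketch of such a rotational score (not from the article: storing the relation directly as phase angles, so that every complex element has modulus 1, and using random placeholder embeddings are assumptions of the example):

import numpy as np

def rotate_score(h, r_phase, t):
    """RotatE-style score: the relation acts as an element-wise rotation of the head
    in the complex plane; plausibility is the (negated) distance to the tail."""
    r = np.exp(1j * r_phase)           # unit-modulus complex relation vector
    return -np.linalg.norm(h * r - t)  # Hadamard product = element-wise rotation

d = 4
rng = np.random.default_rng(0)
h = rng.normal(size=d) + 1j * rng.normal(size=d)
t = rng.normal(size=d) + 1j * rng.normal(size=d)
r_phase = rng.uniform(0, 2 * np.pi, size=d)
print(rotate_score(h, r_phase, t))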
Deep learning models
This group of embedding models uses deep neural networks to learn patterns from the knowledge graph that are the input data. These models have the generality to distinguish the type of entity and relation, temporal information, path information, and underlying structural information, and to resolve the limitations of distance-based and semantic-matching-based models in representing all the features of a knowledge graph. The use of deep learning for knowledge graph embedding has shown good predictive performance even if these models are more expensive in the training phase, data-hungry, and often require a pre-trained embedding representation of the knowledge graph coming from a different embedding model.
Convolutional neural networks
This family of models, instead of using fully connected layers, employs one or more convolutional layers that convolve the input data applying a low-dimensional filter capable of embedding complex structures with few parameters by learning nonlinear features.
ConvE: ConvE is an embedding model that represents a good tradeoff between the expressiveness of deep learning models and their computational cost; in fact, it has been shown to use 8x fewer parameters than DistMult. ConvE uses one-dimensional embeddings of size d to represent the entities and relations of a knowledge graph. To compute the score function of a triple, ConvE applies a simple procedure: it first concatenates and reshapes the embeddings of the head of the triple and of the relation into a single input [h; r]; this matrix is then used as input for the 2D convolutional layer. The result is passed through a dense layer that applies a linear transformation parameterized by a matrix W and, at the end, is linked to the tail of the triple through an inner product (a sketch of this pipeline follows this list). ConvE is also particularly efficient in the evaluation procedure: using a 1-N scoring, the model matches, given a head and a relation, all the tails at the same time, saving a lot of evaluation time when compared to the 1-1 evaluation of the other models.
ConvR: ConvR is an adaptive convolutional network aimed at deeply representing all the possible interactions between the entities and the relations. For this task, ConvR computes convolutional filters for each relation and, when required, applies these filters to the entity of interest to extract convolved features. The procedure to compute the score of a triple is the same as in ConvE.
ConvKB: To compute the score function of a given triple (h, r, t), ConvKB produces an input [h; r; t] of dimension d × 3 without reshaping and passes it to a series of convolutional filters of size 1 × 3. This result feeds a dense layer with only one neuron that produces the final score. The single final neuron makes this architecture act as a binary classifier in which the fact can be true or false. A difference with ConvE is that the dimensionality of the entities is not changed.
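As referenced in the ConvE entry above, the following is a rough PyTorch sketch of that scoring pipeline (not the authors' implementation: the embedding size, the reshaping into a 10 × 20 grid, the single convolutional layer with 32 filters, and the omission of dropout, batch normalization, and training are all simplifying assumptions):

import torch
import torch.nn as nn

class ConvELike(nn.Module):
    def __init__(self, n_entities, n_relations, d=200, h=10, w=20):
        super().__init__()
        assert d == h * w                       # embeddings are reshaped to an h x w "image"
        self.ent = nn.Embedding(n_entities, d)
        self.rel = nn.Embedding(n_relations, d)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)           # 2D convolution over [h; r]
        self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), d)    # dense layer back to dimension d
        self.h, self.w = h, w

    def forward(self, head_idx, rel_idx):
        e = self.ent(head_idx).view(-1, 1, self.h, self.w)
        r = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([e, r], dim=2)             # stack head and relation "images"
        x = torch.relu(self.conv(x))
        x = self.fc(x.flatten(start_dim=1))
        # 1-N scoring: inner product with every entity embedding at once
        return x @ self.ent.weight.t()           # shape: (batch, n_entities)

model = ConvELike(n_entities=100, n_relations=10)
scores = model(torch.tensor([0, 1]), torch.tensor([2, 3]))
print(scores.shape)                              # torch.Size([2, 100])

The final matrix product against the whole entity embedding table is what the article calls 1-N scoring: one forward pass scores a (head, relation) pair against every candidate tail at once.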
Capsule neural networks
This family of models uses capsule neural networks to create a more stable representation that is able to recognize a feature in the input without losing spatial information. The network is composed of convolutional layers, but they are organized in capsules, and the overall result of a capsule is sent to a higher-level capsule chosen by a dynamic routing process.
CapsE: CapsE implements a capsule network to model a fact (h, r, t). As in ConvKB, the triple elements are concatenated to build a matrix [h; r; t], which is used to feed a convolutional layer to extract the convolutional features. These features are then redirected to capsules to produce a continuous vector; the longer the vector, the more plausible the fact.
Recurrent neural networks
This class of models leverages the use of recurrent neural networks. The advantage of this architecture is that it memorizes a sequence of facts rather than just elaborating single events.
RSN: During the embedding procedure it is commonly assumed that similar entities have similar relations. In practice, this type of information is not leveraged, because the embedding is computed just on the fact at hand rather than on a history of facts. Recurrent skipping networks (RSN) use a recurrent neural network to learn relational paths using random walk sampling.
Model performance
The machine learning task for knowledge graph embedding that is most often used to evaluate the embedding accuracy of the models is link prediction. Rossi et al. produced an extensive benchmark of the models, and other surveys produce similar results. The benchmark involves five datasets: FB15k, WN18, FB15k-237, WN18RR, and YAGO3-10. More recently, it has been argued that these datasets are far removed from real-world applications, and that other datasets should be integrated as a standard benchmark.
Libraries
See also
Knowledge graph
Embedding
Machine learning
Knowledge base
Knowledge extraction
Statistical relational learning
Representation learning
Graph embedding
References
External links
Open Graph Benchmark - Stanford
WordNet - Princeton
Knowledge graphs
Machine learning
Graph algorithms
Information science | Knowledge graph embedding | Engineering | 4,945 |
70,355,431 | https://en.wikipedia.org/wiki/Bis%28fulvalene%29diiron | Bis(fulvalene)diiron is the organoiron complex with the formula (C5H4-C5H4)2Fe2. Structurally, the molecule consists of two ferrous centers sandwiched between fulvalene dianions. The compound is an orange solid with lower solubility in benzene than ferrocene. Its structure has been verified by X-ray crystallography. The compound has attracted some interest for its redox properties.
Preparation
It was first prepared by Ullmann coupling of 1,1'-diiodoferrocene using copper, but subsequent work produces the complex in 20–40% yield from dilithiofulvalene and ferrous chloride:
2(C5H4Li)2 + 2FeCl2 → (C5H4-C5H4)2Fe2 + 4LiCl
Related compounds
Biferrocene
References
Ferrocenes
Sandwich compounds
Cyclopentadienyl complexes | Bis(fulvalene)diiron | Chemistry | 202 |
26,555,589 | https://en.wikipedia.org/wiki/Leahill%20Turret%2C%20Hadrian%27s%20Wall | Leahill Turret is a typical example of one of the lookout towers located between the milecastles on Hadrian's Wall in Cumbria; located on the Lanercost Road near Banks, Parish of Waterhead. It is designated turret 51b and lies east of the Signal Tower at Pike Hill.
Location
Leahill Turret lies on the lower slope of Allieshaw Rigg; Milecastle 52, Bankshead, is 540 yards to the West, Turret 51A, Piper Syke, lying 540 yards to the East and Milecastle 51, Bowers Wall 540 yards to the East of it.
History
Leahill 51b was built shortly after AD 122 as part of Hadrian's Wall, dismantled under the Emperor Septimius Severus, and casually re-occupied late in the 4th century. Such lookout towers were only occupied on a temporary basis by soldiers who were patrolling the wall.
This turret was until 1927 buried beneath the road, when excavations led to its discovery and also the discovery of the precise location of the turf wall that preceded the later stone structure. The new road formation was created behind the turrets and the wall. In 1958 Leahill turret was fully excavated prior to its consolidation.
This Roman turret was a detached structure abutting the Wall, with internal measurements of 13 feet 8 inches North–South by 14 feet 6 inches East–West. The slight remains of the original turf wall to the East and West had been overlain by occupation materials. Leahill had been much robbed, surviving only to a maximum height of nine courses, or approximately one metre; a platform was found in the centre of the North wall, and in the 4th century a shelter was built internally against the South wall. Several occupation layers were located before a stone flag floor was laid.
A small cottage, still occupied in living memory, once stood close by on the opposite side of the existing road, and robbing also took place to supply material for the drystone dyking and the farm of Leahill.
The Roman ditch in front of the Wall is clearly defined in this area, except at Leahill Farm; as is the Vallum.
Micro-history
The road running past Leahill is followed by both the Hadrian's Wall Path National Trail and Hadrian's Cycleway.
Metal detectorists have found a number of Roman coins in the area and a skeleton was uncovered during the 1958 excavations at Leahill Turret.
Illuminating Hadrian's Wall
On 13 March 2010, all 84 miles of Hadrian's Wall was illuminated from Tyneside to Cumbria with points of light. The route was lit by 500 gas beacons, flares and torches at 250m intervals, with the assistance of over 1000 volunteers.
Leahill Turret was part of this event, garrisoned by two volunteers, marking the 1600th anniversary of the cessation of Roman rule in Britain in AD410. The 500 points of light were filmed by a helicopter at dusk.
Turrets
These structures were built to a standard pattern, two storeys high, with the ground floor used for cooking and a movable ladder giving access to the upper storey. The upper storey probably had sleeping accommodation for two soldiers, whilst the other two were on patrol. A tradition exists that the troops used pipes to communicate between turrets. The fire provided some light; the absence of a chimney was made up for by an unglazed window. A stone water tank would have been set into the floor.
Views of Leahill
References
Notes;
Sources;
Embleton, Ronald & Graham, Frank (1990). Hadrian's Wall in the Days of the Romans. Newcastle upon Tyne : Frank Graham.
External links
Illuminating Hadrian's Wall
Buildings and structures completed in the 2nd century
Hadrian's Wall
Roman sites in Cumbria
English Heritage sites in Cumbria
Archaeological sites in Cumbria
History of Cumbria
Military history of Cumbria
Ruins in Cumbria
Tourist attractions in Cumbria | Leahill Turret, Hadrian's Wall | Engineering | 775 |
10,584,608 | https://en.wikipedia.org/wiki/Hematine | Hematine (also magnetic hematite, hemalyke or hemalike) is an artificial magnetic material. Hematine is widely used in jewelry.
Although it is claimed by many that it is made from ground hematite or iron oxide mixed with a resin, analysis (of one object) has demonstrated it to be an entirely artificial compound, a barium-strontium ferrite.
References
Synthetic minerals
Iron compounds | Hematine | Physics,Chemistry | 90 |
51,668,579 | https://en.wikipedia.org/wiki/LG%20V20 | LG V20 is an Android phone manufactured by LG Electronics, in its LG V series, succeeding the LG V10 released in 2015. Unveiled on September 6, 2016, it was the first phone with the Android Nougat operating system. Like the V10, the V20 has a secondary display panel near the top of the device that can display additional messages and controls, and a quad DAC for audio. The V20 has a user-replaceable battery, unlike its successor, the LG V30, unveiled on 31 August 2017.
Specifications
The LG V20 was released in 2016 as LG's second V-series flagship smartphone. Its list of specifications includes the Qualcomm Snapdragon 820 system-on-chip, 4GB of RAM and 64GB of storage, 5.7-inch Quad HD (2560×1440) IPS LCD with additional secondary display, dual 16MP (75°, f/1.8) + 8MP (135°, f/2.4) rear cameras, 5MP (120°, f/1.9) front-facing camera, and a 3,200mAh removable battery.
Hardware
The LG V20 continues the user-friendly hardware access design of the LG G5, having a removable back chassis of aluminum alloy for a significantly streamlined and convenient battery removal as well as easy access to internal components for any repairs, with polycarbonate plastic top and bottom caps, a USB-C connector compliant with Qualcomm's Quick Charge 3.0, and a rear-mounted power button with an integrated fingerprint reader. It is available in Dark Grey (named "Titan"), Pink, and Silver color finishes. The V20 features a 5.7-inch 1440p IPS LCD display with up to 500 nits of brightness, coated in Gorilla Glass 4, utilizing the Qualcomm Snapdragon 820 processor with 4 GB of LPDDR4 RAM. The device includes 64 GB of internal storage, expandable via microSD card up to 2TB, and a 3,200 mAh removable battery. The removable aluminum alloy cover, as well as the removable battery, is designed to act as shock and impact dissipation in the event that the V20 is dropped, in which case both will pop out from the main body and absorb the impact, dispersing the weight over the battery and cover, leaving the main components and screen less affected by drop damage compared to other smartphones. This makes the LG V20 one of the most drop shock resistant, durable and resilient consumer smartphones currently available. Similar to the V10, a second, supplemental display is located at the top of the device to the right of the 120° wide-angle front-facing camera. The secondary separate display can be used to show notifications, access controls, and apps, as well as display time and incoming messages. Both screens were made larger and brighter than those found on the V10.
Additional features include an IR blaster, FM radio, a dedicated 24-bit high fidelity audio recorder able to record up to 24-bit/192 kHz with manual channel controls for effective noise elimination of up to 50% in audio/video recording compared to other smartphone audio recorders, Bluetooth 4.2, NFC, as well as dual sim support for the H990N/H990DS international versions which doesn't take the microSD card slot like in most other dual sim supported smartphones. The V20 shipped with Bang & Olufsen H3 in-ear headphones for a limited time, and the phone's audio specifications and sound is tuned by the same company in some countries, including the international variants (indicated by the B&O logo on the back of the cover). Every model of the V20 includes the dedicated ESS Sabre ES9218 32-bit Hi-Fi Quad DAC, able to drive up to 600 ohm headphones to enhance wired headphones sound output quality with specifications of 130 dB SNR, 124 dB DNR and -112 dB THD+N. The LG V20 was the most powerful smartphone to have a removable battery at the time, having later been superseded by the newer Fairphone devices.
Videos can be recorded with FLAC (lossless) audio tracks.
Software
The V20 ships with Android 7.0 Nougat and LG UX 5.0+ software. It was the first Android device to ship with Nougat. Updates to Android 8.0 Oreo for various models were released, but later versions are not supported.
LG supports unlocking the bootloader only on the US996 variant, allowing it to be rooted and custom ROM images to be installed if available. For the other variants there is no LG support for unlocking the bootloader; it is reported to be possible through means of DirtySanta (except the H918, which instead relies on Lafsploit), but the procedure is difficult and carries the risk of damaging the phone, and custom ROM images such as LineageOS have been unofficially produced.
Unique features
The V20 released with a strong combination of features, including user-replaceable battery, higher quality audio than the competition and strong camera hardware and software. It also had an infrared (IR) blaster that allowed it to control televisions and other remote controlled devices.
As part of its focus on audio and video, it had several strong points. It was one of the few phones at the time with an ultrawide camera, as well as laser autofocus. It had high acoustic overload point (AOP) microphones that allowed recording in very loud concert settings. It also had configurable bitrate video and audio recording, with lossless audio and steerable sound focus and waveform display while recording.
As of Q1 2021, the V20 remains the only phone combining a user-replaceable battery, a dedicated Hi-Fi DAC, a 3.5mm headphone jack and an IR transmitter. The phone developed a cult following, despite the bloatware and the lack of an easily unlocked bootloader.
References
External links
Phone specifications
Measured technical specifications of the ESS Sabre ES9218 32-bit DAC
Audiophile DAC
Reddit community discussion
LG Electronics smartphones
LG Electronics mobile phones
Mobile phones introduced in 2016
Android (operating system) devices
Mobile phones with multiple rear cameras
Mobile phones with user-replaceable battery
Mobile phones with 4K video recording
Discontinued flagship smartphones
Mobile phones with infrared transmitter | LG V20 | Technology | 1,345 |
9,210,345 | https://en.wikipedia.org/wiki/Gaussian%20adaptation | Gaussian adaptation (GA), also called normal or natural adaptation (NA) is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems. In short, GA is a stochastic adaptive process where a number of samples of an n-dimensional vector x[xT = (x1, x2, ..., xn)] are taken from a multivariate Gaussian distribution, N(m, M), having mean m and moment matrix M. The samples are tested for fail or pass. The first- and second-order moments of the Gaussian restricted to the pass samples are m* and M*.
The outcome of x as a pass sample is determined by a function s(x), 0 < s(x) < q ≤ 1, such that s(x) is the probability that x will be selected as a pass sample. The average probability of finding pass samples (yield) is
Then the theorem of GA states:
For any s(x) and for any value of P < q, there always exists a Gaussian p.d.f. [probability density function] that is adapted for maximum dispersion. The necessary conditions for a local optimum are m = m* and M proportional to M*. The dual problem is also solved: P is maximized while keeping the dispersion constant (Kjellström, 1991).
Proofs of the theorem may be found in the papers by Kjellström, 1970, and Kjellström & Taxén, 1981.
Since dispersion is defined as the exponential of entropy/disorder/average information it immediately follows that the theorem is valid also for those concepts. Altogether, this means that Gaussian adaptation may carry out a simultaneous maximisation of yield and average information (without any need for the yield or the average information to be defined as criterion functions).
The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be used by cyclic repetition of random variation and selection (like the natural evolution). In every cycle a sufficiently large number of Gaussian distributed points are sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian, m, is then moved to the centre of gravity of the approved (selected) points, m*. Thus, the process converges to a state of equilibrium fulfilling the theorem. A solution is always approximate because the centre of gravity is always determined for a limited number of points.
It was used for the first time in 1969 as a pure optimization algorithm making the regions of acceptability smaller and smaller (in analogy to simulated annealing, Kirkpatrick 1983). Since 1970 it has been used for both ordinary optimization and yield maximization.
Natural evolution and Gaussian adaptation
It has also been compared to the natural evolution of populations of living organisms. In this case s(x) is the probability that the individual having an array x of phenotypes will survive by giving offspring to the next generation; a definition of individual fitness given by Hartl 1981. The yield, P, is replaced by the mean fitness determined as a mean over the set of individuals in a large population.
Phenotypes are often Gaussian distributed in a large population and a necessary condition for the natural evolution to be able to fulfill the theorem of Gaussian adaptation, with respect to all Gaussian quantitative characters, is that it may push the centre of gravity of the Gaussian to the centre of gravity of the selected individuals. This may be accomplished by the Hardy–Weinberg law. This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability independent of the structure (Kjellström, 1996).
In this case the rules of genetic variation such as crossover, inversion, transposition etcetera may be seen as random number generators for the phenotypes. So, in this sense Gaussian adaptation may be seen as a genetic algorithm.
How to climb a mountain
Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape is known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure. It is obtained by letting the bell curve slide along the x-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain, and the mutation rate is too small.
If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Then the climbing will take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides, depending on the landscape, the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may well look like a punctuated equilibrium as suggested by Gould (see Ridley).
Computer simulation of Gaussian adaptation
Thus far the theory only considers mean values of continuous distributions corresponding to an infinite number of individuals. In reality however, the number of individuals is always limited, which gives rise to an uncertainty in the estimation of m and M (the moment matrix of the Gaussian). And this may also affect the efficiency of the process. Unfortunately very little is known about this, at least theoretically.
The implementation of normal adaptation on a computer is a fairly simple task. The adaptation of m may be done by one sample (individual) at a time, for example
m(i + 1) = (1 – a) m(i) + ax
where x is a pass sample, and a < 1 a suitable constant so that the inverse of a represents the number of individuals in the population.
M may in principle be updated after every step y leading to a feasible point
x = m + y according to:
M(i + 1) = (1 – 2b) M(i) + 2byy^T,
where y^T is the transpose of y and b << 1 is another suitable constant. In order to guarantee a suitable increase of average information, y should be normally distributed with moment matrix μ^2M, where the scalar μ > 1 is used to increase average information (information entropy, disorder, diversity) at a suitable rate. But M will never be used in the calculations. Instead we use the matrix W defined by WW^T = M.
Thus, we have y = Wg, where g is normally distributed with the moment matrix μU, and U is the unit matrix. W and W^T may be updated by the formulas
W = (1 – b)W + byg^T and W^T = (1 – b)W^T + bgy^T
because multiplication gives
M = (1 – 2b)M + 2byy^T,
where terms including b^2 have been neglected. Thus, M will be indirectly adapted with good approximation. In practice it will suffice to update W only
W(i + 1) = (1 – b)W(i) + byg^T.
This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999).
The figure below illustrates the effect of increased average information in a Gaussian p.d.f. used to climb a mountain crest (the two lines represent the contour line). Both the red and green clusters have equal mean fitness, about 65%, but the green cluster has a much higher average information, making the green process much more efficient. The effect of this adaptation is not very salient in a 2-dimensional case, but in a high-dimensional case, the efficiency of the search process may be increased by many orders of magnitude.
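The one-sample-at-a-time updates of m and W given above can be written compactly as follows; in this Python sketch the constants a, b and μ, the acceptance test and the number of steps are illustrative assumptions.

import numpy as np

def incremental_update(m, W, acceptable, a=0.1, b=0.01, mu=1.05, steps=10000):
    # One-sample-at-a-time adaptation of m and W (with M = W W^T implied).
    rng = np.random.default_rng(1)
    n = len(m)
    for _ in range(steps):
        g = mu * rng.standard_normal(n)           # mu > 1 increases average information
        y = W @ g                                 # step with moment matrix proportional to M
        x = m + y
        if acceptable(x):                         # only pass samples update the process
            m = (1 - a) * m + a * x               # m(i + 1) = (1 - a) m(i) + ax
            W = (1 - b) * W + b * np.outer(y, g)  # W(i + 1) = (1 - b) W(i) + byg^T
    return m, W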
The evolution in the brain
In the brain the evolution of DNA-messages is supposed to be replaced by an evolution of signal patterns and the phenotypic landscape is replaced by a mental landscape, the complexity of which will hardly be second to the former. The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to a better well-being or performance. For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music.
In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values.
A nerve cell kernel may add signal values,
a synapse may multiply with a constant and
an axon may delay values.
This is a basis of the theory of digital filters and neural networks consisting of components that may add, multiply and delay signal values, and also of many brain models, Levine 1991.
In the figure below the brain stem is supposed to deliver Gaussian distributed signal patterns. This may be possible since certain neurons fire at random (Kandel et al.). The stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and according to the central limit theorem the sum of signals from many neurons may be Gaussian distributed. The triangular boxes represent synapses and the boxes with the + sign are cell kernels.
In the cortex signals are supposed to be tested for feasibility. When a signal is accepted the contact areas in the synapses are updated according to the formulas below in agreement with the Hebbian theory. The figure shows a 2-dimensional computer simulation of Gaussian adaptation according to the last formula in the preceding section.
m and W are updated according to:
m1 = 0.9 m1 + 0.1 x1; m2 = 0.9 m2 + 0.1 x2;
w11 = 0.9 w11 + 0.1 y1g1; w12 = 0.9 w12 + 0.1 y1g2;
w21 = 0.9 w21 + 0.1 y2g1; w22 = 0.9 w22 + 0.1 y2g2;
As can be seen this is very much like a small brain ruled by the theory of Hebbian learning (Kjellström, 1996, 1999 and 2002).
Gaussian adaptation and free will
Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution.
Such a random process gives us much freedom of choice, but hardly any will. An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal seeking, i.e., it prefers higher peaks in the landscape over lower ones, or better alternatives over worse ones. In this way an illusory will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999.
A theorem of efficiency for random search
The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon (see information content). When an event occurs with probability P, the information −log(P) may be achieved. For instance, if the mean fitness is P, the information gained for each individual selected for survival will be −log(P) on average, and the work/time needed to get the information is proportional to 1/P. Thus, if efficiency, E, is defined as information divided by the work/time needed to get it we have:
E = −P log(P).
This function attains its maximum when P = 1/e ≈ 0.37. The same result has been obtained by Gaines with a different method.
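A one-line check of this maximum, assuming the natural logarithm is used throughout: setting dE/dP = −(log(P) + 1) = 0 gives log(P) = −1, that is P = exp(−1) ≈ 0.37.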
E = 0 if P = 0, for a process with infinite mutation rate, and if P = 1, for a process with mutation rate = 0 (provided that the process is alive).
This measure of efficiency is valid for a large class of random search processes provided that certain conditions are at hand.
1. The search should be statistically independent and equally efficient in different parameter directions. This condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency.
2. All individuals have equal cost and the derivative at P = 1 is < 0.
Then, the following theorem may be proved:
All measures of efficiency, that satisfy the conditions above, are asymptotically proportional to –P log(P/q) when the number of dimensions increases, and are maximized by P = q exp(-1) (Kjellström, 1996 and 1999).
The figure above shows a possible efficiency function for a random search process such as Gaussian adaptation. To the left the process is most chaotic when P = 0, while there is perfect order to the right where P = 1.
In an example by Rechenberg, 1971, 1973, a random walk is pushed through a corridor maximizing the parameter x1. In this case the region of acceptability is defined as a (n − 1)-dimensional interval in the parameters x2, x3, ..., xn, but an x1-value below the last accepted will never be accepted. Since P can never exceed 0.5 in this case, the maximum speed towards higher x1-values is reached for P = 0.5/e ≈ 0.18, in agreement with the findings of Rechenberg.
A point of view that may also be of interest in this context is that no definition of information (other than that sampled points inside some region of acceptability give information about the extension of the region) is needed for the proof of the theorem. Then, because the formula may be interpreted as information divided by the work needed to get the information, this is also an indication that −log(P) is a good candidate for being a measure of information.
The Stauffer and Grimson algorithm
Gaussian adaptation has also been used for other purposes, for instance shadow removal by the "Stauffer-Grimson algorithm", which is equivalent to Gaussian adaptation as used in the section "Computer simulation of Gaussian adaptation" above. In both cases the maximum likelihood method is used for estimation of mean values by adaptation, one sample at a time.
But there are differences. In the Stauffer-Grimson case the information is not used for the control of a random number generator for centering, maximization of mean fitness, average information or manufacturing yield. The adaptation of the moment matrix also differs very much as compared to "the evolution in the brain" above.
See also
Entropy in thermodynamics and information theory
Fisher's fundamental theorem of natural selection
Free will
Genetic algorithm
Hebbian learning
Information content
Simulated annealing
Stochastic optimization
Covariance matrix adaptation evolution strategy (CMA-ES)
Unit of selection
References
Bergström, R. M. An Entropy Model of the Developing Brain. Developmental Psychobiology, 2(3): 139–152, 1969.
Brooks, D. R. & Wiley, E. O. Evolution as Entropy, Towards a unified theory of Biology. The University of Chicago Press, 1986.
Brooks, D. R. Evolution in the Information Age: Rediscovering the Nature of the Organism. Semiosis, Evolution, Energy, Development, Volume 1, Number 1, March 2001
Gaines, Brian R. Knowledge Management in Societies of Intelligent Adaptive Agents. Journal of intelligent Information systems 9, 277–298 (1997).
Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981.
Hamilton, WD. 1963. The evolution of altruistic behavior. American Naturalist 97:354–356
Kandel, E. R., Schwartz, J. H., Jessel, T. M. Essentials of Neural Science and Behavior. Prentice Hall International, London, 1995.
S. Kirkpatrick and C. D. Gelatt and M. P. Vecchi, Optimization by Simulated Annealing, Science, Vol 220, Number 4598, pages 671–680, 1983.
Kjellström, G. Network Optimization by Random Variation of component values. Ericsson Technics, vol. 25, no. 3, pp. 133–151, 1969.
Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157–175, 1970.
Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981.
Kjellström, G., Taxén, L. and Lindberg, P. O. Discrete Optimization of Digital Filters Using Gaussian Adaptation and Quadratic Function Minimization. IEEE Trans. on Circ. and Syst., vol. CAS-34, no 10, October 1987.
Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, December 1991.
Kjellström, G. & Taxén, L. Gaussian Adaptation, an evolution-based efficient global optimizer; Computational and Applied Mathematics, In, C. Brezinski & U. Kulish (Editors), Elsevier Science Publishers B. V., pp 267–276, 1992.
Kjellström, G. Evolution as a statistical optimization algorithm. Evolutionary Theory 11:105–117 (January, 1996).
Kjellström, G. The evolution in the brain. Applied Mathematics and Computation, 98(2–3):293–300, February, 1999.
Kjellström, G. Evolution in a nutshell and some consequences concerning valuations. EVOLVE, Stockholm, 2002.
Levine, D. S. Introduction to Neural & Cognitive Modeling. Laurence Erlbaum Associates, Inc., Publishers, 1991.
MacLean, P. D. A Triune Concept of the Brain and Behavior. Toronto, Univ. Toronto Press, 1973.
Maynard Smith, J. 1964. Group Selection and Kin Selection, Nature 201:1145–1147.
Maynard Smith, J. Evolutionary Genetics. Oxford University Press, 1998.
Mayr, E. What Evolution is. Basic Books, New York, 2001.
Müller, Christian L. and Sbalzarini Ivo F. Gaussian Adaptation revisited - an entropic view on Covariance Matrix Adaptation. Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, CH-8092 Zurich, Switzerland.
Pinel, J. F. and Singhal, K. Statistical Design Centering and Tolerancing Using Parametric Sampling. IEEE Transactions on Circuits and Systems, Vol. Das-28, No. 7, July 1981.
Rechenberg, I. (1971): Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973).
Ridley, M. Evolution. Blackwell Science, 1996.
Stauffer, C. & Grimson, W.E.L. Learning Patterns of Activity Using Real-Time Tracking, IEEE Trans. on PAMI, 22(8), 2000.
Stehr, G. On the Performance Space Exploration of Analog Integrated Circuits. Technischen Universität Munchen, Dissertation 2005.
Taxén, L. A Framework for the Coordination of Complex Systems’ Development. Institute of Technology, Linköping University, Dissertation, 2003.
Zohar, D. The quantum self : a revolutionary view of human nature and consciousness rooted in the new physics. London, Bloomsbury, 1990.
Evolutionary algorithms
Creationism
Free will | Gaussian adaptation | Biology | 4,264 |
32,210,654 | https://en.wikipedia.org/wiki/Reference%20dimension | A reference dimension is a dimension on an engineering drawing provided for information only. Reference dimensions are provided for a variety of reasons and are often an accumulation of other dimensions that are defined elsewhere (e.g. on the drawing or other related documentation). These dimensions may also be used for convenience to identify a single dimension that is specified elsewhere (e.g. on a different drawing sheet).
Reference dimensions are not intended to be used directly to define the geometry of an object. Reference dimensions do not normally govern manufacturing operations (such as machining) in any way and, therefore, do not typically include a dimensional tolerance (though a tolerance may be provided if such information is deemed helpful). Consequently, reference dimensions are also not subject to dimensional inspection under normal circumstances.
Reference dimensions are commonly used in CAD software along with constraints that usually denote the opposite: mandatory dimensions to be precisely followed.
Notation
In computer-aided design (CAD), dedicated notation is commonly used to denote reference dimensions.
REF
Prior to use of modern CAD software, reference dimensions were traditionally indicated on a drawing by the abbreviation "REF" written adjacent to the dimension (typically to the right or underneath the dimension).
However, standard ASME Y14.5 has changed the way references are marked, and the abbreviation "REF" has been replaced with the use of parentheses around the dimension. As an example, a distance of 1500 millimeters might be denoted by (1500) instead of 1500 REF.
This implementation has followed in modern CAD software that makes use of parentheses as the default denotation method whenever reference dimensions are "automatically" created by the software. The method for identifying a reference dimension (or reference data) on drawings is to enclose the dimension (or data) within parentheses.
See also
Engineering drawing abbreviations and symbols
Geometric dimensioning and tolerancing
ASME Y14.5
References
External links
Y14.5 Dimensioning and Tolerancing, 2018, ASME
Technical drawing | Reference dimension | Engineering | 392 |
14,463,701 | https://en.wikipedia.org/wiki/Outline%20of%20abnormal%20psychology | The following outline is provided as an overview of and topical guide to abnormal psychology:
Abnormal psychology – is the scientific study of abnormal behavior in order to describe, predict, explain, and change abnormal patterns of functioning. Abnormal psychology in clinical psychology studies the nature of psychopathology, its causes, and its treatments. Of course, the definition of what constitutes 'abnormal' has varied across time and across cultures. Individuals also vary in what they regard as normal or abnormal behavior. Additionally, many current theories and approaches are held by psychologists, including biological, psychological, behavioral, humanistic, existential, and sociocultural. In general, abnormal psychology can be described as an area of psychology that studies people who are consistently unable to adapt and function effectively in a variety of conditions. The main contributing factors to how well an individual is able to adapt include their genetic makeup, physical condition, learning and reasoning, and socialization.
Nature of abnormal psychology
What type of thing is abnormal psychology?
Abnormal psychology can be described as all of the following:
An academic discipline – focused study in one academic field or profession. A discipline incorporates expertise, people, projects, communities, challenges, studies, inquiry, and research areas that are strongly associated with a given discipline.
One of the social sciences – concerned with society and the relationships among individuals within a society.
A branch of psychology – study of mind and behavior.
An applied science – discipline of science that applies existing scientific knowledge to develop more practical applications, like treating the mentally ill.
Essence of abnormal psychology
Abnormality
Mental disorder
Psychology
Psychopathology
Approaches of abnormal psychology
Somatogenic – abnormality is seen as a result of biological disorders in the brain. This approach has led to the development of radical biological treatments, e.g. lobotomy.
Psychogenic – abnormality is caused by psychological problems. Psychoanalytic (Freud), Cathartic, Hypnotic and Humanistic Psychology (Carl Rogers, Abraham Maslow) treatments were all derived from this paradigm.
Mental disorders
Mental disorder
– examples of mental disorders include:
Anxiety disorder
Bipolar disorder
Delusional disorder
Impulse control disorder
Kleptomania
Pyromania
Personality disorder
Obsessive–compulsive personality disorder
Schizoaffective disorder
Schizophrenia
Substance use disorder
Substance abuse
Substance dependence
Thought disorder
Treatment of mental disorders
Psychological evaluation
Psychotherapy
Psychiatric medication
Mental health professions
Mental health profession
Psychiatry
Clinical psychology
Psychiatric rehabilitation
School psychology
Clinical social work
Mental health professionals
Mental health professional
Psychiatrist
Clinical psychologist
School psychologist
Mental health counselor
History of abnormal psychology
History of mental disorders
History of mental disorders, by type
History of anxiety disorders
History of posttraumatic stress disorder
History of bipolar disorder
History of depression
History of major depressive disorder
History of neurodevelopmental disorders
History of autism
History of Asperger syndrome
History of obsessive–compulsive disorder
History of personality disorders
History of psychopathy
History of schizophrenia
History of the treatment of mental disorders
History of clinical psychology
History of electroconvulsive therapy
History of electroconvulsive therapy in the United Kingdom
History of psychiatry
History of psychiatric institutions
History of psychosurgery
History of psychosurgery in the United Kingdom
Lobotomy – consists of cutting or scraping away most of the connections to and from the prefrontal cortex, the anterior part of the frontal lobes of the brain. The purpose of the operation was to reduce the symptoms of mental disorder, and it was recognized that this was accomplished at the expense of the patient's personality and intellect. By the late 1970s, the practice of lobotomy had generally ceased.
History of psychotherapy
Abnormal psychology organizations
American Psychological Association (APA) – largest organization of psychologists in the United States.
National Institute of Mental Health (NIMH) – part of the U.S. Department of Health and Human Services, it specializes in mental illness research.
National Alliance on Mental Illness (NAMI) – provides support, education, and advocacy for people affected by mental illness.
Abnormal psychology publications
Journals
Behavior Genetics
British Journal of Clinical Psychology
Communication Disorders Quarterly
Journal of Abnormal Child Psychology
Journal of Abnormal Psychology
Journal of Clinical Psychology
Journal of Consulting and Clinical Psychology
Molecular Psychiatry
Psychological Medicine
Psychology of Addictive Behaviors
Psychology of Violence
Psychosis (journal)
Persons influential in abnormal psychology
Sigmund Freud
Jacques Lacan
B.F. Skinner
Deirdre Barrett
Kay Redfield Jamison
Theodore Millon
See also
Outline of psychology
References
External links
Definition of abnormal psychology, from Merriam-Webster MedlinePlus Medical Dictionary
Abnormal Psychology Students Practice Resources
Science Direct
A Course in Abnormal Psychology
NIMH.NIH.gov - National Institute of Mental Health
International Committee of Women Leaders on Mental Health
Mental Illness Watch
Metapsychology Online Reviews: Mental Health
The New York Times: Mental Health & Disorders
The Guardian: Mental Health
Mental Illness (Stanford Encyclopedia of Philosophy)
Abnormal psychology
Abnormal psychology | Outline of abnormal psychology | Biology | 971 |
42,610,100 | https://en.wikipedia.org/wiki/Part%20program | The part program is a sequence of instructions that describes the work that is to be done to a part. Typically these instructions are generated in computer-aided manufacturing (CAM) software and are then fed into the computer numerical control (CNC) software on the machines, such as drills, lathes, mills, grinders and routers, that are performing work on the part. The CNC computer then translates the set of instructions into a standardized format of G-code and M-code commands and follows the instructions in the order they are written, left to right and top to bottom.
When multiple repetitive operations are needed on a large number of parts, canned cycles can be used to reduce the number of operation blocks in a part program. In some cases a part might need to go between multiple machines and have multiple operations performed on it to generate the geometry that is needed.
Example of the order of operations seen in a typical program:
Program start
Load selected tool
Turn spindle on
Turn coolant on
G00 Rapid to starting position above part
All machining operations to part
Turn coolant off
Turn spindle off
Move to safe position
End program
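As a rough illustration of how such a sequence looks when written out, the following Python snippet assembles a minimal milling part program as G-code/M-code text. The specific codes (G21, G90, G00, G01, M03, M05, M06, M08, M09, M30), tool number, feeds and coordinates are generic assumptions for illustration only; real programs depend on the controller and the part.

def minimal_part_program(tool=1, spindle_rpm=1200, safe_z=25.0):
    # Build a toy part program following the order of operations listed above.
    lines = [
        "%",                          # program start
        "G21 G90",                    # metric units, absolute positioning
        f"T{tool} M06",               # load selected tool
        f"S{spindle_rpm} M03",        # turn spindle on (clockwise)
        "M08",                        # turn coolant on
        f"G00 X0 Y0 Z{safe_z}",       # rapid to starting position above part
        "G01 Z-2.0 F100",             # machining operations on the part would go here
        "G01 X50.0 F300",
        "M09",                        # turn coolant off
        "M05",                        # turn spindle off
        f"G00 Z{safe_z}",             # move to safe position
        "M30",                        # end program
    ]
    return "\n".join(lines)

print(minimal_part_program())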
Types of operations
Each type of CNC machine has its own names for the operations it performs on parts to achieve the desired geometry; different machines may carry out similar tasks, such as drilling or cutting threads, but perform these actions differently. The choice between a mill and a lathe comes down to the geometry of the part being made, how the machine can hold the part, and which operations are needed to achieve the required geometry. If the part is very complicated, it may go through multiple part programs to achieve the desired result.
CNC Mill
Contour milling
Milling pockets
Cutting slots
Chamfering edges
Thread milling
Drilling holes
Tapping holes
CNC Lathe
Rough facing surfaces
Finish facing surfaces
Face drilling
Cross drilling
Face contouring
Cutting threads
Cutting grooves
Cutting parts off
See also
G-code
Computer-aided manufacturing
Computer-aided design
Canned cycle
Computer-aided technologies
References
FUNDAMENTALS OF PART PROGRAMMING
Computer-aided engineering | Part program | Engineering | 403 |
23,692,995 | https://en.wikipedia.org/wiki/C8H6O3 | {{DISPLAYTITLE:C8H6O3}}
The molecular formula C8H6O3 (molar mass: 150.13 g/mol) may refer to:
2-Carboxybenzaldehyde
4-Carboxybenzaldehyde
Phenylglyoxylic acid
Piperonal
Molecular formulas | C8H6O3 | Physics,Chemistry | 72 |
31,924,450 | https://en.wikipedia.org/wiki/Belevitch%27s%20theorem | Belevitch's theorem is a theorem in electrical network analysis due to the Russo-Belgian mathematician Vitold Belevitch (1921–1999). The theorem provides a test for a given S-matrix to determine whether or not it can be constructed as a lossless rational two-port network.
Lossless implies that the network contains only inductances and capacitances – no resistances. Rational (meaning the driving point impedance Z(p) is a rational function of p) implies that the network consists solely of discrete elements (inductors and capacitors only – no distributed elements).
The theorem
For a given S-matrix of degree d,

S(p) = [ s11(p)  s12(p)
         s21(p)  s22(p) ]

where,
p is the complex frequency variable and may be replaced by iω in the case of steady state sine wave signals, that is, where only a Fourier analysis is required
d will equate to the number of elements (inductors and capacitors) in the network, if such network exists.
Belevitch's theorem states that S(p) represents a lossless rational network if and only if,

S(p) = (1/g(p)) [ h(p)     f(p)
                  ±f(−p)   ∓h(−p) ]

where,
h(p), f(p) and g(p) are real polynomials
g(p) is a strict Hurwitz polynomial of degree not exceeding d
g(p)g(−p) = h(p)h(−p) + f(p)f(−p) for all p.
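As an illustrative check, not taken from the original article: a series inductor L connecting the two ports (with unit reference impedances) has S(p) = 1/(pL + 2) times the matrix with rows [pL, 2] and [2, pL], so h(p) = pL, f(p) = 2 and g(p) = pL + 2, and the last condition can be verified numerically in Python:

import numpy as np

L = 0.5                                       # example inductance, arbitrary units
h = np.polynomial.Polynomial([0.0, L])        # h(p) = pL
f = np.polynomial.Polynomial([2.0])           # f(p) = 2
g = np.polynomial.Polynomial([2.0, L])        # g(p) = 2 + pL, strict Hurwitz, degree 1

def star(q):
    # return q(-p) for a real polynomial q by flipping the sign of odd coefficients
    c = q.coef.copy()
    c[1::2] *= -1.0
    return np.polynomial.Polynomial(c)

lhs = g * star(g)
rhs = h * star(h) + f * star(f)
print(np.allclose(lhs.coef, rhs.coef))        # True: g(p)g(-p) = h(p)h(-p) + f(p)f(-p)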
References
Bibliography
Belevitch, Vitold Classical Network Theory, San Francisco: Holden-Day, 1968 .
Rockmore, Daniel Nahum; Healy, Dennis M. Modern Signal Processing, Cambridge: Cambridge University Press, 2004 .
Circuit theorems
Two-port networks | Belevitch's theorem | Physics,Engineering | 294 |
40,809,862 | https://en.wikipedia.org/wiki/Centrifugal%20pendulum%20absorber | A centrifugal pendulum absorber is a type of tuned mass damper. It reduces the amplitude of a torsional vibration in drive trains that use a combustion engine.
History
The centrifugal pendulum absorber was first patented in 1937 by R. Sarazin and a different version by R. Chilton in 1938. Generally, both Sarazin and Chilton are credited with the invention. Sarazin's work was used during World War II by Pratt & Whitney for aircraft engines with increased power output. The power increase caused an increase in torsional vibrations which threatened the durability. This resulted in the Pratt & Whitney R-2800 engine that used pendulum weights attached to the crank shaft.
The use of centrifugal pendulum absorbers in land vehicles did not start until later. Although internal combustion engines had always caused torsional vibrations in the drive train, the vibration amplitude was generally not high enough to affect durability or driver comfort. One application existed in tuned racing engines where torsional crank shaft vibrations could cause damage to the cam shaft or valves. In this application a centrifugal pendulum absorber is directly attached to the crank shaft.
In 2010, centrifugal pendulum absorbers following the patents of Sarazin and Chilton were introduced in a BMW 320D. The reason for it was again the increase in torsional vibrations from higher power engines. In this case, the 4-cylinder diesel engine BMW N47. Unlike the previous designs, the centrifugal pendulum absorber was not attached to the combustion engine but attached to a dual mass flywheel.
Function
The function of a centrifugal pendulum absorber, as with any tuned mass absorber, is based on an absorption principle rather than a damping principle. The distinction is significant since dampers reduce the vibration amplitude by converting the vibration energy into heat, whereas absorbers store the energy and return it to the vibration system at the appropriate time. Centrifugal pendulum absorbers, like tuned mass absorbers, are not part of the force/torque flow.
The centrifugal pendulum absorber differs from the tuned mass absorber in the absorption range. It is effective for an entire order instead of a narrow frequency range.
Modern Applications
Internal combustion engines follow a development trend towards a reduction in the number of cylinders, increased energy output per cylinder and driving at lower engine speeds. This leads to an increased engine efficiency but causes the engine's torsional vibrations to increase. The vibrations lead to durability concerns as well as a comfort reduction for the passengers and have to be avoided through the use of harmonic dampers and absorbers. This situation moves the balance between the cost of the centrifugal pendulum absorber technology and the benefit for the drive train efficiency.
The following cars use centrifugal pendulum absorbers
BMW 320D
Mercedes E250 Diesel
Chevrolet Colorado Diesel
GM Products Equipped with the 2.7L L3B Engine
Chevrolet Corvette (C8)
References
External links
Schaeffler Media Library - Centrifugal Pendulum Absorber - video depicting a dual mass flywheel with a centrifugal pendulum absorber
EPI Crankshaft Torsional Absorbers - centrifugal pendulum absorber on the crankshaft of an airplane engine
Engine History R-2800 - development of an engine crank shaft with centrifugal pendulum absorber
Mechanical vibrations
Pendulums
Engine technology
Mechanical engineering | Centrifugal pendulum absorber | Physics,Technology,Engineering | 692 |
24,270,717 | https://en.wikipedia.org/wiki/Wall-crossing | In algebraic geometry and string theory, the phenomenon of wall-crossing describes the discontinuous change of a certain quantity, such as an integer geometric invariant, an index or a space of BPS state, across a codimension-one wall in a space of stability conditions, a so-called wall of marginal stability.
References
Kontsevich, M. and Soibelman, Y. "Stability structures, motivic Donaldson–Thomas invariants and cluster transformations" (2008). .
M. Kontsevich, Y. Soibelman, "Motivic Donaldson–Thomas invariants: summary of results",
Joyce, D. and Song, Y. "A theory of generalized Donaldson–Thomas invariants," (2008). .
Gaiotto, D. and Moore, G. and Neitzke, A. "Four-dimensional wall-crossing via three-dimensional field theory" (2008). .
Mina Aganagic, Hirosi Ooguri, Cumrun Vafa, Masahito Yamazaki, "Wall crossing and M-theory",
Kontsevich, M. and Soibelman, Y., "Wall-crossing structures in Donaldson-Thomas invariants, integrable systems and Mirror Symmetry",
String theory
Algebraic geometry | Wall-crossing | Astronomy,Mathematics | 266 |
8,188,259 | https://en.wikipedia.org/wiki/Vis%20medicatrix%20naturae | Vis medicatrix naturae (literally "the healing power of nature", and also known as natura medica) is the Latin rendering of the Greek Νόσων φύσεις ἰητροί ("Nature is the physician(s) of diseases"), a phrase attributed to Hippocrates. While the phrase is not actually attested in his corpus, it nevertheless sums up one of the guiding principles of Hippocratic medicine, which is that organisms left alone can often heal themselves (cf. the Hippocratic primum non nocere).
Hippocrates
Hippocrates believed that an organism is not passive to injuries or disease, but rebalances itself to counteract them. The state of illness, therefore, is not a malady but an effort of the body to overcome a disturbed equilibrium. It is this capacity of organisms to correct imbalances that distinguishes them from non-living matter.
From this follows the medical approach that "nature is the best physician" or "nature is the healer of disease". To this end, Hippocrates considered that a doctor's chief aim was to help this natural tendency of the body by observing its action, removing obstacles to its action, and thus allowing an organism to recover its own health. This underlies such Hippocratic practices as bloodletting, in which a perceived excess of a humor is removed and the removal was thus taken to help the rebalancing of the body's humors.
Renaissance and modern history
After Hippocrates, the idea of vis medicatrix naturae continued to play a key role in medicine. In the early Renaissance, the physician and early scientist Paracelsus had the idea of an "inherent balsam". Thomas Sydenham, in the seventeenth century, considered fever a healing force of nature.
In the nineteenth century, vis medicatrix naturae came to be interpreted as vitalism, and in this form it came to underlie the philosophical framework of homeopathy, chiropractic, hydropathy, osteopathy and naturopathy.
Relation to homeostasis
Walter Cannon's notion of homeostasis also has its origins in vis medicatrix naturae. "All that I have done thus far in reviewing the various protective and stabilizing devices of the body is to present a modern interpretation of the natural vis medicatrix.". In this, Cannon stands in contrast to Claude Bernard (the father of modern physiology), and his earlier idea of milieu interieur that he proposed to replace vitalistic ideas about the body. However, both the notions of homeostasis and milieu interieur are ones concerned with how the body's physiology regulates itself through multiple mechanical equilibrium adjustment feedbacks rather than nonmechanistic life forces.
Relation to evolutionary medicine
More recently, evolutionary medicine has identified many medical symptoms such as fever, inflammation, sickness behavior, and morning sickness as evolved adaptations that function as darwinian medicatrix naturae due to their selection as means to protect, heal, or restore the injured, infected or physiologically disrupted body.
See also
Appeal to nature
Medicus curat, natura sanat
Royal Commission on Animal Magnetism
References
Latin medical words and phrases
Ancient Greek medicine
Natural philosophy
Biology theories
| Vis medicatrix naturae | Biology | 685 |
46,820,469 | https://en.wikipedia.org/wiki/Collybiopsis%20biformis | Collybiopsis biformis is a species of agaric fungus in the family Omphalotaceae found in North America. The species was originally described by Charles Horton Peck in 1903 as Marasmius biformis. The specific epithet biformis refers to the two distinct cap shapes, which Peck noted could be either campanulate (bell-shaped) or flattened. R.H. Petersen transferred the fungus to the genus Collybiopsis in 2021.
References
Marasmiaceae
Fungus species
Fungi of North America
Fungi described in 1903
Taxa named by Charles Horton Peck | Collybiopsis biformis | Biology | 117 |
38,621,229 | https://en.wikipedia.org/wiki/Spontaneous%20conception%20%28psychology%29 | In psychology, spontaneous conception refers to conceptions about the world that we form without any formal education. Often these are connected with physics. They may be wrong concepts, like "heavier objects fall faster" or "bigger objects are heavier". Piaget held that they are formed by introspection. Vygotsky believed that they may help the learning of scientific concepts more than having no conception of an event at all.
References
Cognitive psychology | Spontaneous conception (psychology) | Biology | 86 |
43,159,113 | https://en.wikipedia.org/wiki/Kim%20Beom | Kim Beom (born 1963) is a South Korean multimedia artist.
Education and early life
Kim Beom was born in 1963 in Seoul, Korea, to parents Kim Se-Choong (1928–1986), a sculptor known for various public monuments in Korea, and poet Kim Nam-Jo (b. 1927). He attended Seoul National University during South Korea's student democratization movement and obtained both a BFA and an MFA there in 1986 and 1988, respectively. Kim then moved to New York City where he completed a second MFA at the School of Visual Arts in 1991. He remained in New York until returning to South Korea in 1997. He now lives and works in Seoul.
Work
Kim's drawings, sculptures, installations, videos, and artist books often use absurd situations and deadpan humor to evince themes involving pedagogy, education, animism, and the life of objects. In addition, Kim's work references the traditions of his native Korean culture to comment, in both a meditative and critical way, on contemporary Korean society's complex, often contradictory, relationship with the West.
Kim's early works were influenced by cartoons, the drawings of children and outsider artists, and traditional crafts like wood carving, paper cutting, and ceramics. Many of his drawings and paintings from the early 1990s feature aggressive subject matter such as "a hammer, an ax, a knife, a nail, sharp pieces of glass, a barbed-wire fence, and a weapon," and often exhibit "physical 'violence' inflicted on the material itself, through acts like cutting, tearing, folding over, and sewing the canvas." Curator Paola Morsiani has compared this work with Western artist Lucio Fontana's use of void in the 1950s–60s.
Morsiani also suggests that Kim's matchstick drawings from the mid-1990s, such as Bad Heads Fuck (1–4) (1995), collages of wooden matches on paper with pencil inscriptions, focus on the mass-produced object to "convey a narrative about the repression of individuality that [Kim] witnessed under dictatorial rule while growing up in Korea." In the drawings, stick figures "stand in for subjugated people" together corralled into forming collective assemblages and playing highly coordinated "games" that evoke the mass games the artist witnessed growing up in South Korea under repressive governments in the 1960s through the 1980s.
Kim describes his later series of satirical drawings, begun in 2002 and entitled "Perspectives and Blueprints," as "a sort of 'semiotic view on humankind.'" The series includes works on paper like A Wiring Diagram of a Lighthouse (2005), in which a video projection of propaganda replaces a guiding light out at sea, and School of Inversion (2009), which reiterates the artist's critique of repressive social norms in Korean society. The drawing delineates a blueprint of a school building showing classrooms from multiple conflicting perspectives, where "by third grade, students have learned to exist upside down." Other related drawings include A Draft of a Safe House for a Tyrant (2009) and A Design of an Immigration Bureau Complex on a Border Line (2005).
Kim's recent video works also deal with inversion and agency. Spectacle (2010) features the paradoxical event of an antelope chasing a cheetah, spoofing typical television footage of predatory animals in the wild, and Horse Riding Horse (After Eadweard Muybridge) (2008) similarly parodies Eadweard Muybridge's 1878 The Horse in Motion by replacing a horse's human rider with another horse.
Among Kim's most frequently exhibited works in recent years is his series "The Educated Objects" (2010), which comprises several sculptural installations and videos of the artist instructing inanimate objects such as rocks, a model ship, and various household items (e.g., a table fan and a bottle of dishwashing detergent). As critic Jennifer S. Li describes, "The loosely linked works' bizarre and unlikely tutorial and classroom scenarios mock and deride the structure and ideology behind educational systems." In Objects Being Taught They Are Nothing But Tools (2010), a collection of commonplace sundries are placed on miniature wooden chairs facing a blackboard. A video of the artist's torso plays on a screen at the front of the classroom tableaux and he delivers a lecture on the history of human rights, capitalism, and consumerism to his object-pupils, ultimately insisting they have no "essential value" outside of their economic use.
"The Educated Objects" continues the commentary on animism, education, and the "uncritical absorption of Western mores by Asian countries" of Kim's 1997 artist's book The Art of Transforming. This book comprises a series of prose poems instructing the reader on how to morph into various natural entities (both a sentient leopard, and non-sentient plant life and landscape elements), as well as man-made structures and commodities (a ladder; an air conditioner).
Exhibitions
In 2012 Kim's work was the subject of the solo exhibition The School of Inversion at Hayward Gallery's Project Space, London. In the United States, Kim has had solo exhibitions at REDCAT Gallery, Los Angeles (Animalia, 2011) and the Cleveland Museum of Art (Objects Being Taught They are Nothing but Tools, 2010–11). Recent museum solo exhibitions in Korea include Kim Beom at Artsonje Center in 2010 and How to become a rock at the Leeum Museum of Art in 2023.
In addition to featuring prominently in recent surveys of contemporary art from Korea at venues like the Museo Tamayo Arte Contemporaneo in Mexico City, the Los Angeles County Museum of Art, and the Museum of Fine Arts in Houston, Kim's work has been included in such notable international exhibitions as the 2003 Istanbul Biennial, the 2005 Venice Biennale, Media City Seoul 2010, and the 9th Gwangju Biennale.
Museum collections
Kim's work is included in the collections of the Museum of Fine Arts, Houston; the Cleveland Museum of Art; the Walker Art Center, Minneapolis in the United States; the Museum für Kommunikation, in Bern, Switzerland; the Seoul Museum of Art; the Ho-Am Art Museum, Artsonje Center; the Horim Museum in Seoul; the Museum of Modern Art, New York City; and the National Museum of Modern and Contemporary Art, in Gwachun, Korea.
References
Bibliography
External links
Kim Beom in the collection of The Museum of Modern Art
REDCAT Exhibition brochure including interview with the artist:
Review of Animalia exhibition at REDCAT (2011) on the Los Angeles Times blog:
Review of Animalia in ArtAsiaPacific Magazine:
Review of Animalia in Artforum:
Walker Art Center blog post on Kim's video Yellow Scream:
1963 births
Living people
Multimedia artists
South Korean contemporary artists
Seoul National University alumni | Kim Beom | Technology | 1,441 |
3,355,656 | https://en.wikipedia.org/wiki/Concrete%20recycling |
Concrete recycling is the use of rubble from demolished concrete structures. Recycling is cheaper and more ecological than trucking rubble to a landfill. Crushed rubble can be used for road gravel, revetments, retaining walls, landscaping gravel, or raw material for new concrete. Large pieces can be used as bricks or slabs, or incorporated with new concrete into structures, a material called urbanite.
Circular economy
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
Concrete may be considered waste according to the European Commission decision of 2014/955/EU for the List of Waste under the codes: 17 (construction and demolition wastes, including excavated soil from contaminated sites) 01 (concrete, bricks, tiles and ceramics), 01 (concrete), and 17.01.06* (mixtures of, separate fractions of concrete, bricks, tiles and ceramics containing hazardous substances), and 17.01.07 (mixtures of, separate fractions of concrete, bricks, tiles and ceramics other than those mentioned in 17.01.06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three polluters with 86,412 thousand tons, 68,976 and 68,732 thousand tons of construction waste generation, respectively.
In the context of a circular economy the most efficient way to utilise concrete after fulfilling its initial purpose may not be clear. Factors to be considered include the quality of the recovered material as well as technical or other regulatory requirements.
Currently, there is not an End-of-Waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste and re purposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse
Reuse of blocks in original form, or by cutting into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow core concrete slabs are easy to dismantle and the span is normally constant, making them good for reuse.
Other cases of re-use are possible with pre-cast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use in other building sites. Studies show that back-building and remounting plans for building units (i.e., re-use of pre-fabricated concrete) are an alternative for a kind of construction which protects resources and saves energy. Especially long-living, durable, energy-intensive building materials, such as concrete, can be kept in the life-cycle longer through recycling. Prefabricated constructions are the prerequisite for buildings that are designed to be capable of being taken apart. With optimal application in the building carcass, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met. The viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements which may be classified as obsolete.
Recycling
Concrete debris is routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, changing regulation/laws and economic benefits. Concrete can be recovered – crushed and reused as aggregate in new projects.
Recovering concrete reduces resource exploitation and associated transport costs, and reduces landfill. However, it has little impact on reducing greenhouse gas emissions as most emissions occur when cement is made. At present, most recovered concrete is used for road sub-base and civil engineering projects.
By far the most common method for recycling dry and hardened concrete involves crushing. The input material can be returned concrete which is still fresh (wet), from ready-mix trucks, production waste at a pre-cast production facility, or waste from demolition. The most significant source is demolition waste, preferably pre-sorted post-demolition. Specific processing sites are typically able to produce higher quality aggregate. Screens are used to achieve desired particle size, and remove dirt, foreign particles and fine material from the coarse aggregate.
The final product, Recycled Concrete Aggregate (RCA), has an angular shape, rougher surface, lower specific gravity (20%), higher water absorption, and pH greater than 11 – this elevated pH increases the risk of alkali reactions. RCA's lower density usually increases project efficiency and lowers job cost – RCA yields more volume by weight (up to 15%). The physical properties make it the preferred material for applications such as road base and sub-base. This is because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, it is generally cheaper to obtain than virgin material.
Cement
Pulverized concrete can replace flux material in electric arc furnaces. The process produces “reactivated cement” as a byproduct. Furnaces need flux (typically lime), to purify the steel. If the leftover slag is cooled quickly in air, it becomes Portland cement. The technique also significantly reduces CO2 emissions compared to conventional methods.
Applications
The main commercial applications are:
Aggregate base course (road base), or the untreated aggregates used as foundation for roadway pavement, is the underlying layer (under pavement) which provides a structural foundation for paving.
Aggregate for ready-mix concrete, by replacing from 10 to 45% of the virgin aggregates with a blend of cement, sand and water. Because the RCA contains cement, the ratios of the mix have to be adjusted to achieve desired structural requirements such as workability, strength and water absorption.
Soil Stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal quality subgrade material used to enhance the load bearing capacity of that subgrade.
Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH impacts.
Landscape Materials: Includes boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls.
Cradle-to-cradle challenges
The applications developed for RCA so far are not exhaustive, and many more uses are to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as secondary raw materials in a safe and economic way. However, considering the purpose of having a circularity of resources in the concrete life cycle, the only application of RCA that could be considered as recycling of concrete is the replacement of natural aggregates on concrete mixes. All the other applications would fall under the category of downcycling. It is estimated that even near complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world.
The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition as well as conscious use of spaces in urban areas to reduce consumption.
Process
Re-purposing urbanite (concrete rubble pieces) involves selecting and transporting the pieces, and using them as slabs or bricks. The pieces can be shaped, for example using a chisel; this can be labor-intensive.
Crushing involves removing trash, wood and paper; removing metals such as rebar, using magnets and other devices, to be recycled separately; sorting the aggregate by size; crushing it using a crushing machine; and removing other particulates by methods such as hand-picking and water flotation.
Crushing at the construction site using portable crushers is cheaper and causes less pollution than transporting material to and from a quarry. Large road-portable plants can crush concrete and asphalt rubble at 600 tons per hour. These systems normally include a side discharge conveyor, a screening plant, and a return conveyor from the screen back to the crusher for re-crushing large chunks. Compact, self-contained crushers can crush up to 150 tons per hour and fit into tighter areas. Crusher attachments to construction equipment such as excavators can crush up to 100 tons per hour and make crushing of smaller volumes economical.
To produce clean aggregates from crushed concrete waste, very careful dismantling and demolishing is needed to keep the concrete stream away from other materials that would diminish its quality. Once separated, the broken concrete is then sent to a wet recycling process, where the coarse fraction of broken concrete is washed to produce clean aggregate, whereas the residue generated from the washing process is sent to landfill in the form of sludge.
Uses
Large pieces of concrete rubble (urbanite) can be used in walls as building stones, as slabs in walkways, or as riprap revetments to reduce stream bank erosion. Ecology blocks (eco-blocks) are made from recycled concrete and used for retaining walls and other temporary structures, and have also been used for hostile architecture.
Small pieces are used as gravel for new construction projects. Sub-base gravel is laid as the lowest layer in a road, with fresh concrete or asphalt poured over it. The US Federal Highway Administration may use such techniques to build new highways from the materials of old highways. Concrete pavements can be broken in place and used as a base layer for an asphalt pavement through a process called rubblization.
Crushed concrete free of contaminants can be used as raw material (sometimes mixed with natural aggregate) to make new concrete.
Well-graded and aesthetically pleasing materials can be used as landscaping stone and mulch.
Wire gabions (cages), can be filled with crushed concrete and stacked as retaining walls or privacy walls (instead of fencing).
Chemical recycling of concrete waste
Soil amendment and stabilization
Improper disposal and treatment of concrete waste negatively affect soil, but proper treatment and recycling processes can be used to amend and stabilize soil. In general, alkali-activated mixtures improve and stabilize soil through cation exchange, hydration reactions, and enhanced pozzolanic reactions. Ca2+ ions in an alkali-activated mixture exchanges with other metal ions, decreasing electric double layers and increasing flocculation, making soil more granular and friable. Alkali-activated mixtures improve soil by sorbing the water in the soil through hydration reactions, which decreases the water content in the soil and improves soft soil with a high moisture content. Finally, the dissociation of calcium oxide in water in the soil increases electrolyte concentrations and pH, and hence SiO2 and Al2O3 dissolve more readily and promotes pozzolanic reactions. Materials such as Portland cement, fly ash, and lime are already used extensively to amend and stabilize soil, so the same concept can be extended to concrete waste, which is itself an alkali-activated mixture. In general, studies have shown that the cementitious material of concrete waste that is added to weak soil causes hydration reactions that increase the soil pH, amount of Ca2+, and amount of free Ca(OH)2 that could react with SiO2 and Al2O3 through pozzolanic reactions that improve soil.
Construction Material Production
Concrete waste contains abundant silicon and some aluminum, so they can be used to synthesize geopolymers. Geopolymeric binder combined with metakaolin can yield material with desired silicon, aluminum, and calcium contents. Geopolymer concrete from waste concrete has been analyzed, and it has been suggested that it could be used in applications that require moderately strong concrete, thermally insulating concrete, lightweight concrete, and bricks or blocks.
Water and Gas Treatment
Concrete waste that is rich in alkaline calcium compounds can be used to remove and recover various elements from an aqueous solution. Waste concrete has been used as a sorbent to remove phosphorus from wastewater after the removal of excess sludge in sewage treatment plants. Concrete waste may also be used as an inexpensive gas treatment agent. This would offer advantages over using conventional gas treatment agents because concrete waste is cheap and produced in large amounts. Research has shown that waste concrete can contribute to the sorption of NO2, SO2, and Fluorine gas.
Precautions
There have been concerns about the recycling of painted concrete due to possible lead content. The Army Corps of Engineers' Construction Engineering Research Laboratory (CERL) and others have studied the risks, and concluded that concrete with lead-based paint can be safely used as fill, even without an impervious cover, as long as it is covered by soil.
Some experiments showed that recycled concrete is less strong and durable than concrete from natural aggregate. This can be remedied by mixing in materials such as fly ash.
References
Further reading
External links
Construction Materials Recycling Association
Electric Arc Furnace (EAF) Slag
Omer Haciomeroglu's ERO Concrete Recycling Robot
Strength and Durability Evaluation of Recycled Aggregate Concrete
Use of aggregates from recycled construction and demolition waste in concrete
Concrete
Recycling by material
Water conservation | Concrete recycling | Engineering | 2,734 |
23,421,676 | https://en.wikipedia.org/wiki/Cyclo%286%29carbon | Cyclo[6]carbon is an allotrope of carbon with molecular formula . The molecule is a ring of six carbon atoms, connected by alternating double bonds. It is, therefore, a member of the cyclo[n]carbon family.
There have been a few attempts to synthesize cyclo[6]carbon, e.g. by pyrolysis of mellitic anhydride, but without success until 2023, when it was successfully synthesized by atom manipulation of hexachlorobenzene.
Calculations suggest that the alternative cyclic cumulene structure, called cyclohexahexaene, is the potential energy minimum of the cyclo[6]carbon framework.
References
Cyclocarbons
Polyynes
Six-membered rings
Cycloalkynes | Cyclo(6)carbon | Chemistry | 172 |
13,004,986 | https://en.wikipedia.org/wiki/C-number | The term c-number (classical number) is an old nomenclature introduced by Paul Dirac which refers to real and complex numbers. It is used to distinguish from operators (q-numbers or quantum numbers) in quantum mechanics.
Although c-numbers are commuting, the term anti-commuting c-number is also used to refer to a type of anti-commuting numbers that are mathematically described by Grassmann numbers. The term is also used to refer solely to "commuting numbers" in at least one major textbook.
References
External links
WordWeb Online
Numbers
Quantum mechanics | C-number | Physics,Mathematics | 123 |
39,516,424 | https://en.wikipedia.org/wiki/Grey%20box%20model | In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models as opposed to black box where no model form is assumed or white box models that are purely theoretical. Some models assume a special form such as a linear regression or neural network. These have special analysis methods. In particular linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use.
Model form
The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with unlike theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms.
Within a particular model structure, parameters or variable parameter relations may need to be found. For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form:
m(f,p,q)
where the vector function m gives the errors between the data p, and the model predictions. The vector q gives some variable parameters that are the model's unknown parts.
The parameters q vary with the operating conditions c in a manner to be determined. This relation can be specified as q = Ac where A is a matrix of unknown coefficients, and c as in linear regression includes a constant term and possibly transformed values of the original operating conditions to obtain non-linear relations between the original operating conditions and q. It is then a matter of selecting which terms in A are non-zero and assigning their values. The model completion becomes an optimization problem to determine the non-zero values in A that minimizes the error terms m(f,p,Ac) over the data.
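As a concrete sketch of this structure, the residual function m and the parameterization q = Ac can be written directly in code. The following Python fragment is purely illustrative: the algebraic form of the prediction and all variable names are assumptions, not taken from the source.

import numpy as np

def model_residuals(f, p, c, A):
    """Error terms m(f, p, q) for one data set, with q = A @ c.

    f, p and c are the feed, product and operating-condition vectors;
    A maps the operating conditions c to the variable parameters q.
    The partial theoretical structure used for the prediction is hypothetical.
    """
    q = A @ c                          # variable parameters from operating conditions
    p_pred = q[0] * f + q[1] * f**2    # stand-in for the known part of the theory
    return p - p_pred                  # residual vector m(f, p, Ac)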
Model completion
Once a selection of non-zero values is made, the remaining coefficients in A can be determined by minimizing m(f,p,Ac) over the data with respect to the nonzero values in A, typically by non-linear least squares. Selection of the nonzero terms can be done by optimization methods such as simulated annealing and evolutionary algorithms. Also the non-linear least squares can provide accuracy estimates for the elements of A that can be used to determine if they are significantly different from zero, thus providing a method of term selection.
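A sketch of this completion step, using SciPy's non-linear least squares to fit only the entries of A selected as non-zero (it reuses the model_residuals sketch above; the data layout and the boolean selection mask are illustrative assumptions):

import numpy as np
from scipy.optimize import least_squares

def fit_nonzero_terms(datasets, mask, n_q, n_c):
    """Fit the non-zero entries of A (marked True in mask) by minimizing
    the stacked residuals m(f, p, A @ c) over all data sets."""
    def stacked(theta):
        A = np.zeros((n_q, n_c))
        A[mask] = theta                    # place the free coefficients
        return np.concatenate(
            [model_residuals(f, p, c, A) for f, p, c in datasets])

    theta0 = np.zeros(int(mask.sum()))     # start all free coefficients at zero
    sol = least_squares(stacked, theta0)
    A = np.zeros((n_q, n_c))
    A[mask] = sol.x
    return A, sol

The uncertainty information available from the least-squares solution can then be used to judge whether individual coefficients differ significantly from zero, as described above.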
It is sometimes possible to calculate values of q for each data set, directly or by non-linear least squares. Then the more efficient linear regression can be used to predict q from c, thus selecting the non-zero values in A and estimating their values. Once the non-zero values are located, non-linear least squares can be used on the original model m(f,p,Ac) to refine these values.
A third method is model inversion, which converts the non-linear m(f,p,Ac) into an approximate linear form in the elements of A that can be examined using efficient term selection and evaluation of the linear regression. For the simple case of a single q value (q = aᵀc) and an estimate q* of q, putting dq = aᵀc − q* gives
m(f,p,aᵀc) = m(f,p,q* + dq) ≈ m(f,p,q*) + dq m′(f,p,q*) = m(f,p,q*) + (aᵀc − q*) m′(f,p,q*)
so that aᵀ now appears linearly, with all other terms known, and can thus be estimated by linear regression techniques. For more than one parameter the method extends in a direct manner. After checking that the model has been improved, this process can be repeated until convergence. This approach has the advantages that it does not need the parameters q to be determinable from an individual data set, and that the linear regression is performed on the original error terms.
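One pass of this inversion, for the single-parameter case q = aᵀc, can be sketched as follows; m_func and dm_dq are hypothetical stand-ins for the model error and its sensitivity m′ with respect to q:

import numpy as np

def model_inversion_step(datasets, q_star, m_func, dm_dq):
    """Estimate a in q = aᵀc from one linearization about q*, using
    m(f,p,aᵀc) ≈ m(f,p,q*) + (aᵀc − q*) m′(f,p,q*) and setting it to zero."""
    rows, rhs = [], []
    for f, p, c in datasets:
        m0 = np.atleast_1d(m_func(f, p, q_star))    # residual at the estimate q*
        d0 = np.atleast_1d(dm_dq(f, p, q_star))     # sensitivity m′(f, p, q*)
        for m0_i, d0_i in zip(m0, d0):
            rows.append(d0_i * c)                   # coefficients multiplying a
            rhs.append(q_star * d0_i - m0_i)        # known terms moved to the right
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a

Repeating the step with the updated estimate of q, and checking that the model errors shrink, gives the iteration to convergence described above.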
Model validation
Where sufficient data is available, division of the data into a separate model construction set and one or two evaluation sets is recommended. This can be repeated using multiple selections of the construction set and the resulting models averaged or used to evaluate prediction differences.
A statistical test such as chi-squared on the residuals is not particularly useful. The chi-squared test requires known standard deviations, which are seldom available, and failed tests give no indication of how to improve the model. There is a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data.
An attempt to predict the residuals m(f,p,Ac) from the operating conditions c using linear regression will show whether the residuals can be predicted. Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions. Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance.
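A minimal version of this diagnostic, assuming the residuals and operating conditions have already been collected into arrays (the names are illustrative):

import numpy as np

def residual_predictability(residuals, conditions):
    """Regress the residuals on the operating conditions (plus a constant)
    and return R^2; a value near zero suggests little room for improvement."""
    X = np.column_stack([np.ones(len(conditions)), conditions])
    coef, *_ = np.linalg.lstsq(X, residuals, rcond=None)
    fitted = X @ coef
    ss_res = np.sum((residuals - fitted) ** 2)
    ss_tot = np.sum((residuals - residuals.mean()) ** 2)
    return 1.0 - ss_res / ss_tot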
The model inversion technique above can be used as a method of determining whether a model can be improved. In this case selection of nonzero terms is not so important and linear prediction can be done using the significant eigenvectors of the regression matrix. The values in A determined in this manner need to be substituted into the nonlinear model to assess improvements in the model errors. The absence of a significant improvement indicates the available data is not able to improve the current model form using the defined parameters. Extra parameters can be inserted into the model to make this test more comprehensive.
See also
References
Mathematical modeling
Mathematical theorems | Grey box model | Mathematics | 1,183 |
65,496,256 | https://en.wikipedia.org/wiki/Daisy%20Yen%20Wu | Daisy Yen Wu (, 12 June 190227 May 1993) was the first Chinese woman engaged as an academic researcher in biochemistry and nutrition. Born into a wealthy industrial family in Shanghai, from a young age she was tutored in English and encouraged to study. She graduated from Nanjing Jinling Women's University in 1921 and then studied in the United States, graduating with a master's degree in biochemistry from Teachers College, Columbia University in 1923. Returning to China, she became an assistant professor at Peking Union Medical College between 1923 and her marriage at the end of 1924 to Hsien Wu. Collaborating with him, she conducted research on proteins and studied nutrition. After their marriage she continued to assist in the research conducted by Wu as an unpaid staff member until 1928. She and her husband collaborated in writing the first Chinese textbook on nutrition, which remained in print through the 1990s.
While raising their children, Yen Wu recognized that educational opportunities were limited and founded the Mingming School in 1934 to provide a modern comprehensive education for Chinese children. She also raised funds in 1936 to build a school hospital for their alma mater, the Jinling Women's College, and earned a degree in French. In 1949, as her husband was in the United States and unable to return because of the Chinese Communist Revolution, she took the children abroad. Hired as a researcher for the Medical College of Alabama, she resumed collaboration with her husband, until his death in 1959. Moving to New York City in 1960, she conducted research for the United Nations Children's Fund to develop nutritional standards from 1960 to 1964. From 1964 to 1971 she worked as a lecturer and created a reference library for the Institute of Human Nutrition at Columbia University College of Physicians and Surgeons and from 1971 to 1987 she worked at St. Luke's Hospital Center, creating a library for the New York Obesity Research Center. Throughout her life, Yen Wu created numerous scholarships in China, Taiwan, and the United States which bear the name of family members and allow students to further their education. She died in 1993 in Ithaca, New York.
Early life
Yan Caiyun was born on 12 June 1902 in Shanghai, China, to Yang Lifen and Yan Zijun. Her mother was a Christian and raised the couple's twelve children. Her father came from a well-to-do family and was employed in the Ministry of Agriculture, Industry and Commerce. He eventually took over and managed the family businesses. Yan's paternal grandfather served as an advisor to Li Hongzhang, an official of the Qing dynasty, and was an industrialist. He founded China's salt industry, as well as banks, factories, pharmacies, and tea shops around the country. Both her grandfather and father were also talented painters and calligraphers. Convinced of the importance of education, Yan Zijun hired university teachers to tutor the children from a young age in English and Chinese before they attended primary school in Shanghai.
In 1908, Yan entered McTyeire School, a private girls' school. In 1913, the family moved to Tianjin, where Yan and two of her sisters, including Youyun, prepared for the entrance exams of the Chinese and Western Girls' High School. She passed the examination and enrolled in the middle of 1914, completing her studies there in June 1917. She was admitted to Nanjing Jinling Women's University (later renamed Ginling College), earning a bachelor's degree with honours in 1921. As Yan was keen to continue her education, her father allowed her to go abroad to study in the United States. She enrolled in chemistry studies at Smith College in 1922, and began using the English name "Daisy Yen". Over the summer break, she took courses at the University of Chicago in chemistry, nutrition, and physics and the next fall enrolled in biochemistry courses at Teachers College, Columbia University. She studied nutrition, a field which at the time was in its infancy, under Henry Clapp Sherman and Mary Swartz Rose, taking particular interest in the analysis of vitamin content in food. She received her master's degree in May 1923.
Career
Scientific work (1923–1928)
Yen was hired by the China Medical Board of the Rockefeller Foundation as an assistant professor in biochemistry at Peking Union Medical College and contracted for a year from September 1923. The biochemistry department had just been founded and Yen was its second employee. She lectured and worked as an assistant to Hsien Wu, whose research initially focused on blood chemistry. She assisted in his research on protein denaturation and published several papers with him: 关于稀酸、稀碱对蛋白质作用的一些新观察 (Some New Observations on the Effects of Dilute Acids and Bases on Proteins, 1924), 蛋白质变性的研究,I.稀酸和稀碱对蛋白质的影响 (Research on Protein Denaturation, I: The Effect of Dilute Acid and Alkali on Protein, 1924), 蛋白质的热变性 (Thermal Denaturation of Protein, 1925), and 乳胶体对有色溶液的作用 (The Effect of Latex on Colored Solution, 1926). These studies would later become the basis for Hsien Wu's theory on protein denaturation first presented in 1931.
Despite her contract being renewed for another year, when Yen and Wu decided to marry, she knew her position would be terminated, as there was a policy that spouses could not work together. The couple married on 20 December 1924 in Shanghai and Yen Wu resigned. They honeymooned in the United States and Yen Wu made plans to resume her studies and complete her doctoral work under Sherman at Columbia. She accompanied Hsien Wu to Europe and discovered she was pregnant, which put an end to her pursuit of further education, and she returned to China. Working as unpaid staff in Hsien Wu's lab, she continued to assist in research, but published papers rarely listed her name as the primary researcher. She also temporarily taught organic chemistry to students at the Xiehe Nursing School.
Yen Wu conducted her own research on nutrition for the biochemistry department, believed to be the first such studies carried out by a woman in China. She analyzed the chemical composition of many types of Chinese foods. Vitamin research was still in its infancy, but she determined the amount of carbohydrates, fat, fiber, protein, and water in various foods. Together with Hsien Wu, she began researching vegetarianism, the predominant Chinese diet at the time, using white mice as subjects, a technique Yen had learned from Sherman. By feeding one set of mice a typical diet based on grains such as corn, rice, sorghum, and wheat combined with peas, soybeans, and other vegetables; and another mouse group a diet of grain and meat, they discovered significant growth differentials and problems with rickets in the vegetarian group. Altering the vegetarian diet by adding bell peppers, cabbage, mustard greens, or rapeseed they found that growth rates were similar to the meat-eating mice and the animals had no signs of vitamin deficiencies.
Her next joint project with Hsien Wu was to conduct research on the diet of people in Beijing. The Department of Public Health and Sanitation collected materials from various groups throughout the country, including businesses, factories, farms, households, restaurants, and schools and presented their survey results to Yen and Hsien. They analyzed the survey, determining daily consumption rates of carbohydrates, fats, and proteins finding that the diet of people in Beijing was fairly representative of a typical diet throughout the country. They noted that compared to a Western diet, there were deficiencies in high-quality protein, calcium, phosphorus, and vitamins A and D. Their conclusions were that malnutrition was the cause of the high rates of disease and mortality, as well as intellectual disabilities and short stature, prevalent among Chinese children at the time. Their collaboration produced 营养概论 (Introduction to Nutrition, 1929) the first textbook on nutrition in China. Hsien Wu also published 中国食物之营养价值 (The Nutritional Value of Chinese Food, 1928) incorporating Yen's research.
Family and philanthropy (1929–1949)
In 1928, after the birth of her third child, Yen Wu withdrew from active work in the laboratory and focused on raising her children while compiling the research notes of her husband and assisting him in the development of his career. Within seven years of marrying, she had given birth to five children and was concerned about the level of educational opportunities available for them. She founded the Mingming School in 1934 with the aim of providing a modern comprehensive education and hired Wang Suyi, an alumnus of Columbia, as principal and a full-time teacher. The private school was operated by a board of her friends, upon which she served as treasurer. Yen Wu was a member of various civic improvement clubs and worked with her sisters to raise funds in 1936 to build a school hospital for their alma mater, the Jinling Women's College. She returned to school, studying French and graduating in 1944. In 1947, Hsien Wu went to the United States to work as a visiting professor at Columbia University, but was unable to return because of the Chinese Communist Revolution. When the communists took over their home in Beijing in 1949, Yen Wu decided to join him abroad.
Life abroad (1949–1992)
Yen Wu brought the children to Birmingham, Alabama, where Hsien Wu had become chair of the biochemistry department at the University of Alabama. In 1950, she was hired to work as a biochemical researcher for the Medical College of Alabama. As before, she conducted research jointly with her husband, until he suffered a heart attack in 1953 and retired. They moved to Boston, where Yen Wu cared for Hsien Wu and compiled their research. Between 1949 and 1959, they published four papers and wrote three abstracts for presentation at academic conferences, mostly about amino acid metabolism. After Hsien Wu died on 8 August 1959, Yen Wu published her husband's biography and in 1960, moved to New York City, to be near her children.
In the spring of 1960, Yen Wu was hired as a researcher by the Food Conservation Division of the United Nations Children's Fund. She was tasked with testing various foods and making recommendations to improve nutritional standards for children. In 1961, she established the Yen Tse-King Memorial Scholarship, in honor of her father, and the Wu Hsien Memorial Scholarship, in honor of her husband, at Tunghai University in Taichung, Taiwan. The scholarships were intended to be awarded annually to assist women students in becoming physicians and any student of biological chemistry in completing their education. In August 1964, Yen Wu went to work at the Institute of Human Nutrition at Columbia University College of Physicians and Surgeons, where she built a reference library and organized finding aids for the materials for the staff and students of the college.
In 1971, she retired but began working three days a week as a consultant in nutrition and metabolism for St. Luke's Hospital Center. Her work there was to establish the library for the New York Obesity Research Center. She also lectured on public health and nutrition at Columbia University. In addition to her employment, Yen Wu began editing and updating the publication 营养概论 (Introduction to Nutrition). She wrote eight supplemental chapters and a new edition of the book was published in Taiwan in 1974, and remained in print until the 1990s.
After a thaw in the Cold War relations between China and the US leading to normalization in the 1970s, Yen Wu returned to China. She visited relatives in 1980 and 1984. In 1983, she attended the celebration for the 70th anniversary of the founding of Jinling Women's University with family and former classmates. That year, she established a scholarship fund named after her husband to be awarded by the Chinese Academy of Medical Sciences to fund the research of professors who have contributed to the development of Chinese biochemistry and molecular biology. She retired in 1987 and lived alone until 1992, when she moved to the home of her eldest son in Ithaca, New York. In 1993, to honor her husband's 100th birthday, she donated funds to the Chinese Academy of Medical Sciences to establish a biochemical library and purchase books, and created a scholarship at Harvard Medical School, both bearing his name.
Death and legacy
Yen Wu died on 27 May 1993 at Tompkins Community Hospital in Ithaca, after a heart attack, and was buried at Forest Hills Cemetery, in the Jamaica Plain neighborhood of Boston, Massachusetts. The Wus' research papers on metabolism, diet, and nutrition were foundational to the development of later ideas on modern Chinese health and nutrition. At the time their work was completed, it was one of the most influential in China and led the biochemical department at Peking Union Medical College to prioritize studying nutrition. The papers they produced have become a prerequisite to any discussion of the historical development of nutritional study in China.
The couple's son, Ray Wu, became a noted molecular biologist at Cornell University, having "developed the first method for sequencing DNA" and was "widely recognized as one of the fathers of plant genetic engineering." In addition to the scholarships she founded, the Hsien and Daisy Yen Wu Scholarship was founded at Cornell to assist graduate students in completing their education. Harvard has an endowed chair, the Hsien Wu and Daisy Yen Wu Professor of Biological Chemistry and Molecular Pharmacology, named in their honor.
Selected works
Notes
References
Citations
Bibliography
1902 births
1993 deaths
Smith College alumni
University of Chicago alumni
Teachers College, Columbia University alumni
Biologists from Shanghai
Chinese biochemists
Chinese women biologists
20th-century Chinese scientists
20th-century Chinese biologists
20th-century Chinese chemists
Chinese women chemists
Chinese chemists
Women biochemists
People's Republic of China emigrants to the United States
Chemists from Shanghai
Academic staff of Peking Union Medical College
University of Alabama at Birmingham faculty
Chinese women philanthropists
20th-century philanthropists
20th-century Chinese people
Chinese philanthropists
20th-century women philanthropists | Daisy Yen Wu | Chemistry | 2,846 |
48,543,688 | https://en.wikipedia.org/wiki/Henry%20Freke | Henry Freke (1813–1888) was an Irish physician and early evolutionary writer.
Biography
Freke took a B.A. at Trinity in 1840, his M.B. in 1845 and his M.D. in 1855. He worked as a physician in various hospitals in Dublin and at the first Irish lunatic asylum, founded by Jonathan Swift. He is credited with developing the concept of negative entropy.
Evolution
His early writings in the Dublin Quarterly Journal of Medical Science experimented with evolutionary ideas such as all organisms descending from a single germ. Freke proposed an evolutionary theory in 1851 and more fully in a book for 1861.
Freke claimed to have published on evolution before Charles Darwin. In 1851 he wrote a pamphlet that claimed animals and plants had evolved from a single filament. He sent a copy of the pamphlet to Darwin, who described the writing style as "ill-written" and "beyond my scope". However, in the Historical Sketch which first appeared in the 3rd Edition of Darwin's On the Origin of Species (1861), Freke is listed as an early evolution proponent. On page 83 of his On the Origin of Species by Means of Organic Affinity (1861), Freke developed a theory of pangenesis, in which he proposed that all life was developed from microscopic organic agents which he named granules, which existed as 'distinct species of organizing matter' and would develop into different biological structures.
Publications
Reflections on Organization, or Suggestions for the Construction of an Organic Atomic Theory (1848)
On the Origin of Species by Means of Organic Affinity (1861)
An Appeal to Physiologists and the Press (1862)
References
1813 births
1888 deaths
19th-century Irish medical doctors
Proto-evolutionary biologists
Place of birth missing
19th-century Irish scientists | Henry Freke | Biology | 358 |
60,185,919 | https://en.wikipedia.org/wiki/Opsys | Opsys is an educational adventure video game by Polish studio Lemon Interactive and published by [hyper]media limited in 2000 on Macintosh and Windows.
Plot and gameplay
When someone breaks into the Museum of the History of Cypriot Coinage and steals all the ancient coins, the player must travel through time and recover them, from 500 BC to 1960.
Players can travel to locations via a map, and can access a clue book to complete the puzzles.
Opsys is a 3D virtual reality game with Myst-like graphics and full-motion video.
Production
Lemon Interactive, the game's Polish developer, announced a competition whereby the first player to find all the coins would win 10,000 dollars, but the competition was never finalised. The competition was also extended to the English-speaking world. The demo version lacked some gameplay elements and only allowed players to walk through the wardrobe in their own apartment to the virtual reality lab, and access the temple VR, tomb VR and theatre VR.
Reception
Gamepressure/Gry-Online praised the artwork of the landscapes that the player traverses. Absolute Games deemed it a "boring and tedious game". Gamezone felt it was a "terrific cerebral challenge". Quandaryland felt that one of the only reasons someone would play the game was the chance to win $10,000. Just Adventure described it as more of a contest than a game.
External links
Main page
References
2000 video games
Adventure games
Classic Mac OS games
Educational video games
Hypermedia
Single-player video games
Video games about time travel
Video games developed in Poland
Windows games | Opsys | Technology | 321 |
67,984,176 | https://en.wikipedia.org/wiki/Shataranji | Shataranji () is a weaving technique traditionally used in the Rangpur region of Bangladesh. In 2021, it was declared a Geographical Indication Product of Bangladesh. It is used to produce carpets that are fashionable, artistic, and practical, especially when used as a blanket. Due to the expense involved in its production, Shataranji has historically been considered a symbol of aristocracy.
History
Shataranji is believed by locals to date back to the Mughal Empire; however, its exact origin is unknown. The weaving techniques are passed down from generation to generation within the same weaver families. In the 1830s, Nisbet, a British civil servant and then Collector of Rangpur, visited the village of Peerpur, near Rangpur, and discovered local villages where residents wove Shataranji. Impressed by the product, Nisbet used his government influence to promote it; the region was named Nisbetganj in his honor. During British rule, Shataranji carpets became commonplace throughout the Indian subcontinent, being exported to various locations in Sri Lanka, Burma, Indonesia, Thailand and Malaysia. After the Partition of India, Shataranji started losing popularity, nearly becoming extinct. It has seen a resurgence in the past few decades due to increased demand, appreciation for the handloom process and increased marketing.
Weaving style
Shataranji is a handloom process; no modern technology is used. The most common materials used to weave Shataranji are cotton yarn, jute yarn, wool, among others. Ropes made out of fibers are woven in geometrical patterns, typically measured by hand. During this process, specialized techniques and different colors are used to create unique geometrical patterns and designs. Designs represent the weaver's own expertise, techniques, and style, and typically draw on local traditions from northern India.
Typically, a Shataranji measures at least 30 x 20 inches, with the largest being 30 x 20 feet. A 6 x 9 foot carpet requires two workers to work for two full days, while a 1.5 x 3-foot carpet requires one weaver to work for 3 hours.
References
External links
Geographical indications in Bangladesh
Culture of Bangladesh
Floors
Bangladeshi handicrafts
Asian folk art
Culture of Bengal
Arts in Bangladesh
Bangladeshi art | Shataranji | Engineering | 470 |
7,721,927 | https://en.wikipedia.org/wiki/Siegel%20modular%20form | In mathematics, Siegel modular forms are a major type of automorphic form. These generalize conventional elliptic modular forms which are closely related to elliptic curves. The complex manifolds constructed in the theory of Siegel modular forms are Siegel modular varieties, which are basic models for what a moduli space for abelian varieties (with some extra level structure) should be and are constructed as quotients of the Siegel upper half-space rather than the upper half-plane by discrete groups.
Siegel modular forms are holomorphic functions on the set of symmetric n × n matrices with positive definite imaginary part; the forms must satisfy an automorphy condition. Siegel modular forms can be thought of as multivariable modular forms, i.e. as special functions of several complex variables.
Siegel modular forms were first investigated by Carl Ludwig Siegel for the purpose of studying quadratic forms analytically. These primarily arise in various branches of number theory, such as arithmetic geometry and elliptic cohomology. Siegel modular forms have also been used in some areas of physics, such as conformal field theory and black hole thermodynamics in string theory.
Definition
Preliminaries
Let g, N ∈ ℕ and define

H_g = { τ ∈ M_{g×g}(ℂ) : τᵀ = τ, Im(τ) positive definite },

the Siegel upper half-space. Define the symplectic group of level N, denoted by Γ_g(N), as

Γ_g(N) = { γ ∈ GL_{2g}(ℤ) : γᵀ (0, I_g; −I_g, 0) γ = (0, I_g; −I_g, 0) and γ ≡ I_{2g} (mod N) },

where I_g is the g × g identity matrix. Finally, let

ρ : GL_g(ℂ) → GL(V)

be a rational representation, where V is a finite-dimensional complex vector space.
Siegel modular form
Given

γ = (A, B; C, D) ∈ Γ_g(N)

and

τ ∈ H_g,

define the notation

γ·τ = (Aτ + B)(Cτ + D)⁻¹.

Then a holomorphic function

f : H_g → V

is a Siegel modular form of degree g (sometimes called the genus), weight ρ, and level N if

f(γ·τ) = ρ(Cτ + D) f(τ)

for all γ ∈ Γ_g(N).
In the case that g = 1, we further require that f be holomorphic 'at infinity'. This assumption is not necessary for g > 1 due to the Koecher principle, explained below. Denote the space of weight ρ, degree g, and level N Siegel modular forms by

M_ρ(Γ_g(N)).
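As a quick numerical illustration of the action γ·τ = (Aτ + B)(Cτ + D)⁻¹ appearing in this definition, the following sketch (degree 2, using the particular symplectic element (0, I; −I, 0) and an arbitrarily chosen point τ; none of this is from the source) checks that the image of a point of H_2 is again symmetric with positive definite imaginary part:

import numpy as np

g = 2
# A point of the Siegel upper half-space: symmetric, with positive definite imaginary part.
tau = np.array([[1j, 0.3], [0.3, 2j]], dtype=complex)

# Blocks of the symplectic "inversion" element (0, I; -I, 0).
A, B = np.zeros((g, g)), np.eye(g)
C, D = -np.eye(g), np.zeros((g, g))

gamma_tau = (A @ tau + B) @ np.linalg.inv(C @ tau + D)

print(np.allclose(gamma_tau, gamma_tau.T))              # still symmetric
print(np.all(np.linalg.eigvalsh(gamma_tau.imag) > 0))   # imaginary part still positive definite

Both checks print True, reflecting that the symplectic group acts on the Siegel upper half-space.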
Examples
Some methods for constructing Siegel modular forms include:
Eisenstein series
Theta functions of lattices and Siegel theta series
Saito–Kurokawa lift for degree 2
Ikeda lift
Miyawaki lift
Products of Siegel modular forms.
Level 1, small degree
For degree 1, the level 1 Siegel modular forms are the same as level 1 modular forms. The ring of such forms is a polynomial ring C[E4,E6] in the (degree 1) Eisenstein series E4 and E6.
For degree 2, Igusa showed that the ring of level 1 Siegel modular forms is generated by the (degree 2) Eisenstein series E4 and E6 and 3 more forms of weights 10, 12, and 35. The ideal of relations between them is generated by the square of the weight 35 form minus a certain polynomial in the others.
For degree 3, Tsuyumine described the ring of level 1 Siegel modular forms, giving a set of 34 generators.
For degree 4, the level 1 Siegel modular forms of small weights have been found. There are no cusp forms of weights 2, 4, or 6. The space of cusp forms of weight 8 is 1-dimensional, spanned by the Schottky form. The space of cusp forms of weight 10 has dimension 1, the space of cusp forms of weight 12 has dimension 2, the space of cusp forms of weight 14 has dimension 3, and the space of cusp forms of weight 16 has dimension 7.
For degree 5, the space of cusp forms has dimension 0 for weight 10, dimension 2 for weight 12. The space of forms of weight 12 has dimension 5.
For degree 6, there are no cusp forms of weights 0, 2, 4, 6, 8. The space of Siegel modular forms of weight 2 has dimension 0, and those of weights 4 or 6 both have dimension 1.
Level 1, small weight
For small weights and level 1, the following results hold (for any positive degree):
Weight 0: The space of forms is 1-dimensional, spanned by 1.
Weight 1: The only Siegel modular form is 0.
Weight 2: The only Siegel modular form is 0.
Weight 3: The only Siegel modular form is 0.
Weight 4: For any degree, the space of forms of weight 4 is 1-dimensional, spanned by the theta function of the E8 lattice (of appropriate degree). The only cusp form is 0.
Weight 5: The only Siegel modular form is 0.
Weight 6: The space of forms of weight 6 has dimension 1 if the degree is at most 8, and dimension 0 if the degree is at least 9. The only cusp form is 0.
Weight 7: The space of cusp forms vanishes if the degree is 4 or 7.
Weight 8: In genus 4, the space of cusp forms is 1-dimensional, spanned by the Schottky form, and the space of forms is 2-dimensional. There are no cusp forms if the genus is 8.
There are no cusp forms if the genus is greater than twice the weight.
Table of dimensions of spaces of level 1 Siegel modular forms
A table of these dimensions combines the results above with further dimension computations from the literature.
Koecher principle
The theorem known as the Koecher principle states that if f is a Siegel modular form of weight ρ, level 1, and degree g > 1, then f is bounded on subsets of H_g of the form

{ τ ∈ H_g : Im(τ) ≥ εI_g },

where ε > 0. A corollary to this theorem is the fact that Siegel modular forms of degree g > 1 have Fourier expansions and are thus holomorphic at infinity.
Applications to physics
In the D1D5P system of supersymmetric black holes in string theory, the function that naturally captures the microstates of black hole entropy is a Siegel modular form. In general, Siegel modular forms have been described as having the potential to describe black holes or other gravitational systems.
Siegel modular forms also have uses as generating functions for families of CFT2 with increasing central charge in conformal field theory, particularly the hypothetical AdS/CFT correspondence.
References
Modular forms
Automorphic forms | Siegel modular form | Mathematics | 1,194 |
48,510,007 | https://en.wikipedia.org/wiki/Elder%20village | In gerontology, an Elder Village or Senior Village (occasionally "virtual village", and usually shortened to "Village") is an organization, usually staffed by volunteers (often with a small paid staff), that provides services to the elderly in order to allow them to remain in their homes as they age. Villages are a part of the "aging in place" movement, and are found in the United States, Canada, Australia, and the Netherlands, as well as South Korea and Finland.
Most Villages have members, to whom they provide services upon request. Services offered typically include transportation, light home maintenance and repair, and social activities. Most Villages do not provide medical services or involved home maintenance, but provide referrals to those who do.
History
The first formal Village was founded in the Beacon Hill neighborhood of Boston in 2001. Approximately one dozen residents of the historic neighborhood wanted "to remain at home" once transportation and household chores became difficult, dangerous, or even impossible. They also wished to avoid becoming dependent on their children, but did not want to move to an old-age facility. They founded an organization to provide these services to the organization's members, who must live in Beacon Hill or the adjacent Back Bay neighborhood. The result has been called an "intentional community" or a "virtual retirement community".
The organization grew slowly, learning from its mistakes. After four years in existence, Beacon Hill Village was the subject of an article in The New York Times, and the idea spread. Beacon Hill Village prepared a how-to manual for sale to those who would found other Villages. By 2010, there were more than 50 Villages in the United States. As of 2012, there were some 90 Villages in operation in the United States, Canada, Australia, and the Netherlands, with more than 120 other Villages in the formation process. By 2018, the idea had spread as far as South Korea and Finland. By 2019, there were 280 Villages in the United States.
Operation
A Village tends to be formed as a non-profit corporation, with members, directors, and officers. Most are qualified as charitable organizations. They may or may not have paid staff, a regular office, and other business trappings.
Villages are largely funded through membership dues and fees, on the one hand, and donations and grants, on the other. Some 90% of American Villages charge dues, but some charge no dues. They provide such services as transportation, grocery delivery, light home repairs, and dog walking, as well as organizing social activities. They typically pool the resources of a community in providing services. Most Villages do not provide medical services or involved home maintenance, but provide referrals to those who do. Village staff and volunteers might select and screen these outside providers, and can help coordinate members' appointments with them. Providers so identified may offer their services to Village members at reduced rates.
Villages tend to operate on one of three models. The first, pioneered in the 1990s by Community Without Walls in Princeton, N.J., has numerous members, each of whom belongs to one of a number of "houses". Annual dues are very low or non-existent, and much of the activity of such a group is social. Nearly all services are provided by volunteers. Members pay additional dues for further assistance and services needed. The second form delivers both volunteer and paid help. Dues are higher (and often subsidized for low-income members), and the level of services (which are typically provided without additional charge) tends to be more comprehensive. This has been termed the "classic village model". A third model amounts to being a service exchange. One member might pick up groceries for a neighbor; a second volunteer might then fix the first's leaky faucet. Historically, Villages have tended to operate in urban areas, with significant concentrations of both service providers and recipients, but they are spreading. Many experts believe that the second model, with both paid staff and volunteers, has the most widespread applicability. Currently, Villages are largely found in middle-class and upper-income neighborhoods; the movement has received some criticism for its perceived failure to reach more diverse communities to date. The Washington, D.C., area, with its large proportion of people who moved from elsewhere and thus do not have a local family network, has a particularly high concentration of Villages; the District of Columbia Office on Aging has a Web page dedicated to "Senior Villages" and has produced a "how to" guide for establishing a new Village.
The issue of sustainability, with the related issue of growth, has arisen in a number of Villages. In some, the founders have been surprised at the difficulty they experience in their efforts to expand membership beyond the initial group, which can impair efforts to grow the membership to the point at which a Village can become self-sustaining. Many people approached by a Village do not feel ready to join, while the people most in need of a Village's services are less likely to hear about them.
Individual Villages may share ideas and experiences through the Village to Village ("VtV") network. VtV was established in 2010 by Beacon Hill Village and Capital Impact in response to requests from a number of Villages. At the end of 2014, Capital Impact withdrew from the partnership and in March 2015, the organization, formally organized as a limited liability company, was converted to a corporation, named Village to Village Network, Inc. It serves as a clearinghouse for inter-Village communications, and provides information to help communities establish and operate their own Villages. It further organizes an annual meeting, the National Village Gathering, at which local Village officers and staffers may meet those from other Villages to share information and experiences.
The Beacon Hill Village in Boston began as a community of older adults joining forces to create "programs and services that will enable them to live at home, remaining independent as long as possible." The ‘Village’ model for aging in place is based on the Beacon Hill Village established in Boston in 2001. The ‘Village’ model is a grassroots, consumer-driven, and volunteer-first model. The ‘Village’ is a self-governed organization of older adults who have identified their desire to age in place. The model relies on an informal network of community members. Volunteers are the backbone of the model, while the ‘Village’ staff is responsible for administration, including vetting, training, and management of volunteers. Vendors provide home health care and professional home repairs. Volunteers provide transportation, shopping, household chores, gardening, and light home maintenance. The ‘Village’ model relies on the collective abilities of the community to respond to challenges faced in the aging process. The ‘Village’ also works to build a shared sense of community through social activities including potluck dinners, book clubs, and educational programs. By the early 2010s, there were over 50 fully operational ‘Villages’ and nearly 149 in the developmental stage. By 2015, the Village to Village Network had 251 member organizations, accounting for approximately 25,000 service-receiving members.
References
Links
Village to Village Network
PBS Newshour report
Helpful Village
Gerontology
Housing for the elderly | Elder village | Biology | 1,444 |
38,248,414 | https://en.wikipedia.org/wiki/Variable%20electro-precipitator | A variable electro-precipitator (VEP) is a waste water remediation unit using electrocoagulation. The differences between a standard electrocoagulation (EC) unit and a variable Electro-precipitation unit are in the enhanced flow path and the unit electrode connections. The variable electro-precipitator's flow path has been designed to maximize retention time and to increase the turbulence of the water within the unit. This design aids in increasing the amount of effective treatment per gallon of water.
A major design weakness of electrocoagulation units is the method used to connect the electrodes to the power source. These designs cause overheating, resulting in premature failure of the electrocoagulation reaction chamber. VEP reaction chambers are designed to resolve these performance issues by changing all electrode connections from the standard wet connection (inside the chamber) to an external dry connection. The VEP operates cooler and has a longer chamber life than an electrocoagulation unit.
References
Water treatment | Variable electro-precipitator | Chemistry,Engineering,Environmental_science | 207 |
31,611,212 | https://en.wikipedia.org/wiki/CH-quasigroup | In mathematics, a CH-quasigroup, introduced by , is a symmetric quasigroup in which any three elements generate an abelian quasigroup. "CH" stands for cubic hypersurface.
References
Non-associative algebra | CH-quasigroup | Mathematics | 49 |
43,469,245 | https://en.wikipedia.org/wiki/Henry%20Forder | Henry George Forder (27 September 1889 – 21 September 1981) was a New Zealand mathematician.
Academic career
Born in Shotesham All Saints, near Norwich, he won scholarships, first to a grammar school and then to the University of Cambridge. After teaching mathematics at a number of schools, he was appointed to the chair of mathematics at Auckland University College in New Zealand in 1933. He was very critical of the state of the New Zealand curriculum and set about writing a series of well-received textbooks.
His Foundations of Euclidean Geometry (1927) was reviewed by F.W. Owens, who noted that 40 pages are devoted to "concepts of classes, relations, linear order, non archimedean systems, ..." and that order axioms together with a continuity axiom and a Euclidean parallel axiom are the required foundation.
The object achieved is a "continuous and rigorous development of the [Euclidean] doctrine in the light of modern investigations."
In 1929 Forder obtained drawings and notes of Robert William Genese on the exterior algebra of Grassmann. He relied on methods of H. F. Baker in Principles of Geometry to extend Genese's beginning into a complete development with applications throughout geometry. When The Calculus of Extension appeared in 1941 it was reviewed by Homer V. Craig: "The theorem density is exceptionally high and consequently despite the superior exposition it is not an easy book to work straight through – perhaps the key chapters suffer from a lack of recapitulation...
[It] provides the best exposition of the fundamental processes of the Ausdehnungslehre and the most inclusive treatment of the geometrical applications available at present."
Henry Forder was elected Fellow of the Royal Society of New Zealand in 1947 and received an honorary DSc from the University of Auckland in 1959.
Forder lectureship
The Forder Lectureship was established jointly by the London Mathematical Society and the New Zealand Mathematical Society in his honour in 1986.
Selected works
1927: The Foundations of Euclidean Geometry via Internet Archive
1930: A School Geometry
1931: Higher Course Geometry
1941: The Calculus of Extension via Internet Archive
1950: Geometry via Internet Archive
1953: Coordinates in Geometry
References
John C. Butcher (1985) "Obituary: Henry George Forder", Bulletin of the London Mathematical Society 17(2): 161
Fellows of the Royal Society of New Zealand
People educated at Paston College
Alumni of Sidney Sussex College, Cambridge
People from Shotesham
English emigrants to New Zealand
University of Auckland alumni
Academic staff of the University of Auckland
1889 births
1981 deaths
Textbook writers
20th-century New Zealand mathematicians
Geometers | Henry Forder | Mathematics | 521 |
56,841,538 | https://en.wikipedia.org/wiki/Weyl%20sequence | In mathematics, a Weyl sequence is a sequence from the equidistribution theorem proven by Hermann Weyl:
The sequence of all multiples of an irrational α,
0, α, 2α, 3α, 4α, ...
is equidistributed modulo 1.
In other words, the sequence of the fractional parts of each term will be uniformly distributed in the interval [0, 1).
In computing
In computing, an integer version of this sequence is often used to generate a discrete uniform distribution rather than a continuous one. Instead of using an irrational number, which cannot be calculated on a digital computer, the ratio of two integers is used in its place. An integer k is chosen, relatively prime to an integer modulus m. In the common case that m is a power of 2, this amounts to requiring that k is odd.
The sequence of all multiples of such an integer k,
0, k, 2k, 3k, 4k, …
is equidistributed modulo m.
That is, the sequence of the remainders of each term when divided by m will be uniformly distributed in the interval [0, m).
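A small check of this property with a toy modulus (the particular k and m are illustrative, not from the article): since gcd(k, m) = 1, the multiples of k visit every residue exactly once per period.

# Multiples of an odd k modulo a power-of-two m cover every residue once per period,
# so the integer Weyl sequence is equidistributed modulo m.
k, m = 362437, 2**16
residues = {(n * k) % m for n in range(m)}
print(len(residues) == m)     # True: all m residues appear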
The term appears to originate with George Marsaglia’s paper "Xorshift RNGs".
The following C code generates what Marsaglia calls a "Weyl sequence":
d += 362437;   /* d is an unsigned 32-bit integer, so repeated calls step through the Weyl sequence modulo 2^32 */
In this case, the odd integer is 362437, and the results are computed modulo 2^32 because d is a 32-bit quantity. The results are equidistributed modulo 2^32.
See also
List of things named after Hermann Weyl
References
Mathematical series | Weyl sequence | Mathematics | 346 |
4,538,355 | https://en.wikipedia.org/wiki/Textile%20design | Textile design, also known as textile geometry, is the creative and technical process by which thread or yarn fibers are interlaced to form a piece of cloth or fabric, which is subsequently printed upon or otherwise adorned. Textile design is further broken down into three major disciplines: printed textile design, woven textile design, and mixed media textile design. Each uses different methods to produce a fabric for variable uses and markets. Textile design as an industry is involved in other disciplines such as fashion, interior design, and fine arts.
Overview
Articles produced using textile design include clothing, carpets, drapes, and towels. Textile design requires an understanding of the technical aspects of the production process, as well as the properties of numerous fibers, yarns, and dyes.
Textile design disciplines
Printed textile design
Printed textile designs are created by using various printing techniques on fabric, cloth, and other materials. Printed textile designers are mainly involved in designing patterns for home interior products like carpets, wallpapers, and ceramics. They also work in the fashion and clothing industries, the paper industry, and in designing stationery and gift wrap.
There are numerous established printed styles and designs that can be broken down into four major categories: floral, geometric, world cultures, and conversational. Floral designs include flowers, plants, or other botanical elements. Geometric designs feature elements, both inorganic and abstract, such as tessellations. World culture designs may be traced to a specific geographic, ethnic, or anthropological source. Finally, conversational designs are designs that fit less easily into the other categories; they may be described as presenting "imagery that references popular icons of a particular period or season, or which is unique and challenges our perceptions in some way." Each category contains subcategories, which include more specific individual styles and designs.
Moreover, different fabrics, like silk and wool, require different types of dye. Other protein-based fabrics require acidic dyes, whereas synthetic fabrics require specialized dispersed dyes.
The advent of computer-aided design software, such as Adobe Photoshop and Illustrator, has allowed each discipline of textile design to evolve and innovate new practices and processes but has most influenced the production of printed textile designs. Digital tools have influenced the process of creating repeating patterns or motifs, or repeats. Repeats are used to create patterns both visible and invisible to the eye: geometric patterns are intended to depict clear, intentional patterns, whereas floral or organic designs are intended to create unbroken repeats that are ideally undetectable. Digital tools have also aided in making patterns by decreasing the amount of an effect known as "tracking", in which the eye is inadvertently drawn to parts of textiles that expose the discontinuity of the textile and reveal its pattern. These tools, alongside the innovation of digital inkjet printing, have allowed the textile printing process to become faster, more scalable, and more sustainable.
Woven textile design
Woven textile design originates from the practice of weaving, which produces fabric by interlacing a vertical yarn (warp) and a horizontal yarn (weft), most often at right angles. Woven textile designs are created by various types of looms and are now predominantly produced using a mechanized or computerized jacquard loom. Designs within the context of weaving are created using various types of yarns, using variance in texture, size, and color to construct a stylized patterned or monochromatic fabric. There is a large range of yarn types available to the designer, including but not limited to cotton, twill, linen, and synthetic fibers. To produce the woven fabric, the designer first delineates and visualizes the sequence of threading, which is traditionally drawn out on graph paper known as point paper.
The designer also will choose a weave structure that governs the aesthetic design that will be produced. The most common process is a plain weave, in which the yarns interlace in an alternating, tight formation, producing a strong and flexible multi-use fabric. Twill weaves, which are also common, alternatively use diagonal lines created by floating the warp or the weft to the left or the right. This process creates a softer fabric favored by designers in the fashion and clothing design industries. Common, recognizable twill styles include patterns like Houndstooth or Herringbone.
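The interlacing logic of these two structures is simple enough to sketch programmatically. The following illustrative snippet (not from the source) prints small plain-weave and 2/2 twill drafts, marking cells where the warp passes over the weft:

def weave_grid(rows, cols, structure):
    """Return rows of 'X' (warp over weft) and '.' for a given weave structure."""
    grid = []
    for r in range(rows):
        if structure == "plain":             # alternate over/under on every thread
            row = ["X" if (r + c) % 2 == 0 else "." for c in range(cols)]
        elif structure == "2/2 twill":       # two over, two under, shifted each pick
            row = ["X" if (c - r) % 4 < 2 else "." for c in range(cols)]
        else:
            raise ValueError("unknown weave structure")
        grid.append("".join(row))
    return grid

for line in weave_grid(4, 8, "2/2 twill"):
    print(line)    # the shifted rows produce the diagonal line characteristic of twill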
Beyond weave structure, color is another dominant aspect in woven textile design. Typically, designers choose two or more contrasting colors that will be woven into patterns based on a chosen threading sequence. Color is also dependent on the size of the yarn: fine yarns will produce a fabric that may change colors when it receives light from different angles, whereas larger yarns will generally produce a more monochromatic surface.
Mixed media textile design
Mixed media textile designs are produced using embroidery or other various fabric manipulation processes such as pleating, appliqué, quilting, and laser cutting.
Embroidery is traditionally performed by hand, applying myriad stitches of thread to construct designs and patterns on the textile surface. Similar to printed textile design, embroidery affords the designer artistic and aesthetic control. Typical stitches include but are not limited to the cross stitch, the chain stitch, and couching. Although industrial and mechanized embroidery has become the standard, hand stitching still remains a fixture for fine arts textiles.
Quilting is traditionally used to enhance the insulation and warmth of a textile. It also provides the designer with the opportunity to apply aesthetic properties. Most commonly, quilts feature geometric and collage designs formed from various textiles of different textures and colors. Quilting also frequently employs the use of recycled scrap or heirloom fabrics. Quilts are also often used as a medium for an artist to depict a personal or communal narrative: for example, the Hmong people have a tradition of creating story quilts or cloths illustrating their experiences with immigration to the United States from Eastern and South-eastern Asia.
Environmental impact
The practice and industry of textile design present environmental concerns. From the production of cloth from raw material to dyeing and finishing, and the ultimate disposal of products, each step of the process produces environmental impacts. These impacts have been further exacerbated by the emergence of fast fashion and other modern industrial practices.
Predominantly, these environmental impacts stem from the heavy use of hazardous chemicals in the textile creation process, which must be properly disposed of. Other considerations involve the amount of waste created by the disposal of textile design products and the reclamation and reuse of recyclable textiles. The Environmental Protection Agency reported that over 15 million tons of textile waste is created annually, constituting some 5% of all municipal waste generated. Only 15% of that waste is recovered and reused.
The existence and awareness of the negative environmental impacts of textile production have resulted in the emergence of new technologies and practices. Textile designs involving the use of synthetic dyes and materials can result in harmful effects on the environment. This has caused a shift towards using natural dyes or materials, and research into other media that result in less harm to the environment. This research includes testing new ways to collect natural resources and how these natural resources work with other materials.
Electronic textiles involve items of clothing with electronic devices or technology woven into the fabric, such as heaters, lights, or sensors. These textiles can potentially have additional harmful environmental effects, such as producing electronic waste. Because of this, these textiles are often made by manufacturers with sustainability in mind. These new approaches to textile design attempt to lessen the negative environmental impact of these textiles.
These concerns have led to the birth of sustainable textile design movements and the practice of ecological design within the field. For instance, London's Royal Society of the Arts hosts design competitions that compel all entrants to center their design and manufacturing methods around sustainable practices and materials.
Textile design in different cultures
Textile patterns, designs, weaving methods, and cultural significance vary across the world. African countries use textiles as a form of cultural expression and way of life. They use textiles to liven up the interior of a space or accentuate and decorate the body of an individual. The textile designs of African cultures involve the process of strip-woven fibers that can repeat a pattern or vary from strip to strip.
History
The history of textile design dates back thousands of years. Due to the decomposition of textile fibers, early examples of textile design are rare. However, some of the oldest known and preserved examples of textiles were discovered in the form of nets and basketry, dating from Neolithic cultures in 5000 BCE. When trade networks formed in European countries, textiles like silk, wool, cotton, and flax fibers became valuable commodities. Many early cultures including Egyptian, Chinese, African, and Peruvian practiced early weaving techniques. One of the oldest examples of textile design was found in an ancient Siberian tomb in 1947. The tomb was said to be that of a prince, dating back to around the fifth century BC, making the tomb and all of its contents over 2,500 years old. The rug, known as the Pazyryk rug, was preserved inside ice and is detailed with elaborate designs of deer and men riding on horseback. The designs are similar to present-day Anatolian and Persian rugs that apply the symmetrical Ghiordes knot in their weaving. The Pazyryk rug is currently displayed at the Hermitage Museum located in St. Petersburg, Russia.
See also
Clothing technology
Fashion design
Textile manufacturing
References
Further reading
Jackson, Lesley. Twentieth-Century Pattern Design, Princeton Architectural Press, New York, 2002.
Jackson, Lesley. Shirley Craven and Hull Traders: Revolutionary Fabrics and Furniture 1957-1980, ACC Editions, 2009,
Jenkins, David, ed. The Cambridge History of Western Textiles, Cambridge, UK: Cambridge University Press, 2003.
Kadolph, Sara J., ed. Textiles, 10th edition, Pearson/Prentice-Hall, 2007,
Miraftab, M., and A R. Horrocks. Ecotextiles The Way Forward for Sustainable Development in Textiles. Burlington: Elsevier Science, 2007. Print.
Schevill, Margot. Evolution in Textile Design from the Highlands of Guatemala: Seventeen Male Tzutes, or Headdresses, from Chichicastenango in the Collections of the Lowie Museum of Anthropology, University of California, Berkeley. Berkeley, Calif: Lowie Museum of Anthropology, University of California, Berkeley, 1985. Print.
Robinson, Stuart. A History of Printed Textiles: Block, Roller, Screen, Design, Dyes, fibers, Discharge, Resist, Further Sources for Research. London: Studio Vista, 1969. Print.
Speelberg, Femke. "Fashion & Virtue: Textile Patterns and the Print Revolution, 1520–1620". Metropolitan Museum of Art Bulletin. New York: Metropolitan Museum of Art, 2015. Print.
Perivoliotis, Margaret C. "The Role of Textile History in Design Innovation: A Case Study Using Hellenic Textile History". Textile history 36.1 (2005): 1–19. Web.
Grömer, Karina. The Art of Prehistoric Textile Making. Naturhistorisches Museum Wien, 2016. Web.
European Textile Forum, In Hopkins, H., In Kania, K., & European Textile Forum. (2019). Ancient textiles, modern science II.
In Siennicka, M., In Rahmstorf, L., & In Ulanowska, A. (2018). First textiles: The beginnings of textile manufacture in Europe and the Mediterranean: proceedings of the EAA Session held in Istanbul (2014) and the 'First Textiles' Conference in Copenhagen (2015)''.
Whewell, Charles S. and Abrahart, Edward Noah. "Textile". Encyclopædia Britannica, 4 Jun. 2020, https://www.britannica.com/topic/textile. Accessed 7 March 2021.
Gesimondo, Nancy and Postell, Jim. "Materiality and Interior Construction". John Wiley & Sons, 2011, | Textile design | Engineering | 2,437 |
5,751,182 | https://en.wikipedia.org/wiki/Human-based%20evolutionary%20computation | Human-based evolutionary computation (HBEC) is a set of evolutionary computation techniques that rely on human innovation.
Classes and examples
Human-based evolutionary computation techniques can be classified into three more specific classes, analogous to those in evolutionary computation. There are three basic types of innovation: initialization, mutation, and recombination. The classes of HBEC differ in which of these types of human innovation they support, as described in the sections below.
All three classes also have to implement selection, performed either by humans or by computers.
Human-based selection strategy
Human-based selection strategy is the simplest human-based evolutionary computation procedure. It is used heavily today by websites that outsource the collection and selection of content to humans (user-contributed content). Viewed as evolutionary computation, their mechanism supports two operations: initialization (when a user adds a new item) and selection (when a user expresses a preference among items). The website software aggregates the preferences to compute the fitness of items, so that it can promote the fittest items and discard the worst ones. Several methods of human-based selection were analytically compared in studies by Kosorukoff and Gentry.
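As an illustration only (not drawn from the studies cited above), a minimal sketch of such a selection mechanism might aggregate independent up/down votes into a fitness score and retain the highest-scoring items; the item names and vote counts below are hypothetical.

    # Minimal sketch of a human-based selection strategy (illustrative only).
    # Humans perform initialization (adding items) and selection (voting);
    # the software merely aggregates preferences into a fitness score.
    def fitness(up_votes, down_votes):
        # Fraction of positive votes, with a small prior so that items with
        # very few votes are not ranked too confidently.
        return (up_votes + 1) / (up_votes + down_votes + 2)

    # Hypothetical user-contributed items and their independent votes.
    items = {"item_a": (40, 10), "item_b": (5, 20), "item_c": (12, 3)}

    ranked = sorted(items, key=lambda name: fitness(*items[name]), reverse=True)
    promoted, discarded = ranked[:2], ranked[2:]
    print(promoted, discarded)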
Although the concept seems simple, most websites implementing the idea fall into a common pitfall: informational cascades in soliciting human preferences. For example, Digg-style implementations, pervasive on the web, heavily bias subsequent human evaluations by showing how many votes an item already has. This makes the aggregated evaluation depend on a very small initial sample of rarely independent evaluations. It also encourages many people to game the system, which might add to Digg's popularity but detracts from the quality of the featured results: it is too easy to submit an evaluation in a Digg-style system based only on the content title, without reading the content that is supposed to be evaluated.
A better example of a human-based selection system is StumbleUpon. In StumbleUpon, users first experience the content (stumble upon it) and can then submit their preference by pressing a thumbs-up or thumbs-down button. Because a user does not see the number of votes given to a site by previous users, StumbleUpon can collect a relatively unbiased set of user preferences and thus evaluate content much more precisely.
Human-based evolution strategy
In this context, and perhaps generally, the Wikipedia software is the best illustration of a working human-based evolution strategy, in which the targeted evolution of any given page amounts to fine-tuning the body of information that relates to that page. A traditional evolution strategy has three operators: initialization, mutation, and selection. In the case of Wikipedia, the initialization operator is page creation and the mutation operator is incremental page editing. The selection operator is less salient: it is provided by the revision history and the ability to select among all previous revisions via a revert operation. If a page is vandalised and no longer fits its title, a reader can easily go to the revision history and select whichever previous revision fits best (hopefully, the preceding one). This selection feature is crucial to the success of Wikipedia.
An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence could not effectively support content evolution. The addition of revision history and the rise of large wiki-supported communities coincide in time. From an evolutionary computation point of view, this is not surprising: without a selection operation, the content would undergo aimless genetic drift and would be unlikely to be useful to anyone. That is what many people expected from Wikipedia at its inception. With a selection operation, however, the utility of content tends to improve over time as beneficial changes accumulate. This is what actually happens on a large scale in Wikipedia.
Human-based genetic algorithm
A human-based genetic algorithm (HBGA) provides the means for a human-based recombination operation (a distinctive feature of genetic algorithms). The recombination operator brings together highly fit parts of different solutions that evolved independently, which makes the evolutionary process more efficient.
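Purely as an illustrative sketch (not a description of any particular HBGA system), the loop below delegates recombination and scoring to placeholder functions standing in for human contributors; every name and value in it is hypothetical.

    import random

    # Minimal sketch of a human-based genetic algorithm loop (illustrative).
    # Humans supply initialization and recombination; selection relies on
    # aggregated human preferences. Both are mocked here by simple stubs.
    def human_recombine(parent_a, parent_b):
        # Stands in for an interface asking a person to combine the strongest
        # parts of two candidate solutions (for example, two sentences).
        cut = len(parent_a) // 2
        return parent_a[:cut] + parent_b[cut:]

    def human_score(candidate):
        # Stands in for aggregated votes collected from many independent users.
        return random.random()

    population = ["draft contributed by user 1", "draft contributed by user 2",
                  "draft contributed by user 3"]

    for generation in range(3):
        parent_a, parent_b = random.sample(population, 2)
        population.append(human_recombine(parent_a, parent_b))
        # Selection: keep the candidates that people rate most highly.
        population = sorted(population, key=human_score, reverse=True)[:3]

    print(population)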
See also
References
Human-based computation
Evolutionary computation | Human-based evolutionary computation | Technology,Biology | 885 |
1,571,780 | https://en.wikipedia.org/wiki/Condensation%20algorithm | The condensation algorithm (Conditional Density Propagation) is a computer vision algorithm. The principal application is to detect and track the contour of objects moving in a cluttered environment. Object tracking is one of the more basic and difficult aspects of computer vision and is generally a prerequisite to object recognition. Being able to identify which pixels in an image make up the contour of an object is a non-trivial problem. Condensation is a probabilistic algorithm that attempts to solve this problem.
The algorithm itself is described in detail by Isard and Blake in a publication in the International Journal of Computer Vision in 1998. One of the most interesting facets of the algorithm is that it does not compute on every pixel of the image. Rather, pixels to process are chosen at random, and only a subset of the pixels end up being processed. Multiple hypotheses about what is moving are supported naturally by the probabilistic nature of the approach. The evaluation functions come largely from previous work in the area and include many standard statistical approaches. The original part of this work is the application of particle filter estimation techniques.
The algorithm’s creation was inspired by the inability of Kalman filtering to perform object tracking well in the presence of significant background clutter. The presence of clutter tends to produce probability distributions for the object state which are multi-modal and therefore poorly modeled by the Kalman filter. The condensation algorithm in its most general form requires no assumptions about the probability distributions of the object or measurements.
Algorithm overview
The condensation algorithm seeks to solve the problem of estimating the conformation of an object, described by a state vector x_t at time t, given observations z_1, ..., z_t of the detected features in the images up to and including the current time. The algorithm outputs an estimate of the state's conditional probability density p(x_t | z_1, ..., z_t) by applying a nonlinear filter based on factored sampling, and can be thought of as a development of a Monte-Carlo method. This density is a representation of the probability of possible conformations of the object based on previous conformations and measurements. The condensation algorithm is a generative model, since it models the joint distribution of the object state and the observations.
The conditional density of the object at the current time t is estimated as a weighted, time-indexed sample set {s_t^(n), n = 1, ..., N} with weights π_t^(n), where N is a parameter determining the number of samples in the set. A realization of x_t is obtained by sampling with replacement from this set, with probability equal to the corresponding weight π_t^(n).
The assumptions that object dynamics form a temporal Markov chain and that observations are independent of each other and the dynamics facilitate the implementation of the condensation algorithm. The first assumption allows the dynamics of the object to be entirely determined by the conditional density p(x_t | x_{t-1}). The model of the system dynamics determined by p(x_t | x_{t-1}) must also be selected for the algorithm, and generally includes both deterministic and stochastic dynamics.
The algorithm can be summarized by an initialization at time t = 0 and three steps at each time t:
Initialization
Form the initial sample set {s_0^(n)} and weights {π_0^(n)} by sampling according to the prior distribution p(x_0). For example, specify p(x_0) as Gaussian and set the weights equal to each other.
Iterative procedure
Sample with replacement N times from the set {s_{t-1}^(n)}, choosing each element with probability equal to its weight π_{t-1}^(n), to generate a resampled set of base samples.
Apply the learned dynamics p(x_t | x_{t-1}) to each element of this new set to generate a new set {s_t^(n)}.
To take into account the current observation z_t, set the weight π_t^(n) proportional to the observation likelihood p(z_t | x_t = s_t^(n)) for each element of the set, normalizing so that the weights sum to one.
This algorithm outputs the probability distribution p(x_t | z_1, ..., z_t), which can be directly used to calculate the mean position of the tracked object, as well as the other moments of the tracked object.
Cumulative weights can instead be used to achieve a more efficient sampling.
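A minimal numerical sketch of this factored-sampling cycle is shown below. It is not Isard and Blake's implementation: the one-dimensional state, random-walk dynamics, Gaussian likelihood, and all numerical values are simplifying assumptions chosen only to make the resample–predict–weight loop concrete.

    import numpy as np

    # Illustrative condensation (factored sampling) loop for a 1-D state with
    # assumed random-walk dynamics and a Gaussian observation likelihood.
    rng = np.random.default_rng(0)
    N = 500                                   # number of samples in the set

    # Initialization: draw the sample set from an assumed Gaussian prior.
    samples = rng.normal(loc=0.0, scale=1.0, size=N)
    weights = np.full(N, 1.0 / N)

    def likelihood(z, s, sigma=0.5):
        # Assumed observation model p(z_t | x_t); a real tracker would derive
        # this from image features along the predicted contour.
        return np.exp(-0.5 * ((z - s) / sigma) ** 2)

    for z in [0.2, 0.5, 0.9, 1.4]:            # hypothetical measurements
        # 1. Sample with replacement according to the current weights.
        resampled = samples[rng.choice(N, size=N, p=weights)]
        # 2. Apply the (assumed) dynamics: deterministic carry-over plus noise.
        samples = resampled + rng.normal(scale=0.3, size=N)
        # 3. Re-weight each sample by the current observation and normalize.
        weights = likelihood(z, samples)
        weights /= weights.sum()
        # The weighted set approximates p(x_t | z_1, ..., z_t); e.g. its mean:
        print(float(np.sum(weights * samples)))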
Implementation considerations
Since object tracking can be a real-time objective, consideration of algorithm efficiency becomes important. The condensation algorithm is relatively simple when compared to the computational intensity of the Riccati equation required for Kalman filtering. The parameter N, which determines the number of samples in the sample set, clearly involves a trade-off between efficiency and performance.
One way to increase efficiency of the algorithm is by selecting a low degree of freedom model for representing the shape of the object. The model used by Isard 1998 is a linear parameterization of B-splines in which the splines are limited to certain configurations. Suitable configurations were found by analytically determining combinations of contours from multiple views, of the object in different poses, and through principal component analysis (PCA) on the deforming object.
Isard and Blake model the object dynamics x_t as a second-order difference equation with deterministic and stochastic components:
x_t − x̄ = A (x_{t−1} − x̄) + B w_t
where x̄ is the mean value of the state, w_t is an independent noise term, and A, B are matrices representing the deterministic and stochastic components of the dynamical model respectively. A, B, and x̄ are estimated via Maximum Likelihood Estimation while the object performs typical movements.
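As a purely generic illustration of such a model (not the fitted parameters from the paper), a scalar second-order difference equation with separate deterministic and stochastic terms can be simulated as follows; every coefficient below is an arbitrary placeholder.

    import random

    # Generic second-order difference equation: deterministic coefficients
    # (a1, a2) and a stochastic component scaled by b; values are placeholders.
    a1, a2, b, x_mean = 1.5, -0.6, 0.1, 0.0

    x_prev2, x_prev1 = 0.0, 0.2               # two past states seed the recursion
    trajectory = []
    for _ in range(5):
        noise = random.gauss(0.0, 1.0)
        x_t = x_mean + a1 * (x_prev1 - x_mean) + a2 * (x_prev2 - x_mean) + b * noise
        trajectory.append(round(x_t, 3))
        x_prev2, x_prev1 = x_prev1, x_t

    print(trajectory)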
The observation model p(z_t | x_t) cannot be directly estimated from the data, requiring assumptions to be made in order to estimate it. Isard 1998 assumes that the clutter which may make the object not visible is a Poisson random process with spatial density λ and that any true target measurement is unbiased and normally distributed with standard deviation σ.
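A hedged sketch of the kind of clutter-aware measurement weight such assumptions lead to is given below; it follows the generic idea of Gaussian peaks for candidate true measurements on top of a uniform clutter floor rather than the exact expression in the paper, and the constants are placeholders.

    import math

    def clutter_likelihood(features, predicted, sigma=1.0, clutter_term=0.1):
        # Illustrative weight for one sample: a constant floor accounts for the
        # possibility that every detected feature is clutter, while each feature
        # also contributes a Gaussian peak in case it is the true measurement.
        weight = clutter_term
        for z in features:
            weight += math.exp(-0.5 * ((z - predicted) / sigma) ** 2)
        return weight

    # Hypothetical feature positions detected along one measurement line.
    print(clutter_likelihood([0.4, 2.7, -1.1], predicted=0.5))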
The basic condensation algorithm is used to track a single object in time. It is possible to extend the condensation algorithm using a single probability distribution to describe the likely states of multiple objects to track multiple objects in a scene at the same time.
Since clutter can cause the object probability distribution to split into multiple peaks, each peak represents a hypothesis about the object configuration. Smoothing is a statistical technique of conditioning the distribution based on both past and future measurements once the tracking is complete in order to reduce the effects of multiple peaks. Smoothing cannot be directly done in real-time since it requires information of future measurements.
Applications
The algorithm can be used for vision-based robot localization of mobile robots. Instead of tracking the position of an object in the scene, however, the position of the camera platform is tracked. This allows the camera platform to be globally localized given a visual map of the environment.
Extensions of the condensation algorithm have also been used to recognize human gestures in image sequences. This application of the condensation algorithm impacts the range of human–computer interactions possible. It has been used to recognize simple gestures of a user at a whiteboard to control actions such as selecting regions of the boards to print or save them. Other extensions have also been used for tracking multiple cars in the same scene.
The condensation algorithm has also been used for face recognition in a video sequence.
Resources
An implementation of the condensation algorithm in C can be found on Michael Isard’s website.
An implementation in MATLAB can be found on the Mathworks File Exchange.
An example of implementation using the OpenCV library can be found on the OpenCV forums.
See also
Particle filter – Condensation is the application of Sampling Importance Resampling (SIR) estimation to contour tracking
References
Computer vision | Condensation algorithm | Engineering | 1,382 |
6,070,432 | https://en.wikipedia.org/wiki/Solitary%20lymphatic%20nodule | The solitary lymphatic nodules (or solitary follicles) are structures found in the small intestine and large intestine.
Small intestine
The solitary lymphatic nodules are found scattered throughout the mucous membrane of the small intestine, but are most numerous in the lower part of the ileum.
Their free surfaces are covered with rudimentary villi, except at the summits, and each gland is surrounded by the openings of the intestinal glands.
Each consists of a dense interlacing retiform tissue closely packed with lymph-corpuscles, and permeated with an abundant capillary network.
The interspaces of the retiform tissue are continuous with larger lymph spaces which surround the gland, through which they communicate with the lacteal system.
They are situated partly in the submucous tissue, partly in the mucous membrane, where they form slight projections of its epithelial layer.
Large intestine
The solitary lymphatic nodules of the large intestine are most abundant in the cecum and vermiform process, but are irregularly scattered also over the rest of the intestine.
They are similar to those of the small intestine.
See also
Peyer's patch
References
Digestive system
Lymphatic system
Lymphatic tissue | Solitary lymphatic nodule | Biology | 281 |
1,362,251 | https://en.wikipedia.org/wiki/Airshed | An airshed is a geographical area where local topography and meteorology limit the dispersion of pollutants away from the area. They are formed by air masses moving across a landscape, thus influencing the atmospheric composition of that area. Their boundaries are loosely defined, but can be quantified. Airborne chemicals disperse throughout an airshed and enter bodies of water in the area.
Defining airsheds
Airsheds are difficult to define precisely because they are spatially variable and can change over time. Their boundaries may change due to weather conditions and the sources of pollutants. Airsheds can be quantified for specific events, by season, or over a long period of time. Scientists model the movement of air masses over time to construct the dimensions of the upwind and downwind airsheds for a region.
Urban Airshed Model (UAM)
The Urban Airshed Model (UAM) is a three-dimensional grid model that is used to simulate chemical and physical atmospheric processes. The United States Environmental Protection Agency uses this model to develop air quality plans for ozone in urban areas.
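For orientation only, the idea of a grid model can be illustrated by repeatedly updating pollutant concentrations cell by cell; the one-dimensional toy advection step below is a drastic simplification of UAM's three-dimensional chemistry and transport, and every number in it is arbitrary.

    # Toy illustration of the grid idea behind airshed models: the pollutant
    # concentration in each cell is updated by wind-driven transport. This is
    # not the UAM formulation, just a 1-D upwind advection step.
    concentration = [0.0, 0.0, 5.0, 0.0, 0.0]   # arbitrary units per grid cell
    wind = 0.4                                   # fraction moved downwind per step

    for step in range(3):
        updated = concentration[:]
        for i in range(1, len(concentration)):
            moved = wind * concentration[i - 1]
            updated[i] += moved
            updated[i - 1] -= moved
        concentration = updated
        print([round(c, 2) for c in concentration])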
Uni-directional Impacts
The upwind airshed of a city represents the area where the emissions and pollutants affecting that city originate, including compounds that form secondary pollutants such as fine particulate matter and ground-level ozone. Downwind effects in airsheds are responsible for the spread of those pollutants outside of the city. The downwind airshed tends to affect a wider area than the upwind airshed, because it combines pollutants from the upwind airshed with those produced within the city itself.
References
External links
Georgia Basin-Puget Sound International Airshed Strategy
Meteorological phenomena | Airshed | Physics | 341 |
39,596,973 | https://en.wikipedia.org/wiki/GALEX%20Arecibo%20SDSS%20Survey | GALEX Arecibo SDSS Survey (GASS) is a large targeted survey at Arecibo Observatory that has been underway since 2008 to measure the neutral hydrogen content of a representative sample of approximately 1000 massive galaxies selected using the Sloan Digital Sky Survey and GALEX imaging surveys. The telescope being used is the world's largest single-dish radio telescope and can receive signals from distant objects.
See also
GALEX
Sloan Digital Sky Survey
Radio astronomy
References
Astronomical imaging
Astronomical surveys | GALEX Arecibo SDSS Survey | Astronomy | 96 |
60,129,248 | https://en.wikipedia.org/wiki/Delia%20Milliron | Delia J. Milliron is the T. Brockett Hudson Professor in Chemical Engineering at the University of Texas at Austin. Milliron leads a research team that focuses on developing and studying the properties of new electronic nanomaterials. Her team pursues studies on nanocrystals, nanoscale interfaces, and controlled assemblies of nanocrystals. Her team takes a systematic approach towards elucidating effects that arise at the nanoscale with a special focus on structure-property relationships.
Among many other topics, she is well known for her discoveries leading to development and innovation of technologies in the energy sciences. For her development of energy-efficient "smart window" coating technologies, Milliron is the co-founder and chief scientific officer of Heliotrope Technologies.
Research and career
Delia Milliron (Markiewicz) received her A.B. in Chemistry and Materials Science and Engineering from Princeton University where she performed undergraduate research with Jeffrey Schwartz and Antoine Kahn. During her undergraduate research experiences (and internships), Milliron established an early publication record on techniques and topics spanning from magnetic force microscopy to polymer cross-linking. Milliron would go on to receive her Ph.D. in Physical Chemistry from UC Berkeley in the laboratory of Paul Alivisatos where her thesis was on "New materials for nanocrystal solar cells" (2004). Milliron's research during her early career was distinguished by studies on shape control of nanomaterials, charge transfer, and preparation of hybrid nanocrystal-polymer photovoltaic cells.
After graduate school, Milliron held a post-doctoral research position at the IBM T.J. Watson Research Center and was then a research staff member at the IBM Almaden Research Center. At IBM, Milliron's publication record included studies on phase change nanomaterials and topics relevant to self-assembly of nanostructures. She also notably contributed to innovations in the field surrounding preparation of metal-chalcogen clusters and applications thereof. In 2008, Milliron transitioned to Lawrence Berkeley National Lab, where she led a research team as a Staff Scientist in the Inorganic Nanostructures Facility of the Molecular Foundry. Milliron served as the Deputy Director of the Molecular Foundry at large from 2008 to 2012. During her time at the Foundry, Milliron continued to contribute to fundamental questions in nanoscience with technological impact. She contributed to advances in robotic nanocrystal synthesis through the development of WANDA with her then post-doc Emory Chan. And, importantly for applications in the energy sciences, she began to explore topics relevant to innovations in window coating technology. Milliron's research continued to be distinguished by advancing fundamental knowledge in the field of nanoscience through studies on mixed ionic and electronic conductors, plasmonic nanocrystals, nanocrystal assemblies, and nanocrystal phase transitions.
Milliron and her research group moved to UT Austin in 2014. In addition to her current faculty appointment at UT Austin, Milliron is a co-principal investigator for the Center for Dynamics and Control of Materials a National Science Foundation Materials Research Science and Engineering Center (MRSEC). As part of the MRSEC, Milliron is also the faculty co-leader for the internal research group on "Reconfigurable and Porous Nanoparticle Networks".
Notable publications and patents
Milliron has been prolific both in her publication record and in the technological impact of her research, which has led to over 17 patents.
Awards
Edith and Peter O'Donnell Award in Engineering, The Academy of Medicine, Engineering and Science of Texas (2018)
Norman Hackerman Award in Chemical Research, The Welch Foundation (2017)
Sloan Research Fellow, Sloan Foundation (2016)
Resonate Award Winner, Resnick Sustainability Institute, Caltech (2015)
DOE Early Career Research Program Awardee for "Inorganic nanocomposite electrodes for electrochemical energy storage and energy conversion" (2010-2015)
R&D 100 Award for Universal Smart Windows (2013)
~$3M ARPA-E Award for "Low-Cost Solution Processed Universal Smart Window Coatings" (2013)
Mohr Davidow Ventures Innovators Award, CITRIS and the Banatao Institute (2010)
R&D 100 Award for Nanocrystal Solar Cells (2009)
References
American scientists
Nanomaterials
Living people
Year of birth missing (living people)
American women scientists
21st-century American women | Delia Milliron | Materials_science | 927 |
54,899,655 | https://en.wikipedia.org/wiki/List%20of%20craters%20on%20minor%20planets | This is a list of all named craters on minor planets in the Solar System, as named by the IAU's Working Group for Planetary System Nomenclature. In addition, tentatively named craters, such as those on Pluto, may also be listed. The number of craters is given in parentheses. For a full list of all craters, see list of craters in the Solar System.
Images
Arrokoth (1)
Ceres (90)
Eros (37)
Gaspra (31)
Ida (21)
Itokawa (10)
Lutetia (19)
Mathilde (23)
Pluto (14)
Šteins (23)
Vesta (90) | List of craters on minor planets | Astronomy | 131 |
23,655,997 | https://en.wikipedia.org/wiki/Septocellula | Septocellula is an extinct genus of fly in the family Dolichopodidae. It contains only one species, Septocellula asiatica, described from Eocene amber found near Fushun, China.
The genus Septocellula was first described in 1981 by You-chong Hong, who assigned to it three species: Septocellula asiatica, Septocellula fera and Septocellula trichopoda. In 2002, the latter two species were transferred by Hong to their own genera – Orbilabia and Wangia (later renamed Fushuniregis), respectively.
References
†
†
Prehistoric Diptera genera
†
Eocene insects
Monotypic prehistoric insect genera
Eocene animals of Asia
Prehistoric animals of China
Amber | Septocellula | Physics | 154 |
74,340,759 | https://en.wikipedia.org/wiki/Cystobasidium%20fimetarium | Cystobasidium fimetarium is a species of fungus in the order Cystobasidiales. It is a fungal parasite forming small gelatinous basidiocarps (fruit bodies) on various ascomycetous fungi (including Lasiobolus and Thelebolus spp) on dung. Microscopically, it has auricularioid (laterally septate) basidia producing basidiospores that germinate by budding off yeast cells. The species is known from Europe and North America.
Taxonomy
The species was originally described in 1803 on cow dung by Danish biologist Heinrich Schumacher who assigned it to Tremella, a genus then used for almost any fungus with gelatinous fruit bodies. In 1887 French mycologist Émile Boudier refound the species on goat dung in France and, discovering that it had auricularioid basidia (unlike Tremella species), transferred it to the auricularioid genus Helicobasidium.
In 1889, German mycologist Joseph Schröter described Platygloea fimicola as a new auricularioid species on rabbit dung from modern-day Poland. In 1898 Swedish mycologist Gustaf Lagerheim described Jola lasioboli as a new auricularioid species on cow dung from Norway. In 1924, German mycologist Walther Neuhoff transferred the latter species to his new genus Cystobasidium, based on the swollen, cyst-like probasidia from which the basidia emerge.
In 1999, British mycologist Peter Roberts noted that all these appeared to represent the same species and that Tremella fimetaria was the earliest name. Accordingly, he proposed the new combination Cystobasidium fimetarium for the species.
Molecular research, based on cladistic analysis of DNA sequences, has confirmed that the species is distinct and not closely related to other auricularioid fungi.
Description
Basidiocarps are waxy-gelatinous, disc-shaped to irregularly pustular, pale pinkish lilac, 1–4 mm in diameter. Microscopically, the hyphae are occasionally clamped, 1.5 to 3 μm wide, producing occasional haustorial cells that attach to host hyphae. Basidia emerge from swollen probasidia; they are tubular, often recurved, 25-55 x 3-4 μm long, and laterally septate, forming four cells. Basidiospores are hyaline, smooth, and ellipsoid to slightly fusoid, measuring 6–11.5 x 3-5 μm; they germinate by budding off subglobose to ovoid yeast cells that form pinkish colonies in culture.
Habitat and distribution
Cystobasidium fimetarium is a parasite of ascomycetous fungi on dung, including species of Lasiobolus and Thelebolus. It is known from Europe (Denmark, France, Germany, Netherlands, Norway, Poland, Spain, and Sweden) and North America (Canada) but has rarely been encountered. The only known British collection is on Thelebolus crustaceus from grouse dung in Scotland.
References
Pucciniomycotina
Fungi described in 1803
Fungus species | Cystobasidium fimetarium | Biology | 683 |
66,287,433 | https://en.wikipedia.org/wiki/Kepler-409b | Kepler-409b is a super-Earth orbiting Kepler-409, a G-type main-sequence star, with an orbital period of 69 days. Kepler-409b has a radius 1.199 times that of Earth and a mass 6 times that of Earth. It was discovered in 2014 via the transit method, using observations from the Kepler space telescope.
Possible exomoon
In 2020, a possible exomoon was reported on the basis of transit timing variations, but follow-up observations deemed its existence unlikely.
References
Exoplanets discovered in 2014
Exoplanets discovered by the Kepler space telescope
Terrestrial planets
Super-Earths
Cygnus (constellation) | Kepler-409b | Astronomy | 144 |
58,269,490 | https://en.wikipedia.org/wiki/Nicholas%20Young%20%28mathematician%29 | Nicholas John Young is a British mathematician working in operator theory, functional analysis and several complex variables. He is a research professor at the University of Leeds. Much of his work has been about the interaction of operator theory and function theory.
Publications
Young has written more than a hundred papers, over 30 of them in collaboration with Jim Agler. He is the author of the book An Introduction to Hilbert Space.
His Ph.D. adviser was Vlastimil Pták, and he has had 5 Ph.D. students.
References
1943 births
Living people
Academics of the University of Leeds
Hilbert spaces
Operator theorists
20th-century British mathematicians
Alumni of the University of Oxford | Nicholas Young (mathematician) | Physics | 134 |
30,502,266 | https://en.wikipedia.org/wiki/Vladimir%20Karapetoff | Vladimir Karapetoff (January 8, 1876 in Saint Petersburg, Russian Empire – January 11, 1948) was a Russian-American electrical engineer, inventor, professor, and author.
Life
He was the son of Nikita Ivanovich Karapetov and Anna Joakimovna Karapetova. Karapetoff first studied at Petersburg State University of Means of Communication taking his first certification in 1897 and a second in 1902. During his studies he was a consultant to the Russian government and served as an instructor teaching electrical engineering and hydraulics in three of Saint Petersburg's colleges.
In 1899, he went to the Technische Hochschule Darmstadt to study power systems, he wrote Über Mehrphasige Stromsysteme in 1900.
In 1903 Karapetoff emigrated to the United States and apprenticed at Westinghouse Electric and Manufacturing Company. The following year he began his long association with Cornell University as professor of electrical engineering.
Karapetoff published the first part of his Engineering Applications of Higher Mathematics in 1911, and followed with parts two to five in 1916. That year he also published Electrical Measurements and Testing, Direct and Alternating Current.
The American Institute of Electrical Engineers made him a Fellow in 1912. He became a charter member of the American Association of University Professors in 1915. Karapetoff was a research editor for Electrical World from 1917 to 1926.
As a member of the Socialist Party of America, Karapetoff ran for the New York State Senate in 1910; and for New York State Engineer and Surveyor in the State elections of 1914, 1920 and 1924.
Karapetoff wrote several articles on special relativity to show that
much of the difficulty in understanding the subject lies in the popular effort to reconcile relativity with our every-day experience. Once this non-technical point of view, with its childish illustrations and analogs, has been abandoned, and the relativity space is considered mathematically per se, the treatment is not different from any other branch of mathematics. Certain postulates are made and a structure is built step by step on these, using mathematical logic and its recognized tools and operations.
In the first three articles in the journal of the Optical Society of America (1924-1929), Karapetoff used the notion of a "velocity angle" α which expressed the relation of a velocity v to the speed of light by sin α = v/c (an equivalent definition was used before in 1921 by Paul Gruner while developing symmetric Minkowski diagrams). In the later articles he used instead the hyperbolic angle u called rapidity in relativity, and determined by tanh u = v/c. As explained in a footnote on page 73 of the 1936 article, when sin α = tanh u, one says that α is the Gudermannian angle of u, and u is the anti-Gudermannian of α. Thus he explains, "the present treatment is in terms of the anti-Gudermannian of the velocity angle previously used." The diagrammatic treatments given by Karapetoff are frequently called Minkowski diagrams in physical science.
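The relation between the velocity angle and the rapidity can be checked numerically; the short script below is not from Karapetoff's papers, but verifies that sin α = tanh u when α is the Gudermannian of u, and that rapidities add where collinear velocities compose relativistically.

    import math

    def gudermannian(u):
        # alpha = gd(u), the Gudermannian of the rapidity u.
        return math.atan(math.sinh(u))

    u = 1.2                                    # arbitrary rapidity
    alpha = gudermannian(u)
    print(math.isclose(math.sin(alpha), math.tanh(u)))   # True: sin(alpha) = v/c

    # Rapidities add where collinear velocities compose relativistically.
    u1, u2 = 0.7, 0.9
    v1, v2 = math.tanh(u1), math.tanh(u2)      # velocities as fractions of c
    v_composed = (v1 + v2) / (1 + v1 * v2)     # relativistic velocity addition
    print(math.isclose(math.tanh(u1 + u2), v_composed))  # True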
In 1930 he gave the first published statement of the Aufbau principle that describes the electron configurations of atoms (although Erwin Madelung had discovered it in 1926, Madelung did not publish until 1936).
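The ordering that this principle produces can be generated in a few lines; the sketch below simply sorts subshells by n + l, breaking ties by smaller n, which is the standard statement of the rule rather than anything specific to Karapetoff's 1930 paper.

    # Illustrative sketch of the Aufbau (Madelung) ordering: subshells are
    # filled in order of increasing n + l, with ties broken by smaller n.
    letters = "spdf"
    subshells = [(n, l) for n in range(1, 6) for l in range(min(n, 4))]
    order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
    print([f"{n}{letters[l]}" for n, l in order])
    # -> ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', ...]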
In 1928 the Franklin Institute awarded him the Elliott Cresson Medal. Karapetoff was an accomplished cellist, and in 1934 was awarded an honorary doctorate in music by the New York College of Music. On November 25, 1936, he married Rosalie Margaret Cobb at Dobbs Ferry, New York. The Brooklyn Polytechnic Institute bestowed on him the degree of Doctor of Science in 1937. Karapetoff died in 1948, and is buried in Ithaca, Tompkins County, New York.
In his honor, since 1992 Eta Kappa Nu has celebrated significant work of electrical engineers with the Vladimir Karapetoff Outstanding Technical Achievement Award.
Articles
1924: "Aberration of light in terms of the theory of relativity as illustrated on a cone and a pyramid", Journal of the Optical Society of America 9(3):223–33.
1926: "Straight-line relativity in oblique coordinates; also illustrated by a mechanical model", Journal of the Optical Society of America 13:155.
1929: "Relativity transformation of an oscillation into a travelling wave, and DeBroglie's postulate in terms of velocity angle", Journal of the Optical Society of America 19:253.
1936: "Restricted relativity in terms of hyperbolic functions of rapidities", American Mathematical Monthly 43:70–82.
1941: "A general outline of restricted relativity", Scripta Mathematica 8:145–63.
1944: "The special theory of relativity in hyperbolic functions", Reviews of Modern Physics 16:33–52, Abstract, link to pdf
1945: "The constancy of the velocity of light", in A Collection of Papers in Memory of Sir William Rowan Hamiltion, Scripta Mathematica at Yeshiva College.
References
Allen G. Debus (1968) "Vladimir Karapetoff", Who's Who in Science, Marquis Who's Who.
THINKS ELECTRON DIVISIBLE; Karapetoff Also Predicts Wave Velocity Greater Than That of Light in NYT on February 14, 1930 (subscription required)
1876 births
1948 deaths
Cornell University faculty
Electrical engineers from the Russian Empire
Fellows of the IEEE
American relativity theorists
Emigrants from the Russian Empire to the United States
American people of Russian descent
American people of Armenian descent
Engineers from Saint Petersburg
People from Tompkins County, New York
20th-century American inventors
American electrical engineers
Socialist Party of America politicians from New York (state)
Engineers from New York (state)
People involved with the periodic table | Vladimir Karapetoff | Chemistry | 1,159 |
3,969,518 | https://en.wikipedia.org/wiki/Liquefied%20gas | Liquefied gas (sometimes referred to as liquid gas) is a gas that has been turned into a liquid by cooling or compressing it. Examples of liquefied gases include liquid air, liquefied natural gas, and liquefied petroleum gas.
Liquid air
At the Lister Institute of Preventive Medicine, liquid air has been brought into use as an agent in biological research. An inquiry into the intracellular constituents of the typhoid bacillus, initiated under the direction of Doctor Allan Macfadyen, necessitated the separation of the cell-plasma of the organism. The method at first adopted for the disintegration of the bacteria was to mix them with silver-sand and churn the whole up in a closed vessel in which a series of horizontal vanes revolved at a high speed. But certain disadvantages attached to this procedure, and accordingly some means was sought to do away with the sand and triturate the bacilli per se. This was found in liquid air, which, as had long before been shown at the Royal Institution, has the power of reducing materials like grass or the leaves of plants to such a state of brittleness that they can easily be powdered in a mortar. By its aid a complete trituration of the typhoid bacilli has been accomplished at the Jenner Institute, and the same process, already applied with success also to yeast cells and animal cells, is being extended in other directions.
When air is liquefied the oxygen and nitrogen are condensed simultaneously. However, owing to its greater volatility the latter boils off the more quickly of the two, so that the remaining liquid becomes gradually richer and richer in oxygen.
Liquefied natural gas
Liquefied natural gas is natural gas that has been liquefied for the purpose of storage or transport. Since transporting natural gas by pipeline requires a large network that crosses various terrains and oceans, it demands huge investment and long-term planning. Before transport, natural gas is instead liquefied by cooling it to approximately −162 °C. The liquefied gas is then transported in tankers with special airtight compartments. When the tanks are opened and the liquid is exposed to atmospheric pressure, it boils off as it absorbs heat from the surrounding air or its container.
References
See also
Liquid oxygen
Liquid nitrogen
Liquid hydrogen
Liquid helium
Laboratory techniques
Cryogenics | Liquefied gas | Physics,Chemistry | 480 |
18,063,804 | https://en.wikipedia.org/wiki/Eugene%20C.%20Bingham | Eugene Cook Bingham (8 December 1878 – 6 November 1945) was a professor and head of the department of chemistry at Lafayette College. Bingham made many contributions to rheology, a term he is credited (along with Markus Reiner) with introducing. He was a pioneer in both its theory and practice. The type of fluid known as a Bingham plastic or Bingham Fluid is named after him, as is Bingham Stress. He was also one of the people responsible for the construction of the Appalachian Trail.
Biography
Bingham was born on 8 December 1878 in Cornwall, Vermont.
He was awarded the Franklin Institute's Certificate of Merit in 1921 for his variable pressure viscometer. In 1922, as chairman of the Metric Committee of the American Chemical Society, he campaigned for the United States to adopt the metric system.
Bingham died on 6 November 1945 in Easton, Pennsylvania.
Legacy
The Society of Rheology has awarded the Bingham Medal annually since 1948.
Selected publications
Journal of Industrial and Engineering Chemistry (1914) vol. 6(3) pp. 233–237: A new viscometer for general scientific and technical purposes
Journal of Physical Chemistry (1914) vol. 18(2) pp. 157–165: The Viscosity of Binary Mixtures
Fluidity and Plasticity (1922) McGraw-Hill (Internet Digital Archive)
Journal of Physical Chemistry (1925) vol. 29(10) pp. 1201–1204: Plasticity
Review of Scientific Instruments (1933) vol. 4 p. 473: The New Science of Rheology
Journal of General Physiology (1944) vol. 28 pp. 79–94, pp. 131–149 [Bingham and Roepke], (1945) vol. 28 pp. 605–626: The Rheology of Blood
References
External links
Photograph of E. C. Bingham – Lafayette University Historical Photograph Collection
1878 births
1945 deaths
People from Cornwall, Vermont
American chemists
Rheologists
Fluid mechanics
American fluid dynamicists
Lafayette College faculty | Eugene C. Bingham | Engineering | 400 |
2,101,936 | https://en.wikipedia.org/wiki/Upsilon%20Scorpii | Upsilon Scorpii (υ Scorpii, abbreviated Upsilon Sco, υ Sco), formally named Lesath , is a star located in the "stinger" of the southern zodiac constellation of Scorpius, the scorpion. Based on parallax measurements obtained during the Hipparcos mission, it is approximately 580 light-years from the Sun. In the night sky it lies near the 1.6 magnitude star Lambda Scorpii, and the two form an optical pair that is sometimes called the "Cat's Eyes".
Nomenclature
υ Scorpii (Latinised to Upsilon Scorpii) is the star's Bayer designation.
It bore the traditional name Lesath (alternatively spelled Leschath, Lesuth), from the Arabic las'a "pass (or bite) of a poisonous animal"; but this is a miscorrection by Scaliger (a European astronomer who knew Arabic) for earlier "Alascha", which came from Arabic al laţkha "the foggy patch", referring to the nearby open cluster M7. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Lesath for this star on 21 August 2016 and it is now so included in the List of IAU-approved Star Names.
Together with Lambda Scorpii (Shaula), Lesath is listed in the Babylonian compendium MUL.APIN as dSharur4 u dShargaz, meaning "Sharur and Shargaz". In Coptic, they were called Minamref. The indigenous Boorong people of northwestern Victoria named it Karik Karik (together with Lambda Scorpii), "the Falcons".
In Chinese, 尾宿 (Wěi Xiù), meaning Tail, refers to an asterism consisting of Upsilon, Mu1, Epsilon, Zeta1, Zeta2, Eta, Theta, Iota1, Iota2, Kappa, and Lambda Scorpii. Consequently, the Chinese name for Upsilon Scorpii itself is 尾宿九 (Wěi Xiù jiǔ), "the Ninth Star of Tail".
Namesake
USS Lesuth (AK-125) was a United States Navy Crater class cargo ship named after the star.
Properties
This star has apparent magnitude +2.7 and belongs to spectral class B2 IV, with the luminosity class of 'IV' indicating it is a subgiant star. The star's luminosity is 12,300 times that of the Sun, while its surface temperature is 22,831 kelvins. The star has a radius of 6.1 times solar and 11 times the mass of the Sun.
References
Scorpii, Upsilon
Scorpii, 34
B-type subgiants
Scorpius
Lesath
085696
6508
158408
Durchmusterung objects | Upsilon Scorpii | Astronomy | 615 |
46,298,793 | https://en.wikipedia.org/wiki/Penicillium%20jamesonlandense | Penicillium jamesonlandense is a psychrotolerant species of the genus Penicillium. Penicillium jamesonlandense produces patulin.
Further reading
References
jamesonlandense
Fungi described in 2006
Fungus species | Penicillium jamesonlandense | Biology | 48 |