source | text |
|---|---|
https://en.wikipedia.org/wiki/Single-molecule%20magnetic%20sequencing | Magnetic sequencing is a single-molecule sequencing method in development. A DNA hairpin, containing the sequence of interest, is bound between a magnetic bead and a glass surface. A magnetic field is applied to stretch the hairpin open into single strands, and the hairpin refolds when the magnetic field is decreased. The hairpin length can be determined by direct imaging of the diffraction rings of the magnetic beads using a simple microscope. The DNA sequence is determined by measuring the changes in hairpin length that follow successful hybridization of complementary nucleotides.
Single-molecule sequencing vs. Next-generation sequencing
With the development of various next-generation sequencing platforms, there has been a substantial reduction in cost and an increase in the throughput of DNA sequencing. However, the majority of sequencing technologies rely on PCR-based clonal amplification of the DNA molecule in order to bring the signal into a detectable range. Sequencing of amplified clusters, or bulk sequencing, poses a read length-dependent phasing problem: during each cycle, not all of the molecules within the bulk successfully incorporate an additional nucleotide. As sequencing cycles accumulate, the signal of the lagging molecules eventually overwhelms the true signal. The phasing problem is a major limitation on the read lengths of next-generation sequencing technologies. Therefore, there is increased interest in developing single-molecule sequencing technologies, where no amplification is required. This not only shortens the preparation time for sequencing libraries, it also has the potential to achieve much longer read lengths, as lagging molecules with failed extensions can be ignored or considered separately. Previously known single-molecule sequencing technologies include Nanopore sequencing (Oxford Nanopore),
SMRT sequencing (Pacific Biosciences), and Heliscope single molecule sequencing (Helicos |
https://en.wikipedia.org/wiki/Archaeal%20translation | Archaeal translation is the process by which messenger RNA is translated into proteins in archaea. Not much is known on this subject, but on the protein level it seems to resemble eukaryotic translation.
Most of the initiation, elongation, and termination factors in archaea have homologs in eukaryotes. Shine-Dalgarno sequences are found in only a minority of genes in many phyla, with many leaderless mRNAs probably initiated by scanning. The process of ABCE1 ATPase-based recycling is also shared with eukaryotes.
Being prokaryotes without a nucleus, archaea perform transcription and translation at the same time, as bacteria do. |
https://en.wikipedia.org/wiki/Thermophobia | Thermophobia (adjective: thermophobic) is intolerance for high temperatures by either inorganic materials or organisms. The term has a number of specific usages.
In pharmacy, a thermophobic foam consisting of 0.1% betamethasone valerate was found to be at least as effective as conventional remedies for treating dandruff. In addition, the foam is non-greasy and does not irritate the scalp. Another use of thermophobic material is in treating hyperhidrosis of the axilla and the palm: a thermophobic foam named Bettamousse, developed by the Italian company Mipharm, was found to treat hyperhidrosis effectively.
In biology, some bacteria are thermophobic, such as Mycobacterium leprae, which causes leprosy. A thermophobic response in living organisms is a negative response to higher temperatures.
In physics, thermophobia is the motion of particles in mixtures (solutions, suspensions, etc.) towards areas of lower temperature, a particular case of thermophoresis.
In medicine, thermophobia refers to a sensory dysfunction, a sensation of abnormal heat, which may be associated with, e.g., hyperthyroidism.
See also
Heat intolerance |
https://en.wikipedia.org/wiki/Piet%20Groeneboom | Petrus (Piet) Groeneboom (born 24 September 1941 in Scheveningen) is a Dutch statistician who made major advances in the field of shape-constrained statistical inference such as isotonic regression, and also worked in probability theory.
Education and career
At the beginning of his tertiary studies in 1959, Groeneboom enrolled in medicine at the University of Amsterdam but quickly switched to psychology at the same university, obtaining a candidate degree in 1963. During his studies he attended a course on logic by the analytic philosopher Else M. Barth, whose influence, along with that of Lambert Meertens after his (Groeneboom's) candidate degree, he later credited with his decision to study mathematics. He was an assistant of Johannes de Groot. He obtained a master's degree in mathematics in 1971, also at the University of Amsterdam, and studied at the Vrije Universiteit Amsterdam from 1975 under Kobus Oosterhoff, obtaining his Ph.D. degree in 1979.
Before and immediately after obtaining his master's degree, Groeneboom worked at the psychological laboratory of the University of Amsterdam. After his second stint there ended in 1973, he moved to the Centrum Wiskunde & Informatica, which at the time was called the Mathematisch Centrum (Mathematical Centre), in the same city.
From 1979 to 1981, Groeneboom was a visiting assistant professor at the University of Washington, to which he would return from 1999 to 2013 as affiliate professor in the department of statistics. From 1981 on, he was again based at the Mathematical Centre before being appointed full professor of statistics at the University of Amsterdam in 1984. In 1988, he moved to Delft University of Technology, where he stayed until his retirement in 2006. From 2000 to 2006 he was additionally a part-time professor at the Vrije Universiteit Amsterdam.
Groeneboom has been a professor emeritus of statistics at Delft University of Technology since his retirement in 2006. He has also held positions at the Vrije Universiteit Amsterdam and the University of Was |
https://en.wikipedia.org/wiki/Gold%20cycle | The gold cycle is the biogeochemical cycling of gold through the lithosphere, hydrosphere, atmosphere, and biosphere. Gold is a noble transition metal that is highly mobile in the environment and subject to biogeochemical cycling, driven largely by microorganisms. Gold undergoes processes of solubilization, stabilization, bioreduction, biomineralization, aggregation, and ligand utilization throughout its cycle. These processes are influenced by various microbial populations and cycling of other elements such as carbon, nitrogen, and sulfur. Gold exists in several forms in the Earth's surface environment including Au(I/III)-complexes, nanoparticles, and placer gold particles (nuggets and grains). The gold biogeochemical cycle is highly complex and strongly intertwined with cycling of other metals including silver, copper, iron, manganese, arsenic, and mercury. Gold is important in the biotech field for applications such as mineral exploration, processing and remediation, development of biosensors and drug delivery systems, industrial catalysts, and for recovery of gold from electronic waste.
Lithosphere
The lithosphere is the dominant reservoir of gold, containing an estimated 2.6×10¹³ Mg. Today, gold exists primarily as electrum, in hard rock deposits like tellurides, and as particles in placers in Earth's crust. Gold cycling starts with the microbial weathering of gold-bearing rocks and minerals which mobilizes gold in the environment via release of elemental gold and solubilization. The Witwatersrand gold deposits host approximately 30% of the world's gold resources, a large proportion of which is directly associated with organic carbon derived from microbial mats. Gold ore has been mined in many countries, including Japan, India, Spain, Yugoslavia, South Africa, Australia, the United States of America, Canada, Colombia, Mexico, and Brazil.
Ocean
The ocean reservoir contains an estimated 5.6×10⁹ Mg of gold and oceanic gold concentration is about 4 ng Au/L wit |
https://en.wikipedia.org/wiki/Silk%20Platform | Silk is a technology company headquartered in Needham, Massachusetts, United States. Silk offers a cloud platform for enterprise customers with mission-critical applications. The company has offices in Boston and Israel.
History
CEO Dani Golan founded Silk in 2008. Originally called Kaminario, after the Japanese god of lightning, the company was a flash storage start-up.
The company's first venture capital funding round was announced in May 2011 with a $15 million investment from Globespan Capital Partners, Sequoia Capital, and Pitango Venture Capital. Silk received $25 million in series D funding in June 2012 from Tenaya Capital and existing investors. In December 2014 and January 2015, the company announced $68 million in a series E round from Lazarus Hedge Fund, Silicon Valley Bank, Mitsui & Co. Global Investment, and existing investors.
In 2017, the company evolved to focus purely on storage software in a Storage as a Service model. It partnered with Tech Data to offer customers inexpensive hardware to run their storage software on.
In 2020, the company rebranded itself as Silk.
Products
The Silk Platform is a virtualized data layer that sits between customers’ cloud infrastructure and databases/workloads. The Silk Platform decouples data from the cloud infrastructure, enabling workloads to be moved freely. They can be transferred from one cloud vendor to another, to the private cloud, or to on-premises. Silk's Tier 1 data services include zero-footprint clones, data replication, deduplication, and thin provisioning to minimize the cloud resources being used.
The Silk Platform is built on auto-scalable and symmetric active-active architecture that delivers performance across any workload profile, leveraging cloud native IaaS components. With the Silk Clarity capability, the platform uses machine-learning analytics to optimize resources. Silk Flex makes it possible to orchestrate and automate data management. |
https://en.wikipedia.org/wiki/Virtual%20microscopy | Virtual microscopy is a method of posting microscope images on, and transmitting them over, computer networks. This allows independent viewing of images by large numbers of people in diverse locations. It involves a synthesis of microscopy technologies and digital technologies. The use of virtual microscopes can transform traditional teaching methods by shifting from reliance on physical space, equipment, and specimens to a model that depends solely on computer and internet access. This increases the convenience of accessing the slide sets and makes the slides available to a broader audience. Digitized slides can have a high resolution and are resistant to being damaged or broken over time.
Prior to recent advances in virtual microscopy, slides were commonly digitized by various forms of film scanner and image resolutions rarely exceeded 5000 dpi. Nowadays, it is possible to achieve more than 100,000 dpi and thus resolutions approaching that visible under the optical microscope. This increase in scanning resolution comes at a price; whereas a typical flatbed or film scanner ranges in cost from $200 to $600, a 100,000 dpi slide scanner will range from $80,000 to $200,000.
See also
Digital pathology
Microscopy
Telepathology
Tissue Cytometry, a technique that brings the concept of flow cytometry to tissue sections, in situ, and helps to perform whole-slide scanning and quantification of markers while maintaining the spatial context. |
https://en.wikipedia.org/wiki/Index%20Herbariorum | The Index Herbariorum provides a global directory of herbaria (singular, herbarium; plural, herbaria) and their associated staff. This searchable online index allows scientists rapid access to data on the 3,400 locations where a total of 350 million botanical specimens are permanently housed. The Index Herbariorum has its own staff and website. Six editions of the Index were published from 1952 to 1974. The Index became available online in 1997.
The index was originally published by the International Association for Plant Taxonomy, which sponsored the first six editions (1952–1974); subsequently the New York Botanical Garden took over the responsibility for the index. The Index provides the supporting institution's name (often a university, botanical garden, or not-for-profit organization), its city and state, and each herbarium's acronym, along with contact information for staff members and their research specialties, and the important holdings of each herbarium's collection.
Editors
6th printed edition (1974), co-edited by Patricia Kern Holmgren, Director of the New York Botanical Garden
7th printed edition, ed. by Patricia Kern Holmgren.
8th printed edition, ed. by Patricia Kern Holmgren.
Online edition, prepared by Noel Holmgren of the New York Botanical Garden
2008+, ed. by Barbara M. Thiers, Director of the New York Botanical Garden Herbarium |
https://en.wikipedia.org/wiki/Thermal%20imaging%20in%20ornithology | The use of thermal imaging devices to monitor birds began in the 1960s. It underwent significant development from the end of the 20th century onwards. This was, at least in part, due to improvements in the quality and portability of thermal-imaging devices, and reductions in their cost.
Although primarily a nocturnal activity, thermal imaging can also be used in daylight, for example monitoring Eurasian bittern (Botaurus stellaris) and water rail (Rallus aquaticus) in dense vegetation.
One bird ringing organisation, the West Midlands Ringing Group (formerly Brewood Ringers), caught and ringed 424 adult skylarks (Alauda arvensis) in 2019, using thermal imaging to locate them; this was 81.4% of the total caught in Britain & Ireland that year. The group subsequently received the 2021 Marsh Award for Innovative Ornithology for its innovative use of thermal imaging technology in monitoring farmland birds. The group uses Pulsar Helion thermal imaging cameras and has determined that this not only helps to find more birds, but also reduces the disturbance caused to them, since less time needs to be spent in the field.
Other taxa
Thermal imaging has also been used to monitor mammal species including bats, mice, and ungulates, and even the health of trees. |
https://en.wikipedia.org/wiki/The%20Codebreakers | The Codebreakers – The Story of Secret Writing is a book by David Kahn, published in 1967, comprehensively chronicling the history of cryptography from ancient Egypt to the time of its writing. The United States government attempted to have the book altered before publication, and it succeeded in part.
Overview
Bradford Hardie III, an American cryptographer during World War II, contributed insider information, German translations from original documents, and intimate real-time operational explanations to The Codebreakers.
The Codebreakers is widely regarded as the best account of the history of cryptography up to its publication. William Crowell, the former deputy director of the National Security Agency, was quoted in Newsday magazine: "Before he (Kahn) came along, the best you could do was buy an explanatory book that usually was too technical and terribly dull."
The Puzzle Palace (1982), written by James Bamford, gives a history of the writing and publication of The Codebreakers. Kahn, then a journalist, was contracted to write a book on cryptology in 1961. He began writing it part-time, and then he quit his job to work on it full-time. The book was to include information on the NSA and, according to Bamford, the agency attempted to stop its publication. The NSA considered various options, including writing a negative review of Kahn's work to be published in the press to discredit him.
A committee of the United States Intelligence Board concluded that the book was "a possibly valuable support to foreign COMSEC authorities" and recommended "further low-key actions as possible, but short of legal action, to discourage Mr. Kahn or his prospective publishers". Kahn's publisher, Macmillan and Sons, handed over the manuscript to the government for review without Kahn's permission on 4 March 1966. Kahn and Macmillan eventually agreed to remove some material from the manuscript, particularly concerning the relationship between the NSA and its counterpart in the Un |
https://en.wikipedia.org/wiki/John%20Clive%20Ward | John Clive Ward (1 August 1924 – 6 May 2000) was an Anglo-Australian physicist who made significant contributions to quantum field theory, condensed-matter physics, and statistical mechanics. Andrei Sakharov called Ward one of the titans of quantum electrodynamics.
Ward introduced the Ward–Takahashi identity. He was one of the authors of the Standard Model of gauge particle interactions: his contributions were published in a series of papers he co-authored with Abdus Salam. He is also credited with being an early advocate of the use of Feynman diagrams. It has been said that physicists have made use of his principles and developments "often without knowing it, and generally without quoting him." The Ising model was another one of his research interests.
In 1955, Ward was recruited to work at the Atomic Weapons Research Establishment at Aldermaston. There, he independently derived a version of the Teller–Ulam design, for which he has been called the "father of the British H-bomb".
Early life
John Clive Ward was born in East Ham, London, on 1 August 1924. He was the son of Joseph William Ward, a civil servant who worked in the Inland Revenue, and his wife Winifred Palmer, a schoolteacher. He had a sister, Mary Patricia. He attended Chalkwell Elementary School and Westcliff High School for Boys. In 1938 he sat for and won a £100 scholarship to Bishop's Stortford College. He took the Higher School Certificate Examination in 1942, receiving distinctions in Mathematics, Physics, Chemistry and Latin, and was offered a postmastership (scholarship) to Merton College, Oxford.
Although the Second World War was raging at the time, Ward was not called up by the Army, and was allowed to complete his Bachelor of Arts degree in Engineering Science with first class honours, studying mathematics under J. H. C. Whitehead and E. C. Titchmarsh. He received a bursary from the Harmsworth Trust, and in October 1946, with the war over, secured a position as a graduate assistant to Maurice |
https://en.wikipedia.org/wiki/Certified%20broadcast%20radio%20engineer | Certified Broadcast Radio Engineer (CBRE) is a title granted to an individual in the United States who successfully meets the experience and test requirements of the certification, regulated by the Society of Broadcast Engineers (SBE). The CBRE title is protected by copyright laws. Individuals who use this title without consent from the Society of Broadcast Engineers could face legal action.
The SBE certifications were created to recognize individuals who practice in career fields which are not regulated by state licensing or Professional Engineering programs. Broadcast Engineering is regulated at the national level and not by individual states.
External links
SBE Certified Broadcast Radio Engineer (CBRE) Requirements & Application
SBE Official Website
Broadcast engineering
Professional certification in engineering |
https://en.wikipedia.org/wiki/International%20Congress%20of%20Human%20Genetics | The International Congress of Human Genetics is the foremost meeting of the international human genetics community. The first Congress was held in 1956 in Copenhagen, and the Congress has met every five years since then, with the exception of the 2021 meeting, which was postponed for two years because of the global COVID-19 pandemic. The Congress is held under the auspices of the International Federation of Human Genetics Societies, an umbrella organization founded by the American Society of Human Genetics, the European Society of Human Genetics and the Human Genetics Society of Australasia. Congresses have been held in such diverse venues as Berlin, Brisbane, Chicago, The Hague, Jerusalem, Mexico City, Paris, Rio de Janeiro, Vienna and Washington.
The purview of the International Congress of Human Genetics is all aspects of human genetics, including research, clinical practice, and education. The Congress now attracts thousands of participants, including M.D. medical geneticists, Ph.D. human geneticists and genetic counselors from 80 or more countries. It is by far the largest human genetics meeting in the world.
External links
History of the International Congress of Human Genetics
Genetics organizations
Recurring events established in 1956
Medical conferences |
https://en.wikipedia.org/wiki/Fungicide%20use%20in%20the%20United%20States | This article summarizes different crops, the common fungal problems they have, and how fungicides should be used to mitigate damage and crop loss. It also covers how specific fungal infections affect crops present in the United States.
Almonds
Alternaria leaf spot
Symptoms of Alternaria leaf spot appear as lesions with tan spots on the leaves. The centers of these lesions become black with fungal sporulation. This infection can lead to tree death within 3–4 years of the first serious outbreak. Orchards in high humidity areas result in the largest yield loss, often in excess of 50%. Yield loss tends to rise every year as the tree becomes weaker each year after infection. Three fungicide applications can achieve 60–80% control of leaf spot.
Anthracnose
Anthracnose was not seen on California almonds until the early 1990s. By 1996 it was widespread and causing severe yield losses throughout the state. Typical losses in 1996 were 10–15% of the almond crop with severely affected crops incurring losses of 25%. Under wet conditions, orange spore masses are produced and appear as visible droplets. Lesions on mature fruit are rusty orange and gum profusely. Once the diseased fruit die they become mummies that remain on the tree. The pathogen overwinters in these mummies. 80–90% control can be achieved by applying fungicides to protect the crop before rains begin. The California Department of Pesticide Regulation has estimated that without the fungicides to control anthracnose the state's almond production would drop 15–30%.
Brown rot
Damage from brown rot occurs several years after the infection strikes. The primary symptom is fruiting spur loss. Brown rot was first discovered on California almonds in the late 19th century and currently affects most almond-producing areas of California. Brown rot can be controlled using fungicides through bloom in order to protect the flower parts from brown rot attacks. Experiments have demonstrated that 44% of twig |
https://en.wikipedia.org/wiki/Suurballe%27s%20algorithm | In theoretical computer science and network routing, Suurballe's algorithm is an algorithm for finding two disjoint paths in a nonnegatively-weighted directed graph, so that both paths connect the same pair of vertices and have minimum total length. The algorithm was conceived by John W. Suurballe and published in 1974. The main idea of Suurballe's algorithm is to use Dijkstra's algorithm to find one path, to modify the weights of the graph edges, and then to run Dijkstra's algorithm a second time. The output of the algorithm is formed by combining these two paths, discarding edges that are traversed in opposite directions by the paths, and using the remaining edges to form the two paths to return as the output.
The modification to the weights is similar to the weight modification in Johnson's algorithm, and preserves the non-negativity of the weights while allowing the second instance of Dijkstra's algorithm to find the correct second path.
The problem of finding two disjoint paths of minimum weight can be seen as a special case of a minimum cost flow problem, where in this case there are two units of "flow" and nodes have unit "capacity". Suurballe's algorithm can also be seen as a special case of a minimum cost flow algorithm that repeatedly pushes the maximum possible amount of flow along a shortest augmenting path.
The first path found by Suurballe's algorithm is the shortest augmenting path for the initial (zero) flow, and the second path found by Suurballe's algorithm is the shortest augmenting path for the residual graph left after pushing one unit of flow along the first path.
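A minimal Python sketch of the procedure described above — Dijkstra, reweighting, reversal of the first path's edges, a second Dijkstra, then discarding edges used in opposite directions — assuming the graph is given as an adjacency dict in which every vertex is reachable from s (the function names and input format are illustrative, not from the original paper):

```python
import heapq

def dijkstra(graph, s):
    """Shortest-path distances and predecessor tree from s."""
    dist, parent, heap = {s: 0.0}, {s: None}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, parent

def path_to(parent, t):
    out = []
    while t is not None:
        out.append(t)
        t = parent[t]
    return out[::-1]

def suurballe(graph, s, t):
    dist, parent = dijkstra(graph, s)
    p1 = path_to(parent, t)
    # Reduced costs w'(u,v) = w(u,v) + d(s,u) - d(s,v) are non-negative
    # and vanish along shortest paths (cf. Johnson's algorithm).
    g2 = {u: {v: w + dist[u] - dist[v] for v, w in nbrs.items()}
          for u, nbrs in graph.items()}
    # Reverse the (now zero-cost) edges of P1 so the second search can
    # "cancel" portions of the first path.
    for u, v in zip(p1, p1[1:]):
        del g2[u][v]
        g2.setdefault(v, {})[u] = 0.0
    p2 = path_to(dijkstra(g2, s)[1], t)
    # Discard edge pairs traversed in opposite directions, then thread
    # the surviving edges into two disjoint s-t paths.
    e1, e2 = set(zip(p1, p1[1:])), set(zip(p2, p2[1:]))
    edges = (e1 - {(v, u) for u, v in e2}) | (e2 - {(v, u) for u, v in e1})
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    paths = []
    for _ in range(2):
        node, walk = s, [s]
        while node != t:
            node = succ[node].pop()
            walk.append(node)
        paths.append(walk)
    return paths
```

For example, on the graph {'s': {'u': 1, 'v': 5}, 'u': {'t': 5, 'v': 1}, 'v': {'t': 1}}, the first search returns s–u–v–t, the second search cancels the u–v edge, and the combined result is the disjoint pair s–u–t and s–v–t.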
Definitions
Let G = (V, E) be a weighted directed graph with vertex set V and edge set E (figure A); let s be a designated source vertex in G, and let t be a designated destination vertex. Let each edge (u, v) in E, from vertex u to vertex v, have a non-negative cost w(u, v).
Define d(s, u) to be the cost of the shortest path to vertex u from vertex s in the shortest path tree rooted at s (figure C).
Note: No |
https://en.wikipedia.org/wiki/Capital%20adequacy%20ratio | The Capital Adequacy Ratio (CAR), also known as the Capital to Risk (Weighted) Assets Ratio (CRAR), is the ratio of a bank's capital to its risk. National regulators track a bank's CAR to ensure that it can absorb a reasonable amount of loss and complies with statutory capital requirements.
It is a measure of a bank's capital. It is expressed as a percentage of a bank's risk-weighted credit exposures. The enforcement of regulated levels of this ratio is intended to protect depositors and promote stability and efficiency of financial systems around the world.
Two types of capital are measured: tier one capital, which can absorb losses without a bank being required to cease trading, and tier two capital, which can absorb losses in the event of a winding-up and so provides a lesser degree of protection to depositors.
Formula
Capital adequacy ratios (CARs) are a measure of the amount of a bank's core capital expressed as a percentage of its risk-weighted assets.
Capital adequacy ratio is defined as:

CAR = (T1 + T2) / Risk

where T1 is tier 1 capital and T2 is tier 2 capital:
TIER 1 CAPITAL = (paid up capital + statutory reserves + disclosed free reserves) - (equity investments in subsidiary + intangible assets + current & brought-forward losses)
TIER 2 CAPITAL = A) Undisclosed Reserves + B) General Loss reserves + C) hybrid debt capital instruments and subordinated debts
where Risk is either risk-weighted assets (a) or the respective national regulator's minimum total capital requirement. If using risk-weighted assets,

CAR = (T1 + T2) / a ≥ 10%.
The percent threshold varies from bank to bank (10% in this case, a common requirement for regulators conforming to the Basel Accords) and is set by the national banking regulator of different countries.
Two types of capital are measured: tier one capital (T1 above), which can absorb losses without a bank being required to cease trading, and tier two capital (T2 above), which can absorb losses in the event of a winding-up and so provides a lesser degree of protection to depositors.
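A minimal worked example of the formula (hypothetical balance-sheet figures; a sketch, not a regulatory calculation):

```python
# Hypothetical figures, in millions of currency units.
tier1 = 4_000                  # paid-up capital + statutory reserves + disclosed free reserves, net of deductions
tier2 = 1_500                  # undisclosed reserves + general loss reserves + hybrid/subordinated instruments
risk_weighted_assets = 50_000  # the denominator "a" in the formula above

car = (tier1 + tier2) / risk_weighted_assets
print(f"CAR = {car:.1%}")      # CAR = 11.0%, above a 10% regulatory floor
```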
Use
Capital adequacy ratio is the ratio w |
https://en.wikipedia.org/wiki/Adelchi%20Negri | Adelchi Negri (16 July 1876 – 19 February 1912) was an Italian pathologist and microbiologist born in Perugia.
He studied medicine and surgery at the University of Pavia, where he was a pupil of Camillo Golgi (1843–1926). After graduation in 1900, he became an assistant to Golgi at his pathological institute. In 1909 Negri became a professor of bacteriology, and the first official instructor of bacteriology in Pavia. On 19 February 1912 he died of tuberculosis at age 35.
Negri performed extensive research in the fields of histology, hematology, cytology, protozoology and hygiene. In 1903 he discovered the eponymous Negri bodies, defined as cytoplasmic inclusion bodies located in the Purkinje cells of the cerebellum in cases of rabies in animals and humans. He documented his findings in an article titled Contributo allo studio dell'eziologia della rabbia, published in the journal Bollettino della Società medico-chirurgica. At the time, Negri mistakenly described the pathological agent of rabies as a parasitic protozoan. A few months later, Paul Remlinger (1871–1964) at the Constantinople Imperial Bacteriology Institute correctly demonstrated that the aetiological agent of rabies was not a protozoan, but a filterable virus.
Negri went on, however, to demonstrate in 1906 that the smallpox vaccine, then known as "vaccine virus", or "variola vaccinae", was also a filterable virus. During the latter part of his career, he became interested in malaria and was at the forefront in efforts to eradicate it from Lombardy. In 1906 he married his colleague Lina Luzzani and six years later, at the age of thirty-five, died of tuberculosis.
Tomb
Negri was buried in the Monumental Cemetery of Pavia (Viale San Giovannino), along the central lane, on the left, near the tombs of other two important medical scientists, the anatomist Bartolomeo Panizza and his teacher, the Nobel Prize–winning Camillo Golgi. |
https://en.wikipedia.org/wiki/Session%20border%20controller | A session border controller (SBC) is a network element deployed to protect SIP based voice over Internet Protocol (VoIP) networks.
Early deployments of SBCs were focused on the borders between two service provider networks in a peering environment. This role has now expanded to include significant deployments between a service provider's access network and a backbone network to provide service to residential and/or enterprise customers.
The term "session" refers to a communication between two or more parties – in the context of telephony, this would be a call. Each call consists of one or more call signaling message exchanges that control the call, and one or more call media streams which carry the call's audio, video, or other data along with information of call statistics and quality. Together, these streams make up a session. It is the job of a session border controller to exert influence over the data flows of sessions.
The term "border" refers to a point of demarcation between one part of a network and another. As a simple example, at the edge of a corporate network, a firewall demarcates the local network (inside the corporation) from the rest of the Internet (outside the corporation). A more complex example is that of a large corporation where different departments have security needs for each location and perhaps for each kind of data. In this case, filtering routers or other network elements are used to control the flow of data streams. It is the job of a session border controller to assist policy administrators in managing the flow of session data across these borders.
The term "controller" refers to the influence that session border controllers have on the data streams that comprise sessions, as they traverse borders between one part of a network and another. Additionally, session border controllers often provide measurement, access control, and data conversion facilities for the calls they control.
Functions
SBCs commonly maintain full session sta |
https://en.wikipedia.org/wiki/Mohr%E2%80%93Mascheroni%20theorem | In mathematics, the Mohr–Mascheroni theorem states that any geometric construction that can be performed by a compass and straightedge can be performed by a compass alone.
It must be understood that "any geometric construction" refers to figures that contain no straight lines, as it is clearly impossible to draw a straight line without a straightedge. It is understood that a line is determined provided that two distinct points on that line are given or constructed, even though no visual representation of the line will be present. The theorem can be stated more precisely as:
Any Euclidean construction, insofar as the given and required elements are points (or circles), may be completed with the compass alone if it can be completed with both the compass and the straightedge together.
Though the use of a straightedge can make a construction significantly easier, the theorem shows that any set of points that fully defines a constructed figure can be determined with compass alone, and the only reason to use a straightedge is for the aesthetics of seeing straight lines, which for the purposes of construction is functionally unnecessary.
History
The result was originally published by Georg Mohr in 1672, but his proof languished in obscurity until 1928. The theorem was independently discovered by Lorenzo Mascheroni in 1797 and it was known as Mascheroni's Theorem until Mohr's work was rediscovered.
Several proofs of the result are known. Mascheroni's proof of 1797 was based largely on the idea of using reflection in a line as the major tool. Mohr's solution was different. In 1890, August Adler published a proof using the inversion transformation.
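As an illustration of that reflection primitive: with a compass alone, the mirror image of a point C across the line through A and B is obtained as the second intersection of the circle centred at A with radius |AC| and the circle centred at B with radius |BC|. A short coordinate check of this fact (the helper name and point values are illustrative):

```python
import math

def reflect_across_line(a, b, c):
    """Reflect point c across the line through a and b (coordinate
    check of the compass construction; points are (x, y) tuples)."""
    ax, ay = a; bx, by = b; cx, cy = c
    dx, dy = bx - ax, by - ay
    t = ((cx - ax) * dx + (cy - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy      # foot of the perpendicular
    return (2 * fx - cx, 2 * fy - cy)

a, b, c = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0)
r = reflect_across_line(a, b, c)           # (1.0, -2.0)
# The reflected point lies on both circles, as the construction claims:
assert math.isclose(math.dist(a, r), math.dist(a, c))
assert math.isclose(math.dist(b, r), math.dist(b, c))
```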
An algebraic approach uses the isomorphism between the Euclidean plane and the real coordinate space ℝ². In this way, a stronger version of the theorem was proven in 1990. It also shows the dependence of the theorem on Archimedes' axiom (which cannot be formulated in a first-order language).
Constructive proof
Outline
To |
https://en.wikipedia.org/wiki/Death%20and%20the%20Internet | A recent extension to the cultural relationship with death is the increasing number of people who die having created a large amount of digital content, such as social media profiles, that will remain after death. This may result in concern and confusion, because of automated features of dormant accounts (e.g. birthday reminders), uncertainty of the deceased's preferences that profiles be deleted or left as a memorial, and whether information that may violate the deceased's privacy (such as email or browser history) should be made accessible to family.
Issues with how this information is sensitively dealt with are further complicated as it may belong to the service provider (not the deceased), and many providers do not have clear policies on what happens to the accounts of deceased users. While some sites, including Facebook and Twitter, have policies related to death, on other sites accounts remain dormant until they are deleted due to inactivity or transferred to family or friends. The FADA (Fiduciary Access to Digital Assets Act) was enacted to make it possible to transfer digital possessions legally.
More broadly, the heavy increase in social media use is affecting cultural practices surrounding death. "Virtual funerals" and other forms of previously physical memorabilia are being introduced into the digital world, complete with public details of a person's life and death.
E-mail
Gmail and Hotmail allow the email accounts of the deceased to be accessed provided certain requirements are met. Yahoo! Mail will not provide access, citing the No Right of Survivorship and Non-Transferability clause in the Yahoo! terms of service. In 2005, Yahoo! was ordered by the Probate Court of Oakland County, Michigan, to release emails of deceased US Marine Justin Ellsworth to his father, John Ellsworth.
By website
Facebook
Policies
In its early days, Facebook used to delete profiles of dead people, but does not anymore. In October 2009, the company introduced "memorial pages" in response to multiple user |
https://en.wikipedia.org/wiki/Hydroxypropyl%20starch | Hydroxypropyl starch is a type of modified starch used as a food additive. It has the E number E1440. Hydroxypropyl starch is not absorbed intact by the gut, but is significantly hydrolyzed by intestinal enzymes and then fermented by the intestinal microbiota. |
https://en.wikipedia.org/wiki/Dolipore%20septum | Dolipore septa are specialized dividing walls between cells (septa) found in almost all species of fungi in the phylum Basidiomycota. Unlike most fungal septa, they have a barrel-shaped swelling around their central pore, which is about 0.1–0.2 µm wide. This structure is typically capped at either end by specialized membranes, called "parenthesomes" (after their parenthesis-like appearance under a microscope) or simply "pore caps".
Dolipore septa vary significantly between monokaryotic and dikaryotic hyphae, which form at different points in basidiomycete life cycles. In monokaryotic but not dikaryotic hyphae, the parenthesomes are continuous with the endoplasmic reticulum, and the septal walls are constructed from different material than the cell walls. All dolipore septa can allow cytoplasm, and sometimes mitochondria, to flow through their pores; those in monokaryotic hyphae have perforated parenthesomes, which allow cell nuclei to flow through as well.
The structure was first described by Royall Moore and James McAlear in 1962. |
https://en.wikipedia.org/wiki/Power-added%20efficiency | Power-added efficiency (PAE) is a metric for rating the efficiency of a power amplifier that takes into account the effect of the gain of the amplifier. It is calculated (in percent) as:

PAE = 100 × (P_out − P_in) / P_DC

where P_out is the RF output power, P_in is the RF input (drive) power, and P_DC is the DC power consumed.
It differs from most power efficiency measures, calculated (in percent) as:

η = 100 × P_out / P_DC
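A small numeric sketch contrasting the two formulas (the values are illustrative only):

```python
p_dc, p_in = 10.0, 0.5                # DC supply and RF drive power, watts
gain = 10.0                           # linear power gain (10 dB)
p_out = gain * p_in                   # 5.0 W of RF output

eff = 100 * p_out / p_dc              # 50.0% plain (drain) efficiency
pae = 100 * (p_out - p_in) / p_dc     # 45.0% -- drive power is charged against the amplifier
print(eff, pae)
```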
PAE will be very similar to η when the gain of the amplifier is sufficiently high. But if the amplifier gain is relatively low, the power needed to drive the amplifier's input is no longer negligible, and a metric that measures the amplifier's efficiency should take it into account. |
https://en.wikipedia.org/wiki/Sialome | In biochemistry, the term sialome may refer to two distinct concepts:
The set of mRNA and proteins expressed in the salivary glands, especially of mosquitoes, ticks, and other blood-sucking arthropods.
The total complement of sialic acid types and linkages and their modes of presentation on a particular organelle, cell, tissue, organ or organism - as found at a particular time and under specific conditions. |
https://en.wikipedia.org/wiki/Phenotype%20modification | Phenotype modification is the process of experimentally altering an organism's phenotype to investigate the impact of phenotype on the fitness.
Phenotype modification has been used to assess the impact of parasite mechanical presence on fish host behaviour. |
https://en.wikipedia.org/wiki/BitSight | BitSight is a cybersecurity ratings company that analyzes companies, government agencies, and educational institutions. It is based in Back Bay, Boston. Security ratings that are delivered by BitSight are used by banks and insurance companies among other organizations.
The company rates more than 200,000 organizations with respect to their cybersecurity.
History
BitSight was founded in 2011 by Nagarjuna Venna and Stephen Boyer and currently has both United States-based and international employees.
In 2014, BitSight acquired AnubisNetworks, a Portugal-based cybersecurity firm that tracks real-time data threats.
By September 2016, BitSight had raised $40 million in a Series C round led by GGV Capital, with participation from Flybridge Capital Partners, Globespan Capital Partners, Menlo Ventures, Shaun McConnon, and the VC divisions of Comcast Ventures, Liberty Global Ventures, and Singtel Innov8.
Shaun McConnon stepped down as the CEO of BitSight in July 2017 but remains the executive chairman of the board. The CEO position was filled by Tom Turner in 2017, and then by Stephen Harvey in 2020.
In June 2018, BitSight closed $60 million in Series D funding, bringing the company's total funding to $155 million. BitSight's Series D financing was led by Warburg Pincus, with participation from existing investors Menlo Ventures, GGV Capital and Singtel Innov8.
In 2018, the company was located in Cambridge but purchased property in order to move to Back Bay, where BitSight is currently located. Forbes estimated BitSight's revenue at US$100 million as of 2018.
Services
Organizations purchase BitSight's services in order to understand "security risks associated with sharing sensitive data with business partners." As of 2018, BitSight serves clients, including Lowe's, AIG, and Safeway.
BitSight assembles models that produce company ratings, which are based on a scale that enables |
https://en.wikipedia.org/wiki/Proton%20Synchrotron | The Proton Synchrotron (PS, sometimes also referred to as CPS) is a particle accelerator at CERN. It is CERN's first synchrotron, beginning its operation in 1959. For a brief period the PS was the world's highest energy particle accelerator. It has since served as a pre-accelerator for the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS), and is currently part of the Large Hadron Collider (LHC) accelerator complex. In addition to protons, PS has accelerated alpha particles, oxygen and sulfur nuclei, electrons, positrons, and antiprotons.
Today, the PS is part of CERN's accelerator complex. It accelerates protons for the LHC as well as for a number of other experimental facilities at CERN. Using a negative hydrogen ion source, the ions are first accelerated to an energy of 160 MeV in the linear accelerator Linac 4. The hydrogen ions are then stripped of both electrons, leaving only the nucleus containing one proton, which is injected into the Proton Synchrotron Booster (PSB), which accelerates the protons to 2 GeV, followed by the PS, which pushes the beam to 25 GeV. The protons are then sent to the Super Proton Synchrotron, and accelerated to 450 GeV before they are injected into the LHC. The PS also accelerates heavy ions from the Low Energy Ion Ring (LEIR) at an energy of 72 MeV, for collisions in the LHC.
Background
The synchrotron (as in Proton Synchrotron) is a type of cyclic particle accelerator, descended from the cyclotron, in which the accelerating particle beam travels around a fixed path. The magnetic field which bends the particle beam into its fixed path increases with time, and is synchronized to the increasing energy of the particles. As the particles travel around the fixed circular path, they oscillate around their equilibrium orbit, a phenomenon called betatron oscillations.
In a conventional synchrotron the focusing of the circulating particles is achieved by weak focusing: the magnetic field that guides the particles aro |
https://en.wikipedia.org/wiki/Thin%20set%20%28analysis%29 | In mathematical analysis, a thin set is a subset of n-dimensional complex space Cn with the property that each point has a neighbourhood on which some non-zero holomorphic function vanishes. Since the set on which a holomorphic function vanishes is closed and has empty interior (by the Identity theorem), a thin set is nowhere dense, and the closure of a thin set is also thin.
The fine topology was introduced in 1940 by Henri Cartan to aid in the study of thin sets. |
https://en.wikipedia.org/wiki/Bayesian%20regret | In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a Bayesian strategy and that of the optimal strategy (the one with the highest expected payoff).
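A minimal Monte Carlo sketch of this definition, assuming a strategy that must commit to an action before the hidden state is revealed (all names and the toy payoff are illustrative):

```python
import random

def bayesian_regret(choose, actions, sample_state, payoff, n=10_000):
    """Average over sampled states of (best payoff - achieved payoff);
    choose() picks an action without seeing the realized state."""
    total = 0.0
    for _ in range(n):
        state = sample_state()
        best = max(payoff(a, state) for a in actions)
        total += best - payoff(choose(), state)
    return total / n

# Toy example: one of two assets doubles, uniformly at random.
actions = ["A", "B"]
payoff = lambda a, s: 2.0 if a == s else 1.0
est = bayesian_regret(lambda: random.choice(actions), actions,
                      lambda: random.choice(actions), payoff)
print(round(est, 2))   # about 0.50: the blind guess is right half the time
```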
The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference.
Economics
This term has been used to compare a random buy-and-hold strategy to professional traders' records. This same concept has received numerous different names, as the New York Times notes:
"In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks. Other, later papers had titles like 'On Pseudo Games', 'How to Play an Unknown Game', 'Universal Coding' and 'Universal Portfolios'".
Social Choice (voting methods)
"Bayesian Regret" has also been used as an alternate term for social utility efficiency, that is, a measure of the expected utility of different voting methods under a given probabilistic model of voter utilities and strategies. In this case, the relation to Bayes is unclear, as there is no conditioning or posterior distribution involved. |
https://en.wikipedia.org/wiki/Pharyngitis | Pharyngitis is inflammation of the back of the throat, known as the pharynx. It typically results in a sore throat and fever. Other symptoms may include a runny nose, cough, headache, difficulty swallowing, swollen lymph nodes, and a hoarse voice. Symptoms usually last 3–5 days, but can be longer depending on cause. Complications can include sinusitis and acute otitis media. Pharyngitis is a type of upper respiratory tract infection.
Most cases are caused by a viral infection. Strep throat, a bacterial infection, is the cause in about 25% of children and 10% of adults. Uncommon causes include other bacteria such as gonococcus, fungi, irritants such as smoke, allergies, and gastroesophageal reflux disease. Specific testing is not recommended in people who have clear symptoms of a viral infection, such as a cold. Otherwise, a rapid antigen detection test or throat swab is recommended. PCR testing has become common as it is as good as taking a throat swab but gives a faster result. Other conditions that can produce similar symptoms include epiglottitis, thyroiditis, retropharyngeal abscess, and occasionally heart disease.
NSAIDs, such as ibuprofen, can be used to help with the pain. Numbing medication, such as topical lidocaine, may also help. Strep throat is typically treated with antibiotics, such as either penicillin or amoxicillin. It is unclear whether steroids are useful in acute pharyngitis, other than possibly in severe cases, but a recent (2020) review found that when used in combination with antibiotics they moderately improved pain and the likelihood of resolution.
About 7.5% of people have a sore throat in any 3-month period. Two or three episodes in a year are not uncommon. This resulted in 15 million physician visits in the United States in 2007. Pharyngitis is the most common cause of a sore throat. The word comes from the Greek word pharynx meaning "throat" and the suffix -itis meaning "inflammation".
Classification
Pharyngitis is a type of inflamm |
https://en.wikipedia.org/wiki/Persistent%20homology%20group | In persistent homology, a persistent homology group is a multiscale analog of a homology group that captures information about the evolution of topological features across a filtration of spaces. While the ordinary homology group represents nontrivial homology classes of an individual topological space, the persistent homology group tracks only those classes that remain nontrivial across multiple parameters in the underlying filtration. Analogous to the ordinary Betti number, the ranks of the persistent homology groups are known as the persistent Betti numbers. Persistent homology groups were first introduced by Herbert Edelsbrunner, David Letscher, and Afra Zomorodian in the 2002 paper Topological Persistence and Simplification, one of the foundational papers in the fields of persistent homology and topological data analysis; it built largely on persistence barcodes and the persistence algorithm, which were first described by Serguei Barannikov in a 1994 paper. Since then, the study of persistent homology groups has led to applications in data science, machine learning, materials science, biology, and economics.
Definition
Let K be a simplicial complex, and let f : K → ℝ be a real-valued monotonic function. Then for some values a₁ < a₂ < … < aₙ the sublevel sets K_i = f⁻¹((−∞, a_i]) yield a sequence of nested subcomplexes K₁ ⊆ K₂ ⊆ … ⊆ Kₙ = K known as a filtration of K.
Applying homology to each complex yields a sequence of homology groups H_p(K₁) → H_p(K₂) → … → H_p(Kₙ) connected by homomorphisms induced by the inclusion maps of the underlying filtration. When homology is taken over a field, we get a sequence of vector spaces and linear maps known as a persistence module.
Let f_p^{i,j} : H_p(K_i) → H_p(K_j) be the homomorphism induced by the inclusion K_i ⊆ K_j. Then the persistent homology groups are defined as the images H_p^{i,j} = im f_p^{i,j} for all 1 ≤ i ≤ j ≤ n. In particular, the persistent homology group H_p^{i,i} = H_p(K_i).
More precisely, the persistent homology group can be defined as H_p^{i,j} = Z_p(K_i) / (B_p(K_j) ∩ Z_p(K_i)), where Z_p and B_p are the standard p-cycle and p-boundary groups, respectively.
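As a concrete illustration (a sketch, not the persistence algorithm of the papers cited above): in dimension zero, persistent homology of a filtered graph can be computed with union-find, pairing the birth of the younger component with the filtration value of the edge that merges it into an older one (the "elder rule"):

```python
def zero_dim_persistence(vertex_births, edges):
    """vertex_births: {v: birth value}; edges: [(value, u, v), ...].
    Returns (birth, death) pairs for 0-dimensional classes."""
    parent = {v: v for v in vertex_births}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    pairs = []
    for value, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # edge closes a cycle: no 0-dim event
        # Elder rule: the component born later dies at this value.
        old, young = sorted((ru, rv), key=lambda r: vertex_births[r])
        pairs.append((vertex_births[young], value))
        parent[young] = old
    return pairs                            # the eldest component never dies

# Path a-b-c with f(a)=0, f(b)=2, f(c)=1; an edge enters at the larger
# endpoint value, as in a sublevel-set filtration.
print(zero_dim_persistence({"a": 0, "b": 2, "c": 1},
                           [(2, "a", "b"), (2, "b", "c")]))
# -> [(2, 2), (1, 2)]  (the class born at 2 dies immediately)
```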
Sometimes the elements of H_p^{i,j} are described as the homology classes that are |
https://en.wikipedia.org/wiki/GEC%202050 | The GEC 2050 was an 8-bit minicomputer produced during the 1970s, initially by Marconi Elliott Computer Systems of the UK, before the company renamed itself GEC Computers Limited. The first models were labeled MECS 2050, before being renamed GEC 2050.
The GEC 2050 was commonly used as a Remote Job Entry station, supporting a punched card reader, line printer, system console, and a data link to a remote mainframe computer system, and GEC Computers sold a complete RJE package including the system, peripherals, and RJE software. Another turnkey application was a ticketing system, whose customers included Arsenal Football Club. The system was also commonly used for road traffic control and industrial process automation.
The GEC 2050 supported up to 64 KiB of magnetic-core memory (minimum 4 KiB, expandable in 8 KiB and 16 KiB modules), and weighed 41 kg (90 lb). The system had a single Channel Controller for performing autonomous I/O, and used the same peripheral I/O controllers as the GEC 4000 series minicomputer.
Instruction set
Although CISC, the instruction set is sufficiently simple to be tabulated in its entirety:
Using the opcode 29 as an illustration, the assembler code (AD X2,X1,offset) causes the contents of the memory location 'offset(X1)' to be added to register X2. Thus, register X1 is being used as the index register, and the offset, v, is specified in the second byte of the instruction. G is a dummy index register whose value is always zero, and hence causes the offsets to be treated as absolute addresses in the zeroth (global) segment. (Incidentally, since X3 is the standard index register, the assembler program allows ',X3,address' to be abbreviated to ',address'.)
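A simplified, hypothetical model of the indexed addressing just described (register widths, memory layout, and names are illustrative, not the 2050's actual architecture):

```python
memory = bytearray(65536)            # up to 64 KiB of core store
regs = {"G": 0, "X1": 0x0100, "X2": 5, "X3": 0}   # G is the dummy index register

def ad(dest, index, offset):
    """AD dest,index,offset -- add memory[offset + index register] to dest."""
    addr = (offset + regs[index]) & 0xFFFF
    regs[dest] = (regs[dest] + memory[addr]) & 0xFF   # 8-bit arithmetic

memory[0x0110] = 7
ad("X2", "X1", 0x10)   # adds the contents of 0x10(X1), i.e. 0x0110, to X2
print(regs["X2"])      # 12

ad("X2", "G", 0x0110)  # G = 0, so the offset acts as an absolute address
print(regs["X2"])      # 19
```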
The conditional jump instructions are listed in pairs, the former opcode is for a forward jump, and the latter one for a backward jump. Again, the offset of the jump is obtained from the second byte of the instruction. Thus, all instructions in rows 0 to 7 and row 9 consist of two bytes (the opc |
https://en.wikipedia.org/wiki/MS100a7a15 | The S100 calcium-binding protein mS100a7a15 is the murine ortholog of human S100A7 (Psoriasin) and human S100A15 (Koebnerisin). mS100a7a15 is also known as S100a15, mS100a7 and mS100a7a, and is encoded by the mS100a7a gene (alias: S100a15).
S100 proteins are a diverse calcium-binding family that mediate fundamental cellular and extracellular processes including cell proliferation and differentiation, cell migration, and the antimicrobial host defense as antimicrobial peptides.
mS100a7a15 was first cloned as mS100a15 from adult mouse skin (FVB/N mice, clone RP1-128L15, accession no.: AL591704). Today, the protein is of further interest because of its role in innate immunity, epidermal cell maturation and epithelial tumorigenesis. Additionally, it can serve as a surrogate model to study hS100A7 and hS100A15 functions.
Function
Epithelial homeostasis and antimicrobial host defence
Skin: In new-born mouse skin, mS100a7a15 is localized to the epidermal granular and cornified layers of the interfollicular epidermis and to the maturing cells of hair follicles. In maturing keratinocytes, mS100a7a15 is induced by calcium-mediated differentiation dependent on protein kinase C (PKC). It is upregulated in a TLR4-dependent manner by E. coli stimulation and might have antimicrobial effects against E. coli like its human counterparts. In the dermis, it is expressed by the smooth muscle cells of the panniculus carnosus.
Breast: mS100a7a15 is weakly expressed in normal mammary gland tissue.
Epithelial Carcinogenesis
Breast cancer: mS100a7a15 is upregulated in DMBA-induced mammary gland tumors, confined to epithelial tumor cells. mS100a7a15 overexpression in mammary epithelial cells enhances proliferation, angiogenesis and metastasis through induction of prometastatic and angiogenic factors like CCL2, Cox-2, MMP2, MMP9 and VEGF. Furthermore, mS100a7a15 recruits tumor-associated macrophages (TAMs) through RAGE and Stat3 activation to promote tumorigenesis.
Inflammation
mS100a7a15 is up |
https://en.wikipedia.org/wiki/Asus%20Media%20Bus | The Asus Media Bus is a proprietary computer bus developed by Asus, which was used on some Socket 7 motherboards in the middle 1990s. It is a combined PCI and ISA slot. It was developed to provide a cost-efficient solution to a complete multimedia system. Using Media Bus cards for building a system reduced slot requirements and compatibility problems. Expansion cards supporting this interface were only manufactured by Asus for a very limited time. This bus is now obsolete.
While similar to PCI-X in appearance, the extension contains 4 additional pins (2 on each side) for a total of 68. The divider between the PCI slot and Media Bus extension is too wide to support a properly-keyed PCI-X card.
Despite the very short lifespan, there were at least two revisions of Asus Media Bus – revision 1.2 and 2.0. The difference between them is that the latter revision has 72 pins instead of 68 so it does not have to use any PCI slot signals reserved for PCI cards and PCI slot shared with the Media Bus slot becomes standards compliant. The gap between PCI slot and Media Bus extension is 0.32 in. for revision 1.2 (pictured) and 0.4 in. for revision 2.0 so expansion cards designed for two revisions are mutually incompatible.
Expansion cards designed for this interface included primarily combined audio and video cards, but also some combined SCSI and audio cards. The (possibly incomplete) list of Media Bus expansion cards is presented here (all cards manufactured by Asus):
Media Bus rev. 1.2 cards
PCI-AS7870 – Fast/Wide SCSI and audio card (Adaptec AS7870 and Vibra16s (with separate Yamaha yfm262-m))
PCI-AV264CT – audio and video card (ATI Mach64 PCI 1 MiB (up to 2 MiB) and Vibra16s (with separate Yamaha yfm262-m))
PCI-AV868 (pictured) – audio and video card (S3 Vision868 1 MiB and Vibra16s (with separate Yamaha yfm262-m))
Media Bus rev. 1.2 motherboards
Asus P/I-P55SP4
Asus P/I-P55TP4XE
Media Bus rev. 2.0 cards
PCI-AS2940UW – Ultra Fast/Wide SCSI and audio card
PCI-AV264C |
https://en.wikipedia.org/wiki/Evaluation%20and%20Management%20Coding | Evaluation and management coding (commonly known as E/M coding or E&M coding) is a medical coding process in support of medical billing. Practicing health care providers in the United States must use E/M coding to be reimbursed by Medicare, Medicaid programs, or private insurance for patient encounters.
E/M standards and guidelines were established by Congress in 1995 and revised in 1997. They have been adopted by private health insurance companies as the standard guidelines for determining the type and severity of patient conditions. This allows medical service providers to document and bill for reimbursement for services provided.
E/M codes are based on the Current Procedural Terminology (CPT) codes established by the American Medical Association (AMA).
In 2010, new codes were added to the E/M Coding set, for prolonged services without direct face-to-face contact.
See also
Resource-based relative value scale |
https://en.wikipedia.org/wiki/Linear%20search%20problem | In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck.
The problem
"An immobile hider is located on the real line according to a known probability distribution. A searcher, whose maximal velocity is one, starts from the origin and wishes to discover the hider in minimal expected time. It is assumed that the searcher can change the direction of his motion without any loss of time. It is also assumed that the searcher cannot see the hider until he actually reaches the point at which the hider is located and the time elapsed until this moment is the duration of the game."
The problem is to find the hider in the shortest time possible. Generally, since the hider could be on either side of the searcher and an arbitrary distance away, the searcher has to oscillate back and forth, i.e., the searcher has to go a distance x1 in one direction, return to the origin, and go a distance x2 in the other direction, etc. (the length of the n-th step being denoted by xn). (However, an optimal solution need not have a first step and could start with an infinite number of small 'oscillations'.) This problem is usually called the linear search problem and a search plan is called a trajectory. It has attracted much research, some of it quite recent.
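A small sketch of the total distance travelled by one such trajectory — the doubling strategy discussed below, whose steps are 1, 2, 4, … in alternating directions (illustrative Python; h is the hider's signed position):

```python
def doubling_search_cost(h, first_step=1.0):
    """Distance walked before reaching a hider at position h when the
    searcher sweeps 1, 2, 4, ... units in alternating directions."""
    travelled, step, direction = 0.0, first_step, 1
    while True:
        reach = step * direction
        found = (0 <= h <= reach) if direction > 0 else (reach <= h <= 0)
        if found:
            return travelled + abs(h)
        travelled += 2 * step          # out to the turning point and back
        step *= 2
        direction = -direction

print(doubling_search_cost(3.0))   # 9.0: ratio 3 to |h| here; at most 9 whenever |h| >= first_step
```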
The linear search problem for a general probability distribution is unsolved. However, there exists a dynamic programming algorithm that produces a solution for any discrete distribution and also an approximate solution, for any probability distribution, with any desired accuracy.
The linear search problem was solved by Anatole Beck and Donald J. Newman (1970) as a two-person zero-sum game. Their minimax trajectory is to double the distance on each step and the optimal strategy is a mixture of trajectories that increase the distance by some fixed constant. This solution gives search strategies that are not sensitive to assum |
https://en.wikipedia.org/wiki/Protist%20shell | Many protists have protective shells or tests, usually made from silica (glass) or calcium carbonate (chalk). Protists are a diverse group of eukaryote organisms that are not plants, animals, or fungi. They are typically microscopic unicellular organisms that live in water or moist environments.
Protist shells are often tough, mineralised forms that resist degradation, and can survive the death of the protist as a microfossil. Although protists are typically very small, they are ubiquitous. Their numbers are such that their shells play a huge part in the formation of ocean sediments and in the global cycling of elements and nutrients.
The role of protist shells depends on the type of protist. Protists such as diatoms and radiolaria have intricate, glass-like shells made of silica that are hard and protective, and serve as a barrier to prevent water loss. The shells have small pores that allow for gas exchange and nutrient uptake. Coccolithophores and foraminifera also have hard protective shells, but the shells are made of calcium carbonate. These shells can help with buoyancy, allowing the organisms to float in the water column and move around more easily.
In addition to protection and support, protist shells also serve scientists as a means of identification. By examining the characteristics of the shells, different species of protists can be identified and their ecology and evolution can be studied.
Protists
Cellular life likely originated as single-celled prokaryotes (including modern bacteria and archaea) and later evolved into more complex eukaryotes. Eukaryotes include organisms such as plants, animals, fungi and "protists". Protists are usually single-celled and microscopic. They can be heterotrophic, meaning they obtain nutrients by consuming other organisms, or autotrophic, meaning they produce their own food through photosynthesis or chemosynthesis, or mixotrophic, meaning they produce their own food through a mixture of those methods.
The term prot |
https://en.wikipedia.org/wiki/Nokia%20N810 | The Nokia N810 Internet tablet is an Internet appliance from Nokia, announced on 17 October 2007 at the Web 2.0 Summit in San Francisco. Despite Nokia's strong association with cellular products, the N810, like preceding tablets produced by Nokia, was not a phone, but instead allowed the user to browse the Internet and communicate using Wi-Fi networks or with a mobile phone via Bluetooth. It built on the hardware and software of the Nokia N800 with some features added and some removed.
The Nokia N810 featured the Maemo Linux distribution operating system based on Maemo 4.0, which featured MicroB (a Mozilla-based mobile browser), a GPS navigation application, new media player, and a refreshed interface.
Major changes from the N800
The Nokia N810 had much in common with the N800 and Internet Tablet OS 2008 operated on both, but there were some significant differences between the two. Here are the new features in the Nokia N810:
Sliding, backlit keyboard
Front-facing webcam (replacing pop-out rotating device)
Ambient Light Sensor
Integrated GPS
2 GB integrated internal storage
MiniSDHC card slot (replacing two full-size SDHC slots, one internal, one external); maximum capacity: 32 GB
Sunlight readable transflective display
USB Micro-AB receptacle (replacing a USB Mini-B receptacle)
No longer has an FM tuner
Nokia N810 WiMAX Edition
On 1 April 2008, Nokia announced a WiMAX-equipped version of the N810 called the "N810 WiMAX Edition". This device was to be identical in specifications to the original N810 but included a WiMAX radio for use initially on Sprint's Xohm network, and featured a color change from light gray or dark blue to black, as well as a larger case-back bulge to accommodate an antenna that was more efficient at the required bands.
The production of the WiMAX Edition of the Nokia N810 ended in January 2009.
Maemo
The N810, like all Nokia Internet Tablets, ran Maemo, which was similar to many handheld operating systems, and provided a "Home" scre |
https://en.wikipedia.org/wiki/Between%20Silk%20and%20Cyanide | Between Silk and Cyanide: A Codemaker's War 1941–1945 is a memoir of public interest by former Special Operations Executive (SOE) cryptographer Leo Marks, describing his work including memorable events, actions and omissions of his colleagues during the Second World War. It was first published in 1998.
Date
The book was written in the early 1980s. It was published on UK Government approval in 1998.
Title
The title is derived from an incident related in the book, when Marks was asked why agents in occupied Europe should have their cryptographic material printed on silk (which was in very short supply). He summed his reply up by saying that it was "between silk and cyanide", meaning that it was a choice between the agent's surviving by making reliable coded radio transmissions with the help of the printed silk, and having to take a suicide pill to avoid being tortured into revealing the code and other secret information. Unlike paper, given away by rustling, silk is not detected by a casual or typical body search if concealed in the lining of clothing.
SOE
A major theme is Marks's inability to convince his superiors in the Special Operations Executive (SOE) that apparent mistakes made in radio transmissions from agents working with, or in a role similar to, the Dutch resistance were their prearranged duress codes, which, as he alleged and as later transpired, they were; this fact haunted him. SOE management, unwilling to face the possibility that their Dutch network was compromised, insisted that the errors were attributable to poor operation by the recently trained Morse code operators and continued to parachute in new agents to sites prearranged with the compromised network, leading to their immediate capture and later execution on the orders of the Nazi German command.
Marks' interest in cryptography arose from reading Edgar Allan Poe's The Gold-Bug as a child. Furthermore, his father Benjamin was a partner in bookshop Marks & Co at 84 Charing Cross Road. As a boy, |
https://en.wikipedia.org/wiki/Game%20physics | Computer animation physics or game physics are laws of physics as they are defined within a simulation or video game, and the programming logic used to implement these laws. Game physics vary greatly in their degree of similarity to real-world physics. Sometimes, the physics of a game may be designed to mimic the physics of the real world as accurately as is feasible, in order to appear realistic to the player or observer. In other cases, games may intentionally deviate from actual physics for gameplay purposes. Common examples in platform games include the ability to start moving horizontally or change direction in mid-air and the double jump ability found in some games. Setting the values of physical parameters, such as the amount of gravity present, is also a part of defining the game physics of a particular game.
There are several elements that form components of simulation physics including the physics engine, program code that is used to simulate Newtonian physics within the environment, and collision detection, used to solve the problem of determining when any two or more physical objects in the environment cross each other's path.
Physics simulations
There are two central types of physics simulations: rigid body and soft-body simulators. In a rigid body simulation objects are grouped into categories based on how they should interact and are less performance intensive. Soft-body physics involves simulating individual sections of each object such that it behaves in a more realistic way.
Particle systems
A common aspect of computer games that model some type of conflict is the explosion. Early computer games used the simple expedient of repeating the same explosion in each circumstance. However, in the real world an explosion can vary depending on the terrain, altitude of the explosion, and the type of solid bodies being impacted. Depending on the processing power available, the effects of the explosion can be modeled as the split and shattered comp |
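As an illustrative sketch of the particle-system approach (hypothetical parameters, simple Euler integration in Python, not tied to any particular engine), an explosion can be modelled by scattering fragments from an origin and letting gravity act on them:

```python
import math
import random

def explosion_particles(n, origin=(0.0, 0.0), speed=5.0,
                        gravity=-9.8, dt=1.0 / 60, steps=60):
    """Scatter n fragments from `origin` in random directions,
    then integrate gravity with a fixed time step (Euler method).
    Returns the final (x, y) position of each fragment."""
    parts = []
    for _ in range(n):
        a = random.uniform(0.0, 2.0 * math.pi)
        parts.append(([origin[0], origin[1]],
                      [speed * math.cos(a), speed * math.sin(a)]))
    for _ in range(steps):
        for pos, vel in parts:
            vel[1] += gravity * dt   # gravity only affects the y-axis
            pos[0] += vel[0] * dt
            pos[1] += vel[1] * dt
    return [tuple(pos) for pos, _ in parts]

print(explosion_particles(5))
```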
https://en.wikipedia.org/wiki/Tephrosin | Tephrosin is a rotenoid, also known as 12aβ-hydroxydeguelin; its IUPAC name is 7a-Hydroxy-9,10-dimethoxy-3,3-dimethyl-13,13a-dihydro-3H,7aH-pyrano[2,3-c;6,5-f]dichromen-7-one. It is a natural fish poison found in the leaves and seeds of Tephrosia purpurea and T. vogelii.
See also
Cubé resin |
https://en.wikipedia.org/wiki/Committed%20information%20rate | In a Frame Relay network, committed information rate (CIR) is the bandwidth for a virtual circuit guaranteed by an internet service provider to work under normal conditions. Committed data rate (CDR) is the payload portion of the CIR.
At any given time, the available bandwidth should not fall below this committed figure. The bandwidth is usually expressed in kilobits per second (kbit/s).
Above the CIR, an allowance of burstable bandwidth is often given, whose value can be expressed in terms of an additional rate, known as the excess information rate (EIR), or as its absolute value, the peak information rate (PIR). The provider guarantees that the connection will always support the CIR rate, and sometimes the EIR rate, provided that there is adequate bandwidth. The PIR, i.e. the CIR plus the EIR, is either equal to or less than the speed of the access port into the network. Frame Relay carriers define and package CIRs differently, and CIRs are adjusted with experience.
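The CIR/EIR/PIR relationship can be illustrated with a simplified two-rate policer, shown below in Python. This is a sketch loosely inspired by two-rate three-colour marking, not a faithful Frame Relay implementation; the token buckets are assumed to be refilled elsewhere at the CIR and EIR rates respectively:

```python
def police_packet(size_bytes, cir_bucket, eir_bucket):
    """Classify a packet against the committed and excess rates.
    Each bucket is a dict like {"tokens": <bytes available>}."""
    if cir_bucket["tokens"] >= size_bytes:
        cir_bucket["tokens"] -= size_bytes
        return "green"   # within CIR: delivery is guaranteed
    if eir_bucket["tokens"] >= size_bytes:
        eir_bucket["tokens"] -= size_bytes
        return "yellow"  # burst above CIR: carried if capacity allows
    return "red"         # above PIR: dropped or marked discard-eligible

cir, eir = {"tokens": 1500}, {"tokens": 1500}
print([police_packet(1000, cir, eir) for _ in range(3)])
# ['green', 'yellow', 'red']
```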
See also
Information rate
Throughput
Notes |
https://en.wikipedia.org/wiki/Nearly%20K%C3%A4hler%20manifold | In mathematics, a nearly Kähler manifold is an almost Hermitian manifold $(M, g, J)$, with almost complex structure $J$,
such that the (2,1)-tensor $T = \nabla J$ is skew-symmetric. So,
$(\nabla_X J)X = 0$
for every vector field $X$ on $M$.
In particular, a Kähler manifold is nearly Kähler. The converse is not true.
For example, the nearly Kähler six-sphere is an example of a nearly Kähler manifold that is not Kähler. The familiar almost complex structure on the six-sphere is not induced by a complex atlas on $S^6$.
Usually, non-Kählerian nearly Kähler manifolds are called "strict nearly Kähler manifolds".
Nearly Kähler manifolds, also known as almost Tachibana manifolds, were studied by Shun-ichi Tachibana in 1959 and then by Alfred Gray from 1970 on.
For example, it was proved that any 6-dimensional strict nearly Kähler manifold is an Einstein manifold and has vanishing first Chern class (in particular, this implies that it is spin).
In the 1980s, strict nearly Kähler manifolds obtained a lot of consideration because of their relation to Killing
spinors: Thomas Friedrich and Ralf Grunewald showed that a 6-dimensional Riemannian manifold admits
a Riemannian Killing spinor if and only if it is nearly Kähler. This was later given a more fundamental explanation by Christian Bär, who pointed out that
these are exactly the 6-manifolds for which the corresponding 7-dimensional Riemannian cone has holonomy G2.
The only compact simply connected 6-manifolds known to admit strict nearly Kähler metrics are $S^6$, $S^3 \times S^3$, $\mathbb{CP}^3$, and the flag manifold $SU(3)/T^2$. Each of these admits a unique such nearly Kähler metric that is also homogeneous, and these examples are in fact the only compact homogeneous strictly nearly Kähler 6-manifolds.
However, Foscolo and Haskins recently showed that $S^6$ and $S^3 \times S^3$ also admit strict nearly Kähler metrics that are not homogeneous.
Bär's observation about the holonomy of Riemannian cones might seem to indicate that the nearly-Kähler condition is
most natural and interesting in dimension 6. This is actually borne out by a theorem of Nagy, who proved tha |
https://en.wikipedia.org/wiki/ANSEL | ANSEL, the American National Standard for Extended Latin Alphabet Coded Character Set for Bibliographic Use, was a character set used in text encoding. It provided a table of coded values for the representation of characters of the extended Latin alphabet in machine-readable form for thirty-five languages written in the Latin alphabet and for fifty-one romanized languages. ANSEL adds 63 graphic characters to ASCII, including 29 combining diacritic characters.
The initial revision of ANSEL was released in 1985, and before 1993 it was registered as Registration #231 in the ISO International Register of Coded Character Sets to be Used with Escape Sequences. The standard was reaffirmed in 2003 although it has been administratively withdrawn by ANSI effective 14 February 2013.
The requirement for hardware capable of overprinting accents prevented ANSEL from ever becoming a popular extended ASCII.
Code page layout
The following table shows ANSI/NISO Z39.47-1993 (R2003). Non-ASCII characters are shown with their Unicode code point. A combining diacritic precedes the spacing character on which it should be superimposed (in Unicode the combining diacritic is after the base character).
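The ordering difference is the main subtlety when converting ANSEL-encoded text to Unicode. The sketch below (in Python) assumes the bytes have already been mapped to Unicode code points, since Python ships no ANSEL codec, and simply moves each combining mark from before its base character (ANSEL order) to after it (Unicode order):

```python
import unicodedata

def ansel_order_to_unicode(text):
    """Reorder combining diacritics from mark-before-base (ANSEL)
    to base-before-mark (Unicode), then compose where possible."""
    out, pending = [], []
    for ch in text:
        if unicodedata.combining(ch):
            pending.append(ch)       # hold marks until the base arrives
        else:
            out.append(ch)
            out.extend(pending)      # marks now follow their base
            pending.clear()
    out.extend(pending)              # trailing marks (malformed input)
    return unicodedata.normalize("NFC", "".join(out))

# U+0301 (combining acute) before "e" becomes the composed "é":
assert ansel_order_to_unicode("\u0301e") == "\u00e9"
```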
Use
GEDCOM
The GEDCOM specification for exchanging genealogical data refers to ANSEL (ANSI/NISO Z39.47-1985) as a valid text encoding for GEDCOM files and extends it with additional characters which are shown in the following table.
MARC21
The Extended Latin character set from MARC 21 is synchronized with ANSEL but additionally supports the eszett (ß) character at C7 and the euro sign (€) at C8. |
https://en.wikipedia.org/wiki/Masatoshi%20Shima | Masatoshi Shima is a Japanese electronics engineer. He was one of the architects of the world's first microprocessor, the Intel 4004. In 1968, Shima worked for Busicom in Japan, and did the logic design for a specialized CPU that was to be realized as three custom chips. In 1969, he worked with Intel's Ted Hoff and Stanley Mazor to reduce the three-chip Busicom proposal into a one-chip architecture. In 1970, that architecture was transformed into a silicon chip, the Intel 4004, by Federico Faggin, with Shima's assistance in logic design.
He later joined Intel in 1972. There, he worked with Faggin to develop the Intel 8080, released in 1974. Shima then developed several Intel peripheral chips, some used in the IBM PC, such as the 8259 interrupt controller, 8255 programmable peripheral interface chip, 8253 timer chip, 8257 direct memory access (DMA) chip and 8251 serial communication USART chip. He then joined Zilog, where he worked with Faggin to develop the Zilog Z80 (1976) and Z8000 (1979).
Early life and career
He studied organic chemistry at Tohoku University in Sendai, Miyagi Prefecture, Japan. With poor prospects for employment in the field of chemistry, he went to work for Busicom, a business calculator manufacturer, joining in Spring 1967. There, he learned about software and digital logic design, from 1967 to 1968.
Intel 4004
After Busicom decided to use large-scale integration (LSI) circuits in their calculator products, they began work on what later became known as the "Busicom Project", a chipset for the Busicom 141-PF calculator that led to creating the first microprocessor, the Intel 4004. In April 1968, Shima was asked to design the logic for what was intended to become a future chipset to be designed and produced by a semiconductor company. Shima designed a special-purpose LSI chipset, along with his supervisor Tadashi Tanba, in 1968. His design consisted of seven LSI chips, including a three-chip CPU. Shima's initial design included arithmetic units (adders), mul |
https://en.wikipedia.org/wiki/Effective%20dimension | In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations (various notions of effective dimension) of which the most common is effective Hausdorff dimension. Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc. by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are "larger" than lines or curves, and yet "smaller" than filled circles or rectangles. Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable (or partially locatable) in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff but large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 (since it is a point) but effective dimension 1 (because, roughly speaking, it can't be effectively localized any better than a small interval, which has Hausdorff dimension 1).
Rigorous definitions
This article will define effective dimension for subsets of Cantor space $2^\omega$; closely related definitions exist for subsets of Euclidean space $\mathbb{R}^n$. We will move freely between considering a set X of natural numbers, the infinite sequence given by the characteristic function of X, and the real number with binary expansion 0.X.
Martingales and other gales
A martingale on Cantor space $2^\omega$ is a function $d\colon 2^\omega \to \mathbb{R}^{\ge 0}$ from Cantor space to nonnegative r |
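Although the definition above is cut off, the flavour of a martingale as a betting strategy can be sketched in Python, under the standard assumption that the capital function on finite binary strings satisfies the fairness condition d(s) = (d(s0) + d(s1)) / 2:

```python
def bet_all_on_zero(sigma):
    """Capital after betting along the string sigma: start with 1
    and always wager everything on the next bit being 0. Doubling
    on a correct guess and losing everything otherwise satisfies
    d(s) == (d(s + '0') + d(s + '1')) / 2 at every string s."""
    capital = 1.0
    for bit in sigma:
        capital = 2.0 * capital if bit == "0" else 0.0
    return capital

assert bet_all_on_zero("0000") == 16.0   # succeeds on all-zeros
assert bet_all_on_zero("0001") == 0.0    # wiped out by a single 1
# Fairness at s = "00": (d("000") + d("001")) / 2 == (8 + 0) / 2 == d("00")
```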
https://en.wikipedia.org/wiki/Psychology%2C%20philosophy%20and%20physiology | Psychology, philosophy and physiology (PPP) was a degree at the University of Oxford. It was Oxford's
first psychology degree, beginning in 1947, but admitted its last students in October 2010. It has been, in part, replaced by psychology, philosophy, and linguistics (PPL, in which students usually study two of three subjects).
PPP covered the study of thought and behaviour from the differing points of view of psychology, physiology and philosophy. Psychology includes social interaction, learning, child development, mental illness and information processing. Physiology considers the organization of the brain and body of mammals and humans, from the molecular level to the organism as a whole. Philosophy is concerned with ethics, knowledge, the mind, etc.
External links
Academic courses at the University of Oxford
Philosophy education
Physiology
Psychology education |
https://en.wikipedia.org/wiki/Mucous%20membrane | A mucous membrane or mucosa is a membrane that lines various cavities in the body of an organism and covers the surface of internal organs. It consists of one or more layers of epithelial cells overlying a layer of loose connective tissue. It is mostly of endodermal origin and is continuous with the skin at body openings such as the eyes, eyelids, ears, inside the nose, inside the mouth, lips, the genital areas, the urethral opening and the anus. Some mucous membranes secrete mucus, a thick protective fluid. The function of the membrane is to stop pathogens and dirt from entering the body and to prevent bodily tissues from becoming dehydrated.
Structure
The mucosa is composed of one or more layers of epithelial cells that secrete mucus, and an underlying lamina propria of loose connective tissue. The type of cells and type of mucus secreted vary from organ to organ and each can differ along a given tract.
Mucous membranes line the digestive, respiratory and reproductive tracts and are the primary barrier between the external world and the interior of the body; in an adult human the total surface area of the mucosa is about 400 square meters while the surface area of the skin is about 2 square meters. Along with providing a physical barrier, they also contain key parts of the immune system and serve as the interface between the body proper and the microbiome.
Examples
Some examples include:
Endometrium: the mucosa of the uterus
Gastric mucosa
Intestinal mucosa
Nasal mucosa
Olfactory mucosa
Oral mucosa
Penile mucosa
Respiratory mucosa
Vaginal mucosa
Frenulum of tongue
Anal canal
Conjunctiva
Development
Developmentally, the majority of mucous membranes are of endodermal origin. Exceptions include the palate, cheeks, floor of the mouth, gums, lips and the portion of the anal canal below the pectinate line, which are all ectodermal in origin.
Function
One of its functions is to keep the tissue moist (for example in the respiratory tract, including the mouth and nose |
https://en.wikipedia.org/wiki/FlyExpress | FlyExpress is a free database that collects the expression patterns of Drosophila melanogaster in embryogenesis via a series of images submitted from BDGP, Fly-FISH and publications from other researchers, containing over 100,000 images of over 4,000 genes. It is currently available freely both online and as an iPhone application.
History
FlyExpress was developed by the Center of Evolutionary Medicine and Informatics of the Biodesign Institute of Arizona State University and was released in 2011 with funding and support from an NIH grant.
Features
The primary images available in FlyExpress are GEMs, or Genome-wide Expression Maps, that display the spatial patterns of genes during a stage of fly development via heat maps. These not only give a visual clue as to where these genes are expressed, but also how many of them are expressed in the same vicinity, as the darker regions of the heat map correlate to a higher expressed gene count.
Upon searching for images (with a gene name, PubMed ID, image ID or certain keywords), a series of GEMs are displayed from the two databases of the gene’s inclusion in descending stages and views of the embryo. Various details, such as the source and experimental protocol of the embryo, are also shown. All expression patterns reflect the wild-type allele for the gene.
Categorizing genes as images does not only allow for visualization in understanding the influence of the gene, but also contrasting the patterns between multiple different genes. FlyExpress has a primary function of searching between images for similar expression patterns using a specified spatial profile via the Basic Expression Search Tool for Images (or BESTi), bringing up all images and genes that fit the criteria of the pattern. Certain classifications and locations of expression of genes can also be searched within the BDGP and Fly-FISH databases to gain similar results without specifying a particular gene to begin with. Searching between GEMs tends to give bette |
https://en.wikipedia.org/wiki/View%20factor | In radiative heat transfer, a view factor, $F_{A \to B}$, is the proportion of the radiation which leaves surface $A$ that strikes surface $B$. In a complex 'scene' there can be any number of different objects, which can be divided in turn into even more surfaces and surface segments.
View factors are also sometimes known as configuration factors, form factors, angle factors or shape factors.
Summation of view factors
Because radiation leaving a surface is conserved, the sum of all view factors from a given surface $S_i$ is unity:
$\sum_{j=1}^{n} F_{S_i \to S_j} = 1$
For example, consider a case where two blobs with surfaces A and B are floating around in a cavity with surface C. All of the radiation that leaves A must either hit B or C, or if A is concave, it could hit A. 100% of the radiation leaving A is divided up among A, B, and C.
Confusion often arises when considering the radiation that arrives at a target surface. In that case, it generally does not make sense to sum view factors, as view factor from A and view factor from B (above) are essentially different units. C may see 10% of A's radiation and 50% of B's radiation and 20% of C's radiation, but without knowing how much each radiates, it does not even make sense to say that C receives 80% of the total radiation.
Self-viewing surfaces
For a convex surface, no radiation can leave the surface and then hit it later, because radiation travels in straight lines. Hence, for convex surfaces, $F_{A \to A} = 0$.
For concave surfaces, this doesn't apply, and so for concave surfaces $F_{A \to A} > 0$.
Superposition rule
The superposition rule (or summation rule) is useful when a certain geometry is not available with given charts or graphs. The superposition rule allows us to express the geometry that is being sought using the sum or difference of geometries that are known.
Reciprocity
The reciprocity theorem for view factors allows one to calculate $F_{B \to A}$ if one already knows $F_{A \to B}$. Using the areas of the two surfaces $A_A$ and $A_B$,
$A_A F_{A \to B} = A_B F_{B \to A}.$
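As a worked example, consider two concentric spheres: all radiation leaving the inner sphere (surface 1) strikes the outer sphere (surface 2), so F1→2 = 1, and reciprocity gives F2→1 = A1/A2. A minimal sketch in Python:

```python
import math

def reciprocal_view_factor(f_ab, area_a, area_b):
    """Reciprocity: A_a * F_ab = A_b * F_ba, solved for F_ba."""
    return area_a * f_ab / area_b

# Concentric spheres of radius 1 and 2: the inner sphere sees
# only the outer one (F_12 = 1), so F_21 = A_1 / A_2 = 1/4.
a1 = 4.0 * math.pi * 1.0 ** 2
a2 = 4.0 * math.pi * 2.0 ** 2
assert reciprocal_view_factor(1.0, a1, a2) == 0.25
```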
View factors of differential areas
Taking the limit of a small flat surface gi |
https://en.wikipedia.org/wiki/Boustrophedon%20transform | In mathematics, the boustrophedon transform is a procedure which maps one sequence to another. The transformed sequence is computed by an "addition" operation, implemented as if filling a triangular array in a boustrophedon (zigzag- or serpentine-like) manner—as opposed to a "Raster Scan" sawtooth-like manner.
Definition
The boustrophedon transform is a numerical, sequence-generating transformation, which is determined by an "addition" operation.
Generally speaking, given a sequence $(a_n)$, the boustrophedon transform yields another sequence $(b_n)$, where $b_0$ is likely defined as equivalent to $a_0$. The entirety of the transformation itself can be visualized (or imagined) as being constructed by filling out the triangle as shown in Figure 1.
Boustrophedon Triangle
To fill out the numerical isosceles triangle (Figure 1), you start with the input sequence, $(a_n)$, and place one value (from the input sequence) per row, using the boustrophedon scan (zigzag- or serpentine-like) approach.
The top vertex of the triangle will be the input value $a_0$, equivalent to output value $b_0$, and we number this top row as row 0.
The subsequent rows (going down to the base of the triangle) are numbered consecutively (from 0) as integers; let $n$ denote the number of the row currently being filled. These rows are constructed according to the row number ($n$) as follows:
For all rows, numbered $n \ge 1$, there will be exactly $n + 1$ values in the row.
If $n$ is odd, then put the input value $a_n$ on the right-hand end of the row.
Fill out the interior of this row from right to left, where each value (index: $k$) is the result of "addition" between the value to its right (index: $k + 1$) and the value to the upper right (index: $k$ of the row above).
The output value $b_n$ will be on the left-hand end of an odd row (where $n$ is odd).
If $n$ is even, then put the input value $a_n$ on the left-hand end of the row.
Fill out the interior of this row from left to right, where each value (index: $k$) is the result of "addition" between the value to its left (index: $k - 1$) and the value to its upper left |
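A direct implementation of this triangle-filling procedure is sketched below in Python. Rows are stored in the order they are filled (i.e. along the zigzag), which makes the "adjacent value in the row above" simply the previous row read backwards:

```python
def boustrophedon(seq):
    """Boustrophedon transform: row n starts (in fill order) with
    seq[n]; each subsequent entry adds the entry just filled to the
    adjacent entry of the previous row. With rows kept in fill
    order, that adjacent entry is rows[n - 1][n - k]. The output
    b_n is the last entry filled in row n."""
    rows, out = [], []
    for n, a in enumerate(seq):
        row = [a]
        for k in range(1, n + 1):
            row.append(row[k - 1] + rows[n - 1][n - k])
        rows.append(row)
        out.append(row[-1])
    return out

assert boustrophedon([1, 1, 1, 1, 1]) == [1, 2, 4, 9, 24]
# The transform of (1, 0, 0, ...) yields the secant-tangent numbers:
assert boustrophedon([1, 0, 0, 0, 0, 0]) == [1, 1, 1, 2, 5, 16]
```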
https://en.wikipedia.org/wiki/Hereditas | Hereditas (not to be confused with another journal called Heredity) is a scientific journal concerning genetics. It has been published since 1920 by Mendelska sällskapet i Lund (Mendelian Society of Lund). In its long history it has published important papers in the field of genetics, including the first discovery of the correct human chromosome count by Joe Hin Tijo and Albert Levan in 1956. In the post-genomic era, the scope of Hereditas has evolved to include any research on genomic analysis.
In 2005, Hereditas changed from a traditional subscription-based journal to become an open access, web based and author funded journal. By the end of 2014, Hereditas terminated its activity with Wiley Publishers. In May 2015 Hereditas was re-launched and became part of the portfolio of the open access publisher Biomed Central (BMC), now owned by Springer Nature.
It is indexed by BIOSIS, DOAJ, MEDLINE, Science Citation Index, and Scopus. Its 2021 impact factor (IF) is 3.4.
Editors-in-chief (EiC): Robert Larsson (1920–1954), Arne Müntzing (1955–1977), Arne Lundqvist (1978–1988), Karl Fredga and Arne Lundqvist (1989–1996), Ulf Gyllensten (1996–2001), Anssi Saura (2002–2011), Stefan Baumgartner (2012–2023), Yongyong Shi (2016–, in parallel to Baumgartner), Ramin Massoumi (2024–, in parallel to Yongyong Shi) |
https://en.wikipedia.org/wiki/171%20%28number%29 | 171 (one hundred [and] seventy-one) is the natural number following 170 and preceding 172.
In mathematics
171 is a triangular number and a Jacobsthal number.
There are 171 transitive relations on three labeled elements, and 171 combinatorially distinct ways of subdividing a cuboid by flat cuts into a mesh of tetrahedra, without adding extra vertices.
The diagonals of a regular decagon meet at 171 points, including both crossings and the vertices of the decagon.
There are 171 faces and edges in the 57-cell, an abstract 4-polytope with hemi-dodecahedral cells that is its own dual polytope.
Within moonshine theory of sporadic groups, the friendly giant $\mathbb{M}$ is defined as having cyclic groups $\langle g \rangle$ that are linked with the function
$T_g(\tau) = \sum_{n} a_n(g)\, q^n$, $q = e^{2\pi i \tau}$, $\tau \in \mathbb{H}$, where $a_n(g)$ is the character of $g$ at $n$.
This generates 171 moonshine groups within $\mathbb{M}$ associated with $T_g$ that are principal moduli for different genus zero congruence groups commensurable with the projective linear group $\operatorname{PSL}(2,\mathbb{Z})$.
See also
The year AD 171 or 171 BC
List of highways numbered 171 |
https://en.wikipedia.org/wiki/Harvard%20architecture | The Harvard architecture is a computer architecture with separate storage and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways.
The term is often stated as having originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself. However, in the only peer-reviewed published paper on the topic, "The Myth of the Harvard Architecture", published in the IEEE Annals of the History of Computing, the author demonstrates that:
- 'The term “Harvard architecture” was coined decades later, in the context of microcontroller design' and only 'retrospectively applied to the Harvard machines and subsequently applied to RISC microprocessors with separated caches'
- 'The so-called “Harvard” and “von Neumann” architectures are often portrayed as a dichotomy, but the various devices labeled as the former have far more in common with the latter than they do with each other.'
- 'In short [the Harvard architecture] isn't an architecture and didn't derive from work at Harvard.'
Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisible to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture.
Harvard architecture is historically, and traditionally, split into two address spaces, but having three, i.e. two extra (and all accessed in each cycle) |
https://en.wikipedia.org/wiki/Humster | A humster is a hybrid cell line made from a hamster oocyte fertilized with human sperm. This is possible due to the unique promiscuity of hamster ova, which allows them to fuse with non-hamster sperm. A humster always consists of a single cell, and cannot form a multi-cellular being. Humsters are usually destroyed before they divide into two cells; if isolated and left alone to divide, they would still be unviable.
Humsters are routinely created mainly for two reasons:
To avoid legal issues with working with pure human embryonic stem cell lines.
To assess the viability of human sperm for in vitro fertilization
Somatic cell hybrids between humans and hamsters or mice have been used for the mapping of various traits since at least the 1970s.
See also
Hamster zona-free ovum test
Human–animal hybrid
Recombinant DNA |
https://en.wikipedia.org/wiki/InnoDB | InnoDB is a storage engine for the database management system MySQL and MariaDB. Since the release of MySQL 5.5.5 in 2010, it replaced MyISAM as MySQL's default table type. It provides the standard ACID-compliant transaction features, along with foreign key support (Declarative Referential Integrity). It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions.
Description
InnoDB became a product of Oracle Corporation after its acquisition of the Finland-based company Innobase in October 2005. The software is dual licensed; it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software.
InnoDB supports:
Both SQL and XA transactions
Tablespaces
Foreign keys
Full text search indexes, since MySQL 5.6 (February 2013) and MariaDB 10.0
Spatial operations, following the OpenGIS standard
Virtual columns, in MariaDB
See also
Comparison of MySQL database engines |
https://en.wikipedia.org/wiki/Tyndall%20Medal | The Tyndall Medal is a prize from the Institute of Acoustics awarded every two years to a citizen of the UK, preferably under the age of 40, for "achievement and services in the field of acoustics". The prize is named after John Tyndall.
List of recipients
Source: Institute of Acoustics
See also
List of physics awards |
https://en.wikipedia.org/wiki/Night-blooming%20cereus | Night-blooming cereus is the common name referring to a large number of flowering ceroid cacti that bloom at night. The flowers are short lived, and some of these species, such as Selenicereus grandiflorus, bloom only once a year, for a single night, though most put out multiple flowers over a period of several weeks, each of which opens for only a single night. Other names for one or more cacti with this habit are princess of the night, Honolulu queen (for Hylocereus undatus), Christ in the manger, dama de noche, and queen of the night (which is also used for an unrelated plant species).
Genera and species
While many cacti referred to as night-blooming cereus belong to the tribe Cereeae, other night-blooming cacti in the subfamily Cactoideae may also be called night-blooming cereus. Cacti which may be called by this name include:
Cereus
Echinopsis (usually Echinopsis pachanoi, San Pedro cactus)
Epiphyllum (usually Epiphyllum oxypetalum, gooseneck cactus; grown as an indoor houseplant throughout the world, and the most popular cultivated night-blooming cereus)
Harrisia
Hylocereus (of which Hylocereus undatus is the most frequently cultivated outdoors, and is the main source of the commercial fruit crop, dragonfruit)
Monvillea
Nyctocereus (usually Nyctocereus serpentinus)
Peniocereus (Peniocereus greggii, the best known, is strictly a desert plant which grows from an underground tuber and is infrequently cultivated)
Selenicereus (usually Selenicereus grandiflorus)
Trichocereus
Description
Regardless of genus or species, night-blooming cereus flowers are almost always white or very pale shades of other colors, often large, and frequently fragrant. Most of the flowers open after nightfall, and by dawn, most are in the process of wilting. Plants in the same geographical area tend to bloom on the same night. Also for healthy plants there can sometimes be as many as three separate blooming events spread out over the warmest months. The plants that bear such |
https://en.wikipedia.org/wiki/Edith%20Rebecca%20Saunders | Edith Rebecca Saunders FLS (14 October 1865 – 6 June 1945) was a British geneticist and plant anatomist. Described by J. B. S. Haldane as the "Mother of British Plant Genetics", she played an active role in the re-discovery of Mendel's laws of heredity, the understanding of trait inheritance in plants, and was the first collaborator of the geneticist William Bateson. She also developed extensive work on flower anatomy, particularly focusing on the gynoecia, the female reproductive organs of flowers.
Biography
Saunders was born on 14 October 1865 in Brighton, England. She was educated first at Handsworth Ladies' College and in 1884 she entered the female-only Newnham College, Cambridge. There, she attended both Part I (in 1887) and II (in 1888) of the Natural Sciences Tripos.
She continued to post-graduate research, and served as a demonstrator at the Balfour Biological Laboratory for Women between 1888 and 1890 (where students from Newnham and Girton colleges received preparation for the Natural Sciences Tripos). She was the last director of that Laboratory between 1890 and 1914.
She was also director of studies at Girton College (1904–1914) and Newnham College (1918–1925).
She was appointed a fellow of the Royal Horticultural Society, from which she received the Banksian Medal in 1906. In 1905 she was elected a Fellow of the Linnean Society of London, later serving on its council (1910–1915) and as vice-president during 1912–1913.
In 1920 she was the president of the botanical section of the British Association for the Advancement of Science. She also served as president of the Genetics Society, between 1936 and 1938.
During World War II she served as a volunteer helping the Allied forces. She died soon after returning to Britain, in 1945, after suffering injuries in a bicycle accident.
Research
Saunders' earlier research focused on genetics. Many of her genetic experiments led to her and William Bateson defining important terms like "allelomorphs" (nowada |
https://en.wikipedia.org/wiki/Ethnoichthyology | Ethnoichthyology is an area in anthropology that examines human knowledge of fish, the uses of fish, and importance of fish in different human societies. It draws on knowledge from many different areas including ichthyology, economics, oceanography, and marine botany.
This area of study seeks to understand the details of the interactions of humans with fish, including both cognitive and behavioural aspects. A knowledge of fish and their life strategies is extremely important to fishermen. In order to conserve fish species, it is also important to be aware of other cultures' knowledge of fish. Ignorance of the effects of human activity on fish populations may endanger fish species. Knowledge of fish can be gained through experience, scientific research, or information passed down through generations. Some factors that affect the amount of knowledge acquired include the value and abundance of the various types of fish, their usefulness in fisheries, and the amount of time one spends observing the fishes' life history patterns.
Etymology
The term was first used in the scientific literature by W.T. Morrill. He justified the origin and use of this term by stating that it arose from the model of "ethnobotany".
Importance in conservation
Ethnoichthyology can be very useful to the study and investigation of environmental changes caused by anthropogenic factors, such as the decline of fish stocks, the disappearance of fish species, and the introduction of non-native species of fish in certain environments. Ethnoichthyological knowledge can be used to create environmental conservation strategies. With a sound knowledge of fish ecology, informed decisions with respect to fishing practices can be made, and destructive environmental practices can be avoided. Ethnoichthyological knowledge can be the difference between conserving a species of fish, or placing a moratorium on fishing.
Newfoundland's cod fishery collapse
The collapse of the cod fishery in Newfoundland and Lab |
https://en.wikipedia.org/wiki/OTRS | OTRS (originally Open-Source Ticket Request System) is a service management suite. The suite contains an agent portal, admin dashboard and customer portal. In the agent portal, teams process tickets and requests from customers (internal or external). There are various ways in which this information, as well as customer and related data can be viewed. As the name implies, the admin dashboard allows system administrators to manage the system: Options are many, but include roles and groups, process automation, channel integration, and CMDB/database options. The third component, the customer portal, is much like a customizable webpage where information can be shared with customers and requests can be tracked on the customer side.
History
In 2001, OTRS began as an open source help desk ticketing software.
In 2003, OTRS GmbH was formed and a professional company entered the EMEA market. This was followed in 2006 by entry into the North American market.
In 2007, the company was renamed to OTRS AG with the intention of going public, which it did in 2009. This is the same year in which OTRS was brought to Latin America with the Mexican subsidiary officially being founded in 2010. Entry into the APAC region occurred in 2011.
In 2015, a new version of the software, known as OTRS Business Solution, was launched. This proprietary version was designed for professional users who needed additional support, configuration and features.
Also in 2015, STORM powered by OTRS was launched.
In 2018, both OTRS-specific products were renamed: The open source version became ((OTRS)) Community Edition. The proprietary and managed version is named OTRS. A third offering, also proprietary, is called OTRS On-Premise for professional customers who intend to host the platform in their own data centers.
In December 2020, OTRS AG announced the end of life of support for the Community Edition which led to several forks. Alongside its customers was Wikimedia Foundation, which used ((OTRS)) Commu |
https://en.wikipedia.org/wiki/Squid%20as%20food | Squid is eaten in many cuisines; in English, the culinary name calamari is often used for squid dishes. There are many ways to prepare and cook squid. Fried squid is common in the Mediterranean. In New Zealand, Australia, the United States, Canada, and South Africa, it is sold in fish and chip shops, and steakhouses. In Britain, it can be found in Mediterranean 'calamari' or Asian 'salt and pepper fried squid' forms in various establishments, often served as a bar snack, street food, or starter.
Squid can be prepared for consumption in a number of other ways. In Korea and Japan, it is sometimes served raw, and elsewhere it is used as sushi, sashimi and tempura items, grilled, stuffed, covered in batter, stewed in gravy and served in stir-fries, rice, and noodle dishes. Dried shredded squid is a common snack in some Asian regions, including East Asia.
Use
The body (mantle), arms, tentacles, and ink of squid are all edible; the only parts of the squid that are not eaten are its beak and gladius (pen). The mantle can be stuffed whole, cut into flat pieces or sliced into rings.
Asia
In Chinese and Southeast Asian cuisine, squid is used in stir-fries, rice, and noodle dishes. It may be heavily spiced.
In China, Thailand, and Japan squid is grilled whole and sold in food stalls.
Pre-packaged dried shredded squid or cuttlefish are snack items in Hong Kong, Taiwan, Korea, Japan, China and Russia, often shredded to reduce chewiness.
Japan
In Japan, squid is used in almost every type of dish, including sushi, sashimi, and tempura. It can also be marinated in soy sauce (ika okizuke), stewed (nabemono), and grilled (ikayaki). It is eaten raw as ika sōmen.
Korea
In Korea, squid is sometimes killed and served quickly. Unlike octopus, squid tentacles do not usually continue to move when reaching the table. This fresh squid is 산 오징어 (san ojingeo) (also with small octopuses called nakji). The squid is served with Korean mustard, soy sauce, chili sauce, or sesame sauce. I |
https://en.wikipedia.org/wiki/Water%20remote%20sensing | Water Remote Sensing is the observation of water bodies such as lakes, oceans, and rivers from a distance in order to describe their color, state of ecosystem health, and productivity. Water remote sensing studies the color of water through the observation of the spectrum of water leaving radiance. From the spectrum of color coming from the water, the concentration of optically active components of the upper layer of the water body can be estimated via specific algorithms.
Water quality monitoring by remote sensing and close-range instruments has obtained considerable attention since the adoption of the EU Water Framework Directive.
Overview
Water remote sensing instruments (sensors) allow scientists to record the color of a water body, which provides information on the presence and abundance of optically active natural water components (plankton, sediments, detritus, or dissolved substances). The water color spectrum as seen by a satellite sensor is defined as an apparent optical property (AOP) of the water. This means that the color of the water is influenced by the angular distribution of the light field and by the nature and quantity of the substances in the medium, in this case, water. Thus, the values of remote sensing reflectance, an AOP, will change with changes in the optical properties and concentrations of the optically active substances in the water. Properties and concentrations of substances in the water are known as the inherent optical properties or IOPs. IOPs are independent from the angular distribution of light (the "light field") but they are dependent on the type and amount of substances that are present in the water. For instance, the diffuse attenuation coefficient of downwelling irradiance, Kd (often used as an index of water clarity or ocean turbidity) is defined as an AOP (or quasi-AOP), while the absorption coefficient and the scattering coefficient of the water are defined as IOPs.
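As a small numerical illustration of one such quantity, Kd can be estimated from downwelling irradiance measured at two depths, under the simplifying assumption of exponential attenuation between them (a Python sketch with hypothetical readings):

```python
import math

def diffuse_attenuation_kd(e_z1, e_z2, z1, z2):
    """Kd (per metre) from irradiance readings at depths z1 < z2,
    assuming E(z) = E(z1) * exp(-Kd * (z - z1)) between the two."""
    return math.log(e_z1 / e_z2) / (z2 - z1)

# Irradiance halves between 2 m and 7 m -> Kd ≈ 0.139 per metre
print(diffuse_attenuation_kd(100.0, 50.0, 2.0, 7.0))
```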
There are two different approaches to determine the concent |
https://en.wikipedia.org/wiki/Ohno%27s%20law | Ohno's law was proposed by the Japanese-American biologist Susumu Ohno. It states that the gene content of the mammalian X chromosome has been conserved across species, not only in DNA content but also in the genes themselves. That is, nearly all mammalian species have conserved the X chromosome from the primordial X chromosome of a common ancestor.
Evidence
Mammalian X chromosomes in various species, including human and mouse, have nearly the same size, with the content of about 5% of the genome. Additionally, for individual gene loci, a number of X-linked genes are common through mammalian species. Examples include glucose-6-phosphate dehydrogenase (G6PD), and the genes for Factor VIII and Factor IX. Moreover, no instances were found where an X-linked gene in one species was located on an autosome in the other species.
Conservation mechanisms
The content of a chromosome would be changed mainly by mutation after duplication of the chromosome and by translocation with other chromosomes. However, in mammals, since the chromosomal sex-determination mechanism was established early in their evolution, polyploidy would not have occurred, owing to its incompatibility with the sex-determining mechanism. Moreover, X-autosome translocations would have been prohibited because they might have had effects detrimental to the organism's survival. Thus in mammals, the content of the X chromosome has been conserved since the typical two rounds of genome duplication at early ancestral stages of evolution, at the fish or amphibian stage (2R hypothesis).
Contradicting and supportive evidence
Genes on the long arm of the human X are contained in the monotreme X and genes on the short arm of the human X are distributed on the autosomes of marsupials. Ohno commented to the result that monotremes and marsupials were not considered to be ancestors of true mammals, but they have diverged very early from the main line of mammals. Chloride channel gene (CLCN4) was mapped to the human X |
https://en.wikipedia.org/wiki/Index%20mark | Index mark has multiple meanings.
In computing, an index mark or index track is a physical impression made on a hard disk drive. Its purpose is to indicate the starting point for each track on the hard disk drive. Usually, an index mark takes the form of a hole, gap, or magnetic strip. It also allows a hard disk drive head to quickly move to various spots on the drive.
In electronics components, an index mark is a reference symbol printed on or molded into the casing of a device or circuit board, to indicate the location of "Pin 1". This allows the correct orientation of the component in a larger circuit assembly, so that the electrical leads can be correctly connected.
Another kind of index mark is a component of the registration system for road vehicles in the United Kingdom. It consists of a two-letter combination allocated to a local vehicle licensing office. Certain letters are associated with particular parts of the British Isles. Marks that include I or Z are issued either in the Republic of Ireland or Northern Ireland, those that include S in Scotland, and others in England and Wales, though some combinations have not been authorised for use. As an example the combinations AF, CV, GL, and RL were allocated to the Truro licensing office. |
https://en.wikipedia.org/wiki/Evgeny%20Yakovlevich%20Remez | Evgeny Yakovlevich Remez (sometimes spelled as Evgenii Yakovlevich Remez; born 1895 in Mstislavl, now Belarus; died 1975 in Kyiv, now Ukraine) was a Soviet mathematician. He is known for his work in the constructive function theory, in particular, for the Remez algorithm and the Remez inequality.
His doctoral students include Boris Korenblum. |
https://en.wikipedia.org/wiki/Reuterin | Reuterin (3-hydroxypropionaldehyde) is the organic compound with the formula HOCH2CH2CHO. It is a bifunctional molecule, containing both a hydroxy and aldehyde functional groups.
The name reuterin is derived from Lactobacillus reuteri, which produces the compound biosynthetically from glycerol as a broad-spectrum antibiotic (bacteriocin). L. reuteri itself is named after the microbiologist Gerhard Reuter, who did early work in distinguishing it as a distinct species.
Solution structure
In aqueous solution 3-hydroxypropionaldehyde exists in equilibrium with its hydrate (1,1,3-propanetriol), in which the aldehyde group converts to a geminal diol:
HOCH2CH2CHO + H2O → HOCH2CH2CH(OH)2
The hydrate is also in equilibrium with its dimer (2-(2-hydroxyethyl)-4-hydroxy-1,3-dioxane), which dominates at high concentrations. These three components (the aldehyde, its dimer, and the hydrate) are therefore in a dynamic equilibrium.
In addition, 3-hydroxypropionaldehyde undergoes spontaneous dehydration in aqueous solution; the resulting molecule is acrolein.
In fact, the term reuterin is the name given to the dynamic system formed by 3-hydroxypropionaldehyde, its hydrate, the dimer, and acrolein. This last molecule, acrolein, was only recently included in the definition of reuterin.
Synthesis and reactions
3-Hydroxypropionaldehyde is formed by the condensation of acetaldehyde and formaldehyde. This reaction, when conducted in the gas phase, was the basis for a now-obsolete industrial route to acrolein:
CH3CHO + CH2O → HOCH2CH2CHO
HOCH2CH2CHO → CH2=CHCHO + H2O
Presently 3-hydroxypropionaldehyde is an intermediate in the production of pentaerythritol. Hydrogenation of reuterin gives 1,3-propanediol.
Biological activity
Reuterin is an intermediate in the metabolism of glycerol to 1,3-propanediol catalysed by the coenzyme B12-dependent glycerol dehydratase.
Reuterin is a potent antimicrobial compound produced by Lactobacillus reuteri. It is an intermediate in the metabol |
https://en.wikipedia.org/wiki/3%CF%89-method | The 3ω-method (3 omega method) or 3ω-technique is a measurement method for determining the thermal conductivities of bulk material (i.e. solid or liquid) and thin layers. The process involves a metal heater applied to the sample that is heated periodically. The temperature oscillations thus produced are then measured. The thermal conductivity and thermal diffusivity of the sample can be determined from their frequency dependence.
Theory
The 3ω-method can be accomplished by depositing a thin metal structure (generally a wire or a film) onto the sample to function as a resistive heater and a resistance temperature detector (RTD). The heater is driven with AC current at frequency ω, which induces periodic Joule heating at frequency 2ω (since $I^2 \propto \cos^2(\omega t) = \tfrac{1}{2}\left(1 + \cos(2\omega t)\right)$) due to the oscillation of the AC signal during a single period.
There will be some delay between the heating of the sample and the temperature response which is dependent upon the thermal properties of the sensor/sample. This temperature response is measured by logging the amplitude and phase delay of the AC voltage signal from the heater across a range of frequencies (generally accomplished using a lock-in-amplifier).
Note, the phase delay of the signal is the lag between the heating signal and the temperature response. The measured voltage will contain both the fundamental and third harmonic components (ω and 3ω respectively), because the Joule heating of the metal structure induces oscillations in its resistance with frequency 2ω due to the temperature coefficient of resistance (TCR) of the metal heater/sensor as stated in the following equation:
$R(t) = R_0\left(1 + C_0\,\Delta T \cos(2\omega t + \varphi)\right),$
where $C_0$ is a constant (the temperature coefficient of resistance of the heater). Thermal conductivity is determined by the linear slope of the ΔT vs. log(ω) curve. The main advantages of the 3ω-method are minimization of radiation effects and easier acquisition of the temperature dependence of the thermal conductivity than in the steady-state techniques. Although some expertise in thin film patterning and microlithography is required, |
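The slope extraction can be sketched as follows, assuming the standard line-heater result in which the slope of ΔT versus ln(ω) equals −P/(2πlκ), with P the heater power and l the heater length (a sketch, not a substitute for a full thermal model):

```python
import numpy as np

def kappa_from_3omega(freqs_hz, delta_t, power_w, length_m):
    """Thermal conductivity from the linear region of ΔT vs ln(ω),
    using kappa = -P / (2 * pi * l * slope) for an ideal line
    heater on a semi-infinite sample."""
    omega = 2.0 * np.pi * np.asarray(freqs_hz)
    slope, _ = np.polyfit(np.log(omega), np.asarray(delta_t), 1)
    return -power_w / (2.0 * np.pi * length_m * slope)
```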
https://en.wikipedia.org/wiki/Mesoplasma | Mesoplasma is a genus of bacteria belonging to the class Mollicutes. Mesoplasma is related to the genus Mycoplasma but differ in several respects.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI)
See also
List of bacteria genera
List of bacterial orders |
https://en.wikipedia.org/wiki/Norepinephrine%20transporter | The norepinephrine transporter (NET), also known as noradrenaline transporter (NAT), is a protein that in humans is encoded by the solute carrier family 6 member 2 (SLC6A2) gene.
NET is a monoamine transporter and is responsible for the sodium-chloride (Na+/Cl−)-dependent reuptake of extracellular norepinephrine (NE), which is also known as noradrenaline. NET can also reuptake extracellular dopamine (DA). The reuptake of these two neurotransmitters is essential in regulating concentrations in the synaptic cleft. NETs, along with the other monoamine transporters, are the targets of many antidepressants and recreational drugs. In addition, an overabundance of NET is associated with ADHD. There is evidence that single-nucleotide polymorphisms in the NET gene (SLC6A2) may be an underlying factor in some of these disorders.
Gene
The norepinephrine transporter gene, SLC6A2, is located on human chromosome 16 at locus 16q12.2 and contains 14 exons. Based on the nucleotide and amino acid sequence, the NET transporter consists of 617 amino acids with 12 membrane-spanning domains. The structural organization of NET is highly homologous to other members of a sodium/chloride-dependent family of neurotransmitter transporters, including the dopamine, epinephrine, serotonin and GABA transporters.
Single-nucleotide polymorphisms
A single-nucleotide polymorphism (SNP) is a genetic variation in which a genome sequence is altered by a single nucleotide (A, T, C or G). NET proteins with an altered amino acid sequence (more specifically, a missense mutation) could potentially be associated with various diseases that involve abnormally high or low plasma levels of norepinephrine due to altered NET function. NET SNPs and possible associations with various diseases are an area of focus for many research projects. There is evidence suggesting a relationship between NET SNPs and various disorders such as ADHD psychiatric disorders, postural tachycardia and orthostatic intolerance. The |
https://en.wikipedia.org/wiki/Population%20impact%20measure | Population impact measures (PIMs) are biostatistical measures of risk and benefit used in epidemiological and public health research. They are used to describe the impact of health risks and benefits in a population, to inform health policy.
Frequently used measures of risk and benefit identified by Jekel, Katz and Elmore describe measures of risk difference (attributable risk), rate difference (often expressed as the odds ratio or relative risk), population attributable risk (PAR), and the relative risk reduction, which can be recalculated into a measure of absolute benefit, called the number needed to treat.
They are measures of absolute risk and benefit, producing numbers of people who will benefit from an intervention or be at risk from a risk factor within a particular local or national population. They provide local context to previous measures, allowing policy-makers to identify and prioritise the potential benefits of interventions on their own population. They are simple to compute, and contain the elements to which policy-makers would have to pay attention in the commissioning or improvement of services. They may have special relevance for local policy-making. They depend on the ability to obtain and use local data, and by being explicit about the data required may have the added benefit of encouraging the collection of such data.
Measures
To describe the impact of preventive and treatment interventions, the number of events prevented in a population (NEPP) is defined as "the number of events prevented by the intervention in a population over a defined time period". NEPP extends the well-known measure number needed to treat (NNT) beyond the individual patient to the population. To de |
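A hedged sketch of an NEPP calculation. It assumes the commonly cited parameterization NEPP = n × Pd × Pe × r × RRR (population size, disease prevalence, proportion eligible for treatment, baseline event risk in the untreated, relative risk reduction); verify the exact formula against the primary literature before relying on it. All inputs below are illustrative:

```python
def nepp(n, prevalence, eligible, baseline_risk, rrr):
    """Number of events prevented in a population over the time period
    implied by baseline_risk (assumed parameterization, see lead-in)."""
    return n * prevalence * eligible * baseline_risk * rrr

# Hypothetical example: a town of 100,000, 5% disease prevalence,
# 80% eligible for treatment, 10% annual event risk if untreated,
# and a treatment that cuts relative risk by 25%.
print(nepp(100_000, 0.05, 0.8, 0.10, 0.25))  # -> 100.0 events prevented/year
```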
https://en.wikipedia.org/wiki/Camera%20interface | The Camera Interface block or CAMIF is the hardware block that interfaces with different image sensor interfaces and provides a standard output that can be used for subsequent image processing.
A typical camera interface supports at least a parallel interface, although many camera interfaces now also support the Mobile Industry Processor Interface (MIPI) Camera Serial Interface (CSI).
Electrical connections
The camera interface's parallel interface consists of the following lines:
8 to 12 bits parallel data line
These are parallel data lines that carry pixel data. The data transmitted on these lines change with every Pixel Clock (PCLK).
Horizontal Sync (HSYNC)
This is a special signal that goes from the camera sensor or ISP to the camera interface. An HSYNC pulse indicates that one line of the frame has been transmitted.
Vertical Sync (VSYNC)
This signal is transmitted after the entire frame is transferred, and is often used to indicate that one complete frame has been transmitted.
Pixel Clock (PCLK)
This is the pixel clock; it toggles once per pixel.
NOTE: The above lines are all treated as input lines to the Camera Interface hardware. A toy frame-capture model built on these signals is sketched below.
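The following sketch models how the three signals cooperate during capture. It assumes single-cycle, active-high HSYNC/VSYNC pulses at the end of each line/frame and one pixel sampled per PCLK cycle; real sensors vary in polarity and timing, so treat this as illustrative only:

```python
def capture_frame(samples):
    """Assemble a frame from (pixel, hsync, vsync) tuples, one per PCLK edge.

    Assumptions (hypothetical): hsync=1 marks the last pixel of a line,
    vsync=1 marks the last pixel of the frame.
    """
    frame, line = [], []
    for pixel, hsync, vsync in samples:
        line.append(pixel)    # data lines are latched on each pixel clock
        if hsync:             # HSYNC: one line has been transmitted
            frame.append(line)
            line = []
        if vsync:             # VSYNC: the entire frame has been transmitted
            break
    return frame

# A hypothetical 2-line, 3-pixel-per-line stream:
stream = [(10, 0, 0), (11, 0, 0), (12, 1, 0),
          (20, 0, 0), (21, 0, 0), (22, 1, 1)]
print(capture_frame(stream))  # [[10, 11, 12], [20, 21, 22]]
```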
See also
Digital camera
Digital photography
Demosaicing
Digital image processing
Camera Serial Interface (CSI) |
https://en.wikipedia.org/wiki/D%27Alembert%E2%80%93Euler%20condition | In mathematics and physics, especially the study of mechanics and fluid dynamics, the d'Alembert–Euler condition is a requirement that the streaklines of a flow are irrotational. Let x = x(X,t) be the coordinates of the point x into which X is carried at time t by a (fluid) flow. Let a = D²x/Dt² be the second material derivative of x. Then the d'Alembert–Euler condition is: ∇ × a = 0.
The d'Alembert-Euler condition is named for Jean le Rond d'Alembert and Leonhard Euler who independently first described its use in the mid-18th century. It is not to be confused with the Cauchy–Riemann conditions. |
https://en.wikipedia.org/wiki/Mechanical%20testing | Mechanical testing covers a wide range of tests, which can be divided broadly into two types:
those that aim to determine a material's mechanical properties, independent of geometry.
those that determine the response of a structure to a given action, e.g. testing of composite beams, aircraft structures to destruction, etc.
Mechanical testing of materials
There exists a large number of tests, many of which are standardized, to determine the various mechanical properties of materials. In general, such tests set out to obtain geometry-independent properties; i.e. those intrinsic to the bulk material. In practice this is not always feasible, since even in tensile tests, certain properties can be influenced by specimen size and/or geometry. Here is a listing of some of the most common tests:
Hardness Testing
Vickers hardness test (HV), which has one of the widest scales
Brinell hardness test (HB)
Knoop hardness test (HK), for measurement over small areas
Janka hardness test, for wood
Meyer hardness test
Rockwell hardness test (HR), principally used in the USA
Shore durometer hardness, used for polymers
Barcol hardness test, for composite materials
Tensile testing, used to obtain the stress-strain curve for a material, and from there, properties such as Young's modulus, yield (or proof) stress, ultimate tensile strength and % elongation to failure (a worked example follows this list).
Impact testing
Izod test
Charpy test
Fracture toughness testing
Linear-elastic (KIc)
K–R curve
Elastic plastic (JIc, CTOD)
Creep Testing, for the mechanical behaviour of materials at high temperatures (relative to their melting point)
Fatigue Testing, for the behaviour of materials under cyclic loading
Load-controlled smooth specimen tests
Strain-controlled smooth specimen tests
Fatigue crack growth testing
Non-Destructive Testing |
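As promised under tensile testing above, a short sketch of how two geometry-independent properties can be read off a stress-strain curve; the data points are invented for illustration:

```python
import numpy as np

# Hypothetical engineering stress-strain pairs from a tensile test.
strain = np.array([0.000, 0.001, 0.002, 0.003, 0.004, 0.006, 0.010])
stress_mpa = np.array([0.0, 200.0, 400.0, 600.0, 620.0, 640.0, 650.0])

# Young's modulus: slope of the initial linear (elastic) region.
elastic = strain <= 0.003
E = np.polyfit(strain[elastic], stress_mpa[elastic], 1)[0]   # MPa
print(f"Young's modulus ~ {E / 1000:.0f} GPa")               # -> 200 GPa

# Ultimate tensile strength: peak engineering stress.
print(f"UTS = {stress_mpa.max():.0f} MPa")                   # -> 650 MPa
```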
https://en.wikipedia.org/wiki/HdeA%20family | In molecular biology, the HdeA family of proteins (HNS (histone-like nucleoid structuring)-dependent expression A) is a stress response protein family found in highly acid resistant bacteria such as Shigella flexneri and Escherichia coli, but which is lacking in mildly acid tolerant bacteria such as Salmonella. HdeA is one of the most abundant proteins found in the periplasmic space of E. coli, where it is one of a network of proteins that confer an acid resistance phenotype essential for the pathogenesis of enteric bacteria. HdeA is thought to act as a chaperone, functioning to prevent the aggregation of periplasmic proteins denatured under acidic conditions. The HNS protein, a chromatin-associated protein that influences the gene expression of several environmentally-induced target genes, represses the expression of HdeA. HdeB, which is encoded within the same operon, may form heterodimers with HdeA. HdeA is a single domain protein with an overall fold that is similar to the fold of the N-terminal subdomain of the GluRS anticodon-binding domain. |
https://en.wikipedia.org/wiki/Josef%20Schnitter | Josef Schnitter (Yosif Shniter; 16 October 1852 – 26 April 1914) was a Czech–Bulgarian architect, engineer and geodesist credited with shaping the modern appearance of Plovdiv, Bulgaria's second-largest city.
Schnitter was born in the small town of Nový Bydžov in Bohemia, Austrian Empire (today in the Czech Republic). He graduated from the University of Technology's Faculty of Construction in the imperial capital Vienna and then moved to the Russian Empire, where he converted to the Eastern Orthodox Church. Schnitter arrived in Bulgaria during the Russo-Turkish War of 1877–1878 and assisted the Imperial Russian Army as an engineer, designing the pontoon bridges used in the crossing of the Danube and the fortification equipment employed by the Russians in the Siege of Plevna. Schnitter was injured at Pleven and received a sabre from General Eduard Totleben, who commanded the siege, as a recognition of his contribution to the siege's success.
Even before the Treaty of San Stefano was signed on 3 March 1878 and Bulgaria was liberated from Ottoman rule, Schnitter was dispatched to Plovdiv by the Russians. He settled in the city, which was the capital of autonomous Eastern Rumelia until the Unification of Bulgaria in 1885, and, after a brief private practice, served as Plovdiv's head architect and municipal chief of engineering from 1878 until his death in 1914.
Schnitter was married to Elisabeth Maria Friederike Baumann from Windsheim in Germany, who was staying in Plovdiv's Old Town en route to Istanbul. According to the story, he met his future wife when the crumbs she threw out of her window landed on Schnitter, who was touring the buildings he had constructed. His best man was the Slovene scholar Anton Bezenšek, who spent two decades teaching in Plovdiv. The architect acquired Bulgarian citizenship in 1906. Josef Schnitter died of bronchopneumonia in 1914; he fell ill while overseeing the repair of the city water conduit. He was interred in the Plovdiv Central Cemetery
https://en.wikipedia.org/wiki/Nina%20Bari | Nina Karlovna Bari (19 November 1901 – 15 July 1961) was a Soviet mathematician known for her work on trigonometric series. She is also well known for two textbooks, Higher Algebra and The Theory of Series.
Early life and education
Nina Bari was born in Russia on 19 November 1901, the daughter of Olga and Karl Adolfovich Bari, a physician. In 1918, she became one of the first women to be accepted to the Department of Physics and Mathematics at the prestigious Moscow State University. She graduated in 1921—just three years after entering the university. After graduation, Bari began her teaching career. She lectured at the Moscow Forestry Institute, the Moscow Polytechnic Institute, and the Sverdlov Communist Institute. Bari applied for and received the only paid research fellowship awarded by the newly created Research Institute of Mathematics and Mechanics. As a student, Bari was drawn to an elite group nicknamed the Luzitania—an informal academic and social organization. She studied trigonometric series and functions under the tutelage of Nikolai Luzin, becoming one of his star students. She presented the main result of her research to the Moscow Mathematical Society in 1922—the first woman to address the society.
In 1926, Bari completed her doctoral work on the topic of trigonometric expansions, winning the Glavnauk Prize for her thesis work. In 1927, Bari took advantage of an opportunity to study in Paris at the Sorbonne and the Collège de France. She then attended the Polish Mathematical Congress in Lwów, Poland; a Rockefeller grant enabled her to return to Paris to continue her studies. Bari's decision to travel may have been influenced by the disintegration of the Luzitanians. Luzin's irascible, demanding personality had alienated many of the mathematicians who had gathered around him. By 1930, all traces of the Luzitania movement had vanished, and Luzin left Mos
https://en.wikipedia.org/wiki/238%20%28number%29 | 238 (two hundred [and] thirty-eight) is the natural number following 237 and preceding 239.
In mathematics
238 is an untouchable number.
There are 238 2-vertex-connected graphs on five labeled vertices, and 238 order-5 polydiamonds (polyiamonds that can be partitioned into 5 diamonds). Out of the 720 permutations of six elements, exactly 238 of them have a unique longest increasing subsequence.
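The permutation fact is small enough to verify by brute force. A sketch using a standard dynamic program that counts longest increasing subsequences (the names and structure are mine, not from the article):

```python
from itertools import permutations

def lis_count(perm):
    """Return how many longest increasing subsequences perm has."""
    n = len(perm)
    length = [1] * n   # length[i]: longest increasing subsequence ending at i
    count = [1] * n    # count[i]: number of such subsequences ending at i
    for i in range(n):
        for j in range(i):
            if perm[j] < perm[i]:
                if length[j] + 1 > length[i]:
                    length[i] = length[j] + 1
                    count[i] = count[j]
                elif length[j] + 1 == length[i]:
                    count[i] += count[j]
    best = max(length)
    return sum(c for l, c in zip(length, count) if l == best)

unique = sum(1 for p in permutations(range(6)) if lis_count(p) == 1)
print(unique)   # -> 238
```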
There are 238 compact and paracompact hyperbolic groups of ranks 3 through 10. |
https://en.wikipedia.org/wiki/European%20embedded%20value | The European embedded value (EEV) is an effort by the CFO Forum to standardize the calculation of the embedded value. For this purpose the CFO Forum has released guidelines how embedded value should be calculated.
There is a lot of subjectivity involved in calculating the value of a life insurer. Insurance contracts are long-term contracts, so the value of the company now is dependent on how each of those contracts end up performing. Profit is made if the policyholder does not die, for example, and just contributes premiums over many years. Losses are possible for policies where the insured dies soon after signing the contract. And profitability is also affected by whether (and when) a policy might terminate early.
An actuary calculates an embedded value by making certain assumptions about life expectancy, persistency, investment conditions, and so on - thus making an estimate of what the company is worth now. But if each person has a different opinion on how things will turn out, you could expect a range of inconsistent estimates of the worth of the company. With this range of approaches, it is very difficult to compare EV calculations between companies.
The CFO Forum was formed to consider general issues relevant to measuring the value of insurance companies. The EEV was the output of this forum, and allows greater consistency in such calculations, making them more useful.
Types
EEV can be "real world" or "market consistent". The former takes the best estimate for parameters that are available, whereas the latter uses a slightly constrained set of parameters which are close to best estimate, but which produce results which match market-related hedge costs.
Real-world EEV usually uses a risk discount rate made up of the risk-free rate plus a risk margin which reflects the weighted average cost of capital and Beta from the CAPM model. Using company-level economic models clearly reflects a top-down approach to determining the risk discount rate.
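A minimal sketch of the CAPM-based construction just described, with purely illustrative parameter values (the function name and inputs are mine):

```python
def risk_discount_rate(risk_free, beta, equity_risk_premium):
    """CAPM-style rate: risk-free rate plus beta times the equity risk premium."""
    return risk_free + beta * equity_risk_premium

# Hypothetical inputs: 3% risk-free rate, beta of 1.2, 5% equity risk premium.
print(risk_discount_rate(0.03, 1.2, 0.05))   # -> 0.09, i.e. a 9% discount rate
```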
Market-cons |
https://en.wikipedia.org/wiki/Fibular%20retinacula | The fibular retinacula (also known as peroneal retinacula) are fibrous retaining bands that bind down the tendons of the fibularis longus and fibularis brevis muscles as they run across the side of the ankle. (Retinaculum is Latin for "retainer.")
These bands consist of the superior fibular retinaculum and the inferior fibular retinaculum. The superior fibers are attached above to the lateral malleolus and below to the lateral surface of the calcaneus. The inferior fibers are continuous in front with those of the inferior extensor retinaculum of the foot; behind they are attached to the lateral surface of the calcaneus; some of the fibers are fixed to the calcaneal tubercle, forming a septum between the tendons of the fibularis longus and fibularis brevis.
See also
Fibularis longus
Fibularis brevis |
https://en.wikipedia.org/wiki/ColorOS | ColorOS is a user interface created by Oppo based on the Android Open Source Project. Initially, Realme phones used ColorOS until it was replaced by Realme UI in 2020. Realme UI uses some of ColorOS's apps. Starting with the OnePlus 9 series, OnePlus preinstalls ColorOS instead of HydrogenOS (the Chinese version of OxygenOS) on all smartphones sold in mainland China.
The first version of ColorOS was launched in September 2013. Oppo had released many Android smartphones before then; their software was not stock Android, but Oppo did not label it ColorOS. Over the years, Oppo launched new official versions of the operating system. To make things less confusing, in 2020 the company revealed that it would adopt the same numbering scheme as mainline Android, and as such ColorOS jumped from ColorOS 7 to ColorOS 11.
In the future, ColorOS, OnePlus' OxygenOS and Realme UI will merge to form a single Android skin that will appear on all OnePlus, Oppo and Realme phones.
Version history
Further reading
How OPPO's ColorOS 13 Pushes the Trend of Android Customization |
https://en.wikipedia.org/wiki/Deep%20hypothermic%20circulatory%20arrest | Deep hypothermic circulatory arrest (DHCA) is a surgical technique in which the temperature of the body falls significantly (between 20 °C (68 °F) and 25 °C (77 °F)) and blood circulation is stopped for up to one hour. It is used when blood circulation to the brain must be stopped because of delicate surgery within the brain, or because of surgery on large blood vessels that lead to or from the brain. DHCA is used to provide a better visual field during surgery due to the cessation of blood flow. DHCA is a form of carefully managed clinical death in which heartbeat and all brain activity cease.
When blood circulation stops at normal body temperature (37 °C), permanent damage occurs in only a few minutes. More damage occurs after circulation is restored. Reducing body temperature extends the time interval that such stoppage can be survived. At a brain temperature of 14 °C, blood circulation can be safely stopped for 30 to 40 minutes. There is an increased incidence of brain injury at times longer than 40 minutes, but sometimes circulatory arrest for up to 60 minutes is used if life-saving surgery requires it. Infants tolerate longer periods of DHCA than adults.
Applications of DHCA include repairs of the aortic arch, repairs to head and neck great vessels, repair of large cerebral aneurysms, repair of cerebral arteriovenous malformations, pulmonary thromboendarterectomy, and resection of tumors that have invaded the vena cava.
History
The use of hypothermia for medical purposes dates back to Hippocrates, who advocated packing snow and ice into wounds to reduce hemorrhage. Early evidence of hypothermia's neuroprotective effect also came from abandoned infants exposed to cold, who remained viable for prolonged periods.
In the 1940s and 1950s, Canadian surgeon Wilfred Bigelow demonstrated in animal models that the length of time the brain could survive stopped blood circulation could be extended from 3 minutes to 10 minutes by cooling to 30 °C before |
https://en.wikipedia.org/wiki/MineCam | The MineCam is a remote exploration camera built by I.A.Recordings. It is used for mine shaft exploration and other similar environments. It was originally conceptualized in 1988, and has since undergone several design revisions. The name MineCam is a pun on MiniCam, an early hand-held broadcast camera built by CBS Laboratories.
History
Peter Eggleston of I.A.Recordings first had the idea for what became "MineCam" in 1988. He had been visiting some metal mines in Wales with the Shropshire Caving and Mining Club and spent several hours setting up a single rope technique rig to descend a remote shaft, only to find that there were no ways off at the bottom. This was the motivation to build a miniature camera which would allow enthusiasts to explore hard-to-reach, unsafe, or otherwise inaccessible areas.
The remote exploration of mines prior to 1988 had been done commercially for several years by pipeline camera firms using equipment that needed to be housed in a vehicle and powered by a generator. Many old mine shafts are remote from roads though, so Peter's final goal was a small lightweight battery-powered kit which could be carried on foot. The first two versions of MineCam did not achieve this, but tested various approaches with the video technology available at the time.
Versions
MineCam 1
MineCam 1 used a monochrome vidicon camera in a waterproof housing made out of a 10 cm plastic sewer pipe and fittings, with an acrylic window. This was successfully tested in the deep end of a swimming pool. The camera was insensitive - it needed a 150 W lamp, which required a 240 V supply, but so did the camera. The cable was 100 m of video co-axial and power, taped together at 2 m intervals and numbered to give a crude depth measurement. The camera and lamp were heavy, so an old 6 mm static climbing rope was used to support it. The monitor was a 10 cm portable TV.
MineCam 1 worked, but the monochrome image was sometimes difficult to interpret. It was time to try colour, a |
https://en.wikipedia.org/wiki/M%20protein%20%28Streptococcus%29 | M protein is a virulence factor that can be produced by certain species of Streptococcus.
Viruses, parasites and bacteria are covered in protein and sugar molecules that help them gain entry into a host by counteracting the host's defenses. One such molecule is the M protein produced by certain streptococcal bacteria. At their C-terminus within the cell wall, M proteins embody a motif that is now known to be shared by many Gram-positive bacterial surface proteins. The motif includes a conserved hexapeptide LPXTGE, which precedes a hydrophobic C-terminal membrane-spanning domain, which itself precedes a cluster of basic residues at the C-terminus.
M protein is strongly anti-phagocytic and is the major virulence factor for group A streptococci (Streptococcus pyogenes). It binds to serum factor H, destroying C3-convertase and preventing opsonization by C3b. However, plasma B cells can generate antibodies against M protein which help in opsonization and the further destruction of the microorganism by macrophages and neutrophils. Cross-reactivity of anti-M protein antibodies with heart muscle has been suggested to be associated in some way with rheumatic fever.
It was originally identified by Rebecca Lancefield, who also formulated the Lancefield classification system for streptococcal bacteria. Bacteria like S. pyogenes, which possess M protein are classified in group A of the Lancefield system.
Literature |
https://en.wikipedia.org/wiki/Affine%20root%20system | In mathematics, an affine root system is a root system of affine-linear functions on a Euclidean space. They are used in the classification of affine Lie algebras and superalgebras, and semisimple p-adic algebraic groups, and correspond to families of Macdonald polynomials. The reduced affine root systems were used by Kac and Moody in their work on Kac–Moody algebras. Possibly non-reduced affine root systems were introduced and classified by and (except that both these papers accidentally omitted the Dynkin diagram ).
Definition
Let E be an affine space and V the vector space of its translations.
Recall that V acts faithfully and transitively on E.
In particular, if x, y ∈ E, then there is a well-defined element of V, denoted y − x, which is the unique element w such that x + w = y.
Now suppose we have a scalar product on V.
This defines a metric on E as d(x, y) = √⟨y − x, y − x⟩.
Consider the vector space F of affine-linear functions f : E → R.
Having fixed a point x₀ ∈ E, every element f ∈ F can be written as f(x) = Df(x − x₀) + f(x₀), with Df a linear function on V that does not depend on the choice of x₀.
Now the dual of V can be identified with V thanks to the chosen scalar product, and we can define a product on F as ⟨f, g⟩ = ⟨Df, Dg⟩.
Set and for any and respectively.
This identification lets us define a reflection over E in the following way:
By transposition, this reflection also acts on F as
An affine root system is a subset S ⊆ F such that:
The elements of S are called affine roots.
Denote with the group generated by the with .
We also ask
This means that, for any two compact sets, only finitely many elements of satisfy the condition.
Classification
The affine root systems A1 = B1 = B∨1 = C1 = C∨1 are the same, as are the pairs B2 = C2, B∨2 = C∨2, and A3 = D3.
The number of orbits given in the table is the number of orbits of simple roots under the Weyl group.
In the Dynkin diagrams, the non-reduced simple roots α (with 2α a root) are colored green. The first Dynkin diagram in a series sometimes does not follow the same rule as the others.
Irreducible affine root systems by rank
Rank 1: A |
https://en.wikipedia.org/wiki/Wireless%20sensor%20network | Wireless sensor networks (WSNs) refer to networks of spatially dispersed and dedicated sensors that monitor and record the physical conditions of the environment and forward the collected data to a central location. WSNs can measure environmental conditions such as temperature, sound, pollution levels, humidity and wind.
These are similar to wireless ad hoc networks in the sense that they rely on wireless connectivity and spontaneous formation of networks so that sensor data can be transported wirelessly. WSNs monitor physical conditions, such as temperature, sound, and pressure. Modern networks are bi-directional, both collecting data and enabling control of sensor activity. The development of these networks was motivated by military applications such as battlefield surveillance. Such networks are used in industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring and agriculture.
A WSN is built of "nodes" – from a few to hundreds or thousands, where each node is connected to other sensors. Each such node typically has several parts: a radio transceiver with an internal antenna or connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from a shoebox to (theoretically) a grain of dust, although microscopic dimensions have yet to be realized. Sensor node cost is similarly variable, ranging from a few to hundreds of dollars, depending on node sophistication. Size and cost constraints constrain resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. Propagation can employ routing or flooding.
In computer science and telecommunications, wireless sensor networks are an active research area supporting many worksho |
https://en.wikipedia.org/wiki/Y.%20P.%20Viyogi | Yogendra Pathak Viyogi (Y. P. Viyogi) is an Indian physicist at the Indian National Science Academy. He specializes in experimental nuclear physics.
Early life
He was born at Madhubani in 1948 and completed his primary education in his own village.
He received his post graduate degree in physics from Bihar University in Muzaffarpur.
Scientific career
He joined the 15th batch of the Training School Programme of the Bhabha Atomic Research Centre (BARC), Mumbai, in 1971. He was trained in experimental nuclear physics at BARC and at the Lawrence Berkeley Laboratory, USA. He moved to Kolkata to work at the Variable Energy Cyclotron Centre (VECC), a unit of the Department of Atomic Energy, and obtained his PhD in 1984 from the University of Calcutta. He was also a postdoctoral fellow at the GANIL laboratory in France from 1984 to 1986. He was Director of the Institute of Physics, Bhubaneswar, from June 2006 to June 2009. He retired from service in October 2012 as Outstanding Scientist at VECC Kolkata.
https://en.wikipedia.org/wiki/Interfacial%20rheology | Interfacial rheology is a branch of rheology that studies the flow of matter at the interface between a gas and a liquid or at the interface between two immiscible liquids. The measurement is done with surfactants, nanoparticles or other surface-active compounds present at the interface. Unlike in bulk rheology, the deformation of the bulk phase is not of interest in interfacial rheology and its effect is aimed to be minimized. Instead, the flow of the surface-active compounds is of interest.
The interface can be deformed by changing either its size or its shape. Interfacial rheological methods can therefore be divided into two categories: dilational and shear rheology methods.
Interfacial dilational rheology
In dilatational interfacial rheology, the size of the interface changes over time. The change in the surface stress or surface tension of the interface is measured during this deformation. Based on the response, the interfacial viscoelasticity is calculated according to well-established theory:
E = dγ / d(ln A) = |E| (cos δ + i sin δ)
where
|E| is the magnitude of the complex surface dilatational modulus E
γ is the surface tension or interfacial tension of the interface
A is the interfacial area
δ is the phase angle difference between the surface tension and area
E′ = |E| cos δ is the elastic (storage) modulus
E″ = |E| sin δ is the viscous (loss) modulus
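A sketch of how E′ and E″ might be extracted from oscillating-drop data, assuming clean sinusoidal area and surface-tension signals at a known frequency and small oscillation amplitudes (so the relative area change approximates d ln A); the function and its interface are mine:

```python
import numpy as np

def dilatational_modulus(t, area, gamma, freq):
    """Fit area(t) and gamma(t) to sinusoids and return (E', E'')."""
    w = 2 * np.pi * freq
    basis = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    # Least-squares sinusoid fits of the relative area and surface tension.
    ca, sa, _ = np.linalg.lstsq(basis, area / area.mean() - 1, rcond=None)[0]
    cg, sg, _ = np.linalg.lstsq(basis, gamma - gamma.mean(), rcond=None)[0]
    A = complex(ca, -sa)   # phasor of the relative area oscillation (~ d ln A)
    G = complex(cg, -sg)   # phasor of the surface-tension oscillation
    E = G / A              # complex modulus E = d(gamma) / d(ln A)
    return E.real, E.imag

# Synthetic test: gamma leads area by 0.3 rad -> expect E' ~ 38.2, E'' ~ 11.8.
t = np.linspace(0.0, 10.0, 2000)
area = 1.0 + 0.05 * np.cos(2 * np.pi * 0.5 * t)
gamma = 50.0 + 2.0 * np.cos(2 * np.pi * 0.5 * t + 0.3)
print(dilatational_modulus(t, area, gamma, 0.5))
```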
Most commonly, the measurement of dilational interfacial rheology is conducted with an optical tensiometer combined with a pulsating drop module. A pendant droplet with surface-active molecules in it is formed and pulsated sinusoidally. The changes in the interfacial area cause changes in the molecular interactions, which in turn change the surface tension. Typical measurements include performing a frequency sweep for the solution to study the kinetics of the surfactant.
In another measurement method suitable especially for insoluble surfactants, a Langmuir trough is used in an oscillating barrier mode. In this case, two barriers that l |
https://en.wikipedia.org/wiki/Time%20resolved%20crystallography | Time resolved crystallography utilizes X-ray crystallography imaging to visualize reactions in four dimensions (x, y, z and time). This enables the study of dynamical changes that occur in, for example, enzymes during their catalysis. The time dimension is incorporated by triggering the reaction of interest in the crystal prior to X-ray exposure, and then collecting the diffraction patterns at different time delays. In order to study these dynamical properties of macromolecules, three criteria must be met:
The macromolecule must be biologically active in the crystalline state
It must be possible to trigger the reaction in the crystal
The intermediate of interest must be detectable, i.e. it must accumulate to a reasonable concentration in the crystal (preferably over 25%).
This has led to the development of several techniques that can be divided into two groups, the pump-probe method and diffusion-trapping methods.
Pump-probe
In the pump-probe method the reaction is first triggered (pump) by photolysis (most often laser light) and then a diffraction pattern is collected by an X-ray pulse (probe) at a specific time delay. This makes it possible to obtain many images at different time delays after reaction triggering, and thereby building up a chronological series of images describing the events during reaction.
To obtain a reasonable signal to noise ratio this pump-probe cycle has to be performed many times for each spatial rotation of the crystal, and many times for the same time delay. Therefore, the reaction that one wishes to study with pump-probe must be able to relax back to its original conformation after triggering, enabling many measurements on the same sample.
The time resolution of the observed phenomena is dictated by the time width of the probing pulse (full width at half maximum). All processes that happen on a faster time scale than that are going to be averaged out by the convolution of the probe pulse intensity in time with the intensity of th |
https://en.wikipedia.org/wiki/Formulation | Formulation is a term used in various senses in various applications, both the material and the abstract or formal. Its fundamental meaning is the putting together of components in appropriate relationships or structures, according to a formula. Etymologically formula is the diminutive of the Latin forma, meaning shape. In that sense a formulation is created according to the standard for the product.
Abstract applications
Disciplines in which one might use the word formulation in the abstract sense include logic, mathematics, linguistics, legal theory, and computer science. For details, see the related articles.
Material applications
In more material senses the concept of formulation appears in the physical sciences, such as physics, chemistry, and biology. It also is ubiquitous in industry, engineering and medicine, especially pharmaceutics.
Pharmacy
In pharmacy, a formulation is a mixture or a structure such as a capsule, tablet, or an emulsion, prepared according to a specific procedure (called a "formula"). Formulations are a very important aspect of creating medicines, since they are essential to ensuring that the active part of the drug is delivered to the correct part of the body, in the right concentration, and at the right rate (not too fast and not too slowly). A good example is a drug delivery system that exploits supersaturation. They also need to have an acceptable taste (in the case of pills, tablets or syrups), remain safe and effective in storage for long enough, and be sufficiently stable both physically and chemically to be transported from where they are manufactured to the eventual consumer. Competently designed formulations for particular applications are safer, more effective, and more economical than any of their components used singly.
Other examples of product formulations
Formulations are commercially produced for drugs, cosmetics, coatings, dyes, alloys, cleaning agents, foods, lubricants, fuels, fertilisers, pes |
https://en.wikipedia.org/wiki/Computational%20aeroacoustics | Computational aeroacoustics is a branch of aeroacoustics that aims to analyze the generation of noise by turbulent flows through numerical methods.
History
The origin of computational aeroacoustics can most likely be dated to the middle of the 1980s, with a publication by Hardin and Lamkin, who claimed that "[...] the field of computational fluid mechanics has been advancing rapidly in the past few years and now offers the hope that "computational aeroacoustics," where noise is computed directly from a first principles determination of continuous velocity and vorticity fields, might be possible, [...]"
In a later publication in 1986, the same authors introduced the abbreviation CAA. The term was initially used for a low Mach number approach (expansion of the acoustic perturbation field about an incompressible flow), as described under EIF. In the early 1990s, the growing CAA community picked up the term and used it extensively for any kind of numerical method describing the noise radiation from an aeroacoustic source or the propagation of sound waves in an inhomogeneous flow field. Such numerical methods can be far-field integration methods (e.g. FW-H) as well as direct numerical methods optimized for the solution of a mathematical model describing the aerodynamic noise generation and/or propagation. With the rapid development of computational resources this field has undergone spectacular progress during the last three decades.
Methods
Direct numerical simulation (DNS) approach to CAA
The compressible Navier-Stokes equation describes both the flow field, and the aerodynamically generated acoustic field. Thus both may be solved for directly. This requires very high numerical resolution due to the large differences in the length scale present between the acoustic variables and the flow variables. It is computationally very demanding and unsuitable for any commercial use.
Hybrid approach
In this approach the computational domain is |
https://en.wikipedia.org/wiki/Pulmonary%20surfactant | Pulmonary surfactant is a surface-active complex of phospholipids and proteins formed by type II alveolar cells. The proteins and lipids that make up the surfactant have both hydrophilic and hydrophobic regions. By adsorbing to the air-water interface of alveoli, with hydrophilic head groups in the water and the hydrophobic tails facing towards the air, the main lipid component of surfactant, dipalmitoylphosphatidylcholine (DPPC), reduces surface tension.
As a medication, pulmonary surfactant is on the WHO Model List of Essential Medicines, the most important medications needed in a basic health system.
Function
To increase pulmonary compliance.
To prevent atelectasis (collapse of the alveoli or atriums) at the end of expiration.
To facilitate recruitment of collapsed airways.
Alveoli can be compared to gas in water, as the alveoli are wet and surround a central air space. The surface tension acts at the air-water interface and tends to make the bubble smaller (by decreasing the surface area of the interface). The gas pressure (P) needed to keep an equilibrium between the collapsing force of surface tension (γ) and the expanding force of gas in an alveolus of radius r is expressed by the Young–Laplace equation: P = 2γ/r.
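A quick numerical illustration of the Young–Laplace relation above, comparing a water-like interface with a surfactant-coated one; the radius and surface tensions are representative textbook-style values, not measurements:

```python
def laplace_pressure(surface_tension_mn_per_m, radius_um):
    """Pressure (Pa) balancing surface tension in a sphere: P = 2*gamma/r."""
    gamma = surface_tension_mn_per_m * 1e-3   # convert mN/m to N/m
    r = radius_um * 1e-6                      # convert micrometres to metres
    return 2 * gamma / r

print(laplace_pressure(70, 100))   # water-like interface: 1400 Pa
print(laplace_pressure(25, 100))   # with surfactant:       500 Pa
```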
Compliance
Compliance is the ability of lungs and thorax to expand.
Lung compliance is defined as the volume change per unit of pressure change across the lung. Measurements of lung volume obtained during the controlled inflation/deflation of a normal lung show that the volumes obtained during deflation exceed those during inflation, at a given pressure. This difference in inflation and deflation volumes at a given pressure is called hysteresis and is due to the air-water surface tension that occurs at the beginning of inflation. However, surfactant decreases the alveolar surface tension, as seen in cases of premature infants with infant respiratory distress syndrome. The normal surface tension for water is 70 dyn/cm (70 mN/m) and in the lungs, it |
https://en.wikipedia.org/wiki/Orbital-free%20density%20functional%20theory | In computational chemistry, orbital-free density functional theory is a quantum mechanical approach to electronic structure determination which is based on functionals of the electronic density. It is most closely related to the Thomas–Fermi model. Orbital-free density functional theory is, at present, less accurate than Kohn–Sham density functional theory models, but has the advantage of being fast, so that it can be applied to large systems.
Kinetic energy of electrons
The Hohenberg–Kohn theorems guarantee that, for a system of atoms, there exists a functional of the electron density that yields the total energy. Minimization of this functional with respect to the density gives the ground-state density from which all of the system's properties can be obtained. Although the Hohenberg–Kohn theorems tell us that such a functional exists, they do not give us guidance on how to find it. In practice, the density functional is known exactly except for two terms. These are the electronic kinetic energy and the exchange–correlation energy. The lack of the true exchange–correlation functional is a well known problem in DFT, and there exists a huge variety of approaches to approximate this crucial component.
In general, there is no known form for the interacting kinetic energy in terms of electron density. In practice, instead of deriving approximations for interacting kinetic energy, much effort was devoted to deriving approximations for the non-interacting (Kohn–Sham) kinetic energy, which is defined as (in atomic units) T_s = Σᵢ ∫ ψᵢ*(r) (−∇²/2) ψᵢ(r) d³r,
where ψᵢ is the i-th Kohn–Sham orbital. The summation is performed over all the occupied Kohn–Sham orbitals. One of the first attempts to do this (even before the formulation of the Hohenberg–Kohn theorem) was the Thomas–Fermi model, which wrote the kinetic energy as T_TF[ρ] = C_F ∫ ρ(r)^(5/3) d³r, with C_F = (3/10)(3π²)^(2/3).
This expression is based on the homogeneous electron gas and, thus, is not very accurate for most physical systems. Finding more accurate and transferable kinetic-energy density functiona |
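A small numerical check of the Thomas–Fermi functional reconstructed above, applied to the hydrogen 1s density ρ(r) = e^(−2r)/π in atomic units; the exact kinetic energy is 0.5 hartree, so this also illustrates why the homogeneous-gas form is considered inaccurate:

```python
import numpy as np
from scipy.integrate import quad

C_F = 0.3 * (3 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant

def rho(r):
    """Hydrogen 1s ground-state density in atomic units."""
    return np.exp(-2.0 * r) / np.pi

# Radial integration of C_F * rho^(5/3) over all space.
t_tf, _ = quad(lambda r: 4 * np.pi * r**2 * C_F * rho(r) ** (5.0 / 3.0), 0, 50)
print(f"T_TF = {t_tf:.4f} hartree (exact kinetic energy: 0.5)")   # ~0.289
```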
https://en.wikipedia.org/wiki/ILLIAC%20I | The ILLIAC I (Illinois Automatic Computer), a pioneering computer in the ILLIAC series of computers built in 1952 by the University of Illinois, was the first computer built and owned entirely by a United States educational institution.
Computer
The project was the brainchild of Ralph Meagher and Abraham H. Taub, who both were associated with Princeton's Institute for Advanced Study before coming to the University of Illinois. The ILLIAC I became operational on September 1, 1952. It was the second of two identical computers, the first of which was ORDVAC, also built at the University of Illinois. These two machines were the first pair of machines to run the same instruction set.
ILLIAC I was based on the IAS machine Von Neumann architecture as described by mathematician John von Neumann in his influential First Draft of a Report on the EDVAC. Unlike most computers of its era, the ILLIAC I and ORDVAC computers were twin copies of the same design, with software compatibility. The computer had 2,800 vacuum tubes, measured 10 ft (3 m) by 2 ft (0.6 m) by 8½ ft (2.6 m) (L×B×H), and weighed . ILLIAC I was very powerful for its time; in 1956, it had more computing power than all of Bell Telephone Laboratories.
Because the lifetime of the tubes within ILLIAC was about a year, the machine was shut down every day for "preventive maintenance" when older vacuum tubes would be replaced in order to increase reliability. Visiting scholars from Japan assisted in the design of the ILLIAC series of computers, and later developed the MUSASINO-1 computer in Japan. ILLIAC I was retired in 1962, when the ILLIAC II became operational.
Innovations
1955 – Lejaren Hiller and Leonard Isaacson used ILLIAC I to compose the Illiac Suite which was one of the first pieces of music to be written with the aid of a computer.
1957 – Mathematician Donald B. Gillies, physicist James E. Snyder, and astronomers George C. McVittie, S. P. Wyatt, Ivan R. King and George W. Swenson of the University of |
https://en.wikipedia.org/wiki/Loop%20representation%20in%20gauge%20theories%20and%20quantum%20gravity | Attempts have been made to describe gauge theories in terms of extended objects such as Wilson loops and holonomies. The loop representation is a quantum hamiltonian representation of gauge theories in terms of loops. The aim of the loop representation in the context of Yang–Mills theories is to avoid the redundancy introduced by Gauss gauge symmetries allowing to work directly in the space of physical states (Gauss gauge invariant states). The idea is well known in the context of lattice Yang–Mills theory (see lattice gauge theory). Attempts to explore the continuous loop representation was made by Gambini and Trias for canonical Yang–Mills theory, however there were difficulties as they represented singular objects. As we shall see the loop formalism goes far beyond a simple gauge invariant description, in fact it is the natural geometrical framework to treat gauge theories and quantum gravity in terms of their fundamental physical excitations.
The introduction by Ashtekar of a new set of variables (Ashtekar variables) cast general relativity in the same language as gauge theories and allowed one to apply loop techniques as a natural nonperturbative description of Einstein's theory. In canonical quantum gravity the difficulties in using the continuous loop representation are cured by the spatial diffeomorphism invariance of general relativity. The loop representation also provides a natural solution of the spatial diffeomorphism constraint, making a connection between canonical quantum gravity and knot theory. Surprisingly, there was a class of loop states that provided exact (if only formal) solutions to Ashtekar's original (ill-defined) Wheeler–DeWitt equation. Hence an infinite set of exact (if only formal) solutions had been identified for all the equations of canonical quantum general relativity in this representation. This generated a lot of interest in the approach and eventually led to loop quantum gravity (LQG).
The loop representation has found applicatio |
https://en.wikipedia.org/wiki/Indexed%20family | In mathematics, a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers, indexed by the set of integers, is a collection of real numbers, where a given function selects one real number for each integer (possibly the same) as indexing.
More formally, an indexed family is a mathematical function together with its domain and image (that is, indexed families and mathematical functions are technically identical; only the points of view are different). Often the elements of the set are referred to as making up the family. In this view, indexed families are interpreted as collections of indexed elements instead of functions. The domain is called the index set of the family, and the image is the indexed set.
Sequences are one type of family, indexed by the natural numbers. In general, the index set is not restricted to be countable. For example, one could consider an uncountable family of subsets of the natural numbers indexed by the real numbers.
Formal definition
Let I and X be sets and f a function such that
f : I → X, i ↦ x_i = f(i),
where i is an element of I and the image f(i) of i under the function f is denoted by x_i. For example, f(3) is denoted by x_3. The symbol x_i is used to indicate that x_i is the element of X indexed by i. The function f thus establishes a family of elements in X indexed by I, which is denoted by (x_i)_{i ∈ I}, or simply (x_i) if the index set is assumed to be known. Sometimes angle brackets or braces are used instead of parentheses, although the use of braces risks confusing indexed families with sets.
Functions and indexed families are formally equivalent, since any function with a domain induces a family and conversely. Being an element of a family is equivalent to being in the range of the corresponding function. In practice, however, a family is viewed as a collection, rather than a function.
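For readers who think in code, the function-with-domain view of a family maps neatly onto a dictionary; this analogy is mine, not the article's:

```python
# A family indexed by I is a function with domain I; a dict makes the
# "function together with its domain" view explicit.
I = {-2, -1, 0, 1, 2}                  # index set
family = {i: i**2 for i in I}          # the family (i^2) indexed by I

# Distinct indices may point to equal elements, which is why a family
# carries more information than the set of its values:
print(family[-2], family[2])           # 4 4
print(set(family.values()))            # {0, 1, 4}
```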
Any set A gives rise to a family (a)_{a ∈ A}, where A is indexed by itself (meaning that f is the identity function). However, families differ f
https://en.wikipedia.org/wiki/Beta-propeller%20phytase | β-propeller phytases (BPPs) are a group of enzymes (i.e. a protein superfamily) with a round beta-propeller structure. BPPs are phytases, which means that they are able to remove (hydrolyze) phosphate groups from phytic acid and its phytate salts. Hydrolysis happens stepwise and usually ends in a myo-inositol triphosphate product, which has three phosphate groups still bound to it. The actual substrate of BPPs is calcium phytate, and in order to hydrolyze it, BPPs must have Ca2+ ions bound to themselves. BPPs are the most widely found phytase superfamily in the environment and they are thought to have a major role in phytate-phosphorus cycling in soil and water. As their alternative name, alkaline phytase, suggests, BPPs work best in basic (or neutral) environments. Their pH optimum is 6–9, which is unique among the phytases.
Potential uses
As of April 2018, BPPs are not used commercially, but they may have potential for such use. Histidine acid phytases (HAPs) are the only group of phytases which are used in animal feed at the moment.
Animal feed
Recombinant phytases are commonly added in agriculture to the feed of monogastric animals to enhance the feed's nutrient bioavailability. These nutrients include phosphorus, which is bound to phytates in the form of their phosphate groups. In contrast to ruminants like cattle, monogastric animals like pigs and chickens have gut bacteria that cannot properly hydrolyze these groups free for the animal's digestive system to use the phosphorus. Unabsorbed phosphorus is thus wasted and may end up in the environment in animal manure via agricultural runoff and cause eutrophication. Phytic acid can also act as an antinutrient: it can chelate calcium from feed and decrease its bioavailability by up to 60–70% of the feed's total calcium content. Phytase addition improves calcium availability and can also improve the bioavailability of iron and zinc. It might also increase the availability of copper and manganese. Amino acid bioavaila
https://en.wikipedia.org/wiki/Roy%27s%20identity | Roy's identity (named after French economist René Roy) is a major result in microeconomics having applications in consumer choice and the theory of the firm. The lemma relates the ordinary (Marshallian) demand function to the derivatives of the indirect utility function. Specifically, denoting the indirect utility function as v(p, w), the Marshallian demand function for good i can be calculated as x_i^m(p, w) = −(∂v/∂p_i)/(∂v/∂w),
where p is the price vector of goods and w is income, and where the superscript m indicates Marshallian demand. The result holds for continuous utility functions representing locally non-satiated and strictly convex preference relations on a convex consumption set, under the additional requirement that the indirect utility function is differentiable in all arguments.
Roy's identity is akin to the result that the price derivatives of the expenditure function give the Hicksian demand functions. The additional step of dividing by the wealth derivative of the indirect utility function in Roy's identity is necessary since the indirect utility function, unlike the expenditure function, has an ordinal interpretation: any strictly increasing transformation of the original utility function represents the same preferences.
Derivation of Roy's identity
Roy's identity reformulates Shephard's lemma in order to get a Marshallian demand function for an individual and a good (i) from some indirect utility function.
The first step is to consider the trivial identity obtained by substituting the expenditure function e(p, u) for wealth or income w in the indirect utility function v(p, w), at a utility of u: v(p, e(p, u)) = u
This says that the indirect utility function evaluated in such a way that minimizes the cost for achieving a certain utility given a set of prices (a vector p) is equal to that utility when evaluated at those prices.
Taking the derivative of both sides of this equation with respect to the price of a single good p_i (with the utility level held constant) gives:
∂v/∂p_i + (∂v/∂w)(∂e/∂p_i) = 0.
By Shephard's lemma, ∂e/∂p_i is the demand for good i at income w = e(p, u). Rearranging gives the desired result: x_i^m(p, w) = −(∂v/∂p_i)/(∂v/∂w),
with the s |
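A symbolic spot-check of Roy's identity for Cobb-Douglas preferences, where the known Marshallian demand for good 1 is a·w/p1; the utility specification is a standard textbook example, not taken from this article:

```python
import sympy as sp

p1, p2, w, a = sp.symbols('p1 p2 w a', positive=True)

# Indirect utility for u(x1, x2) = x1**a * x2**(1 - a).
v = (a * w / p1) ** a * ((1 - a) * w / p2) ** (1 - a)

# Roy's identity: x1 = -(dv/dp1) / (dv/dw).
x1 = -sp.diff(v, p1) / sp.diff(v, w)
print(sp.simplify(x1))   # expected: a*w/p1
```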
https://en.wikipedia.org/wiki/Behavior%20imaging | Behavior imaging is a technique used in behavioral health to diagnose, treat and monitor behavioral disorders, most commonly autism. It involves capturing short video clips of problem behaviors in natural environments on smartphones or other devices. |
https://en.wikipedia.org/wiki/Quantum%20singularity | The term quantum singularity is used to refer to many different phenomena in fiction. They often only approximate a gravitational singularity in the scientific sense in that they are massive, localized distortions of space and time. The name invokes one of the most fundamental problems remaining in modern physics: the difficulty in uniting Einstein's theory of relativity, which includes singularities within its models of black holes, and quantum mechanics. In fact, since according to relativity, singularities, by definition, are infinitely small, and expected to be quantum mechanical by nature, a theory of quantum gravity would be required to describe their behavior. No such theory has yet been formulated.
Star Trek
A Star Trek quantum singularity is a phenomenon of multiple varieties. One such variety appears in the Star Trek: Voyager episode "Parallax", which creates a mirror image along with a temporal distortion. Voyager flies into the singularity after seeing an image of itself inside, and becomes trapped. To escape, the crew uses a shuttle to fire a tachyon beam at the entry. In the Voyager episode "Hunters", the crew discover a Hirogen relay station almost 100,000 years old, powered by a quantum singularity, also referred to by Tom Paris as a black hole. The word "tiny" is used to describe a quantum singularity about "a centimeter" in diameter, which would make it relatively large; it is more likely that the stated diameter refers to the singularity's event horizon. In the episode "Scorpion", Species 8472 and the Borg make use of quantum singularities to travel to and from fluidic space.
Artificial quantum singularities are also used to power Romulan Warbirds as first described in the Star Trek: The Next Generation episode "Face of the Enemy". Additionally, in the Star Trek: Deep Space Nine episode "Visionary", the side effects from quantum singularities cause Miles O'Brien to shift through time.
Futurama
In the Futurama episode "Love and Rocket |