| source | text |
|---|---|
https://en.wikipedia.org/wiki/Paul%20Tseng | Paul Tseng was a Chinese-American (Hakka Taiwanese) and Canadian applied mathematician and a professor at the Department of Mathematics at the University of Washington, in Seattle, Washington. Tseng was recognized by his peers to be one of the leading optimization researchers of his generation. On August 13, 2009, Paul Tseng went missing while kayaking in the Jinsha River in the Yunnan province of China and is presumed dead.
Biography
Tseng was born September 21, 1959, in Hsinchu, Taiwan. In December 1970, Tseng's family moved to Vancouver, British Columbia. Tseng received his B.Sc. from Queen's University in 1981 and his Ph.D. from the Massachusetts Institute of Technology in 1986. In 1990 Tseng moved to the University of Washington's Department of Mathematics. Tseng conducted research primarily in continuous optimization and secondarily in discrete optimization and distributed computation.
Research
Tseng made many contributions to mathematical optimization, publishing many articles and helping to develop quality software that has been widely used.
He published over 120 papers in optimization and had close collaborations with several colleagues, including Dimitri Bertsekas and Zhi-Quan Tom Luo.
Tseng's research subjects include:
Efficient algorithms for structured convex programs and network flow problems,
Complexity analysis of interior point methods for linear programming,
Parallel and distributed computing,
Error bounds and convergence analysis of iterative algorithms for optimization problems and variational inequalities,
Interior point methods and semidefinite relaxations for hard quadratic and matrix optimization problems, and
Applications of large scale optimization techniques in signal processing and machine learning.
In his research, Tseng gave a new proof for the sharpest complexity result for path-following interior-point methods for linear programming. Furthermore, together with Tom Luo, he resolved a long-standing open question on the co |
https://en.wikipedia.org/wiki/OpenFog%20Consortium | The OpenFog Consortium (sometimes stylized as Open Fog Consortium) was a consortium of high tech industry companies and academic institutions across the world aimed at the standardization and promotion of fog computing in various capacities and fields.
The consortium was founded by Cisco Systems, Intel, Microsoft, Princeton University, Dell, and ARM Holdings in 2015 and grew to 57 members across North America, Asia, and Europe, including Forbes 500 companies and noteworthy academic institutions.
The OpenFog consortium merged with the Industrial Internet Consortium, now the Industry IoT Consortium, on January 31, 2019.
History
OpenFog was created on November 19, 2015, by ARM Holdings, Cisco Systems, Dell, Intel, Microsoft, and Princeton University.
The idea for a consortium centered on the advancement and dissemination of fog computing was conceived by Helder Antunes, a Cisco executive with a history in IoT; Mung Chiang, then a Princeton University professor and now President of Purdue University; and Dr. Tao Zhang, a Cisco Distinguished Engineer and then CIO for the IEEE Communications Society, now a manager at the National Institute of Standards and Technology (NIST). The project was executed from concept to launch by Armando Pereira at PVentures Group, a Silicon Valley-based high-tech consulting firm.
OpenFog released its reference architecture for fog computing on 13 February 2017.
The Fog World Congress 2017, with Dr. Tao Zhang as its General Chair, was hosted in October 2017 by OpenFog, in conjunction with the IEEE Communications Society, as the first congress devoted to fog computing.
Administration
The OpenFog Consortium was governed by its board of directors, which was chaired by Cisco Senior Director Helder Antunes. The board of directors was made up of 11 seats, each representing one of the following companies and institutions: ARM, AT&T, Cisco, Dell, Intel, Microsoft, Princeton University, IEEE, GE, ZTE and ShanghaiTech University.
The c |
https://en.wikipedia.org/wiki/Delpher | Delpher is a website providing full-text Dutch-language digitized historical newspapers, books, journals and copy sheets for radio news broadcasts. The material is provided by libraries, museums and other heritage institutions and is developed and managed by the Royal Library of the Netherlands. Delpher is freely available and, as of June 2022, includes in total over 130 million pages from about 2 million newspaper issues, 900,000 books and 12 million journal pages, dating back to the 15th century.
Collections
Books: 900,000 books, from the 17th century onwards
Journals: 12 million journal articles from 1800-2000
Newspapers: about 17 million pages from more than 2 million issues from the Netherlands, Dutch East Indies, Netherlands Antilles and Surinam, from 1618 to 2005. This represents about 15% of the total published newspaper output in the Netherlands in this period.
Typescripts for radio broadcasts by the Algemeen Nederlands Persbureau (ANP): 1,474,359 typed sheets from 1937 to 1984.
Notes
External links
Geschiedenis24 tests Delpher; New search system of great value for research, Jurryt van de Vooren on NPO Geschiedenis, 20-11-2013
Van beta naar beter, M. Napolitano en M. Laan, Informatieprofessional 2015-01 (offline)
|
https://en.wikipedia.org/wiki/No%20Time%20Like%20the%20Past | "No Time Like the Past" is episode 112 of the American television anthology series The Twilight Zone. In this episode a man tries to escape the troubles of the 20th century by taking up residence in an idyllic small town in the 19th century.
Plot
Disgusted with 20th century problems such as world wars, atomic weapons and radioactive poisoning, Paul Driscoll solicits the help of his colleague Harvey and uses a time machine, intent on remaking the present by altering past events.
Paul first travels to Hiroshima on August 6, 1945, and attempts to warn a Hiroshima police captain about the atomic bomb, but the captain dismisses him as insane. Paul then travels to a Berlin hotel room to assassinate Adolf Hitler in August 1939 (immediately before the outbreak of World War II the following month), but is interrupted when a housekeeper knocks on his door and later calls two SS guards to his room. On his third stop, Paul tries to change the course of the Lusitania on May 6, 1915, to avoid being torpedoed by a German U-boat, but the ship's captain questions his credibility.
Paul accepts the hypothesis that the past cannot be changed. He then uses the time machine to go to the town of Homeville, Indiana in 1881, resolving not to make any changes, but just to live out his life free of the problems of the modern age. Upon his arrival, he realizes that President James A. Garfield will be shot the next day, but resists the temptation to intervene. He stays at a boarding house in town and meets Abigail Sloan, a teacher. At one of the boarding house's dinners, a boarder named Hanford vehemently espouses American imperialism. Paul delivers an angry rebuttal in which he accuses Hanford of speaking from ignorance of war and a certainty that he himself will not have to take part in any fighting, while dropping numerous allusions to wars that have yet to take place. Abigail is impressed and privately tells him that she shares his views, having lost her father and two b |
https://en.wikipedia.org/wiki/Germline%20mutation | A germline mutation, or germinal mutation, is any detectable variation within germ cells (cells that, when fully developed, become sperm and ova). Mutations in these cells are the only mutations that can be passed on to offspring, when a mutated sperm or oocyte comes together with its counterpart to form a zygote. After this fertilization event occurs, the zygote's cells divide rapidly to produce all of the cells in the body, causing the mutation to be present in every somatic and germline cell in the offspring; this is also known as a constitutional mutation. Germline mutation is distinct from somatic mutation.
Germline mutations can be caused by a variety of endogenous (internal) and exogenous (external) factors, and can occur throughout zygote development. A mutation that arises only in germ cells can result in offspring with a genetic condition that is not present in either parent; this is because the mutation is not present in the rest of the parents' body, only the germline.
When mutagenesis occurs
Germline mutations can occur before fertilization and during various stages of zygote development. When the mutation arises will determine the effect it has on offspring. If the mutation arises in either the sperm or the oocyte before development, then the mutation will be present in every cell in the individual's body. If a mutation arises soon after fertilization, but before germline and somatic cells are determined, then the mutation will be present in a large proportion of the individual's cells with no bias towards germline or somatic cells; this is also called a gonosomal mutation. A mutation that arises later in zygote development will be present in a small subset of either somatic or germline cells, but not both.
Causes
Endogenous factors
A germline mutation often arises due to endogenous factors, like errors in cellular replication and oxidative damage. Imperfect repair of this damage is rare, but because germ cells divide at a high rate, even rare imperfect repairs can occur frequently.
Endog |
https://en.wikipedia.org/wiki/Mp3DirectCut | mp3DirectCut is a lossless editor for MP3 (and to a degree, MP2 and AAC) audio files, able to provide cuts and crops, copy and paste, gain and fades to audio files without having to decode or re-encode the audio. By modifying the global gain field of each frame of MPEG audio, the volume of that frame can be modified without altering the audio data itself. This allows for rapid, lossless MP3 audio editing that does not degrade the data from re-encoding.
mp3DirectCut provides audio normalization and pause (silence) detection, and can split long recordings into separate files based on cue points in the audio, such as those provided by pause detection. mp3DirectCut can also record audio directly to MP3 from the computer's sound card input.
All audio operations are performed by manipulating MPEG frames; as such, mp3DirectCut is not a waveform editor. Audio clean-up such as click, hiss and noise removal is not possible.
Features
An MP3 file can be edited without transcoding.
Cut, copy, paste, and volume change operations are provided; edits can be previewed, including a command that plays a segment without a selected region (previewing a cut)
Audio normalization and pause detection
MP3 recording with ACM or LAME encoder (not bundled)
Fast MP3 visualization
Supports Layer 2 (DVD/DVB audio)
Includes a tag editor for ID3v1 tags
Cue sheet support with auto cue (track division by time values)
Track splitting with filename and ID3v1.1 tag creation
VU meter, bitrate visualization
Supported on all versions of Microsoft Windows from 95 through 7; certain modes can be invoked from the command line
Unobtrusive: The installation file (as of October 2020) is 303 Kbytes; the installed program occupies a single directory (with one subdirectory for non-English help files, which can be deleted if not needed); the program does not modify the Windows Registry; a user's manual and a page of Frequently Asked Questions are included in the installation
The edited file can be used |
https://en.wikipedia.org/wiki/Shankar%20Nath%20Rimal | Shankar Nath Rimal (born 5 March 1935) is a Nepalese civil engineer and architect. He is best known for standardising the modern Nepalese flag. He has also designed many prominent buildings and monuments in Nepal. The Shahid Gate in Sundhara, the Ramananda Gate in Janakpur, the Bhaleshwar temple on Chandragiri hill and the Soaltee Hotel are some of the prominent structures he has designed.
Early life and education
He was born on 5 March 1935 (22 Falgun 1991 BS) in Tangal, Kathmandu to father Devendra Nath Rimal and mother Sita Devi Rimal. He received his primary education from Nandiratri School, Naxal and completed his SLC-level education from Durbar High School in 1950. He enrolled in Bengal Engineering College to study electrical engineering under Colombo plan but later shifted to civil engineering. He graduated in 1957.
Career
He standardised the flag of Nepal in 1962 at the request of King Mahendra. He calculated the mathematical specifications required to draw the flag, which were included in the then constitution. He designed multiple historical sites and monuments such as the Shahid Gate, the Soaltee Hotel, the building of the Nepal Academy, the Nepal Art Council and many other government buildings. He was involved in the design of Narayanhiti Palace, which was designed by Benjamin Polk. He has designed various temples such as Bhaleshwor Mahadev, a Vishnu temple in Singapore and the Unmata Bhairav temple inside the Pashupati temple complex.
He served as the president of the Nepal Engineers' Association four times, in the 4th, 17th, 18th and 19th executive councils.
He was awarded with National Araniko Award, 2077 by Nepal Academy of Fine Arts in 2020 for his contribution to fine art.
Personal life
He is married to Shashi Rimal. They have three children. |
https://en.wikipedia.org/wiki/The%20Improvised%20Field%20Hospital | The Improvised Field Hospital (French: L'ambulance improvisée) or Monet after His Accident at the Inn of Chailly is an oil-on-canvas painting created in 1865 by the French painter Frédéric Bazille. It shows Claude Monet in bed recovering from a leg injury he had sustained in summer 1865 in Chailly-en-Bière, a small village just on the outskirts of the forest of Fontainebleau. The work has been in the Musée d'Orsay in Paris since 1986.
The Musée d'Orsay notes of the painting, "Bazille, whose work falls between Courbet's Realism and a nascent Impressionism, renders the event in every detail. On the untidy bed one can clearly see the red, inflamed wound on Monet's shin, while his face expresses his despondency at being immobilised in this way. The intimacy of the scene demonstrates the bonds of friendship between the two men."
See also
List of paintings by Frédéric Bazille
A Studio at Les Batignolles, 1870 painting by Henri Fantin-Latour
Claude Monet Painting in His Garden at Argenteuil, 1873 painting by Pierre-Auguste Renoir
Claude Monet Painting in his Studio, 1874 painting by Édouard Manet
Portrait of the painter Claude Monet, 1875 painting by Renoir |
https://en.wikipedia.org/wiki/Mordell%20curve | In algebra, a Mordell curve is an elliptic curve of the form y^2 = x^3 + n, where n is a fixed non-zero integer.
These curves were closely studied by Louis Mordell, from the point of view of determining their integer points. He showed that every Mordell curve contains only finitely many integer points (x, y). In other words, the difference between a perfect square and a perfect cube tends to infinity. The question of how fast was dealt with in principle by Baker's method. Hypothetically this issue is dealt with by Marshall Hall's conjecture.
Properties
If (x, y) is an integer point on a Mordell curve, then so is (x, -y).
There are certain values of n for which the corresponding Mordell curve has no integer solutions. For n > 0, these values are:
6, 7, 11, 13, 14, 20, 21, 23, 29, 32, 34, 39, 42, ... .
For n < 0, they are:
−3, −5, −6, −9, −10, −12, −14, −16, −17, −21, −22, ... .
The specific case where n = −2 is also known as Fermat's Sandwich Theorem.
List of solutions
The following is a list of solutions to the Mordell curve y^2 = x^3 + n for |n| ≤ 25. Only solutions with y ≥ 0 are shown.
In 1998, J. Gebel, A. Pethő, and H. G. Zimmer found all integer points for 0 < |n| ≤ 10^4.
In 2015, M. A. Bennett and A. Ghadermarzi computed integer points for 0 < |n| ≤ 10^7. |
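For small bounds, such integer points can be found by direct search. The following Python sketch is a naive brute force (not the elliptic-curve machinery used by Gebel et al. or Bennett and Ghadermarzi): it simply checks whether x^3 + n is a perfect square. The search bound is an arbitrary illustrative choice.

```python
# Brute-force search for integer points on the Mordell curve y^2 = x^3 + n.
# Illustrative only: real tabulations use elliptic-curve methods, not search.
from math import isqrt

def mordell_points(n, x_bound=1000):
    """Return integer points (x, y) with y >= 0 and |x| <= x_bound."""
    points = []
    for x in range(-x_bound, x_bound + 1):
        rhs = x**3 + n
        if rhs < 0:
            continue  # y^2 cannot be negative
        y = isqrt(rhs)
        if y * y == rhs:
            points.append((x, y))
    return points

# n = -2: Fermat's Sandwich Theorem -- (3, 5) is the only solution with y >= 0
print(mordell_points(-2))   # [(3, 5)]
# n = 6 is in the list of values with no integer solutions
print(mordell_points(6))    # []
```

Note that a bounded search can only confirm absence of solutions up to the bound; proving there are none at all requires the deeper methods cited above.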
https://en.wikipedia.org/wiki/Novell%20Vibe | Novell Vibe is a web-based team collaboration platform developed by Novell. It was initially released by Novell in June 2008 under the name of Novell Teaming. Novell Vibe is a collaboration platform that can serve as a knowledge repository, document management system, project collaboration hub, process automation machine, corporate intranet or extranet. Users can upload, manage, comment on, and edit content securely.
Document management functionality allows for document versions, approvals, and document life cycle tracking. Users can download and modify pre-built custom web pages and workflows free of charge from the Vibe Resource Library.
As of 2022, Novell Vibe is known as Micro Focus Vibe, following Micro Focus's acquisition of Novell.
History
Novell Vibe was created from two Novell products, Teaming and Pulse, which were merged into a single platform in 2010.
Created in 2009, Novell Pulse was a communication tool based on the Google Wave Federation Protocol.
Compatibility
Novell Vibe is compatible across numerous servers, operating systems, internet browsers, mobile devices and Microsoft Office programs.
Interoperability
Vibe can be used in conjunction with various other software products, such as Novell Access Manager, Novell GroupWise, Skype, and YouTube. Novell Vibe integrates with an LDAP directory for authentication.
Extendibility
Vibe administrators can extend the Vibe software by creating software extensions, remote applications, or JAR files that enhance the power and usefulness of the Vibe software.
Software extensions enable third-party developers to create capabilities that extend an application. Vibe administrators or Vibe developers can create custom extensions (add-ons) to enhance Vibe. |
https://en.wikipedia.org/wiki/Inulin | Inulins are a group of naturally occurring polysaccharides produced by many types of plants, industrially most often extracted from chicory. The inulins belong to a class of dietary fibers known as fructans. Inulin is used by some plants as a means of storing energy and is typically found in roots or rhizomes. Most plants that synthesize and store inulin do not store other forms of carbohydrate such as starch. In the United States in 2018, the Food and Drug Administration approved inulin as a dietary fiber ingredient used to improve the nutritional value of manufactured food products. Using inulin to measure kidney function is the "gold standard" for comparison with other means of estimating glomerular filtration rate.
Origin and history
Inulin is a natural storage carbohydrate present in more than 36,000 species of plants, including agave, wheat, onion, bananas, garlic, asparagus, Jerusalem artichoke, and chicory. For these plants, inulin is used as an energy reserve and for regulating cold resistance. Because it is soluble in water, it is osmotically active. Certain plants can change the osmotic potential of their cells by changing the degree of polymerization of inulin molecules by hydrolysis. By changing osmotic potential without changing the total amount of carbohydrate, plants can withstand cold and drought during winter periods.
Inulin was discovered in 1804 by German scientist Valentin Rose. He found "a peculiar substance" from Inula helenium roots by boiling-water extraction. In the 1920s, J. Irvine used chemical methods such as methylation to study the molecular structure of inulin, and he designed the isolation method for this new anhydrofructose. During studies of renal tubules in the 1930s, researchers searched for a substance that could serve as a biomarker that is not reabsorbed or secreted after introduction into tubules. A. N. Richards introduced inulin because of its high molecular weight and its resistance to enzymes. Inulin is used to determin |
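The "gold standard" GFR measurement mentioned above rests on the classic renal clearance formula: because inulin is freely filtered and neither reabsorbed nor secreted, its clearance C = U × V / P approximates the glomerular filtration rate. A minimal Python sketch; the concentrations and flow rate below are made-up illustrative values, not reference data.

```python
# Renal clearance calculation underlying inulin-based GFR measurement:
# clearance = (urine concentration * urine flow rate) / plasma concentration.
def clearance_ml_per_min(u_mg_per_ml, v_ml_per_min, p_mg_per_ml):
    """Classic clearance formula C = U * V / P, in mL/min."""
    return u_mg_per_ml * v_ml_per_min / p_mg_per_ml

# e.g. urine inulin 30 mg/mL, urine flow 1 mL/min, plasma inulin 0.25 mg/mL
print(clearance_ml_per_min(30, 1, 0.25))   # 120.0 mL/min
```

For inulin, this clearance value is read directly as the GFR estimate, which is why it serves as the comparison standard for other estimation methods.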
https://en.wikipedia.org/wiki/Aotine%20betaherpesvirus%201 | Aotine betaherpesvirus 1 (AoHV-1) is a species of virus in the genus Cytomegalovirus, subfamily Betaherpesvirinae, family Herpesviridae, and order Herpesvirales.
Night monkeys (Aotus spp.) serve as natural hosts. |
https://en.wikipedia.org/wiki/Gebhart%20factor | The Gebhart factors are used in radiative heat transfer; they describe, for a given surface, the fraction of its total emitted radiation that is absorbed by each other surface. As such, they serve as radiation exchange factors between a number of surfaces. The Gebhart factors calculation method is supported in several radiation heat transfer tools, such as TMG and TRNSYS.
The method was introduced by Benjamin Gebhart in 1957. Although it requires the view factors to be calculated beforehand, it requires less computational power compared to ray tracing with the Monte Carlo Method (MCM). Alternative methods are to look at the radiosity, which Hottel and others built upon.
Equations
The Gebhart factor can be given as:

B_ij = (energy absorbed at surface j originating as emission at surface i) / (total radiation emitted from surface i).
The Gebhart factor approach assumes that the surfaces are gray and that they emit and are illuminated diffusely and uniformly.
This can be rewritten as:

B_ij = Q_ij / (ε_i A_i σ T_i^4)

where
B_ij is the Gebhart factor
Q_ij is the heat transfer from surface i to j
ε_i is the emissivity of the surface
A_i is the surface area
T_i is the temperature
σ is the Stefan–Boltzmann constant
The denominator can also be recognized from the Stefan–Boltzmann law.
The factor can then be used to calculate the net energy transferred from one surface to all others, for an opaque surface given as:

Q_i = Σ_j (ε_j A_j σ T_j^4 B_ji) − ε_i A_i σ T_i^4

where
Q_i is the net heat transfer for surface i
Looking at the geometric relation, it can be seen that:

ε_i A_i B_ij = ε_j A_j B_ji

This can be used to write the net energy transfer from one surface to another, here for 1 to 2:

Q_12 = ε_1 A_1 B_12 σ (T_1^4 − T_2^4)
Realizing that this can be used to find the heat transferred (Q), which was used in the definition, and using the view factors as an auxiliary equation, it can be shown that the Gebhart factors are:

B_ij = F_ij ε_j + Σ_k (1 − ε_k) F_ik B_kj

where
F_ij is the view factor for surface i to j
And also, from the definition we see that the sum of the Gebhart factors over all receiving surfaces must be equal to 1:

Σ_j B_ij = 1
Several approaches exist to describe this as a system of linear equations that can be solved by Gaussian elimination or similar methods. For simpler cases it can also be formulated as a sin |
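The relation B_ij = ε_j F_ij + Σ_k (1 − ε_k) F_ik B_kj can indeed be solved as a linear system. The following Python sketch does this with NumPy for a two-surface enclosure of infinite parallel plates; the view factors and emissivity values are made-up illustrative numbers, not from any reference case.

```python
# Solve for Gebhart factors from view factors F and emissivities eps, using
# B_ij = eps_j * F_ij + sum_k (1 - eps_k) * F_ik * B_kj as a linear system.
import numpy as np

def gebhart_factors(F, eps):
    """F: (N,N) view-factor matrix; eps: (N,) emissivities. Returns B (N,N)."""
    N = len(eps)
    # Rearranged per column j: (I - F @ diag(1 - eps)) B[:, j] = eps_j * F[:, j]
    A = np.eye(N) - F * (1.0 - eps)[np.newaxis, :]   # A_ik = d_ik - (1-eps_k) F_ik
    return np.linalg.solve(A, F * eps[np.newaxis, :])

# Two infinite parallel plates: each surface sees only the other (F_12 = F_21 = 1)
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eps = np.array([0.8, 0.5])
B = gebhart_factors(F, eps)
print(B)
print(B.sum(axis=1))   # each row sums to 1, as required by the definition
```

For this geometry the result can be checked against the closed form B_12 = ε_2 / (1 − (1 − ε_1)(1 − ε_2)), which the solver reproduces.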
https://en.wikipedia.org/wiki/Federal%20Meat%20Inspection%20Act | The Federal Meat Inspection Act of 1906 (FMIA) is an American law that makes it illegal to adulterate or misbrand meat and meat products being sold as food, and ensures that meat and meat products are slaughtered and processed under strictly regulated sanitary conditions. These requirements also apply to imported meat products, which must be inspected under equivalent foreign standards. United States Department of Agriculture (USDA) inspection of poultry was added by the Poultry Products Inspection Act of 1957 (PPIA). The Food, Drug, and Cosmetic Act authorizes the Food and Drug Administration (FDA) to provide inspection services for all livestock and poultry species not listed in the FMIA or PPIA, including venison and buffalo. The Agricultural Marketing Act authorizes the USDA to offer voluntary, fee-for-service inspection services for these same species.
Historical motivation for enactment
The original 1906 Act authorized the Secretary of Agriculture to inspect and condemn any meat product found unfit for human consumption. Unlike previous laws ordering meat inspections, which were enacted to keep European nations from banning American pork, this law was strongly motivated by protecting the American diet. All labels on any type of food had to be accurate (although not all ingredients were provided on the label). Even though all harmful food was banned, many warnings were still provided on the container. The production date for canned meats was a requirement in the legislation that Senator Albert Beveridge introduced, but it was later removed in the House bill that was passed and became law. The law was partly a response to the publication of Upton Sinclair's The Jungle, an exposé of the Chicago meat packing industry, as well as to other Progressive Era muckraking publications of the day. While Sinclair's dramatized account was intended to bring attention to the terrible working conditions in Chicago, the public was more horrified by the prospect of bad meat.
|
https://en.wikipedia.org/wiki/Popoviciu%27s%20inequality | In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by Tiberiu Popoviciu, a Romanian mathematician.
Formulation
Let f be a function from an interval I ⊆ ℝ to ℝ. If f is convex, then for any three points x, y, z in I,

(f(x) + f(y) + f(z))/3 + f((x + y + z)/3) ≥ (2/3) [f((x + y)/2) + f((y + z)/2) + f((x + z)/2)].

If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from I. When f is strictly convex, the inequality is strict except for x = y = z.
Generalizations
It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time:
Let f be a continuous function from an interval I ⊆ ℝ to ℝ. Then f is convex if and only if, for any integers n and k where n ≥ 3 and 2 ≤ k ≤ n − 1, and any n points from I,
Weighted inequality
Popoviciu's inequality can also be generalized to a weighted inequality.
Let f be a continuous function from an interval to . Let be three points from , and let be three nonnegative reals such that and . Then,
Notes
|
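For a convex function such as f(x) = x², Popoviciu's inequality — (f(x) + f(y) + f(z))/3 + f((x + y + z)/3) ≥ (2/3)[f((x + y)/2) + f((y + z)/2) + f((x + z)/2)] — can be spot-checked numerically. The Python sketch below does random sampling, which is illustrative only and not a proof.

```python
# Numeric spot-check of Popoviciu's inequality for the convex f(x) = x^2.
import random

def popoviciu_gap(f, x, y, z):
    """Left side minus right side; nonnegative when f is convex."""
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((x + z) / 2))
    return lhs - rhs

f = lambda t: t * t
random.seed(0)
gaps = [popoviciu_gap(f, *(random.uniform(-10, 10) for _ in range(3)))
        for _ in range(10_000)]
print(min(gaps) >= -1e-9)   # True: the inequality holds at every sample
```

For f(x) = x² the gap works out to (x² + y² + z² − xy − yz − zx)/9, which is nonnegative and vanishes exactly when x = y = z, matching the strict-convexity statement above.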
https://en.wikipedia.org/wiki/Subcarrier%20multiplexing | Subcarrier Multiplexing (SCM) is a method for combining (multiplexing) many different communications signals so that they can be transmitted along a single optical fiber. SCM (also known as SCMA, SubCarrier Multiple Access) is used in passive optical network (PON) access infrastructures as a variant of wavelength division multiplexing (WDM).
SCM follows a different approach compared to WDM. In WDM an optical carrier is modulated with a baseband signal of typically hundreds of Mbit/s. In an SCMA infrastructure, the baseband data is first modulated onto a GHz-wide subcarrier, which is subsequently modulated onto the optical carrier. This way each signal occupies a different portion of the optical spectrum surrounding the centre frequency of the optical carrier. At the receiving side, as normally happens in a commercial radio service, the receiver is tuned to the correct subcarrier frequency, filtering out the other subcarriers.
The operation of multiplexing and demultiplexing the single subcarriers is carried out electronically. The conversion into the optical carrier is done at the multiplexer side. This gives an advantage over a pure WDM access, due to the lower cost of the electrical components if compared with an optical multiplexer.
SCM has the disadvantage of being limited in maximum subcarrier frequencies and data rates by the available bandwidth of the electrical and optical components. Therefore, SCM must be used in conjunction with WDM in order to take advantage of most of the available fiber bandwidth, but it can be used effectively for lower-speed, lower-cost multiuser systems. |
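The subcarrier scheme can be sketched numerically: each baseband signal is mixed onto its own electrical subcarrier and the results are summed into one composite, which would then modulate a single optical carrier. In the Python sketch below, all signal and subcarrier frequencies are arbitrary illustrative values, far below the GHz range of a real system.

```python
# Sketch of subcarrier multiplexing: two baseband signals on two subcarriers.
import numpy as np

fs = 1_000_000                       # sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)       # 1 ms window -> 1 kHz spectral resolution

m1 = np.sin(2 * np.pi * 1e3 * t)              # baseband signal 1: 1 kHz tone
m2 = np.sign(np.sin(2 * np.pi * 2e3 * t))     # baseband signal 2: 2 kHz square "data"

f1, f2 = 100e3, 200e3                # distinct subcarrier frequencies
composite = m1 * np.cos(2 * np.pi * f1 * t) + m2 * np.cos(2 * np.pi * f2 * t)

# Each signal now occupies its own slice of spectrum around its subcarrier
power = np.abs(np.fft.rfft(composite)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f0, half_width=5e3):
    return power[np.abs(freqs - f0) <= half_width].sum()

# Energy concentrates around 100 kHz and 200 kHz, not in between
print(band_power(100e3) > 50 * band_power(150e3))   # True
```

A receiver would recover one channel by mixing the composite back down with the matching subcarrier and low-pass filtering, exactly as in a commercial radio tuner.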
https://en.wikipedia.org/wiki/HP%20Prime | The HP Prime Graphing Calculator is a graphing calculator introduced by Hewlett-Packard in 2013 and manufactured by HP Inc. until the licensees Moravia Consulting spol. s r.o. and Royal Consumer Information Products, Inc. took over the continued development, manufacturing, distribution, marketing and support in 2022. It was designed with features resembling those of smartphones, such as a full-color touchscreen display and a user interface centered around different applications. It claims to be the world's smallest and thinnest CAS-enabled calculator currently available.
The functionality of the HP Prime is also available as emulation software for PCs and Macs, as well as for various smartphones.
Design and software
The HP Prime's graphical user interface features two separate home screens, one of which contains an integrated computer algebra system (CAS) based on the free and open-source Xcas/Giac 1.5.0 engine, which evolved from that of the HP 49G and its successors. Both the standard and CAS modes function independently of each other and the calculator can quickly switch between the two, unlike some of its competitors, such as the TI-Nspire series by Texas Instruments, which comes in either CAS-supported models or non-CAS models.
The G1 model calculator has a 1,500 mAh battery, which is expected to last up to 15 hours on a single charge. The G2 model comes with a battery with a capacity of 2,000 mAh.
Unlike the HP 50g and its predecessors, the HP Prime does not have an SD card slot and does not feature a beeper.
Exam Mode
The HP Prime has a feature called Exam Mode. This enables various features of the calculator (such as CAS functionality, user-created apps, notes, etc.) to be selectively disabled for a specific time, from 15 minutes to 8 hours. This can be done manually within the calculator's menus, or by using a computer with HP's connectivity software. LEDs on the top of the calculator blink to let the instructor see that the calculator is in this mode |
https://en.wikipedia.org/wiki/FAIRE-Seq | FAIRE-Seq (Formaldehyde-Assisted Isolation of Regulatory Elements) is a method in molecular biology used for determining the sequences of DNA regions in the genome associated with regulatory activity. The technique was developed in the laboratory of Jason D. Lieb at the University of North Carolina, Chapel Hill. In contrast to DNase-Seq, the FAIRE-Seq protocol doesn't require the permeabilization of cells or isolation of nuclei, and can analyse any cell type. In a study of seven diverse human cell types, DNase-seq and FAIRE-seq produced strong cross-validation, with each cell type having 1-2% of the human genome as open chromatin.
Workflow
The protocol is based on the fact that formaldehyde cross-linking is more efficient in nucleosome-bound DNA than in nucleosome-depleted regions of the genome. The method then segregates the non-cross-linked DNA that is usually found in open chromatin, which is then sequenced. The protocol consists of cross-linking, phenol–chloroform extraction, and sequencing of the DNA in the aqueous phase.
FAIRE
FAIRE uses the biochemical properties of protein-bound DNA to separate nucleosome-depleted regions in the genome. Cells will be subjected to cross-linking, ensuring that the interaction between the nucleosomes and DNA are fixed. After sonication, the fragmented and fixed DNA is separated using a phenol-chloroform extraction. This method creates two phases, an organic and an aqueous phase. Due to their biochemical properties, the DNA fragments cross-linked to nucleosomes will preferentially sit in the organic phase. Nucleosome depleted or ‘open’ regions on the other hand will be found in the aqueous phase. By specifically extracting the aqueous phase, only nucleosome-depleted regions will be purified and enriched.
Sequencing
FAIRE-extracted DNA fragments can be analyzed in a high-throughput way using next-generation sequencing techniques. In general, libraries are made by ligating specific adapters to the DNA fragments that allow them to cl |
https://en.wikipedia.org/wiki/Montanelia | Montanelia is a genus of fungi belonging to the family Parmeliaceae.
The genus has almost cosmopolitan distribution.
Species:
Montanelia disjuncta
Montanelia occultipanniformis
Montanelia panniformis
Montanelia predisjuncta
Montanelia saximontana
Montanelia secwepemc
Montanelia sorediata
Montanelia tominii |
https://en.wikipedia.org/wiki/Noiseless%20subsystems | The framework of noiseless subsystems has been developed as a tool to preserve fragile quantum information against decoherence. In brief, when a quantum register (a Hilbert space) is subjected to decoherence due to an interaction with an external and uncontrollable environment, information stored in the register is, in general, degraded. It has been shown that when the source of decoherence exhibits some symmetries, certain subsystems of the quantum register are unaffected by the interactions with the environment and are thus noiseless. These noiseless subsystems are therefore very natural and robust tools that can be used for processing quantum information.
See also
Decoherence-free subspaces |
https://en.wikipedia.org/wiki/Tissue%20growth | Tissue growth is the process by which a tissue increases its size. In animals, tissue growth occurs during embryonic development, post-natal growth, and tissue regeneration. The fundamental cellular basis for tissue growth is the process of cell proliferation, which involves both cell growth and cell division occurring in parallel.
How cell proliferation is controlled during tissue growth to determine final tissue size is an open question in biology. Uncontrolled tissue growth is a cause of cancer.
Differential rates of cell proliferation within an organ can influence proportions, as can the orientation of cell divisions, and thus tissue growth contributes to shaping tissues along with other mechanisms of tissue morphogenesis.
Mechanisms of tissue growth control in animals
Mechanical control of tissue growth in animal skin
For some animal tissues, such as mammalian skin, it is clear that the growth of the skin is ultimately determined by the size of the body whose surface area the skin covers. This suggests that cell proliferation in skin stem cells within the basal layer is likely to be mechanically controlled to ensure that the skin covers the surface of the entire body. Growth of the body causes mechanical stretching of the skin, which is sensed by skin stem cells within the basal layer and consequently leads to both an increased rate of cell proliferation as well as promoting the planar orientation of stem cell divisions to produce new skin stem cells, rather than only producing differentiating supra-basal daughter cells.
Cell proliferation in skin stem cells within the basal layer can be driven by the mechanically-regulated YAP/TAZ family of transcriptional co-activators, which bind to TEAD-family DNA binding transcription factors in the nucleus to activate target gene expression and thereby drive cell proliferation.
For other animal tissues, such as the bones of the skeleton or the internal mammalian organs intestine, pancreas, kidney or brain, |
https://en.wikipedia.org/wiki/Abscopal%20effect | The abscopal effect is a hypothesis in the treatment of metastatic cancer whereby shrinkage of untreated tumors occurs concurrently with shrinkage of tumors within the scope of the localized treatment. R.H. Mole proposed the term “abscopal” (‘ab’ - away from, ‘scopus’ - target) in 1953 to refer to effects of ionizing radiation “at a distance from the irradiated volume but within the same organism.”
Initially associated with single-tumor, localized radiation therapy, the term “abscopal effect” has also come to encompass other types of localized treatments, such as electroporation and intra-tumoral injection of therapeutics. However, the term should only be used when truly local treatments produce systemic effects. Chemotherapeutics, for instance, commonly circulate through the bloodstream, which excludes the possibility of any truly abscopal response.
The mediators of the abscopal effect of radiotherapy were unknown for decades. In 2004, it was postulated for the first time that the immune system might be responsible for these “off-target” anti-tumor effects. Various studies in animal models of melanoma, mammary, and colorectal tumors have substantiated this hypothesis. Abscopal effects of targeted intraoperative radiotherapy have been seen in clinical studies, including in randomized trials where, among women treated with lumpectomy for breast cancer, those receiving targeted intraoperative radiotherapy showed reduced mortality from non-breast-cancer causes compared with those receiving whole breast radiotherapy. Furthermore, immune-mediated abscopal effects were also described in patients with metastatic cancer. Whereas these reports were extremely rare throughout the 20th century, the clinical use of immune checkpoint blocking antibodies such as ipilimumab or pembrolizumab has greatly increased the number of abscopally responding patients in selected groups of patients such as those with metastatic melanoma or lymphoma.
Mechanisms
Similar to immune reactions against antigens from bacteria |
https://en.wikipedia.org/wiki/Vacuum%20deposition | Vacuum deposition is a group of processes used to deposit layers of material atom-by-atom or molecule-by-molecule on a solid surface. These processes operate at pressures well below atmospheric pressure (i.e., vacuum). The deposited layers can range from a thickness of one atom up to millimeters, forming freestanding structures. Multiple layers of different materials can be used, for example to form optical coatings. The process can be qualified based on the vapor source; physical vapor deposition uses a liquid or solid source and chemical vapor deposition uses a chemical vapor.
Description
The vacuum environment may serve one or more purposes:
reducing the particle density so that the mean free path for collision is long
reducing the particle density of undesirable atoms and molecules (contaminants)
providing a low pressure plasma environment
providing a means for controlling gas and vapor composition
providing a means for mass flow control into the processing chamber.
Condensing particles can be generated in various ways:
thermal evaporation
sputtering
cathodic arc vaporization
laser ablation
decomposition of a chemical vapor precursor, chemical vapor deposition
In reactive deposition, the depositing material reacts either with a component of the gaseous environment (Ti + N → TiN) or with a co-depositing species (Ti + C → TiC). A plasma environment aids in activating gaseous species (N2 → 2N) and in decomposition of chemical vapor precursors (SiH4 → Si + 4H). The plasma may also be used to provide ions for vaporization by sputtering or for bombardment of the substrate for sputter cleaning and for bombardment of the depositing material to densify the structure and tailor properties (ion plating).
Types
When the vapor source is a liquid or solid the process is called physical vapor deposition (PVD). When the source is a chemical vapor precursor, the process is called chemical vapor deposition (CVD). The latter has several variants: low-pressure chemi |
https://en.wikipedia.org/wiki/Stack%20machine | In computer science, computer engineering and programming language implementations, a stack machine is a computer processor or a virtual machine in which the primary interaction is moving short-lived temporary values to and from a push down stack. In the case of a hardware processor, a hardware stack is used. The use of a stack significantly reduces the required number of processor registers. Stack machines extend push-down automata with additional load/store operations or multiple stacks and hence are Turing-complete.
Design
Most or all stack machine instructions assume that operands are taken from the stack, and results placed on the stack. The stack easily holds more than two inputs or more than one result, so a rich set of operations can be computed. In stack machine code (sometimes called p-code), instructions will frequently have only an opcode commanding an operation, with no additional fields identifying a constant, register or memory cell, known as a zero address format. This greatly simplifies instruction decoding. Branches, load immediates, and load/store instructions require an argument field, but stack machines often arrange that the frequent cases of these still fit together with the opcode into a compact group of bits. The selection of operands from prior results is done implicitly by ordering the instructions. Some stack machine instruction sets are intended for interpretive execution of a virtual machine, rather than driving hardware directly.
Integer constant operands are pushed by push or load-immediate instructions. Memory is often accessed by separate load or store instructions containing a memory address or calculating the address from values in the stack. All practical stack machines have variants of the load–store opcodes for accessing local variables and formal parameters without explicit address calculations. This can be by offsets from the current top-of-stack address, or by offsets from a stable frame-base register.
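A minimal interpreter makes the zero-address format concrete. The opcode names and tuple encoding below are hypothetical, not from any particular machine; only the argument-carrying instructions (push constant, load variable) have operand fields, while the arithmetic opcodes take their operands implicitly from the stack.

```python
def run(program, env):
    """Interpret a tiny zero-address stack machine.
    PUSH/LOAD carry an argument field; ADD/MUL are pure opcodes."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":            # push an integer constant
            stack.append(arg[0])
        elif op == "LOAD":          # load a named local variable
            stack.append(env[arg[0]])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (a + 3) * b in zero-address code: operand selection is implicit
# in the ordering of the instructions.
code = [("LOAD", "a"), ("PUSH", 3), ("ADD",), ("LOAD", "b"), ("MUL",)]
print(run(code, {"a": 2, "b": 4}))  # 20
```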
The instruction set carries out most ALU action |
https://en.wikipedia.org/wiki/ArcaOS | ArcaOS is an operating system based on OS/2, developed and marketed by Arca Noae, LLC under license from IBM. It was codenamed Blue Lion during its development. It builds on OS/2 Warp 4.52 by adding support for new hardware, fixing defects and limitations in the operating system, and by including new applications and tools, and includes some Linux/Unix tool compatibility. It is targeted at professional users who need to run their OS/2 applications on new hardware, as well as personal users of OS/2.
Like OS/2 Warp, ArcaOS is a 32-bit single user, multiprocessing, preemptive multitasking operating system for the x86 architecture. It is supported on both physical hardware and virtual machine hypervisors.
Features
Hardware compatibility
ArcaOS supports symmetric multiprocessing systems with up to 64 processor cores, although it is recommended to disable hyperthreading. As of version 5.0.8, ArcaOS is ACPI 6.1-compliant and includes the 20220331 release of ACPICA.
While ArcaOS is a 32-bit operating system, it has limited PAE support which allows it to use RAM in excess of 4GB as a RAM disk.
ArcaOS supports being run as a virtual machine guest inside VirtualBox, VMware ESXi, VMware Workstation and Microsoft Virtual PC.
In addition to the device drivers included with OS/2 Warp 4, ArcaOS includes a variety of drivers developed by Arca Noae, and various third parties:
Network adapters are supported either with Arca Noae's MultiMac technology, which employs FreeBSD driver code, or a selection of GenMAC drivers. Support for wireless networking is somewhat limited, though MultiMac support for additional chipsets is planned for future releases of ArcaOS.
ArcaOS replaces the 16-bit IBM OS/2 USB driver with a new 32-bit driver capable of supporting USB 2.0 and USB 3.0 controllers.
Audio support utilizes the Uniaud generic audio driver, now maintained by Arca Noae. Uniaud is based on the ALSA framework from the Linux kernel. In addition, a selection of device-specific dri |
https://en.wikipedia.org/wiki/Latterly | Latterly was a quarterly independent magazine and website that published longform journalism, news, opinion and photo essays focusing on political and social justice issues globally.
History
The magazine was founded in Bangkok in 2014 and is edited by Ben Wolford. It is notable for launching as a website that "doesn't care about page views." It has since developed, and subsequently discontinued, an iOS app. In May 2016, Latterly became a publication on the Medium platform and joined its revenue beta program. In September, Latterly announced the hiring of former New York Times foreign correspondent Laura Kasinof. Latterly earns revenue through subscriptions and donations. The magazine has partnered with other media companies, including Newsweek, The Week, and Ulyces, to produce and translate articles. Latterly published its first print edition in December 2016. On 24 January 2017, Latterly broke news that U.S. President Donald Trump was planning to issue Executive Order 13769 banning immigrants from specific countries and prioritizing the refugee resettlement of religious minorities. The publication ceased operations and published its last print edition in 2018. |
https://en.wikipedia.org/wiki/Cholera%20belt | The cholera belt was a flat strip of (usually red) flannel or knitted wool, about six feet long and six inches wide, that was wrapped around the bare abdomen. The item was standard army issue, and was purported to prevent the wearer from contracting cholera, dysentery, and other ailments believed to be caused by chilling of the abdomen. The belt's use continued for decades after the causative link between pathogen-contaminated drinking water and cholera was established.
History
Attempts to prevent illness by wearing flannel body wraps date to the early 1700s. In 1707 Jeremiah Wainewright wrote "'I was perswaded'(sic) ... to wear Flannel next to my Skin some ten Years ago for a severe Cough ... I received some advantage'", and in 1726 author Richard Towne wrote, "'those who are subject to habitual Looseness may receive great Benefit by wearing Flannel and keeping their Bodies warm'". By 1799 the British army promoted a "flannel bandage to the whole abdomen," with surgeon Robert Jackson in 1817 recommending "the application of 'flannel over the abdomen, adding such pressure to it by a flannel roller'" to prevent dysentery, and James Annesley writing in 1828 that "'use of a thick flannel banyan and cummerband during the Monsoon will ... exert considerable influence in preventing bowel complaints'".
According to historian E.T. Renbourn, flannel waistcoats and belts were commonly worn by British soldiers before the 1830s, but as cholera epidemics spread from 1817 to the 1830s, fear spread, leading to reports in the Cholera Gazette that soldiers should wear flannel to prevent cholera, a practice possibly originating in the Polish-Russian War of 1830-31, though a "cholera belt" was not mentioned. Renbourn writes that although the phrase "cholera belt" was not being specifically mentioned in print, it was "being used fairly widely by the populace in general". It was not until 1848 that Instructions to Army Medical Officers for their Guidance on the Appearance of the Spasmodic Cholera inc
https://en.wikipedia.org/wiki/Reflection%20%28mathematics%29 | In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as a set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state.
The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. Such isometries have a set of fixed points (the "mirror") that is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it
would look like a d. This operation is also known as a central inversion, and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane.
Some mathematicians use "flip" as a synonym for "reflection".
Construction
In a plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure.
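The perpendicular-drop construction translates directly into coordinates. The sketch below is an illustration, not from the article; the function name and the representation of the line as a*x + b*y = c are assumptions made here.

```python
import math

def reflect(point, a, b, c):
    """Reflect the point (x, y) across the line a*x + b*y = c.

    Implements the construction described in the text: drop a
    perpendicular from the point to the line, then extend it the
    same distance on the other side."""
    x, y = point
    norm = math.hypot(a, b)
    # signed perpendicular distance from the point to the line
    d = (a * x + b * y - c) / norm
    # move back across the line by twice that distance along the normal
    return (x - 2 * d * a / norm, y - 2 * d * b / norm)

print(reflect((5, 7), 0, 1, 2))  # across the horizontal line y = 2 -> (5.0, -3.0)
print(reflect((3, 0), 1, -1, 0))  # across the line y = x -> approximately (0, 3)
```

Applying `reflect` twice with the same line returns the original point (up to floating-point error), illustrating that a reflection is an involution.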
To reflect point through the line using compass |
https://en.wikipedia.org/wiki/Stephen%20Wolfram | Stephen Wolfram ( ; born 29 August 1959) is a British-American computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics. In 2012, he was named a fellow of the American Mathematical Society.
As a businessman, he is the founder and CEO of the software company Wolfram Research where he works as chief designer of Mathematica and the Wolfram Alpha answer engine.
Early life
Family
Stephen Wolfram was born in London in 1959 to Hugo and Sybil Wolfram, both German Jewish refugees to the United Kingdom. His maternal grandmother was British psychoanalyst Kate Friedlander.
Wolfram's father, Hugo Wolfram, was a textile manufacturer and served as managing director of the Lurex Company—makers of the fabric Lurex. Wolfram's mother, Sybil Wolfram, was a Fellow and Tutor in Philosophy at Lady Margaret Hall at University of Oxford from 1964 to 1993.
Stephen Wolfram is married to a mathematician. They have four children together.
Education
Wolfram was educated at Eton College, but left prematurely in 1976. As a young child, Wolfram had difficulties learning arithmetic. He entered St. John's College, Oxford, at age 17 and left in 1978 without graduating to attend the California Institute of Technology the following year, where he received a PhD in particle physics in 1980. Wolfram's thesis committee was composed of Richard Feynman, Peter Goldreich, Frank J. Sciulli and Steven Frautschi, and chaired by Richard D. Field.
Early career
Wolfram, at the age of 15, began research in applied quantum field theory and particle physics and published scientific papers in peer-reviewed scientific journals including Nuclear Physics B, Australian Journal of Physics, Nuovo Cimento, and Physical Review D. Working independently, Wolfram published a widely cited paper on heavy quark production at age 18 and nine other papers. Wolfram's work with Geoffrey C. Fox on the theory of the strong interaction is still used in experi |
https://en.wikipedia.org/wiki/Colloblast | Colloblasts are unique, multicellular structures found in ctenophores. They are widespread in the tentacles of these animals and are used to capture prey. Colloblasts consist of a collocyte containing a coiled spiral filament, internal granules and other organelles.
Like the cnidocytes of cnidarians, colloblasts are discharged from the animals’ tentacles, and are used to capture prey. However, unlike cnidocytes, which are venomous cells, colloblasts contain adhesives that stick to, rather than sting, the prey.
Form, function, and occurrence
Colloblasts were first described in 1844.
The apical surface of colloblasts consist of numerous cap cells that secrete eosinophilic granules that are thought to be the source of adhesion. On contact, these granules rupture, and release an adhesive substance onto the prey. The spiral filament absorbs the impact of the rupture, preventing the ensnared prey from escaping. Colloblasts are found in all ctenophores except those of the order Beroida, which lack tentacles, and the species Haeckelia rubra, which use cnidocytes from cnidarian prey. |
https://en.wikipedia.org/wiki/Solved%20game | A solved game is a game whose outcome (win, lose or draw) can be correctly predicted from any position, assuming that both players play perfectly.
This concept is usually applied to abstract strategy games, and especially to games with full information and no element of chance;
solving such a game may use combinatorial game theory and/or computer assistance.
Overview
A two-player game can be solved on several levels:
Ultra-weak
Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.
Weak
Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game.
Strong
Provide an algorithm that can produce perfect moves from any position, even if mistakes have already been made on one or both sides.
Despite their name, many game theorists believe that "ultra-weak" proofs are the deepest, most interesting and valuable. "Ultra-weak" proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized.
By contrast, "strong" proofs often proceed by brute force—using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs are not as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win.
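As a toy illustration of such a brute-force strong solution, the sketch below exhaustively solves a simple subtraction game (a hypothetical example chosen here, not one discussed in the article): players alternately remove 1–3 counters from a pile, and whoever takes the last counter wins. The exhaustive search produces a perfect move from any position, which is exactly what a strong solution requires.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    """True if the player to move wins from a pile of n counters,
    assuming perfect play: some move must leave the opponent losing."""
    return any(not wins(n - take) for take in (1, 2, 3) if take <= n)

def best_move(n):
    """A perfect move from position n, or None if every move loses."""
    for take in (1, 2, 3):
        if take <= n and not wins(n - take):
            return take
    return None

# The exhaustive search recovers the known pattern: multiples of 4
# are losses for the player to move.
print([n for n in range(1, 13) if not wins(n)])  # [4, 8, 12]
print(best_move(10))  # 2 (leaving the opponent at the losing position 8)
```

Note that the search proves *which* positions are wins without explaining *why* multiples of 4 lose, mirroring the text's point that strong, brute-force proofs offer less insight than ultra-weak ones.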
Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasibl |
https://en.wikipedia.org/wiki/Klaus%20Hepp | Klaus Hepp (born 11 December 1936) is a German-born Swiss theoretical physicist working mainly in quantum field theory. Hepp studied mathematics and physics at the Westfälische Wilhelms-Universität in Münster and at the Eidgenössische Technische Hochschule (ETH) in Zurich, where, in 1962, with Res Jost as first thesis advisor and Markus Fierz as second thesis advisor, he received a doctorate for the thesis "Kovariante analytische Funktionen" and, in 1963, attained the rank of Privatdozent at ETH. From 1966 until his retirement in 2002 he was professor of theoretical physics there. From 1964 to 1966 he was at the Institute for Advanced Study in Princeton. Hepp was also Loeb Lecturer at Harvard and spent time at the IHÉS near Paris.
Hepp worked on relativistic quantum field theory, quantum statistical mechanics, and theoretical laser physics. In quantum field theory he gave a complete proof of the Bogoliubov–Parasyuk renormalization theorem (Hepp and Wolfhart Zimmermann, called in their honor the BPHZ theorem). Since a research stay 1975/6 at MIT he also worked in neuroscience (for example, reciprocal effect between movement sensors, visual sense and eye movements with V. Henn in Zurich).
In 2004 he received the Max Planck Medal.
Selected works
"Renormalisation Theory", in DeWitt and Stora (eds.), Statistical Mechanics and Quantum Field Theory, Gordon and Breach, New York, 1971
"Progress in Quantum Field Theory", Erice Lectures, 1972
Théorie de la renormalisation, Springer, 1970
"On the quantum mechanical N-body problem", Helvetica Physica Acta, vol. 42, 1969, p. 425
"On the connection between Wightman and LSZ quantum field theory", in Chretien and Deser (eds.), Axiomatic Quantum Field Theory, New York, 1966
"Quantum mechanics in the brain" (with Christof Koch) in Nature 440(611), 2006
See also
Gell-Mann and Low theorem
Mean-field particle methods
Superradiant phase transition
Wightman axioms |
https://en.wikipedia.org/wiki/CoDi | CoDi is a cellular automaton (CA) model for spiking neural networks (SNNs). CoDi is an acronym for Collect and Distribute, referring to the signals and spikes in a neural network.
CoDi uses a von Neumann neighborhood modified for a three-dimensional space; each cell looks at the states of its six orthogonal neighbors and its own state. In a growth phase a neural network is grown in the CA-space based on an underlying chromosome. There are four types of cells: neuron body, axon, dendrite and blank. The growth phase is followed by a signaling (or processing) phase. Signals are distributed from the neuron bodies via their axon trees and collected from connected dendrites. These two basic interactions cover every case, and they can be expressed simply, using a small number of rules.
Cell interaction during signaling
The neuron body cells collect neural signals from the surrounding dendritic cells and apply an internally defined function to the collected data. In the CoDi model the neurons sum the incoming signal values and fire after a threshold is reached. This behavior of the neuron bodies can be modified easily to suit a given problem. The output of the neuron bodies is passed on to its surrounding axon cells. Axonal cells distribute data originating from the neuron body. Dendritic cells collect data and eventually pass it to the neuron body. These two types of cell-to-cell interaction cover all kinds of cell encounters.
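The sum-and-fire behavior of the neuron bodies can be sketched in a few lines. The state encoding, threshold value, and input stream below are illustrative assumptions made here, not parameters of the CoDi model, which operates as a full 3D cellular automaton.

```python
def neuron_step(membrane, inputs, threshold=4):
    """One signaling step of a CoDi-style neuron body (simplified).
    The neuron sums the signals collected by its dendritic neighbors,
    accumulates them, and fires (emitting 1 toward its axon cells)
    once the threshold is reached, resetting afterwards."""
    membrane += sum(inputs)
    if membrane >= threshold:
        return 0, 1      # reset the accumulator, spike on the axon
    return membrane, 0   # keep accumulating, no spike

# Feed the neuron a stream of dendritic inputs and record its spikes.
m, spikes = 0, []
for step_inputs in [[1, 1], [0, 1], [2, 0], [1, 1], [3, 2]]:
    m, out = neuron_step(m, step_inputs)
    spikes.append(out)
print(spikes)  # [0, 0, 1, 0, 1]
```

As the text notes, this summation function is the part of the model that is easily swapped out to suit a given problem.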
Every cell has a gate, which is interpreted differently depending on the type of the cell. A neuron cell uses this gate to store its orientation, i.e. the direction in which the axon is pointing. In an axon cell, the gate points to the neighbor from which the neural signals are received. An axon cell accepts input only from this neighbor, but makes its own output available to all its neighbors. In this way axon cells distribute information. The source of information is always a neuron cell. Dendritic cells collect
information by accepting infor |
https://en.wikipedia.org/wiki/Carbon%20cycle | The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks.
To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast carbon cycle is also referred to as the biological carbon cycle. Fast carbon cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
Human activities have disturbed the fast carbon cycle for many centuries by modifying land use, and moreover with the recent industrial-scale mining of fossil carbon (coal, petroleum, and gas extraction, and cement manufacture) from the geosphere. Carbon dioxide in the atmosphere had increased nearly 52% over pre-industrial levels by 2020, forcing greater atmospheric and Earth surface heating by the Sun. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. The majority of fossil carbon has been extracted over just the past half century, and rates continue to rise rapidly, contributing to human-caused climate change.
Main compartments
The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The g |
https://en.wikipedia.org/wiki/Sensitivity%20%28control%20systems%29 | In control engineering, the sensitivity (or more precisely, the sensitivity function) of a control system measures how variations in the plant parameters affect the closed-loop transfer function. Since the controller parameters are typically matched to the process characteristics and the process may change, it is important that the controller parameters are chosen in such a way that the closed-loop system is not sensitive to variations in process dynamics. Moreover, the sensitivity function is also important for analysing how disturbances affect the system.
Sensitivity function
Let P(s) and C(s) denote the transfer functions of the plant and the controller, respectively, in a basic closed-loop control system written in the Laplace domain using unity negative feedback.
Sensitivity function as a measure of robustness to parameter variation
The closed-loop transfer function is given by

T(s) = P(s)C(s) / (1 + P(s)C(s)).

Differentiating T with respect to P yields

dT/dP = C / (1 + PC)^2 = S · (T/P),

where S is defined as the function

S(s) = 1 / (1 + P(s)C(s))

and is known as the sensitivity function. Lower values of |S| imply that relative errors in the plant parameters have a smaller effect on the relative error of the closed-loop transfer function.
Sensitivity function as a measure of disturbance attenuation
The sensitivity function also describes the transfer function from an external disturbance to the process output. In fact, assuming an additive disturbance n after the output
of the plant, the transfer function from n to the output y of the closed-loop system is given by

y/n = 1 / (1 + P(s)C(s)) = S(s).

Hence, lower values of |S| mean stronger attenuation of the external disturbance. The sensitivity function tells us how the disturbances are influenced by feedback: disturbances with frequencies ω such that |S(jω)| is less than one are reduced by the feedback, and disturbances with frequencies such that |S(jω)| is larger than one are amplified by the feedback.
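This frequency-dependent behavior can be checked numerically for an assumed system. The plant and controller below (a first-order plant with an integral controller) are illustrative choices made here, not examples from the article.

```python
def S(w, P, C):
    """Magnitude of the sensitivity function S(jw) = 1 / (1 + P(jw)C(jw))."""
    s = 1j * w
    return abs(1 / (1 + P(s) * C(s)))

# Assumed example system: first-order plant, integral controller.
P = lambda s: 1 / (s + 1)   # plant transfer function
C = lambda s: 2 / s         # controller transfer function

low, high = S(0.01, P, C), S(100.0, P, C)
print(low < 1)         # True: low-frequency disturbances are strongly attenuated
print(round(high, 3))  # close to 1: feedback has little effect far above crossover
```

The integral action drives |S| toward zero at low frequencies, which is why such controllers reject constant disturbances.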
Sensitivity peak and sensitivity circle
Sensitivity peak
It is important that the largest value of the sensitivity function be limited for a control system. The n |
https://en.wikipedia.org/wiki/Partial%20sorting | In computer science, partial sorting is a relaxed variant of the sorting problem. Total sorting is the problem of returning a list of items such that its elements all appear in order, while partial sorting is returning a list of the k smallest (or k largest) elements in order. The other elements (above the k smallest ones) may also be sorted, as in an in-place partial sort, or may be discarded, which is common in streaming partial sorts. A common practical example of partial sorting is computing the "Top 100" of some list.
In terms of indices, in a partially sorted list, for every index i from 1 to k, the i-th element is in the same place as it would be in the fully sorted list: element i of the partially sorted list contains order statistic i of the input list.
Offline problems
Heap-based solution
Heaps admit a simple single-pass partial sort when k is fixed: insert the first k elements of the input into a max-heap. Then make one pass over the remaining elements, add each to the heap in turn, and remove the largest element. Each insertion operation takes O(log k) time, resulting in O(n log k) time overall; this "partial heapsort" algorithm is practical for small values of k and in online settings. An "online heapselect" algorithm described below, based on a min-heap, takes O(n + k log n).
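The single-pass partial heapsort described above can be sketched with Python's heapq module; since heapq provides a min-heap, keys are negated here to emulate the max-heap of the k smallest elements seen so far.

```python
import heapq
import itertools

def partial_sort(items, k):
    """Return the k smallest elements of items, in order.
    Single pass, O(n log k) time, O(k) extra space."""
    it = iter(items)
    # max-heap (via negation) of the first k elements
    heap = [-x for x in itertools.islice(it, k)]
    heapq.heapify(heap)
    for x in it:
        # add the element, then discard the current maximum,
        # keeping the k smallest seen so far
        heapq.heappushpop(heap, -x)
    return sorted(-v for v in heap)

print(partial_sort([9, 1, 8, 2, 7, 3, 6, 4, 5], 3))  # [1, 2, 3]
```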
Solution by partitioning selection
A further relaxation requiring only a list of the k smallest elements, but without requiring that these be ordered, makes the problem equivalent to partition-based selection; the original partial sorting problem can be solved by such a selection algorithm to obtain an array where the first k elements are the k smallest, and sorting these, at a total cost of O(n + k log k) operations. A popular choice to implement this algorithm scheme is to combine quickselect and quicksort; the result is sometimes called "quickselsort".
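The partition-based scheme can be sketched as follows. This is a simplified quickselect that partitions into temporary lists rather than in place (trading extra space for clarity), followed by an ordinary sort of the k-element prefix; the function name is a label chosen here.

```python
import random

def quickselsort(items, k):
    """Partial sort via selection: quickselect moves the k smallest
    elements to the front (unordered), then sorting that prefix
    finishes the job, at O(n + k log k) expected cost."""
    a = list(items)
    lo, hi = 0, len(a)
    while hi - lo > 1:
        pivot = a[random.randrange(lo, hi)]
        # three-way partition of the active window around the pivot
        lt = [x for x in a[lo:hi] if x < pivot]
        eq = [x for x in a[lo:hi] if x == pivot]
        gt = [x for x in a[lo:hi] if x > pivot]
        a[lo:hi] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt)            # boundary lies inside the < block
        elif k < lo + len(lt) + len(eq):
            break                        # boundary lies inside the == block
        else:
            lo += len(lt) + len(eq)      # recurse into the > block
    return sorted(a[:k])

print(quickselsort([5, 3, 9, 1, 4, 8, 2, 7, 6], 4))  # [1, 2, 3, 4]
```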
Common in current (as of 2022) C++ STL implementations is a pass of heapselect for a list of k elements, followed by a heapsort for the final result.
Specialised sorting alg |
https://en.wikipedia.org/wiki/Shmuel%20Winograd |
Shmuel Winograd (; January 4, 1936 – March 25, 2019) was an Israeli-American computer scientist, noted for his contributions to computational complexity. He proved several major results regarding the computational aspects of arithmetic; his contributions include the Coppersmith–Winograd algorithm and an algorithm for the fast Fourier transform that transforms it into a problem of computing convolutions, which can in turn be solved by another of Winograd's algorithms.
Winograd studied Electrical Engineering at the Massachusetts Institute of Technology, receiving his B.S. and M.S. degrees in 1959. He received his Ph.D. from the Courant Institute of Mathematical Sciences at New York University in 1968. He joined the research staff at IBM in 1961, eventually becoming director of the Mathematical Sciences Department there from 1970 to 1974 and 1980 to 1994.
Honors
IBM Fellow (1972)
Fellow of the Institute of Electrical and Electronics Engineers (1974)
W. Wallace McDowell Award (1974)
Member, National Academy of Sciences (1978)
Member, American Academy of Arts and Sciences (1983)
Member, American Philosophical Society (1989)
Fellow of the Association for Computing Machinery (1994)
Books |
https://en.wikipedia.org/wiki/Transitive%20model | In mathematical set theory, a transitive model is a model of set theory that is standard and transitive. Standard means that the membership relation is the usual one, and transitive means that the model is a transitive set or class.
Examples
An inner model is a transitive model containing all ordinals.
A countable transitive model (CTM) is, as the name suggests, a transitive model with a countable number of elements.
Properties
If M is a transitive model, then ω^M is the standard ω. This implies that the natural numbers, integers, and rational numbers of the model are also the same as their standard counterparts. Each real number in a transitive model is a standard real number, although not all standard reals need be included in a particular transitive model. |
https://en.wikipedia.org/wiki/Victor%20Animatograph%20Corporation | The Victor Animatograph Corporation was a maker of projection equipment founded in 1910 in Davenport, Iowa by Swedish-born American inventor Alexander F. Victor.
The firm introduced its first 16 mm camera and movie projector on August 12, 1923, the same year Eastman Kodak introduced the Cine-Kodak and Kodascope. Victor advertised through his entire career thereafter that he had marketed the first 16mm equipment, but his claim was incorrect by several weeks, since the Cine-Kodak had been introduced in July, substantially earlier than Victor's August marketing date. Victor's first 16mm camera was a hand-cranked rectangular aluminum box designed for the additional film economy of cranking only 14 frames per second instead of the standard sixteen. A later version of this first Victor was driven by an electric motor. Neither camera sold in large numbers, but Victor followed in 1927 with a more successful camera modeled on the Bell & Howell Filmo. Victor offered many models of 16mm projectors, most with only minor variations, but prior to military contracts won during World War II, all were made and sold in very small numbers, from 20 units to usually no more than a couple of thousand units.
The company was a large producer of lantern slides using their "Featherweight" method: a one-piece glass positive with a durable emulsion framed by a cardboard mat.
See also
28 mm film |
https://en.wikipedia.org/wiki/Center%20of%20mass | In physics, the center of mass of a distribution of mass in space (sometimes referred to as the barycenter or balance point) is the unique point at any given time where the weighted relative position of the distributed mass sums to zero. This is the point to which a force may be applied to cause a linear acceleration without an angular acceleration. Calculations in mechanics are often simplified when formulated with respect to the center of mass. It is a hypothetical point where the entire mass of an object may be assumed to be concentrated to visualise its motion. In other words, the center of mass is the particle equivalent of a given object for application of Newton's laws of motion.
In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system.
The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass (see Barycenter (astronomy) for details). The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system.
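The defining formula, R = (Σ m_i r_i) / (Σ m_i) for point masses m_i at positions r_i, can be computed directly; the function name is an illustrative choice.

```python
def center_of_mass(masses, positions):
    """Center of mass R = (sum of m_i * r_i) / (sum of m_i) for point masses.

    positions is a list of coordinate tuples; works in any dimension.
    At R, the weighted relative positions m_i * (r_i - R) sum to zero.
    """
    total = sum(masses)
    return tuple(
        sum(m * r[d] for m, r in zip(masses, positions)) / total
        for d in range(len(positions[0]))
    )

# Two equal masses balance at the midpoint:
print(center_of_mass([1.0, 1.0], [(0.0, 0.0), (2.0, 0.0)]))  # (1.0, 0.0)
```

Note that, as the article observes for a horseshoe, nothing requires the returned point to lie inside any of the bodies.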
History
The concept of center of gravity or weight was studied extensively by the ancient Greek mathematician, physicist, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what |
https://en.wikipedia.org/wiki/Aerobic%20fermentation | Aerobic fermentation or aerobic glycolysis is a metabolic process by which cells metabolize sugars via fermentation in the presence of oxygen and occurs through the repression of normal respiratory metabolism. Preference of aerobic fermentation over aerobic respiration is referred to as the Crabtree effect in yeast, and is part of the Warburg effect in tumor cells. While aerobic fermentation does not produce adenosine triphosphate (ATP) in high yield, it allows proliferating cells to convert nutrients such as glucose and glutamine more efficiently into biomass by avoiding unnecessary catabolic oxidation of such nutrients into carbon dioxide, preserving carbon-carbon bonds and promoting anabolism.
Aerobic fermentation in yeast
Aerobic fermentation evolved independently in at least three yeast lineages (Saccharomyces, Dekkera, Schizosaccharomyces). It has also been observed in plant pollen, trypanosomatids, mutated E. coli, and tumor cells. Crabtree-positive yeasts will respire when grown with very low concentrations of glucose or when grown on most other carbohydrate sources. The Crabtree effect is a regulatory system whereby respiration is repressed by fermentation, except in low sugar conditions. When Saccharomyces cerevisiae is grown below the sugar threshold and undergoes a respiration metabolism, the fermentation pathway is still fully expressed, while the respiration pathway is only expressed relative to the sugar availability. This contrasts with the Pasteur effect, which is the inhibition of fermentation in the presence of oxygen and observed in most organisms.
The evolution of aerobic fermentation likely involved multiple successive molecular steps, which included the expansion of hexose transporter genes, copy number variation (CNV) and differential expression in metabolic genes, and regulatory reprogramming. Research is still needed to fully understand the genomic basis of this complex phenomenon. Many Crabtree-positive yeast species are used fo |
https://en.wikipedia.org/wiki/Supersingular%20isogeny%20graph | In mathematics, the supersingular isogeny graphs are a class of expander graphs that arise in computational number theory and have been applied in elliptic-curve cryptography. Their vertices represent supersingular elliptic curves over finite fields and their edges represent isogenies between curves.
Definition and properties
A supersingular isogeny graph is determined by choosing a large prime number p and a small prime number ℓ, and considering the class of all supersingular elliptic curves defined over the finite field F_{p^2}. There are approximately p/12 such curves, any two of which can be related by isogenies. The vertices in the supersingular isogeny graph represent these curves (or more concretely, their j-invariants, elements of F_{p^2}) and the edges represent isogenies of degree ℓ between two curves.
The supersingular isogeny graphs are (ℓ + 1)-regular graphs, meaning that each vertex has exactly ℓ + 1 neighbors. They were proven by Pizer to be Ramanujan graphs, graphs with optimal expansion properties for their degree. The proof is based on Pierre Deligne's proof of the Ramanujan–Petersson conjecture.
Cryptographic applications
One proposal for a cryptographic hash function involves starting from a fixed vertex of a supersingular isogeny graph, using the bits of the binary representation of an input value to determine a sequence of edges to follow in a walk in the graph, and using the identity of the vertex reached at the end of the walk as the hash value for the input. The security of the proposed hashing scheme rests on the assumption that it is difficult to find paths in this graph that connect arbitrary pairs of vertices.
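The walk-based hashing idea can be sketched abstractly. The real proposal walks a supersingular isogeny graph (and avoids immediately backtracking); this toy sketch replaces that graph with any small regular graph given as an adjacency table, so all names and the example graph are hypothetical and carry no cryptographic strength.

```python
def walk_hash(graph, start, message_bits):
    """Toy sketch of a hash built from a walk on a regular graph.

    graph maps each vertex to an ordered list of its neighbours (constant
    degree d); each input symbol selects one of the d outgoing edges, and
    the vertex reached at the end of the walk is the hash value.
    """
    v = start
    for b in message_bits:
        v = graph[v][b % len(graph[v])]
    return v

# A toy 2-regular graph (a 5-cycle walked forward or backward):
g = {i: [(i + 1) % 5, (i - 1) % 5] for i in range(5)}
print(walk_hash(g, 0, [1, 0, 0]))  # walk 0 -> 4 -> 0 -> 1, prints 1
```

Collision resistance corresponds exactly to the stated assumption: finding two messages with the same hash means finding two distinct paths between the same pair of vertices.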
It has also been proposed to use walks in two supersingular isogeny graphs with the same vertex set but different edge sets (defined using different choices of the parameter) to develop a key exchange primitive analogous to Diffie–Hellman key exchange, called supersingular isogeny key exchange, suggested as a form of post-quantum cryptography. However, a lead |
https://en.wikipedia.org/wiki/Phonometer | A phonometer is an instrument invented by Thomas Edison for testing the force of the human voice in speaking. It consists chiefly of a mouthpiece and diaphragm. Behind the diaphragm is placed a delicate mechanism which operates a 15-inch flywheel by means of which a hole can be bored in an ordinary pine board. |
https://en.wikipedia.org/wiki/Cross-multiplication | In mathematics, specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable.
The method is also occasionally known as the "cross your heart" method because lines resembling a heart outline can be drawn to remember which things to multiply together.
Given an equation like a/b = c/d, where b and d are not zero, one can cross-multiply to get ad = bc.
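Cross-multiplication is most often used to solve for an unknown: from x/b = c/d it follows that xd = bc, so x = bc/d. A minimal numerical sketch (the function name is an illustrative choice):

```python
from fractions import Fraction

def solve_proportion(b, c, d):
    """Solve x/b = c/d for x by cross-multiplying: x*d = b*c, so x = b*c/d."""
    if b == 0 or d == 0:
        raise ZeroDivisionError("denominators must be nonzero")
    return Fraction(b * c, d)

print(solve_proportion(4, 6, 8))  # x/4 = 6/8  ->  x = 3
```

Using `Fraction` keeps the result exact even when d does not divide b*c evenly.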
In Euclidean geometry the same calculation can be achieved by considering the ratios as those of similar triangles.
Procedure
In practice, the method of cross-multiplying means that we multiply the numerator of each (or one) side by the denominator of the other side, effectively crossing the terms over: a/b = c/d becomes ad = bc.
The mathematical justification for the method is from the following longer mathematical procedure. If we start with the basic equation a/b = c/d,
we can multiply the terms on each side by the same number, and the terms will remain equal. Therefore, if we multiply the fraction on each side by the product of the denominators of both sides, bd, we get (a/b)bd = (c/d)bd.
We can reduce the fractions to lowest terms by noting that the two occurrences of b on the left-hand side cancel, as do the two occurrences of d on the right-hand side, leaving ad = cb,
and we can divide both sides of the equation by any of the elements; in this case, dividing by d gives a = cb/d.
Another justification of cross-multiplication is as follows. Starting with the given equation a/b = c/d,
multiply by d/d = 1 on the left and by b/b = 1 on the right, getting ad/bd and cb/db,
and so ad/bd = cb/db.
Cancel the common denominator bd = db, leaving ad = cb.
Each step in these procedures is based on a single, fundamental property of equations. Cross-multiplication is a shortcut, an easily understandable procedure that can be taught to students.
Use
This is a common procedure in mathematics, used to reduce fractions or calculate a value for a given variable in a fraction. If we have an equation
|
https://en.wikipedia.org/wiki/Hill%20limit%20%28solid-state%29 | In solid-state physics, the Hill limit is a critical distance defined in a lattice of actinide or rare-earth atoms. These atoms have partially filled 5f or 4f levels in their valence shell, which are therefore responsible for the main interaction between each atom and its environment. In this context, the Hill limit is defined as twice the radius of the f-orbital. Therefore, if two atoms of the lattice are separated by a distance greater than the Hill limit, the overlap of their f-orbitals becomes negligible. A direct consequence is the absence of hopping for the f electrons, i.e., their localization on the ion sites of the lattice.
Localized f electrons lead to paramagnetic materials since the remaining unpaired spins are stuck in their orbitals. However, when the rare-earth lattice (or a single atom) is embedded in a metallic one (intermetallic compound), interactions with the conduction band allow the f electrons to move through the lattice even for interatomic distances above the Hill limit.
See also
Anderson impurity model |
https://en.wikipedia.org/wiki/Antisymmetry | In linguistics, antisymmetry is a syntactic theory presented in Richard S. Kayne's 1994 monograph The Antisymmetry of Syntax. It asserts that grammatical hierarchies in natural language follow a universal order, namely specifier-head-complement branching order. The theory builds on the foundation of the X-bar theory. Kayne hypothesizes that all phrases whose surface order is not specifier-head-complement have undergone syntactic movements that disrupt this underlying order. Others have posited specifier-complement-head as the basic word order.
Antisymmetry as a principle of word order is reliant on X-bar notions such as specifier and complement, and the existence of order-altering mechanisms such as movement. It is disputed by constituency structure theories (as opposed to dependency structure theories).
Asymmetric c-command
C-command is a relation between tree nodes, as defined by Tanya Reinhart. Kayne uses a simple definition of c-command based on the "first node up". However, the definition is complicated by his use of a "segment/category" distinction. Two directly connected nodes that have the same label are "segments" of a single "category". A category "excludes" all categories not "dominated" by all its segments. A "c-commands" B if every category that dominates A also dominates B, and A excludes B. The following tree illustrates these concepts:
AP1 and AP2 are both segments of a single category. AP does not c-command BP because it does not exclude BP. CP does not c-command BP because both segments of AP do not dominate BP (so it is not the case that every category that dominates CP dominates BP). BP c-commands CP and A. A c-commands C. The definitions above may perhaps be thought to allow BP to c-command AP, but a c-command relation is not usually assumed to hold between two such categories, and for the purposes of antisymmetry, the question of whether BP c-commands AP is in fact moot.
(The above is not an exhaustive list of c-command relations in the tre |
https://en.wikipedia.org/wiki/Foster%27s%20rule | Foster's rule, also known as the island rule or the island effect, is an ecogeographical rule in evolutionary biology stating that members of a species get smaller or bigger depending on the resources available in the environment. For example, it is known that pygmy mammoths evolved from normal mammoths on small islands. Similar evolutionary paths have been observed in elephants, hippopotamuses, boas, sloths, deer (such as Key deer) and humans. It is part of the more general phenomenon of island syndrome which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts.
The rule was first formulated by van Valen in 1973 based on the study by mammalogist J. Bristol Foster in 1964. In it, Foster compared 116 island species to their mainland varieties. Foster proposed that certain island creatures evolved larger body size (insular gigantism) while others became smaller (insular dwarfism). Foster proposed the simple explanation that smaller creatures get larger when predation pressure is relaxed because of the absence of some of the predators of the mainland, and larger creatures become smaller when food resources are limited because of land area constraints.
The idea was expanded upon in The Theory of Island Biogeography, by Robert MacArthur and Edward O. Wilson. In 1978, Ted J. Case published a longer paper on the topic in the journal Ecology.
Recent literature has also applied the island rule to plants.
There are some cases that do not neatly fit the rule; for example, artiodactyls have on several islands evolved into both dwarf and giant forms.
The island rule is a contested topic in evolutionary biology. Some argue that, since body size is a trait affected by multiple factors, not just by organisms moving to an island, genetic variation across populations could also cause the body mass differences between mainland and island populations. |
https://en.wikipedia.org/wiki/Sphacelariaceae | Sphacelariaceae is a family of algae belonging to the order Sphacelariales.
Genera:
Battersia Reinke ex Batters, 1890
Chaetopteris Kützing, 1843
Herpodiscus G.R.South, 1974
Onslowia
Sphacelaria Lyngbye, 1818
Sphacella Reinke, 1890
Sphacelorbus Draisma, Prud'homme & H.Kawai, 2010 |
https://en.wikipedia.org/wiki/History%20of%20electrophoresis | The history of electrophoresis for molecular separation and chemical analysis began with the work of Arne Tiselius in 1931, while new separation processes and chemical analysis techniques based on electrophoresis continue to be developed in the 21st century. Tiselius, with support from the Rockefeller Foundation, developed the "Tiselius apparatus" for moving boundary electrophoresis, which was described in 1937 in the well-known paper "A New Apparatus for Electrophoretic Analysis of Colloidal Mixtures". The method spread slowly until the advent of effective zone electrophoresis methods in the 1940s and 1950s, which used filter paper or gels as supporting media. By the 1960s, increasingly sophisticated gel electrophoresis methods made it possible to separate biological molecules based on minute physical and chemical differences, helping to drive the rise of molecular biology. Gel electrophoresis and related techniques became the basis for a wide range of biochemical methods, such as protein fingerprinting, Southern blot, other blotting procedures, DNA sequencing, and many more.
Electrophoresis before Tiselius
Early work with the basic principle of electrophoresis dates to the early 19th century, building on Faraday's laws of electrolysis, formulated in the 1830s, and other early electrochemistry. Experiments by Johann Wilhelm Hittorf, Walther Nernst, and Friedrich Kohlrausch to measure the properties and behavior of small ions moving through aqueous solutions under the influence of an electric field led to general mathematical descriptions of the electrochemistry of aqueous solutions. Kohlrausch created equations for varying concentrations of charged particles moving through solution, including sharp moving boundaries of migrating particles. By the beginning of the 20th century, electrochemists had found that such moving boundaries of charged particles could be created with U-shaped glass tubes.
Methods of optical detection of moving boundaries in liqu |
https://en.wikipedia.org/wiki/Source%20code%20editors%20for%20Erlang | Erlang is an open source programming language. Multiple development environments (including IDEs and source code editors with plug-ins adding IDE features) have support for Erlang.
Integrated Development Environments (IDEs)
Syntax, parsing, code-assist
Goto, searching
Code generation
Build, debug, run |
https://en.wikipedia.org/wiki/Ensemble%20%28fluid%20mechanics%29 | In continuum mechanics, an ensemble is an imaginary collection of notionally identical experiments.
Each member of the ensemble will have nominally identical boundary conditions and fluid properties. If the flow is turbulent, the details of the fluid motion will differ from member to member because the experimental setup will be microscopically different; and these slight differences become magnified as time progresses. Members of an ensemble are, by definition, statistically independent of one another. The concept of ensemble is useful in thought experiments and to improve theoretical understanding of turbulence.
A good image to have in mind is a typical fluid mechanics experiment such as a mixing box. Imagine a million mixing boxes, distributed over the earth; at a predetermined time, a million fluid mechanics engineers each start one experiment, and monitor the flow. Each engineer then sends his or her results to a central database. Such a process would give results that are close to the theoretical ideal of an ensemble.
It is common to speak of ensemble average or ensemble averaging when considering a fluid mechanical ensemble.
For a completely unrelated type of averaging, see Reynolds-averaged Navier–Stokes equations (the two types of averaging are often confused).
The idea of the ensemble is discussed further in the article Statistical ensemble (mathematical physics).
Continuum mechanics |
https://en.wikipedia.org/wiki/Cursed%20image | A cursed image refers to a picture (usually a photograph) that is perceived as mysterious or disturbing due to its content, poor quality, or a combination of the two. A cursed image is intended to make a person question the reason for the image's existence in the first place. The term was coined on social media in 2015 and popularised the following year.
History
The concept of "cursed images" originates from a Tumblr blog, cursedimages, in 2015. The first image posted by the account shows an elderly farmer surrounded by crates of red tomatoes in a wood-paneled room. In a 2019 interview with Paper, the blog's owner described the aforementioned image as follows: "It's the perfect cursed image to me because there's nothing inherently unsettling about any part of it. It's a totally mundane moment transformed into something else by the camera and the new context I've given it."
While the term "cursed image" had been used on Tumblr since 2015, it became more widely popularized by July 2016 due to the Twitter account @cursedimages. In a 2016 interview with Gizmodo writer Hudson Hongo, the owner of the account explained that he had seen "one or two" posts on Tumblr containing "unexplainable and odd" pictures that were simply captioned "cursed image". Intrigued by the pictures, the owner of the account began searching for similar images and after finding more photographs in that vein, decided to "post them all in one place". That same year, Brian Feldman of New York magazine interviewed Doug Battenhausen, the owner of the Tumblr blog internethistory, which also posts "cursed images". Feldman asked Battenhausen about the appeal of "cursed images", to which Battenhausen replied: "It's a lot of things. It's the mystery of the photo, it's the strange aesthetics of them, it's seeing a place that you've never seen before, or an intimate glimpse into somebody's life."
Feldman attributes the "cursed" aesthetic to the nature of digital photography in the early 2000s, where point-a |
https://en.wikipedia.org/wiki/Traffic%20model | A traffic model is a mathematical model of real-world traffic, usually, but not restricted to, road traffic. Traffic modeling draws heavily on theoretical foundations like network theory and certain theories from physics like the kinematic wave model. The interesting quantity being modeled and measured is the traffic flow, i.e. the throughput of mobile units (e.g. vehicles) per time and transportation medium capacity (e.g. road or lane width). Models can teach researchers and engineers how to ensure an optimal flow with a minimum number of traffic jams.
Traffic models often are the basis of a traffic simulation.
Types
Microscopic traffic flow model Traffic flow is assumed to depend on individual mobile units, i.e. cars, which are explicitly modeled
Macroscopic traffic flow model Only the mass action or the statistical properties of a large number of units is analyzed
Examples
Biham–Middleton–Levine traffic model
Traffic generation model
History of network traffic models
Traffic mix
Intelligent driver model
Network traffic
Three-phase traffic theory
Two-fluid model
See also
Braess's paradox
Gridlock
Mobility model
Network traffic
Network traffic simulation
Traffic bottleneck
Traffic flow
Traffic wave
Queueing theory
Traffic equations |
https://en.wikipedia.org/wiki/AN/UYK-7 | The AN/UYK-7 was the standard 32-bit computer of the United States Navy for surface ship and submarine platforms, starting in 1970. It was used in the Navy's NTDS and Aegis combat systems, by the U.S. Coast Guard, and by the navies of U.S. allies. It was also used by the U.S. Army.
Technical
Built by UNIVAC, it used integrated circuits, had 18-bit addressing and could support multiple CPUs and I/O controllers. Three CPUs and two I/O controllers were a common configuration. Its multiprocessor architecture was based upon the UNIVAC 1108. An airborne version, the UNIVAC 1832, was also produced.
Replacement
In the mid-1980s, the UYK-7 was replaced by the AN/UYK-43 which shared the same instruction set. Retired systems are being cannibalized for repair parts to support systems still in use by U.S. and non-U.S. forces.
See also
AN/USQ-20 30-bit computer that the AN/UYK-7 replaced
AN/UYK-20 16-bit computer developed for navy projects that did not need the full power of the AN/UYK-7
CMS-2 (programming language) |
https://en.wikipedia.org/wiki/NEK9 | Serine/threonine-protein kinase Nek9 is an enzyme that in humans is encoded by the NEK9 gene.
Interactions
NEK9 has been shown to interact with:
NEK6,
RAN, and
SSRP1.
Model organisms
Model organisms have been used in the study of NEK9 function. A conditional knockout mouse line called Nek9tm1a(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed: in-depth immunological phenotyping |
https://en.wikipedia.org/wiki/Ocean%20turbidity | Ocean turbidity is a measure of the amount of cloudiness or haziness in sea water caused by individual particles that are too small to be seen without magnification. Highly turbid ocean waters are those with many scattering particulates in them. In both highly absorbing and highly scattering waters, visibility into the water is reduced. Highly scattering (turbid) water still reflects much light, while highly absorbing water, such as a blackwater river or lake, is very dark. The scattering particles that cause the water to be turbid can be composed of many things, including sediments and phytoplankton.
Measurement
There are a number of ways to measure ocean turbidity, including autonomous remote vehicles, shipcasts and satellites.
From a satellite, a proxy measurement of the water turbidity can be made by examining the amount of reflectance in the visible region of the electromagnetic spectrum. For the Advanced Very High Resolution Radiometer (AVHRR), the logical choice is band 1, covering wavelengths 580 to 680 nanometres, the orange and red. In order to make derived products that are comparable over time and space, an atmospheric correction is required. To do this, the effects of Rayleigh scattering are calculated based on the satellite viewing angle and the solar zenith angle and then subtracted from the band 1 radiance. For an aerosol correction, band 2 in the near infrared is used. It is first corrected for Rayleigh scattering and then subtracted from the Rayleigh corrected band 1. The Rayleigh corrected band 2 is assumed to be aerosol radiance because no return signal from water in the near infrared is expected since water is highly absorbing at those wavelengths. Because bands 1 and 2 are relatively close on the electromagnetic spectrum, we can reasonably assume their aerosol radiances are the same.
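The two-step correction described above can be sketched with scalar stand-ins. Real processing works per pixel with Rayleigh terms that depend on the satellite viewing angle and solar zenith angle; the function name and all radiance values here are hypothetical.

```python
def turbidity_proxy(band1, band2, rayleigh1, rayleigh2):
    """Sketch of the AVHRR band-1 correction described above.

    1. Subtract the modelled Rayleigh scattering from each band.
    2. Treat the Rayleigh-corrected band 2 (near infrared) as pure aerosol
       radiance, since water absorbs strongly at those wavelengths, and
       subtract it from band 1 as the aerosol correction.
    """
    b1 = band1 - rayleigh1        # Rayleigh-corrected band 1 (orange/red)
    aerosol = band2 - rayleigh2   # Rayleigh-corrected band 2 = aerosol radiance
    return b1 - aerosol           # corrected water-leaving signal

print(turbidity_proxy(10.0, 3.0, 2.0, 1.5))  # (10-2) - (3-1.5) = 6.5
```

Subtracting the band-2 aerosol estimate from band 1 is justified by the assumption, stated above, that the aerosol radiances in the two nearby bands are the same.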
In these images the turbidity is quantified as the percent reflected light emerging from the water column in a range of 0 to 8 percent. The reflectance percent |
https://en.wikipedia.org/wiki/Tumor%20necrosis%20factor%20receptor%202 | Tumor necrosis factor receptor 2 (TNFR2), also known as tumor necrosis factor receptor superfamily member 1B (TNFRSF1B) and CD120b, is one of two membrane receptors that binds tumor necrosis factor-alpha (TNFα). Like its counterpart, tumor necrosis factor receptor 1 (TNFR1), the extracellular region of TNFR2 consists of four cysteine-rich domains which allow for binding to TNFα. TNFR1 and TNFR2 possess different functions when bound to TNFα due to differences in their intracellular structures, such as TNFR2 lacking a death domain (DD).
Function
The protein encoded by this gene is a member of the tumor necrosis factor receptor superfamily, which also contains TNFRSF1A. This protein and TNF-receptor 1 form a heterocomplex that mediates the recruitment of two anti-apoptotic proteins, c-IAP1 and c-IAP2, which possess E3 ubiquitin ligase activity. The function of IAPs in TNF-receptor signalling is unknown, however, c-IAP1 is thought to potentiate TNF-induced apoptosis by the ubiquitination and degradation of TNF-receptor-associated factor 2 (TRAF2), which mediates anti-apoptotic signals. Knockout studies in mice also suggest a role of this protein in protecting neurons from apoptosis by stimulating antioxidative pathways.
Clinical significance
CNS
At least partly because TNFR2 has no intracellular death domain, TNFR2 is neuroprotective.
Patients with schizophrenia have increased levels of soluble tumor necrosis factor receptor 2 (sTNFR2).
Cancer
Targeting of TNFR2 in tumor cells is associated with increased tumor cell death and decreased progression of tumor cell growth.
Increased expression of TNFR2 is found in breast cancer, cervical cancer, colon cancer, and renal cancer. A link between the expression of TNFR2 in tumor cells and late-stage cancer has been discovered. TNFR2 plays a significant role in tumor cell growth, as it has been found that the loss of TNFR2 expression is linked with increased death of associated tumor cells and a significant standstill |
https://en.wikipedia.org/wiki/Valuation%20%28logic%29 | In logic and model theory, a valuation can be:
In propositional logic, an assignment of truth values to propositional variables, with a corresponding assignment of truth values to all propositional formulas with those variables.
In first-order logic and higher-order logics, a structure, (the interpretation) and the corresponding assignment of a truth value to each sentence in the language for that structure (the valuation proper). The interpretation must be a homomorphism, while valuation is simply a function.
Mathematical logic
In mathematical logic (especially model theory), a valuation is an assignment of truth values to formal sentences that follows a truth schema. Valuations are also called truth assignments.
In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas.
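The unique extension from variables to all formulas follows the connectives by structural recursion. A minimal sketch, representing formulas as nested tuples (the encoding and function name are illustrative choices):

```python
def evaluate(formula, valuation):
    """Extend a truth assignment on propositional variables to all formulas.

    Formulas are a variable name (str), ('not', f), ('and', f, g),
    or ('or', f, g); the extension is the unique one that respects
    the connectives.
    """
    if isinstance(formula, str):
        return valuation[formula]       # base case: look up the variable
    op = formula[0]
    if op == 'not':
        return not evaluate(formula[1], valuation)
    if op == 'and':
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == 'or':
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    raise ValueError(f"unknown connective: {op}")

v = {'p': True, 'q': False}
print(evaluate(('or', ('not', 'p'), 'q'), v))  # (not p) or q under v: False
```

Because every formula is built by finitely many applications of the connectives, the recursion terminates and the extension is well defined.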
In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set (domain of discourse) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables) in the language.
Notation
If v is a valuation, that is, a mapping from the atoms to the set {T, F}, then the double-bracket notation is commonly used to denote a valuation; that is, [[φ]] = v(φ) for a proposition φ.
See also
Algebraic semantics |
https://en.wikipedia.org/wiki/Vermitechnology | Vermitechnology is an overarching term for the following subtopics:
Vermifiltration: A process for purifying effluent that utilises earthworms (also called vermidigestion)
Vermicomposting: Utilising earthworms for composting organic material
Vermiculture: the commercial rearing of earthworms to be used for other processes e.g. fishing
Vermitechnology includes the study and commercial application of technologies that utilise earthworms for degrading waste organic materials for sanitation and agricultural re-use. Organic wastes degraded and stabilised by earthworms include those suspended or dissolved in water and also solid organic material. |
https://en.wikipedia.org/wiki/Eifion%20Jones |
William Eifion Jones (1925 – March 2004) was a Welsh marine botanist, noted for his study of marine algae.
He was born and brought up in Aberystwyth and studied botany at the University of Wales under Professor Lilly Newton. He moved to Bangor in 1953 to join the newly founded Marine Biology Station as a lecturer with Denis Crisp, and completed his PhD in 1957. He had a wife, Marian, and two children, Rhiannon and Aled.
He retired early in 1986, but went on to lecture in Kuwait, returning to do part-time lecturing at the University of Wales, Bangor. He died in a car accident at Gaerwen, aged 79.
He wrote A key to the Genera of the British Seaweeds (1962). It served as a valuable update: Newton's Handbook of 1931 had become out of date, and a current key was needed to identify the genera of algae found on the shores of the British Isles.
He joined the British Phycological Society in 1955 and served as a Member of Council (1959 and 1974–1977), as Assistant Secretary (1959) and Hon. Treasurer (1964–1968). At the Eighth International Seaweed Symposium, held in Bangor in 1974, he was a member of the organizing committee and secretary. He was President of the North Wales Wildlife Trust and Bardsey Bird and Field Observatory.
Publications
Jones, W.E. 1956. Effect of spore coalescence on the early development of Gracilaria verrucosa (Hudson) Papenfuss. Nature, Lond. 178: 426–427.
Jones, W.E. 1958. Experiments on some effects of certain environmental factors on Gracilaria verrucosa (Huds.) Papenf. Journal of the Marine Biological Association of the United Kingdom 38: 153–167.
Jones, W.E. 1959. The growth and fruiting of Gracilaria verrucosa (Hudson) Papenfuss. Journal of the Marine Biological Association of the United Kingdom 38: 47–56.
Jones, W.E. 1962a. "A key to the genera of the British Seaweeds." Field Studies 1(4): 1–32.
Jones, W.E. 1962b. The identity of Gracilaria erecta (Grev.) Grev. British Phycological Bulletin 2: 140–144.
Jone |
https://en.wikipedia.org/wiki/Mitochondrial%20apoptosis-induced%20channel | The mitochondrial apoptosis-induced channel (or MAC), is an early marker of the onset of apoptosis. This ion channel is formed on the outer mitochondrial membrane in response to certain apoptotic stimuli. MAC activity is detected by patch clamping mitochondria from apoptotic cells at the time of cytochrome c release.
Members of the Bcl-2 protein family regulate apoptosis by controlling the formation of MAC: the pro-apoptotic members Bax and/or Bak form MAC, whereas the anti-apoptotic members like Bcl-2 or Bcl-xL prevent MAC formation. Once formed, MAC mediates the release of cytochrome c to the cytosol, triggering the commitment step of the mitochondrial apoptotic cascade. Depletion of MAC activity is accomplished pharmacologically by specific compounds, namely Bax channel inhibitors and MAC inhibitors. Either by knocking down MAC's main components or by its pharmacological inhibition, the end result is prevention of cytochrome c release and apoptosis. |
https://en.wikipedia.org/wiki/Game%20engine | A game engine is a software framework primarily designed for the development of video games and generally includes relevant libraries and support programs such as a level editor. The "engine" terminology is similar to the term "software engine" used in the software industry.
The term game engine can also refer to the development software utilizing this framework, typically offering a suite of tools and features for developing games.
Developers can use game engines to construct games for video game consoles and other types of computers. The core functionality typically provided by a game engine may include a rendering engine ("renderer") for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, threading, localization support, scene graph, and video support for cinematics. Game engine implementers often economize on the process of game development by reusing/adapting, in large part, the same game engine to produce different games or to aid in porting games to multiple platforms.
Purpose
In many cases, game engines provide a suite of visual development tools in addition to reusable software components. These tools are generally provided in an integrated development environment to enable simplified, rapid development of games in a data-driven manner. Game-engine developers often attempt to preempt implementer needs by developing robust software suites which include many elements a game developer may need to build a game. Most game-engine suites provide facilities that ease development, such as graphics, sound, physics and artificial-intelligence (AI) functions. These game engines are sometimes called "middleware" because, as with the business sense of the term, they provide a flexible and reusable software platform which provides all the core functionality needed, right out of the box, to develop a game application while reducing costs, comple |
https://en.wikipedia.org/wiki/Conformational%20proofreading | Conformational proofreading or conformational selection is a general mechanism of molecular recognition systems, suggested by Yonatan Savir and Tsvi Tlusty, in which introducing an energetic barrier, such as a structural mismatch between a molecular recognizer and its target, enhances the recognition specificity and quality. Conformational proofreading does not require the consumption of energy and may therefore be used in any molecular recognition system. Conformational proofreading is especially useful in scenarios where the recognizer has to select the appropriate target among many similar competitors. Proteins evolve the capacity for conformational proofreading through fine-tuning their geometry, flexibility and chemical interactions with the target.
Balancing correct and incorrect binding
Molecular recognition takes place in a noisy, crowded biological environment and the recognizer often has to cope with the task of selecting its target among a variety of similar competitors. For example, the ribosome has to select the correct tRNA that matches the mRNA codon among many structurally similar tRNAs. If the recognizer and its correct target match perfectly like a lock and a key, then the binding probability will be high since no deformation is required upon binding. At the same time, the recognizer might also bind to a competitor with a similar structure with high probability. Introducing an energy barrier, in particular a structural mismatch between the recognizer (the lock) and its target (the key), reduces the binding probability to the correct target but reduces the binding probability to a similar wrong target even more, and thus improves the specificity. Yet, introducing too much deformation drastically reduces binding probability to the correct target. Therefore, the optimal balance between maximizing the correct binding probability and minimizing the incorrect binding probability is achieved when the recognizer is slightly off target. This suggests that conformational
https://en.wikipedia.org/wiki/Insect%20paleobiota%20of%20Burmese%20amber | Burmese amber is fossil resin dating to the early Late Cretaceous Cenomanian age recovered from deposits in the Hukawng Valley of northern Myanmar. It is known for being one of the most diverse Cretaceous age amber paleobiotas, containing rich arthropod fossils, along with uncommon vertebrate fossils and even rare marine inclusions. A mostly complete list of all taxa described up until 2018 can be found in Ross 2018; its supplement Ross 2019b covers most of 2019.
Clade Amphiesmenoptera
Lepidoptera
Lepidopteran research
A description of new caterpillar specimens, expanding the known morphological diversity of Cretaceous caterpillars, was published by Gauweiler et al. (2022), who also attempted to determine whether Cretaceous caterpillars might have represented an adequate food source for early birds.
Tarachoptera
Trichoptera
Clade Antliophora
Diptera
Mecoptera
Clade Archaeorthoptera
Orthoptera
Other archaeorthopterans
Clade Coleopterida
Coleoptera
Strepsiptera
Clade Dictyoptera
Blattodea
Mantodea
Clade Hymenopteroida
Hymenoptera
Clade Neuropterida
Megaloptera
Neuroptera
Raphidioptera
Other neuropterida
Clade Paraneoptera
Hemiptera
Permopsocida
Psocodea
Thysanoptera
Zoraptera
Clade Perlidea
Dermaptera
Embioptera
Grylloblattodea
Phasmatodea
Plecoptera
Clade Palaeoptera
Ephemeroptera
Odonatoptera
Archaeognatha
Zygentoma |
https://en.wikipedia.org/wiki/Cryptographic%20module | A cryptographic module is a component of a computer system that implements cryptographic algorithms in a secure way, typically with some element of tamper resistance.
NIST defines a cryptographic module as "The set of hardware, software, and/or firmware that implements security functions (including cryptographic algorithms), holds plaintext keys and uses them for performing cryptographic operations, and is contained within a cryptographic module boundary."
Hardware security modules, including secure cryptoprocessors, are one way of implementing cryptographic modules.
Standards for cryptographic modules include FIPS 140-3 and ISO/IEC 19790. |
https://en.wikipedia.org/wiki/Fisher%20information | In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.
The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Sir Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
In Bayesian statistics, the Fisher information plays a role in the derivation of non-informative prior distributions according to Jeffreys' rule. It also appears as the large-sample covariance of the posterior distribution, provided that the prior is sufficiently smooth (a result known as Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The same result is used when approximating the posterior with Laplace's approximation, where the Fisher information appears as the covariance of the fitted Gaussian.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
Definition
The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends. Let f(X; θ) be the probability density function (or probability mass function) for X conditioned on the value of θ. It describes the probability that we observe a given outcome of X, given a known value of θ. If f is sharply peaked with respect to changes in θ, it is easy
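The characterization of Fisher information as the variance of the score can be checked numerically for a simple model. A minimal sketch, assuming a Bernoulli(p) observation — for which the score is d/dp log f(x; p) = x/p − (1 − x)/(1 − p) and the closed form I(p) = 1/(p(1 − p)) is standard; the function name is illustrative:

```python
import random

def bernoulli_fisher_by_simulation(p, n=200_000, seed=1):
    """Estimate Fisher information as the empirical variance of the score."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        x = 1 if rng.random() < p else 0
        # Score of a single Bernoulli(p) observation x:
        scores.append(x / p - (1 - x) / (1 - p))
    mean = sum(scores) / n
    return sum((s - mean) ** 2 for s in scores) / n

p = 0.3
approx = bernoulli_fisher_by_simulation(p)
exact = 1 / (p * (1 - p))      # closed form I(p) = 1/(p(1-p))
print(round(exact, 3))          # 4.762
```

With 200,000 samples the simulated variance of the score lands within a fraction of a percent of the closed form, illustrating that the two definitions in the opening paragraph agree.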
https://en.wikipedia.org/wiki/Song%20Jian | Song Jian (; born 29 December 1931) is a Chinese aerospace engineer, demographer, and politician. He was deputy chief designer of China's submarine-launched ballistic missile (JL-1) and one of the country's leading scientists in the post-Cultural Revolution era. After a decade of two-child restrictions in the 1970s, and following the Chinese government's announcement in 1979 to advocate for one child per family, he became a leading advocate for rapid implementation and broad coverage of China's one-child policy. He served in high-ranking political positions including Vice Minister of Aerospace Industry, Director of the State Science and Technology Commission (1985–1998), vice-premier-level State Councillor (1986–1998), President of the Chinese Academy of Engineering, Vice Chairperson of the Chinese People's Political Consultative Conference, and a member of the Central Committee of the Chinese Communist Party.
Early life and education
Song Jian was born on 29 December 1931 in Rongcheng, Shandong Province. In 1946, he enlisted in the Chinese Communist Party's Eighth Route Army during the Chinese Civil War at the age of 14.
After the establishment of the People's Republic of China in 1949, he studied at Harbin Institute of Technology and Beijing Foreign Language Institute, before being sent to the Soviet Union in 1953 on the recommendation of Liu Shaoqi, Vice Chairman of China. Described as a "brilliant" student, he studied cybernetics and military science under the theorist A. A. Feldbaum. He earned an associate Ph.D. degree from Moscow State University and a Ph.D. from Bauman Moscow State Technical University. He published seven papers in Russian on control theory, which won praise from Soviet and American scientists.
Career
After the Sino-Soviet split in 1960, Song returned to China and was put in charge of control systems at the Fifth Academy (later known as the Seventh Ministry of Machine Building or Missile Ministry) of the Ministry of National Defense. He w |
https://en.wikipedia.org/wiki/Heterologous%20vaccine | A homologous booster shot involves the administration of the same vaccine as previously administered, while a heterologous booster shot involves the administration of a different vaccine.
"Heterologous prime-boost immunization is administration of two different vectors or delivery systems expressing the same or overlapping antigenic inserts."
"An effective vaccine usually requires more than one time immunization in the form of prime-boost. Traditionally the same vaccines are given multiple times as homologous boosts. New findings suggested that prime-boost can be done with different types of vaccines containing the same antigens. In many cases such heterologous prime-boost can be more immunogenic than homologous prime-boost." |
https://en.wikipedia.org/wiki/Foot-and-mouth%20disease | Foot-and-mouth disease (FMD) or hoof-and-mouth disease (HMD) is an infectious and sometimes fatal viral disease that affects cloven-hoofed animals, including domestic and wild bovids. The virus causes a high fever lasting two to six days, followed by blisters inside the mouth and near the hoof that may rupture and cause lameness.
FMD has very severe implications for animal farming, since it is highly infectious and can be spread by infected animals comparatively easily through contact with contaminated farming equipment, vehicles, clothing, and feed, and by domestic and wild predators. Its containment demands considerable efforts in vaccination, strict monitoring, trade restrictions, quarantines, and the culling of both infected and healthy (uninfected) animals.
Susceptible animals include cattle, water buffalo, sheep, goats, pigs, antelope, deer, and bison. It has also been known to infect hedgehogs and elephants; llamas and alpacas may develop mild symptoms, but are resistant to the disease and do not pass it on to others of the same species. In laboratory experiments, mice, rats, and chickens have been artificially infected, but they are not believed to contract the disease under natural conditions. Cattle, Asian and African buffalo, sheep, and goats can become carriers following an acute infection, meaning they are still infected with a small amount of virus but appear healthy. Animals can be carriers for up to 1–2 years and are considered very unlikely to infect other animals, although laboratory evidence suggests that transmission from carriers is possible.
Humans are only extremely rarely infected by foot-and-mouth disease virus (FMDV). (Humans, particularly young children, can be affected by hand, foot, and mouth disease (HFMD), which is often confused with FMD. HFMD is likewise caused by a virus of the family Picornaviridae, but that virus is distinct from FMDV and does not affect cattle, sheep, or swine.)
The virus responsible for FMD is an apht |
https://en.wikipedia.org/wiki/Bioorthogonal%20chemical%20reporter | In chemical biology, bioorthogonal chemical reporter is a non-native chemical functionality that is introduced into the naturally occurring biomolecules of a living system, generally through metabolic or protein engineering. These functional groups are subsequently utilized for tagging and visualizing biomolecules. Jennifer Prescher and Carolyn R. Bertozzi, the developers of bioorthogonal chemistry, defined bioorthogonal chemical reporters as "non-native, non-perturbing chemical handles that can be modified in living systems through highly selective reactions with exogenously delivered probes." It has been used to enrich proteins and to conduct proteomic analysis.
In the early development of the technique, chemical motifs have to fulfill criteria of biocompatibility and selective reactivity in order to qualify as bioorthogonal chemical reporters. Some combinations of proteinogenic amino acid side chains meet the criteria, as do ketone and aldehyde tags. Azides and alkynes are other examples of chemical reporters.
A bioorthogonal chemical reporter must be incorporated into a biomolecule. This occurs via metabolism. The chemical reporter is linked to a substrate, which a cell can metabolize. |
https://en.wikipedia.org/wiki/SKIL | Ski-like protein is a protein that in humans is encoded by the SKIL gene.
Interactions
SKIL interacts with SKI protein, Mothers against decapentaplegic homolog 3 and Mothers against decapentaplegic homolog 2.
Protein Family
SKIL belongs to the Ski/Sno/Dac family, shared by SKI protein, Dachshund, and SKIDA1. Members of the Ski/Sno/Dac family share a domain that is roughly 100 amino acids long. |
https://en.wikipedia.org/wiki/Salinomycin | Salinomycin is an antibacterial and coccidiostat ionophore therapeutic drug.
Antibacterial activity
Salinomycin and its derivatives exhibit high antimicrobial activity against Gram-positive bacteria, including the most problematic strains such as methicillin-resistant Staphylococcus aureus and methicillin-resistant Staphylococcus epidermidis, as well as Mycobacterium tuberculosis. Salinomycin is inactive against fungi such as Candida and against Gram-negative bacteria.
Cancer research
Pre-clinical
Salinomycin has been shown by Piyush Gupta et al. of the Massachusetts Institute of Technology and the Broad Institute to kill breast cancer stem cells in mice at least 100 times more effectively than the anti-cancer drug paclitaxel. The study screened 16,000 different chemical compounds and found that only a small subset, including salinomycin and etoposide, targeted cancer stem cells responsible for metastasis and relapse.
The mechanism of action by which salinomycin kills cancer stem cells involves lysosomal iron sequestration, leading to the production of reactive oxygen species, lysosome membrane permeabilization and ferroptosis. Studies performed in 2011 showed that salinomycin could induce apoptosis of human cancer cells at higher concentrations. C20 amino derivatives such as ironomycin have been shown to be more potent in in vitro models of persister cancer cells and in vivo. Promising results from a few clinical pilot studies reveal that salinomycin is able to effectively eliminate cancer stem cells and to induce partial clinical regression of heavily pretreated and therapy-resistant cancers. The ability of salinomycin to kill both cancer stem cells and therapy-resistant (persister) cancer cells may define the compound as a novel and effective anticancer drug. It has also been shown that salinomycin and its derivatives exhibit potent antiproliferative activity against drug-resistant cancer cell lines. Salinomycin is the key compound in the pharmaceutical company Ve
https://en.wikipedia.org/wiki/Challenge%20hypothesis | The challenge hypothesis outlines the dynamic relationship between testosterone and aggression in mating contexts. It proposes that testosterone promotes aggression when it would be beneficial for reproduction, such as mate guarding, or strategies designed to prevent the encroachment of intrasexual rivals. The positive correlation between reproductive aggression and testosterone levels is seen to be strongest during times of social instability. The challenge hypothesis predicts that seasonal patterns in testosterone levels are a function of mating system (monogamy versus polygyny), paternal care, and male-male aggression in seasonal breeders.
The pattern between testosterone and aggression was first observed in seasonally breeding birds, where testosterone levels rise modestly with the onset of the breeding season to support basic reproductive functions. However, during periods of heightened male aggression, testosterone levels increase further to a maximum physiological level. This additional boost in testosterone appears to facilitate male-male aggression, particularly during territory formation and mate guarding, and is also characterized by a lack of paternal care. The challenge hypothesis has come to explain patterns of testosterone production as predictive of aggression across more than 60 species.
Patterns of testosterone
The challenge hypothesis presents a three-level model at which testosterone may be present in circulation. The first level (Level A) represents the baseline level of testosterone during the non-breeding season. Level A is presumed to maintain feedback regulation of both GnRH and gonadotropin release, which are key factors in testosterone production. The next level (Level B) is a regulated, seasonal breeding baseline. This level is sufficient for the expression of reproductive behaviors in seasonal breeders and the development of some secondary sex characteristics. Level B is induced by environmental cues, such as length of day. The highest |
https://en.wikipedia.org/wiki/Content%20Disarm%20%26%20Reconstruction | Content Disarm & Reconstruction (CDR) is a computer security technology for removing potentially malicious code from files. Unlike malware analysis, CDR technology does not determine or detect malware's functionality but removes all file components that are not approved within the system's definitions and policies.
It is used to prevent cyber security threats from entering a corporate network perimeter. Channels that CDR can be used to protect include email and website traffic. Advanced solutions can also provide similar protection on computer endpoints, or cloud email and file sharing services.
There are three levels of CDR: 1) flattening and converting the original file to a PDF, 2) stripping active content while keeping the original file type, and 3) eliminating all file-borne risk while maintaining file type, integrity and active content. Beyond these three levels, there are also more advanced forms of CDR that are able to perform "soft conversion" and "hard conversion", depending on the user's preferred balance between usability and security.
Applications
CDR works by processing all incoming files of an enterprise network, deconstructing them, and removing the elements that do not match the file type's standards or set policies. CDR technology then rebuilds the files into clean versions that can be sent on to end users as intended.
Because CDR removes all potentially malicious code, it can be effective against zero-day vulnerabilities that rely on being an unknown threat that other security technologies would need to patch against to maintain protection.
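The deconstruct–filter–rebuild flow described above can be sketched in miniature. This is an illustrative toy only, not a real CDR engine (real implementations must parse full file-format specifications); the document model, element names, and policy set are invented for the example:

```python
# Assumed policy: only these element types are approved for reconstruction.
APPROVED_ELEMENTS = {"text", "image", "table"}

def disarm_and_reconstruct(document):
    """Deconstruct a document (list of (element_type, payload) tuples),
    drop every element not on the approved list, and rebuild a clean copy."""
    return [(etype, payload) for etype, payload in document
            if etype in APPROVED_ELEMENTS]   # e.g. 'macro' elements are removed

doc = [("text", "Quarterly report"),
       ("macro", "AutoOpen()"),              # potentially malicious active content
       ("image", b"...")]
print(disarm_and_reconstruct(doc))
# [('text', 'Quarterly report'), ('image', b'...')]
```

Note the filter is a whitelist, matching the description above: anything not explicitly approved is dropped, which is why the approach can remove unknown (zero-day) payloads without needing to detect them.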
CDR can be used to prevent cyber threats from a variety of sources:
Email
Data Diodes
Web Browsers
Endpoints
File Servers
FTP
Cloud email or webmail programs
SMB/CIFS
Removable media scanning (CDR Kiosk)
CDR can be applied to a variety of file formats including:
Images
Office documents
PDF
Audio/video file formats
Archives
HTML
Open Source Implementations
DocBleach
ExeFilter
See also
Adva |
https://en.wikipedia.org/wiki/Rosa%20%C3%97%20damascena | Rosa × damascena (Latin for damascene rose), more commonly known as the Damask rose, or sometimes as the Iranian Rose, Bulgarian rose, Turkish rose, Taif rose, Arab rose, Ispahan rose and Castile rose, is a rose hybrid, derived from Rosa gallica and Rosa moschata. DNA analysis has shown that a third species, Rosa fedtschenkoana, has made some genetic contributions to the Damask rose.
The flowers are renowned for their fine fragrance, and are commercially harvested for rose oil (either "rose otto" or "rose absolute") used in perfumery and to make rose water and "rose concrete". The flower petals are also edible. They may be used to flavor food, as a garnish, as an herbal tea, and preserved in sugar as gulkand.
It is the national flower of Iran.
Description
The Damask rose is a deciduous shrub growing to tall, the stems densely armed with stout, curved prickles and stiff bristles. The leaves are pinnate, with five (rarely seven) leaflets. The roses are a light to moderate pink to light red. The relatively small flowers grow in groups. The bush has an informal shape. It is considered an important type of Old Rose, and also important for its prominent place in the pedigree of many other types.
Varieties
The hybrid is divided into two varieties:
Summer Damasks (R. × damascena nothovar. damascena) have a short flowering season, only in the summer.
Autumn Damasks (R. × damascena nothovar. semperflorens (Duhamel) Rowley) have a longer flowering season, extending into the autumn; they are otherwise not distinguishable from the summer damasks.
The hybrid Rosa × centifolia is derived in part from Rosa × damascena, as are Bourbon, Portland and hybrid perpetual roses.
The cultivar known as Rosa gallica forma trigintipetala or Rosa damascena 'Trigintipetala' is considered to be a synonym of Rosa × damascena.
'Celsiana' is a flowering semi-double variety.
History
Rosa × damascena is a cultivated flower that is not found growing wild. Recent genetic tests indicate that |
https://en.wikipedia.org/wiki/Theoretical%20foundations%20of%20evolutionary%20psychology | The theoretical foundations of evolutionary psychology are the general and specific scientific theories that explain the ultimate origins of psychological traits in terms of evolution. These theories originated with Charles Darwin's work, including his speculations about the evolutionary origins of social instincts in humans. Modern evolutionary psychology, however, is possible only because of advances in evolutionary theory in the 20th century.
Evolutionary psychologists say that natural selection has provided humans with many psychological adaptations, in much the same way that it generated humans' anatomical and physiological adaptations. As with adaptations in general, psychological adaptations are said to be specialized for the environment in which an organism evolved, the environment of evolutionary adaptedness, or EEA. Sexual selection provides organisms with adaptations related to mating. For male mammals, which have a relatively fast reproduction rate, sexual selection leads to adaptations that help them compete for females. For female mammals, with a relatively slow reproduction rate, sexual selection leads to choosiness, which helps females select higher quality mates. Charles Darwin described both natural selection and sexual selection, but he relied on group selection to explain the evolution of self-sacrificing behavior. Group selection is a weak explanation because in any group the less self-sacrificing animals will be more likely to survive and the group will become less self-sacrificing.
In 1964, William D. Hamilton proposed inclusive fitness theory, emphasizing a "gene's-eye" view of evolution. Hamilton noted that individuals can increase the replication of their genes into the next generation by helping close relatives with whom they share genes survive and reproduce. According to "Hamilton's rule", a self-sacrificing behavior can evolve if it helps close relatives so much that it more than compensates for the individual animal's sacrifice. Incl |
https://en.wikipedia.org/wiki/St%C3%B8rmer%20number | In mathematics, a Størmer number or arc-cotangent irreducible number is a positive integer n for which the greatest prime factor of n² + 1 is greater than or equal to 2n. They are named after Carl Størmer.
Sequence
The first few Størmer numbers are: 1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20, ...
Density
John Todd proved that this sequence is neither finite nor cofinite.
More precisely, the natural density of the Størmer numbers lies between 0.5324 and 0.905.
It has been conjectured that their natural density is the natural logarithm of 2, approximately 0.693, but this remains unproven.
Because the Størmer numbers have positive density, they form a large set.
Application
The Størmer numbers arise in connection with the problem of representing the Gregory numbers (arctangents of rational numbers) as sums of Gregory numbers for integers (arctangents of unit fractions). A Gregory number arctan(y/x) may be decomposed by repeatedly multiplying the Gaussian integer x + iy by numbers of the form n ± i, in order to cancel prime factors p from the imaginary part; here n is chosen to be a Størmer number such that n² + 1 is divisible by p.
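The defining condition — that the greatest prime factor of n² + 1 is at least 2n — can be checked directly by computation. A small sketch (the helper names are illustrative) that generates Størmer numbers by trial division:

```python
def greatest_prime_factor(m):
    """Largest prime factor of m >= 2, by trial division."""
    gpf, d = 1, 2
    while d * d <= m:
        while m % d == 0:
            gpf, m = d, m // d
        d += 1
    return max(gpf, m) if m > 1 else gpf   # leftover m > 1 is itself prime

def is_stormer(n):
    # n is a Størmer number iff gpf(n^2 + 1) >= 2n
    return greatest_prime_factor(n * n + 1) >= 2 * n

print([n for n in range(1, 21) if is_stormer(n)])
# [1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20]
```

For instance n = 3 fails because 3² + 1 = 10 = 2 · 5 and 5 < 6, while n = 5 succeeds because 5² + 1 = 26 = 2 · 13 and 13 ≥ 10.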
https://en.wikipedia.org/wiki/Postelsia | Postelsia palmaeformis, also known as the sea palm (not to be confused with the southern sea palm) or palm seaweed, is a species of kelp and classified within brown algae. It is the only known species in the genus Postelsia. The sea palm is found along the western coast of North America, on rocky shores with constant waves. It is one of the few algae that can survive and remain erect out of the water; in fact, it spends most of its life cycle exposed to the air. It is an annual, and edible, though harvesting of the alga is discouraged due to the species' sensitivity to overharvesting.
History
The sea palm was known by the natives of California by the name of kakgunu-chale before any Europeans entered the region. Postelsia was first scientifically described by Franz Josef Ruprecht (1814–1870) in 1852 from a specimen found near Bodega Bay in California. Ruprecht, an Austro-Hungarian who became curator of botany at the Academy of Sciences in St. Petersburg in 1839, studied seaweed specimens collected by botanist Ilya Vosnesensky, and published a paper describing one seagrass and five seaweeds, one of which was Postelsia. The sea palm has been used by several textbooks, such as the Campbell–Reece Biology textbook, as an example of multicellular protists, as well as an example of the class Phaeophyceae.
Etymology
The generic name, Postelsia honors Alexander Philipov Postels, an Estonian-born geologist and artist who worked with Ruprecht, while the specific name, palmaeformis, describes the alga's superficial similarity in appearance to true palms.
Fossil record
Fossils from Monte Bolca, a lagerstätte near Verona, were originally named Zoophycos caput-medusae and previously thought to be trace fossils, but were later found to be plants instead and given the name Algarum by French zoologist Henri Milne-Edwards in 1866. The type specimen collected by Italian paleobotanist Abramo Bartolommeo Massalongo before 1855 is at the Natural History Museum of Verona and was p |
https://en.wikipedia.org/wiki/OpenLava | OpenLava is a workload job scheduler for a cluster of computers. OpenLava was pirated from an early version of Platform LSF. Its configuration file syntax, application program interface (API), and command-line interface (CLI) have been kept unchanged. Therefore, OpenLava is mostly compatible with Platform LSF.
OpenLava was based on the Utopia research project at the University of Toronto.
OpenLava was allegedly licensed under GNU General Public License v2, but that licensing was proven to be invalid and illegal at trial.
History
In 2007, Platform Computing (now part of IBM) released Platform Lava 1.0, which is a simplified version of Platform LSF 4.2 code, licensed under GNU General Public License v2. Platform Lava had no additional releases after v1.0 and was discontinued in 2011. In June 2011, OpenLava 1.0 code was committed to GitHub.
Commercial support
In 2014, a number of former Platform Computing employees founded Teraproc Inc., which contributed development and provided commercial support for OpenLava.
Commercially supported OpenLava contains add-on features beyond those of the community-based OpenLava project.
IBM Lawsuit
In October 2016, IBM filed a lawsuit alleging copyright infringement and trade secrets misappropriation against Teraproc. The complaint accused some of the company's founders of taking “confidential and proprietary source code" for IBM's Spectrum LSF product when they left, which was then used as the basis of the competitive product OpenLava. David Bigagli, the TeraProc employee who started the OpenLava project, posted a notice on GitHub announcing that downloads for OpenLava had been disabled because of a DMCA takedown notice sent by IBM's lawyers.
Bigagli later announced that the source code for OpenLava 3.0 and 4.0 would be taken down, while the source code of 2.2 would be restored in order to regain the GitHub repository and the openlava.org website, and claimed that the DMCA notice was fraudulent.
On September 18, 2018, the US Courts |
https://en.wikipedia.org/wiki/Bacteriophage%20Mu | Bacteriophage Mu, also known as mu phage or mu bacteriophage, is a muvirus (the first of its kind to be identified) of the family Myoviridae which has been shown to cause genetic transposition. It is of particular importance as its discovery in Escherichia coli by Larry Taylor was among the first observations of insertion elements in a genome. This discovery opened up the world to an investigation of transposable elements and their effects on a wide variety of organisms. While Mu was specifically involved in several distinct areas of research (including E. coli, maize, and HIV), the wider implications of transposition and insertion transformed the entire field of genetics.
Anatomy
Phage Mu is nonenveloped, with a head and a tail. The head has an icosahedral structure of about 54 nm in width. The neck is knob-like, and the tail is contractile with a base plate and six short terminal fibers. The genome has been fully sequenced and consists of 36,717 nucleotides, coding for 55 proteins.
History
Mu phage was first discovered by Larry Taylor at UC Berkeley in the late 1950s. His work continued at Brookhaven National Laboratory, where he first observed the mutagenic properties of Mu: several colonies of Hfr E. coli that had been lysogenized with Mu seemed to have a tendency to develop new nutritional markers. With further investigation, he was able to link the presence of these markers to the physical binding of Mu at certain loci. He likened the observed genetic alteration to the 'controlling elements' in maize, and named the phage 'Mu', for mutation. This, however, was only the beginning: over the next sixty years, the complexities of the phage were fleshed out by numerous researchers and labs, resulting in a far deeper understanding of mobile DNA and the mechanisms underlying transposable elements.
Key Mu-related findings
1972–1975: Ahmad Bukhari shows that Mu can insert randomly and prolifically throughout an entire bacterial genome, creating stable insertions |
https://en.wikipedia.org/wiki/Habitability%20of%20neutron%20star%20systems | The habitability of neutron star systems refers to the assessment of whether life is possible on planets and moons orbiting a neutron star.
A habitable planet orbiting a neutron star must be between one and 10 times the mass of the Earth. If the planet were lighter, its atmosphere would be lost. Its atmosphere must also be thick enough to convert the intense X-ray radiation emanating from the parent star into heat on its surface. Then it could have the temperature suitable for life.
A sufficiently strong magnetic field, giving rise to a protective magnetosphere, would shield the planet from the strong stellar wind. This could preserve the planet's atmosphere for several billion years. Such a planet could have liquid water on its surface.
A Dutch research team published an article on the subject in the journal Astronomy & Astrophysics in December 2017.
See also
Habitability of red dwarf systems
Habitability of K-type main-sequence star systems
Habitability of natural satellites
Dragon's Egg and its sequel Starquake, novels by Robert L. Forward, about life on a neutron star itself. |
https://en.wikipedia.org/wiki/Machine%20guidance | The term machine guidance describes a wide range of techniques which improve the productivity of agricultural, mining and construction equipment. It most commonly refers to systems which incorporate GPS, motion measuring units (MMU) and other devices to provide on-board systems with information about the movement of the machine in 3, 5 or 7 axes of rotation. Feedback to the operator is provided through audio and visual displays, allowing improved control of the machine in relation to the intended or designed direction of travel.
See also
List of emerging technologies |
https://en.wikipedia.org/wiki/United%20Kingdom%20Aerospace%20Youth%20Rocketry%20Challenge | The UK Youth Rocketry Challenge (UKRoC) is a youth rocket building competition, established in 2007. It provides secondary school student teams (of 3 to 5 members aged 11 to 18) a realistic experience in designing a flying aerospace vehicle that meets a specified set of mission and performance requirements. Students have to work together in teams, emulating the practices of real aerospace engineers. The challenge is not intended to be easy, but is considered well within the capabilities of secondary school students with a good background in science and maths and some craftsmanship skills.
Run by ADS, its key goal is to "Encourage school children to enter the world of aerospace and science".
The winner goes forward to represent the UK competing in the International Rocketry Challenge against the winners of the American, French and Japanese competitions.
The challenge
For each competition, UKRoC teams have had to design, construct and successfully launch a rocket carrying one or more raw medium-sized hen's eggs as close as possible to a specified altitude for a specified flight time, then return the egg and altimeter payload section safely and undamaged to earth using a specified means of deceleration.
Many teams choose to design their rockets on simulation programs, such as Spacecad and RockSim, which not only enable them to design, but also calculate how well a rocket will perform in different weather conditions.
History
From 2007 to 2009 finals were held at Charterhouse School, Surrey.
The 2008 finalists were: 342 Ealing and Brentford ATC, Beaumont School, Coombeshead College, Costessey High School, Crofton School, Dinnington Comprehensive School, Harrogate Grammar, Holywells High School, Horsforth Secondary School, Keighley Guides, Lancaster School, Loreto College, Royal Liberty School, Salt Grammar School, Shipston High School, Thornton Grammar and Wootton Bassett School.
The 2009 finalists were: 342 Ealing and Brentford ATC, Coombeshead College, Dinnington Comprehensive School, Holywe |
https://en.wikipedia.org/wiki/BioLinux | BioLinux is a term used in a variety of projects involved in making access to bioinformatics software on a Linux platform easier using one or more of the following methods:
Provision of complete systems
Provision of bioinformatics software repositories
Addition of bioinformatics packages to standard distributions
Live DVD/CDs with bioinformatics software added
Community building and support systems
There are now various projects with similar aims, on both Linux systems and other Unices, and a selection of these are given below. There is also an overview in the Canadian Bioinformatics Helpdesk Newsletter that details some of the Linux-based projects.
Package repositories
Apple/Mac
Many Linux packages are compatible with Mac OS X, and there are several projects which attempt to make it easy to install selected Linux packages (including bioinformatics software) on a computer running Mac OS X.
BioArchLinux
The BioArchLinux repository contains more than 3,770 packages for Arch Linux and Arch Linux-based distributions.
Debian
Debian is another very popular Linux distribution in use in many academic institutions, and some bioinformaticians have made their own software packages available for this distribution in the deb format.
Red Hat
Package repositories are generally specific to the distribution of Linux the bioinformatician is using. A number of Linux variants are prevalent in bioinformatics work. Fedora is a freely-distributed version of the commercial Red Hat system. Red Hat is widely used in the corporate world as they offer commercial support and training packages. Fedora Core is a community supported derivative of Red Hat and is popular amongst those who like Red Hat's system but don't require commercial support. Many users of bioinformatics applications have produced RPMs (Red Hat's package format) designed to work with Fedora, which you can potentially also install on Red Hat Enterprise Linux systems. Other distributions such as Mandriv |
https://en.wikipedia.org/wiki/Linear%20combination | In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics.
Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.
Definition
Let V be a vector space over the field K. As usual, we call elements of V vectors and call elements of K scalars.
If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is a1v1 + a2v2 + ··· + anvn.
There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace". However, one could also say "two different linear combinations can have the same value" in which case the reference is to the expression. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not produce distinct linear combinations.
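The definition, and the fact that only the coefficient of each vi matters (permuting terms or adding zero-coefficient terms leaves the value unchanged), can be sketched in plain Python. This is an illustrative example, not from the article; the function name and the list-of-lists vector representation are arbitrary choices:

```python
def linear_combination(coeffs, vectors):
    """Return a1*v1 + ... + an*vn for scalar coefficients and equal-length vectors."""
    result = [0] * len(vectors[0])
    for a, v in zip(coeffs, vectors):
        for i, component in enumerate(v):
            result[i] += a * component
    return result

# ax + by with x = (1, 0), y = (0, 1), a = 2, b = 3:
print(linear_combination([2, 3], [[1, 0], [0, 1]]))             # [2, 3]

# Permuting the terms or adding a zero-coefficient term gives a different
# expression but the same value:
print(linear_combination([3, 2, 0], [[0, 1], [1, 0], [5, 5]]))  # [2, 3]
```

The two calls are distinct expressions with equal values, which is exactly the distinction the paragraph above draws.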
In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, wh |
https://en.wikipedia.org/wiki/OpenFabrics%20Alliance | The OpenFabrics Alliance is a non-profit organization that promotes remote direct memory access (RDMA) switched fabric technologies for server and storage connectivity. These high-speed data-transport technologies are used in high-performance computing facilities, in research and various industries.
The OpenFabrics Alliance aims to develop open-source software that supports the three major RDMA fabric technologies: InfiniBand, RDMA over Converged Ethernet (RoCE) and iWARP. The software includes two packages, one that runs on Linux and FreeBSD and one that runs on Microsoft Windows. The alliance worked with two large Linux distributors—SUSE and Red Hat—as well as Microsoft on compatibility with their operating systems.
History
Founded in June 2004 as the OpenIB Alliance, the organization originally developed an InfiniBand software stack for Linux. Initial funding for the Alliance was provided by the United States Department of Energy. The alliance released the first version of the OpenFabrics Enterprise Distribution (OFED) in 2005.
In 2005 the OpenIB Alliance announced support for Microsoft Windows. In 2006, the organization again expanded its charter to include support for iWARP, which is a transport technology that competes with InfiniBand. At that time the alliance changed its name to the OpenFabrics Alliance. Subsequent releases have added support for iWARP and Windows.
In 2011, the OFED stack was ported to FreeBSD and included in FreeBSD 9.
OpenFabrics Enterprise Distribution
A community of developers from hardware manufacturers, software vendors, system integrators, government agencies and academia works on OFED. The OpenFabrics Alliance provides architectures, software repositories, interoperability tests, bug databases, workshops, and BSD- and GPL-licensed code to facilitate development.
The OFED stack includes software drivers, core kernel-code, middleware, and user-level interfaces. It offers a range of standard protocols, including IPoIB ( |
https://en.wikipedia.org/wiki/Pe%CA%BBa | The Pe'a is the popular name of the traditional male tatau (tattoo) of Samoa. It is a common mistake to refer to the pe'a as sogaimiti, because sogaimiti refers to the man bearing the pe'a, not to the pe'a itself. It covers the body from the middle of the back to the knees, and consists of heavy black lines, arrows, and dots.
History
The tattooing tool was originally made of bone or sharpened boar tusk, formed into a comb with serrated, needle-like teeth. It was then attached to a small piece of sea turtle shell connected to a wooden handle.
In the 1830s, English missionaries attempted to abolish the pe'a by banning it in missionary schools. The purpose of this was to "westernise" the Samoans, but during the time that tattooing was banned, it was still done in secret. Because of this, Samoa is the only Polynesian country that has managed to retain its traditional tattoos in modern times, although it is done to a much lesser extent than it used to be.
In present times, the traditional design of Pe'a continues to be a source of sacred cultural heritage, as an act of honour.
Description
The Pe'a covers the body from the middle of the back to the knees. The word tattoo in the English language is believed to have originated from the Samoan word "tatau".
The process for the Pe'a is extremely painful, and undertaken by (master tattooists), using a set of handmade tools: pieces of bone, turtle shell and wood. The are revered masters in Samoan society. In Samoan custom, a Pe'a is only done the traditional way, with aspects of cultural ceremony and ritual, and has a strong meaning for the one who receives it. The works with two assistants, called , who are often apprentice tattooists and they stretch the skin, wipe the excess ink and blood and generally support the tattooist in their work. The process takes place with the subject lying on mats on the floor with the tattooist and assistants beside them. Family members of the person getting the t |
https://en.wikipedia.org/wiki/Mediterranean%20Institute%20of%20Fundamental%20Physics | The Mediterranean Institute of Fundamental Physics, commonly known as MIFP, is a private independent non-governmental institution created in order to unite scientists in different countries around the world working in all fields of physics. MIFP is a non-profit organization whose main goal is to provide efficient and flexible management of international collaboration projects and teaching programmes, to ease and increase the communication between leading researchers in different areas of fundamental and applied physics, and to organize international scientific meetings, workshops, schools and conferences. It is located in Marino, Rome, Italy.
Founded in July 2010 by a small group of researchers, MIFP numbered over one hundred members by 2012. During 2010, 2011 and 2012, MIFP held, took part in, or partly sponsored more than ten international events in Armenia, Italy, Greece, France, Ukraine, Brazil, Thailand and Russia. The mean h-index among MIFP members is 27. About 50% of the members represent the Russian scientific diaspora.
MIFP is financed from the research grants and by private donations.
History
In 2001, the 1st French-Russian Meeting on New Trends in Solid State Physics in Clermont-Ferrand, France, sponsored by Valéry Giscard d'Estaing, started a series of annual meetings of world-leading experts in solid state physics. Further meetings were held in Clermont-Ferrand on a yearly basis from 2002 to 2006, followed by meetings in Rome in 2007, 2008, 2009 and 2010 with a constantly increasing number of participating scientists. Shortly after the 2010 annual meeting, a group of researchers working in different countries decided to create a private organization whose main goal would be bringing together a strong international team providing a stimulating research environment and working atmosphere. Another mission of MIFP was the reunification of the Russian scientists who had spread all over the world after the fall of the S
https://en.wikipedia.org/wiki/AC%2020-152 | The Advisory Circular AC 20-152A, Development Assurance for Airborne Electronic Hardware, identifies the RTCA-published standard DO-254 as defining "an acceptable means, but not the only means" to secure FAA approval of complex custom micro-coded components within aircraft systems with Item Design Assurance Levels (IDAL) of A, B, or C. Specifically excluding COTS microcontrollers, complex custom micro-coded components include field programmable gate arrays (FPGA), programmable logic devices (PLD), and application-specific integrated circuits (ASIC), particularly in cases where correctness and safety cannot be verified with testing alone, necessitating methodical design assurance. Application of DO-254 to IDAL D components is optional.
Revision History |
https://en.wikipedia.org/wiki/Star%20of%20Life | The Star of Life is a symbol used to identify emergency medical services. It features a blue six-pointed star, outlined by a white border. The middle contains a Rod of Asclepius – an ancient symbol of medicine. The Star of Life can be found on ambulances, medical personnel uniforms, and other objects associated with emergency medicine or first aid. Elevators marked with the symbol indicate the lift is large enough to hold a stretcher. Medical bracelets sometimes use the symbol to indicate one has a medical condition that emergency services should be aware of.
The Star of Life is widely used around the world, but like many international symbols, it has not been adopted everywhere. Its use is restricted to authorized personnel in some countries.
History
The Star of Life originated in the United States. Previously, ambulances in the country commonly displayed a safety orange-colored cross on a square background. This was only a slight variation from the red Greek cross (✚) used by the International Red Cross and Red Crescent Movement.
In 1963, the American Medical Association (AMA) designed the Star of Life as a "universal" symbol for medical identification. The AMA did not trademark or copyright the symbol, stating it was being "freely offered" to manufacturers, and also was for use on cards carried by persons with a medical condition.
By way of a 1964 resolution, it was adopted by the World Medical Association "as the universal emergency medical information symbol". The Star of Life was promoted by the American Red Cross and rapidly adopted worldwide.
In 1970, when the American Medical Association's Committee on Emergency Medical Services formed the National Registry of Emergency Medical Technicians (NREMT), the AMA chose the Star of Life to designate nationally certified Emergency Medical Services personnel. In 1973, the NREMT filed for a trademark for the Star of Life logo, under the category of a collective membership mark. This version featured a Star of Li |
https://en.wikipedia.org/wiki/87%20%28number%29 | 87 (eighty-seven) is the natural number following 86 and preceding 88.
In mathematics
87 is:
the sum of the squares of the first four primes (87 = 2² + 3² + 5² + 7²).
the sum of the sums of the divisors of the first 10 positive integers.
the thirtieth semiprime, the twenty-sixth distinct semiprime, and the eighth semiprime of the form 3·q.
together with 85 and 86, the last member of the second cluster of three consecutive semiprimes, the first cluster comprising 33, 34, 35.
a number with an aliquot sum of 33, itself a semiprime; its aliquot sequence (87, 33, 15, 9, 4, 3, 1, 0) passes through five composite numbers before reaching a prime in the 3-aliquot tree.
5! - 4! - 3! - 2! - 1! = 87
the last two decimal digits of Graham's number.
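Several of the arithmetic claims above can be checked mechanically. The following snippet is illustrative (not from the article); the helper name sigma is an arbitrary choice:

```python
from math import factorial

def sigma(n):
    """Sum of the divisors of n (a naive O(n) check, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Sum of the squares of the first four primes.
assert sum(p * p for p in [2, 3, 5, 7]) == 87
# Sum of the sums of divisors of the first 10 positive integers.
assert sum(sigma(k) for k in range(1, 11)) == 87
# Semiprime of the form 3·q, with aliquot sum 33 (itself a semiprime: 3 · 11).
assert 87 == 3 * 29 and sigma(87) - 87 == 33
# The factorial identity: 120 - 24 - 6 - 2 - 1 = 87.
assert factorial(5) - factorial(4) - factorial(3) - factorial(2) - factorial(1) == 87
print("all claims verified")
```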
In sports
In Australian cricket, 87 is superstitiously held to be an unlucky score and is referred to as "the devil's number", supposedly because 87 is 13 runs short of a century. 187, 287, and so on are also considered unlucky, but are less prominent than 87 itself.
In the National Hockey League, Wayne Gretzky scored a league-high 87 goals with the Edmonton Oilers in the 1983–84 NHL season.
In other fields
Eighty-seven is also:
The atomic number of francium.
The answer to a popular puzzle asking for the missing term in the sequence 16, 06, 68, 88, xx, 98: viewed upside down, the sequence reads 86, ?, 88, 89, 90, 91, so the missing number is 87.
The number of years between the signing of the U.S. Declaration of Independence and the Battle of Gettysburg, immortalized in Abraham Lincoln's Gettysburg Address with the phrase "Four Score and Seven Years ago..."
The model number of Junkers Ju 87.
The number of the French department Haute-Vienne.
The code for international direct dial phone calls to Inmarsat and other services.
The 87 photographic filter blocks visible light, allowing only infrared light to pass.
The ISBN Group Identifier for books published in Denmark.
The opus number of the 24 Preludes and Fugues of Dmitri Shostakovich.
In model railroading, the ratio of the popular H0 scale is 1:87. Proto:87 scale claims to offer pre |
https://en.wikipedia.org/wiki/LibreOffice | LibreOffice () is a free and open-source office productivity software suite, a project of The Document Foundation (TDF). It was forked in 2010 from OpenOffice.org, an open-sourced version of the earlier StarOffice. The LibreOffice suite consists of programs for word processing, creating and editing of spreadsheets, slideshows, diagrams and drawings, working with databases, and composing mathematical formulas. It is available in 115 languages. TDF does not provide support for LibreOffice, but enterprise-focused editions are available from companies in the ecosystem.
LibreOffice uses the OpenDocument standard as its native file format, but supports formats of most other major office suites, including Microsoft Office, through a variety of import and export filters.
LibreOffice is available for a variety of computing platforms, with official support for Microsoft Windows, macOS and Linux and community builds for many other platforms. Ecosystem partner Collabora uses LibreOffice upstream code and provides apps for Android, iOS, iPadOS and ChromeOS. LibreOffice is the default office suite of most popular Linux distributions.
LibreOffice Online is an online office suite which includes the applications Writer, Calc and Impress and provides an upstream for projects such as commercial Collabora Online.
It is the most actively developed free and open-source office suite, with approximately 50 times the development activity of Apache OpenOffice, the other major descendant of OpenOffice.org, in 2015.
The project was announced and a beta released on 28 September 2010. In the nine months between January 2011 (the first stable release) and October 2011, LibreOffice was downloaded about 7.5 million times. In 2015, the project claimed 120 million unique downloading addresses from May 2011 to May 2015, excluding Linux distributions, with 55 million of those being from May 2014 to May 2015. The Document Foundation estimates that there are 200 million active LibreOffice users worl |
https://en.wikipedia.org/wiki/Crawler%20%28BEAM%29 | In BEAM robotics, a crawler is a robot that has a mode of locomotion by tracks or by transferring the robot's body on limbs or appendages. These do not drag parts of their body on the ground.
In the original paper "living machines" from 1995, two types of robots were introduced: the Walkman, a simple crawler, and the Spyder, a more elaborate legged robot. The difference between a walker and a crawler is that the crawler is more primitive: it has robot legs and can move forward on the carpet, but the legs don't have dedicated joints for articulated movement; instead they are mounted directly on the robot's base.
The design of a crawler robot isn't specified in detail, and each robot engineer is free to build their own version. Some crawling robots are equipped with dedicated microcontrollers plus a radio-controlled chipset, while other implementations take a minimalist approach. What all these robots have in common is that they follow the BEAM philosophy of Biology, Electronics, Aesthetics and Mechanics, which is about imitating biological bugs.
A possible alternative to teleoperated control is an Nv network, a specialized form of central pattern generator that produces an oscillating signal, moving the legs in a clockwork-like rhythm.
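As a toy illustration of the clockwork-like oscillation such a network can produce, the sketch below simulates a ring of inverting delay elements in discrete time. This is not from the source and is not a faithful electronic Nv-neuron model; the function name and parameters are invented:

```python
def simulate_ring(n_nodes=3, steps=12):
    """Simulate a ring of inverting delay elements: at each time step, every
    node outputs the logical NOT of its predecessor's output from the previous
    step. With an odd number of nodes the ring never settles, yielding a
    periodic pattern in which each node's on/off waveform could drive one leg."""
    state = [0] * n_nodes
    state[0] = 1  # seed one active node to start the travelling wave
    history = [state[:]]
    for _ in range(steps):
        state = [1 - state[(i - 1) % n_nodes] for i in range(n_nodes)]
        history.append(state[:])
    return history

for step, s in enumerate(simulate_ring()):
    print(step, s)  # the 3-node pattern repeats with period 6
```

The periodic state sequence plays the role of the oscillating signal described above: no sensing or computation, just a self-sustaining rhythm.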
Genera
Turbots: roll over and over as a mode of locomotion, via arms or flagella.
Inchworms: locomotion driven by the robot's body undulating; this undulation moves part of the body ahead while the rest of the chassis stays on the ground.
Tracked robots: tracked-wheel locomotion; tank-like action.
https://en.wikipedia.org/wiki/Morin%20transition | The Morin transition (also known as a spin-flop transition) is a magnetic phase transition in hematite (α-Fe2O3) in which, below the transition temperature TM, the antiferromagnetic ordering reorients from alignment perpendicular to the c-axis to alignment parallel to it.
TM = 260 K for Fe3+ in α-Fe2O3.
A change in magnetic properties takes place at the Morin transition temperature.
See also |
https://en.wikipedia.org/wiki/Contractility | Contractility refers to the ability for self-contraction, especially of the muscles or similar active biological tissue.
Contractile ring in cytokinesis
Contractile vacuole
Muscle contraction
Myocardial contractility
See contractile cell for an overview of cell types in humans.
See also
Motility
Cell movement |
https://en.wikipedia.org/wiki/Acacia%20sensu%20lato | Acacia s.l. (pronounced or ), known commonly as mimosa, acacia, thorntree, wattle or babul (India/Hindi), is a polyphyletic genus of shrubs and trees belonging to the subfamily Mimosoideae of the family Fabaceae. It was described by the Swedish botanist Carl Linnaeus in 1773 based on the African species Acacia nilotica. Many non-Australian species tend to be thorny, whereas the majority of Australian acacias are not. All species are pod-bearing, with sap and leaves often bearing large amounts of tannins and condensed tannins that historically found use as pharmaceuticals and preservatives.
The genus Acacia constitutes, in its traditional circumscription, the second largest genus in Fabaceae (Astragalus being the largest), with roughly 1,300 species, about 960 of them native to Australia, with the remainder spread around the tropical to warm-temperate regions of both hemispheres, including Europe, Africa, southern Asia, and the Americas (see List of Acacia species). The genus was divided into five separate genera under the tribe "Acacieae". The genus now called Acacia represents the majority of the Australian species and a few native to southeast Asia, Réunion, and Pacific Islands. Most of the species outside Australia, and a small number of Australian species, are classified into Vachellia and Senegalia. The two final genera, Acaciella and Mariosousa, each contain about a dozen species from the Americas (but see "Classification" below for the ongoing debate concerning their taxonomy).
Classification
English botanist and gardener Philip Miller adopted the name Acacia in 1754. The generic name is derived from (), the name given by early Greek botanist-physician Pedanius Dioscorides (middle to late first century) to the medicinal tree A. nilotica in his book Materia Medica. This name derives from the Ancient Greek word for its characteristic thorns, (; "thorn"). The species name nilotica was given by Linnaeus from this tree's best-known range along the Nile river. |
https://en.wikipedia.org/wiki/Univariate%20%28statistics%29 | Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like other types of data, univariate data can be visualized using graphs, images or other analysis tools after the data has been measured, collected, reported, and analyzed.
Univariate data types
Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types.
Categorical univariate data
Categorical univariate data consist of non-numerical observations that may be placed in categories. They include labels or names used to identify an attribute of each element, and usually employ either a nominal or an ordinal scale of measurement.
Numerical univariate data
Numerical univariate data consist of observations that are numbers, obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified further into two subcategories: discrete and continuous. Numerical univariate data are discrete if the set of all possible values is finite or countably infinite; discrete univariate data are usually associated with counting (such as the number of books read by a person). Numerical univariate data are continuous if the set of all possible values is an interval of numbers; continuous univariate data are usually associated with measuring (such as the weights of people).
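The categorical/numerical distinction can be illustrated with Python's standard library. This sketch is not from the article, and the sample data are invented:

```python
from collections import Counter
from statistics import mean, median

eye_colors = ["brown", "blue", "brown", "green", "brown"]  # categorical
heights = [65, 70, 62, 68, 65]                             # numerical, continuous
books_read = [3, 0, 5, 2, 3]                               # numerical, discrete

# Categorical univariate data is summarized by counts per category...
print(Counter(eye_colors).most_common(1))   # [('brown', 3)]

# ...while numerical univariate data supports arithmetic summaries.
print(mean(heights), median(heights))       # 66 65
print(sum(books_read))                      # 13
```

Note that taking a mean of eye_colors would be meaningless, which is exactly why the scale of measurement matters.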
Data analysis and applications
Univariate analysis is the simplest form of analyzing data. "Uni" means "one", so the data has only one variable (univariate). Univariate analysis examines each variable separately. Data is gathered for the purpose of answering a question, or more s
https://en.wikipedia.org/wiki/Dependence%20logic | Dependence logic is a logical formalism, created by Jouko Väänänen, which adds dependence atoms to the language of first-order logic. A dependence atom is an expression of the form =(t1,…,tn), where t1,…,tn are terms, and corresponds to the statement that the value of tn is functionally dependent on the values of t1,…,tn−1.
Dependence logic is a logic of imperfect information, like branching quantifier logic or independence-friendly logic (IF logic): in other words, its game-theoretic semantics can be obtained from that of first-order logic by restricting the availability of information to the players, thus allowing for non-linearly ordered patterns of dependence and independence between variables. However, dependence logic differs from these logics in that it separates the notions of dependence and independence from the notion of quantification.
Syntax
The syntax of dependence logic is an extension of that of first-order logic. For a fixed signature σ = (Sfunc, Srel, ar), the set of all well-formed dependence logic formulas is defined according to the following rules:
Terms
Terms in dependence logic are defined precisely as in first-order logic.
Atomic formulas
There are three types of atomic formulas in dependence logic:
A relational atom is an expression of the form R(t1, …, tn) for any n-ary relation R in our signature and for any n-tuple of terms (t1, …, tn);
An equality atom is an expression of the form t1 = t2, for any two terms t1 and t2;
A dependence atom is an expression of the form =(t1, …, tn), for any n ≥ 1 and for any n-tuple of terms (t1, …, tn).
Nothing else is an atomic formula of dependence logic.
Relational and equality atoms are also called first-order atoms.
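A dependence atom is evaluated not against a single assignment but against a team (a set of assignments): the team satisfies =(t1, …, tn) when any two assignments agreeing on t1, …, tn−1 also agree on tn. The following sketch checks this functional-dependence condition for variables; the function and variable names are illustrative assumptions, not from the text.

```python
# Sketch of the team-semantics reading of a dependence atom =(x1, ..., xn):
# a team satisfies it when any two assignments that agree on x1, ..., x_{n-1}
# also agree on xn.

def satisfies_dependence(team, vars_in, var_out):
    """Return True if var_out is functionally determined by vars_in in the team."""
    seen = {}
    for assignment in team:
        key = tuple(assignment[v] for v in vars_in)
        if key in seen and seen[key] != assignment[var_out]:
            return False  # two assignments agree on vars_in but differ on var_out
        seen[key] = assignment[var_out]
    return True

# A team of three assignments in which y is a function of x.
team = [
    {"x": 0, "y": 1},
    {"x": 0, "y": 1},
    {"x": 1, "y": 2},
]
print(satisfies_dependence(team, ["x"], "y"))  # this team satisfies =(x, y)
```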
Complex formulas and sentences
For a fixed signature σ, the set of all formulas of dependence logic and their respective sets of free variables are defined as follows:
Any atomic formula φ is a formula, and Free(φ) is the set of all variables occurring in it;
If φ is a formula, so is ¬φ, and Free(¬φ) = Free(φ);
If φ and ψ are formulas, so is φ ∨ ψ, and Free(φ ∨ ψ) = Free(φ) ∪ Free(ψ);
If φ is a formula and x is a variable, i
https://en.wikipedia.org/wiki/Haptik | Haptik is an Indian enterprise conversational AI platform founded in August 2013, and acquired by Reliance Industries Limited in 2019. The company develops technology to enable enterprises to build conversational AI systems that allow users to converse with applications and electronic devices in free-format, natural language, using speech or text. The company has been accorded numerous accolades including the Frost & Sullivan Award, NASSCOM's Al Game Changer Award, and serves Fortune 500 brands globally in industries such as financial, insurance, healthcare, technology and communications.
History
Haptik was founded by Aakrit Vaish and Swapan Rajdev in August 2013. The company launched its first product, the Haptik app, in March 2014: a chat-based personal assistant for the Android and iOS platforms in India that lets users get things done. By September 2014, the platform had added 125 chat experts who helped users with their queries.
Over time the company upgraded it into a complete conversational commerce app. The app received 2 million downloads and 15 million installations.
In August 2015, Dan Roth joined Haptik's board of advisers and helped scale the platform's natural language processing (NLP). In the same year, Haptik was appointed the official personal assistant of Mumbai City FC. It also provided a customer support chatbot to SwipeTelecom.
In November 2017, the company launched a full-scale enterprise-level bot management platform including an analytics dashboard.
In 2019, Haptik launched a voice bot for one of the largest food chains in the world, allowing customers to place orders using Alexa. The bot also helps users find the nearest outlet of the chain and provides real-time information on product availability.
In March 2019, the Government of Maharashtra signed a partnership pact with Haptik to develop a chatbot for its Aaple Sarkar platform. The bot provides conversational access to information regarding 1,400 services managed |
https://en.wikipedia.org/wiki/Meagre%20set | In the mathematical field of general topology, a meagre set (also called a meager set or a set of first category) is a subset of a topological space that is small or negligible in a precise sense detailed below. A set that is not meagre is called nonmeagre, or of the second category. See below for definitions of other related terms.
The meagre subsets of a fixed space form a σ-ideal of subsets; that is, any subset of a meagre set is meagre, and the union of countably many meagre sets is meagre.
Meagre sets play an important role in the formulation of the notion of Baire space and of the Baire category theorem, which is used in the proof of several fundamental results of functional analysis.
Definitions
Throughout, X will be a topological space.
The definition of meagre set uses the notion of a nowhere dense subset of X, that is, a subset of X whose closure has empty interior. See the corresponding article for more details.
A subset of X is called meagre in X, or of the first category in X, if it is a countable union of nowhere dense subsets of X. Otherwise, the subset is called nonmeagre in X, or of the second category in X. The qualifier "in X" can be omitted if the ambient space is fixed and understood from context.
A topological space is called meagre (respectively, nonmeagre) if it is a meagre (respectively, nonmeagre) subset of itself.
A subset A of X is called comeagre in X, or residual in X, if its complement X ∖ A is meagre in X. (This use of the prefix "co" is consistent with its use in other terms such as "cofinite".)
A subset is comeagre in X if and only if it is equal to a countable intersection of sets, each of whose interior is dense in X.
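As a concrete illustration of these definitions (a standard example, not stated in the text above): the rationals are meagre in the reals, since each singleton is nowhere dense and there are only countably many of them.

```latex
% Each singleton \{q\} is closed with empty interior in \mathbb{R},
% hence nowhere dense, and \mathbb{Q} is a countable union of them:
\mathbb{Q} \;=\; \bigcup_{q \in \mathbb{Q}} \{q\},
% so \mathbb{Q} is meagre (of the first category) in \mathbb{R},
% and its complement, the set of irrationals, is comeagre in \mathbb{R}.
```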
Remarks on terminology
The notions of nonmeagre and comeagre should not be confused. If the space X is meagre, every subset is both meagre and comeagre, and there are no nonmeagre sets. If the space X is nonmeagre, no set is at the same time meagre and comeagre, every comeagre set is nonmeagre, and there can be nonmeagre sets that are not comeagre, that is, with nonmeagre complement. See the Ex