source | text
|---|---|
https://en.wikipedia.org/wiki/Surface%20conductivity | Surface conductivity is an additional conductivity of an electrolyte in the vicinity of charged interfaces. Surface and volume conductivity of liquids correspond to the electrically driven motion of ions in an electric field. A layer of counter-ions, with polarity opposite to that of the surface charge, exists close to the interface. It forms due to the attraction of counter-ions by the surface charges. This layer of higher ionic concentration is part of the interfacial double layer. The concentration of ions in this layer is higher than in the bulk of the liquid, which gives the layer a higher electric conductivity.
Smoluchowski was the first to recognize the importance of surface conductivity at the beginning of the 20th century.
There is a detailed description of surface conductivity by Lyklema in "Fundamentals of Interface and Colloid Science"
The Double Layer (DL) has two regions, according to the well-established Gouy-Chapman-Stern model. The outer region, which is in contact with the bulk liquid, is the diffuse layer. The inner layer, which is in contact with the interface, is the Stern layer.
It is possible that the lateral motion of ions in both parts of the DL contributes to the surface conductivity.
The contribution of the Stern layer is less well described. It is often called "additional surface conductivity".
The theory of the surface conductivity of the diffuse part of the DL was developed by Bikerman. He derived a simple equation that links the surface conductivity κσ with the behaviour of ions at the interface. For a symmetrical electrolyte, and assuming identical ion diffusion coefficients (D+ = D− = D), it is given in the reference:
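The formula itself was not preserved in this excerpt; a commonly quoted form of Bikerman's result is reconstructed below. In this sketch, κ in the prefactor denotes the reciprocal Debye length, a symbol that is not defined in the list that follows:

$$\kappa^{\sigma} = \frac{4 F^{2} C z^{2} D \left(1 + \frac{3m}{z^{2}}\right)}{R T \,\kappa}\left(\cosh\frac{z F \zeta}{2 R T} - 1\right)$$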
where
F is the Faraday constant
T is the absolute temperature
R is the gas constant
C is the ionic concentration in the bulk fluid
z is the ion valency
ζ is the electrokinetic potential
The parameter m characterizes the contribution of electro-osmosis to the motion of ions within the DL:
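The defining expression for m likewise did not survive in this excerpt; the usual dimensionless form, reconstructed here under the assumption that εε₀ is the absolute permittivity of the liquid and η its dynamic viscosity (neither symbol is defined in the excerpt), is:

$$m = \frac{2\,\varepsilon\,\varepsilon_{0}\, R^{2} T^{2}}{3\,\eta\, F^{2} D}$$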
The Dukhin n |
https://en.wikipedia.org/wiki/Eye%20beam | In the physics inherited from Plato (although rejected by Aristotle), an eye beam generated in the eye was thought to be responsible for the sense of sight. The eye beam darted by the imagined basilisk, for instance, was the agent of its lethal power, given the technical term extramission.
The exaggerated eyes of fourth-century Roman emperors like Constantine the Great (illustration) reflect this character (L. Safran, "What Constantine saw: reflections on the Capitoline Colossus, visuality and early Christian studies", Millennium 3 (2006): 43-73; noted in Paul Stephenson, Constantine, Roman Emperor, Christian Victor, 2010, note 333). The concept found expression in poetry into the 17th century, most famously in John Donne's poem "The Extasie":
Our eye-beams twisted, and did thred
Our eyes, upon one double string;
So to'entergraft our hands, as yet
Was all the meanes to make us one,
And pictures in our eyes to get
Was all our propagation.
In the same period John Milton wrote, of having gone blind, "When I consider how my light is spent", meaning that he had lost the capacity to generate eye beams.
Later in the century, Newtonian optics and increased understanding of the structure of the eye rendered the old concept invalid, but it was revived as an aspect of monstrous superhuman capabilities in popular culture of the 20th century.
The emission theory of sight seemed to be corroborated by geometry and was reinforced by Robert Grosseteste.
In Algernon Swinburne's "Atalanta in Calydon" the conception is revived for poetic purposes, enriching the poem's pagan context in the Huntsman's invocation of Artemis:
Hear now and help, and lift no violent hand,
But favourable and fair as thine eye's beam,
Hidden and shown in heaven.
In T.S. Eliot's rose garden episode that introduces "Burnt Norton" eyebeams persist in the fusion of possible pasts and presents like unheard music:
The unheard music hidden in the shrubbery
And the unseen eyebeam crossed, for |
https://en.wikipedia.org/wiki/Exonuclease | Exonucleases are enzymes that work by cleaving nucleotides one at a time from the end (exo) of a polynucleotide chain. A hydrolyzing reaction that breaks phosphodiester bonds at either the 3′ or the 5′ end occurs. Its close relative is the endonuclease, which cleaves phosphodiester bonds in the middle (endo) of a polynucleotide chain. Eukaryotes and prokaryotes have three types of exonucleases involved in the normal turnover of mRNA: 5′ to 3′ exonuclease (Xrn1), which is a dependent decapping protein; 3′ to 5′ exonuclease, an independent protein; and poly(A)-specific 3′ to 5′ exonuclease.
In both archaea and eukaryotes, one of the main routes of RNA degradation is performed by the multi-protein exosome complex, which consists largely of 3′ to 5′ exoribonucleases.
Significance to polymerase
RNA polymerase II is known to be in effect during transcriptional termination; it works with a 5' exonuclease (human gene Xrn2) to degrade the newly formed transcript downstream, leaving the polyadenylation site and simultaneously shooting the polymerase. This process involves the exonuclease's catching up to the pol II and terminating the transcription.
Pol I then synthesizes DNA nucleotides in place of the RNA primer it had just removed. DNA polymerase I also has 3' to 5' and 5' to 3' exonuclease activity, which is used in editing and proofreading DNA for errors. The 3' to 5' can only remove one mononucleotide at a time, and the 5' to 3' activity can remove mononucleotides or up to 10 nucleotides at a time.
E. coli types
In 1971, I. R. Lehman discovered exonuclease I in E. coli. Since that time, there have been numerous discoveries, including exonucleases II, III, IV, V, VI, VII, and VIII. Each type of exonuclease has a specific type of function or requirement.
Exonuclease I breaks apart single-stranded DNA in a 3' → 5' direction, releasing deoxyribonucleoside 5'-monophosphates one after another. It does not cleave DNA strands without terminal 3'-OH groups because they ar |
https://en.wikipedia.org/wiki/CasADi | CasADi is a free and open source symbolic framework for automatic differentiation and optimal control.
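As a brief illustration of the kind of workflow CasADi supports, here is a minimal sketch using its Python interface; the variable names and the toy expression are arbitrary choices for this example, not taken from the project's documentation:

```python
import casadi as ca

# Build a small symbolic expression and differentiate it automatically.
x = ca.SX.sym("x", 2)             # symbolic 2-vector
f = x[0] ** 2 + ca.sin(x[1])      # scalar expression in x
g = ca.gradient(f, x)             # symbolic gradient via automatic differentiation

# Wrap the expression and its gradient into a callable function and evaluate numerically.
F = ca.Function("F", [x], [f, g])
val, grad = F([1.0, 0.5])
print(val, grad)
```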
See also
Automatic differentiation
JModelica.org |
https://en.wikipedia.org/wiki/Cell%20adhesion | Cell adhesion is the process by which cells interact and attach to neighbouring cells through specialised molecules of the cell surface. This process can occur either through direct contact between cell surfaces such as cell junctions or indirect interaction, where cells attach to surrounding extracellular matrix, a gel-like structure containing molecules released by cells into spaces between them. Cell adhesion occurs through interactions between cell-adhesion molecules (CAMs), transmembrane proteins located on the cell surface. Cell adhesion links cells in different ways and can be involved in signal transduction, allowing cells to detect and respond to changes in their surroundings. Other cellular processes regulated by cell adhesion include cell migration and tissue development in multicellular organisms. Alterations in cell adhesion can disrupt important cellular processes and lead to a variety of diseases, including cancer and arthritis. Cell adhesion is also essential for infectious organisms, such as bacteria or viruses, to cause diseases.
General mechanism
CAMs are classified into four major families: integrins, immunoglobulin (Ig) superfamily, cadherins, and selectins. Cadherins and IgSF are homophilic CAMs, as they directly bind to the same type of CAMs on another cell, while integrins and selectins are heterophilic CAMs that bind to different types of CAMs. Each of these adhesion molecules has a different function and recognizes different ligands. Defects in cell adhesion are usually attributable to defects in expression of CAMs.
In multicellular organisms, bindings between CAMs allow cells to adhere to one another and create structures called cell junctions. According to their functions, cell junctions can be classified as:
Anchoring junctions (adherens junctions, desmosomes and hemidesmosomes), which hold cells together and strengthen contact between cells.
Occluding junctions (tight junctions), which seal gaps between cells through cell–cell |
https://en.wikipedia.org/wiki/Flashover | A flashover is the near-simultaneous ignition of most of the directly exposed combustible material in an enclosed area. When certain organic materials are heated, they undergo thermal decomposition and release flammable gases. Flashover occurs when the majority of the exposed surfaces in a space are heated to their autoignition temperature and emit flammable gases (see also flash point). For ordinary combustibles, flashover normally occurs at a characteristic temperature and at a critical incident heat flux at floor level.
An example of flashover is the ignition of a piece of furniture in a domestic room. The fire involving the initial piece of furniture can produce a layer of hot smoke, which spreads across the ceiling in the room. The hot buoyant smoke layer grows in depth, as it is bounded by the walls of the room. The radiated heat from this layer heats the surfaces of the directly exposed combustible materials in the room, causing them to give off flammable gases, via pyrolysis. When the temperatures of the evolved gases become high enough, these gases will ignite throughout their extent.
Types
The original Swedish terminology related to the term 'flashover' has been altered in its translation to conform with current European and North American accepted [scientific] definitions as follows:
A lean flashover (sometimes called rollover) is the ignition of the gas layer under the ceiling, leading to total involvement of the compartment. The fuel/air ratio is at the bottom region of the flammability range (i.e. lean).
A rich flashover occurs when the flammable gases are ignited while at the upper region of the flammability range (i.e. rich). This can happen in rooms where the fire subsided because of lack of oxygen. The ignition source can be a smouldering object, or the stirring up of embers by the air track. Such an event is known as backdraft.
A delayed flashover occurs when the colder gray smoke cloud ignites after congregating outside of its room of origin. This results in a volatile si |
https://en.wikipedia.org/wiki/Patchwork%20%28software%29 | Patchwork is a free, web-based patch tracking system designed to facilitate the contribution and management of contributions to an open-source project. It is intended to make the patch management process easier for both the project's contributors and maintainers.
Patches that have been sent to a mailing list are 'caught' by the system, and appear on a web page. Any comments posted that reference the patch are appended to the patch page too. The project's maintainer can then scan through the list of patches, marking each with a certain state, such as Accepted, Rejected or Under Review. Old patches can be sent to the archive or deleted.
Currently, Patchwork is being used for a number of open-source projects, mostly subsystems of the Linux kernel and FFmpeg. Although Patchwork has been developed with the kernel workflow in mind, the aim is to be flexible enough to suit the majority of community projects.
History
Patchwork was developed by Jeremy Kerr for use with the Linux PPC64 mailing list. The ozlabs.org deployment was later expanded to cover additional projects and functionality.
Design
Originally written in Perl, it is now written in Python, using the Django web framework. Recent versions of Patchwork use Bootstrap for the front-end UI.
See also
List of tools for code review |
https://en.wikipedia.org/wiki/Kanamycin%20nucleotidyltransferase | In molecular biology, kanamycin nucleotidyltransferase (KNTase) is an enzyme which is involved in conferring resistance to aminoglycoside antibiotics. It catalyses the transfer of a nucleoside monophosphate group from a nucleotide to kanamycin. This enzyme is dimeric with each subunit being composed of two domains. The C-terminal domain contains five alpha helices, four of which are organised into an up-and-down alpha helical bundle. Residues found in this domain may contribute to this enzyme's active site. |
https://en.wikipedia.org/wiki/XAdES | XAdES (short for XML Advanced Electronic Signatures) is a set of extensions to XML-DSig recommendation making it suitable for advanced electronic signatures. W3C and ETSI maintain and update XAdES together.
Description
While XML-DSig is a general framework for digitally signing documents, XAdES specifies precise profiles of XML-DSig making it compliant with the European eIDAS regulation (Regulation on electronic identification and trust services for electronic transactions in the internal market). The eIDAS regulation enhances and repeals the Electronic Signatures Directive 1999/93/EC. eIDAS has been legally binding in all EU member states since July 2014. An electronic signature that has been created in compliance with eIDAS has the same legal value as a handwritten signature.
An electronic signature, technically implemented based on XAdES has the status of an advanced electronic signature. This means that
it is uniquely linked to the signatory;
it is capable of identifying the signatory;
only the signatory has control of the data used for the signature creation;
it allows any change made to the signed data after signing to be detected.
A resulting property of XAdES is that electronically signed documents can remain valid for long periods, even if underlying cryptographic algorithms are broken.
However, courts are not obliged to accept XAdES-based electronic signatures as evidence in their proceedings; at least in the EU, this is compulsory only for "qualified" signatures. A "qualified electronic signature" must be based on a qualified digital certificate, be created by a secure signature creation device, and the identity of the owner of this signing certificate must have been verified according to the "high" assurance level of the eIDAS regulation.
Profiles
XAdES defines four profiles (forms) differing in protection level offered.
XAdES-B-B (Basic Electronic Signature), The lowest and simplest version just containing the SignedInfo, SignatureValue, KeyInfo |
https://en.wikipedia.org/wiki/Ring%20species | In biology, a ring species is a connected series of neighbouring populations, each of which interbreeds with closely sited related populations, but for which there exist at least two "end populations" in the series, which are too distantly related to interbreed, though there is a potential gene flow between each "linked" population and the next. Such non-breeding, though genetically connected, "end populations" may co-exist in the same region (sympatry), thus closing a "ring". A German term meaning "circle of races" is also used.
Ring species represent speciation and have been cited as evidence of evolution. They illustrate what happens over time as populations genetically diverge, specifically because they represent, in living populations, what normally happens over time between long-deceased ancestor populations and living populations, in which the intermediates have become extinct. The evolutionary biologist Richard Dawkins remarks that ring species "are only showing us in the spatial dimension something that must always happen in the time dimension".
Formally, the issue is that interfertility (ability to interbreed) is not a transitive relation; if A breeds with B, and B breeds with C, it does not mean that A breeds with C, and therefore does not define an equivalence relation. A ring species is a species with a counterexample to the transitivity of interbreeding. However, it is unclear whether any of the examples of ring species cited by scientists actually permit gene flow from end to end, with many being debated and contested.
History
The classic ring species is the Larus gull. In 1925 Jonathan Dwight found the genus to form a chain of varieties around the Arctic Circle. However, doubts have arisen as to whether this represents an actual ring species. In 1938, Claud Buchanan Ticehurst argued that the greenish warbler had spread from Nepal around the Tibetan Plateau, while adapting to each new environment, meeting again in Siberia where the ends no lon |
https://en.wikipedia.org/wiki/Knocking%20on%20Heaven%27s%20Door%20%28book%29 | Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World is the second non-fiction book by Lisa Randall. It was initially published on September 20, 2011, by Ecco Press. The title is explained in the text: "Scientists knock on heaven's door in an attempt to cross the threshold separating the known from the unknown."
Review
—American Scientist
See also
Higgs boson
Higgs mechanism |
https://en.wikipedia.org/wiki/Marcel%20Loncin%20Research%20Prize | The Marcel Loncin Research Prize was established in 1994. It is awarded by the Institute of Food Technologists (IFT) in even-numbered years to fund basic chemistry, physics, and/or engineering research applied to food processing and improving food quality. It is named for Marcel Loncin (1920-1995), a Belgian-born, French chemical engineer who did food engineering research while a professor at the Centre d'Enseignement et de Recherches des Industries Alimentaires (CERIA) and afterwards at the Food Engineering Department of the Universität Karlsruhe (TH), Germany. As of 2006, it was the third and final IFT award to have been named for a then-living person.
Award winners receive USD 50,000 in two annual installments and a plaque from the Marcel Loncin Endowment Fund of the IFT.
Winners |
https://en.wikipedia.org/wiki/E%20Ink | E Ink (electronic ink) is a brand of electronic paper (e-paper) display technology commercialized by the E Ink Corporation, which was co-founded in 1997 by MIT undergraduates JD Albert and Barrett Comiskey, MIT Media Lab professor Joseph Jacobson, Jerome Rubin and Russ Wilcox.
It is available in grayscale and color and is used in mobile devices such as e-readers, digital signage, smartwatches, mobile phones, electronic shelf labels and architecture panels.
History
Background
The notion of a low-power paper-like display had existed since the 1970s, originally conceived by researchers at Xerox PARC, but had never been realized. While a post-doctoral student at Stanford University, physicist Joseph Jacobson envisioned a multi-page book with content that could be changed at the push of a button and required little power to use.
Neil Gershenfeld recruited Jacobson for the MIT Media Lab in 1995, after hearing Jacobson's ideas for an electronic book. Jacobson, in turn, recruited MIT undergrads Barrett Comiskey, a math major, and J.D. Albert, a mechanical engineering major, to create the display technology required to realize his vision.
Product development
The initial approach was to create tiny spheres which were half white and half black, and which, depending on the electric charge, would rotate such that the white side or the black side would be visible on the display. Albert and Comiskey were told this approach was impossible by most experienced chemists and materials scientists and had trouble creating these perfectly half-white, half-black spheres; during his experiments, Albert accidentally created some all-white spheres.
Comiskey experimented with charging and encapsulating those all-white particles in microcapsules mixed in with a dark dye. The result was a system of microcapsules that could be applied to a surface and could then be charged independently to create black and white images. A first patent was filed by MIT for the microencapsulated electrophore |
https://en.wikipedia.org/wiki/Murray%27s%20law | In biophysical fluid dynamics, Murray's law is a potential relationship between radii at junctions in a network of fluid-carrying tubular pipes. Its simplest version proposes that whenever a branch of radius r₀ splits into two branches of radii r₁ and r₂, then all three radii should obey the equation r₀³ = r₁³ + r₂³. If network flow is smooth and leak-free, then systems that obey Murray's law minimize the resistance to flow through the network. For turbulent networks, the law takes the same form but with a different characteristic exponent.
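As a concrete illustration (not from the article; the function name and example values are invented for this sketch), a small helper can check how closely a single junction obeys the cube relation:

```python
def murray_deviation(r_parent, r_daughters, exponent=3.0):
    """Relative deviation from Murray's law at one junction (0.0 means exact agreement)."""
    lhs = r_parent ** exponent
    rhs = sum(r ** exponent for r in r_daughters)
    return (lhs - rhs) / lhs

# A parent of radius 1.0 splitting into two equal daughters of radius 2**(-1/3)
# satisfies the cube relation exactly.
print(murray_deviation(1.0, [2 ** (-1 / 3), 2 ** (-1 / 3)]))  # ~0.0
```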
Murray's law is observed in the vascular and respiratory systems of animals, xylem in plants, and the respiratory system of insects. In principle, Murray's law also applies to biomimetic engineering, but human designs rarely exploit the law.
Murray's law is named after Cecil D. Murray, a physiologist at Bryn Mawr College, who first argued that efficient transport might determine the structure of the human vascular system.
Assumptions
Murray's law assumes material is passively transported by the flow of fluid in a network of tubular pipes, and that said network requires energy both to maintain flow and structural integrity. Variation in the fluid viscosity across scales will affect the Murray's law exponent, but is usually too small to matter.
At least two different conditions are known in which the cube exponent is optimal.
In the first, organisms have free (variable) circulatory volume. Also, maintenance energy is not proportional to the amount of pipe material, but instead to the quantity of working fluid. The latter assumption is justified in metabolically active biological fluids, such as blood. It is also justified for metabolically inactive fluids, such as air, as long as the energetic "cost" of the infrastructure scales with the cross-sectional area of each tube; such is the case for all known biological tubules.
In the second, organisms have fixed circulatory volume and pressure, but wish to minimize the resistance to flow through the system. |
https://en.wikipedia.org/wiki/Surface-conduction%20electron-emitter%20display | A surface-conduction electron-emitter display (SED) was a display technology for flat panel displays developed by a number of companies. SEDs used nanoscopic-scale electron emitters to energize colored phosphors and produce an image. In a general sense, a SED consists of a matrix of tiny cathode-ray tubes, each "tube" forming a single sub-pixel on the screen, grouped in threes to form red-green-blue (RGB) pixels. SEDs combine the advantages of CRTs, namely their high contrast ratios, wide viewing angles, and very fast response times, with the packaging advantages of LCD and other flat panel displays.
After considerable time and effort in the early and mid-2000s, SED efforts started winding down in 2009 as LCD became the dominant technology. In August 2010, Canon announced they were shutting down their joint effort to develop SEDs commercially, signaling the end of development efforts. SEDs were closely related to another developing display technology, the field emission display, or FED, differing primarily in the details of the electron emitters. Sony, the main backer of FED, has similarly backed off from their development efforts.
Description
A conventional cathode ray tube (CRT) is powered by an electron gun, essentially an open-ended vacuum tube. At one end of the gun, electrons are produced by "boiling" them off a metal filament, which requires relatively high currents and consumes a large proportion of the CRT's power. The electrons are then accelerated and focused into a fast-moving beam, flowing forward towards the screen. Electromagnets surrounding the gun end of the tube are used to steer the beam as it travels forward, allowing the beam to be scanned across the screen to produce a 2D display. When the fast-moving electrons strike the phosphor on the back of the screen, light is produced. Color images are produced by painting the screen with spots or stripes of three colored phosphors, each for red, green, and blue (RGB). When viewed from a distance, the |
https://en.wikipedia.org/wiki/Method%20of%20averaging | In mathematics, more specifically in dynamical systems, the method of averaging (also called averaging theory) exploits systems containing a time-scale separation: a fast oscillation versus a slow drift. It suggests that we average over a given interval of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting dynamics. The approximation holds over a finite time interval inversely proportional to the small parameter denoting the slow time scale. There is thus a trade-off between how accurate the approximated solution is and for how long it remains close to the original solution.
More precisely, the system has the following form
$$\dot{x} = \varepsilon f(x, t, \varepsilon), \qquad 0 < \varepsilon \ll 1,$$
of a phase space variable $x$, where $f$ is $T$-periodic in $t$. The fast oscillation is given by $f$ versus a slow drift of $\dot{x}$. The averaging method yields an autonomous dynamical system
$$\dot{y} = \varepsilon \bar{f}(y) := \varepsilon\, \frac{1}{T} \int_0^T f(y, s, 0)\, ds,$$
which approximates the solution curves of $\dot{x}$ inside a connected and compact region of the phase space and over a time of order $1/\varepsilon$.
Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for . In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifold and invariant manifolds, as well as their stability in the phase space of the averaged system.
In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for , with the corresponding averaged system , in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment.
The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics.
First example
Consider a perturbed logistic growth
and the averaged equation
The purpose of the method o |
https://en.wikipedia.org/wiki/Prorastomus | Prorastomus sirenoides is an extinct species of primitive sirenian that lived during the Eocene Epoch 40 million years ago in Jamaica.
Taxonomy
The generic name Prorastomus, a combination of the Greek prōra, "prow", and stoma, "mouth", refers to the lower jaw of the animal "resembling the prow of a wherry".
The genus name Prorastomus comes from the Greek prora meaning "prow" and the Latin stomus meaning "mouth". In 1892, the naturalist Richard Lydekker respelled it as Prorastoma with a feminine ending; however, this was unjustified, as stomus is masculine in Latin.
Prorastomus is one of two genera of the family Prorastomidae, the other being Pezosiren. These two genera are the oldest sirenians, dating to the Eocene.
The first specimen was described by paleontologist Sir Richard Owen in 1855, and, being found in Jamaica in the Yellow Limestone Group, pointed to the origin of Sirenia as being in the New World rather than the Old World as was previously thought. However, the modern understanding of Afrotheria as a clade that originally diversified in Africa overturns this idea. The holotype specimen, BMNH 44897, comprises a skull, jaw, and atlas of the neck vertebrae. When Owen first acquired the skull, it was broken in two between the eyes and the braincase. Another specimen was found in 1989 in the same formation, USNM 437769, comprising the frontal bone, a tusk, vertebrae fragments, and ribs.
Description
While modern sirenians are fully aquatic, Prorastomus was predominantly terrestrial, judging from the structure of its skull. Judging from its crown-shaped molars and the shape of its snout, it fed on soft plants. The snout is long, narrow, and bulbous at the tip. The nasal bones are larger than those of other sirenians. The nasal ridge is well developed, indicating it had a good sense of smell. The frontal bones are smaller than usual for sirenians, though, as in other sirenians, it had a pronounced brow ridge. Since Pezosiren has a sagittal crest, it is possible the Prorastomus spe |
https://en.wikipedia.org/wiki/Horns%20of%20Ammon | The horns of Ammon were curling ram horns, used as a symbol of the Egyptian deity Ammon (also spelled Amun or Amon). Because of the visual similarity, they were also associated with the fossilized shells of ancient snails and cephalopods, the latter now known as ammonites because of that historical connection.
Classical iconography
Ammon, eventually Amon-Ra, was a deity in the Egyptian pantheon whose popularity grew over the years, eventually developing into a monotheistic religion in a way similar to the proposal that the Judeo-Christian deity evolved out of the ancient Semitic pantheon. Egyptian pharaohs came to follow this religion for a while, Amenhotep and Tutankhamun taking their names from the deity. This trend caught on, with other Egyptian gods also sometimes being described as aspects of Amun.
Ammon was often depicted with ram's horns, and as this deity became a symbol of supremacy, kings and emperors came to be depicted in profile with horns of Ammon on the sides of their heads, as were deities not only of Egypt but of other areas: after Rome conquered Egypt, Jupiter was sometimes depicted as "Jupiter Ammon", replete with horns of Ammon, as was the Greek supreme deity Zeus. Alexander the Great's deification as a conqueror had involved being declared the metaphorical "Son of Ammon" by the Oracle at Siwa. This tradition is thought by some to have continued for centuries, with Alexander allegedly being referred to in the Quran as "Dhu al-Qarnayn" (The Two-Horned One), a supposed reference to his depiction on Middle Eastern coins and statuary with horns of Ammon, despite the fact that most scholars of Islamic exegesis consider Dhu al-Qarnayn to have no relationship with Alexander, and that the claim of this ancient pagan tradition persisting through several centuries of Christianity into early Islamic times appears neither sustainable nor reasonable.
Pliny the Elder was among the earliest write |
https://en.wikipedia.org/wiki/Magosphaera%20planula | Magosphaera planula was a spherical multiflagellated multicellular microorganism discovered by Ernst Haeckel in September 1869 while he was collecting sponges off Gisøy island off the coast of Norway. He claimed to have seen it break up into separate cells which then became amoeboid. Nobody else has found it, and he kept no specimens of it. It played an important part in theories of metazoan phylogeny into the early 20th century. |
https://en.wikipedia.org/wiki/Sorting%20network | In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
Introduction
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounters a comparator, the comparator swaps the values if and only if the top wire's value is greater than or equal to the bottom wire's value.
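To make the wire-and-comparator picture concrete, here is a minimal sketch (not from the article; the constant and function names are invented) that applies a standard five-comparator network for four inputs:

```python
# A standard optimal sorting network for four inputs: five comparators, listed as wire pairs.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(values, network=NETWORK_4):
    """Run the values through the fixed comparator sequence and return the sorted wires."""
    wires = list(values)
    for i, j in network:
        if wires[i] > wires[j]:          # comparator: smaller value moves to the top wire
            wires[i], wires[j] = wires[j], wires[i]
    return wires

print(apply_network([3, 1, 4, 2]))  # [1, 2, 3, 4]
```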
In |
https://en.wikipedia.org/wiki/Management%20information%20system | A management information system (MIS) is an information system used for decision-making, and for the coordination, control, analysis, and visualization of information in an organization. The study of the management information systems involves people, processes and technology in an organizational context.
In a corporate setting, the ultimate goal of using a management information system is to increase the value and profits of the business.
History
While it can be contested that the history of management information systems dates as far back as companies using ledgers to keep track of accounting, the modern history of MIS can be divided into five eras originally identified by Kenneth C. Laudon and Jane Laudon in their seminal textbook Management Information Systems.
First era – Mainframe and minicomputer computing
Second era – Personal computers
Third era – Client/server networks
Fourth era – Enterprise computing
Fifth era – Cloud computing
The first era (mainframe and minicomputer computing) was ruled by IBM and their mainframe computers for which they supplied both the hardware and software. These computers would often take up whole rooms and require teams to run them. As technology advanced, these computers were able to handle greater capacities and therefore reduce their cost. Smaller, more affordable minicomputers allowed larger businesses to run their own computing centers in-house / on-site / on-premises.
The second era (personal computers) began in 1965 as microprocessors started to compete with mainframes and minicomputers and accelerated the process of decentralizing computing power from large data centers to smaller offices. In the late 1970s, minicomputer technology gave way to personal computers and relatively low-cost computers were becoming mass market commodities, allowing businesses to provide their employees access to computing power that ten years before would have cost tens of thousands of dollars. This proliferation of computers created |
https://en.wikipedia.org/wiki/Autoassociative%20memory | Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that is able to retrieve a piece of data from only a tiny sample of itself. Such memories are very effective in de-noising or removing interference from the input and can be used to determine whether the given input is “known” or “unknown”.
In artificial neural networks, examples include the variational autoencoder, the denoising autoencoder, and the Hopfield network.
In reference to computer memory, the idea of associative memory is also referred to as Content-addressable memory (CAM).
The net is said to recognize a “known” vector if the net produces a pattern of activation on the output units which is the same as one of the vectors stored in it.
Background
Traditional memory
Traditional memory stores data at a unique address and can recall the data upon presentation of the complete unique address.
Autoassociative memory
Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Hopfield networks have been shown to act as autoassociative memory since they are capable of remembering data by observing a portion of that data.
Iterative Autoassociative Net
In some cases, an auto-associative net does not reproduce a stored pattern the first time around, but if the result of the first showing is input to the net again, the stored pattern is reproduced. Iterative autoassociative nets are of three further kinds: the recurrent linear autoassociator, the brain-state-in-a-box net, and the discrete Hopfield net. The Hopfield network is the most well-known example of an autoassociative memory.
Hopfield Network
Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, and they have been shown to act as autoassociative since they are capable of remembering data by observing a portion of that data.
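As a minimal illustration of autoassociative recall (a sketch, not from the article; it uses the standard Hebbian outer-product rule with synchronous updates, and the stored patterns are arbitrary):

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product learning for +/-1 patterns (one pattern per row)."""
    weights = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(weights, 0.0)       # no self-connections
    return weights

def recall(weights, cue, steps=10):
    """Iteratively update all units until the state settles on a stored pattern."""
    state = cue.copy()
    for _ in range(steps):               # synchronous updates, for brevity
        state = np.where(weights @ state >= 0, 1, -1)
    return state

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
weights = train(stored)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last bit flipped
print(recall(weights, noisy))            # settles back to [ 1 -1  1 -1  1 -1]
```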
Heteroassociative memory
Heteroassociative memories, on the other hand, can recall an associated pi |
https://en.wikipedia.org/wiki/Radial%20groove | The radial groove (also known as the musculospiral groove, radial sulcus, or spiral groove) is a broad but shallow oblique depression for the radial nerve and deep brachial artery. It is located on the center of the lateral border of the humerus bone. It is situated alongside the posterior margin of the deltoid tuberosity, ending at its inferior margin.
Although it provides protection to the radial nerve, it is often involved in compression of the nerve (for example, from external pressure during surgery), which can cause radial nerve palsy.
See also
Intertubercular groove
Triceps brachii muscle
Additional images |
https://en.wikipedia.org/wiki/Label%20%28computer%20science%29 | In programming languages, a label is a sequence of characters that identifies a location within source code. In most languages, labels take the form of an identifier, often followed by a punctuation character (e.g., a colon). In many high-level languages, the purpose of a label is to act as the destination of a GOTO statement. In assembly language, labels can be used anywhere an address can (for example, as the operand of a JMP or MOV instruction). This is also true in Pascal and its derived variations. Some languages, such as Fortran and BASIC, support numeric labels. Labels are also used to identify an entry point into a compiled sequence of statements (e.g., during debugging).
C
In C, a label identifies a statement in the code. A single statement can have multiple labels. Labels merely indicate locations in the code, and reaching a label has no effect on the actual execution.
Function labels
Function labels consist of an identifier, followed by a colon. Each such label points to a statement in a function and its identifier must be unique within that function. Other functions may use the same name for a label. Label identifiers occupy their own namespace – one can have variables and functions with the same name as a label.
#include <stdio.h>

void bar(int number);  /* assumed to be defined elsewhere */

void foo(int number)
{
    if (number < 0)
        goto error;
    bar(number);
    return;
error:
    fprintf(stderr, "Invalid number!\n");
}
Here error is the label. The statement goto can be used to jump to a labeled statement in the code. After a goto, program execution continues with the statement after the label.
Switch labels
Two types of labels can be put in a switch statement. A case label consists of the keyword case, followed by an expression that evaluates to integer constant. A default label consists of the keyword default. Case labels are used to associate an integer value with a statement in the code. When a switch statement is reached, program execution continues with the statement after the case label with value that matches the v |
https://en.wikipedia.org/wiki/Polyscias%20nothisii | Polyscias nothisii is a species of plant in the family Araliaceae for which no name has yet been published. It is endemic to New Caledonia.
It is found in dry forest at low altitude on calcareous or volcano-sedimentary substrate. The main threats are habitat degradation caused by recurrent bushfires and invasive species (such as the rusa deer, Rusa timorensis), as well as land clearing. Its extent of occurrence (EOO) and area of occupancy (AOO) are estimated to be 1,536 km2 and 44 km2, respectively, while the number of locations is estimated to be five. Polyscias nothisii is therefore assessed as Endangered (EN) under criteria B1ab(iii)+2ab(iii), with a continuing decline in habitat quality. |
https://en.wikipedia.org/wiki/Nova%20classification | The Nova classification ('new classification') is a framework for grouping edible substances based on the extent and purpose of food processing applied to them. Researchers at the University of São Paulo, Brazil, proposed the system in 2009.
Nova classifies food into four groups:
Unprocessed or minimally processed foods
Processed culinary ingredients
Processed foods
Ultra-processed foods
The system has been used worldwide in nutrition and public health research, policy, and guidance as a tool for understanding the health implications of different food products.
History
The Nova classification grew out of the research of Carlos Augusto Monteiro. Born in 1948 into a family straddling the divide between poverty and relative affluence in Brazil, Monteiro's journey began as the first member of his family to attend university. His early research in the late 1970s focused on malnutrition, reflecting the prevailing emphasis in nutrition science of the time. In the mid-1990s, Monteiro observed a significant shift in Brazil's dietary landscape marked by a rise in obesity rates among economically disadvantaged populations, while more affluent areas saw declines. This transformation led him to explore dietary patterns holistically, rather than focusing solely on individual nutrients. Employing statistical methods, Monteiro identified two distinct eating patterns in Brazil: one rooted in traditional foods like rice and beans and another characterized by the consumption of highly processed products.
The classification's name is from the title of the original scientific article in which it was published, 'A new classification of foods'. The idea of applying this as the classification's name is credited to Jean-Claude Moubarac of the Université de Montréal. The name is often styled in capital letters, NOVA, but it is not an acronym. Recent scientific literature leans towards writing the name as Nova, including papers written with Monteiro's involvement.
Nova food pr |
https://en.wikipedia.org/wiki/Rational%20motion | In kinematics, the motion of a rigid body is defined as a continuous set of displacements. One-parameter motions can be defined as a continuous displacement of a moving object with respect to a fixed frame in Euclidean three-space (E3), where the displacement depends on one parameter, mostly identified as time.
Rational motions are defined by rational functions (ratio of two polynomial functions) of time. They produce rational trajectories, and therefore they integrate well with the existing NURBS (Non-Uniform Rational B-Spline) based industry standard CAD/CAM systems. They are readily amenable to the applications of existing computer-aided geometric design (CAGD) algorithms. By combining kinematics of rigid body motions with NURBS geometry of curves and surfaces, methods have been developed for computer-aided design of rational motions.
These CAD methods for motion design find applications in animation in computer graphics (key frame interpolation), trajectory planning in robotics (taught-position interpolation), spatial navigation in virtual reality, computer-aided geometric design of motion via interactive interpolation, CNC tool path planning, and task specification in mechanism synthesis.
Background
There has been a great deal of research in applying the principles of computer-aided geometric design (CAGD) to the problem of computer-aided motion design.
In recent years, it has been well established that rational Bézier and rational B-spline based curve representation schemes can be combined with dual quaternion representation of spatial displacements to obtain rational Bézier and B-spline
motions. Ge and Ravani developed a new framework for geometric constructions of spatial motions by combining concepts from kinematics and CAGD. Their work was built upon the seminal paper of Shoemake, in which he used the concept of a quaternion for rotation interpolation. A detailed list of references on this topic can be found in the literature.
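As a small illustration of the rotation-interpolation idea attributed to Shoemake above, here is a minimal sketch (not from the article; the helper name is invented and quaternions are taken in (w, x, y, z) order) of spherical linear interpolation between two unit quaternions:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1 at parameter t."""
    q0 = np.asarray(q0, dtype=float) / np.linalg.norm(q0)
    q1 = np.asarray(q1, dtype=float) / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                       # take the shorter great-circle arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: fall back to normalized linear interpolation
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Halfway between the identity and a 90-degree rotation about the z-axis.
q_id = [1.0, 0.0, 0.0, 0.0]
q_z90 = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
print(slerp(q_id, q_z90, 0.5))          # a 45-degree rotation about z
```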
Rational Bézier and B-spline |
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20distance | In mathematics, the Fréchet distance is a measure of similarity between curves that takes into account the location and ordering of the points along the curves. It is named after Maurice Fréchet.
Intuitive definition
Imagine a person traversing a finite curved path while walking their dog on a leash, with the dog traversing a separate finite curved path. Each can vary their speed to keep slack in the leash, but neither can move backwards. The Fréchet distance between the two curves is the length of the shortest leash sufficient for both to traverse their separate paths from start to finish. Note that the definition is symmetric with respect to the two curves—the Fréchet distance would be the same if the dog were walking its owner.
Formal definition
Let $(S, d)$ be a metric space. A curve $A$ in $S$ is a continuous map from the unit interval into $S$, i.e., $A : [0,1] \to S$. A reparameterization $\alpha$ of $[0,1]$ is a continuous, non-decreasing surjection $\alpha : [0,1] \to [0,1]$.
Let $A$ and $B$ be two given curves in $S$. Then, the Fréchet distance between $A$ and $B$ is defined as the infimum over all reparameterizations $\alpha$ and $\beta$ of $[0,1]$ of the maximum over all $t \in [0,1]$ of the distance in $S$ between $A(\alpha(t))$ and $B(\beta(t))$. In mathematical notation, the Fréchet distance is
$$F(A,B) = \inf_{\alpha,\beta}\; \max_{t \in [0,1]} \Big\{ d\big(A(\alpha(t)),\, B(\beta(t))\big) \Big\},$$
where $d$ is the distance function of $S$.
Informally, we can think of the parameter $t$ as "time". Then, $A(\alpha(t))$ is the position of the dog and $B(\beta(t))$ is the position of the dog's owner at time $t$ (or vice versa). The length of the leash between them at time $t$ is the distance between $A(\alpha(t))$ and $B(\beta(t))$. Taking the infimum over all possible reparametrizations of $[0,1]$ corresponds to choosing the walk along the given paths where the maximum leash length is minimized. The restriction that $\alpha$ and $\beta$ be non-decreasing means that neither the dog nor its owner can backtrack.
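In practice the continuous Fréchet distance is often approximated by the discrete Fréchet distance between sampled points. The following is a minimal sketch (not from the article; the function names are invented) of the classic dynamic program for polygonal curves:

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two point sequences, via the classic dynamic program."""
    @lru_cache(maxsize=None)
    def coupling(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(coupling(0, j - 1), d)
        if j == 0:
            return max(coupling(i - 1, 0), d)
        return max(min(coupling(i - 1, j), coupling(i - 1, j - 1), coupling(i, j - 1)), d)

    return coupling(len(P) - 1, len(Q) - 1)

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 2), (2, 1)]))  # 2.0
```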
The Fréchet metric takes into account the flow of the two curves because the pairs of points whose distance contributes to the Fréchet distance sweep continuously along their respective curves. This makes the Fréchet distance a better measure of similarity for curves than alternativ |
https://en.wikipedia.org/wiki/Effective%20atomic%20number%20%28compounds%20and%20mixtures%29 | The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, Z. When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining Z. An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials.
In many textbooks and scientific publications, the following simplistic and often dubious sort of method is employed. One such proposed formula for the effective atomic number, Z_eff, is as follows:
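The formula itself did not survive in this excerpt; the commonly cited power-law form, reconstructed here with the frequently quoted exponent 2.94 (an assumption of this sketch, not stated in the excerpt), is:

$$Z_\text{eff} = \left( \sum_i f_i \, Z_i^{\,2.94} \right)^{1/2.94}$$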
where
f_i is the fraction of the total number of electrons associated with each element, and
Z_i is the atomic number of each element.
An example is that of water (H2O), made up of two hydrogen atoms (Z=1) and one oxygen atom (Z=8). The total number of electrons is 1+1+8 = 10, so the fraction of electrons for the two hydrogens is 2/10 and for the one oxygen is 8/10. So the Z_eff for water is:
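Carrying the calculation through with the power-law sketch above (again assuming the exponent 2.94, which the excerpt does not state) gives roughly 7.4; a minimal script, with an invented helper name:

```python
def z_eff_power_law(fractions_and_z, exponent=2.94):
    """Power-law effective atomic number from (electron fraction, Z) pairs."""
    total = sum(f * z ** exponent for f, z in fractions_and_z)
    return total ** (1.0 / exponent)

# Water: hydrogen contributes 2 of 10 electrons (f = 0.2, Z = 1), oxygen 8 of 10 (f = 0.8, Z = 8).
print(round(z_eff_power_law([(0.2, 1), (0.8, 8)]), 2))  # ~7.42
```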
The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions |
https://en.wikipedia.org/wiki/Metodolo%C5%A1ki%20zvezki | Metodološki zvezki – Advances in Methodology and Statistics is a peer-reviewed academic journal covering methodology and statistics, published by the Faculty of Social Sciences of the University of Ljubljana.
Anuška Ferligoj and Andrej Mrvar were the founding editors-in-chief.
Abstracting and indexing
From 2011 to 2019 the journal was abstracted and indexed in Scopus.
See also
List of academic journals published in Slovenia |
https://en.wikipedia.org/wiki/Pulation%20square | In category theory, a branch of mathematics, a pulation square (also called a Doolittle diagram) is a diagram that is simultaneously a pullback square and a pushout square. It is a self-dual concept. |
https://en.wikipedia.org/wiki/New%20Drug%20Application | The Food and Drug Administration's (FDA) New Drug Application (NDA) is the vehicle in the United States through which drug sponsors formally propose that the FDA approve a new pharmaceutical for sale and marketing. Some 30% or less of initial drug candidates proceed through the entire multi-year process of drug development, concluding with an approved NDA, if successful.
The goals of the NDA are to provide enough information to permit FDA reviewers to establish the complete history of the candidate drug. Among facts needed for the application are:
Patent and manufacturing information
Drug safety and specific effectiveness for its proposed use(s) when used as directed
Reports on the design, compliance, and conclusions of completed clinical trials by the Institutional Review Board
Drug susceptibility to abuse
Proposed labeling (package insert) and directions for use
Exceptions to this process include voter driven initiatives for medical marijuana in certain states.
Before trials
To legally test the drug on human subjects in the U.S., the maker must first obtain an Investigational New Drug (IND) designation from FDA. This application is based on nonclinical data, typically from a combination of in vivo and in vitro laboratory safety studies, that shows the drug is safe enough to test in humans. Often the "new" drugs that are submitted for approval include new molecular entities or old medications that have been chemically modified to elicit differential pharmacological effects or reduced side effects.
Clinical trials
The legal requirement for approval is "substantial" evidence of effectiveness demonstrated through controlled clinical trials.
This standard lies at the heart of the regulatory program for drugs. Data for the submission must come from rigorous clinical trials.
The trials are typically conducted in three phases:
Phase 1: The drug is tested in 20 to 100 healthy volunteers to determine its safety at low doses. About 70% of candidate drugs advance |
https://en.wikipedia.org/wiki/Models%20of%20consciousness | Models of consciousness are used to illustrate and aid in understanding and explaining distinctive aspects of consciousness. Sometimes the models are labeled theories of consciousness. Anil Seth defines such models as those that relate brain phenomena such as fast irregular electrical activity and widespread brain activation to properties of consciousness such as qualia. Seth allows for different types of models including mathematical, logical, verbal and conceptual models.
Neuroscience
Neural correlates of consciousness
The Neural correlates of consciousness (NCC) formalism is used as a major step towards explaining consciousness. The NCC are defined to constitute the minimal set of neuronal events and mechanisms sufficient for a specific conscious percept, and consequently sufficient for consciousness. In this formalism, consciousness is viewed as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.
Dehaene–Changeux model
The Dehaene–Changeux model (DCM), also known as the global neuronal workspace or the global cognitive workspace model is a computer model of the neural correlates of consciousness programmed as a neural network. Stanislas Dehaene and Jean-Pierre Changeux introduced this model in 1986. It is associated with Bernard Baars's Global workspace theory for consciousness.
Electromagnetic theories of consciousness
Electromagnetic theories of consciousness propose that consciousness can be understood as an electromagnetic phenomenon that occurs when a brain produces an electromagnetic field with specific characteristics. Some electromagnetic theories are also quantum mind theories of consciousness.
Orchestrated objective reduction
Orchestrated objective reduction (Orch-OR) model is based on the hypothesis that consciousness in the brain originates from quantum processes inside neurons, rather than from connections between neurons (the conventional view). The mechanism is held to be |
https://en.wikipedia.org/wiki/Independent%20set%20%28graph%20theory%29 | In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening.
A maximal independent set is an independent set that is not a proper subset of any other independent set.
A maximum independent set is an independent set of largest possible size for a given graph G. This size is called the independence number of G and is usually denoted by α(G). The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph.
Every maximum independent set also is maximal, but the converse implication does not necessarily hold.
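For small graphs the definitions can be checked directly by brute force; the following sketch (not from the article; function names invented) enumerates vertex subsets from largest to smallest and returns the first independent one found, i.e. a maximum independent set:

```python
from itertools import combinations

def is_independent(subset, edges):
    """True if no edge has both endpoints inside the subset."""
    return not any(u in subset and v in subset for u, v in edges)

def maximum_independent_set(vertices, edges):
    """Exhaustive search: try subsets from largest to smallest."""
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if is_independent(set(subset), edges):
                return set(subset)
    return set()

# A 5-cycle: its independence number is 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(maximum_independent_set(list(range(5)), edges))
```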
Properties
Relationship to other graph parameters
A set is independent if and only if it is a clique in the graph’s complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory.
A set is independent if and only if its complement is a vertex cover. Therefore, the sum of the size of the largest independent set and the size of a minimum vertex cover is equal to the number of vertices in the graph.
A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets. Hence the minimal number of colors needed in a vertex coloring, the chromatic number χ(G), is at least the quotient of the number of vertices of G and the independence number α(G).
In a bipartite g |
https://en.wikipedia.org/wiki/Expander%20cycle | The expander cycle is a power cycle of a bipropellant rocket engine. In this cycle, the fuel is used to cool the engine's combustion chamber, picking up heat and changing phase. The now heated and gaseous fuel then powers the turbine that drives the engine's fuel and oxidizer pumps before being injected into the combustion chamber and burned.
Because of the necessary phase change, the expander cycle is thrust-limited by the square-cube law. When a bell-shaped nozzle is scaled up, the nozzle surface area available to heat the fuel increases as the square of the radius, but the volume of fuel to be heated increases as the cube of the radius. Thus beyond approximately 300 kN (70,000 lbf) of thrust, there is no longer enough nozzle area to heat enough fuel to drive the turbines and hence the fuel pumps. Higher thrust levels can be achieved using a bypass expander cycle, in which a portion of the fuel bypasses the turbine and/or the thrust chamber cooling passages and goes directly to the main chamber injector. Non-toroidal aerospike engines are not subject to the limitations of the square-cube law because the engine's linear shape does not scale isometrically: the fuel flow and nozzle area scale linearly with the engine's width. All expander cycle engines need to use a cryogenic fuel such as liquid hydrogen, liquid methane, or liquid propane that easily reaches its boiling point.
Some expander cycle engines may use a gas generator of some kind to start the turbine and run the engine until the heat input from the thrust chamber and nozzle skirt increases as the chamber pressure builds up.
Some examples of an expander cycle engine are the Aerojet Rocketdyne RL10 and the Vinci engine for the future Ariane 6.
Expander bleed cycle
This operational cycle is a modification of the traditional expander cycle. In the bleed (or open) cycle, instead of routing all of the heated propellant through the turbine and sending it back to be combusted, only a small portion of the heated pr |
https://en.wikipedia.org/wiki/Percentile | In statistics, a k-th percentile, also known as percentile score or centile, is a score below which a given percentage k of scores in its frequency distribution falls ("exclusive" definition) or a score at or below which a given percentage falls ("inclusive" definition).
Percentiles are expressed in the same unit of measurement as the input scores, not in percent; for example, if the scores refer to human weight, the corresponding percentiles will be expressed in kilograms or pounds.
In the limit of an infinite sample size, the percentile approximates the percentile function, the inverse of the cumulative distribution function.
Percentiles are a type of quantiles, obtained adopting a subdivision into 100 groups.
The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3).
For example, the 50th percentile (median) is the score below which (or at or below which, depending on the definition) 50% of the scores in the distribution are found.
A related quantity is the percentile rank of a score, expressed in percent, which represents the fraction of scores in its distribution that are less than it, an exclusive definition.
Percentile scores and percentile ranks are often used in the reporting of test scores from norm-referenced tests, but, as just noted, they are not the same. For percentile ranks, a score is given and a percentage is computed. Percentile ranks are exclusive: if the percentile rank for a specified score is 90%, then 90% of the scores were lower. In contrast, for percentiles a percentage is given and a corresponding score is determined, which can be either exclusive or inclusive. The score for a specified percentage (e.g., 90th) indicates a score below which (exclusive definition) or at or below which (inclusive definition) other scores in the distribution fall.
Definitions
There is no standard definition of percentile,
however all definitions yield similar results when the number of observat |
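As a hedged illustration of the exclusive and inclusive readings discussed above, the following sketch implements a simple nearest-rank percentile and an exclusive percentile rank; the function names and sample data are invented for the example.

```python
import math

def percentile_nearest_rank(scores, k):
    """k-th percentile, inclusive nearest-rank definition:
    the smallest score with at least k% of scores at or below it."""
    data = sorted(scores)
    rank = math.ceil(k / 100 * len(data))   # 1-based ordinal rank
    return data[max(rank, 1) - 1]

def percentile_rank(scores, x):
    """Exclusive percentile rank of x: the percentage of scores strictly below x."""
    return 100 * sum(s < x for s in scores) / len(scores)

data = [15, 20, 35, 40, 50]
print(percentile_nearest_rank(data, 50))  # 35 (the median under this definition)
print(percentile_rank(data, 40))          # 60.0 -> 60% of the scores are below 40
```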
https://en.wikipedia.org/wiki/Ultra-high-definition%20television | Ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD and Super Hi-Vision) today includes 4K UHD and 8K UHD, which are two digital video formats with an aspect ratio of 16:9. These were first proposed by NHK Science & Technology Research Laboratories and later defined and approved by the International Telecommunication Union (ITU).
The Consumer Electronics Association announced on October 17, 2012, that "Ultra High Definition", or "Ultra HD", would be used for displays that have an aspect ratio of 16:9 or wider and at least one digital input capable of carrying and presenting native video at a minimum resolution of 3840 × 2160 pixels. In 2015, the Ultra HD Forum was created to bring together the end-to-end video production ecosystem to ensure interoperability and produce industry guidelines so that adoption of ultra-high-definition television could accelerate. The forum published a list of up to 55 commercial services available around the world offering 4K resolution, up from just 30 in Q3 2015.
The "UHD Alliance", an industry consortium of content creators, distributors, and hardware manufacturers, announced during a Consumer Electronics Show (CES) 2016 press conference its "Ultra HD Premium" specification, which defines resolution, bit depth, color gamut, high dynamic range (HDR) performance required for Ultra HD (UHDTV) content and displays to carry their Ultra HD Premium logo.
Alternative terms
Ultra-high-definition television is also known as Ultra HD, UHD, UHDTV, and 4K. In Japan, 8K UHDTV will be known as Super Hi-Vision since Hi-Vision was the term used in Japan for HDTV. In the consumer electronics market companies had previously only used the term 4K at the 2012 CES but that had changed to "Ultra HD" during CES 2013. "Ultra HD" was selected by the Consumer Electronics Association after extensive consumer research, as the term has also been established with the introduction of "Ultra HD Blu-ray".
Technical details
Resolution
Two resolutio |
https://en.wikipedia.org/wiki/Compact%20operator | In functional analysis, a branch of mathematics, a compact operator is a linear operator T : X → Y, where X and Y are normed vector spaces, with the property that T maps bounded subsets of X to relatively compact subsets of Y (subsets with compact closure in Y). Such an operator is necessarily a bounded operator, and so continuous. Some authors require that X and Y are Banach spaces, but the definition can be extended to more general spaces.
Any bounded operator that has finite rank is a compact operator; indeed, the class of compact operators is a natural generalization of the class of finite-rank operators in an infinite-dimensional setting. When Y is a Hilbert space, it is true that any compact operator is a limit of finite-rank operators, so that the class of compact operators can be defined alternatively as the closure of the set of finite-rank operators in the norm topology. Whether this was true in general for Banach spaces (the approximation property) was an unsolved question for many years; in 1973 Per Enflo gave a counter-example, building on work by Grothendieck and Banach.
The origin of the theory of compact operators is in the theory of integral equations, where integral operators supply concrete examples of such operators. A typical Fredholm integral equation gives rise to a compact operator K on function spaces; the compactness property is shown by equicontinuity. The method of approximation by finite-rank operators is basic in the numerical solution of such equations. The abstract idea of Fredholm operator is derived from this connection.
Equivalent formulations
A linear map T : X → Y between two topological vector spaces is said to be compact if there exists a neighborhood U of the origin in X such that T(U) is a relatively compact subset of Y.
Let X and Y be normed spaces and T : X → Y a linear operator. Then the following statements are equivalent, and some of them are used as the principal definition by different authors:
T is a compact operator;
the image of the unit ball of X under T is relatively compact |
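The finite-dimensional analogue of approximating a compact operator by finite-rank operators can be illustrated numerically; the sketch below (an illustration, not from the article) truncates a singular value decomposition, which gives the best rank-r approximation in the operator norm.

```python
import numpy as np

# Finite-dimensional illustration: truncating the SVD of a matrix A keeps only its
# r largest singular values, giving the best rank-r (finite-rank) approximation.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) * np.exp(-np.arange(6))  # rapidly decaying "spectrum"
U, s, Vt = np.linalg.svd(A)

r = 2
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # rank-r approximation of A
print(np.linalg.norm(A - A_r, 2), s[r])        # operator-norm error equals s[r]
```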
https://en.wikipedia.org/wiki/Intuitionistic%20type%20theory | Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory) is a type theory and an alternative foundation of mathematics.
Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.
Design
Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplished this design goal by internalizing the BHK interpretation. An interesting consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.
Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication (A ⇒ B) corresponds to the type of a function (A → B). This correspondence is called the Curry–Howard isomorphism. Previous type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
Type theory
Intuitionistic type theory has 3 finite types, which are then composed using 5 different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.
If you are unfamiliar with type theory and know set theory, a quick summary is: Types contain terms just like sets contain elements. |
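The Curry–Howard reading described above can be shown concretely in a proof assistant; the following Lean 4 fragment is an illustrative sketch (not from the article) in which a proof of an implication is literally a function and a proof of a conjunction is a pair.

```lean
-- Implication corresponds to the function type: a proof of P → Q is a function
-- taking proofs of P to proofs of Q (Curry–Howard).
def modusPonens (P Q : Prop) (h : P → Q) (hp : P) : Q := h hp

-- Conjunction corresponds to a pair type: a proof of P ∧ Q packages two proofs.
def swapAnd (P Q : Prop) (h : P ∧ Q) : Q ∧ P := ⟨h.right, h.left⟩
```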
https://en.wikipedia.org/wiki/Anorectal%20anomalies | Anorectal anomalies are congenital malformations of the anus and rectum. One anal anomaly, imperforate anus has an estimated incidence of 1 in 5000 births. It affects boys and girls with similar frequency.
Examples of anorectal anomalies include:
Anal stenosis
Proctitis
Anal bleeding
Anal fistula
See also
Imperforate anus |
https://en.wikipedia.org/wiki/Kaplansky%27s%20conjectures | The mathematician Irving Kaplansky is notable for proposing numerous conjectures in several branches of mathematics, including a list of ten conjectures on Hopf algebras. They are usually known as Kaplansky's conjectures.
Group rings
Let K be a field, and G a torsion-free group. Kaplansky's zero divisor conjecture states:
The group ring K[G] does not contain nontrivial zero divisors, that is, it is a domain.
Two related conjectures are known as, respectively, Kaplansky's idempotent conjecture:
K[G] does not contain any non-trivial idempotents, i.e., if a^2 = a, then a = 0 or a = 1.
and Kaplansky's unit conjecture (which was originally made by Graham Higman and popularized by Kaplansky):
K[G] does not contain any non-trivial units, i.e., if ab = 1 in K[G], then a = kg for some k in K and g in G.
The zero-divisor conjecture implies the idempotent conjecture and is implied by the unit conjecture. As of 2021, the zero divisor and idempotent conjectures are open. The unit conjecture, however, was disproved for fields of positive characteristic by Giles Gardam in February 2021: he published a preprint on the arXiv that constructs a counterexample. The field is of characteristic 2. (see also: Fibonacci group)
There are proofs of both the idempotent and zero-divisor conjectures for large classes of groups. For example, the zero-divisor conjecture is known for all torsion-free elementary amenable groups (a class including all virtually solvable groups), since their group algebras are known to be Ore domains. It follows that the conjecture holds more generally for all residually torsion-free elementary amenable groups. Note that when K is a field of characteristic zero, then the zero-divisor conjecture is implied by the Atiyah conjecture, which has also been established for large classes of groups.
The idempotent conjecture has a generalisation, the Kadison idempotent conjecture, also known as the Kadison–Kaplansky conjecture, for elements in the reduced group C*-algebra. In this setting, it is known that if the Fa |
https://en.wikipedia.org/wiki/Compaq%20Presario | Presario is a discontinued line of consumer desktop computers and notebooks originally produced by Compaq. The Presario family of computers was introduced in September 1993.
In the mid-1990s, Compaq began manufacturing PC monitors under the Presario brand. A series of all-in-one units, containing both the PC and the monitor in the same case, were also released.
After Compaq merged with HP in 2002, the Presario line of desktops and laptops was sold concurrently with HP's other products, such as the HP Pavilion. The Presario laptops subsequently replaced the then-discontinued HP OmniBook line of notebooks around that same year.
The Presario brand name continued to be used for low-end home desktops and laptops from 2002 up until the Compaq brand name was discontinued by HP in 2013.
Desktop PC series
Compaq Presario 2100
Compaq Presario 2200
Compaq Presario 2240
Compaq Presario 2254
Compaq Presario 2256
Compaq Presario 2285V
Compaq Presario 2286
Compaq Presario 2288
Compaq Presario 4108
Compaq Presario 4110
Compaq Presario 4160
Compaq Presario 4505
Compaq Presario 4508
Compaq Presario 4528
Compaq Presario 4532
Compaq Presario 4540
Compaq Presario 4600
Compaq Presario 4620
Compaq Presario 4712
Compaq Presario 4800
Compaq Presario 5000 series
Compaq Presario 5000
Compaq Presario 5006US
Compaq Presario 5008US
Compaq Presario 5000A
Compaq Presario 5000T
Compaq Presario 5000Z
Compaq Presario 5010
Compaq Presario 5030
Compaq Presario 5050
Compaq Presario 5080
Compaq Presario 5100 series
Compaq Presario 5150
Compaq Presario 5170
Compaq Presario 5184
Compaq Presario 5185
Compaq Presario 5190
Compaq Presario 5200 series
Compaq Presario 5202
Compaq Presario 5222
Compaq Presario 5240
Compaq Presario 5280
Compaq Presario 5285
Compaq Presario 5360
Compaq Presario 5400
Compaq Presario 5460
Compaq Presario 5477
Compaq Presario 5500
Compaq Presario 5520
Compaq Presario 5599
Compaq Presario 5600 series
Compaq Presario 5660
Compa |
https://en.wikipedia.org/wiki/List%20of%20concept-%20and%20mind-mapping%20software | Concept mapping and mind mapping software is used to create diagrams of relationships between concepts, ideas, or other pieces of information. It has been suggested that the mind mapping technique can improve learning and study efficiency up to 15% over conventional note-taking. Many software packages and websites allow creating or otherwise supporting mind maps.
File format
Using a standard file format allows interchange of files between various programs. Many programs listed below support the OPML file format and the XML file format used by FreeMind.
Free and open-source
The following tools comply with the Free Software Foundation's (FSF) definition of free software. As such, they are also open-source software.
Freeware
The following is a list of notable concept mapping and mind mapping applications which are freeware and available at no cost. Some are open source and others are proprietary software.
Proprietary software
The table below lists pieces of proprietary commercial software that allow creating mind and concept maps.
See also
Brainstorming
List of Unified Modeling Language tools
Outliner
Study software |
https://en.wikipedia.org/wiki/Musical%20Symbols%20%28Unicode%20block%29 | Musical Symbols is a Unicode block containing characters for representing modern musical notation. Fonts that support it include Bravura, Euterpe, FreeSerif, Musica and Symbola. The Standard Music Font Layout (SMuFL), which is supported by the MusicXML format, expands on the Musical Symbols Unicode Block's 220 glyphs by using the Private Use Area in the Basic Multilingual Plane, permitting close to 2600 glyphs.
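As a small illustration (not from the article), the block occupies code points U+1D100 through U+1D1FF, so its characters can be produced directly from those code points, assuming a font with coverage such as the ones named above.

```python
# Musical Symbols block: U+1D100 .. U+1D1FF
print(chr(0x1D11E))   # MUSICAL SYMBOL G CLEF
print(chr(0x1D122))   # MUSICAL SYMBOL F CLEF
print(" ".join(chr(cp) for cp in range(0x1D100, 0x1D110)))  # first 16 code points of the block
```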
Block
History
The following Unicode-related documents record the purpose and process of defining specific characters in the Musical Symbols block:
See also
Ancient Greek Musical Notation (Unicode block)
Byzantine Musical Symbols (Unicode block)
Znamenny Musical Notation (Unicode block)
List of musical symbols |
https://en.wikipedia.org/wiki/GQM | GQM, the initialism for goal, question, metric, is an established goal-oriented approach to software metrics to improve and measure software quality.
History
GQM has been promoted by Victor Basili of the University of Maryland, College Park and the Software Engineering Laboratory at the NASA Goddard Space Flight Center after supervising a Ph.D. thesis by Dr. David M. Weiss. Dr. Weiss' work was inspired by the work of Albert Endres at IBM Germany.
Method
GQM defines a measurement model on three levels:
1. Conceptual level (Goal) A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view and relative to a particular environment.
2. Operational level (Question) A set of questions is used to define models of the object of study and then focuses on that object to characterize the assessment or achievement of a specific goal.
3. Quantitative level (Metric) A set of metrics, based on the models, is associated with every question in order to answer it in a measurable way.
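A hedged sketch of how such a three-level GQM model might be written down as data; the goal, questions, and metrics below are invented examples, not taken from the source.

```python
# Hypothetical GQM model: one goal, refined into questions, each answered by metrics.
gqm_model = {
    "goal": "Improve reliability of the release process from the maintainer's viewpoint",
    "questions": {
        "Q1: How often do releases fail?": ["failed releases / total releases per quarter"],
        "Q2: How quickly are failures detected?": ["mean time to detection (hours)"],
        "Q3: How quickly are failures repaired?": ["mean time to repair (hours)"],
    },
}

for question, metrics in gqm_model["questions"].items():
    print(question, "->", ", ".join(metrics))
```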
GQM stepwise
Another interpretation of the procedure is:
Planning
Definition
Data collection
Interpretation
Sub-steps
Sub-steps are needed for each phase. To complete the definition phase, an eleven-step procedure is proposed:
Define measurement goals
Review or produce software process models
Conduct GQM interviews
Define questions and hypotheses
Review questions and hypotheses
Define metrics
Check metrics on consistency and completeness
Produce GQM plan
Produce measurement plan
Produce analysis plan
Review plans
Recent developments
The GQM+Strategies approach was developed by Victor Basili and a group of researchers from the Fraunhofer Society. It is based on the Goal Question Metric paradigm and adds the capability to create measurement programs that ensure alignment between business goals and strategies, software-specific goals, and measurement goals.
Novel application of GQM towards business data are |
https://en.wikipedia.org/wiki/Water%20hole%20%28radio%29 | The waterhole, or water hole, is an especially quiet band of the electromagnetic spectrum between 1420 and 1662 megahertz, corresponding to wavelengths of 21 and 18 centimeters, respectively. It is a popular observing frequency used by radio telescopes in radio astronomy.
The strongest hydroxyl radical spectral line radiates at 18 centimeters, and atomic hydrogen at 21 centimeters (the hydrogen line). These two species, which combine to form water, are widespread in interstellar gas, which means this gas tends to absorb radio noise at these frequencies. Therefore, the spectrum between these frequencies forms a relatively "quiet" channel in the interstellar radio noise background.
Bernard M. Oliver, who coined the term in 1971, theorized that the waterhole would be an obvious band for communication with extraterrestrial intelligence, hence the name, which is a pun: in English, a watering hole is a vernacular reference to a common place to meet and talk. Several programs involved in the search for extraterrestrial intelligence, including SETI@home, search in the waterhole radio frequencies.
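The correspondence between the quoted wavelengths and the band edges can be checked with a one-line calculation; the snippet below is a rough illustration only and ignores the precise rest frequencies of the hydrogen and hydroxyl lines.

```python
c = 299_792_458  # speed of light, m/s

for name, wavelength_cm in [("hydrogen line", 21.1), ("hydroxyl line", 18.0)]:
    f_mhz = c / (wavelength_cm / 100) / 1e6
    print(f"{name}: ~{f_mhz:.0f} MHz")
# hydrogen line: ~1421 MHz, hydroxyl line: ~1666 MHz -- roughly bracketing the 1420-1662 MHz band
```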
See also
BLC1
Wow! signal
Radio source SHGb02+14a
Schelling point |
https://en.wikipedia.org/wiki/Pyrrhocoricin | Pyrrhocoricin is a 20-residue long antimicrobial peptide of the firebug Pyrrhocoris apterus.
Structure and function
Pyrrhocoricin is primarily active against Gram-negative bacteria. The peptide is proline-rich with proline-arginine repeats, as well as a critical threonine residue, which is required for activity through O-glycosylation. Like the antimicrobial peptides drosocin and abaecin, pyrrhocoricin binds to the bacterial protein DnaK, inhibiting cell machinery and replication. Only the L-enantiomer of pyrrhocoricin is active against bacteria. The action of pyrrhocoricin-like peptides is potentiated by the presence of pore-forming peptides, which facilitate the entry of pyrrhocoricin-like peptides into the bacterial cell. Proline-rich peptides like pyrrhocoricin can also bind to microbial ribosomes, preventing protein translation. In the absence of pore-forming peptides, pyrrhocoricin is taken into the bacteria by the action of bacterial uptake permeases. |
https://en.wikipedia.org/wiki/Fleming%20Prize%20Lecture | The Fleming Prize Lecture was started by the Microbiology Society in 1976 and named after Alexander Fleming, one of the founders of the society. It is for early career researchers, generally within 12 years of being awarded their PhD, who have an outstanding independent research record making a distinct contribution to microbiology. Nominations can be made by any member of the society. Nominees do not have to be members.
The award is £1,000 and the awardee is expected to give a lecture based on their research at the Microbiology Society's Annual Conference.
List
The following have been awarded this prize.
1976 Graham Gooday Biosynthesis of the Fungal Wall – Mechanisms and Implications
1977 Peter Newell Cellular Communication During Aggregation of Dictyostelium
1978 George AM Cross Immunochemical Aspects of Antigenic Variation in Trypanosomes
1979 John Beringer The Development of Rhizobium Genetics
1980 Duncan James McGeoch Structural Analysis of Animal Virus Genomes
1981 Dave Sherratt The Maintenance and Propagation of Plasmid Genes in Bacterial Populations
1982 Brian Spratt Penicillin-binding Proteins and the Future of β-Lactam Antibiotics
1983 Ray Dixon The Genetic Complexity of Nitrogen Fixation Herpes Simplex and The Herpes Complex
1984 Paul Nurse Cell Cycle Control in Yeast
1985 Jeffrey Almond Genetic Diversity in Small RNA Viruses
1986 Douglas Kell Forces, Fluxes and Control of Microbial Metabolism
1987 Christopher Higgins Molecular Mechanisms of Membrane Transport: from Microbes to Man
1988 Gordon Dougan An Oral Route to Rational Vaccination
1989 Andrew Davison Varicella-Zoster Virus
1989 Graham J Boulnois Molecular Dissection of the Host-Microbe Interaction in Infection
1990 No award
1991 Lynne Boddy The Ecology of Wood- and Litter-rotting Basidiomycete Fungi
1992 Geoffrey L Smith Vaccinia Virus Glycoproteins and Immune Evasion
1993 Neil Gow Directional Growth and Guidance Systems of Fungal Pathogens
1994 Ian Roberts Bacterial Polysaccharides in Sickness and |
https://en.wikipedia.org/wiki/Seven-segment%20display | A seven-segment display is a form of electronic display device for displaying decimal numerals that is an alternative to the more complex dot matrix displays.
Seven-segment displays are widely used in digital clocks, electronic meters, basic calculators, and other electronic devices that display numerical information.
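A common way to describe such a display in software is a per-digit segment encoding; the mapping below uses the conventional a–g segment labels and is an illustrative sketch, not taken from the article.

```python
# Segments are conventionally labelled a-g, clockwise from the top, with g in the middle.
DIGIT_SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abdeg",   "3": "abcdg",  "4": "bcfg",
    "5": "acdfg",  "6": "acdefg", "7": "abc",     "8": "abcdefg", "9": "abcdfg",
}

def segments_for(number: str) -> list[str]:
    """Return the lit segments for each digit of a decimal string."""
    return [DIGIT_SEGMENTS[d] for d in number]

print(segments_for("2024"))  # ['abdeg', 'abcdef', 'abdeg', 'bcfg']
```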
History
Seven-segment representation of figures can be found in patents as early as 1903, when Carl Kinsley invented a method of telegraphically transmitting letters and numbers and having them printed on tape in a segmented format. In 1908, F. W. Wood invented an 8-segment display, which displayed the number 4 using a diagonal bar. In 1910, a seven-segment display illuminated by incandescent bulbs was used on a power-plant boiler room signal panel. They were also used to show the dialed telephone number to operators during the transition from manual to automatic telephone dialing. They did not achieve widespread use until the advent of LEDs in the 1970s.
Some early seven-segment displays used incandescent filaments in an evacuated bulb; they are also known as numitrons. A variation (minitrons) made use of an evacuated potted box. Minitrons are filament segment displays that are housed in DIP (dual in-line package) packages like modern LED segment displays. They may have up to 16 segments. There were also segment displays that used small incandescent light bulbs instead of LEDs or incandescent filaments. These worked similarly to modern LED segment displays.
Vacuum fluorescent display versions were also used in the 1970s.
Many early (c. 1970s) LED seven-segment displays had each digit built on a single die. This made the digits very small. Some included magnifying lenses in the design to try to make the digits more legible.
The seven-segment pattern is sometimes used in posters or tags, where the user either applies color to pre-printed segments, or applies color through a seven-segment digit template, to compose figures such as product |
https://en.wikipedia.org/wiki/Imre%20Leader | Imre Bennett Leader (born 30 October 1963) is a British mathematician, a professor in DPMMS at the University of Cambridge working in the field of combinatorics. He is also known as an Othello player.
Life
He is the son of the physicist Elliot Leader and his first wife Ninon Neményi, previously married to the poet Endre Kövesi; Darian Leader is his brother. Imre Lakatos was a family friend and his godfather.
Leader was educated at St Paul's School in London, from 1976 to 1980. He won a silver medal on the British team at the 1981 International Mathematical Olympiad (IMO) for pre-undergraduates. He later acted as the official leader of the British IMO team, taking over from Adam McBride in 1999 and serving until 2001. He was the IMO's Chief Coordinator and Problems Group Chairman in 2002.
Leader went on to Trinity College, Cambridge, where he graduated B.A. in 1984, M.A. in 1989, and Ph.D. in 1989. His Ph.D., in mathematics, was for work on combinatorics, supervised by Béla Bollobás. From 1989 to 1996 he was a Fellow at Peterhouse, Cambridge, then was a Reader at University College London from 1996 to 2000. He was a lecturer at Cambridge from 2000 to 2002, and Reader there from 2002 to 2005. In 2000 he became a Fellow of Trinity College.
Awards and honours
In 1999 Leader was awarded a Junior Whitehead Prize for his contributions to combinatorics. Cited results included the proof, with Reinhard Diestel, of the bounded graph conjecture of Rudolf Halin.
Othello
Leader in an interview in 2016 stated that he began to play Othello in 1981, with his friend Jeremy Rickard. Between 1983 and 2019 he was 15 times the British Othello champion. In 1983 he came second in the world individual championship, and in 1988 he played on the British team that won the world team championship. In 2019 he won the European championship, beating Matthias Berg in the final in Berlin. |
https://en.wikipedia.org/wiki/Listening | Listening is giving attention to a sound. When listening, a person hears what others are saying and tries to understand what it means.
Listening involves complex affective, cognitive, and behavioral processes. Affective processes include the motivation to listen to others; cognitive processes include attending to, understanding, receiving, and interpreting content and relational messages; and behavioral processes include responding to others with verbal and nonverbal feedback.
Listening is a skill for resolving problems. Poor listening can lead to misinterpretations, thus causing conflict or dispute. Poor listening can be exhibited by excessive interruptions, inattention, hearing what you want to hear, mentally composing a response, or having a closed mind.
Listening is also linked to memory. According to one study, when there were background noises during a speech, listeners were better able to recall the information in the speech when hearing those noises again. For example, when a person reads or does something else while listening to music, he or she can recall what that was when hearing the music again later.
Listening also functions rhetorically as a means of promoting cross-culture communication.
What is listening?
Listening differs from obeying. A person who receives and understands information or an instruction, and then chooses not to comply with it or not to agree to it, has listened to the speaker, even though the result is not what the speaker wanted.
Listening begins by hearing a speaker producing the sound to be listened to. A semiotician, Roland Barthes, characterized the distinction between listening and hearing. "Hearing is a physiological phenomenon; listening is a psychological act." People are always hearing, most of the time subconsciously. Listening is done by choice. It is the interpretative action taken by someone in order to understand, and potentially make sense of, something they hear.
How does one listen?
Listening may be con |
https://en.wikipedia.org/wiki/ATSC%20tuner | An ATSC (Advanced Television Systems Committee) tuner, often called an ATSC receiver or HDTV tuner, is a type of television tuner that allows reception of digital television (DTV) television channels that use ATSC standards, as transmitted by television stations in North America, parts of Central America, and South Korea. Such tuners are usually integrated into a television set, VCR, digital video recorder (DVR), or set-top box which provides audio/video output connectors of various types.
Another type of television tuner is a digital television adapter (DTA) with an analog passthrough.
Technical overview
The terms "tuner" and "receiver" are used loosely, and it is perhaps more appropriately called an ATSC receiver, with the tuner being part of the receiver (see Metonymy). The receiver generates the audio and video (AV) signals needed for television, and performs the following tasks: demodulation; error correction; MPEG transport stream demultiplexing; decompression; AV synchronization; and media reformatting to match what is optimal input for one's TV. Examples of media reformatting include: interlace to progressive scan or vice versa; picture resolutions; aspect ratio conversions (16:9 to or from 4:3); frame rate conversion; and image scaling. Zooming is an example of resolution change. It is commonly used to convert a low-resolution picture to a high-resolution display. This lets the user eliminate letterboxing or pillarboxing by stretching or cropping the picture. Some ATSC receivers, mostly those in HDTV TV sets, will stretch automatically, either by detecting black bars or by reading the Active Format Descriptor (AFD).
Operation
An ATSC tuner works by generating audio and video signals that are picked up from over-the-air broadcast television. ATSC tuners provide the following functions: selective tuning; demodulation; transport stream demultiplexing; decompression; error correction; analog-to-digital conversion; AV synchronization; and media reformatting |
https://en.wikipedia.org/wiki/Speed%20Dependent%20Damping%20Control | Speed Dependent Damping Control (also called SD²C) was an automatic damper system installed on late-1980s and early-1990s Cadillac automobiles. This system firmed up the suspension at 25 mph (40 km/h) and again at 60 mph (97 km/h). The firmest setting was also used when starting from a standstill until 5 mph (8 km/h).
Applications:
1989–1992 Cadillac Allanté
Computer Command Ride
The semi-active suspension system was updated as Computer Command Ride in 1991. The new system added acceleration, braking rate, and lateral acceleration to the existing vehicle-speed input.
1991– Cadillac Fleetwood
1991– Cadillac Eldorado
1991– Cadillac Seville
1991– Cadillac De Ville (optional, standard for 1993)
1992– Oldsmobile Achieva SCX W41 |
https://en.wikipedia.org/wiki/Phileurus%20didymus | Phileurus didymus is a species of wood-feeding beetle of the family Scarabaeidae.
Description
Head, black, small, and triangular, having three tubercles issuing from it, of which the anterior is pointed, the others blunt. Thorax black, which is the general colour of the insect, rounded, smooth, and margined, having an impression in front, with a short tubercle situated on it near the edge; from whence runs a hollow groove or channel to the posterior margin. Scutellum small. Elytra shining, margined and furrowed. Abdomen smooth and shining, without hair. Tibiae furnished with spines, as are the first joints of the middle and posterior tarsi. Length 2 inches.
Distribution
P. didymus is native to Central America, and is also found in Peru and French Guiana. |
https://en.wikipedia.org/wiki/Biceps%20femoris%20muscle | The biceps femoris () is a muscle of the thigh located to the posterior, or back. As its name implies, it consists of two heads; the long head is considered part of the hamstring muscle group, while the short head is sometimes excluded from this characterization, as it only causes knee flexion (but not hip extension) and is activated by a separate nerve (the peroneal, as opposed to the tibial branch of the sciatic nerve).
Structure
It has two heads of origin:
the long head arises from the lower and inner impression on the posterior part of the tuberosity of the ischium. This is a common tendon origin with the semitendinosus muscle, and from the lower part of the sacrotuberous ligament.
the short head, arises from the lateral lip of the linea aspera, between the adductor magnus and vastus lateralis extending up almost as high as the insertion of the gluteus maximus, from the lateral prolongation of the linea aspera to within 5 cm. of the lateral condyle; and from the lateral intermuscular septum.
The two muscle heads join together distally and unite in an intricate fashion. The fibers of the long head form a fusiform belly, which passes obliquely downward and lateralward across the sciatic nerve to end in an aponeurosis which covers the posterior surface of the muscle and receives the fibers of the short head. Inferiorly, the aponeurosis condenses to form a tendon which predominantly inserts onto the lateral side of the head of the fibula. There is a second small insertional attachment by a small tendon slip into the lateral condyle of the tibia.
At its insertion the tendon divides into two portions, which embrace the fibular collateral ligament of the knee-joint. Together, this joining of tendons is commonly referred to as the conjoined tendon of the knee.
From the posterior border of the tendon a thin expansion is given off to the fascia of the leg. The tendon of insertion of this muscle forms the lateral hamstring; the common fibular (peroneal) nerve descen |
https://en.wikipedia.org/wiki/International%20Journal%20of%20Food%20and%20Allied%20Sciences | The International Journal of Food and Allied Sciences is a peer-reviewed Food Science and Nutrition journal covering the fields of Food Science and Technology, Food Safety and Microbiology, Pharma Nutrition, Biochemistry and Agricultural Sciences with a focus on food crops. It is the official journal of the Institute of Food Science and Nutrition, Bahauddin Zakariya University Multan Pakistan.
About
The journal only publishes novel, high quality and high impact review papers, original research papers and short communications, in the various disciplines encompassing the science, technology of food and its allied sciences. It has been developed to create a truly international forum for the communication of research in food and allied sciences.
Aim and Scope
The journal solicits papers on topics including functional foods, food allergies and intolerances, diet and disease, malnutrition, public health and dietary patterns. |
https://en.wikipedia.org/wiki/Dietary%20Supplements%20%28database%29 | The PubMed Dietary Supplement Subset (PMDSS) is a joint project between the National Institutes of Health (NIH) National Library of Medicine (NLM) and the NIH Office of Dietary Supplements (ODS). PMDSS is designed to help people search for academic journal articles related to dietary supplement literature. The subset was created using a search strategy that includes terms provided by the Office of Dietary Supplements, and selected journals indexed for PubMed that include significant dietary supplement related content. It succeeds the International Bibliographic Information on Dietary Supplements (IBIDS) database, 1999–2010, which was a collaboration between the Office of Dietary Supplements and the U.S. Department of Agriculture's National Agricultural Library.
The Subset
ODS and NLM partnered to create this Dietary Supplement Subset of NLM's PubMed database. PubMed provides access to citations from the MEDLINE database and additional life science academic journals. It also includes links to many full-text articles at journal Web sites and other related Web resources.
The subset is designed to limit search results to citations from a broad spectrum of dietary supplement literature including vitamin, mineral, phytochemical, ergogenic, botanical, and herbal supplements in human nutrition and animal models. The subset will retrieve dietary supplement-related citations on topics including, but not limited to:
chemical composition;
biochemical role and function — both in vitro and in vivo;
clinical trials;
health and adverse effects;
fortification;
traditional Chinese medicine and other folk/ethnic supplement practices;
cultivation of botanical products used as dietary supplements; as well as,
surveys of dietary supplement use.
The PMDSS is a free service and can be accessed either directly through the ODS Website or in PubMed using the Dietary Supplement filter (formerly referred to as a Limit).
History
Dietary supplements were first regulated in by the Fed |
https://en.wikipedia.org/wiki/Golden%20age%20of%20Spanish%20software | The golden age of Spanish software () was a time, between 1983 and 1992, when Spain became the second largest 8 bit computer entertainment software producer in Europe, only behind the United Kingdom. The disappearance of the 8 bit technology and its replacement by the 16 bit machines marked the end of this era, during which many software companies based in Spain launched their careers: Dinamic Software, Topo Soft, Opera Soft, Made in Spain and Zigurat among others. The name Edad de oro del soft español was coined by specialized magazines of the time and is still used to refer to these years today.
History
Rise (1983–1985)
In the year 1983, the first home personal computers started arriving in Spain, all of them 8 bit machines. The ZX Spectrum and Amstrad CPC were the most sold in the country, followed by the MSX and Commodore 64 among others. These were simple machines with limited resources, and therefore easy to work with, so many young programmers all over the country started experimenting with them.
The Golden Era of Spanish Software officially starts with the launch of Bugaboo, by PACO & PACO, the first Spanish video game to get massive international distribution. Shortly afterwards, Fred (Roland on the Ropes for Amstrad), by other authors, this time under the company Made in Spain, was another success, and the owners of Made in Spain decided to create Zigurat, a parent company that would at first be dedicated to distribution, turning Made in Spain into a production company for Zigurat, which also would at first distribute titles from independent companies. Years later, Made in Spain and Zigurat would completely merge into a single producer and distributor company.
Meanwhile, Dinamic Software made their first steps when they launched Yenght for ZX Spectrum, which was a text adventure. And in the field of distribution, Erbe Software, the main Spanish software distributor for more than a decade, started their activity. In their first years, Erbe tried also to produce t |
https://en.wikipedia.org/wiki/Ronald%20N.%20Bracewell | Ronald Newbold Bracewell AO (22 July 1921 – 12 August 2007) was the Lewis M. Terman Professor of Electrical Engineering of the Space, Telecommunications, and Radioscience Laboratory at Stanford University.
Education
Bracewell was born in Sydney, in 1921, and educated at Sydney Boys High School. He graduated from the University of Sydney in 1941 with the BSc degree in mathematics and physics, later receiving the degrees of B.E. (1943), and M.E. (1948) with first class honours, and while working in the Engineering Department became the President of the Oxometrical society. During World War II he designed and developed microwave radar equipment in the Radiophysics Laboratory of the Commonwealth Scientific and Industrial Research Organisation, Sydney under the direction of Joseph L. Pawsey and Edward G. Bowen and from 1946 to 1949 was a research student at Sidney Sussex College, Cambridge, engaged in ionospheric research in the Cavendish Laboratory, where in 1949 he received his PhD degree in physics under J. A. Ratcliffe.
Career
From October 1949 to September 1954, Dr. Bracewell was a senior research officer at the Radiophysics Laboratory of the CSIRO, Sydney, concerned with very-long-wave propagation and radio astronomy. He then lectured in radio astronomy at the Astronomy Department of the University of California, Berkeley, from September 1954 to June 1955 at the invitation of Otto Struve, and at Stanford University during the summer of 1955, and joined the Electrical Engineering faculty at Stanford in December 1955.
In 1974 he was appointed the first Lewis M. Terman Professor and Fellow in Electrical Engineering (1974–1979). Though he retired in 1979, he continued to be active until his death.
Contributions and honours
Professor Bracewell was a Fellow of the Royal Astronomical Society (1950), Fellow and life member of the Institute of Electrical and Electronics Engineers (1961), Fellow of the American Association for the Advancement of Science (1989), a |
https://en.wikipedia.org/wiki/Isotopes%20in%20medicine | A medical isotope is an isotope used in medicine.
The first uses of isotopes in medicine were in radiopharmaceuticals, and this is still the most common use. However more recently, separated stable isotopes have also come into use.
Examples of non-radioactive medical isotopes are:
Deuterium in deuterated drugs
Carbon-13 used in liver function and metabolic tests
Radioactive isotopes used
Radioactive isotopes are used in medicine for both treatment and diagnostic scans. The most common isotope used in diagnostic scans is Tc-99m (Technetium-99m), being used in approximately 85% of all nuclear medicine diagnostic scans worldwide. It is used for diagnoses involving a large range of body parts and diseases such as cancers and neurological problems. Another well-known radioactive isotope used in medicine is I-131 (Iodine-131), which is used as a radioactive label for some radiopharmaceutical therapies or the treatment of some types of thyroid cancer. |
https://en.wikipedia.org/wiki/Diversity-generating%20retroelement | Diversity-generating retroelements (DGRs) are a family of retroelements that were first found in Bordetella phage (BPP-1), and have since been found in bacteria (e.g. Treponema denticola and Legionella pneumophila), Archaea, archaeal viruses (e.g. ANMV-1), temperate phages (e.g. Hankyphage and CrAss-like phage), and lytic phages. DGRs benefit their host by mutating particular regions of specific target proteins, for instance the phage tail fiber in BPP-1, a lipoprotein in Legionella pneumophila (the pathogen behind Legionnaires' disease), and TvpA in Treponema denticola (an oral-associated periopathogen). An error-prone reverse transcriptase is responsible for generating these hypervariable regions in target proteins (mutagenic retrohoming). In mutagenic retrohoming, a mutagenized cDNA (containing substantial A-to-N mutations) is reverse transcribed from a template region (TR) and replaces a similar segment called the variable region (VR). The accessory variability determinant (Avd) protein is another component of DGRs, and its complex formation with the error-prone RT is important for mutagenic retrohoming.
DGRs are beneficial to the evolution and survival of their host. A large fraction of Faecalibacterium prausnitzii phages contain DGRs that are believed to have a role in phage adaptability to the digestive system, as patients with inflammatory bowel disease (IBD) have more of these phages, but less F. prausnitzii, in their stool samples compared to healthy individuals, suggesting that these phages activate during the illness and may trigger F. prausnitzii depletion. Several tools have been implemented to identify DGRs, such as DiGReF, DGRscan, MetaCSST, and myDGR.
See also
Retron |
https://en.wikipedia.org/wiki/Next-generation%20matrix | In epidemiology, the next-generation matrix is used to derive the basic reproduction number, for a compartmental model of the spread of infectious diseases. In population dynamics it is used to compute the basic reproduction number for structured population models. It is also used in multi-type branching models for analogous computations.
The method to compute the basic reproduction ratio using the next-generation matrix is given by Diekmann et al. (1990) and van den Driessche and Watmough (2002). To calculate the basic reproduction number by using a next-generation matrix, the whole population is divided into n compartments in which there are m < n infected compartments. Let x_i, i = 1, 2, ..., m, be the numbers of infected individuals in the i-th infected compartment at time t. Now, the epidemic model is
dx_i/dt = F_i(x) − V_i(x), where V_i(x) = V_i^−(x) − V_i^+(x).
In the above equations, F_i represents the rate of appearance of new infections in compartment i. V_i^+ represents the rate of transfer of individuals into compartment i by all other means, and V_i^− represents the rate of transfer of individuals out of compartment i.
The above model can also be written as
dx/dt = F(x) − V(x),
where F(x) = (F_1(x), ..., F_m(x))^T and V(x) = (V_1(x), ..., V_m(x))^T.
Let x_0 be the disease-free equilibrium. The values of the corresponding parts of the Jacobian matrices F and V are
F = [∂F_i/∂x_j (x_0)] and
V = [∂V_i/∂x_j (x_0)],
respectively.
Here, F and V are m × m matrices, defined with entries
F_ij = ∂F_i/∂x_j (x_0) and V_ij = ∂V_i/∂x_j (x_0).
Now, the matrix FV^−1 is known as the next-generation matrix. The basic reproduction number of the model is then given by the eigenvalue of FV^−1 with the largest absolute value (the spectral radius of FV^−1). Next-generation matrices can be computationally evaluated from observational data, which is often the most productive approach where there are large numbers of compartments.
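As a worked illustration of the construction above, the sketch below computes the basic reproduction number of a simple SEIR-type model (infected compartments E and I) as the spectral radius of FV^−1; the model and the parameter values are assumptions chosen for the example, not taken from the article.

```python
import numpy as np

# Simple SEIR-type example evaluated at the disease-free equilibrium.
# beta = transmission rate, sigma = E -> I progression rate, gamma = recovery rate;
# all values are arbitrary illustrations.
beta, sigma, gamma = 0.6, 0.2, 0.3

F = np.array([[0.0, beta],        # new infections appear only in E, driven by I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],       # outflow from E
              [-sigma, gamma]])   # inflow to I from E, outflow by recovery

K = F @ np.linalg.inv(V)               # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))    # spectral radius
print(R0, beta / gamma)                # both ~2.0 for this model
```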
See also
Mathematical modelling of infectious disease |
https://en.wikipedia.org/wiki/Solar%20neutrino%20problem | The solar neutrino problem concerned a large discrepancy between the flux of solar neutrinos as predicted from the Sun's luminosity and as measured directly. The discrepancy was first observed in the mid-1960s and was resolved around 2002.
The flux of neutrinos at Earth is several tens of billions per square centimetre per second, mostly from the Sun's core. They are nevertheless hard to detect, because they interact very weakly with matter, traversing the whole Earth. Of the three types (flavors) of neutrinos known in the Standard Model of particle physics, the Sun produces only electron neutrinos. When neutrino detectors became sensitive enough to measure the flow of electron neutrinos from the Sun, the number detected was much lower than predicted. In various experiments, the number deficit was between one half and two thirds.
Particle physicists knew that a mechanism, discussed back in 1957 by Bruno Pontecorvo, could explain the deficit in electron neutrinos. However, they hesitated to accept it for various reasons, including the fact that it required a modification of the accepted Standard Model. They first pointed at the solar model for adjustment, which was ruled out. Today it is accepted that the neutrinos produced in the Sun are not massless particles as predicted by the Standard Model but rather mixed quantum states made up of defined-mass eigenstates in different (complex) proportions. That allows a neutrino produced as a pure electron neutrino to change during propagation into a mixture of electron, muon and tau neutrinos, with a reduced probability of being detected by a detector sensitive to only electron neutrinos.
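A highly simplified illustration of the mixing described above: in a two-flavour vacuum-oscillation approximation, averaged over many oscillation lengths, the electron-neutrino survival probability depends only on the mixing angle. This ignores the MSW matter effect that is actually important for the higher-energy solar neutrinos, and the angle used is a rough textbook value, not a figure from the article.

```python
import math

# Two-flavour vacuum mixing, averaged over many oscillation lengths:
# <P(nu_e -> nu_e)> = 1 - 0.5 * sin^2(2*theta)
theta12 = math.radians(33.4)  # rough solar mixing angle (illustrative value)
survival = 1 - 0.5 * math.sin(2 * theta12) ** 2
print(f"averaged nu_e survival probability ~ {survival:.2f}")  # ~0.58
```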
Several neutrino detectors aiming at different flavors, energies, and traveled distance contributed to our present knowledge of neutrinos. In 2002 and 2015, a total of four researchers related to some of these detectors were awarded the Nobel Prize in Physics.
Background
The Sun performs nuclear fusion via the proton–proton chain reac |
https://en.wikipedia.org/wiki/Dehornoy%20order | In the mathematical area of braid theory, the Dehornoy order is a left-invariant total order on the braid group, found by Patrick Dehornoy. Dehornoy's original discovery of the order on the braid group used huge cardinals, but there are now several more elementary constructions of it.
Definition
Suppose that σ_1, ..., σ_{n−1} are the usual generators of the braid group B_n on n strings. Define a σ_i-positive word to be a braid that admits at least one expression in the elements σ_1, ..., σ_{n−1} and their inverses, such that the word contains σ_i, but does not contain σ_i^{−1} nor σ_j^{±1} for j < i.
The set P of positive elements in the Dehornoy order is defined to be the elements that can be written as a σ_i-positive word for some i. We have:
P and P^{−1} are disjoint ("acyclicity property");
the braid group is the union of P, {1}, and P^{−1} ("comparison property").
These properties imply that if we define β < γ as β^{−1}γ ∈ P, then we get a left-invariant total order on the braid group. For example, σ_1 < σ_2σ_1, because the braid word σ_1^{−1}σ_2σ_1 is not σ-positive as written, but, by the braid relations, it is equivalent to the σ_1-positive word σ_2σ_1σ_2^{−1}, which lies in P.
History
Set theory introduces the hypothetical existence of various "hyper-infinity" notions such as large cardinals. In 1989, it was proved that one such notion, axiom I3, implies the existence of an algebraic structure called an acyclic shelf, which in turn implies the decidability of the word problem for the left self-distributivity law, a property that is a priori unconnected with large cardinals.
In 1992, Dehornoy produced an example of an acyclic shelf by introducing a certain groupoid that captures the geometrical aspects of the law. As a result, an acyclic shelf was constructed on the braid group , which happens to be a quotient of , and this implies the existence of the braid order directly. Since the braid order appears precisely when the large cardinal assumption is eliminated, the link between the braid order and the acyclic shelf was only evident via the original problem from set theory.
Properties
The existence of t |
https://en.wikipedia.org/wiki/Punctuated%20gradualism | Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution.
Description
While the traditional model of paleontology, the phylogenetic model, posits that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes don't happen over a gradual period but in localized, rare, rapid events of branching speciation.
Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states.
History
In 1983, Malmgren and colleagues published a paper called "Evidence for punctuated gradualism in the late Neogene Globorotalia tumida lineage of planktonic foraminifera." This paper studied the lineage of planktonic foraminifera, specifically the evolutionary transition from G. plesiotumida to G. tumida across the Miocene/Pliocene boundary. The study found that the G. tumida lineage, while remaining in relative stasis over a considerable part of its total duration, underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching. Based on these findings, Malmgren and colleagues introduced a new mode of evolution and proposed to call it "punctuated gradualism." There is strong evidence supporting both gradual evolution of a species over time and rapid events of species evolution separated by periods of little evolutionary change. Organisms have a great propensity to adapt and evolve depending on the circumstances.
Studies
Studies |
https://en.wikipedia.org/wiki/Dictionary%20of%20Irish%20Architects | The Dictionary of Irish Architects is an online database which contains biographical and bibliographical information on architects, builders and craftsmen born or working in Ireland during the period 1720 to 1940, and information on the buildings on which they worked. Although it is principally devoted to architects, it includes engineers who designed buildings and structures, some builders, some artists and craftsmen, and some amateurs and writers on architectural subjects.
Architects from Britain and elsewhere who never resided in Ireland but designed buildings there are not given full biographical treatment, and only their Irish works are listed. Irish-born architects who emigrated are similarly treated; their careers after their departure from Ireland are not described in detail, and only their Irish works are listed in full.
The Dictionary of Irish Architects was created and compiled in the Irish Architectural Archive (IAA) over a period of thirty years. It was made publicly available online in January 2009. According to the IAA it remains a "work in progress" with new data added and updated since its initial release. As of 2018, it reportedly contained 6,700 entries. |
https://en.wikipedia.org/wiki/Microbial%20cell%20factory | Microbial cell factory is an approach to bioengineering which considers microbial cells as a production facility in which the optimization process largely depends on metabolic engineering. Microbial cell factories (MCFs) derive from cell factories, which are engineered microbes and plant cells. In the 1980s and 1990s, MCFs were originally conceived to improve the productivity of cellular systems and metabolite yields through strain engineering. An MCF produces native and nonnative metabolites through targeted strain design. In addition, MCFs can shorten the synthesis cycle while reducing the difficulty of product separation.
History
Prior to MCFs, scientists employed traditional engineering techniques to produce various commodities. These methodologies included modifying metabolic pathways, eliminating enzymes, or balancing ATP to drive metabolic flux. However, when these approaches were applied to industrial production, they could not withstand industrial environments with toxins and fluctuating temperatures. Ultimately, the techniques could not be scaled up to deliver the bio-products obtained in the laboratory.
Thus, MCFs were developed by using a heterogenous biosynthesis pathway in a microbial host. As a host, MCFs take in various substrates and convert them into valuable compounds. These products can range from fuels, chemical, food ingredients, to pharmaceuticals.
Structure
Cell Wall
In microbial cells, the cell walls are either Gram-positive or Gram-negative. These outcomes are based on the Gram Stain test. Gram-positive cell walls have thick peptidoglycan layer and no outer lipid membrane while Gram-negative bacteria have a thin peptidoglycan layer and an outer lipid membrane. Although a thick Gram-positive cell wall is advantageous, it is easier to attack as the peptidoglycan layer absorbs antibiotics and cleaning products. A Gram-negative cell wall is more resistant to such attacks and more difficult to destroy.
Membrane
The membrane of mic |
https://en.wikipedia.org/wiki/Centaurin%2C%20alpha%201 | Arf-GAP with dual PH domain-containing protein 1 is a protein that in humans is encoded by the ADAP1 gene.
Interactions
Centaurin, alpha 1 has been shown to interact with:
Casein kinase 1, alpha 1
Nucleolin
P110α
PRKCI
Protein kinase D1
Protein kinase Mζ
Model organisms
Model organisms have been used in the study of ADAP1 function. A conditional knockout mouse line called Adap1tm1a(EUCOMM)Wtsi was generated at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Additional screens performed:
In-depth immunological phenotyping
in-depth bone and cartilage phenotyping |
https://en.wikipedia.org/wiki/Iteration%20mark | Iteration marks are characters or punctuation marks that represent a duplicated character or word.
Chinese
In Chinese, or (usually appearing as , equivalent to the modern ideograph ) or is used in casual writing to represent a doubled character. However, it is not used in formal writing anymore, and it never appeared in printed matter. In a tabulated table or list, vertical repetition can be represented by a ditto mark ().
History
Iteration marks have been occasionally used for more than two thousand years in China. The example image shows an inscription in bronze script, a variety of formal writing dating to the Zhou Dynasty, that ends with , where the small ("two") is used as iteration marks in the phrase ("descendants to use and to treasure").
Malayo-Polynesian languages
In Filipino, Indonesian, and Malay, words that are repeated can be shortened with the use of numeral "2". For example, the Malay ("words", from single ) can be shortened to , and ("to walk around", from single ) can be shortened to . The usage of "2" can be also replaced with superscript "" (e.g. for ). The sign may also be used for reduplicated compound words with slight sound changes, for example for ("commotion"). Suffixes may be added after "2", for example in the word ("Western in nature", from the basic word ("West") with the prefix and suffix ).
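The numeral-2 shorthand described above can be expanded mechanically; the regular expression and example words below are illustrative assumptions and do not handle the affixed forms.

```python
import re

def expand_numeral_two(text: str) -> str:
    """Expand the informal Malay/Indonesian '2' shorthand: 'kata2' -> 'kata-kata'."""
    return re.sub(r"([a-zA-Z]+)2", lambda m: f"{m.group(1)}-{m.group(1)}", text)

print(expand_numeral_two("kata2"))           # kata-kata ("words")
print(expand_numeral_two("jalan2 di kota"))  # jalan-jalan di kota ("to walk around town")
```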
The use of this mark dates back to the time when these languages were written with Arabic script, specifically the Jawi or Pegon varieties. Using the Arabic numeral ٢, words such as (, butterfly) can be shortened to . The use of the Arabic numeral ٢ was also adapted to several Brahmi-derived scripts of the Malay archipelago, notably Javanese, Sundanese, Lontara, and Makassaran. As the Latin alphabet was introduced to the region, the Western-style Arabic numeral "2" came to be used for Latin-based orthography.
The use of "2" as an iteration mark was official in Indonesia up to 1972, as part of the Republican Spelling System. Its usage |
https://en.wikipedia.org/wiki/Ozgur%20B.%20Akan | Özgür Barış Akan is a professor in the Department of Electrical and Electronics Engineering and the Director of the Next-generation and Wireless Communications Laboratory (NWCL) at the University of Cambridge and at Koç University in Istanbul, Turkey. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to wireless sensor networks. Since the same year, he has also been a fellow of the Vehicular Technology Society.
Early life and education
Akan was born in Ankara, Turkey. He attended Ankara Science High School and, after graduating from it, went on to study at Bilkent University. After obtaining a B.Sc. in electrical and electronics engineering from Bilkent in June 1999, he studied for an M.Sc. at Middle East Technical University, graduating in the same field in January 2002. Akan received his Ph.D. in electrical and computer engineering in 2004, after studying at the Broadband and Wireless Networking Laboratory at Georgia Tech under the supervision of Ian Akyildiz. Following graduation, he joined the Department of Electrical and Electronics Engineering at Middle East Technical University, serving there until August 2010. From January 2013 to May 2016, Akan served first as associate and then as director of the Graduate School of Sciences and Engineering.
https://en.wikipedia.org/wiki/CloudMe | CloudMe is a file storage service operated by CloudMe AB that offers cloud storage, file synchronization and client software. It features a blue folder that appears on all devices with the same content, all files are synchronized between devices. The CloudMe service is offered with a freemium business model and provides encrypted SSL connection with SSL Extended Validation Certificate. CloudMe provides client software for Microsoft Windows, macOS, Linux, Android, iOS, Google TV, Samsung Smart TV, WD TV, Windows Storage Server for NAS and web browsers.
As a cloud sync storage provider, CloudMe has a strong focus on the European market and differentiates itself from other storage providers with mobility and media features like Samsung SmartTV support.
Recently, Novell announced support for the CloudMe service in its Dynamic File Services Suite. Novosoft Handy Backup version 7.3 also announced support for CloudMe, and WinZip is integrated with CloudMe as well. There are many third-party mobile apps and programs available for CloudMe, many of them using CloudMe's WebDAV support.
History
CloudMe was founded by Daniel Arthursson in 2012 and is mainly owned by Xcerion. The company runs its own servers and operates from Sweden. In 2012 CloudMe was named a Red Herring Top 100 Global company and received the AlwaysON Global 250 award and the White Bull 2012 Yearling Award; it later received the White Bull 2014 Longhorn Award.
Previously CloudMe.com was called iCloud.com, but the service changed name after Apple acquired the domain and trademark for a rumoured 4.5 million dollars. For a while visitors to icloud.com were directed to cloudme.com. After the name change, the former iCloud.com service was split into two companies and services, CloudMe for file sync and storage, and CloudTop as the virtual cloud desktop that previously was the main attraction of the iCloud.com service and included file storage. Xcerion, the major owner of CloudMe and CloudTop initially gained an investment of $12 million to build the iClou |
https://en.wikipedia.org/wiki/Gerhard%20Thomsen | Gerhard Thomsen (23 June 1899 – 4 January 1934) was a German mathematician, probably best known for his work in various branches of geometry.
Life
Thomsen was born on 23 June 1899 in Hamburg. His father, Georg Thomsen, was a physician. Thomsen grew up in Hamburg and attended the Johanneum (gymnasium/high school) from 1908 to 1917. After completing school he served in the army during the last year of World War I. In 1919 he became one of the first students at the newly founded University of Hamburg, majoring in mathematics and natural science. Aside from a temporary interlude, Thomsen studied in Hamburg until 1923. He received a certification to teach at high schools in the fall of 1922 and finally his PhD in the summer of the following year. Afterwards he worked briefly as an assistant at the Karlsruhe Institute of Technology before returning to Hamburg in a similar capacity in the spring of 1925. While working on his habilitation thesis, Thomsen spent one year in Rome on a Rockefeller grant to study with Levi-Civita. He received his habilitation in Hamburg in 1928 and started a position as a tenured professor at the University of Rostock in the fall of 1929.
On 11 November 1933 Thomsen gave an inflammatory talk entitled "Über die Gefahr der Zurückdrängung der exakten Naturwissenschaften an Schulen und Hochschulen" (On the danger of marginalizing the exact sciences in schools and universities), that received a large amount of publicity in academic circles. While the talk seemed supportive of some aims of the Nazis, it also directly attacked their suppression of education in the sciences. This caused him to be investigated by the Gestapo.
Thomsen was killed by a train on a railroad track in Rostock on 4 January 1934. It is assumed that he committed suicide possibly triggered by the Gestapo investigation.
Work
In Hamburg Thomsen assisted Wilhelm Blaschke in applying Felix Klein's Erlangen Program to differential geometry. He also edited and organized Blaschke's lectures on differential
https://en.wikipedia.org/wiki/Karplus%20equation | The Karplus equation, named after Martin Karplus, describes the correlation between 3J-coupling constants and dihedral torsion angles in nuclear magnetic resonance spectroscopy:

J(φ) = A cos²φ + B cos φ + C

where J is the 3J coupling constant, φ is the dihedral angle, and A, B, and C are empirically derived parameters whose values depend on the atoms and substituents involved. The relationship may be expressed in a variety of equivalent ways, e.g. involving cos 2φ rather than cos²φ; these lead to different numerical values of A, B, and C but do not change the nature of the relationship.
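As an illustration, the relationship can be evaluated numerically. In the sketch below the A, B and C values are placeholders chosen only to show the shape of the curve, not a recommended parameter set:

```python
import math

def karplus_j(phi_degrees, a=7.0, b=-1.0, c=1.5):
    """Evaluate the Karplus relation J(phi) = A*cos^2(phi) + B*cos(phi) + C.

    The A, B, C defaults here are placeholder values chosen only to show the
    shape of the curve; real parameter sets depend on the atoms and
    substituents involved.
    """
    phi = math.radians(phi_degrees)
    return a * math.cos(phi) ** 2 + b * math.cos(phi) + c

# Couplings are smallest near 90 degrees and largest near 0 and 180 degrees.
for angle in (0, 60, 90, 120, 180):
    print(f"phi = {angle:3d} deg  ->  3J ~ {karplus_j(angle):.2f} Hz")
```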
The relationship is used for 3JH,H coupling constants. The superscript "3" indicates that a 1H atom is coupled to another 1H atom three bonds away, via H-C-C-H bonds. (Such hydrogens bonded to neighbouring carbon atoms are termed vicinal.) The magnitude of these couplings is generally smallest when the torsion angle is close to 90° and largest at angles of 0° and 180°.
This relationship between local geometry and coupling constant is of great value throughout nuclear magnetic resonance spectroscopy and is particularly valuable for determining backbone torsion angles in protein NMR studies. |
https://en.wikipedia.org/wiki/Plone%20%28software%29 | Plone is a free and open source content management system (CMS) built on top of the Zope application server. Plone is positioned as an enterprise CMS and is commonly used for intranets and as part of the web presence of large organizations. High-profile public sector users include the U.S. Federal Bureau of Investigation, Brazilian Government, United Nations, City of Bern (Switzerland), New South Wales Government (Australia), and European Environment Agency. Plone's proponents cite its security track record and its accessibility as reasons to choose Plone.
Plone has a long tradition of development happening in so-called "sprints", in-person meetings of developers over the course of several days, the first having been held in 2003 and nine taking place in 2014. The largest sprint of the year is the sprint immediately following the annual conference. Certain other sprints are considered strategic so are funded directly by the Plone Foundation, although very few attendees are sponsored directly. The Plone Foundation also holds and enforces all copyrights and trademarks in Plone, and is assisted by legal counsel from the Software Freedom Law Center.
History
The Plone project was started in 1999 by Alexander Limi, Alan Runyan, and Vidar Andersen. It was made as a usability layer on top of the Zope Content Management Framework. The first version was released in 2001. The project quickly grew into a community, receiving plenty of new add-on products from its users. The growth of the community led to the creation of the annual Plone conference in 2003, which is still running today. In addition, "sprints" are held, where groups of developers meet to work on Plone, ranging from a couple of days to a week. In March 2004, Plone 2.0 was released. This release brought more customizable features to Plone and enhanced the add-on functions. In May 2004, the Plone Foundation was created for the development, marketing, and protection of Plone. The Foundation has ownership rights over th
https://en.wikipedia.org/wiki/Ferdinand%20Peper | Ferdinand Peper (born 1961) is a Dutch theoretical computer scientist.
Peper obtained his PhD at the Delft University of Technology in 1989 with the thesis Efficient network topologies for extensible massively parallel computers. He currently works in a senior research position at the Kobe Advanced ICT Research Center and the National Institute of Information and Communications Technology. He is best known for his research on nanocomputing, asynchronous systems, cellular automata, reconfigurable hardware and instantaneous noise-based logic. His research goals are to develop next-generation computing and communication architectures and schemes enhanced by nanotechnology and nanoelectronics, including single-electron transistors. Particular topics of his research include the reduction of energy requirements, the exploitation of noise and fluctuations for informatics,
and the features of molecular self-organization and self-assembly. He was the Chair of the Fourth International Workshop on Natural Computing (2009) and acted as a co-editor of the book Natural Computing (Springer). He is a member of editorial board of the International Journal of Unconventional Computing.
Most cited papers
Peper F, Lee J, Adachi S, et al., "Laying out circuits on asynchronous cellular arrays: a step towards feasible nanocomputers?", Nanotechnology 14 (2003) 469-485.
Peper F, Lee J, Abo F, et al., "Fault-tolerance in nanocomputers: A cellular array approach", IEEE Trans. Nanotechnology 3 (2004) 187-201.
Adachi S, Peper F, Lee J, "Computation by asynchronously updating cellular automata", J. Stat. Phys. 114 (2004) 261-289.
Peper F, Isokawa T, Kouda N, et al., "Self-timed cellular automata and their computational ability", Future Generation Computer Systems 18 (2002) 893-904.
See also
Asynchronous circuit
Natural computing
Unconventional computing
Noise-based logic |
https://en.wikipedia.org/wiki/Safe%40Office | Safe@Office is a line of firewall and virtual private network (VPN) appliances developed by SofaWare Technologies, a Check Point company.
The Check Point Safe@Office product line is targeted at the small and medium business segment, and includes the 500 and 500W (with Wi-Fi) series of internet security appliance. The old S-Box, Safe@Home, 100 series, 200 series, and 400W series are discontinued.
The appliances are licensed according to the number of protected IP addresses (referred to as users), in tiers of 5, 25 or unlimited. There is also a variant with a built-in asymmetric digital subscriber line (ADSL) modem.
See also
VPN-1 UTM Edge — similar appliance with possibility of being managed from the Check Point SmartCenter. |
https://en.wikipedia.org/wiki/Head-related%20transfer%20function | A head-related transfer function (HRTF), also known as anatomical transfer function (ATF), or a head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, ear canal, density of the head, size and shape of nasal and oral cavities, all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person.
A pair of HRTFs for two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. It is a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones use HRTFs. Some forms of HRTF processing have also been included in computer software to simulate surround sound playback from loudspeakers.
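In practice this synthesis is a convolution of the dry source signal with the left- and right-ear head-related impulse responses (the time-domain counterparts of the HRTFs). A minimal NumPy sketch, assuming a measured HRIR pair is available (the short arrays below are made-up placeholders):

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right pair of head-related impulse
    responses (the time-domain form of the HRTFs) to produce a two-channel
    binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, n_samples)

# Made-up, very short impulse responses; a measured HRIR set is needed for a
# convincing spatial impression.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)
hrir_l = np.array([0.0, 0.9, 0.3, 0.1])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])  # slightly delayed and quieter right ear
stereo = binaural_render(mono, hrir_l, hrir_r)
```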
Sound localization
Humans have just two ears, but can locate sounds in three dimensions – in range (distance), in direction above and below (elevation), in front and to the rear, as well as to either side (azimuth). This is possible because the brain, inner ear, and the external ears (pinna) work together to make inferences about location. This ability to localize sound sources may have developed in humans and ancestors as an evolutionary necessity since the eyes can only see a fraction of the world around a viewer, and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy,
regardless of the surrounding light.
Humans estimate the location of a source by taking cues derived from one ear (monaura |
https://en.wikipedia.org/wiki/Automatic%20double%20tracking | Automatic double-tracking or artificial double-tracking (ADT) is an analogue recording technique designed to enhance the sound of voices or instruments during the mixing process. It uses tape delay to create a delayed copy of an audio signal which is then played back at slightly varying speed controlled by an oscillator and combined with the original. The effect is intended to simulate the sound of the natural doubling of voices or instruments achieved by double tracking. The technique was developed in 1966 by engineers at Abbey Road Studios in London at the request of the Beatles.
Background
As early as the 1950s, it was discovered that double-tracking the lead vocal in a song gave it a richer, more appealing sound, especially for singers with weak or light voices. Use of this technique became possible with the advent of magnetic tape for use in sound recording. Originally, a pair of single-track tape recorders were used to produce the effect; later, multitrack tape machines were used. Early pioneers of this technique were Les Paul and Buddy Holly.
Before the development of ADT, it was necessary to record and mix multiple takes of the vocal track. Because it is nearly impossible for a performer to sing or play the same part in exactly the same way twice, a recording and blending of two different performances of the same part will create a fuller, "chorused" effect with double tracking. But if one simply plays back two copies of the same performance in perfect sync, the two sound images become one and no double tracking effect is produced.
Invention
ADT was invented specially for the Beatles during the spring of 1966 by Ken Townsend, a recording engineer employed at EMI's Abbey Road Studios, mainly at the request of John Lennon, who despised the tedium of double tracking during sessions and regularly expressed a desire for a technical alternative.
Townsend came up with a system using tape delay. Townsend's system added a second tape recorder to the regular se
https://en.wikipedia.org/wiki/LED%20filament | A LED filament light bulb is a LED lamp which is designed to resemble a traditional incandescent light bulb with visible filaments for aesthetic and light distribution purposes, but with the high efficiency of light-emitting diodes (LEDs). It produces its light using LED filaments, which are series-connected strings of diodes that resemble in appearance the filaments of incandescent light bulbs. They are direct replacements for conventional clear (or frosted) incandescent bulbs, as they are made with the same envelope shapes, the same bases that fit the same sockets, and work at the same supply voltage. They may be used for their appearance, similar when lit to a clear incandescent bulb, or for their wide angle of light distribution, typically 300°. They are also more efficient than many other LED lamps.
History
An LED filament-type light bulb was produced by Ushio Lighting in 2008, intended to mimic the appearance of a standard light bulb. Contemporary bulbs typically used a single large LED or matrix of LEDs attached to one large heatsink. As a consequence, these bulbs typically produced a beam only 180 degrees wide. By about 2015, LED filament bulbs had been introduced by several manufacturers. These designs used several LED filament light emitters, similar in appearance when lit to the filament of a clear, standard incandescent bulb, and very similar in detail to the multiple filaments of the early Edison incandescent bulbs.
LED filament bulbs were patented by Ushio and Sanyo in 2008. Panasonic described a flat arrangement with modules similar to filaments in 2013. Two other independent patent applications were filed in 2014 but were never granted. The early filed patents included a heat drain under the LEDs. At that time, luminous efficacy of LEDs was under 100 lm/W. By the late 2010s, this had risen to near 160 lm/W.
Design
The LED filament consists of multiple series-connected LEDs on a transparent substrate, referred to as chip-on-glass (COG). Thes |
https://en.wikipedia.org/wiki/Scheff%C3%A9%27s%20lemma | In mathematics, Scheffé's lemma is a proposition in measure theory concerning the convergence of sequences of integrable functions. It states that, if (f_n) is a sequence of integrable functions on a measure space that converges almost everywhere to another integrable function f, then ∫ |f_n − f| dμ → 0 if and only if ∫ |f_n| dμ → ∫ |f| dμ.
In probability theory, almost sure convergence can be weakened to requiring only convergence in probability.
Applications
Applied to probability theory, Scheffé's theorem, in the form stated here, implies that almost everywhere pointwise convergence of the probability density functions of a sequence of absolutely continuous random variables implies convergence in distribution of those random variables.
History
Henry Scheffé published a proof of the statement on convergence of probability densities in 1947. The result is a special case of a theorem by Frigyes Riesz about convergence in Lp spaces published in 1928. |
https://en.wikipedia.org/wiki/Behrouz%20Nikbin | Behrouz Nikbin (born in Iran) is an Iranian immunologist and biomedical scientist. Nikbin studied medicine at Tehran University and received a PhD in immunology.
External links
Behrouz Nikbin's publications in pubmed
Iranian immunologists
Living people
Year of birth missing (living people)
University of Tehran alumni |
https://en.wikipedia.org/wiki/European%20Quality%20in%20Social%20Services | The European Quality in Social Services (EQUASS) is an integrated sector-specific quality certification system that certifies compliance of social services with European quality principles and criteria. EQUASS aims to enhance the social sector by engaging service providers in quality and continuous improvement and by guaranteeing service users quality of services throughout Europe.
EQUASS, formerly called the European Quality in Rehabilitation Mark (EQRM), is an initiative of the European Platform for Rehabilitation (EPR); its secretariat is based in Brussels.
History
In July 1995, the European Platform for Rehabilitation, then called the European Platform for Vocational Rehabilitation (EPVR), tasked its working group "Quality management and cost effectiveness" to study the launch of a "Quality Award for Vocational Rehabilitation" for European institutes of vocational rehabilitation.
The award would require an audit which, if successful, would show that the institute of vocational rehabilitation works in accordance with the standards of the platform.
The certification system to be launched was mainly inspired by the CARF certification.
The European Platform for Rehabilitation enrolled a few of its members to implement the quality standard that was then named EQRM (European Quality in Rehabilitation Mark). The first awarding event took place in Rome in December 2003.
The main values behind the quality mark were:
A client-focused approach of rehabilitation services
An awareness of the rights of service users
The involvement and empowerment of service users
A systematic enhancement of quality of life
Motivation of staff
Efficient business management practices
Measuring and proving the service outcomes and results
Services providers that are accountable for their actions and their use of funds
In 2008, the European Platform for Rehabilitation decided to create an additional quality assurance mark (EQUASS Assurance), alongside its EQRM mark (now renamed |
https://en.wikipedia.org/wiki/Field%20equation | In theoretical physics and applied mathematics, a field equation is a partial differential equation which determines the dynamics of a physical field, specifically the time evolution and spatial distribution of the field. The solutions to the equation are mathematical functions which correspond directly to the field, as functions of time and space. Since the field equation is a partial differential equation, there are families of solutions which represent a variety of physical possibilities. Usually, there is not just a single equation, but a set of coupled equations which must be solved simultaneously. Field equations are not ordinary differential equations since a field depends on space and time, which requires at least two variables.
Whereas the "wave equation", the "diffusion equation", and the "continuity equation" all have standard forms (and various special cases or generalizations), there is no single, special equation referred to as "the field equation".
The topic broadly splits into equations of classical field theory and quantum field theory. Classical field equations describe many physical properties like temperature of a substance, velocity of a fluid, stresses in an elastic material, electric and magnetic fields from a current, etc. They also describe the fundamental forces of nature, like electromagnetism and gravity. In quantum field theory, particles or systems of "particles" like electrons and photons are associated with fields, allowing for infinite degrees of freedom (unlike finite degrees of freedom in particle mechanics) and variable particle numbers which can be created or annihilated.
Generalities
Origin
Usually, field equations are postulated (like the Einstein field equations and the Schrödinger equation, which underlies all quantum field equations) or obtained from the results of experiments (like Maxwell's equations). The extent of their validity is their ability to correctly predict and agree with experimental results.
From a theor |
https://en.wikipedia.org/wiki/TMC-647055 | TMC-647055 is an experimental antiviral drug which was developed as a treatment for hepatitis C, and is in clinical trials as a combination treatment with ribavirin and simeprevir. It acts as an NS5B polymerase inhibitor.
https://en.wikipedia.org/wiki/Formal%20concept%20analysis | In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s.
Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry and biology.
Overview and history
The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that
the extent A consists of all objects that share the attributes in B, and dually
the intent B consists of all attributes shared by the objects in A.
In this way, formal concept analysis formalizes the semantic notions of extension and intension.
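As a small illustration of these derivation operators, the following Python sketch closes every subset of objects of a toy context (the objects and attributes are made up for illustration) and enumerates the resulting (extent, intent) pairs:

```python
from itertools import chain, combinations

# Toy formal context: object -> set of attributes (made-up data).
context = {
    "frog": {"needs water", "lives in water", "lives on land"},
    "dog":  {"needs water", "lives on land"},
    "fish": {"needs water", "lives in water"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attributes(objs):
    """Derivation A -> A': attributes shared by every object in A."""
    return set.intersection(*(context[g] for g in objs)) if objs else set(attributes)

def common_objects(attrs):
    """Derivation B -> B': objects that have every attribute in B."""
    return {g for g in objects if attrs <= context[g]}

def formal_concepts():
    """All (extent, intent) pairs, found by closing every subset of objects."""
    found = set()
    for subset in chain.from_iterable(combinations(objects, r) for r in range(len(objects) + 1)):
        intent = common_attributes(set(subset))
        extent = common_objects(intent)
        found.add((frozenset(extent), frozenset(intent)))
    return found

for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(extent), "|", sorted(intent))
```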
The formal concepts of any formal context can—as explained below—be ordered in a hierarchy called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which then may be he |
https://en.wikipedia.org/wiki/2004%20California%20Proposition%2071 | Proposition 71 of 2004 (or the California Stem Cell Research and Cures Act) is a law enacted by California voters to support stem cell research in the state. It was proposed by means of the initiative process and approved in the 2004 state elections on November 2. The Act amended both the Constitution of California and the Health and Safety Code.
The Act makes conducting stem cell research a state constitutional right. It authorizes the sale of general obligation bonds to allocate three billion dollars over a period of ten years to stem cell research and research facilities. Although the funds could be used to finance all kinds of stem cell research, it gives priority to human embryonic stem cell research.
Proposition 71 created the California Institute for Regenerative Medicine (CIRM), which is in charge of making "grants and loans for stem cell research, for research facilities, and for other vital research opportunities to realize therapies" as well as establishing "the appropriate regulatory standards of oversight bodies for research and facilities development". The Act also establishes a governing body called the Independent Citizen's Oversight Committee (ICOC) to oversee CIRM.
Proposition 71 is unique in at least three ways. Firstly, it uses general obligation bonds, which are usually used to finance brick-and-mortar projects such as bridges or hospitals, to fund scientific research. Secondly, by funding scientific research on such a large scale, California is taking on a role that is typically fulfilled by the U.S. federal government. Thirdly, Proposition 71 establishes the state constitutional right to conduct stem cell research. The initiative also represents a unique instance where the public directly decided to fund scientific research.
By 2020, the funding from proposition 71 was mostly used, and so the California Institute for Regenerative Medicine expected to shut down if it did not receive additional funding. For that reason, another ballot init |
https://en.wikipedia.org/wiki/Fine-tuning%20%28physics%29 | In theoretical physics, fine-tuning is the process in which parameters of a model must be adjusted very precisely in order to fit with certain observations. This has led to the discovery that the fundamental constants and quantities fall into such an extraordinarily precise range that, if they did not, the origin and evolution of conscious agents in the universe would not be permitted.
Theories requiring fine-tuning are regarded as problematic in the absence of a known mechanism to explain why the parameters happen to take precisely the values that are observed. The heuristic rule that parameters in a fundamental physical theory should not be too fine-tuned is called naturalness.
Background
The idea that naturalness will explain fine-tuning was brought into question by Nima Arkani-Hamed, a theoretical physicist, in his talk "Why is there a Macroscopic Universe?", a lecture from the mini-series "Multiverse & Fine Tuning" of the "Philosophy of Cosmology" project, a University of Oxford and University of Cambridge collaboration in 2013. In it he describes how naturalness has usually provided a solution to problems in physics, and how it has usually done so earlier than expected. However, in addressing the problem of the cosmological constant, naturalness has failed to provide an explanation, though it would have been expected to have done so a long time ago.
The necessity of fine-tuning leads to various problems that do not show that the theories are incorrect, in the sense of falsifying observations, but nevertheless suggest that a piece of the story is missing. For example, the cosmological constant problem (why is the cosmological constant so small?); the hierarchy problem; and the strong CP problem, among others.
Also, Dongshan He's team has suggested a possible solution to the fine-tuning of the cosmological constant, based on a model of universe creation from nothing.
Example
An example of a fine-tuning problem considered by the scientific community to have a plausible "natural" |
https://en.wikipedia.org/wiki/Ob/ob%20mouse | The ob/ob or obese mouse is a mutant mouse that eats excessively due to mutations in the gene responsible for the production of leptin and becomes profoundly obese. It is an animal model of type II diabetes. Identification of the gene mutated in ob led to the discovery of the hormone leptin, which is important in the control of appetite.
The first ob/ob mouse arose by chance in a colony at the Jackson Laboratory in 1949. The mutation is recessive. Mutant mice are phenotypically indistinguishable from their unaffected littermates at birth, but gain weight rapidly throughout their lives, reaching a weight three times that of unaffected mice. ob/ob mice develop high blood sugar, despite an enlargement of the pancreatic islets and increased levels of insulin.
The gene affected by the ob mutation was identified by positional cloning. The gene produces a hormone, called leptin, that is produced predominantly in adipose tissue. One role of leptin is to regulate appetite by signalling to the brain that the animal has had enough to eat. Since the ob/ob mouse cannot produce leptin, its food intake is uncontrolled by this mechanism.
A positional cloning approach in the Lepob mouse made it possible to identify the locus of the gene encoding the ob protein. Clones were used to construct a contig across most of the 650-kb critical region of ob. Exons from this interval were trapped using the exon trapping method, and each was afterwards sequenced and searched against GenBank. One of the exons was hybridized to a Northern blot of mouse white adipose tissue (WAT). This made it possible to investigate the levels of ob gene expression, which appeared to be markedly increased in the WAT of Lepob mice. This is consistent with a biologically inactive truncated protein.
See also
Zucker rat |
https://en.wikipedia.org/wiki/L-matrix | In mathematics, the class of L-matrices consists of those matrices whose off-diagonal entries are less than or equal to zero and whose diagonal entries are positive; that is, an L-matrix L satisfies L_ii > 0 for all i and L_ij ≤ 0 for all i ≠ j.
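For a quick computational check of this definition, a small NumPy sketch (the example matrices are arbitrary):

```python
import numpy as np

def is_l_matrix(matrix):
    """Return True if the square matrix has strictly positive diagonal entries
    and non-positive off-diagonal entries."""
    m = np.asarray(matrix, dtype=float)
    if m.ndim != 2 or m.shape[0] != m.shape[1]:
        return False
    off_diagonal = m - np.diag(np.diag(m))  # zero out the diagonal
    return bool(np.all(np.diag(m) > 0) and np.all(off_diagonal <= 0))

print(is_l_matrix([[2.0, -1.0], [0.0, 3.0]]))  # True
print(is_l_matrix([[2.0,  1.0], [0.0, 3.0]]))  # False: positive off-diagonal entry
```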
See also
Z-matrix—every L-matrix is a Z-matrix
Metzler matrix—the negation of any L-matrix is a Metzler matrix |
https://en.wikipedia.org/wiki/Grashof%20number | In fluid mechanics (especially fluid thermodynamics), the Grashof number (, after Franz Grashof) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid. It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number ().
Definition
Heat transfer
Free convection is caused by a change in density of a fluid due to a temperature change or gradient. Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces.
The Grashof number is:
Gr_L = g β (T_s − T_∞) L³ / ν²   for vertical flat plates
Gr_D = g β (T_s − T_∞) D³ / ν²   for pipes
Gr_D = g β (T_s − T_∞) D³ / ν²   for bluff bodies
where:
g is gravitational acceleration due to Earth
β is the coefficient of volume expansion (equal to approximately 1/T for ideal gases)
T_s is the surface temperature
T_∞ is the bulk temperature
L is the vertical length
D is the diameter
ν is the kinematic viscosity.
The L and D subscripts indicate the length scale basis for the Grashof number.
The transition to turbulent flow occurs in the range 10⁸ < Gr_L < 10⁹ for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, the boundary layer is laminar, that is, in the range 10³ < Gr_L < 10⁶.
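For illustration, the heat-transfer form above can be evaluated directly; the numbers below are placeholder values for air next to a warm vertical plate, not measured data:

```python
def grashof_number(g, beta, t_surface, t_bulk, length, nu):
    """Gr = g * beta * (T_s - T_bulk) * L**3 / nu**2 for natural convection.

    g: gravitational acceleration [m/s^2], beta: volumetric thermal expansion
    coefficient [1/K], length: characteristic length (plate height or pipe
    diameter) [m], nu: kinematic viscosity [m^2/s].
    """
    return g * beta * (t_surface - t_bulk) * length ** 3 / nu ** 2

# Illustrative values only: air near 300 K next to a 0.5 m warm vertical plate.
gr = grashof_number(g=9.81, beta=1 / 300.0, t_surface=320.0, t_bulk=300.0,
                    length=0.5, nu=1.6e-5)
print(f"Gr ~ {gr:.2e}")
```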
Mass transfer
There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients.
where
and:
is gravitational acceleration due to Earth
is the concentration of species at surface
is the concentration of species in ambient medium
is the characteristic length
is the kinematic viscosity
is the fluid density
is the concentration of species
is the temperature (constant)
is the pressure (constant).
Relationship to other dimensio |
https://en.wikipedia.org/wiki/New%20South%20Wales%20Heritage%20Database | New South Wales Heritage Database, or State Heritage Inventory, is an online database of information about historic sites in New South Wales, Australia with statutory heritage listings.
Contents
It holds the information about sites listed on the New South Wales State Heritage Register (over 1,650 entries) in addition to sites on heritage lists managed by New South Wales local government authorities and other statutory heritage registers.
It is important to note that this is an online database holding information about historic sites but is not in itself a heritage register. A historic site can have multiple entries in this database if it is listed on multiple heritage registers. For example, Young railway station is on three heritage registers and therefore has three entries in the database.
Licensing
The database is licensed CC BY except for material identified as being the copyright of third parties. |
https://en.wikipedia.org/wiki/Limiting%20density%20of%20discrete%20points | In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Shannon for differential entropy.
It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy.
Definition
Shannon originally wrote down the following formula for the entropy of a continuous distribution, known as differential entropy:

h(X) = −∫ p(x) log p(x) dx

Unlike Shannon's formula for the discrete entropy, however, this is not the result of any derivation (Shannon simply replaced the summation symbol in the discrete version with an integral), and it lacks many of the properties that make the discrete entropy a useful measure of uncertainty. In particular, it is not invariant under a change of variables and can become negative. In addition, it is not even dimensionally correct. Since probability is dimensionless, p(x) must have units of 1/x, which means that the argument to the logarithm is not dimensionless as required.
Jaynes argued that the formula for the continuous entropy should be derived by taking the limit of increasingly dense discrete distributions. Suppose that we have a set of N discrete points {x_i}, such that in the limit N → ∞ their density approaches a function m(x) called the "invariant measure".
Jaynes derived from this the following formula for the continuous entropy, which he argued should be taken as the correct formula:

H(X) = log N − ∫ p(x) log ( p(x) / m(x) ) dx

Typically, when this is written, the term log N is omitted, as that would typically not be finite. So the actual common definition is

H(X) = −∫ p(x) log ( p(x) / m(x) ) dx

Where it is unclear whether or not the log N term should be omitted, one could write

H_N(X) ≈ log N − ∫ p(x) log ( p(x) / m(x) ) dx
Notice that in Jaynes' formula, m(x) is a probability density. For any finite N it is a uniform density over the quantization of the continuous space that is used in the Riemann sum. In the limit, m(x) is the continuous limiting density of points in the quantization used to represent the continuous variable x.
Suppose one had a number format that took on possible values, distributed as per . Then (if |
https://en.wikipedia.org/wiki/Environmental%20consulting | Environmental consulting is often a form of compliance consulting, in which the consultant ensures that the client maintains an appropriate measure of compliance with environmental regulations. Sustainable consulting is a specialized field that offers guidance and solutions for businesses seeking to operate in an environmentally responsible and sustainable way. The goal of sustainable consulting is to help organizations reduce their environmental impact while maintaining profitability and social responsibility. There are many types of environmental consultants, but the two main groups are those who enter the field from the industry side, and those who enter the field from the environmentalist side.
Environmental consultants work in a very wide variety of fields, whether providing construction services such as asbestos hazard assessments or lead hazard assessments, or preparing due diligence reports that help customers avoid possible sanctions. Consultancies may generalize across a wide range of disciplines or specialize in certain areas of environmental consultancy, such as waste management.
Environmental consultants usually have an undergraduate degree, and sometimes a master's degree, in environmental engineering, environmental science, environmental studies, geology, or some other science discipline. They should have deep knowledge of environmental regulations, on which they can advise clients in private industry or public government institutions to help them steer clear of possible fines, legal action or misguided transactions.
Environmental consulting spans a wide spectrum of industry. The most basic industry in which environmental consulting remains prominent is the commercial real estate market. Many commercial lenders rely on both small and large environmental firms, and many will not lend money to borrowers if the property or personal capital does not exceed the worth of the land. If an environmental problem is discovered p
https://en.wikipedia.org/wiki/Maximum%20clade%20credibility%20tree | A maximum clade credibility tree is a tree that summarises the results of a Bayesian phylogenetic inference. Whereas a majority-rule tree combines the most common clades, and usually yields a tree that wasn't sampled in the analysis, the maximum-credibility method evaluates each of the sampled posterior trees. Each clade within the tree is given a score based on the fraction of times that it appears in the set of sampled posterior trees, and the product of these scores are taken as the tree's score. The tree with the highest score is then the maximum clade credibility tree. |
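A minimal sketch of this scoring procedure, assuming each sampled tree has already been reduced to its set of clades (real tools work on full tree objects), might look like this in Python:

```python
from collections import Counter

def maximum_clade_credibility_tree(sampled_trees):
    """Return the sampled tree whose product of clade support values is highest.

    Each tree is represented here, for simplicity, as a set of clades, where a
    clade is a frozenset of taxon names; real analyses would extract the clades
    from Newick/NEXUS tree objects produced by the sampler.
    """
    clade_counts = Counter(clade for tree in sampled_trees for clade in tree)
    n = len(sampled_trees)

    def score(tree):
        product = 1.0
        for clade in tree:
            product *= clade_counts[clade] / n  # fraction of samples containing this clade
        return product

    return max(sampled_trees, key=score)

# Three made-up posterior samples over the taxa A, B, C, D.
t1 = {frozenset("AB"), frozenset("ABCD")}
t2 = {frozenset("AB"), frozenset("ABCD")}
t3 = {frozenset("CD"), frozenset("ABCD")}
print(maximum_clade_credibility_tree([t1, t2, t3]))  # picks the tree containing the AB clade
```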
https://en.wikipedia.org/wiki/123Movies | 123Movies, GoMovies, GoStream, MeMovies or 123movieshub was a network of file streaming websites operating from Vietnam which allowed users to watch films for free. It was called the world's "most popular illegal site" by the Motion Picture Association of America (MPAA) in March 2018, before being shut down a few weeks later following a criminal investigation by the Vietnamese authorities. Websites imitating the brand remain active.
Development
The site went through several name changes after being shut down from different domains; sometimes the name appeared as "123Movies", and other times as "123movies". The original name, and URL, was 123movies.to, which changed to other domains including 123movies.is before redirecting to gomovies.to and later gomovies.is. It was changed to gostream.is, and then to memovies.to, before changing to 123movieshub.to/is and remaining there until shutdown.
In October 2016, the MPAA listed 123Movies in its Online Notorious Markets overview to the Office of the United States Trade Representative (USTR), stating that: "The site has a global Alexa rank of 559 and a local rank of 386 in the U.S. 123movies.to had 9.26 million worldwide unique visitors in August 2016 according to SimilarWeb data". In October 2016, Business Insider reported that 123movies.to was the "most-used pirate website" in the United Kingdom.
123Movies included HD, HD-RIP, Blu-ray and camera qualities of films. The video hosters and players it used included Openload, Streamango, and MyCloud. During its existence and shutdown period, the site was covered by TorrentFreak regarding its features, uptime/downtime, shutdown, and reasons for shutdown.
In December 2017, the creators of 123movies launched another streaming site dedicated to anime, named AnimeHub.to, which remained online for months after 123Movies's shutdown.
Shutdown
In March 2017, TorrentFreak reported that the US ambassador to Vietnam, Ted Osius, had been in talks with the local Minister of Inform |
https://en.wikipedia.org/wiki/ROHR2 | ROHR2 is a pipe stress analysis CAE system from SIGMA Ingenieurgesellschaft mbH, based in Unna, Germany. The software performs both static and dynamic analysis of complex piping and skeletal structures, and runs on Microsoft Windows platform.
ROHR2 software comes with built-in industry-standard stress codes such as ASME B31.1, B31.3, B31.4, B31.5, B31.8, EN 13480 and CODETI, along with several GRP pipe codes, as well as nuclear stress codes such as ASME Cl. 1-3, KTA 3201.2 and KTA 3211.2.
Name
The brand name comes from the German word "Rohr" (pronounced "roar"), which means "pipe".
History
Early years as an MBP product: 1960s to 1989
ROHR2 was created in the late 1960s by one of the first software companies in Germany, Mathematischer Beratungs- und Programmierungsdienst (MBP), based in Dortmund. ROHR2 first ran on mainframes such as UNIVAC 1, CRAY, and later Prime computers. At the time, the program was command-line driven, with a proprietary programming language to describe the piping systems and define the various load conditions. Version 26, launched in 1987, was released for the IBM PC as well as IBM PC compatible systems.
As an EDS / SIGMA product: 1989 to 2000
MBP was later taken over by EDS (then a part of General Motors Corp., now part of HP Enterprise Services). In 1989, SIGMA Ingenieurgesellschaft mbH was founded in Dortmund, and the ROHR2 development and support team moved to the new office premises of SIGMA. A graphical user interface was added to the product in 1994, which allowed piping systems to be edited without mastering the previously required programming language.
SIGMA Ingenieurgesellschaft mbH product: 2000 to present
From the year 2000 onwards, the complete licensing and sales activities came under the management of SIGMA Ingenieurgesellschaft mbH, which by then had evolved into an engineering company specializing in pipe engineering as well as a software development firm.
The recent developments include new bi-directional int |
https://en.wikipedia.org/wiki/Radar%20geo-warping | Radar geo-warping is the adjustment of geo-referenced radar images and video data to be consistent with a geographical projection. This image warping avoids any restrictions when displaying it together with video from multiple radar sources or with other geographical data including scanned maps and satellite images which may be provided in a particular projection.
There are many areas where geo-warping has unique benefits:
A single radar video signal displayed together with maps of different geographical projections, e.g.:
Mercator
UTM
stereographic
Multiple radar video signals displayed simultaneously:
Having the computing power to do so on one computer.
Adapting the projection of all radar signals allowing the geographically correct display and accurate superimposition of those videos.
Slant range correction: a modern 3D radar system can measure the height of a target, and hence it is possible to correct the radar video by the real corrected range of the target. Slant range correction also makes it possible to compensate for the radar tower height, e.g. for maritime surveillance radars (see the sketch below).
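A minimal flat-earth sketch of the slant-range correction mentioned above (ignoring earth curvature and refraction, which a production system would also handle):

```python
import math

def slant_to_ground_range(slant_range, target_height, radar_height=0.0):
    """Flat-earth slant-range correction: ground = sqrt(slant^2 - dh^2), where
    dh is the height difference between target and radar antenna. A production
    system would also account for earth curvature and atmospheric refraction."""
    dh = target_height - radar_height
    if abs(dh) > slant_range:
        raise ValueError("height difference exceeds slant range")
    return math.sqrt(slant_range ** 2 - dh ** 2)

# A target at 10 km slant range and 3 km altitude, seen from a 25 m radar tower.
print(f"{slant_to_ground_range(10_000.0, 3_000.0, 25.0):.1f} m ground range")
```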
Introduction
Radar video presents the echoes of electromagnetic waves a radar system has emitted and received as reflections afterwards. These echoes are typically presented on a computer screen with a color-coding scheme depicting the reflection strength.
Two problems have to be solved during such a visualization process. The first problem arises from the fact that typically the radar antenna turns around its position and measures the reflection echo distances from its position in one direction. This effectively means that the radar video data are present in polar coordinates. In older systems the polar-oriented picture was displayed on so-called plan position indicators (PPI). The PPI-scope uses a radial sweep pivoting about the center of the presentation. This results in a map-like picture of the area covered by the radar beam. A long-persistence screen is used so that the display rema
https://en.wikipedia.org/wiki/Desulfurobacteriaceae | The Desulfurobacteriaceae family are bacteria belonging to the Aquificota phylum.
Phylogeny
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LSPN) and the National Center for Biotechnology Information (NCBI).
Class "Desulfurobacteriia"
Order Desulfurobacteriales Gupta & Lali 2014
Family Desulfurobacteriaceae L'Haridon et al. 2006 em. Gupta & Lali 2013
Genus Balnearium Takai et al. 2003
Species B. lithotrophicum Takai et al. 2003
Genus Desulfurobacterium L'Haridon et al. 1998 emend. L'Haridon et al. 2006
Species D. atlanticum L'Haridon et al. 2006
Species "D. crinifex" Alain et al. 2003
Species D. indicum Cao et al. 2017
Species D. pacificum L'Haridon et al. 2006
Species D. thermolithotrophum L'Haridon et al. 1998 (type sp.)
Genus Phorcysia Pérez-Rodríguez et al. 2012
Species P. thermohydrogeniphila Pérez-Rodríguez et al. 2012
Genus Thermovibrio Huber et al. 2002
Species T. ammonificans Vetriani et al. 2004
Species T. guaymasensis L'Haridon et al. 2006
Species T. ruber Huber et al. 2002 (type sp.) |
https://en.wikipedia.org/wiki/Caldivirga | In taxonomy, Caldivirga is a genus of the Thermoproteaceae. |
https://en.wikipedia.org/wiki/Continuous%20knapsack%20problem | In theoretical computer science, the continuous knapsack problem (also known as the fractional knapsack problem) is an algorithmic problem in combinatorial optimization in which the goal is to fill a container (the "knapsack") with fractional amounts of different materials chosen to maximize the value of the selected materials. It resembles the classic knapsack problem, in which the items to be placed in the container are indivisible; however, the continuous knapsack problem may be solved in polynomial time whereas the classic knapsack problem is NP-hard. It is a classic example of how a seemingly small change in the formulation of a problem can have a large impact on its computational complexity.
Problem definition
An instance of either the continuous or classic knapsack problems may be specified by the numerical capacity W of the knapsack, together with a collection of materials, each of which has two numbers associated with it: the weight w_i of material that is available to be selected and the total value v_i of that material. The goal is to choose an amount x_i of each material, subject to the capacity constraint
∑ x_i ≤ W
and maximizing the total benefit
∑ (v_i / w_i) x_i
In the classic knapsack problem, each of the amounts x_i must be either zero or w_i; the continuous knapsack problem differs by allowing x_i to range continuously from zero to w_i.
Some formulations of this problem rescale the variables y_i = x_i / w_i to be in the range from 0 to 1. In this case the capacity constraint becomes
∑ w_i y_i ≤ W
and the goal is to maximize the total benefit
∑ v_i y_i
Solution technique
The continuous knapsack problem may be solved by a greedy algorithm, first published in 1957 by George Dantzig, that considers the materials in sorted order by their values per unit weight. For each material, the amount xi is chosen to be as large as possible:
If the sum of the choices made so far equals the capacity W, then the algorithm sets xi = 0.
If the difference d between the sum of the choices made so far and W is smaller than wi, then the algorithm sets |
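For illustration, the greedy procedure just described can be written compactly in Python; the (value, weight) pairs below are a made-up instance:

```python
def fractional_knapsack(capacity, items):
    """Greedy solution to the continuous (fractional) knapsack problem.

    `items` is a list of (value, weight) pairs with positive weights; returns
    the maximum attainable value and the chosen amount x_i of each material.
    """
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    amounts = [0.0] * len(items)
    remaining = capacity
    total_value = 0.0
    for i in order:
        value, weight = items[i]
        if remaining <= 0:
            break                      # no capacity left: x_i stays 0
        take = min(weight, remaining)  # take everything, or only what still fits
        amounts[i] = take
        total_value += value * take / weight
        remaining -= take
    return total_value, amounts

# Made-up instance: capacity 50, items given as (total value, available weight).
print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))  # (240.0, [10.0, 20.0, 20.0])
```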