https://en.wikipedia.org/wiki/CRISPR/Cas%20tools | CRISPR-Cas design tools are computer software platforms and bioinformatics tools used to facilitate the design of guide RNAs (gRNAs) for use with the CRISPR/Cas gene editing system.
CRISPR-Cas
The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR associated nucleases) system was originally discovered to be an acquired immune response mechanism used by archaea and bacteria. It has since been adopted for use as a tool in the genetic engineering of higher organisms.
Designing an appropriate gRNA is an important element of genome editing with the CRISPR/Cas system. A gRNA can and at times does have unintended interactions ("off-targets") with other locations of the genome of interest. For a given candidate gRNA, these tools report its list of potential off-targets in the genome thereby allowing the designer to evaluate its suitability prior to embarking on any experiments.
Scientists have also begun exploring the mechanics of the CRISPR/Cas system and what governs how good, or active, a gRNA is at directing the Cas nuclease to a specific location of the genome of interest. As a result of this work, new methods of assessing a gRNA for its 'activity' have been published, and it is now best practice to consider both the unintended interactions of a gRNA as well as the predicted activity of a gRNA at the design stage.
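As a toy illustration of the off-target idea (not the algorithm of any published design tool, which also weigh PAM compatibility and position-dependent mismatch penalties), a candidate gRNA can be compared against genomic sites by counting base mismatches; the guide and site sequences below are made up:

```python
def mismatches(guide: str, site: str) -> int:
    """Count base mismatches between a guide and a same-length DNA site."""
    assert len(guide) == len(site)
    return sum(g != s for g, s in zip(guide, site))

def potential_off_targets(guide, sites, max_mismatches=3):
    """Return sites similar enough to the guide to warrant scrutiny."""
    return [s for s in sites if mismatches(guide, s) <= max_mismatches]

guide = "GACGTTACCGGAATCTGCAT"      # hypothetical 20-nt guide
sites = [
    "GACGTTACCGGAATCTGCAT",         # perfect on-target match
    "GACGTAACCGGAATCTGCAT",         # 1 mismatch: potential off-target
    "TTTTTTTTTTTTTTTTTTTT",         # unrelated sequence
]
print(potential_off_targets(guide, sites))
```

Real tools perform this search across an entire genome, which is why efficient indexing and scoring models matter in practice.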
Table
The table below lists available tools and their attributes.
References
Genetic engineering
Genome editing |
https://en.wikipedia.org/wiki/Microbial%20contamination%20of%20diesel%20fuel | Diesel bug is contamination of diesel fuel by microbes such as bacteria and fungi.
Water can get into diesel fuel as a result of condensation, rainwater penetration or absorption from the air; modern biodiesel is especially hygroscopic. The presence of water then encourages microbial growth, which occurs either at the interface between the oil and water or on the tank walls, depending on whether the microbes need oxygen. Species which may grow in this way include:
bacteria — Clostridium; Desulfotomaculum; Desulfovibrio; Flavobacterium; Acidovorax facilis; Pseudomonas; Sarcina
fungi — Aspergillus; Candida keroseneae; Fusarium; Hormoconis resinae
Fuel companies agree that, if left untreated, fuel will remain reliable for just 6–12 months, after which fuel contamination (such as the diesel bug) begins to appear. Most industrial engine manufacturers now recommend a fuel conditioning programme to ensure the reliability of fuel.
References
External links
Stored Fuel for Back Up Generators - including "How to minimize the risk of fuel contamination occurring"
Diesel fuel |
https://en.wikipedia.org/wiki/Lime%20tree%20in%20culture | The lime tree, or linden, (Tilia) is important in the mythology, literature, and folklore of a number of cultures.
Cultural significance
Slavic mythology
In old pagan Slavic mythology, the linden (lipa, as it is called in all Slavic languages) was considered a sacred tree. Particularly in Poland, many villages bear the name "Święta Lipka" (or similar), which literally means "Holy Lime". To this day, the tree is a national emblem of the Czech Republic, Slovakia, Slovenia, and Lusatia. Lipa gave its name to the traditional Slavic name for the month of June (Croatian lipanj) or July (Polish lipiec, Ukrainian lypen'/липень). It is also the root of the name of the German city of Leipzig, taken from the Sorbian name lipsk. The former Croatian currency, the kuna, consisted of 100 lipa (Tilia). "Lipa" was also a proposed name for the Slovenian currency in 1990; however, the name "tolar" ultimately prevailed. In the Slavic Orthodox Christian world, limewood was the preferred wood for panel icon painting. The icons by the hand of Andrei Rublev, including the Holy Trinity (Hospitality of Abraham) and The Savior, now in the State Tretyakov Gallery in Moscow, are painted on linden wood. The wood was chosen for its ability to be sanded very smooth and for its resistance to warping once seasoned. The name of the southern Slovenian village of Lipica means "little lime tree" and has given its name to the Lipizzan horse breed.
Baltic mythology
In Baltic mythology, there is an important goddess of fate by the name of Laima /laɪma/, whose sacred tree is the lime. Laima's dwelling was a lime-tree, where she made her decisions in the form of a cuckoo. For this reason Lithuanian women prayed and gave sacrifices under lime-trees, asking for luck and fertility. They treated lime-trees with respect and talked with them as if they were human beings.
Germanic mythology
The linden was also a highly symbolic and hallowed tree to the Germanic peoples in their native pre-Christian Germanic mythology.
Originally, local communities as |
https://en.wikipedia.org/wiki/Stevie%20%28text%20editor%29 | Stevie (ST Editor for VI Enthusiasts) is a discontinued clone of Bill Joy's vi text editor. Stevie was written by Tim Thompson for the Atari ST in 1987. It later became the basis for Vim, which was released in 1991.
Thompson posted his original C source code as free software to the comp.sys.atari.st newsgroup on 28 June 1987. Tony Andrews added features and ported it to Unix, OS/2 and Amiga, posting his version to the comp.sources.unix newsgroup as free software on 6 June 1988. In 1991, Bram Moolenaar released Vim, which he based on the source code of the Amiga port of Stevie.
References
Vi
Free text editors
Atari ST software
Amiga software
Unix text editors
OS/2 text editors
Free software programmed in C
Cross-platform free software |
https://en.wikipedia.org/wiki/Ecosystem%20decay | Ecosystem decay is a term coined by Thomas Lovejoy to describe the process by which species become locally extinct as a result of habitat fragmentation. This process is what led to the extinction of several species, including the Irish Elk. Ecosystem decay can be attributed mainly to population isolation, which leads to inbreeding and thus to a decrease in the population of local species. Another factor is the absence of competition, which prevents the mechanisms of natural selection from benefiting the population. This leaves the animal without the traits needed to adjust and adapt to a new environment. Habitat fragmentation and loss lead to smaller habitat sizes, and ecosystem decay predicts that ecological processes are changed so heavily in smaller habitats that the loss in diversity is more extreme than expected from fragmentation alone.
Although related to forest fragmentation and island biogeography, ecosystem decay is what results when forest fragmentation occurs.
Overview
Ecosystem decay is a natural phenomenon that has several resulting features.
Decline of native populations of animals
Decrease in genetic diversity
Decrease of the interior:edge ratio
Isolation of an area of viable habitat
Reduction in viable habitats and often extreme separation
Process
The process through which ecosystem decay occurs can be long and complicated or short and hasty. Overall, it still follows some basic guidelines. First, a piece of habitat is surrounded, and thus isolated, by farmland or cities.
Secondly, pollination of the plants immediately ceases and the number of species thins out. Thirdly, through generations of inbreeding, with mortality consequently exceeding the survival rate of newborns, and through infertile soil, the forest fragment will slowly decline to nothing.
Causes
Ecosystem decay is commonly caused by humans harvesting rain forest, whether in compliance with certain laws or illegally for profit. Certain countries such as Brazil prohibit the harvesting of Brazil nut trees and groves of this |
https://en.wikipedia.org/wiki/BioFabric | BioFabric is an open-source software application for graph drawing. It presents graphs as a node-link diagram, but unlike other graph drawing tools that depict the nodes using discrete symbols, it represents nodes using horizontal lines.
Rationale
Traditional node-link methods for visualizing networks deteriorate in terms of legibility when dealing with large networks, due to the proliferation of edge crossings amassing as what are disparagingly termed 'hairballs'. BioFabric is one of a number of alternative approaches designed explicitly to tackle this scalability issue, choosing to do so by depicting nodes as lines on the horizontal axis, one per row; edges as lines on the vertical axis, one per column, terminating at the two rows associated with the endpoint nodes. As such, nodes and edges are each provided their own dimension (as opposed to solely the edges with nodes being non-dimensional points). BioFabric exploits the additional degree of freedom thus produced to place ends of incident edges in groups. This placement can potentially carry semantic information, whereas in node-link graphics the placement is often arbitrarily generated within constraints for aesthetics, such as during force-directed graph drawing, and may result in apparently informative artifacts.
Edges are drawn (vertically) in a darker shade than (horizontal) nodes, creating visual distinction. Additional edges increase the width of the graph.
Both ends of a link are represented as a square to reinforce the above effect even at small scales. Directed graphs also incorporate arrowheads.
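The row/column assignment can be sketched in a few lines (a simplified model of the idea, not BioFabric's actual layout code):

```python
def fabric_layout(nodes, edges):
    """Assign one horizontal row per node and one vertical column per
    edge; each edge column spans the rows of its two endpoint nodes."""
    row = {n: i for i, n in enumerate(nodes)}           # one row per node
    columns = []                                        # one column per edge
    for col, (a, b) in enumerate(edges):
        top, bottom = sorted((row[a], row[b]))
        columns.append((col, top, bottom))              # edge spans top..bottom
    return row, columns

nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "C"), ("A", "C")]
rows, cols = fabric_layout(nodes, edges)
print(rows)   # {'A': 0, 'B': 1, 'C': 2}
print(cols)   # [(0, 0, 1), (1, 1, 2), (2, 0, 2)]
```

Because each edge occupies its own column, adding edges widens the drawing rather than creating crossings, which is the scalability property described above.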
Development
The first version, 1.0.0, was released in July 2012. Development work on BioFabric is ongoing. An open source R implementation was released in 2013, RBioFabric, for use with the igraph package, and subsequently described on the project weblog.
Features
Input
Networks can be imported using SIF files as input.
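A SIF file is plain text with one interaction per line; a minimal reader (the relation names `pp`/`pd` below are illustrative) might look like:

```python
def parse_sif(text):
    """Parse simple interaction format: each line is
    'source relation target1 [target2 ...]'; a line containing a single
    token declares an isolated node with no edges."""
    edges, lone_nodes = [], []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 1:
            lone_nodes.append(parts[0])
        elif parts:
            src, rel, targets = parts[0], parts[1], parts[2:]
            edges.extend((src, rel, t) for t in targets)
    return edges, lone_nodes

sample = """\
nodeA pp nodeB nodeC
nodeB pd nodeD
nodeE
"""
print(parse_sif(sample))
# ([('nodeA', 'pp', 'nodeB'), ('nodeA', 'pp', 'nodeC'), ('nodeB', 'pd', 'nodeD')], ['nodeE'])
```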
Related work
Blakley et al. have described how the technique used by BioFabric, |
https://en.wikipedia.org/wiki/Foil%20%28architecture%29 | A foil is an architectural device based on a symmetrical rendering of leaf shapes, defined by overlapping circles of the same diameter that produce a series of cusps to make a lobe. Typically, the number of cusps can be three (trefoil), four (quatrefoil), five (cinquefoil), or a larger number (multifoil).
Foil motifs may be used as part of the heads and tracery of window lights, complete windows themselves, the underside of arches, in heraldry, within panelling, and as part of any decorative or ornament device. Foil types are commonly found in Gothic and Islamic architecture.
References
Ornaments (architecture)
Symbols
Heraldic charges
Visual motifs |
https://en.wikipedia.org/wiki/Congruence-permutable%20algebra | In universal algebra, a congruence-permutable algebra is an algebra whose congruences commute under composition. This symmetry has several equivalent characterizations, which lend to the analysis of such algebras. Many familiar varieties of algebras, such as the variety of groups, consist of congruence-permutable algebras, but some, like the variety of lattices, have members that are not congruence-permutable.
Definition
Given an algebra A, a pair of congruences α, β of A are said to permute when α ∘ β = β ∘ α. An algebra is called congruence-permutable when each pair of its congruences permutes. A variety of algebras V is referred to as congruence-permutable when every algebra in V is congruence-permutable.
Properties
In 1954 Maltsev gave two other conditions that are equivalent to the one given above defining a congruence-permutable variety of algebras. This initiated the study of congruence-permutable varieties.
Theorem (Maltsev, 1954)
Suppose that V is a variety of algebras. The following are equivalent:
V is congruence-permutable.
The free algebra in V on three generators is congruence-permutable.
V has a ternary term p satisfying the identities p(x, x, y) ≈ y and p(x, y, y) ≈ x.
Such a term is called a Maltsev term, and congruence-permutable varieties are also known as Maltsev varieties in his honor.
Examples
Most classical varieties in abstract algebra, such as groups, rings, and Lie algebras, are congruence-permutable. Any variety that contains a group operation is congruence-permutable, and the Maltsev term is p(x, y, z) = xy⁻¹z.
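For a variety with a group operation, writing the standard Maltsev term explicitly makes the two Maltsev identities a one-line check:

```latex
p(x,y,z) = x y^{-1} z, \qquad
p(x,x,y) = x x^{-1} y = y, \qquad
p(x,y,y) = x y^{-1} y = x.
```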
Nonexamples
Viewed as a lattice, the chain with three elements is not congruence-permutable, and hence neither is the variety of lattices.
References
Universal algebra |
https://en.wikipedia.org/wiki/Circulating%20water%20plant | A circulating water plant or circulating water system is an arrangement for the flow of water in fossil-fuel power stations, chemical plants and oil refineries. The system is required because various industrial process plants use heat exchangers, and also for active fire protection measures. In chemical plants, for example in caustic soda production, water is needed in bulk quantity for the preparation of brine. The circulating water system in any plant consists of a circulator pump, which develops an appropriate hydraulic head, and pipelines to circulate the water through the entire plant.
System description
Circulating water pumps
Circulating water systems are normally of the wet pit type, but for sea water circulation, both the wet pit type and the concrete volute type are employed. In some industries, one or two stand-by pumps are also connected in parallel to the CW pumps. It is recommended that these pumps be driven by constant-speed squirrel-cage induction motors. CW pumps are designed as per IS:9137, the standards of the Hydraulic Institute, USA, or equivalent.
Cooling tower
In the present era, mechanical induced draft-type cooling towers are employed for cooling the water. Performance testing of cooling towers (both IDCT and NDCT) shall be carried out as per ATC-105 at a time when the atmospheric conditions are within the permissible limits of deviation from the design conditions. As per the guidelines of the Central Electricity Authority, two mechanical draft cooling towers or one natural draft cooling tower must be established for each 500 MW unit in power plants. The cooling towers are designed as per Cooling Tower Institute codes.
CW treatment system
Some coastal power stations or chemical plants intake water from sea for condenser cooling. They either use closed cycle cooling by using cooling towers or once through cooling. Selection of type of system is based on the thermal pollution effect on sea water and techno-economics based on the distance of power station from |
https://en.wikipedia.org/wiki/Jurimetrics | Jurimetrics is the application of quantitative methods, and often especially probability and statistics, to law. In the United States, the journal Jurimetrics is published by the American Bar Association and Arizona State University. The Journal of Empirical Legal Studies is another publication that emphasizes the statistical analysis of law.
The term was coined in 1949 by Lee Loevinger in his article "Jurimetrics: The Next Step Forward". Showing the influence of Oliver Wendell Holmes Jr., Loevinger quoted Holmes' celebrated phrase that:
The first work on this topic is attributed to Nicolaus I Bernoulli in his doctoral dissertation De Usu Artis Conjectandi in Jure, written in 1709.
Common methods
Bayesian inference
Causal inference
Instrumental variables
Design of experiments
Vital for epidemiological studies
Generalized linear models
Ordinary least squares, logistic regression, Poisson regression
Meta-analysis
Probability distributions
Binomial distribution, hypergeometric distribution, normal distribution
Survival analysis
Kaplan-Meier estimator, proportional hazards model, Weibull distribution
Applications
Accounting fraud detection (Benford's law)
Airline deregulation
Analysis of police stops (Negative binomial regression)
Ban the Box legislation and subsequent impact on job applications
Statistical discrimination (economics)
Calorie labeling mandates and food consumption
Risk compensation
Challenging election results (Hypergeometric distribution)
Condorcet's jury theorem
Cost-benefit analysis of renewable portfolio standards for greenhouse gas abatement
Effect of compulsory schooling on future earnings
Effect of corporate board size on firm performance
Effect of damage caps on medical malpractice claims
Effect of a fiduciary standard on financial advice
False conviction rate of inmates sentenced to death
Legal evidence (Bayesian network)
Impact of "pattern-or-practice" investigations on crime
Legal informatics
Ogden tables
Optimal stopping of clin |
https://en.wikipedia.org/wiki/Biometric%20device | A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris and voice recognition.
History
Biometric devices have been in use for thousands of years. Non-automated biometric devices have been in use since 500 BC, when ancient Babylonians would sign their business transactions by pressing their fingertips into clay tablets.
Automation in biometric devices was first seen in the 1960s, when the Federal Bureau of Investigation (FBI) introduced the Identimat, which started checking fingerprints to maintain criminal records. The first systems measured the shape of the hand and the length of the fingers. Although discontinued in the 1980s, the system set a precedent for future biometric devices.
Types of biometric devices
There are two categories of biometric devices:
Contact Devices - These devices require contact with a body part of a live person. They are mainly fingerprint scanners, whether single-fingerprint, dual-fingerprint or slap (4+4+2) fingerprint scanners, and hand geometry scanners.
Contactless Devices - These devices do not require any contact. The main examples are face, iris, retina and palm vein scanners, and voice identification devices.
Subgroups
Users are granted access based on characteristics of the human body. According to these characteristics, biometric devices can be sub-divided into the following groups:
Chemical biometric devices: analyse segments of DNA to grant access to users.
Visual biometric devices: analyse the visual features of humans to grant access, which includes iris recognition, face recognition, finger recognition, and retina recognition.
Behavioral biometric devices: analyse the walking ability and signatures (velocity of sign, width of sign, pressure of sign) distinct to |
https://en.wikipedia.org/wiki/List%20of%20sums%20of%20reciprocals | In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally the sum of unit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the first n of them are summed, then one more is included to give the sum of the first n+1 of them, etc.
If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer.
For an infinite series of reciprocals, the issues are twofold: First, does the sequence of sums diverge—that is, does it eventually exceed any given number—or does it converge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to be large if the sum of its reciprocals diverges, and small if it converges.) Second, if it converges, what is a simple expression for the value it converges to, is that value rational or irrational, and is that value algebraic or transcendental?
Finitely many terms
The harmonic mean of a set of positive integers is the number of numbers times the reciprocal of the sum of their reciprocals.
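This definition can be checked directly with exact rational arithmetic:

```python
from fractions import Fraction

def harmonic_mean(xs):
    """Harmonic mean: the count of numbers divided by the sum of
    their reciprocals, computed exactly with fractions."""
    return len(xs) / sum(Fraction(1, x) for x in xs)

print(harmonic_mean([1, 2, 4]))  # 3 / (1 + 1/2 + 1/4) = 12/7
```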
The optic equation requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c. All solutions are given by a = mn + m², b = mn + n², c = mn. This equation appears in various contexts in elementary geometry.
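The parametrization is easy to verify numerically for small m and n (a quick sanity check, not a proof that it gives all solutions):

```python
from fractions import Fraction

def optic_holds(a, b, c):
    """Check the optic equation 1/a + 1/b = 1/c exactly."""
    return Fraction(1, a) + Fraction(1, b) == Fraction(1, c)

# Every (a, b, c) produced by the parametrization satisfies the equation.
for m in range(1, 5):
    for n in range(1, 5):
        a, b, c = m * n + m * m, m * n + n * n, m * n
        assert optic_holds(a, b, c)

print(optic_holds(3, 6, 2))  # the classic solution 1/3 + 1/6 = 1/2: True
```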
The Fermat–Catalan conjecture concerns a certain Diophantine equation, equating the sum of two terms, each a positive integer raised to a positive integer power, to a third term that is also a positive integer raised to a positive integer power (with the base integers having no prime factor in common). The conjecture asks whether the equation has an infi |
https://en.wikipedia.org/wiki/Gollum%20%28software%29 | Gollum is wiki software that uses Git as its back-end storage mechanism and is written mostly in Ruby. It started life as the wiki system used by the GitHub web hosting system. Although the open source Gollum project and the software currently used to run GitHub wikis have diverged from one another, Gollum strives to maintain compatibility with the latter. Currently it is used by the GitLab server to store and interconnect wiki pages with wiki-links, but the plan is to move completely away from Gollum in the future.
Formats supported
Gollum wikis are simply Git repositories that adhere to a specific format. Gollum pages may be written in a variety of formats including Markdown, AsciiDoc, ReStructuredText, Creole and MediaWiki markup.
Features
YAML Frontmatter for controlling per-page settings
UML diagrams via PlantUML
BibTeX and citation support (when using Pandoc for rendering)
Annotations using CriticMarkup
Mathematics via MathJax
Macros
Redirects
Support for Right-To-Left Languages
Editing
Editing the pages can be done via the provided web interface, via its API or with a text editor directly in the git repository.
See also
ikiwiki: Also uses a version control system to store pages
Gitit (software): Git-based wiki software with similar features
References
External links
Documentation
GitHub
Free wiki software
Free software programmed in Ruby
Microsoft free software
Software using the MIT license
2009 software |
https://en.wikipedia.org/wiki/Newton%E2%80%93Okounkov%20body | In algebraic geometry, a Newton–Okounkov body, also called an Okounkov body, is a convex body in Euclidean space associated to a divisor (or more generally a linear system) on a variety. The convex geometry of a Newton–Okounkov body encodes (asymptotic) information about the geometry of the variety and the divisor. It is a large generalization of the notion of the Newton polytope of a projective toric variety.
It was introduced (in passing) by Andrei Okounkov in his papers in the late 1990s and early 2000s. Okounkov's construction relies on an earlier result of Askold Khovanskii on semigroups of lattice points. Later, Okounkov's construction was generalized and systematically developed in the papers of Robert Lazarsfeld and Mircea Mustață as well as Kiumars Kaveh and Khovanskii.
Beside Newton polytopes of toric varieties, several polytopes appearing in representation theory (such as the Gelfand–Zetlin polytopes and the string polytopes of Peter Littelmann and Arkady Berenstein–Andrei Zelevinsky) can be realized as special cases of Newton–Okounkov bodies.
References
External links
Oberwolfach workshop "Okounkov bodies and applications"
BIRS workshop "Positivity of linear series and vector bundles"
BIRS workshop "Convex bodies and representation theory"
Oberwolfach workshop "New developments in Newton–Okounkov bodies"
Algebraic geometry
Multi-dimensional geometry |
https://en.wikipedia.org/wiki/Enumerator%20%28computer%20science%29 | An enumerator is a Turing machine with an attached printer. The Turing machine can use that printer as an output device to print strings. Every time the Turing machine wants to add a string to the list, it sends the string to the printer. The enumerator is a Turing machine variant and is equivalent in power to the standard Turing machine.
Formal definition
An enumerator can be defined as a 2-tape Turing machine (a multitape Turing machine with k = 2). Initially, the enumerator receives no input, and all the tapes are blank (i.e., filled with blank symbols). A newly defined delimiter symbol marks the end of each element of the enumerated language. The second tape can be regarded as the printer; strings on it are separated by the delimiter. The language enumerated by an enumerator E, denoted L(E), is defined as the set of strings on the second tape (the printer).
Equivalence of Enumerator and Turing Machines
A language over a finite alphabet is Turing Recognizable if and only if it can be enumerated by an enumerator. This shows Turing recognizable languages are also recursively enumerable.
Proof
A Turing Recognizable language can be Enumerated by an Enumerator
Consider a Turing machine M and let L be the language accepted by it. Since the set of all possible strings over the input alphabet, i.e. the Kleene closure Σ*, is a countable set, we can enumerate the strings in it as s1, s2, s3, etc. Then the enumerator E enumerating the language L will follow these steps:
1 for i = 1,2,3,...
2 Run M on each of the input strings s1, s2, ..., si for i steps
3 If any string is accepted, then print it.
Now the question is whether every string in the language will be printed by the enumerator we constructed. For any string sj in the language, the TM M will run a finite number of steps (say k) to accept it. Then sj will be printed in the i-th pass of the enumerator, where i = max(j, k). Thus the enumerator will print every string M recognizes, but a single string may be printed several times.
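The dovetailing loop above can be sketched in ordinary code, with a bounded stand-in playing the role of the recognizer (the predicate below is invented for illustration and simply pretends that accepting a string takes as many steps as its length):

```python
from itertools import count, islice, product

def all_strings(alphabet="ab"):
    """Enumerate the Kleene closure in length-then-lexicographic order."""
    yield ""
    for length in count(1):
        for tup in product(alphabet, repeat=length):
            yield "".join(tup)

def accepts_within(s, steps):
    """Stand-in recognizer for L = strings starting with 'a',
    pretending it needs len(s) simulation steps to accept."""
    return s.startswith("a") and steps >= len(s)

def enumerate_language(rounds=6):
    """Pass i runs the recognizer on the first i strings for i steps.
    (A real enumerator may print duplicates; we dedupe for readability.)"""
    printed = []
    for i in range(1, rounds + 1):
        for s in islice(all_strings(), i):
            if accepts_within(s, i) and s not in printed:
                printed.append(s)
    return printed

print(enumerate_language())  # ['a', 'aa', 'ab']
```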
An Enumerable Language is Turing Recognizable
It's very easy to construct a Turing Machine |
https://en.wikipedia.org/wiki/Validation%20authority | In public key infrastructure, a validation authority (VA) is an entity that provides a service used to verify the validity or revocation status of a digital certificate per the mechanisms described in the X.509 standard and (page 69).
The dominant method used for this purpose is to host a certificate revocation list (CRL) for download via the HTTP or LDAP protocols. To reduce the amount of network traffic required for certificate validation, the OCSP protocol may be used instead.
While a validation authority is capable of responding to a network-based request for a CRL, it lacks the ability to issue or revoke certificates. It must be continuously updated with current CRL information from a certificate authority which issued the certificates contained within the CRL.
While this is a potentially labor-intensive process, the use of a dedicated validation authority allows for dynamic validation of certificates issued by an offline root certificate authority. While the root CA itself will be unavailable to network traffic, certificates issued by it can always be verified via the validation authority and the protocols mentioned above.
The ongoing administrative overhead of maintaining the CRLs hosted by the validation authority is typically minimal, as it is uncommon for root CAs to issue (or revoke) large numbers of certificates.
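A toy model of the core lookup a validation authority performs is a revocation check by serial number against the CRL most recently pushed by the CA (heavily simplified; real deployments serve signed X.509 CRLs over HTTP/LDAP or answer OCSP queries):

```python
from datetime import datetime, timezone

class ValidationAuthority:
    """Toy VA: cannot issue or revoke certificates, only answers
    revocation-status queries from CRL data supplied by the CA."""
    def __init__(self):
        self.revoked = {}          # serial number -> revocation time
        self.crl_updated = None

    def load_crl(self, revoked_serials):
        """Called whenever the issuing CA publishes a fresh CRL."""
        self.revoked = dict(revoked_serials)
        self.crl_updated = datetime.now(timezone.utc)

    def status(self, serial):
        return "revoked" if serial in self.revoked else "good"

va = ValidationAuthority()
va.load_crl({0x1001: "2024-01-01T00:00:00Z"})   # hypothetical serial
print(va.status(0x1001))  # revoked
print(va.status(0x2002))  # good
```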
References
Certificate revocation
Public-key cryptography
Key management
Public key infrastructure
Transport Layer Security |
https://en.wikipedia.org/wiki/Reticulate%20evolution | Reticulate evolution, or network evolution, is the origination of a lineage through the partial merging of two ancestor lineages, leading to relationships better described by a phylogenetic network than by a bifurcating tree. Reticulate patterns can be found in the phylogenetic reconstructions of biodiversity lineages obtained by comparing the characteristics of organisms. Reticulation processes can potentially be convergent and divergent at the same time. Reticulate evolution indicates the lack of independence between two evolutionary lineages. Reticulation affects the survival, fitness and speciation rates of species.
Reticulate evolution can happen between lineages separated only for a short time, for example through hybrid speciation in a species complex. Nevertheless, it also takes place over larger evolutionary distances, as exemplified by the presence of organelles of bacterial origin in eukaryotic cells.
Reticulation occurs at various levels: at a chromosomal level, meiotic recombination causes evolution to be reticulate; at a species level, reticulation arises through hybrid speciation and horizontal gene transfer; and at a population level, sexual recombination causes reticulation.
The adjective reticulate stems from the Latin words reticulatus, "having a net-like pattern" from reticulum, "little net."
Underlying mechanisms and processes
Since the nineteenth century, scientists from different disciplines have studied how reticulate evolution occurs. Researchers have increasingly succeeded in identifying these mechanisms and processes. It has been found to be driven by symbiosis, symbiogenesis (endosymbiosis), lateral gene transfer, hybridization and infectious heredity.
Symbiosis
Symbiosis is a close and long-term biological interaction between two different biological organisms. Often, both of the organisms involved develop new features upon the interaction with the other organism. This may lead to the development of new, distinct organisms. The alterati |
https://en.wikipedia.org/wiki/Gitter | Gitter is an open-source instant messaging and chat room system for developers and users of GitLab and GitHub repositories. Gitter is provided as software-as-a-service, with a free option providing all basic features and the ability to create a single private chat room, and paid subscription options for individuals and organisations, which allows them to create arbitrary numbers of private chat rooms.
Individual chat rooms can be created for individual git repositories on GitHub. Chatroom privacy follows the privacy settings of the associated GitHub repository: thus, a chatroom for a private (i.e. members-only) GitHub repository is also private to those with access to the repository. A graphical badge linking to the chat room can then be placed in the git repository's README file, bringing it to the attention of all users and developers of the project. Users can chat in the chat rooms, or access private chat rooms for repositories they have access to, by logging into Gitter via GitHub.
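Such a badge is a one-line Markdown snippet in the repository README; the owner/repo path below is a placeholder, and the exact image URL Gitter generated may have differed:

```markdown
[![Gitter](https://badges.gitter.im/owner/repo.svg)](https://gitter.im/owner/repo)
```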
Gitter is similar to Slack. Like Slack, it automatically logs all messages in the cloud.
In late 2020, New Vector Limited acquired Gitter from GitLab, and announced Gitter's features would eventually be moved to New Vector's flagship product, Element, thereby replacing Gitter entirely. On February 13, 2023, Gitter migrated their service to a custom-branded Matrix instance that uses Element for its web interface.
Features prior to Migration to Matrix
Gitter supports:
Notifications, which are batched up on mobile devices to avoid annoyance
Inline media files
Viewing and subscribing to ("starring") multiple chat rooms in one web browser tab
Linking to individual files in the linked git repository
Linking to GitHub issues (by typing # and then the issue number) in the linked git repository, with hovercards showing the details of the issue
GitHub-flavored Markdown in chat messages
Online status for users
User hovercards, based on their GitHub profiles and statistics (number of |
https://en.wikipedia.org/wiki/Butler%20oscillator | The Butler oscillator is a crystal-controlled oscillator that uses the crystal near its series resonance point. It is used where a simple low-cost circuit is needed that can oscillate at high frequencies (>50 MHz) by using overtones of a crystal, while also giving low phase noise.
It was described by Butler in 1946 as the earthed grid oscillator, a derivative of the Hartley oscillator. It is also known as the bridged-T oscillator or the grounded-base oscillator.
Circuit operation
The classic Butler oscillator circuit is a two-stage circuit with two non-inverting stages, a grounded base stage and an emitter follower. The crystal is inserted in series in the overall feedback path.
The more common modern form of the circuit uses just the emitter follower stage. The circuit may be analysed by considering it as an equivalent AC circuit with three parts. The emitter follower forms an amplifier with no phase shift. The crystal and its loading capacitor then produce a phase lag network, followed by the LC network of the resonant tank circuit. This then produces a phase lead, which overall meets the Barkhausen criterion for self-oscillation.
The Butler circuit is a free-running or tuned oscillator. If the crystal is replaced temporarily with a low value resistor, the circuit will still oscillate at approximately the design frequency of the tank circuit. This allows the circuit to be set up and adjusted initially without the crystal, and also encourages the selection of the correct crystal harmonic. To avoid the circuit oscillating at the strong resonance of the crystal's fundamental, a small inductor may be placed in parallel with the crystal.
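Setting the tank near the wanted overtone is a matter of the usual resonance formula f = 1/(2π√(LC)); the component values below are illustrative only, chosen to land near a third-overtone crystal around 50 MHz:

```python
import math

def tank_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 220e-9   # 220 nH (illustrative)
C = 47e-12   # 47 pF (illustrative)
print(f"{tank_frequency(L, C) / 1e6:.1f} MHz")  # 49.5 MHz
```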
Both the better-known Pierce and Colpitts oscillator circuits may be considered as derivatives of the Butler.
References
Further reading
External links
Two-transistor Butler
Electronic oscillators |
https://en.wikipedia.org/wiki/VeraCrypt | VeraCrypt is a free and open-source utility for on-the-fly encryption (OTFE). The software can create a virtual encrypted disk that works just like a regular disk but within a file. It can also encrypt a partition or (in Windows) the entire storage device with pre-boot authentication.
VeraCrypt is a fork of the discontinued TrueCrypt project. It was initially released on 22 June 2013. Many security improvements have been implemented and concerns within the TrueCrypt code audits have been addressed. VeraCrypt includes optimizations to the original cryptographic hash functions and ciphers, which boost performance on modern CPUs.
Encryption scheme
VeraCrypt employs AES, Serpent, Twofish, Camellia, and Kuznyechik as ciphers. Version 1.19 stopped using the Magma cipher in response to a security audit. For additional security, ten different combinations of cascaded algorithms are available:
AES–Twofish
AES–Twofish–Serpent
Camellia–Kuznyechik
Camellia–Serpent
Kuznyechik–AES
Kuznyechik–Serpent–Camellia
Kuznyechik–Twofish
Serpent–AES
Serpent–Twofish–AES
Twofish–Serpent
The cryptographic hash functions available for use in VeraCrypt are RIPEMD-160, SHA-256, SHA-512, Streebog and Whirlpool.
VeraCrypt's block cipher mode of operation is XTS. It generates the header key and the secondary header key (XTS mode) using PBKDF2 with a 512-bit salt. By default they go through 200,000 to 655,331 iterations, depending on the underlying hash function used. The user can customize it to start as low as 2,048.
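The derivation step described above can be illustrated with Python's standard library (a sketch only, not VeraCrypt's actual C++ implementation; the password is a made-up example):

```python
import hashlib
import os

password = b"example passphrase"   # hypothetical user password
salt = os.urandom(64)              # 512-bit salt, as described above

# PBKDF2 with HMAC-SHA-512 and 200,000 iterations (the low end of the
# default range quoted above), producing a 64-byte key.
key = hashlib.pbkdf2_hmac("sha512", password, salt, 200_000, dklen=64)
print(len(key))  # 64
```

The iteration count is what makes brute-force guessing expensive: each password candidate forces the attacker through the same 200,000 HMAC rounds.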
Security improvements
The VeraCrypt development team considered the TrueCrypt storage format too vulnerable to a National Security Agency (NSA) attack, so it created a new format incompatible with that of TrueCrypt. VeraCrypt versions prior to 1.26.5 are capable of opening and converting volumes in the TrueCrypt format; since version 1.26.5, TrueCrypt compatibility has been dropped.
An independent security audit of TrueCrypt released 29 September 2015 found TrueCrypt include |
https://en.wikipedia.org/wiki/Visual%20computing | Visual computing is a generic term for all computer science disciplines dealing with images and 3D models, such as computer graphics, image processing, visualization, computer vision, virtual and augmented reality and video processing. Visual computing also includes aspects of pattern recognition, human computer interaction, machine learning and digital libraries. The core challenges are the acquisition, processing, analysis and rendering of visual information (mainly images and video). Application areas include industrial quality control, medical image processing and visualization, surveying, robotics, multimedia systems, virtual heritage, special effects in movies and television, and computer games.
History and overview
Visual computing is a fairly new term, which got its current meaning around 2005, when the International Symposium on Visual Computing first convened. Areas of computer technology concerning images, such as image formats, filtering methods, color models, and image metrics, have in common many mathematical methods and algorithms. When computer scientists working in computer science disciplines that involve images, such as computer graphics, image processing, and computer vision, noticed that their methods and applications increasingly overlapped, they began using the term "visual computing" to describe these fields collectively. At the same time, the programming methods for graphics hardware, the techniques for handling huge data sets, the textbooks and conferences, and the scientific communities and corporate working groups of these disciplines intermixed more and more.
Furthermore, applications increasingly needed techniques from more than one of these fields concurrently. Generating very detailed models of complex objects requires image recognition, 3D sensors, and reconstruction algorithms, while displaying these models believably requires realistic rendering techniques with complex lighting simulation. Real-time graphics is the basis for usable virtual and
https://en.wikipedia.org/wiki/Summa%20de%20arithmetica | (Summary of arithmetic, geometry, proportions and proportionality) is a book on mathematics written by Luca Pacioli and first published in 1494. It contains a comprehensive summary of Renaissance mathematics, including practical arithmetic, basic algebra, basic geometry and accounting, written for use as a textbook and reference work.
Written in vernacular Italian, the Summa is the first printed work on algebra, and it contains the first published description of the double-entry bookkeeping system. It set a new standard for writing and argumentation about algebra, and its impact upon the subsequent development and standardization of professional accounting methods was so great that Pacioli is sometimes referred to as the "father of accounting".
Contents
The Summa de arithmetica as originally printed consists of ten chapters on a series of mathematical topics, collectively covering essentially all of Renaissance mathematics. The first seven chapters form a summary of arithmetic in 222 pages. The eighth chapter explains contemporary algebra in 78 pages. The ninth chapter discusses various topics relevant to business and trade, including barter, bills of exchange, weights and measures and bookkeeping, in 150 pages. The tenth and final chapter describes practical geometry (including basic trigonometry) in 151 pages.
The book's mathematical content draws heavily on the traditions of the abacus schools of contemporary northern Italy, where the children of merchants and the middle class studied arithmetic on the model established by Fibonacci's Liber Abaci. The emphasis of this tradition was on facility with computation, using the Hindu–Arabic numeral system, developed through exposure to numerous example problems and case studies drawn principally from business and trade. Pacioli's work likewise teaches through examples, but it also develops arguments for the validity of its solutions through reference to general principles, axioms and logical proof. In this way the Su |
https://en.wikipedia.org/wiki/DASH-IF | The DASH Industry Forum (DASH-IF) creates interoperability guidelines for the usage of the MPEG-DASH streaming standard, promotes and catalyzes the adoption of MPEG-DASH, and helps transition it from a specification into a real business. It consists of the major streaming and media companies, such as Microsoft, Netflix, Google, Ericsson, Samsung and Adobe.
Interoperability
One of the main goals of the DASH Industry Forum is to attain interoperability of DASH-enabled products on the market.
The DASH Industry Forum has produced several documents as implementation guidelines:
DASH-AVC/264 Interoperability Points V3.0: DRM updates, Improved Live, Ad Insertion, Events, H.265/HEVC support, Trick Modes, CEA608/708
DASH-AVC/264 Interoperability Points V2.0: HD and Multi-Channel Audio Extensions
DASH-AVC/264 Interoperability Points V1.0
Open-Source Reference Player
The DASH Industry Forum provides the open-source MPEG-DASH player dash.js.
See also
H.264/MPEG-4 AVC
References
MPEG
Standards organizations in the United States |
https://en.wikipedia.org/wiki/Jonty%20Hurwitz | Jonty Hurwitz (born 2 September 1969 in Johannesburg) is a British South African artist, engineer and entrepreneur. Hurwitz creates scientifically inspired artworks and anamorphic sculptures. He is recognised for the smallest human form ever created using nanotechnology.
Early life
Jonty Hurwitz was born in Johannesburg, South Africa, to Selwin, a hotelier and entrepreneur and Marcia Berger, a drama lecturer and teacher. Jonty and his sister (Tamara) spent their early life living in small hotels in rural towns in South Africa while his father built up his business.
Jonty studied Electrical Engineering at the University of the Witwatersrand in Johannesburg from 1989 to 1993. His major was Signal Processing. He then joined the University of Cape Town Remote Sensing Group as a full-time researcher under Professor Michael Inggs, publishing a paper on radar pattern recognition.
Following his research post, Hurwitz traveled for a long period of time in India studying Yoga and wood carving.
Career in art
Hurwitz's work focuses on the aesthetics of art in the context of human perception. His early body of sculpture was discovered by Estelle Lovatt during 2011 in an article for Art of England Magazine: "Thinning the divide gap between art and science, Hurwitz is cognisant of the two being holistically co-joined in the same way as we are naturally, comfortably split between our spiritual and operational self".
Hurwitz began producing sculptures in 2008. In 2009, his first sculpture 'Yoda and the Anamorph' won the People's Choice Bentliff Prize of the Maidstone Museum and Art Gallery. Later in 2009 he won the Noble Sculpture Prize and was commissioned to install his first large scale work (a nude study of his father called 'Dietro di me') in the Italian village Colletta di Castelbianco. In 2010, he was selected as a finalist for the 4th International Arte Laguna Prize in Venice, Italy.
In January 2013, Hurwitz's anamorphic work was described by the art blogger Christoph |
https://en.wikipedia.org/wiki/Exposed%20point | In mathematics, an exposed point of a convex set C is a point x ∈ C at which some continuous linear functional attains its strict maximum over C. Such a functional is then said to expose x. There can be many exposing functionals for x. The set of exposed points of C is usually denoted exp(C).
A stronger notion is that of a strongly exposed point of C, which is an exposed point x ∈ C such that some exposing functional f of x attains its strong maximum over C at x, i.e. for each sequence (x_n) in C we have the following implication: f(x_n) → max f(C) implies x_n → x. The set of all strongly exposed points of C is usually denoted str exp(C).
There are two weaker notions, that of an extreme point and that of a support point of C.
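Written out (a standard formulation, assuming C is a subset of a normed space X with continuous dual X*; notation mine):

```latex
\exp(C) \;=\; \bigl\{\, x \in C \;:\; \exists\, f \in X^{*}
\ \text{with}\ f(x) > f(y) \ \text{for all}\ y \in C \setminus \{x\} \,\bigr\}
```

For the closed unit disk in the plane every boundary point is exposed. By contrast, for a "stadium" (the convex hull of two disjoint disks of equal radius) the four points where the straight edges meet the circular arcs are extreme points but not exposed: the only supporting line at such a point contains an entire straight edge, so no linear functional attains its maximum there strictly.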
Mathematical analysis
Convex geometry
Functional analysis |
https://en.wikipedia.org/wiki/Endodermic%20evagination | Endodermic evagination relates to the endoderm, the inner germ layer of cells of the very early embryo, from which the lining of the digestive tract, of other internal organs, and of certain glands is formed. Evagination is the extension of a layer of body tissue to form a pouch, or the turning inside out (protrusion) of some body part or organ from its basic position. For example, the paranasal sinuses are believed to be formed in the fetus by 'ballooning' of the developing nasal canal, and the prostate and Skene's gland are formed out of evaginations of the urethra.
See also
List of human cell types derived from the germ layers
References
Embryology
Developmental biology |
https://en.wikipedia.org/wiki/Tape%20op | A tape operator or tape op, also known as a second engineer, is a person who performs menial operations in a recording studio in a similar manner to a tea boy or gopher. They may act as an apprentice or an assistant to a recording engineer and duties can consist of threading audio tape, setting up microphones and stands, configuring MIDI equipment and cables, and sometimes pressing the relevant transport controls on the recorder or digital audio workstation. Abbey Road Studios always assigned at least one tape op to each recording session.
History and prospects
The role of tape op was a useful entry into a professional recording environment, and several went on to successful careers as engineers and record producers. The music and film soundtrack producer John Kurlander started his production career at Abbey Road Studios in 1967 as a tea boy, progressing to principal tape op (or assistant engineer) by 1969. He was partially responsible for including "Her Majesty" on the Beatles' Abbey Road after carefully splicing a discarded take of the song onto the master tape. Alan Parsons also began his production career as an Abbey Road tape op, which led to his assisting with the mixing of Pink Floyd's Atom Heart Mother and engineering on The Dark Side of the Moon.
Due to the increasing ability to produce professional-quality recordings in home studios, the experience that could once be gained by working as a tape op is being lost, leaving newcomers to music engineering and production with a steeper learning curve.
References
Citations
Sources
Audio engineering |
https://en.wikipedia.org/wiki/Fairbanks%20Exploration%20Company%20Dredge%20No.%202 | The Fairbanks Exploration Company Dredge No. 2 is a historic gold mining dredge in a remote area of Fairbanks North Star Borough, Alaska, northeast of the city of Fairbanks. It is currently located on the north bank of Fish Creek, shortly northeast of the mouth of Slippery Creek. Its main structure is a compartmented steel hull, long, wide, and high, with a 1-2 story superstructure above made of steel and wood framing sheathed in corrugated metal. It has three gantries, and a digging ladder long at its bow that weighs . All of its original operating equipment was reported to be in place in 1999. The dredge was built in 1927 by the Bethlehem Steel Company, and assembled for use in Alaska in 1928. It was operated by the Fairbanks Exploration Company in the Goldstream Valley from 1928 to 1949, and on Fairbanks Creek and lower Fish Creek from 1950 to 1961.
See also
National Register of Historic Places listings in Fairbanks North Star Borough, Alaska
References
1928 establishments in Alaska
Buildings and structures completed in 1928
Gold mining in Alaska
Industrial buildings and structures on the National Register of Historic Places in Alaska
Industrial equipment on the National Register of Historic Places
Gold dredges
Buildings and structures on the National Register of Historic Places in Fairbanks North Star Borough, Alaska
Buildings and structures completed in 1927 |
https://en.wikipedia.org/wiki/Onavo | Onavo, Inc. was an Israeli mobile web analytics company owned by Meta Platforms. The company primarily performed its activities via consumer mobile apps, including the virtual private network (VPN) service Onavo Protect, which analysed web traffic sent through the VPN to provide statistics on the usage of other apps.
Guy Rosen and Roi Tiger founded Onavo in 2010. In October 2013, Onavo was acquired by Facebook, which used Onavo's analytics platform to monitor competitors. This influenced Facebook to make various business decisions, including its 2014 acquisition of WhatsApp.
Since the acquisition, Onavo was frequently classified as being spyware, as the VPN was used to monetize application usage data collected within an allegedly privacy-focused environment. In August 2018, Facebook pulled Onavo Protect from the iOS App Store due to violations of Apple's policy forbidding apps from collecting data on the usage of other apps. In February 2019, in response to criticism over a Facebook market research program employing similar techniques (including, in particular, being targeted towards teens), Onavo announced that it would close the Android version of Protect as well.
History
Onavo was founded in 2010 by Roi Tiger and Guy Rosen.
Onavo had two rounds of funding: the first was a Series A investment for $3 million from Magma Venture Partners and Sequoia Capital in May 2011. The second was a Series B investment of $13 million from Magma Ventures, Sequoia Capital, and Horizons Ventures. Onavo's sale to Facebook is one of the top exits for Magma Venture Partners and other Israeli venture capital firms.
On October 13, 2013, Facebook bought Onavo for approximately $120 million.
The Australian Competition & Consumer Commission (ACCC) initiated legal proceedings against Facebook on December 16, 2020, alleging that Facebook engaged in "false, misleading or deceptive conduct" by using personal data collected from Onavo "for its own commercial purposes" contrary to Onavo's |
https://en.wikipedia.org/wiki/Three-stage%20quantum%20cryptography%20protocol | The three-stage quantum cryptography protocol, also known as Kak's three-stage protocol is a method of data encryption that uses random polarization rotations by both Alice and Bob, the two authenticated parties, that was proposed by Subhash Kak. In principle, this method can be used for continuous, unbreakable encryption of data if single photons are used. It is different from methods of QKD (quantum key distribution) for it can be used for direct encryption of data, although it could also be used for exchanging keys.
The basic idea behind this method is that of sending secrets (or valuables) through an unreliable courier by having both Alice and Bob place their locks on the box containing the secret, which is also called double-lock cryptography. Alice locks the box with the secret in it and it is transported to Bob, who sends it back after affixing his own lock. Alice now removes her lock (after checking that it has not been tampered with) and sends it back to Bob who, similarly, unlocks his lock and obtains the secret. In the braided form, only one pass suffices, but here Alice and Bob share an initial key.
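The double-lock exchange works because rotations about a common axis commute. A toy classical sketch (plain 2-D rotations standing in for the quantum polarization rotations; the angles are arbitrary made-up values):

```python
import math

def rotate(theta, v):
    """Rotate the 2-D vector v by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

secret = (1.0, 0.0)             # the state Alice wants to convey
a, b = 0.7, 1.9                 # Alice's and Bob's private rotation angles

stage1 = rotate(a, secret)      # Alice applies her lock, sends to Bob
stage2 = rotate(b, stage1)      # Bob adds his lock, sends back
stage3 = rotate(-a, stage2)     # Alice removes her lock, sends to Bob
recovered = rotate(-b, stage3)  # Bob removes his lock and reads the state
```

Because rotate(-b, rotate(-a, rotate(b, rotate(a, v)))) equals v for any pair of angles, Bob recovers the secret even though neither party ever reveals its angle.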
This protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms.
The basic polarization rotation scheme has been implemented in hardware by Pramode Verma in the quantum optics laboratory of the University of Oklahoma.
In this method more than one photon can be used in the exchange between Alice and Bob and, therefore, it opens up the possibility of multi-photon quantum cryptography.
This works so long as the number of photons siphoned off by the eavesdropper is not sufficient to determine the polarization angles. A version that can deal with the man-in-the-middle attack has also been advanced.
Parakh analyzed the three-stage protocol under rotational quantum errors and proposed a modification that would correct these errors. On |
https://en.wikipedia.org/wiki/Fairbanks%20Exploration%20Company%20Gold%20Dredge%20No.%205 | The Fairbanks Exploration Company Gold Dredge No. 5 was a historic gold mining dredge in a remote area of Fairbanks North Star Borough, Alaska, north of the city of Fairbanks. It was last located on Upper Dome Creek, shortly northeast of the mouth of Seattle Creek, about north of Fairbanks, prior to its being scrapped c. 2012. The dredge was manufactured by the Bethlehem Steel Company in 1928, shipped in pieces to Alaska, and assembled by the Fairbanks Exploration Company on Cleary Creek, where it was used until 1942. It thereafter served on Eldorado Creek (1947–55) and Dome Creek (1955-59) before it was abandoned.
The dredge was listed on the National Register of Historic Places in 2004.
See also
National Register of Historic Places listings in Fairbanks North Star Borough, Alaska
References
1929 establishments in Alaska
Buildings and structures completed in 1929
Demolished buildings and structures in Alaska
Gold mining in the United States
Industrial buildings and structures on the National Register of Historic Places in Alaska
Industrial equipment on the National Register of Historic Places
Gold dredges
Buildings and structures on the National Register of Historic Places in Fairbanks North Star Borough, Alaska |
https://en.wikipedia.org/wiki/Buchstab%20function | The Buchstab function (or Buchstab's function) is the unique continuous function ω : [1, ∞) → (0, ∞) defined by the delay differential equation

ω(u) = 1/u for 1 ≤ u ≤ 2,
d/du (u ω(u)) = ω(u − 1) for u ≥ 2.

In the second equation, the derivative at u = 2 should be taken as u approaches 2 from the right. It is named after Alexander Buchstab, who wrote about it in 1937.
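The delayed equation can be integrated numerically once ω is known on [1, 2]. A minimal Euler-method sketch (step size and grid choices are mine):

```python
# Integrate omega'(u) = (omega(u - 1) - omega(u)) / u for u >= 2,
# starting from omega(u) = 1/u on [1, 2]; this is an equivalent form of
# the defining equation (u * omega(u))' = omega(u - 1).
H = 0.001                        # step size; 1/H must be an integer
STEPS_PER_UNIT = round(1 / H)

def buchstab(u_max):
    n = round((u_max - 1) / H)
    # Tabulate the initial segment omega(u) = 1/u on [1, 2].
    w = [1.0 / (1.0 + i * H) for i in range(STEPS_PER_UNIT + 1)]
    for i in range(STEPS_PER_UNIT, n):
        u = 1.0 + i * H
        # Euler step; w[i - STEPS_PER_UNIT] is omega(u - 1).
        w.append(w[i] + H * (w[i - STEPS_PER_UNIT] - w[i]) / u)
    return w[n]

print(buchstab(4.0))
```

With these settings buchstab(4.0) already agrees with e^−γ ≈ 0.5615 to about two decimal places, illustrating the rapid convergence discussed below.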
Asymptotics
The Buchstab function approaches e^−γ rapidly as u → ∞, where γ is the Euler–Mascheroni constant. In fact,

|ω(u) − e^−γ| ≤ ρ(u − 1)/u for u ≥ 1,

where ρ is the Dickman function. Also, ω(u) − e^−γ oscillates in a regular way, alternating between extrema and zeroes; the extrema alternate between positive maxima and negative minima. The interval between consecutive extrema approaches 1 as u approaches infinity, as does the interval between consecutive zeroes.
Applications
The Buchstab function is used to count rough numbers.
If Φ(x, y) is the number of positive integers less than or equal to x with no prime factor less than y, then for any fixed u > 1,

Φ(x, x^(1/u)) ~ ω(u) x / log x^(1/u) = u ω(u) x / log x as x → ∞.
Notes
References
"Buchstab Function", Wolfram MathWorld. Accessed on line Feb. 11, 2015.
§IV.32, "On Φ(x,y) and Buchstab's function", Handbook of Number Theory I, József Sándor, Dragoslav S. Mitrinović, and Borislav Crstici, Springer, 2006, .
"A differential delay equation arising from the sieve of Eratosthenes", A. Y. Cheer and D. A. Goldston, Mathematics of Computation 55 (1990), pp. 129–141.
"An improvement of Selberg’s sieve method", W. B. Jurkat and H.-E. Richert, Acta Arithmetica 11 (1965), pp. 217–240.
Analytic number theory
Special functions |
https://en.wikipedia.org/wiki/Set%20intersection%20oracle | A set intersection oracle (SIO) is a data structure which represents a collection of sets and can quickly answer queries about whether the set intersection of two given sets is non-empty.
The input to the problem is n finite sets. The sum of the sizes of all sets is N (which also means that there are at most N distinct elements). The SIO should quickly answer any query of the form:
"Does the set Si intersect the set Sk"?
Minimum memory, maximum query time
Without any pre-processing, a query can be answered by inserting the elements of Si into a temporary hash table and then checking for each element of Sk whether it is in the hash table. The query time is O(|Si| + |Sk|).
Maximum memory, minimum query time
Alternatively, we can pre-process the sets and create an n-by-n table where the intersection information is already entered. Then the query time is O(1), but the memory required is O(n^2).
A compromise
Define a "large set" as a set with at least elements. Obviously there are at most such sets. Create a table of intersection data between every large set to every other large set. This requires memory. Additionally, for each large set, keep a hash table of all its elements. This requires additional memory.
Given two sets, there are three possible cases:
Both sets are large. Then just read the answer to the intersection query from the table, in time O(1).
Both sets are small. Then insert the elements of one of them into a hash table and check the elements of the other one; because the sets are small, the required time is O(√N).
One set is large and one set is small. Loop over all elements in the small set and check them against the hash table of the large set. The required time is again O(√N).
In general, if we define a "large set" as a set with at least x elements, then the number of large sets is at most N/x, so the memory required is O((N/x)^2 + N), and the query time is O(x).
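The three query cases above can be sketched compactly in Python (names are mine; Python sets play the role of the hash tables):

```python
import math

class SetIntersectionOracle:
    def __init__(self, sets):
        self.sets = [frozenset(s) for s in sets]
        total = sum(len(s) for s in self.sets)      # N
        threshold = max(1, math.isqrt(total))       # "large" cutoff ~ sqrt(N)
        self.large = [i for i, s in enumerate(self.sets)
                      if len(s) >= threshold]
        # Precompute the answer for every pair of large sets.
        self.table = {
            (i, j): not self.sets[i].isdisjoint(self.sets[j])
            for i in self.large for j in self.large
        }

    def intersects(self, i, j):
        """Return True iff sets i and j share an element."""
        if (i, j) in self.table:          # both large: O(1) lookup
            return self.table[(i, j)]
        a, b = self.sets[i], self.sets[j]
        if len(a) > len(b):               # scan the smaller set
            a, b = b, a
        return any(x in b for x in a)     # O(sqrt(N)) per query

oracle = SetIntersectionOracle([{1, 2, 3}, {3, 4}, {5}])
print(oracle.intersects(0, 1), oracle.intersects(0, 2))  # True False
```

The constructor cost is dominated by the pairwise checks between large sets; every other query falls back to scanning the smaller of the two sets.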
Reduction to approximate distance oracle
The SIO problem can be reduced to the approximate distance oracle (DO) problem, in the f |
https://en.wikipedia.org/wiki/Toroidal%20embedding | In algebraic geometry, a toroidal embedding is an open embedding of algebraic varieties that locally looks like the embedding of the open torus into a toric variety. The notion was introduced by Mumford to prove the existence of semistable reductions of algebraic varieties over one-dimensional bases.
Definition
Let X be a normal variety over an algebraically closed field k and let U ⊆ X be a smooth open subset. Then U ⊆ X is called a toroidal embedding if for every closed point x of X, there is an isomorphism of complete local k-algebras

Ô_{X,x} ≅ Ô_{X_σ,t}

for some affine toric variety X_σ with a torus T and a point t such that the above isomorphism takes the ideal of X − U to that of X_σ − T.
Let X be a normal variety over a field k. An open embedding U ⊆ X is said to be a toroidal embedding if the base change U_k̄ ⊆ X_k̄ to an algebraic closure k̄ of k is a toroidal embedding.
Examples
Tits' buildings
See also
tropical compactification
References
Abramovich, D., Denef, J. & Karu, K.: Weak toroidalization over non-closed fields. manuscripta math. (2013) 142: 257.
External links
Toroidal embedding
Algebraic geometry |
https://en.wikipedia.org/wiki/K%C5%99ov%C3%A1k%27s%20projection | Křovák's projection or simply Krovak is a conic projection invented in 1922 by Czech geodesist Josef Křovák.
The projection is based on Bessel ellipsoid and it was calculated as the optimal projection of Czechoslovakia (in its interwar extent including Carpathian Ruthenia). It is still in use as national grids for civil state maps of the Czech Republic and Slovakia. The corresponding coordinate system is abbreviated S-JTSK (for Systém Jednotné trigonometrické sítě katastrální, "the Unified cadastral trigonometric network System"), code 5514.
The projection has been deliberately designed so that, for any point located in former Czechoslovakia, the X coordinate is always bigger in absolute value than the Y coordinate. This makes it easy to distinguish the two coordinates even when they are transformed into another quadrant.
References
External links
. Research Institute of Geodesy, Topography and Cartography.
Map projections |
https://en.wikipedia.org/wiki/Nasopharyngeal%20swab | A nasopharyngeal swab is a device used for collecting a sample of nasal secretions from the back of the nose and throat. The sample is then analyzed for the presence of organisms or other clinical markers for disease. This diagnostic method is commonly used in suspected cases of whooping cough, diphtheria, influenza, and various types of diseases caused by the coronavirus family of viruses, including SARS, MERS, and COVID-19.
Procedure
To collect the sample, the swab is inserted in the nostril and gently moved forward into the nasopharynx, a region of the pharynx that covers the roof of the mouth. The swab is then rotated for a specified period of time to collect secretions, then the swab is removed and placed into a sterile viral transport media, which preserves the sample for the subsequent analysis.
Material composition of swab
Similar in concept to the cotton swab, a swab used for nasopharyngeal collection constitutes a narrow stick made of a short plastic rod that is covered, at one tip, with adsorbing material such as cotton, polyester, or flocked nylon. (Some swab handles have been made of nichrome or stainless steel wire.) The swab material used for a particular diagnostic application may vary based on the test type. Some research has shown that flocked swabs collect a larger volume of the sample material, when compared to fiber swabs.
Related methods
Slightly different but related is nasopharyngeal aspiration. Rather than depending on a physical swab to catch material from the nasopharynx, aspiration uses a catheter that is attached to a syringe. As with the swab method, the catheter is placed into the nostril and gently advanced to the nasopharynx, where approximately one to three milliliters of saline are introduced, followed by immediate re-aspiration of the saline—along with cells and secretions—back into the syringe. This aspiration method is often used when 1. the patient is an infant or elderly and 2. when the method is indicated as effective for |
https://en.wikipedia.org/wiki/Numerical%20modeling%20in%20echocardiography | Numerical manipulation of Doppler parameters obtained during routine echocardiography has been extensively utilized to non-invasively estimate intra-cardiac pressures, in many cases removing the need for invasive cardiac catheterization.
Echocardiography uses ultrasound to create real-time anatomic images of the heart and its structures. Doppler echocardiography utilizes the Doppler principle to estimate intracardiac velocities. Via the modified Bernoulli equation, velocity is routinely converted to pressure gradient for use in clinical cardiology decision making.
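The modified Bernoulli equation mentioned above reduces, for clinical use, to ΔP ≈ 4v², with the velocity v in m/s and the resulting pressure gradient ΔP in mmHg; a minimal helper (function name is mine):

```python
def pressure_gradient_mmhg(velocity_ms: float) -> float:
    """Modified Bernoulli equation: delta-P ~ 4 * v^2 (v in m/s, result in mmHg)."""
    return 4.0 * velocity_ms ** 2

# A regurgitant jet measured at 3 m/s implies a gradient of about 36 mmHg.
print(pressure_gradient_mmhg(3.0))  # 36.0
```

The simplification drops the proximal-velocity and viscous terms of the full Bernoulli equation, which is acceptable when the distal jet velocity dominates.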
Mathematical modeling of intracardiac velocity parameters has been investigated broadly, both for the pulmonary circulation and for aortic Doppler in aortic stenosis. Diastolic dysfunction algorithms use complex combinations of these numeric models to estimate intra-cardiac filling pressures. Shunt defects have been studied using the Relative Atrial Index.
See also
Medical ultrasonography section: Doppler sonography
Echocardiography
American Society of Echocardiography
Christian Doppler
References
External links
Echocardiography Textbook by Bonita Anderson
Echocardiography (Ultrasound of the heart)
Doppler Examination - Introduction
The Doppler Principle and the Study of Cardiac Flows
Medical ultrasonography
Medical equipment
Cardiac procedures
Multidimensional signal processing
Cardiology |
https://en.wikipedia.org/wiki/Betibeglogene%20autotemcel | Betibeglogene autotemcel, sold under the brand name Zynteglo, is a medication for the treatment for beta thalassemia. It was developed by Bluebird Bio and was given breakthrough therapy designation by the U.S. Food and Drug Administration in February 2015.
The most common adverse reactions include reduced platelet and other blood cell levels, as well as mucositis, febrile neutropenia, vomiting, pyrexia (fever), alopecia (hair loss), epistaxis (nosebleed), abdominal pain, musculoskeletal pain, cough, headache, diarrhea, rash, constipation, nausea, decreased appetite, pigmentation disorder and pruritus (itch).
It was approved for medical use in the European Union in May 2019, and in the United States in August 2022.
Medical uses
Betibeglogene autotemcel is indicated for the treatment of people twelve years and older with transfusion-dependent beta thalassemia (TDT) who do not have a β0/β0 genotype, for whom hematopoietic stem cell (HSC) transplantation is appropriate but a human leukocyte antigen (HLA)-matched related HSC donor is not available.
Betibeglogene autotemcel is made individually for each recipient out of stem cells collected from their blood, and must only be given to the recipient for whom it is made. It is given as an autologous intravenous infusion and the dose depends on the recipient's body weight.
Before betibeglogene autotemcel is given, the recipient receives conditioning chemotherapy to clear their bone marrow of cells (myeloablation).
To make betibeglogene autotemcel, the stem cells taken from the recipient's blood are modified by a virus that carries working copies of the beta globin gene into the cells. When these modified cells are given back to the recipient, they are transported in the bloodstream to the bone marrow where they start to make healthy red blood cells that produce beta globin. The effects of betibeglogene autotemcel are expected to last for the recipient's lifetime.
Mechanism of action
Beta thalassemia is caused by mutat |
https://en.wikipedia.org/wiki/Domain%20adaptation | Domain adaptation is a field associated with machine learning and transfer learning. This scenario arises when we aim at learning a model from a source data distribution and applying that model on a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists in adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial for learning unrelated sources.
Note that, when more than one source distribution is available the problem is referred to as multi-source domain adaptation.
Overview
Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different from the source feature space or spaces.
Domain shift
A domain shift, or distributional shift, is a change in the data distribution between an algorithm's training dataset, and a dataset it encounters when deployed. These domain shifts are common in practical applications of artificial intelligence. Conventional machine-learning algorithms often adapt poorly to domain shifts. The modern machine-learning community has many different strategies to attempt to gain better domain adaptation.
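One classical response to covariate shift (a common form of domain shift) is importance weighting: reweight source-domain samples by the density ratio p_target(x)/p_source(x) so that statistics computed on source data mimic the target distribution. A self-contained toy sketch, with 1-D Gaussians chosen so the ratio is known in closed form (in practice the ratio must itself be estimated):

```python
import math
import random

random.seed(0)

# Source domain: x ~ N(0, 1); target domain: x ~ N(1, 1).
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def density_ratio(x):
    """p_target(x) / p_source(x) for the two unit-variance Gaussians above."""
    return math.exp(x - 0.5)

# Self-normalized importance-weighted estimate of E_target[x],
# computed using only source-domain samples.
weights = [density_ratio(x) for x in xs]
estimate = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
print(estimate)  # close to the target mean, 1.0
```

The estimate recovers the target-domain mean of 1.0 using only draws centered at 0; the same reweighting idea underlies covariate-shift corrections for training losses.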
Examples
An algorithm trained on newswires might have to adapt to a new dataset of biomedical documents.
A spam filter, trained on a certain group of email users during training, must adapt to a new target user when deployed.
Applying AI diagnostic algorithms, trained on labeled data associated with previous diseases, to new unlabeled data associated with the COVID-19 pandemic.
A sudden soci |
https://en.wikipedia.org/wiki/Viral%20dynamics | Viral dynamics is a field of applied mathematics concerned with describing the progression of viral infections within a host organism. It employs a family of mathematical models that describe changes over time in the populations of cells targeted by the virus and the viral load. These equations may also track competition between different viral strains and the influence of immune responses. The original viral dynamics models were inspired by compartmental epidemic models (e.g. the SI model), with which they continue to share many common mathematical features, such as the concept of the basic reproductive ratio (R0). The major distinction between these fields is in the scale at which the models operate: while epidemiological models track the spread of infection between individuals within a population (i.e. "between host"), viral dynamics models track the spread of infection between cells within an individual (i.e. "within host"). Analyses employing viral dynamic models have been used extensively to study HIV, hepatitis B virus, and hepatitis C virus, among other infections.
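A minimal concrete instance is the standard target-cell-limited model, tracking target cells T, infected cells I, and virions V. The sketch below integrates it with forward Euler; every parameter value is an illustrative assumption, not a fitted estimate for any pathogen.

```python
# Target-cell-limited viral dynamics model, integrated with forward Euler.
#   dT/dt = -beta*T*V
#   dI/dt =  beta*T*V - delta*I
#   dV/dt =  p*I - c*V
# All parameter values are illustrative assumptions only.

def simulate(T0=1e7, I0=0.0, V0=10.0,
             beta=1e-7,   # infection rate constant (/virion/day)
             delta=1.0,   # death rate of infected cells (/day)
             p=100.0,     # virion production rate (/infected cell/day)
             c=5.0,       # virion clearance rate (/day)
             dt=1e-3, days=10.0):
    T, I, V = T0, I0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
    return T, I, V

# Within-host basic reproductive ratio for this model:
# R0 = beta * p * T0 / (delta * c) = 20 here, so the infection takes off
# and depletes the target-cell population before burning out.
T_end, I_end, V_end = simulate()
```

With R0 well above 1, the simulated infection grows exponentially, peaks, and then declines as target cells are exhausted, mirroring the epidemic/final-size behavior the between-host analogy suggests.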
References
External links
Viral Dynamics Mathematical Modeling Training, Center for AIDS Research, University of Washington
Evolutionary dynamics
Evolutionary biology
Virology
Immunology
Applied mathematics
Mathematical modeling |
https://en.wikipedia.org/wiki/IEEEXtreme | IEEEXtreme (often abbreviated as Xtreme) is an annual hackathon and competitive programming challenge in which teams of IEEE Student members, often supported by an IEEE Student Branch and proctored by an IEEE member, compete in a 24-hour time span against each other to solve a set of programming problems. The competition is underwritten and coordinated by IEEE's Membership and Geographic Activities department, and is often supported by partnering sponsors, like IEEE Computer Society.
History
IEEEXtreme was created in 2006 by Marko Delimar and Ricardo Varela who, at the time, were with the IEEE Student Activities Committee. The first instance of IEEEXtreme was held in 2006 with a global participation of 44 teams and 150 contestants. The numbers more than tripled the second time it was held, in 2008, with 130 teams and 500 participants. The 2015 iteration of IEEEXtreme drew over 2,300 registered teams, over 1,900 participating teams, 5,500+ student competitors, 600+ proctors, and 100+ volunteers around the world.
Competition rules
Teams of up to three student IEEE members receive sets of programming problems over 24 hours, starting at 0:00 UTC on the competition date. All teams receive the same problems to solve and are expected to solve the problems without direct outside consultation. Teams don’t need to tackle every problem, but the more they solve, the more points they score. Students submit their solutions using an online tool, which has been HackerRank in recent years. Points are awarded based on how the problem was solved, the time it took, and its difficulty. Higher-grade IEEE members serve as judges and proctors for the competition.
The competition is free, but IEEE Student Membership is required to participate. Students, both undergraduate and graduate, are welcome to register as IEEE Student Members and as IEEEXtreme participants on the same day. The cost of IEEE Student Membership varies from country to country.
Yearly results
IEEEXt |
https://en.wikipedia.org/wiki/Naspers | Naspers Limited is a South African multinational internet, technology and multimedia holding company headquartered in Cape Town, with interests in online retail, publishing and venture capital investment. Naspers' principal shareholder is its Dutch listed investment subsidiary Prosus, which owns approximately 49% of its parent as part of a cross ownership structure.
Founded in 1915 by attorney W. A. Hofmeyr, Naspers was the largest publishing company in South Africa throughout the 20th century with interests across newspapers, magazines and books. In the 1980s, the company began to diversify, launching a subscription television service and investing in markets outside of South Africa for the first time.
In 2001, Naspers made an early investment in Chinese technology firm Tencent and became increasingly focused on the global consumer internet sector. In 2019, Naspers listed its global internet investment business unit Prosus (including a 31% stake in Tencent) on Euronext Amsterdam.
Naspers currently owns a 56.92% stake in Prosus and wholly owns Media24 (Africa's largest publishing company), Takealot.com (South Africa's largest online retailer) and Naspers Foundry, a South African focused venture capital fund.
History
Founding and Afrikaner nationalism
In 1914, a group of prominent Cape Afrikaners meeting in Stellenbosch decided to form a publishing company that would support Afrikaner nationalism in the Union of South Africa. This meeting led W. A. Hofmeyr, a well-known Cape lawyer and National Party organizer, to found De Nasionale Pers Beperkt (National Press Ltd) in 1915 as a publisher of newspapers and magazines. The firm's name was commonly shortened to Naspers, a contraction eventually used even by the company itself.
Naspers launched with the support of Jannie Marais, a prominent Stellenbosch farmer, Jan Christiaan Smuts, Louis Botha, and National Party founding president J.B.M. Hertzog. Naspers was strongly |
https://en.wikipedia.org/wiki/Facebook%20like%20button | The like button on the social networking website Facebook was first enabled on February 9, 2009. The like button enables users to easily interact with status updates, comments, photos and videos, links shared by friends, and advertisements. Once clicked by a user, the designated content appears in the News Feeds of that user's friends, and the button also displays the number of other users who have liked the content, including a full or partial list of those users. The like button was extended to comments in June 2010. After extensive testing and years of questions from the public about whether it had an intention to incorporate a "Dislike" button, Facebook officially rolled out "Reactions" to users worldwide on February 24, 2016, letting users long-press on the like button for an option to use one of five pre-defined emotions, including "Love", "Haha", "Wow", "Sad", or "Angry". Reactions were also extended to comments in May 2017, and had a major graphical overhaul in April 2019.
The like button is one of Facebook's social plug-ins, in which the button can be placed on third-party websites. Its use centers around a form of an advertising network, in which it gathers information about which users visit what websites. This form of functionality, a sort of web beacon, has been significantly criticized for privacy. Privacy activist organizations have urged Facebook to stop its data collection through the plug-in, and governments have launched investigations into the activity for possible privacy law violations. Facebook has stated that it anonymizes the information after three months, and that the data collected is not shared or sold to third parties. Additionally, the like button's potential use as a measurement of popularity has caused some companies to sell likes through fake Facebook accounts, which in turn have sparked complaints from some companies advertising on Facebook that have received an abundance of fake likes that have distorted proper user metrics. Fac |
https://en.wikipedia.org/wiki/System%20basis%20chip | A system basis chip (SBC) is an integrated circuit that includes various functions of automotive electronic control units (ECU) on a single die.
It typically combines standard digital functionality, such as communication bus interfaces, with analog or power functionality, denoted as smart power. SBCs are therefore based on special smart-power technology platforms.
The embedded functions may include:
Voltage regulators
Supervision functions
Reset generators
Watchdog functions
Bus interfaces, like Local Interconnect Network (LIN), CAN bus or others
Wake-up logic
Power switches
The complexity of SBCs ranges from rather simple hardwired devices to configurable, state-machine-controlled devices (e.g., configured through a serial peripheral interface).
Various major automotive semiconductor manufacturers offer SBCs.
References
Integrated circuits |
https://en.wikipedia.org/wiki/Relative%20Atrial%20Index | The Relative Atrial Index (RAI) is a numeric parameter used to assess for cardiac shunt defects. It is calculated from the standard transthoracic Doppler echocardiogram measurements of the right atrial area divided by the left atrial area. RAI = right atrial area / left atrial area. These measurements are made from the apical four chamber view.
Large validation studies in patients with known atrial septal defects showed that the RAI was greater than 1.0 in the majority of cases. This is in contrast to matched and population controls, where the RAI was significantly below 1.0. This simple numeric parameter has found a role in the diagnostic work-up for possible shunt defects on standard transthoracic echocardiograms. The RAI rapidly normalizes within 24 hours of percutaneous closure of atrial septal defects. Secondary validation studies have confirmed the data in discrete patient populations. The parameter has also been shown to predict long-term survival after acute pulmonary embolism.
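The calculation itself is a single ratio. A tiny sketch (illustrative only, not clinical software; the 1.0 threshold follows the validation data described above, and the example areas are hypothetical):

```python
# Relative Atrial Index from the two atrial areas measured in the apical
# four-chamber view. Both areas must be in the same units (e.g. cm^2).

def relative_atrial_index(right_atrial_area, left_atrial_area):
    return right_atrial_area / left_atrial_area

def suggests_shunt(rai, threshold=1.0):
    """RAI > 1.0 was typical of atrial septal defects in the validation
    studies, while controls fell significantly below 1.0."""
    return rai > threshold

rai = relative_atrial_index(18.0, 15.0)  # hypothetical measurements -> 1.2
```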
The RAI was conceptualized in response to observed clinical inadequacies of standard transthoracic echocardiography in some shunt conditions. The same author had developed several Doppler echocardiographic numeric parameters over the last two decades to assess cardiac diastolic function.
See also
Medical ultrasonography section: Doppler sonography
Echocardiography
American Society of Echocardiography
Christian Doppler
References
External links
Echocardiography Textbook by Bonita Anderson
Echocardiography (Ultrasound of the heart)
Medical ultrasonography
Medical equipment
Cardiac procedures
Multidimensional signal processing
Cardiology |
https://en.wikipedia.org/wiki/Functor%20represented%20by%20a%20scheme | In algebraic geometry, a functor represented by a scheme X is a set-valued contravariant functor on the category of schemes whose value at each scheme S is (up to natural bijection) the set of all morphisms from S to X. The scheme X is then said to represent the functor F and to classify the geometric objects over S given by F.
The best known example is the Hilbert scheme of a scheme X (over some fixed base scheme), which, when it exists, represents a functor sending a scheme S to the flat families of closed subschemes of S × X.
In some applications, it may not be possible to find a scheme that represents a given functor. This led to the notion of a stack, which is not quite a functor but can still be treated as if it were a geometric space. (A Hilbert scheme is a scheme, but not a stack because, very roughly speaking, deformation theory is simpler for closed schemes.)
Some moduli problems are solved by giving formal solutions (as opposed to polynomial algebraic solutions) and in that case, the resulting functor is represented by a formal scheme. Such a formal scheme is then said to be algebraizable if there is another scheme that can represent the same functor, up to some isomorphisms.
Motivation
The notion is an analog of a classifying space in algebraic topology. In algebraic topology, the basic fact is that each principal G-bundle over a space S is (up to natural isomorphism) the pullback of a universal bundle along some map from S to the classifying space BG. In other words, to give a principal G-bundle over a space S is the same as to give a map (called a classifying map) from S to the classifying space of G.
A similar phenomenon in algebraic geometry is given by a linear system: to give a morphism from a projective variety to a projective space is (up to base loci) to give a linear system on the projective variety.
Yoneda's lemma says that a scheme X determines and is determined by its points.
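Stated in symbols (a sketch of the standard formulation; here $h_X$ denotes the functor of points and $F$ an arbitrary set-valued contravariant functor):

```latex
h_X(S) = \operatorname{Mor}(S, X), \qquad
\operatorname{Nat}(h_X, F) \cong F(X).
```

In particular, a natural isomorphism $F \cong h_X$ is precisely what it means for the scheme $X$ to represent $F$.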
Functor of points
Let X be a scheme. Its functor of points is the fu |
https://en.wikipedia.org/wiki/.NET | .NET (pronounced as "dot net"; formerly named .NET Core) is a free and open-source, managed computer software framework for Windows, Linux, and macOS operating systems. It is a cross-platform successor to .NET Framework. The project is mainly developed by Microsoft employees by way of the .NET Foundation and is released under an MIT License.
History
On November 12, 2014, Microsoft announced .NET Core, in an effort to include cross-platform support for .NET, including on Linux and macOS, source for the .NET Core CoreCLR implementation, source for the "entire [...] library stack" for .NET Core, and the adoption of a conventional ("bazaar"-like) open-source development model under the stewardship of the .NET Foundation. Miguel de Icaza describes .NET Core as a "redesigned version of .NET that is based on the simplified version of the class libraries", and Microsoft's Immo Landwerth explained that .NET Core would be "the foundation of all future .NET platforms". At the time of the announcement, the initial release of the .NET Core project had been seeded with a subset of the libraries' source code and coincided with the relicensing of Microsoft's existing .NET reference source away from the restrictions of the Ms-RSL. Landwerth acknowledged the disadvantages of the formerly selected shared license, explaining that it made codename Rotor "a non-starter" as a community-developed open source project because it did not meet the criteria of an Open Source Initiative (OSI) approved license.
.NET Core 1.0 was released on June 27, 2016, along with Microsoft Visual Studio 2015 Update 3, which enables .NET Core development. .NET Core 1.0.4 and .NET Core 1.1.1 were released along with .NET Core Tools 1.0 and Visual Studio 2017 on March 7, 2017.
.NET Core 2.0 was released on August 14, 2017, along with Visual Studio 2017 15.3, ASP.NET Core 2.0, and Entity Framework Core 2.0. .NET Core 2.1 was released on May 30, 2018, and .NET Core 2.2 on December 4, 2018.
.NET Core 3 was released on September 23 |
https://en.wikipedia.org/wiki/International%20Conference%20on%20Mechanical%20Industrial%20%26%20Energy%20Engineering | International Conference on Mechanical Industrial & Energy Engineering (ICMIEE) is held in Bangladesh every two years, starting from 2010. The objective of ICMIEE is to present the latest research results of scientists and researchers. The conference provides opportunities for delegates from different areas to exchange new ideas and application experiences face-to-face and to establish research relationships.
Technological development is enhanced through continuous research. The Faculty of Mechanical Engineering at Khulna University of Engineering & Technology organizes the International Conference on Mechanical, Industrial and Energy Engineering (ICMIEE). It offers opportunities for both researchers and industrial communities to meet, discuss and share their research outcomes, helping to build a bridge between researchers and industry experts. The conference aims to provide a common platform for participants from throughout the world to exchange views and share ideas in the broad field of mechanical, industrial and energy engineering.
Areas
Aerodynamics
Applied Mechanics
Automation, Mechatronics and Robotics
Automobile Engineering
CAD/CAM/CIM
CFD
Computational Techniques
Composite and Smart Materials
Energy Engineering and Management
Fatigue and Fracture
Fluid Mechanics and Machinery
Fuels and Combustion
Heat and Mass Transfer
IC Engines
Industrial Engineering
Instrumentation and Control
Leather Engineering
Manufacturing and Production Process
MEMS and Nanotechnology
Oil and Gas Exploration
Operations Research and Management
Pollution and Environmental Engineering
Quality Management, Quality Engineering
Refrigeration and Air-conditioning
Renewable Energy
Safety and Maintenance
Supply Chain Management
Textile Engineering
Tribology
References
Academic conferences
Control engineering |
https://en.wikipedia.org/wiki/The%20AWK%20Programming%20Language | The AWK Programming Language is a well-known 1988 book written by Alfred V. Aho, Brian W. Kernighan, and Peter J. Weinberger and published by Addison-Wesley, often referred to as the gray book. The book describes the AWK programming language and is the de facto standard for the language, written by its inventors. W. Richard Stevens, author of several UNIX books including Advanced Programming in the Unix Environment, cites the book as one of his favorite technical books. The book has been translated into several languages and is cited by many technical papers in ACM journals.
According to the book's frontmatter the book was typeset "using an Autologic APS-5 phototypesetter and a DEC VAX 8550 running the 9th Edition of the UNIX operating system".
In September 2023, the second edition was published by Addison-Wesley, along with an accompanying website.
References
External links
The Awk Programming Language book review - IEEE
Computer books
Computer programming books
Addison-Wesley books |
https://en.wikipedia.org/wiki/Information%20security%20awareness | Information security awareness is an evolving part of information security that focuses on raising consciousness regarding potential risks of the rapidly evolving forms of information and the rapidly evolving threats to that information which target human behavior. As threats have matured and information has increased in value, attackers have increased their capabilities, expanded to broader intentions, developed more attack methods and methodologies, and are acting on more diverse motives. As information security controls and processes have matured, attacks have matured to circumvent them. Attackers have targeted and successfully exploited individuals' behavior to breach corporate networks and critical infrastructure systems. Targeted individuals who are unaware of information and threats may unknowingly circumvent traditional security controls and processes and enable a breach of the organization. In response, information security awareness is maturing. Cybersecurity as a business problem has dominated the agenda of most chief information officers (CIOs), exposing a need for countermeasures to today's cyber threat landscape. The goal of information security awareness is to make everyone aware that they are susceptible to the opportunities and challenges in today's threat landscape, to change human risk behaviors, and to create or enhance a secure organizational culture.
Background
Information security awareness is one of several key principles of information security. Information security awareness seeks to understand and enhance human risk behaviors, beliefs and perceptions about information and information security while also understanding and enhancing organizational culture as a countermeasure to rapidly evolving threats. For example, the OECD's Guidelines for the Security of Information Systems and Networks include nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security |
https://en.wikipedia.org/wiki/Microsecond%20Bus | The Microsecond Bus, μSB or MSB, is an asymmetric serial communication interface specification for short-distance communication between a master and multiple slaves. The MSB was developed primarily for motor-management applications, in order to replace the classical pulse-width modulation (PWM) control of power loads with a fast serial interface offering low pin count and low latency for the downstream to the smart power device. The downstream from master to slave is synchronous with low latency, while the upstream, mainly used to send diagnostic information from the slave back to the master, is asynchronous and can be slower.
The name of the bus originates from the time of one microsecond to transmit 16 bits in one of the first implementations. The bus was developed by Infineon and published in SAE International in 2005. In the meantime the bus has been adopted by several other automotive semiconductor providers.
Interface
The MSB downlink specifies:
FCL : Serial Clock (output from master).
FDA : Master Output, Slave Input (output from master).
SSY : Slave Select (active low, output from master).
In the case of LVDS signaling, FDA and FCL are split into four differential lines.
Comparison with SPI
The clocking scheme of the fast synchronous downstream is closely related to that of the SPI bus. There are implementations for single-ended TTL-level signaling as well as LVDS signaling. The (optional) upstream is asynchronous to the downstream clock and can be slowed down by a variable clock division of the downstream clock.
See also
List of network buses
References
Computer buses
Serial buses |
https://en.wikipedia.org/wiki/Feedback%20suppressor | A feedback suppressor is an audio signal processing device which is used in the signal path in a live sound reinforcement system to prevent or suppress audio feedback.
Digital feedback reduction is the application of digital techniques to sound reinforcement in order to reduce audio feedback and increase headroom.
Operation
Feedback suppressors use three main methods to control feedback:
frequency shifting,
adaptive filtering, and
automatic notch filtering.
Frequency shifting is the oldest feedback suppression technique dating back to the 1960s. This technique works by introducing a varying shift in frequency to the system response. This is typically implemented using a frequency mixer. Only modest improvement of gain before feedback is achieved and the technique creates noticeable pitch distortion in music program.
The adaptive filter approach works by modeling the transfer function of the sound reinforcement system and subtracting the reinforced sound from the inputs to the system, in the same way that an echo canceller removes echoes from a communications system.
Parametric equalization and notch filters are commonly used by sound engineers to manually control feedback. A feedback suppressor using the automatic notch technique listens for the onset of feedback and automatically inserts a notch filter into the signal path at the frequency of the detected feedback. Feedback suppressors use several techniques for detecting feedback from non-invasive harmonic analysis of a potential feedback signal to more invasive adaptive filtering and speculative placement of notch filters. The automatic notch technique is the most popular method and has the advantage that the sound is not colored until the system is at risk of feedback.
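The notch-insertion step can be sketched as a single biquad notch placed at an assumed detected feedback frequency (the detection logic itself is omitted). The coefficients follow the widely used RBJ audio-EQ cookbook formulas; the sample rate, tone frequency, and Q value are illustrative.

```python
# Automatic notch suppression, sketched as one biquad notch on a simulated
# feedback "howl" (a pure 1 kHz tone at 48 kHz). Pure Python, no DSP library.
import math

def notch_coeffs(f0, fs, Q=30.0):
    """Biquad notch centered at f0 Hz for sample rate fs (RBJ cookbook)."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * Q)
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a0 = 1.0 + alpha
    # Normalize so the leading feedback coefficient is 1.
    return [bi / a0 for bi in b], [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]

def biquad(samples, b, a):
    """Direct-form-I filtering of a sample list."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs, f0 = 48000, 1000.0
tone = [math.sin(2.0 * math.pi * f0 * n / fs) for n in range(fs)]  # 1 s of howl
b, a = notch_coeffs(f0, fs)
suppressed = biquad(tone, b, a)  # tone at f0 decays to near zero
```

A narrow Q keeps the coloration confined to the feedback frequency, which is why the automatic notch method is described above as leaving the rest of the program material largely untouched.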
References
Sound recording technology
Audio engineering |
https://en.wikipedia.org/wiki/Failure%20modes%2C%20effects%2C%20and%20diagnostic%20analysis | Failure modes, effects, and diagnostic analysis (FMEDA) is a systematic analysis technique to obtain subsystem / product level failure rates, failure modes and diagnostic capability. The FMEDA technique considers:
All components of a design,
The functionality of each component,
The failure modes of each component,
The effect of each component failure mode on the product functionality,
The ability of any automatic diagnostics to detect the failure,
The design strength (de-rating, safety factors) and
The operational profile (environmental stress factors).
Given a component database calibrated with reasonably accurate field failure data, the method can predict product-level failure rate and failure mode data for a given application. The predictions have been shown to be more accurate than field warranty return analysis or even typical field failure analysis, given that these methods depend on reports that typically do not contain sufficiently detailed failure information.
The abstract of an FMEDA report typically mentions the Safe Failure Fraction (the rate of failures that are either safe or dangerous-but-detected, divided by the total failure rate) and the Diagnostic Coverage (the rate of detected dangerous failures divided by the rate of all dangerous failures). Each term is defined equivalently in both standards, IEC 61508 and ISO 13849.
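Using the customary failure-rate categories (safe, dangerous-detected, dangerous-undetected), both quantities follow directly from the definitions above; the numeric values below are illustrative only, not from any real FMEDA.

```python
# Safe Failure Fraction (SFF) and Diagnostic Coverage (DC) from FMEDA
# failure-rate categories. Rates are per unit time (e.g. FIT); the example
# values are illustrative assumptions.

def sff_and_dc(lam_safe, lam_dd, lam_du):
    """SFF = (lam_safe + lam_dd) / (lam_safe + lam_dd + lam_du)
       DC  = lam_dd / (lam_dd + lam_du)"""
    total = lam_safe + lam_dd + lam_du
    return (lam_safe + lam_dd) / total, lam_dd / (lam_dd + lam_du)

sff, dc = sff_and_dc(lam_safe=500.0, lam_dd=450.0, lam_du=50.0)
# Here SFF = 950/1000 = 0.95 and DC = 450/500 = 0.90.
```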
The name was given by Dr. William M. Goble in 1994 to the technique that had been in development since 1988 by Dr. Goble and other engineers now at exida.
Antecedents
A failure modes and effects analysis, FMEA, is a structured qualitative analysis of a system, subsystem, process, design or function to identify potential failure modes, their causes and their effects on (system) operation. The concept and practice of performing an FMEA has been around in some form since the 1960s. The practice was first formalized in the 1970s with the development of US MIL-STD-1629/1629A.
In early practice its use was limited to select applications and industrie |
https://en.wikipedia.org/wiki/Nim%20%28programming%20language%29 | Nim is a general-purpose, multi-paradigm, statically typed, compiled high-level systems programming language, designed and developed by a team around Andreas Rumpf. Nim is designed to be "efficient, expressive, and elegant", supporting metaprogramming, functional, message passing, procedural, and object-oriented programming styles by providing several features such as compile time code generation, algebraic data types, a foreign function interface (FFI) with C, C++, Objective-C, and JavaScript, and supporting compiling to those same languages as intermediate representations.
Description
Nim is statically typed. It supports compile-time metaprogramming features such as syntactic macros and term rewriting macros. Term rewriting macros enable library implementations of common data structures, such as bignums and matrices, to be implemented efficiently and with syntactic integration, as if they were built-in language facilities. Iterators are supported and can be used as first class entities, as can functions, allowing for the use of functional programming methods. Object-oriented programming is supported by inheritance and multiple dispatch. Functions can be generic and overloaded, and generics are further enhanced by Nim's support for type classes. Operator overloading is also supported. Nim includes multiple tunable memory management strategies, including tracing garbage collection, reference counting, and fully manual systems, with the default being deterministic reference counting with optimizations via move semantics and cycle collection via trial deletion.
Nim compiles to C, C++, JavaScript, Objective-C, and LLVM.
History
According to its creator, Nim was conceived to combine the best parts of Ada's type system, Python's flexibility, and Lisp's powerful macro system.
Nim's initial development was started in 2005 by Andreas Rumpf. It was originally named Nimrod when the project was made public in 2008.
The first version of the Nim compiler was written in P |
https://en.wikipedia.org/wiki/Scale%20%28chemistry%29 | The scale of a chemical process refers to the rough ranges in mass or volume of a chemical reaction or process that define the appropriate category of chemical apparatus and equipment required to accomplish it, and the concepts, priorities, and economies that operate at each. While the specific terms used—and limits of mass or volume that apply to them—can vary between specific industries, the concepts are used broadly across industry and the fundamental scientific fields that support them. Use of the term "scale" is unrelated to the concept of weighing; rather it is related to cognate terms in mathematics (e.g., geometric scaling, the linear transformation that enlarges or shrinks objects, and scale parameters in probability theory), and in applied areas (e.g., in the scaling of images in architecture, engineering, cartography, etc.).
Practically speaking, the scale of chemical operations also relates to the training required to carry them out, and can be broken out roughly as follows:
procedures performed at the laboratory scale, which involve the sorts of procedures used in academic teaching and research laboratories in the training of chemists and in discovery chemistry venues in industry,
operations at the pilot plant scale, e.g., carried out by process chemists, which, though at the lowest extreme of manufacturing operations, are on the order of 200- to 1000-fold larger than laboratory scale, and used to generate information on the behavior of each chemical step in the process that might be useful to design the actual chemical production facility;
intermediate bench scale sets of procedures, 10- to 200-fold larger than the discovery laboratory, sometimes inserted between the preceding two;
operations at demonstration scale and full-scale production, whose sizes are determined by the nature of the chemical product, available chemical technologies, the market for the product, and manufacturing requirements, where the aim of the first of these is literally |
https://en.wikipedia.org/wiki/Nucleic%20acid%20hybridization | In molecular biology, hybridization (or hybridisation) is a phenomenon in which single-stranded deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) molecules anneal to complementary DNA or RNA. Though a double-stranded DNA sequence is generally stable under physiological conditions, changing these conditions in the laboratory (generally by raising the surrounding temperature) will cause the molecules to separate into single strands. These strands are complementary to each other but may also be complementary to other sequences present in their surroundings. Lowering the surrounding temperature allows the single-stranded molecules to anneal or “hybridize” to each other.
DNA replication and transcription of DNA into RNA both rely upon nucleotide hybridization, as do molecular biology techniques including Southern blots and Northern blots, the polymerase chain reaction (PCR), and most approaches to DNA sequencing.
Applications
Hybridization is a basic property of nucleotide sequences and is taken advantage of in numerous molecular biology techniques. Overall, genetic relatedness of two species can be determined by hybridizing segments of their DNA (DNA-DNA hybridization). Due to sequence similarity between closely related organisms, higher temperatures are required to melt such DNA hybrids when compared to more distantly related organisms. A variety of different methods use hybridization to pinpoint the origin of a DNA sample, including the polymerase chain reaction (PCR). In another technique, short DNA sequences are hybridized to cellular mRNAs to identify expressed genes. Pharmaceutical drug companies are exploring the use of antisense RNA to bind to undesired mRNA, preventing the ribosome from translating the mRNA into protein.
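Because hybrid stability depends on base composition (G·C pairs are more thermally stable than A·T pairs), simple rules of thumb estimate the melting temperature of short hybrids. The sketch below implements the Wallace "2 + 4" rule, a rough approximation valid only for short oligonucleotides (roughly 14 to 20 nt); it ignores salt concentration and nearest-neighbor effects, and the example sequence is hypothetical.

```python
# Wallace-rule melting-temperature estimate for a short oligonucleotide:
# Tm (in degrees C) ~= 2 * (#A + #T) + 4 * (#G + #C).
# A ballpark for short primers only; real designs use nearest-neighbor models.

def wallace_tm(seq):
    s = seq.upper()
    at = s.count("A") + s.count("T")
    gc = s.count("G") + s.count("C")
    return 2 * at + 4 * gc

tm = wallace_tm("AGCTTGAGCTCAAGGTAC")  # hypothetical 18-mer primer
```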
DNA-DNA hybridization
Fluorescence In Situ Hybridization
Fluorescence in situ hybridization (FISH) is a laboratory method used to detect and locate a DNA sequence, often on a particular chromosome.
In the 1960s, researchers Joseph Ga |
https://en.wikipedia.org/wiki/E-LOTOS | In computer science, E-LOTOS (Enhanced LOTOS) is a formal specification language designed between 1993 and 1999 and standardized by the International Organization for Standardization (ISO) in 2001.
E-LOTOS was initially intended to be a revision of the LOTOS language standardized by ISO 8807 in 1989, but the revision turned out to be profound, leading to a new specification language.
The starting point for the revision of LOTOS was the PhD thesis of Ed Brinksma, who had been the Rapporteur at ISO of the LOTOS standard.
In 1993, the initial goals of the definition of E-LOTOS were stated in ISO/IEC JTC1/N2802 announcement.
In 1997, when the language definition reached the maturity level of an ISO Committee Draft, an announcement was posted describing the main features of E-LOTOS.
The following document recalls the milestones of the E-LOTOS definition project.
E-LOTOS has inspired descendant languages, among them LOTOS NT and LNT.
See also
Formal methods
List of ISO standards
Language Of Temporal Ordering Specification
CADP
References
External links
French-Romanian contributions to E-LOTOS
Process calculi
Formal methods
Formal specification languages
Concurrency (computer science)
Concurrency control
Synchronization |
https://en.wikipedia.org/wiki/Art%20Apart%20Fair | Art Apart Fair is Singapore's first hotel-based boutique art fair. Initially called Worlds Apart Fair in January 2013, the success of the fair encouraged a second edition, later renamed as Art Apart Fair as part of a series of "Apart Fairs".
Art Apart coincides with Singapore Art Week, an initiative launched by the National Arts Council, along with the Singapore Tourism Board and Singapore Economic Development Board.
The fair takes place twice a year – in January and July – in Singapore. Art Apart made its international debut in London in October 2014 at the Town Hall Hotel. It was set to stage in New York in October 2016.
Art Apart Fair serves as a platform to provide support towards emerging artists with the potential to become established. The fair gives artists the opportunity to showcase their works and gain recognition among art lovers and collectors. In January 2014, some 33 galleries and 1,500 artists' work from countries such as Taiwan, China, Australia, Kazakhstan, Croatia, Japan, Vietnam, Austria, Cambodia, South Korea, Russia, Spain and Germany, were featured.
Some of the galleries that have exhibited with Art Apart are South Korea's Seoul Arts Centre, Shanghai's Nancy Gallery, Madrid's Jorge & Fernando Alcolea Gallery and Russia's Gallery 11.12.
In comparison to other art fairs, Art Apart focuses on selecting galleries that represent emerging artists with the potential to become established in their national or international markets.
Due to the high costs of exhibiting at art fairs and the presence of many artists struggling to succeed financially, there is an "Adopt An Artist" initiative, where a Patron of the Arts and sponsors help to fund these artists. Galleries that are invited to showcase new works at Art Apart also help support the emerging artists by paying for the exhibiting fee. Works have to be new and not shown at other art fairs before.
Besides providing support for emerging artists, part of the revenue generated from the sal |
https://en.wikipedia.org/wiki/Mathematics%20and%20Plausible%20Reasoning | Mathematics and Plausible Reasoning is a two-volume book by the mathematician George Pólya describing various methods for being a good guesser of new mathematical results. In the Preface to Volume 1 of the book Pólya exhorts all interested students of mathematics thus: "Certainly, let us learn proving, but also let us learn guessing." P. R. Halmos reviewing the book summarised the central thesis of the book thus: ". . . a good guess is as important as a good proof."
Outline
Volume I: Induction and analogy in mathematics
Polya begins Volume I with a discussion on induction, not mathematical induction, but as a way of guessing new results. He shows how the chance observations of a few results of the form 4 = 2 + 2, 6 = 3 + 3, 8 = 3 + 5, 10 = 3 + 7, etc., may prompt a sharp mind to formulate the conjecture that every even number greater than 4 can be represented as the sum of two odd prime numbers. This is the well known Goldbach's conjecture. The first problem in the first chapter is to guess the rule according to which the successive terms of the following sequence are chosen: 11, 31, 41, 61, 71, 101, 131, . . . In the next chapter the techniques of generalization, specialization and analogy are presented as possible strategies for plausible reasoning. In the remaining chapters, these ideas are illustrated by discussing the discovery of several results in various fields of mathematics like number theory, geometry, etc. and also in physical sciences.
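For the sequence problem above, one candidate rule (a reader's guess in the spirit of Pólya's exercise, not an answer stated in this excerpt) is "prime numbers whose decimal representation ends in 1"; a quick check that this guess reproduces the given terms:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Candidate rule: primes whose last decimal digit is 1.
candidates = [n for n in range(1, 140) if n % 10 == 1 and is_prime(n)]
print(candidates)  # [11, 31, 41, 61, 71, 101, 131]
```

The guess matches all seven listed terms, which, in Pólya's terms, makes it a plausible (though not thereby proven) conjecture about the rule.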
Volume II: Patterns of Plausible Inference
This volume attempts to formulate certain patterns of plausible reasoning. The relation of these patterns to the calculus of probability is also investigated, as is their relation to mathematical invention and instruction. The following are some of the patterns of plausible inference discussed by Polya.
Reviews
References
Mathematics books
Reasoning
Inference |
https://en.wikipedia.org/wiki/OpenVX | OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. It is designed by the Khronos Group to facilitate portable, optimized and power-efficient processing of methods for vision algorithms. This is aimed for embedded and real-time programs within computer vision and related scenarios. It uses a connected graph representation of operations.
Overview
OpenVX specifies a higher level of abstraction for programming computer vision use cases than compute frameworks such as OpenCL. This higher level makes programming easier, while the underlying execution can remain efficient across different computing architectures, all behind a consistent and portable vision acceleration API.
OpenVX is based on a connected graph of vision nodes that can execute the preferred chain of operations. It uses an opaque memory model, allowing the implementation to move image data between host (CPU) memory and accelerator memory, such as GPU memory. As a result, an OpenVX implementation can optimize execution through various techniques, such as acceleration on various processing units or dedicated hardware. This architecture lets applications programmed in OpenVX run on different systems with different power and performance characteristics, including battery-sensitive, vision-enabled, wearable displays.
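The graph model can be sketched abstractly (a conceptual Python sketch, not the OpenVX C API): nodes are declared first, and the runtime is then free to schedule, fuse, or offload them when the whole graph is processed:

```python
class Graph:
    """Toy dataflow graph: declare nodes first, execute later,
    mimicking OpenVX's declare-then-process model."""
    def __init__(self):
        self.nodes = []  # (function, input key, output key)

    def add_node(self, fn, src, dst):
        self.nodes.append((fn, src, dst))

    def process(self, data):
        # A real implementation could reorder, fuse, or offload nodes here.
        for fn, src, dst in self.nodes:
            data[dst] = fn(data[src])
        return data

# Chain: blur (3-tap mean) -> threshold, over a 1-D "image".
def blur(img):
    return [sum(img[max(0, i - 1):i + 2]) // len(img[max(0, i - 1):i + 2])
            for i in range(len(img))]

def threshold(img, t=50):
    return [255 if v > t else 0 for v in img]

g = Graph()
g.add_node(blur, "input", "blurred")
g.add_node(threshold, "blurred", "output")
out = g.process({"input": [0, 0, 100, 100, 0, 0]})
print(out["output"])  # [0, 0, 255, 255, 0, 0]
```

Because the intermediate "blurred" buffer is only named, never addressed, an implementation following this pattern could keep it entirely in accelerator memory — the motivation for OpenVX's opaque memory model.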
OpenVX is complementary to the open source vision library OpenCV. In some applications, OpenVX offers better-optimized graph management than OpenCV.
History
OpenVX 1.0 specification was released in October 2014.
OpenVX sample implementation was released in December 2014.
OpenVX 1.1 specification was released on May 2, 2016.
OpenVX 1.2 was released on May 1, 2017.
Updated OpenVX adopters program and OpenVX 1.2 conformance test suite was released on November 21, 2017.
OpenVX 1.2.1 was released on November 27, 2018.
OpenVX 1.3 was released on October 22, 2019.
Implementations, frameworks and libraries
AMD MIVisionX - for AMD's CPUs and GPUs.
Cadence - for Cade |
https://en.wikipedia.org/wiki/Remote%20mobile%20virtualization | Remote mobile virtualization, like its counterpart desktop virtualization, is a technology that separates operating systems and applications from the client devices that access them. However, while desktop virtualization allows users to remotely access Windows desktops and applications, remote mobile virtualization offers remote access to mobile operating systems such as Android.
Remote mobile virtualization encompasses both full operating system virtualization, referred to as virtual mobile infrastructure (VMI), and user and application virtualization, termed mobile app virtualization. Remote mobile virtualization allows a user to remotely control an Android virtual machine (VM) or application. Users can access remotely hosted applications with HTML5-enabled web browsers or thin client applications from a variety of smartphones, tablets and computers, including Apple iOS, Mac OS, Blackberry, Windows Phone, Windows desktop, and Firefox OS devices.
Virtual mobile infrastructure (VMI)
VMI refers to the method of hosting a mobile operating system on a server in a data center or the cloud. Mobile operating system environments are executed remotely and they are rendered via Mobile Optimized Display protocols through the network. Compared to virtual desktop infrastructure (VDI), VMI has to operate in low bandwidth network environments such as cellular networks with fluctuating coverage and metered access. As a result, even if a mobile phone is connected to a high speed 4G/LTE network, users may need to limit overall bandwidth usage to avoid expensive phone bills.
Most common implementations of VMI host multiple mobile OS virtual machines (VMs) on private or public cloud infrastructure and allow users to access them remotely via options such as Miracast™, the ACE Protocol or custom streaming implementations optimized for 3G/4G networks. Some implementations also allow for Multimedia redirection for better audio and video performance. Mobile operating systems hosted |
https://en.wikipedia.org/wiki/UBlock%20Origin | uBlock Origin (; "" ) is a free and open-source browser extension for content filtering, including ad blocking. The extension is available for Chrome, Chromium, Edge, Firefox, Opera, Pale Moon, as well as versions of Safari prior to 13. uBlock Origin has received praise from technology websites and is reported to be much less memory-intensive than other extensions with similar functionality. uBlock Origin's stated purpose is to give users the means to enforce their own (content-filtering) choices.
uBlock Origin is actively developed and maintained by its creator and lead developer Raymond Hill.
History
uBlock
uBlock was initially named "μBlock" but the name was later changed to "uBlock" to avoid confusion as to how the Greek letter μ (Mu/Micro) in "μBlock" should be pronounced. Development started by forking from the codebase of HTTP Switchboard along with another blocking extension called uMatrix, designed for advanced users. uBlock was developed by Raymond Hill to use community-maintained block lists, while adding features and raising the code quality to release standards. First released in June 2014 as a Chrome and Opera extension, by winter 2015, the extension had expanded to other browsers.
The uBlock project's official repository was transferred to Chris Aljoudi by original developer Raymond Hill in April 2015, due to the frustration of dealing with requests. However, Hill immediately self-forked it and continued the effort there. This version was later renamed uBlock Origin, and it has been completely divorced from Aljoudi's uBlock. Aljoudi created ublock.org to host and promote uBlock and to request donations. In response, uBlock's founder Raymond Hill stated that "the donations sought by ublock.org are not benefiting any of those who contributed most to create uBlock Origin." The development of uBlock stopped in August 2015 and it has been sporadically updated since January 2017. In July 2018, ublock.org was acquired by AdBlock, and since February 2019, uBlock
https://en.wikipedia.org/wiki/Bernstein%E2%80%93Kushnirenko%20theorem | The Bernstein–Kushnirenko theorem (or Bernstein–Khovanskii–Kushnirenko (BKK) theorem), proven by David Bernstein and in 1975, is a theorem in algebra. It states that the number of non-zero complex solutions of a system of Laurent polynomial equations is equal to the mixed volume of the Newton polytopes of the polynomials , assuming that all non-zero coefficients of are generic. A more precise statement is as follows:
Statement
Let $A$ be a finite subset of $\mathbb{Z}^n$. Consider the subspace $L_A$ of the Laurent polynomial algebra $\mathbb{C}[x_1^{\pm 1}, \ldots, x_n^{\pm 1}]$ consisting of Laurent polynomials whose exponents are in $A$. That is:
$$L_A = \Big\{ f \;\Big|\; f(x) = \sum_{\alpha \in A} c_\alpha x^\alpha, \; c_\alpha \in \mathbb{C} \Big\},$$
where for each $\alpha = (a_1, \ldots, a_n) \in \mathbb{Z}^n$ we have used the shorthand notation $x^\alpha$ to denote the monomial $x_1^{a_1} \cdots x_n^{a_n}$.
Now take $n$ finite subsets $A_1, \ldots, A_n$ of $\mathbb{Z}^n$, with the corresponding subspaces of Laurent polynomials, $L_{A_1}, \ldots, L_{A_n}$. Consider a generic system of equations from these subspaces, that is:
$$f_1(x) = \cdots = f_n(x) = 0,$$
where each $f_i$ is a generic element in the (finite dimensional vector space) $L_{A_i}$.
The Bernstein–Kushnirenko theorem states that the number of solutions $x \in (\mathbb{C} \setminus \{0\})^n$ of such a system is equal to
$$n! \, V(\Delta_1, \ldots, \Delta_n),$$
where $V$ denotes the Minkowski mixed volume and, for each $i$, $\Delta_i$ is the convex hull of the finite set of points $A_i$. Clearly, $\Delta_i$ is a convex lattice polytope; it can be interpreted as the Newton polytope of a generic element of the subspace $L_{A_i}$.
In particular, if all the sets $A_i$ are the same, $A_1 = \cdots = A_n = A$, then the number of solutions of a generic system of Laurent polynomials from $L_A$ is equal to
$$n! \, \mathrm{vol}(\Delta),$$
where $\Delta$ is the convex hull of $A$ and $\mathrm{vol}$ is the usual $n$-dimensional Euclidean volume. Note that even though the volume of a lattice polytope is not necessarily an integer, it becomes an integer after multiplying by $n!$.
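In the plane the theorem can be checked directly, using the standard two-dimensional identity that twice the mixed volume of polygons P and Q equals area(P + Q) − area(P) − area(Q), where P + Q is the Minkowski sum (an assumed normalization for this sketch):

```python
from itertools import product

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    # Andrew's monotone chain convex hull (counter-clockwise).
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(list(reversed(pts)))
    return lower[:-1] + upper[:-1]

def area(points):
    # Shoelace area of the convex hull of the given lattice points.
    h = hull(points)
    if len(h) < 3:
        return 0.0
    s = 0.0
    for (x1, y1), (x2, y2) in zip(h, h[1:] + h[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def bkk_count(A1, A2):
    # Generic root count in 2-D: 2! * V(D1, D2)
    #   = area(D1 + D2) - area(D1) - area(D2), with D1 + D2 the Minkowski sum.
    msum = [(a[0] + b[0], a[1] + b[1]) for a, b in product(A1, A2)]
    return round(area(msum) - area(A1) - area(A2))

# Two generic quadrics: exponent set = lattice points of twice the unit simplex.
quad = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]
print(bkk_count(quad, quad))  # 4, matching the Bezout number 2*2
```

For dense polynomials the count reproduces Bézout's bound, as in the example; for sparse systems (smaller Newton polytopes) it is generally sharper.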
Trivia
Kushnirenko's name is also spelt Kouchnirenko. David Bernstein is a brother of Joseph Bernstein. Askold Khovanskii has found about 15 different proofs of this theorem.
References
See also
Bézout's theorem for another upper bound on the number of common zeros of $n$ polynomials in $n$ indeterminates.
Theorems in algebra
Theorems in geometry |
https://en.wikipedia.org/wiki/Carbon%20fiber%20testing | Carbon fiber testing is a set of various different tests that researchers use to characterize the properties of carbon fiber. The results for the testing are used to aid the manufacturer and developers decisions selecting and designing material composites, manufacturing processes and for ensured safety and integrity. Safety-critical carbon fiber components, such as structural parts in machines, vehicles, aircraft or architectural elements are subject to testing.
Introduction
Carbon fiber reinforced plastics and reinforced polymers are gaining importance as lightweight materials. There are various disciplines of material testing that especially apply to carbon fiber materials. Most common are destructive tests, such as stress, fatigue and micro-sectioning tests. There are also methods of non-destructive testing (NDT), so the material can still be used after testing. Common methods are ultrasonic, X-ray, HF eddy current, radio wave testing and thermography. Additionally, structural health monitoring (SHM) methods allow testing during application.
Testing methods
Destructive Testing
Safety-critical carbon fiber parts, such as aircraft frames, need to be tested destructively (e.g. stress, fatigue) and non-destructively (e.g. fiber orientation, delamination and bonding). Three types of destructive testing are micro-sectioning, stress and fatigue tests. A form of fatigue testing for carbon fiber components is very high cycle fatigue (VHCF). Common VHCF test methods are ultrasonic or resonance testing of tension, compression, or torsion. Typically, destructive tests are carried out to validate the mechanical properties, whereas NDT is used to monitor and control the manufacturing process of the CFRP parts.
Non-Destructive Testing
The aerospace industry relies on thermography testing to help detect defects in the carbon fiber components. Ultrasonic testing of CFRP parts is the most popular form of NDT testing. Ultrasonic testing allows researchers to find a |
https://en.wikipedia.org/wiki/Distributed%20R | Distributed R is an open source, high-performance platform for the R language. It splits tasks between multiple processing nodes to reduce execution time and analyze large data sets. Distributed R enhances R by adding distributed data structures, parallelism primitives to run functions on distributed data, a task scheduler, and multiple data loaders. It is mostly used to implement distributed versions of machine learning tasks. Distributed R is written in C++ and R, and retains the familiar look and feel of R. , Hewlett-Packard (HP) provides enterprise support for Distributed R with proprietary additions such as a fast data loader from the Vertica database.
History
Distributed R was begun in 2011 by Indrajit Roy, Shivaram Venkataraman, Alvin AuYoung, and Robert S. Schreiber as a research project at HP Labs. It was open sourced in 2014 under the GPLv2 license and is available at GitHub.
In February 2015, Distributed R reached its first stable version 1.0, along with enterprise support from HP.
Components
Distributed R is a platform to implement and execute distributed applications in R. The goal is to extend R for distributed computing, while retaining the simplicity and look-and-feel of R. Distributed R consists of the following components:
Distributed data structures: Distributed R extends R's common data structures such as array, data.frame, and list to store data across multiple nodes. The corresponding Distributed R data structures are darray, dframe, and dlist. Many of the common data structure operations in R, such as colSums, rowSums, nrow and others, are also available on distributed data structures.
Parallel loop: Programmers can use the parallel loop, called foreach, to manipulate distributed data structures and execute tasks in parallel. Programmers only specify the data structure and function to express applications, while the runtime schedules tasks and, if required, moves around data.
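Distributed R's own API is R-based; as a rough Python analogue (an illustration of the pattern, not Distributed R code), the partition-reduce-combine style behind a darray plus a foreach loop can be sketched with the standard library:

```python
from concurrent.futures import ThreadPoolExecutor

def colsums_partitioned(rows, npartitions=4):
    """Split a row-major matrix into partitions, reduce each in parallel,
    then combine - mirroring a darray + foreach style colSums computation."""
    ncols = len(rows[0])
    step = max(1, len(rows) // npartitions)
    parts = [rows[i:i + step] for i in range(0, len(rows), step)]

    def partial_colsums(part):
        # Per-partition column sums (the body a worker node would run).
        return [sum(col) for col in zip(*part)]

    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(partial_colsums, parts))

    # Combine step: elementwise-add the partial results.
    return [sum(p[j] for p in partials) for j in range(ncols)]

matrix = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(colsums_partitioned(matrix))  # [16, 20]
```

In Distributed R the runtime, not the programmer, would decide where each partition lives and which node runs each loop body; the sketch only shows the shape of the computation.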
Distributed algorithms: Distributed versions of common machin |
https://en.wikipedia.org/wiki/Medical%20data%20breach | Medical data, including patients' identity information, health status, disease diagnosis and treatment, and biogenetic information, not only involve patients' privacy but also have a special sensitivity and important value, which may bring physical and mental distress and property loss to patients and even negatively affect social stability and national security once leaked. However, the development and application of medical AI must rely on a large amount of medical data for algorithm training, and the larger and more diverse the amount of data, the more accurate the results of its analysis and prediction will be. However, the application of big data technologies such as data collection, analysis and processing, cloud storage, and information sharing has increased the risk of data leakage. In the United States, the rate of such breaches has increased over time, with 176 million records breached by the end of 2017. There have been 245 data breaches of 10,000 or more records, 68 breaches of the healthcare data of 100,000 or more individuals, 25 breaches that affected more than half a million individuals, and 10 breaches of the personal and protected health information of more than 1 million individuals.
Black market for health data
In February 2015 an NPR report claimed that organized crime networks had ways of selling health data in the black market.
In 2015 a Beazley Group employee estimated that medical records could sell on the black market for -50.
Crime is the primary cause of medical data breaches.
How data is lost
Theft, data loss, hacking, and unauthorized account access are ways in which medical data breaches happen. Among reported breaches of medical information in the United States, networked information systems accounted for the largest number of records breached. Many data breaches in the US health care system happen among business associates of the health care providers, who continuously gain access to patients' data.
List |
https://en.wikipedia.org/wiki/Leia%20%28company%29 | Leia Inc. is an American company producing 3D Lightfield products and software applications.
Leia is headquartered in Menlo Park, California, with a nano-fabrication center in Palo Alto, a content team in Los Angeles and Auckland, New Zealand, and an industrialization center in Suzhou, China.
History
The company was founded in 2014 as a spin-off of HP Labs. Its research into the holographic display concept under HP was published by Nature in 2013. CEO David Fattal explained that its diffraction system would address shortcomings with other mobile 3D display systems, such as being able to be seen by multiple viewers at once, using the display in its original 2D mode with no loss of resolution and not requiring CPU-intensive eye tracking. The company foresaw uses of its technology in mobile devices, automobiles, and medical applications.
In May 2016, Leia announced a partnership with Altice to market a smartphone featuring its technology.
In 2017, Red Digital Cinema announced its intent to produce a high-end smartphone featuring the technology (the Red Hydrogen One). As part of its development, Red entered into a strategic partnership with Leia, including funding, and Red's founder Jim Jannard joining Leia's board of directors.
In 2018, Leia launched its lightfield content platform LeiaLoft™ including an Android App Store and a developer portal.
The RED Hydrogen phone featuring Leia's switchable lightfield display product was launched on November 2, 2018 in the United States via AT&T and Verizon and in Mexico via Telcel.
In July 2019, Leia and Continental announced a long-term partnership to bring lightfield displays and content to the automotive world.
In 2020, Leia launched the Lume Pad, a B2B Android tablet featuring its recent switchable 10.8-in Lightfield display, designed to service the Education, Medical, Retail, and Hospitality industries.
Lume Pad won 2 CES 2021 Awards (Computer Hardware & Components and Digital Imaging & Photography).
In 2021, Leia la |
https://en.wikipedia.org/wiki/De%20Correspondent | De Correspondent is a Dutch news website based in Amsterdam, Netherlands. It was launched on 30 September 2013 after raising more than in a crowdfunding campaign in eight days. The website distinguishes itself by rejecting the daily news cycle and focusing on in-depth and chronological coverage on a topical basis, led by individual correspondents who each focus on specific topics. Sometimes it publishes English versions of its articles.
The concept and initial success of De Correspondent has inspired other projects elsewhere. A German website Krautreporter was founded in 2014 and adopted the same concept.
An English-language news site, titled The Correspondent, launched on September 30, 2019. The site raised through a crowdfunding campaign in late 2018, boosted by prominent backers including Jay Rosen and Trevor Noah. However, it endured substantial criticism after it was announced that it would not open an office in the United States, as many backers had anticipated. On 10 December 2020, NiemanLab broke the news that The Correspondent would be closing down on 31 December 2020.
History
The project was co-founded by Dutch journalist Rob Wijnberg, creative director Harald Dunnink, CTO Sebastian Kersten, and publisher Ernst-Jan Pfauth. Wijnberg, former editor-in-chief of the Dutch newspaper NRC Next, proposed the crowdfunding idea for an ad-free news media platform on national television in March 2013. Eight days later, he and his team reached their goal of 15,000 subscribers all paying €60 for a one-year membership.
Wijnberg worked with digital creative agency Momkai and its owners, Harald Dunnink and Sebastian Kersten, served as creative director and CTO respectively. Ernst-Jan Pfauth, who had been the founding editor of The Next Web and head of digital at Dutch newspaper NRC Handelsblad, joined as a publisher.
The website went live in September 2013. By January 2015 the website had more than 45,000 paying subscribers. In January 2016 the number of paying subsc |
https://en.wikipedia.org/wiki/Premier%20Boxing%20Champions | Premier Boxing Champions (PBC) is an ongoing series of televised boxing events and promotion connected to manager Al Haymon.
PBC was initially promoted as an effort to return boxing to mainstream broadcast and cable television, as opposed to premium channels and pay-per-view. The first Premier Boxing Champions card was broadcast by NBC on March 7, 2015, and the promotion reached deals with an array of other broadcasters, with brokered cards scheduled across all four of the major television networks in the United States (ABC, CBS, Fox, NBC) and their affiliated sports-oriented cable networks (ESPN, CBS Sports Network, FS1, and NBCSN, respectively), as well as on outlets such as Spike and Bounce TV.
In parallel with the focus on major cards on broadcast television, the events initially featured a more elaborate in-arena staging than other boxing events, featuring an entrance stage, and a circular marquee and jumbotron suspended above the ring. The telecasts also employed various technologies, including a 360-degree camera rig above the ring, and sensor-equipped gloves and shorts for gathering additional statistics. However, these features were phased out from later events. By 2018, PBC had established long-term deals with Fox Sports and Showtime, with the networks paying traditional rights fees, and holding the rights to produce PBC pay-per-view events.
Although it promotes the media rights of its associated events, PBC is not considered to be a promoter, in compliance with the Muhammad Ali Boxing Reform Act (which forbids managers from also serving as promoters). Haymon considers himself an "adviser" and manager. Golden Boy Promotions and Top Rank both filed lawsuits against Haymon and the investors of PBC, arguing that through PBC and other internal intricacies, Haymon was serving as both a manager and promoter—actions which are forbidden under the Ali Act. Additionally, the two promoters claimed violations of antitrust law, with Top Rank in particular claiming
https://en.wikipedia.org/wiki/Wind%20Energy%20%28journal%29 | Wind Energy is a monthly peer-reviewed scientific journal covering research on wind power published by John Wiley & Sons. The editor-in-chief is Simon Watson (Delft University of Technology). According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.730, ranking it 78th out of 114 journals in "Energy & Fuels" and 54th out of 135 journals in "Engineering Mechanical".
References
External links
Print:
Online:
Wiley (publisher) academic journals
Academic journals established in 1998
English-language journals
Energy and fuel journals
Monthly journals
Wind power |
https://en.wikipedia.org/wiki/Progress%20in%20Photovoltaics | Progress in Photovoltaics is a monthly peer-reviewed scientific journal covering research on photovoltaics. It is published by John Wiley & Sons and the editor-in-chief is Martin A. Green (University of New South Wales). According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.953, ranking it 17th out of 114 journals in "Energy & Fuels", 21st out of 160 journals in "Physics Applied", and 59th out of 336 journals in "Materials Science Multidisciplinary".
References
External links
Wiley (publisher) academic journals
Academic journals established in 1993
English-language journals
Energy and fuel journals
Photovoltaics
Monthly journals |
https://en.wikipedia.org/wiki/Spot%20height | A spot height is an exact point on a map with an elevation recorded beside it that represents its height above a given datum. In the UK this is the Ordnance Datum. Unlike a bench-mark, which is marked by a disc or plate, there is no official indication of a spot height on the ground although, in open country, spot heights may sometimes be marked by cairns. In geoscience, it can be used for showing elevations on a map, alongside contours, bench marks, etc.
See also
Surveying
Benchmark (surveying)
Triangulation station
References
Cartography
Geodesy
Surveying
Vertical position |
https://en.wikipedia.org/wiki/Obangsaek | The traditional Korean color spectrum, also known as Obangsaek (, means five-orientation-color), is the color scheme of the five Korean traditional colors of white, black, blue, yellow and red. In Korean traditional arts and traditional textile patterns, the colors of Obangsaek represent five cardinal directions: Obangsaek theory is a combination of Five Elements and Five Colours theory and originated in China.
Five orientations
Blue: east
Red: south
Yellow: center
White: west
Black: north
These colors are also associated with the Five Elements of traditional Korean culture:
Blue: Wood
Red: Fire
Yellow: Earth
White: Metal
Black: Water
References
Color
Optical spectrum
Vision
Korean culture
Korean art
Korean clothing
Orientation (geometry)
Color in culture |
https://en.wikipedia.org/wiki/Fungistatics | Fungistatics are anti-fungal agents that inhibit the growth of fungus (without killing the fungus). The term fungistatic may be used as both a noun and an adjective. Fungistatics have applications in agriculture, the food industry, the paint industry, and medicine.
Anti-fungal medicines
Fluconazole is a fungistatic antifungal medication that is administered orally or intravenously. It is used to treat a variety of fungal infections, especially Candida infections of the vagina ("yeast infections"), mouth, throat, and bloodstream. It is also used to prevent infections in people with weak immune systems, including those with neutropenia due to cancer chemotherapy, transplant patients, and premature babies. Its mechanism of action involves interfering with synthesis of the fungal cell membrane.
Itraconazole (R51211), invented in 1984, is a triazole fungistatic antifungal agent prescribed to patients with fungal infections. The drug may be given orally or intravenously. Itraconazole has a broader spectrum of activity than fluconazole (but not as broad as voriconazole or posaconazole). In particular, it is active against Aspergillus, which fluconazole is not. The mechanism of action of itraconazole is the same as the other azole antifungals: it inhibits the fungal-mediated synthesis of ergosterol.
Anti-fungal food preservatives
Sodium benzoate and potassium sorbate are both examples of fungistatic substances that are widely used in the preservation of food and beverages.
See also
Fungicide – the other type of anti-fungal agents are fungicidal agents (fungicides)
References
Pharmaceutical sciences
Food chemistry
Biochemistry |
https://en.wikipedia.org/wiki/Buccopharyngeal | In anatomy, buccopharyngeal structures are those pertaining to the cheek and the pharynx or to the mouth and the pharynx.
It may refer to:
Buccopharyngeal membrane
Buccopharyngeal fascia
Anatomy |
https://en.wikipedia.org/wiki/IBM%20Music%20Feature%20Card | The IBM Music Feature Card (simply referred to as the IBM PC 'Music Feature' by IBM) and sometimes abbreviated as the IBM MFC, or just IMFC) is a professional-level sound card for the PC, and used the 8-bit ISA bus. The card made use of the Yamaha YM2164 chip which produces sound and music via FM synthesis.
It was introduced in 1987 by IBM, and originally oriented towards composers and musicians.
In the late 1980s, sound was becoming the norm in computer games, and video game companies started supporting sound cards in their products. In the case of the IBM Music Feature Card, Sierra and MicroProse were the main companies that showed support.
The IBM Music Feature Card failed to gain much traction, mainly because of its high retail price and aggressive, superior competition from Roland with the internal LAPC-I (and its external sound module equivalent, the MT-32).
Some games fully support the IMFC, including King's Quest IV: The Perils of Rosella, Leisure Suit Larry Goes Looking for Love (in Several Wrong Places), Leisure Suit Larry III: Passionate Patti in Pursuit of the Pulsating Pectorals, Space Quest III: The Pirates of Pestulon and Silpheed.
See also
IBM PC
Sound card
Sierra Online
Roland LAPC-I, MPU-401 and MT-32
References
External links
IBM Music Feature store demonstration software (YouTube)
Sound cards
Music Feature Card |
https://en.wikipedia.org/wiki/Indigenous%20architecture | The field of Indigenous architecture refers to the study and practice of architecture of, for and by Indigenous people. It is a field of study and practice in the United States, Australia, Aotearoa/New Zealand, Canada, Arctic area of Sápmi and many other countries where Indigenous people have a built tradition or aspire translate or to have their cultures translated in the built environment. This has been extended to landscape architecture, urban design, planning, public art, placemaking and other ways of contributing to the design of built environments.
Australia
The traditional or vernacular architecture of Aboriginal and Torres Strait Islander people in Australia varied to meet the lifestyle, social organisation, family size, cultural and climatic needs and resources available to each community.
The types of forms varied from dome frameworks made of cane through spinifex-clad arc-shaped structures, to tripod and triangular shelters and elongated, egg-shaped, stone-based structures with a timber frame to pole and platform constructions. Annual base camp structures, whether dome houses in the rainforests of Queensland and Tasmania or stone-based houses in south-eastern Australia, were often designed for use over many years by the same family groups. Different language groups had differing names for structures. These included humpy, gunyah (or gunya), goondie, wiltja and wurley (or wurlie).
Until the 20th century, non-Indigenous peoples assumed that Aboriginal people lacked permanent buildings, likely because Aboriginal ways of life were misinterpreted during early contact with Europeans. Labelling Aboriginal communities as 'nomadic' allowed early settlers to justify the takeover of Traditional Lands claiming that they were not inhabited by permanent residents.
Stone engineering was utilised by a number of Indigenous language groups. Examples of Aboriginal stone structures come from Western Victoria's Gunditjmara peoples. These builders utilised basalt rocks a |
https://en.wikipedia.org/wiki/Smooth%20maximum | In mathematics, a smooth maximum of an indexed family x1, ..., xn of numbers is a smooth approximation to the maximum function $\max(x_1,\ldots,x_n)$, meaning a parametric family of functions $m_\alpha(x_1,\ldots,x_n)$ such that for every $\alpha$, the function $m_\alpha$ is smooth, and the family converges to the maximum function as $\alpha \to \infty$. The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: maximum as the parameter goes to positive infinity, minimum as the parameter goes to negative infinity; in symbols, $m_\alpha \to \max$ as $\alpha \to \infty$ and $m_\alpha \to \min$ as $\alpha \to -\infty$. The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family.
Examples
Boltzmann operator
For large positive values of the parameter $\alpha$, the following formulation is a smooth, differentiable approximation of the maximum function. For negative values of the parameter that are large in absolute value, it approximates the minimum:
$$S_\alpha(x_1, \ldots, x_n) = \frac{\sum_{i=1}^n x_i e^{\alpha x_i}}{\sum_{i=1}^n e^{\alpha x_i}}$$
$S_\alpha$ has the following properties:
$S_\alpha \to \max$ as $\alpha \to \infty$
$S_0$ is the arithmetic mean of its inputs
$S_\alpha \to \min$ as $\alpha \to -\infty$
The gradient of $S_\alpha$ is closely related to softmax and is given by
$$\nabla_{x_i} S_\alpha = \frac{e^{\alpha x_i}}{\sum_{j=1}^n e^{\alpha x_j}} \left[1 + \alpha \left(x_i - S_\alpha(x_1, \ldots, x_n)\right)\right].$$
This makes the softmax function useful for optimization techniques that use gradient descent.
This operator is sometimes called the Boltzmann operator, after the Boltzmann distribution.
LogSumExp
Another smooth maximum is LogSumExp:
$$\mathrm{LSE}_\alpha(x_1, \ldots, x_n) = \frac{1}{\alpha} \log \sum_{i=1}^n e^{\alpha x_i}$$
This can also be normalized if the $x_i$ are all non-negative, yielding a function with domain $[0,\infty)^n$ and range $[0,\infty)$:
$$g(x_1, \ldots, x_n) = \frac{1}{\alpha} \log\left(\sum_{i=1}^n e^{\alpha x_i} - (n-1)\right)$$
The $(n-1)$ term corrects for the fact that $e^0 = 1$ by canceling out all but one zero exponential, and $g = 0$ if all $x_i$ are zero.
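As a quick numerical check of the properties described above, the Boltzmann operator and LogSumExp can be sketched in a few lines of Python (the function names are my own, not from the article):

```python
import math

def boltzmann(xs, alpha):
    # Boltzmann operator: average of the inputs weighted by exp(alpha * x).
    weights = [math.exp(alpha * x) for x in xs]
    return sum(x * w for x, w in zip(xs, weights)) / sum(weights)

def logsumexp(xs, alpha):
    # LogSumExp smooth maximum, shifted by max(xs) for numerical stability.
    m = max(xs)
    return m + math.log(sum(math.exp(alpha * (x - m)) for x in xs)) / alpha

xs = [1.0, 2.0, 3.0]
print(boltzmann(xs, 100))   # ~3.0: approximates the maximum
print(boltzmann(xs, 0))     # 2.0: the arithmetic mean
print(boltzmann(xs, -100))  # ~1.0: approximates the minimum
print(logsumexp(xs, 100))   # ~3.0: overestimates the max by at most log(n)/alpha
```

Subtracting the maximum before exponentiating is the standard trick to avoid overflow for large parameter values.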
Mellowmax
The mellowmax operator is defined as follows:
$$mm_\omega(x_1, \ldots, x_n) = \frac{1}{\omega} \log\left(\frac{1}{n} \sum_{i=1}^n e^{\omega x_i}\right)$$
It is a non-expansive operator. As $\omega \to \infty$, it acts like a maximum. As $\omega \to 0$, it acts like an arithmetic mean. As $\omega \to -\infty$, it acts like a minimum. This operator can be viewed as a particular instantiation of the quasi-arithmetic mean. It can also be derived from information theoretical principles as a way of regularizing policies with a cost function defined by KL divergence. The operator has previously been utilized in other |
https://en.wikipedia.org/wiki/NowSecure | NowSecure (formerly viaForensics) is a Chicago-based mobile security company that publishes mobile app and device security software.
2009: Beginnings
Former CEO and Co-founder Andrew Hoog was working as a CIO when one of his employees was dismissed. After this dismissal, Hoog was tasked with reviewing whether the employee had stolen any sensitive data from the company. Rather than hire a forensics firm to investigate, Hoog performed the investigation himself and continued to do forensic work on the side.
2009-2014: From Forensics to Security
Hoog and his wife Chee-Young Kim both contributed money to start the company, originally known as Chicago Electronic Discovery and then as viaForensics. Hoog devoted himself full-time to mobile forensics, while Kim continued to work at her corporate job during the day and participated in the business development at night and on weekends. In March 2011, viaForensics was profitable to the extent that it could pay for employee benefits, so Kim left her job and went to work at viaForensics full-time. On June 5 of that year, viaExtract 1.0 was released at a conference in Myrtle Beach. viaForensics introduced what was known as viaLab in March 2013. viaLab was a product that allowed automated testing for a variety of security flaws in apps, including man-in-the-middle attacks, SSL strip attacks, coding problems, and opportunities for reverse engineering.
viaForensics was ranked the #13 fastest-growing tech company in the US on the Inc. 5000 list in 2014.
2014-Present: Rebrand to NowSecure
In 2014, viaForensics launched viaProtect, an app to show users destinations and sources of data to and from their mobile devices, at RSA Conference. The company then began to focus more on similar individual and enterprise device protection. As a result of this focus shift, viaForensics decided to rebrand as NowSecure.
Products
NowSecure is the publisher of NowSecure Forensics (formerly viaExtract), NowSecure Lab (formerly viaLab), and the NowS |
https://en.wikipedia.org/wiki/Swecha | Swecha is a non-profit organization, formerly called the Free Software Foundation Andhra Pradesh (FSF-AP), and a part of the Free Software Movement of India (FSMI). Swecha is also the name of a Telugu-language operating system the organization released in 2005. The organization is a social movement working towards educating the masses about the essence of Free Software and providing knowledge to the commoners.
Swecha organizes workshops and seminars in the Indian states of Telangana and Andhra Pradesh. Swecha presently has active GLUGs (GNU/Linux User Groups) in many engineering colleges, including the International Institute of Information Technology, Hyderabad, Jawaharlal Nehru Technological University, Hyderabad, Chaitanya Bharathi Institute of Technology, St. Martin's Engineering College, Sridevi Women's Engineering College, Mahatma Gandhi Institute of Technology, SCIENT Institute of Technology, CMR Institute of Technology, Hyderabad, Jyothishmathi College of Engineering and Technology, MVGR College of Engineering, K L University and Ace Engineering College.
Objectives
The main objectives of the organization are as follows:
To take forward free software and its ideological implications to all corners of our country from the developed domains to the underprivileged.
To create awareness among computer users in the use of free software.
To work towards usage of free software in all streams of sciences and research.
To take forward implementation and usage of free software in school education, academics and higher education.
To work towards e-literacy and bridging digital divide based on free software and mobilizing the underprivileged.
To work among developers on solutions catering to societal & national requirements.
To work towards a policy change favoring free software in all walks of life.
Activities
Swecha hosted a National Convention for Academics and Research which was attended by researchers and academicians from different parts of the country. Former President of In |
https://en.wikipedia.org/wiki/Anthem%20medical%20data%20breach | The Anthem medical data breach was a medical data breach of information held by Elevance Health, known at that time as Anthem Inc.
On February 4, 2015, Anthem, Inc. disclosed that criminal hackers had broken into its servers and had potentially stolen over 37.5 million records containing personally identifiable information. On February 24, 2015, Anthem raised the number to 78.8 million people whose personal information had been affected. According to Anthem, Inc., the data breach extended into multiple brands Anthem, Inc. uses to market its healthcare plans, including Anthem Blue Cross, Anthem Blue Cross and Blue Shield, Blue Cross and Blue Shield of Georgia, Empire Blue Cross and Blue Shield, Amerigroup, Caremore, and UniCare. Healthlink says that it was also a victim. Anthem says users' medical information and financial data were not compromised. Anthem has offered free credit monitoring in the wake of the breach. Michael Daniel, chief adviser on cybersecurity for President Barack Obama, said he would be changing his own password. According to The New York Times, about 80 million company records were hacked, and there is a fear that the stolen data will be used for identity theft. The compromised information included names, birthdays, medical IDs, social security numbers, street addresses, e-mail addresses and employment information, including income data.
Theft of the data
The data was stolen over a period of weeks the month before the data breach was discovered.
Because no medical information was compromised, Anthem was not required by law to encrypt the data. However, Anthem faced several civil class-action lawsuits, which were settled in 2017 at a cost of $115 million. Anthem did not admit any wrongdoing in the settlement.
Data from the attack is expected to be sold on the black market.
Impact
Persons whose data was stolen could face problems with identity theft for the rest of their lives. Anthem had a million insurance |
https://en.wikipedia.org/wiki/End-sequence%20profiling | End-sequence profiling (ESP) (sometimes "Paired-end mapping (PEM)") is a method based on sequence-tagged connectors developed to facilitate de novo genome sequencing to identify high-resolution copy number and structural aberrations such as inversions and translocations.
Briefly, the target genomic DNA is isolated and partially digested with restriction enzymes into large fragments. Following size-fractionation, the fragments are cloned into plasmids to construct artificial chromosomes such as bacterial artificial chromosomes (BAC) which are then sequenced and compared to the reference genome. The differences, including orientation and length variations between constructed chromosomes and the reference genome, will suggest copy number and structural aberration.
Artificial chromosome construction
Before analyzing target genome structural aberration and copy number variation (CNV) with ESP, the target genome is usually amplified and conserved with artificial chromosome construction. The classic strategy to construct an artificial chromosome is bacterial artificial chromosome (BAC). Basically, the target chromosome is randomly digested and inserted into plasmids which are transformed and cloned in bacteria. The size of fragments inserted is 150–350 kb. Another commonly used artificial chromosome is fosmid. The difference between BAC and fosmids is the size of the DNA inserted. Fosmids can only hold 40 kb DNA fragments, which allows a more accurate breakpoint determination.
Structural aberration detection
End sequence profiling (ESP) can be used to detect structural variations such as insertions, deletions, and chromosomal rearrangements. Compared to other methods that look at chromosomal abnormalities, ESP is particularly useful for identifying copy-neutral abnormalities such as inversions and translocations that would not be apparent when looking at copy number variation. From the BAC library, both ends of the inserted fragments are sequenced using a sequencing platfor |
https://en.wikipedia.org/wiki/Amelia%20Greenhall | Amelia Cousins Greenhall is an American feminist tech blogger. She cofounded feminist tech blog and publication Model View Culture with Shanley Kane. Greenhall is co-founder and Executive Director of Double Union, a feminist women-only hackerspace in San Francisco, with Valerie Aurora, and is a Quantified Self enthusiast. Greenhall is the publisher and co-founder of Open Review Quarterly, a literary journal on modern culture (founded in September 2010).
Prior to co-founding Model View Culture in November 2013, Greenhall was a user experience designer, user interface designer and data scientist in Seattle. She left Model View Culture in May 2014.
Born in Hawaii and raised in Arizona, Greenhall is a 2009 studio art and electrical engineering graduate of Vanderbilt University in Tennessee. She went on to earn a master's degree in public health at the University of Washington.
References
External links
Activists from the San Francisco Bay Area
American bloggers
American computer programmers
American feminists
Businesspeople from the San Francisco Bay Area
Computer designers
Feminist bloggers
Living people
People from San Francisco
Writers from Seattle
Third-wave feminism
University of Washington School of Public Health alumni
Vanderbilt University alumni
American women bloggers
Year of birth missing (living people)
21st-century American women artists |
https://en.wikipedia.org/wiki/DRIP-seq | DRIP-seq (DRIP-sequencing) is a technology for genome-wide profiling of a type of DNA-RNA hybrid called an "R-loop". DRIP-seq utilizes a sequence-independent but structure-specific antibody for DNA-RNA immunoprecipitation (DRIP) to capture R-loops for massively parallel DNA sequencing.
Introduction
An R-loop is a three-stranded nucleic acid structure, which consists of a DNA-RNA hybrid duplex and a displaced single stranded DNA (ssDNA). R-loops are predominantly formed in cytosine-rich genomic regions during transcription and are known to be involved with gene expression and immunoglobulin class switching. They have been found in a variety of species, ranging from bacteria to mammals. They are preferentially localized at CpG island promoters in human cells and highly transcribed regions in yeast.
Under abnormal conditions, namely elevated production of DNA-RNA hybrids, R-loops can cause genome instability by exposing single-stranded DNA to endogenous damage exerted by enzymes such as AID and APOBEC, or by overexposure to chemically reactive species. Therefore, understanding where and in what circumstances R-loops form across the genome is crucial for a better understanding of genome instability. R-loop characterization was initially limited to locus-specific approaches. However, with the arrival of massively parallel sequencing technologies and derivatives such as DRIP-seq, the possibility of investigating entire genomes for R-loops has opened up.
DRIP-seq relies on the high specificity and affinity of the S9.6 monoclonal antibody (mAb) towards DNA-RNA hybrids of various lengths. S9.6 mAb was first created and characterized in 1986 and is currently used for the selective immunoprecipitation of R-loops. Since then, it was used in diverse immunoprecipitation methods for R-loop characterization. The concept behind DRIP-seq is similar to ChIP-sequencing; R-loop fragments are the main immunoprecipitated material in DRIP-seq.
Uses and Current |
https://en.wikipedia.org/wiki/Device-independent%20quantum%20cryptography | A quantum cryptographic protocol is device-independent if its security does not rely on trusting that the quantum devices used are truthful.
Thus the security analysis of such a protocol needs to consider scenarios of imperfect or even malicious devices. Several important problems have been shown to admit unconditionally secure and device-independent protocols. A closely related topic (that is not discussed in this article) is measurement-device-independent quantum key distribution.
Overview and history
Mayers and Yao proposed the idea of designing quantum protocols using "self-testing" quantum apparatus, the internal operations of which can be uniquely determined by their input-output statistics. Subsequently, Roger Colbeck proposed in his thesis the use of Bell tests for checking the honesty of the devices. Since then, several problems have been shown to admit unconditionally secure and device-independent protocols, even when the actual devices performing the Bell test are substantially "noisy," i.e., far from being ideal. These problems include
quantum key distribution, randomness expansion, and randomness amplification.
Key distribution
The goal of quantum key distribution is for two parties, Alice and Bob, to share a common secret string through communications over public channels. This was a problem of central interest in quantum cryptography. It was also the motivating problem in Mayers and Yao's paper. A long sequence of works aim to prove unconditional security with robustness. Vazirani and Vidick were the first to reach this goal. Subsequently, Miller and Shi proved a similar result using a different approach.
Randomness expansion
The goal of randomness expansion is to generate a longer private random string starting from a uniform input string and using untrusted quantum devices. The idea of using Bell test to achieve this goal was first proposed by Roger Colbeck in his Ph.D. Thesis. Subsequent works have aimed to prove unconditional security with robus |
https://en.wikipedia.org/wiki/ID.me | ID.me is an American online identity network company that allows people to provide proof of their legal identity online. ID.me digital credentials can be used to access government services, healthcare logins, or discounts from retailers. The company is based in McLean, Virginia.
In the wake of the economic downturn caused by the COVID-19 pandemic, ID.me was contracted by numerous state unemployment agencies to verify the identities of claimants. The US Internal Revenue Service also uses ID.me as its only online option in accessing its online taxpayer tools.
History
Origins as TroopSwap and Troop ID
ID.me was founded in early 2010 by Blake Hall and Matt Thompson as TroopSwap, a daily deals website similar to Groupon and LivingSocial with a focus on the American military community. The company evolved into Troop ID, which provided digital identity verification for military personnel and veterans. Troop ID allowed service members and veterans to access online benefits from retailers, such as military discounts, as well as government agencies like the United States Department of Veterans Affairs.
Rebrand to ID.me
In 2013, the company rebranded again as ID.me with the goal of providing a ubiquitous secure identity verification network. To that end, they expanded to include verification of credentials for first responders, nurses, and students for discounts. In 2013, ID.me was awarded a two-year grant by the United States Chamber of Commerce to participate in the President's National Strategy for Trusted Identities in Cyberspace (NSTIC), a pilot project intended to help develop secure digital identification methods.
In late 2014, ID.me won a contract with the General Services Administration to provide digital identity credentials with Connect.gov. Co-founder Matt Thompson left the company in 2015. In March 2017, ID.me received $19 million in its Series B funding round. In 2018, ID.me became the first digital identity provider to be certified by the Kantara Initiati |
https://en.wikipedia.org/wiki/Comparison%20of%20anti-plagiarism%20software | The following tables compare software used for plagiarism detection.
General
References
Educational assessment and evaluation
Plagiarism detectors
Anti-plagiarism
Software law |
https://en.wikipedia.org/wiki/Pubmatic | PubMatic, Inc. develops and implements online advertising software and strategies for the digital publishing and advertising industry. PubMatic's sell-side, real-time programmatic ad transaction advertising software puts publishers of websites, videos, and mobile apps into contact with ad buyers by using automated systems, while allowing users to opt-out of having their personal information collected on internet searches. PubMatic has a number of offices in countries around the world.
History
PubMatic was founded in 2006 by brothers Rajeev Goel and Amar Goel, Anand Das and Mukul Kumar. PubMatic software was developed in Pune, India.
In 2011 the company hired Steve Pantelick as CFO, and in 2012 PubMatic raised $45 million from investors.
In 2014, PubMatic acquired the mobile ad server Mocean Mobile, formerly known as Mojiva, for $15.5 million.
In 2015, PubMatic opened an office in Latin America.
By 2016, the firm was operating by storing most of its data on OpenStack private cloud servers.
In January 2020, PubMatic launched an Identity Hub integrating identity partner IDs, including IAB DigiTrust, The Trade Desk Unified ID (UID 2.0), ID5, and LiveIntent.
In February 2020, PubMatic released the OpenWrap SDK to enhance header bidding options for mobile publishers.
In November 2020, PubMatic filed for an IPO in Nasdaq. The company launched its IPO on 9 December 2020. Its clients in 2020 included Verizon, News Corp, Electronic Arts, and Zynga, with Verizon comprising about a quarter of Pubmatic's revenue during the previous year.
Activities
PubMatic, for a fee, participates in online auctions to help advertisers buy and publishers sell media and advertising spots between various advertising companies. The company also produces quarterly reports about advertising prices.
References
External links
Official Website
Marketing companies established in 2006
Indian companies established in 2006
Digital marketing companies of India
Online advertising servic |
https://en.wikipedia.org/wiki/Arnaud%20Ch%C3%A9ritat | Arnaud Chéritat (born June 7, 1975) is a French mathematician who works as a director of research at the Institut de Mathématiques de Toulouse. His research concerns complex dynamics and the shape of Julia sets.
Chéritat earned a licenciate in mathematics in 1995 from the École Normale Supérieure, a diplôme d'études approfondies in pure mathematics in 1996 from the University of Paris-Sud, and a master's degree in pure and applied mathematics and informatics in 1998 from the École Normale Supérieure.
He defended his doctoral thesis in 2001 from the University of Paris-Sud, under the supervision of Adrien Douady, and completed his habilitation in 2008 from the University of Toulouse. He worked as a maître de conférences at the University of Toulouse from 2002 until 2007, when he moved to the Institut de Mathématiques de Toulouse.
In 2006, Chéritat won the Leconte Prize of the French Academy of Sciences. He was an invited speaker at the International Congress of Mathematicians in 2010. In 2012, he became one of the inaugural fellows of the American Mathematical Society.
Selected publications
with Artur Avila and Xavier Buff:
with Xavier Buff:
with Xavier Buff:
References
External links
Home page
1975 births
Living people
21st-century French mathematicians
Fellows of the American Mathematical Society
Dynamical systems theorists |
https://en.wikipedia.org/wiki/Windows%20Insider | Windows Insider is an open software testing program by Microsoft that allows users globally who own a valid license of Windows 11, Windows 10, or Windows Server to register for pre-release builds of the operating system previously only accessible to software developers.
Microsoft launched Windows Insider for developers, enterprise testers and the "technically able" to test new developer features on pre-release software and builds. The program gathers low-level diagnostic feedback in order to identify, investigate, mitigate and improve Windows 10, with the help, support and guidance of program participants, who are in direct communication with Microsoft engineers via a proprietary communication and diagnostic channel.
It was announced on September 30, 2014, along with Windows 10. By September 2015, over 7 million people took part in the Windows Insider program. On February 12, 2015, Microsoft started to test out previews of Windows 10 Mobile. Microsoft announced that the Windows Insider program would continue beyond the official release of Windows 10 for future updates.
Gabriel Aul and Dona Sarkar were both previously the head of the Windows Insider Program. The present head of the Windows Insider program is Amanda Langowski. Similar to the Windows Insider program, the Microsoft Office, Microsoft Edge, Skype, Bing, Xbox and Visual Studio Code teams have set up their own Insider programs.
History
Microsoft originally launched Windows Insider for enterprise testers and the "technically able" to test out new developer features and to gather feedback to improve the features built into Windows 10. By the time of the official launch of Windows 10 for PCs, a total of 5 million volunteers were registered on both Windows 10 and Windows 10 Mobile. They were also among the first people to receive the official update to Windows 10.
With the release of Windows 10, the Windows Insider app was merged with the Settings app. This made the ability to install Windows Insider preview b |
https://en.wikipedia.org/wiki/General%20Automation | GA General Automation was an American company, founded in 1968 by Larry Goshorn (a former marketing executive and a salesman from Honeywell), which manufactured minicomputers and industrial controllers.
In 1994, General Automation announced it would be relocating from Anaheim to Irvine. It announced it would be phasing-out its manufacturing operations but would retain its 50 employees.
Products
GA SPC-12 (Jan 1968)
Priced at $6400 and claiming $4,000 worth of free options
Totally integrated, binary, parallel, single address processor
8-bit data and 12 bit address
4,096 words (8 bit bytes) of memory with a 2.2 microsecond cycle time
Shared command concept that permits the SPC-12's 8-bit memory to handle 12-bit instructions.
Features included a real-time clock, expandable memory to 16K, a teletype interface, a control panel and a priority interrupt
GA SPC-8 (Nov 1968)
GA 18/30 (June 1968, IBM 1800 compatible)
GA SPC-16/30, /50 & /70 (November 1971)
GA SPC-16/40, /45, /65 & /85 (January 1972)
LSI-12/16 (January 1974)
These computers were initially produced with silicon on sapphire circuit technology provided by Rockwell International but yield problems caused a switch to conventional ICs by 1975.
GA 16/110 & /120 (December 1976)
GA 16/220 (July 1978)
GA 16/330
GA 16/440
GA 16/460
GA Zebra 1700/1750 (Introduced in 1985, a Motorola 68000 computer running Pick Operating System)
Parallel Computers, Inc. – fault-tolerant supermicro/minicomputer based on Unix, acquired 1987, sold 1988
References
External links
Computer History Museum
Documents at Bitsavers
18/30 Fortran IV Software Data Sheet
Minicomputers
Defunct computer companies of the United States
Defunct manufacturing companies based in California
Defunct technology companies based in California
Companies based in Anaheim, California
Computer companies established in 1968
Technology companies established in 1968
1968 establishments in California |
https://en.wikipedia.org/wiki/Beyond%20CMOS | Beyond CMOS refers to the possible future digital logic technologies beyond the CMOS scaling limits which limits device density and speeds due to heating effects.
Beyond CMOS is the name of one of the 7 focus groups in ITRS 2.0 (2013) and in its successor, the International Roadmap for Devices and Systems.
CPUs using CMOS were released from 1986 (e.g. the 12 MHz Intel 80386). As CMOS transistor dimensions were shrunk, the clock speeds also increased. Since about 2004, CMOS CPU clock speeds have leveled off at about 3.5 GHz.
CMOS device sizes continue to shrink – see Intel tick–tock and the ITRS:
22 nanometer Ivy Bridge in 2012
first 14 nanometer processors shipped in Q4 2014.
In May 2015, Samsung Electronics showed a 300 mm wafer of 10 nanometer FinFET chips.
It is not yet clear if CMOS transistors will still work below 3 nm. See 3 nanometer.
Comparisons of technology
About 2010 the Nanoelectronic Research Initiative (NRI) studied various circuits in various technologies.
Nikonov benchmarked (theoretically) many technologies in 2012, and updated it in 2014. The 2014 benchmarking included 11 electronic, 8 spintronic, 3 orbitronic, 2 ferroelectric, and 1 straintronics technology.
The 2015 ITRS 2.0 report included a detailed chapter on Beyond CMOS, covering RAM and logic gates.
Some areas of investigation
Magneto-Electric Spin-Orbit logic
tunnel junction devices, e.g. the tunnel field-effect transistor
indium antimonide transistors
carbon nanotube FETs, e.g. the CNT tunnel field-effect transistor
graphene nanoribbons
molecular electronics
spintronics — many variants
future low-energy electronics technologies, ultra-low dissipation conduction paths, including
topological materials
exciton superfluids
photonics and optical computing
superconducting computing
rapid single-flux quantum (RSFQ)
Superconducting computing and RSFQ
Superconducting computing includes several beyond-CMOS technologies that use superconducting devices, namely Josephson junctions, for electronic |
https://en.wikipedia.org/wiki/Double%20vector%20bundle | In mathematics, a double vector bundle is the combination of two compatible vector bundle structures, which contains in particular the tangent of a vector bundle and the double tangent bundle .
Definition and first consequences
A double vector bundle consists of , where
the side bundles and are vector bundles over the base ,
is a vector bundle on both side bundles and ,
the projection, the addition, the scalar multiplication and the zero map on E for both vector bundle structures are morphisms.
Double vector bundle morphism
A double vector bundle morphism consists of maps , , and such that is a bundle morphism from to , is a bundle morphism from to , is a bundle morphism from to and is a bundle morphism from to .
The flip of the double vector bundle is the double vector bundle .
Examples
If is a vector bundle over a differentiable manifold then is a double vector bundle when considering its secondary vector bundle structure.
If is a differentiable manifold, then its double tangent bundle is a double vector bundle.
References
Differential geometry
Topology
Differential topology |
https://en.wikipedia.org/wiki/Productive%20matrix | In linear algebra, a square nonnegative matrix $A$ of order $n$ is said to be productive, or to be a Leontief matrix, if there exists a nonnegative column matrix $P$ such that $P - AP$ is a positive matrix.
History
The concept of productive matrix was developed by the economist Wassily Leontief (Nobel Prize in Economics in 1973) in order to model and analyze the relations between the different sectors of an economy. The interdependency linkages between the latter can be examined by the input-output model with empirical data.
Explicit definition
The matrix $A \in \mathcal{M}_{n,n}(\mathbb{R}_{\geq 0})$ is productive if and only if $\exists P \in \mathcal{M}_{n,1}(\mathbb{R}_{\geq 0})$ such that $P - AP \in \mathcal{M}_{n,1}(\mathbb{R}_{>0})$.
Here $\mathcal{M}_{r,c}(\mathbb{R})$ denotes the set of r×c matrices of real numbers, whereas $\mathcal{M}_{r,c}(\mathbb{R}_{>0})$ and $\mathcal{M}_{r,c}(\mathbb{R}_{\geq 0})$ indicate a positive and a nonnegative matrix, respectively.
Properties
The following properties are proven e.g. in the textbook (Michel 1984).
Characterization
Theorem
A nonnegative matrix $A$ is productive if and only if $I_n - A$ is invertible with a nonnegative inverse, where $I_n$ denotes the identity matrix.
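As a numerical illustration of this characterization, the following sketch checks a hypothetical 2×2 input-output matrix (the numbers are my own, not from the article) and constructs a witness column from the inverse:

```python
# A is a hypothetical nonnegative 2x2 matrix; by the theorem, A is productive
# iff (I - A) is invertible with a nonnegative inverse.
A = [[0.2, 0.3],
     [0.1, 0.4]]

# I - A and its inverse, computed directly for the 2x2 case.
M = [[1 - A[0][0], -A[0][1]],
     [-A[1][0], 1 - A[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[ M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det,  M[0][0] / det]]
print(inv)  # all entries nonnegative, so A is productive

# Construct a witness P = (I - A)^-1 B for B = (1, 1), as in the proof,
# and check that P - AP is positive.
B = [1.0, 1.0]
P = [inv[0][0] * B[0] + inv[0][1] * B[1],
     inv[1][0] * B[0] + inv[1][1] * B[1]]
AP = [A[0][0] * P[0] + A[0][1] * P[1],
      A[1][0] * P[0] + A[1][1] * P[1]]
print([p - ap for p, ap in zip(P, AP)])  # positive entries, as required
```

By construction, $P - AP$ recovers the chosen positive column $B$, which is exactly the "if" direction of the proof below.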
Proof
"If" :
Let be invertible with a nonnegative inverse,
Let be an arbitrary column matrix with .
Then the matrix is nonnegative since it is the product of two nonnegative matrices.
Moreover, .
Therefore is productive.
"Only if" :
Let be productive, let such that .
The proof proceeds by reductio ad absurdum.
First, assume for contradiction is singular.
The endomorphism canonically associated with can not be injective by singularity of the matrix.
Thus some non-zero column matrix exists such that .
The matrix has the same properties as , therefore we can choose as an element of the kernel with at least one positive entry.
Hence is nonnegative and reached with at least one value .
By definition of and of , we can infer that:
, using that by construction.
Thus , using that by definition of .
This contradicts and , hence is necessarily invertible.
Second, assume for contradiction that $I_n - A$ is invertible but with at least one negative entry in its inverse.
Hence such that there is at least one negative entry |
https://en.wikipedia.org/wiki/DigiDoc | DigiDoc (Digital Document) is a family of digital signature- and cryptographic computing file formats utilizing a public key infrastructure. It currently has three generations of sub formats, DDOC- , a later binary based BDOC and currently used ASiC-E format that is supposed to replace the previous generation formats. DigiDoc was created and is developed and maintained by RIA (Riigi Infosüsteemi Amet, Information System Authority of Estonia).
The format is used to legally sign, and optionally encrypt, files such as text documents as part of an electronic transaction. All operations are done using a national ID card, a hardware token whose chip carries digital PKI certificates that allow a person's signature to be verified mathematically. A signed file is a container holding the actual signed, unmodified files, so the operation does not require any support from the software that created those files.
A format container and its signatures can be created using an application such as qDigiDoc, or a web service accessed from the user's web browser with a signing extension. When an application is used, the container is typically exchanged between the signing parties as an email attachment until everyone has signed it and has their own complete copy.
Web services also utilize identity cards for session authentication using an authentication certificate which is also stored on the id-card.
Technical description
A DigiDoc container contains the actual files and metadata, including a hash that represents those files. When signing, the software sends the content hash over the standardised PKCS #11 interface to the user's ID card. After verifying the user's PIN, the ID card signs the hash internally and returns a signature, which is then stored in the DigiDoc container.
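The hashing step can be illustrated with a minimal sketch. Here `content_digest` is a hypothetical helper and SHA-256 is assumed for illustration only; the actual digest algorithms and container structures are defined by the DigiDoc/ASiC-E specifications:

```python
import hashlib

def content_digest(file_bytes):
    """Compute a digest of the file content. In DigiDoc-style signing, a
    digest like this (not the file itself) is what the signing software
    sends to the id-card; the card signs the digest internally after the
    user enters a PIN, and only the resulting signature leaves the card."""
    return hashlib.sha256(file_bytes).digest()

digest = content_digest(b"contents of the document to be signed")
print(digest.hex())  # 32-byte digest, hex-encoded
```

Signing the digest rather than the file keeps the data sent to the smart card small and fixed-size, which is what makes on-card signing practical.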
During the signing, the certificate validity of each signing party is checked, and a signed timestamp is retrieved, using an OCSP service. The signed timestamp makes it possible to prove later at what time a document was signed (as the timestamp is derived from the document hash) and th
https://en.wikipedia.org/wiki/Circuit%20underutilization | Circuit underutilization (also chip underutilization, programmable circuit underutilization, gate underutilization, or logic block underutilization) refers to incomplete use of the semiconductor-grade silicon on a standardized, mass-produced programmable chip, such as a gate-array ASIC, an FPGA, or a CPLD.
Gate array
In the example of a gate array, which may come in sizes of 5,000 or 10,000 gates, a design that uses even 5,001 gates would be forced onto a 10,000-gate chip. This inefficiency results in underutilization of the silicon.
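The arithmetic behind the example can be made explicit. A small sketch, where the list of available chip sizes is just the hypothetical product line from the example above:

```python
def smallest_fitting_chip(design_gates, chip_sizes):
    """Return the smallest available chip that fits the design,
    together with the resulting silicon utilization."""
    fitting = [size for size in sorted(chip_sizes) if size >= design_gates]
    if not fitting:
        raise ValueError("design does not fit any available chip")
    chip = fitting[0]
    return chip, design_gates / chip

chip, utilization = smallest_fitting_chip(5001, [5000, 10000])
print(chip, f"{utilization:.2%}")  # 10000 50.01%
```

A design of 5,001 gates thus leaves almost half of the 10,000-gate chip unused, which is exactly the underutilization the article describes.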
FPGA
Because field-programmable gate arrays are organized into logic blocks, simple designs that underutilize a single block suffer from gate underutilization, as do designs that overflow onto multiple blocks, such as designs that use wide gates. Additionally, the very generic architecture of FPGAs leads to high inefficiency: multiplexers occupy silicon area to provide programmable selection, and an abundance of flip-flops is included to reduce setup and hold times even when a design does not require them, resulting in roughly 40 times lower density than that of standard-cell ASICs.
See also
Circuit minimization
Don't-care condition
References
https://en.wikipedia.org/wiki/Plug.dj | plug.dj was an interactive online social music streaming website based in Los Angeles, owned by Rowl, Inc. The site was "dedicated to growing positive international communities for sharing and discovering music". It was a free service with microtransactions, and had over 3 million registered accounts. The website was launched on February 29, 2012, by Steven Sacks, Alex Reinlieb, and Jason Grunstra.
Overview
Plug.dj consisted of different online chat rooms, called "communities", that users could freely create. Inside each community, users could choose to join a wait list and wait for their turn to be the DJ for everyone else in the community, playing a video or song chosen from either YouTube or SoundCloud, or simply listen passively. Users could also vote positively or negatively for each song or video played, or add it to their own playlists. By spending time or being active on the site, they were able to earn experience points (XP) and plug points (PP), which could be used to unlock and purchase various items, such as new avatars and chat badges. Each community on plug.dj was typically focused on a few specific musical genres, usually one of the subgenres of EDM (Electronic Dance Music), such as Trap, Dubstep, Electro, Drum'n'Bass and many others. Communities dedicated to non-electronic genres, such as rock, jazz, death metal, and classical also existed. The community creator was able to promote users to moderators to help ensure the community's rules were followed and to keep the environment friendly. Volunteer global moderators, called "Brand Ambassadors", also existed.
In April 2015, a paid subscription service was launched, which provided access to subscriber avatars and badges without the user having to spend PP.
Financial issues and relaunches
2015 shutdown
On September 14, 2015, plug.dj announced that the service would be shutting down if it was unable to raise enough money to support the running of the service by a disclosed deadline of September 28th
https://en.wikipedia.org/wiki/Sliding%20filament%20theory | The sliding filament theory explains the mechanism of muscle contraction based on muscle proteins that slide past each other to generate movement. According to the sliding filament theory, the myosin (thick filaments) of muscle fibers slide past the actin (thin filaments) during muscle contraction, while the two groups of filaments remain at relatively constant length.
The theory was independently introduced in 1954 by two research teams, one consisting of Andrew Huxley and Rolf Niedergerke from the University of Cambridge, and the other consisting of Hugh Huxley and Jean Hanson from the Massachusetts Institute of Technology. It was originally conceived by Hugh Huxley in 1953. Andrew Huxley and Niedergerke introduced it as a "very attractive" hypothesis.
Before the 1950s there were several competing theories on muscle contraction, including electrical attraction, protein folding, and protein modification. The novel theory directly introduced a new concept called cross-bridge theory (classically swinging cross-bridge, now mostly referred to as cross-bridge cycle) which explains the molecular mechanism of sliding filament. Cross-bridge theory states that actin and myosin form a protein complex (classically called actomyosin) by attachment of myosin head on the actin filament, thereby forming a sort of cross-bridge between the two filaments. The sliding filament theory is a widely accepted explanation of the mechanism that underlies muscle contraction.
History
Early works
The first muscle protein discovered was myosin, extracted and named by the German scientist Willy Kühne in 1864. In 1939 a Russian husband-and-wife team, Vladimir Alexandrovich Engelhardt and Militsa Nikolaevna Lyubimova, discovered that myosin had an enzymatic property (called ATPase) that can break down ATP to release energy. Albert Szent-Györgyi, a Hungarian physiologist, turned his focus to muscle physiology after winning the Nobel Prize in Physiology or Medicine in 1937 for his works on vitamin C
https://en.wikipedia.org/wiki/Tetramethyl%20acetyloctahydronaphthalenes | Tetramethyl acetyloctahydronaphthalenes (International Nomenclature for Cosmetic Ingredients (INCI) name) (1-(1,2,3,4,5,6,7,8-octahydro-2,3,8,8-tetramethyl-2-naphthyl)ethan-1-one) is a synthetic ketone fragrance also known as OTNE (octahydrotetramethyl acetophenone) and by other commercial trade names, such as Iso E Super, Iso Gamma Super, Anthamber, Amber Fleur, Boisvelone, Iso Ambois, Amberlan, Iso Velvetone, Orbitone, and Amberonne. It is a synthetic woody odorant and is used as a fragrance ingredient in perfumes, laundry products and cosmetics.
Odour
OTNE has a woody, slightly ambergris odour, reminiscent of clean human skin. Its odour is long-lasting on skin and fabric.
Uses
Iso E Super is a very common perfume ingredient, providing a sandalwood-like and cedarwood-like fragrance, in soap, shampoo, perfumes, detergents, fabric fresheners, antiperspirants or deodorants, and air fresheners. It is also used as a tobacco flavoring (at 200–2000 ppm), as a plasticizer and as a precursor for the delivery of organoleptic and antimicrobial compounds.
Production
Iso E Super is produced commercially by Diels–Alder reaction of myrcene with 3-methyl-3-penten-2-one in the presence of aluminium chloride to give a monocyclic intermediate that is cyclized in the presence of 85% phosphoric acid.
Carrying out the initial Diels–Alder reaction using a Lewis acid catalyst such as aluminum chloride appears to ensure that the acetyl group is at position 2 of the resulting cyclohexene adduct, which distinguished Iso E Super from other (previously patented) fragrances based on tetramethylacetyloctaline. The second cyclization reaction yields a mixture of diastereomers with the general structure depicted above, the predominant ones being (2R,3R) and (2S,3S).
Chemical summary
OTNE is the abbreviation for the fragrance material with Chemical Abstract Service (CAS) numbers 68155-66-8, 54464-57-2 and 68155-67-9 and EC List number 915-730-3. It is a multi-constituent isomer mixture contain
https://en.wikipedia.org/wiki/Interleaved%20deltas | Interleaved deltas, or SCCS weave, is a method used by the Source Code Control System to store all revisions of a file. All lines from all revisions are "woven" together in a single block of data, with interspersed control instructions indicating which lines are included in which revisions of the file. Interleaved deltas are traditionally implemented with line-oriented text files in mind, although nothing prevents the method from being applied to binary files as well.
Interleaved deltas were first implemented by Marc Rochkind in the SCCS in 1975. Its design makes all versions available at the same time, so that it takes the same time to retrieve any revision. It also contains sufficient information to identify the author of each line (blaming) in one block. On the other hand, because all revisions for a file are parsed, every operation grows slower as more revisions are added. The term interleaved delta was coined later in 1982 by Walter F. Tichy, author of the Revision Control System, which compares the SCCS weave to his new reverse delta mechanism in RCS.
Implementation in SCCS
In SCCS, the following weave block
^AI 1
^AD 2
foo
^AE 2
bar
^AI 2
baz
^AE 2
^AE 1
represents a file that contains the lines "foo" and "bar" in the first revision and the lines "bar" and "baz" in the second revision. The string "^A" denotes a control-A character.
The control lines in the interleaved delta block have the following meaning:
^AI serial Start a block of lines that was inserted with the named serial number.
^AD serial Start a block of lines that was removed with the named serial number.
^AE serial Block end for a corresponding ^AI or ^AD statement that uses the same serial number.
Advantages
The time it takes to extract any revision from such an interleaved delta block is proportional to the size of the archive. The size of the archive is the sum of the size of all different lines in all revisions.
In order to extract a specific revision, an array of
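The extraction procedure can be sketched as follows. This is a simplified illustration, not the actual SCCS implementation: "^A" is written as a literal two-character string for readability, whereas a real SCCS file uses a single control-A byte. A body line belongs to a revision if every enclosing ^AI serial is in the revision's set of applied serials and no enclosing ^AD serial is:

```python
def extract(weave_lines, serials):
    """Extract one revision from a simplified SCCS-style weave.
    `serials` is the set of delta serial numbers making up the revision:
    {1} for the first revision, {1, 2} for the second, and so on."""
    insert_stack = []  # serials of currently open ^AI blocks
    delete_stack = []  # serials of currently open ^AD blocks
    out = []
    for line in weave_lines:
        if line.startswith("^AI "):
            insert_stack.append(int(line[4:]))
        elif line.startswith("^AD "):
            delete_stack.append(int(line[4:]))
        elif line.startswith("^AE "):
            serial = int(line[4:])
            # close the matching open block (simplified disambiguation)
            if insert_stack and insert_stack[-1] == serial:
                insert_stack.pop()
            else:
                delete_stack.pop()
        else:
            kept = (insert_stack
                    and all(s in serials for s in insert_stack)
                    and not any(s in serials for s in delete_stack))
            if kept:
                out.append(line)
    return out

# The weave block from the example above
weave = ["^AI 1", "^AD 2", "foo", "^AE 2", "bar",
         "^AI 2", "baz", "^AE 2", "^AE 1"]
print(extract(weave, {1}))     # ['foo', 'bar']
print(extract(weave, {1, 2}))  # ['bar', 'baz']
```

Extracting with serial set {1} keeps "foo" and "bar"; adding serial 2 removes "foo" (it lies inside a ^AD 2 block) and adds "baz", matching the revisions described above. Note that every line is visited regardless of which revision is requested, which is why extraction time is proportional to the size of the whole archive.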