| source | text |
|---|---|
https://en.wikipedia.org/wiki/Cellular%20space | A cellular space is a Hausdorff space that has the structure of a CW complex.
Compactness (mathematics)
General topology
Properties of topological spaces
Topology |
https://en.wikipedia.org/wiki/Transcriptomics%20technologies | Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. Transcriptomics technologies provide a broad account of which cellular processes are active and which are dormant.
A major challenge in molecular biology is to understand how a single genome gives rise to a variety of cells. Another is how gene expression is regulated.
The first attempts to study whole transcriptomes began in the early 1990s. Subsequent technological advances since the late 1990s have repeatedly transformed the field and made transcriptomics a widespread discipline in biological sciences. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA-Seq, which uses high-throughput sequencing to record all transcripts. As the technology improved, the volume of data produced by each transcriptome experiment increased. As a result, data analysis methods have steadily been adapted to more accurately and efficiently analyse increasingly large volumes of data. Transcriptome databases are getting bigger and more useful as transcriptomes continue to be collected and shared by researchers. It would be almost impossible to interpret the information contained in a transcriptome without knowledge of previous experiments.
Measuring the expression of an organism's genes in different tissues or conditions, or at different times, gives information on how genes are regulated and reveals details of an organism's biology. It can also be used to infer the functions of previously unannotated genes. Transcriptome analysis has enabled the st |
https://en.wikipedia.org/wiki/Polynomial%20matrix%20spectral%20factorization | Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials. In stability theory, spectral factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials. A key tool used to study these is a matrix factorization known as either the Polynomial Matrix Spectral Factorization or the Matrix Fejér–Riesz Theorem.
Given a univariate positive polynomial p(x), i.e. a polynomial which takes on non-negative values for any real input x, the Fejér–Riesz Theorem yields the polynomial spectral factorization p(x) = |q(x)|² for some polynomial q and all real x. Results of this form are generically referred to as Positivstellensatz. Considering positive definiteness as the matrix analogue of positivity, Polynomial Matrix Spectral Factorization provides a similar factorization for polynomial matrices which have positive definite range. This decomposition also relates to the Cholesky decomposition A = LL* for scalar matrices. This result was originally proven by Wiener in a more general context which was concerned with integrable matrix-valued functions that also had integrable log determinant. Because applications are often concerned with the polynomial restriction, simpler proofs and individual analysis exist focusing on this case. Weaker positivstellensatz conditions have been studied, specifically considering when the polynomial matrix has positive definite image on semi-algebraic subsets of the reals. Many recent publications have focused on streamlining proofs for these related results. This article roughly follows the recent proof method of Lasha Ephremidze which relies only on elementary linear algebra and complex analysis.
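The scalar case is easy to illustrate numerically. The sketch below is our own (the polynomial choice is arbitrary, and NumPy's generic root finder stands in for a proper factorization algorithm): since a real polynomial that is non-negative on the real line has roots in conjugate pairs, collecting one root from each pair yields a spectral factor q with p(x) = |q(x)|².

```python
import numpy as np

# Illustrative sketch of the scalar Fejér–Riesz factorization (not a
# production algorithm): p(x) = (x^2 + 1)(x^2 + 4) is non-negative on
# the real line, and its roots come in conjugate pairs, so taking the
# upper-half-plane roots gives q with p(x) = |q(x)|^2 for real x.
p = np.array([1.0, 0.0, 5.0, 0.0, 4.0])    # x^4 + 5x^2 + 4, highest degree first

roots = np.roots(p)
upper = roots[roots.imag > 0]               # one root from each conjugate pair
q = np.poly(upper)                          # q(x) = (x - i)(x - 2i)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert np.isclose(np.polyval(p, x), abs(np.polyval(q, x)) ** 2)
```

The matrix version of the theorem plays the analogous role for polynomial matrices with positive definite values, which the root-splitting trick above does not directly generalize to.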
Spectral Factorization is used extensively in linear–quadratic–Gaussian control. Because of this application there have been many algorithms to calculate spectral factors. Some modern algorithms focus on the more general setting originally studied by Wiener. In the cas |
https://en.wikipedia.org/wiki/DH5-Alpha%20Cell | DH5-Alpha Cells are E. coli cells engineered by American biologist Douglas Hanahan to maximize transformation efficiency. They are defined by three mutations: recA1 and endA1, which help plasmid insertion, and lacZΔM15, which enables blue-white screening. The cells are competent and often used with calcium chloride transformation to insert the desired plasmid. A study of four transformation methods and six bacterial strains showed that the most efficient combination was the DH5 strain with the Hanahan method.
Mutations
The recA1 mutation is a single point mutation that replaces glycine 160 of the recA polypeptide with an aspartic acid residue in order to disable the activity of the recombinases and inactivate homologous recombination.
The endA1 mutation inactivates an intracellular endonuclease to prevent it from degrading the inserted plasmid.
References
Escherichia coli
Molecular biology |
https://en.wikipedia.org/wiki/Unrestricted%20algorithm | An unrestricted algorithm is an algorithm for the computation of a mathematical function that puts no restrictions on the range of the argument or on the precision that may be demanded in the result. The idea of such an algorithm was put forward by C. W. Clenshaw and F. W. J. Olver in a paper published in 1980.
In a "restricted" algorithm for computing the values of a real-valued function g(x) of a real variable x, the error that can be tolerated in the result is specified in advance, and an interval of the real line within which the function is to be evaluated is also specified. Different algorithms may have to be applied for evaluating functions outside that interval. An unrestricted algorithm envisages a situation in which a user may stipulate the value of x and also the precision required in g(x) quite arbitrarily. The algorithm should then produce an acceptable result without failure.
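As a toy illustration of the idea (not Clenshaw and Olver's actual method; the function name and guard-digit choices are ours), the following sketch evaluates e**x for an arbitrary argument and an arbitrary requested precision using Python's standard-library decimal module:

```python
from decimal import Decimal, getcontext

def exp_unrestricted(x, digits):
    """Evaluate e**x to `digits` significant digits for any real x.

    No interval for x and no fixed precision are built in: both are
    supplied by the caller, as an unrestricted algorithm requires.
    """
    getcontext().prec = digits + 10                  # working guard digits
    x = Decimal(str(x))
    term, total, n = Decimal(1), Decimal(1), 0
    while abs(term) > Decimal(10) ** (-(digits + 5)):
        n += 1
        term *= x / n                                # next Taylor term x^n / n!
        total += term
    getcontext().prec = digits
    return +total                                    # round to requested precision

print(exp_unrestricted(1, 30))    # 2.71828182845904523536028747135
```

A practical unrestricted algorithm would also choose the series or asymptotic expansion adaptively depending on x; the fixed Taylor series here is only the simplest case.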
References
Numerical analysis
Theoretical computer science |
https://en.wikipedia.org/wiki/Web%20Cryptography%20API | The Web Cryptography API is the World Wide Web Consortium’s (W3C) recommendation for a low-level interface that would increase the security of web applications by allowing them to perform cryptographic functions without having to access raw keying material. This agnostic API would perform basic cryptographic operations, such as hashing, signature generation and verification and encryption as well as decryption from within a web application.
Description
On 26 January 2017, the W3C released its recommendation for a Web Cryptography API that could perform basic cryptographic operations in web applications. This agnostic API would utilize JavaScript to perform operations that would increase the security of data exchange within web applications. The API would provide a low-level interface to create and/or manage public keys and private keys for hashing, digital signature generation and verification and encryption and decryption for use with web applications.
The Web Cryptography API could be used for a wide range of uses, including:
Providing authentication for users and services
Electronic signing of documents or code
Protecting the integrity and confidentiality of communication and digital data exchange
Because the Web Cryptography API is agnostic in nature, it can be used on any platform. It would provide a common set of interfaces that would permit web applications and progressive web applications to conduct cryptographic functions without the need to access raw keying material. This would be done with the assistance of the SubtleCrypto interface, which defines a group of methods to perform the above cryptographic operations. Additional interfaces within the Web Cryptography API would allow for key generation, key derivation and key import and export.
Vision for using the Web Cryptography API
The W3C’s specification for the Web Cryptography API places focus on the common functionality and features that currently exist between platform-specific and standardize |
https://en.wikipedia.org/wiki/International%20Society%20for%20Prosthetics%20and%20Orthotics | The International Society for Prosthetics and Orthotics (ISPO) is a non-governmental organization of people working in or interested in prosthetics, orthotics, mobility and assistive devices technology.
It was founded in 1970 in Copenhagen, Denmark by a committee chaired by Knud Jansen. It currently has about 3,500 members in over 100 countries.
ISPO, in partnership with the World Health Organization (WHO) has developed the WHO Standards for Prosthetics and Orthotics that were launched in May 2017 at the 16th World Congress of the International Society of Prosthetics and Orthotics (ISPO) in Cape Town, South Africa.
ISPO is also responsible for Prosthetics and Orthotics International, a quarterly academic journal that publishes papers related to prosthetics and orthotics.
References
External links
Prosthetics
International medical and health organizations
Non-profit organizations based in Copenhagen
1970 establishments in Denmark |
https://en.wikipedia.org/wiki/RMIT%20School%20of%20Science | The RMIT (Royal Melbourne Institute of Technology) School of Science is an Australian tertiary education school within the College of Science, Engineering and Health of RMIT University. It was created in 2016 from the former schools of Applied Sciences, Computer Science and Information Technology, and Mathematical and Geospatial Sciences.
See also
RMIT University
References
External links
School of Science
Health and Biomedical Sciences
Schools of mathematics
Computer science departments
Information technology schools |
https://en.wikipedia.org/wiki/CloudMask | CloudMask is a data privacy company for public or private cloud applications.
History
CloudMask, an information privacy company based in Ottawa, Canada, was launched in 2013 by Dr Wael Aggan, its CEO and co-founder, and Tarek El-Gillani, its CTO and co-founder. Prior to CloudMask, the two started ViaSafe and TradeMerit, which were later acquired.
In 2015, the CloudMask Engine received Common Criteria certification for its data protection services.
Since then, CloudMask has partnered with Allstream and Clio, helping secure their customers' data.
Overview
CloudMask's patented technology masks production data in real time. Running on end devices, it transparently intercepts and changes the production data so that unauthorized data requesters do not get access to sensitive data, while no physical changes to the original production data take place.
By protecting against data theft, masking acts as a compliance solution for organizations subject to data privacy regulations such as HIPAA, GDPR, and PCI. Unauthorized users who breach perimeter security can only see the masked data, and data privacy is upheld.
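As an illustration of the general idea described above (this is our own sketch, not CloudMask's patented algorithm; the field names, token scheme, and key handling are all hypothetical), sensitive fields can be replaced by keyed tokens on the end device, so that anyone reading the stored records sees only tokens:

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)   # per-user key; in a real system this never leaves the device

def mask_record(record, sensitive_fields):
    """Illustrative field-level masking: replace sensitive values with
    keyed tokens before the record leaves the end device.  Unauthorized
    readers of the stored data see only the tokens."""
    masked = dict(record)
    for field in sensitive_fields:
        token = hmac.new(SECRET, record[field].encode(), hashlib.sha256).hexdigest()[:16]
        masked[field] = f"tok_{token}"
    return masked

record = {"name": "Alice Example", "diagnosis": "flu", "visit": "2015-03-01"}
masked = mask_record(record, ["name", "diagnosis"])
print(masked)   # "visit" unchanged; "name" and "diagnosis" are opaque tokens
```

Keyed (HMAC) tokens are deterministic, so masked records can still be joined or searched by exact value without revealing the underlying data to the storage provider.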
Products
Secure communication for Gmail and Outlook.
Confidential storage and file sharing for Google Drive and Box.
Confidential practice management for Clio.
Secure case management for Salesforce.
Custom configuration, embedding protection layer for business applications.
References
Information privacy
Cryptography
Email clients
Online companies of Canada |
https://en.wikipedia.org/wiki/MS-DOS%207 | MS-DOS 7 is a real mode operating system for IBM PC compatibles. Unlike earlier versions of MS-DOS it was not released separately by Microsoft, but included in the Windows 9x family of operating systems. Windows 95 RTM reports it as MS-DOS 7.0, while Windows 95 OSR 2.x and Windows 98 report it as 7.1. The real-mode MS-DOS 7.x operating system is contained in the IO.SYS file.
New features
As the first version in the series, MS-DOS 7.0 added support for long filename (LFN) awareness; its DIR command, for example, will show long filenames when an LFN driver such as DOSLFN is loaded (earlier versions of MS-DOS would not show long filenames even with such a driver). It also supports larger extended memory (up to 4 GB) via its HIMEM.SYS driver. Various smaller improvements were also introduced, such as enhanced DOS commands, more efficient use of UMB memory (COMMAND.COM and part of the DOS kernel are loaded high automatically), and the fact that environment variables can be used in the DOS command line directly.
MS-DOS 7.1 added FAT32 support (up to 2 TB per volume), while MS-DOS 7.0 and earlier versions of MS-DOS only supported FAT12 and FAT16. Logical block addressing (LBA) is also supported in MS-DOS 7.x for accessing large hard disks, unlike earlier versions which only supported cylinder-head-sector (CHS) addressing. Year 2000 support was added to the DIR command via the new /4 option.
MS-DOS 7.x added support for running the graphical interface of Windows 9x, which cannot be run on older MS-DOS releases. Even though VER command usually shows the Windows version, the MS-DOS version is also officially mentioned in other places. For example, if one attempts to run Windows 95 OSR2 or Windows 98’s VMM32.VXD file (renamed to VMM32.EXE) directly from an earlier version of MS-DOS, the following message will be immediately displayed:
Cannot run Windows with the installed version of MS-DOS.
Upgrade MS-DOS to version 7.1 or higher.
In the case of Windows 95 RTM, the version number 7.0 is disp |
https://en.wikipedia.org/wiki/V-Key | V-Key is a software-based digital security provider. Headquartered in Singapore, it provides products to financial institutions, mobile payment providers and governments to implement cloud-based payments, authentication for mobile banking, and secured mobile applications for user access and data protection.
Background & founders
V-Key was founded in 2011 by entrepreneurs Eddie Chau, Benjamin Mah and Joseph Gan. Eddie Chau, a founder of the digital agency Brandtology (acquired by iSentia in 2014), started V-Key primarily to secure mobile devices and applications with patented technology.
Benjamin Mah is the co-founder and chief executive officer of V-Key. He was general manager at e-Cop (acquired by a wholly owned subsidiary of Temasek Holdings) and regional director at Encentuate (acquired by IBM), before he co-founded V-Key. He is concurrently a venture partner of Venture Craft, chairman of Jump Start Asia and a mentor at UOB Finlabs.
Joseph Gan is the third co-founder of V-Key. Before joining V-Key, he was the head of the Cryptography Lab at the Centre for Strategic Infocomm Technologies (CSIT), where he oversaw research and development into cryptographic software for the Ministry of Defence (Singapore).
Companies that have funded V-Key are IPV Capital and Ant Financial Services, which runs the Alipay mobile wallet app.
Technology
V-Key provides security to businesses to support cloud-based payments, digital identity and authentication for mobile banking as well as other secured mobile applications via its core technology—V-OS.
V-Key's partners are financial institutions, governments and mobile payment providers in various markets. Its technology has been used by:
ChinaPnR - financial payment provider: V-Key integrates a virtual secure element into "ChinaPnR POS Acquirer" (Point-of-Sale payment acquisition query platform) to protect mobile applications' runtime environment, program logic and important data.
See also
Application Security
Encr |
https://en.wikipedia.org/wiki/999%20phone%20charging%20myth | The 999 phone charging myth is an urban legend that claims that if a mobile phone has low battery, then dialling 999 (or any regional emergency telephone number) charges the phone so it has more power. This was confirmed as untrue by several British police forces who publicly cited the dangers of making such calls.
Basis
The basis for the belief was a feature of BlackBerry phones: if the battery level was too low, the phone automatically locked down phone features and shut down the phone radio for all calls except to emergency services. People discovered that if they dialled 999 then immediately hung up, it would override the shutdown for several minutes so that phone calls could be made. The belief seems to have originated in BlackBerry forums around 2012.
A related belief arose in 2015 that telling Siri on an iPhone to "Charge my phone to 100%" would cause the phone to call emergency services as a secret safety code. This was later traced to a bug in Apple programming that was fixed within a day. The myth continued to spread on social media as a prank.
Response
In 2013, Derbyshire Constabulary released a press release telling people not to believe the claim that calling 999 charges the battery. They noted that for every silent or aborted 999 call received, the operators have to call the person back to make sure there is no emergency. These silent calls waste police time and could potentially block responses to real emergencies. Bedfordshire Police also asked people not to call 999 except in an emergency, stating that in the last six months of 2013 they had seen an increase in hoax 999 calls from people believing the urban legend. Other sources supplemented these press releases by stating that misusing the 999 number is illegal. They also stated that the police could cut off telephones being used to abuse the 999 service.
References
Emergency services
2012 hoaxes
Battery charging
BlackBerry Limited
Telephone crimes
Misconceptions
U |
https://en.wikipedia.org/wiki/Hadamard%20test | In quantum computation, the Hadamard test is a method used to create a random variable whose expected value is the real part Re⟨ψ|U|ψ⟩, where |ψ⟩ is a quantum state and U is a unitary gate acting on the space of |ψ⟩. The Hadamard test produces a random variable whose image is in {+1, −1} and whose expected value is exactly Re⟨ψ|U|ψ⟩. It is possible to modify the circuit to produce a random variable whose expected value is Im⟨ψ|U|ψ⟩.
Description of the circuit
To perform the Hadamard test we first calculate the state (1/√2)(|0⟩ + |1⟩) ⊗ |ψ⟩. We then apply the unitary operator U on |ψ⟩ conditioned on the first qubit, to obtain the state (1/√2)(|0⟩ ⊗ |ψ⟩ + |1⟩ ⊗ U|ψ⟩). We then apply the Hadamard gate to the first qubit, yielding (1/2)(|0⟩ ⊗ (I + U)|ψ⟩ + |1⟩ ⊗ (I − U)|ψ⟩).
Measuring the first qubit, the result is 0 with probability (1/2)(1 + Re⟨ψ|U|ψ⟩), in which case we output 1. The result is 1 with probability (1/2)(1 − Re⟨ψ|U|ψ⟩), in which case we output −1. The expected value of the output will then be the difference between the two probabilities, which is Re⟨ψ|U|ψ⟩.
To obtain a random variable whose expectation is Im⟨ψ|U|ψ⟩, follow exactly the same procedure but start with the state (1/√2)(|0⟩ − i|1⟩) ⊗ |ψ⟩.
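The circuit described above is small enough to simulate directly. The following NumPy sketch (our own illustration; the unitary U and the state |ψ⟩ are chosen arbitrarily) builds the two-qubit state, computes the ancilla statistics, and checks them against Re⟨ψ|U|ψ⟩:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the Hadamard test for a single-qubit state |psi> and unitary U.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = np.array([[np.exp(0.3j), 0], [0, np.exp(-0.7j)]])   # an arbitrary unitary
psi = np.array([0.6, 0.8], dtype=complex)                # a normalized state

# (|0> + |1>)/sqrt(2) ⊗ |psi>, then controlled-U, then H on the ancilla.
state = np.kron(np.array([1, 1]) / np.sqrt(2), psi)
CU = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), U]])
state = np.kron(H, np.eye(2)) @ (CU @ state)

# Probability the ancilla reads 0; output +1 for 0 and -1 for 1.
p0 = np.linalg.norm(state[:2]) ** 2
samples = rng.choice([1, -1], size=200_000, p=[p0, 1 - p0])

expected = (psi.conj() @ U @ psi).real      # Re<psi|U|psi>
print(samples.mean(), expected)             # agree to about 1/sqrt(n)
```

Replacing the initial ancilla state (|0⟩ + |1⟩)/√2 by (|0⟩ − i|1⟩)/√2 in the same simulation yields the imaginary part instead.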
The Hadamard test has many applications in quantum algorithms such as the Aharonov-Jones-Landau algorithm.
Via a very simple modification it can be used to compute the inner product between two states |ψ⟩ and |φ⟩: instead of starting from a state |ψ⟩ it suffices to start from the ground state |0⟩, and perform two controlled operations on the ancilla qubit. Controlled on the ancilla register being |0⟩, we apply the unitary that produces |ψ⟩ in the second register, and controlled on the ancilla register being in the state |1⟩, we create |φ⟩ in the second register. The expected value of the measurements of the ancilla qubit leads to an estimate of Re⟨ψ|φ⟩. The number of samples needed to estimate the expected value with absolute error ε is O(1/ε²), because of a Chernoff bound. This value can be improved to O(1/ε) using amplitude estimation techniques.
References
Quantum computing
Quantum algorithms |
https://en.wikipedia.org/wiki/Data%20preparation | Data preparation is the act of manipulating (or pre-processing) raw data (which may come from disparate data sources) into a form that can readily and accurately be analysed, e.g. for business purposes.
Data preparation is the first step in data analytics projects and can include many discrete tasks such as loading data or data ingestion, data fusion, data cleaning, data augmentation, and data delivery.
The issues to be dealt with fall into two main categories:
systematic errors involving large numbers of data records, probably because they have come from different sources;
individual errors affecting small numbers of data records, probably due to errors in the original data entry.
Data specification
The first step is to set out a full and detailed specification of the format of each data field and what the entries mean. This should take careful account of:
most importantly, consultation with the users of the data
any available specification of the system which will use the data to perform the analysis
a full understanding of the information available, and any gaps, in the source data.
See also data definition specification.
Example
Suppose there is a two-character alphabetic field that indicates geographical location. It is possible that in one data source a code "EE" means "Europe" and in another data source the same code means "Estonia". One would need to devise an unambiguous set of codes and amend the code in one set of records accordingly.
Furthermore, the "geographical area" might refer to any of e.g. delivery address, billing address, address from which goods supplied, billing currency, or applicable national regulations. All these matters must be covered in the specification.
There could be some records with "X" or "555" in that field. Clearly, this is invalid data as it does not conform to the specification. If there are only small numbers of such records, one would either correct them manually or if precision is not important, simply delete t |
https://en.wikipedia.org/wiki/Free%20product%20of%20associative%20algebras | In algebra, the free product (coproduct) of a family of associative algebras A_i, i ∈ I, over a commutative ring R is the associative algebra over R that is, roughly, defined by the generators and the relations of the A_i's. The free product of two algebras A, B is denoted by A ∗ B. The notion is a ring-theoretic analog of a free product of groups.
In the category of commutative R-algebras, the free product of two algebras (in that category) is their tensor product.
Construction
We first define a free product of two algebras. Let A and B be algebras over a commutative ring R. Consider their tensor algebra, the direct sum of all possible finite tensor products of A, B; explicitly, T = ⊕_{n=0}^∞ T_n, where
T_0 = R, T_1 = A ⊕ B, T_2 = (A ⊗ A) ⊕ (A ⊗ B) ⊕ (B ⊗ A) ⊕ (B ⊗ B),
and, in general, T_n is the direct sum of all n-fold tensor products in which each factor is A or B.
We then set
A ∗ B = T / I,
where I is the two-sided ideal generated by elements of the form
a ⊗ a′ − aa′ (a, a′ ∈ A), b ⊗ b′ − bb′ (b, b′ ∈ B), 1_A − 1_B.
We then verify that the universal property of the coproduct holds for this construction (this is straightforward).
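Spelled out, the coproduct property being verified is the following (with i_A and i_B denoting the canonical maps of A and B into A ∗ B):

```latex
% Universal property of the coproduct A * B in the category of R-algebras:
% every pair of homomorphisms into a common algebra C factors uniquely
% through A * B.
\text{for all } f\colon A \to C,\; g\colon B \to C,\quad
\exists!\, h\colon A * B \to C \quad\text{such that}\quad
h \circ i_A = f \quad\text{and}\quad h \circ i_B = g.
```

The map h is determined on generators by f and g, which is why the verification is straightforward.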
A finite free product is defined similarly.
References
K. I. Beidar, W. S. Martindale and A. V. Mikhalev, Rings with generalized identities, Section 1.4. This reference was mentioned in
External links
Algebra |
https://en.wikipedia.org/wiki/Commandino%27s%20theorem | Commandino's theorem, named after Federico Commandino (1509–1575), states that the four medians of a tetrahedron are concurrent at a point S, which divides them in a 3:1 ratio. In a tetrahedron a median is a line segment that connects a vertex with the centroid of the opposite face – that is, the centroid of the opposite triangle. The point S is also the centroid of the tetrahedron.
History
The theorem is attributed to Commandino, who stated, in his work De Centro Gravitatis Solidorum (The Center of Gravity of Solids, 1565), that the four medians of the tetrahedron are concurrent. However, according to the 19th century scholar Guillaume Libri, Francesco Maurolico (1494–1575) claimed to have found the result earlier. Libri nevertheless thought that it had been known even earlier to Leonardo da Vinci, who seemed to have used it in his work. Julian Coolidge shared that assessment but pointed out that he couldn't find any explicit description or mathematical treatment of the theorem in da Vinci's works. Other scholars have speculated that the result may have already been known to Greek mathematicians during antiquity.
Generalizations
Commandino's theorem has a direct analog for simplexes of any dimension:
Let Δ be a d-simplex in ℝⁿ and let V₀, …, V_d be its vertices. Furthermore, let ℓ₀, …, ℓ_d be the medians of Δ, the lines joining each vertex V_i with the centroid of the opposite (d − 1)-dimensional facet. Then these lines intersect each other in a point S, in a ratio of d : 1.
Full generality
The former analog is easy to prove via the following, more general result, which is analogous to the way levers in physics work:
Let k and ℓ be natural numbers, so that in an ℝ-vector space, k + ℓ pairwise different points X₁, …, X_k, Y₁, …, Y_ℓ are given.
Let S_X be the centroid of the points X₁, …, X_k, let S_Y be the centroid of the points Y₁, …, Y_ℓ, and let S be the centroid of all k + ℓ points.
Then, one has
S = (k S_X + ℓ S_Y) / (k + ℓ).
In particular, the centroid S lies on the line S_X S_Y and divides it in a ratio of ℓ : k (measured from S_X).
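The lever rule is easy to check numerically. In the sketch below (our own illustration) the point counts, the dimension, and the coordinates are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical check of the lever rule: the centroid of all points lies on
# the segment joining the two partial centroids, dividing it l : k.
k, l = 3, 5
X = rng.normal(size=(k, 4))          # k points in R^4
Y = rng.normal(size=(l, 4))          # l further points

S_X = X.mean(axis=0)
S_Y = Y.mean(axis=0)
S = np.vstack([X, Y]).mean(axis=0)

# S = (k*S_X + l*S_Y) / (k + l)
assert np.allclose(S, (k * S_X + l * S_Y) / (k + l))
# S divides the segment S_X S_Y in ratio l : k, measured from S_X.
assert np.allclose(np.linalg.norm(S - S_X) / np.linalg.norm(S_Y - S), l / k)
```

Taking k = 1 (a single vertex) and ℓ = d (the opposite facet of a d-simplex) recovers the d : 1 ratio in the simplex statement above.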
Reusch's theorem
The previous theorem has f |
https://en.wikipedia.org/wiki/Krivine%20machine | In theoretical computer science, the Krivine machine is an abstract machine (sometimes called a virtual machine). As an abstract machine, it shares features with Turing machines and the SECD machine. The Krivine machine explains how to compute a recursive function. More specifically it aims to define rigorously head normal form reduction of a lambda term using call-by-name reduction. Thanks to its formalism, it tells in detail how a kind of reduction works and sets the theoretical foundation of the operational semantics of functional programming languages. The Krivine machine implements call-by-name because it evaluates the body of a β-redex before it applies it to its parameter. In other words, in an expression (λx. t) u it first evaluates λx. t before applying it to u. In functional programming, this would mean that in order to evaluate a function applied to a parameter, it first evaluates the function before applying it to the parameter.
The Krivine machine was designed by the French logician Jean-Louis Krivine at the beginning of the 1980s.
Call by name and head normal form reduction
The Krivine machine is based on two concepts related to lambda calculus, namely head reduction and call by name.
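To make the two concepts concrete, here is a minimal Krivine-style machine in Python (our own sketch, not Krivine's original presentation; it uses de Bruijn indices and all names are ours). Arguments are pushed on the stack as unevaluated closures, which is exactly call by name, and the loop stops when it reaches a head normal form:

```python
from dataclasses import dataclass

# Lambda terms with de Bruijn indices: Var(n), Lam(body), App(f, a).
@dataclass
class Var: n: int
@dataclass
class Lam: body: object
@dataclass
class App: f: object; a: object

def krivine(t, env=(), stack=()):
    """Reduce t to head normal form.  State = (term, environment, stack);
    closures pair a term with its environment."""
    while True:
        if isinstance(t, App):              # push the argument, unevaluated
            stack = ((t.a, env),) + stack
            t = t.f
        elif isinstance(t, Lam) and stack:  # pop one argument into the env
            arg, stack = stack[0], stack[1:]
            env = (arg,) + env
            t = t.body
        elif isinstance(t, Var):            # enter the closure bound to n
            t, env = env[t.n]
        else:                               # head normal form reached
            return t, env, stack

# (λx. x) applied to (λy. y) reduces to λy. y
I = Lam(Var(0))
t, env, stack = krivine(App(I, I))
print(isinstance(t, Lam), stack)   # True ()
```

Note that the argument closure (t.a, env) is never evaluated when it is pushed; it is only entered if the head variable actually reaches it, which is the defining behaviour of call by name.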
Head normal form reduction
A redex (one says also β-redex) is a term of the lambda calculus of the form (λx. t) u. If a term has the shape (λx. t) u1 ... un it is said to be a head redex. A head normal form is a term of the lambda calculus which is not a head redex. A head reduction is a (non-empty) sequence of contractions of a term which contracts head redexes. A head reduction of a term t (which is supposed not to be in head normal form) is a head reduction which starts from the term t and ends on a head normal form. From an abstract point of view, head reduction is the way a program computes when it evaluates a recursive sub-program. It is important to understand how such a reduction can be implemented. One of the aims of the Krivin
https://en.wikipedia.org/wiki/National%20Centre%20for%20Plant%20Genetic%20Resources%3A%20Polish%20Genebank | The National Centre for Plant Genetic Resources: Polish Genebank (NCPGR) is a research unit in the Plant Breeding and Acclimatization Institute – National Research Institute. NCPGR is the coordinator and implementer of the National Crop Plant Genetic Resources Protection Programme. The Programme aims to protect the biodiversity of crop plants endangered by genetic erosion in Poland, and is funded by the Ministry of Agriculture. The main tasks include collection of crop and wild plant populations and varieties threatened by genetic erosion, description and evaluation of collected materials, and preservation of their viability and genetic purity. The Programme is an implementation of provisions laid down in international treaties ratified by Poland:
Convention on Biological Diversity (CBD),
International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA),
2nd Agreement on Government Procurement (GPA).
Objectives
NCPGR collects populations and cultivated varieties of crop and wild plants threatened with genetic erosion. Collected materials are characterised, evaluated and documented.
Seed samples and clones are maintained in viable state and genetic purity. NCPGR exchanges samples with other institutions worldwide and provides initial plant materials for breeding and research programs.
Organizational structure
Laboratory for Plants Collection and Evaluation
The Laboratory organizes collecting expeditions, during which plant genetic resources are obtained. Plants are collected from natural sites or obtained from farmers or on local markets. Collected material is reproduced and stored in a gene bank. The Laboratory also carries out studies of variation and genetic structure of selected species and prepares initial materials of selected species for practical breeding.
Laboratory for Documentation and Seeds Long-term Storage
The Laboratory covers drawing up documentation of genetic resources of crop plants and exchanges information with other g |
https://en.wikipedia.org/wiki/Subspace%20identification%20method | In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time-invariant (LTI) state space models from input-output data. SID does not require the user to parametrize the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results.
History
SID methods are rooted in the work by the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function.
In the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, like Ho and Kalman, Silverman and Youla and Tissi, to store the Markov parameters of an LTI system into a finite-dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned versus the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system, and the SVD of the Hankel matrix provides a basis of the column space of the observability matrix and of the row space of the controllability matrix of the LTI system. Knowledge of these key spaces allows one to estimate the system matrices via linear least squares.
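The key observation can be reproduced numerically. In this sketch (our own illustration, with an arbitrarily chosen second-order system) the numerical rank of the Hankel matrix built from the Markov parameters reveals the system order:

```python
import numpy as np

# Ho–Kalman-style illustration: Markov parameters h_k = C A^(k-1) B of a
# known LTI system are stacked into a Hankel matrix, whose rank equals
# the system order when the matrix is dimensioned generously enough.
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(10)]

# Hankel matrix H[i, j] = h_{i+j+1}; 5x5 blocks versus system order n = 2.
H = np.block([[markov[i + j] for j in range(5)] for i in range(5)])

s = np.linalg.svd(H, compute_uv=False)
print(np.sum(s > 1e-8))   # numerical rank = 2, the order of the system
```

In an identification setting the Markov parameters would of course be estimated from data rather than computed from known (A, B, C); the SVD factors of H then yield extended observability and controllability matrices from which the system matrices are recovered by least squares.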
An extension to the stochastic realization problem where we have knowledge only of the Auto-correlation (covariance) function of the output of an LTI system driven by white noise, was derived by researchers like Akaike.
A second generation of SID methods attempted to make SID methods directly operate on input-output measurements of the LTI system in the decade 1985–1995. One such generalization, presented under the name of the Eigensystem Realization Algorithm (ERA), made use of spe
https://en.wikipedia.org/wiki/TASKING | TASKING GmbH is a provider of embedded-software development tools headquartered in Munich, Germany.
History
Founded as a software consulting company in 1977, TASKING developed its first C compiler in 1986. In 1988, its first embedded toolset for the 8051 family of single-chip microcontrollers was launched. The company gained a presence in the U.S. market by merging with Boston System Office (BSO) in 1989 and shortly later developed a second-generation compiler designed to support the C166 and DSP56K.
In 1998, TASKING partnered with Infineon Technologies to develop the first TriCore development software. Altium acquired TASKING in 2001, and began working on its third-generation compiler technology, the Viper compiler. This compiler technology was designed to increase the speed and code efficiency of the TriCore development toolset.
The C166 toolset was upgraded to third-generation compiler technology in 2006, providing a significant increase in speed optimization and code size. 2014 saw the introduction of both a compiler for the Renesas RH850 family and an Automotive Safety Support Program (Safety Kit) for ISO 26262 certification.
The TASKING TriCore toolset received a major update in 2015 and another update in 2017. These updates further increased speed and decreased code size, but their primary focus was additional support for the Infineon AURIX and Infineon AURIX 2G multi-core processors.
In 2016, the Safety Checker product was released. Safety Checker provides static code analysis to verify that no unauthorized access to protected memory occurs. In 2017, the VX Toolset for TriCore v6.2 with a stand-alone embedded debugger was released.
Products
TASKING provides embedded-software development tools for the following processors:
Infineon TriCore/AURIX
Infineon/ST Micro C166/ST10
Freescale Qorivva
STMicroelectronics SPC 5
Renesas RH850
Bosch GTM-IP MCS (generic timer module)
8051 and others
The most popular TASKING product is the VX Toolset for Tri |
https://en.wikipedia.org/wiki/3N170 | The 3N170 is an enhancement mode N-Channel MOSFET standard product designed for use as a general purpose amplifier or switch. The part was produced previously by Intersil, Motorola, and others. It is currently produced by Linear Integrated Systems, Inc.
Part characteristics
Characteristics include:
Low switching voltages
Fast switching times
Low drain-source resistance
Low reverse transfer capacitance
References
MOSFETs
Commercial transistors |
https://en.wikipedia.org/wiki/Power%20ISA | Power ISA is a reduced instruction set computer (RISC) instruction set architecture (ISA) currently developed by the OpenPOWER Foundation, led by IBM. It was originally developed by IBM and the now-defunct Power.org industry group. Power ISA is an evolution of the PowerPC ISA, created by the merger of the core PowerPC ISA and the optional Book E for embedded applications. The merger of these two components in 2006 was led by Power.org founders IBM and Freescale Semiconductor.
Prior to version 3.0, the ISA was divided into several categories. Processors implement a set of these categories as required for their task. Different classes of processors are required to implement certain categories; for example, a server-class processor includes the categories Base, Server, Floating-Point, 64-Bit, etc. All processors implement the Base category.
Power ISA is a RISC load/store architecture. It has multiple sets of registers:
32 × 32-bit or 64-bit general-purpose registers (GPRs) for integer operations.
64 × 128-bit vector scalar registers (VSRs) for vector operations and floating-point operations.
32 × 64-bit floating-point registers (FPRs) as part of the VSRs for floating-point operations.
32 × 128-bit vector registers (VRs) as part of the VSRs for vector operations.
8 × 4-bit condition register fields (CRs) for comparison and control flow.
11 special registers of various sizes: Counter Register (CTR), link register (LR), time base (TBU, TBL), alternate time base (ATBU, ATBL), accumulator (ACC), status registers (XER, FPSCR, VSCR, SPEFSCR).
Instructions up to version 3.0 have a length of 32 bits, with the exception of the VLE (variable-length encoding) subset that provides for higher code density for low-end embedded applications, and version 3.1 which introduced prefixing to create 64-bit instructions. Most instructions are triadic, i.e. have two source operands and one destination. Single- and double-precision IEEE-754 compliant floating-point operations are supp |
https://en.wikipedia.org/wiki/Analysis%20of%20Boolean%20functions | In mathematics and theoretical computer science, analysis of Boolean functions is the study of real-valued functions on {0,1}^n or {−1,1}^n (such functions are sometimes known as pseudo-Boolean functions) from a spectral perspective. The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning.
Basic concepts
We will mostly consider functions defined on the domain {−1,1}^n. Sometimes it is more convenient to work with the domain {0,1}^n instead. If f is defined on {0,1}^n, then the corresponding function defined on {−1,1}^n is g(x_1, …, x_n) = f((1 − x_1)/2, …, (1 − x_n)/2).
Similarly, for us a Boolean function is a {−1,1}-valued function, though often it is more convenient to consider {0,1}-valued functions instead.
Fourier expansion
Every real-valued function f: {−1,1}^n → ℝ has a unique expansion as a multilinear polynomial: f(x) = Σ_{S ⊆ [n]} f̂(S) χ_S(x), where χ_S(x) = Π_{i ∈ S} x_i.
(Note that even if the function is 0-1 valued this is not a sum mod 2, but just an ordinary sum of real numbers.)
This is the Hadamard transform of the function f, which is the Fourier transform in the group ℤ_2^n. The coefficients f̂(S) are known as Fourier coefficients, and the entire sum is known as the Fourier expansion of f. The functions χ_S are known as Fourier characters, and they form an orthonormal basis for the space of all functions over {−1,1}^n, with respect to the inner product ⟨f, g⟩ = 2^{−n} Σ_{x ∈ {−1,1}^n} f(x)g(x).
The Fourier coefficients can be calculated using an inner product: f̂(S) = ⟨f, χ_S⟩.
In particular, this shows that f̂(∅) = E[f], where the expected value is taken with respect to the uniform distribution over {−1,1}^n. Parseval's identity states that E[f²] = Σ_S f̂(S)².
If we skip S = ∅, then we get the variance of f: Var(f) = Σ_{S ≠ ∅} f̂(S)².
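The definitions above can be checked numerically by brute-force enumeration of the hypercube; the majority function on three bits used below is an assumed example, not one from the article:

```python
import itertools
import numpy as np

n = 3
points = list(itertools.product([-1, 1], repeat=n))

# Example Boolean function: majority on 3 bits, {-1,1}-valued.
def f(x):
    return 1 if sum(x) > 0 else -1

def chi(S, x):
    # Fourier character chi_S(x): product of the coordinates indexed by S.
    p = 1
    for i in S:
        p *= x[i]
    return p

# Fourier coefficient f_hat(S) = E[f(x) * chi_S(x)] under the uniform distribution.
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
f_hat = {S: np.mean([f(x) * chi(S, x) for x in points]) for S in subsets}

# Parseval: sum of squared coefficients equals E[f(x)^2] (= 1 for +/-1-valued f).
parseval = sum(c ** 2 for c in f_hat.values())
```

For majority on 3 bits the expansion is f(x) = (x_1 + x_2 + x_3)/2 − x_1 x_2 x_3 / 2, so each singleton coefficient is 1/2, the top coefficient is −1/2, and Parseval's sum is 1.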
Fourier degree and Fourier levels
The degree of a function f: {−1,1}^n → ℝ is the maximum d such that f̂(S) ≠ 0 for some set S of size d. In other words, the degree of f is its degree as a multilinear polynomial.
It is convenient to decompose the Fourier expansion into levels: the Fourier coefficient f̂(S) is on level |S|.
The degre |
https://en.wikipedia.org/wiki/Buchholz%20psi%20functions | Buchholz's psi-functions are a hierarchy of single-argument ordinal functions introduced by German mathematician Wilfried Buchholz in 1986. These functions are a simplified version of the -functions, but nevertheless have the same strength as those. Later on this approach was extended by Jäger and Schütte.
Definition
Buchholz defined his functions as follows. Define:
Ωξ = ωξ if ξ > 0, Ω0 = 1
The functions ψv(α) for α an ordinal, v an ordinal at most ω, are defined by induction on α as follows:
ψv(α) is the smallest ordinal not in Cv(α)
where Cv(α) is the smallest set such that
Cv(α) contains all ordinals less than Ωv
Cv(α) is closed under ordinal addition
Cv(α) is closed under the functions ψu (for u≤ω) applied to arguments less than α.
The limit of this notation is the Takeuti–Feferman–Buchholz ordinal.
Properties
Let be the class of additively principal ordinals. Buchholz showed the following properties of these functions:
Fundamental sequences and normal form for Buchholz's function
Normal form
The normal form for 0 is 0. If is a nonzero ordinal number then the normal form for is where and and each is also written in normal form.
Fundamental sequences
The fundamental sequence for an ordinal number with cofinality is a strictly increasing sequence with length and with limit , where is the -th element of this sequence. If is a successor ordinal then and the fundamental sequence has only one element . If is a limit ordinal then .
For nonzero ordinals , written in normal form, fundamental sequences are defined as follows:
If where then and
If , then and ,
If , then and ,
If then and (and note: ),
If and then and ,
If and then and where .
Explanation
Buchholz works in Zermelo–Fraenkel set theory; this means that every ordinal is equal to the set . Then the condition means that the set includes all ordinals less than ; in other words, .
The condition means that the set includes:
all ordinals from the previous set ,
all ordinals that |
https://en.wikipedia.org/wiki/Quark%20%28hash%20function%29 | Quark is a cryptographic hash function (family).
It was designed by Jean-Philippe Aumasson, Luca Henzen, Willi Meier and María Naya-Plasencia.
Quark was created because of the expressed need by application designers (notably for implementing RFID protocols) for a lightweight cryptographic hash function.
The SHA-3 NIST hash function competition concerned general-purpose designs and focused on software performance.
Quark is a lightweight hash function, based on a single security level and on the sponge construction, to minimize memory requirements. Inspired by the lightweight ciphers Grain and KATAN, the hash function family Quark is composed of the three instances u-Quark, d-Quark, and t-Quark. Hardware benchmarks show that Quark compares well to previous lightweight hashes.
For example, the u-Quark instance conjecturally provides at least 64-bit security against all attacks (collisions, multicollisions, distinguishers, etc.), fits in 1379 gate equivalents, and consumes on average 2.44 µW at 100 kHz in a 0.18 µm ASIC.
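The sponge construction described above can be illustrated with a toy absorb/squeeze loop. The permutation and parameters below are stand-ins chosen for brevity, not Quark's actual shift-register permutation or rate/capacity sizes:

```python
# Toy sponge-construction hash (NOT Quark's actual design), illustrating how
# a single permutation plus a small state yields a lightweight hash.
# State: b = r + c bits; here rate r = 8 bits, capacity c = 24 bits, b = 32.
R_BITS, C_BITS = 8, 24
B_BITS = R_BITS + C_BITS
MASK = (1 << B_BITS) - 1

def permute(state, rounds=16):
    # Stand-in ARX-style mixing; the real Quark uses Grain/KATAN-like registers.
    for i in range(rounds):
        state = ((state << 5) | (state >> (B_BITS - 5))) & MASK  # rotate left 5
        state ^= (state >> 7) ^ (0x9E3779B9 + i)                 # mix + round const
        state &= MASK
    return state

def sponge_hash(message: bytes, out_bytes=4):
    state = 0
    # Absorb: XOR each rate-sized block into the state, then permute.
    padded = message + b"\x01"  # minimal padding marker
    for byte in padded:
        state ^= byte           # rate portion lives in the low 8 bits
        state = permute(state)
    # Squeeze: read the rate portion, permuting between output blocks.
    out = bytearray()
    while len(out) < out_bytes:
        out.append(state & ((1 << R_BITS) - 1))
        state = permute(state)
    return bytes(out)
```

Keeping the capacity c large relative to the rate r is what trades throughput for security in a sponge; a lightweight design like Quark shrinks the total state b to minimize memory (gate count).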
External links
Quark on Jean-Philippe Aumasson's website 131002.net
Quark on the International Association for Cryptologic Research website iacr.org
Cryptographic hash functions
Cryptography |
https://en.wikipedia.org/wiki/Out%20in%20Science%2C%20Technology%2C%20Engineering%2C%20and%20Mathematics | Out in Science, Technology, Engineering, and Mathematics, Inc., abbreviated oSTEM, is a 501(c)(3) non-profit professional society dedicated to LGBTQ+ individuals within the science, technology, engineering, and mathematics (STEM) community.
History
In October 2005, IBM sponsored a focus group where students from across the United States convened at the Human Rights Campaign headquarters in Washington, D.C. These students discussed topics relevant to LGBTQ+ communities at their colleges and universities, and they debated how to structure an organization that serves students in science, technology, engineering, and mathematics.
Founded in 2009 and achieving 501(c)(3) status in 2010, oSTEM currently consists of more than 100 chapters across the United States and the United Kingdom.
Mission
oSTEM strives to identify, address, and advocate for the needs of LGBTQ+ students and professionals within the STEM fields. oSTEM fulfills these needs by providing networking opportunities, mentorship connections, strategic collaborations, and professional/leadership development, as well as an annual global conference.
Activities
Conferences
oSTEM hosts annual conferences that discuss LGBTQ+ topics in STEM as well as intelligence fields. Topics discussed include inclusion, outreach, and diversity within the workplace. The goal of workshops, talks, and networking events for LGBTQ+ people is to help them integrate and move up in their fields. The fourth annual conference was hosted jointly with the National Organization of Gay and Lesbian Scientists and Technical Professionals' Out to Innovate in Atlanta in 2014.
LGBT STEM Day
On July 5, 2018, oSTEM, along with Pride in STEM, House of STEM, and InterEngineering, created LGBT STEM Day, raising international awareness of LGBTQ+ people in science, technology, engineering, and math.
Awards
oSTEM presents a variety of awards annually to individuals and organizations that demonstrate a strong dedication to advancing and empowering LGBTQ+ in STEM fields.
oS |
https://en.wikipedia.org/wiki/John%20Murphy%20%28engineer%29 | John A. Murphy is an American inventor and computer engineer credited with inventing ARCNET, the first commercial networking system, in 1976. He was working for Datapoint Corporation at the time. His biography appears on the IT History Society website.
Background and career
Originally from Tulsa, Oklahoma, Murphy graduated from the University of Notre Dame in 1965 with a B.S. degree in electrical engineering. He first worked at IBM, then at Motorola, Telex, and Singer Business Machines before joining Datapoint, where he led the design of the computer networking system ARCNET. Victor Poor had established the R&D function at Datapoint as industry-leading: with Harry Pyle, Poor co-created the architecture that was ultimately implemented in the first successful computer microprocessor, the Intel 8008.
ARCNET
Developed in 1976, ARCNET (Attached Resource Computer NETwork) was the first widely available networking system for microcomputers.
Datapoint had pioneered microprocessors; the challenge ARCNET addressed was how to facilitate the efficient transmission of information between different machines. In an interview with Len Shustek for the Computer History Museum, Murphy notes that Datapoint took ARCNET from concept to reality in "under a year and probably very much under a year." As the first commercial local area network, ARCNET found early success, but corporate struggles at Datapoint led to slower adoption in the 1980s, relative to other commercial alternatives like Ethernet. According to Techopedia, "ARCnet was the first simple networking based solution that provided for all kinds of transmission regardless of the transmission medium or the type of computer."
References
1943 births
American inventors
American computer programmers
American electronics engineers
People from Tulsa, Oklahoma
21st-century American engineers
Computer hardware engineers
University of Notre Dame alumni
Living people |
https://en.wikipedia.org/wiki/Engineering%20controls%20for%20nanomaterials | Engineering controls for nanomaterials are a set of hazard control methods and equipment for workers who interact with nanomaterials. Engineering controls are physical changes to the workplace that isolate workers from hazards, and are considered the most important set of methods for controlling the health and safety hazards of nanomaterials after systems and facilities have been designed.
The primary hazard of nanomaterials is health effects from inhalation of aerosols containing nanoparticles. Many engineering controls developed for other industries can be used or adapted for protecting workers from exposure to nanomaterials, including ventilation and filtering using laboratory fixtures such as fume hoods, containment using gloveboxes, and other non-ventilation controls such as sticky mats. Research is ongoing as to what engineering controls are most effective for nanomaterials.
Background
Engineering controls
Controlling exposures to occupational hazards is considered the fundamental method of protecting workers. Traditionally, a hierarchy of controls has been used as a means of determining how to implement feasible and effective controls, which typically include elimination, substitution, engineering controls, administrative controls, and personal protective equipment. Methods earlier in the list are considered generally more effective in reducing the risk associated with a hazard, with process changes and engineering controls recommended as the primary means for reducing exposures, and personal protective equipment being the approach of last resort. Following the hierarchy is intended to lead to the implementation of inherently safer systems, ones where the risk of illness or injury has been substantially reduced.
Engineering controls are physical changes to the workplace that isolate workers from hazards by containing them in an enclosure, or removing contaminated air from the workplace through ventilation and filtering. Well-designed engineering c |
https://en.wikipedia.org/wiki/Terminator%20Genisys%3A%20Future%20War | Terminator Genisys: Future War is a mobile MMO strategy video game created by Plarium in cooperation with Skydance Media. The events of the game take place in a post-apocalyptic future, years after the events of the Terminator Genisys film. Developed as a sequel to the film, the game was announced on June 28, 2016 and released on May 18, 2017 on the iOS App Store and Google Play. It uses Plarium's usual free-to-play model, with some in-game features and upgrades available for purchase.
Gameplay
In Terminator Genisys: Future War, players construct buildings, improve their base, train their troops, upgrade their leader, and create and develop clans. In total, 48 unit types are available to the player (24 for each faction). They are divided into six classes: infantry, cavalry, aviation, spy drones, assault and siege troops.
In-game processes are initiated by using the following resources: energy, iridium, materials, ammo, fuel and the special in-game currency, technology points. These technology points can also be used to speed up an active process. To obtain resources, players need to construct special buildings or send their units to resource locations. As the game progresses, the cost and length of in-game processes increase accordingly.
The strategic aim for players in a clan is to capture the time machine, a special location at the center of each dimension.
Storyline
Terminator Genisys: Future War is set directly after the events of the film Terminator Genisys. Genisys is destroyed and Skynet is offline, but the future war is far from over.
The game gives players two options: they can lead the Resistance or join Skynet’s mechanized forces. According to the game's developers, "In a first for any Terminator game, players will have the option to be a Resistance or Skynet Commander as they battle rival player alliances for territory, dominance and survival."
Arnold Schwarzenegger
In the game, players can choose Arnold Schwarzenegger, the T-800 android |
https://en.wikipedia.org/wiki/One%20Love%20Manchester | One Love Manchester was a benefit concert and British television special on 4 June 2017, organised by American singer Ariana Grande, Simon Moran, Melvin Benn and Scooter Braun in response to the Manchester Arena bombing after Grande's concert two weeks earlier. It took place at Old Trafford Cricket Ground, and was attended by 55,000 people. Guest stars included Justin Bieber, the Black Eyed Peas, Coldplay, Miley Cyrus, Mac Miller, Marcus Mumford, Niall Horan, Little Mix, Katy Perry, Take That, Imogen Heap, Pharrell Williams, Robbie Williams and Liam Gallagher.
Proceeds from the event went to the We Love Manchester Emergency Fund established by Manchester City Council and the British Red Cross to help the victims and their families. The British Red Cross received more than £10 million in donations in the 12 hours following the concert. Networks from at least 50 countries broadcast the concert live, which was simultaneously streamed live on various platforms, including Twitter, Facebook, and YouTube.
New York's Vulture.com ranked the event as the No. 1 concert of 2017.
Background
On 22 May 2017, a suicide bomb attack was carried out at Manchester Arena in Manchester, England, following a performance by American singer Ariana Grande as part of her Dangerous Woman Tour. 22 of the concert-goers and parents who were at the entrance waiting to pick up their children following the show were killed, and more than 800 were injured, 116 of those seriously.
Within a few hours of the bombing, Grande herself posted on Twitter: "broken. from the bottom of my heart, i am so so sorry. i don't have words." The tweet briefly became the most-liked tweet in history. Grande subsequently suspended her tour and flew to her mother's home in Boca Raton, Florida. On 26 May, she announced that she would host a benefit concert in Manchester for the victims of the attack.
Before main event
Developments and planning
Event tickets were made available on 1 June 2017 for £40, and sold with no |
https://en.wikipedia.org/wiki/Non-motile%20bacteria | Non-motile bacteria are bacterial species that lack the ability and structures to propel themselves, under their own power, through their environment. When non-motile bacteria are cultured in a stab tube, they grow only along the stab line. If the bacteria are motile, the line will appear diffuse and extend into the medium. The cell structures that provide the ability for locomotion are the cilia and flagella. Coliform and Streptococci are examples of non-motile bacteria, as are Klebsiella pneumoniae and Yersinia pestis. Motility is one characteristic used in the identification of bacteria, serving as evidence of the possession of structures such as peritrichous flagella, polar flagella, or a combination of both.
Though the lack of motility might be regarded as a disadvantage, some non-motile bacteria possess structures that allow their attachment to eukaryotic cells, such as GI mucosal cells.
Some genera have been divided based upon the presence or absence of motility. Motility is determined using a motility medium. The ingredients include motility test medium, nutrient broth powder, NaCl and distilled water. An inoculating needle (not a loop) is used to insert the bacterial sample. The needle is inserted through the medium to a depth of one inch. The medium tube is incubated at . Bacteria that are motile grow away from the stab, toward the sides and downward toward the bottom of the tube. Growth should be observed in 24 to 48 hours. In some species, motility is inconsistent.
References
Bacteria
Bacteriology |
https://en.wikipedia.org/wiki/Biodiversity%20of%20Kosovo | Kosovo is characterised by rich biodiversity and an abundance of different ecosystems and habitats, determined by the climate along with the geology and hydrology. Predominantly mountainous, it is located at the center of the Balkan Peninsula, bounded by Montenegro to the west, Serbia to the north and east, North Macedonia to the southeast, and Albania to the southwest.
Most of the country is geographically defined by the plains of Dukagjini and Kosovo. It is framed along its borders by the Albanian Alps in the west and the Sharr Mountains in the southeast, which are simultaneously, in terms of plant and animal species, the most important and diverse areas of the country.
The climate of the country is a combination of a continental and a mediterranean climate, with four distinct seasons. It is mostly defined by its geographical location in Southeastern Europe and strongly influenced by the Adriatic, Aegean and Black Sea within the Mediterranean Sea.
In terms of phytogeography, the land area of Kosovo lies within the Boreal Kingdom, specifically within the Illyrian province of the Circumboreal Region. Its territory can be subdivided into two terrestrial ecoregions of the Palearctic realm, the Balkan and Dinaric mixed forests.
The forests are the most widespread terrestrial ecosystem in Kosovo and currently protected by particular laws of the Constitution of Kosovo. Most of the forests are important because they provide shelter and protection to hundreds of plant and animal species of national and international importance.
Flora
The Kosovan forest flora is represented by 139 species classified in 63 genera, 35 families and 20 orders. It has significance for the Balkans as a whole – although Kosovo represents only 2.3% of the region's area, in terms of vegetation it represents 25% of the Balkan flora and about 18% of the total European flora. Due to the Mediterranean climate, several plants characteristic of sub-Mediterranean regions are found in forests, including ter |
https://en.wikipedia.org/wiki/16K%20resolution | 16K resolution is a display resolution with approximately 16,000 pixels horizontally. The most commonly discussed 16K resolution is , which doubles the pixel count of 8K UHD in each dimension, for a total of four times as many pixels. This resolution has 132.7 megapixels, 16 times as many pixels as 4K resolution and 64 times as many pixels as 1080p resolution.
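The pixel-count relations can be verified directly. The 15360 × 8640 figure used below is the commonly cited 16K UHD resolution, assumed here because it doubles 8K UHD (7680 × 4320) in each dimension:

```python
# Pixel-count arithmetic for 16K UHD (assumed 15360 x 8640; twice 8K UHD
# in each dimension) versus 4K UHD and 1080p.
w16, h16 = 15360, 8640        # 16K UHD (assumption)
w4, h4 = 3840, 2160           # 4K UHD
w1080, h1080 = 1920, 1080     # 1080p

pixels_16k = w16 * h16                        # 132,710,400 ~= 132.7 megapixels
ratio_4k = pixels_16k // (w4 * h4)            # 16x the pixels of 4K
ratio_1080 = pixels_16k // (w1080 * h1080)    # 64x the pixels of 1080p
```

Doubling both dimensions quadruples the pixel count, which is why 16K carries 4× the pixels of 8K, 16× those of 4K, and 64× those of 1080p.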
As of June 2022, 16K resolutions can be run using multi-monitor setups with AMD Eyefinity or Nvidia Surround.
History
In 2016, AMD announced a target for their future graphics cards to support 16K resolution with a refresh rate of 240Hz for "true immersion" in VR.
Linus Tech Tips released a series of videos in 2017 attempting to play video games at 16K using sixteen 4K monitors.
In 2018, US filmmaker Martin Lisius released a short time-lapse film titled "Prairie Wind", which he produced using a 2-camera Canon EOS 5DS system he developed. Two still images were stitched together to create one pixel image and then rendered as 16K resolution video with an extremely wide aspect ratio of . This is among the first known 16K videos to exist.
Innolux displayed the world's first 100-inch 16K () display module at Touch Taiwan in August 2018.
Sony introduced a commercial 16K display at NAB 2019 that is set to be released in Japan. It is made up of 576 modules (each ) in a formation of 48 by 12 modules, forming a screen, with aspect ratio.
On June 26, 2019, VESA formally released the DisplayPort 2.0 standard with support for one 16K (-pixel) display supporting 30-bit-per-pixel 4:4:4 RGB/-color HDR video at a refresh rate of 60Hz using DSC video compression.
See also
Virtual reality
32K resolution digital video formats with a horizontal resolution of around 32,000 pixels
10K resolution digital video formats with a horizontal resolution of around 10,000 pixels, aimed at non-television computer monitor usage
8K resolution digital video formats with a horizontal resolution of around 8,000 pixels
5K resolu |
https://en.wikipedia.org/wiki/Dislocation%20avalanches | Dislocation avalanches are rapid discrete events during plastic deformation, in which defects are reorganized collectively. This intermittent flow behavior has been observed in microcrystals, whereas macroscopic plasticity appears as a smooth process. Intermittent plastic flow has been observed in several different systems. In AlMg alloys, interaction between solutes and dislocations can cause sudden jumps during dynamic strain aging. In metallic glasses, it can be observed via shear banding with stress localization; in single-crystal plasticity, it shows up as slip bursts. However, analysis of events with orders-of-magnitude differences in size and different crystallographic structures reveals power-law scaling between the number of events and their magnitude, i.e. scale-free flow.
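Scale-free flow means the event-count distribution follows N(s) ∝ s^(−τ): on log–log axes this is a straight line whose slope gives the exponent. A sketch with synthetic, noise-free counts and an assumed exponent τ = 1.5 (illustrative only; measured exponents depend on the system):

```python
import numpy as np

# Scale-free avalanche statistics: event count N(s) ~ s**(-tau).
tau = 1.5                               # assumed exponent, for illustration
sizes = np.logspace(0, 4, 50)           # avalanche sizes spanning 4 decades
counts = sizes ** (-tau)                # idealized, noise-free counts

# On log-log axes a power law is a straight line; the fitted slope is -tau.
slope, intercept = np.polyfit(np.log10(sizes), np.log10(counts), 1)
```

In real avalanche data the counts are noisy and cut off at large sizes, so maximum-likelihood estimators are preferred over a naive log–log fit, but the straight-line signature is the same.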
This microscopic instability of plasticity can have profound consequences for the mechanical behavior of microcrystals. The increased relative size of the fluctuations makes it difficult to control the plastic forming process. Moreover, at small specimen sizes the yield stress is no longer well defined by the 0.2% plastic strain criterion, since this value varies from specimen to specimen.
Similar intermittent effects have been studied in many completely different systems, including the intermittency of energy dissipation in magnetism (the Barkhausen effect), superconductivity, earthquakes, and friction.
Background
Macroscopic plasticity is well described by continuum models. Dislocation motion is characterized by an average velocity v̄, entering the plastic shear-strain rate as γ̇ = ρ_m b v̄ (with ρ_m the mobile dislocation density and b the magnitude of the Burgers vector), which is known as Orowan's equation. However, this approach completely fails to account for well-known intermittent deformation phenomena such as the spatial localization of dislocation flow into "slip bands" (also known as Lüders bands) and the temporal fluctuations in stress–strain curves (the Portevin–Le Chatelier effect, first reported in the 1920s).
Experimental Approach
Although evidence of intermittent flow behavior has long been known and studied, it was not until the past two |
https://en.wikipedia.org/wiki/Toxic%20unit | Toxic units (TU) are used in the field of toxicology to quantify the interactions of toxicants in binary mixtures of chemicals. A toxic unit for a given compound is based on the concentration at which there is a 50% effect (e.g., the EC50) for a certain biological endpoint. One toxic unit is equal to the EC50 for a given endpoint for a specific biological effect over a given amount of time. Toxic units allow the individual toxicities of a binary mixture to be compared to the combined toxicity. This allows researchers to categorize mixtures as additive, synergistic or antagonistic. Synergism and antagonism are defined by mixtures that are, respectively, more or less toxic than predicted by the sum of their toxic units.
Contaminants are frequently present as mixtures in the environment. Regulatory decisions are based on mixture toxicity models that assume additivity, which can result in under or overestimation of toxic effects. Refining our understanding of mixture interactions can lead to better informed environmental management and decision making. In addition, exploring mixture interactions can elucidate the mechanisms of action for specific toxicants which, in many cases, are poorly understood.
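The toxic-unit bookkeeping described above can be sketched as follows; the EC50 values and the ±10% additivity band are illustrative assumptions, not values from the source:

```python
# Toxic-unit bookkeeping for a binary mixture (illustrative values).
# TU_i = concentration_i / EC50_i; under additivity, a mixture totalling
# sum(TU) = 1 is predicted to produce the 50% effect.

def toxic_units(conc, ec50):
    return conc / ec50

# Hypothetical EC50s for two chemicals (e.g., mg/L for a given endpoint).
ec50_a, ec50_b = 2.0, 10.0

# A mixture containing half of each chemical's EC50:
tu_a = toxic_units(1.0, ec50_a)   # 0.5 TU of chemical A
tu_b = toxic_units(5.0, ec50_b)   # 0.5 TU of chemical B
tu_sum = tu_a + tu_b              # 1.0 TU -> 50% effect predicted if additive

def classify(observed_tu_at_50pct):
    # If the mixture reaches the 50% effect at fewer total TUs than
    # predicted, it is synergistic; at more, antagonistic; near 1, additive.
    # The 0.9-1.1 band is an arbitrary illustrative threshold.
    if observed_tu_at_50pct < 0.9:
        return "synergistic"
    if observed_tu_at_50pct > 1.1:
        return "antagonistic"
    return "additive"
```

Note how 1 TU of chemical A (2.0 mg/L) and 1 TU of chemical B (10.0 mg/L) correspond to very different mass concentrations, which is precisely why the comparison is made in toxic units rather than mg/L.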
Methods
Application of toxic units requires toxicity data for the individual components of the mixture as well as specialized mixture toxicity data. Evaluating the response of each individual chemical allows researchers to generate a new dosing metric, toxic units, which is standardized to the toxicity of each chemical. Since the toxicity of two compounds may vary widely, 1 toxic unit of two different compounds could correspond to two very different concentrations on a per mass basis. In addition to the toxicity of the individual components, use of toxic units requires a 2x2 factorial design concentration series where the response is measured to an increase of each contaminant with the other contaminant held constant. This elaborate concentration series allows researchers to describe |
https://en.wikipedia.org/wiki/Amphipathic%20lipid%20packing%20sensor%20motifs | Amphipathic Lipid Packing Sensor (ALPS) motifs were first identified in 2005 in ARFGAP1 and have been reviewed.
The curving of a phospholipid bilayer, for example into a liposome, causes disturbances to the packing of the lipids on the side of the bilayer that has the larger surface area (the outside of a liposome for example). The less "ordered" or "looser" packing of the lipids is recognized by ALPS motifs.
ALPS motifs are 20-to-40-amino-acid-long portions of proteins with characteristic combinations of amino acid residue types. Bulky hydrophobic residues, such as phenylalanine, leucine, and tryptophan, are present every 3 or 4 positions, with many polar but uncharged residues such as glycine, serine and threonine between them. The ALPS motif is unstructured in solution but folds into an alpha helix when associated with the membrane bilayer, such that the hydrophobic residues insert between loosely packed lipids and the polar residues point toward the aqueous cytoplasm.
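As a rough illustration only, the residue pattern described above might be screened for as follows. This is a simplistic heuristic sketch, not a validated ALPS predictor; real motif detection also weighs helicity, hydrophobic moment, and membrane-binding behavior:

```python
# Simplistic heuristic for the ALPS-like residue pattern (NOT a validated
# predictor): bulky hydrophobics (F, L, W) recurring every ~3-4 positions,
# with the remaining residues mostly small and polar (G, S, T).
BULKY = set("FLW")
POLAR = set("GST")

def looks_alps_like(window, min_len=20, max_len=40):
    if not (min_len <= len(window) <= max_len):
        return False
    hydro_pos = [i for i, aa in enumerate(window) if aa in BULKY]
    if len(hydro_pos) < 4:                       # need a recurring pattern
        return False
    spacings = [b - a for a, b in zip(hydro_pos, hydro_pos[1:])]
    if not all(2 <= s <= 5 for s in spacings):   # roughly every 3-4 positions
        return False
    # Remaining (non-bulky) residues should be mostly small/polar.
    rest = [aa for aa in window if aa not in BULKY]
    return sum(aa in POLAR for aa in rest) / len(rest) >= 0.5

# Toy window: F/L/W every 4 positions, separated by G/S/T residues.
toy = "FGSTLGSGWSTSFGSTLGSGW"
```

A real scan would slide such a window along a full protein sequence and score candidates rather than return a yes/no answer.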
References
Membrane biology
Molecular biology
Proteins |
https://en.wikipedia.org/wiki/Multi-hop%20routing | Multi-hop routing (or multihop routing) is a type of communication in radio networks in which the network coverage area is larger than the radio range of a single node. To reach a destination, a node can therefore use other nodes as relays.
Since the transceiver is the major source of power consumption in a radio node and long-distance transmission requires high power, in some cases multi-hop routing can be more energy-efficient than single-hop routing.
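The energy argument can be sketched with a simple path-loss model. The exponent and distances below are illustrative, and the model deliberately ignores per-hop receiver and electronics overhead, which in practice can tip the balance back toward single-hop:

```python
# Illustrative path-loss model: radiated energy to cover distance d scales
# roughly as d**alpha, with path-loss exponent alpha ~ 2-4.

def tx_energy(distance, alpha=3.0):
    return distance ** alpha

def multihop_energy(distance, hops, alpha=3.0):
    # k equal hops of length d/k: k * (d/k)**alpha = d**alpha / k**(alpha-1),
    # i.e. a factor k**(alpha-1) less distance-dependent energy than one hop.
    return hops * tx_energy(distance / hops, alpha)

single = tx_energy(100.0)             # one 100 m hop: 1e6 energy units
relayed = multihop_energy(100.0, 4)   # four 25 m hops: 6.25e4 energy units
```

With alpha = 3 and four hops the distance-dependent term drops by a factor of 4² = 16, which is why relaying can pay off despite each relay's fixed reception cost.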
Typical applications of multi-hop routing:
Wireless sensor networks
Wireless mesh networks
Mobile ad hoc networks
Smart phone ad hoc networks
Mobile networks with stationary multi-hop relays
References
Wireless networking |
https://en.wikipedia.org/wiki/Reverse%20spherification | Reverse spherification is a method of molecular gastronomy. It is similar to spherification but differs in that it is used to enclose liquids containing alcohol, as well as liquids with calcium content such as milk and yogurt. When a liquid containing alcohol or a calcium salt is dropped into an alginate bath, the liquid draws itself into a spherical shape and becomes encapsulated by the gel-like membrane formed by the cross-linking of the calcium ions and the alginate polymer strands. Larger spheres can be created using reverse spherification. After the sphere is removed from the alginate bath, calcium does not continue to diffuse into its center and therefore does not create a gel center. A longer storage time can accordingly be obtained for the product.
Both the liquid for consumption and the alginate bath should be left to stand after preparation to eliminate air bubbles. Air bubbles inside the liquid could prevent the flavourful liquid from sinking when it is dropped into the alginate bath, creating uneven skin and weak spots in the skin.
Important factors for reverse spherification
There are two factors that need to be, or can be, adjusted for successful reverse spherification. The first is the amount of free calcium ions and the density of the liquid to be spherified. The amount of free calcium ions needs to be sufficient to form a gel-like capsule in reaction with sodium alginate. Milk-based products such as cream, yogurt or milk already contain a sufficient amount of calcium. However, when conducting reverse spherification with a liquid containing an insufficient calcium ion concentration, a calcium salt can be added to the liquid; for example, calcium lactate gluconate can be added to produce a 2% concentration in the liquid, creating an effective solution for reverse spherification. Calcium ions from calcium lactate are responsible for providing a gelling process without increasing the pH o |
https://en.wikipedia.org/wiki/Asus%20Vivo | The Vivo is a lineup of portable computers developed by Asus. It consists of:
laptops (VivoBooks)
All-in-Ones (Vivo AiO)
desktops (VivoPC)
Stick PCs (VivoStick)
Mini PCs (VivoMini)
smartwatches (VivoWatch)
computer mice (VivoMouse)
tablets (VivoTab).
VivoBook
Some Asus VivoBook models are branded under different series depending on regions and/or time. For example, the VivoBook E12 E203 used to be marketed under the VivoBook E Series but has since been marketed without 'E12' and under the Asus Laptop series.
VivoBook 4K
The Asus VivoBook 4K uses a 15.6" 16:9 IPS 4K (3840 x 2160) display with a color gamut of 72% NTSC, 100% sRGB, and 74% Adobe RGB. The laptop supports up to an Intel Core i7 processor, up to 12GB of RAM, up to a 2TB HDD and up to an Nvidia GeForce 940M video card. The I/O consists of a combo audio jack, a VGA port, two USB 3.0 ports, one USB 2.0 port, an RJ45 LAN jack and an HDMI port.
VivoBook E Series
The Asus VivoBook E Series is the successor to the EeeBook and Eee PC lineup of computers. Some of the VivoBook E Series laptops are simply rebadged EeeBook laptops such as the E402 and E202. The VivoBook E Series consists of the E200 (E200HA), E201 (E201NA), E202 (E202SA), E12 E203 (E203NAH and E203NA), E402 (E402SA, E402NA, E402BA and E402BP), E403 (E403SA and E403NA) and E502 (E502NA).
E200 Reception
Windows Central rated the E200HA 4 out of 5, concluding that it has a great design, a good touchpad and good speakers, and runs quiet and cool, but noting that it has a bad display and an oddly sized keyboard. pcverge gave the E200HA a rating of 74%, commenting that it is very inexpensive, light and well built with excellent battery life, but could be improved with better viewing angles, a better keyboard and a larger touchpad.
VivoBook F Series
The VivoBook F Series consists of the F200 (F200MA, F200CA and F200LA), F450 ( F450CA and F450CC) and F550 (F550LD).
VivoBook Max
The VivoBook Max Series consists of the X441 (X441SA, X441UV, |
https://en.wikipedia.org/wiki/Hardware%20security | Hardware security is a discipline originating from cryptographic engineering. It involves hardware design, access control, secure multi-party computation, secure key storage, ensuring code authenticity, and measures to ensure that the supply chain that built the product is secure, among other things.
A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server.
Some providers in this discipline consider that the key difference between hardware security and software security is that hardware security is implemented using "non-Turing-machine" logic (raw combinatorial logic or simple state machines). One approach, referred to as "hardsec", uses FPGAs to implement non-Turing-machine security controls as a way of combining the security of hardware with the flexibility of software.
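The "non-Turing-machine" idea above can be made concrete with a toy example. The following sketch (an illustrative assumption, not taken from any hardsec product) models a fixed finite-state message filter of the kind that could be synthesized directly into FPGA fabric: the entire policy is a small state machine, so no general-purpose instruction stream exists for an attacker to subvert.

```python
# Toy finite-state filter: expect a header byte, then a length byte,
# then exactly that many payload bytes. Anything malformed is dropped.
# States are a fixed, enumerable set -- raw combinatorial logic, not software.
ACCEPT_HEADER, ACCEPT_LENGTH, ACCEPT_PAYLOAD = range(3)

def filter_stream(data, header=0x7E, max_len=8):
    """Return the payload bytes of well-formed frames; drop everything else."""
    state, remaining, out = ACCEPT_HEADER, 0, []
    for byte in data:
        if state == ACCEPT_HEADER:
            if byte == header:
                state = ACCEPT_LENGTH
        elif state == ACCEPT_LENGTH:
            if 0 < byte <= max_len:
                remaining, state = byte, ACCEPT_PAYLOAD
            else:
                state = ACCEPT_HEADER   # malformed length: reset, drop frame
        else:  # ACCEPT_PAYLOAD
            out.append(byte)
            remaining -= 1
            if remaining == 0:
                state = ACCEPT_HEADER
    return bytes(out)

# One valid frame (header 0x7E, length 2, payload 10 20) amid junk bytes:
print(filter_stream(bytes([0x99, 0x7E, 2, 10, 20, 0x03])))
```

Because the state space is finite and fixed at design time, the control's behavior can be exhaustively verified, which is the security argument hardsec approaches make for hardware-implemented policy.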
Hardware backdoors are backdoors in hardware. A conceptually related threat, the hardware Trojan (HT), is a malicious modification of an electronic system, particularly in the context of an integrated circuit.
A physical unclonable function (PUF) is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict. Further, an individual PUF device must be easy to make but practically impossible to duplicate, even given the exact manufacturing process that produced it. In this respect it is the hardware analog of a one-way function. The name "physical unclonable function" might be a little misleading as some PUFs are clonable, and most PUFs are noisy and therefore do not achieve the requirements for a function. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements.
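The "noisy, therefore not quite a function" point above can be illustrated with a toy simulation (all numbers and the device model are illustrative assumptions, not a real PUF design): each device has a fixed hidden bit-string set by "manufacturing variation", every readout flips a few bits, and verification therefore needs a Hamming-distance threshold rather than exact matching.

```python
import random

# Assumed toy model: 128-bit device fingerprint, 4 bits of noise per readout,
# and a match threshold of 16 bits. Real designs use fuzzy extractors instead.
N_BITS, NOISE_FLIPS, THRESHOLD = 128, 4, 16

def make_device(seed):
    """The hidden fingerprint fixed by manufacturing variation."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(N_BITS)]

def evaluate(device, rng):
    """One noisy readout of the device's response."""
    resp = device[:]
    for i in rng.sample(range(N_BITS), NOISE_FLIPS):
        resp[i] ^= 1                      # flip a few bits of "noise"
    return resp

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(0)
dev_a, dev_b = make_device(1), make_device(2)
enrolled = evaluate(dev_a, rng)           # response stored at enrollment

# Same device, new noisy readout: small distance, accepted.
assert hamming(enrolled, evaluate(dev_a, rng)) <= THRESHOLD
# Different device ("clone" without the exact physical structure): rejected.
assert hamming(enrolled, evaluate(dev_b, rng)) > THRESHOLD
```

The easy-to-evaluate / hard-to-duplicate asymmetry lives in the physical structure; the software side only does the fuzzy comparison.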
Many attacks on sensitive data and resources reported by organizations occur from within the org |
https://en.wikipedia.org/wiki/Maurice%20Wilkes%20Award | The Association for Computing Machinery SIGARCH Maurice Wilkes Award is given annually for outstanding contribution to computer architecture by a young computer scientist or engineer; "young" defined as having a career that started within the last 20 years. The award is named after Maurice Wilkes, a computer scientist credited with several important developments in computing such as microprogramming. The award is presented at the International Symposium on Computer Architecture. Prior recipients include:
1998 – Wen-mei Hwu
1999 – Gurindar S. Sohi
2000 – William J. Dally
2001 – Anant Agarwal
2002 – Glenn Hinton
2003 – Dirk Meyer
2004 – Kourosh Gharachorloo
2005 – Steve Scott
2006 – Doug Burger
2007 – Todd Austin
2008 – Sarita Adve
2009 – Shubu Mukherjee
2010 – Andreas Moshovos
2011 – Kevin Skadron
2012 – David Brooks
2013 – Parthasarathy (Partha) Ranganathan
2014 – Ravi Rajwar
2015 – Christos Kozyrakis
2016 – Timothy Sherwood
2017 – Lieven Eeckhout
2018 – Gabriel Loh
2019 – Onur Mutlu
2020 – Luis Ceze and Karin Strauss
2021 – Thomas Wenisch
2022 – Moinuddin Qureshi
2023 – Abhishek Bhattacharjee
See also
ACM Special Interest Group on Computer Architecture
Computer engineering
Computer science
Computing
List of computer science awards
List of computer-related awards
References
External links
Official page
Computer-related awards
Computer science awards |
https://en.wikipedia.org/wiki/Alan%20D.%20Berenbaum%20Distinguished%20Service%20Award | The Association for Computing Machinery SIGARCH Alan D. Berenbaum Distinguished Service Award is given for outstanding service in the field of computer architecture and design.
Recipients
Source: ACM
2022 – David A. Wood
2022 – Kathryn S. McKinley
2021 – Per Stenström
2020 – Alvin R. Lebeck
2019 – Margaret Martonosi
2018 – Koen De Bosschere
2016 – Michael Flynn
2014 – Doug DeGroot
2013 – Norman P. Jouppi
2011 – David A. Patterson
2010 – Mary Jane Irwin
2009 – Mark D. Hill
2008 – Alan Berenbaum
See also
ACM Special Interest Group on Computer Architecture
Computer engineering
Computer science
Computing
Service
List of computer-related awards
List of computer science awards
References
External links
ACM SIGARCH Alan D. Berenbaum Distinguished Service Award
Computer-related awards
Computer science awards
Distinguished service awards |
https://en.wikipedia.org/wiki/Network%20Performance%20Monitoring%20Solution | Network Performance Monitor (NPM) in Operations Management Suite, a component of Microsoft Azure, monitors network performance between office sites, data centers, clouds and applications in near real-time. It helps a network administrator locate and troubleshoot bottlenecks like network delay, data loss and availability of any network link across on-premises networks, Microsoft Azure VNets, Amazon Web Services VPCs, hybrid networks, VPNs or even public internet links.
Network Performance Monitor
Network Performance Monitor (NPM) is a network monitoring capability within the Operations Management Suite. NPM monitors the availability and quality of connectivity between multiple locations within and across campuses, private clouds and public clouds. It uses synthetic transactions to test for reachability and can be used on any IP network irrespective of the make and model of network routers or switches deployed.
Features
A dashboard is generated to display summarized information about the network, including network health events, unhealthy network links, and the subnetwork links with the most loss and latency. Custom dashboards can also be created to show the state of the network at a point in time in the past.
An interactive topology map is also generated to show the routes between nodes. Network administrators can use it to identify unhealthy paths and find the root cause of an issue.
Alerts can be configured to send e-mails to stakeholders when a threshold is reached.
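A minimal sketch of the kind of synthetic-transaction probing and threshold alerting described above (hostnames, ports and thresholds here are illustrative assumptions, not NPM's actual implementation or API): attempt a TCP connection, record the connect latency, and flag an alert when loss or average latency crosses a threshold.

```python
import socket
import time

def probe(host, port, timeout=2.0):
    """Return round-trip TCP connect time in ms, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def summarize(samples, latency_threshold_ms=100.0):
    """Aggregate probe samples into loss %, average latency, and an alert flag."""
    reachable = [s for s in samples if s is not None]
    loss_pct = 100.0 * (len(samples) - len(reachable)) / len(samples)
    avg = sum(reachable) / len(reachable) if reachable else None
    alert = loss_pct > 0 or (avg is not None and avg > latency_threshold_ms)
    return {"loss_pct": loss_pct, "avg_ms": avg, "alert": alert}

# In a live agent one would run e.g.:
#   samples = [probe("10.0.0.5", 443) for _ in range(10)]
# Here we summarize pre-recorded samples (one timed-out probe):
print(summarize([12.0, 15.5, None, 11.2]))
```

Because the probe is synthetic (the tool generates its own traffic), it works on any IP network regardless of router or switch vendor, which is the property the text highlights.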
Use cases
Two on-premises networks: Monitor connectivity between two office sites which could be connected using an MPLS WAN link or VPN
Multiple sites: Monitor connectivity to a central site from multiple sites. For example, scenarios where users from multiple office locations are accessing applications hosted at a central location
Hybrid Networks: Monitor connectivity between on-premises and Azure VNets that could be connected using S2S VPN or ExpressRoute
Multip |
https://en.wikipedia.org/wiki/Open%20Pluggable%20Specification | Open Pluggable Specification (OPS) is a computing module plug-in format available for adding computing capability to flat panel displays.
The format was first announced by NEC, Intel, and Microsoft in 2010.
Computing modules in the OPS format are available on Intel- and ARM-based CPUs, running operating systems including Microsoft Windows and Google Android.
The main benefit of using OPS in digital signage is to reduce downtime and maintenance cost by making it extremely easy to replace the computing module in case of a failure.
Technical specification
A computing module fully enclosed in a 180mm x 119mm x 30mm box
JAE TX25 plug connector and TX24 receptacle
80-pin contacts
Supported interfaces:
Power
HDMI/DVI and DisplayPort
Audio
USB 2.0/3.0
UART
OPS control signals
Pin definition
Succession
The OPS format is planned to be succeeded by the Smart Display Module (SDM) format.
References
Display technology |
https://en.wikipedia.org/wiki/History%20of%20the%20Berkeley%20Software%20Distribution | The History of the Berkeley Software Distribution begins in the 1970s.
1BSD (PDP-11)
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day (sometimes during the day, sometimes during the night). A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project.
Also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out.
2BSD (PDP-11)
The Second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A further feature was a networking package called Berknet, developed by Eric Schmidt as part of his master's thesis work, that could connect up to twenty-six compu |
https://en.wikipedia.org/wiki/ZADNA | The .za Domain Name Authority (.ZADNA) is a not-for-profit company that administers .za, the Internet country code top-level domain (ccTLD) for South Africa. .ZADNA is the statutory regulator and manager of .za, the administrator of its registry, and an agency of the South African government under the Department of Communications and Digital Technologies.
.ZA domain information
.ZADNA is responsible for deciding the .za second-level domain (SLD) structure
Moderated second-level domains
AC.za
EDU.za
GOV.za
NOM.za
Unmoderated second-level domains
CO.za
NET.za
.ZA domain name dispute resolution
The .za Alternative Dispute Resolution (ADR) regulations provide for resolving .za domain name registration disputes.
.za domain name dispute types:
Abusive registration
Offensive registration
Alternative Dispute Resolution (ADR) is only applicable to unmoderated second-level domains (SLDs)
References
External links
Domain name registries
Information technology organisations based in South Africa
Government departments of South Africa
Internet in South Africa
Internet governance
Internet-related organizations |
https://en.wikipedia.org/wiki/R.%20Tom%20Sawyer | Robert Thomas Sawyer (June 20, 1901 – January 19, 1986) was the inventor of the first successful gas turbine locomotive. He also assisted in development of the diesel locomotive while he worked for General Electric, which led him to be known as the "Father of the Diesel Locomotive". Sawyer was the founder of what is now the modern-day International Gas Turbine Institute (IGTI), and among industry professionals was known as "Mr. Gas Turbine". Sawyer authored books about gas turbines, locomotives, and atomic power, and was awarded three U.S. Patents. The ASME established the R. Tom Sawyer Award to honor him for advancing gas turbine technology in all of its aspects for over 40 years. The award in his name is the highest award given by the IGTI, and is awarded annually at their international Turbo Expo.
Education and early work
Sawyer was born on June 20, 1901, in Schenectady, New York, but lived most of his life in Ho-Ho-Kus, New Jersey (Ridgewood, Bergen County). He received a bachelor's degree in electrical engineering from Ohio State University in 1923 followed by a master's degree in mechanical engineering in 1930. His undergraduate thesis was entitled "Preliminary Design of 60,000 kw Steam Power Station". While an undergraduate, he was a member of Sigma Pi fraternity and Scabbard and Blade. After receiving his undergraduate degree, Sawyer began working for General Electric, where he designed and developed early diesel locomotives.
In 1928, while working on his master's degree, Sawyer rebuilt a 1920s Jordan automobile and replaced the gear box with a generator and motor. He thus created an early hybrid electric vehicle which combined the Jordan automobile's internal combustion engine (ICE) with a DC generator that powered an electric motor to drive the axles. Sawyer drove the car for 60,000 miles in the United States and Australia as a demonstration of its capability. He determined, however, that the electrical parts were too heavy and expensive |
https://en.wikipedia.org/wiki/Moschovakis%20coding%20lemma | The Moschovakis coding lemma is a lemma from descriptive set theory involving sets of real numbers under the axiom of determinacy (the principle — incompatible with choice — that every two-player integer game is determined). The lemma was developed and named after the mathematician Yiannis N. Moschovakis.
The lemma may be expressed generally as follows:
Let Γ be a non-selfdual pointclass closed under real quantification and ∧, and ≺ a Γ-well-founded relation on ω^ω of rank θ ∈ ON. Let R ⊆ dom(≺) × ω^ω be such that (∀x ∈ dom(≺))(∃y)R(x, y). Then there is a Γ-set A ⊆ dom(≺) × ω^ω which is a choice set for R, that is:
(∀x ∈ dom(≺))(∃y)A(x, y).
(∀x)(∀y)(A(x, y) → R(x, y)).
A proof runs as follows: suppose for contradiction θ is a minimal counterexample, and fix ≺, R, and a good universal set U ⊆ (ω^ω)³ for the Γ-subsets of (ω^ω)². Easily, θ must be a limit ordinal. For δ < θ, we say that u ∈ ω^ω codes a δ-choice set provided property (1) holds for x of rank at most δ using U_u in place of A and property (2) holds with A replaced by U_u. By minimality of θ, for all δ < θ, there are δ-choice sets.
Now, play a game where players I and II select reals u and v, and II wins when u coding a δ₁-choice set for some δ₁ < θ implies that v codes a δ₂-choice set for some δ₂ > δ₁. A winning strategy for I defines a set S of reals coding δ-choice sets for arbitrarily large δ < θ. Define then
A(x, y) ↔ (∃w ∈ S) U(w, x, y),
which easily works. On the other hand, suppose τ is a winning strategy for II. From the s-m-n theorem, let s be continuous such that for all ε, x, t, and w,
.
By the recursion theorem, there exists such that . A straightforward induction on for shows that
,
and
.
So let
.
References
Axioms of set theory
Determinacy
Large cardinals
Lemmas in set theory |
https://en.wikipedia.org/wiki/Dmitry%20Feichtner-Kozlov | Dmitry Feichtner-Kozlov (born 16 December 1972, in Tomsk, Russia) is a Russian-German mathematician.
He works in the field of Applied and Combinatorial Topology, where he publishes under the name Dmitry N. Kozlov.
Biography
Feichtner-Kozlov obtained his PhD from the Royal Institute of Technology, Stockholm in 1996, with thesis Extremal Combinatorics, Weighting Algorithms, and Topology of Subspaces Arrangements written under the direction of Anders Björner. In 2004, after longer stays at the Mathematical Sciences Research Institute in Berkeley, California, the Massachusetts Institute of Technology, the Institute for Advanced Study in Princeton, New Jersey, the University of Washington in Seattle, the University of Bern, and the Royal Institute of Technology, he assumed the position of assistant professor at ETH Zurich, Switzerland.
Since 2007, he has worked at the University of Bremen, Germany, where he holds the Chair of Algebra and Geometry and is the director of the Institute for Algebra, Geometry, Topology and their Applications.
Feichtner-Kozlov has done research on various topics, such as: topological methods in combinatorics, including applications to graph colorings;
combinatorially defined polyhedral and cell complexes; combinatorial structures in geometry and topology, such as stratifications and compactifications of spaces; combinatorial aspects of chain complexes, such as coboundary expansion. He has also done interdisciplinary work, e.g., developing rigorous mathematical methods in theoretical distributed computing.
Feichtner-Kozlov is the recipient of the Wallenberg Prize (2003), the Göran Gustafsson Prize (2004), and the European Prize in Combinatorics (2005). The book "Distributed Computing through Combinatorial Topology",
which he wrote together with computer scientists Maurice Herlihy and Sergio Rajsbaum, was selected as a Notable Book on the Association for Computing Machinery's Best of Computing 2013 list.
He is a managing editor of th |
https://en.wikipedia.org/wiki/Molecular%20phenotyping | Molecular phenotyping describes the technique of quantifying pathway reporter genes, i.e. pre-selected genes that are modulated specifically by metabolic and signaling pathways, in order to infer activity of these pathways.
In most cases, molecular phenotyping quantifies changes of pathway reporter gene expression to characterize modulation of pathway activities induced by perturbations such as therapeutic agents or stress in a cellular system in vitro. In such contexts, measurements at early time points are often more informative than later observations because they capture the primary response to the perturbation by the cellular system. Integrated with quantified changes of phenotype induced by the perturbation, molecular phenotyping can identify pathways that contribute to the phenotypic changes.
Currently molecular phenotyping uses RNA sequencing and mRNA expression to infer pathway activities. Other technologies and readouts such as mass spectrometry and protein abundance or phosphorylation levels can be potentially used as well.
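A minimal sketch of the reporter-gene idea described above (the pathway names, gene names and counts are all made-up assumptions, not real reporter panels): pathway activity is inferred by aggregating expression changes of a pathway's pre-selected reporter genes, here as the mean log2 fold-change between perturbed and control samples.

```python
import math

# Hypothetical pathway -> reporter-gene mapping (illustrative only).
REPORTERS = {
    "pathway_1": ["GENE_A", "GENE_B"],
    "pathway_2": ["GENE_C", "GENE_D"],
}

def pathway_scores(control, perturbed, pseudocount=1.0):
    """Mean log2 fold-change of each pathway's reporter genes.

    control/perturbed map gene -> expression count (e.g. from RNA-Seq);
    the pseudocount guards against division by zero.
    """
    scores = {}
    for pathway, genes in REPORTERS.items():
        lfcs = [math.log2((perturbed[g] + pseudocount) / (control[g] + pseudocount))
                for g in genes]
        scores[pathway] = sum(lfcs) / len(lfcs)
    return scores

control   = {"GENE_A": 100, "GENE_B": 50, "GENE_C": 80, "GENE_D": 40}
perturbed = {"GENE_A": 400, "GENE_B": 210, "GENE_C": 20, "GENE_D": 12}

# pathway_1's reporters go up (positive score: pathway induced by the
# perturbation), pathway_2's go down (negative score: pathway repressed).
print(pathway_scores(control, perturbed))
```

Real analyses add normalization, replicate handling and statistical testing; the point here is only the mapping from reporter-gene changes to a per-pathway activity score.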
Application in early drug discovery
Current data suggest that by quantifying pathway reporter gene expression, molecular phenotyping can cluster compounds based on pathway profiles while simultaneously dissecting associations between pathway activities and disease phenotypes. It is applicable to compounds with a range of binding specificities and can triage false positives derived from high-content screening assays. Furthermore, molecular phenotyping allows integration of data derived from in vitro and in vivo models, as well as patient data, into the drug discovery process.
References
Molecular biology
RNA
Gene expression
Drug discovery
Pharmaceutical industry |
https://en.wikipedia.org/wiki/Paytm%20Payments%20Bank | Paytm Payments Bank (PPBL) is an Indian payments bank, founded in 2017 and headquartered in Noida; it is part of the mobile payment company Paytm. It received its licence to run a payments bank from the Reserve Bank of India and was launched in November 2017. In 2021, the bank received scheduled bank status from the RBI.
Vijay Shekhar Sharma holds 51 per cent of the entity, with One97 Communications holding 49 per cent. Vijay Shekhar Sharma is the promoter of Paytm Payments Bank, and One97 Communications Limited is not categorized as one of its promoters.
History
In 2015, Paytm Payments Bank Limited received in-principle approval from the Reserve Bank of India to set up a payments bank; the bank was formally inaugurated on November 28, 2017.
In the financial year 2020, the bank facilitated more than 485 crore transactions worth ₹4.6 lakh crore. It processed over 778 million UPI transactions amounting to ₹89,388 crore in June 2022 and continues to be India’s biggest UPI beneficiary bank with over 1,370 million digital transactions in June 2022.
In March 2021, the bank received approval for its @Paytm UPI handle from the Securities and Exchange Board of India for issuing payment mandates for initial public offerings (IPOs) through the Unified Payments Interface (UPI).
Products and services
Paytm Payments Bank offers savings and current accounts with a debit card, facilitating fast and easy payments.
Paytm Payments Bank has issued seven million Visa debit cards through its platform in FY'21.
Partnership
In January 2018, Paytm Payments Bank partnered with IndusInd Bank to offer fixed deposits. It entered into a partnership with MasterCard for the issuance of virtual and physical debit cards in April 2020. In January 2021, it tied up with Suryoday Small Finance Bank to offer fixed deposit services to its account holders. Since June 2021, it has stopped offering new fixed deposit creation with Suryoday Bank.
Financials
Paytm Payments Bank repo |
https://en.wikipedia.org/wiki/Australasian%20Journal%20of%20Combinatorics | The Australasian Journal of Combinatorics is a triannual peer-reviewed open-access scientific journal covering combinatorics. It was established in 1990 and is published by the Centre for Discrete Mathematics and Computing (University of Queensland) on behalf of the Combinatorial Mathematics Society of Australasia. Originally published biannually, it has been published three times per year since 2005. The editors-in-chief are Michael H. Albert (University of Otago) and Elizabeth J. Billington (University of Queensland). Since 2014, the journal has been diamond open access, charging fees neither to readers nor to authors.
Abstracting and indexing
The journal is abstracted and indexed in Mathematical Reviews, the Emerging Sources Citation Index, Scopus, and Zentralblatt MATH.
References
External links
Combinatorics journals
Academic journals established in 1990
Open access journals
English-language journals
Triannual journals |
https://en.wikipedia.org/wiki/Dog%20Aging%20Project | The Dog Aging Project is a long-term biological study of aging in dogs, centered at the University of Washington. Professors Daniel Promislow and Matt Kaeberlein are the co-directors of the project. Together with Chief Veterinarian, Dr. Kate Creevy, the project primarily focuses on research to understand dog aging through the collection and analysis of big data through citizen science.
Additionally, there is a small component of the project that explores the use of pharmaceuticals to potentially increase life span of dogs. The project has implications for improving the life spans of humans and is an example of geroscience.
The project engages the general public to register their dogs in the studies, and is therefore an example of citizen science. Nearly 40,000 dogs have been registered with the project. The majority of the dogs will participate in a longitudinal study of 10,000 dogs over a 10-year period conducted across the United States. Individual dogs are followed for the duration of their lives to understand the biological and environmental factors that influence dog longevity. A small subset of those dogs (approximately 500) will be enrolled in a double-blind, placebo-controlled study of the pharmaceutical rapamycin, which has shown signs of extending longevity in species such as mice.
The Dog Aging Project is an open science initiative. The investigators have committed to releasing all anonymized research data to the public domain. The longitudinal study portion of the Dog Aging Project bears some similarity to the Golden Retriever Lifetime Study of the Morris Animal Foundation although with much larger phenotypic diversity. The entire project also shares operational similarities to Darwin's Ark, a citizen science initiative of companion animals with more specific focus on genetics. The initiatives are each managed to ensure the data can be integrated into a powerful master data set.
A premise of the project is that dogs may be a sentine |
https://en.wikipedia.org/wiki/Vu%20Televisions | Vu Televisions (also Vu Technologies) is a television brand and an LED TV and display manufacturer founded by Indian businessperson Devita Saraf in the United States in 2006. It is among the top 10 largest-selling TV brands across e-commerce platforms in India.
History and products
Vu Technologies was founded as a high-end electronics company in 2006 by Indian businessperson Devita Saraf, who became its CEO and Design Head.
With a sales turnover of ₹1 billion, the company turned profitable in 2012. It started exporting televisions to the United States. Vu introduced ultra-high-definition television (4K HD) in 2014. At the end of the year, the company, valued at ₹2 billion, was selling about 40,000 televisions a year.
Vu had annual sales of $30 million in 2015, with units sold. Vu Technologies gave away a 25% stake in the company to private equity investors.
In 2016, Vu introduced entertainment-focused apps on its TVs, with video on demand including Netflix as a preloaded feature. The company had 20 of its own stores in Indian cities. Already producing TVs with a range of input options, including features like MHL compatibility and built-in Miracast support, the company launched the Vu SuperMac TV, India's first TV with the built-in OS X Mountain Lion operating system. It has also manufactured Windows- and Android-based televisions. Its designer TV sets include one made in collaboration with designer Tarun Tahiliani, with a Swarovski crystal frame.
Aside from India, Vu Televisions are sold in 60 countries. The company sells a majority of its televisions online and has become the highest-selling TV brand across e-commerce platforms in India. In 2016, Flipkart became its exclusive online sales partner. The share of Vu grew to 40% of the total market share in the TV category for Flipkart.
Vu Technologies imports all the components for panel manufacturing from China, Taiwan, South Korea and Japan. The company had an annual turnover of ₹5 billion for 2016–17, and Flipkart reported that |
https://en.wikipedia.org/wiki/Biosolarization | Biosolarization is an alternative technology to soil fumigation used in agriculture. It is closely related to biofumigation and soil solarization, the use of solar power to control nematodes, bacteria, fungi and other pests that damage crops. In solarization, the soil is mulched and covered with a tarp to trap solar radiation and heat the soil to a temperature that kills pests. Biosolarization adds organic amendments or compost to the soil before it is covered with plastic, which speeds up the solarization process by shortening the soil treatment time through increased microbial activity. Research conducted in Spain on the use of biosolarization in strawberry production has shown it to be a sustainable and cost-effective option. The practice of biosolarization is used among small agricultural operations in California and is growing in response to the need for methods for organic soil solarization. The option for more widespread use of biosolarization is being studied by researchers at the Western Center for Agricultural Health and Safety at the University of California, Davis, in order to validate its effectiveness in commercial agriculture in California, where it has the potential to greatly reduce the use of conventional fumigants. Biosolarization can also be used as an organic waste management practice; recent studies have shown the potential of food industry residues as soil amendments that can improve its efficiency.
References
Soil science
Soil contamination
Biocides
Pest control techniques
Agricultural terminology |
https://en.wikipedia.org/wiki/Multivalued%20treatment | In statistics, in particular in the design of experiments, a multi-valued treatment is a treatment that can take on more than two values. It is related to the dose-response model in the medical literature.
Description
Generally speaking, treatment levels may be finite or infinite as well as ordinal or cardinal, which leads to a large collection of possible treatment effects to be studied in applications. One example is the effect of different levels of program participation (e.g. full-time and part-time) in a job training program.
Assume there exists a finite collection of treatment statuses t ∈ {0, 1, …, J}, with J some fixed integer. As in the potential outcomes framework, let {Y(j) : j = 0, 1, …, J} denote the collection of potential outcomes under treatment j, let Y denote the observed outcome, and let D_j be an indicator that equals 1 when the treatment equals j and 0 when it does not equal j. Only Y = Σ_{j=0}^{J} D_j Y(j) is observed, leading to a fundamental problem of causal inference. A general framework that analyzes ordered choice models in terms of marginal treatment effects and average treatment effects has been extensively discussed by Heckman and Vytlacil.
Recent work in the econometrics and statistics literature has focused on estimation and inference for multivalued treatments and ignorability conditions for identifying the treatment effects. In the context of program evaluation, the propensity score has been generalized to allow for multi-valued treatments, while other work has also focused on the role of the conditional mean independence assumption. Other recent work has focused more on the large sample properties of an estimator of the marginal mean treatment effect conditional on a treatment level in the context of a difference-in-differences model, and on the efficient estimation of multi-valued treatment effects in a semiparametric framework.
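The generalized-propensity-score idea mentioned above can be sketched with a small simulation (the data-generating process and all numbers are illustrative assumptions): with a treatment T ∈ {0, 1, 2} and a discrete covariate X, the propensity p(t | x) is estimated by within-stratum frequencies, and inverse-probability weighting of observed outcomes recovers the mean potential outcome E[Y(t)] for each treatment level under ignorability.

```python
import random

rng = random.Random(0)
n, data = 20000, []
for _ in range(n):
    x = rng.randint(0, 1)                      # binary covariate
    probs = [0.5, 0.3, 0.2] if x == 0 else [0.2, 0.3, 0.5]
    t = rng.choices([0, 1, 2], weights=probs)[0]   # treatment depends on x
    y = 1.0 * t + 2.0 * x + rng.gauss(0, 1)    # true effect: +1 per treatment level
    data.append((x, t, y))

# Estimate the generalized propensity score p(t | x) by stratum frequencies.
counts = {}
for x, t, _ in data:
    counts[(x, t)] = counts.get((x, t), 0) + 1
n_x = {x: sum(c for (xx, _), c in counts.items() if xx == x) for x in (0, 1)}
p_hat = {(x, t): counts[(x, t)] / n_x[x] for (x, t) in counts}

# Inverse-probability-weighted estimate of E[Y(t)] per treatment level.
means = []
for level in (0, 1, 2):
    w_sum = sum(y / p_hat[(x, t)] for x, t, y in data if t == level)
    means.append(w_sum / n)
print(means)   # roughly [1.0, 2.0, 3.0]: level effect +1, plus E[2X] = 1
```

A naive comparison of raw group means would be confounded by X (higher-x units get treatment 2 more often); the weighting removes that imbalance, which is exactly what the multi-valued propensity score generalization buys.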
References
Applied mathematics
Design of experiments
Statistical theory
Industrial engineering
Systems engineering
Statistical process control
Quantitative research
Experime |
https://en.wikipedia.org/wiki/Optimal%20instruments | In statistics and econometrics, optimal instruments are a technique for improving the efficiency of estimators in conditional moment models, a class of semiparametric models that generate conditional expectation functions. To estimate parameters of a conditional moment model, the statistician can derive an expectation function (defining "moment conditions") and use the generalized method of moments (GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions.
As an example, consider the nonlinear regression model
y = f(x, θ₀) + u, with E[u | x] = 0,
where y is a scalar (one-dimensional) random variable, x is a random vector with dimension k, and θ₀ is a p-dimensional parameter. The conditional moment restriction E[y − f(x, θ₀) | x] = 0 is consistent with infinitely many moment conditions. For example:
E[(y − f(x, θ₀)) x] = 0.
More generally, for any vector-valued function z(x) of x, it will be the case that
E[(y − f(x, θ₀)) z(x)] = 0.
That is, z(x) defines a finite set of orthogonality conditions.
A natural question to ask, then, is whether an asymptotically efficient set of conditions is available, in the sense that no other set of conditions achieves lower asymptotic variance. Both econometricians and statisticians have extensively studied this subject.
The answer to this question is generally that such a finite set exists; this has been proven for a wide range of estimators. Takeshi Amemiya was one of the first to work on this problem, deriving optimal instruments for nonlinear simultaneous equation models with homoskedastic and serially uncorrelated errors. The form of the optimal instruments was characterized by Lars Peter Hansen, and results for nonparametric estimation of optimal instruments were provided by Newey. A result for nearest neighbor estimators was provided by Robinson.
In linear regression
The technique of optimal instruments can be used to show that, in a conditional moment linear regression model with iid data, the optimal GMM estimator is generali |
https://en.wikipedia.org/wiki/Stock%20sampling | Stock sampling is sampling people in a certain state at the time of the survey. This is in contrast to flow sampling, where the relationship of interest deals with duration or survival analysis. In stock sampling, rather than focusing on transitions within a certain time interval, we only have observations at a certain point in time. This can lead to both left and right censoring. Imposing the same model on data that have been generated under the two different sampling regimes can lead to research reaching fundamentally different conclusions if the joint distribution across the flow and stock samples differ sufficiently.
Stock sampling essentially leads to a sample selection problem. This selection issue is akin to the truncated regression model, where selection occurs on the basis of a binary response variable, but in this specific context the problem has been referred to as length-biased sampling. Consider, for example, the figure below, which plots some duration data. If a researcher were to use stock sampling and only sample and survey individuals at the survey dates (i.e. at the survey date, 12 months after the survey date, etc.), there is a high likelihood that short duration spells would be missed, as some durations shorter than 12 months are necessarily omitted from the sample.
A number of methods to adjust for these sampling issues have been proposed: one can appropriately adjust the maximum likelihood estimation of censored flow data for the sample selection, or use nonparametric estimation methods for censored data.
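The length bias induced by stock sampling can be seen in a small simulation: spells in progress at the survey date are sampled with probability proportional to their length, so the stock-sample mean duration exceeds the population (flow) mean; for exponential durations the factor is exactly two. All numbers below are illustrative.

```python
import numpy as np

# Simulated spells: exponential durations (mean 6 months), start times
# uniform over a long window; a single survey is taken at time T.
rng = np.random.default_rng(1)
starts = rng.uniform(0, 1000, size=200_000)
durations = rng.exponential(scale=6.0, size=200_000)
T = 900.0

# Stock sample: spells that are in progress at the survey date.
in_progress = (starts <= T) & (starts + durations > T)

print(durations.mean())               # population (flow) mean, ≈ 6
print(durations[in_progress].mean())  # stock-sample mean, ≈ 12 (length-biased)
```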
References
Sampling (statistics)
Design of experiments
Actuarial science
Single-equation methods (econometrics)
Regression models
Mathematical and quantitative methods (economics)
Medical statistics
Clinical trials
Epidemiology |
https://en.wikipedia.org/wiki/Axway%20Software | Axway Software is a French-American publicly held information technology company that provides software tools for enterprise software, enterprise application integration, business activity monitoring, business analytics, mobile application development and web API management.
Since it split from parent company Sopra Steria in June 2011, Axway has been listed on Euronext Paris (AXW).
History
Axway Software was incorporated on 28 December 2000 when the software infrastructure division of the French IT services company Sopra was spun-out as a subsidiary. (Sopra subsequently merged with another French IT services company Steria to form Sopra Steria in 2014.)
Sopra used Axway as a vehicle for expansion into the Enterprise Application Integration market. Subsequently, a number of acquisitions have been made by Axway. The Swedish company Viewlocity was acquired in early 2002.
Christophe Fabre became CEO of Axway in 2005 and remained in that position until 2015. Axway acquired the US company Cyclone Commerce in January 2006, after which much of Axway's executive management relocated to Phoenix, Arizona. In February 2007, Axway acquired Atos's B2B software business in Germany. The US company Tumbleweed Communications was acquired in June 2008.
In June 2011, Axway was spun out of Sopra Group and listed on the Paris Euronext. In November 2012, the Irish company Vordel, an API Management vendor, was acquired. The Brazilian company, SCI Soluções, was acquired in September 2013. In January 2014, Axway acquired the assets of Information Gateway in Australia. Axway acquired French company Systar, a developer of Business Activity Monitoring software, in June 2014.
In January 2016, Axway acquired US company Appcelerator, creator of the Appcelerator Titanium open-source framework for multiplatform native mobile app development. Axway acquired US company Syncplicity, developer of a file share and synchronization service, in February 2017.
As of 2017, Sopra Steria holds 33. |
https://en.wikipedia.org/wiki/Resource%20selection%20function | Resource selection functions (RSFs) are a class of functions that are used in spatial ecology to assess which habitat characteristics are important to a specific population or species of animal, by assessing a probability of that animal using a certain resource proportional to the availability of that resource in the environment.
Modeling
Resource Selection Functions require two types of data: location information for the wildlife in question, and data on the resources available across the study area. Resources can include a broad range of environmental and geographical variables, including categorical variables such as land cover type, or continuous variables such as average rainfall over a given time period. A variety of methods are used for modeling RSFs, with logistic regression being commonly used.
RSFs can be fit to data where animal presence is known, but absence is not, such as for species where several individuals within a study area are fitted with a GPS collar, but some individuals may be present without collars.
When this is the case, buffers of various distances are generated around known presence points, with a number of available points generated within each buffer, which represent areas where the animal could have been, but it is unknown whether they actually were. These models can be fit using binomial generalized linear models or binomial generalized linear mixed models, with the resources, or environmental and geographic data, as explanatory variables.
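The use–availability approach described above can be sketched as a logistic regression of used points (1) against available points (0) on an environmental covariate. Everything below is simulated: "used" locations are drawn proportionally to exp(beta * z) from a standard-normal availability distribution (which tilts them to N(beta, 1)), and a hand-rolled gradient-ascent logistic fit recovers the selection coefficient. A real analysis would use GPS fixes, real covariates, and a (mixed) GLM package.

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true = 1.0
n = 5_000

z_avail = rng.normal(0.0, 1.0, n)         # available (background) points
z_used = rng.normal(beta_true, 1.0, n)    # selection ∝ exp(beta*z) tilts N(0,1) to N(beta,1)

z = np.concatenate([z_used, z_avail])
y = np.concatenate([np.ones(n), np.zeros(n)])
Xd = np.column_stack([np.ones_like(z), z])  # intercept + covariate

# Plain gradient ascent on the logistic log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (y - p) / len(y)

print(w[1])   # estimated selection coefficient, close to beta_true = 1
```

The fitted slope is the resource selection coefficient: positive values indicate selection for higher values of the covariate relative to availability.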
Scale
Resource selection functions can be modeled at a variety of spatial scales, depending on the species and the scientific question being studied.
Most RSFs address one of the following scales, which were defined by Douglas Johnson in 1980 and are still used today:
First order selection: The entire range of a species
Second order selection: The home range of an individual or group of animals
Third order selection: Resource or habitat usage within an i |
https://en.wikipedia.org/wiki/Edgar%20Silinsh | Edgar Imant Silinsh (, , Edgar Aleksandrovich Silinsh; 21 March 1927 – 26 May 1998) was a Soviet and Latvian scientist in the field of semiconductor physics and philosophy of science, academician of the Latvian Academy of Sciences (1992).
Biography
Edgar Silinsh was born the fourth child in the family of the prosperous farmer Aleksandrs Siliņš (1875–1934) on the "Veclapsas" farmstead in Līgatne municipality of Riga district. During his school years, he was mostly interested in literature and history rather than in the physical sciences. Due to the start of World War II and the death of his mother in 1943, E. Silinsh was forced to discontinue his education; nevertheless, in 1946 he took secondary school exams and enrolled at the Faculty of Chemistry of the State University of Latvia (SUL). The choice of the natural sciences was rooted in Edgar's awareness that the humanities were ideologically constrained in the Soviet Union. Yet he was forced to cease his studies even in the field of chemistry during the Stalinist repressions of 1949, because of class-based mistreatment. After that, E. Silinsh worked as a laboratory assistant for 14 years, and for the last 12 of these he was employed at the Central Laboratory of the Riga Plant of Electrical Machine Building (). At this institution, E. Silinsh could for the first time perform actual scientific research, mostly in the field of atomic spectroscopy. In 1958, two of his reports were included in the X All-Union Spectroscopy Conference in Lvov. In total, Edgar Silinsh published 26 scientific and technical papers in the field of atomic and molecular spectroscopy during his years at RER, as well as 16 technical and technological publications of other kinds. During the "Khrushchev Thaw", Edgar Silinsh was finally able to enroll (in 1957) at and graduate (in 1961) from the Faculty of Physics and Mathematics of the University of Latvia. In 1962, he began his extramural studies for the Candidate of Sciences degree (equivalent to the Western PhD) at the S. Vavilov State Opt
https://en.wikipedia.org/wiki/Women%20in%20Bletchley%20Park | About 7,500 women worked in Bletchley Park, the central site for British cryptanalysts during World War II. Women constituted roughly 75% of the workforce there. While women were overwhelmingly under-represented in high-level work such as cryptanalysis, they were employed in large numbers in other important areas, including as operators of cryptographic and communications machinery, translators of Axis documents, traffic analysts, clerical workers, and more.
Women made up the majority of Bletchley Park's workforce; most were enlisted in the Women's Royal Naval Service (WRNS), whose members were nicknamed the Wrens.
The Wrens performed a vital role operating the machines used for code-breaking, including the Colossus and Bombe machines. Working around the clock in three 8-hour shifts, they were the beating heart of Bletchley Park.
Women were also involved in the construction of the machines, including doing the wiring and soldering to create each Colossus computer.
Background
Bletchley Park was the central site for British cryptanalysis during World War II. It housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers – most importantly the German Enigma and Lorenz ciphers. According to Sir Harry Hinsley, the "Ultra" intelligence produced at Bletchley Park shortened the war by approximately two years. Bletchley Park is famous for the impact it had on the war and for the work performed there by scholars such as Alan Turing and Dilly Knox. This work, though secret until 1974, had a significant impact on the history of science and technology. In the past century, archivists and historians have increasingly emphasized the role of the women who worked in Bletchley Park.
Recruitment of women
In 1937, when the tensions in Europe and Asia were becoming apparent, the Chief of MI6, Admiral Hugh Sinclair, ordered GC&CS to begin preparing for a war-footing and to expand its staff numbers. |
https://en.wikipedia.org/wiki/2017%20Bank%20of%20the%20Philippine%20Islands%20systems%20glitch | On June 7, 2017, the Bank of the Philippine Islands (BPI) suspended its online transaction and automatic teller machine services amidst reports of money missing from its account holders. There was speculation that BPI was compromised by hackers but the bank claimed that the problem was caused by an internal data processing error. The scope of the issue was nationwide according to the bank and also said that only a small portion of its customers were affected and that most of them were in Metro Manila.
It was reported that the value of some transactions made from April 27 to May 3, 2017, were doubled. The bank issued a statement that they were resolving the issue and assured that its clients would not lose any money.
BPI's stock on the Philippine Stock Exchange remained unaffected by the incident. Luis Limlingan, head of research and sales at Regina Capital Development Corporation, suggested that most investors saw the incident as a one-off event that could be resolved; according to Limlingan, the real problem was how BPI dealt with its disgruntled customers.
BPI announced that it had resolved the issue at 9 p.m. on June 8, 2017. The Bangko Sentral ng Pilipinas, the country's central bank, launched a probe into the incident.
See also
2012 RBS Group computer system problems
References
2010s economic history
June 2017 events in the Philippines
Software anomalies
Corporate scandals |
https://en.wikipedia.org/wiki/Hager%20Group | Hager Group is a manufacturer of electrical installations in residential, commercial and industrial buildings based in Blieskastel, Germany. The company has been family-run and owned ever since its foundation in 1955.
Hager Group provides products and services ranging from energy distribution and cable management to intelligent building automation and security systems, under the brand Hager. Hager Group also owns the brands Berker, Bocchiotti, Daitem, Diagral, Elcom and E3/DC. In 2018, Hager Group was the world market leader in electrical installation systems. In August 2019, the group was ranked number 128 in the top 500 family-owned businesses in Germany according to the magazine Die Deutsche Wirtschaft.
History
In 1955, Hager oHG, elektrotechnische Fabrik was founded by brothers Oswald and Hermann Hager, together with their father Peter Hager in Ensheim in the Saarland region of Germany. Since 1945, Saarland had been under the economic control of France and had no access to the German market. However, Hager wanted to gain a foothold in both markets. In 1959, the Hager brothers founded their first foreign subsidiary, Hager Electro S. A., in Obernai, Alsace, in north-eastern France.
In 1966, Hager began the systematic training of its electricians, whose expertise created a culture of customer loyalty that continues to this day. Hager's modular rotary fuse carrier was patented in Germany in 1968 and in France in 1970. At the same time, the first mass-produced distribution board, the Hager-Rapid-System, was launched on the French market. In 1973, Hager achieved sales of 43 million Deutsche Marks in Germany, and in 1974 the company reached a turnover of 22 million francs in France.
In 1976, Hager launched the mini Gamma enclosure, in 1982 the company started producing the first Residual-current circuit breakers (RCCB) in Germany. A new production facility with a high-bay warehouse was opened in Blieskastel.
Hager Group began to market itself as a |
https://en.wikipedia.org/wiki/Hypersequent | In mathematical logic, the hypersequent framework is an extension of the proof-theoretical framework of sequent calculi used in structural proof theory to provide analytic calculi for logics that are not captured in the sequent framework. A hypersequent is usually taken to be a finite multiset of ordinary sequents, written
The sequents making up a hypersequent are called components. The added expressivity of the hypersequent framework is provided by rules manipulating different components, such as the communication rule for the intermediate logic LC (Gödel–Dummett logic)
or the modal splitting rule for the modal logic S5:
Hypersequent calculi have been used to treat modal logics, intermediate logics, and substructural logics. Hypersequents usually have a formula interpretation, i.e., are interpreted by a formula in the object language, nearly always as some kind of disjunction. The precise formula interpretation depends on the considered logic.
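For concreteness, writing the components as sequents Γ_i ⇒ Δ_i, a hypersequent and its usual classical formula interpretation (a disjunction over the components) can be sketched as follows; as noted above, the precise interpretation varies with the logic, so this is one common choice rather than a universal definition:

```latex
G \;=\; \Gamma_1 \Rightarrow \Delta_1 \;\mid\; \cdots \;\mid\; \Gamma_n \Rightarrow \Delta_n,
\qquad
\iota(G) \;=\; \bigvee_{i=1}^{n} \Bigl( \bigwedge \Gamma_i \rightarrow \bigvee \Delta_i \Bigr)
```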
Formal definitions and propositional rules
Formally, a hypersequent is usually taken to be a finite multiset of ordinary sequents, written
The sequents making up a hypersequent consist of pairs of multisets of formulae, and are called the components of the hypersequent. Variants defining hypersequents and sequents in terms of sets or lists instead of multisets are also considered, and depending on the considered logic the sequents can be classical or intuitionistic. The rules for the propositional connectives usually are adaptions of the corresponding standard sequent rules with an additional side hypersequent, also called hypersequent context. E.g., a common set of rules for the functionally complete set of connectives for classical propositional logic is given by the following four rules:
Due to the additional structure in the hypersequent setting the structural rules are considered in their internal and external variants. The internal weakening and internal contraction rules are the adaptions o |
https://en.wikipedia.org/wiki/Luftnachrichten%20Abteilung%20350 | The Luftnachrichten Abteilung 350, abbreviated as OKL/Ln Abt 350 and formerly called the (), was the Signal Intelligence Agency of the German Air Force, the Luftwaffe, before and during World War II. Before November 1944, the unit was the , which was often abbreviated to Chi-Stelle/ObdL or more commonly Chi-Stelle.
The founding of the former agencies of OKL/Ln Abt 350 dates back to 1936, when Colonel (later ) Wolfgang Martini instigated the creation of the agency, which was later established on the orders of Hermann Göring, the German politician, head of the air force, and leading member of the Nazi Party. Right from the beginning, the Luftwaffe High Command resolved to make itself entirely independent of the German Army (Heer) in the field of cryptology.
Background
The LN Abt 350 was one of a large number of regiments named in that series, but there were several related regiments which dealt with intelligence matters of one kind or another. These were as follows:
LN Regiment 351. Mapping and interception of communications intelligence of Allied air forces in England and France. It conducted air to air interception, ground to air, and ground to ground including tracking of navigational aids.
LN Regiment 352. Mapping and interception of communication intelligence of Allied air forces in the Mediterranean area.
LN Regiment 353. Tracking and mapping of the Soviet Air Force.
LN Abteilung 355. Allied air forces in northern areas, specifically the Soviet Air Force in Northern Norway. It covered ground to ground, and air to air according to reception conditions. This unit was formerly W-Leit 5 based in Oslo.
LN Abteilung 356. Route tracking of Allied air forces by radar interception and in collaboration with LN Regiment 357.
LN Abteilung 357. Tracking Allied four-engined formations and route tracking by intercepted signals and in collaboration with LN Abt. 356.
LN Abteilung 358. Training of intercept personnel.
LN Abteilung 359. ra |
https://en.wikipedia.org/wiki/International%20Journal%20of%20Web%20Services%20Research | The International Journal of Web Services Research (IJWSR) is a quarterly peer-reviewed academic journal covering web services. It was established in 2004 and is published by IGI Global. The editor-in-chief is Liang-Jie Zhang (Kingdee International Software Group in China and The Open Group).
Abstracting and indexing
The journal is abstracted and indexed in:
References
External links
Academic journals established in 2004
English-language journals
Quarterly journals
Web Services Research, International Journal of
Computer science journals
Web services |
https://en.wikipedia.org/wiki/ELOT%20927 | ELOT 927 is 7-bit character set standardized by ELOT, the Hellenic Organization for Standardization (HOS). It is also known as ISO-IR-88, CSISO88GREEK7 or 7-bit DEC Greek. The standard was withdrawn in November 1986. Support for it was implemented in various dot matrix printers (for example by Fujitsu) and line printers (for example by Printronix and Siemens) as well as in computer terminals (for example by DEC). Support for it can still be found in various applications, languages and protocols today, for example in Perl and Kermit.
Character set
See also
ISO/IEC 646
ELOT 928
References
Character sets |
https://en.wikipedia.org/wiki/Journal%20of%20Photonics%20for%20Energy | Journal of Photonics for Energy is a quarterly, online peer-reviewed scientific journal covering fundamental and applied research on the applications of photonics for renewable energy harvesting, conversion, storage, distribution, monitoring, consumption, and efficient usage, published by SPIE. The editor-in-chief is Sean Shaheen.
Abstracting and indexing
The journal is abstracted and indexed in:
Science Citation Index Expanded
Current Contents - Physical, Chemical & Earth Sciences
Current Contents - Engineering, Computing & Technology
Inspec
Scopus
Ei/Compendex
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.836.
References
External links
Optics journals
Engineering journals
SPIE academic journals
English-language journals
Academic journals established in 2011 |
https://en.wikipedia.org/wiki/ARINC%20629 | The ARINC 629 computer bus was introduced in May 1995 and is used on aircraft such as the Boeing 777, Airbus A330 and Airbus A340 as well as the Airbus A320 series.
The ARINC 629 bus operates as a multiple-source, multiple-sink system; each terminal can transmit data to, and receive data from, every other terminal on the data bus. This allows much more freedom in the exchange of data between units in the avionics system. ARINC 629 can accommodate up to 128 terminals on a data bus and supports a data rate of 2 Mbit/s.
The ARINC 629 data bus was developed by the Airlines Electronic Engineering Committee (AEEC) to replace the ARINC 429 bus.
The ARINC 629 data bus was based on the Boeing DATAC bus.
References
Computer buses
Avionics
Computer-related introductions in 1995
ARINC standards
Serial buses |
https://en.wikipedia.org/wiki/Moduli%20stack%20of%20vector%20bundles | In algebraic geometry, the moduli stack of rank-n vector bundles Vectn is the stack parametrizing vector bundles (or locally free sheaves) of rank n over some reasonable spaces.
It is a smooth algebraic stack of negative dimension . Moreover, viewing a rank-n vector bundle as a principal -bundle, Vectn is isomorphic to the classifying stack
Definition
For the base category, let C be the category of schemes of finite type over a fixed field k. Then is the category where
an object is a pair of a scheme U in C and a rank-n vector bundle E over U
a morphism consists of in C and a bundle-isomorphism .
Let be the forgetful functor. Via p, is a prestack over C. That it is a stack over C is precisely the statement "vector bundles have the descent property". Note that each fiber over U is the category of rank-n vector bundles over U where every morphism is an isomorphism (i.e., each fiber of p is a groupoid).
See also
classifying stack
moduli stack of principal bundles
References
Algebraic geometry |
https://en.wikipedia.org/wiki/Bitstream%20International%20Character%20Set | The Bitstream International Character Set (BICS) was developed by Bitstream, Inc.
Code charts
Character set 0x00
Character set 0x01
Character set 0x02
Character set 0x03
Character set 0x04
Character set 0x05
Character set 0x07
Character set 0x08
Character set 0x0A
Character set 0x0B
Character set 0x0D
Character set 0x0E
Character set 0x11
Character set 0x12
Character set 0x13
Character set 0x14
Character set 0x15
Character set 0x19
References
Character sets |
https://en.wikipedia.org/wiki/Basic%20Korean%20Dictionary | Basic Korean Dictionary () is an online learner's dictionary of the Korean language, launched on 5 October 2016 by the National Institute of Korean Language. It consists of one monolingual and ten bilingual dictionaries that provide meanings of Korean words and expressions in Korean, English, Arabic, French, Indonesian, Japanese, Mongolian, Russian, Spanish, Thai, and Vietnamese.
Multilingual support
Korean: Basic Korean dictionary
Korean–English: Korean–English Learners' Dictionary
Korean–Arabic:
Korean–French:
Korean–Indonesian:
Korean–Japanese:
Korean–Mongolian:
Korean–Russian:
Korean–Spanish:
Korean–Thai:
Korean–Vietnamese:
See also
Standard Korean Language Dictionary
References
External links
Korean dictionaries
Online dictionaries |
https://en.wikipedia.org/wiki/Radio-86RK | The Radio-86RK () is a build-it-yourself home computer designed in the Soviet Union. It was featured in the popular Radio () magazine for radio hams and electronics hobbyists in 1986. The letters RK in the title stands for the words Radio ham's Computer (). Design of the computer was published in a series of articles describing its logical structure, electrical circuitry, drawings of printed circuit boards and firmware. The computer could be built entirely out of standard off-the-shelf parts. Later it was also available in a kit form as well as fully assembled form.
Predecessors
The Radio-86RK is the successor of an earlier build-it-yourself computer by the same designers, the Micro-80, and has limited compatibility with it. Its description was also published in a series of articles in the Radio magazine in the early 1980s, but its complex design, consisting of several modules and containing about 200 chips, the lack of printed circuit board drawings and, most importantly, the scarcity of the chips on sale made the computer difficult to assemble. Only a few enthusiasts completed Micro-80 computers.
Assembly process
To assemble the computer, builders had to acquire the necessary electronic components, make two printed circuit boards and mount all components on them. It was mostly a single-board computer, as the second board served only as the base for mounting the keyboard keys. The main board used a single large connector for power, keyboard, tape recorder and even video output. Hence it was easy to disconnect the board and work on both sides of it outside the case.
Next, the firmware had to be written into two erasable ROM chips using a chip programmer. A power supply unit, a keyboard and a computer case also had to be made. The computer used a normal domestic TV set connected via a composite video input as a display. As most Soviet TVs of the time did not have video inputs, it was necessary to install a special module or modify the TV's electronics to impleme
https://en.wikipedia.org/wiki/Hermite%20transform | In mathematics, Hermite transform is an integral transform named after the mathematician Charles Hermite, which uses Hermite polynomials as kernels of the transform. This was first introduced by Lokenath Debnath in 1964.
The Hermite transform of a function is
The inverse Hermite transform is given by
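For reference, one common convention for the transform pair uses the Hermite polynomials H_n with the weight e^{-x^2}; normalizations vary across the literature, so treat the constants below as one choice among several rather than the definitive form:

```latex
f_H(n) \;=\; \int_{-\infty}^{\infty} e^{-x^2}\, H_n(x)\, f(x)\, dx, \qquad n = 0, 1, 2, \ldots
\qquad\text{with inverse}\qquad
f(x) \;=\; \sum_{n=0}^{\infty} \frac{1}{\sqrt{\pi}\, 2^{n}\, n!}\, f_H(n)\, H_n(x)
```

The normalizing factor comes from the orthogonality relation of the Hermite polynomials with respect to the weight e^{-x^2}.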
Some Hermite transform pairs
References
Integral transforms
Mathematical physics |
https://en.wikipedia.org/wiki/Hello%20English | Hello English is an English language-learning application, which allows users to learn the English language through interactive modules. It functions on a freemium pricing model. The app is available on Android, iOS, Windows and Web.
History
Hello English was launched in October 2014 by CultureAlley, an edtech startup co-founded in December 2012 by Nishant Patni, an alumnus of IIT Bombay and the Kellogg School of Management, along with Pranshu Patni, an alumnus of NMIMS. It is run by the Jaipur-based Intap Labs Private Limited.
Funding
The platform raised $6.5 million in Series A funding led by New York-based venture capital firm Tiger Global Management in March 2015. Other participants included Kae Capital (led by Sasha Mirchandani), 500 Startups of California, and angel investors Rajan Anandan and Sunil Kalra.
The platform had previously raised an undisclosed amount of funding from Kae Capital in 2013.
Features
The application consists of 475 interactive lessons and games covering reading, writing, speaking, and listening, with gamification mechanics built into the app. It has a bilingual dictionary available in 22 languages.
Awards/Recognition
2017: #3 educational app on the Google Play Store in India.
2016: Received the Most Innovative Mobile App for India award from the Internet and Mobile Association of India (IAMAI).
2016: Best Apps of 2016 in 'Made in India' category by Google Play Store.
2015: Pranshu Patni, co-founder of Hello English, listed in Forbes 30 Under 30 for creating the Hello English app.
2015: #1 educational app on the Google Play Store in India.
2015: According to App Annie, it is the 98th most downloaded app in India on Android phones as of 8 July 2015.
2014: Received the Most Innovative Mobile App for India award from the Internet and Mobile Association of India (IAMAI).
See also
Computer-assisted language learning
Language education
Language pedagogy
List of flashcard software
List of langu |
https://en.wikipedia.org/wiki/Mineral%20jig | In metallurgy, mineral jigs are a type of gravity concentrator, separating materials with different densities. It is widely used in recovering valuable heavy minerals such as gold, platinum, tin, tungsten, as well as gemstones such as diamond and sapphire, from alluvial or placer deposits. Base metals such as iron, manganese, and barite can also be recovered using jigs.
The process begins with flowing a stream of liquid-suspended material over a screen and subjecting the screen to a vertical hydraulic pulsation. This pulsation momentarily expands or dilates the screen bed and allows the heavier materials to work toward the bottom. Heavier material finer than the screen openings will gradually work through the beds and the retention screen into the hutch, or lower compartment. That material, the concentrate, is discharged from this compartment or hutch through a spigot. If the concentrate is coarser than the screen, it will work down to the top of the shot bed, and can be withdrawn either continuously or intermittently. The lighter material, or tailing, will be rejected over the end of the jig.
The mineral jig has certain advantages in placer and hardrock mill flowsheets. In gold recovery, the jigs produce highly concentrated products which can be easily upgraded by methods such as barrel amalgamation, treatment on shaking tables or processing through centrifugal concentrators. In other placer operations the heavy minerals being sought are recovered efficiently and cheaply with similarly high ratios of concentration. In iron, manganese, and base metal treatment flowsheets, the jigs are operated to produce marketable grades of concentrate, or, as pre-concentration devices, to reject barren gangue before the ore enters the fine grinding section of the mill flowsheet.
The construction of the mineral jig results in maximum utilization of floor area and minimum head room requirements, permitting greater capacity per unit of operating floor area than, for example, |
https://en.wikipedia.org/wiki/Curie%27s%20principle | Curie's principle, or Curie's symmetry principle, is a maxim about cause and effect formulated by Pierre Curie in 1894:
The idea was based on the ideas of Franz Ernst Neumann and Bernhard Minnigerode. Thus, it is sometimes known as the Neumann–Minnigerode–Curie principle.
References
Group theory
Concepts in physics
Symmetry |
https://en.wikipedia.org/wiki/Plasmon%20coupling | Plasmon coupling is a phenomenon that occurs when two or more plasmonic particles approach each other to a distance below approximately one diameter's length. Upon the occurrence of plasmon coupling, the resonance of individual particles start to hybridize, and their resonance spectrum peak wavelength will shift (either blueshift or redshift), depending on how surface charge density distributes over the coupled particles. At a single particle's resonance wavelength, the surface charge densities of close particles can either be out of phase or in phase, causing repulsion or attraction and thus leading to increase (blueshift) or decrease (redshift) of hybridized mode energy. The magnitude of the shift, which can be the measure of plasmon coupling, is dependent on the interparticle gap as well as particles geometry and plasmonic resonances supported by individual particles. A larger redshift is usually associated with smaller interparticle gap and larger cluster size.
Plasmon coupling can also boost the electric field in the interparticle gap by several orders of magnitude, far exceeding the field enhancement of a single plasmonic nanoparticle. Many sensing applications, such as surface-enhanced Raman spectroscopy (SERS), utilize the plasmon coupling between nanoparticles to achieve ultralow detection limits.
Plasmon ruler
Plasmon ruler refers to a dimer of two identical plasmonic nanospheres linked together through a polymer, typically DNA or RNA. Based on the universal scaling law relating spectral shift to interparticle separation, nanometer-scale distances can be monitored via the color shift of the dimer resonance peak. Plasmon rulers are typically used to monitor distance fluctuations below the diffraction limit, between tens of nanometers and a few nanometers.
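As a sketch of how such a ruler is read out: the measured fractional peak shift is often modeled as decaying exponentially with the gap-to-diameter ratio (the so-called plasmon ruler equation), which can then be inverted to estimate the gap. The prefactor and decay constant below are placeholders, not calibrated values; real calibrations depend on particle material, size, and the surrounding medium.

```python
import math

# Hypothetical calibration constants for the exponential "plasmon ruler"
# model: fractional shift = A_CAL * exp(-(gap/diameter) / TAU).
A_CAL, TAU = 0.2, 0.2

def fractional_shift(gap_nm: float, diameter_nm: float) -> float:
    """Model: fractional resonance shift for a given interparticle gap."""
    return A_CAL * math.exp(-(gap_nm / diameter_nm) / TAU)

def gap_from_shift(shift: float, diameter_nm: float) -> float:
    """Invert the model: estimate the gap from a measured fractional shift."""
    return -TAU * diameter_nm * math.log(shift / A_CAL)

# Round trip: a dimer of 40 nm spheres with an 8 nm gap.
shift = fractional_shift(8.0, 40.0)
print(round(gap_from_shift(shift, 40.0), 6))  # → 8.0
```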
Plasmon coupling microscopy
Plasmon coupling microscopy is a ratiometric widefield imaging approach that allows monitoring of multiple plasmon rulers with high temporal resolution. |
https://en.wikipedia.org/wiki/Mott%E2%80%93Schottky%20plot | In semiconductor electrochemistry, a Mott–Schottky plot describes the reciprocal of the square of capacitance versus the potential difference between bulk semiconductor and bulk electrolyte. In many theories, and in many experimental measurements, the plot is linear. The use of Mott–Schottky plots to determine system properties (such as flatband potential, doping density or Helmholtz capacitance) is termed Mott–Schottky analysis.
Consider the semiconductor/electrolyte junction shown in Figure 1. Under an applied bias voltage $V$ the width of the depletion layer is

$w = \sqrt{\frac{2\varepsilon (V_{bi} + V)}{q N}}$ (1)

Here $\varepsilon$ is the permittivity, $q$ is the elementary charge, $N$ is the doping density, and $V_{bi}$ is the built-in potential.

The depletion region contains positive charge compensated by ionic negative charge at the semiconductor surface (on the liquid electrolyte side). This charge separation forms a dielectric capacitor at the interface, as in a metal/semiconductor contact. For an electrode of area $A$ the capacitance is

$C = \frac{\varepsilon A}{w}$ (2)

an equation describing the capacitance of a capacitor constructed of two parallel plates, both of area $A$, separated by a distance $w$. Replacing $w$ as obtained from equation (1), the capacitance becomes

$C = A \sqrt{\frac{q \varepsilon N}{2 (V_{bi} + V)}}$ (3)

Taking the reciprocal square of equation (3), we obtain

$\frac{1}{C^2} = \frac{2 (V_{bi} + V)}{q \varepsilon N A^2}$ (4)

Therefore, the reciprocal square capacitance $1/C^2$ is a linear function of the voltage $V$, which constitutes the Mott–Schottky plot as shown in Fig. 1c. The measurement of the Mott–Schottky plot brings us two important pieces of information.
The slope gives the doping (semiconductor) density (provided that the dielectric constant is known).
The intercept to the x axis provides the built-in potential, or the flatband potential (as here the surface barrier has been flattened) and allows establishing the semiconductor conduction band level with respect to the reference of potential.
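To make this analysis concrete, here is a small self-contained sketch using the linear relation 1/C² ∝ (V_bi + V); all numeric values are invented for illustration.

```python
# Mott-Schottky analysis sketch: treat 1/C^2 versus applied bias V as a
# straight line, then recover the doping density from the slope and the
# built-in potential from the x-axis intercept. Illustrative values only.
q   = 1.602176634e-19          # elementary charge (C)
eps = 10 * 8.8541878128e-12    # permittivity (F/m), assuming eps_r = 10
A   = 1e-4                     # electrode area (m^2)
N_true, V_bi = 1e22, 0.4       # doping density (m^-3), built-in potential (V)

def inv_C2(V):
    """Ideal Mott-Schottky relation for the reciprocal square capacitance."""
    return 2 * (V_bi + V) / (q * eps * N_true * A**2)

# "Fit" the straight line through two ideal data points.
V1, V2 = 0.2, 1.2
slope = (inv_C2(V2) - inv_C2(V1)) / (V2 - V1)
intercept = inv_C2(V1) - slope * V1

N_est    = 2 / (q * eps * A**2 * slope)  # doping density from the slope
V_bi_est = intercept / slope             # x-intercept sits at V = -V_bi
print(f"N = {N_est:.3e} m^-3, V_bi = {V_bi_est:.3f} V")
```

With real (noisy) data the slope and intercept would come from a least-squares fit over the linear region of the plot rather than from two points.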
In liquid junction the reference of potential is normally a standard reference electrode. In solid junctions, we can take as |
https://en.wikipedia.org/wiki/STANAG%204427%20on%20CM | STANAG 4427 on Configuration Management in System Life Cycle Management is the Standardization Agreement (STANAG) of NATO nations on how to do configuration management (CM) on defense systems. The STANAG, and its supporting NATO publications, provides guidance on managing the configuration of products and services. It is unique in its full life cycle perspective, requiring a Life Cycle CM Plan, and in its approach to contracting for CM, using an ISO standard as the base and building up additional requirements (as opposed to the classical tailoring-down).
History
STANAG 4427 is NATO’s agreement on how to do configuration management on defense systems. Edition 1 was originally promulgated in 1997 and updated with Edition 2 in 2007. The first iteration of the Standardization Agreement was entitled Introduction of Allied Configuration Management Publications (ACMPs), and it called on ratifying nations to use seven NATO publications (ACMP 1-7) as the agreed upon contractual clauses for configuration management.
In 2010, NATO undertook to review and revise the STANAGs and ACMPs with two major assignments: make the NATO guidance useful and extend the guidance through the full project life cycle. This work resulted in the promulgation of STANAG 4427 Edition 3, Configuration Management in System Life Cycle Management, in 2014. As of 2017, it has been ratified by 19 nations.
Overview
With Edition 3, NATO published three new ACMPs: ACMP-2000, Policy on Configuration Management; ACMP-2009, Guidance on Configuration Management; and ACMP-2100, Configuration Management Contractual Requirements. This trio of publications uses a civil standard as the platform (ISO 10007), requires the acquirer to |
https://en.wikipedia.org/wiki/ARC%20Centre%20of%20Excellence%20in%20Future%20Low-Energy%20Electronics%20Technologies | The ARC Centre of Excellence in Future Low-Energy Electronics Technologies (or FLEET) is a collaboration of physicists, electrical engineers, chemists and material scientists from seven Australian universities developing ultra-low energy electronics aimed at reducing energy use in information technology (IT). The Centre was funded in the 2017 ARC funding round.
Aims
FLEET aims to develop a new generation of ultra-low resistance electronic devices, capitalising on Australian research in atomically thin materials, topological materials, exciton superfluids and nanofabrication.
Programmes
FLEET is pursuing three broad research themes to develop devices in which electrical current can flow without resistance:
Topological insulators: a relatively new class of materials, recognised by the 2016 Nobel Prize in Physics, topological insulators conduct electricity only along their edges, and strictly in one direction. This one-way path conducts electricity without loss of energy due to resistance. Approaches being used within FLEET to study topological materials include magnetic topological insulators and the quantum anomalous Hall effect (QAHE), topological Dirac semimetals (including oxide ‘antiperovskites’) and artificial topological systems (artificial graphene and 2D topological insulators).
Exciton superfluids: a quantum state known to achieve electrical current flow with minimal wasted dissipation of energy. FLEET aims to develop superfluid devices that operate at room temperature, without the need for expensive, energy-intensive cooling. Approaches being used within FLEET include exciton–polariton bosonic condensation in atomically thin materials, topologically-protected exciton–polariton flow, and exciton superfluids in twin-layer materials.
Light-transformed materials: a material can be temporarily forced into a new state by applying an intense light beam. FLEET aims to study the fundamental physics behind this temporary state change. Approaches being pursued |
https://en.wikipedia.org/wiki/Associated%20Signature%20Containers | Associated Signature Containers (ASiC) specifies the use of container structures to bind together one or more signed objects with either advanced electronic signatures or timestamp tokens into one single digital container.
Regulatory context
Under the eIDAS regulation, an associated signature container (ASiC) is a data container used to hold a group of file objects and digital signatures and/or time assertions associated with those objects. This data is stored in the ASiC in ZIP format.
European Commission Implementing Decision 2015/1506 of 8 September 2015 laid down specifications relating to formats of advanced electronic signatures and advanced seals to be recognised by public sector bodies pursuant to Articles 27 and 37 of the eIDAS-regulation. EU Member States requiring an advanced electronic signature or an advanced electronic signature based on a qualified certificate, shall recognise XML, CMS or PDF advanced electronic signature at conformance level B, T or LT level or using an associated signature container, where those signatures comply with the following technical specifications:
XAdES Baseline Profile - ETSI TS 103171 v.2.1.1.
CAdES Baseline Profile - ETSI TS 103173 v.2.2.1.
PAdES Baseline Profile - ETSI TS 103172 v.2.2.2.
Associated Signature Container Baseline Profile - ETSI TS 103174 v.2.2.1
The technical specification of ASiC has been updated and standardized since April 2016 by the European Telecommunications Standards Institute in the standard Associated Signature Containers (ASiC) (ETSI EN 319 162-1 V1.1.1, 2016-04), but this updated standard is not required by the European Commission Implementing Decision.
Structure
The internal structure of an ASiC includes two folders:
A root folder that stores all the container's content, which might include folders that reflect the structure of that content.
A “META-INF” folder that resides in the root folder and contains files that hold metadata about the content, incl |
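The layout described above can be sketched with Python's standard zipfile module. The file names and contents here are invented placeholders to show the folder structure, not a spec-compliant container.

```python
import io
import zipfile

# Build a toy ASiC-style container: signed content in the root folder,
# metadata (signature files) under META-INF. Entry names are illustrative.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # The mimetype entry is conventionally the first entry, uncompressed.
    z.writestr(zipfile.ZipInfo("mimetype"),
               "application/vnd.etsi.asic-e+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("document.pdf", b"%PDF- (placeholder content)")
    z.writestr("META-INF/signatures001.xml", "<asic:XAdESSignatures/>")

with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # mimetype first, then content, then META-INF
```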
https://en.wikipedia.org/wiki/Sentinus | Sentinus is an educational charity based in Lisburn, Northern Ireland, that provides educational programs for young people interested in science, technology, engineering and mathematics (STEM).
History
Northern Ireland produces around 2,000 qualified IT workers each year; there are around 16,000 IT jobs in the Northern Ireland economy.
Function
It works with EngineeringUK and the Council for the Curriculum, Examinations & Assessment (CCEA). It works with primary and secondary schools in Northern Ireland.
It runs summer placements and IT workshops for those of sixth-form age (16–18). It offers Robotics Roadshows for primary school children.
Sentinus Young Innovators
Sentinus hosts the annual Big Bang Northern Ireland Fair which incorporates Sentinus Young Innovators. This is a one-day science and engineering project exhibition for post-primary students. It is one of the largest such events in the United Kingdom. In 2019 over 3,000 students participated from 130 schools across both Northern Ireland and the Republic of Ireland.
The competition is affiliated with the International Science and Engineering Fair (ISEF) and the Broadcom MASTERS program. The overall winner represents Northern Ireland at the following year's ISEF.
Past Overall Winners
See also
Discover Science & Engineering, equivalent in the Republic of Ireland
Science Week Ireland
The Big Bang Fair
Young Scientist and Technology Exhibition
References
External links
Sentinus
Computer science education in the United Kingdom
Educational charities based in the United Kingdom
Educational organisations based in Northern Ireland
Engineering education in the United Kingdom
Engineering organizations
Learning programs in Europe
Mathematics education in the United Kingdom
Science and technology in Northern Ireland
Science events in the United Kingdom |
https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20de%20Chimie%20Industrielle%20%28American%20Section%29 | The Société de Chimie Industrielle (American Section) is an independent learned society inspired by the creation of the Société de Chimie Industrielle in Paris in 1917. The American Section was formed on January 18, 1918, and held its first meeting on April 4, 1918.
The Société de Chimie Industrielle (American Section) hosts speakers, grants scholarships, and gives awards. It has given the International Palladium Medal roughly every second year since 1961, and helps to award the Othmer Gold Medal and the Winthrop-Sears Medal every year. The Société also hosts monthly talks, and presents scholarships to writers, educators, and historians of science.
History
One of the first societies for chemists was the Society of Chemical Industry, founded in London in 1881. This inspired a number of other groups, including the Société de Chimie Industrielle in Paris, France. The French Société was modeled on the British organization in 1917.
A number of those active in forming the French Société were elected to its first set of officers, which included industrialist Paul Kestner as president,
vice-presidents Albin Haller and Henry Louis Le Châtelier, and
Jean Gérard as general secretary.
Creation of the French Société in turn inspired creation of a related American association in New York in 1918. This was part of an effort to rebuild international connections between individuals and institutions that had been disrupted during the First World War.
René Laurent Engel encouraged the re-establishment of ties between chemists in the two countries in his position as the scientific representative in a French Mission to the United States.
Victor Grignard of the University of Nancy also encouraged the creation of an American organization. A circular appealed to the Chemists and Manufacturers of America to "extend to our French fellow chemists and manufacturers our moral and financial support and the right hand of good fellowship."
The American section of the Société de Chim |
https://en.wikipedia.org/wiki/Action%E2%80%93domain%E2%80%93responder | Action–domain–responder (ADR) is a software architectural pattern that was proposed by Paul M. Jones as a refinement of Model–view–controller (MVC) that is better suited for web applications. ADR was devised to match the request-response flow of HTTP communications more closely than MVC, which was originally designed for desktop software applications. Similar to MVC, the pattern is divided into three parts.
Components
The action takes HTTP requests (URLs and their methods) and uses that input to interact with the domain, after which it passes the domain's output to one and only one responder.
The domain can modify state, interacting with storage and/or manipulating data as needed. It contains the business logic.
The responder builds the entire HTTP response from the domain's output which is given to it by the action.
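The three roles can be sketched in a few lines of Python (names are invented for illustration; ADR itself is language-agnostic):

```python
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    body: str

class Domain:
    """Business logic: the only place application state is read or changed."""
    def greeting_for(self, name: str) -> dict:
        return {"greeting": f"Hello, {name}!"}

class Responder:
    """Builds the entire HTTP response from the domain's output."""
    def respond(self, payload: dict) -> Response:
        return Response(status=200, body=payload["greeting"])

class Action:
    """Takes request input, invokes the domain, and hands the domain's
    output to one and only one responder."""
    def __init__(self, domain: Domain, responder: Responder) -> None:
        self.domain = domain
        self.responder = responder

    def __call__(self, request: dict) -> Response:
        payload = self.domain.greeting_for(request["name"])
        return self.responder.respond(payload)

action = Action(Domain(), Responder())
print(action({"name": "Ada"}).body)  # → Hello, Ada!
```

Note the one-way flow: the action never builds output itself, and the responder never reaches back into the domain; control passes action → domain → action → responder.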
Comparison to MVC
ADR should not be mistaken for a renaming of MVC; however, some similarities do exist.
The MVC model is very similar to the ADR domain. The difference is in behavior: in MVC, the view can send information to or modify the model, whereas in ADR, the domain only receives information from the action, not the responder.
In web-centric MVC, the view is merely used by the controller to generate the content of a response, which the controller could then manipulate before sending as output. In ADR, execution control passes to the responder after the action finishes interacting with the domain, and thus the responder is entirely responsible for generating all output. The responder can then use any view or template system that it needs to.
MVC controllers usually contain several methods that, when combined in a single class, require additional logic to handle properly, like pre- and post-action hooks. Each ADR action, however, is represented by separate classes or closures. In terms of behavior, the action interacts with the domain in the same way that the MVC controller interacts with the model, except that the action does not then int |
https://en.wikipedia.org/wiki/Floyd%20Mayweather%20Jr.%20vs.%20Conor%20McGregor | Floyd Mayweather Jr. vs. Conor McGregor, billed as The Money Fight and The Biggest Fight in Combat Sports History, was a professional boxing match between undefeated eleven-time five-division boxing world champion Floyd Mayweather Jr. and two-division mixed martial arts (MMA) world champion and, at the time, UFC Lightweight Champion Conor McGregor. The fight took place at T-Mobile Arena in Paradise, Nevada, on August 26, 2017, at the light middleweight limit (154 lbs; 69.9 kg). It was scheduled for twelve rounds and recorded the second highest pay-per-view buy rate in history, behind Mayweather vs. Pacquiao.
Mayweather extended his professional boxing undefeated streak to 50 victories and 0 defeats (50–0), surpassing the 49–0 record of Hall of Famer Rocky Marciano, after defeating McGregor by technical knockout (TKO) in the 10th round. Mayweather's guaranteed disclosed paycheck was $100 million and McGregor's guaranteed disclosed paycheck was $30 million. However, the purse for the two fighters was expected to be substantially higher for each, with Mayweather reportedly earning $280 million from the fight and McGregor earning $130 million.
Background
During his successful UFC mixed martial arts career McGregor maintained an interest in boxing and entertained the idea of a "money fight" with Mayweather. UFC president Dana White dismissed the rumors of a fight with Mayweather on The Dan Patrick Show, stating that Mayweather would have to contact him since McGregor was under contract with the UFC. White even went as far as stating, "Here's what I think the chances are [of the fight happening]: About the same of me being the backup quarterback for Brady on Sunday," referring to Super Bowl LI. In January 2017, it was reported that the two parties had entered an "exploratory phase" in negotiating a potential match between Mayweather and McGregor. On The Herd with Colin Cowherd, White openly offered to pay Mayweather $25 million to hold the proposed bout during a UFC e |
https://en.wikipedia.org/wiki/Communication%20in%20distributed%20software%20development | Communication in Distributed Software Development is an area of study that considers communication processes and their effects when applied to software development in a globally distributed development process. The importance of communication and coordination in software development is widely studied and organizational communication studies these implications at an organizational level. This also applies to a setting where teams and team members work in separate physical locations. The imposed distance introduces new challenges in communication, which is no longer a face to face process, and may also be subjected to other constraints such as teams in opposing time zones with a small overlap in working hours.
There are several reasons that force elements from the same project to work in geographically separated areas, ranging from different teams in the same company to outsourcing and offshoring, to which different constraints and necessities in communication apply. The added communication challenges result in the adoption of a wide range of different communication methods usually used in combination. They can either be in real time as in the case of a video conference, or in an asynchronous way such as email. While a video conference might allow the developers to be more efficient with regards to their time spent communicating, it is more difficult to accomplish when teams work in different time zones, in which case using an email or a messaging service might be more useful.
History
The history of communication in distributed software development is tied to the historical setting of distributed development itself. Communication tools helped in advancing the distributed development process, since communication was the principal missing component in early attempts at distributed software development. One of the main factors in the creation of new tools and making distributed development a viable methodology is the introduction of the Internet as an accessible pla |
https://en.wikipedia.org/wiki/Phasevarion | In bacteria, phasevarions (also known as phase variable regulons) mediate a coordinated change in the expression of multiple genes or proteins. This occurs via phase variation of a single DNA methyltransferase. Phase variation of methyltransferase expression results in differential methylation throughout the bacterial genome, leading to variable expression of multiple genes through epigenetic mechanisms.
Phasevarions have been identified in several mucosal-associated human-adapted pathogens, including Haemophilus influenzae, Neisseria meningitidis, Neisseria gonorrhoeae, Helicobacter pylori, Moraxella catarrhalis, and Streptococcus pneumoniae. All described phasevarions regulate expression of proteins that are involved in host colonization, survival, and pathogenesis, and many regulate putative vaccine targets. The presence of phasevarions complicates identification of stably expressed proteins, as the regulated genes do not contain any identifiable features. The only way to identify genes in a phasevarion is by detailed study of the organisms containing such systems. Study of phasevarions, and identification of the proteins they regulate, is therefore critical to generate effective and stable vaccines.
Phase variable DNA methyltransferases
Many of the phasevarions described to date are controlled by Type III methyltransferases. Mod genes are the methyltransferase component of type III restriction modification (R-M) systems in bacteria, and serve to protect host DNA from the action of the associated restriction enzyme. However, in many bacterial pathogens, mod genes contain simple sequence repeats (SSRs), and the associated restriction enzyme encoding gene (res) is inactive. In these organisms the DNA methyltransferase phase varies between two states (ON or OFF) by variation in the number of SSRs in the mod gene. Multiple different mod genes have been identified. Each Mod methylates a different DNA sequence in the genome. Methylation of unique DNA sequences r |
https://en.wikipedia.org/wiki/FlatBuffers | FlatBuffers is a free software library implementing a serialization format similar to Protocol Buffers, Thrift, Apache Avro, SBE, and Cap'n Proto, primarily written by Wouter van Oortmerssen and open-sourced by Google. It supports “zero-copy” deserialization, so that accessing the serialized data does not require first copying it into a separate part of memory. This makes accessing data in these formats much faster than data in formats requiring more extensive processing, such as JSON, CSV, and in many cases Protocol Buffers. Compared to other serialization formats, however, handling FlatBuffers usually requires more code, and some operations are not possible (such as some mutation operations).
The serialized format allows random access to specific data elements (e.g. individual string or integer properties) without parsing all data. Unlike Protocol Buffers, which uses variable length integers, FlatBuffers encodes integers in their native size, which favors performance but leads to longer encoded representations.
FlatBuffers can be used in software written in C++, C#, C, Go, Java, JavaScript, Kotlin, Lobster, Lua, PHP, Python, Rust, Swift, and TypeScript. The schema compiler runs on Android, Microsoft Windows, Mac OS X, and Linux, but games and other programs that use FlatBuffers for serialization work on many other operating systems as well, including iOS, Amazon's Fire OS, and Windows Phone.
Van Oortmerssen originally developed FlatBuffers for game development and similar applications.
Although FlatBuffers has its own interface definition language to define the data to be serialized with it, it also supports schemas defined in the Protocol Buffers .proto format.
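For illustration, a minimal schema in that IDL might look like the following (type and field names are invented for this example):

```
// monster.fbs -- illustrative FlatBuffers schema (names are made up)
namespace Example;

table Monster {
  name: string;        // variable-size fields are referenced via offsets
  hp: short = 100;     // scalars can declare defaults; fields left at the
  mana: short = 150;   // default take up no space in the serialized buffer
}

root_type Monster;
```

Running the schema compiler (flatc) over such a file generates accessor code for the supported languages listed above.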
Users
Some notable users of FlatBuffers:
Cocos2d-x, the popular free-software 2-D game programming library, uses FlatBuffers to serialize all of its game data.
Facebook Android Client uses FlatBuffers for disk storage and communication with Facebook servers. The previously used JSON format was perf |
https://en.wikipedia.org/wiki/Indonesian%20Small%20Islands%20Directory | Indonesian Small Islands Directory (Direktori Pulau-pulau Kecil Indonesia) is a web directory that lists small islands of Indonesia.
Established by Law 27 of 2007 (Undang-Undang 27 Tahun 2007), it covers islands of up to 2,000 km2 in area and their surrounding marine ecosystems.
The directory is managed by Direktorat Pendayagunaan Pulau-Pulau Kecil, Direktorat Jenderal Pengelolaan Ruang Laut, Kementerian Kelautan dan Perikanan, Republik Indonesia.
The database also is important in quantifying the long term task of verifying how many islands exist in Indonesia.
Difficulties in ascertaining information about the smaller islands of the Indonesian archipelago have been a serious long-term issue, and the database is part of governmental efforts to verify and resolve it.
See also
List of Indonesian islands by area
List of Indonesian islands by population
List of outlying islands of Indonesia
References
External links
Directory
Islands of Indonesia
Online databases |
https://en.wikipedia.org/wiki/ToaruOS | ToaruOS (also known as ToAruOS or とあるOS; 'toaru' is Japanese roughly equivalent to 'a certain') is a hobby operating system and kernel developed largely independently (notably contrary to most modern OSes, which are based on existing source code) by K. Lange. Despite a 1.0 version being released, Lange has stated that it is still 'incomplete', and may not be 'suitable for any purpose you might have for an operating system'. It is released under the permissive UIUC License, and supports 64-bit computer hardware with SMP.
Design and features
ToaruOS is programmed in C, and uses the Cairo graphics library. It has support for GCC, Python, and Simple DirectMedia Layer as well as many open-source utilities – including Vim. A package manager and basic window switcher are also included.
The kernel is a 'basic Unix-like environment'. It has a hybrid architecture, with internal and external device support being delegated to modules. Several filesystems are supported via this system, including ext2 and ISO 9660. Networking support is included, but is limited to simple IPv4 functionality. The userspace also has a window manager, Yutani (named after the Weyland-Yutani Corporation from the Alien franchise, and as a reference to the Wayland Display Server for Linux), with input support. It stores windows as shared memory regions with 32-bit colour, and uses pipes to communicate to other parts of the OS. Unusually, windows also support a rotation feature.
History
Development was started by creator K. Lange in December 2010; it initially was supported by the University of Illinois at Urbana–Champaign, but after the beginning of 2012, it largely shifted to being mostly done by Lange. Initially, it was based on tutorials for x86 kernels. The operating system was named after the A Certain Scientific Railgun series of manga, but Lange stated it also mirrors generic naming of other hobby OSes. A GUI was added with a window manager in 2012, this was replaced with a more advanced vers |
https://en.wikipedia.org/wiki/Software%20safety%20classification | Software installed in medical devices is assessed for health and safety issues according to international standards.
Safety classes
Software classification is based on the potential for hazard(s) that could cause injury to the user or patient.
Per IEC 62304:2006 + A1:2015, the software can be divided into three separate classes:
The SOFTWARE SYSTEM is software safety class A if:
the SOFTWARE SYSTEM cannot contribute to a HAZARDOUS SITUATION; or
the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which does not result in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM.
The SOFTWARE SYSTEM is software safety class B if:
the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which results in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM and the resulting possible HARM is non-SERIOUS INJURY.
The SOFTWARE SYSTEM is software safety class C if:
the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which results in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM and the resulting possible HARM is death or SERIOUS INJURY.
Serious injury
For the purpose of this classification, serious injury is defined as injury or illness that directly or indirectly is life threatening; results in permanent impairment of a body function or permanent damage to a body structure; or necessitates medical or surgical intervention to prevent permanent impairment of a body function or permanent damage to a body structure.
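The three-way decision above can be restated as a small helper function. This is an informal sketch only, not a substitute for the standard's definitions and defined terms.

```python
def software_safety_class(contributes_to_hazardous_situation: bool,
                          unacceptable_risk_after_external_controls: bool,
                          harm_is_death_or_serious_injury: bool) -> str:
    """Informal restatement of the IEC 62304 class A/B/C decision logic."""
    if not contributes_to_hazardous_situation:
        return "A"  # cannot contribute to a hazardous situation
    if not unacceptable_risk_after_external_controls:
        return "A"  # external risk control measures make the risk acceptable
    if harm_is_death_or_serious_injury:
        return "C"  # possible harm is death or serious injury
    return "B"      # possible harm is non-serious injury

print(software_safety_class(True, True, False))  # → B
```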
References
Software
Occupational safety and health |
https://en.wikipedia.org/wiki/Authenticated%20Received%20Chain | Authenticated Received Chain (ARC) is an email authentication system designed to allow an intermediate mail server like a mailing list or forwarding service to sign an email's original authentication results. This allows a receiving service to validate an email when the email's SPF and DKIM records are rendered invalid by an intermediate server's processing.
ARC is defined in RFC 8617, published in July 2019, as "Experimental".
Overview
DMARC allows a sender's domain to indicate that their emails are protected by SPF and/or DKIM, and tells a receiving service what to do if neither of those authentication methods passes, such as rejecting the message. However, a strict DMARC policy may block legitimate emails sent through a mailing list or forwarder, as the DKIM signature will be invalidated if the message is modified, such as by adding a subject tag or footer, and the SPF check will either fail (if the forwarder didn't change the bounce address) or be aligned with the mailing list domain and not with the message author's domain (unless the mailing list rewrites the From: header field).
ARC was devised to solve this problem by giving intermediate servers a way to sign the original message's validation results. Even if the SPF and DKIM validation fail, the receiving service can choose to validate the ARC chain. If it indicates that the original message passed the SPF and DKIM checks, and the only modifications were made by intermediaries trusted by the receiving service, the receiving service may choose to accept the email. Validating an ARC chain only makes sense if the receiver trusts the ARC signers. In fact, an ARC chain can be counterfeited, so ARC processing applies when receivers trust the good faith of ARC signers, but not so much their filtering practices.
Implementation
ARC defines three new mail headers:
ARC-Authentication-Results (abbreviated AAR) - A combination of an instance number (i) and the results of the SPF, DKIM, and DMARC validation
AR |
https://en.wikipedia.org/wiki/Photo-induced%20cross-linking%20of%20unmodified%20proteins | Photo-Induced Cross-Linking of Unmodified Proteins (PICUP) is a protein cross-linking method by visible light irradiation of a photocatalyst in the presence of an electron acceptor and the protein of interest. Irradiation results in a highly reactive protein radical that forms a covalent bond between the amino acid side chains of the proteins to be linked. Cross-linking methods developed prior to PICUP, including the use of physical, oxidative, and chemical cross-linkers, often require more time and result in protein byproducts. In addition, the cross-linked protein yield is very low due to the multifunctionality of the cross-linking reagents.
The process was invented (US6613582B1) in 1999 to utilize protein cross-linking techniques to analyze the interactions between polypeptides as well as structural differences proteins undergo in a catalytic pathway. The techniques in the 20th century were not sufficient to be applied to cross-link fast and transient changes of these proteins in high yield. PICUP allowed for rapid (<1 second) and high production of covalently-linked proteins in close proximity with each other.
History
Fancy and Kodadek
Fancy and Kodadek's invention of PICUP in 1999 was the first time protein cross-linking could be performed in such a short period of time (1 second) and without modifying the structure of the proteins in question. Additionally, PICUP can be performed at the physiological pH of 7.4, which opened doors for further applications of protein cross-linking, such as studying the biochemical mechanisms that proteins participate in within the human body. In addition, irradiation by visible light in PICUP is useful because many biomolecules that participate in the metabolic pathways to be analyzed do not absorb light with wavelengths below the range for UV light, allowing for cross-links without denaturation.
Bitan, Lomakin, and Teplow
In 2001, Gal Bitan, Aleksey Lomakin, and David B. Teplow applied PICUP to study amyloid β-protein (Aβ) |
https://en.wikipedia.org/wiki/Ventura%20International | Ventura International (or VENTURA_INT) is an 8-bit character encoding created by Ventura Software for use with Ventura Publisher. Ventura International is based on the GEM character set, but with ¢ and ø swapped and ¥ and Ø swapped, making it more similar to code page 437 (on which GEM was based; GEM itself is closer to code page 865, since the placement of Ø and ø in GEM matches their placement in code page 865). There is also PCL Ventura International, used for communication with PCL printers and based on HP Roman-8. Both encodings share the same character set but assign different byte values.
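Because the two encodings share a character repertoire but differ only in byte assignments, conversion reduces to a per-byte table lookup. A minimal sketch in Python, using only the first few entries of the 0x80 row of the conversion table below (a complete converter would include all 90 mapped upper-half bytes; the function name and error handling are illustrative assumptions):

```python
# Sketch: convert Ventura International bytes to PCL Ventura International.
# The lower half (0x00-0x7F) is identical in both encodings; the upper half
# is remapped via a lookup table. Only the start of the 0x80 row is shown.
UPPER_MAP = {
    0x80: 0xB4, 0x81: 0xCF, 0x82: 0xC5, 0x83: 0xC0,
    0x84: 0xCC, 0x85: 0xC8, 0x86: 0xD4, 0x87: 0xB5,
}

def ventura_to_pcl(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b < 0x80:
            out.append(b)              # lower half passes through unchanged
        elif b in UPPER_MAP:
            out.append(UPPER_MAP[b])   # upper half goes through the table
        else:
            raise ValueError(f"no mapping for byte 0x{b:02X}")
    return bytes(out)
```

The reverse conversion would simply invert the dictionary, since the mapping is one-to-one.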
Ventura International character set
PCL Ventura International character set
Conversion tables
{| class="wikitable chset nounderlines" frame="box" width="50%" style="text-align:center; font-family:monospace; border-collapse:collapse; background:#FFFFEF"
|-
|colspan="17"| Ventura International → PCL Ventura International (upper half only; lower half is identical)
|- bgcolor=#EFF3FF
|bgcolor=#BFBFBF| ||_0||_1||_2||_3||_4||_5||_6||_7||_8||_9||_A||_B||_C||_D||_E||_F
|-
|bgcolor=#EFF3FF|8_||B4||CF||C5||C0||CC||C8||D4||B5||C1||CD||C9||DD||D1||D9||D8||D0
|-
|bgcolor=#EFF3FF|9_||DC||D7||D3||C2||CE||CA||C3||CB||EF||DA||DB||BF||BB||BC||BA||BE
|-
|bgcolor=#EFF3FF|A_||C4||D5||C6||C7||B7||B6||F9||FA||B9||B1||B2||AB||AC||B8||FB||FD
|-
|bgcolor=#EFF3FF|B_||E2||EA||D2||D6||F1||F0||A1||E1||E9||BD||F4||F3||F2||A8||A9||AA
|-
|bgcolor=#EFF3FF|C_||A0||FF||B0||FC||F6||F5||B3||E0||A2||A3||A4||A5||E6||E5||A6||A7
|-
|bgcolor=#EFF3FF|D_||E8||E7||DF||EB||EC||AD||ED||AE||EE||DE||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF|
|-
|bgcolor=#EFF3FF|E_||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| ||bgcolor=#BFBFBF| | |
https://en.wikipedia.org/wiki/Relative%20cycle | In algebraic geometry, a relative cycle is a type of algebraic cycle on a scheme. In particular, let X be a scheme of finite type over a Noetherian scheme S, so that there is a projection X → S. Then a relative cycle is a cycle on X which lies over the generic points of S, such that the cycle has a well-defined specialization to any fiber of the projection X → S.
The notion was introduced by Andrei Suslin and Vladimir Voevodsky in 2000; the authors were motivated to overcome some of the deficiencies of sheaves with transfers.
References
Appendix 1A of
Algebraic geometry |
https://en.wikipedia.org/wiki/Mott%E2%80%93Schottky%20equation | The Mott–Schottky equation relates the capacitance to the applied voltage across a semiconductor-electrolyte junction.
1/C² = (2 / (ε ε₀ A² e N_d)) (V − V_fb − k_B T / e)

where C is the differential capacitance, ε is the dielectric constant of the semiconductor, ε₀ is the permittivity of free space, A is the area such that the depletion region volume is W A, e is the elementary charge, N_d is the density of dopants, V is the applied potential, V_fb is the flat band potential, k_B is the Boltzmann constant, and T is the absolute temperature.
This theory predicts that a Mott–Schottky plot (1/C² versus V) will be linear. The doping density N_d can be derived from the slope of the plot (provided the area and dielectric constant are known). The flat band potential can be determined as well: absent the temperature term, the line would cross the V-axis at the flat band potential V_fb.
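As a worked illustration of extracting both quantities from the line, the sketch below synthesizes an ideal Mott–Schottky line (temperature term omitted) and inverts slope and intercept. All numeric values are assumptions for the example, not taken from the article:

```python
# Hypothetical example: recover dopant density and flat band potential
# from the slope and intercept of an ideal Mott-Schottky line,
# 1/C^2 = slope * V + intercept. Values below are illustrative assumptions.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 10.0          # assumed relative dielectric constant of the semiconductor
q = 1.602e-19         # elementary charge, C
A = 1e-4              # assumed electrode area, m^2 (1 cm^2)

Nd_true = 1e24        # assumed dopant density, m^-3
Vfb_true = -0.4       # assumed flat band potential, V

def inv_C2(V):
    # Ideal Mott-Schottky relation with the k_B T / e term omitted
    return 2.0 * (V - Vfb_true) / (eps_r * EPS0 * q * Nd_true * A**2)

# Slope and intercept from two points on the (noise-free) line
V1, V2 = 0.0, 1.0
slope = (inv_C2(V2) - inv_C2(V1)) / (V2 - V1)
intercept = inv_C2(V1) - slope * V1

# slope = 2 / (eps_r * EPS0 * q * Nd * A^2)  =>  solve for Nd
Nd_est = 2.0 / (eps_r * EPS0 * q * A**2 * slope)

# The line crosses 1/C^2 = 0 at V = Vfb
Vfb_est = -intercept / slope
```

With experimental data one would replace the two-point slope with a least-squares fit over the linear region of the plot.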
Derivation
Under an applied potential V, the width W of the depletion region is

W = √(2 ε ε₀ (V − V_fb) / (e N_d))
Using the abrupt approximation, all charge carriers except the ionized dopants have left the depletion region, so the charge density in the depletion region is e N_d, and the total charge of the depletion region, compensated by opposite charge nearby in the electrolyte, is

Q = e N_d A W = A √(2 ε ε₀ e N_d (V − V_fb))
Thus, the differential capacitance is

C = dQ/dV = A √(ε ε₀ e N_d / (2 (V − V_fb)))
which is equivalent to the Mott–Schottky equation, save for the temperature term. In fact the temperature term arises from a more careful analysis, which takes statistical mechanics into account by abandoning the abrupt approximation and solving the Poisson–Boltzmann equation for the charge density in the depletion region.
References
Equations |
https://en.wikipedia.org/wiki/Quotient%20space%20of%20an%20algebraic%20stack | In algebraic geometry, the quotient space of an algebraic stack F, denoted by |F|, is a topological space which as a set is the set of all integral substacks of F and which is then given a "Zariski topology": an open subset has the form |U| for some open substack U of F.
The construction is functorial; i.e., each morphism F → G of algebraic stacks determines a continuous map |F| → |G|.
An algebraic stack X is punctual if |X| is a point.
When X is a moduli stack, the quotient space |X| is called the moduli space of X. If X → Y is a morphism of algebraic stacks that induces a homeomorphism |X| → |Y|, then Y is called a coarse moduli stack of X. ("The" coarse moduli stack requires a universal property.)
References
H. Gillet, Intersection theory on algebraic stacks and Q-varieties, J. Pure Appl. Algebra 34 (1984), 193–240, Proceedings of the Luminy conference on algebraic K-theory (Luminy, 1983).
Algebraic geometry |
https://en.wikipedia.org/wiki/Src%3ACard | Src:Card is a 1–2 player card game where players attempt to defeat the robotic core of an opponent's battle robot by writing code. The game is designed around a rudimentary Src:Card programming language which encapsulates much of imperative procedural programming, based on academic research developed at the Universities of Auckland and Otago. The game's language replicates conditional flow, loops, and other control structures as well as basic algorithmic logic. While it contains many of the hallmarks of a Turing-complete language (such as conditional branching), the game would require a larger function set to qualify as a Turing-complete imperative language.
Launched in 2015, the card game was one of Malaysia's first successful Kickstarter projects. The game has received press coverage from most board gaming news outlets. The game is currently being used extensively by Malaysian Coder Dojos to teach basic programming. Src:Card is currently a free and open download. Players can print and play Src:Card and use open assets to modify the game.
References
External links
srccard.com
Src:Card at BoardGameGeek
Dedicated deck card games
Programming games
Kickstarter-funded tabletop games |