https://en.wikipedia.org/wiki/Nullspace%20property | In compressed sensing, the nullspace property gives necessary and sufficient conditions on the reconstruction of sparse signals using the techniques of ℓ1-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore. The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing.
The technique of ℓ1-relaxation
The non-convex ℓ0-minimization problem,
minimize ||x||_0 subject to Ax = b,
is a standard problem in compressed sensing. However, ℓ0-minimization is known to be NP-hard in general. As such, the technique of ℓ1-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the ℓ0-norm. In ℓ1-relaxation, the problem
minimize ||x||_1 subject to Ax = b
is solved in place of the ℓ0 problem. Note that this relaxation is convex and hence amenable to the standard techniques of linear programming - a computationally desirable feature. Naturally we wish to know when ℓ1-relaxation will give the same answer as the ℓ0 problem. The nullspace property is one way to guarantee agreement.
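As an illustration of the linear-programming reformulation, the ℓ1 problem can be solved by splitting x into positive and negative parts. The following is a sketch only, not from the source; the measurement matrix and sparse signal are made-up examples:

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Basis pursuit: solve min ||x||_1 subject to Ax = b as a linear
    program by writing x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])         # equality constraint: A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Illustration (hypothetical data): 4 Gaussian measurements of a
# 1-sparse length-6 signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x0 = np.zeros(6)
x0[2] = 3.0
b = A @ x0
x_hat = l1_minimize(A, b)
```

Since x0 is itself feasible, the LP optimum always satisfies ||x_hat||_1 ≤ ||x0||_1; when the nullspace property holds, x_hat coincides with x0.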
Definition
An m × n complex matrix A has the nullspace property of order s if, for all index sets S ⊆ {1, ..., n} with |S| ≤ s, we have ||η_S||_1 < ||η_{S^c}||_1 for all non-zero η in the nullspace of A (here η_S denotes the restriction of η to the indices in S).
Recovery Condition
The following theorem gives necessary and sufficient conditions on the recoverability of a given s-sparse vector in C^n. The proof of the theorem is a standard one, and the proof supplied here is summarized from Holger Rauhut.
Let A be an m × n complex matrix. Then every s-sparse signal x in C^n is the unique solution to the ℓ1-relaxation problem with b = Ax if and only if A satisfies the nullspace property of order s.
For the forwards direction notice that η_S and −η_{S^c} are distinct vectors with A(η_S) = A(−η_{S^c}) by the linearity of A, and hence by uniqueness we must have ||η_S||_1 < ||η_{S^c}||_1 as desired. For the backwards direction, let x be s-sparse and z another (not necessarily s-sparse) vector such that z ≠ x and Az = Ax. Define the (non-zero) vector η = x − z and notice that it lies in the nullspace of A. Call S the support of x, and then |
https://en.wikipedia.org/wiki/Order%20polynomial | The order polynomial is a polynomial studied in mathematics, in particular in algebraic graph theory and algebraic combinatorics. The order polynomial counts the number of order-preserving maps from a poset to a chain of length n. These order-preserving maps were first introduced by Richard P. Stanley while studying ordered structures and partitions as a Ph.D. student at Harvard University in 1971 under the guidance of Gian-Carlo Rota.
Definition
Let P be a finite poset with p elements denoted x_1, ..., x_p, and let [n] be a chain of n elements. A map φ: P → [n] is order-preserving if x ≤ y in P implies φ(x) ≤ φ(y). The number of such maps grows polynomially with n, and the function that counts their number is the order polynomial Ω(n) = Ω(P, n).
Similarly, we can define an order polynomial that counts the number of strictly order-preserving maps φ: P → [n], meaning x < y in P implies φ(x) < φ(y). The number of such maps is the strict order polynomial Ω°(n) = Ω°(P, n).
Both Ω(n) and Ω°(n) have degree p. The order-preserving maps generalize the linear extensions of P, the order-preserving bijections P → [p]. In fact, the leading coefficient of Ω and Ω° is the number of linear extensions divided by p!.
Examples
Letting P be a chain of p elements, we have Ω(n) = C(n + p − 1, p) and Ω°(n) = C(n, p). There is only one linear extension (the identity mapping), and both polynomials have leading term n^p/p!.
Letting P be an antichain of p incomparable elements, we have Ω(n) = Ω°(n) = n^p. Since any bijection P → [p] is (strictly) order-preserving, there are p! linear extensions, and both polynomials reduce to the leading term n^p.
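The counts in these examples can be checked by brute force. The sketch below is an illustration, not part of the source; `order_poly` is a hypothetical helper that enumerates all maps into the chain:

```python
from itertools import product

def order_poly(relations, p, n, strict=False):
    """Brute-force count of (strictly) order-preserving maps from a
    p-element poset into the chain {1, ..., n}.  The poset is given as
    a list of comparable pairs (a, b), meaning a < b in the poset."""
    ok = (lambda u, v: u < v) if strict else (lambda u, v: u <= v)
    return sum(
        all(ok(f[a], f[b]) for a, b in relations)
        for f in product(range(1, n + 1), repeat=p)
    )

chain = [(0, 1), (1, 2)]   # the 3-element chain x0 < x1 < x2
n = 4
omega_chain = order_poly(chain, 3, n)               # C(n+2, 3) = 20
omega_strict_chain = order_poly(chain, 3, n, True)  # C(n, 3)   = 4
omega_antichain = order_poly([], 3, n)              # n^3       = 64
```

For the 3-element chain with n = 4, the weakly increasing triples number C(6, 3) = 20 and the strictly increasing ones C(4, 3) = 4, matching the binomial formulas above.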
Reciprocity theorem
There is a relation between strictly order-preserving maps and order-preserving maps: Ω°(n) = (−1)^p Ω(−n). In the case that P is a chain, this recovers the negative binomial identity. There are similar results for the chromatic polynomial and Ehrhart polynomial (see below), all special cases of Stanley's general Reciprocity Theorem.
Connections with other counting polynomials
Chromatic polynomial
The chromatic polynomial P(G, n) counts the number of proper colorings of a finite graph G with n available colors. For an acyclic orientation of the edges |
https://en.wikipedia.org/wiki/Molybdovanadate%20reagent | The molybdovanadate reagent is a solution containing both the molybdate and vanadate ions. It is commonly used in the determination of phosphate ion content. The reagent used is ammonium molybdovanadate with the addition of 70% perchloric acid (sulfuric acid is also known to be used). It is used for purposes such as the analysis of wine, canned fruits and other fruit-based products such as jams and syrups.
Physical properties
The reagent appears as a clear, yellow liquid without odour. It is harmful if inhaled, is a recognised carcinogen, and can cause eye burns.
References
Reagents
Analytical chemistry
Food science |
https://en.wikipedia.org/wiki/Diplostigmaty | Diplostigmaty refers, in botany, to the presence of extra stigmas along the style. This condition is known from the genus Sebaea. It is thought to provide reproductive assurance.
References
Botany |
https://en.wikipedia.org/wiki/QSK%20operation%20%28full%20break-in%29 | In CW Morse code operations, QSK or full break-in operation describes an operating mode in which the transmitting station can detect signals from other stations between the elements (dots and dashes) or letters of the Morse transmission. This allows other stations to interrupt the transmitting station between individual coding elements, and thus allows for a conversational style of communication.
"QSK" is one of the Q-code signals established for radiotelegraph operators in the first decade of the 1900s. The three letter code "QSK" literally means "I can hear you between my signals; you may break in on my transmission." Although Morse code is no longer used for commercial or professional purposes, it continues to be used in amateur radio.
In QSK or full break-in operation the silent periods between the Morse code dits and dahs enable operators to listen between their transmitted signals for the signals of the counterpart, thus enabling a conversational style of communication. This is especially useful in high-speed telegraphy.
Signals, silent periods and symbols
Morse code has silent periods between its symbol elements (dots and dashes), letters, words, and sentences. These silent periods provide the sending operator with opportunities to listen for interruptions from receiving stations. Whereas in usual CW operation the sending carrier wave is always on, and only gated to the antenna, in QSK operation the antenna is switched to receiver status in the off-time between dits and dahs, and then switched right back. This lets either side of the conversation hear the other one transmitting during the off-times of Morse code, and thus enables the other end to readily break back into one's transmission.
QSK transmit/receive (T/R) switch operation
QSK operation is a technical challenge: It requires very fast T/R RF switches at the high power and voltage side of the radio transceiver. Such switches must be controlled automatically by the telegraph key, and as such they |
https://en.wikipedia.org/wiki/Design%20sprint | A design sprint is a time-constrained, five-phase process that uses design thinking with the aim of reducing the risk when bringing a new product, service or a feature to the market. The process aims to help teams to clearly define goals, validate assumptions and decide on a product roadmap before starting development. It seeks to address strategic issues using interdisciplinary expertise, rapid prototyping, and usability testing. This design process is similar to Sprints in an Agile development cycle.
How it started
There are multiple origins to the concept of mixing Agile and Design Thinking.
The most popular was developed by a multi-disciplinary team working out of Google Ventures. The initial iterations of the approach were created by Jake Knapp, and popularised by a series of blog articles outlining the approach and reporting on its successes within Google. As it gained industry recognition, the approach was further refined and added to by other Google staff including Braden Kowitz, Michael Margolis, John Zeratsky and Daniel Burka.
It was later described in a book published by Google Ventures.
Possible uses
Claimed uses of the approach include
Launching a new product or a service.
Extending an existing experience to a new platform.
Revising the user experience design and/or UI design of an existing MVP.
Adding new features and functionality to a digital product.
Opportunities for improvement of a product (e.g. a high rate of cart abandonment)
Opportunities for improvement of a service.
Supporting organizations in their transformation towards new technologies (e.g., AI).
Phases
The creators of the design sprint approach recommend preparing by picking the proper team, environment, materials and tools, and working with six key 'ingredients'.
Understand: Discover the business opportunity, the audience, the competition, the value proposition, and define metrics of success.
Diverge: Explore, develop and iterate creative ways of solving the pr |
https://en.wikipedia.org/wiki/Kali%20NetHunter | Kali NetHunter is a free and open-source mobile penetration testing platform for Android devices, based on Kali Linux. Kali NetHunter is available for un-rooted devices (NetHunter Rootless), for rooted devices that have a standard recovery (NetHunter Lite), and for rooted devices with custom recovery for which a NetHunter specific kernel is available (NetHunter). Official images are published by Offensive Security on their download page and are updated every quarter. NetHunter images with custom kernels are published for the most popular supported devices, such as Google Nexus, Samsung Galaxy and OnePlus. Many more models are supported, and images not published by Offensive Security can be generated using NetHunter build scripts. Kali NetHunter is maintained by a community of volunteers, and is funded by Offensive Security.
Background and history
Version 1.1 was released in January 2015 and added support for OnePlus devices and non-English keyboard layouts for HID attacks.
Version 1.2 was released in May 2015 and added support for Nexus 9 Android tablets.
Version 3.0 was released in January 2016 after a major rewrite of the application, installer, and kernel building framework. This version also introduced support for devices running Android Marshmallow.
Version 2019.2 was released in May 2019 and switched to kali-rolling as its Kali Linux container. It adopted the Kali Linux versioning and release cycle to reflect that change. With this release, the number of supported Android devices grew to over 50.
Version 2019.3 was released in September 2019 and introduced the NetHunter App Store as the default mechanism for deploying and updating apps.
Version 2019.4 was released in December 2019 and premiered the "Kali NetHunter Desktop Experience."
Before December 2019, Kali NetHunter was only available for selected Android devices. Installing Kali NetHunter required a device that:
is rooted
has a custom recovery
has a kernel built especially for Kali NetHunter
|
https://en.wikipedia.org/wiki/Statistical%20disclosure%20control | Statistical disclosure control (SDC), also known as statistical disclosure limitation (SDL) or disclosure avoidance, is a technique used in data-driven research to ensure no person or organization is identifiable from the results of an analysis of survey or administrative data, or in the release of microdata. The purpose of SDC is to protect the confidentiality of the respondents and subjects of the research.
SDC usually refers to 'output SDC': ensuring that, for example, a published table or graph does not disclose confidential information about respondents. SDC can also describe protection methods applied to the data: for example, removing names and addresses, limiting extreme values, or swapping problematic observations. This is sometimes referred to as 'input SDC', but is more commonly called anonymization, de-identification, or microdata protection.
Textbooks typically cover input SDC and tabular data protection (but not other parts of output SDC). This is because these two problems are of direct interest to the statistical agencies which supported the development of the field. For analytical environments, output rules developed for statistical agencies were generally used until data managers began arguing for specific output SDC for research.
Necessity
Many kinds of social, economic and health research use potentially sensitive data as a basis for their research, such as survey or Census data, tax records, health records, educational information, etc. Such information is usually given in confidence, and, in the case of administrative data, not always for the purpose of research.
Researchers are not usually interested in information about one single person or business; they are looking for trends among larger groups of people. However, the data they use is, in the first place, linked to individual people and businesses, and SDC ensures that these cannot be identified from published data, no matter how detailed or broad.
It is possible that at the end of |
https://en.wikipedia.org/wiki/Mida%20%28website%29 | Mida () is an Israeli current affairs and opinion online magazine self-identifying with classical and conservative liberalism and the national-liberal Right, targeting a readership that is secular and right-wing in both the political and the economic sense of the term, comparable to the US Republican Party of 2013, with a "realist position" on security issues.
History and profile
Mida was launched by Ran Baratz and El Haprat, a nonprofit organization financed by the New York-based Tikvah Fund, chaired by Roger Hertog.
Some of its articles had been published on the HaAyin HaShevi'it Internet site.
In 2017, the Nrg.co.il website posted the findings of Akiva Bigman, at that point reporting under the logos of both Israel Hayom and Mida, on the Umm al-Hiran incident, which led to the deaths of a Bedouin teacher and a policeman.
Contributors
Yehuda Harel
Amnon Lord
Erel Segal
Daniel Seaman
See also
List of online magazines
Media of Israel
References
External links
Archive Articles at the official website
2012 establishments in Israel
Conservatism in Israel
Conservative magazines
English-language websites
Hebrew-language websites
Magazines published in Israel
Israeli news websites
Israeli political websites
Magazines established in 2012
Mass media in Jerusalem
Multilingual news services
Multilingual websites
Online magazines |
https://en.wikipedia.org/wiki/ZX81%20character%20set | The ZX81 character set is the character encoding used by the Sinclair Research ZX81 family of microcomputers including the Timex Sinclair 1000 and Timex Sinclair 1500. The encoding uses one byte per character for 256 code points. It has no relationship with previously established encodings such as ASCII or EBCDIC, but it is related, though not identical, to the character set of the predecessor ZX80.
Printable characters
The character set has 64 unique glyphs present at code points 0–63. With the most significant bit set the character is generated in inverse video; corresponding to code points 128–191. These 128 values are the only displayable ones allowed in the video memory (known as the display file). The remaining code points (64–127 and 192–255) are used as control characters such as 118 for newline or, uniquely to Sinclair BASIC, for keywords, while some are unused.
The small effective range of only 64 unique glyphs precludes support for Latin lower case letters, and many symbols used widely in computing such as the exclamation point and the at sign. The lack of an apostrophe led some software authors to use a comma instead.
There are 11 block graphics characters, counting code point 0, which also doubles as space. The first 8 of these together with their 8 inverse video versions (16 code points) provide every combination of the character cell divided into 2×2 black-and-white block pixels for low-resolution 64×48 pixel graphics. These 2×2 blocks are present in the Block Elements Unicode block. An additional 3 characters provide a cell divided into 1×2 black, white or dithered gray wide block pixels. These, in combination with their inverse video versions and some of the previous 2×2 blocks, provide for a 32×48 resolution with 3 levels (white, dithered gray, black). The basic 11 characters plus their inverse video versions make for 22 block graphics characters in total. The dithered characters (of which there are 6) are also available in Unicode (mostly in the Symbol
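The relationship between a glyph and its inverse-video form amounts to toggling the most significant bit, as the code-point layout above describes. A minimal sketch of that rule (the helper names are illustrative, not from any ZX81 reference):

```python
def is_displayable(code):
    """Only code points 0-63 (normal glyphs) and 128-191 (their
    inverse-video forms) may appear in the ZX81 display file;
    64-127 and 192-255 are control codes, BASIC keywords, or unused."""
    return 0 <= code <= 63 or 128 <= code <= 191

def inverse_video(code):
    """Toggle bit 7 to map a displayable glyph to its inverse-video
    counterpart (and back)."""
    if not is_displayable(code):
        raise ValueError(f"{code} is not a displayable code point")
    return code ^ 0x80

normal_to_inverse = {c: inverse_video(c) for c in range(64)}
```

For example, code point 0 (space) maps to 128 (inverse-video space), and applying the toggle twice returns the original glyph.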
https://en.wikipedia.org/wiki/Havis%2C%20Inc. | Havis, Inc. is an American manufacturer of in-vehicle mobile office and prisoner transport products for private and public corporate, military and law enforcement, and enterprise fleets. Founded in Philadelphia and now headquartered in Warminster, Pennsylvania, Havis serves numerous industries including, but not limited to: public safety, military and government, utility and public works, energy services, transportation, material handling, and other mobile professions. Along with its factory in Warminster, Havis also operates a satellite office in Plymouth, Michigan.
History
Havis began in 1928 as the Havis-Shields Equipment Corporation, named for founders Dan Havis and Jim Shields. Havis-Shields supplied heavy duty automotive and electrical equipment to police departments and other areas of the public sector.
By the early 1980s, the company had branched into manufacturing, selling high-intensity scene lighting to the public safety sector. In 2009, the company announced a merger with LEDCO-ChargeGuard. Havis-Shields was renamed to Havis, Inc.
In 2011, Havis announced the sale of its Lighting Solutions product line of Kwik-Raze, Magnafire, Collins Dynamics, and Quester to R-O-M Corporation in order to focus more on its current product lines of docking stations, prisoner and K9 transport, and integrated control systems.
In 2014, Havis acquired long-time partner, Schlotter Precision Products, a manufacturer of plastic parts and molds, and formed Havis Molding.
Havis has been at its Warminster headquarters since 2002, with an on-site expansion completed in the summer of 2016.
Corporate overview
Havis is led by CEO Joe Bernert, his team of Executive Directors, and a number of small teams that focus on marketing, customer service, engineering, ISO certification, and other fields. Havis employs more than 300 workers who build and install company products.
Markets served
Havis, Inc. serves several different industries.
Public Safety
Military and Government
Uti |
https://en.wikipedia.org/wiki/Our%20World%20in%20Data | Our World in Data (OWID) is a scientific online publication that focuses on large global problems such as poverty, disease, hunger, climate change, war, existential risks, and inequality.
It is a project of the Global Change Data Lab, a registered charity in England and Wales, and was founded by Max Roser, a social historian and development economist. The research team is based at the University of Oxford. The organisation is chaired by Hetan Shah.
Content
Our World in Data uses interactive charts and maps to illustrate research findings, often taking a long-term view to show how global living conditions have changed over time.
History
Roser began his work on the project in 2011, adding a research team at the University of Oxford later on. In the first years, Roser developed the publication together with inequality researcher Sir Tony Atkinson. Hannah Ritchie joined in 2017 and became Head of Research. Edouard Mathieu joined in 2020 and became Head of Data. The organization began the COVID-19 pandemic with six staff members, and grew to 20 by late 2021.
In 2019, Our World in Data won the Lovie Award, a European web award, and was one of three nonprofit organizations in Y Combinator's Winter 2019 cohort.
Beginning in 2020, Our World in Data added an emphasis on publishing global data and research on the COVID-19 pandemic:
They created and maintained a worldwide database on vaccinations for COVID-19, which was used as the source for data published by the World Health Organization, researchers and other international organizations, journals, and numerous newspapers.
Similarly, the team built and maintained a global dataset on COVID-19 testing which was used by the United Nations, the White House, the World Health Organization, and epidemiologists and researchers, and also published data such as hospitalizations and computations of excess deaths.
In 2021 the team began campaigning for the International Energy Agency to make the data it collects from nati |
https://en.wikipedia.org/wiki/Wild%20problem | In the mathematical areas of linear algebra and representation theory, a problem is wild if it contains the problem of classifying pairs of square matrices up to simultaneous similarity. An example of a wild problem is classifying indecomposable representations of any quiver that is neither a Dynkin quiver (i.e. the underlying undirected graph of the quiver is a (finite) Dynkin diagram) nor a Euclidean quiver (i.e. the underlying undirected graph of the quiver is an affine Dynkin diagram).
Necessary and sufficient conditions have been proposed to check the simultaneous block triangularization and diagonalization of a finite set of matrices, under the assumption that each matrix is diagonalizable over the field of complex numbers.
See also
Semi-invariant of a quiver
References
Linear algebra
Representation theory |
https://en.wikipedia.org/wiki/Consensus%20estimate | Consensus estimate is a technique for designing truthful mechanisms in a prior-free mechanism design setting. The technique was introduced for digital goods auctions and later extended to more general settings.
Suppose there is a digital good that we want to sell to a group of buyers with unknown valuations. We want to determine the price that will bring us maximum profit. Suppose we have a function R that, given the valuations of the buyers, tells us the maximum profit that we can make. We can use it in the following way:
Ask the buyers to tell their valuations.
Calculate R - the maximum profit possible given the valuations.
Calculate a price that guarantees that we get a profit of R.
Step 3 can be attained by a profit-extraction mechanism, which is a truthful mechanism. However, in general the resulting mechanism is not truthful, since the buyers can try to influence R by bidding strategically. To solve this problem, we can replace the exact R with an approximation R' that, with high probability, cannot be influenced by a single agent.
As an example, suppose that we know that the valuation of each single agent is at most 0.1, so that a single agent can change R by at most 0.1. As a first attempt at a consensus estimate, let
R' = the value of R rounded down to the nearest integer, i.e., R' = floor(R). Intuitively, in "most cases" a single agent cannot influence the value of R' (e.g., if R = 56.7 with true reports, then a single agent can only change it to a value between 56.6 and 56.8, but in all cases floor(R) = 56).
To make the notion of "most cases" more accurate, define: R' = floor(R + U), where U is a random variable drawn uniformly from [0, 1]. This makes R' a random variable too. With probability at least 90%, R' cannot be influenced by any single agent, so a mechanism that uses R' is truthful with high probability.
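The robustness of this randomized rounding can be checked by simulation. The sketch below is illustrative only; the profit value 56.73 and the function name are assumptions, not from the source:

```python
import math
import random

def consensus_estimate(R, u):
    """Round R down to an integer after a uniform random shift u in [0, 1).
    If a single agent can move R by at most 0.1, the rounded value
    changes with probability at most 0.1 over the choice of u."""
    return math.floor(R + u)

# Monte Carlo check of the "at least 90%" claim with a hypothetical
# truthful-profit value.
random.seed(1)
R = 56.73
trials = 10_000
agree = sum(
    consensus_estimate(R, u) == consensus_estimate(R + 0.1, u)
    for u in (random.random() for _ in range(trials))
)
agreement_rate = agree / trials   # should be close to 0.9
```

Here the rounded value changes only when the shift pushes R across an integer boundary, which happens for a set of shifts of measure at most 0.1.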
Such a random variable R' is called a consensus estimate:
"Consensus" means that, with high probability, a single agent cannot influence the outcome, so that there is an agreement between the outcomes with or without the agent.
"Estimate" means that the random variable is near the real varia |
https://en.wikipedia.org/wiki/Constant-recursive%20sequence | In mathematics and theoretical computer science, a constant-recursive sequence is an infinite sequence of numbers in which each number in the sequence is equal to a fixed linear combination of one or more of its immediate predecessors. The concept is variously known as a linear recurrence sequence, linear-recursive sequence, linear-recurrent sequence, a C-finite sequence, or a solution to a linear recurrence with constant coefficients.
A prototypical example is the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, 13, ..., in which each number is the sum of the previous two. The powers of two 1, 2, 4, 8, 16, ... are also constant-recursive because each number is twice the previous number. The square number sequence 0, 1, 4, 9, 16, ... is also constant-recursive. However, not all sequences are constant-recursive; for example, the factorial sequence 1, 1, 2, 6, 24, ... is not constant-recursive. All arithmetic progressions, all geometric progressions, and all sequences of polynomial values are constant-recursive.
Formally, a sequence s_0, s_1, s_2, ... of numbers is constant-recursive if it satisfies a recurrence relation
s_n = c_1 s_{n−1} + c_2 s_{n−2} + ... + c_d s_{n−d} for all n ≥ d,
where c_1, ..., c_d are constants. For example, the Fibonacci sequence satisfies the recurrence relation F_n = F_{n−1} + F_{n−2}, where F_n is the nth Fibonacci number.
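A recurrence of this form is straightforward to evaluate directly. The sketch below (an illustrative helper, not from the source) generates the Fibonacci numbers, the powers of two, and the squares from their defining recurrences:

```python
def constant_recursive(coeffs, initial, length):
    """Generate terms of s_n = c_1*s_{n-1} + ... + c_d*s_{n-d},
    where coeffs = [c_1, ..., c_d] and initial = [s_0, ..., s_{d-1}]."""
    s = list(initial)
    while len(s) < length:
        # s[-i] is s_{n-i} when computing the next term s_n
        s.append(sum(c * s[-i] for i, c in enumerate(coeffs, start=1)))
    return s[:length]

fib = constant_recursive([1, 1], [0, 1], 10)            # Fibonacci numbers
pow2 = constant_recursive([2], [1], 8)                  # powers of two
squares = constant_recursive([3, -3, 1], [0, 1, 4], 8)  # n^2 via a depth-3 recurrence
```

The squares illustrate the claim about polynomial sequences: n^2 satisfies s_n = 3 s_{n−1} − 3 s_{n−2} + s_{n−3}.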
Constant-recursive sequences are studied in combinatorics and the theory of finite differences. They also arise in algebraic number theory, due to the relation of the sequence to polynomial roots; in the analysis of algorithms, as the running time of simple recursive functions; and in the theory of formal languages, where they count strings up to a given length in a regular language. Constant-recursive sequences are closed under important mathematical operations such as term-wise addition, term-wise multiplication, and Cauchy product.
The Skolem–Mahler–Lech theorem states that the zeros of a constant-recursive sequence have a regularly repeating (eventually periodic) form. On the other hand, the Skolem problem, which asks for an algorithm to determine whether a linear recurrence has at least one zero, remains unsolved.
Definition
|
https://en.wikipedia.org/wiki/Segre%27s%20theorem | In projective geometry, Segre's theorem, named after the Italian mathematician Beniamino Segre, is the statement:
Any oval in a finite pappian projective plane of odd order is a nondegenerate projective conic section.
This statement was conjectured in 1949 by the two Finnish mathematicians G. Järnefelt and P. Kustaanheimo, and its proof was published in 1955 by B. Segre.
A finite pappian projective plane can be imagined as the projective closure of the real plane (by a line at infinity), where the real numbers are replaced by a finite field K. Odd order means that |K| = q is odd. An oval is a curve similar to a circle (see definition below): any line meets it in at most 2 points and through any point of it there is exactly one tangent. The standard examples are the nondegenerate projective conic sections.
In pappian projective planes of even order greater than four there are ovals which are not conics. In an infinite plane there exist ovals, which are not conics. In the real plane one just glues a half of a circle and a suitable ellipse smoothly.
The proof of Segre's theorem, shown below, uses the 3-point version of Pascal's theorem and a property of a finite field of odd order, namely, that the product of all the nonzero elements equals -1.
Definition of an oval
In a projective plane a set 𝔬 of points is called an oval, if:
(1) Any line g meets 𝔬 in at most two points.
If |g ∩ 𝔬| = 0 the line g is an exterior (or passing) line; in case |g ∩ 𝔬| = 1 a tangent line; and if |g ∩ 𝔬| = 2 the line is a secant line.
(2) For any point P ∈ 𝔬 there exists exactly one tangent t at P, i.e., t ∩ 𝔬 = {P}.
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
For a finite projective plane of order n (i.e. any line contains n + 1 points) a set 𝔬 of points is an oval if and only if |𝔬| = n + 1 and no three points are collinear (on a common line).
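This characterization can be verified computationally for a small example. The sketch below (an illustration under the assumption q = 5, not from the source) checks that a nondegenerate conic in PG(2, 5) has q + 1 points with no three collinear:

```python
from itertools import combinations

q = 5  # a small odd prime, so arithmetic in GF(5) is arithmetic mod 5

def collinear(p1, p2, p3):
    """Three points of PG(2, q), given as homogeneous triples, are
    collinear iff the determinant of their coordinate matrix is 0 mod q."""
    a, b, c = p1
    d, e, f = p2
    g, h, i = p3
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return det % q == 0

# The nondegenerate conic x^2 = y*z: q affine points (t, t^2, 1)
# plus one point at infinity (0, 1, 0), i.e. q + 1 points in total.
conic = [(t % q, (t * t) % q, 1) for t in range(q)] + [(0, 1, 0)]

no_three_collinear = not any(
    collinear(p1, p2, p3) for p1, p2, p3 in combinations(conic, 3)
)
```

The no-three-collinear check reduces to a Vandermonde determinant over GF(q), which is nonzero for distinct parameters, so the conic is indeed an oval.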
Pascal's 3-point version
Theorem
Let 𝔬 be an oval in a pappian projective plane of characteristic p > 2.
𝔬 is a nondegenerate conic if and only if statement (P3)
holds:
(P3): Let |
https://en.wikipedia.org/wiki/SACEM%20%28railway%20system%29 | The Système d'aide à la conduite, à l'exploitation et à la maintenance (SACEM) is an embedded, automatic speed train protection system for rapid transit railways. The name means "Driver Assistance, Operation, and Maintenance System".
It was developed in France by GEC-Alsthom, Matra (now part of Siemens Mobility) and CSEE (now part of Hitachi Rail STS) in the 1980s. It was first deployed on the RER A suburban railway in Paris in 1989.
Afterwards it was installed:
on the Santiago Metro in Santiago, Chile;
on some of the MTR lines in Hong Kong (Kwun Tong line, Tsuen Wan line, Island line, Tseung Kwan O line, Airport Express and Tung Chung line), all enhanced with ATO,
on Lines A, B and 8 of the Mexico City Metro lines in Mexico City; and
on Shanghai Metro Line 3.
In 2017 the SACEM system in Paris was enhanced with Automatic Train Operation (ATO) and was put in full operation at the end of 2018.
The SACEM system in Paris is to be enhanced to a fully fledged CBTC system named NExTEO. First deployed on the newly extended RER E line in 2024, it is proposed to replace signalling and control on all RER lines.
Operation
The SACEM system enables a train to receive signals from devices under the tracks. A receiver in the train cabin interprets the signal, and sends data to the console so the driver can see it. A light on the console indicates the speed control setting: an orange light means slow speed; a red light means full stop. If the driver alters the speed, a warning buzzer may sound. If the system determines that the speed might be unsafe, and the driver does not change it within a few seconds, SACEM engages the emergency brake. SACEM also allows for a reduction in potential train bunching and easier recovery from delays, thereby safely increasing operating frequencies as much as possible, especially during rush hour.
References
External links
Operation principles and examples with pictures in Hong Kong
Embedded systems
Rapid transit
Railway sign |
https://en.wikipedia.org/wiki/IEEEmadC | IEEEmadC (Mobile Applications Development Contest) is an international contest organized by volunteers for IEEE student members across the globe. The main goal is to educate and encourage students to pursue their future career as mobile application developers, develop their engineering, social and team skills and consequently become more competitive in the labor market. The contest is organized online in three phases: Education and Idea, Development, and Judging. Teams of up to three IEEE student members are invited to devise and develop mobile applications. Six criteria are judged: UI design, user experience, usefulness, availability, number of supported platforms, and open-source support.
History
IEEEmadC started in the fall of 2013 at the University of J. J. Strossmayer Student Branch in Osijek, Croatia. The contest was founded by Josip Balen, Luka Horvat and Igor Bedek with support from the IEEE R8 Student Activities Committee. The first iteration of the contest was for IEEE students in Europe, Africa and the Middle East (Region 8). The second iteration of the contest was organized worldwide for IEEE student members in all 10 IEEE regions. As of 2016, the contest is in its third year of competition.
Received honours
Contest stages
Education stage
The main goals of the Educational stage are to educate and encourage students to become MAD (Mobile Application Developers), develop their engineering, social and team skills and consequently become more competitive in the labor market. During this stage IEEEmadC ambassadors with experts from industry organize webinars on mobile application development. Furthermore, with help from university professors they are organizing technical workshops and lectures in their local IEEE sections, universities and IEEE student branches around the globe.
Idea stage
In this stage students are able to register by submitting their ideas about mobile applications that they would like to develop. All |
https://en.wikipedia.org/wiki/Pristine%20Sources | Pristine Sources is a software management concept coined by the developers of the short-lived Bogus Linux distribution and popularized by Marc Ewing, co-founder of Red Hat Inc, after he adopted it and RPM Package Manager as a development philosophy for Red Hat Linux. It was the concept that enabled Red Hat to build Linux distributions faster and more reliably than had been possible previously. Briefly, the problem with building an operating system out of the myriad pieces of open source (or free software) components available from teams across the Internet was that there were many of these components and they all upgraded on different schedules at different times. Ewing's insight was to recognize that he could not take responsibility for these components. He and Erik Troan wanted to build a software package management system, RPM, that allowed the team at Red Hat to avoid changing any of the source code of the software components they needed to use to build their Red Hat Linux operating system.
It is best summed up by Ewing's explanation in a mid-1990s Red Hat manual:
"The Philosophy Behind RPM"
References
External links
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/System_Administration_Guide/ch-rpm.html
https://docs.fedoraproject.org/ro/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch01s02s08.html
Programming principles |
https://en.wikipedia.org/wiki/Frontier%20Fiber | Frontier Fiber (formerly known as Frontier FiOS or FiOS from Frontier) is a bundled Internet access, telephone, and (until 2021) television service that operates over a fiber-optic communications network in California, Texas, Florida, Indiana, and South Carolina. Service is offered by Frontier Communications in areas of the United States built out and formerly served by Verizon, using the same infrastructure as its Fios service. Other service providers often use fiber optics in the network backbone and existing copper or coax infrastructure for residential users. Frontier's service began in 2009 with the acquisition of portions of Verizon's network, and networked areas expanded through 2015 via similar acquisitions, although some areas do not have service or cannot receive TV and phone service because of franchise agreements.
History
In May 2009, Frontier announced that it would acquire Verizon Communications' 4.8 million landlines leased to residential and small business customers in Arizona, Idaho, Illinois, Indiana, Michigan, Nevada, North Carolina, Ohio, Oregon, South Carolina, Washington, West Virginia, and Wisconsin, for $8.6 billion. In addition to the purchase of copper lines, Frontier also acquired the fiber-optic system built by Verizon in Fort Wayne, around Portland, and in some eastern suburbs of Seattle. These operations would continue to operate under the FiOS branding used by Verizon.
Frontier initially stated that it had no plans for changes after the transition. However, the company later attempted to institute a $500 installation fee for new television subscribers, backed out of franchise agreements in some cities in Oregon, and increased rates by 50% in Indiana. Frontier later retracted the rate increases and the installation fee, though not before losing FiOS TV subscribers, and it has not reclaimed the franchises in the cities it relinquished.
On February 5, 2015, Frontier announced that it would also acquire Verizon's wireline asset |
https://en.wikipedia.org/wiki/In%20vitro%20models%20for%20calcification | In vitro models for calcification are systems developed to reproduce, as faithfully as possible, the calcification process that tissues or biomaterials undergo inside the body. The aim of these systems is to mimic the high levels of calcium and phosphate present in the blood and measure the extent of crystal deposition. Different variations can include other parameters to increase the veracity of these models, such as flow, pressure, compliance and resistance. All the systems have different limitations that must be acknowledged regarding the operating conditions and the degree of representation. The rationale for using such systems is to partially replace in vivo animal testing while rendering parameters much more controllable and independent compared to an animal model.
The main use of these models is to study the calcification potential of prostheses that are in direct contact with the blood. In this category we find examples such as animal tissue prostheses (xenogeneic bioprosthesis). Xenogeneic heart valves are of special importance for this area of study as they demonstrate a limited durability mainly due to the fatigue of the tissue and the calcific deposits (see Aortic valve replacement).
Description
In vitro calcification models have been used in medical implant development to evaluate the calcification potential of the medical device or tissue. They can be considered a subfamily of the bioreactors that have been used in the field of tissue engineering for tissue culture and growth. These calcification bioreactors are designed to mimic and maintain the mechano-chemical environment that the tissue encounters in vivo with a view to generating the pathological environment that would favor calcium deposition. Parameters including medium flow, pH, temperature and supersaturation of the calcifying solution used in the bioreactor are maintained and closely monitored. The monitoring of these parameters makes it possible to obtain information
https://en.wikipedia.org/wiki/Flow-sensitive%20typing | In programming language theory, flow-sensitive typing (also called flow typing or occurrence typing) is a type system where the type of an expression depends on its position in the control flow.
In statically typed languages, the type of an expression is determined by the types of the sub-expressions that compose it. However, in flow-sensitive typing, an expression's type may be updated to a more specific type if it follows an operation that validates its type. Validating operations can include type predicates, imperative updates, and control flow.
Examples
Ceylon
See the following example in Ceylon which illustrates the concept:
// Object? means the variable "name" is of type Object or else null
void hello(Object? name) {
    if (is String name) {
        // "name" now has type String in this block
        print("Hello, ``name``!");
        // and it is possible to call String methods on the variable
        print(" String.size is ``name.size``");
    }
    else if (exists name) {
        // "name" now has type Object in this block
        print("Hello, object ``name``!");
    }
    else {
        print("Hello, world!");
    }
}
hello(null);
hello(1);
hello("John Doe");
and which outputs:
Hello, world!
Hello, object 1!
Hello, John Doe!
String.size is 8
Kotlin
See this example in Kotlin:
fun hello(obj: Any) {
    // A type cast fails if `obj` is not a String
    obj as String
    // Since the type cast did not fail, `obj` must be a String!
    val l = obj.length
    println("'$obj' is a string of length $l")
}
hello("Mooooo")
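The same kind of narrowing is performed by static checkers in other languages. As an illustrative sketch (the function and its messages are hypothetical, not taken from either example above), the following Python code mirrors the Ceylon example; a checker such as mypy narrows the type of `name` in each branch of the `if` via `isinstance` and `is not None` checks:

```python
from typing import Optional

def hello(name: Optional[object]) -> str:
    if isinstance(name, str):
        # A checker such as mypy narrows `name` to `str` in this branch,
        # so string operations like len() are accepted.
        return f"Hello, {name}! String.size is {len(name)}"
    elif name is not None:
        # Here `name` is narrowed to a non-None object.
        return f"Hello, object {name}!"
    else:
        # Here `name` can only be None.
        return "Hello, world!"
```

At runtime Python does not enforce the annotations; the narrowing is reasoning performed by the type checker, branch by branch.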
Benefits
This technique, coupled with type inference, reduces the need to write type annotations for all variables or to perform type casts, as is common in dynamic languages that use duck typing. It reduces verbosity and makes code terser and easier to read and modify.
It can also help language implementers provide implementations that execute dynamic languages faster by predicting the type of objects s |
https://en.wikipedia.org/wiki/Atari%20ST%20character%20set | The Atari ST character set is the character set of the Atari ST personal computer family including the Atari STE, TT and Falcon. It is based on code page 437, the original character set of the IBM PC, and like that set includes ASCII codes 32–126, extended codes for accented letters (diacritics), and other symbols. It differs from code page 437 in using other dingbats at code points 0–31, in exchanging the box-drawing characters 176–223 for the Hebrew alphabet and other symbols, and in exchanging code points 158, 236 and 254–255 for the symbols sharp S, line integral, cubed and macron.
The Atari ST family of computers contained this font stored in ROM in three sizes; as an 8×16 pixels-per-character font used in the high-resolution graphics modes, as an 8×8 pixels-per-character font used in the low- and medium-resolution graphics modes, and as a 6×6 pixels-per-character font used for icon labels in any graphics mode.
All 256 codes were assigned a graphical character in ROM, including the codes from 0 to 31 that in ASCII were reserved for non-graphical control characters.
Digital Research's Intel-based original GEM for IBM compatible PCs utilized the similar GEM character set. It has swapped ¢ and ø and has also swapped ¥ and Ø (meaning GEM is more similar to code page 865 by placement of Ø and ø). It also has the currency sign (¤) at codepoint 158, “ at codepoint 169, ” at codepoint 170, ‹ at codepoint 171, › at codepoint 172, section sign (§) at codepoint 184, double dagger (‡) at codepoint 185, „ at codepoint 192, horizontal ellipsis (…) at codepoint 193, per mille sign (‰) at codepoint 194, bullet (•) at codepoint 195, en dash (–) at codepoint 196, em dash (—) at codepoint 197, degree sign (°) at codepoint 198, the S with caron (uppercase and lowercase) and various uppercase Latin accented letters (in codepoint order, they are Á, Â, È, Ê, Ë, Ì, Í, Î, Ï, Ò, Ó, Ô, Š, š, Ù, Ú, Û, and Ÿ) at codepoints 199-216, sharp s (ß) at codepoint 217, various spaces at codepoi
https://en.wikipedia.org/wiki/OneSubsea | OneSubsea is a Schlumberger company, headquartered in Houston, Texas, United States. The company is a subsea supplier for the subsea oil and gas market.
OneSubsea has more than 5,000 employees in over 23 countries operating in six divisions—Integrated Solutions, Production Systems, Processing Systems, Control Systems, Swivel and Marine Systems, and Subsea Services—that provide products and services to oil and gas operators around the world including the FRIEND Remote Surveillance & Diagnostic System.
History
The 2013 integration of Cameron's subsea division and Schlumberger-owned Framo Engineering formed OneSubsea as a 60% Cameron and 40% Schlumberger joint venture. This cooperation ultimately led to Schlumberger's acquisition of Cameron in a $14.8 billion deal in late 2015.
Shell Stones
In August 2015, it was announced that OneSubsea had been awarded a contract to supply subsea processing systems for the Shell Offshore Inc. Stones development in the Gulf of Mexico. The project was described as the industry's first high-capacity 15,000-psi (super-high-pressure) subsea pump system.
Business alliance
In January 2015, Helix, OneSubsea and Schlumberger formed the Subsea Services Alliance to develop technologies and deliver equipment and services to optimize the value chain of subsea well intervention systems.
In July 2015, Subsea7 and OneSubsea jointly announced a global alliance to design, develop and deliver integrated subsea development solutions through the combination of subsurface expertise, subsea production systems, subsea processing systems, subsea umbilicals, risers and flowlines (SURF) systems, and life-of-field services. This was seen as a move against rivals FMC and GE Oil & Gas, which had recently formed major alliances with other subsea companies.
See also
List of oilfield service companies
Oil industry
Wellhead
References
https://www.prnewswire.com/news-releases/cameron-announces-results-for-fourth-quarter-of-2015-300211184.html
Energy eng |
https://en.wikipedia.org/wiki/Blotting%20matrix | A blotting matrix, in molecular biology and genetics, is the substrate onto which macromolecules, such as proteins, are transferred in a blot method. The matrices are generally chemically modified paper filters or microporous membrane filters. In a dot blot, macromolecules are applied directly to the matrix. Macromolecules can also be separated and transferred via gel electrophoresis.
One of the most common blotting matrices for protein analysis is nitrocellulose, which has a high affinity for proteins due to hydrophobic interactions. However, proteins with low molecular weight have a low affinity for nitrocellulose, limiting potential applications. This limitation can be remedied with glutaraldehyde, which can covalently bond proteins to nitrocellulose. Another matrix is cellulose paper modified with diazophenylthioether, which can also facilitate covalent bonding of proteins. Nylon membranes are also used for protein blotting, although they may result in the binding of anionic dyes such as Coomassie blue and Amido black. Polyvinylidene fluoride membranes are also commonly used, due to their hydrophobicity.
References
Molecular biology |
https://en.wikipedia.org/wiki/Bronshtein%20and%20Semendyayev | Bronshtein and Semendyayev (often just Bronshtein or Bronstein, sometimes BS) is the informal name of a comprehensive handbook of fundamental working knowledge of mathematics and table of formulas originally compiled by the Russian mathematician Ilya Nikolaevich Bronshtein and engineer Konstantin Adolfovic Semendyayev.
The work was first published in 1945 in Russia and soon became a "standard" and frequently used guide for scientists, engineers, and technical university students. Over the decades, high popularity and a string of translations, extensions, re-translations and major revisions by various editors led to a complex international publishing history centered around the significantly expanded German version. Legal hurdles following the fall of the Iron Curtain caused the development to split into several independent branches maintained by different publishers and editors to the effect that there are now two considerably different publications associated with the original title – and both of them are available in several languages.
With some slight variations, the English version of the book was originally named A Guide-Book to Mathematics, but changed its name to Handbook of Mathematics. This name is still maintained up to the present by one of the branches. The other line is meanwhile named Users' Guide to Mathematics to help avoid confusion.
Overview
Bronshtein and Semendyayev is a comprehensive handbook of fundamental working knowledge of mathematics and table of formulas based on the Russian book (literally: "Handbook of mathematics for engineers and students of technical universities") compiled by the Russian mathematician Ilya Nikolaevich Bronshtein and engineer Konstantin Adolfovic Semendyayev.
The scope is the concise discussion of all major fields of applied mathematics by definitions, tables and examples with a focus on practicability and with limited formal rigour. The work also contains a comprehensive list of analytically solvable i |
https://en.wikipedia.org/wiki/Conservative%20functor | In category theory, a branch of mathematics, a conservative functor is a functor F : C → D such that for any morphism f in C, F(f) being an isomorphism implies that f is an isomorphism.
Examples
The forgetful functors in algebra, such as from Grp to Set, are conservative. More generally, every monadic functor is conservative. In contrast, the forgetful functor from Top to Set is not conservative because not every continuous bijection is a homeomorphism.
Every faithful functor from a balanced category is conservative.
References
External links
Category theory |
https://en.wikipedia.org/wiki/Why%20Johnny%20Can%27t%20Add | Why Johnny Can't Add: The Failure of the New Math is a 1973 book by Morris Kline, in which the author severely criticized the teaching practices characteristic of the "New Math" fashion for school teaching, which were based on Bourbaki's approach to mathematical research, and were being pushed into schools in the United States. Reactions were immediate, and the book became a best seller in its genre and was translated into many languages.
References
Further reading
External links
Text on-line, with permission of the current copyright holders
Books about mathematics education
1973 non-fiction books |
https://en.wikipedia.org/wiki/Subgraph%20%28operating%20system%29 | Subgraph OS was a Debian-based project designed to be resistant to surveillance and interference by sophisticated adversaries over the Internet. It has been mentioned by Edward Snowden as showing future potential.
Subgraph OS was designed to be locked down, with a reduced attack surface, to increase the difficulty to carry out certain classes of attack against it. This was accomplished through system hardening and a proactive, ongoing focus on security and attack resistance. Subgraph OS also placed emphasis on ensuring the integrity of installed software packages through deterministic compilation.
The last update to the project's blog was in September 2017, and none of its GitHub repositories has seen any recent activity as of 2021.
Features
Some of Subgraph OS's notable features included:
Linux kernel hardened with the grsecurity and PaX patchset.
Linux namespaces and xpra for application containment.
Mandatory file system encryption during installation using LUKS.
Configurable firewall rules to automatically ensure that network connections for installed applications are made using the Tor anonymity network. Default settings ensure that each application's communication is transmitted via an independent circuit on the network.
GNOME Shell integration for the OZ virtualization client, which runs apps inside a secure Linux container, targeting ease-of-use by everyday users.
Security
The security of Subgraph OS (which uses sandbox containers) has been questioned in comparison to Qubes (which uses virtualization), another security-focused operating system. An attacker can trick a Subgraph user into running a malicious unsandboxed script via the OS's default Nautilus file manager or in the terminal. It is also possible to run malicious code via .desktop files (which are used to launch applications). Malware can also bypass Subgraph OS's application firewall. Also, by design, Subgraph does not isolate the network stack the way Qubes OS does.
See also
Tails (operat |
https://en.wikipedia.org/wiki/Chain-ladder%20method | The chain-ladder or development method is a prominent actuarial loss reserving technique.
The chain-ladder method is used in both the property and casualty and health insurance fields. Its intent is to estimate incurred but not reported claims and project ultimate loss amounts.
The primary underlying assumption of the chain-ladder method is that historical loss development patterns are indicative of future loss development patterns.
Methodology
According to Jacqueline Friedland's "Estimating Unpaid Claims Using Basic Techniques," there are seven steps to apply the chain-ladder technique:
Compile claims data in a development triangle
Calculate age-to-age factors
Calculate averages of the age-to-age factors
Select claim development factors
Select tail factor
Calculate cumulative claim development factors
Project ultimate claims
Age-to-age factors, also called loss development factors (LDFs) or link ratios, represent the ratio of loss amounts from one valuation date to another, and they are intended to capture growth patterns of losses over time. These factors are used to project where the ultimate amount of losses will settle.
Example
Firstly, losses (either reported or paid) are compiled into a triangle, where the rows represent accident years and the columns represent valuation dates. For example, the entry '43,169,009' represents loss amounts related to claims occurring in 1998, valued as of 24 months.
Next, age-to-age factors are determined by calculating the ratio of losses at subsequent valuation dates. From 24 months to 36 months, accident year 1998 losses increased from 43,169,009 to 45,568,919, so the corresponding age-to-age factor is 45,568,919 / 43,169,009 = 1.056. A "tail factor" is selected (in this case, 1.000) to project from the latest valuation age to ultimate.
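The factor calculation just described can be sketched in a few lines. Only the 1998 figures at 24 and 36 months (43,169,009 and 45,568,919) come from the example above; the remaining loss amounts in the triangle are hypothetical placeholders:

```python
# A minimal chain-ladder sketch. Rows are accident years; each list holds
# cumulative losses at successive valuation dates (12, 24, 36 months).
triangle = {
    1998: [37_000_000, 43_169_009, 45_568_919],  # 24->36 figures from the text
    1999: [39_500_000, 44_900_000],              # hypothetical
    2000: [41_200_000],                          # hypothetical
}

def age_to_age(losses):
    """Ratios of cumulative losses at successive valuation dates."""
    return [later / earlier for earlier, later in zip(losses, losses[1:])]

# 24-to-36-month factor for accident year 1998: 45,568,919 / 43,169,009
factor_24_36 = age_to_age(triangle[1998])[1]

# Cumulative development factor from 36 months to ultimate is just the
# selected tail factor of 1.000; from 24 months it also includes factor_24_36.
tail_factor = 1.000
cdf_24_to_ultimate = factor_24_36 * tail_factor
```

Rounded to three decimals, `factor_24_36` reproduces the 1.056 age-to-age factor computed above.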
Finally, averages of the age-to-age factors are calculated. Judgmental selections are made after observing several averages. The age-to-age factors are then multiplied together t |
https://en.wikipedia.org/wiki/9600%20port | The '9600 port' (also named data-jack or data-port) is an industry-specific name given to a special connector on the back of amateur radio HF, VHF, and UHF transceivers. It is used for connecting a packet radio modem or any other type of data-modem which uses audio tones to convey data.
This port is capable of transmitting and receiving data at speeds of at least 9600 bits per second, but usually faster. This is achieved by bypassing the highpass, lowpass, preemphasis, and deemphasis filters normally contained in the microphone and speaker circuits of an FM transmitter and receiver.
Amateur radio data ports which are not "9600 capable" are typically limited to a max speed of 1200 to 3000 bits per second.
Commonly this 9600-capable data port uses a 6-pin mini-DIN connector (shown to the right).
This is the same physical connector-type as PS/2 port mice and keyboards.
Modem Manufacturers
There are a number of manufacturers making modems intended for this 9600 port / data port.
Kantronics
Tigertronics
Argent Data
Byonics
Coastal ChipWorks
MFJ Enterprises
Symek
Timewave Technologies
Masters Communications
Radio Manufacturers
There are a number of manufacturers making radios which include a 9600 capable data port as a feature:
Alinco
Icom Incorporated
Yaesu
Kenwood
Software Modems
The 9600 port can also be connected to computer's soundcard for use with a number of different software-based data modems:
Direwolf
MixW
AGW Packet Engine
Soundmodem
UZ7HO Soundmodem
Digital Voice
The 9600 port can be used to connect a digital voice adapter, or dongle, which allows analog amateur radios to transmit and receive ICOM's D-Star digital voice protocol (AMBE2020).
Digital Voice Dongle
Star*DV / Star*Board
DVRPTR_V1 D-Star boards
PAPA GMSK Boards
DUTCH*Star
Users of this technology
This 9600 port is used to communicate with some amateur radio satellites using the packet radio
A 9600-baud capable amateur radio and modem are installed aboard the Intern |
https://en.wikipedia.org/wiki/QDriverStation | The QDriverStation is a free and open-source robotics software for the FIRST Robotics Competition.
The project was started in September 2015 by Alex Spataru (Team 3794) with the objective of providing a stable, free, extensible and user-friendly alternative to the FRC Driver Station. Since then, several FRC students, alumni and mentors have contributed to the project by providing feedback, documenting the communication protocols and creating Linux packages.
Features
Some important features of the QDriverStation are:
The QDriverStation implements a simple auto-updater to ensure that teams are running the latest version of the software.
The QDriverStation uses SDL to obtain joystick input, but it also implements the option to enable a "virtual joystick", which uses the keyboard keys to operate the robot.
The QDriverStation implements a simple sandbox around every protocol to ensure the safe operation of the robot and the software.
The QDriverStation uses the Qt framework to implement the Graphical user interface.
FRC communication protocols
The developers of the QDriverStation have implemented the 2014, 2015 and 2016 FRC communication protocols. Some users have requested support for the ROS protocol; however, work on this feature has not yet been published.
Mobile version
The developers of the QDriverStation have also developed a side-project for mobile devices (such as Android and iOS) with QML. The mobile version has most of the capabilities that the desktop version has.
Screenshots
External links
GitHub Repository
QDriverStation announcement thread
References
Free software programmed in C++
Robotics software |
https://en.wikipedia.org/wiki/Species%20sorting | Species sorting is a mechanism in the metacommunity framework of ecology whereby species distributions and abundances can be related to the environmental or biotic conditions in a particular habitat. The species sorting paradigm describes a system of habitat patches with different environmental conditions that organisms can move between. Species are able to disperse to patches with suitable environmental conditions, resulting in patterns where environmental conditions can predict the species found in a particular habitat.
References
Ecology |
https://en.wikipedia.org/wiki/Prior-free%20mechanism | A prior-free mechanism (PFM) is a mechanism in which the designer does not have any information on the agents' valuations, not even that they are random variables from some unknown probability distribution.
A typical application is a seller who wants to sell some items to potential buyers. The seller wants to price the items in a way that will maximize his profit. The optimal prices depend on the amount that each buyer is willing to pay for each item. The seller does not know these amounts, and cannot even assume that the amounts are drawn from a probability distribution. The seller's goal is to design an auction that will produce a reasonable profit even in worst-case scenarios.
PFMs should be contrasted with two other mechanism types:
Bayesian-optimal mechanisms (BOM) assume that the agents' valuations are drawn from a known probability distribution. The mechanism is tailored to the parameters of this distribution (e.g., its median or mean value).
Prior-independent mechanisms (PIM) assume that the agents' valuations are drawn from an unknown probability distribution. They sample from this distribution in order to estimate the distribution parameters.
From the point of view of the designer, BOM is the easiest, then PIM, then PFM. The approximation guarantees of BOM and PIM hold in expectation, while those of PFM hold in the worst case.
What can we do without a prior? A naive approach is to use statistics: ask the potential buyers what their valuations are and use their replies to calculate an empirical distribution function. Then, apply the methods of Bayesian-optimal mechanism design to the empirical distribution function.
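As a sketch of this naive empirical approach (with hypothetical valuations), one can compute a revenue-maximizing posted price directly from the reported values, treating them as the empirical distribution:

```python
# Naive empirical pricing sketch: pick the posted price p maximizing
# empirical revenue, p * (number of buyers whose reported value >= p).
def empirical_revenue_price(valuations):
    """Return the (price, revenue) pair maximizing empirical revenue."""
    best_price, best_revenue = None, 0
    for p in sorted(set(valuations)):
        revenue = p * sum(1 for v in valuations if v >= p)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue

# With hypothetical reports [1, 3, 5, 8]: price 3 earns 9, price 5 earns 10,
# price 8 earns 8, so the chosen price is 5.
price, revenue = empirical_revenue_price([1, 3, 5, 8])
```

The same sketch exposes the strategic problem: if the buyer with valuation 8 reported 3 instead, the computed price would fall from 5 to 3, so truthful reporting is not in that buyer's interest.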
The problem with this naive approach is that the buyers may behave strategically. Since the buyers' answers affect the prices that they are going to pay, they may be incentivized to report false valuations in order to push the price down. The challenge in prior-free mechanism design is to design truthful mechanisms. In truthful mechanisms, the agents cannot affect the
https://en.wikipedia.org/wiki/Tripod%20Beta | Tripod Beta is an incident and accident analysis methodology made available by the Stichting Tripod Foundation via the Energy Institute. The methodology is designed to help an accident investigator analyse the causes of an incident or accident in conjunction with conducting the investigation. This helps direct the investigation as the investigator will be able to see where more information is needed about what happened, or how or why the incident occurred.
Early development
Tripod Beta was developed by Shell International Exploration and Production B.V. as the result of Shell-funded academic research in the 1980s and 1990s. Such research contributed towards the development of the Swiss cheese model of accident causation, and in the late 1990s and early 2000s, towards the development of the Hearts and Minds safety culture toolkit.
The research was based on the following hypotheses:
Accidents happen because controls fail (now known as the Swiss Cheese model)
The underlying causes of controls failing are due to underlying causes in the way we manage
Those underlying causes, metaphorically comparable with 'pathogens' are present long before an accident occurs
Those 'imperfections' are known to some of the people before the incident occurs
People are usually well intended, trying to get their task done despite the imperfections in the system.
If we can identify those failures and take action to remove them, we will reduce the probability of accidents
The early research focused on a predictive tool to identify underlying causes of incidents before they occurred, rather than on an incident investigation methodology. This would later become the basis for Tripod Delta.
The incident investigation methodology, whilst always part of the research, came later, around 1990. Initial Tripod investigations followed a tabular approach, as a graphical program was not yet available.
Following the 1988 Piper Alpha disaster and Lord Cullen report in 1990, Shell International created a team to |
https://en.wikipedia.org/wiki/Code%20page%20942 | Code page 942 (abbreviated as CP942 or IBM-942) is one of IBM's extensions of Shift JIS. The coded character sets are JIS X 0201, JIS X 0208, IBM extensions for IBM 1880 UDC and IBM extensions. It is the combination of the single-byte Code page 1041 and the double-byte Code page 301.
It is a superset of IBM-932, differing in its use of Code page 1041 in place of Code page 897 for its single byte codes. Code page 1041 is an extension of Code page 897 and adds five single-byte characters. 0x80 is mapped to the cent sign (¢), 0xA0 is mapped to the pound sign (£), 0xFD is mapped to the not sign (¬), 0xFE is mapped to the backslash (\) and 0xFF is mapped to the tilde (~). These are all unassigned in Code page 897 and therefore IBM-932.
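The five extension mappings listed above can be written as a small lookup table. This is a hand-written sketch for illustration, not derived from an installed codec:

```python
# The five single-byte characters Code page 1041 adds over Code page 897,
# as described in the text above.
CP1041_EXTENSIONS = {
    0x80: "\u00a2",  # cent sign ¢
    0xA0: "\u00a3",  # pound sign £
    0xFD: "\u00ac",  # not sign ¬
    0xFE: "\u005c",  # backslash \
    0xFF: "\u007e",  # tilde ~
}

def decode_cp1041_extension(byte):
    """Return the extension character for `byte`, or None if the byte is
    not one of the five CP1041 extension code points."""
    return CP1041_EXTENSIONS.get(byte)
```

These five code points are exactly the ones left unassigned in Code page 897, which is what makes CP1041 (and hence IBM-942) a superset.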
Code page 942 contains standard 7-bit ISO 646 codes, and Japanese characters are indicated by the high bit of the first byte being set to 1. Some code points in this page require a second byte, so characters use either 8 or 16 bits for encoding.
Code page 1041, and therefore Code page 942, uses 0x5C for the Yen sign (¥) and 0x7E for the overline (‾), matching the lower half of JIS X 0201 rather than US-ASCII. However, the version of Code page 942 used in International Components for Unicode (called "ibm-942_P12A-1999" or "x-IBM942C") uses US-ASCII mappings for single-byte characters between 0x20 and 0x7E. This results in duplicate mapping for the tilde (0x7E and 0xFF) and the backslash (0x5C and 0xFE).
Layout
See also
Code page 943
References
External links
IBM Code Page 942
942
Encodings of Japanese |
https://en.wikipedia.org/wiki/Code%20page%20949%20%28IBM%29 | IBM code page 949 (IBM-949) is a character encoding which has been used by IBM to represent Korean language text on computers. It is a variable-width encoding which represents the characters from the Wansung code defined by the South Korean standard KS X 1001 in a format compatible with EUC-KR, but adds IBM extensions for additional hanja, additional precomposed Hangul syllables, and user-defined characters.
Giving values in hexadecimal, bytes 0x00 through 0x7F are used for single byte KS X 1003 (ISO 646:KR) characters, a similar set to ASCII but with a won sign rather than a backslash. Bytes 0x80 through 0x84 are used for IBM single byte extension characters. Lead bytes 0x8F through 0xA0 are used for IBM double byte extension characters. Lead bytes 0xA1 through 0xFE are used for Wansung code (KS X 1001 characters in EUC-KR form, double byte), but with some unused space opened up for user-defined use.
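The byte ranges described above can be summarized in a small classifier. This is a sketch based only on the ranges in this paragraph; bytes it does not cover (such as 0x85 through 0x8E) are reported as "other":

```python
def lead_byte_class(b):
    """Classify a byte under IBM-949 according to the ranges described
    in the text above (illustrative sketch only)."""
    if 0x00 <= b <= 0x7F:
        return "single-byte KS X 1003"
    if 0x80 <= b <= 0x84:
        return "IBM single-byte extension"
    if 0x8F <= b <= 0xA0:
        return "IBM double-byte extension lead"
    if 0xA1 <= b <= 0xFE:
        return "Wansung double-byte lead"
    return "other"
```

A decoder following this scheme reads one more byte whenever the classifier reports a double-byte lead, which is what makes the encoding variable-width.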
Although both are sometimes named "cp949", IBM-949 is different from Windows code page 949 (IBM-1363), which is Microsoft's Unified Hangul Code, a different extension of EUC-KR. It should also not be confused with IBM's implementation of plain EUC-KR (IBM-970). Code page 949 in OS/2 is the IBM code page; however, a third-party patch exists to change this.
Terminology and encoding labelling
Both IBM-949 and Unified Hangul Code (Windows-949) are known as "code page 949" (or "cp949") although they share only the EUC-KR subset in common. Neither has a standardised IANA-registered label to identify it. Although UHC is included in the WHATWG Encoding Standard, with labels including "windows-949", IBM-949 is not. IBM-949 therefore is not permitted in HTML5.
Although the meaning of the label "ibm-949" (and conversely "windows-949" and "ms949") is unambiguous where these labels are supported, the interpretation of the encoding labels "949" and "cp949" consequently varies between implementations. For example, International Components for Unicode uses "cp949", "949", "ibm-94 |
https://en.wikipedia.org/wiki/Movidius | Movidius is a company based in San Mateo, California, that designs low-power processor chips for computer vision. The company was acquired by Intel in September 2016.
Company history
Movidius was co-founded in 2005 by Sean Mitchell and David Moloney in Dublin, Ireland. Between 2006 and 2016, it raised nearly $90 million in capital funding. In May 2013, the company appointed Remi El-Ouazzane as CEO. In January 2016, the company announced a partnership with Google; Movidius has been active in Google's Project Tango. Its planned acquisition by Intel was announced in September 2016.
Products
Myriad 2
The company's Myriad 2 chip is a manycore vision processing unit that can function on power-constrained devices. The Fathom is a USB stick containing a Myriad 2 processor, allowing a vision accelerator to be added to devices using ARM processors including PCs, drones, robots, IoT devices and video surveillance for tasks such as identifying people or objects. It can run at between 80 and 150 GFLOPS on 1W of power.
Myriad X
Intel's Myriad X VPU (vision processing unit) is the third-generation VPU from Movidius. It incorporates a Neural Compute Engine, a dedicated hardware accelerator for deep neural network inference.
Neural Compute Stick
The Intel Movidius Neural Compute Stick (NCS) is a tiny fanless deep-learning device that can be used to learn AI programming at the edge. NCS is powered by the same low-power, high-performance Intel Movidius Vision Processing Unit that can be found in millions of smart security cameras, gesture-controlled drones, industrial machine vision equipment, and more. Supported frameworks are TensorFlow and Caffe.
On 14 November 2018, the company announced the latest version of NCS, marketed as "Neural Compute Stick 2" at the AI DevCon event in Beijing.
Uses
Google Clips camera uses Myriad 2 VPU.
The Intel RealSense Tracking Camera T265 uses the Myriad 2.
DJI used the Myriad 2 in all of its consumer drones announced in 2016, including the Mavic.
The Ryze |
https://en.wikipedia.org/wiki/EIDAS | eIDAS (electronic IDentification, Authentication and trust Services) is an EU regulation on electronic identification and trust services for electronic transactions in the European Single Market. It was established in EU Regulation 910/2014 of 23 July 2014 on electronic identification and repeals Directive 1999/93/EC of 13 December 1999.
It entered into force on 17 September 2014 and applies from 1 July 2016 except for certain articles, which are listed in its Article 52. All organizations delivering public digital services in an EU member state must recognize electronic identification from all EU member states from September 29, 2018.
Description
eIDAS oversees electronic identification and trust services for electronic transactions in the European Union's internal market. It regulates electronic signatures, electronic transactions, involved bodies, and their embedding processes to provide a safe way for users to conduct business online, such as electronic funds transfers or transactions with public services. Both the signatory and the recipient gain convenience and security. Instead of relying on traditional methods, such as mail or facsimile, or appearing in person to submit paper-based documents, they may now perform transactions across borders with the ease of "1-Click" technology.
eIDAS has created standards for which electronic signatures, qualified digital certificates, electronic seals, timestamps, and other proof for authentication mechanisms enable electronic transactions, with the same legal standing as transactions that are performed on paper.
The regulation came into effect in July 2015, as a means to facilitate secure and seamless electronic transactions within the European Union. Member states are required to recognise electronic signatures that meet the standards of eIDAS.
Vision
eIDAS is a result of the European Commission's focus on Europe's Digital Agenda. With the commission's oversight, eIDAS was implemented to spur digital growth within the EU.
The |
https://en.wikipedia.org/wiki/Qvist%27s%20theorem | In projective geometry, Qvist's theorem, named after the Finnish mathematician B. Qvist, is a statement on ovals in finite projective planes. Standard examples of ovals are non-degenerate (projective) conic sections. The theorem gives an answer to the question How many tangents to an oval can pass through a point in a finite projective plane? The answer depends essentially upon the order (number of points on a line −1) of the plane.
Definition of an oval
In a projective plane a set Ω of points is called an oval, if:
Any line g meets Ω in at most two points, and
For any point P ∈ Ω there exists exactly one tangent line t through P, i.e., t ∩ Ω = {P}.
When |g ∩ Ω| = 0 the line g is an exterior line (or passant), if |g ∩ Ω| = 1 a tangent line and if |g ∩ Ω| = 2 the line is a secant line.
For finite planes (i.e. the set of points is finite) we have a more convenient characterization:
For a finite projective plane of order n (i.e. any line contains n + 1 points) a set Ω of points is an oval if and only if |Ω| = n + 1 and no three points are collinear (on a common line).
Statement and proof of Qvist's theorem
Qvist's theorem
Let Ω be an oval in a finite projective plane of order n.
(a) If n is odd,
every point P ∉ Ω is incident with 0 or 2 tangents.
(b) If n is even,
there exists a point N, the nucleus or knot, such that the set of tangents to the oval Ω is the pencil of all lines through N.
Proof
(a) Let t be the tangent to Ω at the point P and let P1, ..., Pn be the remaining points of this line. For each Pi, the lines through Pi partition Ω into sets of cardinality 2 or 1 or 0. Since the number |Ω| = n + 1 is even, for any point Pi, there must exist at least one more tangent through that point. The total number of tangents is n + 1, hence, there are exactly two tangents through each Pi: t and one other. Thus, for any point P not in the oval Ω, if P is on any tangent to Ω it is on exactly two tangents.
(b) Let s be a secant, s ∩ Ω = {P0, P1} and s = {P0, P1, Q1, ..., Qn−1}. Because |Ω| = n + 1 is odd, through any Qi, there passes at least one tangent ti. The total number of tangents is n + 1. Hence, through any point Qi for i = 1, ..., n−1 there is exactly one tangent. If N is the |
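Part (a) of the theorem can be checked computationally for a small plane. The sketch below is not from the article and all names are illustrative: it builds the projective plane PG(2,3) over GF(3), takes a conic as the oval, and verifies that every point off the oval lies on 0 or 2 tangents.

```python
from itertools import product

q = 3  # odd prime, so the plane order n = q is odd (case (a))

def normalize(v):
    # Canonical representative of a projective point: scale so the
    # first nonzero coordinate is 1 (None for the zero vector).
    for c in v:
        if c % q:
            inv = pow(c, q - 2, q)  # modular inverse in GF(q)
            return tuple((inv * x) % q for x in v)
    return None

points = {normalize(v) for v in product(range(q), repeat=3)} - {None}
lines = points  # the plane is self-dual: line [a,b,c] <-> point (a,b,c)

def incident(line, pt):
    # Line [a,b,c] contains (x,y,z) iff ax + by + cz = 0 in GF(q).
    return sum(a * b for a, b in zip(line, pt)) % q == 0

# An oval: the conic y^2 = xz, which has q + 1 = 4 points in PG(2,3).
oval = {p for p in points if (p[1] * p[1] - p[0] * p[2]) % q == 0}
assert len(oval) == q + 1

# Tangents are the lines meeting the oval in exactly one point.
tangents = [L for L in lines if sum(incident(L, p) for p in oval) == 1]
assert len(tangents) == q + 1  # one tangent per oval point

# Qvist (a): odd order => each point off the oval lies on 0 or 2 tangents.
for P in points - oval:
    assert sum(incident(L, P) for L in tangents) in (0, 2)
print("Qvist's theorem, part (a), verified in PG(2,3)")
```

The same script with an even prime power order (and a nucleus check) would illustrate part (b), at the cost of implementing field arithmetic for GF(4).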
https://en.wikipedia.org/wiki/FlexEnable | FlexEnable is a technology provider that develops flexible organic electronics technologies and OTFT materials (branded as FlexiOM™). The company is located on the Cambridge Science Park, just north of Cambridge city centre.
FlexEnable was spun out of Cambridge University with a focus on replacing silicon-on-glass in large-area electronics with organic thin-film transistors (OTFTs) on flexible substrates, enabling optoelectronic modules that are flexible, ultra-thin, ultra-light and unbreakable. FlexEnable brings organic electronics technology to market through a fabless business model: it supplies OTFT materials, technology transfers and process licences to display manufacturers, allowing them to upgrade and diversify using existing production assets and enter new markets.
Technology
FlexEnable's maximum processing temperature for organic thin-film transistors is below 100 °C. This low temperature allows the use of lower-cost plastic substrates, enabling a low-cost, high-yield mount-and-demount approach to handling flexible substrates.
FlexEnable also states that its transistors are the most flexible available and can be bent to a radius of 0.25 mm thousands of times without affecting performance.
Applications for FlexEnable’s technology include flexible Organic LCD displays (OLCD) for consumer electronics and automotive, and flexible biaxially formable active Liquid Crystal Cell (LC Optics) that actively modulate, steer and focus light for applications including AR/VR optics and smart glasses, automotive smart windows and switchable ePrivacy displays.
See also
Electronic paper
Waveshare Electronics
External links
FlexEnable Flexible Display.
References
Companies based in Cambridge
Technology companies of England
Organic electronics
Flexible displays
Flexible electronics |
https://en.wikipedia.org/wiki/Telechrome | Telechrome was the first all-electronic single-tube color television system. It was invented by well-known Scottish television engineer, John Logie Baird, who had previously made the first public television broadcast, as well as the first color broadcast using a pre-Telechrome system.
Telechrome used two electron guns aimed at either side of a thin, semi-transparent mica sheet. One of the sides was covered in cyan phosphor and the other red-orange, producing a limited color gamut, but well suited to displaying skin tones. With minor modifications, the system could also be used to produce 3D images. Telechrome was selected as the basis for a UK-wide television standard by a committee in 1944, but the difficult task of converting the two-color system to three-color RGB was still under way when Baird died in 1946.
The introduction of the shadow mask design by RCA produced a workable solution for color television, albeit one with considerably less image brightness. Interest in alternative systems like the Telechrome or Geer tube faded by the late 1950s. The only alternatives to see widespread use were General Electric's slot-mask and Sony's Trinitron, both of which were modifications of the RCA concept. All CRT-based methods have since been almost completely replaced by LCD televisions, starting in the 1990s.
Background
Mechanical and hybrid color
Baird performed one of the earliest public demonstrations of a color television system on 3 July 1928 using an all-mechanical system with three Nipkow disk scanners synchronized with a single disk on the receiving end and three colored lights that were turned on and off in synchrony with the broadcaster. The same basic system was used on 4 February 1938 to create the first color broadcast transmissions from The Crystal Palace to the Dominion Theatre in London. Baird was not the only one to experiment with mechanical color television, and a number of similar devices were demonstrated throughout this period, but Baird is recorded |
https://en.wikipedia.org/wiki/DIN%2066003 | The German standard DIN 66003, also known as Code page 1011 (CCSID 1011; abbreviated CP1011) by IBM, Code page 20106 (abbreviated CP20106) by Microsoft and D7DEC by Oracle, is a modification of 7-bit ASCII with adaptations for the German language, replacing certain symbol characters with umlauts and the eszett. It is the German national version of ISO/IEC 646 (ISO 646-DE), and also a localised option in DEC's National Replacement Character Set (NRCS) for their VT220 terminals.
It is registered with the ISO-IR registry for use with ISO/IEC 2022 as ISO-IR-21. Kermit calls it , but also accepts the IANA-registered name . Other IANA-registered names include , and simply .
Code page layout
See also
National Replacement Character Set (NRCS)
References
External links
DIN 66003 purchase page
Roman Czyborra: ISO 646 (Good old ASCII)
Airport display mojibake arising from the differences between DIN 66003 and ASCII
1011
66003 |
https://en.wikipedia.org/wiki/Curses%20%27N%20Chaos | Curses 'N Chaos is a 2D, wave-based, arena-brawler video game with a focus on 2-player co-op by independent developer Tribute Games. The game was released on August 18, 2015 for Windows, OS X, PlayStation 4, and PlayStation Vita.
Gameplay
The game is a single-screen arena brawler in which players fight waves of enemies. Players can craft new items and power-ups, as well as play alongside a friend, locally or online. The PlayStation platforms support cross-buy, cross-save and cross-play features. Players choose between one of two heroes, Lea and Leo, to fight in melee combat against progressively more difficult waves of AI enemies.
References
External links
2015 video games
Indie games
MacOS games
PlayStation 4 games
PlayStation Vita games
PlayStation Network games
Action games
Video games developed in Canada
Video games with cross-platform play
Windows games
Tribute Games games |
https://en.wikipedia.org/wiki/Excess%20noise%20ratio | In electronics, excess noise ratio is a characteristic of a noise generator such as a "noise diode", that is used to measure the noise performance of amplifiers. The Y-factor method is a common measurement technique for this purpose.
By using a noise diode, the output noise of an amplifier is measured at two input noise levels; from the ratio of the two output noise powers (referred to as Y), the noise figure of the amplifier can be determined without having to measure the amplifier gain.
Background
Any amplifier generates noise. In a radio receiver the first stage dominates the overall noise of the receiver and in most cases thermal, or Johnson, noise determines the overall noise performance of a receiver. As radio signals decrease in size, the noise at the input of the receiver will determine a lower threshold of what can be received. The level of noise is determined by calculating the noise in a 50 ohm resistor at the input of the receiver as follows:
P = kTB
where:
P = noise power
k = Boltzmann's constant = 1.38 × 10⁻²³ J/K
T = temperature (in kelvins)
B = bandwidth (in hertz)
Thus, receivers with a narrow bandwidth have a higher sensitivity than receivers with a large bandwidth and input noise can be decreased by cooling the receiver input stage.
A noise diode is a device which has a defined excess noise ratio (ENR).
When the diode is off (unpowered) the noise from it will be thermal noise defined by the above formula. The bandwidth to be used is the bandwidth of the receiver.
When the diode is on (powered) the noise from it will be increased from the thermal noise by the diode's excess noise ratio. This figure could be 6 dB for testing an amplifier with 40 dB gain and could be 16 dB for an amplifier with less gain or higher noise.
To determine the noise figure of an amplifier one uses a noise diode at the input to the amplifier and determines the output noise Y with the diode switched on and off.
Knowing both Y and the ENR, one can then determine the amount of noise contributed by the amplifier and henc |
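The arithmetic of the Y-factor method can be written out numerically. The sketch below assumes the standard relation F = ENR / (Y − 1) in linear terms, with the "off" noise source at the reference temperature T0 = 290 K; function names are illustrative and not from the article.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
T0 = 290.0                  # K, reference temperature

def thermal_noise_dbm(bandwidth_hz, temp_k=T0):
    """Thermal noise power kTB, expressed in dBm."""
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

def noise_figure_db(enr_db, y_db):
    """Y-factor method: noise factor F = ENR / (Y - 1), computed in
    linear terms and returned as a noise figure in dB."""
    enr = 10 ** (enr_db / 10)
    y = 10 ** (y_db / 10)
    return 10 * math.log10(enr / (y - 1))

# The familiar room-temperature noise floor in a 1 Hz bandwidth:
print(round(thermal_noise_dbm(1.0), 1))       # ≈ -174.0 dBm/Hz
# A 6 dB ENR diode producing a 5 dB on/off output ratio:
print(round(noise_figure_db(6.0, 5.0), 2))    # ≈ 2.65 dB
```

Note that neither function needs the amplifier gain, which is the point of the technique: the gain cancels in the ratio Y.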
https://en.wikipedia.org/wiki/Treiber%20stack | The Treiber stack algorithm is a scalable lock-free stack utilizing the fine-grained concurrency primitive compare-and-swap. It is believed that R. Kent Treiber was the first to publish it in his 1986 article "Systems Programming: Coping with Parallelism".
Basic principle
The basic principle of the algorithm is to add a new item to the stack only once you know that nothing else has been added since you began the operation. This is done using compare-and-swap. Pushing an item onto the stack is done by first taking the top of the stack (the old head) and placing it after your new item, creating a new head. You then compare the old head to the current head. If the two match, you can swap the old head for the new one; if not, another thread has added an item to the stack, and you must try again.
When popping an item from the stack, before returning the item you must check that another thread has not added a new item since the operation began.
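The retry loops described above can be sketched as follows. This is an illustrative Python rendering: CPython offers no hardware compare-and-swap on arbitrary references, so the AtomicReference class below merely simulates compare-and-set with a lock (in Java one would use java.util.concurrent.atomic.AtomicReference). The push/pop control flow, however, follows the algorithm as described.

```python
import threading

class AtomicReference:
    """Stand-in for an atomic reference; in a truly lock-free
    implementation compare_and_set would be one CAS instruction."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()
    def get(self):
        return self._value
    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item, next_node):
        self.item, self.next = item, next_node

class TreiberStack:
    def __init__(self):
        self._head = AtomicReference()
    def push(self, item):
        while True:
            old_head = self._head.get()
            new_head = Node(item, old_head)
            # Succeeds only if no other thread changed the head meanwhile.
            if self._head.compare_and_set(old_head, new_head):
                return
    def pop(self):
        while True:
            old_head = self._head.get()
            if old_head is None:
                return None  # empty stack
            if self._head.compare_and_set(old_head, old_head.next):
                return old_head.item

s = TreiberStack()
for x in (1, 2, 3):
    s.push(x)
print(s.pop(), s.pop(), s.pop())  # 3 2 1 (LIFO order)
```

Because each push allocates a fresh Node and compare_and_set tests object identity, this rendering sidesteps the ABA problem in the same way the Java version discussed below does.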
Correctness
In some languages—particularly, those without garbage collection—the Treiber stack can be at risk for the ABA problem. When a process is about to remove an element from
the stack (just before the compare and set in the pop routine below) another process can change the stack such that the head is the same, but the second element is
different. The compare and swap will then set the head of the stack to the old second element in the stack, corrupting the data structure. However, the Java version on this page is not subject to this problem, because of the stronger guarantees offered by the Java runtime (it is impossible for a newly created, unaliased object reference to be reference-equal to any other reachable object).
Testing for failures such as ABA can be exceedingly difficult, because the problematic sequence of events is very rare.
Model checking is an excellent way to uncover such problems. See for instance exercise 7.3.3 in "Modeling and an |
https://en.wikipedia.org/wiki/Multi-parametric%20surface%20plasmon%20resonance | Multi-parametric surface plasmon resonance (MP-SPR) is based on surface plasmon resonance (SPR), an established real-time label-free method for biomolecular interaction analysis, but it uses a different optical setup, a goniometric SPR configuration. While MP-SPR provides the same kinetic information as SPR (equilibrium constant, dissociation constant, association constant), it also provides structural information (refractive index, layer thickness). Hence, MP-SPR measures both surface interactions and nanolayer properties.
History
The goniometric SPR method was researched alongside focused-beam SPR and Otto configurations at VTT Technical Research Centre of Finland from the 1980s by Dr. Janusz Sadowski. The goniometric SPR optics were commercialized by Biofons Oy for use in point-of-care applications. The introduction of additional measurement laser wavelengths and the first thin-film analyses followed in 2011, giving rise to the MP-SPR method.
Principle
The MP-SPR optical setup measures at multiple wavelengths simultaneously (similarly to spectroscopic SPR), but instead of measuring at a fixed angle, it rather scans across a wide range of θ angles (for instance 40 degrees). This results in measurements of full SPR curves at multiple wavelengths providing additional information about structure and dynamic conformation of the film.
Measured values
The measured full SPR curves (x-axis: angle, y-axis: reflected light intensity) can be transcribed into sensograms (x-axis: time, y-axis: selected parameter such as peak minimum, light intensity, peak width). The sensograms can be fitted using binding models to obtain kinetic parameters including on- and off-rates and affinity. The full SPR curves are used to fit Fresnel equations to obtain thickness and refractive index of the layers. Also due to the ability of scanning the whole SPR curve, MP-SPR is able to separate bulk effect and analyte binding from each other using parameters of the curve.
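As an illustration of the kinetic fitting mentioned above, the widely used 1:1 Langmuir interaction model relates an idealized sensogram to the on- and off-rates. This sketch is not from the article; the rate constants and response values below are invented for illustration.

```python
import math

def association(t, conc, k_on, k_off, r_max):
    """Association phase of a 1:1 Langmuir binding sensogram:
    R(t) = R_eq * (1 - exp(-k_obs * t)), with k_obs = k_on*C + k_off
    and R_eq = R_max * C / (C + K_D), where K_D = k_off / k_on."""
    k_obs = k_on * conc + k_off
    r_eq = r_max * conc / (conc + k_off / k_on)
    return r_eq * (1 - math.exp(-k_obs * t))

def dissociation(t, r0, k_off):
    """Dissociation phase: exponential decay from response r0."""
    return r0 * math.exp(-k_off * t)

# Illustrative constants: k_on in 1/(M*s), k_off in 1/s, R_max in
# response units (RU), analyte concentration in M.
k_on, k_off, r_max, conc = 1e5, 1e-3, 100.0, 50e-9  # K_D = 10 nM
r_end = association(300.0, conc, k_on, k_off, r_max)
print(round(r_end, 1), "RU bound after 300 s of association")
print(round(dissociation(600.0, r_end, k_off), 1), "RU left after 600 s")
```

Fitting these two expressions to a measured sensogram (e.g. by least squares over both phases) yields k_on, k_off and hence the affinity K_D, which is the kinetic half of what MP-SPR reports; the structural half comes from the Fresnel-equation fit of the full SPR curves.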
While QCM-D measures wet mass, MP- |
https://en.wikipedia.org/wiki/Linear%20control | Linear control encompasses control systems and control theory based on negative feedback for producing a control signal that maintains the controlled process variable (PV) at the desired setpoint (SP). There are several types of linear control systems with different capabilities.
Proportional control
Proportional control is a type of linear feedback control system in which a correction is applied to the controlled variable which is proportional to the difference between the desired value (SP) and the measured value (PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control system is more complex than an on–off control system but simpler than a proportional-integral-derivative (PID) control system used, for instance, in an automobile cruise control. On–off control will work for systems that do not require high accuracy or responsiveness, but it is not effective for rapid and timely corrections and responses. Proportional control overcomes this by modulating the manipulated variable (MV), such as a control valve, at a gain level that avoids instability, while applying correction as fast as practicable with the optimum quantity of proportional correction.
A drawback of proportional control is that it cannot eliminate the residual SP–PV error, as it requires an error to generate a proportional output. A PI controller can be used to overcome this. The PI controller uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time.
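The residual-offset behaviour described above can be demonstrated with a small simulation. The first-order plant below is an illustrative choice, not from the article: with it, pure proportional control settles at a steady-state error, while adding the integral term drives the error to zero.

```python
def simulate(kp, ki, setpoint=1.0, plant_gain=2.0, tau=1.0,
             dt=0.01, t_end=40.0):
    """Euler simulation of a first-order plant y' = (gain*u - y)/tau
    under PI control; ki=0 gives pure proportional control.
    Returns the final (settled) process value."""
    y, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += (plant_gain * u - y) / tau * dt
    return y

p_only = simulate(kp=2.0, ki=0.0)
with_pi = simulate(kp=2.0, ki=1.0)
print(round(p_only, 3))   # 0.8: proportional control leaves an offset
print(round(with_pi, 3))  # 1.0: integral action removes the residual error
```

The P-only offset matches the closed-loop steady state SP·K·Kp/(1 + K·Kp) = 1·4/5 = 0.8 for this plant: a proportional controller needs a nonzero error to hold a nonzero output, exactly as the paragraph above explains.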
In some systems, there are practical limits to the range of the MV. For example, a heater has a limit to how much heat it can produce and a valve can open only so far. Adjustments to the gain simultaneously alter the range of error values over which the MV is between these limits. The width of this range, in units of the error variable and therefore of the PV, is called the proportiona |
https://en.wikipedia.org/wiki/Vladimir%20Pentkovski | Vladimir Mstislavovich Pentkovski (Russian: Владимир Мстиславович Пентковский; March 18, 1946, Moscow, Soviet Union – December 24, 2012, Folsom, California, United States) was a Soviet-American computer scientist, a graduate of the Moscow Institute of Physics and Technology and a winner of the USSR State Prize (1987), the highest state honor of the former Soviet Union. He was one of the leading architects of the Soviet Elbrus supercomputers and the high-level programming language El-76. At the beginning of the 1990s, he immigrated to the United States, where he worked at Intel and led the team that developed the architecture for the Pentium III processor. According to a popular legend, Pentium processors were named after Vladimir Pentkovski.
Biography
Pentkovski was born in Moscow, USSR, into the family of the mathematician Mstislav Pentkovskii (1911–1968), Doctor of Physical and Mathematical Sciences, full professor (1955), full member of the National Academy of Sciences of the Republic of Kazakhstan (1958) and an author on the application of nomograms in engineering.
After graduating from the Moscow Institute of Physics and Technology (1970), he completed his PhD and Doctorate of Science. From 1970 to 1992 Pentkovski worked at the Lebedev Institute of Precision Mechanics and Computer Engineering designing the supercomputers Elbrus-1 and Elbrus-2 and leading the development of the high-level programming language El-76.
Starting in 1986, he led the research and development of the 32-bit microprocessor El-90 which combined the concept of RISC and Elbrus-2 architecture. The logical design of El-90 processor was finished by 1987, with the prototype launched in 1990. At the same time Pentkovski started designing El-91C microprocessor based on El-90 design, but the project was closed due to the changes to Russian political and economic systems.
In February 1993 Pentkovski started his career at Intel and rose to the level of Senior Principal Engineer. He focused mainly on CPU architecture, |
https://en.wikipedia.org/wiki/Extended%20Mathematical%20Programming | Algebraic modeling languages like AIMMS, AMPL, GAMS, MPL and others have been developed to facilitate the description of a problem in mathematical terms and to link the abstract formulation with data-management systems on the one hand and appropriate algorithms for solution on the other. Robust algorithms and modeling-language interfaces have been developed for a large variety of mathematical programming problems such as linear programs (LPs), nonlinear programs (NLPs), mixed integer programs (MIPs), mixed complementarity programs (MCPs) and others. Researchers constantly extend the types of problems and algorithms that they wish to use for modeling in specific domain applications.
Extended Mathematical Programming (EMP) is an extension to algebraic modeling languages that facilitates the automatic reformulation of new model types by converting the EMP model into established mathematical programming classes to solve by mature solver algorithms. A number of important problem classes can be solved. Specific examples are variational inequalities, Nash equilibria, disjunctive programs and stochastic programs.
EMP is independent of the modeling language used, but currently it is implemented only in GAMS. The new types of problems modeled with EMP are reformulated by the GAMS solver JAMS into well-established problem types, and the reformulated models are passed to a suitable GAMS solver. The core of EMP is an annotation file in which the information needed for the reformulations is added to the model.
Equilibrium problems
Equilibrium problems model questions arising in the study of economic equilibria in a mathematically abstract form. Equilibrium problems include variational inequalities, problems with Nash equilibria, and multiple optimization problems with equilibrium constraints (MOPECs). EMP's keywords can be used to reformulate these problems as mixed complementarity problems (MCPs), a class of problems for which mature solver technology exists. |
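To make the target class concrete, a mixed complementarity problem requires, in its simplest (linear, non-negatively constrained) form, an x ≥ 0 with w = Mx + q ≥ 0 and x_i·w_i = 0 for every i. The toy solver below is purely illustrative, bears no relation to the mature solvers that GAMS dispatches to, and its names and constants are invented; it uses a naive projected iteration that converges for a symmetric positive definite M with a small enough step.

```python
def solve_lcp(M, q, step=0.1, iters=5000):
    """Naive projected iteration x <- max(0, x - step*(M x + q)) for a
    linear complementarity problem: find x >= 0 with w = M x + q >= 0
    and x_i * w_i = 0. Illustrative only; converges for SPD M when
    step < 2 / lambda_max(M)."""
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        x = [max(0.0, x[i] - step * w[i]) for i in range(n)]
    return x

# A 2-variable example: 0 <= x, complementary to w = M x + q >= 0.
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-4.0, 1.0]
x = solve_lcp(M, q)
w = [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]
print([round(v, 3) for v in x], [round(v, 3) for v in w])
# Complementarity holds: in each pair (x_i, w_i) at least one is zero.
```

Here the solution is x = (2, 0) with w = (0, 3): the first constraint is active (x_1 > 0 forces w_1 = 0) while the second is slack. Reformulating equilibrium conditions into exactly this complementary-pairs shape is what EMP automates.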
https://en.wikipedia.org/wiki/Cross-platform%20play | In video games with online gaming functionality, cross-platform play (also called cross-compatible play, crossplay, or cross-play) describes the ability of players using different video game hardware to play with each other simultaneously. It is commonly applied to the ability of players using a game on a specific video game console to play alongside a player on a different hardware platform such as another console or a computer. A related concept is cross-save, whereby the player's progress in a game is stored on separate servers and can be continued in the game on a different hardware platform.
Cross-play is related to but distinct from the notion of cross-platform development, which uses software languages and tools to enable deployment of software on multiple platforms. Cross-platform play is also a distinct concept from the ability to allow a player to play a game on different hardware platforms, often only having to purchase the title for one single system to have access to it on other systems, and retaining their progress in the game through the use of cloud storage or similar techniques.
Cross-platform play, while technically feasible with today's computer hardware, generally is impeded by two factors. One factor is the difference in control schemes between personal computers and consoles, with the keyboard-and-mouse controls typically giving computer players an advantage that cannot be easily remedied. The second factor relates to the closed online services used on consoles that are designed to provide a safe and consistent environment for its players that require the businesses' cooperation to open up for cross-platform play. Up through September 2018, Sony Interactive Entertainment had restricted PlayStation 4 cross-platform play with other consoles, creating a rift between players of popular games like Rocket League and Fortnite Battle Royale. In September 2018, Sony changed their stance, and had opened up beta-testing for Fortnite cross-platform pl |
https://en.wikipedia.org/wiki/Cascade%20impactor | A cascade impactor measures the reach of a particulate substance as it moves through an opening with the use of an aerosol. Cascade impactors are strictly measurement devices. In addition to measuring the range of substances moved through an opening by aerosol, the impactor can also be used to determine the particle size of the distributed substance. A cascade impactor collects its samples in a graduated manner, which allows the user to identify the sizes of the substance particles as the particles are distributed from the propellant aerosol source. When the aerosol substance is distributed into the cascade impactor, the substance enters a series of discs designed to collect solids and different particulate matter. The substance is thus collected as it passes through the disc series. Each disc is set in sequence with the discs before and after it. The size of the discs is graduated as well, to properly determine the size of the particulate matter at each stage of the impactor.
An impactor is a device that classifies particles present in a sample of air or gas into known size ranges. It does this by drawing the air sample through a cascade of progressively finer nozzles. The air jets from these nozzles impact on plane sampling surfaces and each stage collects finer particles than its predecessor. The samples may be analysed under the microscope or by any method of chemical analysis that may be suitable for obtaining the mass of material of interest on each stage, e.g. atomic absorption or gas chromatography-mass spectrometry (GC/MS).
All cascade impactors have certain design features in common. By using suitable pumps, jet dimensions can be decreased to the point where the velocity is close to that of sound, enabling particles down to about 0.25 µm to be impacted.
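The stage cut-point implied by this description can be estimated from classical impaction theory, in which the Stokes number at 50% collection efficiency (Stk50 ≈ 0.24 for round jets) fixes the cut-point diameter d50 = sqrt(9·η·W·Stk50 / (ρp·U)) when the slip correction is neglected. This is textbook theory, not taken from the article, and the constants below are illustrative.

```python
import math

def cutpoint_diameter_um(jet_diameter_m, jet_velocity_ms,
                         particle_density=1000.0, viscosity=1.81e-5,
                         stk50=0.24):
    """Stage cut-point d50 from Stk50 = rho_p * d50^2 * U / (9*eta*W),
    with the Cunningham slip correction ignored. Density in kg/m^3,
    air viscosity in Pa*s; result in micrometres."""
    d50 = math.sqrt(9 * viscosity * jet_diameter_m * stk50
                    / (particle_density * jet_velocity_ms))
    return d50 * 1e6  # metres -> micrometres

# Finer jets and higher jet velocities give smaller cut-points,
# which is why near-sonic jets reach sub-micrometre particles:
for w, u in [(2e-3, 10.0), (5e-4, 100.0)]:
    print(f"W = {w*1e3:.1f} mm, U = {u:.0f} m/s -> "
          f"d50 = {cutpoint_diameter_um(w, u):.2f} um")
```

Successive stages of a real impactor shrink W and raise U, producing the progressively finer cut-points the text describes; at sub-micrometre sizes the neglected slip correction becomes significant, so this sketch overestimates d50 there.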
See also
Aerosol impaction
External links
A multi-stage, low flow rate cascade impactor Journal of Aerosol Science, 1970
Good Cascade Impactor Practice (GCIP) and Considerations for “ |
https://en.wikipedia.org/wiki/Polymer%20sponge | Taking cues from spongy toddler toys that absorb water and inflate to larger sizes, scientists at the Mayo Clinic research center in Rochester, Minnesota, United States, have developed biodegradable polymer grafts that, when surgically placed in damaged vertebrae, are intended to grow to just the right size and shape to fix the spinal column.
Any problem with the backbone of a vertebrate is often a potential disability: it can limit a person's ability to manoeuvre around their surroundings, cause severe pain, and be responsible for mental distress. The problem has been researched by Lichun Lu and Xifeng Liu, scientists from Mayo Clinic's college of medicine, who have developed a novel spinal graft that, once surgically placed in the body, will grow to be just the right size and shape to fix the spinal column. They presented their work at the 251st National Meeting & Exposition of the American Chemical Society (ACS), a non-profit organization.
Problem
Current treatments for spinal tumours are considered too expensive and invasive. When cancer metastasizes, it predominantly tends to settle in the spinal column, so a different approach to replacing harmed vertebrae has been investigated. The polymer sponge researchers were reported to be preparing to present their work in March 2016 to a meeting of the American Chemical Society (ACS).
Solution
Doctors can cut out the infected bone tissue (or replace it outright, as they did in the Sydney case), but that leaves large gaps in the spine. Normally, doctors would either have to open the chest cavity and access the spine from the far side (which entails a lengthy recovery and a high probability of complications), or they would make a small incision in the neck or back and inject expandable titanium rods into the bone gap (which is very expensive because of the titanium). This new technique combines the easy access and short recovery of the titanium-rod method with the low cost of the open che |
https://en.wikipedia.org/wiki/NIS%20code | The NIS code (Dutch: NIS-code, French: code INS) is a numeric code for regional areas of Belgium.
This code is used for statistical data treatment in Belgium.
This code was developed in the mid-1960s by Statistics Belgium. It was first used for the census of 1970.
Structure of the code
The NIS code consists of 5 digits:
The first digit identifies the province. If this digit is followed by four zeroes, the code identifies the complete province. Example: 70000 identifies the province of Limburg.
The second digit identifies the arrondissement within this province. If after the two first digits there are three zeroes, then this code identifies the complete arrondissement. Example: 71000 identifies the arrondissement of Hasselt.
The last three digits uniquely identify the municipality within that arrondissement. Example: 71066 identifies Zonhoven.
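The digit structure above can be sketched as a small decoder. The function name and return format are invented for illustration, and the special codes 01000–04000 for the country and regions are ignored here.

```python
def decode_nis(code):
    """Split a 5-digit NIS code into the nested levels described
    above: province (d0000), arrondissement (dd000), municipality
    (ddnnn). Illustrative sketch; special codes below 10000 are
    not handled."""
    code = f"{int(code):05d}"
    province = code[0] + "0000"
    arrondissement = code[:2] + "000"
    if code.endswith("0000"):
        return {"level": "province", "province": province}
    if code.endswith("000"):
        return {"level": "arrondissement", "province": province,
                "arrondissement": arrondissement}
    return {"level": "municipality", "province": province,
            "arrondissement": arrondissement, "municipality": code}

# The examples from the text: Limburg, Hasselt, Zonhoven.
print(decode_nis("70000")["level"])   # province
print(decode_nis("71000")["level"])   # arrondissement
print(decode_nis("71066"))
```

Decoding 71066 (Zonhoven) recovers both enclosing codes, 71000 (arrondissement of Hasselt) and 70000 (province of Limburg), directly from the digit prefixes.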
Special cases
The country Belgium received the code 01000.
The three regions received the codes 02000 for Flanders, 03000 for Wallonia and 04000 for the Brussels region.
In 1995 the province of Brabant with first digit 2 was split in Flemish Brabant and Walloon Brabant. Flemish Brabant received code 20001 and Walloon Brabant received code 20002. The arrondissements kept their old codes.
The provinces and municipalities of Brussels are sorted alphabetically on their French name.
Mergers
municipalities that were merged and received a new name also received a new NIS code, which followed the last number of the list of municipalities of that arrondissement.
municipalities that lost their independence by merger and became municipality parts, also lost their NIS code. Per merged municipality only 1 NIS code remained. At the same time the structure of the NIS sector was adapted. An alphabetic letter was added per municipality part to be able to uniquely identify such a municipality part.
The Arrondissement of Brussels-Periphery (code 22000), which merged in 1971 with the arrondissement of Halle-Vilvoorde, also lost its code by m |
https://en.wikipedia.org/wiki/NQIT | NQIT (Networked Quantum Information Technologies) is a quantum computing research hub established in 2014 as part of the UK National Quantum Technologies Programme. NQIT is a consortium of 9 UK universities and 30 partners, which received funding of £38m over a 5-year period.
By the end of the 5-year programme NQIT aims to produce the Q20:20 engine, a demonstrator of a scalable quantum computer comprising an optically-linked network of 20 cells, each cell being a quantum processor with 20 matter qubits.
Organisation
The UK National Quantum Technologies Programme was initiated by the UK Chancellor of the Exchequer, George Osborne in the Autumn Statement in 2013 in which he pledged a £270 million investment. A £120 million national network of four Quantum Technology Hubs was announced by Greg Clark in 2014. The NQIT Hub is led by a Director, Professor Ian Walmsley, who provides overall leadership and scientific vision, and two Co-Directors, Professor Dominic O’Brien, who leads the Systems Engineering, and Dr Tim Cook, who leads the Industrial User Engagement activities.
NQIT is led by the University of Oxford and academic partners are the University of Bath, the University of Cambridge, the University of Edinburgh, the University of Leeds, the University of Southampton, the University of Strathclyde, the University of Sussex and the University of Warwick.
Within the University of Oxford, NQIT works across the Departments of Physics, Engineering, Computer Science and Materials.
NQIT works with 30 industrial and government partners, including Aspen Electronics, the Centre for Quantum Technologies, Covesion Ltd, the Defence Science and Technology Laboratory, Element Six, ETSI, the Fraunhofer Institute for Telecommunications, Google, Lockheed Martin, M Squared Lasers, the UK National Physical Laboratory, Oxford Capital, Oxford Instruments, Pure Lifi, Raytheon UK, Rohde & Schwarz, Satellite Applications Catapult and Toshiba.
Q20:20
NQIT's principal g |
https://en.wikipedia.org/wiki/Mud%20ring%20feeding | Mud ring feeding (or mud plume fishing) is a cooperative feeding behavior seen in bottlenose dolphins on the lower Atlantic coast of Florida, United States. Dolphins use this hunting technique to forage and trap fish. A single dolphin swims in a circle around a group of fish, swiftly moving its tail along the sand to create a plume. This creates a temporary net around the fish, which become disoriented. The fish begin jumping above the surface, so the dolphins can lunge through the plume and catch them.
Strategy
A single dolphin in the group begins to swim with its tail moving along the sand; suspended sediment first appears
As the dolphin moves in a circle, the plume begins to grow
Cessation of plume growth and repositioning of dolphins in orientation to the plume
Dolphin lunges through the plume into the group of trapped fish
Mud ring feeding was first observed in 1999 in a group of 18 dolphins. A thick cloud of suspended sediment was noticed at the surface of the water. The sediment plume then grows linearly or curvilinearly, and the dolphin is observed to lead at the edge of the plume rather than just ahead. Lengths of the plume are estimated to be between . The entire behavior lasts an average of 17.0 seconds from the initiation of the mud plume through the final lunge. In all cases the behavior is performed by a single animal and the plume is used once, though simultaneous plumes have been seen created separately by other dolphins in the group.
See also
Bubble net
List of feeding behaviours
Cooperative hunting
Predation
Pack hunter
External links
BBC Life Mud-ringing
References
Mammal behavior
Cetology
Eating behaviors
Collaboration |
https://en.wikipedia.org/wiki/Have%20I%20Been%20Pwned%3F | Have I Been Pwned? (HIBP; stylized in all lowercase as "';--have i been pwned?") is a website that allows Internet users to check whether their personal data has been compromised by data breaches. The service collects and analyzes hundreds of database dumps and pastes containing information about billions of leaked accounts, and allows users to search for their own information by entering their username or email address. Users can also sign up to be notified if their email address appears in future dumps. The site has been widely touted as a valuable resource for Internet users wishing to protect their own security and privacy. Have I Been Pwned? was created by security expert Troy Hunt on 4 December 2013.
As of June 2019, Have I Been Pwned? averages around one hundred and sixty thousand daily visitors; the site has nearly three million active email subscribers and contains records of almost eight billion accounts.
Features
The primary function of Have I Been Pwned? since it was launched is to provide the general public with a means to check if their private information has been leaked or compromised. Visitors to the website can enter an email address, and see a list of all known data breaches with records tied to that email address. The website also provides details about each data breach, such as the backstory of the breach and what specific types of data were included in it.
Have I Been Pwned? also offers a "Notify me" service that allows visitors to subscribe to notifications about future breaches. Once someone signs up with this notification mailing service, they will receive an email message any time their personal information is found in a new data breach.
In September 2014, Hunt added functionality that enabled new data breaches to be automatically added to HIBP's database. The new feature used Dump Monitor, a Twitter bot which detects and broadcasts likely password dumps found on pastebin pastes, to automatically add new potential breaches in real |
https://en.wikipedia.org/wiki/3x%20%2B%201%20semigroup | In algebra, the 3x + 1 semigroup is a special subsemigroup of the multiplicative semigroup of all positive rational numbers. The elements of a generating set of this semigroup are related to the sequence of numbers involved in the still open Collatz conjecture or the "3x + 1 problem". The 3x + 1 semigroup has been used to prove a weaker form of the Collatz conjecture. In fact, it was in such context the concept of the 3x + 1 semigroup was introduced by H. Farkas in 2005. Various generalizations of the 3x + 1 semigroup have been constructed and their properties have been investigated.
Definition
The 3x + 1 semigroup is the multiplicative semigroup of positive rational numbers generated by the set
The function T : Z → Z, where Z is the set of all integers, is used in the "shortcut" form of the Collatz conjecture. It is defined by T(n) = n/2 when n is even, and T(n) = (3n + 1)/2 when n is odd.
The Collatz conjecture asserts that for each positive integer n, some iterate of T with itself maps n to 1; that is, there is some integer k such that T(k)(n) = 1. For example, if n = 7 then the values of T(k)(n) for k = 1, 2, 3, ... are 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1, and T(11)(7) = 1.
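The shortcut map T and the worked example for n = 7 can be checked with a short script (Python here, purely for illustration):

```python
# The "shortcut" Collatz map: n/2 if n is even, (3n + 1)/2 if n is odd.
def T(n: int) -> int:
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def iterates(n: int):
    """Yield T(n), T(T(n)), ... until 1 is reached."""
    while n != 1:
        n = T(n)
        yield n

print(list(iterates(7)))
# [11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]  -> T(11)(7) = 1
```

The printed sequence matches the article's example, with 1 reached at the 11th iterate.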
The relation between the 3x + 1 semigroup and the Collatz conjecture is that the 3x + 1 semigroup is also generated by the set
The weak Collatz conjecture
The weak Collatz conjecture asserts the following: "The 3x + 1 semigroup contains every positive integer." This was formulated by Farkas and it has been proved to be true as a consequence of the following property of the 3x + 1 semigroup:
The 3x + 1 semigroup S equals the set of all positive rationals a/b in lowest terms having the property that b ≠ 0 (mod 3). In particular, S contains every positive integer.
The wild semigroup
The semigroup generated by the set
which is also generated by the set
is called the wild semigroup. The integers in the wild semigroup consist of all integers m such that m ≠ 0 (mod 3).
See also
Wild number
References
Semigroup theory
Arithmetic dy |
https://en.wikipedia.org/wiki/Woodland%20edge | A woodland edge or forest edge is the transition zone (ecotone) from an area of woodland or forest to fields or other open spaces. Certain species of plants and animals are adapted to the forest edge, and these species are often more familiar to humans than species only found deeper within forests. A classic example of a forest edge species is the white-tailed deer in North America.
The woodland edge on maps
On topographic maps, woods and forests are generally depicted in a soft green colour. Their edges are - like other features - usually determined from aerial photographs, but sometimes also by terrestrial survey. However, they only represent a snapshot in time, because almost all woods tend to spread or to gradually fill clearings. In addition, working out the exact edge of a wood or forest may be difficult where it transitions into scrub or bushes, or where the trees thin out gradually. Differences of opinion here often involve several tens of metres. Many cartographers prefer to show even small islands of trees, while others - depending on the scale of the map - prefer more general, continuous lines to demarcate forest or woodland edges.
For specialised work, aerial photographs or satellite imagery are frequently utilised without having to revise the maps. Cadastral maps cannot show the current situation because for reasons of cost they can only be updated at fairly long intervals and cultural boundaries are not legally binding.
Woodland edges and biology
On the woodland edge – however it is defined – not only does the flora change, but also the fauna and the soil type. These edge effects mean that many species of animal prefer woodland edges to the heart of the forest, because they have both protection and light - for example tree pipits and dunnocks. At the woodland edge trees are often different from those inside the wood, as well as hedge vegetation, brambles and low-growing plants.
The more gradual the transition from open country |
https://en.wikipedia.org/wiki/Epoxyeicosatetraenoic%20acid | Epoxyeicosatetraenoic acids (EEQs or EpETEs) are a set of biologically active epoxides that various cell types make by metabolizing the omega 3 fatty acid, eicosapentaenoic acid (EPA), with certain cytochrome P450 epoxygenases. These epoxygenases can metabolize EPA to as many as 10 epoxides that differ in the site and/or stereoisomer of the epoxide formed; however, the formed EEQs, while differing in potency, often have similar bioactivities and are commonly considered together.
Structure
EPA is a straight-chain, 20-carbon omega-3 fatty acid containing cis (see Cis–trans isomerism) double bonds between carbons 5 and 6, 8 and 9, 11 and 12, 14 and 15, and 17 and 18; each of these double bonds is designated with the notation Z to indicate its cis configuration in the IUPAC chemical nomenclature used here. EPA is therefore 5Z,8Z,11Z,14Z,17Z-eicosapentaenoic acid. Certain cytochrome P450 epoxygenases metabolize EPA by converting one of these double bonds to an epoxide, thereby forming one of 5 possible eicosatetraenoic acid epoxide regioisomers (see Structural isomer, section on position isomerism (regioisomerism)). These regioisomers are: 5,6-EEQ (i.e. 5,6-epoxy-8Z,11Z,14Z,17Z-eicosatetraenoic acid), 8,9-EEQ (i.e. 8,9-epoxy-5Z,11Z,14Z,17Z-eicosatetraenoic acid), 11,12-EEQ (i.e. 11,12-epoxy-5Z,8Z,14Z,17Z-eicosatetraenoic acid), 14,15-EEQ (i.e. 14,15-epoxy-5Z,8Z,11Z,17Z-eicosatetraenoic acid), and 17,18-EEQ (i.e. 17,18-epoxy-5Z,8Z,11Z,14Z-eicosatetraenoic acid). The epoxygenases typically make both R/S enantiomers of each epoxide. For example, they metabolize EPA at its 17,18 double bond to a mixture of 17R,18S-EEQ and 17S,18R-EEQ. The EEQ products therefore consist of as many as ten isomers.
Production
Cellular cytochrome P450 epoxygenases metabolize various polyunsaturated fatty acids to epoxide-containing products. They metabolize the omega-6 fatty acid arachidonic acid, which possesses four double bonds, to 8 different epoxide isomers which are termed epoxyeicosatrieno |
https://en.wikipedia.org/wiki/Coding%20bootcamp | Coding bootcamps are intensive programs of software development. They first appeared in 2011.
History
The first coding bootcamps were opened in 2011.
As of July 2017, there were 95 full-time coding bootcamp courses in the United States. Course lengths typically range from 8 to 36 weeks, with most lasting 10 to 12 weeks (averaging 12.9 weeks).
Collaboration with higher education
Following the increased popularity of coding bootcamps, some universities have started their own intensive coding programs or partnered with existing private coding bootcamps.
Online coding bootcamps
There are various options for online bootcamps. These usually work by matching students with a mentor, and they are also generally cheaper and more accommodating of specific student needs.
Data science bootcamps and fellowships
Bootcamps that focus less on full stack development and more on producing data scientists and data engineers are known as data science bootcamps.
Matching programs
Coding bootcamps may be selective and require minimum skills; some companies aim to help novices learn prerequisite skills and apply to bootcamps.
Tuition
Coding bootcamps can be part-time or online, and they may be funded by employers or qualify for student loans. According to a 2017 market research report, tuition ranged from free to $21,000 for a course, with an average tuition of $11,874.
"Deferred Tuition" refers to a payment model in which students pay the school a percentage (18%–22.5%) of their salary for 1–3 years after graduation, instead of upfront tuition.
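As a hypothetical illustration of the deferred-tuition arithmetic, a quick calculation (the salary, rate, and duration below are invented for the example, not taken from any school's terms):

```python
# Total paid under a deferred-tuition agreement: a fixed percentage of
# annual salary for a fixed number of years after graduation.
def deferred_tuition_paid(annual_salary, rate, years):
    return annual_salary * rate * years

# A graduate earning $60,000/yr who pays 20% of salary for 2 years:
total = deferred_tuition_paid(60_000, 0.20, 2)
print(total)  # 24000.0
```

At the high end of the quoted range (22.5% for 3 years), the same salary would imply a substantially larger total than many upfront tuitions, which is the trade-off of the model.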
In Europe, coding bootcamps can be free or cost a few thousand euros per program. Compared with formal university education, such private training offerings appear expensive.
On August 16, 2016, the US Department of Education announced up to $17 million in loans or grants for students to study with nontraditional training providers, including coding bootcamps. These grants or loans will be administered through the pilot program |
https://en.wikipedia.org/wiki/Sequence%20graph | A sequence graph, also called an alignment graph, breakpoint graph, or adjacency graph, is a bidirected graph used in comparative genomics. The structure represents one or more genomes, with vertices corresponding to DNA segments and edges corresponding to adjacencies between segments in a genome. Traversing a connected component of segments and adjacency edges (called a thread) yields a sequence, which typically represents a genome or a section of a genome. The segments can be thought of as synteny blocks, with the edges dictating how to arrange these blocks in a particular genome, and the labelling of the adjacency edges representing bases that are not contained in synteny blocks.
Construction
Before constructing a sequence graph, there must be at least two genomes represented as directed graphs with edges as threads (adjacency edges) and vertices as DNA segments. The genomes should be labeled P and Q, while the sequence graph is labeled as BreakpointGraph(P, Q).
The directional vertices of Q and their edges are arranged in the order of P. Once completed, the edges of Q are reconnected to their original vertices. After all edges have been matched the vertex directions are removed and instead each vertex is labeled as vh (vertex head) and vt (vertex tail).
Similarity between genomes is represented by the number of cycles (independent closed paths) within the sequence graph, denoted cycles(P, Q). The maximum possible number of cycles equals the number of vertices in the sequence graph.
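The construction and cycle count can be sketched in code. The following is a hedged sketch (Python), assuming the common convention of capping linear signed genomes with artificial endpoints 0 and 2n+1; the function name and vertex numbering are illustrative and differ slightly from the vh/vt labelling described above:

```python
# Count alternating cycles in the breakpoint graph of two signed linear
# genomes P and Q over the genes 1..n (each gene appears once, signed).
def breakpoint_cycles(P, Q):
    n = len(P)

    def adjacencies(G):
        s = [0]  # left cap
        for g in G:
            # +g contributes (tail 2g-1, head 2g); -g reverses the pair
            s += [2 * g - 1, 2 * g] if g > 0 else [-2 * g, -2 * g - 1]
        s.append(2 * n + 1)  # right cap
        # adjacency edges join the head of one block to the tail of the next
        return [(s[i], s[i + 1]) for i in range(0, 2 * n + 2, 2)]

    # Union-find: every vertex has one P-edge and one Q-edge, so each
    # connected component is an alternating cycle.
    parent = list(range(2 * n + 2))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in adjacencies(P) + adjacencies(Q):
        parent[find(a)] = find(b)
    return len({find(v) for v in range(2 * n + 2)})
```

With identical genomes every adjacency edge is doubled, giving the maximum of n + 1 cycles under this capped convention; fewer cycles indicate greater rearrangement distance between the two genomes.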
Example
Figure example.
Upon receiving genomes P (+a +b -c) and Q (+a +b -c), Q should be realigned to follow the direction edges (red) of P. The vertices should be renamed from a, b, c to ah at, bh bt, ch ct and the edges of P and Q should be connected to their original vertices (P edges = black, Q edges = green). Remove the directional edges (red). The number of cycles in G(P, Q) is 1 while the max possible is 3.
Applications
Breakpoint |
https://en.wikipedia.org/wiki/International%20Conference%20on%20Human%E2%80%93Robot%20Interaction | The ACM/IEEE '''International Conference on Human-Robot Interaction (HRI)''' is an annual conference "focusing on human-robot interaction with roots in robotics, psychology, cognitive science, human computer interaction (HCI), human factors, artificial intelligence, organizational behavior, anthropology, and other fields". The conference is a joint undertaking of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) organizations.
See also
ACM
ACM SIGAI
IEEE
References
External links
ACM web site
IEEE web site
HCI 2016 conference site
Software engineering conferences |
https://en.wikipedia.org/wiki/Tuft%20cell | Tuft cells are chemosensory cells in the epithelial lining of the intestines. Similar tufted cells are found in the respiratory epithelium where they are known as brush cells. The name "tuft" refers to the brush-like microvilli projecting from the cells. Ordinarily there are very few tuft cells present but they have been shown to greatly increase at times of a parasitic infection. Several studies have proposed a role for tuft cells in defense against parasitic infection. In the intestine, tuft cells are the sole source of secreted interleukin 25 (IL-25).
ATOH1 is required for tuft cell specification but not for maintenance of a mature differentiated state, and knockdown of Notch results in increased numbers of tuft cells.
Human tuft cells
The human gastrointestinal (GI) tract contains tuft cells along its entire length, located between the crypts and villi. DCLK1 is expressed at the basal pole of all these cells. They did not share the morphology described in animal studies, but they showed an apical brush border of the same thickness. Colocalization of synaptophysin and DCLK1 was found in the duodenum, suggesting that these cells play a neuroendocrine role in this region. A specific marker of intestinal tuft cells is the microtubule-associated kinase doublecortin-like kinase 1 (DCLK1). Tuft cells positive for this kinase are important in gastrointestinal chemosensation and inflammation, and can effect repair after injuries in the intestine.
Function
One key to understanding the role of tuft cells is that they share many characteristics with chemosensory cells in taste buds. For instance, they express many taste receptors and much of the taste signaling apparatus. This suggests that tuft cells could function as chemoreceptive cells able to sense many chemical signals around them. Newer research suggests that tuft cells can indeed be activated through the taste receptor apparatus. They can also be triggered by different small molecules, such as |
https://en.wikipedia.org/wiki/Duodenal%20lymphocytosis | Duodenal lymphocytosis, sometimes called lymphocytic duodenitis, lymphocytic duodenosis, or duodenal intraepithelial lymphocytosis, is a condition where an increased number of intra-epithelial lymphocytes is seen in biopsies of the duodenal mucosa when these are examined microscopically. This form of lymphocytosis is often a feature of coeliac disease but may be found in other disorders.
Presentation
The condition is characterised by an increased proportion of lymphocytes in the epithelium of the duodenum, usually defined as more than 20–25 per 100 enterocytes. Intra-epithelial lymphocytes (IELs) are normally present in the intestine, and their numbers are normally greater in the crypts and in the jejunum; these are distinct from the lymphocytes found in the lamina propria of the intestinal mucosa. IELs are mostly T cells. Increased numbers of IELs are reported in around 3% of duodenal biopsies, depending on case mix, but may increasingly be found in up to 7%.
Causes
The list of possible causes is wide, including coeliac disease, environmental enteropathy (tropical sprue), autoimmune enteropathy, small intestinal bacterial overgrowth, NSAID damage, Helicobacter pylori, other infections and Crohn's disease.
Diagnosis
Diagnosis is made by accurate counting of intraepithelial lymphocytes during histological examination of the duodenum. The definition of the condition includes the requirement that the duodenal histological appearances are otherwise unremarkable, specifically with normal villous architecture.
In coeliac disease (also known as gluten-sensitive enteropathy), duodenal lymphocytosis is found in untreated or partially treated cases. This is the least severe type of change, known as the Marsh I stage, in the classification of histological changes in coeliac disease. Additional features including villous atrophy and crypt hyperplasia are the other findings in other Marsh stages of coeliac disease.
Antibodies associated with coeliac disease were reported in a |
https://en.wikipedia.org/wiki/Dreamcast%20homebrew | Though Sega officially discontinued its Dreamcast video game console in 2001, and released the console's last official game in 2007, Dreamcast homebrew developers continued to release unofficial games for the console. Unlike homebrew communities for other consoles, the Dreamcast homebrew developers are organized in development teams, such as Redspotgames.
Community
Redspotgames is a German homebrew publisher.
NG:DEV.TEAM
Games
This is a partial list of games. For a more complete list, see List of Dreamcast homebrew games
(R4)
References
Further reading
Dreamcast
Homebrew software
Video game development |
https://en.wikipedia.org/wiki/Galileo%20%28supercomputer%29 | Galileo is a 1.1 petaFLOPS supercomputer located at CINECA in Bologna, Italy.
History
GALILEO has been available at Cineca since January 2015, and in full production since February 2, sponsored by the Ministry of Education, Universities and Research (Italy), the Istituto Nazionale di Fisica Nucleare and the University of Milano-Bicocca. It is the Italian national Tier-1 HPC machine, devoted to scientific computing as well as technically oriented applications. Galileo is also available to European researchers as a Tier-1 system of the PRACE infrastructure.
In June 2015, Galileo reached the 105th position on the TOP500 list of the fastest supercomputers in the world.
On the Green500 list of top supercomputers, Galileo reached the 389th position; in their benchmark the system tested at 242.17 MFLOPS/W (performance per watt).
Technical details
Galileo is an IBM Linux InfiniBand cluster with a NeXtScale architecture. It is made of 516 compute nodes. Each node contains two 8-core Intel Haswell processors (2.40 GHz) and a shared memory of 128 GB. The internal network is InfiniBand with 4x QDR switches. The cluster is accessible through 8 login nodes, also used for visualization, reachable via ssh at the address login.galileo.cineca.it. The login nodes are equipped with 2 nVidia K40 GPUs each. The cluster also has 8 NX360M5 service nodes for I/O and management.
The operating system for both compute and login nodes is CentOS 7.0.
Galileo is a heterogeneous hybrid cluster: 359 nodes are equipped with Intel accelerators (Intel Phi 7120p), 2 accelerators per node for a total of 768 Phi in the system; 40 nodes are equipped with nVidia accelerators (nVidia K80), 2 accelerators per node for a total of 80 K80 in the system.
See also
Supercomputing in Europe
References
External links
La Repubblica in Italian
Corriere Comunicazioni in Italian
ResearchItaly in Italian
Primeur Magazine
RaiNews in Italian
GPGPU supercomputers
IBM supercomputers
Supercomputing in |
https://en.wikipedia.org/wiki/Advertising%20in%20biology | Advertising in biology means the use of displays by organisms such as animals and plants to signal their presence for some evolutionary reason.
Such signalling may be honest, used to attract other organisms, as when flowers use bright colours, patterns, and scent to attract pollinators such as bees; or, again honestly, to warn off other organisms, as when distasteful animals use warning coloration to prevent attacks from potential predators. Such honest advertising benefits both the sender and the receiver.
Other organisms may advertise dishonestly; in Batesian mimicry, edible animals more or less accurately mimic distasteful animals to reduce their own risk of being attacked by predators.
In plants
Insect-pollinated flowers use bright colours, patterns, rewards of nectar and pollen, and scent to attract pollinators such as bees. Some also use drugs such as caffeine to encourage bees to return more often. Advertising is influenced by sexual selection: in dioecious plants like sallow, the male flowers are brighter yellow (the colour of their pollen) and have more scent than female flowers. Honey bees are more attracted by the brighter male flowers, but not by their scent.
Many flowers that are adapted for pollination by birds produce copious quantities of nectar and advertise this with their red coloration. Insects see red less well than other colours, and the plant needs to devote its energy to attracting birds that can act as pollinators rather than insects that cannot. In fact, the Canary Island endemic Echium wildpretii has two subspecies: a red-flowered one on Tenerife, which is mainly pollinated by birds, and a pink-flowered one on La Palma, which is pollinated by insects.
In animals
Advertising takes a variety of forms in animals. Breeding adults often display to attract a mate. Breeding males of sexually dimorphic birds, such as peacocks, birds of paradise and bower birds, have elaborate plumage, song, and behaviour. These evolved through sexual sele |
https://en.wikipedia.org/wiki/Multiple%20models | In control theory, multiple models is an approach to improving the efficiency of adaptive systems or observer systems. It uses a large number of models distributed in the region of uncertainty; based on the responses of the plant and the models, the model closest to the plant according to some metric is chosen at every instant. The method offers satisfactory performance when no restrictions are put on the number of available models.
Approaches
There are two multiple model methods:
“Switching”: the control input to the plant is based on the fixed model chosen at that instant. It is discontinuous, fast, but coarse.
“Switching and tuning”: an adaptive model is initialized from the location of the chosen fixed model, and the parameters of the best model determine the control to be used. It is continuous, slow, but accurate.
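The "switching" scheme above can be sketched for a toy scalar plant. This is an illustrative sketch only: the true parameter, the bank of fixed models, the deadbeat control law, and the quadratic performance index are all assumptions made for the example, not taken from the source.

```python
# Switching multiple-model control for the scalar plant
# x[k+1] = a*x[k] + u[k] with unknown parameter a.
def simulate(a_true=1.8, models=(0.5, 1.0, 1.5, 2.0), x0=1.0, steps=20):
    J = [0.0] * len(models)  # cumulative squared identification errors
    x = x0
    best = 0                 # start from an arbitrary fixed model
    for _ in range(steps):
        u = -models[best] * x                 # deadbeat control for chosen model
        x_next = a_true * x + u               # true plant response
        for i, a_i in enumerate(models):
            J[i] += (x_next - (a_i * x + u)) ** 2  # each model's prediction error
        best = min(range(len(models)), key=J.__getitem__)  # switch to best model
        x = x_next
    return models[best], abs(x)

chosen_model, final_state = simulate()
```

Because the bank contains a model (a = 2.0) close to the true parameter (a = 1.8), the switching rule locks onto it after one step and the closed loop contracts by a factor of 0.2 per step, driving the state toward zero.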
Applications
Multiple model method can be used for:
controlling an unknown plant - parameter estimates and identification errors can be used collectively to determine the control input to the overall system,
applying multiple observers - significantly improving transients and reducing observer overshoot.
See also
State observer
Adaptive control
References
General references
Control theory |
https://en.wikipedia.org/wiki/Coincheck | Coincheck is a Japanese bitcoin wallet and exchange service headquartered in Tokyo, Japan, founded by Koichiro Wada and Yusuke Otsuka. It operates exchanges between bitcoin, ether and fiat currencies in Japan, and bitcoin transactions and storage in some countries.
In April 2018, Coincheck was acquired by Monex Group for 3.6 billion yen (US$33.4 million).
Since 2016, Coincheck has also been the trademarked name of a numismatic supply company located in the United States.
History
Coincheck started in August 2014 and is operated by Coincheck, Inc. (previously ResuPress, Inc.), founded in 2012. By then, more than 2,200 merchants were using its bitcoin payment solution in Japan alone. Coincheck is a member of the JBA (Japan Blockchain Association) and is actively helping to build the Japanese bitcoin community's usage standards with the government.
Coincheck partnered with SEKAI to support Chinese, Hong Kong, and Taiwan investors to buy Japanese real estate with bitcoin.
2018 hacking incident
In January 2018, Coincheck was hacked and approximately 500 million NEM tokens ($530 million) were stolen. The currency was transferred through a total of nineteen accounts, one of which was found to have no connection with the hacker.
The hack led two of Japan's crypto-currency trade groups to merge into a new self-regulatory organization. The Financial Services Agency took administrative action by ordering Coincheck to improve its security practices, but did not order the exchange to shut down out of a concern for the protection of its users. Coincheck initially announced that it may not be able to compensate all users affected by the hack, but then announced that it would repay all 260,000 users affected in Japanese yen using its own capital. As of February 2021, the Tokyo Public Prosecutors Office has charged 31 individuals for their involvement in transactions of stolen NEM tokens. In total, these individuals converted around 18.8 billion |
https://en.wikipedia.org/wiki/Amplitude%20integrated%20electroencephalography | Amplitude integrated electroencephalography (aEEG), cerebral function monitoring (CFM) or continuous electroencephalogram (CEEG) is a technique for monitoring brain function in intensive care settings over longer periods of time than the traditional electroencephalogram (EEG), typically hours to days. By placing electrodes on the scalp of the patient, a trace of electrical activity is produced which is then displayed on a semilogarithmic graph of peak-to-peak amplitude over time; amplitude is logarithmic and time is linear. In this way, trends in electrical activity in the cerebral cortex can be interpreted to inform on events such as seizures or suppressed brain activity. aEEG is useful especially in neonatology where it can be used to aid in diagnosis of hypoxic ischemic encephalopathy (HIE), and to monitor and diagnose seizure activity.
Interpretation of the aEEG
The CFM readout offers an integrated trace in one pane and a non-integrated trace in another pane (see image). Modern machines give a readout for each hemisphere corresponding to the positions of electrodes placed on the patient's head. The characteristics of the CFM include the 'baseline' which should be more than 5 µV, the upper limit of the trace which should be more than 10 µV, and the presence of 'sleep wake cycling' whereby the trace is expected to narrow and broaden over time. Seizures appear on the trace as regions of high activity with a raised and compacted trace in the aEEG pane; this would correspond to high-amplitude, repetitive waveforms in the non-integrated pane. A low-amplitude or 'suppressed' trace is prognostically concerning as it indicates abnormally low brain activity. A further possible pattern is a 'burst suppression' trace which consists of a low-amplitude signal interspersed with periods of high activity on the aEEG readout. This also carries a poor prognosis.
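The peak-to-peak-amplitude-over-time trace described above can be sketched in a few lines. This is a deliberately simplified illustration: real cerebral function monitors also band-pass filter, rectify, and time-compress the signal, and the epoch length and signal values below are invented for the example.

```python
import math

# aEEG-style trace: peak-to-peak amplitude of the scalp signal per epoch.
def aeeg_trace(samples_uV, fs_hz, epoch_s=1.0):
    n = int(fs_hz * epoch_s)
    return [max(samples_uV[i:i + n]) - min(samples_uV[i:i + n])
            for i in range(0, len(samples_uV) - n + 1, n)]

# Synthetic 5 Hz, 50 microvolt sine "EEG" sampled at 100 Hz for 3 s.
fs = 100
eeg = [50.0 * math.sin(2 * math.pi * 5 * k / fs) for k in range(3 * fs)]
trace = aeeg_trace(eeg, fs)               # one peak-to-peak value per second
semilog = [math.log10(a) for a in trace]  # amplitude axis is logarithmic
```

Each 50 µV sine epoch yields a 100 µV peak-to-peak value; plotting `semilog` against epoch time mimics the semilogarithmic amplitude axis of the CFM display, on which a value near 2 (i.e. 100 µV) would sit well above the concerning low-amplitude range.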
See also
Electroencephalogram (EEG)
Bispectral index
Epileptic seizure
Hypoxic ischaemic encephalopathy
Therapeutic h |
https://en.wikipedia.org/wiki/VARAN | VARAN (Versatile Automation Random Access Network) is a Fieldbus Ethernet-based industrial communication system.
VARAN is a wired data network technology for local data networks (LAN) with the main application in the field of automation technology. It enables the exchange of data in the form of data frames between all LAN connected devices (controllers, input/output devices, drives, etc.).
VARAN includes definitions for types of cables and connectors, describes the physical signalling, and specifies packet formats and protocols. From the perspective of the OSI model, VARAN specifies both the physical layer (OSI layer 1) and the data link layer (OSI layer 2). VARAN is a protocol based on the master-slave principle. The VARAN BUS USER ORGANIZATION (VNO) is responsible for maintaining the protocol.
References
Computer networking
Computer network organizations |
https://en.wikipedia.org/wiki/Redox%20%28operating%20system%29 | Redox is a Unix-like microkernel operating system written in the programming language Rust, which has a focus on safety, stability, and performance. Redox aims to be secure, usable, and free. Redox is inspired by prior kernels and operating systems, such as SeL4, MINIX, Plan 9, and BSD. It is similar to Linux and BSD, but is written in a memory-safe language. It is free and open-source software distributed under an MIT License.
Redox gets its name from the reduction-oxidation reactions in chemistry; one redox reaction is the corrosion of iron, also called rust.
Design
The Redox operating system is designed to be secure. This is reflected in two design decisions:
Using the programming language Rust for implementation
Using a microkernel design, similar to MINIX
Components
Redox provides packages (memory allocator, file system, display manager, core utilities, etc.) that together make up a functional operating system. Redox relies on an ecosystem of software written in Rust by members of the project.
Redox kernel – derives from the concept of microkernels, with inspiration from MINIX
Ralloc – memory allocator
TFS file system – inspired by the ZFS file system
Ion shell – the underlying library for shells and command execution in Redox, and the default shell
pkgutils – package manager
Orbital windowing system – display and window manager, sets up the orbital: scheme, manages the display, and handles requests for window creation, redraws, and event polling
relibc – C standard library
Command-line applications
Redox supports command-line interface (CLI) programs, including:
Sodium – vi-like editor that provides syntax highlighting
Graphical applications
Redox supports graphical user interface (GUI) programs, including:
NetSurf – a lightweight web browser which uses its own layout engine
Calculator – a software calculator which provides functions similar to the Windows Calculator program
Editor – simple text editor, similar to Microsoft Notepad
File |
https://en.wikipedia.org/wiki/123%20Reg | 123 Reg is a British domain registrar and web hosting company founded in 2000 and now under the ultimate ownership of GoDaddy. The company claims to be the UK's largest accredited domain registrar and provides Internet services to small- and medium-sized businesses. From 2003 to 2017, 123 Reg was part of Host Europe Group (HEG). In April 2017, American hosting company GoDaddy acquired HEG for 1.69 billion euros ($1.82 billion).
History
123 Reg was founded in 2000 by Jonathan and Tim Beresford-Brealey, who prior to this had also set up Webfusion Internet Solutions Ltd in 1997. In 2003, 123 Reg and Webfusion were acquired by Host Europe Group (HEG). In 2004, Host Europe Group was bought by Pipex Communications for £31.2m and 123 Reg became the UK's largest domain registrar, according to the company's parent.
In 2009, Host Europe Group organised its UK operations under the Webfusion Ltd group but kept both brands. The same year, Webfusion became the first UK web host to offer Windows Server 2008 web hosting, and the company opened a £2.5 million data centre in Leeds. Also in 2009, 123 Reg became the first UK domain registrar to have 2 million domain names on register. In 2010, Webfusion Ltd was included on The Sunday Times's list of Britain's fastest-growing private-equity backed companies, the Deloitte Buyout Track 100, and was the only hosting company on the list. In 2012, it became the first UK domain registrar to have 3 million domain names on register.
On 16 April 2016, 123 Reg admitted a major deletion of a large number of virtual private servers (VPSs) caused by an error during what should have been routine maintenance. The event deleted hundreds of websites, with users losing sites and access to data on their VPS service. By 24 April, the situation was still ongoing. During this period, 123 Reg had a further data breach, with customers being able to see the support tickets of other account holders.
In April 2017, American hosting company GoDaddy acquired 123 |
https://en.wikipedia.org/wiki/Moduli%20stack%20of%20elliptic%20curves | In mathematics, the moduli stack of elliptic curves, denoted M_{1,1} or M_ell, is an algebraic stack over Spec(Z) classifying elliptic curves. Note that it is a special case of the moduli stack of algebraic curves M_{g,n}. In particular its points with values in some field correspond to elliptic curves over that field, and more generally morphisms from a scheme S to it correspond to elliptic curves over S. The construction of this space spans a century because of the various generalizations of elliptic curves as the field has developed. All of these generalizations are contained in M_{1,1}.
Properties
Smooth Deligne-Mumford stack
The moduli stack of elliptic curves is a smooth separated Deligne–Mumford stack of finite type over Spec(Z), but is not a scheme, as elliptic curves have non-trivial automorphisms.
j-invariant
There is a proper morphism from M_{1,1} to the affine line A^1, the coarse moduli space of elliptic curves, given by the j-invariant of an elliptic curve.
Construction over the complex numbers
It is a classical observation that every elliptic curve over C is classified by its periods. Given a basis δ1, δ2 for its integral homology H_1(E, Z) and a global holomorphic differential form ω (which exists since E is smooth and the dimension of the space of such differentials is equal to the genus, 1), the integrals ∫_{δ1} ω and ∫_{δ2} ω give the generators for a Z-lattice Λ of rank 2 inside of C (p. 158). Conversely, given an integral lattice Λ of rank 2 inside of C, there is an embedding of the complex torus C/Λ into P^2 from the Weierstrass ℘ function (p. 165). This isomorphic correspondence C/Λ ≅ E(C) holds up to homothety of the lattice Λ, which is the equivalence relation Λ ~ αΛ for α ∈ C^*. It is standard to then write the lattice in the form Z ⊕ Zτ for τ an element of the upper half-plane, since the lattice could be multiplied by ω1^{-1}, and τ and −τ both generate the same sublattice. Then, the upper half-plane gives a parameter space of all elliptic curves over C. There is an additional equivalence of curves given by the action of the modular group SL_2(Z), where an elliptic curve defined by the lattice is
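In standard notation, the period construction and its equivalences just described can be summarized as follows (a sketch restating the correspondence, not a quotation from the article):

```latex
\omega_i = \int_{\delta_i} \omega \quad (i = 1, 2), \qquad
\Lambda = \mathbb{Z}\omega_1 \oplus \mathbb{Z}\omega_2 \subset \mathbb{C}, \qquad
\mathbb{C}/\Lambda \cong E(\mathbb{C}),

% homothety equivalence and normalization to the upper half-plane
\Lambda \sim \alpha\Lambda \quad (\alpha \in \mathbb{C}^{\times}), \qquad
\tau = \omega_2/\omega_1 \in \mathbb{H},

% residual action of the modular group on the parameter tau
\gamma \cdot \tau = \frac{a\tau + b}{c\tau + d}, \qquad
\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}).
```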
https://en.wikipedia.org/wiki/Kirito%20%28Sword%20Art%20Online%29 | Kirito, born Kazuto Kirigaya, is a fictional character and the protagonist of the Sword Art Online series of light novels written by Reki Kawahara. He is known by "Kirito", a portmanteau of his name, which is his user name in the Sword Art Online video game in which the series is partially set.
Kirito is a teenager who was chosen as one of 1,000 beta testers for a new virtual reality video game: Sword Art Online. After the game is released to the general public, he and the rest of the 10,000 players discover that they are unable to log out and are trapped in the simulation unless they manage to beat the game. In the anime adaptation, Kirito is voiced in Japanese by Yoshitsugu Matsuoka and in English by Bryce Papenbrook.
Creation and conception
In an interview, Sword Art Online series creator Reki Kawahara said he wrote the series to change popular opinion of online gaming, viewing it not as a social ill or just an escape from real life, and thus decided to show games in a more positive light in his light novels. Although he stated that he does not usually put aspects of himself into his characters, he did note that "neither of us are good at forming parties. We [both] tend to play solo in these games a lot." When asked about Kirito using a sword in a gun-based game during the Gun Gale Online arc, Kawahara responded that the energy sword in Halo can be the most powerful weapon if used properly.
In the anime adaptation, Kirito is voiced in Japanese by Yoshitsugu Matsuoka and in English by Bryce Papenbrook. In an interview with Matsuoka on the similarities between him and his character, Matsuoka opined that "there's only one chance" for success both in acting and for Kirito in the game universe. He also noted that Kirito eventually had to "pull people along" and "pave the way for others" like himself.
Appearances
As the main protagonist of Sword Art Online, Kazuto "Kirito" Kirigaya, is one of 1,000 testers in the previous closed beta of Sword Art Online (SAO), a Massively Multiplayer Online Rol |
https://en.wikipedia.org/wiki/Online%20optimization | Online optimization is a field of optimization theory, more popular in computer science and operations research, that deals with optimization problems having no or incomplete knowledge of the future (online). These kinds of problems are denoted as online problems and are seen as opposed to the classical optimization problems where complete information is assumed (offline). The research on online optimization can be distinguished into online problems where multiple decisions are made sequentially based on a piece-by-piece input and those where a decision is made only once. A famous online problem where a decision is made only once is the ski rental problem. In general, the output of an online algorithm is compared to the solution of a corresponding offline algorithm, which knows the entire input in advance and is therefore always optimal (competitive analysis).
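The ski rental problem mentioned above illustrates competitive analysis well. The classic break-even strategy rents until the accumulated rent reaches the purchase price, then buys, and never pays more than about twice the optimal offline cost. A minimal sketch (function and parameter names are our own, for illustration):

```python
def ski_rental_cost(buy_price: int, rent_price: int, days: int) -> int:
    """Break-even online strategy: rent until the total rent paid
    would reach the purchase price, then buy."""
    breakeven = buy_price // rent_price   # days we are willing to rent
    if days <= breakeven:
        return days * rent_price          # season ended before buying
    return breakeven * rent_price + buy_price  # rented, then bought

def offline_optimum(buy_price: int, rent_price: int, days: int) -> int:
    """The offline algorithm knows `days` in advance and simply picks
    the cheaper of renting every day or buying immediately."""
    return min(days * rent_price, buy_price)
```

With buy_price=10 and rent_price=1, a 100-day season costs the online strategy 20 against an offline optimum of 10: the competitive ratio is 2.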
In many situations, present decisions (for example, resources allocation) must be made with incomplete knowledge of the future or distributional assumptions on the future are not reliable. In such cases, online optimization can be used, which is different from other approaches such as robust optimization, stochastic optimization and Markov decision processes.
Online problems
A problem exemplifying the concepts of online algorithms is the Canadian traveller problem. The goal of this problem is to minimize the cost of reaching a target in a weighted graph where some of the edges are unreliable and may have been removed from the graph. However, that an edge has been removed (failed) is only revealed to the traveller when they reach one of the edge's endpoints. The worst case for this problem is simply that all of the unreliable edges fail and the problem reduces to the usual shortest path problem. An alternative analysis of the problem can be made with the help of competitive analysis. For this method of analysis, the offline algorithm knows in advance which edges will fail and the goal is to minimize the rat |
https://en.wikipedia.org/wiki/Engineering%20administration | Engineering Administration (EA) is a branch of engineering that is mainly concerned with the analysis and solution of operational and management problems using scientific and mathematical methods.
Engineering Administration is considered to be a subdiscipline of industrial engineering / systems engineering.
University programs
Undergraduate curriculum
In the United States the undergraduate degree earned is the Bachelor of Science (B.S.) in Engineering Administration.
Postgraduate curriculum
The postgraduate degree earned is the Master of Science in Engineering Administration (MEA).
Associations
INFORMS
Institute of Industrial Engineers
See also
Industrial Engineering
Systems Engineering
Enterprise Engineering
Engineering Management
Business Engineering
References
External links
Bachelor of Engineering Administration
Master of Engineering Administration (MEA)
Industrial engineering |
https://en.wikipedia.org/wiki/Radu%20Grigorovici | Radu Grigorovici (November 20, 1911 – August 2, 2008) was a Romanian physicist.
Biography
Radu Grigorovici was born on November 20, 1911 in Chernivtsi, the only son of the Bukovina Social Democrats Gheorghe and Tatiana Grigorovici. After graduating from Aron Pumnul High School (1928) he studied at Chernivtsi University, earning a degree in chemical sciences in 1931 and a degree in physical sciences in 1934. At the same university, he then served as an assistant in the Experimental Physics Laboratory of Professor Eugen Bădărău.
In 1936 he transferred to the Faculty of Sciences of the University of Bucharest, where Bădărău had been appointed head of the Laboratory of Molecular, Acoustic and Optical Physics. In 1938 he obtained a PhD in physical sciences with a dissertation on the disruptive potential of mercury vapor. He climbed the ranks of the university hierarchy, becoming an associate professor in 1949. Between 1947 and 1957 he also worked in the light-source industry (the Lumen factory, then Electrofar) as a consulting engineer. Forced, for political reasons, to give up his university career, he turned to research, becoming head of department (1960) and deputy scientific director (1963) at the Bucharest Institute of Physics of the RPR Academy; in 1970 the institute was subordinated to the State Committee for Nuclear Energy. In 1973 he applied for retirement, continuing his activity as a part-time senior scientific researcher; in 1977, following a reorganization, he was transferred to the Institute of Physics and Materials Technology, and after a year his employment contract was terminated.
Radu Grigorovici made original contributions to the physics of electric gas discharges, flame spectral analysis, light sources, physiological and instrumental optics, and systems of physical and physiological units. At the Bucharest Institute of Physics, he organized and led a group of researchers who studied the phenomena of transport in disordered thin met
https://en.wikipedia.org/wiki/List%20of%20electronic%20laboratory%20notebook%20software%20packages | An electronic lab notebook (also known as electronic laboratory notebook, or ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation.
Electronic lab notebooks are a fairly new technology and offer many benefits to users as well as organizations: for example, they are easier to search, simplify data copying and backups, and support collaboration among many users.
ELNs can have fine-grained access controls, and can be more secure than their paper counterparts. They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook.
This is a list of ELN software packages. It is incomplete, as a recent review listed 96 active and 76 inactive (172 total) ELN products. Notably, this review and other lists of ELN software often do not include widely used generic note-taking software like OneNote, Notion, Jupyter, etc., due to their lack of nominal ELN features like time-stamping and append-only editing. Some ELNs are web-based; others are used on premises, and a few are available for both environments.
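The time-stamping and append-only properties mentioned above can be made tamper-evident by hash-chaining entries, so that altering any earlier record invalidates every later one. A hypothetical sketch of the idea, not the design of any particular ELN product:

```python
import hashlib
import json

def append_entry(notebook: list, text: str, timestamp: float) -> None:
    """Append a timestamped entry whose hash covers the previous
    entry's hash, forming a chain; earlier entries can then no longer
    be edited without breaking verification."""
    prev_hash = notebook[-1]["hash"] if notebook else ""
    payload = json.dumps([timestamp, text, prev_hash])
    notebook.append({
        "time": timestamp,
        "text": text,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(notebook: list) -> bool:
    """Recompute every hash and check each chain link."""
    prev_hash = ""
    for entry in notebook:
        payload = json.dumps([entry["time"], entry["text"], prev_hash])
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```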
ELN Software
Open-source ELN software
References
ELN Packages |
https://en.wikipedia.org/wiki/LowerUnivalents | In proof compression, an area of mathematical logic, LowerUnivalents is an algorithm used for the compression of propositional resolution proofs. LowerUnivalents is a generalisation of the LowerUnits algorithm, able to lower not only units but also subproofs of non-unit clauses, provided that they satisfy some additional conditions.
References
Mathematical logic |
https://en.wikipedia.org/wiki/Idiobiology | Idiobiology is a branch of biology which studies individual organisms, or the study of organisms as individuals.
References
Branches of biology |
https://en.wikipedia.org/wiki/Kmc-Subset137 | The open-source Kmc-Subset137 Project implements the protocol described in "ERTMS/ECTS; On-line Key Management FFFIS" UNISIG SUBSET-137 ver1.0.0.
It covers the on-line distribution of cryptographic keys among the Key Management Centres (KMCs) that are authoritative in their respective domains, as well as the exchange between a key management centre and the KMAC entities of its own domain. The open-source library provides a simple C-language application programming interface (API) to access and parse UNISIG SUBSET-137 messages.
For the cryptographic part of the protocol it relies on the open source GnuTLS library (or alternatively, but discouraged, on the OpenSSL library).
The library is open-source and is licensed under the GNU General Public License version 3.0.
References
* Prose in this article was copied from Kmc-Subset137 Project at Neat Embedded Computing, which is available under a Creative Commons Attribution-ShareAlike 3.0 unported license.
Key management |
https://en.wikipedia.org/wiki/Stacki | Stacki is a computer cluster software product from the company StackIQ, released as open-source software.
Description
StackIQ was originally named Clustercorp when it was founded in 2006. Its first product was a commercial version of a Linux distribution called the Rocks Cluster Distribution.
Originally based in San Jose, California, co-founders included Mason Katz and chief executive Tim McIntire.
In 2011, the company re-incorporated as StackIQ and moved to the La Jolla district in San Diego, California.
A round of venture capital funding in April and October 2014 raised about $6 million.
By then it was located in Solana Beach, California.
In August 2016, Pervez Choudhry replaced McIntire as chief executive.
A product called StackIQ cluster manager was renamed StackIQ Boss in February 2015.
Stacki provisions several servers in parallel, so provisioning takes roughly the same amount of time regardless of the number of servers.
The system allows installations via the Preboot Execution Environment (PXE), and supports both an “all servers that boot on this network” and an “all servers in this spreadsheet” method of installations. So if the servers to be installed are on an isolated network, a Stacki tool called insert-ethers can be run to grab each machine that boots on the network and add it to Stacki, commencing an installation if needed. If the servers to be installed are on a shared network, then loading a spreadsheet of machines to install tells Stacki which ones it should install.
Stacki uses a database to manage variables for use during installation. Variables can be defined by individual server, installation type, or globally, and can be manipulated via spreadsheets or command line.
Networking, for example, can be managed with variables. A machine can be configured with multiple network cards on multiple networks with varying routes and open/closed ports.
Stacki was released in June 2015.
The StackIQ company was acquired by Teradata on June 30, 2017, for an undisclosed amount.
|
https://en.wikipedia.org/wiki/The%20Cancer%20Imaging%20Archive | The Cancer Imaging Archive (TCIA) is an open-access database of medical images for cancer research. The site is funded by the National Cancer Institute's (NCI) Cancer Imaging Program, and the contract is operated by the University of Arkansas for Medical Sciences. Data within the archive is organized into collections which typically share a common cancer type and/or anatomical site. The majority of the data consists of CT, MRI, and nuclear medicine (e.g. PET) images stored in DICOM format, but many other types of supporting data are also provided or linked to, in order to enhance research utility. All data are de-identified in order to comply with the Health Insurance Portability and Accountability Act and National Institutes of Health data sharing policies.
TCIA resources are intended to support:
Development of computer aided diagnosis methods (quantitative imaging)
Evaluation of unbiased science reproducibility by acceptable standard statistical methods
Research on correlation of clinical diagnostic medical images with digital microscopic histological images
Exploratory biomarker research for which imaging is a key element
Collaboration between cross-disciplinary investigators where imaging is crucial to research on tumor heterogeneity, between patients and within the tumor; tissue temporal response tracking - objective measurements of tumor progression; imaging genomics and Big Data linkages and analysis (clinical, histo-pathology, genomics)
TCIA is recognized as a recommended repository for the Scientific Data, PLOS One, and F1000Research journals. It is also listed in the Registry of Research Data Repositories.
History
Prior to the creation of TCIA, the NCI funded development of the National Biomedical Imaging Archive. NBIA is an open-source Web application which was designed to allow the storage and query of DICOM images. TCIA was subsequently initiated in December 2010 to expand data sharing activities by funding a service component which would h |
https://en.wikipedia.org/wiki/Operations%20engineering | Operations engineering is a branch of engineering that is mainly concerned with the analysis and optimization of operational problems using scientific and mathematical methods. It most frequently has applications in broadcasting/industrial engineering and in the creative and technology industries.
Operations engineering is considered to be a subdiscipline of Operations Research and Operations Management.
Associations
INFORMS
Society of Operations Engineers
References
See also
Operations research
Systems engineering
Enterprise engineering
Engineering management
Business engineering
Industrial engineering |
https://en.wikipedia.org/wiki/Traceroute%20%28film%29 | Traceroute is a 2016 Austrian-American documentary film directed by Johannes Grenzfurthner. The autobiographical documentary and road movie deals with the history, politics and impact of nerd culture. Grenzfurthner calls his film a "personal journey into the uncharted depths of nerd culture, a realm full of dangers, creatures and more or less precarious working conditions", an attempt to "chase the ghosts of nerddom's past, present and future." The film was co-produced by art group monochrom and Reisenbauer Film. It features music by Kasson Crooker, Hans Nieswandt, and many others.
Concept
Artist and self-declared nerd Johannes Grenzfurthner documents his personal road trip from the West Coast to the East Coast of the United States, introducing the audience to places and people that shaped and inspired his art and politics. Traceroute is a reflection on Grenzfurthner's own roots in nerddom, an "On the road style romp across the United States as he visits icons of the counterculture, the outré, and the generally questionable." Grenzfurthner summarizes the concept in an interview for Boing Boing: "It is a film on biographies and obsessions and spaces of possibility – in other words something between loving embrace and merciless vivisection. Maintaining a critical meta-outlook was just as important to me as abandoning myself to unfathomable stammerings of adoration. And that all works for one simple reason: because I take a step forward, introducing myself and confessing my guilt like in Alcoholics Anonymous, only to then take off and visit the best whiskey distilleries. In my case these destinations are not whiskey makers, but people and places and symbols of a very special pop culture." On Film Threat he adds: "It was important for me to take nerddom apart, not only analyzing it, but also excavating its potential for greatness."
The film incorporates art and illustrations by James Brothwell, Bonni Rambatan, Michael Marrak, Karin Frank, Ben Lawson, Michael Ze |
https://en.wikipedia.org/wiki/DevOps%20toolchain | A DevOps toolchain is a set or combination of tools that aid in the delivery, development, and management of software applications throughout the systems development life cycle, as coordinated by an organisation that uses DevOps practices.
Generally, DevOps tools fit into one or more activities that support specific DevOps initiatives: Plan, Create, Verify, Package, Release, Configure, Monitor, and Version Control.
Toolchains
In software, a toolchain is the set of programming tools that is used to perform a complex software development task or to create a software product, which is typically another computer program or a set of related programs. In general, the tools forming a toolchain are executed consecutively so the output or resulting environment state of each tool becomes the input or starting environment for the next one, but the term is also used when referring to a set of related tools that are not necessarily executed consecutively.
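The consecutive-execution idea above can be sketched as simple function composition, where each stage consumes the previous stage's artifact. The stage names below mirror a classic compile toolchain (preprocess, compile, assemble, link) and are purely illustrative:

```python
def run_toolchain(stages, source):
    """Run tools consecutively: each stage's output becomes the next
    stage's input, as in the toolchain definition above."""
    artifact = source
    for name, tool in stages:
        artifact = tool(artifact)
    return artifact

# Toy stages standing in for real tools; each transforms its input.
stages = [
    ("preprocess", lambda s: s.replace("INCLUDE", "int x;")),
    ("compile",    lambda s: f"asm({s})"),
    ("assemble",   lambda s: f"obj[{s}]"),
    ("link",       lambda s: f"exe<{s}>"),
]
```

Running `run_toolchain(stages, "INCLUDE main")` threads the source through all four stages in order, yielding a single final artifact.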
As DevOps is a set of practices that emphasizes collaboration and communication between software developers and other information technology (IT) professionals while automating the process of software delivery and infrastructure changes, its implementation can include defining the series of tools used at various stages of the lifecycle. Because DevOps is a cultural shift and a collaboration between development and operations, there is no single product that can be considered the one DevOps tool. Instead, a collection of tools, potentially from a variety of vendors, is used in one or more stages of the lifecycle.
Stages of DevOps
Plan
Plan is composed of two things: "define" and "plan". This activity refers to the business value and application requirements. Specifically "Plan" activities include:
Production metrics, objects and feedback
Requirements
Business metrics
Update release metrics
Release plan, timing and business case
Security policy and requirement
A combination of the IT personnel will be |
https://en.wikipedia.org/wiki/Site%20reliability%20engineering | Site reliability engineering (SRE) is a set of principles and practices that applies aspects of software engineering to IT infrastructure and operations. SRE claims to create highly reliable and scalable software systems. Although they are closely related, SRE is slightly different from DevOps.
History
The field of site reliability engineering originated at Google with Ben Treynor Sloss, who founded a site reliability team after joining the company in 2003. In 2016, Google employed more than 1,000 site reliability engineers. The concept later spread into the broader software development industry, and other companies began to employ site reliability engineers. The position is more common at larger web companies, as small companies often do not operate at a scale that would require dedicated SREs. Organizations that have adopted the concept include Airbnb, Dropbox, IBM, LinkedIn, Netflix, and Wikimedia. According to a 2021 report by the DevOps Institute, 22% of organizations in a survey of 2,000 respondents had adopted the SRE model.
Definition
Site reliability engineering, as a job role, may be performed by individual contributors or organized in teams, responsible for a combination of the following within a broader engineering organization: System availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Site reliability engineers often have backgrounds in software engineering, system engineering, or system administration. Focuses of SRE include automation, system design, and improvements to system resilience.
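One routine calculation behind the availability responsibilities listed above is the error budget: the downtime an availability SLO permits over a window. A small illustrative helper (names and the 30-day window are our own choices, not from any particular SRE team):

```python
def downtime_budget_minutes(availability_slo: float,
                            window_days: int = 30) -> float:
    """Minutes of downtime permitted by an availability SLO over a
    rolling window; the unconsumed remainder is the error budget."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - availability_slo)
```

A 99.9% ("three nines") SLO over 30 days allows about 43.2 minutes of downtime; 99.99% allows only about 4.3 minutes.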
Site reliability engineering, as a set of principles and practices, can be performed by anyone. SRE is similar to security engineering in that everyone is expected to contribute to good security practices, but a company may decide to eventually hire staff specialists for the job. Conversely, for securing internet systems, companies may hire securit |
https://en.wikipedia.org/wiki/RL02 | RL01 and RL02 drives are moving head magnetic disk drives manufactured by Digital Equipment Corporation for the PDP-8 and PDP-11 microcomputers. The RL01 and RL02 drives stored approximate 5MB and 10MB respectively, utilizing a removable data cartridge. The drives are typically mounted in a standard 19" rack and weigh 34 kg. Up to four RL02 or RL01 drives may be used, in any combination, from a single controller. Typically an RL11 in the case of a Unibus PDP-11 and an RLV11 or RLV12 in the case of a Q-bus PDP-11. On the PDP-8/a the controller is an RL8A which consists of an M8433 Hex wide Omnibus card.
Cartridge format
The RL01 and RL02 data cartridges are based on IBM 5440 cartridges, but have servo tracking data pre-encoded onto the cartridge. This reduces the need for strict head alignment, allowing cartridges to be used in several drives (although there was no backwards compatibility between RL02 and RL01 cartridges, despite similar appearance). However, this prevents on-site low level formatting of cartridges. The drives have logic to prevent this servo data from being overwritten. RL01 cartridges have 256 tracks, and RL02 cartridges have 512 tracks.
Data format
On both RL01 and RL02 cartridges, each track is divided into 40 sectors of equal length. Each sector is divided into six fields, defined as follows (where each word is 16 bits):
Header Preamble (PR1), consisting of three words: 47 zeros followed by a single one for synchronization.
Header, consisting of three words. The first word identifies whether the sector is on the upper or lower side of the platter, followed by the track number (1 to 256 or 1 to 512) and finally the sector number (1 to 40). The second word is all zeros. The third word contains a cyclic redundancy check (CRC) of the header. This is checked during a read operation.
Header Postamble (PO1), one word consisting only of zeros. This field separates the header and data fields, allowing for mechanical tolerances between drives
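The first header word packs the side flag, track, and sector into 16 bits, and can be decoded with a few shifts and masks. The exact bit positions below (sector in bits 0-5, head/side in bit 6, track in bits 7-15) follow the RL11 disk-address-register convention and should be treated as an assumption for illustration rather than a quotation from the drive specification:

```python
def decode_rl_header_word(word: int) -> dict:
    """Decode the first sector-header word of an RL01/RL02 sector.
    Assumed layout: sector address in bits 0-5, head/side select in
    bit 6, track (cylinder) address in bits 7-15."""
    return {
        "sector": word & 0x3F,          # 40 sectors fit in 6 bits
        "upper_side": (word >> 6) & 1,  # which side of the platter
        "track": (word >> 7) & 0x1FF,   # 512 tracks fit in 9 bits
    }
```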
https://en.wikipedia.org/wiki/Little%20Box%20Challenge | The Little Box Challenge was an engineering competition run by Google and the IEEE's Power Electronics Society. The original challenge was posted on July 22, 2014 with modifications on December 16, 2014 and March 23, 2015. Testing was in October 2015 at the National Renewable Energy Laboratory. From the 18 finalists, CE+T Power's team called Red Electrical Devils won the $1 million prize, which was awarded to them in March 2016.
The challenge was to build a power inverter that was about one tenth the size of the state-of-the-art at the time. It had to have an efficiency greater than 95 percent and handle loads of 2 kW. It also had to fit in a metal enclosure of no more than 40 cubic inches (the eponymous "little box") and withstand 100 hours of testing.
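Those two limits, 2 kW of load in at most 40 cubic inches, imply a minimum volumetric power density, which is easy to compute (a back-of-the-envelope check, not an official contest figure):

```python
def power_density_w_per_in3(power_w: float, volume_in3: float) -> float:
    """Volumetric power density in watts per cubic inch."""
    return power_w / volume_in3

# 2 kW delivered from an enclosure of at most 40 cubic inches
required_density = power_density_w_per_in3(2000.0, 40.0)  # 50.0 W/in^3
```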
The goals of the competition were lower cost solar photovoltaic power, more efficient uninterruptible power supplies, affordable microgrids, and the ability to use an electric vehicle's battery as backup power during a power outage. Google also hoped a smaller inverter could make its data centers run more efficiently.
More than 100 international teams, from university researchers and students to large companies and garage tinkerers, entered the Google Little Box Challenge. Eighteen finalists were chosen in October 2015. These 18 teams entered the Challenge's final stretch by submitting their competition prototypes, which underwent Google's stringent test regimen. The results of this worldwide competition were announced at the ARPA-E conference in March 2016. Of the 18 finalists, only three teams passed every one of Google's test requirements, those being the top three finishers.
Google managed this contest poorly and even unfairly changed the specification several times, once close to a deadline, so that some of the engineering choices made by some teams turned out to be less than optimal after the changes, leaving no time to adjust. Finalists had to travel to Denver to deposit their prototypes for testing a |
https://en.wikipedia.org/wiki/Jeremy%20Burroughes | Jeremy Henley Burroughes (born August 1960) is a British physicist and engineer, known for his contributions to the development of organic electronics through his work on the science of semiconducting polymers and molecules and their application. He is the Chief Technology Officer of Cambridge Display Technology, a company specialising in the development of technologies based on polymer light-emitting diodes.
Education
Burroughes earned his PhD from the University of Cambridge in 1989. His thesis was entitled The physical processes in organic semiconducting polymer devices.
Work
Early in his career, Burroughes discovered that certain conjugated polymers were capable of emitting light when an electric current passed through them. The discovery of this previously unknown form of electroluminescence led to the foundation of Cambridge Display Technology where Burroughes has been responsible for a number of technology innovations, including the direct printing of full-colour OLED displays.
Awards and honours
Burroughes was elected a Fellow of the Royal Society (FRS) in 2012. His certificate of election reads:
References
1960 births
Living people
British physicists
Electronic engineering
Alumni of the University of Cambridge
Fellows of the Royal Society |
https://en.wikipedia.org/wiki/EU%E2%80%93US%20Privacy%20Shield | The EU–US Privacy Shield was a legal framework for regulating transatlantic exchanges of personal data for commercial purposes between the European Union and the United States. One of its purposes was to enable US companies to more easily receive personal data from EU entities under EU privacy laws meant to protect European Union citizens. The EU–US Privacy Shield went into effect on 12 July 2016 following its approval by the European Commission. It was put in place to replace the International Safe Harbor Privacy Principles, which were declared invalid by the European Court of Justice in October 2015. The ECJ declared the EU–US Privacy Shield invalid on 16 July 2020, in the case known as Schrems II. In 2022, leaders of the US and EU announced that a new data transfer framework called the Trans-Atlantic Data Privacy Framework had been agreed to in principle, replacing Privacy Shield. However, it is uncertain what changes will be necessary or adequate for this to succeed without facing additional legal challenges.
History
In October 2015 the European Court of Justice declared the previous framework called the International Safe Harbor Privacy Principles invalid in a ruling that later became known as "Schrems I". Soon after this decision, the European Commission and the U.S. Government started talks about a new framework, and on February 2, 2016, they reached a political agreement. The European Commission published the "adequacy decision" draft, declaring principles to be equivalent to the protections offered by EU law.
The Article 29 Data Protection Working Party delivered an opinion on April 13, 2016, stating that the Privacy Shield offers major improvements compared to the Safe Harbor decisions, but that three major points of concern still remain. They relate to deletion of data, collection of massive amounts of data, and clarification of the new Ombudsperson mechanism. The European Data Protection Supervisor issued an opinion on 30 May 2016 in which he stated |
https://en.wikipedia.org/wiki/Chroma%20feature | In Western music, the term chroma feature or chromagram closely relates to twelve different pitch classes. Chroma-based features, which are also referred to as "pitch class profiles", are a powerful tool for analyzing music whose pitches can be meaningfully categorized (often into twelve categories) and whose tuning approximates to the equal-tempered scale. One main property of chroma features is that they capture harmonic and melodic characteristics of music, while being robust to changes in timbre and instrumentation.
Definition
The underlying observation is that humans perceive two musical pitches as similar in color if they differ by an octave. Based on this observation, a pitch can be separated into two components, which are referred to as tone height and chroma. Assuming the equal-tempered scale, one considers twelve chroma values represented by the set
{C, C♯, D, D♯, E, F, F♯, G, G♯, A, A♯, B}
that consists of the twelve pitch spelling attributes as used in Western music notation. Note that in the equal-tempered scale different pitch spellings such as
C♯ and D♭ refer to the same chroma. Enumerating the chroma values, one can identify the set of chroma values with the set of integers {1, 2, ..., 12}, where 1 refers to chroma C, 2 to C♯, and so on. A pitch class is defined as the set of all pitches that share the same chroma. For example, using scientific pitch notation, the pitch class corresponding to the chroma C is the set
{..., C−2, C−1, C0, C1, C2, C3, ...}
consisting of all pitches separated by an integer number of octaves. Given a music representation (e.g. a musical score or an audio recording), the main idea of chroma features is to aggregate for a given local time window (e.g. specified in beats or in seconds) all information that relates to a given chroma into a single coefficient. Shifting the time window across the music representation results in a sequence of chroma features each expressing how the representation's pitch content within the time wind |
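The pitch-to-chroma aggregation described above can be sketched in a few lines. This is an illustrative sketch, not code from the article: the MIDI pitch numbers, note names, and energy weights are assumptions introduced here.

```python
# Sketch: map MIDI pitch numbers to chroma classes and aggregate the
# pitch content of one time window into a 12-dimensional chroma vector.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chroma(midi_pitch: int) -> int:
    """Chroma class of a MIDI pitch: every octave of C maps to 0, C# to 1, ..."""
    return midi_pitch % 12

def chroma_vector(pitches_with_energy):
    """Aggregate (midi_pitch, energy) pairs from one time window
    into a single 12-bin chroma vector."""
    v = [0.0] * 12
    for pitch, energy in pitches_with_energy:
        v[chroma(pitch)] += energy
    return v

# C4 (60) and C5 (72) share chroma 0 (octave equivalence); E4 (64) has chroma 4.
window = [(60, 1.0), (72, 0.5), (64, 0.8)]
print(chroma_vector(window))  # bin 0 accumulates 1.5, bin 4 gets 0.8
```

Sliding such a window across a recording and stacking the resulting vectors yields the chromagram described above.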
https://en.wikipedia.org/wiki/How%20to%20Clone%20a%20Mammoth | How to Clone a Mammoth: The Science of De-Extinction is a 2015 non-fiction book by biologist Beth Shapiro and published by Princeton University Press. The book describes the current state of de-extinction technology and what the processes involved require in order to accomplish the potential resurrection of extinct species.
Content
The book is laid out as a step-by-step guide on how to clone an animal, with each chapter detailing a different topic that needs to be explored and answered before de-extinction of a species can be completed. The book also places particular focus on resurrection of the mammoth.
Several chapters deal with the genetic material itself and how to obtain it, along with the difficulties of recovering viable DNA samples from mummified or fossilized remains. Due to the actions of nucleases after cell death, most DNA of extinct species is fragmented into small pieces that have to be reconstructed at least partially if it is to be cloned. This fragmentation means that recovery of a full extinct genome is largely impossible. Thus, only partial genes can be utilized and the most viable method is to use a close evolutionary relative of the extinct species and insert the genes that differ into an embryo of the living species. For mammoth de-extinction, any trait consideration would involve the Asian elephant, the closest still-living relative. Using genes from extrapolated mammoth DNA, the Asian elephant could be made to survive across a wider range, including cold environments, protecting it against possible extinction. This gene transfer to benefit living species is one of the primary sources of research done with de-extinction technology in addition to the desire to revive lost species.
The following three chapters discuss current technology available for moving genes and creating modified elephant genomes, including CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and TALENs (Transcription Activator-like Effector Nucleases). The f
https://en.wikipedia.org/wiki/Clarence%20S.%20Coe | Clarence Stanley Coe (C. S. Coe) (December 24, 1865 – March 5, 1939) was an American master bridge builder and railroad civil engineer, who supervised the planning and building of the Florida East Coast Railway's Seven Mile Bridge, linking the Florida Keys to Marathon, Monroe County, which, when completed in January 1912, was acclaimed as the longest bridge in the world and an engineering marvel. Later, Coe was appointed the first city manager of Miami, Florida, and after that was appointed chief engineer of Duval County, Florida.
Early life and career
Coe was born in Riverside, Iowa, one of the nine children of Sylvester Coe and his wife, Ann (née Rowlands), a native of Llangollen, Wales.
He was educated at the University of Minnesota, earning a degree in engineering in 1889. After graduation, Coe held various engineering positions in the rapidly expanding railroad industry.
In 1905, he joined the Florida East Coast Railway, first as resident managing engineer of the Key West Extension, having charge of viaduct construction. As resident manager, he constructed viaducts totaling nearly 12 miles over open water. Coe had charge of the entire engineering and inspection departments, the labor force, and all floating equipment.
In 1910, Coe was promoted to division engineer with responsibility for overseeing construction of the Seven Mile Bridge over open ocean, a feat never before attempted.
Involvement with Florida East Coast Railway
The railroad line south of West Palm Beach was constructed in phases by the Florida East Coast Railway and its predecessor systems. Founder and owner Henry Flagler began his railroad building in 1892. Under Florida's generous land-grant laws passed in 1893, land could be claimed from the state for every mile (1.6 km) built. Flagler eventually claimed a total in excess of two million acres (8,000 km2) for building the FEC, and land development and trading along the line became one of his most profitable endeavors.
Before it became |
https://en.wikipedia.org/wiki/Vehicle-to-everything | Vehicle-to-everything (V2X) is communication between a vehicle and any entity that may affect, or may be affected by, the vehicle. It is a vehicular communication system that incorporates other, more specific, types of communication such as V2I (vehicle-to-infrastructure), V2N (vehicle-to-network), V2V (vehicle-to-vehicle), V2P (vehicle-to-pedestrian), and V2D (vehicle-to-device).
The main motivations for V2X are road safety, traffic efficiency, energy savings, and mass surveillance. The U.S. NHTSA estimates a minimum of 13% reduction in traffic accidents if a V2V system were implemented, resulting in 439,000 fewer crashes per year. There are two types of V2X communication technology depending on the underlying technology being used: (1) WLAN-based, and (2) cellular-based.
The V2X contains the following sub categories:
Vehicle-to-Everything (V2X) - "communication between a vehicle and any entity that may affect, or may be affected by, the vehicle."
Vehicle-to-Device (V2D) - Bluetooth / WiFi-Direct, e.g. Apple’s CarPlay and Google’s Android Auto.
Vehicle-to-Grid (V2G) - information exchange with the smart grid to balance loads more efficiently.
Vehicle-to-Building (V2B), also known as Vehicle-to-Home (V2H)
Vehicle-to-Load (V2L)
Vehicle-to-Network (V2N) - communication based on Cellular (3GPP) / 802.11p.
Vehicle-to-Cloud (V2C) - e.g. OTA updates, remote vehicle diagnostics (DoIP).
Vehicle-to-Infrastructure (V2I) - e.g. traffic lights, lane markers and parking meters.
Vehicle-to-Pedestrian (V2P) - e.g. wheelchairs and bicycles.
Vehicle-to-Vehicle (V2V) - real-time data exchange with nearby vehicles.
History
The history of vehicle-to-vehicle communication projects aimed at increasing safety, reducing accidents, and assisting drivers can be traced back to the 1970s, with projects such as the US Electronic Road Guidance System (ERGS) and Japan's CACS. Most milestones in the history of vehicle networks originate from the United States, Europe, and Japan.
Standardizat |
https://en.wikipedia.org/wiki/Masreliez%27s%20theorem | Masreliez's theorem describes a recursive algorithm within the technology of the extended Kalman filter, named after its author, the Swedish-American physicist John Masreliez. The algorithm estimates the state of a dynamic system from often incomplete measurements corrupted by distortion.
Masreliez's theorem produces estimates that are quite good approximations to the exact conditional mean in non-Gaussian additive outlier (AO) situations. Some evidence for this comes from Monte Carlo simulations.
The key approximation property used to construct these filters is that the state prediction density is approximately Gaussian. Masreliez discovered in 1975 that this approximation yields intuitively appealing non-Gaussian filter recursions with data-dependent covariance (unlike the Gaussian case); this derivation also provides one of the nicest ways of establishing the standard Kalman filter recursions. Some theoretical justification for use of the Masreliez approximation is provided by the "continuity of state prediction densities" theorem in Martin (1979).
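The idea of a data-dependent, outlier-resistant update can be illustrated with a schematic scalar sketch. This is not Masreliez's exact derivation: the Huber-style score function, the covariance down-weighting rule, and all parameter values here are illustrative assumptions. The defining feature shown is that the usual linear Kalman correction is replaced by a bounded function of the innovation, so a single additive outlier cannot drag the estimate arbitrarily far.

```python
# Schematic sketch of a robust Kalman-style measurement update in which
# the innovation passes through a bounded score (psi) function.
import math

def huber_psi(r: float, c: float = 1.5) -> float:
    """Bounded influence function: identity for small residuals, clipped at +/-c."""
    return max(-c, min(c, r))

def robust_update(x_pred, p_pred, z, r_var, c=1.5):
    """One measurement update.
    x_pred, p_pred: predicted state mean and variance
    z: observed measurement; r_var: nominal measurement-noise variance."""
    s = p_pred + r_var                    # innovation variance
    innov = (z - x_pred) / math.sqrt(s)   # standardized innovation
    x_new = x_pred + p_pred / math.sqrt(s) * huber_psi(innov, c)
    # Data-dependent covariance: reduce the variance shrinkage
    # when the innovation was clipped (i.e., when z looks like an outlier).
    w = 1.0 if abs(innov) <= c else c / abs(innov)
    p_new = p_pred - w * p_pred ** 2 / s
    return x_new, p_new

# A wild outlier (z = 50) moves the estimate far less than the plain
# Kalman update (which would jump to 25) would allow.
x, p = robust_update(0.0, 1.0, z=50.0, r_var=1.0)
print(x, p)
```

For a nominal (small) innovation the clipped score equals the raw innovation and the update reduces to the standard Kalman form, which is the sense in which the Gaussian filter falls out as a special case.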
See also
Control engineering
Hidden Markov model
Bayes' theorem
Robust optimization
Probability theory
Nyquist–Shannon sampling theorem
References
Control theory
Signal processing
Control engineering |
https://en.wikipedia.org/wiki/Demonstration%20plant | A demonstration plant is an industrial system used to validate an industrial process for commercialization. It is larger than a pilot plant, and is the final stage in research, development and demonstration of a new process. Demonstration plants are built in a range of sizes, and the term 'demonstration plant' can sometimes be used interchangeably with 'pilot plant.' However, demonstration plants are generally larger than pilot plants, and are often constructed following a successful trial in a pilot scale size. Demonstration plants are used to prove a process works at industrial scale, and is financially viable in its intended industry.
Goals
The goals of a demonstration plant are generally as follows:
Prove a new technology using commercially available, pre-tested equipment.
Show a reasonable return on investment (ROI) for the capital that will be invested in a full-scale system, including the operational costs of running such a system.
In some cases, to start bringing product to market in significant enough amounts that production, distribution and target market viability can be established, including finalization of market testing.
Establish a viable product method that will endure the test of a true manufacturing operation.
Design factors
Many of the same design techniques that are used for pilot plants are also used when developing demonstration plants. 3D modeling, chemical similitude studies, mass and energy balances, risk factors, computational fluid dynamics (CFD), and mathematical modeling are common techniques used to design demonstration modules before actual fabrication occurs.
The emphasis in a demonstration plant is on using industrial equipment, rather than smaller-scale equipment, to prove process viability. A significant amount of product must be produced in equipment that will hold up over a long production lifetime and not be prohibitively expensive. A demonstration plant must show that enough end-product can be created to offset the cost |
https://en.wikipedia.org/wiki/Phonovoltaic | A phonovoltaic (pV) cell converts vibrational (phonon) energy into a direct current, much as the photovoltaic effect in a photovoltaic (PV) cell converts light (photons) into power. That is, it uses a p-n junction to separate the electrons and holes generated as valence electrons absorb optical phonons more energetic than the band gap, and then collects them in the metallic contacts for use in a circuit. The pV cell is an application of heat transfer physics and competes with other thermal energy harvesting devices like the thermoelectric generator.
While the thermoelectric generator converts heat, a broad spectrum of phonon and electron energy, to electricity, the pV cell converts only a narrow band of phonon energy, i.e., only the most energetic optical phonon modes. A narrow band of excited optical phonons has much less entropy than heat. Thus, the pV cell can exceed the thermoelectric efficiency. However, exciting and harvesting the optical phonon poses a challenge.
Satisfying the laws of thermodynamics
By the first law of thermodynamics, the excitation driving electron generation in both photo- and phonovoltaic cells, i.e., the photon or phonon, must have more energy than the semiconductor band gap. For a PV cell, many materials are available with a band gap well matched to the solar photon spectrum, like silicon or gallium arsenide. For a pV cell, however, no current semiconducting materials have a band gap smaller than the energy of their most energetic (optical) phonon modes. Thus, novel materials are required with both energetic optical phonon modes (e.g., graphene, diamond, or boron nitride) and a small band gap (e.g., graphene).
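The first-law requirement above can be stated compactly. The symbols here are generic placeholders (the article's original inline notation was lost in extraction): ℏω for the energy of the absorbed quantum and E_g for the semiconductor band gap.

```latex
% First-law condition for carrier generation across the junction:
% the absorbed quantum must exceed the band gap.
\hbar\omega_{\text{photon}} > E_g \quad \text{(photovoltaic cell)}
\qquad
\hbar\omega_{\text{ph,optical}} > E_g \quad \text{(phonovoltaic cell)}
```

The pV materials challenge described above is precisely that the second inequality fails for all current semiconductors: their optical phonon energies sit below their band gaps.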
By the second law of thermodynamics, the excitation must be "hotter" than the cell for power generation to occur. In a PV, the light comes from an outside source, for example, the sun, which is nearly 6000 kelvins, whereas the PV is around 300 kelvins. Thus, the second law is satisfied and energy conversion |