https://en.wikipedia.org/wiki/WaveLAN
WaveLAN was a brand name for a family of wireless networking technologies sold by NCR, AT&T, Lucent Technologies, and Agere Systems, as well as by other companies under OEM agreements. The WaveLAN name debuted on the market in 1990 and remained in use until 2000, when Agere Systems renamed its products ORiNOCO. WaveLAN laid an important foundation for the formation of the IEEE 802.11 working group and the resulting creation of Wi-Fi.
WaveLAN has been used on two different families of wireless technology:
Pre-IEEE 802.11 WaveLAN, also called Classic WaveLAN
IEEE 802.11-compliant WaveLAN, also known as WaveLAN IEEE and ORiNOCO
History
WaveLAN was originally designed in 1986–87 by NCR Systems Engineering, a subsidiary of NCR Corporation at Nieuwegein in the Dutch province of Utrecht (later renamed the Wireless Communication and Networking Division, WCND), and introduced to the market in 1990 as a wireless alternative to Ethernet and Token Ring. The next year, NCR contributed the WaveLAN design to the IEEE 802 LAN/MAN Standards Committee. This led to the founding of the 802.11 Wireless LAN Working Committee, which produced the original IEEE 802.11 standard, which eventually became the basis of the Wi-Fi certification mark. When NCR was acquired by AT&T in 1991, becoming the AT&T GIS (Global Information Solutions) business unit, the product name was retained; the same happened two years later when the product was transferred to the AT&T GBCS (Global Business Communications Systems) business unit, and again when AT&T spun off GBCS as Lucent in 1995. The technology was also sold as WaveLAN under OEM agreements by Epson, Hitachi, and NEC, and as the RoamAbout DS by DEC. It competed directly with Aironet's non-802.11 ARLAN line, which offered similar speeds, frequency ranges, and hardware.
Several companies also marketed wireless bridges and routers based on the WaveLAN ISA and PC cards, like the C-Spec OverLAN, KarlNet KarlBridge, Perso
https://en.wikipedia.org/wiki/Paley%20graph
In mathematics, Paley graphs are dense undirected graphs constructed from the members of a suitable finite field by connecting pairs of elements that differ by a quadratic residue. The Paley graphs form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. Paley graphs allow graph-theoretic tools to be applied to the number theory of quadratic residues, and have interesting properties that make them useful in graph theory more generally.
Paley graphs are named after Raymond Paley. They are closely related to the Paley construction for constructing Hadamard matrices from quadratic residues.
They were introduced as graphs independently by Horst Sachs (1962) and by Paul Erdős and Alfréd Rényi (1963). Sachs was interested in them for their self-complementarity properties, while Erdős and Rényi studied their symmetries.
Paley digraphs are directed analogs of Paley graphs that yield antisymmetric conference matrices. They were introduced by Ronald Graham and Joel Spencer in 1971 (independently of Sachs, Erdős, and Rényi) as a way of constructing tournaments with a property previously known to be held only by random tournaments: in a Paley digraph, every small subset of vertices is dominated by some other vertex.
Definition
Let q be a prime power such that q ≡ 1 (mod 4). That is, q should be either an arbitrary power of a Pythagorean prime (a prime congruent to 1 mod 4) or an even power of an odd non-Pythagorean prime. This choice of q implies that in the unique finite field Fq of order q, the element −1 has a square root.
Now let V = Fq and let
E = { {a, b} : a, b ∈ Fq, a ≠ b, and a − b ∈ (Fq×)² },
where (Fq×)² denotes the set of nonzero squares in Fq.
If a pair {a, b} is included in E, it is included under either ordering of its two elements: since a − b = −(b − a) and −1 is a square, a − b is a square if and only if b − a is a square.
By definition G = (V, E) is the Paley graph of order q.
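A minimal sketch of this construction, restricted to prime q so that ordinary modular arithmetic models Fq (the function name is invented for this example):

```python
# Build the Paley graph of prime order q ≡ 1 (mod 4) as a vertex set and
# an edge set. Restricted to prime q so that arithmetic mod q models Fq.
def paley_graph(q):
    assert q % 4 == 1, "q must be congruent to 1 mod 4"
    squares = {(x * x) % q for x in range(1, q)}  # nonzero quadratic residues
    vertices = range(q)
    edges = {frozenset((a, b)) for a in vertices for b in vertices
             if a != b and (a - b) % q in squares}
    return vertices, edges

V, E = paley_graph(13)
print(len(E))  # 39: the graph is (q - 1)/2 = 6-regular, so 13 * 6 / 2 edges
```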
Example
For q = 13, the field Fq is just integer arithmetic modulo 13. The numbers with square roots mod 13 are:
±1 (square roots ±1 for +1, ±5 for −1)
±3 (square roots ±4 for +3, ±6 for −3)
±4 (square roots ±2 for +4, ±3 for −4)
https://en.wikipedia.org/wiki/Lithuanian%20dictionaries
Dictionaries of the Lithuanian language have been printed since the first half of the 17th century.
History
The first Lithuanian language dictionary was compiled by Konstantinas Sirvydas and printed in 1629 as a trilingual (Polish–Latin–Lithuanian) dictionary. Five editions of it were printed until 1713, but it was used and copied by other lexicographers until the 19th century.
The first German–Lithuanian–German dictionary, to address the necessities of Lithuania Minor, was published by Friedrich W. Haack in 1730. A better German–Lithuanian–German dictionary, with a sketch of grammar and history of the language, more words, and systematic orthography, was published by Philipp Ruhig in 1747. In 1800, Christian Gottlieb Mielcke published an expanded and revised version of Ruhig's dictionary. Its foreword was the last work of Immanuel Kant printed during his life.
There also existed a number of notable unpublished dictionaries.
At the beginning of the 19th century linguists recognized the conservative character of Lithuanian, and it came into the focus of the comparative linguistics of Indo-European languages. To address the needs of linguists, Georg H. F. Nesselmann published a Lithuanian–German dictionary in 1851.
The culmination of Lithuanian linguists' efforts is the 20-volume Academic Dictionary of Lithuanian.
References
Dictionaries
Online dictionaries
https://en.wikipedia.org/wiki/Joint%20application%20design
Joint application design (JAD) is a process used in the life cycle area of the dynamic systems development method (DSDM) to collect business requirements while developing new information systems for a company. "The JAD process also includes approaches for enhancing user participation, expediting development, and improving the quality of specifications." It consists of a workshop where "knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system." The attendees include high level management officials who will ensure the product provides the needed reports and information at the end. This acts as "a management process which allows Corporate Information Services (IS) departments to work more effectively with users in a shorter time frame".
Through JAD workshops, the knowledge workers and IT specialists are able to resolve any difficulties or differences between the two parties regarding the new information system. The workshop follows an agenda in order to guarantee that all uncertainties between parties are covered and to help prevent any miscommunications. Miscommunications can create repercussions if not addressed until later on in the process. (See below for Key Participants and Key Steps to an Effective JAD). In the end, this process will result in a new information system that is feasible and appealing to both the designers and end users.
"Although the JAD design is widely acclaimed, little is actually known about its effectiveness in practice." According to the Journal of Systems and Software, a field study was done at three organizations using JAD practices to determine how JAD influenced system development outcomes. The results of the study suggest that organizations realized modest improvement in systems development outcomes by using the JAD method. JAD use was most effective in small, clearly focused projects and less effective in large, complex projects. Since 2010, the Int
https://en.wikipedia.org/wiki/Sierra%20Wireless
Sierra Wireless (a subsidiary of Semtech Corporation) is a Canadian multinational wireless communications equipment designer, manufacturer and services provider headquartered in Richmond, British Columbia, Canada. It also maintains offices and operations in the United States, Korea, Japan, Taiwan, India, France, Australia and New Zealand.
The company sells mobile computing and machine-to-machine (M2M) communications products that work over cellular networks: 2G, 3G, 4G and 5G mobile broadband wireless routers, gateways, and modules, as well as software, tools, and services.
Sierra Wireless products and technologies are used in a variety of markets and industries, including automotive and transportation, energy, field service, healthcare, industrial and infrastructure, mobile computing and consumers, networking, sales and payment, and security. It also maintains a network of experts in mobile broadband and M2M integration to support customers worldwide.
The company's products are sold directly to original equipment manufacturers (OEMs), as well as indirectly through distributors and resellers.
History
Sierra Wireless was founded in 1993 in Vancouver, Canada. In August 2003, it completed an acquisition of privately held, high-speed CDMA wireless modules supplier, AirPrime, issuing approximately 3.7 million shares to AirPrime shareholders. On March 6, 2007, the company announced its purchase of Hayward, California-based AirLink Communications, a privately held developer of high-value fixed, portable, and mobile wireless data solutions. Prior to the May 2007 completion of its sale to Sierra Wireless for a total of $27 million in cash and stock, AirLink had reported $24.8 million in revenues and gross margin of 44 percent.
In August 2008, Sierra Wireless purchased the assets of Junxion, a Seattle based producer of Linux-based mobile wireless access points and network routers for enterprise and government customers.
In December 2008, Sierra Wireless made a friendl
https://en.wikipedia.org/wiki/Journal%20of%20Machine%20Learning%20Research
The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning. It was established in 2000 and the first editor-in-chief was Leslie Kaelbling. The current editors-in-chief are Francis Bach (Inria) and David Blei (Columbia University).
History
The journal was established as an open-access alternative to the journal Machine Learning. In 2001, forty editorial board members of Machine Learning resigned, saying that in the era of the Internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. The open access model employed by the Journal of Machine Learning Research allows authors to publish articles for free and retain copyright, while archives are freely available online.
Print editions of the journal were published by MIT Press until 2004 and by Microtome Publishing thereafter. From its inception, the journal received no revenue from the print edition and paid no subvention to MIT Press or Microtome Publishing.
In response to the prohibitive costs of arranging workshop and conference proceedings publication with traditional academic publishing companies, the journal launched a proceedings publication arm in 2007 and now publishes proceedings for several leading machine learning conferences, including the International Conference on Machine Learning, COLT, AISTATS, and workshops held at the Conference on Neural Information Processing Systems.
Further reading
References
External links
Computer science journals
Open access journals
Machine learning
Academic journals established in 2000
https://en.wikipedia.org/wiki/Machine%20Learning%20%28journal%29
Machine Learning is a peer-reviewed scientific journal, published since 1986.
In 2001, forty editors and members of the editorial board of Machine Learning resigned in order to support the Journal of Machine Learning Research (JMLR), saying that in the era of the internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. Instead, they wrote, they supported the model of JMLR, in which authors retained copyright over their papers and archives were freely available on the internet.
Following the mass resignation, Kluwer changed their publishing policy to allow authors to self-archive their papers online after peer-review.
Selected articles
References
Computer science journals
Machine learning
Delayed open access journals
Springer Science+Business Media academic journals
Academic journals established in 1986
https://en.wikipedia.org/wiki/Analog%20temperature%20controlled%20crystal%20oscillator
In physics, an Analog Temperature Controlled Crystal Oscillator or Analogue Temperature Compensated Crystal Oscillator (ATCXO) uses analog sampling techniques to correct the temperature deficiencies of a crystal oscillator circuit, its package and its environment.
Typically the correction techniques involve the physical and electrical characterisation of the motional inductance and terminal capacitance of a crystal blank, the knowledge of which is used to create a correction polynomial, or algorithm, which in turn is implemented in circuit blocks. These are usually simulated in a mathematical modeling software tool such as SPICE, to verify that the original measured data can be corrected adequately. Once the system performance has been verified, these circuits are then implemented in a silicon die, usually in a bulk CMOS technology. Once fabricated, this die is then embedded into an oscillator module along with the crystal blank. Due to the sub-ppm accuracy of this type of crystal oscillator, specialist packaging must be used to ensure good ageing and temperature shock characteristics. Example applications are for use in low power or battery operated consumer electronic products such as GSM or CDMA mobile phones, or GPS satellite navigation systems.
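As a rough illustration of the correction-polynomial idea (not any vendor's actual algorithm; the crystal curve and coefficients below are invented for the example), a cubic compensation term can be chosen to cancel the crystal's roughly cubic frequency/temperature characteristic:

```python
# Illustrative sketch only: hypothetical crystal curve and coefficients.
def crystal_error_ppm(t):
    # AT-cut crystals follow a roughly cubic frequency/temperature curve
    # with an inflection point near 25 °C (magnitude here is made up).
    return 4e-5 * (t - 25.0) ** 3

def compensated_error_ppm(t, a=(0.0, 0.0, 0.0, -4e-5)):
    # The ATCXO's circuit blocks realize a polynomial in (t - 25) whose
    # coefficients would be fitted during characterisation of the blank.
    dt = t - 25.0
    correction = a[0] + a[1] * dt + a[2] * dt ** 2 + a[3] * dt ** 3
    return crystal_error_ppm(t) + correction

for t in (-20, 25, 70):
    print(t, compensated_error_ppm(t))  # ~0 ppm once the correction matches
```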
References
Electronic oscillators
https://en.wikipedia.org/wiki/POW-R
POW-R (Psychoacoustically Optimized Wordlength Reduction) is a set of commercial dithering and noise shaping algorithms used in digital audio bit-depth reduction.
Developed by a consortium of four companies – The POW-R Consortium – the algorithms were first made available in 1999 in digital audio hardware products.
POW-R is now licensed for use by many companies, particularly those in the digital audio workstation (DAW) arena, where it currently has significant market share.
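The POW-R algorithms themselves are proprietary, so the sketch below shows only the generic operation they refine: bit-depth reduction with plain TPDF (triangular) dither. All names and parameter choices are illustrative assumptions, not POW-R internals:

```python
import numpy as np

# Reduce 24-bit samples to 16-bit with TPDF dither: add triangular noise of
# ±1 output LSB before requantizing, which decorrelates quantization error.
def reduce_bit_depth(samples, bits_in=24, bits_out=16):
    lsb = float(1 << (bits_in - bits_out))  # one output LSB, in input units
    # TPDF dither = sum of two independent uniform noises
    dither = (np.random.uniform(-0.5, 0.5, samples.shape) +
              np.random.uniform(-0.5, 0.5, samples.shape)) * lsb
    dithered = samples.astype(float) + dither
    return np.round(dithered / lsb).astype(np.int32)  # 16-bit-scale samples

x = (np.sin(np.linspace(0, 2 * np.pi, 48000)) * (2**23 - 1)).astype(np.int32)
y = reduce_bit_depth(x)
```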
History
POW-R was developed between 1997 and 1998 after an unfavorable change in the licensing terms of a leading bit-depth reduction algorithm of the time prompted some of its licensees to put together a consortium to develop a viable alternative algorithm.
Formed by four audio engineering companies – Lake Technology (later acquired by Dolby Laboratories), Weiss Engineering, Millennia Media, and Z-Systems – the consortium set out with the goal of creating 'the most sonically transparent dithering algorithm possible'.
In 1999, the first products containing POW-R were released by consortium companies. Other companies became interested in using POW-R in their products, and the algorithms are now licensed to a number of leading DAW vendors including Apple, Avid (Digidesign), Sonic Studio, Ableton, Magix / Sequoia / Samplitude, and others.
Reception
One of the first products to include POW-R was a hardware dithering unit from Weiss Engineering; in a 1999 review of this product, mastering engineer Bob Katz spoke highly of the new algorithm, declaring it 'an incredible achievement'.
Technical details
Technically, not the entire POW-R suite is noise shaping; rather, the original POW-R algorithm is based on narrow-band Nyquist dither, while other POW-R algorithms include noise shaping and white noise. Unlike noise-shaping algorithms based on an 'absolute threshold of hearing' model (i.e. the quietest sound that can be heard in otherwise silent conditions), POW-R has been designed to give optimal performance at normal listenin
https://en.wikipedia.org/wiki/Paxos%20%28computer%20science%29
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors.
Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures.
Consensus protocols are the basis for the state machine replication approach to distributed computing, as suggested by Leslie Lamport and surveyed by Fred Schneider. State machine replication is a technique for converting an algorithm into a fault-tolerant, distributed implementation. Ad-hoc techniques may leave important cases of failures unresolved. The principled approach proposed by Lamport et al. ensures all cases are handled safely.
The Paxos protocol was first submitted in 1989 and named after a fictional legislative consensus system used on the Paxos island in Greece, where Lamport wrote that the parliament had to function "even though legislators continually wandered in and out of the parliamentary Chamber". It was later published as a journal article in 1998.
The Paxos family of protocols includes a spectrum of trade-offs between the number of processors, number of message delays before learning the agreed value, the activity level of individual participants, number of messages sent, and types of failures. Although no deterministic fault-tolerant consensus protocol can guarantee progress in an asynchronous network (a result proved in a paper by Fischer, Lynch and Paterson), Paxos guarantees safety (consistency), and the conditions that could prevent it from making progress are difficult to provoke.
Paxos is usually used where durability is required (for example, to replicate a file or a database), in which the amount of durable state could be large. The protocol attempts to make progress even during periods when some bounded number of replicas are unresponsive. There is also a mechanism to drop a permanently failed replica or to add a new replica.
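As an illustration of the protocol's two-phase structure, here is a minimal in-memory sketch of single-decree Paxos. The class and function names are invented for this example, and a real deployment would add networking, durable storage on the acceptors, and retry with higher proposal numbers:

```python
# Minimal single-decree Paxos sketch: one value is chosen by a majority
# of acceptors, and once chosen it can never be overwritten.
class Acceptor:
    def __init__(self):
        self.promised = -1          # highest proposal number promised
        self.accepted = (-1, None)  # (proposal number, value) last accepted

    def prepare(self, n):
        if n > self.promised:
            self.promised = n
            return True, self.accepted  # promise, reporting any prior accept
        return False, None

    def accept(self, n, value):
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    # Phase 1: gather promises from a majority.
    replies = [a.prepare(n) for a in acceptors]
    granted = [acc for ok, acc in replies if ok]
    if len(granted) <= len(acceptors) // 2:
        return None  # no quorum; a real proposer retries with a higher n
    # If any acceptor already accepted a value, that value must be proposed.
    prior = max((acc for acc in granted if acc[1] is not None),
                default=None, key=lambda acc: acc[0])
    chosen = prior[1] if prior else value
    # Phase 2: ask the acceptors to accept the value.
    acks = sum(a.accept(n, chosen) for a in acceptors)
    return chosen if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="A"))  # -> "A"
print(propose(acceptors, n=2, value="B"))  # -> "A": the choice is preserved
```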
History
The top
https://en.wikipedia.org/wiki/Virtual%20workplace
A virtual workplace is a work environment where employees can perform their duties remotely, using technology such as laptops, smartphones, and video conferencing tools. A virtual workplace is not located in any one physical space. It is usually a network of several workplaces technologically connected (via a private network or the Internet) without regard to geographic boundaries. Employees are thus able to interact in a collaborative working environment regardless of where they are located. A virtual workplace integrates hardware, people, and online processes.
The phenomenon of a virtual workplace has grown in the 2000s as advances in technology have made it easier for employees to work from anywhere with an internet connection.
The virtual workplace industry includes companies that offer remote work solutions, such as virtual meeting (teleconference) software and project management tools. Consulting firms can also help companies transition to a virtual workplace if needed. The latest technology evolution in the space is virtual office software which allows companies to gather all their team members in one virtual workplace. Companies in a variety of industries, including technology, finance, and healthcare, are turning to virtual workplaces to increase employee flexibility and productivity, reduce office costs, and attract and retain top talent. Recently, there have been four industries that consider remote work suitable: communications and information technology, educational services, media and communications, and professional and business services.
History
As information technology began to play a greater role in the daily operations of organizations, virtual workplaces developed as an augmentation or alternative to traditional work environments of rooms, cubicles and office buildings.
In 2010, the Telework Enhancement Act of 2010 required each Executive agency in the United States to establish a policy allowing remote work to the maximum extent possible, s
https://en.wikipedia.org/wiki/Full-employment%20theorem
In computer science and mathematics, a full employment theorem is a term used, often humorously, to refer to a theorem which states that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done.
For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist. This also implies that there may always be a better compiler since the proof that one has the best compiler cannot exist. Therefore, compiler writers will always be able to speculate that they have something to improve. A similar example in practical computer science is the idea of no free lunch in search and optimization, which states that no efficient general-purpose solver can exist, and hence there will always be some particular problem whose best known solution might be improved.
Similarly, Gödel's incompleteness theorems have been called full employment theorems for mathematicians. Tasks such as virus writing and detection, and spam filtering and filter-breaking are also subject to Rice's theorem.
References
https://en.wikipedia.org/wiki/Munching%20square
The Munching Square is a display hack dating back to the PDP-1 (ca. 1962, reportedly discovered by Jackson Wright), which employs a trivial computation (repeatedly plotting the graph Y = X XOR T for successive values of T) to produce an impressive display of moving and growing squares that devour the screen. The initial value of T is treated as a parameter, which, when well-chosen, can produce amazing effects. Some of these, later (re)discovered on the LISP machine, have been christened munching triangles (using bitwise AND instead of XOR, and toggling points instead of plotting them), munching w's, and munching mazes. More generally, suppose a graphics program produces an impressive and ever-changing display of some basic form, foo, on a display terminal, and does it using a relatively simple program; then the program (or the resulting display) is likely to be referred to as munching foos.
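A minimal sketch of the computation (rendered as ASCII rather than on a point-plotting display; the 16×16 grid size is an arbitrary choice):

```python
# Frame t of munching squares plots the set of points where y == x XOR t.
SIZE = 16  # classic displays were 256x256; 16 keeps the output readable

def frame(t):
    rows = []
    for y in range(SIZE):
        rows.append(''.join('#' if (x ^ t) == y else '.'
                            for x in range(SIZE)))
    return '\n'.join(rows)

for t in range(4):  # successive values of T animate the growing squares
    print(frame(t), end='\n\n')
```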
References
External links
Video of the original Munching Squares demo running on a PDP-1
Munching Squares at MathWorld
See also
HAKMEM
The Munch module of the open source XScreenSaver collection.
History of software
Screensavers
Novelty software
https://en.wikipedia.org/wiki/Mozuku
Mozuku is a collective term for various types of Japanese brown algae from the family Chordariaceae, which are used as food. These include ito-mozuku (Nemacystus decipiens), Okinawa mozuku (Cladosiphon okamuranus), ishi-mozuku (Sphaerotrichia divaricata) and futo mozuku (Tinocladia crassa). Occasionally the aquatic flowering plant Hydrilla verticillata is referred to as mozuku.
References
Edible algae
Common names of organisms
https://en.wikipedia.org/wiki/Backup%20site
A backup site or work area recovery site is a location where an organization can relocate following a disaster, such as fire, flood, terrorist threat, or other disruptive event. This is an integral part of the disaster recovery plan and wider business continuity planning of an organization.
A backup, or alternate, site can be another data center location which is either operated by the organization, or contracted via a company that specializes in disaster recovery services. In some cases, one organization will have an agreement with a second organization to operate a joint backup site. In addition, an organization may have a reciprocal agreement with another organization to set up a site at each of their data centers.
Sites are generally classified based on how prepared they are and the speed with which they can be brought into operation: "cold" (facility is prepared), "warm" (equipment is in place), and "hot" (operational data is loaded), with the cost to implement and maintain increasing with the "temperature".
Classification
Cold site
A cold site is an empty operational space with basic facilities like raised floors, air conditioning, power and communication lines etc. Following an incident, equipment is brought in and set up to resume operations. It does not include backed-up copies of data and information from the original location of the organization, nor does it include hardware already set up. The lack of provisioned hardware contributes to the minimal start-up costs of the cold site, but requires additional time following the disaster to have the operation running at a capacity similar to that prior to the disaster. In some cases, a cold site may have equipment available, but it is not operational.
Warm site
A warm site is a compromise between hot and cold. These sites will have hardware and connectivity already established, though on a smaller scale. Warm sites might have backups on hand, but they may be incomplete and may be between several days t
https://en.wikipedia.org/wiki/Horizontal%20market%20software
In computer software, horizontal market software is a type of application software that is useful in a wide range of industries. This is the opposite of vertical market software, which has a scope of usefulness limited to few industries. Horizontal market software is also known as "productivity software."
Example
Examples of horizontal market software include word processors, web browsers, spreadsheets, calendars, project management applications, and generic bookkeeping applications. Since horizontal market software is developed to be used by a broad audience, it generally lacks any market-specific customizations.
See also
Horizontal market
Vertical market software
Enterprise resource planning
Product software implementation method
References
Software
Software by type
https://en.wikipedia.org/wiki/Bhaskaracharya%20Pratishthana
Bhaskaracharya Pratishthana is a research and education institute for mathematics in Pune, India, founded by noted Indian-American mathematician professor Shreeram Abhyankar.
The institute is named after the ancient Indian mathematician Bhaskaracharya (born 1114 A.D.). Founded in 1976 and based in Pune, India, the institute has researchers working in many areas of mathematics, particularly in algebra and number theory.
Since 1992, the Pratishthana has also been a recognized center for conducting the Regional Mathematics Olympiad (RMO) under the National Board for Higher Mathematics (NBHM) for the Maharashtra and Goa region. This has enabled the Pratishthana to train many students from standards V to XII for this examination. Many students who received training at BP have won medals in the International Mathematical Olympiad.
The Pratishthana publishes the mathematics periodical Bona Mathematica and has published texts in higher and olympiad mathematics. It also holds annual or biennial conferences and workshops in research areas of higher mathematics, attended by Indian and foreign scholars and professors, and has organized a number of workshops for research students and college teachers under the aegis of NBHM/NCM. The National Board for Higher Mathematics has greatly helped the Pratishthana enrich its library, and the Department of Atomic Energy and the Mathematics Department of S. P. Pune University have rendered active co-operation in holding conferences and workshops.
It also conducts the BMTSC exam, a school-level mathematics competition for students studying in the 5th and 6th grades. The objectives of the competition are listed below:
1. To identify good students of mathematics at an early age.
2. To provide a pre-Olympiad type of competition.
3. To enhance mathematical ability and logical thinking.
4. To nurture successful students and improve their ability.
Projects
Recently, in Pratishthana, two projects supp
https://en.wikipedia.org/wiki/Family%20of%20curves
In geometry, a family of curves is a set of curves, each of which is given by a function or parametrization in which one or more of the parameters is variable. In general, the parameter(s) influence the shape of the curve in a way that is more complicated than a simple linear transformation. Sets of curves given by an implicit relation may also represent families of curves.
Families of curves appear frequently in solutions of differential equations; when an additive constant of integration is introduced, it will usually be manipulated algebraically until it no longer represents a simple linear transformation.
Families of curves may also arise in other areas. For example, all non-degenerate conic sections can be represented using a single polar equation with one parameter, the eccentricity e of the curve:
r(θ) = l / (1 − e cos θ),
where l is the semi-latus rectum; as the value of e changes, the appearance of the curve varies in a relatively complicated way.
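As a sketch of how such a family can be explored, the following plots several members for eccentricities below 1 (the choice of matplotlib and of l = 1 are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# One-parameter family of conics r = l / (1 - e*cos(theta)), l fixed at 1.
# Eccentricities below 1 keep the denominator positive (closed curves).
theta = np.linspace(0, 2 * np.pi, 1000)
for e in (0.0, 0.5, 0.9):  # circle, then ellipses of growing eccentricity
    r = 1.0 / (1.0 - e * np.cos(theta))
    plt.plot(r * np.cos(theta), r * np.sin(theta), label=f"e = {e}")
plt.axis("equal")
plt.legend()
plt.show()
```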
Applications
Families of curves may arise in various topics in geometry, including the envelope of a set of curves and the caustic of a given curve.
Generalizations
In algebraic geometry, an algebraic generalization is given by the notion of a linear system of divisors.
External links
Algebraic geometry
https://en.wikipedia.org/wiki/Cognitive%20dimensions%20of%20notations
Cognitive dimensions or cognitive dimensions of notations are design principles for notations, user interfaces and programming languages, described by researcher Thomas R.G. Green and further researched with Marian Petre. The dimensions can be used to evaluate the usability of an existing information artifact, or as heuristics to guide the design of a new one, and are useful in Human-Computer Interaction design.
Cognitive dimensions are designed to provide a lightweight approach to analyse the quality of a design, rather than an in-depth, detailed description. They provide a common vocabulary for discussing many factors in notation, UI or programming language design. Also, cognitive dimensions help in exploring the space of possible designs through design maneuvers, changes intended to improve the design along one dimension.
List of the cognitive dimensions
Thomas Green originally defined 14 cognitive dimensions:
Abstraction gradient What are the minimum and maximum levels of abstraction exposed by the notation? Can details be encapsulated?
Closeness of mapping How closely does the notation correspond to the problem world?
Consistency After part of the notation has been learned, how much of the rest can be successfully guessed?
Diffuseness / terseness How many symbols or how much space does the notation require to produce a certain result or express a meaning?
Error-proneness To what extent does the notation influence the likelihood of the user making a mistake?
Hard mental operations How much hard mental processing lies at the notational level, rather than at the semantic level? Are there places where the user needs to resort to fingers or penciled annotation to keep track of what's happening?
Hidden dependencies Are dependencies between entities in the notation visible or hidden? Is every dependency indicated in both directions? Does a change in one area of the notation lead to unexpected consequences?
Juxtaposability Can different par
https://en.wikipedia.org/wiki/Chopsticks%20%28hand%20game%29
Chopsticks (sometimes called Calculator, or just Sticks) is a hand game for two or more players, in which players extend a number of fingers from each hand and transfer those scores by taking turns to tap one hand against another. Chopsticks is an example of a combinatorial game, and is solved in the sense that with perfect play, an optimal strategy from any point is known.
A player who has the same number of fingers on each hand can strike their hands together and divide the fingers between them, leaving one finger on each hand.
A player with only one finger left cannot swap it between hands and must attack the opponent.
Abbreviation
A chopsticks position can be easily abbreviated to a four-digit code [ABCD]. A and B are the hands (in ascending order of fingers) of the player who is about to take their turn. C and D are the hands (in ascending order of fingers) of the player who is not about to take their turn. It is important to notate each player's hands in ascending order, so that a single distinct position isn't accidentally represented by two codes. For example, the code [1032] is not allowed, and should be notated [0123].
Therefore, the starting position is [1111]. The next position must be [1211]. The next position must be either [1212] or [1312]. Treating each position as a 4-digit number, the smallest position is 0000, and the largest position is 4444.
This abbreviation formula expands easily to games with more players. A three-player game can be represented by six-digits (e.g. [111211]), where each pair of adjacent digits represents a single player, and each pair is ordered based on when players will take their turns. The leftmost pair represents the hands of the player about to take his turn; the middle pair represents the player who will go next, and so on. The rightmost pair represents the player who must wait the longest before his turn (usually because he just went).
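A small sketch of this normalization for the two-player code (the function name is invented for this example):

```python
# Encode a two-player chopsticks position as the four-digit code [ABCD]:
# each player's two hands are written in ascending order, so that every
# distinct position has exactly one code.
def encode(player_hands, opponent_hands):
    a, b = sorted(player_hands)    # hands of the player about to move
    c, d = sorted(opponent_hands)  # hands of the waiting player
    return f"{a}{b}{c}{d}"

print(encode((1, 1), (1, 1)))  # starting position -> "1111"
print(encode((1, 0), (3, 2)))  # the disallowed "1032" normalizes to "0123"
```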
Moves
Under normal rules, there are a maximum of 14 possible moves:
Four attacks (
https://en.wikipedia.org/wiki/Trauzl%20lead%20block%20test
The Trauzl lead block test, also called the Trauzl test, or just Trauzl, is a test used to measure the strength of explosive materials. It was developed by Isidor Trauzl in 1885.
The test is performed by loading a 10-gram foil-wrapped sample of the explosive into a hole drilled into a lead block with specific dimensions and properties (a soft lead cylinder, 200 mm in diameter and 200 mm high, with the hole 125 mm deep and 25 mm in diameter). The hole is then topped up with sand, and the sample is detonated electrically. After detonation, the volume increase of the cavity is measured. The result, given in cm³, is called the Trauzl number of the explosive.
The Trauzl test is not useful for some modern higher-powered explosives as their power often cracks or otherwise ruptures the lead block, leaving no hole to measure.
A variant of the test uses an aluminium block to avoid exposure of participants to lead-related hazards.
Examples
Explosive power of chemical explosives by Trauzl number:
Notes
Explosives engineering
https://en.wikipedia.org/wiki/Junkyard%20tornado
The junkyard tornado, sometimes known as Hoyle's fallacy, is an argument against abiogenesis that relies on a calculation of its probability based on false assumptions. It compares the chance origin of life to the chance that "a tornado sweeping through a junk-yard might assemble a Boeing 747 from the materials therein", and the chance of obtaining even a single functioning protein by chance combination of amino acids to a solar system full of blind men solving Rubik's Cubes simultaneously. It was used originally by English astronomer Fred Hoyle (1915–2001) in his book The Intelligent Universe, where he tried to apply statistics to evolution and the origin of life. Similar reasoning was advanced in Darwin's time, and indeed as long ago as Cicero in classical antiquity. While Hoyle himself was an atheist, the argument has since become a mainstay in the rejection of evolution by religious groups.
Hoyle's fallacy contradicts many well-established and widely tested principles in the field of evolutionary biology. As the fallacy argues, the sudden construction of higher lifeforms in a single chance event is indeed improbable. However, what the junkyard tornado postulation fails to take into account is the vast amount of evidence that evolution proceeds in many smaller stages, each driven by natural selection rather than by random chance, over a long period of time. The Boeing 747 was not designed in a single unlikely burst of creativity, just as modern lifeforms were not constructed in one single unlikely event, as the junkyard tornado scenario suggests.
The theory of evolution has been studied and tested extensively by numerous researchers and scientists and is the most scientifically accurate explanation for the origins of complex life.
Hoyle's statement
According to Fred Hoyle's analysis, the probability of obtaining all of life's approximately 2,000 enzymes in a random trial is about one in 10^40,000:
His junkyard analogy:
This echoes his stance, reported elsewhere:
Hoyle used this to argue in favor of panspermia
https://en.wikipedia.org/wiki/Van%20Hiele%20model
In mathematics education, the Van Hiele model is a theory that describes how students learn geometry. The theory originated in 1957 in the doctoral dissertations of Dina van Hiele-Geldof and Pierre van Hiele (wife and husband) at Utrecht University, in the Netherlands. The Soviets did research on the theory in the 1960s and integrated their findings into their curricula. American researchers did several large studies on the van Hiele theory in the late 1970s and early 1980s, concluding that students' low van Hiele levels made it difficult to succeed in proof-oriented geometry courses and advising better preparation at earlier grade levels. Pierre van Hiele published Structure and Insight in 1986, further describing his theory. The model has greatly influenced geometry curricula throughout the world through emphasis on analyzing properties and classification of shapes at early grade levels. In the United States, the theory has influenced the geometry strand of the Standards published by the National Council of Teachers of Mathematics and the Common Core Standards.
Van Hiele levels
The student learns by rote to operate with [mathematical] relations that he does not understand, and of which he has not seen the origin…. Therefore the system of relations is an independent construction having no rapport with other experiences of the child. This means that the student knows only what has been taught to him and what has been deduced from it. He has not learned to establish connections between the system and the sensory world. He will not know how to apply what he has learned in a new situation. - Pierre van Hiele, 1959
The best known part of the van Hiele model are the five levels which the van Hieles postulated to describe how children learn to reason in geometry. Students cannot be expected to prove geometric theorems until they have built up an extensive understanding of the systems of relationships between geometric ideas. These systems cannot be learned by rote, but
https://en.wikipedia.org/wiki/Arnold%27s%20cat%20map
In mathematics, Arnold's cat map is a chaotic map from the torus into itself, named after Vladimir Arnold, who demonstrated its effects in the 1960s using an image of a cat, hence the name.
Thinking of the torus as the quotient space R²/Z², Arnold's cat map is the transformation Γ given by the formula
Γ(x, y) = (2x + y, x + y) mod 1.
Equivalently, in matrix notation, Γ maps the column vector (x, y)ᵀ to M(x, y)ᵀ mod 1, where M is the matrix with rows (2, 1) and (1, 1).
That is, with a unit equal to the width of the square image, the image is sheared one unit up, then two units to the right, and all that lies outside that unit square is shifted back by the unit until it is within the square.
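As a concrete illustration, here is a minimal sketch (the grid size N = 4 and the test image are arbitrary choices) that applies the discrete version of the map to an N × N array and counts how many iterations return it to its starting state:

```python
import numpy as np

# On an N x N pixel grid the map (x, y) -> (2x + y, x + y) mod N is a
# permutation of the pixels, so iterating it eventually restores the image.
def cat_map(img):
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(2 * x + y) % n, (x + y) % n] = img[x, y]
    return out

img = np.arange(16).reshape(4, 4)  # a tiny "image" with distinct pixels
period, cur = 0, img
while True:
    cur = cat_map(cur)
    period += 1
    if np.array_equal(cur, img):
        break
print(period)  # the discrete map's recurrence time for N = 4
```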
Properties
Γ is invertible because the matrix has determinant 1 and therefore its inverse has integer entries,
Γ is area preserving,
Γ has a unique hyperbolic fixed point (the vertices of the square). The linear transformation which defines the map is hyperbolic: its eigenvalues are irrational numbers, one greater and the other smaller than 1 (in absolute value), so they are associated respectively with an expanding and a contracting eigenspace, which are also the stable and unstable manifolds. The eigenspaces are orthogonal because the matrix is symmetric. Since the eigenvectors have rationally independent components, both of the eigenspaces densely cover the torus. Arnold's cat map is a particularly well-known example of a hyperbolic toral automorphism, which is an automorphism of a torus given by a square unimodular matrix having no eigenvalues of absolute value 1.
The set of the points with a periodic orbit is dense on the torus. Actually a point is periodic if and only if its coordinates are rational.
Γ is topologically transitive (i.e. there is a point whose orbit is dense).
The number of points with period n is exactly λ1^n + λ2^n − 2 (where λ1 and λ2 are the eigenvalues of the matrix). For example, the first few terms of this series are 1, 5, 16, 45, 121, 320, 841, 2205 .... (The same equation holds for any unimodular hyperbolic toral automorphism if the eigenvalues are replaced.)
Γ is ergodic and mixing,
Γ is an Ano
https://en.wikipedia.org/wiki/Perceived%20performance
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects.
The amount of time an application takes to start up, or a file to download, is not shortened by showing a startup screen (see Splash screen) or a file progress dialog box. However, these displays satisfy some human needs: the application appears faster to the user, and they provide a visual cue to let the user know the system is handling the request.
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance at the cost of marginally decreasing real performance. For example, drawing and refreshing a progress bar while loading a file satisfies the user who is watching, but steals time from the process that is actually loading the file; usually, though, this is only a very small amount of time. All such techniques must exploit the inability of the user to accurately judge real performance, or they would be considered detrimental to performance.
Techniques for improving perceived performance may include more than just decreasing the delay between the user's request and visual feedback. Sometimes an increase in delay can be perceived as a performance improvement, such as when a variable controlled by the user is set to a running average of the user's input (see the sketch below). This can give the impression of smoother motion, but the controlled variable always reaches the desired value a bit late. Since it smooths out high-frequency jitter, when the user is attempting to hold the value constant, they may feel like they are succeeding more readily. This kind of compromise would be appropriate for control of a sniper rifle in a video game. Another example may be doing trivial computation ahead of time rather than after a user triggers an action, such as pre-sorting a large list of data before a user w
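A minimal sketch of this smoothing technique, using an exponential moving average (one common form of running average; the function name and parameter values are arbitrary):

```python
# Smooth a user-controlled value with an exponential moving average.
# Lower alpha gives smoother motion but more lag behind the raw input.
def smooth(raw_inputs, alpha=0.2):
    value, out = raw_inputs[0], []
    for x in raw_inputs:
        value = alpha * x + (1 - alpha) * value  # blend input with history
        out.append(value)
    return out

jittery = [10, 12, 9, 11, 10, 30, 10, 11]  # a spike amid small jitter
print([round(v, 1) for v in smooth(jittery)])  # the spike is damped, late
```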
https://en.wikipedia.org/wiki/Gingerbreadman%20map
In dynamical systems theory, the Gingerbreadman map is a chaotic two-dimensional map. It is given by the piecewise linear transformation:
x_{n+1} = 1 − y_n + |x_n|
y_{n+1} = x_n
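A minimal sketch that iterates the map and scatter-plots an orbit (the seed and iteration count are arbitrary choices; seeds in the chaotic region trace out the gingerbread-man shape):

```python
import matplotlib.pyplot as plt

# Iterate x' = 1 - y + |x|, y' = x from one seed and collect the orbit.
def gingerbread(x, y, steps=50000):
    xs, ys = [], []
    for _ in range(steps):
        x, y = 1.0 - y + abs(x), x
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = gingerbread(-0.1, 0.0)  # a seed in the chaotic region
plt.scatter(xs, ys, s=0.1)
plt.show()
```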
See also
List of chaotic maps
References
External links
Chaotic maps
Exactly solvable models
https://en.wikipedia.org/wiki/Storage%20violation
In computing, a storage violation is a hardware or software fault that occurs when a task attempts to access an area of computer storage which it is not permitted to access.
Types of storage violation
Storage violation can, for instance, consist of reading from, writing to, or freeing storage not owned by the task. A common type of storage violation is known as a stack buffer overflow where a program attempts to exceed the limits set for its call stack. It can also refer to attempted modification of memory "owned" by another thread where there is incomplete (or no) memory protection.
Avoidance of storage violations
Storage violations can occur in transaction systems such as CICS in circumstances where it is possible to write to storage not owned by the transaction; such violations can be reduced by enabling features such as storage protection and transaction isolation.
Detection of storage violations
Storage violations can be difficult to detect as a program can often run for a period of time after the violation before it crashes. For example, a pointer to a freed area of memory can be retained and later reused causing an error. As a result, efforts focus on detecting violations as they occur, rather than later when the problem is observed.
In systems such as CICS, storage violations are sometimes detected (by the CICS kernel) by the use of "signatures", which can be tested to see if they have been overlaid.
An alternative runtime library may be used to better detect storage violations, at the cost of additional overhead.
Some programming languages use software bounds checking to prevent these occurrences.
Some program debugging software will also detect violations during testing.
Common causes
A runaway subscript leading to illegal use of reference modification during run time.
Linkage layout mismatch between the called and the calling elements.
Use of previously freed (and sometimes already re-allocated) memory.
Examples of software detecting storage violati
https://en.wikipedia.org/wiki/Stewart%27s%20theorem
In geometry, Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart, who published the theorem in 1746.
Statement
Let a, b, c be the lengths of the sides of a triangle. Let d be the length of a cevian to the side of length a. If the cevian divides the side of length a into two segments of length m and n, with m adjacent to c and n adjacent to b, then Stewart's theorem states that
b²m + c²n = a(d² + mn).
A common mnemonic used by students to memorize this equation (after rearranging the terms as dad + man = bmb + cnc) is: a man and his dad put a bomb in the sink.
The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A, B, C are collinear points, and P is any point, then
PA²·BC + PB²·CA + PC²·AB + BC·CA·AB = 0.
In the special case that the cevian is the median (that is, it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem.
Proof
The theorem can be proved as an application of the law of cosines.
Let θ be the angle between m and d, and θ′ the angle between n and d. Then θ′ is the supplement of θ, and so cos θ′ = −cos θ. Applying the law of cosines in the two small triangles using angles θ and θ′ produces
c² = m² + d² − 2dm cos θ,
b² = n² + d² − 2dn cos θ′ = n² + d² + 2dn cos θ.
Multiplying the first equation by n and the second equation by m and adding them eliminates cos θ. One obtains
b²m + c²n = (m + n)d² + mn(m + n) = a(d² + mn),
which is the required equation.
Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances , , in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression.
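The theorem is also easy to check numerically; the following sketch places the side of length a on the x-axis and verifies the relation for a randomly generated triangle and cevian:

```python
import math
import random

# Numeric sanity check of Stewart's theorem: b*b*m + c*c*n = a*(d*d + m*n).
B, C = (0.0, 0.0), (random.uniform(3, 6), 0.0)  # side a lies on the x-axis
A = (random.uniform(-2, 5), random.uniform(1, 4))
D = (random.uniform(0.1, C[0] - 0.1), 0.0)      # foot of the cevian on BC

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
a, b, c = dist(B, C), dist(C, A), dist(A, B)
d, m, n = dist(A, D), dist(B, D), dist(D, C)    # m adjacent to c, n to b

print(math.isclose(b * b * m + c * c * n, a * (d * d + m * n)))  # True
```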
History
Stewart published the result in 1746 when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. Some sources state that the result was probably known to Arc
https://en.wikipedia.org/wiki/Cheat%20sheet
A cheat sheet (also cheatsheet) or crib sheet is a concise set of notes used for quick reference. Cheat sheets were historically used by students without an instructor or teacher's knowledge to cheat on a test or exam. In the context of higher education or vocational training, where rote memorization is not as important, students may be permitted (or even encouraged) to develop and consult their cheat sheets during exams. The act of preparing such reference notes can be an educational exercise in itself, in which case students may be restricted to using only those reference notes they have developed themselves. Some universities publish guidelines for the creation of cheat sheets.
As reference cards
In more general usage, a crib sheet is any short (one- or two-page) reference to terms, commands, or symbols where the user is expected to understand the use of such terms but not necessarily to have memorized all of them. Many computer applications, for example, have crib sheets included in their documentation, which list keystrokes or menu commands needed to achieve specific tasks to save the user the effort of digging through an entire manual to find the keystroke needed to, for example, move between two windows. An example of such a crib sheet is one for the GIMP photo editing software.
See also
Academic dishonesty
Reference card
References
External links
Cheating in school
Computer programming
Educational materials
School examinations
https://en.wikipedia.org/wiki/High-%20and%20low-level
High-level and low-level, as technical terms, are used to classify, describe and point to specific goals of a systematic operation; and are applied in a wide range of contexts, such as, for instance, in domains as widely varied as computer science and business administration.
High-level describe those operations that are more abstract and general in nature; wherein the overall goals and systemic features are typically more concerned with the wider, macro system as a whole.
Low-level describes more specific individual components of a systematic operation, focusing on the details of rudimentary micro functions rather than macro, complex processes. Low-level classification is typically more concerned with individual components within the system and how they operate.
Features which emerge only at a high level of description are known as epiphenomena.
Differences
Due to the nature of complex systems, the high-level description will often be completely different from the low-level one; the description each delivers depends on the level at which the study is directed. For example,
there are features of an ant colony that are not features of any individual ant;
there are features of the human mind that are not known to be descriptive of individual neurons in the brain;
there are features of oceans which are not features of any individual water molecule; and
there are features of a human personality that are not features of any cell in a body.
Uses
In computer science, software is typically divided into two types: high-level end-user applications software (such as word processors, databases, video games, etc.), and low-level systems software (such as operating systems, hardware drivers, etc.). As such, high-level applications typically rely on low-level applications to function. In terms of programming, a high-level programming language is one which has a relatively high level of abstraction, and manipulates c
https://en.wikipedia.org/wiki/General-purpose%20modeling
General-purpose modeling (GPM) is the systematic use of a general-purpose modeling language to represent the various facets of an object or a system. Examples of GPM languages are:
The Unified Modeling Language (UML), an industry standard for modeling software-intensive systems
EXPRESS, a data modeling language for product data, standardized as ISO 10303-11
IDEF, a group of languages from the 1970s that aimed to be neutral, generic and reusable
Gellish, an industry standard natural language oriented modeling language for storage and exchange of data and knowledge, published in 2005
XML, a data modeling language now beginning to be used to model code (MetaL, Microsoft .Net)
GPM languages are in contrast with domain-specific modeling languages (DSMs).
See also
Model-driven engineering (MDE)
Specification languages
Modeling languages
https://en.wikipedia.org/wiki/Gangjin%20Kiln%20Sites
Gangjingun Kiln Sites is a tentative World Heritage site listed by the South Korean government at UNESCO. It is a complex of 188 kilns which produced Goryeo ware. The kiln sites are located in Gangjin-gun, Jeollanam-do, South Korea near the sea. Mountains in the north provided the necessary raw materials such as firewood, kaolinite, and silicon dioxide for the master potters while a well established system of distribution transported pottery throughout Korea and facilitated export to China and Japan.
History
Pottery during the Goryeo dynasty reached very high levels of refinement. The kilns at Buan-gun in Jeollabuk-do produced earthenware, while the Gangjingun kilns produced celadon wares. The kiln sites are important today because they are the remnants of this pottery culture.
The 188 kilns of the Gangjingun Kiln Sites are located in the regions of Yongunni, Gyeyulli, Sadangni, and Sudongni. 98 of these are designated as historic sites by the Korean government.
The 75 kilns in Yongunni are generally in good condition and are among the earliest, dated to the 10th and 11th centuries. These kilns provide clues for scholars interested in discovering the origins and kiln characteristics of the first Korean celadons manufactured. Fragments of ancient Chinese kiln products have also been uncovered in this region.
59 kilns remain in Gyeyulli and the kilns in this region date from the 11th century to the 13th century. Excavations have uncovered pottery similar in style to Yongunni pottery but most pottery shards are of the conventional inlaid celadon type.
The 43 kilns of Sadangni are dated from the 12th to 14th centuries. The kilns at Tangion village date from the early 12th century to the 13th century and are representative of the Goryeo ceramic kilns which were used in the production of Goryeo celadons, famous for their superior kingfisher color and inlay technique. The pottery produced here would be during the peak of the creati
https://en.wikipedia.org/wiki/Geranyl%20acetate
Geranyl acetate is a monoterpene. It is a colorless liquid with a pleasant floral or fruity rose aroma, though commercial samples can appear yellowish. Geranyl acetate is insoluble in water but soluble in organic solvents. Several hundred tons are produced annually.
Occurrence and production
Geranyl acetate is a constituent of many essential oils, including Ceylon citronella, palmarosa, lemon grass, petit grain, neroli, geranium, coriander, carrot, Camden woollybutt, and sassafras. It can be obtained by fractional distillation of the essential oils obtained from these sources, but more commonly it is prepared by the esterification of geraniol with acetic acid.
Uses
Geranyl acetate is used primarily as a component of perfumes for creams and soaps and as a flavoring ingredient. It is used particularly in rose, lavender and geranium formulations where a sweet fruity or citrus aroma is desired. It is designated by the U.S. Food and Drug Administration as generally recognized as safe (GRAS).
See also
Neryl acetate
References
External links
Carcinogenesis Studies of Food Grade Geranyl Acetate
Perfume ingredients
Flavors
Monoterpenes
Acetate esters
https://en.wikipedia.org/wiki/Visual%20modeling
Visual modeling is the graphic representation of objects and systems of interest using graphical languages. Visual modeling is a way for experts and novices to have a common understanding of otherwise complicated ideas. By using visual models, complex ideas are not held to human limitations, allowing for greater complexity without a loss of comprehension. Visual modeling can also be used to bring a group to a consensus. Models help effectively communicate ideas among designers, allowing for quicker discussion and an eventual consensus. Visual modeling languages may be General-Purpose Modeling (GPM) languages (e.g., UML, Southbeach Notation, IDEF) or Domain-Specific Modeling (DSM) languages (e.g., SysML). Visual modeling in computer science had no standard before the 1990s; models were not comparable across tools until the introduction of UML. Visual modeling languages include industry open standards (e.g., UML, SysML, Modelica), as well as proprietary standards, such as the visual languages associated with VisSim, MATLAB and Simulink, OPNET, NetSim, NI Multisim, and Reactive Blocks. Both VisSim and Reactive Blocks provide a royalty-free, downloadable viewer that lets anyone open and interactively simulate their models. The community edition of Reactive Blocks also allows full editing of the models as well as compilation, as long as the work is published under the Eclipse Public License. Visual modeling languages are an area of active research that continues to evolve, as evidenced by increasing interest in DSM languages, visual requirements, and visual OWL (Web Ontology Language).
See also
Service-oriented modeling
Domain-specific modeling
Model-driven engineering
Modeling language
References
External links
Visual Modeling Forum A web community dedicated to visual modeling languages and tools.
Programming language topics
Unified Modeling Language
Simulation programming languages
|
https://en.wikipedia.org/wiki/Mosaic%20virus
|
A mosaic virus is any virus that causes infected plant foliage to have a mottled appearance. Such viruses come from a variety of unrelated lineages and consequently there is no taxon that unites all mosaic viruses.
Examples
Virus species that contain the word 'mosaic' in their English-language common name are listed below, using the nomenclature and taxonomy of the ICTV 2022 release. However, not all viruses that may cause a mottled appearance belong to species that include the word "mosaic" in the name.
References
External links
Viral plant pathogens and diseases
Viruses
|
https://en.wikipedia.org/wiki/OMA%20Device%20Management
|
OMA Device Management is a device management protocol specified by the Open Mobile Alliance (OMA) Device Management (DM) Working Group and the Data Synchronization (DS) Working Group. The current approved specification of OMA DM is version 1.2.1, whose latest modifications were released in June 2008. The candidate release 2.0 was scheduled to be finalized in September 2013.
Overview
OMA DM specification is designed for management of mobile devices such as mobile phones, PDAs, and tablet computers. Device management is intended to support the following uses:
Provisioning – Configuration of the device (including first time use), enabling and disabling features
Device Configuration – Allow changes to settings and parameters of the device
Software Upgrades – Provide for new software and/or bug fixes to be loaded on the device, including applications and system software
Fault Management – Report errors from the device, query about status of device
All of the above functions are supported by the OMA DM specification, and a device may optionally implement all or a subset of these features. Since the OMA DM specification is aimed at mobile devices, it is designed with sensitivity to the following:
small footprint devices, where memory and storage space may be limited
constraint on bandwidth of communication, such as in wireless connectivity
tight security, as the devices are vulnerable to software attacks; authentication and challenges are made part of the specifications
Technical description
OMA DM was originally developed by The SyncML Initiative Ltd, an industry consortium formed by many mobile device manufacturers. The SyncML Initiative was consolidated under the OMA umbrella as the scope and use of the specification expanded to include many more devices and support global operation.
Technically, the OMA DM protocol uses XML for data exchange, more specifically the subset defined by SyncML. The device management takes place by communication betw
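As an illustration of the XML framing, the sketch below assembles a SyncML-style Get request for a node of the device management tree. Element names follow SyncML conventions and nodes are addressed by URIs, but this is an illustrative message built with Python's standard library, not a verbatim on-the-wire OMA DM exchange.

```python
import xml.etree.ElementTree as ET

# Build a SyncML-style DM request asking the device for its device-ID node.
# Illustrative only: real OMA DM messages carry more header fields.
root = ET.Element("SyncML")

hdr = ET.SubElement(root, "SyncHdr")
ET.SubElement(hdr, "VerDTD").text = "1.2"
ET.SubElement(hdr, "VerProto").text = "DM/1.2"

body = ET.SubElement(root, "SyncBody")
get = ET.SubElement(body, "Get")
ET.SubElement(get, "CmdID").text = "1"
item = ET.SubElement(get, "Item")
target = ET.SubElement(item, "Target")
# nodes of the management tree are addressed by URIs such as this one
ET.SubElement(target, "LocURI").text = "./DevInfo/DevId"

print(ET.tostring(root, encoding="unicode"))
```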
|
https://en.wikipedia.org/wiki/Hydrostatic%20stress
|
In continuum mechanics, hydrostatic stress, also known as volumetric stress, is a component of stress which contains uniaxial stresses but no shear stresses. A specialized case of hydrostatic stress is isotropic compressive stress, which changes only the volume, not the shape. Pure hydrostatic stress can be experienced by a point in a fluid such as water. It is often used interchangeably with "pressure" and is also known as confining stress, particularly in the field of geomechanics.
Hydrostatic stress is equivalent to the average of the uniaxial stresses along three orthogonal axes and can be calculated from the first invariant of the stress tensor:
$\sigma_h = \tfrac{1}{3} I_1 = \tfrac{1}{3}\left(\sigma_{11} + \sigma_{22} + \sigma_{33}\right)$
Its magnitude in a fluid, $\sigma_h$, can be given by:
$\sigma_h = \sum_{i=1}^{n} \rho_i g h_i$
where
$i$ is an index denoting each distinct layer of material above the point of interest;
$\rho_i$ is the density of each layer;
$g$ is the gravitational acceleration (assumed constant here; this can be substituted with any acceleration that is important in defining weight);
$h_i$ is the height (or thickness) of each given layer of material.
For example, the magnitude of the hydrostatic stress felt at a point under ten meters of fresh water would be
$\sigma_h = \rho_w g h_w = 1000\,\mathrm{kg/m^3} \cdot 9.8\,\mathrm{m/s^2} \cdot 10\,\mathrm{m} = 98\,\mathrm{kPa}$
where the index $w$ indicates "water".
Because the hydrostatic stress is isotropic, it acts equally in all directions. In tensor form, the hydrostatic stress is equal to
$\sigma_h I_3$
where $I_3$ is the 3-by-3 identity matrix.
Hydrostatic compressive stress is used for the determination of the bulk modulus for materials.
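The layered formula translates directly into code; a minimal sketch (the function name and the ten-metre example are illustrative):

```python
G = 9.8  # gravitational acceleration, m/s^2

def hydrostatic_stress(layers):
    """Sum rho_i * g * h_i over (density kg/m^3, thickness m) layers, top down."""
    return sum(rho * G * h for rho, h in layers)

# the ten-metre fresh-water example from above
print(hydrostatic_stress([(1000.0, 10.0)]))  # 98000.0 Pa = 98 kPa
```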
References
Continuum mechanics
Orientation (geometry)
|
https://en.wikipedia.org/wiki/Wireshark
|
Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, the project was renamed Wireshark in May 2006 due to trademark issues.
Wireshark is cross-platform, using the Qt widget toolkit in current releases to implement its user interface, and using pcap to capture packets; it runs on Linux, macOS, BSD, Solaris, some other Unix-like operating systems, and Microsoft Windows. There is also a terminal-based (non-GUI) version called TShark. Wireshark, and the other programs distributed with it such as TShark, are free software, released under the terms of the GNU General Public License version 2 or any later version.
Functionality
Wireshark is very similar to tcpdump, but has a graphical front-end and integrated sorting and filtering options.
Wireshark lets the user put network interface controllers into promiscuous mode (if supported by the network interface controller), so they can see all the traffic visible on that interface including unicast traffic not sent to that network interface controller's MAC address. However, when capturing with a packet analyzer in promiscuous mode on a port on a network switch, not all traffic through the switch is necessarily sent to the port where the capture is done, so capturing in promiscuous mode is not necessarily sufficient to see all network traffic. Port mirroring or various network taps extend capture to any point on the network. Simple passive taps are extremely resistant to tampering.
On Linux, BSD, and macOS, with libpcap 1.0.0 or later, Wireshark 1.4 and later can also put wireless network interface controllers into monitor mode.
If a remote machine captures packets and sends the captured packets to a machine running Wireshark using the TZSP protocol or the protocol used by OmniPeek, Wireshark dissects those packets, so it can analyze packets captured on a remote machine at the time
|
https://en.wikipedia.org/wiki/Triangle%20mesh
|
In computer graphics, a triangle mesh is a type of polygon mesh. It comprises a set of triangles (typically in three dimensions) that are connected by their common edges or vertices.
Many graphics software packages and hardware devices can operate more efficiently on triangles that are grouped into meshes than on a similar number of triangles that are presented individually. This is typically because computer graphics perform operations on the vertices at the corners of triangles. With individual triangles, the system has to operate on three vertices for every triangle. In a large mesh, there could be eight or more triangles meeting at a single vertex; by processing those vertices just once, it is possible to do a fraction of the work and achieve an identical effect.
In many computer graphics applications it is necessary to manage a mesh of triangles. The mesh components are vertices, edges, and triangles. An application might require knowledge of the various connections between the mesh components. These connections can be managed independently of the actual vertex positions. This document describes a simple data structure that is convenient for managing the connections. This is not the only possible data structure. Many other types exist and have support for various queries about meshes.
Representation
Various methods of storing and working with a mesh in computer memory are possible. With the OpenGL and DirectX APIs there are two primary ways of passing a triangle mesh to the graphics hardware, triangle strips and index arrays.
Triangle strip
One way of sharing vertex data between triangles is the triangle strip. With strips of triangles each triangle shares one complete edge with one neighbour and another with the next. Another way is the triangle fan, a set of connected triangles sharing one central vertex. With these methods vertices are dealt with efficiently, so only N + 2 vertices need to be processed in order to draw N triangles.
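The two encodings can be compared on a quad split into two triangles; a minimal numpy sketch (the winding-order swap on odd strip triangles is one common convention, not the only one):

```python
import numpy as np

# a unit quad split into two triangles
vertices = np.array([[0, 0, 0],    # 0
                     [1, 0, 0],    # 1
                     [0, 1, 0],    # 2
                     [1, 1, 0]],   # 3
                    dtype=np.float32)

# index array: three indices per triangle, shared vertices stored only once
indices = np.array([[0, 1, 2],
                    [2, 1, 3]], dtype=np.uint32)

# triangle strip: N + 2 = 4 indices encode the same N = 2 triangles
strip = [0, 1, 2, 3]
strip_triangles = [
    (strip[i], strip[i + 1], strip[i + 2]) if i % 2 == 0
    else (strip[i + 1], strip[i], strip[i + 2])   # swap to keep a consistent winding
    for i in range(len(strip) - 2)
]
print(strip_triangles)   # [(0, 1, 2), (2, 1, 3)] -- matches the index array
```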
Tria
|
https://en.wikipedia.org/wiki/Stochastic%20volatility
|
In statistics, stochastic volatility models are those in which the variance of a stochastic process is itself randomly distributed. They are used in the field of mathematical finance to evaluate derivative securities, such as options. The name derives from the models' treatment of the underlying security's volatility as a random process, governed by state variables such as the price level of the underlying security, the tendency of volatility to revert to some long-run mean value, and the variance of the volatility process itself, among others.
Stochastic volatility models are one approach to resolve a shortcoming of the Black–Scholes model. In particular, models based on Black-Scholes assume that the underlying volatility is constant over the life of the derivative, and unaffected by the changes in the price level of the underlying security. However, these models cannot explain long-observed features of the implied volatility surface such as volatility smile and skew, which indicate that implied volatility does tend to vary with respect to strike price and expiry. By assuming that the volatility of the underlying price is a stochastic process rather than a constant, it becomes possible to model derivatives more accurately.
A middle ground between the bare Black–Scholes model and stochastic volatility models is covered by local volatility models. In these models the underlying volatility does not feature any new randomness, but neither is it a constant: it is a non-trivial (deterministic) function of the underlying asset. Under this definition, models like constant elasticity of variance would be local volatility models, although they are sometimes classified as stochastic volatility models. The classification can be a little ambiguous in some cases.
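As a concrete illustration of volatility itself being random, here is a minimal Euler–Maruyama simulation of a Heston-type model, one widely used stochastic volatility model (the article does not single it out, and all parameter values are illustrative):

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    xi=0.3, rho=-0.7, T=1.0, n_steps=252, seed=0):
    """Euler-Maruyama path of dS = mu*S dt + sqrt(v)*S dW1,
    dv = kappa*(theta - v) dt + xi*sqrt(v) dW2, with corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1); v = np.empty(n_steps + 1)
    s[0], v[0] = s0, v0
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
        vp = max(v[t], 0.0)                         # "full truncation" keeps v >= 0
        v[t + 1] = v[t] + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        s[t + 1] = s[t] * np.exp((mu - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    return s, v

prices, variances = simulate_heston()
print(prices[-1], variances[-1])
```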
The early history of stochastic volatility has multiple roots (i.e. stochastic process, option pricing and econometrics), it is reviewed in Chapter 1 o
|
https://en.wikipedia.org/wiki/NetPC
|
NetPC is a standard for diskless PCs, developed by Microsoft and Intel as a competing standard to the Network Computer standard, because many NCs did not use Intel CPUs or Microsoft software. Network Computer was launched by Oracle Corporation in the mid-1990s.
References
Diskless workstations
|
https://en.wikipedia.org/wiki/Partial%20geometry
|
An incidence structure $C = (P, L, I)$ consists of points $P$, lines $L$, and flags $I \subseteq P \times L$, where a point $p$ is said to be incident with a line $l$ if $(p, l) \in I$. It is a (finite) partial geometry if there are integers $s, t, \alpha \geq 1$ such that:
For any pair of distinct points $p$ and $q$, there is at most one line incident with both of them.
Each line is incident with $s + 1$ points.
Each point is incident with $t + 1$ lines.
If a point $p$ and a line $l$ are not incident, there are exactly $\alpha$ pairs $(q, m) \in I$, such that $p$ is incident with $m$ and $q$ is incident with $l$.
A partial geometry with these parameters is denoted by $\mathrm{pg}(s, t, \alpha)$.
Properties
The number of points is given by $(s + 1)\frac{st + \alpha}{\alpha}$ and the number of lines by $(t + 1)\frac{st + \alpha}{\alpha}$.
The point graph (also known as the collinearity graph) of a $\mathrm{pg}(s, t, \alpha)$ is a strongly regular graph: $\mathrm{srg}\left((s + 1)\frac{st + \alpha}{\alpha},\ s(t + 1),\ s - 1 + t(\alpha - 1),\ \alpha(t + 1)\right)$.
Partial geometries are dual structures: the dual of a $\mathrm{pg}(s, t, \alpha)$ is simply a $\mathrm{pg}(t, s, \alpha)$.
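These counting formulas are easy to evaluate mechanically; a minimal sketch (the helper name is illustrative, and the divisibility check simply enforces integrality of the counts):

```python
def pg_parameters(s, t, alpha):
    """Point/line counts and point-graph srg parameters of a pg(s, t, alpha)."""
    assert (s * t) % alpha == 0, "alpha must divide s*t for integral counts"
    n = (s * t + alpha) // alpha
    v = (s + 1) * n                      # number of points
    b = (t + 1) * n                      # number of lines
    srg = (v, s * (t + 1), s - 1 + t * (alpha - 1), alpha * (t + 1))
    return v, b, srg

# the generalized quadrangle GQ(2, 2) is a pg(2, 2, 1)
print(pg_parameters(2, 2, 1))   # (15, 15, (15, 6, 1, 3))
```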
Special case
The generalized quadrangles are exactly those partial geometries $\mathrm{pg}(s, t, \alpha)$ with $\alpha = 1$.
The Steiner systems $S(2, s + 1, v)$ are precisely those partial geometries with $\alpha = s + 1$.
Generalisations
A partial linear space $S = (P, L, I)$ of order $s, t$ is called a semipartial geometry if there are integers $\alpha \geq 1$ and $\mu$ such that:
If a point $p$ and a line $l$ are not incident, there are either $0$ or exactly $\alpha$ pairs $(q, m) \in I$, such that $p$ is incident with $m$ and $q$ is incident with $l$.
Every pair of non-collinear points has exactly $\mu$ common neighbours.
A semipartial geometry is a partial geometry if and only if $\mu = \alpha(t + 1)$.
It can be easily shown that the collinearity graph of such a geometry is strongly regular with parameters
$\left(1 + s(t + 1) + \frac{s(t + 1)t(s - \alpha + 1)}{\mu},\ s(t + 1),\ s - 1 + t(\alpha - 1),\ \mu\right)$.
A nice example of such a geometry is obtained by taking the affine points of $\mathrm{PG}(3, q^2)$ and only those lines that intersect the plane at infinity in a point of a fixed Baer subplane.
See also
Strongly regular graph
Maximal arc
References
Incidence geometry
|
https://en.wikipedia.org/wiki/Epigraph%20%28mathematics%29
|
In mathematics, the epigraph or supergraph of a function $f : X \to [-\infty, \infty]$ valued in the extended real numbers $[-\infty, \infty] = \mathbb{R} \cup \{\pm\infty\}$ is the set, denoted by $\operatorname{epi} f$, of all points in the Cartesian product $X \times \mathbb{R}$ lying on or above its graph. The strict epigraph $\operatorname{epi}_S f$ is the set of points in $X \times \mathbb{R}$ lying strictly above its graph.
Importantly, although both the graph and epigraph of $f$ consist of points in $X \times [-\infty, \infty]$, the epigraph consists of points in the subset $X \times \mathbb{R}$, which is not necessarily true of the graph of $f$.
If the function takes $\pm\infty$ as a value then the graph of $f$ will not be a subset of its epigraph $\operatorname{epi} f$.
For example, if $f(x_0) = \pm\infty$ then the point $(x_0, f(x_0)) = (x_0, \pm\infty)$ will belong to $\operatorname{graph} f$ but not to $\operatorname{epi} f$.
These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa.
The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions. Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in instead of continuous functions valued in a vector space (such as or ). This is because in general, for such functions, geometric intuition is more readily obtained from a function's epigraph than from its graph. Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples.
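The reconstruction direction from epigraph back to function can be made concrete numerically, since $f(x) = \inf\{t : (x, t) \in \operatorname{epi} f\}$. A small sketch on a grid (the discretisation, test function and tolerance are all illustrative):

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 9)         # sample points of the domain
ts = np.linspace(-1.0, 5.0, 61)        # vertical grid
f = lambda x: x**2

# a discretised epigraph: grid indices (i, j) with ts[j] >= f(xs[i])
epi = {(i, j) for i in range(len(xs)) for j in range(len(ts)) if ts[j] >= f(xs[i])}

# recover f as the pointwise infimum over the epigraph
recovered = [min(ts[j] for (i, j) in epi if i == k) for k in range(len(xs))]
print(np.allclose(recovered, f(xs), atol=0.1))   # True, up to the grid spacing
```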
Definition
The definition of the epigraph was inspired by that of the graph of a function, where the graph of $f : X \to Y$ is defined to be the set
$\operatorname{graph} f = \{(x, y) \in X \times Y : y = f(x)\}.$
The epigraph or supergraph of a function $f : X \to [-\infty, \infty]$ valued in the extended real numbers is the set
$\operatorname{epi} f = \{(x, r) \in X \times \mathbb{R} : r \geq f(x)\} = \left[f^{-1}(-\infty) \times \mathbb{R}\right] \cup \bigcup_{x \in f^{-1}(\mathbb{R})} \left(\{x\} \times [f(x), \infty)\right).$
In the union over $x \in f^{-1}(\mathbb{R})$ that appears above on the right hand side of the last line, the set $\{x\} \times [f(x), \infty)$ may be interpreted as being a "vertical ray" consisting of $(x, f(x))$ and all points in $X \times \mathbb{R}$ "directly above" it.
Similarly, the set of points on or below the graph of a functio
|
https://en.wikipedia.org/wiki/Enterprise%20data%20modelling
|
Enterprise data modelling or enterprise data modeling (EDM) is the practice of creating a graphical model of the data used by an enterprise or company. Typical outputs of this activity include an enterprise data model consisting of entity–relationship diagrams (ERDs), XML schemas (XSD), and an enterprise wide data dictionary.
Overview
Producing such a model allows for a business to get a 'helicopter' view of their enterprise. In EAI (enterprise application integration) an EDM allows data to be represented in a single idiom, enabling the use of a common syntax for the XML of services or operations and the physical data model for database schema creation. Data modeling tools for ERDs that also allow the user to create a data dictionary are usually used to aid in the development of an EDM.
The implementation of an EDM is closely related to the issues of data governance and data stewardship within an organization.
References
Data modeling
|
https://en.wikipedia.org/wiki/Arc%20%28projective%20geometry%29
|
A (simple) arc in finite projective geometry is a set of points which satisfies, in an intuitive way, a feature of curved figures in continuous geometries. Loosely speaking, they are sets of points that are far from "line-like" in a plane or far from "plane-like" in a three-dimensional space. In this finite setting it is typical to include the number of points in the set in the name, so these simple arcs are called k-arcs. An important generalization of the k-arc concept, also referred to as arcs in the literature, are the (k, d)-arcs.
k-arcs in a projective plane
In a finite projective plane (not necessarily Desarguesian) a set A of k points such that no three points of A are collinear (on a line) is called a k-arc. If the plane has order q then k ≤ q + 2; however, the maximum value of k can only be achieved if q is even. In a plane of order q, a (q + 1)-arc is called an oval and, if q is even, a (q + 2)-arc is called a hyperoval.
Every conic in the Desarguesian projective plane PG(2, q), i.e., the set of zeros of an irreducible homogeneous quadratic equation, is an oval. A celebrated result of Beniamino Segre states that when q is odd, every (q + 1)-arc in PG(2, q) is a conic (Segre's theorem). This is one of the pioneering results in finite geometry.
If q is even and A is a (q + 1)-arc in the plane, then it can be shown via combinatorial arguments that there must exist a unique point in the plane (called the nucleus of A) such that the union of A and this point is a (q + 2)-arc. Thus, every oval can be uniquely extended to a hyperoval in a finite projective plane of even order.
A k-arc which cannot be extended to a larger arc is called a complete arc. In the Desarguesian projective planes, PG(2, q), no q-arc is complete, so they may all be extended to ovals.
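Segre's setting can be experimented with directly for small odd q. The sketch below builds a conic in PG(2, 5) and confirms it is a 6-arc, i.e. that no three of its points are collinear (the coordinate conventions are one common choice, not the only one):

```python
from itertools import combinations

p = 5  # odd prime, so PG(2, 5) is a Desarguesian plane of odd order

# points of the conic X^2 = Y*Z in homogeneous coordinates (X, Y, Z):
# (t, t^2, 1) for t in GF(5), plus the point (0, 1, 0); q + 1 = 6 points
conic = [(t, (t * t) % p, 1) for t in range(p)] + [(0, 1, 0)]

def collinear(a, b, c):
    # three points are collinear iff the 3x3 coordinate determinant is 0 mod p
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det % p == 0

print(len(conic))                                                    # 6
print(any(collinear(*triple) for triple in combinations(conic, 3)))  # False
```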
k-arcs in a projective space
In the finite projective space PG(n, q) with n ≥ 3, a set A of k points such that no n + 1 points of A lie in a common hyperplane is called a (spatial) k-arc. This definition generalizes the definition of a k-arc in a plane (where n = 2).
(k, d)-arcs in a projective plane
|
https://en.wikipedia.org/wiki/CN-group
|
In mathematics, in the area of algebra known as group theory, a more than fifty-year effort was made to answer a conjecture of William Burnside: are all groups of odd order solvable? Progress was made by showing that CA-groups, groups in which the centralizer of a non-identity element is abelian, of odd order are solvable (Suzuki 1957). Further progress was made showing that CN-groups, groups in which the centralizer of a non-identity element is nilpotent, of odd order are solvable (Feit, Hall & Thompson 1960). The complete solution was given by the Feit–Thompson theorem (1963), but further work on CN-groups was done afterwards, giving more detailed information about the structure of these groups. For instance, a non-solvable CN-group G is such that its largest solvable normal subgroup O∞(G) is a 2-group, and the quotient is a group of even order.
Examples
Solvable CN groups include
Nilpotent groups
Frobenius groups whose Frobenius complement is nilpotent
3-step groups, such as the symmetric group S4
Non-solvable CN groups include:
The Suzuki simple groups
The groups PSL2(F2n) for n>1
The group PSL2(Fp) for p>3 a Fermat prime or Mersenne prime.
The group PSL2(F9)
The group PSL3(F4)
References
Finite groups
Group theory
Properties of groups
|
https://en.wikipedia.org/wiki/Evolutionary%20pressure
|
Any cause that reduces or increases reproductive success in a portion of a population potentially exerts evolutionary pressure, selective pressure or selection pressure, driving natural selection. It is a quantitative description of the amount of change occurring in processes investigated by evolutionary biology, but the formal concept is often extended to other areas of research.
In population genetics, selective pressure is usually expressed as a selection coefficient.
Amino acids selective pressure
It has been shown that placing an amino acid biosynthesis gene such as HIS4 under amino acid selective pressure in yeast enhances the expression of adjacent genes, owing to the transcriptional co-regulation of two adjacent genes in Eukaryota.
Antibiotic resistance
Drug resistance in bacteria is an example of an outcome of natural selection.
When a drug is used on a species of bacteria, those that cannot resist die and do not produce offspring, while those that survive potentially pass on the resistance gene to the next generation (vertical gene transmission). The resistance gene can also be passed on to one bacterium by another of a different species (horizontal gene transmission). Because of this, drug resistance increases over generations. For example, in hospitals, environments are created where pathogens such as C. difficile have developed a resistance to antibiotics (Dawson, Valiente & Wren 2009). Antibiotic resistance is made worse by the misuse of antibiotics. Antibiotic resistance is encouraged when antibiotics are used to treat non-bacterial diseases, and when antibiotics are not used for the prescribed amount of time or in the prescribed do
|
https://en.wikipedia.org/wiki/Subsurface%20utilities
|
Subsurface Utilities are the utility networks generally laid under the ground surface. These utilities include pipeline networks for water supply, sewage disposal, petrochemical liquid transmission, petrochemical gas transmission or cable networks for power transmission, telecom data transmission, any other data or signal transmission. In North America alone, there are an estimated 35 million miles of subsurface infrastructure that deliver critical services to homes and businesses.
The field of engineering dealing with locating and mapping subsurface utilities is termed subsurface utility engineering (SUE).
References
Building engineering
|
https://en.wikipedia.org/wiki/Blotto%20game
|
A Colonel Blotto game is a type of two-person constant-sum game in which the players (officers) are tasked to simultaneously distribute limited resources over several objects (battlefields).
In the classic version of the game, the player devoting the most resources to a battlefield wins that battlefield, and the gain (or payoff) is equal to the total number of battlefields won.
The game was first proposed by Émile Borel in 1921. In 1938 Borel and Ville published a particular optimal strategy (the "disk" solution). The game was studied after the Second World War by scholars in operations research, and became a classic in game theory. Gross and Wagner's 1950 research memorandum states Borel's optimal strategy, and coined the fictitious Colonel Blotto and Enemy names. For three battlefields or more, the space of pure strategies is multi-dimensional (two dimensions for three battlefields) and a mixed strategy is thus a probability distribution over a continuous set. The game is a rare example of a non-trivial game of that kind where optimal strategies can be explicitly found.
In addition to military strategy applications, the Colonel Blotto game has applications to political strategy (resource allocation across political battlefields), network defense, R&D patent races, and strategic hiring decisions. Consider two sports teams with budget caps they must spend (or two economics departments with use-or-lose grants) pursuing the same set of candidates: each must decide between many modest offers and aggressive pursuit of a subset of candidates.
Example
As an example Blotto game, consider the game in which two players each write down three positive integers in non-decreasing order and such that they add up to a pre-specified number S. Subsequently, the two players show each other their writings, and compare corresponding numbers. The player who has two numbers higher than the corresponding ones of the opponent wins the game.
For S = 6 only three choices of numbers are possible: (1, 1, 4), (1, 2, 3) and (2, 2, 2).
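The enumeration and the pairwise comparison can be scripted directly; a minimal sketch of this example game (function names are illustrative):

```python
from itertools import combinations_with_replacement

def strategies(s_total):
    """Non-decreasing triples of positive integers summing to s_total."""
    return [t for t in combinations_with_replacement(range(1, s_total + 1), 3)
            if sum(t) == s_total]

def result(a, b):
    """+1 if a wins (two higher entries), -1 if b wins, 0 for a draw."""
    a_hits = sum(x > y for x, y in zip(a, b))
    b_hits = sum(y > x for x, y in zip(a, b))
    return 1 if a_hits >= 2 else -1 if b_hits >= 2 else 0

strats = strategies(6)
print(strats)                # [(1, 1, 4), (1, 2, 3), (2, 2, 2)]
for a in strats:
    print(a, [result(a, b) for b in strats])
```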
|
https://en.wikipedia.org/wiki/Conway%20puzzle
|
Conway's puzzle, or blocks-in-a-box, is a packing problem using rectangular blocks, named after its inventor, mathematician John Conway. It calls for packing thirteen 1 × 2 × 4 blocks, one 2 × 2 × 2 block, one 1 × 2 × 2 block, and three 1 × 1 × 3 blocks into a 5 × 5 × 5 box.
Solution
The solution of the Conway puzzle is straightforward once one realizes, based on parity considerations, that the three 1 × 1 × 3 blocks need to be placed so that precisely one of them appears in each 5 × 5 × 1 slice of the cube. This is analogous to similar insight that facilitates the solution of the simpler Slothouber–Graatsma puzzle.
See also
Soma cube
References
External links
The Conway puzzle in Stewart Coffin's "The Puzzling World of Polyhedral Dissections"
Packing problems
Tiling puzzles
Mechanical puzzle cubes
John Horton Conway
|
https://en.wikipedia.org/wiki/Slothouber%E2%80%93Graatsma%20puzzle
|
The Slothouber–Graatsma puzzle is a packing problem that calls for packing six 1 × 2 × 2 blocks and three 1 × 1 × 1 blocks into a 3 × 3 × 3 box. The solution to this puzzle is unique (up to mirror reflections and rotations). It was named after its inventors Jan Slothouber and William Graatsma.
The puzzle is essentially the same if the three 1 × 1 × 1 blocks are left out, so that the task is to pack six 1 × 2 × 2 blocks into a cubic box with volume 27.
Solution
The solution of the Slothouber–Graatsma puzzle is straightforward when one realizes that the three 1 × 1 × 1 blocks (or the three holes) need to be placed along a body diagonal of the box, as each of the 3 x 3 layers in the various directions needs to contain such a unit block. This follows from parity considerations, because the larger blocks can only fill an even number of the 9 cells in each 3 x 3 layer.
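The packing itself can be confirmed by a short exhaustive search; a backtracking sketch (the placement encoding and function names are illustrative):

```python
from itertools import product

def placements():
    """All axis-aligned positions of a 1 x 2 x 2 block inside the 3 x 3 x 3 box,
    each encoded as the frozenset of the four unit cells it covers."""
    out = []
    for dims in ((1, 2, 2), (2, 1, 2), (2, 2, 1)):
        ranges = [range(3 - d + 1) for d in dims]
        for ox, oy, oz in product(*ranges):
            out.append(frozenset(
                (ox + dx, oy + dy, oz + dz)
                for dx in range(dims[0])
                for dy in range(dims[1])
                for dz in range(dims[2])))
    return out

PLACEMENTS = placements()

def solve(free, holes, chosen):
    if len(chosen) == 6:          # six blocks placed; the 3 free cells are the holes
        return chosen
    cell = min(free)              # branch on the first free cell
    if holes < 3:                 # option 1: this cell is one of the 1x1x1 holes
        sol = solve(free - {cell}, holes + 1, chosen)
        if sol:
            return sol
    for pl in PLACEMENTS:         # option 2: cover it with a 1x2x2 block
        if cell in pl and pl <= free:
            sol = solve(free - pl, holes, chosen + [pl])
            if sol:
                return sol
    return None

solution = solve(set(product(range(3), repeat=3)), 0, [])
for block in solution:
    print(sorted(block))          # the three holes end up on a body diagonal
```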
Variations
The Slothouber–Graatsma puzzle is an example of a cube-packing puzzle using convex polycubes. More general puzzles involving the packing of convex rectangular blocks exist. The best known example is the Conway puzzle which asks for the packing of eighteen convex rectangular blocks into a 5 x 5 x 5 box. A harder convex rectangular block packing problem is to pack forty-one 1 x 2 x 4 blocks into a 7 x 7 x 7 box (thereby leaving 15 holes); the solution is analogous to the 5x5x5 case, and has three 1x1x5 cuboidal holes in mutually perpendicular directions covering all 7 slices.
See also
Soma cube
Bedlam cube
Diabolical cube
References
External links
The Slothouber-Graatsma puzzle in Stewart Coffin's "The Puzzling World of Polyhedral Dissections"
Jan Slothouber and William Graatsma: Cubic constructs
William Graatsma and Jan Slothouber: Dutch mathematical art
Packing problems
Tiling puzzles
Mechanical puzzle cubes
|
https://en.wikipedia.org/wiki/Phyllosphere
|
In microbiology, the phyllosphere is the total above-ground surface of a plant when viewed as a habitat for microorganisms. The phyllosphere can be further subdivided into the caulosphere (stems), phylloplane (leaves), anthosphere (flowers), and carposphere (fruits). The below-ground microbial habitats (i.e. the thin-volume of soil surrounding root or subterranean stem surfaces) are referred to as the rhizosphere and laimosphere.
Most plants host diverse communities of microorganisms including bacteria, fungi, archaea, and protists. Some are beneficial to the plant, while others function as plant pathogens and may damage the host plant or even kill it.
The phyllosphere microbiome
The leaf surface, or phyllosphere, harbours a microbiome comprising diverse communities of bacteria, archaea, fungi, algae and viruses. Microbial colonizers are subjected to diurnal and seasonal fluctuations of heat, moisture, and radiation. In addition, these environmental elements affect plant physiology (such as photosynthesis, respiration, water uptake etc.) and indirectly influence microbiome composition. Rain and wind also cause temporal variation to the phyllosphere microbiome.
The phyllosphere includes the total aerial (above-ground) surface of a plant, and as such includes the surface of the stem, flowers and fruit, but most particularly the leaf surfaces. Compared with the rhizosphere and the endosphere the phyllosphere is nutrient poor and its environment more dynamic.
Interactions between plants and their associated microorganisms in many of these microbiomes can play pivotal roles in host plant health, function, and evolution. Interactions between the host plant and phyllosphere bacteria have the potential to drive various aspects of host plant physiology. However, as of 2020 knowledge of these bacterial associations in the phyllosphere remains relatively modest, and there is a need to advance fundamental knowledge of phyllosphere microbiome dynamics.
The assembly of the phyl
|
https://en.wikipedia.org/wiki/HealthBoards
|
HealthBoards is a long-running social networking support group website. It consists of over 280 Internet message boards for patient-to-patient health support (also referred to as a virtual community or an online health community). HealthBoards was one of the first stand-alone health community websites. Health communities prior to it had generally been part of large web portals (WebMD, Yahoo, iVillage, etc.). The HealthBoards members post messages to share information and support on a wide range of health issues such as cancer, back pain, autism, and women's health. As of October 2013, the site had over 1 million registered members, 5 million posted messages, and over 10 million monthly visitors.
History
HealthBoards was founded in 1998 by Charles Simmons, a software engineer in Los Angeles, California. In 1997, after experiencing a variety of symptoms for which doctors had no explanation, Simmons turned to the Web for answers and support. When he did not find online support groups in the areas he needed, he realized that there was a need for a health support website covering a wide range of health topics. After a year of development, HealthBoards was launched on July 26, 1998, with 70 message boards. The original site was developed using custom Perl software written by Simmons. HealthBoards quickly gained popularity. In January 2001, the site began using an internet forum software package called UBB. By November 2003, HealthBoards had reached 100,000 members. Due to considerable growth in traffic and problems with UBB, the site was transitioned to VBulletin 3.0, a more robust internet forum software system. After 2003 HealthBoards experienced its most rapid growth and became one of the largest health communities on the Web. In 2005 HealthBoards was rated as one of the top 20 health websites by Consumer Reports Health WebWatch.
Selection for inclusion as a "Top 20" site was based solely on web traffic volume. These sites were then evaluated using criteria developed
|
https://en.wikipedia.org/wiki/Heronian%20mean
|
In mathematics, the Heronian mean H of two non-negative real numbers A and B is given by the formula
$H = \frac{A + \sqrt{AB} + B}{3}.$
It is named after Hero of Alexandria.
Properties
Just like all means, the Heronian mean is symmetric (it does not depend on the order in which its two arguments are given) and idempotent (the mean of any number with itself is the same number).
The Heronian mean of the numbers A and B is a weighted mean of their arithmetic and geometric means:
$H = \frac{2}{3} \cdot \frac{A + B}{2} + \frac{1}{3} \cdot \sqrt{AB}.$
Therefore, it lies between these two means, and between the two given numbers.
Application in solid geometry
The Heronian mean may be used in finding the volume of a frustum of a pyramid or cone. The volume is equal to the product of the height of the frustum and the Heronian mean of the areas of the opposing parallel faces.
A version of this formula, for square frusta, appears in the Moscow Mathematical Papyrus from Ancient Egyptian mathematics, whose content dates to roughly 1850 BC.
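A minimal sketch of the frustum rule stated above (function names and the sample frustum are illustrative):

```python
from math import sqrt

def heronian_mean(a, b):
    return (a + sqrt(a * b) + b) / 3

def frustum_volume(height, area_top, area_bottom):
    # volume = height times the Heronian mean of the two parallel face areas
    return height * heronian_mean(area_top, area_bottom)

# square frustum, 2 units tall, with face areas 4 and 9
print(frustum_volume(2.0, 4.0, 9.0))   # (2/3) * (4 + 6 + 9) = 12.666...
```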
References
Means
|
https://en.wikipedia.org/wiki/TCP%20window%20scale%20option
|
The TCP window scale option is an option to increase the receive window size allowed in Transmission Control Protocol above its former maximum value of 65,535 bytes. This TCP option, along with several others, is defined in RFC 1323 (later superseded by RFC 7323), which deals with long fat networks (LFNs).
TCP windows
The throughput of a TCP communication is limited by two windows: the congestion window and the receive window. The congestion window tries not to exceed the capacity of the network (congestion control); the receive window tries not to exceed the capacity of the receiver to process data (flow control). The receiver may be overwhelmed by data if for example it is very busy (such as a Web server). Each TCP segment contains the current value of the receive window. If, for example, a sender receives an ack which acknowledges byte 4000 and specifies a receive window of 10000 (bytes), the sender will not send packets after byte 14000, even if the congestion window allows it.
Theory
TCP window scale option is needed for efficient transfer of data when the bandwidth-delay product (BDP) is greater than 64 KB. For instance, if a T1 transmission line of 1.5 Mbit/second was used over a satellite link with a 513 millisecond round-trip time (RTT), the bandwidth-delay product is 1,500,000 bit/s × 0.513 s = 769,500 bits, or about 96,187 bytes. Using a maximum buffer size of 64 KB only allows the buffer to be filled to (65,535 / 96,187) = 68% of the theoretical maximum speed of 1.5 Mbit/second, or 1.02 Mbit/s.
By using the window scale option, the receive window size may be increased up to a maximum value of 1,073,725,440 bytes (65,535 × 2^14, just under 1 GiB). This is done by specifying a one-byte shift count in the header options field. The true receive window size is the advertised window value left-shifted by the shift count. A maximum value of 14 may be used for the shift count. This would allow a single TCP connection to transfer data over the example satellite link at 1.5 Mbit/second utilizing all of the available bandwidth.
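The arithmetic above translates directly into a helper that picks the smallest sufficient shift count; a minimal sketch (the function name is illustrative):

```python
def required_shift_count(bandwidth_bps, rtt_s):
    """Smallest window-scale shift (0..14) whose scaled 65,535-byte window
    covers the bandwidth-delay product."""
    bdp_bytes = bandwidth_bps * rtt_s / 8
    shift = 0
    while (65535 << shift) < bdp_bytes and shift < 14:
        shift += 1
    return shift

# the T1-over-satellite example: BDP ~ 96,187 bytes, so one doubling suffices
print(1_500_000 * 0.513 / 8)                   # 96187.5
print(required_shift_count(1_500_000, 0.513))  # 1
```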
Essentially, not more than one full transmission window can be
|
https://en.wikipedia.org/wiki/McCormick%20Tribune%20Freedom%20Museum
|
The McCormick Freedom Museum, created by the McCormick Foundation, was the first museum in the United States dedicated to the First Amendment. It was open from April 11, 2006, until March 1, 2009. The museum offered visitors an interactive experience focused on First Amendment rights, which include freedom of speech, freedom of religion, freedom of the press, assembly, and petition. It was located on Michigan Avenue along the Magnificent Mile next to the historic Tribune Tower.
A sculpture by artists Amy Larimer and Peter Bernheim, titled 12151791 was put into storage when the museum closed. Its title represents the date of December 15, 1791, when the United States Bill of Rights was ratified.
One journalist has noted the irony of a Freedom Museum being named after Robert R. McCormick, owner of the Chicago Tribune newspaper, saying the name "puts ego before freedom".
A scaled-down mobile version of the museum, dubbed the Freedom Express, made its debut in Chicago's Pioneer Court on May 27, 2010.
References
External links
Virtual museums
Defunct museums in Illinois
Museums established in 2006
Museums disestablished in 2009
|
https://en.wikipedia.org/wiki/Trencher%20%28machine%29
|
A trencher is a piece of construction equipment used to dig trenches, especially for laying pipes or electrical cables, for installing drainage, or in preparation for trench warfare. Trenchers may range in size from walk-behind models, to attachments for a skid loader or tractor, to very heavy tracked heavy equipment.
Types
Trenchers come in different sizes and may use different digging implements, depending on the required width and depth of the trench and the hardness of the surface to be cut.
Wheel trencher
A wheel trencher or rockwheel is composed of a toothed metal wheel. It is cheaper to operate and maintain than chain-type trenchers. It can work in hard or soft soils, either homogeneous (compact rocks, silts, sands) or heterogeneous (split or broken rock, alluvia, moraines). This is particularly true because a cutting wheel works by clearing the soil as a bucket-wheel does, rather than like a rasp (chain trencher). Consequently, it will be less sensitive to the presence of blocks in the soil. They are also used to cut pavement for road maintenance and to gain access to utilities under roads.
Due to its design the wheel may reach variable cutting depths with the same tool, and can keep a constant soil working angle with a relatively small wheel diameter (which reduces the weight and therefore the pressure to the ground, and the height of the unit for transport).
The cutting elements (6 to 8 depending on the diameter) are placed around the wheel, and bear the teeth which are more or less dense depending on the ground they will encounter. These tools can be easily changed manually, and adjusted to allow different cutting widths on the same wheel. The teeth are placed in a semi-spherical configuration to increase the removal of the materials from the trench. The teeth are made of high strength steel (HSLA steel, tool steel or high speed steel) or cemented carbide. When the machine is under heavy use, the teeth may need to be replaced frequently, even daily.
|
https://en.wikipedia.org/wiki/Bus%20encryption
|
Bus encryption is the use of encrypted program instructions on a data bus in a computer that includes a secure cryptoprocessor for executing the encrypted instructions. Bus encryption is used primarily in electronic systems that require high security, such as automated teller machines, TV set-top boxes, and secure data communication devices such as two-way digital radios.
Bus encryption can also mean encrypted data transmission on a data bus from one processor to another processor, for example from the CPU to a GPU that does not require input of encrypted instructions. Such bus encryption is used by Windows Vista and newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity. PVP-UAB (Protected Video Path) provides bus encryption of premium video content in PCs as it passes over the PCIe bus to graphics cards, to enforce digital rights management.
The need for bus encryption arises when multiple people have access to the internal circuitry of an electronic system, either because they service and repair such systems, stock spare components for the systems, own the system, steal the system, or find a lost or abandoned system. Bus encryption is necessary not only to prevent tampering of encrypted instructions that may be easily discovered on a data bus or during data transmission, but also to prevent discovery of decrypted instructions that may reveal security weaknesses that an intruder can exploit.
In TV set-top boxes, it is necessary to download program instructions periodically to customer's units to provide new features and to fix bugs. These new instructions are encrypted before transmission, but must also remain secure on data buses and during execution to prevent the manufacture of unauthorized cable TV boxes. This can be accomplished by secure crypto-processors that read encrypted instructions on the data bus from external data memory, decrypt the instructions in the cryptoprocessor, and execute the instructions
|
https://en.wikipedia.org/wiki/Time-compressed%20speech
|
Time-compressed speech refers to an audio recording of verbal text in which the text is presented in a much shorter time interval than it would be through normally paced real-time speech. The basic purpose is to make recorded speech contain more words in a given time, yet still be understandable. For example, a paragraph that might normally take 20 seconds to read might instead be presented in 15 seconds, which would represent a time-compression of 25% (5 seconds out of 20).
The term "time-compressed speech" should not be confused with "speech compression", which controls the volume range of a sound, but does not alter its time envelope.
Methods
While some voice talents are capable of speaking at rates significantly in excess of general norms, the term "time-compressed speech" most usually refers to examples in which the time-reduction has been accomplished through some form of electronic processing of the recorded speech.
In general, recorded speech can be electronically time-compressed by: increasing its speed (linear compression); removing silences (selective editing); a combination of the two (non-linear compression). The speed of a recording can be increased, which will cause the material to be presented at a faster rate (and hence in a shorter amount of time), but this has the undesirable side-effect of increasing the frequency of the whole passage, raising the pitch of the voices, which can reduce intelligibility.
There are normally silences between words and sentences, and even small silences within certain words, both of which can be reduced or removed ("edited-out") which will also reduce the amount of time occupied by the full speech recording. However, this can also have the effect of removing verbal "punctuation" from the speech, causing words and sentences to run together unnaturally, again reducing intelligibility.
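A naive version of selective editing can be sketched in a few lines: split the signal into short frames and drop those whose energy falls below a threshold. The frame length and threshold here are illustrative, and real systems take far more care to preserve verbal punctuation:

```python
import numpy as np

def remove_silences(signal, rate, frame_ms=20, threshold=0.01):
    """Drop frames whose RMS energy is below threshold (selective editing)."""
    frame = int(rate * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return frames[rms > threshold].ravel()

rate = 16000
t = np.arange(rate) / rate                     # one second of mono audio
speech = np.where(t < 0.5, 0.2 * np.sin(2 * np.pi * 200 * t), 0.0)
shorter = remove_silences(speech, rate)
print(len(speech) / rate, len(shorter) / rate)  # 1.0 -> 0.5 seconds
```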
Vowels are typically held a minimum of 20 milliseconds, over many cycles of the fundamental pitch.
DSP systems can
|
https://en.wikipedia.org/wiki/Empathising%E2%80%93systemising%20theory
|
The empathising–systemising (E–S) theory is a theory on the psychological basis of autism and male–female neurological differences originally put forward by English clinical psychologist Simon Baron-Cohen. It classifies individuals based on abilities in empathic thinking (E) and systematic thinking (S). It measures skills using an Empathy Quotient (EQ) and Systemising Quotient (SQ) and attempts to explain the social and communication symptoms in autism spectrum disorders as deficits and delays in empathy combined with intact or superior systemising.
According to Baron-Cohen, the E–S theory has been tested using the Empathy Quotient (EQ) and Systemising Quotient (SQ), developed by him and colleagues, and generates five different 'brain types' depending on the presence or absence of discrepancies between their scores on E or S. E–S profiles show that the profile E>S is more common in females than in males, and the profile S>E is more common in males than in females. Baron-Cohen and associates assert that E–S theory is a better predictor than gender of who chooses STEM subjects (Science, Technology, Engineering and Mathematics). The E–S theory has been extended into the extreme male brain (EMB) theory of autism and Asperger syndrome, which are associated in the E–S theory with below-average empathy and average or above-average systemising.
Baron-Cohen's studies and theory have been questioned on multiple grounds. For example, the overrepresentation of engineers could depend on socioeconomic status rather than on E–S differences.
History
E–S theory was developed by psychologist Simon Baron-Cohen in 2002, as a reconceptualization of cognitive sex differences in the general population. This was done in an effort to understand why the cognitive difficulties in autism appeared to lie in domains in which he says on average females outperformed males, along with why cognitive strengths in autism appeared to lie in domains in which on average males outperformed females. In the first cha
|
https://en.wikipedia.org/wiki/Mathematical%20Optimization%20Society
|
The Mathematical Optimization Society (MOS), known as the Mathematical Programming Society until 2010, is an international association of researchers active in optimization. The MOS encourages the research, development, and use of optimization—including mathematical theory, software implementation, and practical applications (operations research).
Founded in 1973, the MOS has several activities: publishing journals and a newsletter, organizing and cosponsoring conferences, and awarding prizes.
History
In the 1960s, mathematical programming methods were gaining increasing importance both in mathematical theory and in industrial application. To provide a discussion forum for researchers in the field, the journal Mathematical Programming was founded in 1970.
Based on activities by George Dantzig, Albert Tucker, Philip Wolfe and others, the MOS was founded in 1973, with George Dantzig as its first president.
Activities
Conferences
Several conferences are organized or co-organized by the Mathematical Optimization Society, for instance:
The International Symposium on Mathematical Programming (ISMP), organized every three years, is open to all fields of mathematical programming.
The Integer Programming and Combinatorial Optimization (IPCO) conference, in Integer programming, is held in those years when there is no ISMP.
The International Conference on Continuous Optimization (ICCOPT), the continuous analog of the IPCO conference, was first held in 2004.
The International Conference on Stochastic Programming (ICSP) takes place every three years and is devoted to optimization using uncertain input data.
The Nordic MOS conference is a biannual meeting of researchers from Scandinavia working in all fields of optimization.
At the Université de Montréal, annual seminars on changing topics are organized by the MOS.
Journals and other publications
There are several publications by the Mathematical Optimization Society:
The journal Mathematical Programming (serie
|
https://en.wikipedia.org/wiki/Pasch%27s%20axiom
|
In geometry, Pasch's axiom is a statement in plane geometry, used implicitly by Euclid, which cannot be derived from the postulates as Euclid gave them. Its essential role was discovered by Moritz Pasch in 1882.
Statement
The axiom states that,
If A, B, C are three points that do not lie on a line, and a is a line in the plane of triangle ABC which does not pass through any of the points A, B, C: if the line a passes through a point of the segment AB, it will also pass through either a point of the segment BC or a point of the segment AC.
The fact that segments AC and BC are not both intersected by the line is proved in Supplement I,1, which was written by P. Bernays.
A more modern version of this axiom is as follows:
A line in the plane of a triangle, not passing through any of its vertices, that meets one side of the triangle internally also meets exactly one other side internally and the third side externally. (In case the third side is parallel to our line, we count an "intersection at infinity" as external.) A more informal version of the axiom is often seen:
If a line, not passing through any vertex of a triangle, meets one side of the triangle then it meets another side.
History
Pasch published this axiom in 1882, and showed that Euclid's axioms were incomplete. The axiom was part of Pasch's approach to introducing the concept of order into plane geometry.
Equivalences
In other treatments of elementary geometry, using different sets of axioms, Pasch's axiom can be proved as a theorem; it is a consequence of the plane separation axiom when that is taken as one of the axioms. Hilbert uses Pasch's axiom in his axiomatic treatment of Euclidean geometry. Given the remaining axioms in Hilbert's system, it can be shown that Pasch's axiom is logically equivalent to the plane separation axiom.
Hilbert's use of Pasch's axiom
David Hilbert uses Pasch's axiom in his book Foundations of Geometry which provides an axiomatic basis for Euclidean geometry. Depending upon the edition, it is numbered either II.4 or II.5. His statement is given above.
In Hilbert's treatment, this axiom appears in the section concerning axioms of order and is referred to as a plane axiom of order. Since he does not phrase the axiom in terms of the sides of a triangle (considered as lines rather than line segments) there is no need to talk about internal and external intersections of the line with the sides of the triangle ABC.
Caveats
Pasch's axiom is distinct from Pasch's theorem which is a statement about the order of four points on a line. However, in literature there are many in
|
https://en.wikipedia.org/wiki/Crime%20Library
|
Crime Library was a website documenting major crimes, criminals, trials, forensics, and criminal profiling from books. It was founded in 1998 and was most recently owned by truTV, a cable TV network that is part of Time Warner's Turner Broadcasting System. In August 2014, Crime Library was no longer being updated. In February 2015 the site was taken offline.
Content
Crime Library contained an extensive collection of crime related articles, which were separated into categories: Serial Killers, Notorious Murders, Criminal Mind, Terrorists & Spies and Gangsters & Outlaws. Each category was then broken down into further subcategories. For example, within Serial Killers were the subcategories Most Notorious, Sexual Predators, Truly Weird & Shocking, Unsolved Cases, Partners in Crime and Killers from History. Crime Library also featured photo galleries. These may have had anywhere from 10 to upwards of 100 slides. Some photo galleries were focused on a specific case, while others were lists of crimes linked by a theme (e.g., "Baby for Sale," cases where a person was arrested for allegedly attempting to sell his or her child), or collections of unusual booking photos.
High-profile crimes in the United States were prominent on Crime Library, but the site also contained information about historically notorious characters from various countries, including United Kingdom, Australia and France.
All articles on Crime Library were written exclusively for Crime Library by dozens of commissioned writers, many of them true-crime authors, including Chuck Hustmyre, Katherine Ramsland, Gary C. King and Anthony Bruno.
Crime Library maintained social media features where readers could interact and discuss criminal cases, including a Facebook page, a Twitter account and message boards.
History
Crime Library was founded by Marilyn J. Bardsley in January 1998. Court TV, later truTV, purchased Crime Library in 2001, the same year The Smoking Gun was acquired by Court TV. Originally
|
https://en.wikipedia.org/wiki/Artin%E2%80%93Hasse%20exponential
|
In mathematics, the Artin–Hasse exponential, introduced by Artin and Hasse (1928), is the power series given by
$E_p(x) = \exp\left(x + \frac{x^p}{p} + \frac{x^{p^2}}{p^2} + \frac{x^{p^3}}{p^3} + \cdots\right).$
Motivation
One motivation for considering this series to be analogous to the exponential function comes from infinite products. In the ring of formal power series Q[[x]] we have the identity
$e^x = \prod_{n \geq 1}(1 - x^n)^{-\mu(n)/n},$
where μ(n) is the Möbius function. This identity can be verified by showing that the logarithmic derivatives of the two sides are equal and that both sides have the same constant term. In a similar way, one can verify a product expansion for the Artin–Hasse exponential:
$E_p(x) = \prod_{(n, p) = 1}(1 - x^n)^{-\mu(n)/n}.$
So passing from a product over all n to a product over only n prime to p, which is a typical operation in p-adic analysis, leads from e^x to Ep(x).
Properties
The coefficients of Ep(x) are rational. We can use either formula for Ep(x) to prove that, unlike e^x, all of its coefficients are p-integral; in other words, the denominators of the coefficients of Ep(x) are not divisible by p. A first proof uses the definition of Ep(x) and Dwork's lemma, which says that a power series f(x) = 1 + ... with rational coefficients has p-integral coefficients if and only if f(x^p)/f(x)^p ≡ 1 mod pZp[[x]]. When f(x) = Ep(x), we have f(x^p)/f(x)^p = e^{−px}, whose constant term is 1 and all higher coefficients are in pZp.
A second proof comes from the infinite product for Ep(x): each exponent −μ(n)/n for n not divisible by p is p-integral, and when a rational number a is p-integral all coefficients in the binomial expansion of (1 − x^n)^a are p-integral, by p-adic continuity of the binomial coefficient polynomials t(t−1)...(t−k+1)/k! in t together with their obvious integrality when t is a nonnegative integer (a is a p-adic limit of nonnegative integers). Thus each factor in the product of Ep(x) has p-integral coefficients, so Ep(x) itself has p-integral coefficients.
The (p-integral) series expansion has radius of convergence 1.
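The p-integrality of the coefficients can be observed directly by truncating the defining series; a small sympy sketch for p = 2 (the truncation order is illustrative):

```python
from sympy import Rational, exp, series, symbols

x = symbols('x')
p, order = 2, 8

# truncate the exponent x + x^p/p + x^(p^2)/p^2 + ... below the working order
expo = sum(x**(p**i) / Rational(p**i) for i in range(4) if p**i < order)
E = series(exp(expo), x, 0, order).removeO()

coeffs = [E.coeff(x, k) for k in range(order)]
print(coeffs)
print(all(c.q % p != 0 for c in coeffs))   # True: no denominator divisible by p
```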
Combinatorial interpretation
The Artin–Hasse exponential is the generating function for the probability a uniforml
|
https://en.wikipedia.org/wiki/National%20Industrial%20Security%20Program
|
The National Industrial Security Program, or NISP, is the nominal authority in the United States for managing the needs of private industry to access classified information.
The NISP was established in 1993 by Executive Order 12829. The National Security Council nominally sets policy for the NISP, while the Director of the Information Security Oversight Office is nominally the authority for implementation. Under the ISOO, the Secretary of Defense is nominally the Executive Agent, but the NISP recognizes four different Cognizant Security Agencies, all of which have equal authority: the Department of Defense, the Department of Energy, the Central Intelligence Agency, and the Nuclear Regulatory Commission.
Defense Counterintelligence and Security Agency administers the NISP on behalf of the Department of Defense and 34 other federal agencies.
NISP Operating Manual (DoD 5220.22-M)
A major component of the NISP is the NISP Operating Manual, also called NISPOM, or DoD 5220.22-M. The NISPOM establishes the standard procedures and requirements for all government contractors with regard to classified information. The current NISPOM edition is dated 28 February 2006. Chapters and selected sections of this edition are:
Chapter 1 – General Provisions and Requirements
Chapter 2 – Security Clearances
Section 1 – Facility Clearances
Section 2 – Personnel Security Clearances
Section 3 – Foreign Ownership, Control, or Influence (FOCI)
Chapter 3 – Security Training and Briefings
Chapter 4 – Classification and Marking
Chapter 5 – Safeguarding Classified Information
Chapter 6 – Visits and Meetings
Chapter 7 – Subcontracting
Chapter 8 – Information System Security
Chapter 9 – Special Requirements
Section 1 – RD and FRD
Section 2 – DoD Critical Nuclear Weapon Design Information (CNWDI)
Section 3 – Intelligence Information
Section 4 – Communication Security (COMSEC)
Chapter 10 – International Security Requirements
Chapter 11 – Miscellaneous Information
Section 1
|
https://en.wikipedia.org/wiki/Prefabricated%20building
|
A prefabricated building, informally a prefab, is a building that is manufactured and constructed using prefabrication. It consists of factory-made components or units that are transported and assembled on-site to form the complete building. Various materials are combined as part of the installation process.
History
Buildings have been built in one place and reassembled in another throughout history. This was especially true for mobile activities, or for new settlements. Elmina Castle, the first slave fort in West Africa, was also the first European prefabricated building in sub-Saharan Africa. In North America, in 1624 one of the first buildings at Cape Ann was probably partially prefabricated, and was rapidly disassembled and moved at least once. In 1801, John Rollo described the earlier use of portable hospital buildings in the West Indies. Possibly the first advertised prefab house was the "Manning cottage". A London carpenter, Henry Manning, constructed a house that was built in components, then shipped and assembled by British emigrants. This was published at the time (advertisement, South Australian Record, 1837) and a few still stand in Australia. One such is the Friends Meeting House, Adelaide.
The peak year for the importation of portable buildings to Australia was 1853, when several hundred arrived. These have been identified as coming from Liverpool, Boston and Singapore (with Chinese instructions for re-assembly). In Barbados the Chattel house was a form of prefabricated building which was developed by emancipated slaves who had limited rights to build upon land they did not own. As the buildings were moveable they were legally regarded as chattels.
In 1855 during the Crimean War, after Florence Nightingale wrote a letter to The Times, Isambard Kingdom Brunel was commissioned to design a prefabricated modular hospital. In five months he designed the Renkioi Hospital: a 1,000 patient hospital, with innovations in sanitation, ventilation and a flu
|
https://en.wikipedia.org/wiki/Harmonic%20polynomial
|
In mathematics, in abstract algebra, a multivariate polynomial p over a field such that the Laplacian of p is zero is termed a harmonic polynomial.
The harmonic polynomials form a vector subspace of the vector space of polynomials over the field. In fact, they form a graded subspace. For the real field, the harmonic polynomials are important in mathematical physics.
The Laplacian is the sum of second partials with respect to all the variables, and is an invariant differential operator under the action of the orthogonal group via the group of rotations.
The standard separation of variables theorem states that every multivariate polynomial over a field can be decomposed as a finite sum of products of a radial polynomial and a harmonic polynomial. This is equivalent to the statement that the polynomial ring is a free module over the ring of radial polynomials.
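Both the harmonicity condition and the separation of variables can be checked symbolically; a small sympy sketch in two variables (the example polynomials are illustrative):

```python
from sympy import diff, simplify, symbols

x, y = symbols('x y')

def laplacian(poly, variables):
    return sum(diff(poly, v, 2) for v in variables)

h = x**2 - y**2                       # a harmonic polynomial: Laplacian is 0
print(laplacian(h, (x, y)))           # 0

# separation of variables: x^2 = (1/2)(x^2 + y^2) + (1/2)(x^2 - y^2),
# a radial (sum-of-squares) part plus a harmonic part
r2 = x**2 + y**2
print(simplify(r2 / 2 + h / 2 - x**2))   # 0
```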
See also
Harmonic function
Spherical harmonics
Zonal spherical harmonics
Multilinear polynomial
References
Lie Group Representations of Polynomial Rings by Bertram Kostant published in the American Journal of Mathematics Vol 85 No 3 (July 1963)
Abstract algebra
Polynomials
|
https://en.wikipedia.org/wiki/Radical%20polynomial
|
In mathematics, in the realm of abstract algebra, a radical polynomial is a multivariate polynomial over a field that can be expressed as a polynomial in the sum of squares of the variables. That is, if
$F[x_1, \ldots, x_n]$
is a polynomial ring, the ring of radical polynomials is the subring generated by the polynomial
$x_1^2 + \cdots + x_n^2.$
Radical polynomials are characterized as precisely those polynomials that are invariant under the action of the orthogonal group.
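As a small sanity check of this invariance, a computer algebra system can verify that the two-variable generator x² + y² is unchanged by a planar rotation (a minimal sketch; the symbols and library use are illustrative):

```python
import sympy as sp

x, y, theta = sp.symbols('x y theta')
# Rotate the point (x, y) by an angle theta
xr = sp.cos(theta)*x - sp.sin(theta)*y
yr = sp.sin(theta)*x + sp.cos(theta)*y
# The generator of the radical polynomials in two variables
r2 = x**2 + y**2
# Invariance under the rotation: the difference simplifies to zero
assert sp.simplify((xr**2 + yr**2) - r2) == 0
print("x^2 + y^2 is invariant under rotations")
```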
The ring of radical polynomials is a graded subalgebra of the ring of all polynomials.
The standard separation of variables theorem asserts that every polynomial can be expressed as a finite sum of terms, each term being a product of a radical polynomial and a harmonic polynomial. This is equivalent to the statement that the ring of all polynomials is a free module over the ring of radical polynomials.
References
Abstract algebra
Polynomials
Invariant theory
|
https://en.wikipedia.org/wiki/AAPC%20%28healthcare%29
|
The AAPC, previously known by the full title of the American Academy of Professional Coders, is a professional association for people working in specific areas of administration within healthcare businesses in the United States. AAPC is one of a number of providers that offer services such as certification and training to medical coders, medical billers, auditors, compliance managers, and practice managers in the United States. AAPC has over 190,000 worldwide members, of whom nearly 155,000 are certified.
History
The AAPC was founded in 1988, as the American Academy of Professional Coders, with the aim of providing education and certification to coders working in physician-based settings. These settings include group practices and specialty centers (i.e. non-hospital settings). In 2010, as their services had expanded beyond medical and outpatient coding, the full name was dropped in favor of the AAPC initialism.
Products and services
AAPC provides training, certification, and other services to individuals and organizations across medical coding, medical billing, auditing, compliance, and practice management. These services include networking events such as medical coding seminars and conferences.
Certifications
AAPC offers a number of certifications for healthcare professionals, including:
Medical coding and medical billing, including stand-alone certifications in over 20 specialty areas,
Medical auditing
Medical compliance
Physician practice management
See also
Health informatics
Information governance
Medical classification
External links
AAPC official site
References
Organizations established in 1988
Medical associations based in the United States
Health informatics and eHealth associations
Coding schools
Companies based in Salt Lake City
Medical and health organizations based in Utah
|
https://en.wikipedia.org/wiki/Rayl
|
A Rayl, rayl or Rayleigh is one of two units of specific acoustic impedance or, equivalently, characteristic acoustic impedance; one an MKS unit, and the other a CGS unit. These have the same dimensions as momentum per volume.
The units are named after John William Strutt, 3rd Baron Rayleigh. They are not to be confused with the rayleigh unit of photon flux, which is used to measure airglow and is named after his son, Robert John Strutt, 4th Baron Rayleigh.
Explanation
Specific acoustic impedance
When sound waves pass through any physical substance the pressure of the waves causes the particles of the substance to move. The specific acoustic impedance is the ratio between the sound pressure and the particle velocity it produces.
Specific acoustic impedance is defined as

$$z(\mathbf{r}, \omega) = \frac{p(\mathbf{r}, \omega)}{v(\mathbf{r}, \omega)},$$

where $z$, $p$ and $v$ are the specific acoustic impedance, pressure and particle velocity phasors, $\mathbf{r}$ is the position and $\omega$ is the frequency.
Characteristic acoustic impedance
The Rayl is also used for the characteristic (acoustic) impedance of a medium, which is an inherent property of a medium:

$$Z_0 = \rho_0 c.$$

Here $Z_0$ is the characteristic impedance, and $\rho_0$ and $c$ are the density and speed of sound in the unperturbed medium (i.e. when there are no sound waves travelling in it).
In a viscous medium, there will be a phase difference between the pressure and velocity, so the specific acoustic impedance $z$ will be different from the characteristic acoustic impedance $Z_0$.
MKS and CGS units
The MKS unit and the CGS unit confusingly have the same name, but not the same value:
In MKS units, 1 Rayl equals 1 pascal-second per meter (Pa·s·m−1), or equivalently 1 newton-second per cubic meter (N·s·m−3). In SI base units, that is kg·s−1·m−2.
In CGS units, 1 Rayl equals 1 barye-second per centimeter (ba·s·cm−1), or equivalently 1 dyne-second per cubic centimeter (dyn·s·cm−3). In CGS base units, that is g·s−1·cm−2.
1 CGS Rayl = 10 MKS Rayl. In other words, a CGS Rayl is ten times larger than an MKS Rayl.
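As a numeric illustration, the characteristic impedance of air follows from $Z_0 = \rho_0 c$; the air values below are round textbook figures assumed for the example, not values taken from this article:

```python
# Characteristic acoustic impedance Z0 = rho0 * c, expressed in MKS rayl.
# Round textbook values for air at ~20 degrees C (assumptions for illustration).
rho_air = 1.204   # density, kg/m^3
c_air = 343.0     # speed of sound, m/s
Z0_mks = rho_air * c_air        # ~413 MKS rayl (Pa*s/m)
Z0_cgs = Z0_mks / 10            # 1 CGS rayl = 10 MKS rayl
print(f"{Z0_mks:.0f} MKS rayl = {Z0_cgs:.1f} CGS rayl")
```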
References
T. D. Rossing, Sprin
|
https://en.wikipedia.org/wiki/Virtuoso%20Universal%20Server
|
Virtuoso Universal Server is a middleware and database engine hybrid that combines the functionality of a traditional relational database management system (RDBMS), object–relational database (ORDBMS), virtual database, RDF, XML, free-text, web application server and file server functionality in a single system. Rather than have dedicated servers for each of the aforementioned functionality realms, Virtuoso is a "universal server"; it enables a single multithreaded server process that implements multiple protocols. The free and open source edition of Virtuoso Universal Server is also known as OpenLink Virtuoso. The software has been developed by OpenLink Software with Kingsley Uyi Idehen and Orri Erling as the chief software architects.
Database structure
Core database engine
Virtuoso provides an extended object–relational model, which combines the flexibility of relational access with inheritance, run-time data typing, late binding, and identity-based access. The Virtuoso Universal Server database includes physical file and in-memory storage, and operating system processes that interact with the storage. There is one main process, which has listeners on a specified port for HTTP, SOAP, and other protocols.
Architecture
Virtuoso is designed to take advantage of operating system threading support and multiple CPUs. It consists of a single process with an adjustable pool of threads shared between clients. Multiple threads may work on a single index tree with minimal interference with each other. One cache of database pages is shared among all threads and old dirty pages are written back to disk as a background process.
The database has at all times a clean checkpoint state and a delta of committed or uncommitted changes to this checkpointed state. This makes it possible to do a clean backup of the checkpoint state while transactions proceed on the commit state.
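A minimal sketch of the checkpoint-plus-delta idea, assuming a toy key-value state and log format (illustrative only, not Virtuoso's actual on-disk structures):

```python
# Illustrative checkpoint + log recovery; not Virtuoso's actual implementation.
checkpoint = {'x': 1}                        # clean checkpoint state on disk
log = [('set', 'y', 2), ('set', 'x', 3)]     # committed changes since checkpoint

def recover(checkpoint, log):
    state = dict(checkpoint)                 # start from the clean checkpoint
    for op, key, value in log:               # replay the recorded delta
        if op == 'set':
            state[key] = value
    return state

print(recover(checkpoint, log))  # {'x': 3, 'y': 2}
```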
A transaction log file records all transactions since the last checkpoint. Transaction log files may be pr
|
https://en.wikipedia.org/wiki/Direct%20read%20after%20write
|
Direct read after write is a procedure that compares data recorded onto a medium against the source. A typical example is CD burning software that reads a disc back after it has been burned, verifying that the data written matches the data it was copied from.
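In software terms, the same verification idea can be sketched by hashing the source and the written copy and comparing digests (paths and chunk size below are illustrative):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    # Stream the file so a large image does not need to fit in memory
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_write(source_path, written_path):
    # Read-after-write check: the written medium must match the source exactly
    return file_digest(source_path) == file_digest(written_path)

# e.g. verify_write('image.iso', '/dev/cdrom')  # device path is illustrative
```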
External links
Smart Computing Dictionary entry
Storage software
|
https://en.wikipedia.org/wiki/Photon-induced%20electric%20field%20poling
|
In physics, photon-induced electric field poling is a phenomenon whereby a pattern of local electric field orientations can be encoded in a suitable ferroelectric material, such as perovskite. The resulting encoded material is conceptually similar to the pattern of magnetic field orientations within the magnetic domains of a ferromagnet, and thus may be considered as a possible technology for computer storage media. The encoded regions are optically active (have a varying index of refraction) and thus may be "read out" optically.
Encoding process
The encoding process proceeds by application of ultraviolet light tuned to the absorption band associated with the transition of electrons from the valence band to the conduction band. During UV application, an external electric field is used to modify the electric dipole moment of regions of the ferroelectric material that are exposed to UV light. By this process, a pattern of local electric field orientations can be encoded.
Technically, the encoding effect proceeds by the creation of a population inversion between the valence and conduction bands, with the resulting creation of plasmons. During this time, ferroelectric perovskite materials can be forced to change geometry by the application of an electric field. The encoded regions become optically active due to the Pockels effect.
Decoding process
The pattern of ferroelectric domain orientations can be read out optically. The refractive index of the ferroelectric material at wavelengths from near-infrared through to near-ultraviolet is affected by the electric field within the material. A changing pattern of electric field domains within a ferroelectric substrate results in different regions of the substrate having different refractive indices. Under these conditions, the substrate behaves as a diffraction grating, allowing the pattern of domains to be inferred from the interference pattern present in the transmitted readout beam.
See also
Electro-optic effect
Bir
|
https://en.wikipedia.org/wiki/Passive%20electronically%20scanned%20array
|
A passive electronically scanned array (PESA), also known as passive phased array, is an antenna in which the beam of radio waves can be electronically steered to point in different directions (that is, a phased array antenna), in which all the antenna elements are connected to a single transmitter (such as a magnetron, a klystron or a travelling wave tube) and/or receiver.
The largest use of phased arrays is in radars. Most phased array radars in the world are PESA. The civilian microwave landing system uses PESA transmit-only arrays.
A PESA contrasts with an active electronically scanned array (AESA) antenna, which has a separate transmitter and/or receiver unit for each antenna element, all controlled by a computer; AESA is a more advanced, sophisticated versatile second-generation version of the original PESA phased array technology. Hybrids of the two can also be found, consisting of subarrays that individually resemble PESAs, where each subarray has its own RF front end. Using a hybrid approach, the benefits of AESAs (e.g., multiple independent beams) can be realized at a lower cost compared to true AESAs.
Pulsed radar systems work by connecting an antenna to a powerful radio transmitter to emit a short pulse of signal. The transmitter is then disconnected and the antenna is connected to a sensitive receiver which amplifies any echoes from target objects. By measuring the time it takes for the signal to return, the radar receiver can determine the distance to the object. The receiver then sends the resulting output to a display of some sort. The transmitter elements were typically klystron tubes or magnetrons, which are suitable for amplifying or generating a narrow range of frequencies to high power levels. To scan a portion of the sky, a non-PESA radar antenna must be physically moved to point in different directions. In contrast, the beam of a PESA radar can rapidly be changed to point in a different direction, simply by electrically adjusting the phase
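Two relationships from this description lend themselves to a short numeric sketch: the round-trip delay gives range as R = c·t/2, and a linear phased array steers its beam by applying a per-element phase shift. The array geometry and numbers below are illustrative assumptions, not values from the article:

```python
import numpy as np

C = 3.0e8                              # speed of light, m/s

def echo_range(round_trip_seconds):
    # Radar range: the pulse travels out and back, so divide by two
    return C * round_trip_seconds / 2

def steering_phases(n_elements, spacing_m, wavelength_m, theta_deg):
    # Per-element phase shift for a uniform linear array steered to theta
    n = np.arange(n_elements)
    return 2 * np.pi * n * spacing_m * np.sin(np.radians(theta_deg)) / wavelength_m

print(echo_range(66.7e-6))                            # ~10 km round trip
print(steering_phases(4, 0.015, 0.03, theta_deg=30))  # half-wavelength spacing
```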
|
https://en.wikipedia.org/wiki/Palustrine%20wetland
|
Palustrine wetlands include any inland wetland that contains ocean-derived salts in concentrations of less than 0.5 parts per thousand, and is non-tidal. The word palustrine comes from the Latin word palus or marsh. Wetlands within this category include inland marshes and swamps as well as bogs, fens, pocosins, tundra and floodplains.
According to the Cowardin classification system, Palustrine wetlands can also be considered the area on the side of a river or a lake, as long as they are covered by vegetation such as trees, shrubs, and emergent plants.
Classification
Palustrine wetlands are one of five systems of wetlands within the Cowardin classification system. This system was created by Lewis Cowardin and others from the United States Fish and Wildlife Service in 1979. The other systems are:
Marine wetlands, exposed to the open ocean
Estuarine wetlands, partially enclosed by land and containing a mix of fresh and salt water
Riverine wetlands, associated with flowing water
Lacustrine wetlands, associated with a lake or other body of fresh water
Vegetation
The vegetation within a Palustrine system typically contains multiple similar species. The different groups of vegetation are aquatic bed, emergent, scrub-shrub, and forested.
Aquatic bed vegetation typically includes floating-leaved plants, pondweed and waterlilies.
Emergent vegetation commonly includes cattails, bulrushes, reeds, pickerel weed, arrowheads and ferns.
Scrub-shrub wetland is dominated by woody vegetation less than 20 feet tall, such as buttonbush, alders, and many kinds of saplings.
Forested palustrine wetland is dominated by woody vegetation over 20 feet tall.
See also
Fluvial – of or relating to a river
Oceanic – of or relating to an ocean
References
Aquatic ecology
|
https://en.wikipedia.org/wiki/Universal%20powerline%20bus
|
Universal Powerline Bus (UPB) is a proprietary software protocol developed by Powerline Control Systems for power-line communication between devices used for home automation. Household electrical wiring is used to send digital data between UPB devices via pulse-position modulation.
Communication is peer to peer, with no central controller necessary.
UPB addressing allows 250 devices per house and 250 houses per transformer, for a total of 62,500 device addresses, and can co-exist with other powerline carrier systems within the same home.
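As a rough illustration of pulse-position modulation in general, a toy encoder can place one pulse per frame, with the pulse's slot carrying the data. UPB's actual framing, timing, and mains signalling are proprietary and are not reproduced here:

```python
# Toy pulse-position modulation: each 2-bit symbol selects one of 4 slots
# per frame. Purely illustrative; not UPB's actual encoding.

def ppm_encode(bits, slots_per_frame=4):
    symbols = [int(bits[i:i+2], 2) for i in range(0, len(bits), 2)]
    frames = []
    for s in symbols:
        frame = [0] * slots_per_frame
        frame[s] = 1                      # the pulse position carries the data
        frames.append(frame)
    return frames

def ppm_decode(frames):
    return ''.join(format(f.index(1), '02b') for f in frames)

frames = ppm_encode('1101')
print(frames)              # [[0, 0, 0, 1], [0, 1, 0, 0]]
print(ppm_decode(frames))  # '1101'
```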
UPB enjoys one of the broadest ranges of device types compared with most protocols and has support from some major manufacturers in the home automation space, most notably Leviton, with its Omni series of home automation products as well as the UPB devices it markets. UPB is also supported by many major home automation software manufacturers, a few of which are listed below.
Reliability
UPB is a highly reliable protocol for home automation. It is not susceptible to RF interference, signal blockage by walls or short distance broadcast issues like some wireless protocols. UPB transmits on the building's existing wiring and has extensive noise reduction circuitry. This allows it to traverse long distances without issues, even across multiple electrical panels, making it ideal for very large homes. Appliances that have traditionally plagued X10 devices usually do not affect UPB. In fact, UPB signals can reliably be received by the target device even with significant amounts of electrical noise on the power lines. However, in the event that an appliance in a home causes extreme interference when operating, an inexpensive wire-in noise filter can be applied at the circuit breaker panel to solve the issue.
Interoperability
Control of UPB devices is supported by the Home Assistant open source software (in version 0.110 and later).
Control of UPB devices is also supported by the OpenHAB open source software.
HomeSeer
|
https://en.wikipedia.org/wiki/AlphaStation
|
AlphaStation is the name given to a series of computer workstations, produced from 1994 onwards by Digital Equipment Corporation, and later by Compaq and HP. As the name suggests, the AlphaStations were based on the DEC Alpha 64-bit microprocessor. Supported operating systems for AlphaStations comprise Tru64 UNIX (formerly Digital UNIX), OpenVMS and Windows NT (with AlphaBIOS ARC firmware). Most of these workstations can also run various versions of Linux and BSD operating systems.
Other Alpha workstations produced by DEC include the DEC 2000 AXP (DECpc AXP 150), the DEC 3000 AXP, the Digital Personal Workstation a-series and au-series (codename Miata), the Multia VX40/41/42 and the Alpha XL/Alpha XLT line (a member of the Alcor family, which had swappable daughterboard with Pentium processor, to transform to a DEC Celebris XL line).
Models
From the XP900 onwards, all AlphaStation models were simply workstation configurations of the corresponding AlphaServer model.
Avanti family
Alcor Family
Noritake and Rawhide Family
Tsunami Family
Titan and Marvel Family
A variant of the AlphaStation 1200 was also sold as the Digital Ultimate Workstation 533au².
Some systems had one of the microprocessors deactivated, which may be reactivated with a license upgrade.
References
External links
HP AlphaStation range
Compaq Alpha System and Model Code Names (at archive.org)
FreeBSD/alpha 6.1 Hardware Notes
About NetBSD/alpha
The OpenBSD/alpha port
Debian Alpha system types
See also
AlphaVM: A full DEC Alpha system emulator running on Windows or Linux.
64-bit computers
DEC workstations
Advanced RISC Computing
Computer-related introductions in 1994
Compaq Alpha-based computers
|
https://en.wikipedia.org/wiki/AlphaServer
|
AlphaServer is a series of server computers, produced from 1994 onwards by Digital Equipment Corporation, and later by Compaq and HP. AlphaServers were based on the DEC Alpha 64-bit microprocessor. Supported operating systems for AlphaServers are Tru64 UNIX (formerly Digital UNIX), OpenVMS, MEDITECH MAGIC and Windows NT (on earlier systems, with AlphaBIOS ARC firmware), while enthusiasts have provided alternative operating systems such as Linux, NetBSD, OpenBSD and FreeBSD.
The Alpha processor was also used in a line of workstations, AlphaStation.
Some AlphaServer models were rebadged in white enclosures as Digital Servers for the Windows NT server market. These so-called "white box" models comprised the following:
Digital Server 3300/3305: rebadged AlphaServer 800
Digital Server 5300/5305: rebadged AlphaServer 1200
Digital Server 7300/7305/7310: rebadged AlphaServer 4100
As part of the roadmap to phase out Alpha-, MIPS- and PA-RISC-based systems in favor of Itanium-based systems at HP, the most recent AlphaServer systems reached their end of general availability on 27 April 2007. The availability of upgrades and options was discontinued on 25 April 2008, approximately one year after the systems were discontinued. Support for the most recent AlphaServer systems, the DS15A, DS25, ES45, ES47, ES80 and GS1280, was provided by HP Services as of 2008; these systems are no longer supported by HP.
Models
In approximate chronological order, the following AlphaServer models were produced:
Avanti Family
Sable Family
The AlphaServer 2100 was briefly sold as the Digital 2100 before the AlphaServer brand was introduced.
Mikasa Family
Noritake Family
Rawhide Family
Turbolaser Family
Lynx Family
Tsunami Family
Titan Family
Wildfire Family
AlphaServer SC
The AlphaServer SC was a supercomputer constructed from a set of individual DS20L, ES40 or ES45 servers (called "nodes") mounted in racks. Every node was connected to every other node using a Quadrics el
|
https://en.wikipedia.org/wiki/DEC%203000%20AXP
|
DEC 3000 AXP was the name given to a series of computer workstations and servers, produced from 1992 to around 1995 by Digital Equipment Corporation. The DEC 3000 AXP series formed part of the first generation of computer systems based on the 64-bit Alpha AXP architecture. Supported operating systems for the DEC 3000 AXP series were DEC OSF/1 AXP (later renamed Digital UNIX) and OpenVMS AXP (later renamed OpenVMS).
All DEC 3000 AXP models used the DECchip 21064 (EV4) or DECchip 21064A (EV45) processor and inherited various features from the earlier MIPS architecture-based DECstation models, such as the TURBOchannel bus and the I/O subsystem.
The DEC 3000 AXP series was superseded in late 1994, with workstation models replaced by the AlphaStation line and server models replaced by the AlphaServer line.
Models
There were three DEC 3000 model families, codenamed Pelican, Sandpiper, and Flamingo. Within Digital, this led to the DEC 3000 series being affectionately referred to as "the seabirds".
Note: Server configurations of the Model 400/500/600/700/800/900 systems were suffixed with "S".
Description
The logic in Flamingo- and Sandpiper-based systems is contained on two modules (printed circuit boards), the CPU module and the I/O module, with the CPU module being the larger board. The two modules are connected via a 210-pin connector. The logic in Pelican-based systems is contained on the CPU module and the system module. The CPU module is a daughterboard that plugs into the system module and contains the CPU and the B-cache (L2 cache).
The architecture of the Flamingo- and Sandpiper-based systems is based around a crossbar switch implemented by an ADDR (Address) ASIC, four SLICE (data slice) ASICs and a TC (TURBOchannel) ASIC. These ASICs connect the various different width buses used in the system, allowing data to be transferred to the different subsystems. PALs were used to implement the control logic. The cache, memory and TURBOchannel controllers, as well
|
https://en.wikipedia.org/wiki/Microbial%20genetics
|
Microbial genetics is a subject area within microbiology and genetic engineering. Microbial genetics studies microorganisms for different purposes. The microorganisms that are observed are bacteria and archaea. Some fungi and protozoa are also subjects used to study in this field. The studies of microorganisms involve studies of genotype and expression systems. Genotypes are the inherited compositions of an organism. (Austin, "Genotype," n.d.) Genetic engineering is a field of work and study within microbial genetics. The usage of recombinant DNA technology is a process of this work. The process involves creating recombinant DNA molecules through manipulating a DNA sequence. The recombinant DNA is then introduced into a host organism. Cloning is also an example of genetic engineering.
Since the discovery of microorganisms by Robert Hooke and Antoni van Leeuwenhoek during the period 1665–1885, they have been used to study many processes and have had applications in various areas of study in genetics.
For example: microorganisms' rapid growth rates and short generation times are used by scientists to study evolution. Hooke's and van Leeuwenhoek's discoveries involved depictions, observations, and descriptions of microorganisms. Mucor is the microfungus that Hooke presented and depicted; his contribution was that Mucor became the first microorganism to be illustrated. Antoni van Leeuwenhoek's work on microscopic protozoa and microscopic bacteria yielded scientific observations and descriptions. These contributions were accomplished with a simple microscope, which led to the understanding of microbes today and continues to advance scientists' understanding.
Microbial genetics also has applications in being able to study processes and pathways that are similar to those found in humans such as drug metabolism.
Role in understanding evolution
Microbial genetics can focus on Charles Darwin's work and scientists have continued to study his work a
|
https://en.wikipedia.org/wiki/Food%20and%20Agriculture%20Organization%20Corporate%20Statistical%20Database
|
The Food and Agriculture Organization Corporate Statistical Database (FAOSTAT) website disseminates statistical data collected and maintained by the Food and Agriculture Organization (FAO). FAOSTAT data are provided as a time-series from 1961 in most domains for 245 countries in English, Spanish and French.
About FAOSTAT
FAOSTAT is maintained by the Statistics Division of the Food and Agriculture Organization of the United Nations. In working directly with the countries, the Statistics Division supports the development of national statistical strategies, the strengthening of institutional and technical capacities, and the improvement of statistical systems.
The FAOSTAT system is one of FAO’s most important corporate systems. It is a major component of FAO’s information systems, contributing to the organization’s strategic objective of collecting, analyzing, interpreting, and disseminating information relating to nutrition, food and agriculture for development and the fight against global hunger and malnutrition. It is at the core of the World Agricultural Information Centre (WAICENT). WAICENT gives access to FAO’s vast store of information on agricultural and food topics – statistical data, documents, books, images, and maps.
FAOSTAT Domains
Production – The Agricultural Production domain covers produced quantities, producer prices, value at farmgate, harvested area, and yield per hectare.
Trade – The Agriculture Trade domain provides comprehensive, comparable, and up-to-date annual trade statistics by country, region and economic country groups for about 600 individual food and agriculture commodities since 1961. The detailed food and agriculture trade data collected, processed and disseminated by FAO according to the standard International Merchandise Trade Statistics Methodology (IMTS) are directly submitted by the national authorities to FAO or received via international or regional partner organizations. Data for missing reporters are estimated mainly through the use
|
https://en.wikipedia.org/wiki/Paver%20%28vehicle%29
|
A paver (road paver finisher, asphalt finisher, road paving machine) is a piece of construction equipment used to lay asphalt concrete or Portland cement concrete on roads, bridges, parking lots and other such places. It lays the material flat and provides minor compaction. This is typically followed by final compaction by a road roller.
History
The asphalt paver was developed by the Barber Greene Company, which originally manufactured material handling systems. In 1929 the Chicago Testing Laboratory approached the company to use its material loaders to construct asphalt roads. This did not result in a partnership, but Barber Greene did develop a machine based on the concrete pavers of the day that mixed and placed the concrete in a single process. This setup did not prove as effective as desired, so the processes were separated and the modern paver was on its way. In 1933 the independent float screed was invented and, when combined with the tamper bar, provided for uniform material density and thickness. Barber Greene filed a patent for a "Machine for and process of laying roads" on 10 April 1936 and received the patent on 6 December 1938. The main features of this paver have been incorporated into most pavers since, although improvements have been made to control of the machine.
Operation
The asphalt is added from a dump truck or a material transfer unit into the paver's hopper. The conveyor then carries the asphalt from the hopper to the auger. The auger places a stockpile of material in front of the screed. The screed takes the stockpile of material and spreads it over the width of the road and provides initial compaction.
The paver should provide a smooth uniform surface behind the screed. In order to provide a smooth surface, a free-floating screed is used. It is towed at the end of a long arm, which reduces the effect of the base topography on the final surface. The height of the screed is controlled by a number of factors including the attack angle of the screed,
|
https://en.wikipedia.org/wiki/NuFW
|
NuFW is a software package that extends Netfilter, the Linux kernel-internal packet filtering firewall module. NuFW adds authentication to filtering rules. NuFW is also provided as a hardware firewall, in the EdenWall firewalling appliance. NuFW has been restarted by the FFI and renamed UFWI.
Introduction
NuFW / UFWI is an extension of Netfilter which brings the notion of the user to IP filtering.
NuFW / UFWI can:
Authenticate any connection that goes through your gateway or only from/to a chosen subset or a specific protocol (iptables is used to select the connections to authenticate).
Perform accounting, routing and Quality of service (QOS) based on users and not simply on IPs.
Filter packets with criteria such as application and OS used by distant users.
Be the key of a secure and simple Single Sign On system.
Principles
NuFW / UFWI rejects the assumption that an IP address identifies a user, as an IP address can easily be spoofed. It thus uses its own algorithm to perform authentication, which depends on two subsystems: Nufw, which is connected to Netfilter, and Nuauth, which is connected to clients and to Nufw.
The algorithm is the following:
A standard application sends a packet.
The Nufw client sees that a connection is being initiated and sends a user request packet.
The Nufw server queues the packet and sends an auth request packet to the Nuauth server.
The Nuauth server sums the auth request and the user request packet and checks this against an authentication authority.
The Nuauth server sends answer back to the Nufw server
The Nufw server transmits the packet following the answer given to its request.
This algorithm realizes an a posteriori authentication of the connection. As there is no time-based association, this ensures the identity of the user who sent the packet.
NuFW is presented as a true authenticating firewall, as it never associates a user with a machine address.
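A toy simulation of the message flow above, using invented data structures; the real NuFW/UFWI wire protocol and APIs are not reproduced here:

```python
# Toy simulation of the queue-then-decide flow. Purely illustrative.
AUTHORITY = {'alice': 'secret'}                 # stand-in for LDAP/PAM/...

def nuauth_decide(user_request, auth_request):
    # Nuauth matches the user request against the auth request for the
    # queued packet, then checks the identity with the authority.
    user, password = user_request
    return AUTHORITY.get(user) == password and user == auth_request['user']

def nufw_filter(packet, user_request):
    queued = [packet]                           # Nufw queues the packet...
    accept = nuauth_decide(user_request, {'user': packet['user']})
    # ...and transmits or drops it according to Nuauth's answer
    return [(p, 'ACCEPT' if accept else 'DROP') for p in queued]

print(nufw_filter({'user': 'alice', 'dst': '10.0.0.1:22'}, ('alice', 'secret')))
```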
Awards
2007 : Lutèce d'Or (Paris, France), Best Innovation
2005 : Les Trophées du Libre (Soissons, Franc
|
https://en.wikipedia.org/wiki/Advanced%20silicon%20etching
|
Advanced silicon etching (ASE) is a deep reactive-ion etching (DRIE) technique to etch deep and high-aspect-ratio structures in silicon. ASE was created by Surface Technology Systems Plc (STS) in 1994 in the UK, and STS has continued to develop the process for faster etch rates. STS developed and first implemented the switched process, originally invented by Dr. Larmer at Bosch, Stuttgart. ASE consists of combining the faster etch rates achieved in an isotropic Si etch (usually making use of an SF6 plasma) with a deposition or passivation process (usually utilising a C4F8 plasma condensation process) by alternating the two process steps. This approach achieves the fastest etch rates while maintaining the ability to etch anisotropically, typically vertically, in microelectromechanical systems (MEMS) applications.
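A rough way to see the switched-process arithmetic is as repeated cycles of etch and passivation steps; the rates and step times below are invented placeholders, not a qualified recipe:

```python
# Illustrative model of a switched DRIE recipe: alternating SF6 etch and
# C4F8 passivation steps. All numbers are made-up examples.
etch_rate_um_per_s = 0.05      # vertical removal during the SF6 step
etch_step_s = 7.0
passivation_step_s = 5.0

def cycles_for_depth(target_depth_um):
    per_cycle_um = etch_rate_um_per_s * etch_step_s   # depth gained per cycle
    cycles = int(round(target_depth_um / per_cycle_um))
    total_min = cycles * (etch_step_s + passivation_step_s) / 60
    return cycles, total_min

print(cycles_for_depth(100.0))  # ~286 cycles, ~57 minutes
```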
The ASE HRM is an improvement of the previous generations of ICP design, now incorporating a decoupled plasma source (patent pending). The decoupled source generates high-density plasma which is allowed to diffuse into a separate process chamber. Using a specialized chamber design, the excess ions (which negatively affect process control) are reduced, leaving a uniform distribution of fluorine free-radicals at a higher density than that available from the conventional ICP sources. The higher fluorine free-radical density facilitates increased etch rates, typically over three times the etch rates achieved with the original process.
Notes
References
.
Further reading
Surface Technology Systems
Semiconductor device fabrication
Microtechnology
Etching (microfabrication)
|
https://en.wikipedia.org/wiki/Pepper%20Pad
|
The Pepper Pad was a family of Linux-based mobile computers with Internet capability which doubled as handheld game consoles. They also served as portable multimedia devices. The devices used Bluetooth and Wi-Fi technologies for Internet connection. Pepper Pads are now obsolete and unsupported, and the parent company has ceased operations.
The original prototype Pepper Pad was built in 2003 with an ARM-based PXA255 processor running at 400 MHz, an 8-inch touchscreen in portrait mode, a split QWERTY keyboard, and Wi-Fi. Only 6 were made, and it was never offered for sale. The Pepper Pad was a 2004 Consumer Electronics Show (CES) Innovations Awards Honoree in the Computer Hardware category.
The Pepper Pad 2 was introduced in 2004 with a faster 624 MHz PXA270 processor, and the screen was rotated to a landscape format. The Pepper Pad 2 was the first Pepper Pad offered for commercial sale. The Pepper Pad and Pepper Pad 2 both ran Pepper's proprietary Pepper Keeper application on top of a heavily customized version of the Montavista Linux operating system.
The Pepper Pad 3 was announced in 2006 with an upgrade to a faster AMD Geode processor. The Pepper Pad 3 also used a smaller 7" screen for cost savings. Like previous versions, the Pepper Pad 3 had a split QWERTY button keyboard, built-in microphone, video camera, composite video output, stereo speakers, infrared receiver and transmitter, 800×480 7-inch LCD touchscreen (with stylus), SD/MMC Flash memory slot, 20 or 30 GB hard disk, 256 MB RAM, 256 KB ROM, and both Wi-Fi (b/g) and Bluetooth 2.0. The Pepper Pad 3 used a heavily customized version of the Fedora Linux operating system called Pepper Linux. Unlike the Pepper Pad 2, which was built and sold directly by Pepper, the Pepper Pad 3 was built and sold under license by Hanbit Electronics.
Support
Pepper Computer, Inc. has ceased operations and is no longer providing support or sales for Pepper Pad web computers or Pepper Linux.
Software
Pepper Pads ran Pepp
|
https://en.wikipedia.org/wiki/Frequency%20divider
|
A frequency divider, also called a clock divider or scaler or prescaler, is a circuit that takes an input signal of a frequency, $f_{in}$, and generates an output signal of a frequency

$$f_{out} = \frac{f_{in}}{n},$$

where $n$ is an integer. Phase-locked loop frequency synthesizers make use of frequency dividers to generate a frequency that is a multiple of a reference frequency. Frequency dividers can be implemented for both analog and digital applications.
Analog
Analog frequency dividers are less common and used only at very high frequencies. Digital dividers implemented in modern IC technologies can work up to tens of GHz.
Regenerative
A regenerative frequency divider, also known as a Miller frequency divider, mixes the input signal with the feedback signal from the mixer.
The feedback signal is $f_{in}/2$. This produces sum and difference frequencies $f_{in}/2$ and $3f_{in}/2$ at the output of the mixer. A low-pass filter removes the higher frequency, and the $f_{in}/2$ frequency is amplified and fed back into the mixer, so the circuit divides by two.
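The mixing arithmetic can be checked numerically; the sketch below assumes the divider has already reached steady state, with the feedback tone supplied directly rather than generated by the loop:

```python
import numpy as np

fs, f_in = 1_000_000, 10_000            # sample rate and input frequency, Hz
t = np.arange(0, 0.01, 1 / fs)
# Mix the input with the steady-state feedback at f_in/2
mixed = np.cos(2*np.pi*f_in*t) * np.cos(2*np.pi*(f_in/2)*t)
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(peaks)                            # [ 5000. 15000.] -> f_in/2 and 3*f_in/2
```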
Injection-locked
A free-running oscillator which has a small amount of a higher-frequency signal fed to it, will tend to oscillate in step with the input signal. Such frequency dividers were essential in the development of television.
It operates similarly to an injection locked oscillator. In an injection-locked frequency divider, the frequency of the input signal is a multiple (or fraction) of the free-running frequency of the oscillator. While these frequency dividers tend to be lower power than broadband static (or flip-flop-based) frequency dividers, the drawback is their low locking range. The ILFD locking range is inversely proportional to the quality factor (Q) of the oscillator tank. In integrated circuit designs, this makes an ILFD sensitive to process variations. Care must be taken to ensure that the tuning range of the driving circuit (for example, a voltage-controlled oscillator) falls within the input locking range of the ILFD.
Digital
For power-of-2 integer division, a simple binary counter can be u
|
https://en.wikipedia.org/wiki/Cross-entropy%20method
|
The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.
The method approximates the optimal importance sampling estimator by repeating two phases:
Draw a sample from a probability distribution.
Minimize the cross-entropy between this distribution and a target distribution to produce a better sample in the next iteration.
Reuven Rubinstein developed the method in the context of rare event simulation, where tiny probabilities must be estimated, for example in network reliability analysis, queueing models, or performance analysis of telecommunication systems. The method has also been applied to the traveling salesman, quadratic assignment, DNA sequence alignment, max-cut and buffer allocation problems.
Estimation via importance sampling
Consider the general problem of estimating the quantity

$$\ell = \mathbb{E}_{\mathbf{u}}[H(\mathbf{X})] = \int H(\mathbf{x}) \, f(\mathbf{x}; \mathbf{u}) \, d\mathbf{x},$$

where $H$ is some performance function and $f(\mathbf{x}; \mathbf{u})$ is a member of some parametric family of distributions. Using importance sampling this quantity can be estimated as

$$\hat{\ell} = \frac{1}{N} \sum_{i=1}^{N} H(\mathbf{X}_i) \frac{f(\mathbf{X}_i; \mathbf{u})}{g(\mathbf{X}_i)},$$

where $\mathbf{X}_1, \dots, \mathbf{X}_N$ is a random sample from $g$. For positive $H$, the theoretically optimal importance sampling density (PDF) is given by

$$g^*(\mathbf{x}) = \frac{H(\mathbf{x}) f(\mathbf{x}; \mathbf{u})}{\ell}.$$

This, however, depends on the unknown $\ell$. The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF $g^*$.
Generic CE algorithm
Choose initial parameter vector $\mathbf{v}^{(0)}$; set $t = 1$.
Generate a random sample $\mathbf{X}_1, \dots, \mathbf{X}_N$ from $f(\cdot; \mathbf{v}^{(t-1)})$.
Solve for $\mathbf{v}^{(t)}$, where
$$\mathbf{v}^{(t)} = \operatorname*{argmax}_{\mathbf{v}} \frac{1}{N} \sum_{i=1}^{N} H(\mathbf{X}_i) \frac{f(\mathbf{X}_i; \mathbf{u})}{f(\mathbf{X}_i; \mathbf{v}^{(t-1)})} \log f(\mathbf{X}_i; \mathbf{v}).$$
If convergence is reached then stop; otherwise, increase $t$ by 1 and reiterate from step 2.
In several cases, the solution to step 3 can be found analytically. Situations in which this occurs are:
When $f$ belongs to the natural exponential family
When $f$ is discrete with finite support
When $H(\mathbf{x}) = \mathbf{1}_{\{\mathbf{x} \in A\}}$ and $f(\mathbf{X}_i; \mathbf{u}) = f(\mathbf{X}_i; \mathbf{v}^{(t-1)})$, then $\mathbf{v}^{(t)}$ corresponds to the maximum likelihood estimator based on those $\mathbf{X}_i \in A$.
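For intuition, here is a minimal sketch of a CE iteration for one-dimensional continuous maximization with a Gaussian family, using the common elite-sample simplification of step 3 (the function, seed and hyperparameters are illustrative):

```python
import numpy as np

def cross_entropy_maximize(f, mu=0.0, sigma=10.0, n=100, n_elite=10, iters=100):
    """Maximize f by iterating: sample, keep the elite, refit the Gaussian."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        x = rng.normal(mu, sigma, n)
        elite = x[np.argsort(f(x))[-n_elite:]]   # best-performing samples
        mu, sigma = elite.mean(), elite.std()    # CE update for a Gaussian family
        if sigma < 1e-10:                        # distribution has collapsed
            break
    return mu

# Example: a smooth function with its maximum at x = 2
print(cross_entropy_maximize(lambda x: -(x - 2.0)**2))   # ~2.0
```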
Continuous optimization—example
The same CE algorithm
|
https://en.wikipedia.org/wiki/Semi-deciduous
|
Semi-deciduous or semi-evergreen is a botanical term which refers to plants that lose their foliage for a very short period, when old leaves fall off and new foliage growth is starting. This phenomenon occurs in tropical and sub-tropical woody species, for example in Dipteryx odorata. Semi-deciduous or semi-evergreen may also describe some trees, bushes or plants that normally only lose part of their foliage in autumn/winter or during the dry season, but might lose all their leaves in a manner similar to deciduous trees in an especially cold autumn/winter or severe dry season (drought).
See also
Brevideciduous
Evergreen
Marcescence
Hedera
References
Botany
|
https://en.wikipedia.org/wiki/Carrier%20recovery
|
A carrier recovery system is a circuit used to estimate and compensate for frequency and phase differences between a received signal's carrier wave and the receiver's local oscillator for the purpose of coherent demodulation.
In the transmitter of a communications carrier system, a carrier wave is modulated by a baseband signal. At the receiver, the baseband information is extracted from the incoming modulated waveform.
In an ideal communications system, the carrier signal oscillators of the transmitter and receiver would be perfectly matched in frequency and phase, thereby permitting perfect coherent demodulation of the modulated baseband signal.
However, transmitters and receivers rarely share the same carrier oscillator. Communications receiver systems are usually independent of transmitting systems and contain their own oscillators, with frequency and phase offsets and instabilities. Doppler shift may also contribute to frequency differences in mobile radio frequency communications systems.
All these frequency and phase variations must be estimated using the information in the received signal to reproduce or recover the carrier signal at the receiver and permit coherent demodulation.
Methods
For a quiet carrier or a signal containing a dominant carrier spectral line, carrier recovery can be accomplished with a simple band-pass filter at the carrier frequency or with a phase-locked loop, or both.
However, many modulation schemes make this simple approach impractical because most signal power is devoted to modulation—where the information is present—and not to the carrier frequency. Reducing the carrier power results in greater transmitter efficiency. Different methods must be employed to recover the carrier in these conditions.
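As one classical illustration (a non-data-aided approach of the kind described below): for BPSK, squaring the received signal strips the ±1 data and regenerates a spectral line at twice the frequency offset, which can then be located. A simplified baseband sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_offset, n = 100_000, 123.0, 50_000
t = np.arange(n) / fs
symbols = rng.integers(0, 2, n) * 2 - 1            # BPSK, one sample per symbol
x = symbols * np.exp(2j * np.pi * f_offset * t)    # suppressed carrier + offset
line = x**2                                        # squaring removes the +/-1 data
freqs = np.fft.fftfreq(n, 1 / fs)
estimate = freqs[np.argmax(np.abs(np.fft.fft(line)))] / 2
print(estimate)                                    # ~123 Hz frequency offset
```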
Non-data-aided
Non-data-aided/“blind” carrier recovery methods do not rely on knowledge of the modulation symbols. They are typically used for simple carrier recovery schemes or as the initial coarse carrier frequency recovery meth
|
https://en.wikipedia.org/wiki/Handheld%20television
|
A handheld television is a portable device for watching television that usually uses a TFT LCD, OLED or CRT color display. Many of these devices resemble handheld transistor radios.
History
In 1970, Panasonic released the first TV small enough to fit in a large pocket, called the Panasonic IC TV MODEL TR-001; Sinclair Research later released the second pocket television, the MTV-1. Since LCD technology was not yet mature at the time, the TV used a minuscule CRT, which set the record for the smallest CRT in a commercially marketed product.
Later, in 1982, Sony released its first model, the FD-200, which was introduced as a "Flat TV" and later renamed the Watchman, a play on the word Walkman. It initially had grayscale video. Several years later, a color model with an active-matrix LCD was released. Some smartphones integrate a television receiver, although Internet broadband video is far more common.
Since the switch-over to digital broadcasting, handheld TVs have reduced in size and improved in quality. Portable TV was eventually brought to digital TV with DVB-H, although it didn't see much success. The major current manufacturers of DVB-T standard (common throughout Europe) handheld TVs are August International, ODYS and Xoro.
Hardware
These devices often have stereo 1⁄8 inch (3.5 mm) phono plugs for composite video-analog mono audio relay to serve them as composite monitors; also, some models have mono 3.5 mm jacks for the broadcast signal that is usually relayed via F connector or Belling-Lee connector on standard television models.
Some include HDMI, USB and SD ports.
Screen sizes vary from . Some handheld televisions also double as portable DVD players and USB personal video recorders.
Size
Portable televisions cannot fit in a pocket, but often run on batteries and include a cigarette lighter receptacle plug.
Pocket televisions fit in a pocket.
Wearable televisions sometimes are made in the form of a wristwatch.
Notab
|
https://en.wikipedia.org/wiki/OpenGrok
|
OpenGrok is a source code search and cross-reference engine. It helps programmers to search, cross-reference, and navigate source code trees to aid code comprehension.
It can understand various program file formats and version control histories such as Monotone, SCCS, RCS, CVS, Subversion, Mercurial, Git, Clearcase, Perforce, and Bazaar.
The name comes from the term grok, a computing jargon term meaning 'intuitive understanding.'
OpenGrok is being developed mainly by the community with the help of a few engineers from Oracle Corporation (which absorbed Sun Microsystems). OpenGrok is released under the terms of the Common Development and Distribution License (CDDL).
It is mainly written in Java, with some tooling done in Python. It relies on the analysis done by Ctags. There is an official Docker image available.
Features
OpenGrok supports:
Full text Search
Definition Search
Identifier Search
Path search
History Search
Shows matching lines
Hierarchical Search
query syntax like AND, OR, field:
Incremental update
Syntax highlighting cross references (Xref)
Quick navigation inside the file
Interface for SCM
Usable URLs
Individual file download
Changes at directory level
Multi language support
Suggester
RESTful API
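For example, the search feature can be driven over the REST API; the sketch below assumes a local deployment URL, and the endpoint and parameter names should be checked against the documented API for your OpenGrok version:

```python
import requests

# Assumed deployment URL; adjust for your instance and webapp context path.
BASE = "http://localhost:8080/source/api/v1"

# Full-text search limited to one project; parameter names follow the
# documented search API but may differ across versions.
resp = requests.get(f"{BASE}/search",
                    params={"full": "malloc", "projects": "myproject"})
resp.raise_for_status()
for path, hits in resp.json().get("results", {}).items():
    print(path, len(hits))
```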
See also
LXR Cross Referencer
ViewVC
FishEye (software)
References
External links
OpenGrok demo site
Super User's BSD Cross Reference
Code comprehension tools
Code navigation tools
Cross-platform free software
Free version control software
Source code
Java platform software
Concurrent Versions System
Apache Subversion
Code search engines
|
https://en.wikipedia.org/wiki/Sony%20NEWS
|
The Sony NEWS ("Network Engineering Workstation", later "NetWorkStation") is a series of Unix workstations sold during the late 1980s and 1990s. The first NEWS machine was the NWS-800, which originally appeared in Japan in January 1987 and was conceived as a desktop replacement for the VAX series of minicomputers.
History
1980s
Sony's NEWS project leader, Toshitada Doi, originally wanted to develop a computer for business applications, but his engineers wanted to develop a replacement for the minicomputers running Unix that they preferred to use.
Initial development of the NEWS was completed in 1986 after only one year of development. It launched at a lower price than competitors (–16,300), and it outperformed conventional minicomputers. After a successful launch, the line expanded and the new focus for the NEWS became desktop publishing and CAD/CAM.
1990s
In 1991, Sony broadened the NEWS range with the 3250 portable workstation, reportedly described in product literature as a laptop but weighing 18 pounds and having more in common with portable computers, being "designed to be set up on a desk and plugged in". Featuring an 11-inch monochrome liquid crystal display with a resolution of and keyboard with "75 full travel keys", the machine was fitted with an internal hard drive and a 3.5-inch floppy drive. A SCSI port permitted the addition of other storage devices, and Ethernet, parallel and serial ports were provided, along with a mouse port and audio in/out ports for audio processing. In terms of its fundamental computing facilities, the system employed a 20 MHz MIPS R3000 CPU with R3010 floating-point coprocessor, offered 8 MB of RAM expandable to 36 MB, running an implementation of Unix System V Release 4 and providing an Open Software Foundation Motif graphical environment. In the United States, a configuration with 240 MB hard drive cost $9,900, with the 406 MB configuration costing $11,900.
Early PlayStation development kits were based on Sony NEWS hardware,
|
https://en.wikipedia.org/wiki/Gluster
|
Gluster Inc. (formerly known as Z RESEARCH) was a software company that provided an open source platform for scale-out public and private cloud storage. The company was privately funded and headquartered in Sunnyvale, California, with an engineering center in Bangalore, India. Gluster was funded by Nexus Venture Partners and Index Ventures. Gluster was acquired by Red Hat on October 7, 2011.
History
The name Gluster comes from the combination of the terms GNU and cluster. Despite the similarity in names, Gluster is not related to the Lustre file system and does not incorporate any Lustre code.
Gluster based its product on GlusterFS, an open-source software-based network-attached filesystem that deploys on commodity hardware. The initial version of GlusterFS was written by Anand Babu Periasamy, Gluster's founder and CTO.
In May 2010 Ben Golub became the president and chief executive officer.
Red Hat became the primary author and maintainer of the GlusterFS open-source project after acquiring the Gluster company in October 2011.
The product was first marketed as Red Hat Storage Server, but in early 2015 it was renamed Red Hat Gluster Storage, since Red Hat had also acquired the Ceph file system technology.
Red Hat Gluster Storage is in the retirement phase of its lifecycle, with an end-of-support-life date of December 31, 2024.
Architecture
The GlusterFS architecture aggregates compute, storage, and I/O resources into a global namespace. Each server plus attached commodity storage (configured as direct-attached storage, JBOD, or using a storage area network) is considered to be a node. Capacity is scaled by adding additional nodes or adding additional storage to each node. Performance is increased by deploying storage among more nodes. High availability is achieved by replicating data n-way between nodes.
Public cloud deployment
For public cloud deployments, GlusterFS offers an Amazon Web Services (AWS) Amazon Machine Image (AMI), which is deployed on Elastic Comput
|
https://en.wikipedia.org/wiki/ILiad
|
The iLiad was an electronic handheld device, or e-Reader, which could be used for document reading and editing. Like the Barnes and Noble nook, Sony Reader or Amazon Kindle, the iLiad made use of an electronic paper display. In 2010, sales of the iLiad ended when its parent company, iRex Technologies, filed for bankruptcy.
Description
Main specifications:
electronic paper display, area for displaying content is 124x165mm
resolution of 768x1024 pixels, 160 dpi
16 levels of grayscale
USB connector for external storage
CompactFlash Type II slot for memory extension or other applications
MultiMediaCard slot for MMC memory cards
3.5 mm stereo audio jack for a headset
WiFi 802.11g wireless LAN
10/100 Mbit/s wired LAN
weight
400 MHz Intel XScale processor
64 MB RAM
256 MB internal flash memory, 128 for user, 128 for system
Linux-based operating system, 2.4 kernel
SDK provided, so functionality is easily extended
It measures 155 × 216 × 16 mm (width × height × depth), the size of an A5 document, or roughly a 6"×9" steno notebook. The display used is an active matrix electrophoretic display, which uses E Ink Vizplex Imaging Film manufactured by E Ink Corporation. Underneath the E Ink screen is a digitizing tablet by WACOM which requires a stylus for input. When it was introduced, the iLiad had the largest screen size of existing e-paper products, but the newer iRex Digital Reader 1000's display is the largest sold as of early 2011.
The iLiad can display document files in several formats, including PDF, Mobipocket, XHTML and plain text. It can also display JPEG, BMP and PNG images, but not in color. As of May 3, 2007 Mobipocket is supported, making the mobipocket digital rights management (DRM) content available on this platform. iRex's product page for the iLiad states that "Support for additional E-book formats will become available over the coming months."
Through its wireless service, iDS, the iLiad can also directly download content. Les Echos, a F
|
https://en.wikipedia.org/wiki/Display%20%28zoology%29
|
Display behaviour is a set of ritualized behaviours that enable an animal to communicate to other animals (typically of the same species) about specific stimuli. Such ritualized behaviours can be visual, but many animals depend on a mixture of visual, audio, tactile and chemical signals. Evolution has tailored these stereotyped behaviours to allow animals to communicate both conspecifically and interspecifically, which allows for a broader connection in different niches in an ecosystem. It is connected to sexual selection and survival of the species in various ways. Typically, display behaviour is used for courtship between two animals and to signal to the female that a viable male is ready to mate. In other instances, species may make territorial displays, in order to preserve a foraging or hunting territory for its family or group. A third form is exhibited by tournament species in which males will fight in order to gain the 'right' to breed. Animals from a broad range of evolutionary hierarchies make use of display behaviours, from invertebrates such as the simple jumping spider to the more complex vertebrates like the harbour seal.
In animals
Invertebrates
Insects
Communication is important for animals throughout the animal kingdom. For example, since female praying mantids are sexually cannibalistic, the male typically uses a cryptic form of display. This is a series of creeping movements executed by the male as it approaches the female, with freezing whenever the female looks towards the male. However, according to laboratory studies conducted by Loxton in 1979, one type of mantis, Ephestiasula arnoena, shows both male and female counterparts performing overt and ritualized behaviour before mating. Both displayed a semaphore behaviour, meaning waving their front legs in a boxing fashion before the slow approach of the male from behind. This semaphore display communicates that both are ready for copulation.
Flies belonging to the genus Megaselia also show s
|
https://en.wikipedia.org/wiki/Matthieu%20Brouard
|
Matthieu Brouard was a French theologian, mathematician, philosopher and historian, who was born in Saint-Denis near Paris in 1520, and died in Geneva on July 15, 1576. He is also known as Matthieu Brouart or Béroalde and (in Latin) as Mattheus Beroaldus. He taught Greek to the young Thomas Bodley and was the father of François Béroalde de Verville.
References
1520 births
1576 deaths
16th-century French theologians
Theologians from the Republic of Geneva
French mathematicians
16th-century scientists from the Republic of Geneva
French Protestant theologians
|
https://en.wikipedia.org/wiki/Lakes%20of%20Wada
|
In mathematics, the lakes of Wada are three disjoint connected open sets of the plane or open unit square with the counterintuitive property that they all have the same boundary. In other words, for any point selected on the boundary of one of the lakes, the other two lakes' boundaries also contain that point.
More than two sets with the same boundary are said to have the Wada property; examples include Wada basins in dynamical systems. This property is rare in real-world systems.
The lakes of Wada were introduced by Kunizō Yoneyama (1917), who credited the discovery to Takeo Wada. His construction is similar to the construction by Brouwer (1910) of an indecomposable continuum, and in fact it is possible for the common boundary of the three sets to be an indecomposable continuum.
Construction of the lakes of Wada
The Lakes of Wada are formed by starting with a closed unit square of dry land, and then digging 3 lakes according to the following rule:
On day n = 1, 2, 3,... extend lake n mod 3 (= 0, 1, 2) so that it is open and connected and passes within a distance 1/n of all remaining dry land. This should be done so that the remaining dry land remains homeomorphic to a closed unit square.
After an infinite number of days, the three lakes are still disjoint connected open sets, and the remaining dry land is the boundary of each of the 3 lakes.
For example, the first five days might be (see the image on the right):
Dig a blue lake of width 1/3 passing within √2/3 of all dry land.
Dig a red lake of width 1/3² passing within √2/3² of all dry land.
Dig a green lake of width 1/3³ passing within √2/3³ of all dry land.
Extend the blue lake by a channel of width 1/3⁴ passing within √2/3⁴ of all dry land. (The small channel connects the thin blue lake to the thick one, near the middle of the image.)
Extend the red lake by a channel of width 1/3⁵ passing within √2/3⁵ of all dry land. (The tiny channel connects the thin red lake to the thick one, near the top left of the image.)
A variation of this construction can produce
|
https://en.wikipedia.org/wiki/Google%20Pay%20Send
|
Google Pay Send, previously known as Google Wallet, was a peer-to-peer payments service developed by Google before its merger into Google Pay. It allowed people to send and receive money from a mobile device or desktop computer.
In 2018, Android Pay and Google Wallet were unified into a single pay system called Google Pay. The old Wallet app was rebranded Google Pay Send, before it was discontinued as well in 2020.
Service
Google Pay is structured to allow its patrons to send money to each other. To send money, a Google Pay user enters the email address or phone number of the recipient. The recipient must then link that phone number or email address to a bank account in order to access those funds. If the recipient also has a Google Pay account, the funds will post to that account directly.
Users can link up to two bank accounts when the Wallet account is created. Received money goes to the Google Pay Balance and stays there until the user decides to cash out to a linked account.
The Google Pay app is available for free from either Google Play or the App Store. After downloading the app, the user creates a four-digit personal identification number (PIN) for managing everything within their Google Pay account. The PIN verifies access to the Wallet app on the user's mobile device.
Before it was discontinued on June 30, 2016, the Google Wallet Card was recognized by the Cirrus network operated by MasterCard (rather than the Plus network operated by Visa).
In September 2017, Google launched its first major payments service outside the United States, in Tez, India.
History
Early history
Google demonstrated the original version of the service at a press conference on May 26, 2011. The first app was released in the US only on September 19, 2011. Initially, the app only supported Mastercard cards issued by Citibank.
On May 15, 2013, Google announced the integration of Google Wallet and Gmail, allowing users to send money through Gmail attachments. While Google Wa
|
https://en.wikipedia.org/wiki/Trace%20amine-associated%20receptor
|
Trace amine-associated receptors (TAARs), sometimes referred to as trace amine receptors (TAs or TARs), are a class of G protein-coupled receptors that were discovered in 2001. TAAR1, the first of six functional human TAARs, has gained considerable interest in academic and proprietary pharmaceutical research due to its role as the endogenous receptor for the trace amines phenylethylamine, tyramine, and tryptamine – metabolic derivatives of the amino acids phenylalanine, tyrosine and tryptophan, respectively – ephedrine, as well as the synthetic psychostimulants, amphetamine, methamphetamine and methylenedioxymethamphetamine (MDMA, ecstasy). In 2004, it was shown that mammalian TAAR1 is also a receptor for thyronamines, decarboxylated and deiodinated relatives of thyroid hormones. TAAR2–TAAR9 function as olfactory receptors for volatile amine odorants in vertebrates.
Animal TAAR complement
The following is a list of the TAARs contained in selected animal genomes:
Human – 6 genes (TAAR1, TAAR2, TAAR5, TAAR6, TAAR8, TAAR9) and 3 pseudogenes (TAAR3, , )
Chimpanzee – 3 genes and 6 pseudogenes
Mouse – 15 genes and 1 pseudogene
Rat – 17 genes and 2 pseudogenes
Zebrafish – 112 genes and 4 pseudogenes
Frog – 3 genes and 0 pseudogenes
Medaka – 25 genes and 1 pseudogene
Stickleback – 25 genes and 1 pseudogene
Human trace amine-associated receptors
Six human trace amine-associated receptors (hTAARs) – hTAAR1, hTAAR2, hTAAR5, hTAAR6, hTAAR8, and hTAAR9 – have been identified and partially characterized. The table below contains summary information from literature reviews, pharmacology databases, and supplementary primary research articles on the expression profiles, signal transduction mechanisms, ligands, and physiological functions of these receptors.
Disease links and clinical significance
Ulotaront / SEP 363856 is a TAAR1 agonist in phase 3 clinical trials for schizophrenia and earlier trials for Parkinson's Disease psychosis. The medicine has obtained Brea
|
https://en.wikipedia.org/wiki/Projected%20dynamical%20system
|
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations. A projected dynamical system is given by the flow to the projected differential equation

$$\frac{dx(t)}{dt} = \Pi_K\bigl(x(t), -F(x(t))\bigr),$$

where $K$ is our constraint set. Differential equations of this form are notable for having a discontinuous vector field.
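A minimal sketch of how such a flow can be discretized, assuming a box constraint set and the projected Euler scheme x_{k+1} = P_K(x_k − h·F(x_k)); the names and the example field F are illustrative:

```python
import numpy as np

def project_box(x, lo, hi):
    # Closest-point projection onto the box K = [lo, hi]^n
    return np.clip(x, lo, hi)

def projected_euler(F, x0, lo=0.0, hi=1.0, h=0.01, steps=1000):
    """Discretize dx/dt = Pi_K(x, -F(x)) as x_{k+1} = P_K(x_k - h*F(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project_box(x - h * F(x), lo, hi)
    return x

# Example: F(x) = x - c drives trajectories toward c, clipped to K = [0, 1]^2
c = np.array([1.5, -0.25])
print(projected_euler(lambda x: x - c, x0=[0.5, 0.5]))  # ~[1.0, 0.0]
```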
History of projected dynamical systems
Projected dynamical systems have evolved out of the desire to dynamically model the behaviour of nonstatic solutions in equilibrium problems over some parameter, typically taken to be time. These dynamics differ from those of ordinary differential equations in that solutions are still restricted to whatever constraint set the underlying equilibrium problem was working on, e.g. nonnegativity of investments in financial modeling, convex polyhedral sets in operations research, etc. One particularly important class of equilibrium problems which has aided in the rise of projected dynamical systems has been that of variational inequalities.
The formalization of projected dynamical systems began in the 1990s. However, similar concepts can be found in the mathematical literature which predate this, especially in connection with variational inequalities and differential inclusions.
Projections and Cones
Any solution to our projected differential equation must remain inside of our constraint set K for all time. This desired result is achieved through the use of projection operators and two particular important classes of convex cones. Here we take K to be a closed, convex subset of some Hilbert space X.
The normal cone to the set $K$ at the point $x \in K$ is given by

$$N_K(x) = \{ p \in X \mid \langle p, x - x^* \rangle \geq 0, \ \forall x^* \in K \}.$$

The tangent cone (or contingent cone) to the set $K$ at the point $x$ is given by

$$T_K(x) = \overline{\bigcup_{h > 0} \frac{K - x}{h}}.$$
The projection operator (or closest element mapping) of
|