source | text |
|---|---|
https://en.wikipedia.org/wiki/Virtual%20private%20server | A virtual private server (VPS) is a virtual machine sold as a service by an Internet hosting service. The term "virtual dedicated server" (VDS) also has a similar meaning.
A virtual private server runs its own copy of an operating system (OS), and customers may have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes, it is functionally equivalent to a dedicated physical server and, being software-defined, can be created and configured more easily. A virtual server costs less than an equivalent physical server. However, as virtual servers share the underlying physical hardware with other VPSes, performance may be lower depending on the workload of any other executing virtual machines.
Virtualization
The force driving server virtualization is similar to that which led to the development of time-sharing and multiprogramming in the past. Although the resources are still shared, as under the time-sharing model, virtualization provides a higher level of security, dependent on the type of virtualization used, as the individual virtual servers are mostly isolated from each other and may run their own full-fledged operating system which can be independently rebooted as a virtual instance.
Partitioning a single server to appear as multiple servers has been increasingly common on microcomputers since the release of VMware ESX Server in 2001. The physical server typically runs a hypervisor which is tasked with creating, releasing, and managing the resources of "guest" operating systems, or virtual machines. These guest operating systems are allocated a share of resources of the physical server, typically in a manner in which the guest is not aware of any other physical resources except for those allocated to it by the hypervisor. As a VPS runs its own copy of its operating system, customers have superuser-level access to that operating system instance, and can install almost any software |
https://en.wikipedia.org/wiki/Type%20conversion | In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted.
Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow forcing the compiler to arbitrarily interpret a data item as having different representations—this can be a non-obvious programming error, or a technical method to deal directly with the underlying hardware.
In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-in routines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch.
In most ALGOL-like languages, such as Pascal, Modula-2, Ada and Delphi, conversion and casting are distinctly different |
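The three behaviours described above — implicit coercion, explicit conversion, and reinterpretation of a representation — can be sketched in Python (a hypothetical illustration; Python is dynamically typed, so reinterpretation has to go through the `struct` module rather than a cast):

```python
import struct

# Implicit coercion: the int 5 is converted to float before addition,
# so the fractional part of 0.1 is not lost.
mixed = 5 + 0.1          # a float close to 5.1

# Explicit conversion: the programmer requests the change of type.
as_int = int("42")       # string -> int
truncated = int(7.9)     # float -> int, fraction discarded (7)

# Reinterpretation: the same 4 bytes read as two different types.
# struct.pack writes the float 1.0 as its IEEE-754 bit pattern;
# struct.unpack reads those bytes back as an unsigned 32-bit integer.
(bits,) = struct.unpack("<I", struct.pack("<f", 1.0))

print(mixed, as_int, truncated, hex(bits))  # ... 42 7 0x3f800000
```

The last line is exactly the "merely reinterpreted" case: no value conversion happens, only a re-reading of the bit pattern.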
https://en.wikipedia.org/wiki/Permeable%20paving | Permeable paving surfaces are made of either a porous material that enables stormwater to flow through it or nonporous blocks spaced so that water can flow between the gaps. Permeable paving can also include a variety of surfacing techniques for roads, parking lots, and pedestrian walkways. Permeable pavement surfaces may be composed of pervious concrete, porous asphalt, paving stones, or interlocking pavers. Unlike traditional impervious paving materials such as concrete and asphalt, permeable paving systems allow stormwater to percolate and infiltrate through the pavement and into the aggregate layers and/or soil below. In addition to reducing surface runoff, permeable paving systems can trap suspended solids, thereby filtering pollutants from stormwater.
Permeable pavement is commonly used on roads, paths and parking lots subject to light vehicular traffic, such as cycle-paths, service or emergency access lanes, road and airport shoulders, and residential sidewalks and driveways.
Description and applications
Permeable solutions can be based on porous asphalt and concrete surfaces, concrete pavers (permeable interlocking concrete paving systems – PICP), or polymer-based grass pavers, grids and geocells. Porous pavements such as pervious concrete and pervious asphalt are better suited for urbanized areas that see more frequent vehicular traffic, while concrete pavers, grids, and geocells are better suited for light vehicular traffic, pedestrian and cycling pathways, and overflow parking lots. Pervious concrete pavers allow water to percolate and infiltrate through the pavers and into the aggregate layers and/or soil below. Impervious concrete pavers installed with ample void space between each paver function in the same way as pervious concrete pavers as they enable stormwater to drain into the voids between each paver, either filled with coarse aggregate or vegetation, to a stone and/or soil base layer for on-site infiltration and filtering. Polymer based gra |
https://en.wikipedia.org/wiki/Sender%20Policy%20Framework | Sender Policy Framework (SPF) is an email authentication method which ensures the sending mail server is authorized to originate mail from the email sender's domain. This authentication only applies to the email sender listed in the "envelope from" field during the initial SMTP connection. If the email is bounced, a message is sent to this address, and for downstream transmission it typically appears in the "Return-Path" header. To authenticate the email address which is actually visible to recipients on the "From:" line, other technologies such as DMARC must be used. Forgery of this address is known as email spoofing, and is often used in phishing and email spam.
The list of authorized sending hosts and IP addresses for a domain is published in the DNS records for that domain. Sender Policy Framework is defined in RFC 7208 dated April 2014 as a "proposed standard".
History
The first public mention of the concept was in 2000 but went mostly unnoticed. No mention was made of the concept again until a first attempt at an SPF-like specification was published in 2002 on the IETF "namedroppers" mailing list by Dana Valerie Reese, who was unaware of the 2000 mention of the idea. The very next day, Paul Vixie posted his own SPF-like specification on the same list. These posts ignited a lot of interest, led to the forming of the IETF Anti-Spam Research Group (ASRG) and their mailing list, where the SPF idea was further developed. Among the proposals submitted to the ASRG were "Reverse MX" (RMX) by Hadmut Danisch, and "Designated Mailer Protocol" (DMP) by Gordon Fecyk.
In June 2003, Meng Weng Wong merged the RMX and DMP specifications and solicited suggestions from others. Over the next six months, a large number of changes were made and a large community had started working on SPF. Originally SPF stood for Sender Permitted From and was sometimes also called SMTP+SPF; but its name was changed to Sender Policy Framework in February 2004.
In early 2004, the IETF created th |
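A published SPF policy is simply a DNS TXT record on the sending domain, e.g. `"v=spf1 ip4:192.0.2.0/24 -all"` (hypothetical domain and addresses). The sketch below shows how a receiver might test a connecting IP against such a record; it handles only the `ip4` and `-all` mechanisms, a tiny subset of the RFC 7208 grammar, using just the standard library:

```python
import ipaddress

# Hypothetical policy for example.com:
#   hosts in 192.0.2.0/24 are authorized; everything else fails.
record = "v=spf1 ip4:192.0.2.0/24 -all"

def check_ip4(record: str, sender_ip: str) -> str:
    """Return 'pass' if sender_ip matches an ip4 mechanism, 'fail' on -all.

    Handles only ip4:<cidr> and -all, a small subset of RFC 7208."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:          # skip the "v=spf1" version tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:]):
                return "pass"
        elif term == "-all":
            return "fail"
    return "neutral"

print(check_ip4(record, "192.0.2.77"))    # pass
print(check_ip4(record, "198.51.100.1"))  # fail
```

A real verifier also resolves `include:`, `a`, `mx`, and macro terms, and distinguishes `-all` (fail) from `~all` (softfail); this sketch only illustrates the envelope-sender/IP matching idea.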
https://en.wikipedia.org/wiki/EE%20Times | EE Times (Electronic Engineering Times) is an electronics industry magazine published in the United States since 1972. EE Times is currently owned by AspenCore, a division of Arrow Electronics since August 2016.
Since its acquisition by AspenCore, EE Times has seen major editorial and publishing technology investment and a renewed emphasis on investigative coverage. New features include The Dispatch, which profiles frontline engineers and unpacks real-life design problems and their solutions in technical yet conversational reporting.
Ownership and status
EE Times was launched in 1972 by Gerard G. Leeds of CMP Publications Inc. In 1999, the Leeds family sold CMP to United Business Media for $900 million. After 2000, EE Times moved more into web publishing. The shift in advertising from print to online began to accelerate in 2007, and the periodical shed staff to adjust to the downturn in revenue.
In July 2013, the digital edition migrated to UBM TechWeb's DeusM community platform.
On June 3, 2016, UBM announced that EE Times, along with the rest of its electronics media portfolio (EDN, Embedded.com, TechOnline, and Datasheets.com), was being sold to AspenCore Media, a company owned by Arrow Electronics, for $23.5 million. The acquisition was completed on August 1, 2016.
Availability
EE Times is free for qualified design engineers, managers, and business and corporate management in the electronics industry. It is also available online; the EE Times website offers news, columns, and feature articles on semiconductor manufacturing, communications, electronic design automation, electronic engineering, technology, and products. In November 2012, UBM Electronics announced that the December 2012 issue of EE Times would be the last in print; from 2013 onward, EE Times became an online-only product.
In 2018, EE Times rolled out a refreshed website, resurrected its print edition in Europe, and launched a new radio show, EE Times On Air, available an hour after the live br |
https://en.wikipedia.org/wiki/McFarland%20standards | In microbiology, McFarland standards are used as a reference to adjust the turbidity of bacterial suspensions so that the number of bacteria will be within a given range to standardize microbial testing. An example of such testing is antibiotic susceptibility testing by measurement of minimum inhibitory concentration which is routinely used in medical microbiology and research. If a suspension used is too heavy or too dilute, an erroneous result (either falsely resistant or falsely susceptible) for any given antimicrobial agent could occur.
Original McFarland standards were made by mixing specified amounts of barium chloride and sulfuric acid together. Mixing the two compounds forms a barium sulfate precipitate, which causes turbidity in the solution. A 0.5 McFarland standard is prepared by mixing 0.05 mL of 1.175% barium chloride dihydrate (BaCl2•2H2O), with 9.95 mL of 1% sulfuric acid (H2SO4).
McFarland standards are now also prepared from suspensions of latex particles, which lengthens the shelf life and stability of the suspensions.
The standard can be compared visually to a suspension of bacteria in sterile saline or nutrient broth. If the bacterial suspension is too turbid, it can be diluted with more diluent. If the suspension is not turbid enough, more bacteria can be added.
McFarland nephelometer standards:[2]
*at wavelength of 600 nm
McFarland latex standards from Hardy Diagnostics (2014-12-10), measured at the UCSF DeRisi Lab:
References
McFarland, Joseph (1907). "The nephelometer: an instrument for estimating the number of bacteria in suspensions used for calculating the opsonic index and for vaccines". JAMA. XLIX(14): 1176–1178.
McFarland Standards — http://www.dalynn.com/dyn/ck_assets/files/tech/TM53.pdf
Microbiology equipment |
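Diluting a too-turbid suspension down to a standard is a plain C1·V1 = C2·V2 calculation. A hypothetical sketch, treating measured optical density at 600 nm as proportional to cell density (a reasonable assumption only at low OD; the target OD value below is illustrative):

```python
def diluent_to_add(od_measured: float, od_target: float, volume_ml: float) -> float:
    """Volume of sterile saline (mL) to add so that volume_ml of
    suspension at od_measured is diluted to od_target (C1*V1 = C2*V2).

    Assumes OD600 is proportional to cell density (valid at low OD)."""
    if od_measured < od_target:
        raise ValueError("suspension not turbid enough; add bacteria instead")
    final_volume = od_measured * volume_ml / od_target
    return final_volume - volume_ml

# e.g. 5 mL of suspension at OD600 = 0.20, illustrative target OD600 = 0.10:
print(diluent_to_add(0.20, 0.10, 5.0))  # 5.0 mL of diluent
```

The `ValueError` branch mirrors the text: if the suspension is not turbid enough, dilution cannot help and more bacteria must be added.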
https://en.wikipedia.org/wiki/Hong%20Kong%20Olympiad%20in%20Informatics | Hong Kong Olympiad in Informatics (HKOI; 香港電腦奧林匹克競賽) is an annual programming competition for secondary school students in Hong Kong, emphasizing problem-solving techniques and programming skills. It is co-organized by the Hong Kong Association for Computer Education (HKACE) and the Hong Kong Education Bureau (EDB). It serves as a preliminary contest to international, national and regional competitions such as the China National Olympiad in Informatics (NOI) and the International Olympiad in Informatics (IOI). The first HKOI was held in 1997.
History
Hong Kong first participated in IOI in 1992. In order to select representatives for the Hong Kong delegation team, a selection test was held a few months before the competition. In the following years, Hong Kong started sending teams to other competitions, including the SEARCC International Schools' Software Competition (ISSC) in 1993, the Software Competition for the Youths (SCY) in 1994 and the China National Olympiad in Informatics in 1995. Selection tests were separately administered for these competitions, and the purpose of each test was solely to select team members for the competitions. A considerable amount of resources was used to organize these tests. The tests were not very popular among students in Hong Kong.
In 1996, the Hong Kong Association for Computer Education, the Hong Kong Computer Society and the Education Department of Hong Kong (now the Education Bureau) jointly organized the Joint Selection Contest to replace all the selection tests. 39 students were selected as seeds for the Hong Kong teams. They received intensive training on topics like data structures and algorithms. After that, a Team Formation Test was conducted to select the Hong Kong representatives in IOI and NOI among the seeds. Another Team Formation Test was conducted for the SEARCC-ISSC and SCY.
In 1997, the Joint Selection Contest was renamed as the Hong Kong Olympiad in Informatics. Prizes are awarded to students with good re |
https://en.wikipedia.org/wiki/Star%20Trek%20project | Star Trek is the code name that was given to a secret prototype project, running a port of Macintosh System 7 and its applications on Intel-compatible x86 personal computers. The project, starting in February 1992, was conceived in collaboration between Apple Computer, who provided the majority of engineers, and Novell, who at the time was one of the leaders of cross-platform file-servers. The plan was that Novell would market the resulting OS as a challenge to Microsoft Windows, but the project was discontinued in 1993 and never released, although components were reused in other projects. The project was named after the Star Trek science fiction franchise with the slogan "To boldly go where no Mac has gone before".
History
The impetus for the creation of the Star Trek project began with Novell's desire to compete against the monopoly of Microsoft and its DOS-based Windows products. While Microsoft was eventually found, many years later, to have maintained an illegal monopoly, Novell had called Microsoft's presence "predatory" and the US Department of Justice had called it "exclusionary" and "unlawful". Novell's first idea to extend its desktop presence with a graphical computing environment was to adapt Digital Research's GEM desktop environment, but Novell's legal department rejected this due to apprehension of a possible legal response from Apple, so the company went directly to Apple. With shared concerns about the anti-competitive marketplace, Intel's CEO Andy Grove supported the two companies in launching their joint project Star Trek on February 14, 1992 (Valentine's Day).
Apple set a deadline of October 31, 1992 (Halloween Day), promising the engineering team members a performance bonus of a large cash award and a vacation in Cancun, Mexico. Of the project, team member Fred Monroe later reflected, "We worked like dogs. It was some of the most fun I've had working".
Achieving their deadline goal and receiving their bonuses, the developers eventuall |
https://en.wikipedia.org/wiki/Irradiance | In radiometry, irradiance is the radiant flux received by a surface per unit area. The SI unit of irradiance is the watt per square metre (W⋅m−2). The CGS unit erg per square centimetre per second (erg⋅cm−2⋅s−1) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called radiant flux.
Spectral irradiance is the irradiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The two forms have different dimensions and units: spectral irradiance of a frequency spectrum is measured in watts per square metre per hertz (W⋅m−2⋅Hz−1), while spectral irradiance of a wavelength spectrum is measured in watts per square metre per metre (W⋅m−3), or more commonly watts per square metre per nanometre (W⋅m−2⋅nm−1).
Mathematical definitions
Irradiance
Irradiance of a surface, denoted Ee ("e" for "energetic", to avoid confusion with photometric quantities), is defined as Ee = ∂Φe/∂A,
where
∂ is the partial derivative symbol;
Φe is the radiant flux received;
A is the area.
If we want to talk about the radiant flux emitted by a surface, we speak of radiant exitance.
Spectral irradiance
Spectral irradiance in frequency of a surface, denoted Ee,ν, is defined as Ee,ν = ∂Ee/∂ν,
where ν is the frequency.
Spectral irradiance in wavelength of a surface, denoted Ee,λ, is defined as Ee,λ = ∂Ee/∂λ,
where λ is the wavelength.
Property
Irradiance of a surface is also, according to the definition of radiant flux, equal to the time-average of the component of the Poynting vector perpendicular to the surface: Ee = ⟨|S|⟩ cos α,
where
⟨ · ⟩ is the time-average;
S is the Poynting vector;
α is the angle between a unit vector normal to the surface and S.
For a propagating sinusoidal linearly polarized electromagnetic plane wave, the Poynting vector always points to the direction of propagation while oscillating in magnitude. The irradi |
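For a linearly polarized plane wave in vacuum, the time-averaged Poynting magnitude reduces to ⟨|S|⟩ = ½ c ε0 E0², so the irradiance of a tilted surface is Ee = ½ c ε0 E0² cos α. A quick numerical sketch (constants rounded; the peak-field value below is an illustrative assumption):

```python
import math

C = 2.998e8        # speed of light in vacuum, m/s
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def irradiance(e0: float, alpha_rad: float = 0.0) -> float:
    """Irradiance Ee (W/m^2) of a surface whose normal makes angle
    alpha_rad with the propagation direction, for a linearly polarized
    sinusoidal plane wave of peak electric field e0 (V/m) in vacuum."""
    return 0.5 * C * EPS0 * e0**2 * math.cos(alpha_rad)

# An illustrative peak field of 1 kV/m at normal incidence:
print(irradiance(1000.0))  # ~1327 W/m^2
```

Tilting the surface scales the result by cos α, which is the "component perpendicular to the surface" in the formula above.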
https://en.wikipedia.org/wiki/International%20Alphabet%20of%20Sanskrit%20Transliteration | The International Alphabet of Sanskrit Transliteration (IAST) is a transliteration scheme that allows the lossless romanisation of Indic scripts as employed by Sanskrit and related Indic languages. It is based on a scheme that emerged during the 19th century from suggestions by Charles Trevelyan, William Jones, Monier Monier-Williams and other scholars, and formalised by the Transliteration Committee of the Geneva Oriental Congress, in September 1894. IAST makes it possible for the reader to read the Indic text unambiguously, exactly as if it were in the original Indic script. It is this faithfulness to the original scripts that accounts for its continuing popularity amongst scholars.
Usage
Scholars commonly use IAST in publications that cite textual material in Sanskrit, Pāḷi and other classical Indian languages.
IAST is also used for major e-text repositories such as SARIT, Muktabodha, GRETIL, and sanskritdocuments.org.
The IAST scheme represents more than a century of scholarly usage in books and journals on classical Indian studies. By contrast, the ISO 15919 standard for transliterating Indic scripts emerged in 2001 from the standards and library worlds. For the most part, ISO 15919 follows the IAST scheme, departing from it only in minor ways (e.g., ṃ/ṁ and ṛ/r̥)—see comparison below.
The Indian National Library at Kolkata romanization, intended for the romanisation of all Indic scripts, is an extension of IAST.
Inventory and conventions
The IAST letters are listed with their Devanagari equivalents and phonetic values in IPA, valid for Sanskrit, Hindi and other modern languages that use Devanagari script, but some phonological changes have occurred:
* H is actually glottal, not velar.
Some letters are modified with diacritics: Long vowels are marked with an overline. Vocalic (syllabic) consonants, retroflexes and ṣ have an underdot. One letter has an overdot: ṅ. One has an acute accent: ś. One letter has a line below: ḻ (Vedic).
Unlike ASCI |
https://en.wikipedia.org/wiki/Creatures%203 | Creatures 3 is the third game in the Creatures a-life game series made by Creature Labs. In this installment, the Shee have left Albia in a spaceship, the Shee Ark, to search for a more spherical world. The Ark was abandoned by the Shee because a meteor hit the ship, but the infrastructure still remains in working order.
There are 6 main "metarooms" on the ship; the Norn Terrarium, Grendel Jungle, Ettin Desert, Marine area, Bridge, and Engineering Room. The Norn Terrarium is where you can safely hatch and raise your norns. The Grendel Jungle is where the Grendel mother (egg-layer) is, and it is well suited for Grendels. The Ettin Desert is where the Ettin mother is; it is a dry, harsh environment for all creatures. The Bridge is where you will find the most gadgets, and also the most Ettins. The Engineering Room is where the Agent Creator is, with which you can create objects for your world.
Grendels are now vicious, unlike the original Creatures Grendels; these Grendels enjoy hunting down, and then killing norns. The Ettins, which first appeared in Creatures 2, love gathering gadgets and taking them back to the Ettin Desert.
Creatures 3 runs on the CAOS engine, which makes it highly moddable, although the engine is now retired and outdated.
In December 2021, Creatures 3 was released on the Steam games service.
See also
Creatures (artificial life program)
Bundles with Creatures 3
Docking Station (Creatures)
References
External links
Creatures 3 on the Creatures wiki
Creatures 3 source code, circa January 2000 at the Internet Archive.
Eurogamer review
CNN review
Artificial life
Creatures (video game series)
Creature Labs games
Virtual pet video games
Biological simulation video games
God games
Linux games
MacOS games
Windows games
1999 video games
Video game sequels
Video games developed in the United Kingdom
Linux Game Publishing games
Mindscape games |
https://en.wikipedia.org/wiki/Creatures%202 | Creatures 2 is the second game in the Creatures artificial life game series made by Creature Labs, and the sequel to the 1996 game Creatures. It features three species: the cute, dependent Norns, the cantankerous Grendels and the industrious Ettins. The game tries to simulate life, and includes a complex two-dimensional ecology of plants, animals and insects, which provide the environment for the three main species to live and develop in. The player interacts with the world using a hand-shaped cursor, and tries to encourage the creatures' development by manipulating various objects around the world, guiding the creatures using the cursor and encouraging the creatures to speak.
New gameplay features in Creatures 2 that were not present in the original game include a new physics model and a global weather system, along with brand-new applets and a world twice the size of the Creatures 1 world.
The executable file for the game was in fact an interpreter for its scripting language, thus allowing users to make total conversions or derivative works from the game.
Gameplay
Like the other games in the series, Creatures 2 is mostly open-ended, with no predetermined goals, allowing the player to raise Norns at their own pace. In each new world, the player begins in the incubator cavern area. The hatchery, one of the game's in-built applets, allows the player to add Norn eggs to the world, which can be hatched via the incubator. Norns may also be downloaded from the internet and imported for use in the game.
Once Norns reach adolescence at around an hour old, they are ready to breed, and female Norns will begin an oestrogen cycle. When Norns mate, there is a long kissing sound with a pop at the end – known as a kisspop. Like real life, not all mating results in a pregnancy. The Norn reproductive process can be monitored via the Breeder's Kit.
At the beginning, the player is only able to navigate a small area of Albia. As their Norns explore the world, however, more |
https://en.wikipedia.org/wiki/Graph%20%28abstract%20data%20type%29 | In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics.
A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines); for a directed graph they are also sometimes called arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references.
A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).
Operations
The basic operations provided by a graph data structure G usually include:
adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y;
neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y;
add_vertex(G, x): adds the vertex x, if it is not there;
remove_vertex(G, x): removes the vertex x, if it is there;
add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there;
remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there;
get_vertex_value(G, x): returns the value associated with the vertex x;
set_vertex_value(G, x, v): sets the value associated with the vertex x to v.
Structures that associate values to the edges usually also provide:
get_edge_value(G, x, y): returns the value associated with the edge (x, y);
set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v.
Common data structures for graph representation
Adjacency list
Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices.
|
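The basic operations map directly onto an adjacency-list representation. A minimal directed-graph sketch (method names chosen to mirror the abstract operations, not any specific library's API; edge values omitted for brevity):

```python
class Graph:
    """Directed graph stored as an adjacency list (dict of sets)."""

    def __init__(self):
        self._adj = {}   # vertex -> set of successor vertices

    def add_vertex(self, x):
        self._adj.setdefault(x, set())

    def remove_vertex(self, x):
        self._adj.pop(x, None)
        for successors in self._adj.values():
            successors.discard(x)        # drop edges pointing at x

    def add_edge(self, x, y):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x].add(y)

    def remove_edge(self, x, y):
        self._adj.get(x, set()).discard(y)

    def adjacent(self, x, y):
        return y in self._adj.get(x, set())

    def neighbors(self, x):
        return sorted(self._adj.get(x, set()))

g = Graph()
g.add_edge("a", "b")
g.add_edge("a", "c")
print(g.adjacent("a", "b"), g.neighbors("a"))  # True ['b', 'c']
```

Each vertex here stores only its outgoing edges, so `adjacent` and `add_edge` are O(1) on average, at the cost of `remove_vertex` scanning every adjacency set.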
https://en.wikipedia.org/wiki/The%20Human%20Factor%3A%20Revolutionizing%20the%20Way%20We%20Live%20with%20Technology | The Human Factor: Revolutionizing the Way People Live with Technology () is a book by Kim Vicente that Routledge published in 2004. Vicente asserts (as cited in the Optimize article listed in the "References" section) that technology in such settings as hospitals, airplanes, and nuclear power plants has significant room for improvement. Some of the specific industrial accidents Vicente analyzes are the Walkerton Tragedy and the Chernobyl Disaster. Also, for medical error, he details many fatal vincristine dosage errors.
Contents
Preface
Part One - Technology Wreaking Havoc
In this section Vicente gives examples of technology in modern life where Human-tech design could have helped increase effectiveness or even prevent disasters such as the Chernobyl Disaster.
Part Two - Technology For Humans
Part 2 of The Human Factor titled "Technology for Humans" is organized according to what Vicente calls "The Human-tech Ladder". This ladder consists of five levels relating to Human-tech design principles. These levels include physical, psychological, team, organizational, and political elements. Each section of the second part of The Human Factor focuses on one of these design principles, explaining fully how they relate to design and giving examples that exemplify them.
Part Three - Regaining Control Of Our Lives
In this final section Vicente outlines a way to put his design viewpoint into practice. He enumerates steps for not only those in design teams but consumers as well.
External links
Book review
2004 non-fiction books
Routledge books |
https://en.wikipedia.org/wiki/Cobra%20probe | A Cobra probe is a device to measure the pressure and velocity components of a moving fluid. It is a multi-holed pressure probe with rotational axis of the probe shaft coplanar with the measurement plane of the instrument. Because of this geometry, when the instrument is rotated around the shaft's axis, the measurement elements of the probe remain in the same location. The name cobra probe comes from the shape of the probe head which gives it this property.
Cobra probes come in three-, four-, and five-hole configurations, the former used for two-dimensional flow measurement, the latter two for three-dimensional flow measurement. In the three-hole kind of instrument, there are two yaw direction tubes which are chamfered and silver soldered symmetrically on the two sides of a pitot tube. It is otherwise similar to the other kinds of yawmeters. In the four- and five-hole configurations, the central pitot tube is surrounded by three or four chamfered tubes, respectively.
References
Hydraulic engineering |
https://en.wikipedia.org/wiki/Entrez | The Entrez () Global Query Cross-Database Search System is a federated search engine, or web portal that allows users to search many discrete health sciences databases at the National Center for Biotechnology Information (NCBI) website. The NCBI is a part of the National Library of Medicine (NLM), which is itself a department of the National Institutes of Health (NIH), which in turn is a part of the United States Department of Health and Human Services. The name "Entrez" (a greeting meaning "Come in" in French) was chosen to reflect the spirit of welcoming the public to search the content available from the NLM.
Entrez Global Query is an integrated search and retrieval system that provides access to all databases simultaneously with a single query string and user interface. Entrez can efficiently retrieve related sequences, structures, and references. The Entrez system can provide views of gene and protein sequences and chromosome maps. Some textbooks are also available online through the Entrez system.
Features
The Entrez front page provides, by default, access to the global query. All databases indexed by Entrez can be searched via a single query string, supporting Boolean operators and search term tags to limit parts of the search statement to particular fields. This returns a unified results page, that shows the number of hits for the search in each of the databases, which are also linked to actual search results for that particular database.
Entrez also provides a similar interface for searching each particular database and for refining search results. The Limits feature allows the user to narrow a search via a web-forms interface. The History feature gives a numbered list of recently performed queries. Results of previous queries can be referred to by number and combined via Boolean operators. Search results can be saved temporarily in a Clipboard. Users with a MyNCBI account can save queries indefinitely, and also choose to have updates with new sear |
https://en.wikipedia.org/wiki/Level%20set | In mathematics, a level set of a real-valued function f of n real variables is a set where the function takes on a given constant value c, that is: Lc(f) = {(x1, …, xn) : f(x1, …, xn) = c}.
When the number of independent variables is two, a level set is called a level curve, also known as contour line or isoline; so a level curve is the set of all real-valued solutions of an equation in two variables x and y. When n = 3, a level set is called a level surface (or isosurface); so a level surface is the set of all real-valued roots of an equation in three variables x, y and z. For higher values of n, the level set is a level hypersurface, the set of all real-valued roots of an equation in n variables.
A level set is a special case of a fiber.
Alternative names
Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by an implicit equation. Analogously, a level surface is sometimes called an implicit surface or an isosurface.
The name isocontour is also used, which means a contour of equal height. In various application areas, isocontours have received specific names, which indicate often the nature of the values of the considered function, such as isobar, isotherm, isogon, isochrone, isoquant and indifference curve.
Examples
Consider the 2-dimensional Euclidean distance d(x, y) = sqrt(x^2 + y^2). A level set L_r(d) of this function consists of those points that lie at a distance of r from the origin, which form a circle. For example, the point (3, 4) belongs to L_5(d), because d(3, 4) = 5. Geometrically, this means that the point (3, 4) lies on the circle of radius 5 centered at the origin. More generally, a sphere in a metric space with radius r centered at a point x can be defined as the level set, at value r, of the function measuring distance from x.
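A minimal numerical check of this example (the function and point names below are ours, chosen for illustration):

```python
import math

def d(x, y):
    """2-dimensional Euclidean distance from the origin."""
    return math.sqrt(x**2 + y**2)

def on_level_set(f, c, point, tol=1e-9):
    """Check whether `point` lies on the level set {p : f(p) = c}."""
    return abs(f(*point) - c) < tol

# (3, 4) lies on the circle of radius 5 centered at the origin.
print(on_level_set(d, 5, (3, 4)))   # True
print(on_level_set(d, 5, (1, 1)))   # False
```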
A second example is the plot of Himmelblau's function shown in the figure to the right. Each curve shown is a level curve of the function, and they are spaced logarithmically: if a curve represents , the curve directly "within" represents , and |
https://en.wikipedia.org/wiki/Roentgen%20equivalent%20man | The roentgen equivalent man (rem) is a CGS unit of equivalent dose, effective dose, and committed dose, which are dose measures used to estimate potential health effects of low levels of ionizing radiation on the human body.
Quantities measured in rem are designed to represent the stochastic biological risk of ionizing radiation, which is primarily radiation-induced cancer. These quantities are derived from absorbed dose, which in the CGS system has the unit rad. There is no universally applicable conversion constant from rad to rem; the conversion depends on relative biological effectiveness (RBE).
The rem has been defined since 1976 as equal to 0.01 sievert, which is the more commonly used SI unit outside the United States. Earlier definitions going back to 1945 were derived from the roentgen unit, which was named after Wilhelm Röntgen, a German scientist who discovered X-rays. The unit name is misleading, since 1 roentgen actually deposits about 0.96 rem in soft biological tissue, when all weighting factors equal unity. Older units of rem following other definitions are up to 17% smaller than the modern rem.
Doses greater than 100 rem received over a short time period are likely to cause acute radiation syndrome (ARS), possibly leading to death within weeks if left untreated. Note that the quantities that are measured in rem were not designed to be correlated to ARS symptoms. The absorbed dose, measured in rad, is a better indicator of ARS.
A rem is a large dose of radiation, so the millirem (mrem), which is one thousandth of a rem, is often used for the dosages commonly encountered, such as the amount of radiation received from medical x-rays and background sources.
Usage
The rem and millirem are CGS units in widest use among the U.S. public, industry, and government. However, the SI unit the sievert (Sv) is the normal unit outside the United States, and is increasingly encountered within the US in academic, scientific, and engineering environments.
The co |
https://en.wikipedia.org/wiki/Lenna | Lenna (or Lena) is a standard test image used in the field of digital image processing starting in 1973, but it is no longer considered appropriate by some authors. It is a picture of the Swedish model Lena Forsén, shot by photographer Dwight Hooker, cropped from the centerfold of the November 1972 issue of Playboy magazine. The continued use of the image has attracted controversy, on both technical and social grounds, and many journals have discouraged or banned its use. Forsén herself has said "It’s time I retired from tech."
The spelling "Lenna" came from the model's desire to encourage the proper pronunciation of her name. "I didn't want to be called Leena []," she explained.
History
Before Lenna, the first use of a Playboy magazine image to illustrate image processing algorithms was in 1961. Lawrence G. Roberts used two cropped six-bit grayscale facsimile scanned images from Playboy's July 1960 issue featuring Playmate Teddi Smith, in his MIT master's thesis on image dithering.
Intended for high resolution color image processing study, the Lenna picture's history was described in the May 2001 newsletter of the IEEE Professional Communication Society, in an article by Jamie Hutchinson:
The image's reach was limited in the 1970s and 1980s, reflected in its initially appearing only in .org domains. But in July 1991, the image featured on the cover of Optical Engineering alongside Peppers, another popular test image. This drew the attention of Playboy to the potential copyright infringement. The peak of image hits on the internet was in 1995. The scan became one of the most used images in computer history. The use of the photo in electronic imaging has been described as "clearly one of the most important events in [its] history". The image spread to over 100 different domains, particularly .com and .edu.
In a 1999 issue of IEEE Transactions on Image Processing "Lena" was used in three separate articles, and the picture continued to appear in scientif |
https://en.wikipedia.org/wiki/Biopython | The Biopython project is an open-source collection of non-commercial Python tools for computational biology and bioinformatics, created by an international association of developers. It contains classes to represent biological sequences and sequence annotations, and it is able to read and write to a variety of file formats. It also allows for a programmatic means of accessing online databases of biological information, such as those at NCBI. Separate modules extend Biopython's capabilities to sequence alignment, protein structure, population genetics, phylogenetics, sequence motifs, and machine learning. Biopython is one of a number of Bio* projects designed to reduce code duplication in computational biology.
History
Biopython development began in 1999 and it was first released in July 2000. It was developed during a similar time frame and with analogous goals to other projects that added bioinformatics capabilities to their respective programming languages, including BioPerl, BioRuby and BioJava. Early developers on the project included Jeff Chang, Andrew Dalke and Brad Chapman, though over 100 people have made contributions to date. In 2007, a similar Python project, namely PyCogent, was established.
The initial scope of Biopython involved accessing, indexing and processing biological sequence files. While this is still a major focus, over the following years added modules have extended its functionality to cover additional areas of biology (see Key features and examples).
As of version 1.77, Biopython no longer supports Python 2.
Design
Wherever possible, Biopython follows the conventions used by the Python programming language to make it easier for users familiar with Python. For example, Seq and SeqRecord objects can be manipulated via slicing, in a manner similar to Python's strings and lists. It is also designed to be functionally similar to other Bio* projects, such as BioPerl.
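As an illustration of this slicing convention, the toy class below mimics how a Seq-like object can behave like a Python string; it is plain Python written for this sketch, not Biopython's actual implementation:

```python
class MiniSeq:
    """Toy stand-in for Biopython's Seq, illustrating string-like slicing.
    This is a sketch for explanation, not the real Bio.Seq API."""
    _COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

    def __init__(self, data):
        self._data = str(data)

    def __getitem__(self, index):
        # Slicing returns a new sequence object, just as slicing a Seq
        # returns a Seq rather than a plain string.
        return MiniSeq(self._data[index])

    def __str__(self):
        return self._data

    def reverse_complement(self):
        """Complement each base, then reverse the sequence."""
        return MiniSeq(self._data.translate(self._COMPLEMENT)[::-1])

seq = MiniSeq("AGTACACTGGT")
print(seq[2:5])                  # TAC
print(seq.reverse_complement())  # ACCAGTGTACT
```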
Biopython is able to read and write most common file formats for each of |
https://en.wikipedia.org/wiki/ATASCII | The ATASCII character set, from ATARI Standard Code for Information Interchange, alternatively ATARI ASCII, is the variation on ASCII used in the Atari 8-bit family of home computers. The first of this family are the Atari 400 and 800, released in 1979, and later models were released throughout the 1980s. The last computer to use the ATASCII character set is the Atari XEGS, which was released in 1987 and discontinued in 1992. The Atari ST family of computers uses a different character set, the Atari ST character set.
Like most other non-standard ASCIIs, ATASCII has its own special block graphics symbols (arrows, blocks, circles, line segments, playing card suits, etc.) corresponding to the control character locations of the standard ASCII table (characters 0–31), and a few other character locations.
Control characters
The main difference between standard ASCII and ATASCII is the use of control characters. In standard ASCII, a character in the range 0 to 31 is construed as a command, which might move the cursor, clear the screen, end a line, and so on. Some of these were designed for use on printers and teletypes rather than on screen (to advance the paper, overtype, and so on). In ATASCII most of the ASCII control character values produce a graphics glyph instead. ATASCII uses character values different from ASCII for cursor control.
ATASCII has a character set of only 128 characters. If the high-order bit is set on a character (i.e., if the byte value of the character is between 128 and 255) the character is generally rendered in the reverse video (also called "inverse video") of its counterpart between 0 and 127, using a bitwise negation of the character's glyph. This is done by the ANTIC chip. The two exceptions to this rule are that an "escape" character (ATASCII and ASCII 27) with its high order bit set becomes an "EOL" or "End Of Line" character (ATASCII 155; ASCII 13), and a "clear screen" character (ATASCII 125) with its high order bit set becomes a "bell" or "buzzer" ch |
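The high-bit rule and its two exceptions can be sketched as a small classifier. The function below is an illustration based only on the rules described here, not a full ATASCII table:

```python
def classify_atascii(byte):
    """Classify an ATASCII byte value (0-255) per the high-bit rule.
    Returns (base_value, label). A sketch of the rules above, not a
    complete ATASCII decoder."""
    if byte == 155:
        # ESC (27) with high bit set is EOL, not an inverse ESC glyph.
        return (byte, "EOL")
    if byte == 253:
        # Clear screen (125) with high bit set is the bell/buzzer.
        return (byte, "bell")
    if byte >= 128:
        # Otherwise: reverse-video rendering of the character byte - 128.
        return (byte - 128, "inverse")
    return (byte, "normal")

print(classify_atascii(65))    # (65, 'normal')  -- 'A'
print(classify_atascii(193))   # (65, 'inverse') -- inverse-video 'A'
print(classify_atascii(155))   # (155, 'EOL')
print(classify_atascii(253))   # (253, 'bell')
```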
https://en.wikipedia.org/wiki/Baku%20TV%20Tower | The Baku TV Tower (), built in 1996, is a free-standing concrete telecommunications tower in Baku, Azerbaijan. With a height of 310 metres (1,017 ft; 480 metres above Caspian Sea level), it is the tallest structure in Azerbaijan and the tallest reinforced concrete building in the Caucasus.
The tower has become one of the most prominent landmarks of Baku, often in the establishing shot of films set in the city.
History
The TV tower was designed by a state institute of the USSR Ministry of Communications, on the basis of a decision of the Council of Ministers of the USSR, following an order from the Ministry of Communications of Azerbaijan. Construction work began in 1979 and, according to the project construction plan, it should have been completed in 1985. After the return of Heydar Aliyev to power in 1993, the construction of the tower was continued, and in 1996, with his participation, the official opening ceremony of the complex was carried out.
A rotating restaurant on the 62nd floor (175 metres) of Azeri TV Tower was opened in 2008.
Appearance
Occasionally, Baku TV Tower's lighting is changed to specific, unique arrangements for special events, and some annual events are marked with special lighting. For example, alternating sections of the tower have been lit blue, red and green, the colours of the Azerbaijani flag, to celebrate national holidays. The tower has also had a variety of special lighting arrangements for New Year since 2004.
See also
List of towers
References
External links
Emporis.com
http://skyscraperpage.com/diagrams/?buildingID=2284
Towers in Azerbaijan
Buildings and structures in Baku
Radio masts and towers
Towers with revolving restaurants |
https://en.wikipedia.org/wiki/TightVNC | TightVNC is a free and open-source remote desktop software server and client application for Linux and Windows. A server for macOS is available under a commercial source code license only, without SDK or binary version provided. Constantin Kaplinsky developed TightVNC, using and extending the RFB protocol of Virtual Network Computing (VNC) to allow end-users to control another computer's screen remotely.
Encodings
TightVNC uses so-called "tight encoding" of areas, which improves performance over low-bandwidth connections. It is effectively a combination of the JPEG and zlib compression mechanisms. It is possible to watch videos and play DirectX games through TightVNC over a broadband connection, albeit at a low frame rate.
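As a rough illustration of the zlib half of that scheme (the region contents and sizes here are invented for the sketch), compressing a mostly uniform screen region shows why flat areas of a desktop cost almost nothing over the wire, leaving JPEG for photographic areas:

```python
import zlib

# A mostly uniform 64x64 8-bit "screen region", e.g. a window background,
# with a few varying bytes at the end.
region = bytes([200] * (64 * 64 - 16) + list(range(16)))

compressed = zlib.compress(region, level=6)
print(len(region), len(compressed))  # raw size vs compressed size

# Flat regions compress dramatically with zlib, which is why pairing it
# with JPEG (for detailed regions) works well over low-bandwidth links.
```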
TightVNC includes many other common features of VNC derivatives, such as file transfer capability.
Compatibility
TightVNC is cross-compatible with other client and server implementations of VNC; however, tight encoding is not supported by most other implementations, so it is necessary to use TightVNC at both ends to gain the full advantage of its enhancements.
Among notable enhancements are file transfers, support for the DemoForge DFMirage mirror driver (a type of virtual display driver) to detect screen updates (saves CPU time and increases the performance of TightVNC), ability to zoom the picture and automatic SSH tunneling on Unix.
Since the 2.0 beta, TightVNC supports auto-scaling, which resizes the viewer window to the remote user's desktop size, regardless of the resolution of the host computer.
TightVNC 1.3.10, released in March 2009, is the last version to support Linux/Unix. This version is still often used in guides to set up VNC for Linux.
Derived software
RemoteVNC
RemoteVNC is a fork of the TightVNC project and adds automatic traversal of NAT and firewalls using Jingle. It requires a GMail account.
TightVNC Portable Edition
The developers have also produced a portable version of the software, available as both U3 and standalon |
https://en.wikipedia.org/wiki/RealVNC | RealVNC is a company that provides remote access software. Their VNC Connect software consists of a server (VNC Server) and client (VNC Viewer) application, which exchange data over the RFB protocol to allow the Viewer to control the Server's screen remotely. The application is used, for example, by IT support engineers to provide helpdesk services to remote users.
History
Andy Harter and other members of the original VNC team at AT&T founded RealVNC Limited in 2002. The automotive division of RealVNC spun out as a separate company (VNC Automotive) in 2018.
Platforms, editions, versions
For a desktop-to-desktop connection RealVNC runs on Windows, macOS, and many Unix-like operating systems. A list of supported platforms can be found on the website. A RealVNC client also runs on the Java platform and on the Apple iPhone, iPod touch and iPad and Google Android devices.
A Windows-only client, VNC Viewer Plus was launched in 2010, designed to interface to the embedded server on Intel AMT chipsets found on Intel vPro motherboards. RealVNC removed VNC Viewer Plus from sale on 28th February 2021.
For remote access to view one computer desktop on another, RealVNC requires one of three subscriptions:
Home – free registration and activation required
Professional – commercial version geared towards home or small-business users, with authentication and encryption, remote printing, chat and file transfer
Enterprise – commercial version geared towards enterprises, with enhanced authentication and encryption, remote printing, chat, file transfer, and command-line deployment
As of release 4.3 (released August 2007), separate versions of both the Personal and Enterprise editions exist for 32-bit and 64-bit systems. Release 4.6 included features such as HTTP proxy support, chat, an address book, remote printing, unicode support, and connection notification.
Users must activate each of the server versions ("Home", "Professional", "Enterprise").
With the release of VNC 5.0 l |
https://en.wikipedia.org/wiki/Unsharp%20masking | Unsharp masking (USM) is an image sharpening technique, first implemented in darkroom photography, but now commonly used in digital image processing software. Its name derives from the fact that the technique uses a blurred, or "unsharp", negative image to create a mask of the original image. The unsharp mask is then combined with the original positive image, creating an image that is less blurry than the original. The resulting image, although clearer, may be a less accurate representation of the image's subject.
In the context of signal processing, an unsharp mask is generally a linear or nonlinear filter that amplifies the high-frequency components of a signal.
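A common digital formulation is sharpened = original + amount * (original - blurred). The 1-D sketch below uses a simple box blur, a deliberate simplification of the Gaussian blur image editors typically use:

```python
def moving_average(signal, radius):
    """Simple box blur; pads by repeating the edge samples."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, radius=1, amount=1.0):
    """sharpened = original + amount * (original - blurred)."""
    blurred = moving_average(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A step edge: unsharp masking undershoots just before the step and
# overshoots just after it, which is what increases perceived acutance.
step = [0, 0, 0, 10, 10, 10]
print(unsharp_mask(step))
```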
Photographic darkroom unsharp masking
For the photographic darkroom process, a large-format glass plate negative is contact-copied onto a low-contrast film or plate to create a positive image. However, the positive copy is made with the copy material in contact with the back of the original, rather than emulsion-to-emulsion, so it is blurred. After processing this blurred positive is replaced in contact with the back of the original negative. When light is passed through both negative and in-register positive (in an enlarger, for example), the positive partially cancels some of the information in the negative.
Because the positive has been blurred intentionally, only the low-frequency (blurred) information is cancelled. In addition, the mask effectively reduces the dynamic range of the original negative. Thus, if the resulting enlarged image is recorded on contrasty photographic paper, the partial cancellation emphasizes the high-spatial-frequency information (fine detail) in the original, without loss of highlight or shadow detail. The resulting print appears more acute than one made without the unsharp mask: its acutance is increased.
In the photographic procedure, the amount of blurring can be controlled by changing the "softness" or "hardness" (from point source to fully diffuse) of the light so |
https://en.wikipedia.org/wiki/Master%20theorem%20%28analysis%20of%20algorithms%29 | In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis (using Big O notation) for recurrence relations of types that occur in the analysis of many divide and conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences. The name "master theorem" was popularized by the widely-used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein.
Not all recurrence relations can be solved with the use of this theorem; its generalizations include the Akra–Bazzi method.
Introduction
Consider a problem that can be solved using a recursive algorithm such as the following:
procedure p(input x of size n):
    if n < some constant k:
        Solve x directly without recursion
    else:
        Create a subproblems of x, each having size n/b
        Call procedure p recursively on each subproblem
        Combine the results from the subproblems
The above algorithm divides the problem into a number of subproblems recursively, each subproblem being of size n/b. Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have a child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem n passed to that instance of the recursive call and given by f(n). The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree.
The runtime of an algorithm such as the above on an input of size n, usually denoted T(n), can be expressed by the recurrence relation
T(n) = a T(n/b) + f(n), where f(n) is the time to create the subproblems and combine their results in the above procedure. T |
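The recurrence can be checked numerically against the master-theorem bound. The sketch below assumes a = b = 2 and f(n) = n, a case (as in merge sort) where the master theorem gives T(n) = Θ(n log n):

```python
import math
from functools import lru_cache

A, B = 2, 2  # a subproblems, each of size n/b

@lru_cache(maxsize=None)
def T(n):
    """Directly evaluate T(n) = a*T(n/b) + f(n) with f(n) = n, T(1) = 1."""
    if n <= 1:
        return 1
    return A * T(n // B) + n

# With f(n) = n = n^(log_b a), the master theorem predicts
# T(n) = Theta(n log n), so T(n) / (n log2 n) should level off near 1.
for n in (2**8, 2**12, 2**16):
    print(n, T(n) / (n * math.log2(n)))
```

For powers of two the exact solution here is T(2^k) = 2^k (k + 1), so the printed ratios approach 1 from above.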
https://en.wikipedia.org/wiki/Raid%20on%20Bungeling%20Bay | Raid on Bungeling Bay (バンゲリングベイ lit.: Bungeling Bay) is a shoot 'em up video game developed by Will Wright and published by Broderbund for the Commodore 64 in 1984. It was the first video game designed by Will Wright. The Commodore 64 version was published in the UK by Ariolasoft. The game inspired Wright to develop SimCity in 1989.
Gameplay
Raid on Bungeling Bay is a 2D shoot 'em up. The player controls a helicopter launched from an aircraft carrier to bomb six factories scattered across islands on a small planetoid occupied by the Bungeling Empire (frequent villains in Broderbund games), while fending off escalating counterattacks by gun turrets, fighter jets, guided missiles, and a battleship. There is also a hidden island for the player to reload on. Failure means that the Bungeling Empire develops a war machine to take over the planet Earth. Players have to attack its infrastructure while defending the aircraft carrier which serves as home base.
The game offers an insight into the design style of Wright, who also designed SimCity. Over time, the factories grow and develop new technologies to use against the player. There are also visible signs of interdependency among the islands, such as supply boats moving between them. In order to win the game, the player must prevent the escalation by bombing all the factories as quickly as possible, keeping them from advancing their technology. If left alone for too long, the factories create enough new weaponry to overwhelm the player.
Ports
Raid on Bungeling Bay was ported to the Famicom/NES by Hudson Soft. Hudson published this version and released it in Japan on February 15, 1985. A conversion for the arcade-based VS. System was created based on this port, and it was distributed to arcades by Nintendo. An MSX version was developed by Zap and published by Sony. The Japanese releases of the game are alternatively titled . Will Wright stated in an interview that he was not sure if the arcade version was actually relea |
https://en.wikipedia.org/wiki/Hyperelliptic%20curve%20cryptography | Hyperelliptic curve cryptography is similar to elliptic curve cryptography (ECC) insofar as the Jacobian of a hyperelliptic curve is an abelian group in which to do arithmetic, just as we use the group of points on an elliptic curve in ECC.
Definition
An (imaginary) hyperelliptic curve of genus g over a field K is given by the equation y^2 + h(x)y = f(x), where h(x) is a polynomial of degree not larger than g and f(x) is a monic polynomial of degree 2g + 1. From this definition it follows that elliptic curves are hyperelliptic curves of genus 1. In hyperelliptic curve cryptography, K is often a finite field. The Jacobian of the curve C, denoted J(C), is a quotient group; thus the elements of the Jacobian are not points, they are equivalence classes of divisors of degree 0 under the relation of linear equivalence. This agrees with the elliptic curve case, because it can be shown that the Jacobian of an elliptic curve is isomorphic with the group of points on the elliptic curve. The use of hyperelliptic curves in cryptography came about in 1989 from Neal Koblitz. Although introduced only 3 years after ECC, not many cryptosystems implement hyperelliptic curves because the implementation of the arithmetic isn't as efficient as with cryptosystems based on elliptic curves or factoring (RSA). The efficiency of implementing the arithmetic depends on the underlying finite field K; in practice it turns out that finite fields of characteristic 2 are a good choice for hardware implementations, while software is usually faster in odd characteristic.
The Jacobian of a hyperelliptic curve is an Abelian group and as such it can serve as a group for the discrete logarithm problem (DLP). In short, suppose we have an Abelian group G and g an element of G; the DLP on G entails finding the integer a given two elements of G, namely g and g^a. The first type of group used was the multiplicative group of a finite field; later, Jacobians of (hyper)elliptic curves were also used. If the hyperelliptic curve is chosen with care, then Pollard's rho method is |
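For intuition, the DLP in the multiplicative group of a small prime field can be solved by exhaustive search. This toy solver, hopeless at cryptographic sizes (which is exactly the point), illustrates the problem statement:

```python
def discrete_log(g, h, p):
    """Naive DLP solver in the multiplicative group mod prime p:
    find a with g^a = h (mod p). Runs in time exponential in the
    bit length of p, which is why cryptography relies on DLP hardness
    in well-chosen groups."""
    x = 1
    for a in range(p - 1):
        if x == h:
            return a
        x = (x * g) % p
    return None  # h not in the subgroup generated by g

# g = 3 generates the multiplicative group mod 17; solve 3^a = 13 (mod 17).
a = discrete_log(3, 13, 17)
print(a, pow(3, a, 17))  # 4 13
```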
https://en.wikipedia.org/wiki/Yasumasa%20Kanada | Yasumasa Kanada (1949–2020) was a Japanese computer scientist best known for his numerous world records over the past three decades for calculating digits of π. He set the record 11 of the past 21 times.
Kanada was a professor in the Department of Information Science at the University of Tokyo in Tokyo, Japan until 2015.
From 2002 until 2009, Kanada held the world record for the number of computed digits in the decimal expansion of π – exactly 1.2411 trillion digits. The calculation took more than 600 hours on 64 nodes of a HITACHI SR8000/MPP supercomputer. Some of his competitors in recent years include Jonathan and Peter Borwein and the Chudnovsky brothers.
See also
Chronology of computation of π
References
External links
1949 births
2020 deaths
20th-century Japanese mathematicians
21st-century Japanese mathematicians
People from Himeji, Hyōgo
Pi-related people
Tohoku University alumni
Academic staff of the University of Tokyo |
https://en.wikipedia.org/wiki/Bandlimiting | Bandlimiting refers to a process which reduces the energy of a signal to an acceptably low level outside of a desired frequency range.
Bandlimiting is an essential part of many applications in signal processing and communications. Examples include controlling interference between radio frequency communications signals, and managing aliasing distortion associated with sampling for digital signal processing.
Bandlimited signals
A bandlimited signal is, strictly speaking, a signal with zero energy outside of a defined frequency range. In practice, a signal is considered bandlimited if its energy outside of a frequency range is low enough to be considered negligible in a given application.
A bandlimited signal may be either random (stochastic) or non-random (deterministic).
In general, infinitely many terms are required in a continuous Fourier series representation of a signal, but if a signal can be represented exactly by a finite number of Fourier series terms, it is considered to be band-limited. In mathematical terminology, a bandlimited signal has a Fourier transform or spectral density with bounded support.
Sampling bandlimited signals
A bandlimited signal can be fully reconstructed from its samples, provided that the sampling rate exceeds twice the bandwidth of the signal. This minimum sampling rate is called the Nyquist rate associated with the Nyquist–Shannon sampling theorem.
Real world signals are not strictly bandlimited, and signals of interest typically have unwanted energy outside of the band of interest. Because of this, sampling functions and digital signal processing functions which change sample rates usually require bandlimiting filters to control the amount of aliasing distortion. Bandlimiting filters should be designed carefully to manage other distortions because they alter the signal of interest in both its frequency domain magnitude and phase, and its time domain properties.
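A sketch of the reconstruction side, using Whittaker–Shannon (sinc) interpolation on a signal sampled above its Nyquist rate; the signal, rate, and evaluation instant are arbitrary choices for illustration, and the infinite interpolation sum is necessarily truncated to the available samples:

```python
import math

def sample(f, fs, duration):
    """Sample the continuous-time signal f at rate fs (Hz)."""
    return [f(k / fs) for k in range(int(duration * fs))]

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: exact for signals bandlimited
    below fs/2, up to truncation of the infinite sum."""
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    return sum(s * sinc(t * fs - k) for k, s in enumerate(samples))

# A 10 Hz sine sampled at 100 Hz, well above its 20 Hz Nyquist rate.
f = lambda t: math.sin(2 * math.pi * 10 * t)
samples = sample(f, fs=100, duration=1.0)

t = 0.5037  # an instant between sample points
print(f(t), sinc_reconstruct(samples, 100, t))  # nearly equal
```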
An example of a simple deterministic bandlimited signal is |
https://en.wikipedia.org/wiki/Magnesium%20chloride | Magnesium chloride is an inorganic compound with the formula . It forms hydrates , where n can range from 1 to 12. These salts are colorless or white solids that are highly soluble in water. These compounds and their solutions, both of which occur in nature, have a variety of practical uses. Anhydrous magnesium chloride is the principal precursor to magnesium metal, which is produced on a large scale. Hydrated magnesium chloride is the form most readily available.
Production
Magnesium chloride can be extracted from brine or sea water. In North America, it is produced primarily from Great Salt Lake brine. In the Jordan Valley, it is obtained from the Dead Sea. The mineral bischofite () is extracted (by solution mining) out of ancient seabeds, for example, the Zechstein seabed in northwest Europe. Some deposits result from high content of magnesium chloride in the primordial ocean. Some magnesium chloride is made from evaporation of seawater.
In the Dow process, magnesium chloride is regenerated from magnesium hydroxide using hydrochloric acid:
Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O
It can also be prepared from magnesium carbonate by a similar reaction.
Structure, preparation, and general properties
MgCl2 crystallizes in the cadmium chloride motif, which features octahedral Mg centers. Several hydrates are known with the formula MgCl2·nH2O, and each loses water upon heating: n = 12 (−16.4 °C), 8 (−3.4 °C), 6 (116.7 °C), 4 (181 °C), 2 (about 300 °C). In the hexahydrate, the Mg2+ is also octahedral, but is coordinated to six water ligands. The thermal dehydration of the hydrates (n = 6, 12) does not occur straightforwardly. Anhydrous MgCl2 is produced industrially by heating the complex salt named hexamminemagnesium dichloride.
As suggested by the existence of hydrates, anhydrous MgCl2 is a Lewis acid, although a weak one. One derivative is tetraethylammonium tetrachloromagnesate . The adduct is another. In the coordination polymer with the formula , Mg adopts an octahedral geometry. The Lewis acidity of magnesium chloride is |
https://en.wikipedia.org/wiki/Sorting%20network | In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
Introduction
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounters a comparator, the comparator swaps the values if and only if the top wire's value is greater than or equal to the bottom wire's value.
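A comparator network can be modeled directly as a fixed list of wire pairs. The 5-comparator network on 4 wires below is a standard example, and because the comparison sequence never depends on the data, correctness can be checked exhaustively over all input orderings:

```python
from itertools import permutations

def apply_network(network, values):
    """Run a comparator network: each comparator (i, j) with i < j puts
    the smaller value on wire i and the larger on wire j."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

# A classic 5-comparator sorting network on 4 wires.
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

# The fixed comparison sequence makes exhaustive verification easy.
assert all(apply_network(net4, p) == sorted(p)
           for p in permutations([3, 1, 4, 2]))
print("sorts all permutations of 4 values")
```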
In |
https://en.wikipedia.org/wiki/Toothing | Toothing was originally a hoax claim that Bluetooth-enabled mobile phones or PDAs were being used to arrange random sexual encounters, perpetrated as a prank on the media who reported it. The hoax was created by Ste Curran, then Editor at Large at the gaming magazine Edge, and ex-journalist Simon Byron. They based it on the two concepts dogging and bluejacking that were popular at the time. The creators started a forum in March 2004 where they wrote fake news articles about toothing with other members and then sent them off to well-known Internet-based news services. The point of the hoax was to "highlight how journalists are happy to believe something is true without necessarily checking the facts". Dozens of news organizations, including BBC News, Wired News, and The Independent thought the toothing story was real and printed it. On April 4, 2005, Curran and Byron admitted that the whole thing was a hoax. There have, however, been real Bluetooth dating devices since.
Conception
Devised by the Swedish telecommunication company Ericsson, Bluetooth is an open wireless protocol for exchanging data over short distances from mobile devices such as mobile phones, laptops, and personal computers. Bluetooth was intended only for the wireless exchange of files between such devices, but it was later claimed that it could also be used to arrange sexual encounters. The hoax concept of toothing started around March 2004 in the form of a forum designed by Ste Curran, then Editor at Large at games magazine Edge, and ex-journalist Simon Byron. Toothing was conceived as a merger of the two concepts dogging and bluejacking, both of which were frequently mentioned in the UK media around that time. Byron said he and Curran were "idly messaging about the Stan Collymore dogging scandal, and how this stupid sexual buzzword had (apparently) come from nowhere," when they came up with the concept. "We wondered if we could create our own. We wonder a lot of things, and rarely push them |
https://en.wikipedia.org/wiki/Grady%20Booch | Grady Booch (born February 27, 1955) is an American software engineer, best known for developing the Unified Modeling Language (UML) with Ivar Jacobson and James Rumbaugh. He is recognized internationally for his innovative work in software architecture, software engineering, and collaborative development environments.
Education
Booch earned his bachelor's degree in 1977 from the United States Air Force Academy and a master's degree in electrical engineering in 1979 from the University of California, Santa Barbara.
Career and research
Booch worked at Vandenberg Air Force Base after he graduated. He started as a project engineer and later managed ground-support missions for the space shuttle and other projects. After he gained his master's degree he became an instructor at the Air Force Academy.
Booch served as Chief Scientist of Rational Software Corporation from its founding in 1981 through its acquisition by IBM in 2003, where he continued to work until March 2008. After this he became Chief Scientist, Software Engineering in IBM Research and series editor for Benjamin Cummings.
Booch has devoted his life's work to improving the art and the science of software development. In the 1980s, he wrote one of the more popular books on programming in Ada. He is best known for developing the Unified Modeling Language with Ivar Jacobson and James Rumbaugh in the 1990s.
IBM 1130
Booch got his first exposure to programming on an IBM 1130.
... I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I'm sure he gave it to me thinking, "I'll never hear from this kid again." I returned the following week saying, "This is really cool. I've read the whole thing and have written a small program. Where can I find a computer?" The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank |
https://en.wikipedia.org/wiki/Butterworth%20filter | The Butterworth filter is a type of signal processing filter designed to have a frequency response that is as flat as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers".
Original paper
Butterworth had a reputation for solving very complex mathematical problems thought to be 'impossible'. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. Butterworth stated that:
Such an ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of filter elements of the right values. At the time, filters generated substantial ripple in the passband, and the choice of component values was highly interactive. Butterworth showed that a low-pass filter could be designed whose cutoff frequency was normalized to 1 radian per second and whose frequency response (gain) was
G(ω) = 1/√(1 + ω^(2n)),
where ω is the angular frequency in radians per second and n is the number of poles in the filter, equal to the number of reactive elements in a passive filter. If ω = 1, the amplitude response of this type of filter in the passband is 1/√2 ≈ 0.7071, which is half power or −3 dB. Butterworth only dealt with filters with an even number of poles in his paper. He may have been unaware that such filters could be designed with an odd number of poles. He built his higher-order filters from 2-pole filters separated by vacuum tube amplifiers. His plot of the frequency response of 2-, 4-, 6-, 8-, and 10-pole filters is shown as A, B, C, D, and E in his original graph.
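The gain formula can be checked numerically with a short sketch (the function name is my own, not from Butterworth's paper):

```python
import math

def butterworth_gain(w, n):
    """Magnitude response of an n-pole Butterworth low-pass filter
    normalized to a cutoff of 1 rad/s: 1 / sqrt(1 + w**(2n))."""
    return 1.0 / math.sqrt(1.0 + w ** (2 * n))

# The gain at the cutoff is 1/sqrt(2) ~= 0.7071 (half power) for any n:
print(round(butterworth_gain(1.0, 2), 4))   # 0.7071
print(round(butterworth_gain(1.0, 10), 4))  # 0.7071
# More poles give a steeper roll-off above the cutoff:
print(butterworth_gain(10.0, 2) > butterworth_gain(10.0, 10))  # True
```

Increasing n leaves the passband maximally flat while sharpening the transition, which is exactly the "successively closer approximations" Butterworth described.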
Butterworth solved the equations for two-pole and four-pole filters, showing how the latter could be cascaded when separated by vacuum tube amplifiers and so enabli |
https://en.wikipedia.org/wiki/128%20%28number%29 | 128 (one hundred [and] twenty-eight) is the natural number following 127 and preceding 129.
In mathematics
128 is the seventh power of 2. It is the largest number that cannot be expressed as the sum of any number of distinct squares. It is also divisible by the total number of its divisors, making it a refactorable number.
The sum of Euler's totient function φ(x) over the first twenty integers is 128.
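This can be verified directly (the helper function is my own; a naive O(n) totient is fine at this size):

```python
from math import gcd

def phi(n):
    """Euler's totient: how many k in 1..n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(sum(phi(n) for n in range(1, 21)))  # 128
```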
128 can be expressed by a combination of its digits with mathematical operators, as 128 = 2^(8 − 1), making it a Friedman number in base 10.
A hepteract has 128 vertices.
128 is the only 3-digit number that is a 7th power (2⁷).
In bar codes
Code 128 is a Uniform Symbology Specification (USS Code 128) alphanumeric bar code that encodes text, numbers, and numerous functions, and is designed to encode all 128 ASCII characters (ASCII 0 to ASCII 127); it is widely used in the shipping industry.
Subdivisions include:
128A (0–9, A–Z, ASCII control codes, special characters)
128B (0–9, A–Z, a–z, special characters)
128C (00–99 numeric characters)
GS1-128 application standard of the GS1 implementation using the Code 128 barcode specification
ISBT 128 system for blood product labeling for the International Society of Blood Transfusion
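Code 128 messages also carry a check symbol, not covered above: the start-code value plus the position-weighted sum of the symbol values, reduced modulo 103. A sketch for subset B only (real encoders also handle subset switching and FNC codes; the function name is my own):

```python
def code128b_check(text):
    """Check-symbol value for a Code 128 subset-B message.
    In subset B a character's symbol value is its ASCII code minus 32;
    the Start B symbol has value 104, and the first data character gets weight 1."""
    values = [ord(c) - 32 for c in text]
    total = 104 + sum(i * v for i, v in enumerate(values, start=1))
    return total % 103

# "ABC": 104 + 1*33 + 2*34 + 3*35 = 310, and 310 mod 103 = 1
print(code128b_check("ABC"))  # 1
```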
In computing
128-bit key size encryption for secure communications over the Internet
IPv6 uses 128-bit (16-byte) addresses
A quantity of bits with a given binary prefix equals 128 bytes of the next-smaller binary prefix; for example, 1 gibibit is 128 mebibytes
128-bit integers, memory addresses, or other data units are those that are at most 128 bits (16 octets) wide
Seven-segment displays have 128 possible states.
ASCII includes definitions for 128 characters (33 non-printing characters, mostly obsolete control characters that affect how text is processed, and 95 printable characters, including the space)
A 128-bit integer can represent up to 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (approximately 3.40282367 × 10^38) values.
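Python's arbitrary-precision integers make these figures easy to confirm:

```python
n = 2 ** 128
print(n)            # 340282366920938463463374607431768211456
print(len(str(n)))  # 39 decimal digits, roughly 3.4e+38
print(2 ** 7)       # 128 itself is 2 to the 7th power
```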
CAST-128 is a block cipher used in a number of products, notably as the d |
https://en.wikipedia.org/wiki/175%20%28number%29 | 175 (one hundred [and] seventy-five) is the natural number following 174 and preceding 176.
In mathematics
Raising the decimal digits of 175 to the powers of successive integers and summing produces 175 back again: 1¹ + 7² + 5³ = 1 + 49 + 125 = 175.
175 is a figurate number for a rhombic dodecahedron and the difference of two consecutive fourth powers: 175 = 4⁴ − 3⁴. It is also a decagonal number and a decagonal pyramid number, the smallest number after 1 that has both properties.
In other fields
In the Book of Genesis 25:7-8, Abraham is said to have lived to be 175 years old.
175 is the fire emergency number in Lebanon.
See also
The year AD 175 or 175 BC
List of highways numbered 175
References
Integers |
https://en.wikipedia.org/wiki/260%20%28number%29 | 260 (two hundred [and] sixty) is the natural number following 259 and preceding 261.
It is also the magic constant of the n×n normal magic square and n-queens problem for n = 8, the size of an actual chess board.
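The magic constant follows directly from the formula n(n² + 1)/2, the row sum when the numbers 1..n² are spread evenly over n rows (a quick check; the helper name is my own):

```python
def magic_constant(n):
    """Magic constant of an n x n normal magic square: n * (n**2 + 1) // 2."""
    return n * (n * n + 1) // 2

print(magic_constant(8))  # 260, the chessboard case described above
print(magic_constant(9))  # 369
```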
260 is also the magic constant of the Franklin magic square devised by Benjamin Franklin.
The minor diagonal gives 260, and in addition a number of combinations of two half diagonals of four numbers from a corner to the center give 260.
There are 260 days in the Mayan sacred calendar Tzolkin.
260 may also refer to the years AD 260 and 260 BC.
Integers from 261 to 269
261
261 = 32·29, lucky number, nonagonal number, unique period in base 2, number of possible unfolded tesseract patterns.
262
262 = 2·131, meandric number, open meandric number, untouchable number, happy number, palindrome number, semiprime.
263
263 is a prime, safe prime, happy number, sum of five consecutive primes (43 + 47 + 53 + 59 + 61), balanced prime, Chen prime, Eisenstein prime with no imaginary part, strictly non-palindromic number, Bernoulli irregular prime, Euler irregular prime, Gaussian prime, full reptend prime, Solinas prime, Ramanujan prime.
264
264 = 2³·3·11, the number of edges in an 11×11 square grid. The sum of all the 2-digit numbers formed from the digits of 264 is 264: 24 + 42 + 26 + 62 + 46 + 64 = 264. The numbers 132 and 396 share this property.
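The digit property can be confirmed by brute force (the helper name is my own):

```python
from itertools import permutations

def two_digit_sum(n):
    """Sum of all 2-digit numbers formed from pairs of digit positions of n."""
    return sum(int(a + b) for a, b in permutations(str(n), 2))

for n in (132, 264, 396):
    print(n, two_digit_sum(n) == n)  # all True
```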
264 equals the sum of the squares of the digits of its own square in base 15. This property is shared with 1, 159, 284, 306 and 387.
265
265 = 5·53, semiprime, Padovan number, number of derangements of 6 elements, centered square number, Smith number, subfactorial 6.
266
266 = 2·7·19, a sphenic number, nontotient, noncototient, repdigit in base 11 (222). 266 is also the index of the largest proper subgroups of the sporadic group known as the Janko group J1.
267
268
269
269 is a prime, twin prime with 271, sum of three consecutive primes (83 + 89 + 97), Chen prime, Eisenstein prime with no imaginary part, highly cototient number, strictly non-pa |
https://en.wikipedia.org/wiki/Simulink | Simulink is a MATLAB-based graphical programming environment for modeling, simulating and analyzing multidomain dynamical systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries. It offers tight integration with the rest of the MATLAB environment and can either drive MATLAB or be scripted from it. Simulink is widely used in automatic control and digital signal processing for multidomain simulation and model-based design.
Add-on products
MathWorks and other third-party hardware and software products can be used with Simulink. For example, Stateflow extends Simulink with a design environment for developing state machines and flow charts.
MathWorks claims that, coupled with another of their products, Simulink can automatically generate C source code for real-time implementation of systems. As the efficiency and flexibility of the code improves, this is becoming more widely adopted for production systems, in addition to being a tool for embedded system design work because of its flexibility and capacity for quick iteration. Embedded Coder creates code efficient enough for use in embedded systems.
Simulink Real-Time (formerly known as xPC Target), together with x86-based real-time systems, is an environment for simulating and testing Simulink and Stateflow models in real-time on the physical system. Another MathWorks product also supports specific embedded targets. When used with other generic products, Simulink and Stateflow can automatically generate synthesizable VHDL and Verilog.
Simulink Verification and Validation enables systematic verification and validation of models through modeling style checking, requirements traceability and model coverage analysis. Simulink Design Verifier uses formal methods to identify design errors like integer overflow, division by zero and dead logic, and generates test case scenarios for model checking within the Simulink environment.
SimEvents is used to add a library of |
https://en.wikipedia.org/wiki/Jay%20Last | Jay Taylor Last (October 18, 1929 – November 11, 2021) was an American physicist, silicon pioneer, and member of the so-called "traitorous eight" that founded Silicon Valley.
Early life and education
Last was born in Butler, Pennsylvania, on October 18, 1929, at the beginning of the Stock Market Crash of 1929, and grew up during the Great Depression. Both his parents were teachers, but his father left teaching to work in a steel mill in hopes of earning a better living. During the depression, there was no work in the steel mills, but the family managed by growing and preserving its own food. During World War II, his father worked six to seven days a week, 12 hours a day, under demanding and dangerous physical conditions. Jay Last enjoyed hiking, walking, and exploring while growing up. Between his junior and senior years of school, at age 16, he and a friend hitch-hiked to San Jose, California, and worked for the summer picking fruit.
A voracious reader, he tended to complete his schoolwork well in advance of the rest of the class. He was encouraged by his chemistry teacher, Lucille Critchlow, who recommended him to work with Frank W. Preston, a local industrial chemist whose laboratory studied glass and glass fracture. Last began working at Preston's lab as a high-school student and continued to work for him as a university student, whenever he had a break.
Last graduated from Butler Senior High School in 1947 and applied for a scholarship to study Optics at the University of Rochester. Last had heard about the program from his father and did not apply anywhere else. It was a rigorous program, and three-quarters of the entering class had dropped out by the time the program was finished. The program had close ties to Eastman Kodak and to Bausch & Lomb: Last's class in optical design was taught by Rudolph Kingslake of Kodak. Last worked for a summer at the trouble-shooting department of Kodak's optical instrumentation plant, before his senior year of university |
https://en.wikipedia.org/wiki/Higher-order%20logic | In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic.
The term "higher-order logic" is commonly used to mean higher-order simple predicate logic. Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. "Simple types" is sometimes also taken to exclude polymorphic and dependent types.
Quantification scope
First-order logic quantifies only variables that range over individuals; second-order logic also quantifies over sets; third-order logic also quantifies over sets of sets, and so on.
Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply.
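As an illustration of second-order quantification (my example, not from the text above), the induction principle for the natural numbers can be stated as a single axiom by quantifying over all properties P of individuals:

```latex
% Second-order induction: P ranges over sets of individuals,
% something first-order logic cannot quantify over.
\forall P \,\bigl( P(0) \land \forall n\,(P(n) \rightarrow P(n+1))
                  \rightarrow \forall n\, P(n) \bigr)
```

In first-order Peano arithmetic this must instead be approximated by an axiom schema, with one instance per definable formula.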
Semantics
There are two possible semantics for higher-order logic.
In the standard or full semantics, quantifiers over higher-type objects range over all possible objects of that type. For example, a quantifier over sets of individuals ranges over the entire powerset of the set of individuals. Thus, in standard semantics, once the set of individuals is specified, this is enough to specify all the quantifiers. HOL with standard semantics is more expressive than first-order logic. For example, HOL admits categorical axiomatizations of the natural numbers, and of the real numbers, which are impossible with first-order logic. However, by a result of Kurt Gödel, HOL with standard semantics does not admit an effective, sound, and com |
https://en.wikipedia.org/wiki/Dc%20%28computer%20program%29 | dc (desk calculator) is a cross-platform reverse-Polish calculator which supports arbitrary-precision arithmetic. Written by Lorinda Cherry and Robert Morris at Bell Labs, it is one of the oldest Unix utilities, preceding even the invention of the C programming language. Like other utilities of that vintage, it has a powerful set of features but terse syntax.
Traditionally, the bc calculator program (with infix notation) was implemented on top of dc.
This article provides some examples in an attempt to give a general flavour of the language; for a complete list of commands and syntax, one should consult the man page for one's specific implementation.
History
dc is the oldest surviving Unix language program. When its home Bell Labs received a PDP-11, dc (written in B) was the first language to run on the new computer, even before an assembler. Ken Thompson has opined that dc was the very first program written on the machine.
Basic operations
To multiply four and five in dc (note that most of the whitespace is optional):
$ cat << EOF > cal.txt
4 5 *
p
EOF
$ dc cal.txt
20
$
The same result can also be obtained with any of the following invocations:
$ echo "4 5 * p" | dc
or
$ dc -
4 5*pq
20
$ dc
4 5 *
p
20
q
$ dc -e '4 5 * p'
This translates into "push four and five onto the stack, then, with the multiplication operator, pop two elements from the stack, multiply them and push the result onto the stack." Then the p command is used to examine (print out to the screen) the top element on the stack. The q command quits the invoked instance of dc. Note that numbers must be spaced from each other, whereas some operators need not be.
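The stack discipline just described can be sketched in a few lines of Python (a toy evaluator of my own, not dc's implementation; it handles only integers and the four basic operators):

```python
def rpn(expr):
    """Evaluate a reverse-Polish expression the way dc does:
    numbers are pushed; an operator pops two values and pushes the result."""
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a // b}
    for tok in expr.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # top of stack is the right operand
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack[-1]  # like dc's p command, inspect the top of the stack

print(rpn("4 5 *"))      # 20
print(rpn("2 3 + 4 *"))  # 20
```

Note one divergence: Python's `//` floors, while dc at its default precision truncates toward zero, so the two differ on negative quotients.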
The arithmetic precision is changed with the command k, which sets the number of fractional digits (the number of digits following the point) to be used for arithmetic operations. Since the default precision is zero, this sequence of commands produces 0 as a result:
2 3 / p
By adjusting the precision with k, an arbitrary number of decimal places can be produced. |
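dc's k-controlled precision can be mimicked with scaled integer arithmetic; this helper (my own sketch, not dc's actual algorithm, and assuming non-negative operands) truncates the quotient to k fractional digits:

```python
def dc_div(a, b, k=0):
    """Divide a by b, truncating to k fractional digits,
    mimicking the scale set by dc's k command (default 0)."""
    scaled = (a * 10 ** k) // b           # shift, divide, truncate
    s = str(scaled).rjust(k + 1, "0")
    return s if k == 0 else s[:-k] + "." + s[-k:]

print(dc_div(2, 3))     # 0       (matches "2 3 / p" at the default precision)
print(dc_div(2, 3, 5))  # 0.66666 (dc prints .66666 after "5 k")
```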
https://en.wikipedia.org/wiki/3000%20%28number%29 | 3000 (three thousand) is the natural number following 2999 and preceding 3001. It is the smallest number requiring thirteen letters in English (when "and" is required from 101 forward).
Selected numbers in the range 3001–3999
3001 to 3099
3001 – super-prime; divides the Euclid number 2999# + 1
3003 – triangular number, only number known to appear eight times in Pascal's triangle; no number is known to appear more than eight times other than 1. (see Singmaster's conjecture)
3019 – super-prime, happy prime
3023 – 84th Sophie Germain prime, 51st safe prime
3025 = 55², sum of the cubes of the first ten integers, centered octagonal number, dodecagonal number
3037 – star number, cousin prime with 3041
3045 – sum of the integers 196 to 210 and sum of the integers 211 to 224
3046 – centered heptagonal number
3052 – decagonal number
3059 – centered cube number
3061 – prime of the form 2p-1
3063 – perfect totient number
3067 – super-prime
3071 – Thabit number
3072 – 3-smooth number (2¹⁰×3)
3075 – nonagonal number
3078 – 18th pentagonal pyramidal number
3080 – pronic number
3081 – triangular number, 497th sphenic number
3087 – sum of first 40 primes
3100 to 3199
3109 – super-prime
3119 – safe prime
3121 – centered square number, emirp, largest minimal prime in quinary.
3125 = 5⁵
3136 = 56², palindromic in ternary (11022011₃), tribonacci number
3137 – Proth prime, both a left- and right-truncatable prime
3149 – highly cototient number
3155 – member of the Mian–Chowla sequence
3160 – triangular number
3167 – safe prime
3169 – super-prime, cuban prime of the form x = y + 1.
3192 – pronic number
3200 to 3299
3203 – safe prime
3207 – number of compositions of 14 whose run-lengths are either weakly increasing or weakly decreasing
3229 – super-prime
3240 – triangular number
3248 – member of a Ruth-Aaron pair with 3249 under second definition, largest number whose factorial is less than 10^10000 – hence its factoria |
https://en.wikipedia.org/wiki/4000%20%28number%29 | 4000 (four thousand) is the natural number following 3999 and preceding 4001. It is a decagonal number.
Selected numbers in the range 4001–4999
4001 to 4099
4005 – triangular number
4007 – safe prime
4010 – magic constant of n × n normal magic square and n-queens problem for n = 20.
4013 – balanced prime
4019 – Sophie Germain prime
4027 – super-prime
4028 – sum of the first 45 primes
4030 – third weird number
4031 – sum of the cubes of the first six primes
4032 – pronic number
4033 – sixth super-Poulet number; strong pseudoprime in base 2
4060 – tetrahedral number
4073 – Sophie Germain prime
4079 – safe prime
4091 – super-prime
4092 – an occasional glitch in the game The Legend of Zelda: Ocarina of Time causes the Gossip Stones to say this number
4095 – triangular number and odd abundant number; number of divisors in the sum of the fifth and largest known unitary perfect number, largest Ramanujan–Nagell number of the form 2^b − 1.
4096 = 64² = 16³ = 8⁴ = 4⁶ = 2¹², the smallest number with exactly 13 divisors, a superperfect number
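Both claims about 4096 are easy to verify by brute force (helper names are my own):

```python
def divisors(n):
    """All positive divisors of n, by naive trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):
    """Sum-of-divisors function."""
    return sum(divisors(n))

print(len(divisors(4096)))             # 13: the powers 2**0 .. 2**12
print(sigma(sigma(4096)) == 2 * 4096)  # True: 4096 is superperfect
```

Here sigma(4096) = 2¹³ − 1 = 8191, a Mersenne prime, so sigma(8191) = 8192 = 2 × 4096.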
4100 to 4199
4104 = 2³ + 16³ = 9³ + 15³
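Like 1729, 4104 is expressible as a sum of two positive cubes in two different ways, which a short search confirms:

```python
# Find all unordered pairs (a, b) with a**3 + b**3 == 4104.
pairs = [(a, b) for a in range(1, 17) for b in range(a, 17) if a**3 + b**3 == 4104]
print(pairs)  # [(2, 16), (9, 15)]
```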
4127 – safe prime
4133 – super-prime
4139 – safe prime
4140 – Bell number
4141 – centered square number
4147 – smallest cyclic number in duodecimal, written 2497 in base 12: 2 × 4147 = 4972₁₂, 3 × 4147 = 7249₁₂, 4 × 4147 = 9724₁₂
4153 – super-prime
4160 – pronic number
4166 – centered heptagonal number,
4167 = 7! − 6! − 5! − 4! − 3! − 2! − 1!, number of planar partitions of 14
4169 – a number of points of norm ≤ n in a cubic lattice
4177 – prime of the form 2p-1
4181 – Fibonacci number, Markov number
4186 – triangular number
4187 – factor of R13, also the record number of wickets taken in first-class cricket by Wilfred Rhodes.
4199 – highly cototient number, product of three consecutive primes
4200 to 4299
4200 – nonagonal number, pentagonal pyramidal number,
4210 – 11th semi-meandric number
4211 – Sophie Germain prime
4213 – Riordan number
4217 |
https://en.wikipedia.org/wiki/5000%20%28number%29 | 5000 (five thousand) is the natural number following 4999 and preceding 5001. Five thousand is the largest isogrammic numeral in the English language.
Selected numbers in the range 5001–5999
5001 to 5099
5003 – Sophie Germain prime
5020 – amicable number with 5564
5021 – super-prime, twin prime with 5023
5023 – twin prime with 5021
5039 – factorial prime, Sophie Germain prime
5040 = 7!, superior highly composite number
5041 = 71², centered octagonal number
5050 – triangular number, Kaprekar number, sum of first 100 integers
5051 – Sophie Germain prime
5059 – super-prime
5076 – decagonal number
5081 – Sophie Germain prime
5087 – safe prime
5099 – safe prime
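5050 above (and 5292 later in this range) is a Kaprekar number: its square splits into two parts that add back to the original. A generic check (the helper is my own):

```python
def is_kaprekar(n):
    """True if some split of n**2 into left and right parts sums to n
    (the right part must be non-zero)."""
    s = str(n * n)
    return any(int(s[i:]) != 0 and int(s[:i] or "0") + int(s[i:]) == n
               for i in range(1, len(s)))

print(is_kaprekar(5050))  # True: 5050**2 = 25502500 and 2550 + 2500 = 5050
print(is_kaprekar(5292))  # True: 5292**2 = 28005264 and 28 + 005264 = 5292
```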
5100 to 5199
5107 – super-prime, balanced prime
5113 – balanced prime
5117 – sum of the first 50 primes
5151 – triangular number
5167 – Leonardo prime, cuban prime of the form x = y + 1
5171 – Sophie Germain prime
5184 = 72²
5186 – φ(5186) = 2592
5187 – φ(5187) = 2592
5188 – φ(5188) = 2592, centered heptagonal number
5189 – super-prime
5200 to 5299
5209 – largest minimal prime in base 6
5226 – nonagonal number
5231 – Sophie Germain prime
5244 = 22² + 23² + … + 29² = 20² + 21² + … + 28²
5249 – highly cototient number
5253 – triangular number
5279 – Sophie Germain prime, twin prime with 5281, 700th prime number
5280 is the number of feet in a mile. It is divisible by three, yielding 1760 yards per mile and by 16.5, yielding 320 rods per mile. Also, 5280 is connected with both Klein's J-invariant and the Heegner numbers. Specifically:
5281 – super-prime, twin prime with 5279
5282 – used in various paintings by Thomas Kinkade
5292 – Kaprekar number
5300 to 5399
5303 – Sophie Germain prime, balanced prime
5329 = 73², centered octagonal number
5333 – Sophie Germain prime
5335 – magic constant of n × n normal magic square and n-queens problem for n = 22.
5340 – octahedral number
5356 – triangular number
5365 – decagonal number
5381 – super-prime
5387 – safe prime, bala |
https://en.wikipedia.org/wiki/6000%20%28number%29 | 6000 (six thousand) is the natural number following 5999 and preceding 6001.
Selected numbers in the range 6001–6999
6001 to 6099
6025 – stage name of Carlos Cadona, rhythm guitarist of the Dead Kennedys from June 1978 to March 1979.
6028 – centered heptagonal number
6037 – super-prime, prime of the form 2p-1
6042 – number of the Mars-crossing asteroid 6042 Cheshirecat.
6047 – safe prime
6053 – Sophie Germain prime
6069 – nonagonal number
6073 – balanced prime
6079 – The serial number Winston Smith is referred to as in the George Orwell novel Nineteen Eighty-Four
6084 = 78², sum of the cubes of the first twelve integers
6089 – highly cototient number
6095 – magic constant of n × n normal magic square and n-Queens Problem for n = 23.
6100 to 6199
6101 – Sophie Germain prime
6105 – triangular number
6113 – Sophie Germain prime, super-prime
6121 – prime of the form 2p-1
6131 – Sophie Germain prime, twin prime with 6133
6133 – 800th prime number, twin prime with 6131
6143 – Thabit number
6144 – 3-smooth number (2¹¹×3)
6173 – Sophie Germain prime
6174 – Kaprekar's constant
6181 – octahedral number
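Kaprekar's constant 6174, listed above, is the fixed point of repeatedly subtracting a 4-digit number's digits sorted ascending from the same digits sorted descending; almost any 4-digit start (digits not all equal) reaches it within seven steps. A sketch:

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine on a 4-digit number (leading zeros kept)."""
    digits = f"{n:04d}"
    return int("".join(sorted(digits, reverse=True))) - int("".join(sorted(digits)))

n, steps = 3524, 0
while n != 6174:        # would loop forever if all four digits were equal
    n = kaprekar_step(n)
    steps += 1
print(n, steps)  # 6174 3
```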
6200 to 6299
6200 – harmonic divisor number
6201 – square pyramidal number
6211 – cuban prime of the form x = y + 1
6216 – triangular number
6217 – super-prime, prime of the form 2p-1
6229 – super-prime
6232 – amicable number with 6368
6236 – most widely accepted figure for the number of verses in the Qur'an
6241 = 79², centered octagonal number
6250 – Leyland number
6263 – Sophie Germain prime, balanced prime
6269 – Sophie Germain prime
6280 – decagonal number
6300 to 6399
6311 – super-prime
6317 – balanced prime
6322 – centered heptagonal number
6323 – Sophie Germain prime, balanced prime, super-prime
6328 – triangular number
6329 – Sophie Germain prime
6346 – number of verses in the Qur'an according to the sect founded by Rashad Khalifa.
6348 – pentagonal pyramidal number
6361 – prime of the form 2p-1, twin prime
6364 – nonagonal number
63 |
https://en.wikipedia.org/wiki/Sundaland | Sundaland (also called Sundaica or the Sundaic region) is a biogeographical region of South-eastern Asia corresponding to a larger landmass that was exposed throughout the last 2.6 million years during periods when sea levels were lower. It includes Bali, Borneo, Java, and Sumatra in Indonesia, and their surrounding small islands, as well as the Malay Peninsula on the Asian mainland.
Extent
The area of Sundaland encompasses the Sunda Shelf, a tectonically stable extension of Southeast Asia's continental shelf that was exposed during glacial periods of the last 2 million years.
The extent of the Sunda Shelf is approximately equal to the 120-meter isobath. In addition to the Malay Peninsula and the islands of Borneo, Java, and Sumatra, it includes the Java Sea, the Gulf of Thailand, and portions of the South China Sea. In total, the area of Sundaland is approximately 1,800,000 km2. The area of exposed land in Sundaland has fluctuated considerably during the past 2 million years; the modern land area is approximately half of the maximum extent.
The western and southern borders of Sundaland are clearly marked by the deeper waters of the Sunda Trench (some of the deepest in the world) and the Indian Ocean. The eastern boundary of Sundaland is the Wallace Line, identified by Alfred Russel Wallace as the eastern boundary of the range of Asia's land mammal fauna, and thus the boundary of the Indomalayan and Australasian realms. The islands east of the Wallace line are known as Wallacea, a separate biogeographical region that is considered part of Australasia. The Wallace Line corresponds to a deep-water channel that has never been crossed by any land bridges. The northern border of Sundaland is more difficult to define in bathymetric terms; a phytogeographic transition at approximately 9°N is considered to be the northern boundary.
Greater portions of Sundaland were most recently exposed during the last glacial period from approximately 110,000 to 12,000 ye |
https://en.wikipedia.org/wiki/Biological%20rhythm | Biological rhythms are repetitive biological processes. Some types of biological rhythms have been described as biological clocks. They can range in frequency from microseconds to less than one repetitive event per decade. Biological rhythms are studied by chronobiology. In the biochemical context biological rhythms are called biochemical oscillations.
The variations of the timing and duration of biological activity in living organisms occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and (c) in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (also known as blue-green algae; see bacterial circadian rhythms).
Circadian rhythm
The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning "around" and dies, "day", meaning "approximately a day." It is regulated by circadian clocks.
The circadian rhythm can further be broken down into routine cycles during the 24-hour day:
Diurnal, which describes organisms active during daytime
Nocturnal, which describes organisms active in the night
Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: white-tailed deer, some bats)
While circadian rhythms are defined as regulated by endogenous processes, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). For example, endogenous plant cycles may regulate the activity of associated bacteria by controlling the availability of plant-produced photosynthate.
Other cycles
Many other important cycles are also studied, includin |
https://en.wikipedia.org/wiki/Wallacea | Wallacea is a biogeographical designation for a group of mainly Indonesian islands separated by deep-water straits from the Asian and Australian continental shelves. Wallacea includes Sulawesi, the largest island in the group, as well as Lombok, Sumbawa, Flores, Sumba, Timor, Halmahera, Buru, Seram, and many smaller islands. The islands of Wallacea lie between the Sunda Shelf (the Malay Peninsula, Sumatra, Borneo, Java, and Bali) to the west, and the Sahul Shelf including Australia and New Guinea to the south and east. The total land area of Wallacea is .
Geography
Wallacea is defined as the series of islands stretching between the two continental shelves of Sunda and Sahul, but excluding the Philippines. Its eastern border (separating Wallacea from Sahul) is represented by a zoogeographical boundary known as Lydekker's Line, while the Wallace Line (separating Wallacea from Sunda) defines its western border.
The Weber Line is the midpoint, at which Asian and Australian fauna and flora are approximately equally represented. It follows the deepest straits traversing the Indonesian Archipelago.
The Wallace Line is named after the Welsh naturalist Alfred Russel Wallace, who recorded the differences between mammal and bird fauna between the islands on either side of the line. The islands of Sundaland to the west of the line, including Sumatra, Java, Bali, and Borneo, share a mammal fauna similar to that of East Asia, which includes tigers, rhinoceros, and apes; whereas the mammal fauna of Lombok and areas extending eastwards are mostly populated by marsupials and birds similar to those in Australasia. Sulawesi shows signs of both.
During the ice ages, sea levels were lower, exposing the Sunda shelf that links the islands of Sundaland to one another and to Asia and allowing Asian land animals to inhabit these islands.
The islands of Wallacea have few land mammals, land birds, or freshwater fish of continental origin, which find it difficult to cross open ocean. Ma |
https://en.wikipedia.org/wiki/Membrane%20potential | Membrane potential (also transmembrane potential or membrane voltage) is the difference in electric potential between the interior and the exterior of a biological cell. That is, there is a difference in the energy required for electric charges to move from the internal to exterior cellular environments and vice versa, as long as there is no acquisition of kinetic energy or the production of radiation. The concentration gradients of the charges directly determine this energy requirement. For the exterior of the cell, typical values of membrane potential, normally given in units of milli volts and denoted as mV, range from –80 mV to –40 mV.
All animal cells are surrounded by a membrane composed of a lipid bilayer with proteins embedded in it. The membrane serves as both an insulator and a diffusion barrier to the movement of ions. Transmembrane proteins, also known as ion transporter or ion pump proteins, actively push ions across the membrane and establish concentration gradients across the membrane, and ion channels allow ions to move across the membrane down those concentration gradients. Ion pumps and ion channels are electrically equivalent to a set of batteries and resistors inserted in the membrane, and therefore create a voltage between the two sides of the membrane.
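The equilibrium potential that such a concentration gradient produces for a single permeant ion species is not given in the text above; it is described by the standard Nernst equation:

```latex
% Nernst equation: R gas constant, T absolute temperature,
% z ion valence, F Faraday constant, [X] ion concentrations
E_X = \frac{RT}{zF}\,\ln\frac{[X]_{\mathrm{out}}}{[X]_{\mathrm{in}}}
```

Each ion channel type pulls the membrane voltage toward the Nernst potential of its ion, which is the sense in which pumps and channels behave like batteries and resistors.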
Almost all plasma membranes have an electrical potential across them, with the inside usually negative with respect to the outside. The membrane potential has two basic functions. First, it allows a cell to function as a battery, providing power to operate a variety of "molecular devices" embedded in the membrane. Second, in electrically excitable cells such as neurons and muscle cells, it is used for transmitting signals between different parts of a cell. Signals are generated by opening or closing of ion channels at one point in the membrane, producing a local change in the membrane potential. This change in the electric field can be quickly sensed by either adjacent or more distant ion chann |
https://en.wikipedia.org/wiki/Torus-based%20cryptography | Torus-based cryptography involves using algebraic tori to construct a group for use in ciphers based on the discrete logarithm problem. This idea was first introduced by Alice Silverberg and Karl Rubin in 2003 in the form of a public key algorithm by the name of CEILIDH. It improves on conventional cryptosystems by representing some elements of large finite fields compactly and therefore transmitting fewer bits.
See also
Torus
References
Karl Rubin, Alice Silverberg: Torus-Based Cryptography. CRYPTO 2003: 349–365
External links
Torus-Based Cryptography — the paper introducing the concept (in PDF).
Public-key cryptography |
https://en.wikipedia.org/wiki/369%20%28number%29 | Three hundred sixty-nine is the natural number following three hundred sixty-eight and preceding three hundred seventy.
In mathematics
369 is the magic constant of the 9 × 9 magic square and the n-Queens Problem for n = 9.
There are 369 free octominoes (polyominoes of order 8).
369 and 370 form a Ruth–Aaron pair: the sums of their distinct prime factors are equal (3 + 41 = 2 + 5 + 37 = 44).
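The pair can be checked with a short trial-division sketch that sums each distinct prime factor once (the 714/715 check is the classic Ruth–Aaron example):

```python
def distinct_prime_factor_sum(n):
    """Sum of the distinct prime factors of n, by trial division."""
    total, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            total += p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:          # leftover prime factor greater than sqrt of the original n
        total += n
    return total

# 369 = 3^2 * 41 and 370 = 2 * 5 * 37 share the sum 44.
assert distinct_prime_factor_sum(369) == distinct_prime_factor_sum(370) == 44
# The namesake pair: 714 = 2*3*7*17 and 715 = 5*11*13 both sum to 29.
assert distinct_prime_factor_sum(714) == distinct_prime_factor_sum(715) == 29
```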
References
Integers |
https://en.wikipedia.org/wiki/Biogenic%20substance | A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In the context of molecular biology, biogenic substances are referred to as biomolecules. They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can be modelled in the environment, particularly their transport in waterways.
The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks.
Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro algae, including cyanobacteria. Due to their antimicrobial properties they are currently the subject of research in both industrial projects, such as for anti-fouling paints, or in medicine.
History of discovery and classification
During a meeting of the New York Academy of Sciences' Section of Geology and Mineralogy in 1903, geologist Amadeus William Grabau proposed a new rock classification system in his paper 'Discussion of and Suggestions Regarding a New Classification of Rocks'. Within |
https://en.wikipedia.org/wiki/Floppy%20disk%20drive%20interface | Each generation of floppy disk drive (FDD) began with a variety of incompatible interfaces but soon evolved into one de facto standard interface for the generations of 8-inch FDDs, 5.25-inch FDDs and 3.5-inch FDDs. For example, before adopting 3.5-inch FDD standards for interface, media and form factor there were drives and media proposed by Hitachi, Tabor, Sony, Tandon, Shugart and Canon.
Sizes
8 inch
The de facto standard 8 inch FDD interface is based upon the Shugart Associates models SA800/801 FDDs and models SA850/851 FDDs. The signal interface uses a dual in-line 50-pin PCB edge connector which mates to a flat ribbon cable connector; separate connectors are provided for both AC and DC power.
5.25 inch
The de facto standard 5.25 inch FDD interface is based upon the Shugart Associates SA400 FDD. The signal interface uses a dual in-line 34-pin PCB edge connector which mates to a flat ribbon cable connector; a separate connector is for DC power. The 34-pin connector is similar in pinout to the standard 50-pin connector for 8 inch FDDs.
3.5 inch
The de facto standard for 3.5 inch drives uses a dual in-line pin-style connector mating to a socket connector, collectively slightly smaller than the PCB edge connector and mating socket used for the 5.25 inch standard, but with the same 34 pin definitions. A 'universal' cable would have four drive connectors, two for each size of FDD, although cables with only two drive connectors are common. The cable is normally formed into a ribbon, and a twist located between the pairs of drive connectors (see image) is usually applied to the conductors for pins 10 to 16 inclusive. This allows two drives connected to the same cable to be addressed distinctly by the host controller. Only two drives may be connected to such a cable; if there are four drive connectors, at least two must remain unused. A separate connector is provided for DC power.
Signal and control interface
3.5-inch and 5.25- |
https://en.wikipedia.org/wiki/DISCiPLE | The DISCiPLE is a floppy disk interface for the ZX Spectrum home computer. Designed by Miles Gordon Technology, it was marketed by Rockfort Products and launched in 1986.
Like Sinclair's own ZX Interface 1, the DISCiPLE was a wedge-shaped unit fitting underneath the Spectrum. It was designed as a super-interface, providing all the facilities a Spectrum owner could need. In addition to the floppy-disk interface, a parallel printer port and a "magic button" (see Non-maskable interrupt), it also offered twin joystick ports, Sinclair ZX Net-compatible network ports and an inhibit button for disabling the device.
At the rear of the unit was a pass-through port for connecting further devices, although the complexity of the DISCiPLE meant that many would not work, or only if the DISCiPLE was "turned off" using the inhibit button.
The DISCiPLE was a considerable success but its sophistication (the device included 8kB of ROM) meant that it was expensive and the plastic casing, located beneath the computer itself, was sometimes prone to overheating. These factors led to the development of MGT's later +D interface.
The DISCiPLE's DOS was named GDOS. MGT's later DOSs (G+DOS for the +D, and SAM DOS for the SAM Coupé) were backwards-compatible with GDOS. In later years a complete new system called UNI-DOS was developed by SD Software for the DISCiPLE and +D interfaces. In October 1993 "The Complete DISCiPLE Disassembly" was published in book form, documenting the "GDOS system 3d" version.
The popularity of the DISCiPLE led to the formation of a user group and magazine, INDUG, which later became Format Publications. Usergroups like INDUG/Format in the UK or DISCiPLE-Nieuwsbrief in the Netherlands produced enhancements such as extended printer support.
See also
Beta Disk Interface
References
Microcomputers
Home computers
ZX Spectrum
Computer storage devices |
https://en.wikipedia.org/wiki/Septentrional | Septentrional, meaning "of the north", is a Latinate adjective sometimes used in English. It is a form of the Latin noun septentriones, which refers to the seven stars of the Plough (Big Dipper), occasionally called the Septentrion.
In the 18th century, septentrional languages was a recognised term for the Germanic languages.
Etymology and background
The Oxford English Dictionary gives the etymology of septentrional as:
"Septentrional" is more or less synonymous with the term "boreal", derived from Boreas, a Greek god of the North Wind. The constellation Ursa Major, containing the Big Dipper, or Plough, dominates the skies of the North. The usual antonym for septentrional is the term meridional, which refers to the noonday sun.
Usage
The term septentrional is found on maps, mostly those made before 1700. Early maps of North America often refer to the northern- and northwesternmost unexplored areas of the continent as the "Septentrional" and as "America Septentrionalis", sometimes with slightly varying spellings. Sometimes abbreviated to "Sep.", it was used in historical astronomy to indicate the northern direction on the celestial globe, together with Meridional ("Mer.") for southern, Oriental ("Ori.") for eastern and Occidental ("Occ.") for western.
The linguistic usage in the 17th and 18th centuries was as an umbrella term. It described "the Germanic languages, usually with particular emphasis on Anglo-Saxon, Old Norse and Gothic." Writing of Johann Georg Keyßler in 1758, Thomas Gray distinguished between "Celtic" and "septentrional" antiquities. Thomas Percy actively criticised the blurring of the Celtic and the Germanic in the name of the "septentrional", while at the same time Ossianism favoured it. James Ingram in his inaugural lecture of 1807 called George Hickes "the first of septentrional scholars" for his pioneering lexicographical work on Anglo-Saxon. In current usage, "septentrional fiction" may refer to a setting in the Canadian North.
In Fran |
https://en.wikipedia.org/wiki/Gell-Mann%20matrices | The Gell-Mann matrices, developed by Murray Gell-Mann, are a set of eight linearly independent 3×3 traceless Hermitian matrices used in the study of the strong interaction in particle physics.
They span the Lie algebra of the SU(3) group in the defining representation.
Matrices
$$\lambda_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_2 = \begin{pmatrix} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad
\lambda_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
$$\lambda_4 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad
\lambda_5 = \begin{pmatrix} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{pmatrix}, \quad
\lambda_6 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},$$
$$\lambda_7 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \quad
\lambda_8 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}.$$
Properties
These matrices are traceless, Hermitian, and obey the extra trace orthonormality relation (so they can generate unitary matrix group elements of SU(3) through exponentiation). These properties were chosen by Gell-Mann because they then naturally generalize the Pauli matrices for SU(2) to SU(3), which formed the basis for Gell-Mann's quark model. Gell-Mann's generalization further extends to general SU(n). For their connection to the standard basis of Lie algebras, see the Weyl–Cartan basis.
Trace orthonormality
In mathematics, orthonormality typically implies a norm which has a value of unity (1). The Gell-Mann matrices, however, are normalized to a value of 2. Thus, the trace of the pairwise product results in the orthonormalization condition
$$\operatorname{tr}(\lambda_a \lambda_b) = 2\,\delta_{ab},$$
where $\delta_{ab}$ is the Kronecker delta.
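This normalization is straightforward to verify numerically; a sketch with NumPy (the matrix entries follow the standard Gell-Mann convention):

```python
import numpy as np

# The eight Gell-Mann matrices in the standard convention (index 0 unused).
l = [None] * 9
l[1] = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l[2] = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l[3] = np.diag([1.0, -1.0, 0.0]).astype(complex)
l[4] = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l[5] = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l[6] = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l[7] = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l[8] = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)

# Verify the trace orthonormality relation tr(la lb) = 2 * delta_ab.
for a in range(1, 9):
    for b in range(1, 9):
        t = np.trace(l[a] @ l[b])
        assert abs(t - (2.0 if a == b else 0.0)) < 1e-12
```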
This is so the embedded Pauli matrices corresponding to the three embedded subalgebras of SU(2) are conventionally normalized. In this three-dimensional matrix representation, the Cartan subalgebra is the set of linear combinations (with real coefficients) of the two matrices $\lambda_3$ and $\lambda_8$, which commute with each other.
There are three significant SU(2) subalgebras:
$$\{\lambda_1, \lambda_2, \lambda_3\}, \quad \{\lambda_4, \lambda_5, x\}, \quad \text{and} \quad \{\lambda_6, \lambda_7, y\},$$
where $x$ and $y$ are linear combinations of $\lambda_3$ and $\lambda_8$. The SU(2) Casimirs of these subalgebras mutually commute.
However, any unitary similarity transformation of these subalgebras will yield SU(2) subalgebras. There is an uncountable number of such transformations.
Commutation relations
The 8 generators of SU(3) satisfy the commutation and anti-commutation relations
$$[\lambda_a, \lambda_b] = 2 i f^{abc} \lambda_c, \qquad \{\lambda_a, \lambda_b\} = \frac{4}{3}\delta_{ab} I + 2 d^{abc} \lambda_c,$$
with the structure constants
$$f^{123} = 1, \quad f^{147} = f^{246} = f^{257} = f^{345} = \frac{1}{2}, \quad f^{156} = f^{367} = -\frac{1}{2}, \quad f^{458} = f^{678} = \frac{\sqrt{3}}{2}.$$
The structure constants are completely antisymmetric in |
https://en.wikipedia.org/wiki/Newgrounds | Newgrounds is a company and entertainment website founded by Tom Fulp in 1995. It hosts user-generated content such as games, films, audio, and artwork. Fulp produces in-house content at the headquarters and offices in Glenside, Pennsylvania.
In the 2000s and 2010s, Newgrounds played an important role in Internet culture, and in Internet animation and independent video gaming in particular. It has been called a "distinct time in gaming history", a place "where many animators and developers cut their teeth and gained a following long before social media was even a thing", and "a haven for fostering the greats of internet animation".
Content
User-generated content can be uploaded and categorized into one of the site's four web portals: Games, Movies, Audio, and Art. A Movie or Game submission undergoes a process termed "judgment", where it can be rated by all users (from 0 to 5 stars) and reviewed by other users. The average score calculated at various points during judgment determines whether the content will be "saved" (added to the database) or "blammed" (deleted, with only its reviews kept in the "Obituaries" section).
Since Adobe Flash Player was shut down on most current browsers by late 2020, Newgrounds uses the Ruffle emulator for content created with Flash; Ruffle is an Adobe Flash emulator written in Rust and sponsored by Newgrounds along with other popular sites such as Cool Math Games and Armor Games. As of 2022, Ruffle supported only a small number of Flash projects written in ActionScript 3.0, which meant users had to download the "Newgrounds Player", the site's own Flash emulator which it used prior to Ruffle, to run projects written in AS3.
Art and Audio are processed using a different method called "scouting", which the site describes as "a way to vet users and weed out spam, stolen works, low quality submissions, etc." All users can put art and audio onto their own page, but only those that are "scouted" will appear in the publ |
https://en.wikipedia.org/wiki/Electromigration | Electromigration is the transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. The effect is important in applications where high direct current densities are used, such as in microelectronics and related structures. As the structure size in electronics such as integrated circuits (ICs) decreases, the practical significance of this effect increases.
History
The phenomenon of electromigration has been known for over 100 years, having been discovered by the French scientist Gerardin. The topic first became of practical interest during the late 1960s when packaged ICs first appeared. The earliest commercially available ICs failed in a mere three weeks of use from runaway electromigration, which led to a major industry effort to correct this problem. The first observation of electromigration in thin films was made by I. Blech. Research in this field was pioneered by a number of investigators throughout the fledgling semiconductor industry. One of the most important engineering studies was performed by Jim Black of Motorola, after whom Black's equation is named. At the time, the metal interconnects in ICs were still about 10 micrometres wide. Currently interconnects are only hundreds to tens of nanometers in width, making research in electromigration increasingly important.
Practical implications of electromigration
Electromigration decreases the reliability of integrated circuits (ICs). It can cause the eventual loss of connections or failure of a circuit. Since reliability is critically important for space travel, military purposes, anti-lock braking systems, medical equipment like Automated External Defibrillators and is even important for personal computers or home entertainment systems, the reliability of chips (ICs) is a major focus of research efforts.
Due to difficulty of testing under real conditions, Black's equation is used to predict the life s |
https://en.wikipedia.org/wiki/Baby-step%20giant-step | In group theory, a branch of mathematics, the baby-step giant-step is a meet-in-the-middle algorithm for computing the discrete logarithm or order of an element in a finite abelian group by Daniel Shanks. The discrete log problem is of fundamental importance to the area of public key cryptography.
Many of the most commonly used cryptography systems are based on the assumption that the discrete log is extremely difficult to compute; the more difficult it is, the more security it provides a data transfer. One way to increase the difficulty of the discrete log problem is to base the cryptosystem on a larger group.
Theory
The algorithm is based on a space–time tradeoff. It is a fairly simple modification of trial multiplication, the naive method of finding discrete logarithms.
Given a cyclic group G of order n, a generator α of the group and a group element β, the problem is to find an integer x such that
α^x = β.
The baby-step giant-step algorithm is based on rewriting x as x = im + j, with m = ⌈√n⌉, 0 ≤ i < m and 0 ≤ j < m. Therefore, we have:
β(α^−m)^i = α^j.
The algorithm precomputes α^j for several values of j. Then it fixes an m and tries values of i in the right-hand side of the congruence above, in the manner of trial multiplication. It tests to see if the congruence is satisfied for any value of j, using the precomputed values of α^j.
The algorithm
Input: A cyclic group G of order n, having a generator α and an element β.
Output: A value x satisfying α^x = β.
m ← ⌈√n⌉
For all j where 0 ≤ j < m:
Compute αj and store the pair (j, αj) in a table. (See )
Compute α−m.
γ ← β. (set γ = β)
For all i where 0 ≤ i < m:
Check to see if γ is the second component (αj) of any pair in the table.
If so, return im + j.
If not, γ ← γ • α−m.
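The pseudocode above translates directly to Python for the common case of a multiplicative group of integers modulo a prime (the concrete group below is an illustrative choice, not from the article):

```python
import math

def bsgs(alpha, beta, n, mod):
    """Baby-step giant-step: return x with alpha**x = beta (mod mod), where
    alpha generates a cyclic group of order n. Returns None if no solution."""
    m = math.isqrt(n - 1) + 1                          # m = ceil(sqrt(n))
    table = {pow(alpha, j, mod): j for j in range(m)}  # baby steps: alpha^j -> j
    factor = pow(alpha, -m, mod)                       # alpha^(-m); needs Python 3.8+
    gamma = beta % mod
    for i in range(m):                                 # giant steps: beta * (alpha^-m)^i
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % mod
    return None

# Illustrative instance: the multiplicative group mod 101 (order 100), generated by 2.
x = bsgs(2, 3, 100, 101)
assert pow(2, x, 101) == 3
```

The dictionary plays the role of the lookup table described below: membership tests are expected constant time, giving the O(√n) running time.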
In practice
The best way to speed up the baby-step giant-step algorithm is to use an efficient table lookup scheme. The best in this case is a hash table. The hashing is done on the second component, and to perform the check in step 1 of the main loop, γ is hashed and the resulting memory address checked. Since ha |
https://en.wikipedia.org/wiki/OpenAL | OpenAL (Open Audio Library) is a cross-platform audio application programming interface (API). It is designed for efficient rendering of multichannel three-dimensional positional audio. Its API style and conventions deliberately resemble those of OpenGL. OpenAL is an environmental 3D audio library, which can add realism to a game by simulating attenuation (degradation of sound over distance), the Doppler effect (change in frequency as a result of motion), and material densities.
OpenAL originally aimed to be an open standard and open-source replacement for proprietary (and generally mutually incompatible) 3D audio APIs such as DirectSound and Core Audio, though in practice it has largely been implemented on various platforms as a wrapper around those proprietary APIs or as a proprietary, vendor-specific fork. While the reference implementation later became proprietary and unmaintained, there are open-source implementations such as OpenAL Soft available.
History
OpenAL was originally developed in 2000 by Loki Software to help them in their business of porting Windows games to Linux. After the demise of Loki, the project was maintained for a time by the free software/open source community, and implemented on NVIDIA nForce sound cards and motherboards. It was hosted (and largely developed) by Creative Technology until circa 2012.
Since 1.1 (2009), the sample implementation by Creative has turned proprietary, with the last releases in free licenses still accessible through the project's Subversion source code repository. However, OpenAL Soft is a widely used open source alternative and remains actively maintained and extended.
While the OpenAL charter says that there will be an "Architecture Review Board" (ARB) modeled on the OpenGL ARB, no such organization has ever been formed and the OpenAL specification is generally handled and discussed via email on its public mailing list.
The original mailing list, openal-devel hosted by Creative, ran from March 2003 |
https://en.wikipedia.org/wiki/Look-and-say%20sequence | In mathematics, the look-and-say sequence is the sequence of integers beginning as follows:
1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... .
To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example:
1 is read off as "one 1" or 11.
11 is read off as "two 1s" or 21.
21 is read off as "one 2, one 1" or 1211.
1211 is read off as "one 1, one 2, two 1s" or 111221.
111221 is read off as "three 1s, two 2s, one 1" or 312211.
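The read-off rule above is exactly run-length encoding applied to the previous term; a short sketch:

```python
from itertools import groupby

def look_and_say(s):
    """Read off runs of identical digits: '1211' -> 'one 1, one 2, two 1s' -> '111221'."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(s))

# Regenerate the first six terms of the sequence from the seed '1'.
seq = ["1"]
for _ in range(5):
    seq.append(look_and_say(seq[-1]))
assert seq == ["1", "11", "21", "1211", "111221", "312211"]
```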
The look-and-say sequence was analyzed by John Conway
after he was introduced to it by one of his students at a party.
The idea of the look-and-say sequence is similar to that of run-length encoding.
If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows:
d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, …
Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence . (for d = 2, see )
Basic properties
Growth
The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ...
Digits presence limitation
No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit.
Cosmological decay
Conway's cosmological theorem asserts that every sequence eventually splits ("decays") into a sequence of "atomic elements", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium, calling the sequence audioactive. There are also two "transuranic" elements (Np and Pu) for each digit other t |
https://en.wikipedia.org/wiki/Kato%20%28The%20Green%20Hornet%29 | Kato is a fictional character from The Green Hornet franchise. This character has appeared with the Green Hornet in radio, film, television, book and comic book versions. Kato is the Hornet's assistant and has been played by a number of actors. On radio, Kato was initially played by Raymond Hayashi, then Roland Parker who had the role for most of the run, and in the later years Mickey Tolan and Paul Carnegie. Keye Luke took the role in the movie serials, and in the television series, he was portrayed by Bruce Lee. Jay Chou played Kato in the 2011 Green Hornet film.
Character history
Kato is Britt Reid's valet, who doubles as The Green Hornet's masked driver and partner to help him in his vigilante adventures, disguised as the activities of a racketeer and his chauffeur/bodyguard/enforcer. According to the storyline, years before the events depicted in the series, Britt Reid saved Kato's life while traveling in the Far East. Depending on the version of the story, this prompts Kato to become Reid's assistant or friend. In the anthology book, The Green Hornet Chronicles from Moonstone Books, author Richard Dean Starr's story "Nothing Gold Can Stay: An Origin Story of Kato" explores the character's background and how he ends up living in America, suggesting that Kato met Britt Reid on a later trip back to his homeland while in search of his mother.
Radio program and nationality
George W. Trendle, the owner of radio station WXYZ in Michigan first created and produced "The Green Hornet" show in 1936, with the scripts being written by Fran Striker. The show became so popular it ran for nearly two decades and spun off at least two films. This was Trendle and Striker's second big radio hit; their first was "The Lone Ranger".
In the 1936 premiere of the radio program, Kato was presented as being Japanese. By 1939, the invasion of China by the Empire of Japan made this bad for public relations, and from that year until 1945 "Britt Reid's Japanese valet" in the show's openi |
https://en.wikipedia.org/wiki/Delimiter | A delimiter is a sequence of one or more characters for specifying the boundary between separate, independent regions in plain text, mathematical expressions or other data streams. An example of a delimiter is the comma character, which acts as a field delimiter in a sequence of comma-separated values. Another example of a delimiter is the time gap used to separate letters and words in the transmission of Morse code.
In mathematics, delimiters are often used to specify the scope of an operation, and can occur both as isolated symbols (e.g., colon in "") and as a pair of opposing-looking symbols (e.g., angled brackets in ).
Delimiters represent one of various means of specifying boundaries in a data stream. Declarative notation, for example, is an alternate method that uses a length field at the start of a data stream to specify the number of characters that the data stream contains.
Overview
Delimiters may be characterized as field and record delimiters, or as bracket delimiters.
Field and record delimiters
Field delimiters separate data fields. Record delimiters separate groups of fields.
For example, the CSV format uses a comma as the delimiter between fields, and an end-of-line indicator as the delimiter between records:
fname,lname,age,salary
nancy,davolio,33,$30000
erin,borakova,28,$25250
tony,raphael,35,$28700
This specifies a simple flat file database table using the CSV file format.
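The table above can be parsed with Python's csv module, treating the comma as the field delimiter and the newline as the record delimiter (the module choice is incidental; any CSV parser behaves similarly):

```python
import csv
import io

# The flat-file table from above: comma = field delimiter, newline = record delimiter.
data = """fname,lname,age,salary
nancy,davolio,33,$30000
erin,borakova,28,$25250
tony,raphael,35,$28700
"""

rows = list(csv.reader(io.StringIO(data)))
header, records = rows[0], rows[1:]
assert header == ["fname", "lname", "age", "salary"]
assert records[1] == ["erin", "borakova", "28", "$25250"]
```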
Bracket delimiters
Bracket delimiters, also called block delimiters, region delimiters, or balanced delimiters, mark both the start and end of a region of text.
Common examples of bracket delimiters include:
Conventions
Historically, computing platforms have used certain delimiters by convention. The following tables depict a few examples for comparison.
Programming languages
(See also, Comparison of programming languages (syntax)).
Field and Record delimiters (See also, ASCII, Control character).
Delimiter collision
Delimiter collision is a problem that occurs when a |
https://en.wikipedia.org/wiki/Site-directed%20mutagenesis | Site-directed mutagenesis is a molecular biology method that is used to make specific and intentional changes to the DNA sequence of a gene and any gene products. Also called site-specific mutagenesis or oligonucleotide-directed mutagenesis, it is used for investigating the structure and biological activity of DNA, RNA, and protein molecules, and for protein engineering.
Site-directed mutagenesis is one of the most important laboratory techniques for creating DNA libraries by introducing mutations into DNA sequences. There are numerous methods for achieving site-directed mutagenesis, but with decreasing costs of oligonucleotide synthesis, artificial gene synthesis is now occasionally used as an alternative to site-directed mutagenesis. Since 2013, the development of the CRISPR/Cas9 technology, based on a prokaryotic viral defense system, has also allowed for the editing of the genome, and mutagenesis may be performed in vivo with relative ease.
History
Early attempts at mutagenesis using radiation or chemical mutagens were non-site-specific, generating random mutations. Analogs of nucleotides and other chemicals were later used to generate localized point mutations, examples of such chemicals are aminopurine, nitrosoguanidine, and bisulfite. Site-directed mutagenesis was achieved in 1974 in the laboratory of Charles Weissmann using a nucleotide analogue N4-hydroxycytidine, which induces transition of GC to AT. These methods of mutagenesis, however, are limited by the kind of mutation they can achieve, and they are not as specific as later site-directed mutagenesis methods.
In 1971, Clyde Hutchison and Marshall Edgell showed that it is possible to produce mutants with small fragments of phage ϕX174 and restriction nucleases. Hutchison later produced with his collaborator Michael Smith in 1978 a more flexible approach to site-directed mutagenesis by using oligonucleotides in a primer extension method with DNA polymerase. For his part in the development |
https://en.wikipedia.org/wiki/Expression%20vector | An expression vector, otherwise known as an expression construct, is usually a plasmid or virus designed for gene expression in cells. The vector is used to introduce a specific gene into a target cell, and can commandeer the cell's mechanism for protein synthesis to produce the protein encoded by the gene. Expression vectors are the basic tools in biotechnology for the production of proteins.
The vector is engineered to contain regulatory sequences that act as enhancer and promoter regions and lead to efficient transcription of the gene carried on the expression vector. The goal of a well-designed expression vector is the efficient production of protein, and this may be achieved by the production of a significant amount of stable messenger RNA, which can then be translated into protein. The expression of a protein may be tightly controlled, with the protein produced in significant quantity only when necessary through the use of an inducer; in some systems, however, the protein may be expressed constitutively. Escherichia coli is commonly used as the host for protein production, but other cell types may also be used. An example of the use of an expression vector is the production of insulin, which is used for medical treatments of diabetes.
Elements
An expression vector has features that any vector may have, such as an origin of replication, a selectable marker, and a suitable site for the insertion of a gene like the multiple cloning site. The cloned gene may be transferred from a specialized cloning vector to an expression vector, although it is possible to clone directly into an expression vector. The cloning process is normally performed in Escherichia coli. Vectors used for protein production in organisms other than E.coli may have, in addition to a suitable origin of replication for its propagation in E. coli, elements that allow them to be maintained in another organism, and these vectors are called shuttle vectors.
Elements for expression
An expression vect |
https://en.wikipedia.org/wiki/Catalan%20solid | In mathematics, a Catalan solid, or Archimedean dual, is a polyhedron that is dual to an Archimedean solid. There are 13 Catalan solids. They are named for the Belgian mathematician Eugène Catalan, who first described them in 1865.
The Catalan solids are all convex. They are face-transitive but not vertex-transitive. This is because the dual Archimedean solids are vertex-transitive and not face-transitive. Note that unlike Platonic solids and Archimedean solids, the faces of Catalan solids are not regular polygons. However, the vertex figures of Catalan solids are regular, and they have constant dihedral angles. Being face-transitive, Catalan solids are isohedra.
Additionally, two of the Catalan solids are edge-transitive: the rhombic dodecahedron and the rhombic triacontahedron. These are the duals of the two quasi-regular Archimedean solids.
Just as prisms and antiprisms are generally not considered Archimedean solids, bipyramids and trapezohedra are generally not considered Catalan solids, despite being face-transitive.
Two of the Catalan solids are chiral: the pentagonal icositetrahedron and the pentagonal hexecontahedron, dual to the chiral snub cube and snub dodecahedron. These each come in two enantiomorphs. Not counting the enantiomorphs, bipyramids, and trapezohedra, there are a total of 13 Catalan solids.
List of Catalan solids and their duals
Symmetry
The Catalan solids, along with their dual Archimedean solids, can be grouped in those with tetrahedral, octahedral and icosahedral symmetry.
For both octahedral and icosahedral symmetry there are six forms. The only Catalan solid with genuine tetrahedral symmetry is the triakis tetrahedron (dual of the truncated tetrahedron). The rhombic dodecahedron and tetrakis hexahedron have octahedral symmetry, but they can be colored to have only tetrahedral symmetry. Rectification and snub also exist with tetrahedral symmetry, but they are Platonic instead of Archimedean, so their duals are Platonic instead of |
https://en.wikipedia.org/wiki/Virtual%20ground | In electronics, a virtual ground (or virtual earth) is a node of a circuit that is maintained at a steady reference potential, without being connected directly to the reference potential. In some cases the reference potential is considered to be that of the surface of the earth, and the reference node is called "ground" or "earth" as a consequence.
The virtual ground concept aids circuit analysis in operational amplifier and other circuits and provides useful practical circuit effects that would be difficult to achieve in other ways.
In circuit theory, a node may have any value of current or voltage but physical implementations of a virtual ground will have limitations of current handling ability and a non-zero impedance which may have practical side effects.
Construction
A voltage divider, using two resistors, can be used to create a virtual ground node. If two voltage sources are connected in series with two resistors, it can be shown that the midpoint becomes a virtual ground if the ratio of the source voltages equals the ratio of the resistances, V1/V2 = R1/R2.
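The divider's balance can be sketched numerically. A minimal sketch, assuming the divider runs from a source +V1 through R1 to the midpoint and on through R2 to a source −V2; the function name and component values are illustrative, not from the article:

```python
# Midpoint voltage of a divider between +V1 and -V2. By superposition, the node
# sits at Vmid = (V1*R2 - V2*R1) / (R1 + R2), which is 0 V exactly when the
# source ratio matches the resistance ratio, V1/V2 == R1/R2.
def midpoint_voltage(v1, v2, r1, r2):
    return (v1 * r2 - v2 * r1) / (r1 + r2)

print(midpoint_voltage(9.0, 9.0, 10e3, 10e3))   # symmetric supply, equal resistors -> 0.0
print(midpoint_voltage(9.0, 4.5, 10e3, 5e3))    # V1/V2 == R1/R2 -> still 0.0
print(midpoint_voltage(9.0, 4.5, 10e3, 10e3))   # unbalanced -> 2.25, not a virtual ground
```

Note that this passive midpoint has a nonzero output impedance (R1 in parallel with R2), which is why the active rail-splitter circuit described next is preferred when the node must sink or source current.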
An active virtual ground circuit is sometimes called a rail splitter. Such a circuit uses an op-amp or some other circuit element that has gain. Since an operational amplifier has very high open-loop gain, the potential difference between its inputs tends to zero when a feedback network is implemented.
This means that the output supplies the inverting input (via the feedback network) with enough voltage to reduce the potential difference between the inputs to microvolts. More precisely, it can be shown that the output voltage of the amplifier in the figure is approximately equal to .
Thus, as long as the amplifier is working in its linear region (output not saturated, frequencies within the range of the op-amp), the voltage at the inverting input terminal remains constant with respect to the real ground, independent of the loads to which the output may be connected.
This property is characterized as a "virtual ground".
Applications
Voltage is a differential quantity, which ap |
https://en.wikipedia.org/wiki/Digital%20control | Digital control is a branch of control theory that uses digital computers to act as system controllers.
Depending on the requirements, a digital control system can range from a microcontroller to an ASIC to a standard desktop computer.
Since a digital computer is a discrete system, the Laplace transform is replaced with the Z-transform. Since a digital computer has finite precision (see quantization), extra care is needed to ensure that errors in coefficients, analog-to-digital conversion, digital-to-analog conversion, etc. do not produce undesired or unplanned effects.
Since the creation of the first digital computers in the early 1940s, the price of digital computers has dropped considerably, which has made them key components of control systems: they are easy to configure and reconfigure through software, they can scale to the limits of memory or storage space without extra cost, program parameters can change over time (see adaptive control), and digital computers are much less susceptible to environmental conditions than analog components such as capacitors and inductors.
Digital controller implementation
A digital controller is usually cascaded with the plant in a feedback system. The rest of the system can either be digital or analog.
Typically, a digital controller requires:
Analog-to-digital conversion to convert analog inputs to machine-readable (digital) format
Digital-to-analog conversion to convert digital outputs to a form that can be input to a plant (analog)
A program that relates the outputs to the inputs
Output program
Outputs from the digital controller are functions of current and past input samples, as well as past output samples; this can be implemented by storing relevant values of input and output in registers. The output can then be formed as a weighted sum of these stored values.
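The stored-register scheme above can be sketched as a difference equation. A minimal sketch; the coefficient values below are hypothetical placeholders, not taken from the article:

```python
from collections import deque

# Difference-equation controller: the output is a weighted sum of current and
# past input samples plus past output samples, held in fixed-length registers.
# The weights b (on inputs) and a (on past outputs) are illustrative; in
# practice they come from discretizing a continuous design (e.g. via the
# Z-transform).
class DigitalController:
    def __init__(self, b, a):
        self.b = b                                       # weights on u[k], u[k-1], ...
        self.a = a                                       # weights on y[k-1], y[k-2], ...
        self.u = deque([0.0] * len(b), maxlen=len(b))    # stored input samples
        self.y = deque([0.0] * len(a), maxlen=len(a))    # stored output samples

    def step(self, u_k):
        self.u.appendleft(u_k)
        y_k = sum(bi * ui for bi, ui in zip(self.b, self.u)) \
            + sum(ai * yi for ai, yi in zip(self.a, self.y))
        self.y.appendleft(y_k)
        return y_k

ctrl = DigitalController(b=[0.5, 0.3], a=[0.2])
outs = [round(ctrl.step(u), 4) for u in (1.0, 1.0, 1.0)]
print(outs)   # → [0.5, 0.9, 0.98]
```

Each `step` call corresponds to one sample period: the new input is pushed into the register, the weighted sum is formed, and the result is stored for future output terms.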
The programs can take numerous forms and perform many functions, for example:
A digital filter for low-pass filtering
A state space model of a system to act as a state observer
A telemetr |
https://en.wikipedia.org/wiki/Discrete%20system | In theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models.
A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
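The sampling method mentioned above can be sketched in a few lines: the discrete sequence is x[n] = x(nT) for sample period T. The 5 Hz sine and 100 Hz sample rate below are arbitrary illustrative choices:

```python
import math

# Sample a continuous signal x at discrete multiples of the sample period T,
# producing the discrete sequence x[n] = x(n*T).
def sample(x, T, n_samples):
    return [x(n * T) for n in range(n_samples)]

f, fs = 5.0, 100.0                 # signal frequency and sample rate (Hz)
samples = sample(lambda t: math.sin(2 * math.pi * f * t), 1 / fs, 20)
print(len(samples), round(samples[5], 4))   # sample 5 falls at t = 0.05 s, sin(pi/2) = 1.0
```

With fs well above twice the signal frequency, as here, the discrete sequence faithfully represents the continuous signal.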
See also
Digital control
Finite-state machine
Frequency spectrum
Mathematical model
Sample and hold
Sample rate
Sample time
Z-transform
References
Automata (computation)
Models of computation
Signal processing |
https://en.wikipedia.org/wiki/Hybrid%20system | A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine or automaton). Often, the term "hybrid dynamical system" is used to distinguish such systems from hybrid systems in other senses, such as those that combine neural nets and fuzzy logic, or electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, the state of a hybrid system is defined by the values of the continuous variables and a discrete mode. The state changes either continuously, according to a flow condition, or discretely according to a control graph. Continuous flow is permitted as long as so-called invariants hold, while discrete transitions can occur as soon as given jump conditions are satisfied. Discrete transitions may be associated with events.
Examples
Hybrid systems have been used to model several cyber-physical systems, including physical systems with impact, logic-dynamic controllers, and even Internet congestion.
Bouncing ball
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let be the height of the ball and be the velocity of the ball. A hybrid system describing the ball is as follows:
When , flow is governed by
,
where is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.
When , jumps are governed by
,
where |
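The flow-and-jump structure of the bouncing ball can be sketched in code. A minimal sketch; the symbols h, v, g and the restitution coefficient c stand in for the article's notation (which did not survive extraction), and the numeric values are arbitrary:

```python
import math

# Event-driven simulation of the bouncing-ball hybrid system: continuous flow
# under gravity between impacts, and a discrete jump (velocity reversal scaled
# by a restitution coefficient) at each impact with the ground.
g, c = 9.81, 0.8            # gravity (m/s^2) and restitution coefficient
h, v = 10.0, 0.0            # initial height (m) and velocity (m/s)

peaks = []
for _ in range(4):
    # Flow: integrate the ballistic phase analytically until h == 0.
    t_impact = (v + math.sqrt(v * v + 2 * g * h)) / g
    v = v - g * t_impact            # velocity at the ground (moving down)
    h = 0.0
    # Jump: discrete transition modelling the inelastic collision.
    v = -c * v
    peaks.append(v * v / (2 * g))   # apex height of the next flight arc

print([round(p, 3) for p in peaks])   # → [6.4, 4.096, 2.621, 1.678]
```

Each apex is c² times the previous one (0.64 here), showing the energy dissipated by every discrete jump; the flow between jumps is purely continuous.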
https://en.wikipedia.org/wiki/Closed-loop%20controller | A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver can alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimal way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
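The cruise-control discussion above can be sketched with a PID loop driving a toy first-order plant. Everything here (gains, drag constant, time step) is an illustrative assumption, not a tuned design:

```python
# A PID controller regulating a crude first-order "vehicle" model toward a
# speed setpoint, in the spirit of the cruise-control example above.
def simulate(setpoint=30.0, steps=400, dt=0.1):
    kp, ki, kd = 0.8, 0.5, 0.05          # PID gains (illustrative, hand-picked)
    speed, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt             # accumulated error (integral term)
        derivative = (err - prev_err) / dt
        prev_err = err
        power = kp * err + ki * integral + kd * derivative
        # Toy plant: acceleration = engine power minus a speed-proportional drag
        # (the drag plays the role of a disturbance such as a hill).
        speed += (power - 0.2 * speed) * dt
    return speed

final_speed = simulate()
print(round(final_speed, 2))
```

The integral term is what drives the steady-state error to zero, illustrating the disturbance-rejection advantage listed above; an open-loop controller with a fixed power setting would settle at whatever speed the drag dictates.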
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and |
https://en.wikipedia.org/wiki/Cell%20growth | Cell growth refers to an increase in the total mass of a cell, including cytoplasmic, nuclear and organelle volume. Cell growth occurs when the overall rate of cellular biosynthesis (production of biomolecules or anabolism) is greater than the overall rate of cellular degradation (the destruction of biomolecules via the proteasome, lysosome or autophagy, or catabolism).
Cell growth is not to be confused with cell division or the cell cycle, which are distinct processes that can occur alongside cell growth during the process of cell proliferation, where a cell, known as the mother cell, grows and divides to produce two daughter cells. Importantly, cell growth and cell division can also occur independently of one another. During early embryonic development (cleavage of the zygote to form a morula and blastoderm), cell divisions occur repeatedly without cell growth. Conversely, some cells can grow without cell division or without any progression of the cell cycle, such as growth of neurons during axonal pathfinding in nervous system development.
In multicellular organisms, tissue growth rarely occurs solely through cell growth without cell division, but most often occurs through cell proliferation. This is because a single cell with only one copy of the genome in the cell nucleus can perform biosynthesis and thus undergo cell growth at only half the rate of two cells. Hence, two cells grow (accumulate mass) at twice the rate of a single cell, and four cells grow at 4-times the rate of a single cell. This principle leads to an exponential increase of tissue growth rate (mass accumulation) during cell proliferation, owing to the exponential increase in cell number.
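The doubling argument above can be illustrated with a toy calculation: each cell adds mass in proportion to its biosynthetic capacity, so a proliferating population accumulates mass far faster than a single non-dividing cell. All rates and time units below are arbitrary illustrations, not biological measurements:

```python
# Compare mass accumulation for a single non-dividing cell vs. a proliferating
# population where every division doubles the total biosynthetic capacity.
rate = 0.05                      # mass added per cell per time step (arbitrary)
mass_single, mass_prolif, cells = 1.0, 1.0, 1
for t in range(1, 101):
    mass_single += rate * 1      # one genome's worth of biosynthesis: linear growth
    mass_prolif += rate * cells  # every cell contributes biosynthesis
    if t % 20 == 0:
        cells *= 2               # periodic division doubles the cell number

print(round(mass_single, 2), round(mass_prolif, 2), cells)   # → 6.0 32.0 32
```

The non-dividing cell grows linearly, while the proliferating population's mass accumulation accelerates with each doubling, matching the exponential tissue-growth principle described above.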
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves bala |
https://en.wikipedia.org/wiki/Numero%20sign | The numero sign or numero symbol, №, (also represented as Nº, No̱, No. or no.), is a typographic abbreviation of the word number(s) indicating ordinal numeration, especially in names and titles. For example, using the numero sign, the written long-form of the address is shortened to , yet both forms are spoken long.
Typographically, the numero sign combines as a single ligature the uppercase Latin letter N with a usually superscript lowercase letter o, sometimes underlined, resembling the masculine ordinal indicator º. The ligature has a code point in Unicode as a precomposed character, U+2116 № NUMERO SIGN.
The Oxford English Dictionary derives the numero sign from Latin numero, the ablative form of numerus ("number", with the ablative denotations of "by the number, with the number"). In Romance languages, the numero sign is understood as an abbreviation of the word for "number", e.g. Italian numero, French numéro, and Portuguese and Spanish número.
This article describes other typographical abbreviations for "number" in different languages, in addition to the numero sign proper.
Usages
The numero sign's substitution by the two separate letters N and o is common. A capital or lower-case "n" may be used, followed by "o.", superscript "o", the ordinal indicator, or the degree sign; this will be understood in most languages.
Bulgarian
In Bulgarian the numero sign is often used and it is present in three widely used keyboard layouts accessible with in BDS and prBDS and with on the Phonetic layout.
English
In English, the non-ligature form is typical and is often used to abbreviate the word "number". In North America, the number sign, #, is more prevalent. The ligature form does not appear on British or American QWERTY keyboards.
French
The numero symbol is not in common use in France and does not appear on a standard AZERTY keyboard. Instead, the French Imprimerie nationale recommends the use of the form "no" (an "n" followed by a superscript lowercase "o"). The plural form "nos" can also be used. In practice, the " |
https://en.wikipedia.org/wiki/Nslookup | nslookup (from "name server lookup") is a network administration command-line tool for querying the Domain Name System (DNS) to obtain the mapping between domain name and IP address, or other DNS records.
Overview
nslookup was originally distributed as part of the BIND name server software. Early in the development of BIND 9, the Internet Systems Consortium planned to deprecate nslookup in favor of host and dig. This decision was reversed in 2004 with the release of BIND 9.3, and nslookup has been fully supported since then.
Unlike dig, nslookup does not use the operating system's local Domain Name System resolver library to perform its queries, and thus may behave differently. Additionally, vendor-provided versions may include output from other sources of name information, such as hosts files and the Network Information Service. Some behaviors of nslookup may be modified by the contents of resolv.conf.
The Linux version of nslookup was written by Andrew Cherenson.
The ReactOS version was developed by Lucas Suggs and is licensed under the GPL.
Usage
nslookup operates in interactive or non-interactive mode. Interactive mode is entered by invoking the program with no arguments, or with - (minus sign) as the first argument and the hostname or Internet address of a name server as the second; the user then issues parameter configurations or requests at the nslookup prompt (>). When no arguments are given, the command queries the default server. Subcommands specified on the command line with - (minus sign) should precede any nslookup commands. In non-interactive mode, i.e. when the first argument is the name or Internet address of the host being looked up, parameters and the query are specified as command-line arguments in the invocation of the program; the information for the specified host is then retrieved using the default name server.
See also
dig, a utility that interrogates DNS servers directly for troubleshooting and system administration purposes.
hos |
https://en.wikipedia.org/wiki/Bernard%20Fr%C3%A9nicle%20de%20Bessy | Bernard Frénicle de Bessy (c. 1604 – 1674), was a French mathematician born in Paris, who wrote numerous mathematical papers, mainly in number theory and combinatorics. He is best remembered for , a treatise on magic squares published posthumously in 1693, in which he described all 880 essentially different normal magic squares of order 4. The Frénicle standard form, a standard representation of magic squares, is named after him. He solved many problems created by Fermat and also discovered the cube property of the number 1729 (Ramanujan number), later referred to as a taxicab number. He is also remembered for his treatise Traité des triangles rectangles en nombres published (posthumously) in 1676 and reprinted in 1729.
Bessy was a member of many of the scientific circles of his day, including the French Academy of Sciences, and corresponded with many prominent mathematicians, such as Mersenne and Pascal. Bessy was also particularly close to Fermat, Descartes and Wallis, and was best known for his insights into number theory.
In 1661 he proposed to John Wallis a problem of what amounted to the following system of equations in integers,
x² + y² = z², x² = u² + v², x − y = u − v > 0.
A solution was given by Théophile Pépin in 1880.
La Méthode des exclusions
Frénicle's La Méthode des exclusions was published (posthumously) in 1693, which appeared in the fifth volume of (1729, Paris), though the work appears to have been written around 1640. The book contains a short introduction followed by ten rules, intended to serve as a "method" or general rules one should apply in order to solve mathematical problems. During the Renaissance, "method" was primarily used for educational purposes, rather than for professional mathematicians (or natural philosophers). However, Frénicle's rules imply slight methodological preferences which suggests a turn towards explorational purposes.
Frénicle's text provided a number of examples on how his rules ought to be applied. He pr |
https://en.wikipedia.org/wiki/Robert%20Woodhouse | Robert Woodhouse (28 April 1773 – 23 December 1827) was a British mathematician and astronomer.
Biography
Early life and education
Robert Woodhouse was born on 28 April 1773 in Norwich, Norfolk, the son of Robert Woodhouse, linen draper, and Judith Alderson, the daughter of a Unitarian minister from Lowestoft. Robert junior was baptised at St George's Church, Colegate, Norwich, on 19 May 1773. A younger son, John Thomas Woodhouse, was born in 1780. The brothers were educated at the Paston School in North Walsham, north of Norwich.
In May 1790 Woodhouse was admitted to Gonville and Caius College, Cambridge, the college where Paston pupils were traditionally sent. In 1795 he graduated as the Senior Wrangler (ranked first among the mathematics undergraduates at the university), and took the First Smith's Prize. He obtained his Master's degree at Cambridge in 1798.
Marriage and career at Cambridge
Woodhouse was a fellow of the college from 1798 to 1823, after which he resigned so as to be able to marry Harriet, the daughter of William Wilkin, a Norwich architect. They were married on 20 February 1823; the marriage produced a son, also named Robert. Harriet Woodhouse died at Cambridge on 31 March 1826.
Woodhouse was elected a Fellow of the Royal Society on 16 December 1802. His earliest work, entitled the Principles of Analytical Calculation, was published at Cambridge in 1803. In this he explained the differential notation and strongly pressed the employment of it; but he severely criticised the methods used by continental writers, and their constant assumption of non-evident principles.
In 1809 Woodhouse published a textbook covering planar trigonometry and spherical trigonometry, and the next year a historical treatise on the calculus of variations and isoperimetrical problems. He next produced a work on astronomy, of which the first book (usually bound in two volumes), on practical and descriptive astronomy, was issued in 1812, and the second book, containing an accou |
https://en.wikipedia.org/wiki/Particle%20displacement | Particle displacement or displacement amplitude is a measurement of the distance a sound particle moves from its equilibrium position in a medium as it transmits a sound wave.
The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound), but it can also be a transverse wave, such as the vibration of a taut string. In the case of a sound wave travelling through air, the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling.
A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound, equal to about 343 m/s in air at 20 °C.
Mathematical definition
Particle displacement, denoted δ, is given by integrating the particle velocity over time, δ = ∫ v dt,
where v is the particle velocity.
Progressive sine waves
The particle displacement of a progressive sine wave is given by
where
is the amplitude of the particle displacement;
is the phase shift of the particle displacement;
is the angular wavevector;
is the angular frequency.
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by
where
is the amplitude of the particle velocity;
is the phase shift of the particle velocity;
is the amplitude of the acoustic pressure;
is the phase shift of the acoustic pressure.
Taking the Laplace transforms of v and p with respect to time yields
Since , the amplitude of the specific acoustic impedance is given by
Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by
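These amplitude relations can be sketched numerically: for a progressive sine wave, integrating the velocity divides the amplitude by the angular frequency ω, and dividing the pressure amplitude by the specific acoustic impedance gives the velocity amplitude. The impedance figure Z ≈ 413 rayl (air near 20 °C) and the helper name are assumptions for illustration:

```python
import math

# Displacement amplitude from a sound pressure amplitude, via
# |delta| = |v| / omega and |v| = |p| / Z for a progressive sine wave.
def displacement_amplitude(p_amplitude, frequency, impedance=413.0):
    omega = 2 * math.pi * frequency
    v_amplitude = p_amplitude / impedance       # |v| = |p| / Z
    return v_amplitude / omega                  # |delta| = |v| / omega

# A loud 1 kHz tone with 1 Pa pressure amplitude displaces air particles by
# well under a micrometre.
print(displacement_amplitude(1.0, 1000.0))
```

The result (a few hundred nanometres) illustrates how small particle displacements are even for loud sounds, and why displacement falls with frequency for a given pressure.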
See also
Sound
Sound particle
Particle velocity
Particle acceleration
References and notes
Related Reading:
External links
Acoustic Particle-Image Velocimetry. Development and Applications
Ohm's Law as Acoustic Equivalent. Calculations
Relationships of |
https://en.wikipedia.org/wiki/7000%20%28number%29 | 7000 (seven thousand) is the natural number following 6999 and preceding 7001.
Selected numbers in the range 7001–7999
7001 to 7099
7021 – triangular number
7043 – Sophie Germain prime
7056 = 84²
7057 – cuban prime of the form x = y + 1, super-prime
7073 – Leyland number
7079 – Sophie Germain prime, safe prime
7100 to 7199
7103 – Sophie Germain prime, sexy prime with 7109
7106 – octahedral number
7109 – super-prime, sexy prime with 7103
7121 – Sophie Germain prime
7140 – triangular number, also a pronic number, and hence 7140/2 = 3570 is also a triangular number; tetrahedral number
7151 – Sophie Germain prime
7155 – number of 19-bead necklaces (turning over is allowed) where complements are equivalent
7187 – safe prime
7192 – weird number
7193 – Sophie Germain prime, super-prime
7200 to 7299
7200 – pentagonal pyramidal number
7211 – Sophie Germain prime
7225 = 85², centered octagonal number
7230 = 36² + 37² + 38² + 39² + 40² = 41² + 42² + 43² + 44²
7246 – centered heptagonal number
7247 – safe prime
7260 – triangular number
7267 – decagonal number
7272 – Kaprekar number
7283 – super-prime
7291 – nonagonal number
7300 to 7399
7316 – number of 18-bead binary necklaces with beads of 2 colors where the colors may be swapped but turning over is not allowed
7338 – Fine number.
7349 – Sophie Germain prime
7351 – super-prime, cuban prime of the form x = y + 1
7381 – triangular number
7385 – Keith number
7396 = 86²
7400 to 7499
7417 – super-prime
7433 – Sophie Germain prime
7471 – centered cube number
7481 – super-prime, cousin prime
7500 to 7599
7503 – triangular number
7523 – balanced prime, safe prime, super-prime
7537 – prime of the form 2p-1
7541 – Sophie Germain prime
7559 – safe prime
7560 – highly composite number
7561 – Markov prime
7568 – centered heptagonal number
7569 = 872, centered octagonal number
7583 – balanced prime
7600 to 7699
7607 – safe prime, super-prime
7612 – decagonal number
7614 – nonagonal number |
https://en.wikipedia.org/wiki/8000%20%28number%29 | 8000 (eight thousand) is the natural number following 7999 and preceding 8001.
8000 is the cube of 20, as well as the sum of four consecutive integers cubed, 11³ + 12³ + 13³ + 14³.
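The identity can be verified directly:

```python
# Check that 8000 is both 20 cubed and the sum of four consecutive cubes.
consecutive = sum(n ** 3 for n in range(11, 15))   # 11^3 + 12^3 + 13^3 + 14^3
print(20 ** 3, consecutive, 20 ** 3 == consecutive)   # → 8000 8000 True
```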
The fourteen tallest mountains on Earth, which exceed 8000 meters in height, are sometimes referred to as eight-thousanders.
Selected numbers in the range 8001–8999
8001 to 8099
8001 – triangular number
8002 – Mertens function zero
8011 – Mertens function zero, super-prime
8012 – Mertens function zero
8017 – Mertens function zero
8021 – Mertens function zero
8039 – safe prime
8059 – super-prime
8069 – Sophie Germain prime
8093 – Sophie Germain prime
8100 to 8199
8100 = 90²
8101 – super-prime
8111 – Sophie Germain prime
8117 – super-prime, balanced prime
8119 – octahedral number; 8119/5741 ≈ √2
8125 – pentagonal pyramidal number
8128 – perfect number, harmonic divisor number, 127th triangular number, 64th hexagonal number, eighth 292-gonal number, fourth 1356-gonal number
8147 – safe prime
8189 – highly cototient number
8190 – harmonic divisor number
8191 – Mersenne prime
8192 = 2¹³
8200 to 8299
8208 – base 10 narcissistic number as 8⁴ + 2⁴ + 0⁴ + 8⁴ = 8208
8219 – twin prime with 8221
8221 – super-prime, twin prime with 8219
8233 – super-prime, centered heptagonal number
8243 – Sophie Germain prime
8256 – triangular number
8257 – sum of the squares of the first fourteen primes
8269 – cuban prime of the form x = y + 1
8273 – Sophie Germain prime
8281 = 91², sum of the cubes of the first thirteen integers, nonagonal number, centered octagonal number
8287 – super-prime
8300 to 8399
8321 – super-Poulet number
8326 – decagonal number
8361 – Leyland number
8377 – super-prime
8385 – triangular number
8389 – super-prime, twin prime
8400 to 8499
8423 – safe prime
8436 – tetrahedral number
8464 = 92²
8500 to 8599
8513 – Sophie Germain prime, super-prime
8515 – triangular number
8521 – sexy prime with 8527
8527 – super-prime, sexy prime with 8521 |
https://en.wikipedia.org/wiki/RIVA%20128 | Released in August 1997 by Nvidia, the RIVA 128, or "NV3", was one of the first consumer graphics processing units to integrate 3D acceleration in addition to traditional 2D and video acceleration. Its name is an acronym for Real-time Interactive Video and Animation accelerator.
Following the less successful "NV1" accelerator, the RIVA 128 was the first product to gain Nvidia widespread recognition. It was also a major change in technological direction for Nvidia.
Architecture
Nvidia's "NV1" chip had been designed for a fundamentally different type of rendering technology, called quadratic texture mapping, a technique not supported by Direct3D. The RIVA 128 was instead designed to accelerate Direct3D to the utmost extent possible. It was built to render within the Direct3D 5 and OpenGL API specifications. The graphics accelerator consists of 3.5 million transistors built on a 350 nm fabrication process and is clocked at 100 MHz. RIVA 128 has a single pixel pipeline capable of 1 pixel per clock when sampling one texture. It is specified to output pixels at a rate of 100 million per second and 25-pixel triangles at 1.5 million per second. There are 12 KiB of on-chip memory used for pixel and vertex caches. The chip was limited to a 16-bit (Highcolor) pixel format when performing 3D acceleration and a 16-bit Z-buffer.
The 2D accelerator engine within the RIVA 128 is 128 bits wide and also operates at 100 MHz. In this "fast and wide" configuration, as Nvidia referred to it, the RIVA 128 performed admirably for GUI acceleration compared to competitors. A 32-bit hardware VESA-compliant SVGA/VGA core was implemented as well. Video acceleration aboard the chip is optimized for MPEG-2 but lacks full acceleration of that standard. Final picture output is routed through an integrated 206 MHz RAMDAC. RIVA 128 had the advantage of being a combination 2D/3D graphics chip, unlike Voodoo Graphics. This meant that the computer did not require a separate 2D card for output outsid |
https://en.wikipedia.org/wiki/RIVA%20TNT | The RIVA TNT, codenamed NV4, is a 2D, video, and 3D graphics accelerator chip for PCs that was developed by Nvidia. It was released in March 1998 and cemented Nvidia's reputation as a worthy rival within the developing consumer 3D graphics adapter industry. The first RIVA TNT based card was released on June 15, 1998, by STB Systems: Velocity 4400. RIVA is an acronym for Real-time Interactive Video and Animation accelerator. The "TNT" suffix refers to the chip's ability to work on two texels at once (TwiN Texel).
Overview
The TNT was designed as a follow-up to the RIVA 128 and a response to 3Dfx's introduction of the Voodoo2. It added a second pixel pipeline, practically doubling rendering speed, and used considerably faster memory. Unlike the Voodoo2 (but like the slower Matrox G200) it also added support for a 32-bit (truecolor) pixel format, 24-bit Z-buffer in 3D mode, an 8-bit stencil buffer and support for 1024×1024 pixel textures. Improved mipmapping and texture filtering techniques, including newly added support for trilinear filtering, dramatically improved quality compared to the TNT's predecessor. TNT also added support for up to 16 MiB of SDR SDRAM. Like RIVA 128, RIVA TNT is a single chip solution.
The TNT shipped later than originally planned, ran quite hot, and was clocked lower than Nvidia had planned at 90 MHz instead of 110 MHz. Originally planned specifications should have placed the card ahead of Voodoo2 in theoretical performance for Direct3D applications, but at 90 MHz it did not quite match the Voodoo2. At the time, most games supported 3dfx's proprietary Glide API which gave the Voodoo2 a large advantage in speed and image quality, and some games only used the Glide API for 3D acceleration, leaving TNT users no better off than people who didn't have a 3D accelerator. Even in "OpenGL only" comparisons such as the case in Quake 2, the Voodoo2 had the upper hand as a custom "MiniGL" driver was made specifically for 3dfx cards to run the game (a |
https://en.wikipedia.org/wiki/RIVA%20TNT2 | The RIVA TNT2 is a graphics processing unit manufactured by Nvidia starting in early 1999. The chip is codenamed "NV5" because it is the 5th graphics chip design by Nvidia, succeeding the RIVA TNT (NV4). RIVA is an acronym for Real-time Interactive Video and Animation accelerator. The "TNT" suffix refers to the chip's ability to work on two texels at once (TwiN Texel). Nvidia removed RIVA from the name later in the chip's lifetime.
Overview
The TNT2 core features the same basic dual-pipeline layout as the RIVA TNT, but with a few updates: larger 2048×2048 texture support, 32-bit Z-buffer/stencil support, AGP 4X support, up to 32 MB of VRAM, and a process shrink from 0.35 μm to 0.25 μm. It was the process shrink that enabled improved clock speeds (from 90 MHz to 150+ MHz), which is where the substantial performance improvement came from.
A low-cost version, known as the TNT2 M64, was produced with the memory interface reduced from 128-bit to 64-bit. Sometimes these were labeled "Vanta", continuing the Vanta name started with a value-oriented RIVA TNT-based product. This chipset outperformed the older RIVA TNT while being less costly to produce. They proved quite popular in the OEM market, as most consumers simply assumed all TNT2 cards were the same.
Product comparisons
RIVA TNT2's competition included the 3dfx Voodoo2, 3dfx Voodoo3, the Matrox G400, and the ATI Rage 128. The main competitor to the TNT2 was the Voodoo3, which compared to the TNT2 lacked 32-bit color output in 3D. This was a distinguishing point for the TNT2, while the Voodoo3 was marketed under the premise of superior speed and game compatibility. The 3dfx Glide API was still popular at this time, and frequently performed faster and with better image quality than non-vendor locked APIs Direct3D and OpenGL. Some games also had exclusive 3D features when used with Glide, including Wing Commander: Prophecy, and the popular Unreal had a troubled development history with regards to Direct |
https://en.wikipedia.org/wiki/Symbolic%20method | In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it.
Symbolic notation
The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties.
Example: the discriminant of a binary quadratic form
These symbols can be explained by the following example from Gordan. Suppose that
is a binary quadratic form with an invariant given by the discriminant
The symbolic representation of the discriminant is
where a and b are the symbols. The meaning of the expression (ab)2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so
Squaring this we get
Next we pretend that
so that
and we ignore the fact that this does not seem to make sense if f is not a power of a linear form.
Substituting these values gives
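The formulas elided from this example can be restored from the standard presentation of Gordan's example (a reconstruction; the binary quadratic form is written with coefficients A0, A1, A2 and Δ denotes the discriminant):

```latex
f(x,y) = A_0 x^2 + 2A_1 xy + A_2 y^2, \qquad \Delta = A_0 A_2 - A_1^2 .

% Symbolic representation of the discriminant: 2\Delta = (ab)^2, where
(ab) = \det\begin{pmatrix} a_1 & a_2 \\ b_1 & b_2 \end{pmatrix} = a_1 b_2 - a_2 b_1 ,
\qquad (ab)^2 = a_1^2 b_2^2 - 2\, a_1 a_2\, b_1 b_2 + a_2^2 b_1^2 .

% Pretending that f = (a_1 x + a_2 y)^2 = (b_1 x + b_2 y)^2 gives
a_1^2 = b_1^2 = A_0, \qquad a_1 a_2 = b_1 b_2 = A_1, \qquad a_2^2 = b_2^2 = A_2 ,

% so substituting these values,
(ab)^2 = A_0 A_2 - 2 A_1^2 + A_2 A_0 = 2\,(A_0 A_2 - A_1^2) = 2\Delta .
```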
Higher degrees
More generally if
is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties
What this means is that the following two vector spaces are naturally isomorphic:
The vector space of homogeneous polynomials in A0,...An of degree m
The vector space of polynomials in 2m variables a1, a2, b1, b2, c1, c2, ... that have degree n in each of the m pairs of variables (a1, a2), (b1, b2), (c1, c2), ... and are symmetric under permutations of the m symbols a, b, ....,
The isomorphism is given by mapping a1^(n−j) a2^j, b1^(n−j) b2^j, ... to Aj. This mapping does not preserve products of polynomials.
More varia |
https://en.wikipedia.org/wiki/Disassortative%20mating | Disassortative mating (also known as negative assortative mating or heterogamy) is a mating pattern in which individuals with dissimilar phenotypes mate with one another more frequently than would be expected under random mating. Disassortative mating reduces the mean genetic similarities within the population and produces a greater number of heterozygotes. The pattern is character specific, but does not affect allele frequencies. This nonrandom mating pattern will result in deviation from the Hardy-Weinberg principle (which states that genotype frequencies in a population will remain constant from generation to generation in the absence of other evolutionary influences, such as "mate choice" in this case).
Disassortative mating is different from outbreeding, which refers to mating patterns in relation to genotypes rather than phenotypes.
Due to homotypic preference (bias toward the same type), assortative mating occurs more frequently than disassortative mating, because homotypic preferences increase relatedness between mates and between parents and offspring, which promotes cooperation and increases inclusive fitness. With disassortative mating, heterotypic preference (bias toward different types) has in many cases been shown to increase overall fitness. When this preference is favored, it allows a population to generate and/or maintain polymorphism (genetic variation within a population).
The fitness advantage of disassortative mating seems straightforward, but the selective forces driving its evolution are still largely unknown in natural populations.
Types of disassortative mating
Imprinting is one example of disassortative mating. A model shows that individuals imprint on a genetically transmitted trait during early ontogeny and choosy females later use those parental images as a basis of mate choice. A viability-reducing trait may be maintained even without the fertility cost of same-type matings. |
https://en.wikipedia.org/wiki/IBM%201130 | The IBM 1130 Computing System, introduced in 1965, was IBM's least expensive computer at that time. A binary 16-bit machine, it was marketed to price-sensitive, computing-intensive technical markets, like education and engineering, succeeding the decimal IBM 1620 in that market segment. Typical installations included a 1 megabyte disk drive that stored the operating system, compilers and object programs, with program source generated and maintained on punched cards. Fortran was the most common programming language used, but several others, including APL, were available.
The 1130 was also used as an intelligent front-end for attaching an IBM 2250 Graphics Display Unit, or as remote job entry (RJE) workstation, connected to a System/360 mainframe.
Description
The total production run of the 1130 has been estimated at 10,000.
The 1130 holds a place in computing history because it (and its non-IBM clones) gave many people their first direct interaction with a computer. Its price-performance ratio was good, and it notably included inexpensive, removable disk storage, with reliable, easy-to-use software that could be written in several high-level languages. The low price (from around $32,000, or $41,000 with disk drive) and well-balanced feature set enabled interactive "open shop" program development.
The IBM 1130 uses the same electronics packaging, called Solid Logic Technology (SLT), used in System/360. It has a 16-bit binary architecture, as do later minicomputers like the PDP-11 and Data General Nova.
The address space is 15 bits, limiting the 1130 to 32,768 words of memory. The 1130 uses magnetic-core memory, which the processor addresses on word boundaries, using direct, indirect, and indexed addressing modes.
Models
IBM implemented five models of the 1131 Central Processing Unit, the primary processing component of the IBM 1130. The Model 1 through Model 5 describe the core memory cycle time, as well as the model's ability to have disk storage. A letter A through D ap |
https://en.wikipedia.org/wiki/Tint%20control | Because the NTSC color television standard relies on the absolute phase of the color information, color errors occur when the phase of the video signal is altered between source and receiver, or due to non linearities in electronics. To correct for phase errors, a tint control is provided on NTSC television sets, which allows the user to manually adjust the phase relationship between the color information in the video and the reference for decoding the color information, known as the "color burst", so that correct colors may be displayed.
The tint control is normally set by sight to create satisfactory skin tones in a picture. The range of adjustment typically allows these colors to be adjusted from a green to a magenta tint. Television sets produced in recent decades typically include a (sometimes non-defeatable) distortion of the color decoding spectrum, to minimize the visual effects of phase error and lessen the need to adjust the tint control.
On broadcast equipment, such as timebase correctors and studio monitors, this control is typically marked "phase," as it adjusts the phase of the color signal with respect to the color burst signal.
Since the problem of phase errors in the real world became well known after the introduction of NTSC, the later PAL and SECAM color television standards attempted to correct for them. PAL uses the same color modulation scheme as NTSC but averages the received color information over adjacent scan lines, resulting in reduced color detail but canceling out small to moderate phase errors. (Severe phase errors result in picture grain and loss of color saturation in the PAL scheme.) SECAM uses a different modulation scheme that does not rely on the phase of the color signal. Because of this the amplitude of the color signal (color saturation) is unaffected as well. Because SECAM only broadcasts half the color information on each line, the color resolution is halved just like in the PAL system. Most TV sets designed for these lat |
https://en.wikipedia.org/wiki/Distcc | In software development, distcc is a tool for speeding up compilation of source code by using distributed computing over a computer network. With the right configuration, distcc can dramatically reduce a project's compilation time.
It is designed to work with the C programming language (and its derivatives like C++ and Objective-C) and to use GCC as its backend, though it provides varying degrees of compatibility with the Intel C++ Compiler and Sun Microsystems' Sun Studio Compiler Suite. Distributed under the terms of the GNU General Public License, distcc is free software.
Design
distcc is designed to speed up compilation by taking advantage of unused processing power on other computers. A machine with distcc installed can send code to be compiled across the network to a computer which has the distccd daemon and a compatible compiler installed.
distcc works as an agent for the compiler. A distcc daemon has to run on each of the participating machines. The originating machine invokes a preprocessor to handle header files, preprocessing directives (such as #ifdef) and the source files and sends the preprocessed source to other machines over the network via TCP either unencrypted or using SSH. Remote machines compile those source files without any local dependencies (such as libraries, header files or macro definitions) to object files and send them back to the originator for further compilation.
distcc version 3 supports a mode (called pump mode) in which included header files are sent to the remote machines,
so that preprocessing is also distributed.
Related software
distcc was an option for distributed builds in versions of Apple's Xcode development suite prior to 4.3, but has been removed.
Goma
Goma is a similar tool made by Google to replace distcc and ccache when compiling Chromium.
Ccache
ccache is another tool aimed at reducing compilation time by caching the output produced from identical input source files. ccache can also use distcc as its backend, providing |
https://en.wikipedia.org/wiki/N%20Seoul%20Tower | The N Seoul Tower (), officially the YTN Seoul Tower and commonly known as Namsan Tower or Seoul Tower, is a communication and observation tower located on Nam Mountain in central Seoul, South Korea. The -tall tower marks the second highest point in Seoul and is considered a local landmark.
Built in 1969, the N Seoul Tower is South Korea's first general radio wave tower, providing TV and radio broadcasting in Seoul. Currently, the tower broadcasts signals for Korean media outlets, such as KBS, MBC, and SBS.
History
Built in 1969 at a cost of approximately US$2.5 million, Seoul Tower was completed on 3 December 1971 to a design by architect Jang Jong-ryul, though at the time the facility interior was not furnished. N Seoul Tower opened to the public in October 1980. Since then, the tower has been a landmark of Seoul. Tower elevation ranges from 236.7 m (777 ft) at the base to 479.7 m (1,574 ft) above sea level. Seoul Tower had its name changed to N Seoul Tower in 2005, with the "N" standing for 'new', 'Namsan', and 'nature.' Approximately 15 billion KRW was spent in renovating and remodeling the tower.
When N Seoul Tower's original owner merged with CJ Corporation, it was renamed the N Seoul Tower (official name CJ Seoul Tower). YTN acquired it from CJ Corporation in 1999, and changed its name to YTN Seoul Tower. It has also been known as the Namsan Tower or Seoul Tower. It is also Korea's first general radio wave tower that holds transmissions antennas of KBS, MBC, SBS TV, FM, PBC, TBS, CBS, and BBS FM.
N Seoul Tower, along with Changdeokgung Palace, was selected as one of the world's top 500 tourist destinations in Lonely Planet’s Ultimate Travel List, based on global travel expert evaluation and reader preference surveys.
Floors and amenities
N Seoul Tower is divided into three main sections, including the N Lobby, N Plaza, and the N Tower. The N Plaza consists of two floors, while the N Tower consists of four floors.
Lobby
Plaza P0/B1 (Lobby): Includes: |
https://en.wikipedia.org/wiki/Piconet | A piconet is an ad hoc network that links a wireless user group of devices using Bluetooth technology protocols. A piconet consists of two or more devices occupying the same physical channel (synchronized to a common clock and hopping sequence). It allows one master device to interconnect with up to seven active slave devices. Up to 255 further slave devices can be inactive, or parked, which the master device can bring into active status at any time, but an active station must go into parked first.
Some examples of piconets include a cell phone connected to a computer, a laptop and a Bluetooth-enabled digital camera, or several PDAs that are connected to each other.
Overview
A group of devices are connected via Bluetooth technology in an ad hoc fashion. A piconet starts with two connected devices, and may grow to eight connected devices. Bluetooth communication always designates one of the Bluetooth devices as a main controlling unit or master unit. Other devices that follow the master unit are slave units. This allows the Bluetooth system to be non-contention based (no collisions). This means that after a Bluetooth device has been added to the piconet, each device is assigned a specific time period to transmit and they do not collide or overlap with other units operating within the same piconet.
Piconet range varies according to the class of the Bluetooth device. Data transfer rates vary between about 200 and 2100 kilobits per second.
Because the Bluetooth system hops over 79 channels, the probability of interfering with another Bluetooth system is less than 1.5%. This allows several Bluetooth piconets to operate in the same area at the same time with minimal interference.
See also
Personal area network (PAN)
Scatternet
IEEE 802.15
Further reading
Bluetooth |
https://en.wikipedia.org/wiki/Scatternet | A scatternet is a type of ad hoc computer network consisting of two or more piconets. The terms "scatternet" and "piconet" are typically applied to Bluetooth wireless technology.
Description
A piconet is the type of connection that is formed between two or more Bluetooth-enabled devices such as modern cell phones. Bluetooth enabled devices are "peer units" in that they are able to act as either master or slave. However, when a piconet is formed between two or more devices, one device takes the role of the 'master', and all other devices assume a 'slave' role for synchronization reasons. Piconets have a 7 member address space (3 bits, with zero reserved for broadcast), which limits the maximum size of a piconet to 8 devices, i.e. 1 master and 7 slaves.
A scatternet is a number of interconnected piconets that supports communication between more than 8 devices. Scatternets can be formed when a member of one piconet (either the master or one of the slaves) elects to participate as a slave in a second, separate piconet. The device participating in both piconets can relay data between members of both ad hoc networks. However, the basic Bluetooth protocol does not support this relaying - the host software of each device would need to manage it. Using this approach, it is possible to join together numerous piconets into a large scatternet, and to expand the physical size of the network beyond Bluetooth's limited range.
Currently there are very few actual implementations of scatternets due to limitations of Bluetooth and the MAC address protocol. However, there is a growing body of research being conducted with the goal of developing algorithms to efficiently form scatternets.
Future applications
Scatternets have the potential to bring the interconnectivity of the Internet to the physical world through wireless devices. A number of companies have attempted to launch social networking and dating services that leverage early scatternet implementations (see Bluedating). Sca |
https://en.wikipedia.org/wiki/8b/10b%20encoding | In telecommunications, 8b/10b is a line code that maps 8-bit words to 10-bit symbols to achieve DC balance and bounded disparity, and at the same time provide enough state changes to allow reasonable clock recovery. This means that the difference between the counts of ones and zeros in a string of at least 20 bits is no more than two, and that there are not more than five ones or zeros in a row. This helps to reduce the demand for the lower bandwidth limit of the channel necessary to transfer the signal.
An 8b/10b code can be implemented in various ways with focus on different performance parameters. One implementation was designed by K. Odaka for the DAT digital audio recorder. Kees Schouhamer Immink designed an 8b/10b code for the DCC audio recorder. The IBM implementation was described in 1983 by Al Widmer and Peter Franaszek.
IBM implementation
As the scheme name suggests, eight bits of data are transmitted as a 10-bit entity called a symbol, or character. The low five bits of data are encoded into a 6-bit group (the 5b/6b portion) and the top three bits are encoded into a 4-bit group (the 3b/4b portion). These code groups are concatenated together to form the 10-bit symbol that is transmitted on the wire. The data symbols are often referred to as D.x.y where x ranges over 0–31 and y over 0–7. Standards using the 8b/10b encoding also define up to 12 special symbols (or control characters) that can be sent in place of a data symbol. They are often used to indicate start-of-frame, end-of-frame, link idle, skip and similar link-level conditions. At least one of them (i.e. a "comma" symbol) needs to be used to define the alignment of the 10-bit symbols. They are referred to as K.x.y and have different encodings from any of the D.x.y symbols.
Because 8b/10b encoding uses 10-bit symbols to encode 8-bit words, some of the 1024 (2^10) possible symbols can be excluded to enforce a run-length limit of 5 consecutive equal bits and to ensure the difference between |
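The two stream-level constraints described above can be checked directly. This is a small sketch that verifies the run-length and bounded-disparity properties of an already-encoded bit string; it is not an 8b/10b encoder, and the sample symbols are illustrative balanced values, not real code words from the IBM tables.

```python
def max_run_length(bits: str) -> int:
    """Longest run of consecutive equal bits."""
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def worst_window_disparity(bits: str, window: int = 20) -> int:
    """Largest |ones - zeros| over any window of the given size."""
    worst = 0
    for i in range(len(bits) - window + 1):
        chunk = bits[i:i + window]
        worst = max(worst, abs(chunk.count("1") - chunk.count("0")))
    return worst

# Two balanced 10-bit symbols concatenated (example values only)
stream = "1100101001" + "0101101001"
print(max_run_length(stream))          # -> 2 (well under the limit of 5)
print(worst_window_disparity(stream))  # -> 0 (perfectly DC balanced here)
```

A real decoder tracks the running disparity symbol by symbol and selects between the two encodings of each code group accordingly; the check above only validates the resulting stream.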
https://en.wikipedia.org/wiki/Data%20link%20connection%20identifier | A data link connection identifier (DLCI) is a Frame Relay 10-bit-wide link-local virtual circuit identifier used to assign frames to a specific PVC or SVC. Frame Relay networks use DLCIs to statistically multiplex frames. DLCIs are preloaded into each switch and act as road signs to the traveling frames.
The standard allows 1024 DLCIs. DLCI 0 is reserved for the ANSI/Q.933a LMI standard, under which only numbers 16 to 976 are usable for end-user equipment. Under Cisco LMI, DLCI 1023 is reserved and numbers 16 to 1007 are usable.
In summary, if using Cisco LMI, numbers from 16 to 1007 are available for end-user equipment. The rest are reserved for various management purposes.
DLCIs are Layer 2 addresses that are locally significant; no two devices have the same DLCI mapped to their interface within one Frame Relay cloud.
References
Link protocols
Frame Relay |
https://en.wikipedia.org/wiki/Sleep%20cycle | The sleep cycle is an oscillation between the slow-wave and REM (paradoxical) phases of sleep. It is sometimes called the ultradian sleep cycle, sleep–dream cycle, or REM-NREM cycle, to distinguish it from the circadian alternation between sleep and wakefulness. In humans, this cycle takes 70 to 110 minutes (90 ± 20 minutes).
Characteristics
Electroencephalography shows the timing of sleep cycles by virtue of the marked distinction in brainwaves manifested during REM and non-REM sleep. Delta wave activity, correlating with slow-wave (deep) sleep, in particular shows regular oscillations throughout a good night's sleep. Secretions of various hormones, including renin, growth hormone, and prolactin, correlate positively with delta-wave activity, while secretion of thyroid-stimulating hormone correlates inversely. Heart rate variability, well known to increase during REM, predictably also correlates inversely with delta-wave oscillations over the ~90-minute cycle.
To determine which sleep stage the subject is in, electroencephalography is combined with other devices used for this differentiation. EMG (electromyography) is a crucial method for distinguishing between sleep phases: for example, a decrease in muscle tone generally characterizes the transition from wake to sleep, and during REM sleep there is a state of muscle atonia (paralysis), resulting in an absence of signals in the EMG.
EOG (electrooculography), the measure of the eyes’ movement, is the third method used in the sleep architecture measurement; for example, REM sleep, as the name indicates, is characterized by a rapid eye movement pattern, visible thanks to the EOG.
Moreover, methods based on cardiorespiratory parameters are also effective in analyzing sleep architecture when combined with the other aforementioned measurements (electroencephalography, electrooculography and electromyography).
Homeostatic functions, especially thermoregulation, o |
https://en.wikipedia.org/wiki/880%20%28number%29 | 880 (eight hundred [and] eighty) is the natural number following 879 and preceding 881.
It is the number of 4-by-4 magic squares.
It is also the triple factorial of 11: 11!!! = 11 × 8 × 5 × 2 = 880.
880 is the frequency in hertz of the musical note A5.
880 is also:
The code for international direct dialing phone calls to Bangladesh
The year 880 BC or AD 880.
Interstate 880, several Interstate highways in the United States.
Dodge Custom 880, an automobile manufactured from 1962 to 1965.
References
Integers |
https://en.wikipedia.org/wiki/Quasi-arithmetic%20mean | In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function . It is also called Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean.
Definition
If f is a function which maps an interval I of the real line to the real numbers, and is both continuous and injective, the f-mean of n numbers x1, ..., xn in I
is defined as M_f(x1, ..., xn) = f⁻¹( (f(x1) + ... + f(xn)) / n ).
We require f to be injective in order for the inverse function f⁻¹ to exist. Since f is defined over an interval, (f(x1) + ... + f(xn))/n lies within the domain of f⁻¹.
Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number in the tuple x nor smaller than the smallest.
Examples
If I = ℝ, the real line, and f(x) = x (or indeed any linear function x ↦ ax + b with a not equal to 0), then the f-mean corresponds to the arithmetic mean.
If I = ℝ+, the positive real numbers, and f(x) = log(x), then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1.
If I = ℝ+ and f(x) = 1/x, then the f-mean corresponds to the harmonic mean.
If I = ℝ+ and f(x) = x^p, then the f-mean corresponds to the power mean with exponent p.
If I = ℝ and f(x) = exp(x), then the f-mean is the mean in the log semiring, which is a constant-shifted version of the LogSumExp (LSE) function (the logarithmic sum): M_f(x1, ..., xn) = LSE(x1, ..., xn) − log(n). The −log(n) corresponds to dividing by n, since logarithmic division is linear subtraction. The LogSumExp function is a smooth maximum: a smooth approximation to the maximum function.
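The definition lends itself to a direct sketch: apply f to each number, take the ordinary arithmetic mean, and apply the inverse. The function pairs below are the standard choices from the examples above; the helper name `f_mean` is ours.

```python
import math

def f_mean(xs, f, f_inv):
    """Quasi-arithmetic mean: f_inv of the arithmetic mean of f(x)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

xs = [2.0, 8.0]
arithmetic = f_mean(xs, lambda x: x, lambda y: y)            # 5.0
geometric  = f_mean(xs, math.log, math.exp)                  # 4.0
harmonic   = f_mean(xs, lambda x: 1 / x, lambda y: 1 / y)    # 3.2
power3     = f_mean(xs, lambda x: x**3, lambda y: y**(1/3))  # cube-root mean
print(arithmetic, geometric, harmonic)
```

Note that f must be injective on the chosen interval: for the harmonic and geometric means the inputs must be positive, matching the I = ℝ+ restriction in the text.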
Properties
The following properties hold for M_f for any single function f:
Symmetry: The value of M_f is unchanged if its arguments are permuted.
Idempotency: for all x, M_f(x, ..., x) = x.
Monotonicity: is monotonic in eac |
https://en.wikipedia.org/wiki/UnixWare | UnixWare is a Unix operating system. It was originally released by Univel, a jointly owned venture of AT&T's Unix System Laboratories (USL) and Novell. It was then taken over by Novell. Via Santa Cruz Operation (SCO), it went on to Caldera Systems, Caldera International, and The SCO Group before it was sold to UnXis (now Xinuos). UnixWare is typically deployed as a server rather than a desktop. Binary distributions of UnixWare are available for x86 architecture computers. UnixWare is primarily marketed as a server operating system.
History
Univel (1991–1993)
After the SVR4 effort to merge SunOS and System V, AT&T's Unix System Laboratories (USL) formed the Univel partnership with Novell to develop a desktop version of Unix for i386 and i486 machines, codenamed "Destiny".
Destiny is based on the Unix System V release 4.2 kernel. The MoOLIT toolkit is used for the windowing system, allowing the user to choose between an OPEN LOOK or MOTIF-like look and feel at runtime. In order to make the system more robust on commodity desktop hardware, the Veritas VXFS journaling file system is used in place of the UFS file system used in SVR4. Networking support in UnixWare includes both TCP/IP and interoperability with Novell's NetWare protocols (IPX/SPX); the former were the standard among Unix users at the time of development, while PC networking was much more commonly based on NetWare.
Destiny was released in 1992 as UnixWare 1.0, with the intention of unifying the fragmented PC Unix market behind this single variant of the operating system. The system was earlier to reach the corporate computing market than Microsoft's Windows NT, but observers of the period remarked that UnixWare was "just another flavor of Unix", Novell's involvement being more a marketing ploy than a significant influx of technology. There were two editions of Destiny: a Personal Edition, which includes Novell IPX networking but not TCP/IP, and an Advanced Server Edition with TCP/IP and other server soft |
https://en.wikipedia.org/wiki/Hashcash | Hashcash is a proof-of-work system used to limit E-mail spam and denial-of-service attacks. Hashcash was proposed in 1997 by Adam Back and described more formally in Back's 2002 paper "Hashcash - A Denial of Service Counter-Measure".
Background
The idea "...to require a user to compute a moderately hard, but not intractable function..." was proposed by Cynthia Dwork and Moni Naor in their 1992 paper "Pricing via Processing or Combatting Junk Mail".
How it works
Hashcash is a cryptographic hash-based proof-of-work algorithm that requires a selectable amount of work to compute, but the proof can be verified efficiently. For email uses, a textual encoding of a hashcash stamp is added to the header of an email to prove the sender has expended a modest amount of CPU time calculating the stamp prior to sending the email. In other words, as the sender has taken a certain amount of time to generate the stamp and send the email, it is unlikely that they are a spammer. The receiver can, at negligible computational cost, verify that the stamp is valid. However, the only known way to find a header with the necessary properties is brute force, trying random values until the answer is found; though testing an individual string is easy, satisfactory answers are rare enough that it will require a substantial number of tries to find the answer.
The hypothesis is that spammers, whose business model relies on their ability to send large numbers of emails with very little cost per message, will cease to be profitable if there is even a small cost for each spam they send. Receivers can verify whether a sender made such an investment and use the results to help filter email.
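The asymmetry described above (expensive to mint, cheap to verify) can be illustrated with a toy proof-of-work in the spirit of Hashcash: search for a counter such that the SHA-1 hash of the stamp has a given number of leading zero bits. Real Hashcash stamps use the full X-Hashcash header format (version, bits, date, resource, random salt, counter) and typically 20 bits; the simplified stamp format and helper names here are ours, and 12 bits keeps the demo fast.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a hash digest."""
    value = int.from_bytes(digest, "big")
    return len(digest) * 8 - value.bit_length()

def mint(resource: str, bits: int = 12) -> str:
    """Brute-force a counter until the stamp hashes with enough zero bits."""
    counter = 0
    while True:
        stamp = f"1:{bits}:{resource}:{counter}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp
        counter += 1

def verify(stamp: str) -> bool:
    """A single hash suffices to check a stamp's claimed difficulty."""
    bits = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits

stamp = mint("anni@cypherspace.org")  # ~2**12 hashes on average (the cost)
assert verify(stamp)                  # one hash (negligible for the receiver)
print(stamp)
```

Doubling the bit count doubles the expected minting work, which is how the difficulty is made selectable while verification cost stays constant.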
Technical details
The header line looks something like this:
X-Hashcash: 1:20:1303030600:anni@cypherspace.org::McMybZIhxKXu57jd:ckvi
The header contains:
ver: Hashcash format version, 1 (which supersedes version 0).
bits: Number of "partial pre-image" (zero) bits in the hashed code.
date: The time that the |
https://en.wikipedia.org/wiki/Biodiversity%20hotspot | A biodiversity hotspot is a biogeographic region with significant levels of biodiversity that is threatened by human habitation. Norman Myers wrote about the concept in two articles in The Environmentalist in 1988 and 1990, after which the concept was revised following thorough analysis by Myers and others into “Hotspots: Earth’s Biologically Richest and Most Endangered Terrestrial Ecoregions” and a paper published in the journal Nature, both in 2000.
To qualify as a biodiversity hotspot on Myers' 2000 edition of the hotspot map, a region must meet two strict criteria: it must contain at least 1,500 species of vascular plants (more than 0.5% of the world's total) as endemics, and it has to have lost at least 70% of its primary vegetation. Globally, 36 zones qualify under this definition. These sites support nearly 60% of the world's plant, bird, mammal, reptile, and amphibian species, with a high share of those species as endemics. Some of these hotspots support up to 15,000 endemic plant species, and some have lost up to 95% of their natural habitat.
Biodiversity hotspots host their diverse ecosystems on just 2.4% of the planet's surface. Ten hotspots were originally identified by Myers; the current 36 once covered more than 15.7% of all land but have lost around 85% of their area. This loss of habitat is why approximately 60% of the world's terrestrial life lives on only 2.4% of the land surface area. Caribbean islands like Haiti and Jamaica are facing serious pressures on the populations of endemic plants and vertebrates as a result of rapid deforestation. Other areas include the Tropical Andes, Philippines, Mesoamerica, and Sundaland, which, under the current levels at which deforestation is occurring, will likely lose most of their plant and vertebrate species.
Hotspot conservation initiatives
Only a small percentage of the total land area within biodiversity hotspots is now protected. Several international organizations are working to conserve biodiver |
https://en.wikipedia.org/wiki/Mental%20calculation | Mental calculation consists of arithmetical calculations using only the human brain, with no help from any supplies (such as pencil and paper) or devices such as a calculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation (such as conventional educational institution methods), or even in a competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. People with unusually high ability to perform mental calculations are called mental calculators or lightning calculators.
Many of these techniques take advantage of or rely on the decimal numeral system. Usually, the choice of radix is what determines which method or methods to use.
Methods and techniques
Casting out nines
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result:
Sum the digits of the first operand; any 9s (or sets of digits that add to 9) can be counted as 0.
If the resulting sum has two or more digits, sum those digits as in step one; repeat this step until the resulting sum has only one digit.
Repeat steps one and two with the second operand. At this point there are two single-digit numbers, the first derived from the first operand and the second derived from the second operand.
Apply the originally specified operation to the two condensed operands, and then apply the summing-of-digits procedure to the result of the operation.
Sum the digits of the result that were originally obtained for the original calculation.
If the result of step 4 does not equal the result of step 5, then the original answer is wrong. If the two results match, then the original answer may be right, though it is not guaranteed to be.
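The steps above can be sketched directly. Repeatedly summing digits (with 9 counted as 0) computes a "digit root", and the check compares the operation applied to the operands' digit roots against the digit root of the claimed result; the helper names are ours.

```python
def digit_root(n: int) -> int:
    """Repeated digit sum, with 9 reduced to 0 (casting out nines)."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return 0 if n == 9 else n

def check_product(a: int, b: int, claimed: int) -> bool:
    """A mismatch proves the answer wrong; a match does not prove it right."""
    return digit_root(digit_root(a) * digit_root(b)) == digit_root(claimed)

print(check_product(6338, 79, 500702))  # True  (6,338 x 79 really is 500,702)
print(check_product(6338, 79, 500712))  # False (error caught by the check)
```

The second call shows the limitation mentioned in step 6 in reverse: an error that happens to preserve the digit root (such as swapping two digits) would slip through, so a match only raises confidence.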
Example
Assume the calculation 6,338 × 79, manually done, yielded a result of 500,702:
Sum the digits of 6,338: (6 + 3 = 9, so count tha |