https://en.wikipedia.org/wiki/NetBeans
NetBeans is an integrated development environment (IDE) for Java. NetBeans allows applications to be developed from a set of modular software components called modules. NetBeans runs on Windows, macOS, Linux and Solaris. In addition to Java development, it has extensions for other languages like PHP, C, C++, HTML5, and JavaScript. Applications based on NetBeans, including the NetBeans IDE, can be extended by third party developers. History NetBeans began in 1996 as Xelfi (word play on Delphi), a Java IDE student project under the guidance of the Faculty of Mathematics and Physics at Charles University in Prague. In 1997, Roman Staněk formed a company around the project and produced commercial versions of the NetBeans IDE until it was bought by Sun Microsystems in 1999. Sun open-sourced the NetBeans IDE in June of the following year. Since then, the NetBeans community has continued to grow. In 2010, Sun (and thus NetBeans) was acquired by Oracle Corporation. Under Oracle, NetBeans had to find some synergy with JDeveloper, a freeware IDE that has historically been a product of the company; by 2012 both IDEs had been rebuilt around a shared codebase - the NetBeans Platform. In September 2016, Oracle submitted a proposal to donate the NetBeans project to The Apache Software Foundation, stating that it was "opening up the NetBeans governance model to give NetBeans constituents a greater voice in the project's direction and future success through the upcoming release of Java 9 and NetBeans 9 and beyond". The move was endorsed by Java creator James Gosling. The project entered the Apache Incubator in October 2016. NetBeans IDE NetBeans IDE is an open-source integrated development environment. NetBeans IDE supports development of all Java application types (Java SE (including JavaFX), Java ME, web, EJB and mobile applications) out of the box. Among other features are an Ant-based project system, Maven support, refactorings, version control (supporting CVS, Subversion, Git, M
https://en.wikipedia.org/wiki/Stateful%20firewall
In computing, a stateful firewall is a network-based firewall that individually tracks sessions of network connections traversing it. Stateful packet inspection, also referred to as dynamic packet filtering, is a security feature often used in non-commercial and business networks. Description A stateful firewall keeps track of the state of network connections, such as TCP streams, UDP datagrams, and ICMP messages, and can apply labels such as LISTEN, ESTABLISHED, or CLOSING. State table entries are created for TCP streams or UDP datagrams that are allowed to communicate through the firewall in accordance with the configured security policy. Once in the table, all RELATED packets of a stored session are streamlined, taking fewer CPU cycles than standard inspection. Related packets are also permitted to return through the firewall even if no rule is configured to allow communications from that host. If no traffic is seen for a specified time (time-out implementation dependent), the connection is removed from the state table. This can lead to applications experiencing unexpected disconnects or half-open TCP connections. Applications can be written to send keepalive messages periodically to prevent a firewall from dropping the connection during periods of no activity or for applications which by design have long periods of silence. The method of maintaining a session's state depends on the transport protocol being used. TCP is a connection-oriented protocol and sessions are established with a three-way handshake using SYN packets and ended by sending a FIN notification. The firewall can use these unique connection identifiers to know when to remove a session from the state table without waiting for a timeout. UDP is a connectionless protocol, which means it does not send unique connection-related identifiers while communicating. Because of that, a session will only be removed from the state table after the configured time-out. UDP hole punching is a technology that l
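A minimal sketch, in Python, of the state-table idea described above: connection entries keyed by the 5-tuple, return traffic matched against the reverse key, and idle entries expired after a timeout. The field names, the single policy flag and the 60-second timeout are illustrative assumptions, not the behaviour of any particular firewall.

```python
import time

IDLE_TIMEOUT = 60.0  # seconds; real firewalls use protocol-dependent, configurable values

state_table = {}  # (proto, src_ip, src_port, dst_ip, dst_port) -> last-seen timestamp

def allow(packet, policy_permits_new):
    """Return True if the packet may pass the (toy) firewall."""
    key = (packet["proto"], packet["src"], packet["sport"], packet["dst"], packet["dport"])
    reverse = (packet["proto"], packet["dst"], packet["dport"], packet["src"], packet["sport"])
    now = time.time()

    # Expire entries that have been idle longer than the timeout.
    for k, last_seen in list(state_table.items()):
        if now - last_seen > IDLE_TIMEOUT:
            del state_table[k]

    if key in state_table or reverse in state_table:
        # Packet belongs to (or returns for) a stored session: fast path, no rule lookup.
        state_table[key if key in state_table else reverse] = now
        return True

    if policy_permits_new:
        # New session allowed by the configured policy: create a state entry.
        state_table[key] = now
        return True
    return False

pkt = {"proto": "tcp", "src": "10.0.0.2", "sport": 51334, "dst": "93.184.216.34", "dport": 443}
print(allow(pkt, policy_permits_new=True))   # True: a new session is recorded in the table
```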
https://en.wikipedia.org/wiki/Emergent%20algorithm
An emergent algorithm is an algorithm that exhibits emergent behavior. In essence, an emergent algorithm implements a set of simple building-block behaviors that, when combined, exhibit more complex behaviors. One example of this is the implementation of fuzzy motion controllers used to adapt robot movement in response to environmental obstacles. An emergent algorithm has the following characteristics: it achieves predictable global effects; it does not require global visibility; it does not assume any kind of centralized control; and it is self-stabilizing. Other examples of emergent algorithms and models include cellular automata, artificial neural networks and swarm intelligence systems (ant colony optimization, bees algorithm, etc.). See also AI alignment Artificial intelligence detection software Emergence Evolutionary computation Fuzzy logic Genetic algorithm Heuristic References Algorithm Heuristic algorithms Algorithms Artificial intelligence Cybernetics
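An elementary cellular automaton is one of the emergent models named above, and it fits the listed characteristics: each cell applies the same simple local rule to itself and its two neighbours, with no global visibility or central control, yet complex global structure emerges. A small illustrative sketch (not taken from the article):

```python
# Elementary cellular automaton; rule 110 is known for complex emergent behaviour.
RULE = 110  # Wolfram rule number

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood as a number 0..7
        out.append((RULE >> pattern) & 1)               # look up the corresponding rule bit
    return out

cells = [0] * 40 + [1] + [0] * 40   # single live cell in the middle
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```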
https://en.wikipedia.org/wiki/Piero%20della%20Francesca
Piero della Francesca (died 12 October 1492) was an Italian painter of the Early Renaissance. To contemporaries he was also known as a mathematician and geometer. Nowadays Piero della Francesca is chiefly appreciated for his art. His painting is characterized by its serene humanism, its use of geometric forms and perspective. His most famous work is the cycle of frescoes The History of the True Cross in the church of San Francesco in the Tuscan town of Arezzo. Biography Early years Piero was born Piero di Benedetto in the town of Borgo Santo Sepolcro, modern-day Tuscany, to Benedetto de' Franceschi, a tradesman, and Romana di Perino da Monterchi, members of the Florentine and Tuscan Franceschi noble family. His father died before his birth, and he was called Piero della Francesca after his mother, who was referred to as "la Francesca" due to her marriage into the Franceschi family (similar to Lisa Gherardini, who was known as "la Gioconda" through her marriage into the Giocondo family). Romana supported his education in mathematics and art. He was most probably apprenticed to the local painter Antonio di Giovanni d'Anghiari, because in documents about payments it is noted that he was working with Antonio in 1432 and May 1438. He certainly took notice of the work of some of the Sienese artists active in San Sepolcro during his youth; e.g. Sassetta. In 1439 Piero received, together with Domenico Veneziano, payments for his work on frescoes for the church of Sant'Egidio in Florence, now lost. In Florence he must have met leading masters like Fra Angelico, Luca della Robbia, Donatello, and Brunelleschi. The classicism of Masaccio's frescoes and his majestic figures in the Santa Maria del Carmine were for him an important source of inspiration. Dating of Piero's undocumented work is difficult because his style does not seem to have developed over the years. Mature work Piero returned to his hometown in 1442 and was elected to the City Council of Sansepolcro. Thr
https://en.wikipedia.org/wiki/Write%20protection
Write protection is any physical mechanism that prevents writing, modifying, or erasing data on a device. Most commercial software, audio and video on writeable media is write-protected when distributed. Examples IBM ½-inch magnetic tape reels, introduced in the 1950s, had a circular groove on one side of the reel, into which a soft plastic ring had to be placed in order to write on the tape. ("No ring, no write.") Audio cassettes and VHS videocassettes have tabs on the top/rear edge that can be broken off (uncovered = protected). 8 and 5¼-inch floppies can have, respectively, write-protect and write-enable notches on the right side (8-inch punched = protected; 5¼-inch covered/notch not present = protected). A common practice with single-sided floppies was to punch a second notch on the opposite side of the disk to enable use of both sides of the media, creating a flippy disk, so called because one originally had to flip the disk over to use the other side. 3½-inch floppy disks have a sliding tab in a window on the right side (open = protected). Iomega Zip disks were write-protected using the IomegaWare software. Syquest EZ-drive (135 & 250 MB) disks were write-protected using a small metal switch on the rear of the disk at the bottom. VHS-C, Video8, Hi8, and DV videocassettes have a sliding tab on the rear edge. Iomega ditto tape cartridges had a small sliding tab on the top left hand corner of the front face of the cartridge. USB flash drives sometimes have a small switch, though this has become uncommon. An example of a USB flash drive that supported write protection via a switch is the Transcend JetFlash series. Secure Digital (SD) cards have a write-protect tab on the left side. By extension, there are media that, by design, cannot operate outside of this mode: CD-R, DVD-R, vinyl records, etc. These mechanisms are intended to prevent only accidental data loss or attacks by computer viruses. A determined user can easily circumvent them either by covering a
https://en.wikipedia.org/wiki/Stress%20testing%20%28computing%29
In computing, stress testing (sometimes called torture testing) can be applied to either hardware or software. It is used to determine the maximum capability of a computer system and is often used for purposes such as scaling for production use and ensuring reliability and stability. Stress tests typically involve running a large amount of resource-intensive processes until the system either crashes or nearly does so. Hardware Software See also Burn-in Destructive testing Load and performance test tools Black box testing Load testing Software performance testing Scenario analysis Simulation Software testing White box testing Technischer Überwachungsverein (TÜV) – product testing and certification Concurrency testing using the CHESS model checker Jinx automates stress testing by automatically exploring unlikely execution scenarios. Highly accelerated life test References Software testing Product testing
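A hedged sketch of the idea described above, spawning one CPU-bound worker per core for a fixed period; it only illustrates the "resource-intensive processes" notion and is not a substitute for dedicated stress-testing tools. The duration and the busy-work loop are arbitrary choices.

```python
import multiprocessing as mp
import time

def burn(stop_at):
    # Arbitrary busy work to keep one core fully loaded until the deadline.
    x = 0
    while time.time() < stop_at:
        x = (x * 31 + 7) % 1_000_003
    return x

if __name__ == "__main__":
    duration = 10  # seconds; real stress tests run far longer
    stop_at = time.time() + duration
    with mp.Pool(mp.cpu_count()) as pool:          # one worker per logical CPU
        pool.map(burn, [stop_at] * mp.cpu_count())
    print("stress run finished")
```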
https://en.wikipedia.org/wiki/Somatic%20%28biology%29
In cellular biology, the term somatic (derived from the French somatique, which comes from Ancient Greek σωματικός, sōmatikós, “bodily”, from σῶμα, sôma, “body”) is often used to refer to the cells of the body, in contrast to the reproductive (germline) cells, which usually give rise to the egg or sperm (or other gametes in other organisms). These somatic cells are diploid, containing two copies of each chromosome, whereas germ cells are haploid, as they only contain one copy of each chromosome (in preparation for fertilisation). Although under normal circumstances all somatic cells in an organism contain identical DNA, they develop a variety of tissue-specific characteristics. This process, called differentiation, occurs through epigenetic and regulatory alterations. The grouping of similar cells and tissues creates the foundation for organs. Somatic mutations are changes to the genetics of a multicellular organism that are not passed on to its offspring through the germline. Most cancers are due to somatic mutations. Somatic is also defined as relating to the wall of the body cavity, particularly as distinguished from the head, limbs, or viscera. It is also used in the term somatic nervous system, which is the portion of the vertebrate nervous system that regulates voluntary movements of the body. Mutation frequency The frequency of mutations in mouse somatic tissue (brain, liver, Sertoli cells) was compared to the mutation frequency in male germline cells at sequential stages of spermatogenesis. The spontaneous mutation frequency was found to be significantly higher (5 to 10-fold) in the somatic cell types than in the male germline cells. In female mice, somatic cells were also found to have a higher mutation frequency than germline cells. It was suggested that elevated levels of DNA repair enzymes play a prominent role in the lower mutation frequency of male and female germline cells, and that enhanced genetic integrity is a fundamental characteristic of germli
https://en.wikipedia.org/wiki/OBject%20EXchange
OBEX (abbreviation of OBject EXchange, also termed IrOBEX) is a communication protocol that facilitates the exchange of binary objects between devices. It is maintained by the Infrared Data Association but has also been adopted by the Bluetooth Special Interest Group and the SyncML wing of the Open Mobile Alliance (OMA). One of OBEX's earliest popular applications was in the Palm III. This PDA and its many successors use OBEX to exchange business cards, data, even applications. Although OBEX was initially designed for infrared, it has now been adopted by Bluetooth, and is also used over RS-232, USB, WAP and in devices such as Livescribe smartpens. Comparison to HTTP OBEX is similar in design and function to HTTP in providing the client with a reliable transport for connecting to a server and may then request or provide objects. But OBEX differs in many important respects: HTTP is normally layered above a TCP/IP link. OBEX can also be, but is commonly implemented on an IrLAP/IrLMP/Tiny TP stack on an IrDA device. In Bluetooth, OBEX is implemented on a Baseband/ACL/L2CAP (and, for legacy uses, RFCOMM) stack. Other such "bindings" of OBEX are possible, such as over USB. HTTP uses human-readable text, but OBEX uses binary-formatted type–length–value triplets named "Headers" to exchange information about a request or an object. These are much easier to parse by resource-limited devices. HTTP transactions are inherently stateless; generally an HTTP client opens a connection, makes a single request, receives its response, and either closes the connection or makes other unrelated requests. In OBEX, a single transport connection may bear many related operations. In fact, recent additions to the OBEX specification allow an abruptly closed transaction to be resumed with all state information intact. Objects OBEX works by exchanging objects, which are used for a variety of purposes: establishing the parameters of a connection, sending and requesting data, changing the curre
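A small Python sketch of the binary type–length–value ("Header") idea mentioned above. The layout shown (a 1-byte header identifier followed by a 2-byte big-endian length that covers the identifier and length bytes themselves) and the Name/Body identifier values follow the general OBEX convention for byte-sequence headers, but treat them as assumptions for illustration rather than a reference implementation of the specification.

```python
import struct

NAME = 0x01   # assumed ID for a byte-sequence "Name" header (Unicode string)
BODY = 0x48   # assumed ID for a byte-sequence "Body" header (object data)

def encode_header(header_id: int, value: bytes) -> bytes:
    length = 3 + len(value)                  # ID byte + 2 length bytes + payload
    return struct.pack(">BH", header_id, length) + value

def decode_headers(buf: bytes):
    offset, headers = 0, []
    while offset < len(buf):
        header_id, length = struct.unpack_from(">BH", buf, offset)
        headers.append((header_id, buf[offset + 3:offset + length]))
        offset += length
    return headers

packet = (encode_header(NAME, "card.vcf".encode("utf-16-be"))
          + encode_header(BODY, b"BEGIN:VCARD..."))
print(decode_headers(packet))   # binary triplets are trivial to parse on small devices
```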
https://en.wikipedia.org/wiki/Harmonic%20number
In mathematics, the n-th harmonic number is the sum of the reciprocals of the first n natural numbers: Hn = 1 + 1/2 + 1/3 + ⋯ + 1/n. Starting from n = 1, the sequence of harmonic numbers begins: 1, 3/2, 11/6, 25/12, 137/60, ... Harmonic numbers are related to the harmonic mean in that the n-th harmonic number is also n times the reciprocal of the harmonic mean of the first n positive integers. Harmonic numbers have been studied since antiquity and are important in various branches of number theory. They are sometimes loosely termed harmonic series, are closely related to the Riemann zeta function, and appear in the expressions of various special functions. The harmonic numbers roughly approximate the natural logarithm function and thus the associated harmonic series grows without limit, albeit slowly. In 1737, Leonhard Euler used the divergence of the harmonic series to provide a new proof of the infinity of prime numbers. His work was extended into the complex plane by Bernhard Riemann in 1859, leading directly to the celebrated Riemann hypothesis about the distribution of prime numbers. When the value of a large quantity of items has a Zipf's law distribution, the total value of the n most-valuable items is proportional to the n-th harmonic number. This leads to a variety of surprising conclusions regarding the long tail and the theory of network value. The Bertrand–Chebyshev theorem implies that, except for the case n = 1, the harmonic numbers are never integers. Identities involving harmonic numbers By definition, the harmonic numbers satisfy the recurrence relation Hn = Hn−1 + 1/n. The harmonic numbers are connected to the Stirling numbers of the first kind by the relation [n+1, 2] = n!·Hn, where [n+1, 2] denotes an unsigned Stirling number of the first kind. The functions fn(x) = (x^n/n!)(ln x − Hn) satisfy the property fn′(x) = fn−1(x). In particular f1(x) = x(ln x − 1) is an integral of the logarithmic function. The harmonic numbers satisfy the series identities ∑k=1..n Hk = (n + 1)Hn − n and ∑k=1..n Hk² = (n + 1)Hn² − (2n + 1)Hn + 2n. These two results are closely analogous to the corresponding integral results ∫0..x ln y dy = x ln x − x and ∫0..x (ln y)² dy = x(ln x)² − 2x ln x + 2x. Identities involving π There are several infinite summations involving harmonic numbers and powers of π. Calculation An integral representation
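A quick numerical illustration of the definition and of the logarithmic growth noted above, comparing Hn with ln(n) + γ (γ being the Euler–Mascheroni constant); a sketch only, using the direct summation rather than any clever evaluation scheme.

```python
from math import log

GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

def harmonic(n: int) -> float:
    # n-th harmonic number straight from the definition: sum of 1/k for k = 1..n
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, harmonic(n), log(n) + GAMMA)   # the two columns agree to roughly 1/(2n)
```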
https://en.wikipedia.org/wiki/Porting
In software engineering, porting is the process of adapting software for the purpose of achieving some form of execution in a computing environment that is different from the one that a given program (meant for such execution) was originally designed for (e.g., different CPU, operating system, or third party library). The term is also used when software/hardware is changed to make them usable in different environments. Software is portable when the cost of porting it to a new platform is significantly less than the cost of writing it from scratch. The lower the cost of porting software relative to its implementation cost, the more portable it is said to be. Etymology The term "port" is derived from the Latin portāre, meaning "to carry". When code is not compatible with a particular operating system or architecture, the code must be "carried" to the new system. The term is not generally applied to the process of adapting software to run with less memory on the same CPU and operating system. Software developers often claim that the software they write is portable, meaning that little effort is needed to adapt it to a new environment. The amount of effort actually needed depends on several factors, including the extent to which the original environment (the source platform) differs from the new environment (the target platform), the experience of the original authors in knowing which programming language constructs and third party library calls are unlikely to be portable, and the amount of effort invested by the original authors in only using portable constructs (platform specific constructs often provide a cheaper solution). History The number of significantly different CPUs and operating systems used on the desktop today is much smaller than in the past. The dominance of the x86 architecture means that most desktop software is never ported to a different CPU. In that same market, the choice of operating systems has effectively been reduced to three: Micro
https://en.wikipedia.org/wiki/X86%20assembly%20language
x86 assembly language is the name for the family of assembly languages which provide some level of backward compatibility with CPUs back to the Intel 8008 microprocessor, which was launched in April 1972. It is used to produce object code for the x86 class of processors. Regarded as a programming language, assembly is machine-specific and low-level. Like all assembly languages, x86 assembly uses mnemonics to represent fundamental CPU instructions, or machine code. Assembly languages are most often used for detailed and time-critical applications such as small real-time embedded systems, operating-system kernels, and device drivers, but can also be used for other applications. A compiler will sometimes produce assembly code as an intermediate step when translating a high-level program into machine code. Keyword Reserved keywords of x86 assembly language Mnemonics and opcodes Each x86 assembly instruction is represented by a mnemonic which, often combined with one or more operands, translates to one or more bytes called an opcode; the NOP instruction translates to 0x90, for instance, and the HLT instruction translates to 0xF4. There are potential opcodes with no documented mnemonic which different processors may interpret differently, making a program using them behave inconsistently or even generate an exception on some processors. These opcodes often turn up in code writing competitions as a way to make the code smaller, faster, more elegant or just show off the author's prowess. Syntax x86 assembly language has two main syntax branches: Intel syntax and AT&T syntax. Intel syntax is dominant in the DOS and Windows world, and AT&T syntax is dominant in the Unix world, since Unix was created at AT&T Bell Labs. Here is a summary of the main differences between Intel syntax and AT&T syntax: Many x86 assemblers use Intel syntax, including FASM, MASM, NASM, TASM, and YASM. GAS, which originally used AT&T syntax, has supported both syntaxes since version 2.10 via t
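A toy illustration in Python of the mnemonic-to-opcode mapping described above. The opcode values used (NOP 0x90, HLT 0xF4, RET 0xC3) are standard single-byte x86 opcodes, but this is only a sketch of the idea; a real assembler must also encode operands, prefixes and multi-byte opcodes, and handle both syntax branches.

```python
# Minimal "assembler": translate a few one-byte mnemonics into machine code bytes.
OPCODES = {"NOP": 0x90, "HLT": 0xF4, "RET": 0xC3}

def assemble(mnemonics):
    return bytes(OPCODES[m.upper()] for m in mnemonics)

print(assemble(["nop", "nop", "hlt"]).hex())   # -> 9090f4
```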
https://en.wikipedia.org/wiki/Disodium%20inosinate
Disodium inosinate (E631) is the disodium salt of inosinic acid with the chemical formula C10H11N4Na2O8P. It is used as a food additive and often found in instant noodles, potato chips, and a variety of other snacks. Commercial disodium inosinate may either be obtained from bacterial fermentation of sugars or prepared from animal products. The Vegetarian Society reports that production from meat or fish is more widespread, but the Vegetarian Resource Group reports that all three "leading manufacturers" claim to use fermentation. Use as a food additive Disodium inosinate is used as a flavor enhancer, in synergy with monosodium glutamate (MSG) to provide the umami taste. It is often added to foods in conjunction with disodium guanylate; the combination is known as disodium 5′-ribonucleotides. As a relatively expensive product, disodium inosinate is usually not used independently of glutamic acid; if disodium inosinate is present in a list of ingredients, but MSG does not appear to be, it is possible that glutamic acid is provided as part of another ingredient or is naturally occurring in another ingredient like tomatoes, Parmesan cheese, or yeast extract. Origin Inosinate is naturally found in meat and fish at levels of 80–800 mg/100 g. It can also be made by fermentation of sugars such as tapioca starch. Some sources claim that industrial levels of production are achieved by extraction from animal products, making E631 non-vegetarian. However, an interview by the Vegetarian Resource Group reports that all three "leading manufacturers" (one being Ajinomoto) claims to use an all-vegetarian fermentation process. Producers are generally open to providing information on the origin. E631 is in some cases labeled as "vegetarian" in ingredients lists when produced from plant sources. Toxicology and safety In the United States, consumption of added 5′-ribonucleotides averages 4 mg per day, compared to 2 g per day of naturally occurring purines. A review of literature
https://en.wikipedia.org/wiki/Repunit
In recreational mathematics, a repunit is a number like 11, 111, or 1111 that contains only the digit 1 — a more specific type of repdigit. The term stands for "repeated unit" and was coined in 1966 by Albert H. Beiler in his book Recreations in the Theory of Numbers. A repunit prime is a repunit that is also a prime number. Primes that are repunits in base-2 are Mersenne primes. As of May 2023, the largest known prime number (2^82,589,933 − 1), the largest probable prime R8177207 and the largest elliptic curve primality-proven prime R86453 are all repunits in various bases. Definition The base-b repunits are defined as Rn(b) = (b^n − 1)/(b − 1) (this b can be either positive or negative). Thus, the number Rn(b) consists of n copies of the digit 1 in base-b representation. The first two repunits base-b, for n = 1 and n = 2, are R1(b) = 1 and R2(b) = b + 1. In particular, the decimal (base-10) repunits that are often referred to as simply repunits are defined as Rn = Rn(10) = (10^n − 1)/9. Thus, the number Rn = Rn(10) consists of n copies of the digit 1 in base 10 representation. The sequence of repunits base-10 starts with 1, 11, 111, 1111, 11111, 111111, ... . Similarly, the repunits base-2 are defined as Rn(2) = (2^n − 1)/(2 − 1) = 2^n − 1. Thus, the number Rn(2) consists of n copies of the digit 1 in base-2 representation. In fact, the base-2 repunits are the well-known Mersenne numbers Mn = 2^n − 1; they start with 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023, 2047, 4095, 8191, 16383, 32767, 65535, ... . Properties Any repunit in any base having a composite number of digits is necessarily composite. Only repunits (in any base) having a prime number of digits might be prime. This is a necessary but not sufficient condition. For example, R35(b) = 11111 × 1000010000100001000010000100001 = 1111111 × 10000001000000100000010000001, since 35 = 7 × 5 = 5 × 7. This repunit factorization does not depend on the base-b in which the repunit is expressed. If p is an odd prime, then every prime q that divides Rp(b) must be either 1 plus a multiple of 2p, or a factor of b − 1. For example, a prime f
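A short sketch generating base-b repunits from the closed form Rn(b) = (b^n − 1)/(b − 1) and checking small decimal repunits for primality; trial division is used only because the example numbers are small, and the range is kept short deliberately.

```python
def repunit(n: int, b: int = 10) -> int:
    # Closed form: n digits of 1 in base b.
    return (b**n - 1) // (b - 1)

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(2, 13):
    if is_prime(repunit(n, 10)):
        print(f"R{n} = {repunit(n, 10)} is prime")
# Only R2 = 11 appears up to n = 12; the next decimal repunit prime is R19.
```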
https://en.wikipedia.org/wiki/111%20%28number%29
111 (One hundred [and] eleven) is the natural number following 110 and preceding 112. In mathematics 111 is a perfect totient number. 111 is R3 or the second repunit, a number like 11, 111, or 1111 that consists of repeated units, or 1's. It equals 3 × 37, therefore all triplets (numbers like 222 or 777) in base ten are of the form 3n × 37. As a repunit, it also follows that 111 is a palindromic number. All triplets in all bases are multiples of 111 in that base, therefore the number represented by 111 in a particular base is the only triplet that can ever be prime. 111 is not prime in base ten, but is prime in base two, where 111₂ = 7₁₀. It is also prime in these other bases up to 128: 3, 5, 6, 8, 12, 14, 15, 17, 20, 21, 24, 27, 33, 38, 41, 50, 54, 57, 59, 62, 66, 69, 71, 75, 77, 78, 80, 89, 90, 99, 101, 105, 110, 111, 117, 119 In base 18, the number 111 is 7³ (= 343₁₀), which is the only base where 111 is a perfect power. The smallest magic square using only 1 and prime numbers has a magic constant of 111: A six-by-six magic square using the numbers 1 to 36 also has a magic constant of 111: (The square has this magic constant because 1 + 2 + 3 + ... + 34 + 35 + 36 = 666, and 666 / 6 = 111). 111 is also the magic constant of the n-Queens Problem for n = 6. It is also a nonagonal number. In base 10, it is a Harshad number as well as a strobogrammatic number. Nelson In cricket, the number 111 is sometimes called "a Nelson" after Admiral Nelson, who allegedly only had "One Eye, One Arm, One Leg" near the end of his life. This is in fact inaccurate—Nelson never lost a leg. Alternate meanings include "One Eye, One Arm, One Ambition" and "One Eye, One Arm, One Arsehole". Particularly in cricket, multiples of 111 are called a double Nelson (222), triple Nelson (333), quadruple Nelson (444; also known as a salamander) and so on. A score of 111 is considered by some to be unlucky. To combat the supposed bad luck, some watching lift their feet off the ground. Si
https://en.wikipedia.org/wiki/Appeasement
Appeasement, in an international context, is a diplomatic policy of making political, material, or territorial concessions to an aggressive power to avoid conflict. The term is most often applied to the foreign policy of the British governments of Prime Ministers Ramsay MacDonald (in office 1929–1935), Stanley Baldwin (in office 1935–1937) and (most notably) Neville Chamberlain (in office 1937–1940) towards Nazi Germany (from 1933) and Fascist Italy (from 1922) between 1935 and 1939. Under British pressure, appeasement of Nazism and Fascism also played a role in French foreign policy of the period but was always much less popular there than in the United Kingdom. In the early 1930s, appeasing concessions were widely seen as desirable because of the anti-war reaction to the trauma of World War I (1914–1918), second thoughts about the perceived vindictive treatment by some of Germany during the 1919 Treaty of Versailles, and a perception that fascism was a useful form of anti-communism. However, by the time of the Munich Agreement, which was concluded on 30 September 1938 between Germany, the United Kingdom, France, and Italy, the policy was opposed by the Labour Party and by a few Conservative dissenters such as future Prime Minister Winston Churchill, Secretary of State for War Duff Cooper, and future Prime Minister Anthony Eden. Appeasement was strongly supported by the British upper class, including royalty, big business (based in the City of London), the House of Lords, and media such as the BBC and The Times. As alarm grew about the rise of fascism in Europe, Chamberlain resorted to attempts at news censorship to control public opinion. He confidently announced after Munich that he had secured "peace for our time". Academics, politicians and diplomats have intensely debated the 1930s appeasement policies ever since they occurred. Historians' assessments have ranged from condemnation ("Lesson of Munich") for allowing Hitler's Germany to grow too strong to the
https://en.wikipedia.org/wiki/Base64
In computer programming, Base64 is a group of tetrasexagesimal binary-to-text encoding schemes that represent binary data (more specifically, a sequence of 8-bit bytes) in sequences of 24 bits that can be represented by four 6-bit Base64 digits. As with all binary-to-text encoding schemes, Base64 is designed to carry data stored in binary formats across channels that only reliably support text content. Base64 is particularly prevalent on the World Wide Web where one of its uses is the ability to embed image files or other binary assets inside textual assets such as HTML and CSS files. Base64 is also widely used for sending e-mail attachments. This is required because SMTP – in its original form – was designed to transport 7-bit ASCII characters only. This encoding causes an overhead of 33–37% (33% by the encoding itself; up to 4% more by the inserted line breaks). Design The particular set of 64 characters chosen to represent the 64-digit values for the base varies between implementations. The general strategy is to choose 64 characters that are common to most encodings and that are also printable. This combination leaves the data unlikely to be modified in transit through information systems, such as email, that were traditionally not 8-bit clean. For example, MIME's Base64 implementation uses A–Z, a–z, and 0–9 for the first 62 values. Other variations share this property but differ in the symbols chosen for the last two values; an example is UTF-7. The earliest instances of this type of encoding were created for dial-up communication between systems running the same OS, for example, uuencode for UNIX and BinHex for the TRS-80 (later adapted for the Macintosh), and could therefore make more assumptions about what characters were safe to use. For instance, uuencode uses uppercase letters, digits, and many punctuation characters, but no lowercase. Base64 table from RFC 4648 This is the Base64 alphabet defined in RFC 4648 §4 . See also Variants summary (below).
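A quick illustration of the 3-byte to 4-character expansion and the roughly 33% overhead mentioned above, using Python's standard-library codec; the sample inputs are arbitrary.

```python
import base64

raw = b"Man"                      # 3 bytes = 24 bits
print(base64.b64encode(raw))      # b'TWFu' -> four 6-bit Base64 digits

data = bytes(range(256))          # arbitrary binary data
encoded = base64.b64encode(data)
print(len(data), len(encoded))    # 256 344: output is about 4/3 the input size
```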
https://en.wikipedia.org/wiki/Michael%20A.%20Jackson%20%28computer%20scientist%29
Michael Anthony Jackson (born 16 February 1936) is a British computer scientist, and independent computing consultant in London, England. He is also a visiting research professor at the Open University in the UK. Biography Born in Birmingham to Montagu M. Jackson and Bertha (Green) Jackson, Jackson was educated at Harrow School in Harrow, London, England. There he was taught by Christopher Strachey and wrote his first program under Strachey's guidance. From 1954 to 1958, he studied classics (known as "Greats") at Merton College, Oxford; a fellow student, two years ahead of him, was C. A. R. Hoare. They shared an interest in logic, which was studied as part of Greats at Oxford. After his graduation in 1961, Jackson started as computer science designer and consultant for Maxwell Stamp Associates in London. Here he designed, coded and tested his first programs for IBM and Honeywell computers, working in assembler. There Jackson found his calling, as he recollected in 2000: "Although I was a careful designer — drawing meticulous flowcharts before coding — and a conscientious tester, I realised that program design was hard and the results likely to be erroneous..." Information system design was in need of a structured approach. In 1964, Jackson joined the new consultancy firm John Hoskyns and Company in London, before founding his own company Michael Jackson Systems Limited in 1971. In the 1960s, he had started his search for a "more reliable and systematic way of programming." He contributed to the emerging modular programming movement, meeting Larry Constantine, George H. Mealy and several others on a 1968 symposium. In the 1970s, Jackson developed Jackson Structured Programming (JSP). In the 1980s, with John Cameron, he developed Jackson System Development (JSD). Then, in the 1990s, he developed the Problem Frames Approach. As a part-time researcher at AT&T Labs Research, in collaboration with Pamela Zave, Jackson created "Distributed Feature Composition", a vir
https://en.wikipedia.org/wiki/Pascal%27s%20wager
Pascal's wager is a philosophical argument advanced by Blaise Pascal (1623–1662), a notable seventeenth-century French mathematician, philosopher, physicist, and theologian. This argument posits that individuals essentially engage in a life-defining gamble regarding the belief in the existence of God. Pascal contends that a rational person should adopt a lifestyle consistent with the existence of God and actively strive to believe in God. The reasoning behind this stance lies in the potential outcomes: if God does not exist, the individual incurs only finite losses, potentially sacrificing certain pleasures and luxuries. However, if God does indeed exist, they stand to gain immeasurably, as represented for example by an eternity in Heaven in Abrahamic tradition, while simultaneously avoiding boundless losses associated with an eternity in Hell. The original articulation of this wager can be found in Pascal's posthumously published work titled Pensées ("Thoughts"), which comprises a compilation of previously unpublished notes. Notably, Pascal's wager is significant as it marks the initial formal application of decision theory, existentialism, pragmatism, and voluntarism. Critics of the wager question the ability to provide definitive proof of God's existence. The argument from inconsistent revelations highlights the presence of various belief systems, each claiming exclusive access to divine truths. Additionally, the argument from inauthentic belief raises concerns about the genuineness of faith in God if solely motivated by potential benefits and losses. Despite these critiques, Pascal's wager remains a subject of contemplation, sparking ongoing discussions about belief, rationality, and the complexities surrounding the existence of a higher power. The wager The wager uses the following logic (excerpts from Pensées, part III, §233): God is, or God is not. Reason cannot decide between the two alternatives A Game is being played... where heads or tails will
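The finite-versus-infinite reasoning above is essentially an expected-utility comparison. The following is an illustrative sketch only (p, c and f are stand-in symbols for the probability that God exists, the finite cost of belief, and the finite worldly gain of non-belief; they are not Pascal's own notation, and the infinite-loss branch is itself one of the contested premises).

```latex
\mathbb{E}[\text{wager for God}]     = p\cdot(+\infty) + (1-p)\cdot(-c) = +\infty \quad\text{for any } p>0,
\mathbb{E}[\text{wager against God}] = p\cdot(-\infty) + (1-p)\cdot(+f) = -\infty .
```

On these payoff assumptions the wager for God dominates no matter how small p is, which is the core of the argument.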
https://en.wikipedia.org/wiki/Gambas
Gambas is the name of an object-oriented dialect of the BASIC programming language, as well as the integrated development environment that accompanies it. Designed to run on Linux and other Unix-like computer operating systems, its name is a recursive acronym for Gambas Almost Means Basic. Gambas is also the word for prawns in the Spanish, French, and Portuguese languages, from which the project's logos are derived. History Gambas was developed by the French programmer Benoît Minisini, with its first release coming in 1999. Benoît had grown up with the BASIC language, and decided to make a free software development environment that could quickly and easily make programs with user interfaces. The Gambas 1.x versions featured an interface made up of several different separate windows for forms and IDE dialogues in a similar fashion to the interface of earlier versions of the GIMP. It could also only develop applications using Qt and was more oriented towards the development of applications for KDE. The last release of the 1.x versions was Gambas 1.0.19. The first of the 2.x versions was released on January 2, 2008, after three to four years of development. It featured a major redesign of the interface, now with all forms and functions embedded in a single window, as well as some changes to the Gambas syntax, although for the most part code compatibility was kept. It featured major updates to existing Gambas components as well as the addition of some new ones, such as new components that could use GTK+ or SDL for drawing or utilize OpenGL acceleration. Gambas 2.x versions can load up and run Gambas 1.x projects, with occasional incompatibilities; the same is true for Gambas 2.x to 3.x, but not from Gambas 1.x to 3.x. The next major iteration of Gambas, the 3.x versions, was released on December 31, 2011. A 2015 benchmark published on the Gambas website showed Gambas 3.8.90 scripting as being faster to varying degrees than Perl 5.20.2 and the then-latest 2.7.10
https://en.wikipedia.org/wiki/Permanent%20%28mathematics%29
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant. Definition The permanent of an n × n matrix A = (ai,j) is defined as perm(A) = ∑σ∈Sn a1,σ(1) a2,σ(2) ⋯ an,σ(n). The sum here extends over all elements σ of the symmetric group Sn; i.e. over all permutations of the numbers 1, 2, ..., n. For example, the permanent of a 2 × 2 matrix with rows (a, b) and (c, d) is ad + bc, and the permanent of a 3 × 3 matrix with rows (a, b, c), (d, e, f), (g, h, i) is aei + afh + bdi + bfg + cdh + ceg. The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangular matrices, and per(A) when A is a square matrix. Muir and Metzler use yet another notation. The word permanent originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function, and was used by Muir and Metzler in the modern, more specific, sense. Properties If one views the permanent as a map that takes n vectors as arguments, then it is a multilinear map and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix of order n: perm(A) is invariant under arbitrary permutations of the rows and/or columns of A. This property may be written symbolically as perm(A) = perm(PAQ) for any appropriately sized permutation matrices P and Q; multiplying any single row or column of A by a scalar s changes perm(A) to s⋅perm(A); perm(A) is invariant under transposition, that is, perm(A) = perm(Aᵀ). If A and B are square matrices of order n, then perm(A + B) expands as a sum, over pairs of equal-size subsets s and t of {1, 2, ..., n}, of products of the permanent of the submatrix of A on rows s and columns t with the permanent of the submatrix of B on the complementary rows and columns. If A is a triangular matrix, i.e. ai,j = 0 whenever i > j or, alternatively, whenever i < j, then its permanent (and determinant as well) equals the product of the diagon
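A brute-force computation straight from the definition above: sum over all permutations of the product of the selected entries, with no sign factor (the sign is the only thing that distinguishes this from the determinant). It is practical only for small matrices, since the permanent is hard to compute in general.

```python
from itertools import permutations
from math import prod

def permanent(a):
    n = len(a)
    return sum(
        prod(a[i][sigma[i]] for i in range(n))   # product along one permutation
        for sigma in permutations(range(n))       # summed over all n! permutations
    )

print(permanent([[1, 2], [3, 4]]))                       # 1*4 + 2*3 = 10
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))      # 3! = 6
```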
https://en.wikipedia.org/wiki/Win32s
Win32s is a 32-bit application runtime environment for the Microsoft Windows 3.1 and 3.11 operating systems. It allowed some 32-bit applications to run on the 16-bit operating system using call thunks. A beta version of Win32s was available in October 1992. Version 1.10 was released in July 1993 simultaneously with Windows NT 3.1. Concept and characteristics Win32s was intended as a partial implementation of the Win32 Windows API as it existed in early versions of Windows NT. The "s" in Win32s signifies subset, as Win32s lacked a number of Windows NT functions, including multi-threading, asynchronous I/O, newer serial port functions and many GDI extensions. This generally limited it to "Win32s applications" which were specifically designed for the Win32s platform, although some standard Win32 programs would work correctly, including Microsoft's 3D Pinball Space Cadet and some of Windows 95's included applets. Early versions of Internet Explorer (up to Version 5) were also Win32s compatible, although these also existed in 16-bit format. Generally, for a 32-bit application to be compatible with Win32s, it had to not use more than 16MB of memory or any extended features such as DirectX. Win32s inherits many of the limitations of the Win16 environment. True Win32 applications execute within a private virtual address space, whereas Windows 3.x used an address space shared among all running applications. An application running on Win32s has the shared address space and cooperative multitasking characteristics of Windows 3.1. Consequently, for a Win32 application to run on Win32s, it must contain relocation information. A technique named thunking is fundamental to the implementation of Win32s as well as Chicago-kernel operating systems, which are Windows 95, Windows 98, and Windows ME. However, allowing user-level thunking greatly complicates attempts to provide stable memory management or memory protection on a system-wide basis, as well as core or kernel security—
https://en.wikipedia.org/wiki/Leroy%20Hood
Leroy "Lee" Edward Hood (born October 10, 1938) is an American biologist who has served on the faculties at the California Institute of Technology (Caltech) and the University of Washington. Hood has developed ground-breaking scientific instruments which made possible major advances in the biological sciences and the medical sciences. These include the first gas phase protein sequencer (1982), for determining the sequence of amino acids in a given protein; a DNA synthesizer (1983), to synthesize short sections of DNA; a peptide synthesizer (1984), to combine amino acids into longer peptides and short proteins; the first automated DNA sequencer (1986), to identify the order of nucleotides in DNA; ink-jet oligonucleotide technology for synthesizing DNA and nanostring technology for analyzing single molecules of DNA and RNA. The protein sequencer, DNA synthesizer, peptide synthesizer, and DNA sequencer were commercialized through Applied Biosystems, Inc. and the ink-jet technology was commercialized through Agilent Technologies. The automated DNA sequencer was an enabling technology for the Human Genome Project. The peptide synthesizer was used in the synthesis of the HIV protease by Stephen Kent and others, and the development of a protease inhibitor for AIDS treatment. Hood established the first cross-disciplinary biology department, the Department of Molecular Biotechnology (MBT), at the University of Washington in 1992, and co-founded the Institute for Systems Biology in 2000. Hood is credited with introducing the term "systems biology", and advocates for "P4 medicine", medicine that is "predictive, personalized, preventive, and participatory." Scientific American counted him among the 10 most influential people in the field of biotechnology in 2015. Hood was elected a member of the National Academy of Engineering in 2007 for the invention and commercialization of key instruments, notably the automated DNA sequencer, that have enabled the biotechnology re
https://en.wikipedia.org/wiki/Genetically%20modified%20food
Genetically modified foods (GM foods), also known as genetically engineered foods (GE foods), or bioengineered foods are foods produced from organisms that have had changes introduced into their DNA using various methods of genetic engineering. Genetic engineering techniques allow for the introduction of new traits as well as greater control over traits when compared to previous methods, such as selective breeding and mutation breeding. The discovery of DNA and the improvement of genetic technology in the 20th century played a crucial role in the development of transgenic technology. In 1988, genetically modified microbial enzymes were first approved for use in food manufacture. Recombinant rennet was used in few countries in the 1990s. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its unsuccessful Flavr Savr delayed-ripening tomato. Most food modifications have primarily focused on cash crops in high demand by farmers such as soybean, maize/corn, canola, and cotton. Genetically modified crops have been engineered for resistance to pathogens and herbicides and for better nutrient profiles. The production of golden rice in 2000 marked a further improvement in the nutritional value of genetically modified food. GM livestock have been developed, although, , none were on the market. As of 2015, the AquAdvantage salmon was the only animal approved for commercial production, sale and consumption by the FDA. It is the first genetically modified animal to be approved for human consumption. Genes encoded for desired features, for instance an improved nutrient level, pesticide and herbicide resistances, and the possession of therapeutic substances, are often extracted and transferred to the target organisms, providing them with superior survival and production capacity. The improved utilization value usually gave consumers benefit in specific aspects. There is a scientific consensus that currently available food derived from GM
https://en.wikipedia.org/wiki/Protein%20engineering
Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research taking place into the understanding of protein folding and recognition for protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017. There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be included, via newer methods, such as expanded genetic code, that allow encoding novel amino acids in genetic code. Approaches Rational design In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well-developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when available, it can be very difficult to predict the effects of various mutations since structural information most often provide a static picture of a protein structure. However, programs such as Folding@home and Foldit have utilized crowdsourcing techniques in order to gain insight into the folding motifs of proteins. Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is
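A toy mutate-and-select loop in the spirit of directed evolution, the second strategy named above. The "fitness" function, target sequence and mutation rate are stand-ins for a real screening assay and library, chosen only to make the sketch self-contained.

```python
import random

random.seed(1)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"     # the 20 standard amino acids
TARGET = "MKTAYIAKQR"                 # stand-in for the property being screened for

def fitness(seq):                     # toy screen: similarity to the target sequence
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):            # random point mutations at a fixed per-site rate
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                              # keep the best variants
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(best, fitness(best))            # fitness climbs toward the target over generations
```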
https://en.wikipedia.org/wiki/Food%20quality
Food quality is a concept often based on the organoleptic characteristics (e.g., taste, aroma, appearance) and nutritional value of food. Producers reducing potential pathogens and other hazards through food safety practices is another important factor in gauging standards. A food's origin, and even its branding, can play a role in how consumers perceive the quality of products. Sensory Consumer acceptability of foods is typically based upon flavor and texture, as well as its color and smell. Safety The International Organization for Standardization identifies requirements for a producer's food safety management system, including the processes and procedures a company must follow to control hazards and promote safe products, through ISO 22000. Federal and state level departments, specifically The Food and Drug Administration, are responsible for promoting public health by, among other things, ensuring food safety. Food quality in the United States is enforced by the Food Safety Act 1990. The European Food Safety Authority provides scientific advice and communicates on risks associated with the food chain on the continent. There are many existing international quality institutes testing food products in order to indicate to all consumers which are higher quality products. Founded in 1961 in Brussels, The international Monde Selection quality award is the oldest in evaluating food quality. The judgements are based on the following areas: taste, health, convenience, labelling, packaging, environmental friendliness and innovation. As many consumers rely on manufacturing and processing standards, the Institute Monde Selection takes into account the European Food Law. Food quality in the United States is enforced by the Food Safety Act 1990. Members of the public complain to trading standards professionals, [specify] who submit complaint samples and also samples used to routinely monitor the food marketplace to public analysts. Public analysts carry out scientific ana
https://en.wikipedia.org/wiki/Pesticide%20resistance
Pesticide resistance describes the decreased susceptibility of a pest population to a pesticide that was previously effective at controlling the pest. Pest species evolve pesticide resistance via natural selection: the most resistant specimens survive and pass on their heritable resistance traits to their offspring. If a pest population has resistance, the pesticide's efficacy is reduced; efficacy and resistance are inversely related. Cases of resistance have been reported in all classes of pests (i.e. crop diseases, weeds, rodents, etc.), with 'crises' in insect control occurring early-on after the introduction of pesticide use in the 20th century. The Insecticide Resistance Action Committee (IRAC) definition of insecticide resistance is a heritable change in the sensitivity of a pest population that is reflected in the repeated failure of a product to achieve the expected level of control when used according to the label recommendation for that pest species. Pesticide resistance is increasing. Farmers in the US lost 7% of their crops to pests in the 1940s; over the 1980s and 1990s, the loss was 13%, even though more pesticides were being used. Over 500 species of pests have evolved a resistance to a pesticide. Other sources estimate the number to be around 1,000 species since 1945. Although the evolution of pesticide resistance is usually discussed as a result of pesticide use, it is important to keep in mind that pest populations can also adapt to non-chemical methods of control. For example, the northern corn rootworm (Diabrotica barberi) became adapted to a corn-soybean crop rotation by spending the year when the field is planted with soybeans in a diapause. Few new weed killers are near commercialization, and none with a novel, resistance-free mode of action. Similarly, discovery of new insecticides is more expensive and difficult than ever. Causes Pesticide resistance probably stems from multiple factors: Many pest species produce large number
https://en.wikipedia.org/wiki/New%20product%20development
In business and engineering, product development or new product development (PD or NPD) covers the complete process of bringing a new product to market, renewing an existing product and introducing a product in a new market. A central aspect of NPD is product design, along with various business considerations. New product development is described broadly as the transformation of a market opportunity into a product available for sale. The products developed by an organisation provide the means for it to generate income. For many technology-intensive firms their approach is based on exploiting technological innovation in a rapidly changing market. The product can be tangible (something physical which one can touch) or intangible (like a service or experience), though sometimes services and other processes are distinguished from "products."NPD requires an understanding of customer needs and wants, the competitive environment, and the nature of the market. Cost, time, and quality are the main variables that drive customer needs. Aiming at these three variables, innovative companies develop continuous practices and strategies to better satisfy customer requirements and to increase their own market share by a regular development of new products. There are many uncertainties and challenges which companies must face throughout the process. Product Development: Process Structure The product development process typically consists of several activities that firms employ in the complex process of delivering new products to the market. A process management approach is used to provide a structure. Product development often overlaps much with the engineering design process, particularly if the new product being developed involves application of math and/or science. Every new product will pass through a series of stages/phases, including ideation among other aspects of design, as well as manufacturing and market introduction. In highly complex engineered products (e.g. aircraf
https://en.wikipedia.org/wiki/Conservation%20biology
Conservation biology is the study of the conservation of nature and of Earth's biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. It is an interdisciplinary subject drawing on natural and social sciences, and the practice of natural resource management. The conservation ethic is based on the findings of conservation biology. Origins The term conservation biology and its conception as a new field originated with the convening of "The First International Conference on Research in Conservation Biology" held at the University of California, San Diego in La Jolla, California, in 1978 led by American biologists Bruce A. Wilcox and Michael E. Soulé with a group of leading university and zoo researchers and conservationists including Kurt Benirschke, Sir Otto Frankel, Thomas Lovejoy, and Jared Diamond. The meeting was prompted due to concern over tropical deforestation, disappearing species, and eroding genetic diversity within species. The conference and proceedings that resulted sought to initiate the bridging of a gap between theory in ecology and evolutionary genetics on the one hand and conservation policy and practice on the other. Conservation biology and the concept of biological diversity (biodiversity) emerged together, helping crystallize the modern era of conservation science and policy. The inherent multidisciplinary basis for conservation biology has led to new subdisciplines including conservation social science, conservation behavior and conservation physiology. It stimulated further development of conservation genetics which Otto Frankel had originated first but is now often considered a subdiscipline as well. Description The rapid decline of established biological systems around the world means that conservation biology is often referred to as a "Discipline with a deadline". Conservation biology is tied closely to ecology in researching the popu
https://en.wikipedia.org/wiki/Biocide
A biocide is defined in the European legislation as a chemical substance or microorganism intended to destroy, deter, render harmless, or exert a controlling effect on any harmful organism. The US Environmental Protection Agency (EPA) uses a slightly different definition for biocides as "a diverse group of poisonous substances including preservatives, insecticides, disinfectants, and pesticides used for the control of organisms that are harmful to human or animal health or that cause damage to natural or manufactured products". When compared, the two definitions roughly imply the same, although the US EPA definition includes plant protection products and some veterinary medicines. The terms "biocides" and "pesticides" are regularly interchanged, and often confused with "plant protection products". To clarify this, pesticides include both biocides and plant protection products, where the former refers to substances for non-food and feed purposes and the latter refers to substances for food and feed purposes. When discussing biocides a distinction should be made between the biocidal active substance and the biocidal product. The biocidal active substances are mostly chemical compounds, but can also be microorganisms (e.g. bacteria). Biocidal products contain one or more biocidal active substances and may contain other non-active co-formulants that ensure the effectiveness as well as the desired pH, viscosity, colour, odour, etc. of the final product. Biocidal products are available on the market for use by professional and/or non-professional consumers. Although most of the biocidal active substances have a relative high toxicity, there are also examples of active substances with low toxicity, such as , which exhibit their biocidal activity only under certain specific conditions such as in closed systems. In such cases, the biocidal product is the combination of the active substance and the device that ensures the intended biocidal activity, i.e. suffocation of rod
https://en.wikipedia.org/wiki/Vegetation
Vegetation is an assemblage of plant species and the ground cover they provide. It is a general term, without specific reference to particular taxa, life forms, structure, spatial extent, or any other specific botanical or geographic characteristics. It is broader than the term flora which refers to species composition. Perhaps the closest synonym is plant community, but vegetation can, and often does, refer to a wider range of spatial scales than that term does, including scales as large as the global. Primeval redwood forests, coastal mangrove stands, sphagnum bogs, desert soil crusts, roadside weed patches, wheat fields, cultivated gardens and lawns; all are encompassed by the term vegetation. The vegetation type is defined by characteristic dominant species, or a common aspect of the assemblage, such as an elevation range or environmental commonality. The contemporary use of vegetation approximates that of ecologist Frederic Clements' term earth cover, an expression still used by the Bureau of Land Management. History of definition The distinction between vegetation (the general appearance of a community) and flora (the taxonomic composition of a community) was first made by Jules Thurmann (1849). Prior to this, the two terms (vegetation and flora) were used indiscriminately, and still are in some contexts. Augustin de Candolle (1820) also made a similar distinction but he used the terms "station" (habitat type) and "habitation" (botanical region). Later, the concept of vegetation would influence the usage of the term biome with the inclusion of the animal element. Other concepts similar to vegetation are "physiognomy of vegetation" (Humboldt, 1805, 1807) and "formation" (Grisebach, 1838, derived from "Vegetationsform", Martius, 1824). Departing from Linnean taxonomy, Humboldt established a new science, dividing plant geography between taxonomists who studied plants as taxa and geographers who studied plants as vegetation. The physiognomic approach in the s
https://en.wikipedia.org/wiki/Bioregion
A bioregion is an ecologically and geographically defined area that is smaller than a biogeographic realm, but larger than an ecoregion or an ecosystem, in the World Wide Fund for Nature classification scheme. There is also an attempt to use the term in a rank-less generalist sense, similar to the terms "biogeographic area" or "biogeographic unit". It may be conceptually similar to an ecoprovince. It is also differently used in the environmentalist context, being coined by Berg and Dasmann (1977). WWF bioregions The World Wide Fund for Nature (WWF) scheme divides some of the biogeographic realms into bioregions, defined as "geographic clusters of ecoregions that may span several habitat types, but have strong biogeographic affinities, particularly at taxonomic levels higher than the species level (genus, family)." The WWF bioregions are as follows: Afrotropical realm Western Africa and Sahel Central Africa Eastern and Southern Africa Horn of Africa Madagascar–Indian Ocean Antarctic realm Australasian realm Australia New Guinea and Melanesia New Zealand Wallacea Indomalayan realm Indian subcontinent Indochina Sunda Shelf and Philippine Archipelago Nearctic realm Canadian Shield Eastern North America Northern Mexico and Southwestern United States Western North America Neotropical realm Amazonia Caribbean Central America Central Andes Eastern South America Everglades Northern Andes Orinoco Southern South America Oceanian realm Micronesia Polynesia Palearctic realm Asia East Asia north of the Himalayan system's foothills to the Arctic Himalayan Tibetan Plateau steppe Yunnan–Guizhou Plateau Northeast Asia Russian Far East Central Asia – Iranian Plateau and north to the Arctic. Temperate Asia biocountry Mongolian Plateau Eurasian Steppe Asian Russia (central) Asian-Siberian region Western Asia Arabian Desert Mediterranean Near East (roughly corresponds to the Levant) Anatolian Plateau Transcaucasia Northern Africa Atlantic coastal desert Sahara Desert Mediterranean
https://en.wikipedia.org/wiki/Value%20of%20life
The value of life is an economic value used to quantify the benefit of avoiding a fatality. It is also referred to as the cost of life, value of preventing a fatality (VPF), implied cost of averting a fatality (ICAF), and value of a statistical life (VSL). In social and political sciences, it is the marginal cost of death prevention in a certain class of circumstances. In many studies the value also includes the quality of life, the expected life time remaining, as well as the earning potential of a given person especially for an after-the-fact payment in a wrongful death claim lawsuit. As such, it is a statistical term, the cost of reducing the average number of deaths by one. It is an important issue in a wide range of disciplines including economics, health care, adoption, political economy, insurance, worker safety, environmental impact assessment, globalization, and process safety. The motivation for placing a monetary value on life is to enable policy and regulatory analysts to allocate the limited supply of resources, infrastructure, labor, and tax revenue. Estimates for the value of a life are used to compare the life-saving and risk-reduction benefits of new policies, regulations, and projects against a variety of other factors, often using a cost-benefit analysis. Estimates for the statistical value of life are published and used in practice by various government agencies. In Western countries and other liberal democracies, estimates for the value of a statistical life typically range from —; for example, the United States FEMA estimated the value of a statistical life at in 2020. Treatment in economics and methods of calculation There is no standard concept for the value of a specific human life in economics. However, when looking at risk/reward trade-offs that people make with regard to their health, economists often consider the value of a statistical life (VSL). Note that the VSL is very different from the value of an actual life. It is the value
https://en.wikipedia.org/wiki/JSTOR
JSTOR ( ; short for Journal Storage) is a digital library of academic journals, books, and primary sources founded in 1994. Originally containing digitized back issues of academic journals, it now encompasses books and other primary sources as well as current issues of journals in the humanities and social sciences. It provides full-text searches of almost 2,000 journals. Most access is by subscription but some of the site is public domain, and open access content is available free of charge. History William G. Bowen, president of Princeton University from 1972 to 1988, founded JSTOR in 1994. JSTOR was originally conceived as a solution to one of the problems faced by libraries, especially research and university libraries, due to the increasing number of academic journals in existence. Most libraries found it prohibitively expensive in terms of cost and space to maintain a comprehensive collection of journals. By digitizing many journal titles, JSTOR allowed libraries to outsource the storage of journals with the confidence that they would remain available long-term. Online access and full-text searchability improved access dramatically. Bowen initially considered using CD-ROMs for distribution. However, Ira Fuchs, Princeton University's vice president for Computing and Information Technology, convinced Bowen that CD-ROM was becoming an increasingly outdated technology and that network distribution could eliminate redundancy and increase accessibility. (For example, all Princeton's administrative and academic buildings were networked by 1989; the student dormitory network was completed in 1994; and campus networks like the one at Princeton were, in turn, linked to larger networks such as BITNET and the Internet.) JSTOR was initiated in 1995 at seven different library sites, and originally encompassed ten economics and history journals. JSTOR access improved based on feedback from its initial sites, and it became a fully searchable index accessible from any ordin
https://en.wikipedia.org/wiki/Newton%20polynomial
In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method. Definition Given a set of k + 1 data points where no two xj are the same, the Newton interpolation polynomial is a linear combination of Newton basis polynomials with the Newton basis polynomials defined as for j > 0 and . The coefficients are defined as where is the notation for divided differences. Thus the Newton polynomial can be written as Newton forward divided difference formula The Newton polynomial can be expressed in a simplified form when are arranged consecutively with equal spacing. Introducing the notation for each and , the difference can be written as . So the Newton polynomial becomes This is called the Newton forward divided difference formula. Newton backward divided difference formula If the nodes are reordered as , the Newton polynomial becomes If are equally spaced with and for i = 0, 1, ..., k, then, is called the Newton backward divided difference formula. Significance Newton's formula is of interest because it is the straightforward and natural differences-version of Taylor's polynomial. Taylor's polynomial tells where a function will go, based on its y value, and its derivatives (its rate of change, and the rate of change of its rate of change, etc.) at one particular x value. Newton's formula is Taylor's polynomial based on finite differences instead of instantaneous rates of change. Addition of new points As with other difference formulas, the degree of a Newton interpolating polynomial can be increased by adding more terms and points without discarding existing ones. Newton's form has the simplicity that the new points are always added at one end:
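To make the divided-difference construction concrete, here is a minimal Python sketch (not taken from the article; the sample data are illustrative). It builds the coefficients f[x0], f[x0,x1], ... in place from a divided-difference table and then evaluates the Newton form by nested multiplication.

```python
# Minimal sketch of Newton divided-difference interpolation (illustrative,
# not from the article). Coefficients a_j = f[x_0, ..., x_j] are built in
# place, then the polynomial is evaluated in nested (Horner-like) form.

def newton_coefficients(xs, ys):
    """Return the divided-difference coefficients a_j = f[x_0, ..., x_j]."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update from the bottom so lower-order differences are preserved.
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton polynomial at x using nested multiplication."""
    result = coeffs[-1]
    for j in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[j]) + coeffs[j]
    return result

if __name__ == "__main__":
    xs = [1.0, 2.0, 4.0]
    ys = [1.0, 4.0, 16.0]           # samples of f(x) = x**2
    a = newton_coefficients(xs, ys)
    print(newton_eval(xs, a, 3.0))  # 9.0: the interpolant reproduces x**2
```

Because new points are appended at the end of the table, extending the interpolant only requires computing one additional coefficient, which is the property the article highlights under "Addition of new points".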
https://en.wikipedia.org/wiki/Boot%20sector
A boot sector is the sector of a persistent data storage device (e.g., hard disk, floppy disk, optical disc, etc.) which contains machine code to be loaded into random-access memory (RAM) and then executed by a computer system's built-in firmware (e.g., the BIOS). Usually, the very first sector of the hard disk is the boot sector, regardless of sector size (512 or 4096 bytes) and partitioning flavor (MBR or GPT). The purpose of defining one particular sector as the boot sector is interoperability between firmware and various operating systems. The purpose of chain loading first a firmware (e.g., the BIOS), then some code contained in the boot sector, and then, for example, an operating system, is maximal flexibility. The IBM PC and compatible computers On an IBM PC compatible machine, the BIOS selects a boot device, then copies the first sector from the device (which may be an MBR, VBR or any executable code), into physical memory at memory address 0x7C00. On other systems, the process may be quite different. Unified Extensible Firmware Interface (UEFI) UEFI (as opposed to legacy booting via the CSM) does not rely on boot sectors; instead, the UEFI firmware loads the boot loader (an EFI application file on a USB disk or in the EFI system partition) directly. Additionally, the UEFI specification also contains "secure boot", which essentially requires the code loaded at boot to be digitally signed. Damage to the boot sector If a boot sector receives physical damage, the hard disk will no longer be bootable, unless used with a custom BIOS that defines a non-damaged sector as the boot sector. However, since the very first sector additionally contains data regarding the partitioning of the hard disk, the hard disk will become entirely unusable except when used in conjunction with custom software. Partition tables A disk can be partitioned into multiple partitions and, on conventional systems, it is expected to be. There are two definitions on how to store the information regarding the partitionin
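As a concrete illustration of the 512-byte sector layout discussed above, the following Python sketch reads the first sector of a raw disk image and checks for the 0x55 0xAA signature that BIOS-style firmware expects at offset 510. The file name disk.img is hypothetical; this is a hedged example, not part of any firmware specification text.

```python
# Illustrative sketch (not from the article): inspect the first sector of a
# raw disk image and check the classic 0x55 0xAA boot signature that BIOS
# firmware expects at bytes 510-511 of a 512-byte boot sector.
# "disk.img" is a hypothetical file name.

SECTOR_SIZE = 512

def read_boot_sector(path):
    with open(path, "rb") as f:
        return f.read(SECTOR_SIZE)

def has_boot_signature(sector):
    return len(sector) >= SECTOR_SIZE and sector[510:512] == b"\x55\xaa"

if __name__ == "__main__":
    sector = read_boot_sector("disk.img")
    print("boot signature present:", has_boot_signature(sector))
```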
https://en.wikipedia.org/wiki/Hilbert%20cube
In mathematics, the Hilbert cube, named after David Hilbert, is a topological space that provides an instructive example of some ideas in topology. Furthermore, many interesting topological spaces can be embedded in the Hilbert cube; that is, can be viewed as subspaces of the Hilbert cube (see below). Definition The Hilbert cube is best defined as the topological product of the intervals for That is, it is a cuboid of countably infinite dimension, where the lengths of the edges in each orthogonal direction form the sequence The Hilbert cube is homeomorphic to the product of countably infinitely many copies of the unit interval In other words, it is topologically indistinguishable from the unit cube of countably infinite dimension. Some authors use the term "Hilbert cube" to mean this Cartesian product instead of the product of the . If a point in the Hilbert cube is specified by a sequence with then a homeomorphism to the infinite dimensional unit cube is given by The Hilbert cube as a metric space It is sometimes convenient to think of the Hilbert cube as a metric space, indeed as a specific subset of a separable Hilbert space (that is, a Hilbert space with a countably infinite Hilbert basis). For these purposes, it is best not to think of it as a product of copies of but instead as as stated above, for topological properties, this makes no difference. That is, an element of the Hilbert cube is an infinite sequence that satisfies Any such sequence belongs to the Hilbert space so the Hilbert cube inherits a metric from there. One can show that the topology induced by the metric is the same as the product topology in the above definition. Properties As a product of compact Hausdorff spaces, the Hilbert cube is itself a compact Hausdorff space as a result of the Tychonoff theorem. The compactness of the Hilbert cube can also be proved without the axiom of choice by constructing a continuous function from the usual Cantor set onto the Hilbert cube.
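A small numerical sketch of the metric-space view described above (assumptions mine, not part of the article): points are sequences with 0 ≤ x_n ≤ 1/n, distances are ordinary ℓ² distances inherited from Hilbert space, and truncating the series already gives a good approximation.

```python
# Illustrative sketch: points of the Hilbert cube viewed inside Hilbert space
# are sequences (x_n) with 0 <= x_n <= 1/n, and the metric is the usual l^2
# distance. Truncating the series approximates the true distance; the tail
# error is controlled by the remaining terms of sum 1/n^2.

import math

def l2_distance(x, y):
    """l^2 distance between two truncated sequences of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

N = 1000
corner = [0.0] * N                       # the zero sequence
far = [1.0 / (n + 1) for n in range(N)]  # x_n = 1/n, the "opposite corner"

# Approximates sqrt(sum 1/n^2) = pi/sqrt(6) ~ 1.2825 as N grows.
print(l2_distance(corner, far), math.pi / math.sqrt(6))
```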
https://en.wikipedia.org/wiki/Daisyworld
Daisyworld, a computer simulation, is a hypothetical world orbiting a star whose radiant energy is slowly increasing or decreasing. It is meant to mimic important elements of the Earth-Sun system. James Lovelock and Andrew Watson introduced it in a paper published in 1983 to illustrate the plausibility of the Gaia hypothesis. In the original 1983 version, Daisyworld is seeded with two varieties of daisy as its only life forms: black daisies and white daisies. White petaled daisies reflect light, while black petaled daisies absorb light. The simulation tracks the two daisy populations and the surface temperature of Daisyworld as the sun's rays grow more powerful. The surface temperature of Daisyworld remains almost constant over a broad range of solar output. Mathematical model to sustain the Gaia hypothesis The purpose of the model is to demonstrate that feedback mechanisms can evolve from the actions or activities of self-interested organisms, rather than through classic group selection mechanisms. Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies. The colour of the daisies influences the albedo of the planet such that black daisies absorb light and warm the planet, while white daisies reflect light and cool the planet. Competition between the daisies (based on temperature-effects on growth rates) leads to a balance of populations that tends to favour a planetary temperature close to the optimum for daisy growth. Lovelock and Watson demonstrated the stability of Daisyworld by making its sun evolve along the main sequence, taking it from low to high solar constant. This perturbation of Daisyworld's receipt of solar radiation caused the balance of daisies to gradually shift from black to white but the planetary temperature was always regulated back to this optimum (except at the extreme ends of solar evolution). This situation is very different from the corresponding abiotic world, wh
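The feedback loop described above (albedo to temperature to growth to population, and back to albedo) is simple enough to sketch in a few lines of Python. The parameter values below follow commonly cited replications of the 1983 model and should be read as illustrative, not as the authors' exact code.

```python
# Compact sketch of the two-daisy Daisyworld feedback loop. Parameter values
# are the ones commonly used in replications of the 1983 model (illustrative).

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S = 917.0                # nominal insolation, W m^-2
ALBEDO = {"ground": 0.5, "white": 0.75, "black": 0.25}
Q = 2.06e9               # local heat redistribution parameter, K^4
DEATH = 0.3              # daisy death rate

def growth(temp_k):
    """Parabolic growth response, optimal near 295.5 K, zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - temp_k) ** 2)

def steady_state(luminosity, a_white=0.01, a_black=0.01, steps=2000, dt=0.05):
    for _ in range(steps):
        bare = max(0.0, 1.0 - a_white - a_black)
        planet_albedo = (bare * ALBEDO["ground"] + a_white * ALBEDO["white"]
                         + a_black * ALBEDO["black"])
        t_planet4 = S * luminosity * (1.0 - planet_albedo) / SIGMA
        t_white = max(Q * (planet_albedo - ALBEDO["white"]) + t_planet4, 0.0) ** 0.25
        t_black = max(Q * (planet_albedo - ALBEDO["black"]) + t_planet4, 0.0) ** 0.25
        a_white += dt * a_white * (bare * growth(t_white) - DEATH)
        a_black += dt * a_black * (bare * growth(t_black) - DEATH)
        a_white, a_black = max(a_white, 0.001), max(a_black, 0.001)  # keep a seed stock
    return t_planet4 ** 0.25, a_white, a_black

for lum in (0.7, 0.9, 1.1, 1.3):
    temp, w, b = steady_state(lum)
    print(f"L={lum:.1f}  T={temp - 273.15:5.1f} C  white={w:.2f}  black={b:.2f}")
```

Running the loop over increasing luminosity shows the population shifting from black-dominated to white-dominated while the planetary temperature stays comparatively flat over the regulated range, which is the qualitative behaviour the article describes.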
https://en.wikipedia.org/wiki/IPAQ
The iPAQ is a discontinued Pocket PC and personal digital assistant which was first unveiled by Compaq in April 2000. HP's line-up of iPAQ devices included PDA devices, smartphones and GPS navigators. A substantial number of devices were outsourced from the Taiwanese HTC Corporation. The name was borrowed from Compaq's earlier iPAQ Desktop Personal Computers. Following Hewlett-Packard's acquisition of Compaq, the product has been marketed by HP. The devices use a Windows Mobile interface. In addition to this, there are several Linux distributions that also operate on some of these devices. Earlier units were modular. Sleeve accessories were released called "jackets", which slide around the unit and add functionality such as a card reader, wireless networking, GPS, and extra batteries. Later versions of iPAQs have most of these features integrated into the base device itself, some including GPRS mobile telephony (SIM card slot and radio). History The iPAQ was developed by Compaq based on the SA-1110 "Assabet" and SA-1111 "Neponset" reference boards that were engineered by a StrongARM development group located at Digital Equipment Corporation's Hudson, Massachusetts facility. At the time when these boards were in development, this facility was acquired by Intel. When the "Assabet" board is combined with the "Neponset" companion processor board, they provide support for 32 megabytes of SDRAM in addition to CompactFlash and PCMCIA slots along with an I2S or AC-Link serial audio bus, PS/2 mouse and trackpad interfaces, a USB host controller and 18 additional GPIO pins. Software drivers for a CompactFlash Ethernet device, IDE storage devices such as the IBM Microdrive and the Lucent WaveLAN/IEEE 802.11 Wifi device were also available. An earlier StrongARM SA-1100 based research handheld device called the "Itsy" had been developed at Digital Equipment Corporation's Western Research Laboratory (later to become the Compaq Western Research Laboratory). The first iPAQ Pocket PC w
https://en.wikipedia.org/wiki/Real%20options%20valuation
Real options valuation, also often termed real options analysis (ROV or ROA), applies option valuation techniques to capital budgeting decisions. A real option itself is the right—but not the obligation—to undertake certain business initiatives, such as deferring, abandoning, expanding, staging, or contracting a capital investment project. For example, real options valuation could examine the opportunity to invest in the expansion of a firm's factory and the alternative option to sell the factory. Real options are generally distinguished from conventional financial options in that they are not typically traded as securities, and do not usually involve decisions on an underlying asset that is traded as a financial security. A further distinction is that option holders here, i.e. management, can directly influence the value of the option's underlying project; whereas this is not a consideration as regards the underlying security of a financial option. Moreover, management cannot measure uncertainty in terms of volatility, and must instead rely on their perceptions of uncertainty. Unlike financial options, management also have to create or discover real options, and such a creation and discovery process comprises an entrepreneurial or business task. Real options are most valuable when uncertainty is high; management has significant flexibility to change the course of the project in a favorable direction and is willing to exercise the options. Real options analysis, as a discipline, extends from its application in corporate finance, to decision making under uncertainty in general, adapting the techniques developed for financial options to "real-life" decisions. For example, R&D managers can use Real Options Valuation to help them deal with various uncertainties in making decisions about the allocation of resources among R&D projects. Non-business examples might be evaluating the cost of cryptocurrency mining machines, or the decision to join the work force, or rather,
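As a hedged illustration of how financial-option machinery carries over to capital budgeting, the sketch below values an option to abandon a project for a fixed salvage value on a binomial lattice with backward induction. All numbers are hypothetical textbook-style inputs, not a method prescribed by the text above.

```python
# Illustrative sketch: value an option to abandon a project for a fixed
# salvage value using a binomial lattice and backward induction, the same
# machinery used for American-style financial options. Numbers are hypothetical.

def abandonment_option_value(v0, salvage, up, down, rate, steps):
    """Project value v0 evolves multiplicatively by `up`/`down` each period;
    at every node management may abandon and take `salvage` instead."""
    q = (1.0 + rate - down) / (up - down)        # risk-neutral up probability
    # Terminal values: keep the project or abandon, whichever is worth more.
    values = [max(v0 * up**j * down**(steps - j), salvage) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):
        values = [
            max(
                (q * values[j + 1] + (1.0 - q) * values[j]) / (1.0 + rate),
                salvage,                          # exercise the abandonment option
            )
            for j in range(n + 1)
        ]
    return values[0]

with_option = abandonment_option_value(v0=100.0, salvage=80.0,
                                       up=1.3, down=0.8, rate=0.05, steps=3)
without_option = 100.0  # without the floor, the lattice discounts back to v0
print("value of flexibility:", with_option - without_option)
```

The positive difference printed at the end is the value of managerial flexibility: the project plus the abandonment option is worth more than the rigid project alone, which is the central claim real options analysis makes about decisions under high uncertainty.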
https://en.wikipedia.org/wiki/Catalan%20number
In combinatorial mathematics, the Catalan numbers are a sequence of natural numbers that occur in various counting problems, often involving recursively defined objects. They are named after the French-Belgian mathematician Eugène Charles Catalan. The nth Catalan number can be expressed directly in terms of the central binomial coefficients by The first Catalan numbers for n = 0, 1, 2, 3, ... are 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, ... . Properties An alternative expression for Cn is for which is equivalent to the expression given above because . This expression shows that Cn is an integer, which is not immediately obvious from the first formula given. This expression forms the basis for a proof of the correctness of the formula. Another alternative expression is which can be directly interpreted in terms of the cycle lemma; see below. The Catalan numbers satisfy the recurrence relations and Asymptotically, the Catalan numbers grow as in the sense that the quotient of the nth Catalan number and the expression on the right tends towards 1 as n approaches infinity. This can be proved by using the asymptotic growth of the central binomial coefficients, by Stirling's approximation for , or via generating functions. A more accurate asymptotic analysis shows that the Catalan numbers are approximated by the fourth order approximation . The only Catalan numbers Cn that are odd are those for which n = 2k − 1; all others are even. The only prime Catalan numbers are C2 = 2 and C3 = 5. The Catalan numbers have the integral representations which immediately yields . This has a simple probabilistic interpretation. Consider a random walk on the integer line, starting at 0. Let -1 be a "trap" state, such that if the walker arrives at -1, it will remain there. The walker can arrive at the trap state at times 1, 3, 5, 7..., and the number of ways the walker can arrive at the trap state at time is . Since the 1D random walk is recurrent, th
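The closed form and the standard convolution recurrence are easy to cross-check numerically. The short sketch below (illustrative, using Python's math.comb) prints the first few Catalan numbers both ways.

```python
# Small sketch verifying two standard formulas for the Catalan numbers:
# the closed form C_n = binom(2n, n) / (n + 1) and the convolution recurrence
# C_0 = 1, C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}.

from math import comb

def catalan_closed(n):
    return comb(2 * n, n) // (n + 1)

def catalan_recurrence(limit):
    cs = [1]
    for n in range(limit - 1):
        cs.append(sum(cs[i] * cs[n - i] for i in range(n + 1)))
    return cs

print([catalan_closed(n) for n in range(10)])
print(catalan_recurrence(10))
# Both print 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862.
```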
https://en.wikipedia.org/wiki/222%20%28number%29
222 (two hundred [and] twenty-two) is the natural number following 221 and preceding 223. In mathematics It is a decimal repdigit and a strobogrammatic number (meaning that it looks the same turned upside down on a calculator display). It is one of the numbers whose digit sum in decimal is the same as it is in binary. 222 is a noncototient, meaning that it cannot be written in the form n − φ(n) where φ is Euler's totient function counting the number of values that are smaller than n and relatively prime to it. There are exactly 222 distinct ways of assigning a meet and join operation to a set of ten unlabelled elements in order to give them the structure of a lattice, and exactly 222 different six-edge polysticks. References Integers
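Two of the stated properties can be checked in a couple of lines; this is a worked example rather than an exhaustive verification.

```python
# Quick check of two properties of 222 mentioned above: it is a decimal
# repdigit, and its decimal digit sum equals its binary digit sum.

n = 222
decimal_digits = [int(d) for d in str(n)]
binary_digits = [int(b) for b in bin(n)[2:]]

print(len(set(decimal_digits)) == 1)            # True: repdigit 2-2-2
print(sum(decimal_digits), sum(binary_digits))  # 6 6: equal digit sums
```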
https://en.wikipedia.org/wiki/Wardriving
Wardriving is the act of searching for Wi-Fi wireless networks, usually from a moving vehicle, using a laptop or smartphone. Software for wardriving is freely available on the internet. Warbiking, warcycling, warwalking and similar use the same approach but with other modes of transportation. Etymology War driving originated from wardialing, a method popularized by a character played by Matthew Broderick in the film WarGames, and named after that film. War dialing consists of dialing every phone number in a specific sequence in search of modems. Variants Warbiking or warcycling is similar to wardriving, but is done from a moving bicycle or motorcycle. This practice is sometimes facilitated by mounting a Wi-Fi enabled device on the vehicle. Warwalking, or warjogging, is similar to wardriving, but is done on foot rather than from a moving vehicle. The disadvantages of this method are a slower speed of travel (leading to the discovery of more infrequently discovered networks) and the absence of a convenient computing environment. Consequently, handheld devices such as pocket computers, which can perform such tasks while users are walking or standing, have dominated this practice. Technology advances and developments in the early 2000s expanded the extent of this practice. Advances include computers with integrated Wi-Fi, rather than CompactFlash (CF) or PC Card (PCMCIA) add-in cards in computers such as Dell Axim, Compaq iPAQ and Toshiba pocket computers starting in 2002. Later, the active Nintendo DS and Sony PSP enthusiast communities gained Wi-Fi abilities on these devices. Further, nearly all modern smartphones integrate Wi-Fi and Global Positioning System (GPS). Warrailing, or Wartraining, is similar to wardriving, but is done on a train or tram rather than from a slower more controllable vehicle. The disadvantages of this method are higher speed of travel (resulting in less discovery of more infrequently discovered networks) and often limited to major roads
https://en.wikipedia.org/wiki/Commutator%20%28electric%29
A commutator is a rotary electrical switch in certain types of electric motors and electrical generators that periodically reverses the current direction between the rotor and the external circuit. It consists of a cylinder composed of multiple metal contact segments on the rotating armature of the machine. Two or more electrical contacts called "brushes" made of a soft conductive material like carbon press against the commutator, making sliding contact with successive segments of the commutator as it rotates. The windings (coils of wire) on the armature are connected to the commutator segments. Commutators are used in direct current (DC) machines: dynamos (DC generators) and many DC motors as well as universal motors. In a motor the commutator applies electric current to the windings. By reversing the current direction in the rotating windings each half turn, a steady rotating force (torque) is produced. In a generator the commutator picks off the current generated in the windings, reversing the direction of the current with each half turn, serving as a mechanical rectifier to convert the alternating current from the windings to unidirectional direct current in the external load circuit. The first direct current commutator-type machine, the dynamo, was built by Hippolyte Pixii in 1832, based on a suggestion by André-Marie Ampère. Commutators are relatively inefficient, and also require periodic maintenance such as brush replacement. Therefore, commutated machines are declining in use, being replaced by alternating current (AC) machines, and in recent years by brushless DC motors which use semiconductor switches. Principle of operation A commutator consists of a set of contact bars fixed to the rotating shaft of a machine, and connected to the armature windings. As the shaft rotates, the commutator reverses the flow of current in a winding. For a single armature winding, when the shaft has made one-half complete turn, the winding is now connected so t
https://en.wikipedia.org/wiki/Tarski%27s%20circle-squaring%20problem
Tarski's circle-squaring problem is the challenge, posed by Alfred Tarski in 1925, to take a disc in the plane, cut it into finitely many pieces, and reassemble the pieces so as to get a square of equal area. This was proven to be possible by Miklós Laczkovich in 1990; the decomposition makes heavy use of the axiom of choice and is therefore non-constructive. Laczkovich estimated the number of pieces in his decomposition at roughly 10^50; the pieces used in his decomposition are non-measurable subsets. A constructive solution was given by Łukasz Grabowski, András Máthé and Oleg Pikhurko in 2016, which worked everywhere except for a set of measure zero. More recently, gave a completely constructive solution using about Borel pieces. In 2021 Máthé, Noel and Pikhurko improved the properties of the pieces. In contrast, Lester Dubins, Morris W. Hirsch & Jack Karush proved it is impossible to dissect a circle and make a square using pieces that could be cut with an idealized pair of scissors (that is, having Jordan curve boundary). Laczkovich actually proved the reassembly can be done using translations only; rotations are not required. Along the way, he also proved that any simple polygon in the plane can be decomposed into finitely many pieces and reassembled using translations only to form a square of equal area. The Bolyai–Gerwien theorem is a related but much simpler result: it states that one can accomplish such a decomposition of a simple polygon with finitely many polygonal pieces if both translations and rotations are allowed for the reassembly. It follows from a result of that it is possible to choose the pieces in such a way that they can be moved continuously while remaining disjoint to yield the square. Moreover, this stronger statement can be proved as well to be accomplished by means of translations only. These results should be compared with the much more paradoxical decompositions in three dimensions provided by the Banach–Tarski paradox; those dec
https://en.wikipedia.org/wiki/Network%20Information%20Service
The Network Information Service, or NIS (originally called Yellow Pages or YP), is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed the NIS; the technology is licensed to virtually all other Unix vendors. Because British Telecom PLC owned the name "Yellow Pages" as a registered trademark in the United Kingdom for its paper-based, commercial telephone directory, Sun changed the name of its system to NIS, though all the commands and functions still start with "yp". A NIS/YP system maintains and distributes a central directory of user and group information, hostnames, e-mail aliases and other text-based tables of information in a computer network. For example, in a common UNIX environment, the list of users for identification is placed in and secret authentication hashes in . NIS adds another "global" user list which is used for identifying users on any client of the NIS domain. Administrators have the ability to configure NIS to serve password data to outside processes to authenticate users using various versions of the Unix crypt(3) hash algorithms. However, in such cases, any NIS client can retrieve the entire password database for offline inspection. Successor technologies The original NIS design was seen to have inherent limitations, especially in the areas of scalability and security, so other technologies have come to replace it. Sun introduced NIS+ as part of Solaris 2 in 1992, with the intention for it to eventually supersede NIS. NIS+ features much stronger security and authentication features, as well as a hierarchical design intended to provide greater scalability and flexibility. However, it was also more cumbersome to set up and administer, and was more difficult to integrate into an existing NIS environment than many existing users wished. NIS+ has been removed from Solaris 11. As a result, many users choos
https://en.wikipedia.org/wiki/Network%20information%20system
A network information system (NIS) is an information system for managing networks, such as an electricity network, water supply network, gas supply network, telecommunications network, or street light network. An NIS may manage all data relevant to the network, e.g. all components and their attributes, the connectivity between them and other information relating to the operation, design and construction of such networks. NIS for electricity may manage any, some or all voltage levels: extra high, high, medium and low voltage. It may support only the distribution network or also the transmission network. Telecom NIS typically consists of the physical network inventory and logical network inventory. Physical network inventory is used to manage outside plant components, such as cables, splices, ducts, trenches, nodes and inside plant components such as active and passive devices. The most differentiating factor of telecom NIS from traditional GIS is the capability of recording thread-level connectivity. The logical network inventory is used to manage the logical connections and circuits utilizing the logical connections. Traditionally, the logical network inventory has been a separate product, but in most modern systems the functionality is built into the GIS, serving both the functionality of the physical network and logical network. A water network information system typically manages the water network components, such as ducts, branches, valves, hydrants, reservoirs and pumping stations. Some systems such as include the water consumers as well as water meters and their readings in the NIS. Sewage and stormwater components are typically included in the NIS. By adding sensors, as well as analysis and calculations based on the measured values, the concept of a smart water system is included in the NIS. By adding actuators into the network, the concept of SCADA can be included in the NIS. NIS may be built on top of a GIS (Geographical information system). Private Cloud based NIS
https://en.wikipedia.org/wiki/Agroecosystem
Agroecosystems are the ecosystems supporting the food production systems in farms and gardens. As the name implies, at the core of an agroecosystem lies the human activity of agriculture. As such they are the basic unit of study in Agroecology, and Regenerative Agriculture using ecological approaches. Like other ecosystems, agroecosystems form partially closed systems in which animals, plants, microbes, and other living organisms and their environment are interdependent and regularly interact. They are somewhat arbitrarily defined as a spatially and functionally coherent unit of agricultural activity. An agroecosystem can be seen as not restricted to the immediate site of agricultural activity (e.g. the farm). That is, it includes the region that is impacted by this activity, usually by changes to the complexity of species assemblages and energy flows, as well as to the net nutrient balance. Agroecosystems, particularly those managed intensively, are characterized as having simpler species composition, energy and nutrient flows than "natural" ecosystems. Likewise, agroecosystems are often associated with elevated nutrient input, much of which exits the farm leading to eutrophication of connected ecosystems not directly engaged in agriculture. Utilization Forest gardens are probably the world's oldest and most resilient agroecosystem. Forest gardens originated in prehistoric times along jungle-clad river banks and in the wet foothills of monsoon regions. In the gradual process of a family improving their immediate environment, useful tree and vine species were identified, protected and improved whilst undesirable species were eliminated. Eventually superior foreign species were selected and incorporated into the family's garden. Some major organizations are hailing farming within agroecosystems as the way forward for mainstream agriculture. Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility.
https://en.wikipedia.org/wiki/Electronic%20design%20automation
Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect to integrated circuits (ICs). History Early days The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s. Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time. The next era began following the publication of "Introduction to VLSI Systems" by Carver Mead and Lynn Conway in 1980; considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools h
https://en.wikipedia.org/wiki/LookSmart
LookSmart is an American search advertising, content management, online media, and technology company. It provides search, machine learning and chatbot technologies as well as pay-per-click and contextual advertising services. LookSmart also licenses and manages search ad networks as white-label products. It abides by the click measurement guidelines of the Interactive Advertising Bureau. LookSmart also owns several subsidiaries, including Clickable Inc., LookSmart AdCenter, Novatech.io, ShopWiki and Syncapse. The current CEO of LookSmart is Michael Onghai and the company is headquartered in Henderson, Nevada. Etymology The name "LookSmart" is a double entendre, referring to both its selective, editorially compiled directory and as a compliment to users whom the company thinks "look smart". History 1995–1998 LookSmart was founded as Homebase in 1995 in Melbourne, Australia by husband and wife Evan Thornley and Tracy Ellery, executives of McKinsey & Company. Reader's Digest invested $5 million in the company for an 80% stake. The original concept of Homebase was to build a female and family-friendly web portal to supplement the Reader's Digest magazine. After leadership and strategy changes at Reader's Digest, which reduced RD's focus on its online business, RD wanted to shut down Homebase, which would have cost $4 million in payouts and other termination costs. The founders and former McKinsey's employee Martin Hosking instead proposed a cheaper leveraged buyout of Homebase. On 28 October 1996, the company launched its LookSmart search engine. At launch, the search engine listed more than 85,000 sites and had a "Java-enhanced" interface. In June 1997, the search engine underwent a major redesign, dropping its original Java-based browsing system. LookSmart was sold back to the founders as well as Martin Hosking through a leveraged buyout in 1998, with Reader's Digest providing a $1.5 million loan and retaining about a 10% equity stake. Also in 1998, a searc
https://en.wikipedia.org/wiki/Glycobiology
Defined in the narrowest sense, glycobiology is the study of the structure, biosynthesis, and biology of saccharides (sugar chains or glycans) that are widely distributed in nature. Sugars or saccharides are essential components of all living things and aspects of the various roles they play in biology are researched in various medical, biochemical and biotechnological fields. History According to the Oxford English Dictionary, the specific term glycobiology was coined in 1988 by Prof. Raymond Dwek to recognize the coming together of the traditional disciplines of carbohydrate chemistry and biochemistry. This coming together was a result of a much greater understanding of the cellular and molecular biology of glycans. However, as early as the late nineteenth century, pioneering efforts were being made by Emil Fischer to establish the structure of some basic sugar molecules. Each year the Society of Glycobiology awards the Rosalind Kornfeld award for lifetime achievement in the field of glycobiology. Glycoconjugates Sugars may be linked to other types of biological molecule to form glycoconjugates. The enzymatic process of glycosylation creates sugars/saccharides linked to themselves and to other molecules by the glycosidic bond, thereby producing glycans. Glycoproteins, proteoglycans and glycolipids are the most abundant glycoconjugates found in mammalian cells. They are found predominantly on the outer cell membrane and in secreted fluids. Glycoconjugates have been shown to be important in cell-cell interactions due to the presence on the cell surface of various glycan binding receptors in addition to the glycoconjugates themselves. In addition to their function in protein folding and cellular attachment, the N-linked glycans of a protein can modulate the protein's function, in some cases acting as an on-off switch. Glycomics "Glycomics, analogous to genomics and proteomics, is the systematic study of all glycan structures of a given cell type or organism" and is
https://en.wikipedia.org/wiki/Eclipse%20%28software%29
Eclipse is an integrated development environment (IDE) used in computer programming. It contains a base workspace and an extensible plug-in system for customizing the environment. It is the second-most-popular IDE for Java development, and, until 2016, was the most popular. Eclipse is written mostly in Java and its primary use is for developing Java applications, but it may also be used to develop applications in other programming languages via plug-ins, including Ada, ABAP, C, C++, C#, Clojure, COBOL, D, Erlang, Fortran, Groovy, Haskell, JavaScript, Julia, Lasso, Lua, NATURAL, Perl, PHP, Prolog, Python, R, Ruby (including Ruby on Rails framework), Rust, Scala, and Scheme. It can also be used to develop documents with LaTeX (via a TeXlipse plug-in) and packages for the software Mathematica. Development environments include the Eclipse Java development tools (JDT) for Java and Scala, Eclipse CDT for C/C++, and Eclipse PDT for PHP, among others. The initial codebase originated from IBM VisualAge. The Eclipse software development kit (SDK), which includes the Java development tools, is meant for Java developers. Users can extend its abilities by installing plug-ins written for the Eclipse Platform, such as development toolkits for other programming languages, and can write and contribute their own plug-ins. Since Eclipse 3.0 (released in 2004), plug-ins are installed and managed as "bundles" using Equinox, an implementation of OSGi. The Eclipse SDK is free and open-source software, released under the terms of the Eclipse Public License, although it is incompatible with the GNU General Public License. It was one of the first IDEs to run under GNU Classpath and it runs without problems under IcedTea. History Eclipse was inspired by the Smalltalk-based VisualAge family of integrated development environment (IDE) products. Although fairly successful, a major drawback of the VisualAge products was that developed code was not in a component-based software engineering mode
https://en.wikipedia.org/wiki/Social%20welfare%20function
In welfare economics, a social welfare function is a function that ranks social states (alternative complete descriptions of the society) as less desirable, more desirable, or indifferent for every possible pair of social states. Inputs of the function include any variables considered to affect the economic welfare of a society. In using welfare measures of persons in the society as inputs, the social welfare function is individualistic in form. One use of a social welfare function is to represent prospective patterns of collective choice as to alternative social states. The social welfare function provides the government with a simple guideline for achieving the optimal distribution of income. The social welfare function is analogous to the consumer theory of indifference-curve–budget constraint tangency for an individual, except that the social welfare function is a mapping of individual preferences or judgments of everyone in the society as to collective choices, which apply to all, whatever individual preferences are for (variable) constraints on factors of production. One point of a social welfare function is to determine how close the analogy is to an ordinal utility function for an individual with at least minimal restrictions suggested by welfare economics, including constraints on the number of factors of production. There are two major distinct but related types of social welfare functions: A Bergson–Samuelson social welfare function considers welfare for a given set of individual preferences or welfare rankings. An Arrow social welfare function considers welfare across different possible sets of individual preferences or welfare rankings and seemingly reasonable axioms that constrain the function. Bergson–Samuelson social welfare function In a 1938 article, Abram Bergson introduced the social welfare function. The object was "to state in precise form the value judgments required for the derivation of the conditions of maximum economic welfare
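As a toy illustration of "ranking social states" (the functional forms below are standard textbook examples, not taken from this text), the snippet shows how a utilitarian (sum) and a Rawlsian (minimum) social welfare function can rank the same two welfare profiles differently.

```python
# Toy illustration: a Bergson-Samuelson social welfare function maps a profile
# of individual welfare levels to a single number, so different functional
# forms can rank the same two social states differently. Profiles are made up.

state_a = [10, 10, 10]   # individual welfare levels in social state A
state_b = [25, 10, 1]    # a richer but more unequal state B

utilitarian = sum        # W = sum of individual welfare
rawlsian = min           # W = welfare of the worst-off individual

for name, swf in [("utilitarian", utilitarian), ("rawlsian", rawlsian)]:
    if swf(state_a) == swf(state_b):
        ranking = "indifferent"
    else:
        ranking = "A preferred" if swf(state_a) > swf(state_b) else "B preferred"
    print(f"{name}: W(A)={swf(state_a)}, W(B)={swf(state_b)} -> {ranking}")
```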
https://en.wikipedia.org/wiki/Invertible%20matrix
In linear algebra, an -by- square matrix is called invertible (also nonsingular, nondegenerate or —rarely used— regular), if there exists an -by- square matrix such thatwhere denotes the -by- identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix is uniquely determined by , and is called the (multiplicative) inverse of , denoted by . Matrix inversion is the process of finding the matrix that satisfies the prior equation for a given invertible matrix . Over a field, a square matrix that is not invertible is called singular or degenerate. A square matrix with entries in a field is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any bounded region on the number line or complex plane, the probability that the matrix is singular is 0, that is, it will "almost never" be singular. Non-square matrices, i.e. -by- matrices for which , do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If is -by- and the rank of is equal to , (), then has a left inverse, an -by- matrix such that . If has rank (), then it has a right inverse, an -by- matrix such that . While the most common case is that of matrices over the real or complex numbers, all these definitions can be given for matrices over any algebraic structure equipped with addition and multiplication (i.e. rings). However, in the case of a ring being commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than it being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for existence of left-inverse or right-inverse are more complicated, since a notion of rank does not exist over rings. The set of invertible matrices together with the operation of matrix multiplica
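A minimal numerical sketch of the determinant criterion stated above (illustrative, using NumPy): a square matrix with nonzero determinant has an inverse satisfying A A⁻¹ = I, while a matrix with determinant zero is singular.

```python
# Numerical sketch of the invertibility criterion over the reals.

import numpy as np

a = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(np.linalg.det(a))                   # 1.0, nonzero, so A is invertible
a_inv = np.linalg.inv(a)
print(np.allclose(a @ a_inv, np.eye(2)))  # True: A @ A_inv is the identity

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])         # second row is twice the first
print(np.linalg.det(singular))            # 0.0 (up to rounding): no inverse exists
```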
https://en.wikipedia.org/wiki/Stochastic%20matrix
In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics. There are several different definitions and types of stochastic matrices: A right stochastic matrix is a real square matrix, with each row summing to 1. A left stochastic matrix is a real square matrix, with each column summing to 1. A doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and column summing to 1. In the same vein, one may define a stochastic vector (also called probability vector) as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a stochastic vector. A common convention in English language mathematics literature is to use row vectors of probabilities and right stochastic matrices rather than column vectors of probabilities and left stochastic matrices; this article follows that convention. In addition, a substochastic matrix is a real square matrix whose row sums are all History The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906. His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields. Stochastic matrices were further developed by scholars like Andrey Kolmogorov, who expanded thei
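The row-vector, right-stochastic convention described above can be sketched numerically. The two-state chain below is illustrative (numbers mine); the stationary row vector π with πP = π is found by simple power iteration.

```python
# Sketch of the right-stochastic convention: each row of P sums to 1, and a
# stationary distribution pi satisfies pi @ P = pi (found here by iteration).

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])               # right stochastic: rows sum to 1
print(P.sum(axis=1))                      # [1. 1.]

pi = np.array([1.0, 0.0])                 # any starting probability row vector
for _ in range(200):
    pi = pi @ P
print(pi)                                 # approx [0.8333, 0.1667]
print(np.allclose(pi @ P, pi))            # True: pi is (numerically) stationary
```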
https://en.wikipedia.org/wiki/Cray
Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world. Cray manufactures its products in part in Chippewa Falls, Wisconsin, where its founder, Seymour Cray, was born and raised. The company also has offices in Bloomington, Minnesota (which have been converted to Hewlett Packard Enterprise offices), and numerous other sales, service, engineering, and R&D locations around the world. In 1972, the company's predecessor, Cray Research, Inc. (CRI), was founded by computer designer Seymour Cray. In 1989, Seymour Cray formed Cray Computer Corporation (CCC), which went bankrupt in 1995. In 1996, Cray Research was acquired by Silicon Graphics (SGI). In 2000, Cray Inc. was formed when Tera Computer Company purchased the Cray Research Inc. business from SGI and adopted the name of its acquisition. In 2019, the company was acquired by Hewlett Packard Enterprise for $1.3 billion. History Background: 1950–1972 In 1950, Seymour Cray began working in the computing field when he joined Engineering Research Associates (ERA) in Saint Paul, Minnesota. There, he helped to create the ERA 1103. ERA eventually became part of UNIVAC, and began to be phased out. In 1960, he left the company, a few years after former ERA employees set up Control Data Corporation (CDC). He initially worked out of the CDC headquarters in Minneapolis, but grew upset by constant interruptions by managers. He eventually set up a lab in his hometown of Chippewa Falls, Wisconsin, about 85 miles to the east. Cray had a string of successes at CDC, including the CDC 6600 and CDC 7600. Cray Research Inc. and Cray Computer Corporation: 1972–1996 When CDC ran into financial difficulties in the late 1960s, development funds for Cray's follow-on CDC 8600 bec
https://en.wikipedia.org/wiki/INT%20%28x86%20instruction%29
INT is an assembly language instruction for x86 processors that generates a software interrupt. It takes the interrupt number formatted as a byte value. When written in assembly language, the instruction is written like this: INT X where X is the software interrupt that should be generated (0-255). As is customary with machine binary arithmetic, interrupt numbers are often written in hexadecimal form, which can be indicated with a prefix 0x or with the suffix h. For example, INT 13H will generate the 20th software interrupt (0x13 is the number 19 -- nineteen -- written in hexadecimal notation, and the count starts with 0), causing the function pointed to by the 20th vector in the interrupt table to be executed. Real mode When generating a software interrupt, the processor calls one of the 256 functions pointed to by the interrupt address table, which is located in the first 1024 bytes of memory while in real mode (see Interrupt vector). It is therefore entirely possible to use a far-call instruction to start the interrupt-function manually after pushing the flag register. An example of a useful DOS software interrupt was interrupt 0x21. By calling it with different parameters in the registers (mostly ah and al) you could access various IO operations, string output and more. Most Unix systems and derivatives do not use software interrupts, with the exception of interrupt 0x80, used to make system calls. This is accomplished by entering a 32-bit value corresponding to a kernel function into the EAX register of the processor and then executing INT 0x80. INT3 The INT3 instruction is a one-byte-instruction defined for use by debuggers to temporarily replace an instruction in a running program in order to set a code breakpoint. The more general INT XXh instructions are encoded using two bytes. This makes them unsuitable for use in patching instructions (which can be one byte long); see SIGTRAP. The opcode for INT3 is 0xCC, as opposed to the opcode for INT imme
https://en.wikipedia.org/wiki/Eternity
Eternity, in common parlance, is an infinite amount of time that never ends or the quality, condition or fact of being everlasting or eternal. Classical philosophy, however, defines eternity as what is timeless or exists outside time, whereas sempiternity corresponds to infinite duration. Philosophy Classical philosophy defines eternity as what exists outside time, as in describing timeless supernatural beings and forces, distinguished from sempiternity which corresponds to infinite time, as described in requiem prayers for the dead. Some thinkers, such as Aristotle, suggest the eternity of the natural cosmos in regard to both past and future eternal duration. Boethius defined eternity as "simultaneously full and perfect possession of interminable life". Thomas Aquinas believed that God's eternity does not cease, as it is without either a beginning or an end; the concept of eternity is of divine simplicity, thus incapable of being defined or fully understood by humankind. Thomas Hobbes (1588–1679) and many others in the Age of Enlightenment drew on the classical distinction to put forward metaphysical hypotheses such as "eternity is a permanent now". Contemporary philosophy and physics Today cosmologists, philosophers, and others look towards analyses of the concept from across cultures and history. They debate, among other things, whether an absolute concept of eternity has real application for fundamental laws of physics; compare the issue of the entropy as an arrow of time. Religion Eternity as infinite duration is an important concept in many lives and religions. God or gods are often said to endure eternally, or exist for all time, forever, without beginning or end. Religious views of an afterlife may speak of it in terms of eternity or eternal life. Christian theologians may regard immutability, like the eternal Platonic forms, as essential to eternity. Symbolism Eternity is often symbolized by the endless snake, swallowing its own tail, the ouro
https://en.wikipedia.org/wiki/Lagrange%20polynomial
In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs with the are called nodes and the are called values. The Lagrange polynomial has degree and assumes each value at the corresponding node, Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. Definition Given a set of nodes , which must all be distinct, for indices , the Lagrange basis for polynomials of degree for those nodes is the set of polynomials each of degree which take values if and . Using the Kronecker delta this can be written Each basis polynomial can be explicitly described by the product: Notice that the numerator has roots at the nodes while the denominator scales the resulting polynomial so that The Lagrange interpolating polynomial for those nodes through the corresponding values is the linear combination: Each basis polynomial has degree , so the sum has degree , and it interpolates the data because The interpolating polynomial is unique. Proof: assume the polynomial of degree interpolates the data. Then the difference is zero at distinct nodes But the only polynomial of degree with more than roots is the constant zero function, so or Barycentric form Each Lagrange basis polynomial can be rewritten as the product of three parts, a function common to every basis polynomial, a node-specific constant (called the barycentric weight), and a part representing the displacement from to : By facto
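A minimal sketch of the basis-polynomial construction defined above (the sample data are illustrative, not from the article): each basis polynomial is a product that equals 1 at its own node and 0 at the others, and the interpolant is the weighted sum of these basis polynomials.

```python
# Minimal sketch of Lagrange interpolation: build the basis polynomials
# ell_j(x) as products and sum y_j * ell_j(x).

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial for nodes xs, values ys at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)   # ell_j is 1 at x_j, 0 at other nodes
        total += yj * basis
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]                        # samples of f(x) = x**2 + x + 1
print(lagrange_interpolate(xs, ys, 1.5))    # 4.75 = 1.5**2 + 1.5 + 1
```

Note that, unlike the Newton form, adding a new data point forces every basis polynomial to be rebuilt, which is one reason the barycentric rewrite mentioned above is preferred in practice.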
https://en.wikipedia.org/wiki/Quorn
Quorn is a brand of meat substitute products; the name also refers to the company that makes them. Quorn originated in the UK and is sold primarily in Europe, but is available in 14 countries. The brand is owned by parent company Monde Nissin. Quorn is sold as both a cooking ingredient and as a meat substitute used in a range of prepackaged meals. All Quorn foods contain mycoprotein as an ingredient, which is derived from the Fusarium venenatum fungus. In most Quorn products, the fungus culture is dried and mixed with egg albumen, which acts as a binder, and then is adjusted in texture and pressed into various forms. A vegan formulation also exists that uses potato protein as a binder instead of egg albumen. History Quorn was launched in 1985 by Marlow Foods, a joint venture between Rank Hovis McDougall (RHM) and Imperial Chemical Industries (ICI). Microbial biomass is produced commercially as single-cell protein (SCP) for human food or animal feed and as viable yeast cells for the baking industry. The industrial production of bakers' yeast started in the early 1900s, and yeast biomass was used as human food in Germany during World War I. The development of large-scale processes for the production of microbial biomass as a source of commercial protein began in earnest in the late 1960s. Several of the processes investigated did not come to fruition owing to political and economic problems, but the establishment of the ICI Pruteen process for the production of bacterial SCP for animal feed was a milestone in the development of the fermentation industry. This process used continuous culture on a large scale. The economics of the production of SCP as animal feed were marginal, which eventually led to the discontinuation of the Pruteen process. The technical expertise gained from the Pruteen process assisted ICI in collaborating with the company Rank Hovis McDougall on a process for the production of fungal biomass for human food. A continuous fermentation process for the production o
https://en.wikipedia.org/wiki/Conjugate%20transpose
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an complex matrix is an matrix obtained by transposing and applying complex conjugate on each entry (the complex conjugate of being , for real numbers and ). It is often denoted as or or or (often in physics) . For real matrices, the conjugate transpose is just the transpose, . Definition The conjugate transpose of an matrix is formally defined by where the subscript denotes the -th entry, for and , and the overbar denotes a scalar complex conjugate. This definition can also be written as where denotes the transpose and denotes the matrix with complex conjugated entries. Other names for the conjugate transpose of a matrix are Hermitian conjugate, adjoint matrix or transjugate. The conjugate transpose of a matrix can be denoted by any of these symbols: , commonly used in linear algebra , commonly used in linear algebra (sometimes pronounced as A dagger), commonly used in quantum mechanics , although this symbol is more commonly used for the Moore–Penrose pseudoinverse In some contexts, denotes the matrix with only complex conjugated entries and no transposition. Example Suppose we want to calculate the conjugate transpose of the following matrix . We first transpose the matrix: Then we conjugate every entry of the matrix: Basic remarks A square matrix with entries is called Hermitian or self-adjoint if ; i.e., . Skew Hermitian or antihermitian if ; i.e., . Normal if . Unitary if , equivalently , equivalently . Even if is not square, the two matrices and are both Hermitian and in fact positive semi-definite matrices. The conjugate transpose "adjoint" matrix should not be confused with the adjugate, , which is also sometimes called adjoint. The conjugate transpose of a matrix with real entries reduces to the transpose of , as the conjugate of a real number is the number itself. Motivation The conjugate transpose can be motivated by noting
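A minimal NumPy sketch of the operation (the matrix is an arbitrary example chosen here, not the one worked through in the text): transpose, then conjugate every entry.

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j],
              [2 - 3j, 5 + 5j]])        # a 3x2 complex matrix

A_H = A.conj().T                        # conjugate transpose: an m x n matrix becomes n x m
print(A_H.shape)                        # (2, 3)

# A^H A is always Hermitian (and positive semi-definite), as noted above:
M = A_H @ A
print(np.allclose(M, M.conj().T))       # True
```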
https://en.wikipedia.org/wiki/Self-modifying%20code
In computer science, self-modifying code (SMC or SMoC) is code that alters its own instructions while it is executing – usually to reduce the instruction path length and improve performance or simply to reduce otherwise repetitively similar code, thus simplifying maintenance. The term is usually only applied to code where the self-modification is intentional, not in situations where code accidentally modifies itself due to an error such as a buffer overflow. Self-modifying code can involve overwriting existing instructions or generating new code at run time and transferring control to that code. Self-modification can be used as an alternative to the method of "flag setting" and conditional program branching, used primarily to reduce the number of times a condition needs to be tested. The method is frequently used for conditionally invoking test/debugging code without requiring additional computational overhead for every input/output cycle. The modifications may be performed: only during initialization – based on input parameters (when the process is more commonly described as software 'configuration' and is somewhat analogous, in hardware terms, to setting jumpers for printed circuit boards). Alteration of program entry pointers is an equivalent indirect method of self-modification, but requiring the co-existence of one or more alternative instruction paths, increasing the program size. throughout execution ("on the fly") – based on particular program states that have been reached during the execution In either case, the modifications may be performed directly to the machine code instructions themselves, by overlaying new instructions over the existing ones (for example: altering a compare and branch to an unconditional branch or alternatively a 'NOP'). In the IBM System/360 architecture, and its successors up to z/Architecture, an EXECUTE (EX) instruction logically overlays the second byte of its target instruction with the low-order 8 bits of register 1. T
https://en.wikipedia.org/wiki/Domain%20relational%20calculus
In computer science, domain relational calculus (DRC) is a calculus that was introduced by Michel Lacroix and Alain Pirotte as a declarative database query language for the relational data model. In DRC, queries have the form: where each Xi is either a domain variable or constant, and denotes a DRC formula. The result of the query is the set of tuples X1 to Xn that make the DRC formula true. This language uses the same operators as tuple calculus, the logical connectives ∧ (and), ∨ (or) and ¬ (not). The existential quantifier (∃) and the universal quantifier (∀) can be used to bind the variables. Its computational expressiveness is equivalent to that of relational algebra. Examples Let (A, B, C) mean (Rank, Name, ID) in the Enterprise relation and let (D, E, F) mean (Name, DeptName, ID) in the Department relation All captains of the starship USS Enterprise: In this example, A, B, C denotes both the result set and a set in the table Enterprise. Names of Enterprise crew members who are in Stellar Cartography: In this example, we're only looking for the name, and that's B. The condition F = C is a requirement that describes the intersection of Enterprise crew members AND members of the Stellar Cartography Department. An alternate representation of the previous example would be: In this example, the value of the requested F domain is directly placed in the formula and the C domain variable is re-used in the query for the existence of a department, since it already holds a crew member's ID. See also Relational calculus References External links DES – An educational tool for working with Domain Relational Calculus and other formal languages WinRDBI – An educational tool for working with Domain Relational Calculus and other formal languages Relational model Logical calculi
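For readers who find it easier to see the queries executed, the sketch below restates the two example queries as Python set comprehensions over small invented relations (the tuples are hypothetical; DRC itself is declarative and not tied to this encoding).

```python
Enterprise = {                       # tuples (Rank, Name, ID) = (A, B, C)
    ("Captain", "Picard", 1),
    ("Commander", "Riker", 2),
    ("Lieutenant", "Darren", 3),
}
Department = {                       # tuples (Name, DeptName, ID) = (D, E, F)
    ("Darren", "Stellar Cartography", 3),
    ("Riker", "Command", 2),
}

# All captains of the starship USS Enterprise: keep whole tuples with A = "Captain".
captains = {(A, B, C) for (A, B, C) in Enterprise if A == "Captain"}

# Names of Enterprise crew members who are in Stellar Cartography:
# project onto B, requiring some Department tuple with E = "Stellar Cartography" and F = C.
cartographers = {B for (A, B, C) in Enterprise
                 for (D, E, F) in Department
                 if E == "Stellar Cartography" and F == C}

print(captains)        # {('Captain', 'Picard', 1)}
print(cartographers)   # {'Darren'}
```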
https://en.wikipedia.org/wiki/Septic%20tank
A septic tank is an underground chamber made of concrete, fiberglass, or plastic through which domestic wastewater (sewage) flows for basic sewage treatment. Settling and anaerobic digestion processes reduce solids and organics, but the treatment efficiency is only moderate (referred to as "primary treatment"). Septic tank systems are a type of simple onsite sewage facility. They can be used in areas that are not connected to a sewerage system, such as rural areas. The treated liquid effluent is commonly disposed in a septic drain field, which provides further treatment. Nonetheless, groundwater pollution may occur and can be a problem. The term "septic" refers to the anaerobic bacterial environment that develops in the tank that decomposes or mineralizes the waste discharged into the tank. Septic tanks can be coupled with other onsite wastewater treatment units such as biofilters or aerobic systems involving artificially forced aeration. The rate of accumulation of sludge—also called septage or fecal sludge—is faster than the rate of decomposition. Therefore, the accumulated fecal sludge must be periodically removed, which is commonly done with a vacuum truck. Description A septic tank consists of one or more concrete or plastic tanks of between 4,500 and 7,500 litres (1,000 and 2,000 gallons); one end is connected to an inlet wastewater pipe and the other to a septic drain field. Generally these pipe connections are made with a T pipe, allowing liquid to enter and exit without disturbing any crust on the surface. Today, the design of the tank usually incorporates two chambers, each equipped with an access opening and cover, and separated by a dividing wall with openings located about midway between the floor and roof of the tank. Wastewater enters the first chamber of the tank, allowing solids to settle and scum to float. The settled solids are anaerobically digested, reducing the volume of solids. The liquid component flows through the dividing wall into the
https://en.wikipedia.org/wiki/JXTA
JXTA (Juxtapose) was an open-source peer-to-peer protocol specification begun by Sun Microsystems in 2001. The JXTA protocols were defined as a set of XML messages which allow any device connected to a network to exchange messages and collaborate independently of the underlying network topology. As JXTA was based upon a set of open XML protocols, it could be implemented in any modern computer language. Implementations were developed for Java SE, C/C++, C# and Java ME. The C# version used the C++/C native bindings and was not a complete re-implementation in its own right. JXTA peers create a virtual overlay network which allows a peer to interact with other peers even when some of the peers and resources are behind firewalls and NATs or use different network transports. In addition, each resource is identified by a unique ID, a 160-bit SHA-1 URN in the Java binding, so that a peer can change its localization address while keeping a constant identification number. Status "In November 2010, Oracle officially announced its withdrawal from the JXTA projects". As of August 2011, the JXTA project had not been resumed or otherwise announced to continue operations; no decision had been made on the assembly of its Board, nor had Oracle answered a pending request to move the source code to Apache License version 2. Protocols in JXTA Peer Resolver Protocol Peer Information Protocol Rendezvous Protocol Peer Membership Protocol Pipe Binding Protocol Endpoint Routing Protocol Categories of peers JXTA defines two main categories of peers: edge peers and super-peers. The super-peers can be further divided into rendezvous and relay peers. Each peer has a well-defined role in the JXTA peer-to-peer model. The edge peers are usually defined as peers which have transient, low-bandwidth network connectivity. They usually reside on the border of the Internet, hidden behind corporate firewalls or accessing the network through non-dedicated connections. A Rendez
https://en.wikipedia.org/wiki/Leo%20Esaki
Reona Esaki (江崎 玲於奈 Esaki Reona, born March 12, 1925), also known as Leo Esaki, is a Japanese physicist who shared the Nobel Prize in Physics in 1973 with Ivar Giaever and Brian David Josephson for his work in electron tunneling in semiconductor materials which finally led to his invention of the Esaki diode, which exploited that phenomenon. This research was done when he was with Tokyo Tsushin Kogyo (now known as Sony). He has also contributed in being a pioneer of the semiconductor superlattices. Early life and education Esaki was born in Takaida-mura, Nakakawachi-gun, Osaka Prefecture (now part of Higashiōsaka City) and grew up in Kyoto, near by Kyoto Imperial University and Doshisha University. He first had contact with American culture in . After graduating from the Third Higher School, he studied physics at Tokyo Imperial University, where he had attended Hideki Yukawa's course in nuclear theory in October 1944. Also, he lived through the Bombing of Tokyo while he was at college. Esaki received his B.Sc. and Ph.D. in 1947 and 1959, respectively, from the University of Tokyo (UTokyo). Career Esaki diode From 1947 to 1960, Esaki joined Kawanishi Corporation (now Denso Ten) and Tokyo Tsushin Kogyo (now Sony). Meanwhile, American physicists John Bardeen, Walter Brattain, and William Shockley invented the transistor, which encouraged Esaki to change fields from vacuum tube to heavily-doped germanium and silicon research in Sony. One year later, he recognized that when the PN junction width of germanium is thinned, the current-voltage characteristic is dominated by the influence of the tunnel effect and, as a result, he discovered that as the voltage is increased, the current decreases inversely, indicating negative resistance. This discovery was the first demonstration of solid tunneling effects in physics, and it was the birth of new electronic devices in electronics called Esaki diode (or tunnel diode). He received a doctorate degree from UTokyo due to this
https://en.wikipedia.org/wiki/10BROAD36
10BROAD36 is an obsolete computer network standard in the Ethernet family. It was developed during the 1980s and specified in IEEE 802.3b-1985. The Institute of Electrical and Electronics Engineers standards committee IEEE 802 published the standard that was ratified in 1985 as an additional section 11 to the base Ethernet standard. It was also issued as ISO/IEC 8802-3 in 1989. The standard supports 10 Mbit/s Ethernet signals over standard 75 ohm cable television (CATV) cable over a 3600-meter range. 10BROAD36 modulates its data onto a higher frequency carrier signal, much as an audio signal would modulate a carrier signal to be transmitted in a radio station. In telecommunications engineering, this is a broadband signaling technique. Broadband provides several advantages over the baseband signal used, for instance in 10BASE5. Range is greatly extended (3600 meters, versus 500 meters for 10BASE5), and multiple signals can be carried on the same cable. 10BROAD36 can even share a cable with standard television channels. Deployment 10BROAD36 was less successful than its contemporaries because of the high equipment complexity (and cost) associated with it. The individual stations are much more expensive due to the extra radio frequency circuitry involved; however the primary extra complexity comes from the fact that 10BROAD36 is unidirectional. Signals can only travel one direction along the line, so head-end stations must be present on the line to repeat the signals (ensuring that no packets travel through the line indefinitely) on either another, backwards direction frequency on the same line, or another line entirely. This also increases latency and prevents bidirectional signal flow. The extra complexity outweighed the advantage of reusability of CATV technology for the intended campus networks and metropolitan area networks. An installer at Boston University using the Ungermann-Bass product noted that no installers understood both the digital and analog aspects
https://en.wikipedia.org/wiki/IBM%20PS/2
The Personal System/2 or PS/2 is IBM's second generation of personal computers. Released in 1987, it officially replaced the IBM PC, XT, AT, and PC Convertible in IBM's lineup. Many of the PS/2's innovations, such as the 16550 UART (serial port), 1440 KB 3.5-inch floppy disk format, 72-pin SIMMs, the PS/2 port, and the VGA video standard, went on to become standards in the broader PC market. The PS/2 line was created by IBM partly in an attempt to recapture control of the PC market by introducing the advanced yet proprietary Micro Channel architecture (MCA) on higher-end models. These models were in the strange position of being incompatible with the hardware standards previously established by IBM and adopted in the IBM PC compatible industry. IBM's initial PS/2 computers were popular with target market corporate buyers, and by September 1988, IBM reported that it had sold 3 million PS/2 machines. This was only 18 months after the new range had been introduced. Most major PC manufacturers balked at IBM's licensing terms for MCA-compatible hardware, particularly the per-machine royalties. In 1992, Macworld stated that "IBM lost control of its own market and became a minor player with its own technology." IBM officially retired the PS/2 line in July 1995. The OS/2 operating system was announced at the same time as the PS/2 line and was intended to be the primary operating system for models with Intel 80286 or later processors. However, at the time of the first shipments, only IBM PC DOS 3.3 was available. OS/2 1.0 (text-mode only) and Microsoft's Windows 2.0 became available several months later. IBM also released AIX PS/2, a UNIX operating system for PS/2 models with Intel 386 or later processors. Technology IBM's PS/2 was designed to remain software compatible with their PC/AT/XT line of computers upon which the large PC clone market was built, but the hardware was quite different. PS/2 had two BIOSes: one named ABIOS (Advanced BIOS) which provided a new prote
https://en.wikipedia.org/wiki/Enhanced%20Small%20Disk%20Interface
Enhanced Small Disk Interface (ESDI) is an obsolete disk interface designed by Maxtor Corporation in the early 1980s to be a follow-on to the ST-412/506 interface. ESDI improved on ST-506 by moving certain parts that were traditionally kept on the controller (such as the data separator) into the drives themselves, and also generalizing the control bus such that more kinds of devices (such as removable disks and tape drives) could be connected. ESDI uses the same cabling as ST-506 (one 34-pin common control cable, and a 20-pin data channel cable for each device), and thus could easily be retrofitted to ST-506 applications. ESDI was popular in the mid-to-late 1980s, when SCSI and IDE technologies were young and immature, and ST-506 was neither fast nor flexible enough. ESDI could handle data rates of 10, 15, or 20 Mbit/s (as opposed to ST-506's top speed of 7.5 Mbit/s), and many high-end SCSI drives of the era were actually high-end ESDI drives with SCSI bridges integrated on the drive. By 1990, SCSI had matured enough to handle high data rates and multiple types of drives, and ATA was quickly overtaking ST-506 in the desktop market. These two events made ESDI less and less important over time, and by the mid-1990s, ESDI was no longer in common use. Connector pinouts References Micropolis ESDI X3.170-90 Interface Specification ESDI Specification, Magnetic Peripherals Inc, 1984 IBM ESDI Fixed Disk Drive Adapter/A Technical Reference Computer storage buses
https://en.wikipedia.org/wiki/Advanced%20Audio%20Coding
Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieves higher sound quality than MP3 encoders at the same bit rate. AAC has been standardized by ISO and IEC as part of the MPEG-2 and MPEG-4 specifications. Part of AAC, HE-AAC ("AAC+"), is part of MPEG-4 Audio and is adopted into digital radio standards DAB+ and Digital Radio Mondiale, and mobile television standards DVB-H and ATSC-M/H. AAC supports inclusion of 48 full-bandwidth (up to 96 kHz) audio channels in one stream plus 16 low frequency effects (LFE, limited to 120 Hz) channels, up to 16 "coupling" or dialog channels, and up to 16 data streams. The quality for stereo is satisfactory to modest requirements at 96 kbit/s in joint stereo mode; however, hi-fi transparency demands data rates of at least 128 kbit/s (VBR). Tests of MPEG-4 audio have shown that AAC meets the requirements referred to as "transparent" for the ITU at 128 kbit/s for stereo, and 320 kbit/s for 5.1 audio. AAC uses only a modified discrete cosine transform (MDCT) algorithm, giving it higher compression efficiency than MP3, which uses a hybrid coding algorithm that is part MDCT and part FFT. AAC is the default or standard audio format for iPhone, iPod, iPad, Nintendo DSi, Nintendo 3DS, Apple Music, iTunes, DivX Plus Web Player, PlayStation 4 and various Nokia Series 40 phones. It is supported on a wide range of devices and software such as PlayStation Vita, Wii, digital audio players like Sony Walkman or SanDisk Clip, Android and BlackBerry devices, various in-dash car audio systems, and is also one of the audio formats used on the Spotify web player. History Background The discrete cosine transform (DCT), a type of transform coding for lossy compression, was proposed by Nasir Ahmed in 1972, and developed by Ahmed with T. Natarajan and K. R. Rao in 1973, publishing their results in 1974. This led to the development of the modified
https://en.wikipedia.org/wiki/Characteristic%20polynomial
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any base (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero. In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix. Motivation In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the corresponding eigenvalue is the measure of the resulting change of magnitude of the vector. More precisely, if the transformation is represented by a square matrix an eigenvector and the corresponding eigenvalue must satisfy the equation or, equivalently, where is the identity matrix, and (although the zero vector satisfies this equation for every it is not considered an eigenvector). It follows that the matrix must be singular, and its determinant must be zero. In other words, the eigenvalues of are the roots of which is a monic polynomial in of degree if is a matrix. This polynomial is the characteristic polynomial of . Formal definition Consider an matrix The characteristic polynomial of denoted by is the polynomial defined by where denotes the identity matrix. Some authors define the characteristic polynomial to be That polynomial differs from the one defined here by a sign so it makes no difference for properties like having as roots the eigenvalues of ; however the definit
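A short numerical sketch (the matrix is an illustrative example): NumPy's np.poly returns the monic coefficients of the characteristic polynomial, whose roots are the eigenvalues and whose coefficients involve the trace and determinant.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)                     # coefficients of det(tI - A), highest power first
print(coeffs)                           # [ 1. -4.  3.]  i.e.  t**2 - 4*t + 3

print(np.roots(coeffs))                 # [3. 1.] -- the eigenvalues of A
print(np.trace(A), np.linalg.det(A))    # 4.0 and ~3.0: t**2 - (tr A)*t + det A for a 2x2 matrix
```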
https://en.wikipedia.org/wiki/Lean%20manufacturing
Lean manufacturing is a production method aimed primarily at reducing times within the production system as well as response times from suppliers and to customers. It is closely related to another concept called just-in-time manufacturing (JIT manufacturing in short). Just-in-time manufacturing tries to match production to demand by only supplying goods which have been ordered and focuses on efficiency, productivity (with a commitment to continuous improvement) and reduction of "wastes" for the producer and supplier of goods. Lean manufacturing adopts the just-in-time approach and additionally focuses on reducing cycle, flow and throughput times by further eliminating activities which do not add any value for the customer. Lean manufacturing also involves people who work outside of the manufacturing process, such as in marketing and customer service. Lean manufacturing is particularly related to the operational model implemented in the post-war 1950s and 1960s by the Japanese automobile company Toyota called Toyota Production System (TPS), known in the USA as "The Toyota Way". Toyota's system was erected on the two pillars of just-in-time inventory management and automated quality control. The seven "wastes" ( in Japanese), first formulated by Toyota engineer Shigeo Shingo, are the waste of superfluous inventory of raw material and finished goods, the waste of overproduction (producing more than what is needed now), the waste of over-processing (processing or making parts beyond the standard expected by customer), the waste of transportation (unnecessary movement of people and goods inside the system), the waste of excess motion (mechanizing or automating before improving the method), the waste of waiting (inactive working periods due to job queues), and the waste of making defective products (reworking to fix avoidable defects in products and processes). The term Lean was coined in 1988 by American businessman John Krafcik in his article "Triumph of the Lean P
https://en.wikipedia.org/wiki/Sheffer%20sequence
In mathematics, a Sheffer sequence or poweroid is a polynomial sequence, i.e., a sequence of polynomials in which the index of each polynomial equals its degree, satisfying conditions related to the umbral calculus in combinatorics. They are named for Isador M. Sheffer. Definition Fix a polynomial sequence (pn). Define a linear operator Q on polynomials in x by This determines Q on all polynomials. The polynomial sequence pn is a Sheffer sequence if the linear operator Q just defined is shift-equivariant; such a Q is then a delta operator. Here, we define a linear operator Q on polynomials to be shift-equivariant if, whenever f(x) = g(x + a) = Ta g(x) is a "shift" of g(x), then (Qf)(x) = (Qg)(x + a); i.e., Q commutes with every shift operator: TaQ = QTa. Properties The set of all Sheffer sequences is a group under the operation of umbral composition of polynomial sequences, defined as follows. Suppose ( pn(x) : n = 0, 1, 2, 3, ... ) and ( qn(x) : n = 0, 1, 2, 3, ... ) are polynomial sequences, given by Then the umbral composition is the polynomial sequence whose nth term is (the subscript n appears in pn, since this is the n term of that sequence, but not in q, since this refers to the sequence as a whole rather than one of its terms). The identity element of this group is the standard monomial basis Two important subgroups are the group of Appell sequences, which are those sequences for which the operator Q is mere differentiation, and the group of sequences of binomial type, which are those that satisfy the identity A Sheffer sequence ( pn(x) : n = 0, 1, 2, ... ) is of binomial type if and only if both and The group of Appell sequences is abelian; the group of sequences of binomial type is not. The group of Appell sequences is a normal subgroup; the group of sequences of binomial type is not. The group of Sheffer sequences is a semidirect product of the group of Appell sequences and the group of sequences of binomial type. It follows that each coset o
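The two conditions alluded to (but not typeset) in the binomial-type characterization above are, in their standard form, that the sequence starts from the constant polynomial 1 and satisfies the binomial identity; a hedged LaTeX reconstruction, not copied from the stripped formulas, is:

```latex
p_0(x) = 1,
\qquad
p_n(x + y) \;=\; \sum_{k=0}^{n} \binom{n}{k}\, p_k(x)\, p_{n-k}(y)
\quad \text{for all } n \ge 0.
```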
https://en.wikipedia.org/wiki/Binary%20translation
In computing, binary translation is a form of binary recompilation where sequences of instructions are translated from a source instruction set to the target instruction set. In some cases such as instruction set simulation, the target instruction set may be the same as the source instruction set, providing testing and debugging features such as instruction trace, conditional breakpoints and hot spot detection. The two main types are static and dynamic binary translation. Translation can be done in hardware (for example, by circuits in a CPU) or in software (e.g. run-time engines, static recompiler, emulators). Motivation Binary translation is motivated by a lack of a binary for a target platform, the lack of source code to compile for the target platform, or otherwise difficulty in compiling the source for the target platform. Statically-recompiled binaries run potentially faster than their respective emulated binaries, as the emulation overhead is removed. This is similar to the difference in performance between interpreted and compiled programs in general. Static binary translation A translator using static binary translation aims to convert all of the code of an executable file into code that runs on the target architecture without having to run the code first, as is done in dynamic binary translation. This is very difficult to do correctly, since not all the code can be discovered by the translator. For example, some parts of the executable may be reachable only through indirect branches, whose value is known only at run-time. One such static binary translator uses universal superoptimizer peephole technology (developed by Sorav Bansal and Alex Aiken from Stanford University) to perform efficient translation between possibly many source and target pairs, with considerably low development costs and high performance of the target binary. In experiments of PowerPC-to-x86 translations, some binaries even outperformed native versions, but on average they ra
https://en.wikipedia.org/wiki/PDP-6
The PDP-6, short for Programmed Data Processor model 6, is a computer developed by Digital Equipment Corporation (DEC) during 1963 and first delivered in the summer of 1964. It was an expansion of DEC's existing 18-bit systems to use a 36-bit data word, which was at that time a common word size for large machines like IBM mainframes. The system was constructed using the same germanium transistor-based System Module layout as DEC's earlier machines, like the PDP-1 and PDP-4. The system was designed with real-time computing use in mind, not just batch processing as was typical for most mainframes. This made it popular in university settings and its support for the Lisp language made it particularly useful in artificial intelligence labs like Project MAC at MIT. It was also complex, expensive, and unreliable as a result of its use of so many early-model transistors. Only 23 were sold, at prices ranging from $120,000 to $300,000. The lasting influence of the PDP-6 was its re-implementation using modern silicon transistors and the newer Flip-Chip module packaging to produce the PDP-10. The instruction sets of the two machines are almost identical. The PDP-10 was less expensive and more reliable, and about 1500 were sold during its lifetime. History DEC's first products were not computers but a series of plug-in circuits known as Digital Laboratory Modules that performed digital logic. Users could wire the modules together to perform specific tasks. DEC soon introduced the PDP-1 which was built out of large numbers of these modules, now known as System Building Blocks or System Modules. The PDP-1 used an 18-bit word. Word lengths in the early 1960s were generally some multiple of six bits, as the character codes of the era were 6 bits long and it was also a useful size for storing binary coded decimal digits with an optional sign, as commonly used on IBM machines of the era. Large machines generally used a 36-bit word length, but there were many variations. The PDP-1
https://en.wikipedia.org/wiki/Load%20%28computing%29
In UNIX computing, the system load is a measure of the amount of computational work that a computer system performs. The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods. Unix-style load calculation All Unix and Unix-like systems generate a dimensionless metric of three "load average" numbers in the kernel. Users can easily query the current result from a Unix shell by running the uptime command: $ uptime 14:34:03 up 10:43, 4 users, load average: 0.06, 0.11, 0.09 The w and top commands show the same three load average numbers, as do a range of graphical user interface utilities. In operating systems based on the Linux kernel, this information can be easily accessed by reading the /proc/loadavg file. To explore this kind of information in depth, according to the Linux Filesystem Hierarchy Standard, architecture-dependent information is exposed in the file /proc/stat. An idle computer has a load number of 0 (the idle process is not counted). Each process using or waiting for CPU (the ready queue or run queue) increments the load number by 1. Each process that terminates decrements it by 1. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system. This, for example, includes processes blocking due to an NFS server failure or excessively slow media (e.g., USB 1.x storage devices). Such circumstances can result in an elevated load average, which does not reflect an actual increase in CPU use (but still gives an idea of how long users have to wait). Systems calculate the load average as the exponentially damped/weighted moving average o
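A small Python sketch of both halves of this: reading the kernel's three figures, and a toy version of the exponential damping (the sampling interval, window length and simple update rule here are illustrative, not the kernel's fixed-point implementation).

```python
import math
import os

one, five, fifteen = os.getloadavg()        # the same three numbers printed by uptime
print(f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}")

def damped_average(run_queue_samples, dt=5.0, window=60.0):
    """Exponentially damped moving average of sampled run-queue lengths."""
    decay = math.exp(-dt / window)          # how much weight the old value keeps each tick
    load = 0.0
    for n in run_queue_samples:
        load = load * decay + n * (1.0 - decay)
    return load

print(damped_average([1, 1, 1, 0, 0, 0, 2, 2]))   # a smoothed estimate of recent demand
```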
https://en.wikipedia.org/wiki/Risk%20assessment
Risk assessment determines possible mishaps, their likelihood and consequences, and the tolerances for such events. The results of this process may be expressed in a quantitative or qualitative fashion. Risk assessment is an inherent part of a broader risk management strategy to help reduce any potential risk-related consequences. More precisely, risk assessment identifies and analyses potential (future) events that may negatively impact individuals, assets, and/or the environment (i.e. hazard analysis). It also makes judgments "on the tolerability of the risk on the basis of a risk analysis" while considering influencing factors (i.e. risk evaluation). Categories Individual risk assessment Risk assessments can be done in individual cases, including in patient and physician interactions. In the narrow sense chemical risk assessment is the assessment of a health risk in response to environmental exposures. The ways statistics are expressed and communicated to an individual, both through words and numbers impact his or her interpretation of benefit and harm. For example, a fatality rate may be interpreted as less benign than the corresponding survival rate. A systematic review of patients and doctors from 2017 found that overstatement of benefits and understatement of risks occurred more often than the alternative. A 2017 systematic review from the Cochrane collaboration suggested "well-documented decision aids" are helpful in reducing effects of such tendencies or biases. An individual´s own risk perception may be affected by psychological, ideological, religious or otherwise subjective factors, which impact rationality of the process. Individuals tend to be less rational when risks and exposures concern themselves as opposed to others. There is also a tendency to underestimate risks that are voluntary or where the individual sees themselves as being in control, such as smoking. Systems risk assessment Risk assessment can also be made on a much larger systems the
https://en.wikipedia.org/wiki/Binary%20code
A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often "0" and "1" from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc. For example, a binary string of eight bits (which is also called a byte) can represent any of 256 possible values and can, therefore, represent a wide variety of different items. In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal or hexadecimal notation. There are many character sets and many character encodings for them. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower case a, if represented by the bit string 01100001 (as it is in the standard ASCII code), can also be represented as the decimal number "97". History of binary codes The modern binary number system, the basis for binary code, was invented by Gottfried Leibniz in 1689 and appears in his article Explication de l'Arithmétique Binaire. The full title is translated into English as the "Explanation of the binary arithmetic", which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz encountered the I Ching through French Jesuit Joachim Bouvet and noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sor
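The fixed-width ASCII example from the paragraph above, reproduced in a few lines of Python:

```python
ch = "a"
code_point = ord(ch)                  # 97: the ASCII code of lower-case "a"
bits = format(code_point, "08b")      # the same value as a fixed-width 8-bit string

print(code_point, bits)               # 97 01100001
print(int(bits, 2))                   # reading the bit string back as a binary number: 97
print(chr(int("01100001", 2)))        # and back to the character: a
```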
https://en.wikipedia.org/wiki/Copy%20protection
Copy protection, also known as content protection, copy prevention and copy restriction, describes measures to enforce copyright by preventing the reproduction of software, films, music, and other media. Copy protection is most commonly found on videotapes, DVDs, Blu-ray discs, HD-DVDs, computer software discs, video game discs and cartridges, audio CDs and some VCDs. Some methods of copy protection have also led to criticism because it caused inconvenience for paying consumers or secretly installed additional or unwanted software to detect copying activities on the consumer's computer. Making copy protection effective while protecting consumer rights remains a problem with media publication. Terminology Media corporations have always used the term copy protection, but critics argue that the term tends to sway the public into identifying with the publishers, who favor restriction technologies, rather than with the users. Copy prevention and copy control may be more neutral terms. "Copy protection" is a misnomer for some systems, because any number of copies can be made from an original and all of these copies will work, but only in one computer, or only with one dongle, or only with another device that cannot be easily copied. The term is also often related to, and confused with, the concept of digital restrictions management. Digital restrictions management is a more general term because it includes all sorts of management of works, including copy restrictions. Copy restriction may include measures that are not digital. A more appropriate term may be "technological protection measures" (TPMs), which is often defined as the use of technological tools in order to restrict the use or access to a work. Business rationale Unauthorized copying and distribution accounted for $2.4 billion per year in lost revenue in the United States alone in 1990, and is assumed to be causing impact on revenues in the music and the video game industry, leading to proposal of stri
https://en.wikipedia.org/wiki/Population%20genetics
Population genetics is a subfield of genetics that deals with genetic differences within and among populations, and is a part of evolutionary biology. Studies in this branch of biology examine such phenomena as adaptation, speciation, and population structure. Population genetics was a vital ingredient in the emergence of the modern evolutionary synthesis. Its primary founders were Sewall Wright, J. B. S. Haldane and Ronald Fisher, who also laid the foundations for the related discipline of quantitative genetics. Traditionally a highly mathematical discipline, modern population genetics encompasses theoretical, laboratory, and field work. Population genetic models are used both for statistical inference from DNA sequence data and for proof/disproof of concept. What sets population genetics apart from newer, more phenotypic approaches to modelling evolution, such as evolutionary game theory and adaptive dynamics, is its emphasis on such genetic phenomena as dominance, epistasis, the degree to which genetic recombination breaks linkage disequilibrium, and the random phenomena of mutation and genetic drift. This makes it appropriate for comparison to population genomics data. History Population genetics began as a reconciliation of Mendelian inheritance and biostatistics models. Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural or sexual selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. According to this principle, the frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift. The next key step was the work of the British biologist and statistician Ronald Fi
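As a small worked illustration of the Hardy–Weinberg principle mentioned above (the allele frequency 0.3 is an arbitrary example): for a biallelic locus with allele frequencies p and q = 1 - p, the expected genotype frequencies are p², 2pq and q², and the allele frequency is unchanged in the next generation absent selection, mutation, migration and drift.

```python
def hardy_weinberg(p):
    q = 1.0 - p
    aa, ab, bb = p * p, 2 * p * q, q * q       # expected genotype frequencies AA, Aa, aa
    p_next = aa + ab / 2.0                     # allele frequency recovered from the offspring
    return (aa, ab, bb), p_next

genotypes, p_next = hardy_weinberg(0.3)
print(genotypes)   # (0.09, 0.42, 0.49)
print(p_next)      # 0.3 -- constant, as the principle predicts
```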
https://en.wikipedia.org/wiki/Codon%20usage%20bias
Codon usage bias refers to differences in the frequency of occurrence of synonymous codons in coding DNA. A codon is a series of three nucleotides (a triplet) that encodes a specific amino acid residue in a polypeptide chain or for the termination of translation (stop codons). There are 64 different codons (61 codons encoding for amino acids and 3 stop codons) but only 20 different translated amino acids. The overabundance in the number of codons allows many amino acids to be encoded by more than one codon. Because of such redundancy it is said that the genetic code is degenerate. The genetic codes of different organisms are often biased towards using one of the several codons that encode the same amino acid over the others—that is, a greater frequency of one will be found than expected by chance. How such biases arise is a much debated area of molecular evolution. Codon usage tables detailing genomic codon usage bias for organisms in GenBank and RefSeq can be found in the HIVE-Codon Usage Tables (HIVE-CUTs) project, which contains two distinct databases, CoCoPUTs and TissueCoCoPUTs. Together, these two databases provide comprehensive, up-to-date codon, codon pair and dinucleotide usage statistics for all organisms with available sequence information and 52 human tissues, respectively. It is generally acknowledged that codon biases reflect the contributions of 3 main factors, GC-biased gene conversion that favors GC-ending codons in diploid organisms, arrival biases reflecting mutational preferences (typically favoring AT-ending codons), and natural selection for codons that are favorable in regard to translation. Optimal codons in fast-growing microorganisms, like Escherichia coli or Saccharomyces cerevisiae (baker's yeast), reflect the composition of their respective genomic transfer RNA (tRNA) pool. It is thought that optimal codons help to achieve faster translation rates and high accuracy. As a result of these factors, translational selection is expected t
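A toy Python sketch of what a codon usage count looks like in practice (the coding sequence is invented for the example): tally the codons of a reading frame and compare the synonymous codons for one amino acid.

```python
from collections import Counter

LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}   # six synonymous codons

def codon_counts(cds):
    """Count codons of an in-frame coding sequence."""
    usable = len(cds) - len(cds) % 3
    return Counter(cds[i:i + 3] for i in range(0, usable, 3))

cds = "ATGCTGCTGTTACTGCTCTAA"        # hypothetical CDS: Met, five leucines, stop
counts = codon_counts(cds)

leu = {c: counts[c] for c in LEUCINE_CODONS if counts[c]}
total = sum(leu.values())
print({c: round(n / total, 2) for c, n in leu.items()})   # e.g. {'CTG': 0.6, 'TTA': 0.2, 'CTC': 0.2}
```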
https://en.wikipedia.org/wiki/Heap%20overflow
A heap overflow, heap overrun, or heap smashing is a type of buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner to that of stack-based overflows. Memory on the heap is dynamically allocated at runtime and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc metadata) and uses the resulting pointer exchange to overwrite a program function pointer. For example, on older versions of Linux, two buffers allocated next to each other on the heap could result in the first buffer overwriting the second buffer's metadata. By setting the in-use bit to zero of the second buffer and setting the length to a small negative value which allows null bytes to be copied, when the program calls free() on the first buffer it will attempt to merge these two buffers into a single buffer. When this happens, the buffer that is assumed to be freed will be expected to hold two pointers FD and BK in the first 8 bytes of the formerly allocated buffer. BK gets written into FD and can be used to overwrite a pointer. Consequences An accidental overflow may result in data corruption or unexpected behavior by any process that accesses the affected memory area. On operating systems without memory protection, this could be any process on the system. For example, a Microsoft JPEG GDI+ buffer overflow vulnerability could allow remote execution of code on the affected machine. iOS jailbreaking often uses heap overflows to gain arbitrary code execution. Detection and prevention As with buffer overflows there are primarily three ways to protect against heap overflows. Several modern operating systems such as Windows and Linux provide some implementation of all three. Prevent execution of the payload by separating the
https://en.wikipedia.org/wiki/Golden%20spiral
In geometry, a golden spiral is a logarithmic spiral whose growth factor is , the golden ratio. That is, a golden spiral gets wider (or further from its origin) by a factor of for every quarter turn it makes. Approximations of the golden spiral There are several comparable spirals that approximate, but do not exactly equal, a golden spiral. For example, a golden spiral can be approximated by first starting with a rectangle for which the ratio between its length and width is the golden ratio. This rectangle can then be partitioned into a square and a similar rectangle and this rectangle can then be split in the same way. After continuing this process for an arbitrary number of steps, the result will be an almost complete partitioning of the rectangle into squares. The corners of these squares can be connected by quarter-circles. The result, though not a true logarithmic spiral, closely approximates a golden spiral. Another approximation is a Fibonacci spiral, which is constructed slightly differently. A Fibonacci spiral starts with a rectangle partitioned into 2 squares. In each step, a square the length of the rectangle's longest side is added to the rectangle. Since the ratio between consecutive Fibonacci numbers approaches the golden ratio as the Fibonacci numbers approach infinity, so too does this spiral get more similar to the previous approximation the more squares are added, as illustrated by the image. Spirals in nature Approximate logarithmic spirals can occur in nature, for example the arms of spiral galaxies – golden spirals are one special case of these logarithmic spirals, although there is no evidence that there is any general tendency towards this case appearing. Phyllotaxis is connected with the golden ratio because it involves successive leaves or petals being separated by the golden angle; it also results in the emergence of spirals, although again none of them are (necessarily) golden spirals. It is sometimes stated that spiral galaxies and
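In polar coordinates a golden spiral can be written as r = a·φ^(2θ/π), so the radius grows by a factor of φ per quarter turn; a short numerical check of that growth factor (the scale factor a = 1 is arbitrary):

```python
import math

phi = (1 + math.sqrt(5)) / 2

def golden_spiral_radius(theta, a=1.0):
    return a * phi ** (2 * theta / math.pi)    # growth factor phi every quarter turn

radii = [golden_spiral_radius(k * math.pi / 2) for k in range(5)]
print([round(r2 / r1, 6) for r1, r2 in zip(radii, radii[1:])])
# [1.618034, 1.618034, 1.618034, 1.618034] -- each quarter turn multiplies r by phi
```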
https://en.wikipedia.org/wiki/Golden%20rectangle
In geometry, a golden rectangle is a rectangle whose side lengths are in the golden ratio, , which is (the Greek letter phi), where is approximately 1.618. Golden rectangles exhibit a special form of self-similarity: All rectangles created by adding or removing a square from an end are golden rectangles as well. Construction A golden rectangle can be constructed with only a straightedge and compass in four steps: Drawing a square Drawing a line from the midpoint of one side of the square to an opposite corner Using that line as the radius to draw an arc that defines the height of the rectangle Completing the golden rectangle A distinctive feature of this shape is that when a square section is added—or removed—the product is another golden rectangle, having the same aspect ratio as the first. Square addition or removal can be repeated infinitely, in which case corresponding corners of the squares form an infinite sequence of points on the golden spiral, the unique logarithmic spiral with this property. Diagonal lines drawn between the first two orders of embedded golden rectangles will define the intersection point of the diagonals of all the embedded golden rectangles; Clifford A. Pickover referred to this point as "the Eye of God". History The proportions of the golden rectangle have been observed as early as the Babylonian Tablet of Shamash (c. 888–855 BC), though Mario Livio calls any knowledge of the golden ratio before the Ancient Greeks "doubtful". According to Livio, since the publication of Luca Pacioli's Divina proportione in 1509, "the Golden Ratio started to become available to artists in theoretical treatises that were not overly mathematical, that they could actually use." The 1927 Villa Stein designed by Le Corbusier, some of whose architecture utilizes the golden ratio, features dimensions that closely approximate golden rectangles. Relation to regular polygons and polyhedra Euclid gives an alternative construction of the golden rectang
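A two-line numerical check of the self-similarity property described above: cutting a unit square off a 1 × φ golden rectangle leaves a rectangle with the same aspect ratio.

```python
import math

phi = (1 + math.sqrt(5)) / 2          # the golden ratio, about 1.618

print(phi / 1.0)                      # aspect ratio of the original rectangle
print(1.0 / (phi - 1.0))              # aspect ratio of what remains after removing a 1 x 1 square
# Both print 1.618033988749895, since phi - 1 = 1/phi.
```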
https://en.wikipedia.org/wiki/Network%20segment
A network segment is a portion of a computer network. The nature and extent of a segment depends on the nature of the network and the device or devices used to interconnect end stations. Ethernet According to the defining IEEE 802.3 standards for Ethernet, a network segment is an electrical connection between networked devices using a shared medium. In the original 10BASE5 and 10BASE2 Ethernet varieties, a segment would therefore correspond to a single coax cable and all devices tapped into it. At this point in the evolution of Ethernet, multiple network segments could be connected with repeaters (in accordance with the 5-4-3 rule for 10 Mbit Ethernet) to form a larger collision domain. With twisted-pair Ethernet, electrical segments can be joined together using repeaters or repeater hubs as can other varieties of Ethernet. This corresponds to the extent of an OSI layer 1 network and is equivalent to the collision domain. The 5-4-3 rule applies to this collision domain. Using switches or bridges, multiple layer-1 segments can be combined to a common layer-2 segment, i.e. all nodes can communicate with each other through MAC addressing or broadcasts. A layer-2 segment is equivalent to a broadcast domain. Traffic within a layer-2 segment can be separated into virtually distinct partitions by using VLANs. Each VLAN forms its own logical layer-2 segment. IP A layer-3 segment in an IP network is called a subnetwork, formed by all nodes sharing the same network prefix as defined by their IP addresses and the network mask. Communication between layer-3 subnets requires a router. Hosts on a subnet communicate directly using the layer-2 segment that connects them. Most often a subnetwork corresponds exactly with the underlying layer-2 segment but it is also possible to run multiple subnets on a single layer-2 segment. References Ethernet Network architecture
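A minimal Python sketch of the layer-3 rule stated above, using the standard-library ipaddress module (the addresses and prefix are invented): hosts that share the network prefix are on the same subnet; traffic to anything else needs a router.

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")     # network prefix plus mask

host_a = ipaddress.ip_address("192.168.10.17")
host_b = ipaddress.ip_address("192.168.10.200")
host_c = ipaddress.ip_address("192.168.11.5")

print(host_a in subnet, host_b in subnet)   # True True  -> same layer-3 segment, direct delivery
print(host_c in subnet)                     # False      -> reachable only through a router
```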
https://en.wikipedia.org/wiki/Egg%20cell
The egg cell, or ovum (: ova), is the female reproductive cell, or gamete, in most anisogamous organisms (organisms that reproduce sexually with a larger, female gamete and a smaller, male one). The term is used when the female gamete is not capable of movement (non-motile). If the male gamete (sperm) is capable of movement, the type of sexual reproduction is also classified as oogamous. A nonmotile female gamete formed in the oogonium of some algae, fungi, oomycetes, or bryophytes is an oosphere. When fertilized the oosphere becomes the oospore. When egg and sperm fuse during fertilisation, a diploid cell (the zygote) is formed, which rapidly grows into a new organism. History While the non-mammalian animal egg was obvious, the doctrine ex ovo omne vivum ("every living [animal comes from] an egg"), associated with William Harvey (1578–1657), was a rejection of spontaneous generation and preformationism as well as a bold assumption that mammals also reproduced via eggs. Karl Ernst von Baer discovered the mammalian ovum in 1827. The fusion of spermatozoa with ova (of a starfish) was observed by Oskar Hertwig in 1876. Animals In animals, egg cells are also known as ova (singular ovum, from the Latin word meaning 'egg'). The term ovule in animals is used for the young ovum of an animal. In vertebrates, ova are produced by female gonads (sex glands) called ovaries. A number of ova are present at birth in mammals and mature via oogenesis. Studies performed on humans, dogs, and cats in the 1870s suggested that the production of oocytes (immature egg cells) stops at or shortly after birth. A review of reports from 1900 to 1950 by zoologist Solomon Zuckerman cemented the belief that females have a finite number of oocytes that are formed before they are born. This dogma has been challenged by a number of studies since 2004. Several studies suggest that ovarian stem cells exist within the mammalian ovary. Whether or not mature mammals can actually create new egg cells
https://en.wikipedia.org/wiki/Levinson%20recursion
Levinson recursion or Levinson–Durbin recursion is a procedure in linear algebra to recursively calculate the solution to an equation involving a Toeplitz matrix. The algorithm runs in time, which is a strong improvement over Gauss–Jordan elimination, which runs in Θ(n3). The Levinson–Durbin algorithm was proposed first by Norman Levinson in 1947, improved by James Durbin in 1960, and subsequently improved to and then multiplications by W. F. Trench and S. Zohar, respectively. Other methods to process data include Schur decomposition and Cholesky decomposition. In comparison to these, Levinson recursion (particularly split Levinson recursion) tends to be faster computationally, but more sensitive to computational inaccuracies like round-off errors. The Bareiss algorithm for Toeplitz matrices (not to be confused with the general Bareiss algorithm) runs about as fast as Levinson recursion, but it uses space, whereas Levinson recursion uses only O(n) space. The Bareiss algorithm, though, is numerically stable, whereas Levinson recursion is at best only weakly stable (i.e. it exhibits numerical stability for well-conditioned linear systems). Newer algorithms, called asymptotically fast or sometimes superfast Toeplitz algorithms, can solve in for various p (e.g. p = 2, p = 3 ). Levinson recursion remains popular for several reasons; for one, it is relatively easy to understand in comparison; for another, it can be faster than a superfast algorithm for small n (usually n < 256). Derivation Background Matrix equations follow the form The Levinson–Durbin algorithm may be used for any such equation, as long as M is a known Toeplitz matrix with a nonzero main diagonal. Here is a known vector, and is an unknown vector of numbers xi yet to be determined. For the sake of this article, êi is a vector made up entirely of zeroes, except for its ith place, which holds the value one. Its length will be implicitly determined by the surrounding context. The term N r
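A compact sketch of the recursion for the symmetric Toeplitz case (a simplified illustration assuming the leading submatrices are nonsingular, not a reproduction of Levinson's or Durbin's original formulation): it runs in quadratic time and keeps only linear extra state.

```python
def levinson_solve(t, y):
    """Solve T x = y where T is symmetric Toeplitz with T[i][j] = t[abs(i - j)]."""
    n = len(y)
    x = [y[0] / t[0]]                 # solution of the 1x1 leading subsystem
    f = [1.0 / t[0]]                  # forward vector: T_k f = (1, 0, ..., 0)
    for k in range(1, n):
        eps_f = sum(t[k - i] * f[i] for i in range(k))   # error from appending a zero to f
        eps_x = sum(t[k - i] * x[i] for i in range(k))   # error from appending a zero to x
        denom = 1.0 - eps_f * eps_f
        alpha, beta = 1.0 / denom, -eps_f / denom
        f = [alpha * fi + beta * bi for fi, bi in zip(f + [0.0], [0.0] + f[::-1])]
        b = f[::-1]                   # backward vector: T_k b = (0, ..., 0, 1)
        x = [xi + (y[k] - eps_x) * bi for xi, bi in zip(x + [0.0], b)]
    return x

# T = [[4,2,1],[2,4,2],[1,2,4]], right-hand side [1,2,3]:
print(levinson_solve([4.0, 2.0, 1.0], [1.0, 2.0, 3.0]))   # ~[0.0, 0.1667, 0.6667]
```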
https://en.wikipedia.org/wiki/In-place%20algorithm
In computer science, an in-place algorithm is an algorithm that operates directly on the input data structure without requiring extra space proportional to the input size. In other words, it modifies the input in place, without creating a separate copy of the data structure. An algorithm which is not in-place is sometimes called not-in-place or out-of-place. In-place can have slightly different meanings. In its strictest form, the algorithm can only have a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited as simply having an index to a length array requires bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input but may require a small though nonconstant extra space for its operation. Usually, this space is , though sometimes anything in is allowed. Note that space complexity also has varied choices in whether or not to count the index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra factor compared to an analysis that ignores the length of indices and pointers. An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When writing the output to write-only memory or a stream, it may be more appropriate to only consider the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is write-only). Examples Given an array of items, suppose we want an array that holds the same elements in reversed order and to dispose of the original. One seemingly simp
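As a sketch of the reversal example being set up above, here is the in-place version: swap the ends toward the middle, so only a constant amount of extra space (two indices and a temporary) is used.

```python
def reverse_in_place(a):
    i, j = 0, len(a) - 1
    while i < j:
        a[i], a[j] = a[j], a[i]   # swap; no second array is allocated
        i += 1
        j -= 1

items = [1, 2, 3, 4, 5]
reverse_in_place(items)
print(items)    # [5, 4, 3, 2, 1] -- the original array was overwritten with the output
```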
https://en.wikipedia.org/wiki/Simpson%27s%20rule
In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761). The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads ∫_a^b f(x) dx ≈ ((b − a)/6) [f(a) + 4f((a + b)/2) + f(b)]. In German and some other languages, it is named after Johannes Kepler, who derived it in 1615 after seeing it used for wine barrels (barrel rule, Keplersche Fassregel). The approximate equality in the rule becomes exact if f is a polynomial up to and including 3rd degree. If the 1/3 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the composite Simpson's 1/3 rule. Points inside the integration range are given alternating weights 4/3 and 2/3. Simpson's 3/8 rule, also called Simpson's second rule, requires one more function evaluation inside the integration range and gives lower error bounds, but does not improve on the order of the error. If the 3/8 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the composite Simpson's 3/8 rule. Simpson's 1/3 and 3/8 rules are two special cases of closed Newton–Cotes formulas. In naval architecture and ship stability estimation, there also exists Simpson's third rule, which has no special importance in general numerical analysis; see Simpson's rules (ship stability). Simpson's 1/3 rule Simpson's 1/3 rule, also simply called Simpson's rule, is a method for numerical integration proposed by Thomas Simpson. It is based upon a quadratic interpolation. Simpson's 1/3 rule is as follows: ∫_a^b f(x) dx ≈ (h/3) [f(a) + 4f(a + h) + f(b)], where h = (b − a)/2 is the step size. The error in approximating an integral by Simpson's rule for n = 2 is −(1/90) h⁵ f⁽⁴⁾(ξ), where ξ (the Greek letter xi) is some number between a and b. The error is asymptotically proportional to (b − a)⁵. However, the above derivations suggest an error proportional to (b − a)⁴. Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval [a, b]. Since the error term is proportional to the fourth derivative of f at ξ, this shows t
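A compact Python sketch of the composite 1/3 rule described above (the function name composite_simpson and the test integrands are illustrative); it uses an even number of subintervals n and the alternating interior weights 4 and 2, scaled by h/3:

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule with n subintervals (n must be even)."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 == 1 else 2   # interior points alternate 4, 2
        total += weight * f(a + i * h)
    return total * h / 3

# Exact for cubics: the integral of x^3 over [0, 2] is 4.
print(composite_simpson(lambda x: x**3, 0.0, 2.0, 2))              # 4.0
# For smooth integrands the error shrinks roughly like h^4.
print(abs(composite_simpson(math.sin, 0.0, math.pi, 16) - 2.0))    # small
```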
https://en.wikipedia.org/wiki/Standard%20streams
In computer programming, standard streams are interconnected input and output communication channels between a computer program and its environment when it begins execution. The three input/output (I/O) connections are called standard input (stdin), standard output (stdout) and standard error (stderr). Originally I/O happened via a physically connected system console (input via keyboard, output via monitor), but standard streams abstract this. When a command is executed via an interactive shell, the streams are typically connected to the text terminal on which the shell is running, but can be changed with redirection or a pipeline. More generally, a child process inherits the standard streams of its parent process. Application Users generally know standard streams as input and output channels that handle data coming from an input device, or that write data from the application. The data may be text with any encoding, or binary data. In many modern systems, the standard error stream of a program is redirected into a log file, typically for error analysis purposes. Streams may be used to chain applications, meaning that the output stream of one program can be redirected to be the input stream to another application. In many operating systems this is expressed by listing the application names, separated by the vertical bar character, for this reason often called the pipeline character. A well-known example is the use of a pagination application, such as more, providing the user control over the display of the output stream on the display. Background In most operating systems predating Unix, programs had to explicitly connect to the appropriate input and output devices. OS-specific intricacies caused this to be a tedious programming task. On many systems it was necessary to obtain control of environment settings, access a local file table, determine the intended data set, and handle hardware correctly in the case of a punch card reader, magnetic tape drive, disk dr
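As an illustration (the script and file names below are made up, not from the article), a small Python filter uses all three streams and can sit in a pipeline such as cat numbers.txt | python double.py | more:

```python
import sys

def main():
    """Doubles each integer read from standard input, one per line."""
    for line in sys.stdin:               # stdin: inherited from the parent process or pipeline
        line = line.strip()
        if not line:
            continue
        try:
            print(2 * int(line))         # stdout: normal output, can be redirected or piped onward
        except ValueError:
            # stderr: kept separate from the data stream, e.g. for log files
            print(f"not a number: {line!r}", file=sys.stderr)

if __name__ == "__main__":
    main()
```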
https://en.wikipedia.org/wiki/Set-builder%20notation
In set theory and its applications to logic, mathematics, and computer science, set-builder notation is a mathematical notation for describing a set by enumerating its elements, or stating the properties that its members must satisfy. Defining sets by properties is also known as set comprehension, set abstraction or as defining a set's intension. Sets defined by enumeration A set can be described directly by enumerating all of its elements between curly brackets, as in the following two examples: {3, 7, 15, 31} is the set containing the four numbers 3, 7, 15, and 31, and nothing else. {a, b, c} is the set containing a, b, and c, and nothing else (there is no order among the elements of a set). This is sometimes called the "roster method" for specifying a set. When it is desired to denote a set that contains elements from a regular sequence, an ellipsis notation may be employed, as shown in the next examples: {1, 2, 3, ..., 100} is the set of integers between 1 and 100 inclusive. {1, 2, 3, ...} is the set of natural numbers. {..., −2, −1, 0, 1, 2, ...} = {0, 1, −1, 2, −2, ...} is the set of all integers. There is no order among the elements of a set (this explains and validates the equality of the last example), but with the ellipses notation, we use an ordered sequence before (or after) the ellipsis as a convenient notational vehicle for explaining which elements are in a set. The first few elements of the sequence are shown, then the ellipses indicate that the simplest interpretation should be applied for continuing the sequence. Should no terminating value appear to the right of the ellipses, then the sequence is considered to be unbounded. In general, {1, ..., n} denotes the set of all natural numbers i such that 1 ≤ i ≤ n. Another notation for {1, ..., n} is the bracket notation [n]. A subtle special case is n = 0, in which [0] = {1, ..., 0} is equal to the empty set ∅. Similarly, {a1, ..., an} denotes the set of all ai for 1 ≤ i ≤ n. In each preceding example, each set is described by enumerating its elements. Not all sets can be described in this way, or if they can, their enumeration may be too long or too complicated to be useful.
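The "set comprehension" terminology carries over directly to programming languages. A small Python illustration (chosen here as an analogy, not taken from the article) of the roster method, the ellipsis-style range, and a property-based comprehension:

```python
# Roster method: list every element explicitly.
roster = {3, 7, 15, 31}

# Analogue of {1, 2, 3, ..., 100}: the integers from 1 to 100 inclusive.
one_to_hundred = set(range(1, 101))

# Set comprehension: members are defined by a property they must satisfy.
even_squares = {n * n for n in range(1, 101) if n % 2 == 0}

print(len(one_to_hundred), max(even_squares))   # 100 10000
```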
https://en.wikipedia.org/wiki/Index%20of%20anatomy%20articles
Articles related to anatomy include: A abdomen abdominal aorta abducens nerve abducens nucleus abducent abducent nerve abduction accessory bone accessory cuneate nucleus accessory nerve accessory olivary nucleus accommodation reflex acetabulum Achilles tendon acoustic nerve acromion adenohypophysis adenoids adipose aditus aditus ad antrum adrenal gland adrenergic afferent neuron agger nasi agnosia agonist alar ligament albuginea alimentary allantois allocortex alpha motor neurons alveolar artery alveolar process alveolus alveus of the hippocampus amatory anatomy amaurosis Ammon's horn ampulla Ampulla of Vater amygdala amygdalofugal pathway amygdaloid amylacea anaesthesia analgesia analogous anastomosis anatomical pathology anatomical position anatomical snuffbox anatomical terms of location anatomical terms of motion anatomy Anatomy of the human heart anconeus angiography angiology angular gyrus anhidrosis animal morphology anisocoria ankle ankle reflex annular ligament annulus of Zinn anomaly anomic aphasia anosognosia ansa cervicalis ansa lenticularis anterior cerebral artery Anterior chamber of eyeball anterior choroidal artery anterior commissure anterior communicating artery anterior corticospinal tract anterior cranial fossa anterior cruciate ligament anterior ethmoidal foramen anterior ethmoidal nerve anterior funiculus anterior horn cells anterior horn of the lateral ventricle anterior hypothalamus anterior inferior cerebellar artery anterior limb of the internal capsule anterior lobe of cerebellum anterior nucleus of the thalamus anterior perforated substance anterior pituitary anterior root anterior spinal artery anterior spinocerebellar tract anterior superior alveolar artery anterior tibial artery anterior vertebral muscle anterior white commissure anterolateral region of the neck anterolateral system antidromic antihelix antrum anulus fibrosus anulus tendineus anus aorta aortic body aponeurosis apophysis appendage appendicular skeleton appendix apros
https://en.wikipedia.org/wiki/The%20Invincible
The Invincible (Polish: Niezwyciężony) is a hard science fiction novel by Polish writer Stanisław Lem, published in 1964. The Invincible originally appeared as the title story in Lem's collection Niezwyciężony i inne opowiadania ("The Invincible and Other Stories"). A translation into German was published in 1967; an English translation by Wendayne Ackerman, based on the German one, was published in 1973. A direct translation into English from Polish, by Bill Johnston, was published in 2006. It was one of the first novels to explore the ideas of microrobots, smartdust, artificial swarm intelligence, and "necroevolution" (a term suggested by Lem for the evolution of non-living matter); earlier, a concept similar to nanotechnology, called "micromechanical devices", had been described in Lem's 1959 novel Eden (see "Nanotechnology in fiction" for still earlier examples; cf. "Doktryna nieingerencji", in Marek Oramus, Bogowie Lema). Plot summary A heavily armed interstellar spacecraft called Invincible lands on the planet Regis III, which seems uninhabited and bleak, to investigate the loss of her sister ship, Condor. During the investigation, the crew finds evidence of a form of quasi-life, born through evolution of autonomous, self-replicating machines, apparently left behind by an alien civilization's ship which landed on Regis III a very long time ago. The protagonists come to speculate that a kind of evolution must have taken place under the selection pressures of "robot wars", with the only surviving form being swarms of minuscule, insect-like micromachines. Individually, or in small groups, they are quite harmless and capable of only very simple behavior. When threatened, they can assemble into huge clouds, travel at a high speed, and even climb to the top of the troposphere. These swarms display complex behavior arising from self-organization and can incapacitate any intelligent threat by a powerful surge of electromagnetic interference. Condor's crew suffered a
https://en.wikipedia.org/wiki/Wallace%20Line
The Wallace line or Wallace's line is a faunal boundary line drawn in 1859 by the British naturalist Alfred Russel Wallace and named by the English biologist T.H. Huxley that separates the biogeographical realms of Asia and 'Wallacea', a transitional zone between Asia and Australia also called the Malay Archipelago and the Indo-Australian Archipelago. To the west of the line are found organisms related to Asiatic species; to the east, a mixture of species of Asian and Australian origins is present. Wallace noticed this clear division in both land mammals and birds during his travels through the East Indies in the 19th century. The line runs through Indonesia, passing through the Makassar Strait between Borneo and Sulawesi (Celebes), and through the Lombok Strait between Bali and Lombok, where the distance between the islands is strikingly small, only about 35 kilometers (22 mi), yet enough for a contrast in the species present on each island. The complex biogeography of the Indo-Australian Archipelago is a result of its location at the merging point of four major tectonic plates and other semi-isolated microplates in combination with ancient sea levels. These caused the isolation of different taxonomic groups on islands at present relatively close to each other. Wallace's line is one of the many boundaries drawn by naturalists and biologists since the mid-1800s intended to delineate constraints on the distribution of the fauna and flora of the archipelago. Historical context One of the earliest descriptions of the biodiversity in the Indo-Australian Archipelago dates back to 1521 when Venetian explorer Pigafetta recorded the biological contrasts between the Philippines and the Maluku Islands (Spice Islands) (on opposite sides of the Wallace's Line) during the continuation of the voyage of Ferdinand Magellan, after Magellan had been killed on Mactan. Later on, the English navigator G.W. Earl published his observations on faunal differences between the islands in the Indo-Australian archipelago. In
https://en.wikipedia.org/wiki/Instruction%20pipelining
In computer engineering, instruction pipelining is a technique for implementing instruction-level parallelism within a single processor. Pipelining attempts to keep every part of the processor busy with some instruction by dividing incoming instructions into a series of sequential steps (the eponymous "pipeline") performed by different processor units with different parts of instructions processed in parallel. Concept and motivation In a pipelined computer, instructions flow through the central processing unit (CPU) in stages. For example, it might have one stage for each step of the von Neumann cycle: Fetch the instruction, fetch the operands, do the instruction, write the results. A pipelined computer usually has "pipeline registers" after each stage. These store information from the instruction and calculations so that the logic gates of the next stage can do the next step. This arrangement lets the CPU complete an instruction on each clock cycle. It is common for even-numbered stages to operate on one edge of the square-wave clock, while odd-numbered stages operate on the other edge. This allows more CPU throughput than a multicycle computer at a given clock rate, but may increase latency due to the added overhead of the pipelining process itself. Also, even though the electronic logic has a fixed maximum speed, a pipelined computer can be made faster or slower by varying the number of stages in the pipeline. With more stages, each stage does less work, and so the stage has fewer delays from the logic gates and could run at a higher clock rate. A pipelined model of computer is often the most economical, when cost is measured as logic gates per instruction per second. At each instant, an instruction is in only one pipeline stage, and on average, a pipeline stage is less costly than a multicycle computer. Also, when made well, most of the pipelined computer's logic is in use most of the time. In contrast, out of order computers usually have large amounts of idl
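The throughput argument can be made concrete with a toy model (entirely illustrative, not from the article): a four-stage pipeline that, once full, retires one instruction per clock cycle, compared with a multicycle machine that needs four cycles per instruction.

```python
STAGES = ["fetch instruction", "fetch operands", "execute", "write results"]

def pipelined_cycles(num_instructions, stages=len(STAGES)):
    """Cycles needed when a new instruction can enter the pipeline every cycle."""
    # The first instruction takes `stages` cycles (latency); each further
    # instruction completes one cycle later (throughput of one per cycle).
    return stages + (num_instructions - 1)

def multicycle_cycles(num_instructions, stages=len(STAGES)):
    """Cycles needed when each instruction must finish before the next starts."""
    return stages * num_instructions

for n in (1, 4, 100):
    print(n, pipelined_cycles(n), multicycle_cycles(n))
# 1 4 4      -> identical latency for a single instruction
# 4 7 16     -> pipelining wins once several instructions overlap
# 100 103 400
```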
https://en.wikipedia.org/wiki/Vaio
VAIO is a brand of personal computers and consumer electronics, currently developed by Japanese manufacturer VAIO Corporation, headquartered in Azumino, Nagano Prefecture. VAIO was originally a brand of Sony, introduced in 1996. In February 2014, Sony created VAIO Corporation Inc., a special purpose company with investment firm Japan Industrial Partners, as part of its restructuring effort to focus on mobile devices. Sony maintains a minority stake in the new, independent company, which currently sells computers in the United States, Japan, India, and Brazil, and maintains exclusive marketing agreements in other regions. Sony still holds the intellectual property rights for the VAIO brand and logo. As of 2023, Vaio operates its stores in several countries and regions, such as Argentina, Brazil, Chile, Hong Kong, India, Japan, Malaysia, Singapore, Taiwan, the United States and Uruguay, among others. Etymology VAIO was originally an acronym of Video Audio Input Output, later amended to Video Audio Integrated Operation, and later to Visual Audio Intelligent Organizer in 2008 to celebrate the brand's 10th anniversary. The logo, along with the first of the VAIO computers, was designed by Teiyu Goto, supervisor of product design from the Sony Creative Center in Tokyo. He incorporated many meanings into the logo and acronym: the pronunciation in both English and Japanese is similar to "bio", which is symbolic of life and the product's future evolution; it's also near "violet", which is why most early Vaios were purple or included purple components. Additionally, the logo is stylized to make the "VA" look like a sine wave and the "IO" like binary digits 1 and 0, the combination representing the merging of analog and digital signals. The sound some Vaio models make when starting up is derived from the melody created when pressing a telephone keypad to spell the letters V-A-I-O. Global operations As of 2023, Vaio is operational in the following countries and regions: Argenti
https://en.wikipedia.org/wiki/Ascocarp
An ascocarp, or ascoma (plural: ascomata), is the fruiting body (sporocarp) of an ascomycete phylum fungus. It consists of very tightly interwoven hyphae and millions of embedded asci, each of which typically contains four to eight ascospores. Ascocarps are most commonly bowl-shaped (apothecia) but may take on a spherical or flask-like form that has a pore opening to release spores (perithecia) or no opening (cleistothecia). Classification The ascocarp is classified according to its placement (in ways not fundamental to the basic taxonomy). It is called epigeous if it grows above ground, as with the morels, while underground ascocarps, such as truffles, are termed hypogeous. The structure enclosing the hymenium is divided into the types described below (apothecium, cleistothecium, etc.) and this character is important for the taxonomic classification of the fungus. Apothecia can be relatively large and fleshy, whereas the others are microscopic—about the size of flecks of ground pepper. Apothecium An apothecium (plural: apothecia) is a wide, open, saucer-shaped or cup-shaped fruit body. It is sessile and fleshy. The structure of the apothecium chiefly consists of three parts: hymenium (upper concave surface), hypothecium, and excipulum (the "foot"). The asci are present in the hymenium layer. The asci are freely exposed at maturity. Examples are the members of Dictyomycetes. Here the fertile layer is free, so that many spores can be dispersed simultaneously. The morel, Morchella, an edible ascocarp, not a mushroom, favored by gourmets, is a mass of apothecia fused together in a single large structure or cap. The genera Helvella and Gyromitra are similar. Cleistothecium A cleistothecium (plural: cleistothecia) is a globose, completely closed fruit body with no special opening to the outside. The ascomatal wall is called peridium and typically consists of densely interwoven hyphae or pseudoparenchyma cells. It may be covered with hyphal outgrowth called appendage
https://en.wikipedia.org/wiki/Ascus
An ascus (plural: asci) is the sexual spore-bearing cell produced in ascomycete fungi. Each ascus usually contains eight ascospores (or octad), produced by meiosis followed, in most species, by a mitotic cell division. However, asci in some genera or species can occur in numbers of one (e.g. Monosporascus cannonballus), two, four, or multiples of four. In a few cases, the ascospores can bud off conidia that may fill the asci (e.g. Tympanis) with hundreds of conidia, or the ascospores may fragment, e.g. some Cordyceps, also filling the asci with smaller cells. Ascospores are nonmotile, usually single celled, but not infrequently may be coenocytic (lacking a septum), and in some cases coenocytic in multiple planes. Mitotic divisions within the developing spores populate each resulting cell in septate ascospores with nuclei. The term ocular chamber, or oculus, refers to the epiplasm (the portion of cytoplasm not used in ascospore formation) that is surrounded by the "bourrelet" (the thickened tissue near the top of the ascus). Typically, a single ascus will contain eight ascospores (or octad). The eight spores are produced by meiosis followed by a mitotic division. Two meiotic divisions turn the original diploid zygote nucleus into four haploid ones. That is, the single original diploid cell from which the whole process begins contains two complete sets of chromosomes. In preparation for meiosis, all the DNA of both sets is duplicated, to make a total of four sets. The nucleus that contains the four sets divides twice, separating into four new nuclei – each of which has one complete set of chromosomes. Following this process, each of the four new nuclei duplicates its DNA and undergoes a division by mitosis. As a result, the ascus will contain four pairs of spores. Then the ascospores are released from the ascus. In many cases the asci are formed in a regular layer, the hymenium, in a fruiting body which is visible to the naked eye, here called an ascocarp or ascoma.
https://en.wikipedia.org/wiki/Just-in-time%20compilation
In computing, just-in-time (JIT) compilation (also dynamic translation or run-time compilation) is compilation (of computer code) during execution of a program (at run time) rather than before execution. This may consist of source code translation but is more commonly bytecode translation to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code. JIT compilation is a combination of the two traditional approaches to translation to machine code—ahead-of-time compilation (AOT), and interpretation—and combines some advantages and drawbacks of both. Roughly, JIT compilation combines the speed of compiled code with the flexibility of interpretation, with the overhead of an interpreter and the additional overhead of compiling and linking (not just interpreting). JIT compilation is a form of dynamic compilation, and allows adaptive optimization such as dynamic recompilation and microarchitecture-specific speedups. Interpretation and JIT compilation are particularly suited for dynamic programming languages, as the runtime system can handle late-bound data types and enforce security guarantees. History The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960. In his seminal paper Recursive functions of symbolic expressions and their computation by machine, Part I, he mentions functions that are translated during runtime, thereby sparing the need to save the compiler output to punch cards (although this would be more accurately known as a "Compile and go system"). Another early example was by Ken Thompson, who in 1968 gave one of the first applications of regular expressions, here for pattern matching in the text editor QED. For speed, Thompson implemented regular expression matching by JITing to IBM 7094
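As a deliberately toy illustration of translating code at run time (not a description of any real JIT compiler), the Python sketch below evaluates a general loop on each call, but once a particular exponent becomes "hot" it generates and compiles a specialized function for it; the names, structure, and threshold are all made up for the example.

```python
def make_jit_power():
    """Returns pow_fn(x, n): interprets a general loop, but after a call-count
    threshold generates and compiles specialized code for the hot exponent n."""
    counts = {}          # how often each exponent has been seen
    compiled = {}        # exponent -> specialized, run-time-compiled function
    THRESHOLD = 1000     # made-up hotness threshold

    def pow_fn(x, n):
        if n in compiled:                      # fast path: run the generated code
            return compiled[n](x)
        counts[n] = counts.get(n, 0) + 1
        if counts[n] >= THRESHOLD:
            # "Compile": build source for x * x * ... * x and compile it at run time.
            src = "def specialized(x):\n    return " + " * ".join(["x"] * n)
            namespace = {}
            exec(compile(src, "<jit>", "exec"), namespace)
            compiled[n] = namespace["specialized"]
        # Slow path: straightforward interpretation of the general loop.
        result = 1
        for _ in range(n):
            result *= x
        return result

    return pow_fn

jit_pow = make_jit_power()
for _ in range(1000):
    jit_pow(3, 8)                      # warm up: the exponent 8 becomes hot and is compiled
print(jit_pow(3, 8), 3 ** 8)           # 6561 6561
```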
https://en.wikipedia.org/wiki/Geometrization%20conjecture
In mathematics, Thurston's geometrization conjecture states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic). In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston in 1982, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture. Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s and since then several complete proofs have appeared in print. Grigori Perelman announced a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery in two papers posted at the arxiv.org preprint server. Perelman's papers were studied by several independent groups that produced books and online manuscripts filling in the complete details of his arguments. Verification was essentially complete in time for Perelman to be awarded the 2006 Fields Medal for his work, and in 2010 the Clay Mathematics Institute awarded him its 1 million USD prize for solving the Poincaré conjecture, though Perelman declined to accept either award. The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture. The conjecture A 3-manifold is called closed if it is compact and has no boundary. Every closed 3-manifold has a prime decomposition: this means i
https://en.wikipedia.org/wiki/Exterior%20derivative
On a differentiable manifold, the exterior derivative extends the concept of the differential of a function to differential forms of higher degree. The exterior derivative was first described in its current form by Élie Cartan in 1899. The resulting calculus, known as exterior calculus, allows for a natural, metric-independent generalization of Stokes' theorem, Gauss's theorem, and Green's theorem from vector calculus. If a differential k-form is thought of as measuring the flux through an infinitesimal k-parallelotope at each point of the manifold, then its exterior derivative can be thought of as measuring the net flux through the boundary of a (k + 1)-parallelotope at each point. Definition The exterior derivative of a differential form of degree k (also differential k-form, or just k-form for brevity here) is a differential form of degree k + 1. If f is a smooth function (a 0-form), then the exterior derivative of f is the differential of f. That is, df is the unique 1-form such that for every smooth vector field X, df(X) = d_X f, where d_X f is the directional derivative of f in the direction of X. The exterior product of differential forms (denoted with the same symbol ∧) is defined as their pointwise exterior product. There are a variety of equivalent definitions of the exterior derivative of a general k-form. In terms of axioms The exterior derivative is defined to be the unique ℝ-linear mapping from k-forms to (k + 1)-forms that has the following properties: df is the differential of f for a 0-form f. d(df) = 0 for a 0-form f. d(α ∧ β) = dα ∧ β + (−1)^p (α ∧ dβ), where α is a p-form. That is to say, d is an antiderivation of degree 1 on the exterior algebra of differential forms (see the graded product rule). The second defining property holds in more generality: d(dα) = 0 for any k-form α; more succinctly, d² = 0. The third defining property implies as a special case that if f is a function and α is a k-form, then d(fα) = df ∧ α + f dα, because a function is a 0-form, and scalar multiplication and the exterior product are equivalent when one of the arguments is a scalar. In terms of local coordi
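A short worked example (added for concreteness; the coordinate formula is the standard one, while the specific 1-form is an arbitrary illustration, not from the article): in local coordinates the exterior derivative differentiates each coefficient and wedges the result with the corresponding coordinate differentials.

```latex
% Exterior derivative in local coordinates.
\[
  d\!\left( \sum_I f_I \, dx^{i_1} \wedge \cdots \wedge dx^{i_k} \right)
  = \sum_I \sum_{j} \frac{\partial f_I}{\partial x^j} \, dx^{j} \wedge dx^{i_1} \wedge \cdots \wedge dx^{i_k}.
\]
% Example: a 1-form on R^2 (chosen arbitrarily for illustration).
\[
  \omega = x^2 y \, dx + x \, dy
  \quad\Longrightarrow\quad
  d\omega = \left( \frac{\partial x}{\partial x} - \frac{\partial (x^2 y)}{\partial y} \right) dx \wedge dy
          = (1 - x^2) \, dx \wedge dy .
\]
% Consistency check: d(d\omega) = 0 automatically, since d\omega is already a top-degree form on R^2.
```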