https://en.wikipedia.org/wiki/Iliococcygeal%20raphe
The iliococcygeal raphe is a raphe representing the midline location where the levatores ani converge. See also Anococcygeal body
https://en.wikipedia.org/wiki/SINADR
Signal-to-noise and distortion ratio (SINADR) is a measurement of the purity of a signal. SINADR is typically used in data converter specifications. SINADR is defined as: SINADR = (Psignal + Pnoise + Pdistortion) / (Pnoise + Pdistortion), where Psignal, Pnoise and Pdistortion are the average power of the signal, of the noise (quantization error and random noise), and of the distortion components, respectively. SINADR is usually expressed in dB. SINADR is a standard metric for analog-to-digital converters and digital-to-analog converters. SINADR (in dB) is related to ENOB by the following equation: SINADR = 6.02 · ENOB + 1.76 dB.
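The defining ratio and the ENOB relation above can be sketched in a few lines of Python (an illustrative computation, not from the article; the function names are mine):

```python
import math

def sinad_db(p_signal, p_noise, p_distortion):
    """SINAD in dB: total average power over noise-plus-distortion power."""
    total = p_signal + p_noise + p_distortion
    return 10 * math.log10(total / (p_noise + p_distortion))

def enob(sinad_in_db):
    """Effective number of bits recovered from SINAD (dB), full-scale sine input."""
    return (sinad_in_db - 1.76) / 6.02

# 99:1 signal-to-(noise+distortion) power ratio -> 10*log10(100/1) = 20 dB
assert abs(sinad_db(99.0, 1.0, 0.0) - 20.0) < 1e-9
# An ideal 12-bit converter: SINAD = 6.02*12 + 1.76 = 74.0 dB
assert abs(enob(74.0) - 12.0) < 1e-6
```
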
https://en.wikipedia.org/wiki/MEMS%20magnetic%20actuator
A MEMS magnetic actuator is a device that uses microelectromechanical systems (MEMS) technology to convert an electric current into a mechanical output by employing the well-known Lorentz force equation or the theory of magnetism. Overview of MEMS Micro-electro-mechanical system (MEMS) technology is a process technology in which mechanical and electro-mechanical devices or structures are constructed using special micro-fabrication techniques. These techniques include: bulk micro-machining, surface micro-machining, LIGA, wafer bonding, etc. A device is considered to be a MEMS device if it satisfies the following: Its feature size is between 0.1 µm and hundreds of micrometers (below this range it becomes a nano device, and above the range it is considered a mesosystem). It has some electrical functionality in its operation. This could include the generation of voltage by electromagnetic induction, by changing the gap between two electrodes, or by a piezoelectric material. It has some mechanical functionality, such as the deformation of a beam or diaphragm due to stress or strain. It has a system-like functionality: the device must be integrable with other circuitry to form a system. This would be the interfacing circuitry and packaging for the device to become useful. For the analysis of every MEMS device, the lumped assumption is made: if the size of the device is far less than the characteristic length scale of the phenomenon (wave or diffusion), then there are no spatial variations across the entire device. Modelling becomes easy under this assumption. Operations in MEMS The three major operations in MEMS are: Sensing: measuring a mechanical input by converting it to an electrical signal, e.g. a MEMS accelerometer or a pressure sensor (could also measure electrical signals, as in the case of current sensors) Actuation: using an electrical signal to cause the displacement (or rotation) of a mechanical structure, e.g. a synthetic jet actua
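The Lorentz-force principle behind such actuators reduces to |F| = I·L·B for a straight current-carrying conductor perpendicular to the magnetic field. A minimal sketch with illustrative values (not taken from the article):

```python
def lorentz_force(current_a, length_m, b_field_t):
    """|F| = I * L * B for a straight conductor perpendicular to the field."""
    return current_a * length_m * b_field_t

# Illustrative MEMS-scale values: 10 mA through a 1 mm beam in a 0.1 T field.
force_n = lorentz_force(10e-3, 1e-3, 0.1)
assert abs(force_n - 1e-6) < 1e-12  # roughly one micronewton
```

Forces on the order of micronewtons are typical of what this mechanism delivers at MEMS dimensions, which is why magnetic actuation competes with electrostatic and piezoelectric schemes.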
https://en.wikipedia.org/wiki/Dubna%2048K
The Dubna 48K (Дубна 48К) is a Soviet clone of the ZX Spectrum home computer launched in 1991. It was based on an analogue of the Zilog Z80 microprocessor. Its name comes from Dubna, a town near Moscow, where it was produced at the "TENSOR" instrument factory, and "48K" stands for 48 KB of RAM. Overview According to the manual, this computer was intended for: studying the principles of PC operation various kinds of calculations "intellectual games" By the time this computer was released (1991), there were already much more powerful x86 CPUs and commercially available advanced operating systems, such as Unix, DOS and Windows. The Dubna 48K had only a built-in BASIC interpreter, and loaded its programs from a cassette recorder, so it couldn't run any of the modern operating systems. However, the Dubna 48K and many other Z80 clones, though outdated by that time, were introduced in high schools of the Soviet Union. Many of the games for the Z80-based machine were ported from games already available for Nintendo's 8-bit game console, marketed in Russia under the brand Dendy. The machine comes in two versions: in a metal case for the initial 1991 model, and in a plastic case for the 1992 model. Included items The Dubna 48K was shipped with the following units: Main unit ("data processing unit", as stated on its back side), with mainboard and built-in keyboard External power unit Video adapter for connecting the computer to the TV set BASIC programming manual Reference book, including complete schematic circuit Additionally, there were some optional items: Joystick 32 cm (12") colour monitor The computer could also connect to a ZX Microdrive, but such a device was never included. Technical details CPU: 8-bit MME 80A at 1.875 MHz (half the speed of the original ZX Spectrum) RAM: 48 KB (16× КР565РУ5Г chips) ROM: 16 KB (2× К573РФ4А) Resolution: 256 × 192 pixels, or 24 rows of 32 characters each Number of colours: 8 colours in either normal or bright mode,
https://en.wikipedia.org/wiki/Scientific%20pitch
Scientific pitch, also known as philosophical pitch, Sauveur pitch or Verdi tuning, is an absolute concert pitch standard which is based on middle C (C4) being set to 256 Hz rather than 261.62 Hz, making it approximately 37.6 cents lower than the common A440 pitch standard. It was first proposed in 1713 by French physicist Joseph Sauveur, promoted briefly by Italian composer Giuseppe Verdi in the 19th century, then advocated by the Schiller Institute beginning in the 1980s with reference to the composer, but naming a pitch slightly lower than Verdi's preferred 432 Hz for A, and making controversial claims regarding the effects of this pitch. Scientific pitch is not used by concert orchestras but is still sometimes favored in scientific writings for the convenience of all the octaves of C being exact round numbers in the binary system when expressed in hertz (symbol Hz). The octaves of C remain whole numbers in Hz all the way down to 1 Hz in both binary and decimal counting systems. Instead of A above middle C (A4) being set to the widely used standard of 440 Hz, scientific pitch assigns it a frequency of 430.54 Hz. Since 256 is a power of 2, only octaves (factor 2:1) and, in just tuning, higher-pitched perfect fifths (factor 3:2) of the scientific pitch standard will have a frequency of a convenient integer value. With a Verdi pitch standard of A4 = 432 Hz = 2⁴ × 3³, in just tuning all octaves (factor 2), perfect fourths (factor 4:3) and fifths (factor 3:2) will have pitch frequencies of integer numbers, but not the major thirds (factor 5:4) nor major sixths (factor 5:3), which have a prime factor 5 in their ratios. However, scientific tuning implies an equal temperament tuning where the frequency ratio between each half tone in the scale is the same, being the 12th root of 2 (a factor of approximately 1.059463), which is not a rational number; therefore in scientific pitch only the octaves of C have a frequency of a whole number in hertz. History Concert tuni
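The figures above (A4 = 430.54 Hz, about 37.6 cents below A440) can be checked with a short equal-temperament calculation that fixes C4 at 256 Hz (a sketch; the helper name is mine):

```python
import math

def scientific_pitch(semitones_above_c4):
    """Equal-tempered frequency in Hz with middle C (C4) fixed at 256 Hz."""
    return 256.0 * 2 ** (semitones_above_c4 / 12)

a4 = scientific_pitch(9)    # A4 lies 9 equal-tempered semitones above C4
c5 = scientific_pitch(12)   # one octave above C4

assert abs(a4 - 430.54) < 0.01          # matches the article's A4 figure
assert c5 == 512.0                      # octaves of C stay exact powers of 2

# Difference from the A440 standard, in cents:
cents_below_a440 = 1200 * math.log2(440 / a4)
assert abs(cents_below_a440 - 37.6) < 0.1
```
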
https://en.wikipedia.org/wiki/Sentry%20Firewall
Sentry Firewall is a free open-source network firewall Linux distribution that was first published in 2001 and has been the subject of multiple magazine reviews. The distribution is particularly notable because it consists solely of a bootable CD-ROM that is designed to be used in a computer with no hard disk. Configuration information is retrieved at boot time by automatically searching an attached floppy disk, USB flash memory drive, or another server on the local network willing to provide the configuration. Overview Sentry Firewall starts from CD-ROM and immediately constructs a RAM disk in the computer's memory. Before the system fully boots, a script searches for removable media containing a file called "sentry.conf". If that file is found, it may contain detailed instructions and a list of files to be copied from the removable media to the RAM disk before the system is finally allowed to boot. The CD-ROM is pre-loaded with a variety of configurable network tools, including iptables. Because the RAM disk is created each time the machine boots, it is possible to recover from any sort of problem simply by rebooting the machine. From a security perspective, this is compelling because the machine essentially becomes immune to viruses and file corruption, or at least the effects of either problem can't survive a reboot. Configuration While basic Linux familiarity is necessary to create the set of configuration files the firewall requires, there exist Windows programs capable of creating the bulk of the configuration scripts based on interaction with a graphical user interface. Firewall Builder is one such example; this program also works with other firewall products unrelated to Sentry Firewall. Current status According to the project's maintainer, Sentry Firewall has not been updated since its January 2005 release. External links Sentry Firewall on ArchiveOS.org
https://en.wikipedia.org/wiki/Vakhitov%E2%80%93Kolokolov%20stability%20criterion
The Vakhitov–Kolokolov stability criterion is a condition for linear stability (sometimes called spectral stability) of solitary wave solutions to a wide class of U(1)-invariant Hamiltonian systems, named after Soviet scientists Aleksandr Kolokolov (Александр Александрович Колоколов) and Nazib Vakhitov (Назиб Галиевич Вахитов). The condition for linear stability of a solitary wave with frequency ω has the form d/dω Q(ω) < 0, where Q(ω) is the charge (or momentum) of the solitary wave φ_ω(x)e^(−iωt), conserved by Noether's theorem due to the U(1)-invariance of the system. Original formulation Originally, this criterion was obtained for the nonlinear Schrödinger equation, i ∂u/∂t = −∂²u/∂x² + g(|u|²)u, where u = u(x, t) is a complex-valued function of x ∈ ℝ and t, and g is a smooth real-valued function. Since the equation is U(1)-invariant, by Noether's theorem, it has an integral of motion, Q(u) = ½ ∫ |u(x, t)|² dx, which is called charge or momentum, depending on the model under consideration. For a wide class of functions g, the nonlinear Schrödinger equation admits solitary wave solutions of the form u(x, t) = φ_ω(x)e^(−iωt), where ω ∈ ℝ and φ_ω(x) decays for large x (one often requires that φ_ω belongs to the Sobolev space H¹(ℝ)). Usually such solutions exist for ω from an interval or collection of intervals of the real line. The Vakhitov–Kolokolov stability criterion, d/dω Q(φ_ω) < 0, is a condition of spectral stability of a solitary wave solution. Namely, if this condition is satisfied at a particular value of ω, then the linearization at the solitary wave with this ω has no spectrum in the right half-plane. This result is based on an earlier work by Vladimir Zakharov. Generalizations This result has been generalized to abstract Hamiltonian systems with U(1)-invariance. It was shown that under rather general conditions the Vakhitov–Kolokolov stability criterion guarantees not only spectral stability but also orbital stability of solitary waves. The stability condition has been generalized to traveling wave solutions to the generalized Korteweg–de Vries equation of the form ∂_t u + ∂_x(∂²_x u + f(u)) = 0. The stability condition has also been g
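A standard worked example (not spelled out in the article, but consistent with the convention u = φ_ω(x)e^(−iωt) used there) is the focusing pure-power nonlinearity:

```latex
% Focusing pure-power NLS: g(s) = -s^k, i.e.  i\,u_t = -u_{xx} - |u|^{2k} u .
\begin{align*}
  &\text{Solitary waves } u = \phi_\omega(x)\,e^{-i\omega t}
    \text{ exist for } \omega < 0 \text{ and obey the rescaling} \\
  &\qquad \phi_\omega(x) = |\omega|^{1/(2k)}\,
    \phi_{-1}\!\big(|\omega|^{1/2} x\big), \\
  &\text{so the charge scales as} \\
  &\qquad Q(\phi_\omega) = \tfrac12 \int_{\mathbb{R}} \phi_\omega(x)^2\,dx
    = |\omega|^{\frac1k-\frac12}\, Q(\phi_{-1}). \\
  &\text{Since } |\omega| = -\omega \text{ here, }
    \frac{d}{d\omega}Q(\phi_\omega) < 0
    \iff \tfrac1k - \tfrac12 > 0 \iff k < 2.
\end{align*}
```

This recovers the classical result that one-dimensional power-law NLS solitons are stable in the subcritical case k < 2 (e.g. the cubic case k = 1) and unstable for k ≥ 2.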
https://en.wikipedia.org/wiki/Wireless%20Markup%20Language
Wireless Markup Language (WML), based on XML, is an obsolete markup language intended for devices that implement the Wireless Application Protocol (WAP) specification, such as mobile phones. It provides navigational support, data input, hyperlinks, text and image presentation, and forms, much like HTML (Hypertext Markup Language). It preceded the use of other markup languages used with WAP, such as XHTML and HTML itself, which achieved dominance as processing power in mobile devices increased. WML history Building on Openwave's HDML, Nokia's "Tagged Text Markup Language" (TTML) and Ericsson's proprietary markup language for mobile content, the WAP Forum created the WML 1.1 standard in 1998. WML 2.0 was specified in 2001, but has not been widely adopted. It was an attempt at bridging WML and XHTML Basic before the WAP 2.0 spec was finalized. In the end, XHTML Mobile Profile became the markup language used in WAP 2.0. The newest WML version in active use is 1.3. The first company to launch a public WML site was the Dutch mobile phone network operator Telfort, in October 1999; Telfort was also the first company in the world to launch the Nokia 7110. The Telfort WML site was created and developed as a side project to test the device's capabilities by a billing engineer, Christopher Bee, and a national deployment manager, Euan McLeod. The WML site consisted of four pages in both Dutch and English; the Dutch pages contained many grammatical errors, as neither developer was a native Dutch speaker and the two were unaware that the WML site was configured as the home page on the Nokia 7110. WML markup WML documents are XML documents that validate against the WML DTD (Document Type Definition). The W3C Markup Validation service (http://validator.w3.org/) can be used to validate WML documents (they are validated against their declared document type). For example, the following WML page could be saved as "example.wml": <?xml version="1.0"?> <!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN" "http:/
https://en.wikipedia.org/wiki/List%20of%20strains%20of%20Escherichia%20coli
Strains Innocuous: Laboratory: E. coli K-12, one of two laboratory strains (innocuous) Clifton wild type W3110 DH5α Dam dcm strain HB101 Escherichia coli B, the other of the two lab strains from which all lab substrains originate Escherichia coli BL21(DE3) Pathogenic: Enterotoxigenic E. coli (ETEC) Enteropathogenic E. coli (EPEC) Enteroinvasive E. coli (EIEC) Enterohemorrhagic E. coli (EHEC) Enteroaggregative E. coli (EAEC) Uropathogenic E. coli (UPEC) Verotoxin-producing E. coli E. coli O157:H7, an enterohemorrhagic strain; see also 2006 North American E. coli outbreak E. coli O104:H4; see also 2011 E. coli O104:H4 outbreak Escherichia coli O121 Escherichia coli O104:H21 Escherichia coli K1, meningitis Adherent-invasive Escherichia coli (AIEC), associated with Crohn's disease Escherichia coli NC101 Shigella Shigella flexneri Shigella dysenteriae Shigella boydii Shigella sonnei
https://en.wikipedia.org/wiki/Panine%20gammaherpesvirus%201
Panine gammaherpesvirus 1 (PnHV-1), commonly known as chimpanzee lymphocryptovirus, is a species of virus in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. The virus infects chimpanzee (Pan troglodytes) leukocytes. The glycoprotein B (gB) gene of the chimpanzee Lymphocryptovirus is virtually identical to the corresponding gene in the orangutan lymphocryptovirus. This suggests that the virus may have been transmitted between chimps and orangutans relatively recently (either in the wild or in captivity). It is 35 to 45% homologous to the human Epstein–Barr virus, which is classified in the same genus.
https://en.wikipedia.org/wiki/NEDD8
NEDD8 is a protein that in humans is encoded by the NEDD8 gene (in Saccharomyces cerevisiae this protein is known as Rub1). This ubiquitin-like (UBL) protein becomes covalently conjugated to a limited number of cellular proteins, in a process called NEDDylation that is similar to ubiquitination. Human NEDD8 shares 60% amino acid sequence identity with ubiquitin. The primary known substrates of NEDD8 modification are the cullin subunits of cullin-based E3 ubiquitin ligases, which are active only when NEDDylated. Their NEDDylation is critical for the recruitment of E2 to the ligase complex, thus facilitating ubiquitin conjugation. NEDD8 modification has therefore been implicated in cell cycle progression and cytoskeletal regulation. Activation and conjugation As with ubiquitin and SUMO, NEDD8 is conjugated to cellular proteins after its C-terminal tail is processed. The NEDD8-activating E1 enzyme is a heterodimer composed of APPBP1 and UBA3 subunits. The APPBP1 and UBA3 subunits have homology to the N- and C-terminal halves of the ubiquitin E1 enzyme, respectively. The UBA3 subunit contains the catalytic center and activates NEDD8 in an ATP-dependent reaction by forming a high-energy thiolester intermediate. The activated NEDD8 is subsequently transferred to the UbcH12 E2 enzyme, and is then conjugated to specific substrates in the presence of the appropriate E3 ligases. Substrates for NEDD8 As reviewed by Brown et al., the best-characterized activated-NEDD8 substrates are the cullins (CUL1, 2, 3, 4A, 4B, 5, and 7 and PARC in human cells), which serve as molecular scaffolds for cullin-RING ubiquitin ligases (CRLs). Neddylation results in covalent conjugation of a NEDD8 moiety onto a conserved cullin lysine residue. Cullin neddylation increases CRL ubiquitylation activity via conformational changes that optimize ubiquitin transfer to target proteins. Removal There are several different proteases which can remove NEDD8 from protein conjugates. UCHL1, UCHL3 and USP21 proteases have
https://en.wikipedia.org/wiki/Spectrogram
A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays. Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing, seismology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals. A spectrogram can be generated by an optical spectrometer, a bank of band-pass filters, by Fourier transform or by a wavelet transform (in which case it is also known as a scaleogram or scalogram). A spectrogram is usually depicted as a heat map, i.e., as an image with the intensity shown by varying the colour or brightness. Format A common format is a graph with two geometric dimensions: one axis represents time, and the other axis represents frequency; a third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity or color of each point in the image. There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes as a waterfall plot where the amplitude is represented by height of a 3D surface instead of color or intensity. The frequency and amplitude axes can be either linear or logarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably in decibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships. Generation Spectrograms of light may be created directly using an optical spectrometer over time. Spectrograms may be created from a time-domain signal in one of two ways: approximated as a filterbank that results from a series of band-pass filter
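The Fourier route described above (a time-frequency grid of short-time spectral magnitudes) can be sketched as a windowed short-time DFT in pure Python. This is a minimal illustration, not a production STFT; the function names and parameters are mine:

```python
import math

def spectrogram(signal, frame_len, hop):
    """Magnitude spectrogram via a short-time DFT with a Hann window.

    Returns a list of frames; each frame holds |X[k]| for
    k = 0 .. frame_len // 2 (the non-negative frequency bins).
    """
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_len)
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = [signal[start + n] * window[n] for n in range(frame_len)]
        mags = []
        for k in range(frame_len // 2 + 1):
            re = sum(seg[n] * math.cos(2 * math.pi * k * n / frame_len)
                     for n in range(frame_len))
            im = -sum(seg[n] * math.sin(2 * math.pi * k * n / frame_len)
                      for n in range(frame_len))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

# A 1 kHz tone sampled at 8 kHz: with 64-sample frames the bin spacing is
# 8000 / 64 = 125 Hz, so the energy should concentrate in bin 1000/125 = 8.
fs, tone = 8000, 1000
sig = [math.sin(2 * math.pi * tone * n / fs) for n in range(512)]
frames = spectrogram(sig, frame_len=64, hop=32)
peak_bin = max(range(len(frames[0])), key=lambda k: frames[0][k])
assert peak_bin == 8
```

Plotting `frames` as an image with time on one axis, bin index on the other, and magnitude as colour yields exactly the heat-map depiction described in the article.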
https://en.wikipedia.org/wiki/The%20Human-Induced%20Earthquake%20Database
The Human-Induced Earthquake Database (HiQuake) is an online database that documents all reported cases of induced seismicity proposed on scientific grounds. It is the most complete compilation of its kind and is freely available to download via the associated website. The database is periodically updated to correct errors, revise existing entries, and add new entries reported in new scientific papers and reports. Suggestions for revisions and new entries can be made via the associated website. History In 2016, Nederlandse Aardolie Maatschappij funded a team of researchers from Durham University and Newcastle University to conduct a full review of induced seismicity. This review formed part of a scientific workshop aimed at estimating the maximum possible magnitude earthquake that might be induced by conventional gas production in the Groningen gas field. The resulting database from the review was publicly released online on 26 January 2017. The database was accompanied by the publication of two scientific papers, the more detailed of which is freely available online.
https://en.wikipedia.org/wiki/EF-Tu
EF-Tu (elongation factor thermo unstable) is a prokaryotic elongation factor responsible for catalyzing the binding of an aminoacyl-tRNA (aa-tRNA) to the ribosome. It is a G-protein, and facilitates the selection and binding of an aa-tRNA to the A-site of the ribosome. As a reflection of its crucial role in translation, EF-Tu is one of the most abundant and highly conserved proteins in prokaryotes. It is found in eukaryotic mitochondria as TUFM. As a family of elongation factors, EF-Tu also includes its eukaryotic and archaeal homolog, the alpha subunit of eEF-1 (EF-1A). Background Elongation factors are part of the mechanism that synthesizes new proteins through translation in the ribosome. Transfer RNAs (tRNAs) carry the individual amino acids that become integrated into a protein sequence, and have an anticodon for the specific amino acid that they are charged with. Messenger RNA (mRNA) carries the genetic information that encodes the primary structure of a protein, and contains codons that code for each amino acid. The ribosome creates the protein chain by following the mRNA code and integrating the amino acid of an aminoacyl-tRNA (also known as a charged tRNA) to the growing polypeptide chain. There are three sites on the ribosome for tRNA binding. These are the aminoacyl/acceptor site (abbreviated A), the peptidyl site (abbreviated P), and the exit site (abbreviated E). The P-site holds the tRNA connected to the polypeptide chain being synthesized, and the A-site is the binding site for a charged tRNA with an anticodon complementary to the mRNA codon associated with the site. After binding of a charged tRNA to the A-site, a peptide bond is formed between the growing polypeptide chain on the P-site tRNA and the amino acid of the A-site tRNA, and the entire polypeptide is transferred from the P-site tRNA to the A-site tRNA. Then, in a process catalyzed by the prokaryotic elongation factor EF-G (historically known as translocase), the coordinated tr
https://en.wikipedia.org/wiki/Foliose%20lichen
A foliose lichen is a lichen with flat, leaf-like lobes (thalli), which are generally not firmly bonded to the substrate on which it grows. It is one of the three most common growth forms of lichens. It typically has distinct upper and lower surfaces, each of which is usually covered with a cortex; some, however, lack a lower cortex. The photobiont layer lies just below the upper cortex. Where present, the lower cortex is usually dark (sometimes even black), but occasionally white. Foliose lichens are attached to their substrate either by hyphae extending from the cortex or medulla, or by root-like structures called rhizines. The latter, which are found only in foliose lichens, come in a variety of shapes, the specifics of which can aid in species identification. Some foliose lichens attach only at a single stout peg called an umbilicus, typically located near the lichen's centre. Lichens with this structure are called "umbilicate". In general, medium to large epiphytic foliose lichens are moderately sensitive to air pollution, while smaller or ground-dwelling foliose lichens are more tolerant. The term "foliose" derives from the Latin word foliosus, meaning "leafy". Lichens play an important role environmentally. They provide a food source for many animals such as deer, goats, and caribou, and are used as building material for bird nests. Some species are even used in antibiotics. They are also a useful indicator of atmospheric pollution levels. Pollution There is a direct correlation between pollution and the abundance and distribution of lichen. Foliose lichens are extremely sensitive to sulphur dioxide, which is a by-product of atmospheric pollution. Sulphur dioxide reacts with the chlorophyll in lichen, which produces phaeophytin and magnesium ions. When this reaction occurs, the lichen has less chlorophyll, causing a decrease in respiration which eventually kills the lichen. Weathering of rocks Minerals in rocks can be weathered by the growth of lichens on exposed rock surfa
https://en.wikipedia.org/wiki/MoBlock
MoBlock is free software for blocking connections to and from a specified range of hosts. It is an IP address filtering program for Linux, similar to PeerGuardian for Microsoft Windows. Its development has been stopped in favor of Phoenix Labs' official PeerGuardian Linux, and parts of its code have been merged into PeerGuardian Linux. See also PeerGuardian iplist External links MoBlock Homepage Debian packages for MoBlock and PeerGuardian Linux PeerGuardian project at SourceForge Firewall software Internet privacy software
https://en.wikipedia.org/wiki/Pachinko%20allocation
In machine learning and natural language processing, the pachinko allocation model (PAM) is a topic model. Topic models are a suite of algorithms to uncover the hidden thematic structure of a collection of documents. The algorithm improves upon earlier topic models such as latent Dirichlet allocation (LDA) by modeling correlations between topics in addition to the word correlations which constitute topics. PAM provides more flexibility and greater expressive power than latent Dirichlet allocation. While first described and implemented in the context of natural language processing, the algorithm may have applications in other fields such as bioinformatics. The model is named for pachinko machines, a game popular in Japan, in which metal balls bounce down around a complex collection of pins until they land in various bins at the bottom. History Pachinko allocation was first described by Wei Li and Andrew McCallum in 2006. The idea was extended with hierarchical Pachinko allocation by Li, McCallum, and David Mimno in 2007. In 2007, McCallum and his colleagues proposed a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). The algorithm has been implemented in the MALLET software package published by McCallum's group at the University of Massachusetts Amherst. Model PAM connects words in V and topics in T with an arbitrary directed acyclic graph (DAG), where topic nodes occupy the interior levels and the leaves are words. The probability of generating a whole corpus D is the product of the probabilities for every document: P(D | α) = ∏_{d ∈ D} P(d | α), where α denotes the parameters of the Dirichlet distributions attached to the DAG's interior nodes. See also Probabilistic latent semantic indexing (PLSI), an early topic model from Thomas Hofmann in 1999. Latent Dirichlet allocation, a generalization of PLSI developed by David Blei, Andrew Ng, and Michael Jordan in 2002, allowing documents to have a mixture of topics. MALLET, an open-source Java library that implements Pachinko allocation.
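The DAG-structured generative process can be sketched as a toy four-level sampler (root, super-topics, sub-topics, words). Everything here (vocabulary, sizes, Dirichlet parameters) is illustrative and mine, not from Li and McCallum's paper:

```python
import random

random.seed(0)

def dirichlet(alpha):
    """Sample from a Dirichlet distribution via normalized Gamma draws."""
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    total = sum(draws)
    return [d / total for d in draws]

def choice(probs):
    """Draw an index from a discrete probability distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy four-level DAG: root -> 2 super-topics -> 3 sub-topics -> vocabulary.
vocab = ["gene", "cell", "market", "price", "model"]
n_super, n_sub = 2, 3
phi = [dirichlet([0.5] * len(vocab)) for _ in range(n_sub)]  # word dists

def generate_document(n_words):
    theta_root = dirichlet([1.0] * n_super)                 # over super-topics
    theta_super = [dirichlet([1.0] * n_sub) for _ in range(n_super)]
    words = []
    for _ in range(n_words):
        z1 = choice(theta_root)        # pick a super-topic at the root
        z2 = choice(theta_super[z1])   # pick a sub-topic under it
        words.append(vocab[choice(phi[z2])])
    return words

doc = generate_document(10)
assert len(doc) == 10 and all(w in vocab for w in doc)
```

The super-topic layer is what lets PAM capture topic-topic correlations: two sub-topics that co-occur tend to sit under the same super-topic, something flat LDA cannot express.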
https://en.wikipedia.org/wiki/Radek%20Maneuver
The Radek Maneuver is a scale-up-then-scale-down tactic used in the administration of web services, specifically those deployed under a cloud computing paradigm (by a provider such as Amazon Elastic Compute Cloud or Microsoft Azure). History Developed by Olivier "Radek" Dabrowski in the mid-2010s, the Radek Maneuver was originally conceived while using and maintaining applications running on a PaaS system. Execution The Radek Maneuver consists of a series of steps, usually executed via the PaaS or web portal interface. The tactic should be used when a service is misbehaving or otherwise experiencing errors, and the suspected cause is the underlying cloud layer, rather than the application layer. This includes networking issues and other "bad box" problems. The steps are as follows: Identify the application or service which is misbehaving. Increase the compute resources (number of CPU cores, amount of RAM) for the instance on which the application is running. This is also known as scaling up. Wait for the application to re-deploy and stabilize. Scale back down to the original instance size. Principle of action This scale-up-scale-down method is understood to shift the application to a different physical machine underlying the PaaS service or application virtual machine. While this layer of the cloud computing stack is generally outside the access of an application developer (it is instead in the hands of the cloud provider), the maneuver allows troubleshooting and dodging errors in that layer.
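The steps above can be sketched in code. Since the article names no concrete API, the `FakeCloud` class below is a made-up stub that merely simulates the claimed behaviour (a resize reschedules the instance onto a different host); real provider clients differ:

```python
import random

class FakeCloud:
    """Stub standing in for a PaaS API; resizing reschedules the instance."""
    def __init__(self):
        self.size, self.host = "small", "host-a"

    def resize(self, new_size):
        self.size = new_size
        # Simulate the scheduler placing the resized instance on another host.
        self.host = random.choice([h for h in ("host-a", "host-b", "host-c")
                                   if h != self.host])

def radek_maneuver(cloud, bigger_size="large"):
    """Scale up, then scale back down to the original size."""
    original_size = cloud.size
    cloud.resize(bigger_size)      # scale up (in reality: wait for redeploy)
    cloud.resize(original_size)    # scale back down to the original size

cloud = FakeCloud()
radek_maneuver(cloud)
assert cloud.size == "small"                              # size restored
assert cloud.host in ("host-a", "host-b", "host-c")       # placed on some host
```

Note that in this simulation, as in reality, the instance may land back on its original host; the maneuver raises the chance of migrating off a bad machine but does not guarantee it.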
https://en.wikipedia.org/wiki/Crema%20%28dairy%20product%29
Crema is the Spanish word for cream. In the United States, or in the English language, the Mexican product is sometimes referred to as crema espesa (English: "thick cream"); in Mexico it is also referred to as crema fresca (English: "fresh cream"). Crema fresca or crema espesa is a Mexican dairy product prepared with two ingredients, heavy cream and buttermilk. Salt and lime juice may also be used in its preparation. Crema's fat content can range between 18 and 36 percent. In Mexico, it is sold directly to consumers through ranches outside large cities, as well as being available in Mexican and Latin American grocery stores in the United States. Crema is used as a food topping, a condiment and as an ingredient in sauces. It is similar in texture and flavor to France's crème fraîche and sour cream. Production Outside of the larger cities in Mexico, crema is sold directly to consumers by ranches that prepare the product. In the United States, commercial preparations of crema are typically pasteurized, packaged in glass jars, and sold in the refrigerated section of Mexican and Hispanic grocery stores. Uses Crema is used as a topping for foods and as an ingredient in sauces. It can be spooned or drizzled atop various foods and dishes. For example, crema is added as a condiment atop soups, tacos, roasted corn, beans and various Mexican street foods, referred to as antojitos. Its use can impart added richness to the flavor of foods and dishes. It may have a mildly salty flavor. In Mexican cuisine, rajas are roasted chili peppers that are traditionally served with crema. The creaminess of crema can serve to counterbalance the spiciness of dishes prepared with roasted chili peppers, such as chipotle. Similar foods Crema is similar to the French crème fraîche in flavor and consistency. Compared with sour cream, crema has a softer and tangier flavor, and has a thinner texture. Some recipes that call for the use of crema state that sour cream or crème fraîche can be used as a viable substitute.
https://en.wikipedia.org/wiki/SCP%2006F6
SCP 06F6 is (or was) an astronomical object of unknown type, discovered on 21 February 2006 in the constellation Boötes during a survey of galaxy cluster CL 1432.5+3332.8 with the Hubble Space Telescope's Advanced Camera for Surveys Wide Field Channel. According to research authored by Kyle Barbary of the Supernova Cosmology Project, the object brightened over a period of roughly 100 days, reaching a peak intensity of magnitude 21; it then faded over a similar period. Barbary and colleagues report that the spectrum of light emitted from the object does not match known supernova types, and is dissimilar to any known phenomenon in the Sloan Digital Sky Survey database. The light in the blue region shows broad line features, while the red region shows continuous emission. The spectrum shows a handful of spectral lines, but when astronomers try to trace any one of them to an element the other lines fail to match up with any other known elements. Because of its uncommon spectrum, the team was not able to determine the distance to the object using standard redshift techniques; it is not even known whether the object is within or outside the Milky Way. Furthermore, no Milky Way star or external galaxy has been detected at this location, meaning any source is very faint. The European X-ray satellite XMM Newton made an observation in early August 2006 which appears to show an X-ray glow around SCP 06F6, two orders of magnitude more luminous than that of supernovae. Observations from the Palomar Transient Factory, reported in 2009, indicate a redshift z = 1.189 and a peak magnitude of −23.5 absolute (comparable to SN2005ap), making SCP 06F6 one of the most luminous transient phenomena known as of that date. Possible causes Supernovae reach their maximum brightness in only 20 days, and then take much longer to fade away. Researchers had initially conjectured that SCP 06F6 might be an extremely remote supernova; relativistic time dilation might have caused a 20-day event
https://en.wikipedia.org/wiki/Hypanthium
In angiosperms, a hypanthium or floral cup is a structure where basal portions of the calyx, the corolla, and the stamens form a cup-shaped tube. It is sometimes called a floral tube, a term that is also used for corolla tube and calyx tube. It often contains the nectaries of the plant. It is present in many plant families, although it varies in structural dimensions and appearance. This variation between particular species makes the hypanthium useful for identification. Some geometric forms are obconic, as in toyon, whereas others are saucer-shaped, as in Mitella caulescens. Its presence is diagnostic of many families, including the Rosaceae, Grossulariaceae, and Fabaceae. In some cases, it can be so deep, with such a narrow top, that the flower can appear to have an inferior ovary, that is, the ovary is below the other attached floral parts. The hypanthium is known by different common names in differing species. In the eucalypts, it is referred to as the gum nut; in roses it is called the hip. Variations in plant species In myrtles the hypanthium can either surround the ovary loosely or tightly; in some cases it can be fused to the walls of the ovary. It can vary in length. The rims around the outside of the hypanthium contain the calyx lobes or free sepals, petals and one or more stamens that are attached at one or two points. The flowers of the family Rosaceae, or the rose family, always have some type of hypanthium or at least a floral cup from which the sepals, petals and stamens all arise, and which is lined with nectar-producing tissue known as nectaries. The nectar is a sugary substance that attracts birds and bees to the flower, which then take the pollen from the lining of the hypanthium and transfer it to the next flower they visit, usually a neighbouring plant. The stamens borne on the hypanthium are the pollen-producing reproductive organs of the flower. The hypanthium helps in many ways with the reproduction and cross pollination pat
https://en.wikipedia.org/wiki/Secure%20Remote%20Password%20protocol
The Secure Remote Password protocol (SRP) is an augmented password-authenticated key exchange (PAKE) protocol, specifically designed to work around existing patents. Like all PAKE protocols, an eavesdropper or man-in-the-middle cannot obtain enough information to be able to brute-force guess a password or apply a dictionary attack without further interactions with the parties for each guess. Furthermore, being an augmented PAKE protocol, the server does not store password-equivalent data. This means that an attacker who steals the server data cannot masquerade as the client unless they first perform a brute-force search for the password. In layman's terms, during SRP (or any other PAKE protocol) authentication, one party (the "client" or "user") demonstrates to another party (the "server") that they know the password, without sending the password itself or any other information from which the password can be derived. The password never leaves the client and is unknown to the server. Furthermore, the server also needs to know about the password (but not the password itself) in order to instigate the secure connection. This means that the server also authenticates itself to the client, which prevents phishing without reliance on the user parsing complex URLs. The only mathematically proven security property of SRP is that it is equivalent to Diffie-Hellman against a passive attacker. Newer PAKEs such as AuCPace and OPAQUE offer stronger guarantees. Overview The SRP protocol has a number of desirable properties: it allows a user to authenticate themselves to a server, it is resistant to dictionary attacks mounted by an eavesdropper, and it does not require a trusted third party. It effectively conveys a zero-knowledge password proof from the user to the server. In revision 6 of the protocol only one password can be guessed per connection attempt. One of the interesting properties of the protocol is that even if one or two of the cryptographic primitives it uses
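The "augmented" property described above rests on the server storing a verifier rather than the password: the client derives a private value x = H(salt, username, password) and the server keeps only v = g^x mod N. The sketch below illustrates that registration step only; the group parameters, hash layout, and function names are illustrative assumptions (the 127-bit toy prime is far too small for real use, where standardized large safe-prime groups are required), and this is not a full or secure SRP implementation.

```python
import hashlib
import secrets

# Toy SRP-style group: a Mersenne prime used ONLY for illustration.
# Real deployments use standardized large safe-prime groups.
N = 2**127 - 1
g = 2

def H(*parts: bytes) -> int:
    # hash the concatenated parts and interpret the digest as an integer
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return int.from_bytes(h.digest(), "big")

def make_verifier(username: str, password: str):
    # registration: the server stores (salt, v), never the password itself
    salt = secrets.token_bytes(16)
    x = H(salt, f"{username}:{password}".encode())
    v = pow(g, x, N)
    return salt, v
```

Stealing (salt, v) does not let an attacker log in as the client: recovering x from v requires solving a discrete logarithm, so only an offline brute-force search over candidate passwords remains, matching the augmented-PAKE property described above.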
https://en.wikipedia.org/wiki/Edinburgh%20Phrenological%20Society
The Edinburgh Phrenological Society was founded in 1820 by George Combe, an Edinburgh lawyer, with his physician brother Andrew Combe. The Edinburgh Society was the first and foremost phrenology grouping in Great Britain; more than forty phrenological societies followed in other parts of the British Isles. The Society's influence was greatest over its first two decades but declined in the 1840s; the final meeting was recorded in 1870. The central concept of phrenology is that the brain is the organ of the mind and that human behaviour can be usefully understood in broadly neuropsychological rather than philosophical or religious terms. Phrenologists discounted supernatural explanations and stressed the modularity of mind. The Edinburgh phrenologists also acted as midwives to evolutionary theory and inspired a renewed interest in psychiatric disorder and its moral treatment. Phrenology claimed to be scientific but is now regarded as a pseudoscience as its formal procedures did not conform to the usual standards of scientific method. Edinburgh phrenologists included George and Andrew Combe; asylum doctor and reformer William A.F. Browne, father of James Crichton-Browne; Robert Chambers, author of the 1844 proto-Darwinian book Vestiges of the Natural History of Creation; William Ballantyne Hodgson, economist and pioneer of women's education; astronomer John Pringle Nichol; and botanist and evolutionary thinker Hewett Cottrell Watson. Charles Darwin, a medical student in Edinburgh in 1825–7, took part in phrenological discussions at the Plinian Society and returned to Edinburgh in 1838 when formulating his concepts concerning natural selection. Background Phrenology emerged from the views of the medical doctor and scientific researcher Franz Joseph Gall in 18th-century Vienna. Gall suggested that facets of the mind corresponded to regions of the brain, and that it was possible to determine character traits by examining the shape of a person's skull. This "craniologi
https://en.wikipedia.org/wiki/Medicine%20wheel%20%28symbol%29
The modern Medicine Wheel symbol was invented as a teaching tool in about 1972 by Charles Storm, aka Arthur C. Storm, writing under the name Hyemeyohsts Storm, in his book Seven Arrows and further expanded upon in his book Lightningbolt. It has since been used by various people to symbolize a variety of concepts, some based on Native American religions, others newly invented and of more New Age orientation. It is also a common symbol in some pan-Indian and twelve-step recovery groups. Recent invention Charles Storm, pen name Hyemeyohsts Storm, was the son of a German immigrant who claimed to be Cheyenne; he misappropriated and misrepresented Native American teachings and symbols from a variety of different cultures, claiming that they were Cheyenne, such as some symbolism connected to the Plains Sun dance, to create the modern Medicine Wheel symbol around 1972. Anthropologist Alice Kehoe suspects that Storm took the idea of the medicine wheel from ahkoiyu, which are little model wheels used in games, and for decoration during rites by the Cheyenne, which were talked about in George Bird Grinnell's 19th century ethnography titled The Cheyenne Indians: Their History and Lifeways. Subsequently Vincent LaDuke (a New Age spiritual leader going by the name Sun Bear), who was of Ojibwe descent, started also using the Medicine Wheel symbol, combining the basic concept with pieces of disparate spiritual practices from various Indigenous cultures, and adding elements of new age and occult spiritualism. LaDuke self-published a newsletter and several books, and formed a group of followers that he named the Bear Tribe, of which he appointed himself the medicine chief. For a fee, his mostly wealthy and white followers attended his workshops and retreats, joined his "tribe", and could buy titles and honours that are traditionally reserved for respected elders and knowledge keepers. For these activities LaDuke was denounced and picketed by the American Indian Movement. Storm and
https://en.wikipedia.org/wiki/Margaretia
Margaretia is a frondose organism known from the middle Cambrian Burgess Shale and the Kinzers Formation of Pennsylvania. Its fronds reached about 10 cm in length and are peppered with a range of length-parallel oval holes. It was originally interpreted as an alcyonarian coral. It was later reclassified as a green alga closely resembling modern Caulerpa by D.F. Satterthwait in her Ph.D. thesis in 1976, a finding supported by Conway Morris and Robison in 1988. More recently, it has been reinterpreted as an organic tube that served as the dwelling of the hemichordate Oesia.
https://en.wikipedia.org/wiki/Thermovirga
Thermovirga is a Gram-negative, anaerobic and motile genus of bacteria from the family Synergistaceae with one known species (Thermovirga lienii). Thermovirga lienii has been isolated from the production water of a North Sea oil well off Norway.
https://en.wikipedia.org/wiki/Gravitational%20lens
A gravitational lens is a distribution of matter (such as a cluster of galaxies) or a point particle between a distant light source and an observer that is capable of bending the light from the source as the light travels toward the observer. This effect is known as gravitational lensing, and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. Treating light as corpuscles travelling at the speed of light, Newtonian physics also predicts the bending of light, but only half of that predicted by general relativity. Although Einstein made unpublished calculations on the subject in 1912, Orest Khvolson (1924) and Frantisek Link (1936) are generally credited with being the first to discuss the effect in print. However, this effect is more commonly associated with Einstein, who published an article on the subject in 1936. Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses. It was not until 1979 that this effect was confirmed by observation of the Twin QSO SBS 0957+561. Description Unlike an optical lens, a point-like gravitational lens produces a maximum deflection of light that passes closest to its center, and a minimum deflection of light that travels furthest from its center. Consequently, a gravitational lens has no single focal point, but a focal line. The term "lens" in the context of gravitational light deflection was first used by O.J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length". If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object (provided the lens has circular symmetry). If there is any misalignment, the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Khvolson, and quantif
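The factor-of-two difference noted above between the Newtonian corpuscular prediction and general relativity can be checked numerically: for a point mass, the GR weak-field deflection is α = 4GM/(c²b), where b is the impact parameter. The constants below are standard solar values; the function name is just for illustration.

```python
import math

# Weak-field light deflection by a point mass: alpha = 4GM / (c^2 * b),
# twice the Newtonian corpuscular value 2GM / (c^2 * b).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

def deflection_rad(mass, impact_parameter):
    # GR deflection angle in radians for a ray passing a point mass
    return 4 * G * mass / (c**2 * impact_parameter)

alpha = deflection_rad(M_sun, R_sun)
arcsec = math.degrees(alpha) * 3600
# a ray grazing the Sun is bent by about 1.75 arcseconds,
# the value confirmed by the 1919 eclipse observations
```

The 1/b dependence is also visible here: doubling the impact parameter halves the deflection, which is why a point-mass lens has a focal line rather than a focal point, as the article notes.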
https://en.wikipedia.org/wiki/Creatio%20ex%20nihilo
Creatio ex nihilo (Latin for "creation out of nothing") is the doctrine that matter is not eternal but had to be created by some divine creative act. It is a theistic answer to the question of how the universe came to exist. It is in contrast to ex nihilo nihil fit or "nothing comes from nothing", which means that all things were formed from preexisting things, an idea advanced by the Greek philosopher Parmenides (c. 540 – c. 480 BC) about the nature of all things, and later more formally stated by Titus Lucretius Carus (c. 99 – c. 55 BC). Understanding of concept through contrast Ex nihilo nihil fit: uncreated matter Ex nihilo nihil fit means that nothing comes from nothing. In ancient creation myths, the universe is formed from eternal formless matter, namely the dark and still primordial ocean of chaos. In Sumerian myth this cosmic ocean is personified as the goddess Nammu "who gave birth to heaven and earth" and had existed forever; in the Babylonian creation epic Enuma Elish pre-existent chaos is made up of fresh-water Apsu and salt-water Tiamat, and from Tiamat the god Marduk created Heaven and Earth; in Egyptian creation myths a pre-existent watery chaos personified as the god Nun and associated with darkness gave birth to the primeval hill (or in some versions a primeval lotus flower, or in others a celestial cow); and in Greek traditions the ultimate origin of the universe, depending on the source, is sometimes Oceanus (a river that circles the Earth), Night, or water. To these can be added the account of the Book of Genesis, which opens with God creating the heavens and the earth, separating and restraining the waters, not creating the waters themselves out of nothing. Whether this is truly creation 'ex nihilo' is, however, unclear. The Hebrew sentence which opens Genesis, Bereshit bara Elohim et hashamayim ve'et ha'aretz, can be interpreted in at least three ways: As a statement that the cosmos had an absolute beginning (In the beginning, God created the heavens and earth). As
https://en.wikipedia.org/wiki/Viscous%20damping
In continuum mechanics, viscous damping is a formulation of the damping phenomenon in which the source of the damping force is modeled as a function of the volume, shape, and velocity of an object traversing through a real fluid with viscosity. Typical examples of viscous damping in mechanical systems include: Fluid films between surfaces Fluid flow around a piston in a cylinder Fluid flow through an orifice Fluid flow within a journal bearing Viscous damping also refers to damping devices. Most often they damp motion by providing a force or torque opposing motion proportional to the velocity. This may be effected by fluid flow or motion of magnetic structures. The intended effect is to improve the damping ratio. Shock absorbers in cars Seismic retrofitting with viscous dampers Tuned mass dampers in tall buildings Deployment actuators in spacecraft Single-degree-of-freedom system In a single-degree-of-freedom system, the viscous damping model relates the damping force to velocity as F = −c·v, where c is the viscous damping coefficient, with SI units of N·s/m. This model adequately describes the damping force on a body that is moving at a moderate speed through a fluid. It is also the most common modeling choice for damping. See also Hysteresis Coulomb damping
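For the single-degree-of-freedom system m·x″ + c·x′ + k·x = 0, the effect of the viscous coefficient c on the response is usually summarized by the damping ratio ζ = c / (2√(km)); this is the quantity a damper is meant to improve. A minimal sketch, with illustrative mass and stiffness values:

```python
import math

# Single-degree-of-freedom system  m*x'' + c*x' + k*x = 0.
# The damping ratio zeta = c / (2*sqrt(k*m)) classifies the free response:
# underdamped (zeta < 1), critically damped (zeta = 1), overdamped (zeta > 1).
def damping_ratio(m, c, k):
    return c / (2 * math.sqrt(k * m))

m, k = 2.0, 800.0                 # kg, N/m (illustrative values)
c_crit = 2 * math.sqrt(k * m)     # critical damping coefficient, N*s/m -> 80
zeta = damping_ratio(m, 40.0, k)  # c = 40 N*s/m gives zeta = 0.5 (underdamped)
```

With ζ = 0.5 the free response is a decaying oscillation; raising c to c_crit suppresses the oscillation entirely, which is the usual design target for devices like shock absorbers.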
https://en.wikipedia.org/wiki/List%20of%20recreational%20number%20theory%20topics
This is a list of recreational number theory topics (see number theory, recreational mathematics). Listing here is not pejorative: many famous topics in number theory have origins in challenging problems posed purely for their own sake. See list of number theory topics for pages dealing with aspects of number theory with more consolidated theories. Number sequences Integer sequence Fibonacci sequence Golden mean base Fibonacci coding Lucas sequence Padovan sequence Figurate numbers Polygonal number Triangular number Square number Pentagonal number Hexagonal number Heptagonal number Octagonal number Nonagonal number Decagonal number Centered polygonal number Centered square number Centered pentagonal number Centered hexagonal number Tetrahedral number Pyramidal number Triangular pyramidal number Square pyramidal number Pentagonal pyramidal number Hexagonal pyramidal number Heptagonal pyramidal number Octahedral number Star number Perfect number Quasiperfect number Almost perfect number Multiply perfect number Hyperperfect number Semiperfect number Primitive semiperfect number Unitary perfect number Weird number Untouchable number Amicable number Sociable number Abundant number Deficient number Amenable number Aliquot sequence Super-Poulet number Lucky number Powerful number Primeval number Palindromic number Telephone number Triangular square number Harmonic divisor number Sphenic number Smith number Double Mersenne number Zeisel number Heteromecic number Niven numbers Superparticular number Highly composite number Highly totient number Practical number Juggler sequence Look-and-say sequence Digits Polydivisible number Automorphic number Armstrong number Self number Harshad number Keith number Kaprekar number Digit sum Persistence of a number Perfect digital invariant Happy number Perfect digit-to-digit invariant Factorion Emirp Palindromic prime Home prime Normal number Stoneham number Champernowne constant Absolutely normal number Repunit Repdigit Prime and r
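Many of the sequences listed above are defined simply enough to compute directly. As a small illustration, here are direct tests for two of them, perfect numbers and happy numbers; the function names are ad hoc and the divisor-sum routine is a straightforward trial-division sketch, not an optimized algorithm.

```python
def proper_divisor_sum(n):
    # sum of the proper divisors of n (trial division up to sqrt(n))
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_perfect(n):
    # a perfect number equals the sum of its proper divisors
    return n > 1 and proper_divisor_sum(n) == n

def is_happy(n):
    # repeatedly replace n by the sum of the squares of its digits;
    # happy numbers reach 1, the rest fall into a cycle
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

perfect = [n for n in range(2, 10000) if is_perfect(n)]
happy = [n for n in range(1, 25) if is_happy(n)]
```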
https://en.wikipedia.org/wiki/ATP-binding%20motif
An ATP-binding motif is a 250-residue sequence within an ATP-binding protein's primary structure. The binding motif is associated with a protein's structure and/or function. ATP is an energy-carrying molecule that can act as a coenzyme in a number of biological reactions. ATP is proficient at interacting with other molecules through a binding site. The ATP binding site is the environment in which ATP catalytically activates the enzyme and, as a result, is hydrolyzed to ADP. The binding of ATP causes a conformational change to the enzyme it is interacting with. The genetic and functional similarity of such a motif demonstrates micro-evolution: proteins have co-opted the same binding sequence from other enzymes rather than developing them independently. ATP binding sites, which may be representative of an ATP binding motif, are present in many proteins which require an input of energy (from ATP), such as active membrane transporters, microtubule subunits, flagellum proteins, and various hydrolytic and proteolytic enzymes. Primary sequence The short motifs involved in ATP binding are the Walker motifs, Walker A, also known as the P-loop, and Walker B, as well as the C motif and switch motif. Walker A motif The Walker A site has a primary amino acid sequence of GXXXXGKT or GXXXXGKS, where the letter X can represent any amino acid. Walker B motif The primary amino acid sequence of the Walker B site is hhhhD, in which h represents any hydrophobic amino acid. C motif The C motif, also known as the signature motif, LSGGQ motif, or the linker peptide, has a primary amino acid sequence of LSGGQ. Due to the variety of different amino acids that can be used in the primary sequences of both the Walker A and B sites, the non-variant amino acids within the sequences are highly conserved. A mutation of any of these amino acids will affect ATP binding or interfere with the catalytic activity of the enzyme. The primary amino acid sequence determines the three dimensional structure of each motif. Struct
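A consensus like the Walker A motif G-X-X-X-X-G-K-[T/S] maps directly onto a regular expression, which is how such motifs are commonly scanned for in sequence data. The fragment below is synthetic, made up purely for illustration; it is not taken from any real protein database entry.

```python
import re

# Walker A (P-loop) consensus G-X-X-X-X-G-K-[T/S]:
# '.' matches any residue (X), '[TS]' matches the final threonine/serine.
WALKER_A = re.compile(r"G.{4}GK[TS]")

sequence = "MKDLAEFRGPPGTGKTLLARAVAGE"   # hypothetical fragment
match = WALKER_A.search(sequence)        # finds "GPPGTGKT" at position 8
```

The same pattern style extends to the other motifs, e.g. a literal `LSGGQ` for the C motif; the Walker B consensus is harder to express because "hydrophobic" must be spelled out as an explicit residue class.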
https://en.wikipedia.org/wiki/Shooting%20and%20bouncing%20rays
The shooting and bouncing rays (SBR) method in computational electromagnetics was first developed for computation of radar cross section (RCS). Since then, the method has been generalized to be used also for installed antenna performance. The SBR method is an approximate method applicable at high frequencies. The method can be implemented for GPU computing, which makes the computation very efficient. Theory The first step in the SBR method is to use geometrical optics (GO, ray-tracing) for computing equivalent currents, either on metallic structures or on an exit aperture. The scattered field is thereafter computed by integrating these currents using physical optics (PO), by Kirchhoff's diffraction formula. The surface current on a perfect electrical conductor (PEC) is related to the incident magnetic field by J = 2 n̂ × H_inc, where n̂ is the outward unit normal of the surface. This approximation holds best for short wavelengths, and it assumes that the radius of curvature of the scatterer is large compared to the wavelength. Extending SBR for edge diffraction Since the approximation described above assumes that the radius of curvature is large compared to the wavelength, the diffraction from edges needs to be handled separately. The SBR method can be extended with the physical theory of diffraction (PTD) in order to include edge diffraction in the model. Implementation in commercial software The SBR method is implemented in the following commercial codes: Altair Feko (method there known as RL-GO - Ray Launching Geometrical Optics) CST Microwave Studio, Asymptotic Solver Ansys HFSS SBR+, previously Delcross Savant XGTD See also Computational electromagnetics
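The physical-optics current J = 2 n̂ × H_inc is evaluated per illuminated facet once ray tracing has delivered the incident field there. A minimal sketch of that single step, on one facet with an assumed geometry (this is an illustration of the formula, not a full SBR solver):

```python
# Physical-optics surface current on a PEC facet: J = 2 * (n x H_inc),
# valid in the high-frequency limit where each illuminated facet scatters
# locally like an infinite conducting plane.
def cross(a, b):
    # cross product of two 3-vectors given as tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def po_current(normal, h_inc):
    return tuple(2.0 * c for c in cross(normal, h_inc))

n = (0.0, 0.0, 1.0)   # facet normal along z
h = (1.0, 0.0, 0.0)   # incident magnetic field along x
J = po_current(n, h)  # current flows along y, magnitude 2*|H|
```

In a real SBR code this is done per ray footprint, with complex-valued phasor fields and the ray's phase carried along each bounce; the vector algebra per facet is exactly the step shown.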
https://en.wikipedia.org/wiki/A%20Language%20for%20Process%20Specification
A Language for Process Specification (ALPS) is a model and data exchange language developed by the National Institute of Standards and Technology in the early 1990s to capture and communicate process plans developed in the discrete and process manufacturing industries.
https://en.wikipedia.org/wiki/SHETRAN
SHETRAN is a hydrological modelling system for water flow, solute and sediment transport in river catchments. SHETRAN is a physically based, distributed model (PBDM) that can simulate the entire land phase of the hydrologic cycle, including surface water flow and groundwater flow. The plan area of the catchment in SHETRAN is usually in the range of one to a few thousand square kilometres, and the modelled depth of the subsurface is usually less than 100 m. In the 1980s the Système Hydrologique Européen (SHE) model was developed by a consortium of three European organizations: the Institute of Hydrology (the United Kingdom), SOGREAH (France) and DHI (Denmark). Its successors are MIKE SHE (DHI) and SHETRAN (School of Civil Engineering and Geosciences, Newcastle University). The SHE model was renamed SHETRAN at the School of Civil Engineering and Geosciences, Newcastle University, after the introduction of the sediment and solute transport component. Since then it has undergone further improvements. The biggest change was the introduction of a fully 3-dimensional subsurface or variably saturated subsurface (VSS) component. Recent changes have focused on making the model more user-friendly with the introduction of a Graphical user interface. This includes the automatic generation of river channels from a Digital elevation model so that a catchment simulation can be set up rapidly. See also Hydrological transport model
https://en.wikipedia.org/wiki/Primula%20hortensis
Primula hortensis is a name which has been applied to various hybrids in the genus Primula, e.g. to Primula × polyantha Mill. by Focke and to Primula × pubescens by Wittstein. The name Primula hortensis is not an accepted taxon name, however.
https://en.wikipedia.org/wiki/Win%E2%80%93stay%2C%20lose%E2%80%93switch
In psychology, game theory, statistics, and machine learning, win–stay, lose–switch (also win–stay, lose–shift) is a heuristic learning strategy used to model learning in decision situations. It was first proposed as an improvement over randomization in bandit problems. It was later applied to the prisoner's dilemma in order to model the evolution of altruism. The learning rule bases its decision only on the outcome of the previous play. Outcomes are divided into successes (wins) and failures (losses). If the play on the previous round resulted in a success, then the agent plays the same strategy on the next round. Alternatively, if the play resulted in a failure, the agent switches to another action. A large-scale empirical study of players of the game rock, paper, scissors shows that a variation of this strategy is adopted by real-world players of the game, instead of the Nash equilibrium strategy of choosing entirely at random between the three options.
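The learning rule described above can be sketched in a few lines for a two-action setting such as a two-armed bandit: keep the current action after a success, switch after a failure. The outcome sequence below is a fixed toy input so the run is deterministic; the function name is ad hoc.

```python
# Win-stay, lose-switch with two actions (arms 0 and 1).
# `outcomes[t]` is True if the play at round t was a success.
def wsls(outcomes, start_arm=0):
    arm, history = start_arm, []
    for win in outcomes:
        history.append(arm)      # play the current arm this round
        if not win:
            arm = 1 - arm        # lose -> switch to the other arm
    return history

plays = wsls([True, False, False, True, False])
# win (stay on 0), lose (switch to 1), lose (switch to 0), win (stay), lose
```

Note the rule is memoryless beyond the last round, which is exactly the property the article attributes to it: the whole state is the current action plus the most recent outcome.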
https://en.wikipedia.org/wiki/Herglotz%E2%80%93Zagier%20function
In mathematics, the Herglotz–Zagier function, named after Gustav Herglotz and Don Zagier, is the function introduced by Zagier, who used it to obtain a Kronecker limit formula for real quadratic fields.
https://en.wikipedia.org/wiki/William%20Kneale
William Calvert Kneale (22 June 1906 – 24 June 1990) was an English logician best known for his 1962 book The Development of Logic, a history of logic from its beginnings in Ancient Greece written with his wife Martha. Kneale was also known as a philosopher of science and the author of a book on probability and induction. Educated at the Liverpool Institute High School for Boys, he later became a fellow of Exeter College, Oxford, and in 1960 succeeded the linguistic philosopher J. L. Austin as White's Professor of Moral Philosophy. He retired in 1966. Life and work Kneale's interest in the history of logic began in the 1940s. The focus of much of Kneale's early work was the legacy of the work of the 19th-century logician George Boole. His first major publication in the history of logic was his paper "Boole and the Revival of Logic," published in the philosophy journal Mind in 1948. He was also the author of a number of papers in philosophical logic, particularly on the nature of truth for natural languages, and the role that linguistic concepts play in the treatment of logical paradoxes. Kneale worked on his great history of logic from 1947 to 1957 together with his wife Martha (who was responsible for the chapters on the ancient Greeks). The result was the 800-page The Development of Logic, first published in 1962, which went through five impressions before going into a second, paperback, edition in 1984. The 'History' is commonly referred to in the academic world simply as "Kneale and Kneale". It was the only major history of logic available in English in the mid-twentieth century, and the first major history of logic in English since The Development of Symbolic Logic published in 1906 by A. T. Shearman. The treatise has been a standard work in the history of logic for decades. In 1938 he married Martha Hurst; they had two children, George (born 1942) and Jane (married name Heal; born 1946).
https://en.wikipedia.org/wiki/Network%20redirector
In DOS and Windows, a network redirector, or redirector, is an operating system driver that sends data to and receives data from a remote device. A network redirector provides mechanisms to locate, open, read, write, and delete files and submit print jobs. It provides application services such as named pipes and MailSlots. When an application needs to send or receive data from a remote device, it sends a call to the redirector. The redirector provides the functionality of the presentation layer of the OSI model. Networks Hosts on a network communicate by means of client software such as shells, redirectors and requesters. In Microsoft Networking, the network redirectors are implemented as Installable File System (IFS) drivers. See also Universal Naming Convention (UNC)
https://en.wikipedia.org/wiki/Riesz%20sequence
In mathematics, a sequence of vectors (xn) in a Hilbert space H is called a Riesz sequence if there exist constants 0 < c ≤ C < +∞ such that c Σ|an|² ≤ ‖Σ an xn‖² ≤ C Σ|an|² for all sequences of scalars (an) in the ℓp space ℓ2. A Riesz sequence is called a Riesz basis if, in addition, its closed linear span is all of H. Alternatively, one can define the Riesz basis as a family of the form (xn) = (T en), where (en) is an orthonormal basis for H and T is a bounded bijective operator. Paley-Wiener criterion Let (en) be an orthonormal basis for a Hilbert space H and let (fn) be "close" to (en) in the sense that ‖Σ an (en − fn)‖ ≤ λ (Σ|an|²)^(1/2) for some constant λ, 0 ≤ λ < 1, and arbitrary scalars a1, ..., an. Then (fn) is a Riesz basis for H. Hence, Riesz bases need not be orthonormal. Theorems If H is a finite-dimensional space, then every basis of H is a Riesz basis. Let f be in the Lp space L2(R), let fn(x) = f(x − n), and let f̂ denote the Fourier transform of f. Define constants c and C with 0 < c ≤ C < +∞. Then the following are equivalent: c Σ|an|² ≤ ‖Σ an fn‖² ≤ C Σ|an|² for any scalars (an) in ℓ2; c ≤ Σk |f̂(ω + 2πk)|² ≤ C for almost every ω. The first of the above conditions is the definition for (fn) to form a Riesz basis for the space it spans. See also Orthonormal basis Hilbert space Frame of a vector space
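For a finite family of vectors, the optimal Riesz constants c and C are the smallest and largest eigenvalues of the Gram matrix G[i][j] = ⟨xi, xj⟩, since ‖Σ an xn‖² = a*Ga. A minimal pure-Python sketch for a pair of real 2-vectors (the 2×2 eigenvalue formula is hand-rolled here for illustration, not a general eigensolver):

```python
# Riesz bounds of a finite vector family from its Gram matrix.
def gram(vectors):
    # Gram matrix of real vectors: G[i][j] = <v_i, v_j>
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[dot(u, v) for v in vectors] for u in vectors]

def eig2(m):
    # eigenvalues of a 2x2 matrix via trace/determinant
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

v1, v2 = (1.0, 0.0), (1.0, 1.0)     # linearly independent, not orthogonal
lo, hi = eig2(gram([v1, v2]))       # optimal Riesz constants c and C
```

Orthonormal vectors give the identity Gram matrix, so c = C = 1; the farther the eigenvalue spread, the "less orthonormal" the family, with c approaching 0 as the vectors approach linear dependence.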
https://en.wikipedia.org/wiki/Sony%20Rolly
Rolly is an egg-shaped digital robotic music player made by Sony, combining music functions with robotic dancing. It has two wheels that allow it to rotate and spin, as well as two bands of colored LED light running around its edge and cup-like "wings" (or "arms" according to the Sony sonystyle USA website) which can open and close on either end, all of which can be synchronized to the music being played. Rolly has several operating modes, including Bluetooth functionality. Rolly can play music streamed directly from any Bluetooth-enabled cell phone, computer, or mp3 player. Rolly is able to dance along to streaming music, but the Rolly Choreographer software produces far better results when it analyzes tracks and creates motion files before loading them onto Rolly. The Rolly player uses files to store motion data along with a particular music track. Pre-made motion files can be downloaded and uploaded from Rolly Go. Rolly also has an accelerometer which detects if the player is lying horizontally or being held upright. When held upright, track next/previous can be controlled by the top wheel and volume up/down can be controlled by the bottom wheel. Tracks can be shuffled by holding the unit upright, pressing the button once, then shaking the unit up and down (light color changes to purple). You can return to continuous play (light color blue) by simply repeating this process. Rolly has 2 gigabytes of flash memory to store music files. In some markets it came pre-loaded with the track Boogie Wonderland by Earth, Wind & Fire. On August 20, 2007, Sony launched an initial teaser advertising campaign for the product. The product was unveiled on September 20, 2007, and went on sale in Japan on September 29, and was for sale at the Sony sonystyle USA website for US$229.99, down from US$399.99. It is available in black and white. Sony offers the "Engrave it." option for this item, and a number of accessories, including "arms" in different colors. Arou
https://en.wikipedia.org/wiki/Elainabella
Elainabella is a 560 million-year-old fossil of the first multicellular alga. It was discovered in 2014 in Nevada by Steve Rowland and Margarita Rodriguez, a paleontologist and an alumna, respectively, of the University of Nevada, Las Vegas. The specific species name for the Elainabella fossil found is E. deepspringensis. Elainabella is named after Elaine Hatch Sawyer, Rodriguez's mentor, and deepspringensis is named after the Deep Spring Formation, the rock layer where the fossils were found. The fossil is from the Ediacaran Period and was found near Gold Point, Esmeralda County, Nevada in the Deep Spring Lagerstätte. The Elainabella holotype is housed in the Nevada State Museum, Las Vegas. Soft tissue is normally not preserved in fossils. However, Elainabella shows cellular structure. The fossil is quite small, only one millimeter wide, and has subsidiary, thinner branches. It was interpreted as being the thallus of a multicellular alga. It does not have any shell; its preservation represents a case of soft-bodied preservation. Rowland believes that, when alive, Elainabella would have lived as part of a microbial reef community. These reefs occurred on the West Coast of North America, which was near the equator at the time. Elainabella does not comfortably fit into any known Ediacaran or metazoan taxon. While it is similar to some green algae, for instance in having similar segmented branches, the discoverers did not place Elainabella with them because of differences such as in size. The cellular structure of Elainabella is unlike that of any siphonous taxon.
https://en.wikipedia.org/wiki/Object%20storage
Object storage (also known as object-based storage) is a computer data storage architecture that manages data as objects, as opposed to other storage architectures like file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level (object-storage device), the system level, and the interface level. In each case, object storage seeks to enable capabilities not addressed by other storage architectures, like interfaces that are directly programmable by the application, a namespace that can span multiple instances of physical hardware, and data-management functions like data replication and data distribution at object-level granularity. Object storage systems allow retention of massive amounts of unstructured data in which data is written once and read once (or many times). Object storage is used for purposes such as storing objects like videos and photos on Facebook, songs on Spotify, or files in online collaboration services, such as Dropbox. One of the limitations of object storage is that it is not intended for transactional data, as object storage was not designed to replace NAS file access and sharing; it does not support the locking and sharing mechanisms needed to maintain a single, accurately updated version of a file. History Origins In 1995, research led by Garth Gibson on Network-Attached Secure Disks first promoted the concept of splitting less common operations, like namespace manipulations, from common operations, like reads and writes, to optimize the performance and scale of both. In the same year, a Belgian company, FilePool, was established to build the basis for archiving functions. Object storage was proposed at Gibson's Carnegie Mellon University lab as a research project in 1996. Another key conce
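The data model described above, an object holding its data, a variable amount of metadata, and a globally unique identifier in a flat namespace, can be sketched in a few lines. The interface below is hypothetical, not any particular vendor's API, and the in-memory dict stands in for the distributed backend a real system would use.

```python
import uuid

# Minimal in-memory sketch of the object-storage data model: a flat
# namespace of immutable objects, each with data, caller-supplied
# metadata, and a generated globally unique identifier.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **metadata) -> str:
        # objects are written once; the store hands back the only key
        oid = str(uuid.uuid4())
        self._objects[oid] = {"data": data, "metadata": dict(metadata)}
        return oid

    def get(self, oid: str):
        obj = self._objects[oid]
        return obj["data"], obj["metadata"]

store = ObjectStore()
key = store.put(b"...jpeg bytes...", content_type="image/jpeg")
data, meta = store.get(key)
```

Note what is missing compared with a file system: no directories, no rename, no partial in-place update, and no locking, which reflects the write-once, read-many usage and the transactional-data limitation discussed above.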
https://en.wikipedia.org/wiki/Loewy%20decomposition
In the study of differential equations, the Loewy decomposition breaks every linear ordinary differential equation (ODE) into what are called largest completely reducible components. It was introduced by Alfred Loewy. Solving differential equations is one of the most important subfields in mathematics. Of particular interest are solutions in closed form. Breaking ODEs into largest irreducible components reduces the process of solving the original equation to solving irreducible equations of lowest possible order. This procedure is algorithmic, so that the best possible answer for solving a reducible equation is guaranteed. A detailed discussion may be found in the literature. Loewy's results have been extended to linear partial differential equations (PDEs) in two independent variables. In this way, algorithmic methods for solving large classes of linear PDEs have become available. Decomposing linear ordinary differential equations Let D = d/dx denote the derivative with respect to the variable x. A differential operator of order n is a polynomial of the form L = D^n + a1 D^(n-1) + ... + a(n-1) D + an, where the coefficients ai, i = 1, ..., n, are from some function field, the base field of L. Usually it is the field of rational functions in the variable x, i.e. ai ∈ Q(x). If y is an indeterminate with dy/dx ≠ 0, Ly becomes a differential polynomial, and Ly = 0 is the differential equation corresponding to L. An operator L of order n is called reducible if it may be represented as the product of two operators L1 and L2, both of order lower than n. Then one writes L = L1 L2, i.e. juxtaposition means the operator product; it is defined by the rule D a = a D + a'; L1 is called a left factor of L, L2 a right factor. By default, the coefficient domain of the factors is assumed to be the base field of L, possibly extended by some algebraic numbers. If an operator does not allow any right factor it is called irreducible. For any two operators L1 and L2 the least common left multiple is the operator of lowest order such that both L1 and L2 divide it from the right. The greatest common right divisor is
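As a small illustration of reducibility (an example of my own, not from the article), the constant-coefficient second-order operator D^2 - 1 factors into first-order operators, and in this case in both orders:

```latex
% A hedged illustration of reducibility: L = D^2 - 1 admits first-order
% left and right factors (here the factorizations commute, which is
% special to constant coefficients).
\[
  L \;=\; D^2 - 1 \;=\; (D - 1)(D + 1) \;=\; (D + 1)(D - 1),
\]
% so solving L y = 0 reduces to the two first-order equations
\[
  (D + 1)\,y = 0 \quad\text{and}\quad (D - 1)\,y = 0,
\]
% with solutions y = c_1 e^{-x} and y = c_2 e^{x}, which together span
% the solution space of y'' - y = 0.
```

With variable coefficients the two orders of factors generally differ, which is exactly why left and right factors are distinguished in the definition above.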
https://en.wikipedia.org/wiki/COOL-ER
The COOL-ER is a discontinued e-book reader from UK company Interead. The device is compatible with both Mac and Windows computers, comes in a variety of colors, and supports e-books in English, Spanish, Portuguese, German, French, Dutch, Russian, Korean, Ukrainian, Mandarin and Japanese. The device is commonly compared with the Amazon Kindle. Reviewers cite the lower price, MP3 support, and lighter weight as advantages; but complain of the COOL-ER's lack of wireless connectivity and button insensitivity. On 8 June 2010, Interead went into liquidation after failing to secure funding. Specifications Dimensions Height (mm): 183 Width (mm): 117.74 Depth (mm): 10.89 Volume (litres): 0.23 Weight (g): 178 Screen Size: 6" DPI: 170 pixels per inch Levels of Greyscale: 8 Type: E Ink Vizplex Manufacturer: PVI (E Ink) Hardware Storage: 1 GB Memory: 128 MB Processor: Samsung S3C2440 ARM 400 MHz Battery: Li-Polymer battery (1000 mAh) Battery Life: 8000 page turns Memory Expansion: SD (up to 4GB) Wireless: No Compatibility PC: Yes Mac: Yes Supported formats: PDF, EPUB, FB2, RTF, TXT, HTML, PRC, JPG, MP3
https://en.wikipedia.org/wiki/CCDC11
Coiled-Coil Domain Containing 11, also known as CCDC11, is a protein that is encoded by the CCDC11 gene, located on chromosome 18 in humans. Overview The CCDC11 gene is located on chromosome 18q21.1 and is made of 39,303 base pairs. The CCDC11 protein is made of 514 amino acids and has a mass of 61,835 Da. Its aliases include coiled-coil domain containing protein 11, FLJ32743, and OTTHUMP00000163503. Homology CCDC11 has orthologs as far back as Betaproteobacteria, a class of Proteobacteria. The following table represents a small selection of orthologs found using searches in BLAST and BLAT. This is by no means a comprehensive list; however, it shows the diversity of species in which CCDC11 is found. Gene neighborhood According to NCBI, CCDC11 has methyl-CpG-binding domain protein 1 (MBD1) upstream and myosin VB (MYO5B) downstream on chromosome 18. CCDC11, MBD1 and MYO5B are all oriented negatively. Note the close proximity between MBD1 and CCDC11. Predicted Post-Translational Modification Tools on the ExPASy Proteomics site predict the following post-translational modifications: One tool on ExPASy, iPSORT, showed no signal peptide; however, it indicated CCDC11 to be mitochondrial. Another tool on ExPASy, SOSUI, indicated the protein to be a soluble protein.
https://en.wikipedia.org/wiki/Disjunctive%20sequence
A disjunctive sequence is an infinite sequence (over a finite alphabet of characters) in which every finite string appears as a substring. For instance, the binary Champernowne sequence 0 1 00 01 10 11 000 001 010 011 100 101 110 111 0000 ..., formed by concatenating all binary strings in shortlex order, clearly contains all the binary strings and so is disjunctive. (The spaces above are not significant and are present solely to make clear the boundaries between strings.) The complexity function of a disjunctive sequence S over an alphabet of size k is pS(n) = k^n. Any normal sequence (a sequence in which each string of equal length appears with equal frequency) is disjunctive, but the converse is not true. For example, letting 0^n denote the string of length n consisting of all 0s, consider the sequence obtained by splicing exponentially long strings of 0s into the shortlex ordering of all binary strings. Most of this sequence consists of long runs of 0s, and so it is not normal, but it is still disjunctive. A disjunctive sequence is recurrent but never uniformly recurrent/almost periodic. Examples The following result can be used to generate a variety of disjunctive sequences: If a1, a2, a3, ..., is a strictly increasing infinite sequence of positive integers such that lim n → ∞ (a(n+1) / an) = 1, then for any positive integer m and any integer base b ≥ 2, there is an an whose expression in base b starts with the expression of m in base b. (Consequently, the infinite sequence obtained by concatenating the base-b expressions for a1, a2, a3, ..., is disjunctive over the alphabet {0, 1, ..., b-1}.) Two simple cases illustrate this result: an = n^k, where k is a fixed positive integer. (In this case, lim n → ∞ (a(n+1) / an) = lim n → ∞ ((n+1)^k / n^k) = lim n → ∞ (1 + 1/n)^k = 1.) E.g., using base-ten expressions, the sequences 123456789101112... (k = 1, positive natural numbers), 1491625364964... (k = 2, squares), 182764125216343... (k = 3, cubes), etc., are disjunctive on {0,1,2,3,4,5,6,7,8,9}. an = pn, where pn is the nt
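The binary Champernowne construction is easy to carry out mechanically: generate all binary strings in shortlex order and concatenate them. The sketch below builds a finite prefix and checks that a few short strings already occur in it (each string must appear, since it is itself one of the concatenated blocks).

```python
from itertools import count, product

def binary_champernowne(limit: int) -> str:
    """Concatenate all binary strings in shortlex order (0, 1, 00, 01, ...)
    and return the first `limit` characters of the resulting sequence."""
    out = []
    total = 0
    for n in count(1):                      # block length: 1, 2, 3, ...
        for bits in product("01", repeat=n):  # lexicographic within a length
            out.append("".join(bits))
            total += n
            if total >= limit:
                return "".join(out)[:limit]

prefix = binary_champernowne(200)
# Every short binary string appears in a long enough prefix.
print(all(w in prefix for w in ("0", "1", "010", "1101")))
```

The prefix begins 0 1 00 01 10 11 ..., i.e. "0100011011...", matching the sequence displayed above.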
https://en.wikipedia.org/wiki/Distance-vector%20routing%20protocol
A distance-vector routing protocol in data networks determines the best route for data packets based on distance. Distance-vector routing protocols measure the distance by the number of routers a packet has to pass; one router counts as one hop. Some distance-vector protocols also take into account network latency and other factors that influence traffic on a given route. To determine the best route across a network, routers using a distance-vector protocol exchange information with one another, usually routing tables plus hop counts for destination networks and possibly other traffic information. Distance-vector routing protocols also require that a router inform its neighbours of network topology changes periodically. Distance-vector routing protocols use the Bellman–Ford algorithm to calculate the best route. Another way of calculating the best route across a network is based on link cost, and is implemented through link-state routing protocols. The term distance vector refers to the fact that the protocol manipulates vectors (arrays) of distances to other nodes in the network. The distance vector algorithm was the original ARPANET routing algorithm and was implemented more widely in local area networks with the Routing Information Protocol (RIP). Overview Distance-vector routing protocols use the Bellman–Ford algorithm. In these protocols, each router does not possess information about the full network topology. Each router advertises its calculated distance values (DVs) to other routers and receives similar advertisements from them; additional advertisements are triggered by changes in the local network or reports from neighbouring routers. Using these routing advertisements each router populates its routing table. In the next advertisement cycle, a router advertises updated information from its routing table. This process continues until the routing tables of each router converge to stable values. Some of these protocols have the disadvantage of slow convergence. Examples of distance-vector ro
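The advertisement cycle described above can be sketched as repeated Bellman-Ford relaxation: each router merges its neighbours' tables until nothing changes. The topology below is invented for illustration, with every link costing one hop.

```python
# Hypothetical four-router topology; each link is one hop.
links = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "C": 1, "D": 1},
    "C": {"A": 1, "B": 1},
    "D": {"B": 1},
}

INF = float("inf")
nodes = sorted(links)
# Initially each router knows only the distance to itself.
tables = {r: {d: (0 if d == r else INF) for d in nodes} for r in nodes}

changed = True
while changed:  # advertisement cycles, repeated until the tables converge
    changed = False
    for r in nodes:
        for neighbour, cost in links[r].items():
            # Merge the neighbour's advertised distances, Bellman-Ford style.
            for dest in nodes:
                via = cost + tables[neighbour][dest]
                if via < tables[r][dest]:
                    tables[r][dest] = via
                    changed = True

print(tables["A"]["D"])  # A reaches D via B in 2 hops
```

In a real protocol each router only ever sees its neighbours' vectors, not the whole `tables` dictionary; the loop above simulates the exchange centrally for brevity.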
https://en.wikipedia.org/wiki/Bacterial%20secretion%20system
Bacterial secretion systems are protein complexes present on the cell membranes of bacteria for secretion of substances. Specifically, they are the cellular devices used by pathogenic bacteria to secrete their virulence factors (mainly proteins) to invade the host cells. They can be classified into different types based on their specific structure, composition and activity. Generally, proteins can be secreted through two different processes. One process is a one-step mechanism in which proteins from the cytoplasm of bacteria are transported and delivered directly through the cell membrane into the host cell. Another involves a two-step activity in which the proteins are first transported across the inner cell membrane, then deposited in the periplasm, and finally through the outer cell membrane into the host cell. This broad distinction roughly separates Gram-negative diderm bacteria from Gram-positive monoderm bacteria, but the classification is by no means clear and complete. There are at least eight types specific to Gram-negative bacteria, four to Gram-positive bacteria, while two are common to both. In addition, there is appreciable difference between diderm bacteria with lipopolysaccharide on the outer membrane (diderm-LPS) and those with mycolic acid (diderm-mycolate). Export pathways The export pathway is responsible for crossing the inner cell membrane in diderms, and the only cell membrane in monoderms. Sec system The general secretion (Sec) system involves secretion of unfolded proteins that first remain inside the cells. In Gram-negative bacteria, the secreted protein is sent to either the inner membrane or the periplasm. But in Gram-positive bacteria, the protein can stay in the cell or is mostly transported out of the bacteria using other secretion systems. Among Gram-negative bacteria, Escherichia coli, Vibrio cholerae, Klebsiella pneumoniae, and Yersinia enterocolitica use the Sec system. Staphylococcus aureus and Listeria monocytogenes a
https://en.wikipedia.org/wiki/LAPACK
LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines. LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors, and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK. Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions. Naming scheme Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit. A LAPACK subroutine name is in the form pmmaaa, where: p is a one-letter code denoting the type of numerical constants used. S, D stand for real floating-point arithmetic respectively in single and double precision, while C and Z stand for complex arithmetic
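The pmmaaa naming scheme can be decoded mechanically. The precision codes below are the ones given in the text; the matrix-type and operation tables are a small illustrative subset I have filled in (e.g. GE for general, SV for solving a linear system), not the full LAPACK catalogue.

```python
# Decoding a LAPACK routine name of the form pmmaaa, e.g. "DGESV".
PRECISION = {
    "S": "single-precision real",
    "D": "double-precision real",
    "C": "single-precision complex",
    "Z": "double-precision complex",
}
# Illustrative subset only; LAPACK defines many more codes.
MATRIX_TYPE = {"GE": "general", "TR": "triangular", "SY": "symmetric"}
OPERATION = {"SV": "solve linear system", "TRF": "factorize"}

def decode(name: str) -> str:
    name = name.upper()
    p, mm, aaa = name[0], name[1:3], name[3:]
    return "{}, {} matrix: {}".format(PRECISION[p], MATRIX_TYPE[mm], OPERATION[aaa])

print(decode("DGESV"))
# double-precision real, general matrix: solve linear system
```

So DGESV solves a general double-precision real linear system, and SSYTRF factorizes a single-precision symmetric matrix.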
https://en.wikipedia.org/wiki/Polylogarithm
In mathematics, the polylogarithm (also known as Jonquière's function, for Alfred Jonquière) is a special function Li_s(z) of order s and argument z. Only for special values of s does the polylogarithm reduce to an elementary function such as the natural logarithm or a rational function. In quantum statistics, the polylogarithm function appears as the closed form of integrals of the Fermi–Dirac distribution and the Bose–Einstein distribution, and is also known as the Fermi–Dirac integral or the Bose–Einstein integral. In quantum electrodynamics, polylogarithms of positive integer order arise in the calculation of processes represented by higher-order Feynman diagrams. The polylogarithm function is equivalent to the Hurwitz zeta function — either function can be expressed in terms of the other — and both functions are special cases of the Lerch transcendent. Polylogarithms should not be confused with polylogarithmic functions, nor with the offset logarithmic integral Li(z), which has the same notation without the subscript. The polylogarithm function is defined by a power series in z, which is also a Dirichlet series in s: Li_s(z) = Σ_{k≥1} z^k / k^s = z + z^2/2^s + z^3/3^s + ... This definition is valid for arbitrary complex order s and for all complex arguments z with |z| < 1; it can be extended to |z| ≥ 1 by the process of analytic continuation. (Here the denominator k^s is understood as exp(s ln k).) The special case s = 1 involves the ordinary natural logarithm, Li_1(z) = −ln(1 − z), while the special cases s = 2 and s = 3 are called the dilogarithm (also referred to as Spence's function) and trilogarithm respectively. The name of the function comes from the fact that it may also be defined as the repeated integral of itself: Li_{s+1}(z) = ∫_0^z Li_s(t)/t dt; thus the dilogarithm is an integral of a function involving the logarithm, and so on. For nonpositive integer orders s, the polylogarithm is a rational function. Properties In the case where the order s is an integer, it will be represented by Li_n(z) (or Li_{−n}(z) when negative). It is often convenient to define μ = ln(z), where ln is the principal branch of the complex logarithm, so that −π < Im(μ) ≤ π. Also, all e
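For |z| < 1 the defining series converges quickly, so it can be evaluated directly by truncation. The sketch below checks the series against the well-known closed form for the dilogarithm at z = 1/2, Li_2(1/2) = π²/12 − (ln 2)²/2.

```python
import math

def polylog(s: float, z: float, terms: int = 200) -> float:
    """Truncated power series Li_s(z) = sum_{k>=1} z^k / k^s, valid for |z| < 1."""
    return sum(z**k / k**s for k in range(1, terms + 1))

# Known closed form for the dilogarithm at z = 1/2.
approx = polylog(2, 0.5)
exact = math.pi**2 / 12 - math.log(2)**2 / 2
print(abs(approx - exact) < 1e-12)
```

With z = 1/2 the truncation error after 200 terms is far below floating-point precision; the series also recovers the s = 1 case, since Li_1(1/2) = −ln(1 − 1/2) = ln 2.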
https://en.wikipedia.org/wiki/Network%20theory%20of%20aging
The network theory of aging supports the idea that multiple connected processes contribute to the biology of aging. Kirkwood and Kowald helped to establish the first model of this kind by connecting theories and predicting specific mechanisms. In departure from investigating a single mechanistic cause or single molecules that lead to senescence, the network theory of aging takes a systems biology view to integrate theories in conjunction with computational models and quantitative data related to the biology of aging. Implications The free radical theory, describing the reactions of free radicals, antioxidants and proteolytic enzymes, was computationally connected with the protein error theory to describe the error propagation loops within the cellular translation machinery. The study of gene networks revealed proteins associated with aging to have significantly higher connectivity than expected by chance. Investigation of aging on multiple levels of biological organization contributed to a physiome view, from genes to organisms, predicting lifespans based on scaling laws, fractal supply networks and metabolism as well as aging-related molecular networks. The network theory of aging has encouraged the development of databases related to human aging. Proteomic network maps suggest a relationship between the genetics of development and the genetics of aging. Hierarchical Elements The network theory of aging provides a deeper look at the damage and repair processes at the cellular level and the ever-changing balance between those processes. To fully understand the network theory as it is applied to aging, one must look at the different hierarchical elements of the theory as it pertains to aging. Elementary particles of quantum systems- The aging process is described as an equation where a structure in an unbalanced state begins to change, and that is seen primarily in the actions of quantum particles. Monomers of biological macro-molecules- After a while, differen
https://en.wikipedia.org/wiki/Nobel%20Committee%20for%20Physics
The Nobel Committee for Physics is the Nobel Committee responsible for proposing laureates for the Nobel Prize for Physics. The Nobel Committee for Physics is appointed by the Royal Swedish Academy of Sciences. It usually consists of Swedish professors of physics who are members of the Academy, although the Academy in principle could appoint anyone to the Committee. The Committee is a working body without decision power, and the final decision to award the Nobel Prize for Physics is taken by the entire Royal Swedish Academy of Sciences, after having a first discussion in the Academy's Class for Physics. Current members The members of the Committee (as of 2023) are: Ulf Danielsson (Secretary) David Haviland Anders Irbäck Eva Olsson (Chair) John Wettlaufer Ellen Moons Co-opted members Mats Larsson Olle Eriksson Göran Johansson Mark Pearce Secretary The secretary takes part in the meeting, but cannot cast a vote unless the secretary is also a member of the Committee. Until 1973, the Nobel Committees for Physics and Chemistry had a common secretary. Wilhelm Palmær, 1900–1926 Arne Westgren, 1926–1943 Arne Ölander, 1943–1965 Arne Magnéli, 1966–1973 Bengt Nagel, 1974–1988 Anders Bárány, 1989–2003 Lars Bergström, 2004–2015 Gunnar Ingelman, 2016–present Former members Hugo Hildebrand Hildebrandsson, 1900–1910 Robert Thalén, 1900–1903 Klas Bernhard Hasselberg, 1900–1922 Knut Ångström, 1900–1909 Svante Arrhenius, 1900–1927 Gustaf Granqvist, 1904–1922 Vilhelm Carlheim-Gyllensköld, 1910–1934 Allvar Gullstrand, 1911–1929 Carl Wilhelm Oseen, 1923–1944 Manne Siegbahn, 1923–1961 (chairman ?–1957) Henning Pleijel, 1928–1947 Erik Hulthén, 1929–1962 (chairman 1958–1962) Axel E. Lindh, 1935–1960 Ivar Waller, 1945–1972 Gustaf Ising, 1947–1953 Oskar Klein, 1954–1965 Bengt Edlén, 1961–1976 Erik Rudberg, 1963–1972 (chairman 1963–1972) Kai Siegbahn, 1963–1974 (chairman 1973–1974) Lamek Hulthén, 1966–1979 (chairman 1975–1979) Per-Olov Löwd
https://en.wikipedia.org/wiki/MAX232
The MAX232 is an integrated circuit by Maxim Integrated Products, now a subsidiary of Analog Devices, that converts signals from a TIA-232 (RS-232) serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual transmitter / dual receiver that typically is used to convert the RX, TX, CTS, RTS signals. The drivers provide TIA-232 voltage level outputs (about ±7.5 volts) from a single 5-volt supply by on-chip charge pumps and external capacitors. This makes it useful for implementing TIA-232 in devices that otherwise do not need any other voltages. The receivers translate the TIA-232 input voltages (up to ±25 volts, though the MAX232 supports up to ±30 volts) down to standard 5 volt TTL levels. These receivers have a typical threshold of 1.3 volts and a typical hysteresis of 0.5 volts. The MAX232 replaced an older pair of chips, the MC1488 and MC1489, that performed similar RS-232 translation. The MC1488 quad transmitter chip required 12 volt and −12 volt power, and the MC1489 quad receiver chip required 5 volt power. The main disadvantages of this older solution were the ±12 volt power requirement, support for only 5 volt digital logic, and the need for two chips instead of one. History The MAX232 was proposed by Charlie Allen and designed by Dave Bingham. Maxim Integrated Products announced the MAX232 no later than 1986. Versions The later MAX232A is backward compatible with the original MAX232 but may operate at higher baud rates and can use smaller external capacitors, 0.1 μF in place of the 1.0 μF capacitors used with the original device. The newer MAX3232 and MAX3232E are also backwards compatible, but operate over a broader voltage range, from 3 to 5.5 V. Pin-to-pin compatible versions from other manufacturers are ICL232, SP232, ST232, ADM232 and HIN232. Texas Instruments makes compatible chips, using MAX232 as the part number. Voltage levels The MAX232 translates a TTL logic 0 input to between +3 and +15 V, and changes a TTL logic 1 input to bet
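The level translation is easy to model: RS-232 uses inverted logic, so a TTL 0 becomes a positive line voltage and a TTL 1 a negative one. The sketch below uses the nominal ±7.5 V swing and 1.3 V receiver threshold quoted above; it ignores hysteresis and is only an idealized model, not a description of the chip's internals.

```python
# Idealized model of MAX232-style level translation (hysteresis ignored).
def ttl_to_rs232(bit: int, swing: float = 7.5) -> float:
    """Driver: TTL 0 -> positive line voltage, TTL 1 -> negative (inverted logic)."""
    return +swing if bit == 0 else -swing

def rs232_to_ttl(volts: float, threshold: float = 1.3) -> int:
    """Receiver: voltages above the ~1.3 V threshold read as TTL 0."""
    return 0 if volts > threshold else 1

bits = [0, 1, 1, 0]
line = [ttl_to_rs232(b) for b in bits]
print([rs232_to_ttl(v) for v in line])  # round-trips back to [0, 1, 1, 0]
```

The inversion in both directions is the point: driver and receiver each invert, so a driver-receiver pair is transparent to the logic levels.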
https://en.wikipedia.org/wiki/Mammoth%20steppe
During the Last Glacial Maximum, the mammoth steppe, also known as steppe-tundra, was once the Earth's most extensive biome. It stretched east-to-west, from the Iberian Peninsula in the west of Europe, across Eurasia to North America, through Beringia (what is today Alaska) and Canada; from north-to-south, the steppe reached from the arctic islands southward to China. The mammoth steppe was cold and dry, and relatively featureless, though topography and geography varied considerably throughout. Some areas featured rivers which, through erosion, naturally created gorges, gulleys, or small glens. The continual glacial recession and advancement over millennia contributed more to the formation of larger valleys and different geographical features. Overall, however, the steppe is known to be flat and expansive grassland. The vegetation was dominated by palatable, high-productivity grasses, herbs and willow shrubs. The animal biomass was dominated by species such as reindeer, muskox, saiga antelope, steppe bison, horses, woolly rhinoceros and woolly mammoth. These herbivores, in turn, were followed and preyed upon by various carnivores, such as brown bears, Panthera spelaea (the cave or steppe-lion), scimitar cats, wolverines and wolves, among others. This ecosystem covered wide areas of the northern part of the globe, thrived for approximately 100,000 years without major changes, but then diminished to small regions around 12,000 years ago. Modern humans began to inhabit the biome following their expansion out of Africa, reaching the Arctic Circle in Northeast Siberia by about 32,000 years ago. Naming At the end of the 19th century, Alfred Nehring (1890) and Jan Czerski (Iwan Dementjewitsch Chersky, 1891) proposed that during the last glacial period a major part of northern Europe had been populated by large herbivores and that a steppe climate had prevailed there. In 1982, the scientist R. Dale Guthrie coined the term "mammoth steppe" for this paleoregion. Origin
https://en.wikipedia.org/wiki/Howland%20Forest
The Howland Research Forest is a tract of mature evergreen forest in the North Maine Woods, within Penobscot County, central Maine. It is located west of the town of Howland. History The tract is part of the 1.1 million acres (4,500 km2) of Maine forest sold in 2005 by International Paper (IP) to the Seven Islands Land Company, a private forest investment management holding company. In 2007, the research forest was purchased by Northeast Wilderness Trust ensuring its wild and natural state into the future. The Howland Forest is a founding member of the AmeriFlux and FLUXNET research networks. Ecology The Howland Forest study site is located in a boreal transitional forest of the New England/Acadian forests ecoregion. The forest is dominated by mixed spruce, hemlock, aspen, and birch stands ranging in age from 45 to 130 years. The soils are formed on coarse-loamy granitic basal till. Research forest The tract had previously been designated as a research forest under IP's ownership, attracting researchers from the US Forest Service, the University of Maine, NASA, NOAA, and the Woods Hole Research Center. Areas of study included acid rain, nutrient cycling, soil ecology, and more recently, forest carbon uptake and loss. The forest has one of the longest records of carbon flux measurement in the world, dating to 1996, providing important information about carbon sequestration in mature forests.
https://en.wikipedia.org/wiki/DASH-IF
The DASH Industry Forum (DASH-IF) creates interoperability guidelines for the usage of the MPEG-DASH streaming standard, promotes and catalyzes the adoption of MPEG-DASH, and helps transition it from a specification into a real business. It consists of the major streaming and media companies, such as Microsoft, Netflix, Google, Ericsson, Samsung and Adobe. Interoperability One of the main goals of the DASH Industry Forum is to attain interoperability of DASH-enabled products on the market. The DASH Industry Forum has produced several documents as implementation guidelines: DASH-AVC/264 Interoperability Points V3.0: DRM updates, Improved Live, Ad Insertion, Events, H.265/HEVC support, Trick Modes, CEA608/708 DASH-AVC/264 Interoperability Points V2.0: HD and Multi-Channel Audio Extensions DASH-AVC/264 Interoperability Points V1.0 Open-Source Reference Player The DASH Industry Forum provides the open source MPEG-DASH player dash.js See also H.264/MPEG-4 AVC
https://en.wikipedia.org/wiki/Anti-diagonal%20matrix
In mathematics, an anti-diagonal matrix is a square matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-diagonal (sometimes Harrison diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal or bad diagonal). Formal definition An n-by-n matrix A is an anti-diagonal matrix if the (i, j) element is zero for all i, j ∈ {1, ..., n} with i + j ≠ n + 1. Example An example of an anti-diagonal matrix is the 3-by-3 matrix with rows (0 0 1), (0 2 0), (3 0 0). Properties All anti-diagonal matrices are also persymmetric. The product of two anti-diagonal matrices is a diagonal matrix. Furthermore, the product of an anti-diagonal matrix with a diagonal matrix is anti-diagonal, as is the product of a diagonal matrix with an anti-diagonal matrix. An anti-diagonal matrix is invertible if and only if the entries on the diagonal from the lower left corner to the upper right corner are nonzero. The inverse of any invertible anti-diagonal matrix is also anti-diagonal, as can be seen from the paragraph above. The determinant of an anti-diagonal matrix has absolute value given by the product of the entries on the diagonal from the lower left corner to the upper right corner. However, the sign of this determinant will vary because the one nonzero signed elementary product from an anti-diagonal matrix will have a different sign depending on whether the permutation related to it is odd or even: More precisely, the sign of the elementary product needed to calculate the determinant of an anti-diagonal matrix is related to whether the corresponding triangular number is even or odd. This is because the number of inversions in the permutation for the only nonzero signed elementary product of any n × n anti-diagonal matrix is always equal to the nth such number. See also Main diagonal, all off-diagonal elements are zero in a diagonal matrix. Exchange matrix, an anti-diagonal matrix with 1s along the counter-diagonal. External links Matrix calculator Sparse matrices Ma
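The product property is easy to verify numerically. The sketch below multiplies two 3-by-3 anti-diagonal matrices (invented values) and confirms the result is diagonal.

```python
# Sketch: the product of two anti-diagonal matrices is diagonal.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two anti-diagonal matrices: nonzero only where i + j == n - 1
# (0-based indices, equivalent to i + j == n + 1 with 1-based indices).
A = [[0, 0, 1],
     [0, 2, 0],
     [3, 0, 0]]
B = [[0, 0, 4],
     [0, 5, 0],
     [6, 0, 0]]

C = matmul(A, B)
print(C)  # [[6, 0, 0], [0, 10, 0], [0, 0, 12]]
```

Intuitively, row i of A picks out exactly one column of B, and that column's single nonzero entry lands back on position (i, i).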
https://en.wikipedia.org/wiki/List%20of%20equations%20in%20quantum%20mechanics
This article summarizes equations in the theory of quantum mechanics. Wavefunctions A fundamental physical constant occurring in quantum mechanics is the Planck constant, h. A common abbreviation is , also known as the reduced Planck constant or Dirac constant. The general form of wavefunction for a system of particles, each with position ri and z-component of spin sz i. Sums are over the discrete variable sz, integrals over continuous positions r. For clarity and brevity, the coordinates are collected into tuples, the indices label the particles (which cannot be done physically, but is mathematically necessary). Following are general mathematical results, used in calculations. Equations Wave–particle duality and time evolution Non-relativistic time-independent Schrödinger equation Summarized below are the various forms the Hamiltonian takes, with the corresponding Schrödinger equations and forms of wavefunction solutions. Notice in the case of one spatial dimension, for one particle, the partial derivative reduces to an ordinary derivative. Non-relativistic time-dependent Schrödinger equation Again, summarized below are the various forms the Hamiltonian takes, with the corresponding Schrödinger equations and forms of solutions. Photoemission Quantum uncertainty Angular momentum Magnetic moments In what follows, B is an applied external magnetic field and the quantum numbers above are used. The Hydrogen atom See also Defining equation (physical chemistry) List of electromagnetism equations List of equations in classical mechanics List of equations in fluid mechanics List of equations in gravitation List of equations in nuclear and particle physics List of equations in wave theory List of photonics equations List of relativistic equations Footnotes Sources Further reading Physical quantities SI units Physical chemistry Equations of physics
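The equation tables themselves do not survive in this excerpt. As a reminder of the two central forms referred to above (standard results, restated here rather than recovered from the original tables), the time-independent and time-dependent Schrödinger equations for a single particle are:

```latex
% Time-independent Schrodinger equation for a particle of mass m in a
% potential V(r):
\[
  \hat{H}\psi = E\psi, \qquad
  -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r})
    = E\,\psi(\mathbf{r})
\]
% Time-dependent Schrodinger equation, with hbar = h / (2 pi):
\[
  i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t)
    = \hat{H}\,\Psi(\mathbf{r},t)
\]
```

In one spatial dimension the Laplacian reduces to an ordinary second derivative, as the text notes.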
https://en.wikipedia.org/wiki/CO-OPN
The CO-OPN (Concurrent Object-Oriented Petri Nets) specification language is based on both the algebraic specification and algebraic Petri net formalisms. The former formalism represents the data-structure aspects, while the latter stands for the behavioural and concurrent aspects of systems. In order to deal with large specifications, some structuring capabilities have been introduced. The object-oriented paradigm has been adopted, which means that a CO-OPN specification is a collection of objects which interact concurrently. Cooperation between the objects is achieved by means of a synchronization mechanism, i.e., each object event may request to be synchronized with some methods (parameterized events) of one or a group of partners by means of a synchronization expression. A CO-OPN specification consists of a collection of two different kinds of modules: the abstract data type modules and the object modules. The abstract data type modules concern the data structure component of the specifications, and many-sorted algebraic specifications are used when describing these modules. Furthermore, the object modules represent the concept of encapsulated entities that possess an internal state and provide the exterior with various services. For this second kind of module, an algebraic net formalism has been adopted. Algebraic Petri nets, a kind of high-level net, are a great improvement over plain Petri nets: Petri net tokens are replaced with data structures which are described by means of algebraic abstract data types. For managing visibility, both abstract data type modules and object modules are composed of an interface (which allows some operations to be visible from the outside) and a body (which mainly encapsulates the operations' properties and some operations which are used for building the model). In the case of the object modules, the state and the behavior of the objects remain concealed within the body section. To develop models using the CO-OPN language it is poss
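The key idea of an algebraic Petri net, tokens that carry data rather than being indistinguishable marks, can be sketched in plain Python (this is not CO-OPN syntax, just an illustration of the underlying net model with invented place and transition names):

```python
# A tiny Petri net whose tokens carry data, as in algebraic Petri nets.
# A transition consumes tokens from its input place and produces tokens
# computed from them in its output place.
places = {"pending": [3, 5], "done": []}

def fire_increment() -> bool:
    """Transition: move one token from 'pending' to 'done', adding 1 to it."""
    if not places["pending"]:
        return False  # transition not enabled: no input token available
    token = places["pending"].pop(0)
    places["done"].append(token + 1)
    return True

while fire_increment():
    pass
print(places["done"])  # [4, 6]
```

In a classical Petri net the transition could only count tokens; here the output token is a value computed from the input token, which is what the algebraic abstract data types contribute.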
https://en.wikipedia.org/wiki/Tecplot
Tecplot is the name of a family of visualization & analysis software tools developed by the American company Tecplot, Inc., which is headquartered in Bellevue, Washington. The firm was formerly operated as Amtec Engineering. In 2016, the firm was acquired by Vela Software, an operating group of Constellation Software, Inc. (TSX:CSU). Tecplot 360 Tecplot 360 is a Computational Fluid Dynamics (CFD) and numerical simulation software package used in post-processing simulation results. Tecplot 360 is also used in chemistry applications to visualize molecule structure by post-processing charge density data. Common tasks associated with post-processing analysis of flow solver (e.g. Fluent, OpenFOAM) data include calculating grid quantities (e.g. aspect ratios, skewness, orthogonality and stretch factors), normalizing data, deriving flow-field functions like pressure coefficient or vorticity magnitude, verifying solution convergence, estimating the order of accuracy of solutions, and interactively exploring data through cut planes (a slice through a region), iso-surfaces (3-D maps of concentrations), and particle paths (dropping an object in the "fluid" and watching where it goes). Tecplot 360 may be used to visualize output from programming languages such as Fortran. Tecplot's native data format is PLT or SZPLT. Many other formats are also supported, including: CFD Formats: CGNS, FLOW-3D (Flow Science, Inc.), ANSYS CFX, ANSYS FLUENT .cas and .dat format and polyhedra, OpenFOAM, PLOT3D, Tecplot and polyhedra, Ensight Gold and HDF5 (Hierarchical Data Format). Data Formats: HDF, Microsoft Excel (Windows only), comma- or space-delimited ASCII. FEA Formats: Abaqus, ANSYS, FIDAP Neutral, LSTC/DYNA LS-DYNA, NASTRAN MSC Software, Patran MSC Software, PTC Mechanica, SDRC/IDEAS universal and 3D Systems STL. ParaView supports Tecplot format through a VisIt importer. Tecplot RS Tecplot RS is a tool tailored towards visualizing the results of reservoir simulations,
https://en.wikipedia.org/wiki/Index%20notation
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program. In mathematics It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays). The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details. One-dimensional arrays (vectors) A vector can be treated as an array of numbers by writing it as a row vector or column vector (whichever is used depends on convenience or context): Index notation allows indication of the elements of the array by simply writing a_i, where the index i is known to run from 1 to n because the vector has n dimensions. For example, given the vector: then some entries are . The notation can be applied to vectors in mathematics and physics. The following vector equation can also be written in terms of the elements of the vector (also known as components), that is where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly Hence, index notation serves as an efficient shorthand for representing the general structure of an equation, while applicable to individual components. Two-dimensional arrays More than one index is used to describe arrays of number
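The component-by-component reading of a vector equation maps directly onto a loop in code. A small Python illustration of the set of equations c_i = a_i + b_i (note that code uses 0-based indices, versus the 1-based convention above; the values are illustrative):

```python
# The vector equation c = a + b written in index notation is the set of
# component equations c_i = a_i + b_i for i = 1, ..., n; each index
# value corresponds to one loop iteration.
a = [2, 0, 1]
b = [1, 5, 3]
n = len(a)
c = [0] * n
for i in range(n):
    c[i] = a[i] + b[i]   # the i-th component equation
print(c)  # [3, 5, 4]
```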
https://en.wikipedia.org/wiki/Phenomenological%20quantum%20gravity
Phenomenological quantum gravity is the research field that deals with phenomenology of quantum gravity. The relevance of this research area derives from the fact that none of the candidate theories for quantum gravity has yielded experimentally testable predictions. Phenomenological models are designed to bridge this gap by allowing physicists to test for general properties that the hypothetical correct theory of quantum gravity has. Furthermore, due to this current lack of experiments, it is not known for sure that gravity is indeed quantum (i.e. that general relativity can be quantized), and so evidence is required to determine whether this is the case. Phenomenological models are also necessary to assess the promise of future quantum gravity experiments. Direct experiments for quantum gravity (perhaps by detecting gravitons) would require reaching the Planck energy — on the order of 10²⁸ eV, around 15 orders of magnitude higher than can be achieved with current particle accelerators — as well as needing a detector the size of a large planet. As a result, experimental investigation of quantum gravity was long thought to be impossible with current levels of technology. However, in the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. See also Phenomenology (physics) Modern searches for Lorentz violation Quantum gravity
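The Planck energy quoted above can be checked from the defining formula E_P = sqrt(ħc⁵/G). A short calculation, assuming the usual CODATA constant values:

```python
import math

# CODATA constant values (assumed):
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
eV   = 1.602176634e-19   # joules per electronvolt

E_planck_J  = math.sqrt(hbar * c**5 / G)   # Planck energy in joules
E_planck_eV = E_planck_J / eV              # ~1.22e28 eV, i.e. ~10^28 eV
```

This reproduces the figure in the text: about 1.22 × 10²⁸ eV, some 15 orders of magnitude above the ~10¹³ eV reach of current accelerators.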
https://en.wikipedia.org/wiki/CCleaner
CCleaner (originally Crap Cleaner), developed by Piriform Software, is a utility used to clean potentially unwanted files and invalid Windows Registry entries from a computer. It is one of the longest-established system cleaners, first launched in 2004. It was originally developed for Microsoft Windows only, but in 2012, a macOS version was released. An Android version was released in 2014. Features CCleaner can delete potentially unwanted files left by certain programs, including Microsoft Edge, Internet Explorer, Firefox, Google Chrome, Opera, Safari, Windows Media Player, eMule, Google Toolbar, Netscape, Microsoft Office, Nero, Adobe Acrobat, McAfee, Adobe Flash Player, Sun Java, WinRAR, WinAce, WinZip and GIMP along with browsing history, cookies, recycle bin, memory dumps, file fragments, log files, system caches, application data, autocomplete form history, and various other data. The program includes a registry cleaner to locate and correct problems in the Windows registry, such as missing references to shared DLLs, unused registration entries for file extensions, and missing references to application paths. CCleaner 2.27 and later can wipe the MFT free space of a drive, or the entire drive. CCleaner can uninstall programs or modify the list of programs that execute on startup. Since version 2.19, CCleaner can delete Windows System Restore points. CCleaner can also automatically update installed programs and computer drivers. CCleaner also has its own web browser called CCleaner Browser. CCleaner Browser is offered as an optional install in the CCleaner installer, but it can also be installed from its website. CCleaner Browser blocks advertising and tracking, includes protections against malware, phishing and malicious downloads, and suppresses unwanted elements such as pop-ups and excessive browser cache. It is based on Google's free and open-source project Chromium. The browser is only available for Microsoft Windows. History CCleaner
https://en.wikipedia.org/wiki/Caldanaerobius%20polysaccharolyticus
Caldanaerobius polysaccharolyticus is a Gram-positive, thermophilic, anaerobic, non-spore-forming bacterium of the genus Caldanaerobius which has been isolated from organic waste leachate from Hoopeston in the United States.
https://en.wikipedia.org/wiki/Jack%20%28flag%29
A jack is a flag flown from a short jackstaff at the bow (front) of a vessel, while the ensign is flown on the stern (rear). Jacks on bowsprits or foremasts appeared in the 17th century. A country may have different jacks for different purposes, especially when (as in the United Kingdom and the Netherlands) the naval jack is forbidden to other vessels. The United Kingdom has an official civil jack; the Netherlands has several unofficial ones. In some countries, ships of other government institutions may fly the naval jack, e.g. the ships of the United States Coast Guard and the National Oceanic and Atmospheric Administration in the case of the US jack. Certain organs of the UK's government have their own departmental jacks. Commercial or pleasure craft may fly the flag of an administrative division (state, province, land) or municipality at the bow. Merchant ships may fly a house flag. Yachts may fly a club burgee or officer's flag or the owner's private signal at the bow. Practice may be regulated by law, custom, or personal judgment. Etymology "Jack" occupies 6 pages of the complete second edition of the Oxford English Dictionary and the use of the word in English goes back to the 14th century, appearing as a forename in Piers Plowman. Quite early on it was used as a name for a peasant or "a man of the lower orders". It continued the low class connotations in phrases such as "jack tar" for a common seaman, "every man jack," or the use of jack for the knave in cards. The diminutive form is also seen in "Jack of all trades, master of none", where Jack implies a poor tradesman, possibly not up to journeyman standard. The term was taken into inanimate objects and denoted a small (or occasionally inferior) component: jack-pit (a small mine shaft), jackplug (single pronged, low current), jack-shaft (intermediate or idler shaft), jack (in bowling: the small ball) or jack-engine (a donkey or barring engine). Incidentally, a jack is a garment for the upper bo
https://en.wikipedia.org/wiki/Oseledets%20theorem
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem, provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Congress of Mathematicians in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier. Cocycles The multiplicative ergodic theorem is stated in terms of matrix cocycles of a dynamical system. The theorem states conditions for the existence of the defining limits and describes the Lyapunov exponents. It does not address the rate of convergence. A cocycle of an autonomous dynamical system X is a map C : X × T → R^(n×n) satisfying C(x, t_1 + t_2) = C(x(t_1), t_2) C(x, t_1) and C(x, 0) = I_n, where X and T (with T = Z⁺ or T = R⁺) are the phase space and the time range, respectively, of the dynamical system, x(t) is the state reached from x after time t, and I_n is the n-dimensional unit matrix. The dimension n of the matrices C is not related to the phase space X. Examples A prominent example of a cocycle is given by the matrix J_t in the theory of Lyapunov exponents. In this special case, the dimension n of the matrices is the same as the dimension of the manifold X. For any cocycle C, the determinant det C(x, t) is a one-dimensional cocycle. Statement of the theorem Let μ be an ergodic invariant measure on X and C a cocycle of the dynamical system such that for each t ∈ T, the maps x ↦ log‖C(x, t)‖ and x ↦ log‖C(x, t)⁻¹‖ are L¹-integrable with respect to μ. Then for μ-almost all x and each non-zero vector u ∈ R^n the limit λ(u) = lim_(t→∞) (1/t) log(‖C(x, t)u‖ / ‖u‖) exists and assumes, depending on u but not on x, up to n different values. These are the Lyapunov exponents. Further, if λ_1 > ... > λ_m are the different limits then there are subspaces R^n = R_1 ⊃ ... ⊃ R_m ⊃ R_(m+1) = {0}, depending on x, such that the limit is λ_i
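In one dimension the cocycle is simply the product of derivatives along an orbit, and the theorem's limit reduces to the familiar time-average formula for the Lyapunov exponent. A sketch for the logistic map at r = 4, whose exponent is known to equal ln 2; the seed and iteration counts here are illustrative choices:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=200_000, burn=1_000):
    """Estimate the top Lyapunov exponent of x -> r*x*(1-x) as the
    time average of log|f'(x)| along an orbit -- the 1-D case of the
    multiplicative ergodic theorem, where the cocycle C(x, n) is the
    product of derivatives f'(x_0)...f'(x_{n-1})."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1.0 - 2.0 * x)))   # log|f'(x)|
        x = r * x * (1.0 - x)
    return s / n
```

For r = 4 the estimate converges to ln 2 ≈ 0.693, the known exact value for this map.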
https://en.wikipedia.org/wiki/History%20of%20computer%20hardware%20in%20Yugoslavia
The Socialist Federal Republic of Yugoslavia (SFRY) was a socialist country that existed in the second half of the 20th century. Being socialist meant that strict technology import rules and regulations shaped the development of computer history in the country, unlike in the Western world. However, since it was a non-aligned country, it had no ties to the Soviet Bloc either. One of the major ideas contributing to the development of any technology in SFRY was the apparent need to be independent of foreign suppliers for spare parts, fueling domestic computer development. Development Early computers In former Yugoslavia, at the end of 1962 there were 30 installed electronic computers; in 1966, there were 56; and in 1968, there were 95. Having received training in European computer centres (Paris 1954 and 1955, Darmstadt 1959, Wien 1960, Cambridge 1961 and London 1964), engineers from the BK Institute, Vinča, and the Mihailo Pupin Institute, Belgrade, led by Prof. dr Tihomir Aleksić, started a project of designing the first "domestic" digital computer at the end of the 1950s. This was to become the CER line, starting with the model CER-10 in 1960, a primarily vacuum tube and electronic relay-based computer. By 1964, the CER-20 computer had been designed and completed as an "electronic bookkeeping machine", as the manufacturer recognized an increasing need in the accounting market. This special-purpose trend continued with the release of CER-22 in 1967, which was intended for on-line "banking" applications. There were more CER models, such as CER-11, CER-12, and CER-200, but there is currently little information available on them. In the late 1970s, "Ei-Niš Računarski Centar" from Niš, Serbia, started assembling H6000 mainframe computers under Honeywell license, mainly for banking businesses. The computer initially had great success, which later led to limited local parts production. In addition, the company produced models such as H6 and H66 and was alive as late as early 2
https://en.wikipedia.org/wiki/Boiling-point%20elevation
Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope. Explanation The boiling point elevation is a colligative property, which means that it is dependent on the presence of dissolved particles and their number, but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures). Put in vapor pressure terms, a liquid boils at the temperature when its vapor pressure equals the surrounding pressure. For the solvent, the presence of the solute decreases its vapor pressure by dilution. A nonvolatile solute has a vapor pressure of zero, so the vapor pressure of the solution is less than the vapor pressure of the solvent. Thus, a higher temperature is needed for the vapor pressure to reach the surrounding pressure, and the boiling point is elevated. Put in chemical potential terms, at the boiling point, the liquid phase and the gas (or vapor) phase have the same chemical potential (or vapor pressure) meaning that they are energetically equivalent. The chemical potential is dependent on the temper
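The size of the effect is usually quantified by the standard ebullioscopic relation ΔT_b = i · K_b · b, which this excerpt does not reach. A minimal sketch, assuming the commonly tabulated value K_b ≈ 0.512 K·kg/mol for water:

```python
def boiling_point_elevation(molality, kb=0.512, i=1):
    """Delta T_b = i * K_b * b: van 't Hoff factor i (particles per
    formula unit), ebullioscopic constant K_b in K*kg/mol (default is
    the tabulated value for water), molality b in mol/kg."""
    return i * kb * molality

# 1 mol of NaCl per kg of water, assuming full dissociation (i = 2):
delta_t = boiling_point_elevation(1.0, i=2)   # about 1.02 K
```

The factor i reflects the colligative nature described above: only the number of dissolved particles matters, so a fully dissociated 1:1 electrolyte roughly doubles the elevation of a nonelectrolyte at the same molality.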
https://en.wikipedia.org/wiki/Internet%20archive%20%28disambiguation%29
Internet Archive is a nonprofit organization based in San Francisco, California, United States. Internet archive may also refer to: Wayback Machine, digital archive of the World Wide Web maintained by Internet Archive arXiv, a repository of scientific preprints ("e-prints") available online Web archiving, archiving of the World Wide Web itself Marxists Internet Archive See also Web archive (disambiguation)
https://en.wikipedia.org/wiki/Gilbert%E2%80%93Varshamov%20bound
In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a limit on the parameters of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound (or the GSV bound), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes. Statement of the bound Let denote the maximum possible size of a q-ary code with length n and minimum Hamming distance d (a q-ary code is a code over an alphabet of q elements). Then: Proof Let be a code of length and minimum Hamming distance having maximal size: Then for all , there exists at least one codeword such that the Hamming distance between and satisfies since otherwise we could add x to the code whilst maintaining the code's minimum Hamming distance – contradicting the maximality of . Hence the whole of is contained in the union of all balls of radius having their centre at some : Now each ball has size since we may allow (or choose) up to of the components of a codeword to deviate (from the value of the corresponding component of the ball's centre) to one of possible other values (recall: the code is q-ary: it takes values in ). Hence we deduce That is: An improvement in the prime power case For q a prime power, one can improve the bound to where k is the greatest integer for which See also Singleton bound Hamming bound Johnson bound Plotkin bound Griesmer bound Grey–Rankin bound Gilbert–Varshamov bound for linear codes Elias–Bassalygo bound
https://en.wikipedia.org/wiki/Therapeutic%20gene%20modulation
Therapeutic gene modulation refers to the practice of altering the expression of a gene at one of various stages, with a view to alleviate some form of ailment. It differs from gene therapy in that gene modulation seeks to alter the expression of an endogenous gene (perhaps through the introduction of a gene encoding a novel modulatory protein) whereas gene therapy concerns the introduction of a gene whose product aids the recipient directly. Modulation of gene expression can be mediated at the level of transcription by DNA-binding agents (which may be artificial transcription factors), small molecules, or synthetic oligonucleotides. It may also be mediated post-transcriptionally through RNA interference. Transcriptional gene modulation An approach to therapeutic modulation utilizes agents that modulate endogenous transcription by specifically targeting those genes at the gDNA level. The advantage to this approach over modulation at the mRNA or protein level is that every cell contains only a single gDNA copy. Thus the target copy number is significantly lower allowing the drugs to theoretically be administered at much lower doses. This approach also offers several advantages over traditional gene therapy. Directly targeting endogenous transcription should yield correct relative expression of splice variants. In contrast, traditional gene therapy typically introduces a gene which can express only one transcript, rather than a set of stoichiometrically-expressed spliced transcript variants. Additionally, virally-introduced genes can be targeted for gene silencing by methylation which can counteract the effect of traditional gene therapy. This is not anticipated to be a problem for transcriptional modulation as it acts on endogenous DNA. There are three major categories of agents that act as transcriptional gene modulators: triplex-forming oligonucleotides (TFOs), synthetic polyamides (SPAs), and DNA binding proteins. Triplex-forming oligonucleotides What are
https://en.wikipedia.org/wiki/Egg%20white
Egg white is the clear liquid (also called the albumen or the glair/glaire) contained within an egg. In chickens it is formed from the layers of secretions of the anterior section of the hen's oviduct during the passage of the egg. It forms around fertilized or unfertilized egg yolks. The primary natural purpose of egg white is to protect the yolk and provide additional nutrition for the growth of the embryo (when fertilized). Egg white consists primarily of about 90% water into which about 10% proteins (including albumins, mucoproteins, and globulins) are dissolved. Unlike the yolk, which is high in lipids (fats), egg white contains almost no fat, and carbohydrate content is less than 1%. Egg whites contain about 56% of the protein in the egg. Egg white has many uses in food (e.g. meringue, mousse) as well as many other uses (e.g. in the preparation of vaccines such as those for influenza). Composition Egg white makes up around two-thirds of a chicken egg by weight. Water constitutes about 90% of this, with protein, trace minerals, fatty material, vitamins, and glucose contributing the remainder. A raw U.S. large egg contains around 33 grams of egg white, with 3.6 grams of protein, 0.24 grams of carbohydrate and 55 milligrams of sodium. It contains no cholesterol and the energy content is about 17 calories. Egg white is an alkaline solution and contains around 149 proteins. The table below lists the major proteins in egg whites by percentage and their natural functions. Ovalbumin is the most abundant protein in albumen. Classed as a phosphoglycoprotein, it converts during storage into s-ovalbumin (5% at the time of laying), which can reach up to 80% after six months of cold storage. Ovalbumin in solution is heat-resistant. Its denaturation temperature is around 84 °C, but it can be easily denatured by physical stresses. Conalbumin/ovotransferrin is a glycoprotein which has the capacity to bind bi- and trivalent metal cations into a complex and is more heat sensitive t
https://en.wikipedia.org/wiki/History%20of%20bioelectricity
The history of bioelectricity dates back to ancient Egypt, where the shocks delivered by the electric catfish were used medicinally. In the 18th century, the abilities of the torpedo ray and the electric eel were investigated by scientists including Hugh Williamson and John Walsh. Fish that give shocks Ancient Egypt The electric catfish of the Nile was well known to the ancient Egyptians. The Egyptians reputedly used the electric shock from them when treating arthritic pain. They would use only smaller fish, as a large fish may generate an electric shock from 300 to 400 volts. The Egyptians depicted the fish in their mural paintings and elsewhere; the first known depiction of an electric catfish is on a slate palette of the predynastic Egyptian ruler Narmer about 3100 BC. Ancient Greece and Rome Electric fishes were known to Aristotle, Theophrastus, and Pliny the Elder among other classical authors. They did not always distinguish between the marine torpedo ray and the freshwater electric catfish. Eighteenth century The naturalists Bertrand Bajon, a French military surgeon in French Guiana and the Jesuit in the River Plate basin conducted early experiments on the numbing discharges of electric eels in the 1760s. In 1775, the "torpedo" (the electric ray) was studied by John Walsh; both fish were dissected by the surgeon and anatomist John Hunter. Hunter informed the Royal Society that "Gymnotus Electricus ... appears very much like an eel ... but it has none of the specific properties of that fish." He observed that there were "two pair of these [electric] organs, a larger [the main organ] and a smaller [Hunter's organ]; one being placed on each side", and that they occupied "perhaps ... more than one-third of the whole animal [by volume]". He described the structure of the organs (stacks of electrocytes) as "extremely simple and regular, consisting of two parts; viz. flat partitions or septa, and cross divisions between them." He measured the electrocyte
https://en.wikipedia.org/wiki/Luca%20Perregrini
Luca Perregrini from the University of Pavia, Italy was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for contributions to numerical techniques for electromagnetic modeling.
https://en.wikipedia.org/wiki/Delay%20equalization
In signal processing, delay equalization corresponds to adjusting the relative phases of different frequencies to achieve a constant group delay, typically by adding an all-pass filter in series with an uncompensated filter. Machine-learning techniques are now also being applied to the design of such filters.
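The reason an all-pass filter can equalize delay without disturbing the overall magnitude response is that |H(e^{jω})| = 1 at every frequency; only its phase, and hence its group delay, varies. A minimal sketch of a first-order digital all-pass section (the coefficient value is illustrative):

```python
import cmath

def allpass_response(a, w):
    """Frequency response H(e^{jw}) of the first-order digital all-pass
    H(z) = (a + z^-1) / (1 + a*z^-1), real coefficient |a| < 1.
    Its magnitude is exactly 1 at all frequencies, so cascading it with
    another filter changes only the phase/group delay, not the gain."""
    z1 = cmath.exp(-1j * w)      # z^-1 evaluated on the unit circle
    return (a + z1) / (1 + a * z1)
```

Higher-order equalizers are built by cascading such sections and tuning their coefficients so the total group delay of the cascade plus the uncompensated filter is as flat as possible over the band of interest.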
https://en.wikipedia.org/wiki/Arieh%20Ben-Naim
Arieh Ben-Naim (born 11 July 1934 in Jerusalem) is a professor of physical chemistry who retired in 2003 from the Hebrew University of Jerusalem. He has made major contributions over 40 years to the theory of the structure of water, aqueous solutions and hydrophobic-hydrophilic interactions. He is mainly concerned with theoretical and experimental aspects of the general theory of liquids and solutions. In recent years, he has advocated the use of information theory to better understand and advance statistical mechanics and thermodynamics. Contributions to the theory of liquids Books written by Arieh Ben-Naim: Water and Aqueous Solutions: Introduction to a Molecular Theory. 1974, (out of print). Hydrophobic Interactions. 1980, . Molecular Theory of Solutions. 2006, . Molecular Theory of Water and Aqueous Solutions: Understanding Water. 2009, . Molecular Theory of Water and Aqueous Solutions, Part II: The Role of Water in Protein Folding, Self-assembly and Molecular Recognition, World Scientific (2011). Alice's Adventures in Water-Land, World Scientific (2011). The Protein Folding Problem and its Solutions, World Scientific (2013). Alice's Adventures in Molecular Biology, World Scientific (2013). Myths and Verities in Protein Folding Theories, World Scientific (2016). Contributions to statistical mechanics in terms of information theory In 2017, Ben-Naim posted three articles on arXiv. In those articles, three radical ideas were introduced into a field which is considered classical. The ideas followed from the new definition of entropy based on Shannon's measure of information. The three ideas are, briefly, as follows. First, there is no relationship between either entropy or the second law of thermodynamics and the so-called arrow of time. This false association between the Second Law and time was first suggested by Arthur Eddington. Also, Boltzmann's H-theorem is not about the time dependence of the entropy, but the time dependence of the Shann
https://en.wikipedia.org/wiki/Boundary%20friction
Boundary friction occurs when a surface is at least partially wet, but not so lubricated that there is no direct friction between two surfaces. The Effect When two consistent, unlubricated surfaces slide against each other, there is a specific, predictable amount of friction that occurs. This amount increases with velocity, but only up to a certain point. That increase generally follows what is known as a Stribeck curve, after Richard Stribeck. On the other hand, if the two surfaces are completely lubricated, there is no direct friction or rubbing at all. In practice, however, the surfaces are often not completely dry, but also not so lubricated that they do not touch. This "boundary friction" produces various effects, such as an increase in lubrication through the generation of shearing forces, or an oscillation effect during motion as the friction increases and decreases. For example, one can experience vibration when trying to brake on a partially damp road, or a cold glass slowly condensing moisture can be lifted until it spontaneously slides across the surface it is resting on.
https://en.wikipedia.org/wiki/Armorial%20of%20Hungary
This gallery of Hungarian coats of arms shows the coats of arms of the counties and certain Hungarian cities. They are used to visually identify historical and present-day counties, as well as cities, within Hungary. National The coat of arms of Hungary was adopted on 3 July 1990, after the end of communist rule. The arms have been used before, both with and without the Holy Crown of Hungary, sometimes as part of a larger, more complex coat of arms, and its elements date back to the Middle Ages. The shield is split into two parts: The dexter (the right side from the bearer's perspective, the left side from the viewer's) features the so-called Árpád stripes, four Gules (red) and four Argent (silver) stripes. Traditionally, the silver stripes represent four rivers: Duna (Danube), Tisza, Dráva, and Száva. The sinister (the left side from the bearer's perspective, the right side from the viewer's) consists of an Argent (silver) double cross on Gules (red) base, situated inside a small Or (golden) crown, the crown is placed on the middle heap of three Vert (green) hills, representing the mountain ranges (trimount) Tátra, Mátra, and Fátra. Atop the shield rests the Holy Crown of St. Stephen (Stephen I of Hungary, István király), a crown that remains in the Parliament building (Országház) in Budapest today. See also Hungarian heraldry
https://en.wikipedia.org/wiki/Kazuya%20Mishima
is a character in Bandai Namco's Tekken fighting game series, first featured as the protagonist in the original 1994 game; he later became one of the major antagonists and an antihero of the series. The son of worldwide conglomerate Mishima Zaibatsu CEO Heihachi Mishima, Kazuya seeks revenge against his father for throwing him off a cliff years earlier. Kazuya becomes corrupted in later games, seeking to obtain more power, and eventually comes into conflict with his son Jin Kazama. Kazuya Mishima possesses the Devil Gene, a demonic mutation which he inherited from his late mother, Kazumi Mishima, and which can transform him into a demonic version of himself known as . Devil Kazuya has often appeared as a separate character in previous installments (excluding Tekken (1994)) prior to becoming part of Kazuya's moveset in Tekken Tag Tournament 2 and later games. Kazuya Mishima is also present in related series media and other games. The character was based on writer Yukio Mishima, with whom he shares a last name. A number of staff members have considered him one of the franchise's strongest characters, which has led to debates about reducing the damage of some of his moves or removing them altogether. Kazuya Mishima's devil form was created to bring unrealistic fighters into the series, but the incarnation has made few appearances. Several voice actors have portrayed Kazuya Mishima in video games and films related to Tekken. In addition to appearances in spin-offs in the Tekken series, Kazuya also appears as a playable character in Namco × Capcom, Project X Zone 2, Street Fighter X Tekken, The King of Fighters All Star and Super Smash Bros. Ultimate. Kazuya Mishima has been positively received by critics. A number of websites have listed him as one of the best Tekken characters and one of the best characters in fighting games. Journalists have praised Kazuya Mishima's moves and dark characterization, which rivals that of his father. In contrast, critical reception of Kazuya M
https://en.wikipedia.org/wiki/Paul%20Lauterbur
Paul Christian Lauterbur (May 6, 1929 – March 27, 2007) was an American chemist who shared the Nobel Prize in Physiology or Medicine in 2003 with Peter Mansfield for his work which made the development of magnetic resonance imaging (MRI) possible. Lauterbur was a professor at Stony Brook University from 1963 until 1985, where he conducted his research for the development of the MRI. In 1985 he became a professor, along with his wife Joan, at the University of Illinois at Urbana-Champaign, where he remained for 22 years until his death in Urbana. He never stopped working with undergraduates on research, and he served as a professor of chemistry, with appointments in bioengineering, biophysics, the College of Medicine at Urbana-Champaign and computational biology at the Center for Advanced Study. Early life Lauterbur was of Luxembourgish ancestry. Born and raised in Sidney, Ohio, Lauterbur graduated from Sidney High School, where a new Chemistry, Physics, and Biology wing was dedicated in his honor. As a teenager, he built his own laboratory in the basement of his parents' house. His chemistry teacher at school understood that he enjoyed experimenting on his own, so the teacher allowed him to do his own experiments at the back of class. When he was drafted into the United States Army in 1953, he was assigned to the Army Chemical Center in Edgewood, Maryland. His superiors allowed him to spend his time working on an early nuclear magnetic resonance (NMR) machine; he had published four scientific papers by the time he left the Army. Lauterbur became an atheist later in life. Education and career Lauterbur earned his bachelor of science in industrial chemistry from the Case Institute of Technology in 1951, now part of Case Western Reserve University in Cleveland, Ohio, where he became a Brother of the Alpha Delta chapter of Phi Kappa Tau fraternity. He then went to work at the Mellon Institute laboratories of the Dow Corning Corporation, with a 2-year break to serve at the Army Chemical Cente
https://en.wikipedia.org/wiki/United%20Devices
United Devices, Inc. was a privately held, commercial volunteer computing company that focused on the use of grid computing to manage high-performance computing systems and enterprise cluster management. Its products and services allowed users to "allocate workloads to computers and devices throughout enterprises, aggregating computing power that would normally go unused." It operated under the name Univa UD for a time, after merging with Univa on September 17, 2007. History Founded in 1999 in Austin, Texas, United Devices began with volunteer computing expertise from distributed.net and SETI@home, although only a few of the original technical staff from those organizations remained through the years. In April 2001, grid.org was formally announced as a philanthropic non-profit website to demonstrate the benefits of Internet-based large scale grid computing. Later in 2002 with help from UD, NTT Data launched a similar Internet-based Cell Computing project targeting Japanese users. In 2004, IBM and United Devices worked together to start the World Community Grid project as another demonstration of Internet-based grid computing. In August 2005, United Devices acquired the Paris-based GridXpert company and added Synergy to its product lineup. In 2006, the company acknowledged seeing an industry shift from only using grid computing for compute-intensive applications towards data center automation and business application optimization. Partly in response to the market shifts and reorganization, grid.org was shut down on April 27, 2007, after completing its mission to "demonstrate the viability and benefits of large-scale Internet-based grid computing". On September 17, 2007, the company announced that it would merge with the Lisle, Illinois-based Univa and operate under the new name Univa UD. The combined company would offer open source solutions based around Globus Toolkit, while continuing to sell its existing grid products and support its existing customers.
https://en.wikipedia.org/wiki/Trihexagonal%20tiling
In geometry, the trihexagonal tiling is one of 11 uniform tilings of the Euclidean plane by regular polygons. It consists of equilateral triangles and regular hexagons, arranged so that each hexagon is surrounded by triangles and vice versa. The name derives from the fact that it combines a regular hexagonal tiling and a regular triangular tiling. Two hexagons and two triangles alternate around each vertex, and its edges form an infinite arrangement of lines. Its dual is the rhombille tiling. This pattern, and its place in the classification of uniform tilings, was already known to Johannes Kepler in his 1619 book Harmonices Mundi. The pattern has long been used in Japanese basketry, where it is called kagome. The Japanese term for this pattern has been taken up in physics, where it is called a kagome lattice. It occurs also in the crystal structures of certain minerals. Conway calls it a hexadeltille, combining alternate elements from a hexagonal tiling (hextille) and triangular tiling (deltille). Kagome Kagome () is a traditional Japanese woven bamboo pattern; its name is composed from the words kago, meaning "basket", and me, meaning "eye(s)", referring to the pattern of holes in a woven basket. The kagome pattern is common in bamboo weaving in East Asia. In 2022, archaeologists found bamboo weaving remains at the Dongsunba ruins in Chongqing, China, dating to around 200 BC; after some 2,200 years, the kagome pattern is still clearly visible. It is a woven arrangement of laths composed of interlaced triangles such that each point where two laths cross has four neighboring points, forming the pattern of a trihexagonal tiling. The woven process gives the kagome a chiral wallpaper group symmetry, p6, (632). Kagome lattice The term kagome lattice was coined by Japanese physicist Kôdi Husimi, and first appeared in a 1951 paper by his assistant Ichirō Shōji. The kagome lattice in this sense consists of the vertices and edges of the trihexagonal tiling. Despite the name, these crossing points do
https://en.wikipedia.org/wiki/Orthopterists%27%20Society
The Orthopterists' Society (formerly the Pan American Acridological Society) is an international scientific organization devoted to facilitating communication and research among persons interested in Orthoptera and related organisms. (The Orthoptera include grasshoppers, locusts, crickets, katydids and other insects.) The Society currently has 330 members from 43 countries on six continents. The Society publishes the refereed biannual Journal of Orthoptera Research, which publishes papers on all aspects of the biology of these insects, from ecology and taxonomy to physiology, endocrinology, cytogenetics, and control measures. The Society was founded in 1976 by some 35 orthopterists who met at San Martín de los Andes, Argentina. Its Constitution and Bylaws were adopted in 1977, and it was accorded tax-exempt status by the United States government in 1978. The meetings held since San Martín have been at Bozeman (US), Maracay (Venezuela), Saskatoon (Canada), Valsaín (Spain), Hilo (US), Cairns (Australia), Montpellier (France), and Canmore (Canada). Graduate students and young researchers can apply for the Orthopterists' Society Research Fund. External links The Orthopterists' Society Journal of Orthoptera Research Entomological societies Orthoptera
https://en.wikipedia.org/wiki/Two-dimensional%20correlation%20analysis
Two dimensional correlation analysis is a mathematical technique that is used to study changes in measured signals. As mostly spectroscopic signals are discussed, the term two dimensional correlation spectroscopy is sometimes also used and refers to the same technique. In 2D correlation analysis, a sample is subjected to an external perturbation while all other parameters of the system are kept at the same value. This perturbation can be a systematic and controlled change in temperature, pressure, pH, chemical composition of the system, or even time after a catalyst was added to a chemical mixture. As a result of the controlled change (the perturbation), the system will undergo variations which are measured by a chemical or physical detection method. The measured signals or spectra will show systematic variations that are processed with 2D correlation analysis for interpretation. When one considers spectra that consist of few bands, it is quite obvious to determine which bands are subject to a changing intensity. Such a changing intensity can be caused for example by chemical reactions. However, the interpretation of the measured signal becomes trickier when spectra are complex and bands are heavily overlapping. Two dimensional correlation analysis allows one to determine at which positions in such a measured signal there is a systematic change in a peak, either a continuous rise or drop in intensity. 2D correlation analysis results in two complementary signals, which are referred to as the 2D synchronous and 2D asynchronous spectrum. These signals allow one, among other things, to determine the events that are occurring at the same time (in phase) and those events that are occurring at different times (out of phase), to determine the sequence of spectral changes, to identify various inter- and intramolecular interactions and band assignments of reacting groups, and to detect correlations between spectra of different techniques, for example near infrared spectroscopy (NIR) and Raman spectroscopy
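The synchronous and asynchronous spectra described above can be computed directly from a stack of perturbation-dependent spectra. The sketch below follows the generalized 2D correlation formalism (the synchronous spectrum as the covariance of the mean-centered "dynamic" spectra, the asynchronous spectrum via the Hilbert-Noda transformation matrix); the function name and array layout are choices of this illustration, not a standard API.

```python
import numpy as np

def two_d_correlation(spectra):
    """2D synchronous and asynchronous correlation spectra (a sketch).

    spectra: (m, n) array -- m perturbation steps, n spectral channels.
    Returns (sync, asyn), each of shape (n, n).
    """
    m, n = spectra.shape
    # Dynamic spectra: deviation from the perturbation-averaged spectrum
    dyn = spectra - spectra.mean(axis=0)
    # Synchronous spectrum: in-phase covariance between channel pairs
    sync = dyn.T @ dyn / (m - 1)
    # Hilbert-Noda transformation matrix: N[j, k] = 1 / (pi * (k - j)), 0 on diagonal
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        noda = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    # Asynchronous spectrum: out-of-phase (sequential) correlations
    asyn = dyn.T @ (noda @ dyn) / (m - 1)
    return sync, asyn
```

A negative synchronous cross peak indicates two bands changing in opposite directions; a nonzero asynchronous cross peak indicates that one band changes before the other.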
https://en.wikipedia.org/wiki/Non-contact%20force
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object through direct physical contact with it. All four known fundamental interactions are non-contact forces: Gravity, the force of attraction that exists among all bodies that have mass. The gravitational force exerted on each body by the other is proportional to the product of the two masses and inversely proportional to the square of the distance between them. Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes. Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short distance force that takes place between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions. Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics. The Casimir effect could also be thought of as a non-contact force. See also Tension Body force Surface
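The inverse-square dependence described for gravity can be made concrete with a few lines of arithmetic. In this sketch the constants are rounded textbook values and the function name is illustrative:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
# (a minimal numeric illustration; the mass and radius below are
# approximate textbook values, not authoritative constants)
G = 6.674e-11          # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24     # mass of the Earth, kg
r_earth = 6.371e6      # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    """Attractive force (in newtons) between two point masses separated by r."""
    return G * m1 * m2 / r**2

# Force on a 1 kg object at the Earth's surface -- roughly its weight (~9.8 N)
f = gravitational_force(m_earth, 1.0, r_earth)
```

Doubling the separation quarters the force, which is the inverse-square law in action.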
https://en.wikipedia.org/wiki/Terrestrial%20ecosystem
Terrestrial ecosystems are ecosystems that are found on land. Examples include tundra, taiga, temperate deciduous forest, tropical rain forest, grassland, and deserts. Terrestrial ecosystems differ from aquatic ecosystems by the predominant presence of soil rather than water at the surface and by the extension of plants above this soil/water surface in terrestrial ecosystems. There is a wide range of water availability among terrestrial ecosystems (including water scarcity in some cases), whereas water is seldom a limiting factor to organisms in aquatic ecosystems. Because water buffers temperature fluctuations, terrestrial ecosystems usually experience greater diurnal and seasonal temperature fluctuations than do aquatic ecosystems in similar climates. Terrestrial ecosystems are of particular importance especially in meeting Sustainable Development Goal 15 that targets the conservation-restoration and sustainable use of terrestrial ecosystems. Organisms and processes Organisms in terrestrial ecosystems have adaptations that allow them to obtain water when the entire body is no longer bathed in that fluid, means of transporting the water from limited sites of acquisition to the rest of the body, and means of preventing the evaporation of water from body surfaces. They also have traits that provide body support in the atmosphere, a much less buoyant medium than water, and other traits that render them capable of withstanding the extremes of temperature, wind, and humidity that characterize terrestrial ecosystems. Finally, the organisms in terrestrial ecosystems have evolved many methods of transporting gametes in environments where fluid flow is much less effective as a transport medium. Size and plants Terrestrial ecosystems occupy 55,660,000 mi2 (144,150,000 km2), or 28.26% of Earth's surface. Major plant taxa in terrestrial ecosystems are members of the division Magnoliophyta (flowering plants), of which there are about 275,000
https://en.wikipedia.org/wiki/EU%E2%80%93US%20Privacy%20Shield
The EU–US Privacy Shield was a legal framework for regulating transatlantic exchanges of personal data for commercial purposes between the European Union and the United States. One of its purposes was to enable US companies to more easily receive personal data from EU entities under EU privacy laws meant to protect European Union citizens. The EU–US Privacy Shield went into effect on 12 July 2016 following its approval by the European Commission. It was put in place to replace the International Safe Harbor Privacy Principles, which were declared invalid by the European Court of Justice in October 2015. The ECJ declared the EU–US Privacy Shield invalid on 16 July 2020, in the case known as Schrems II. In 2022, leaders of the US and EU announced that a new data transfer framework called the Trans-Atlantic Data Privacy Framework had been agreed to in principle, replacing Privacy Shield. However, it is uncertain what changes will be necessary or adequate for this to succeed without facing additional legal challenges. History In October 2015 the European Court of Justice declared the previous framework called the International Safe Harbor Privacy Principles invalid in a ruling that later became known as "Schrems I". Soon after this decision, the European Commission and the U.S. Government started talks about a new framework, and on February 2, 2016, they reached a political agreement. The European Commission published the "adequacy decision" draft, declaring principles to be equivalent to the protections offered by EU law. The Article 29 Data Protection Working Party delivered an opinion on April 13, 2016, stating that the Privacy Shield offers major improvements compared to the Safe Harbor decisions, but that three major points of concern still remain. They relate to deletion of data, collection of massive amounts of data, and clarification of the new Ombudsperson mechanism. The European Data Protection Supervisor issued an opinion on 30 May 2016 in which he stated
https://en.wikipedia.org/wiki/Tautology%20%28logic%29
In mathematical logic, a tautology is a formula or assertion that is true in every possible interpretation. An example is "x=y or x≠y". Similarly, "either the ball is green, or the ball is not green" is always true, regardless of the colour of the ball. The philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in 1921, borrowing from rhetoric, where a tautology is a repetitive statement. In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false. Formulas that are unsatisfiable, i.e. false under every interpretation, are known formally as contradictions. A formula that is neither a tautology nor a contradiction is said to be logically contingent. Such a formula can be made either true or false based on the values assigned to its propositional variables. The double turnstile notation is used to indicate that S is a tautology. Tautology is sometimes symbolized by "Vpq", and contradiction by "Opq". The tee symbol is sometimes used to denote an arbitrary tautology, with the dual symbol (falsum) representing an arbitrary contradiction; in any symbolism, a tautology may be substituted for the truth value "true", as symbolized, for instance, by "1". Tautologies are a key concept in propositional logic, where a tautology is defined as a propositional formula that is true under any possible Boolean valuation of its propositional variables. A key property of tautologies in propositional logic is that an effective method exists for testing whether a given formula is always satisfied (equivalently, whether its negation is unsatisfiable). The definition of tautology can be extended to sentences in predicate logic, which may contain quantifiers—a feature absent from sentences of propositional logic. Indeed, in propositional logic, there is no distinction between a tautology and a logically v
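The "effective method" mentioned above is, for propositional logic, simply exhaustive truth-table checking: a formula with n variables is a tautology if and only if it evaluates to true under all 2^n Boolean valuations. A minimal sketch, with formulas encoded as Python functions from a valuation to a truth value (an encoding chosen for this illustration, not standard notation):

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force truth table: True iff `formula` holds under every
    Boolean valuation of `variables`.  `formula` maps a dict like
    {"p": True, "q": False} to a bool."""
    return all(
        formula({v: bit for v, bit in zip(variables, bits)})
        for bits in product([False, True], repeat=len(variables))
    )

# "the ball is green, or the ball is not green": p or not p -- a tautology
excluded_middle = lambda env: env["p"] or not env["p"]
# "p and not p" is false under every valuation: a contradiction
contradiction = lambda env: env["p"] and not env["p"]
```

Because the search is over all 2^n valuations, this decision procedure is exponential in the number of variables, which is why it only works for propositional (not predicate) logic.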
https://en.wikipedia.org/wiki/List%20of%20web%20testing%20tools
This is a list of web testing tools, giving a general overview in terms of features, sometimes used for Web scraping. Main features Web testing tools may be classified based on different prerequisites that a user may require to test web applications mainly scripting requirements, GUI functionalities and browser compatibility. See also Comparison of GUI testing tools Headless browser
https://en.wikipedia.org/wiki/Terminal%20aerodrome%20forecast
In meteorology and aviation, terminal aerodrome forecast (TAF) is a format for reporting weather forecast information, particularly as it relates to aviation. TAFs are issued at least four times a day, every six hours, for major civil airfields: 0000, 0600, 1200 and 1800 UTC, and generally apply to a 24- or 30-hour period, and an area within approximately (or in Canada) from the center of an airport runway complex. TAFs are issued every three hours for military airfields and some civil airfields and cover a period ranging from 3 hours to 30 hours. TAFs complement and use similar encoding to METAR reports. They are produced by a human forecaster based on the ground. For this reason, there are considerably fewer TAF locations than there are airports for which METARs are available. TAFs can be more accurate than numerical weather forecasts, since they take into account local, small-scale, geographic effects. In the United States, the weather forecasters responsible for the TAFs in their respective areas are located within one of the 122 Weather Forecast Offices operated by the United States' National Weather Service. In contrast, a trend type forecast (TTF), which is similar to a TAF, is always produced by a person on-site where the TTF applies. In the United Kingdom, most TAFs for military airfields are produced locally; however, TAFs for civil airfields are produced at the Met Office headquarters in Exeter. The United States Air Force employs active duty enlisted personnel as TAF writers. Air Force weather personnel are responsible for providing weather support for all Air Force and Army operations. Different countries use different change criteria for their weather groups. In the United Kingdom, TAFs for military airfields use colour states as one of the change criteria. Civil airfields in the UK use slightly different criteria. Code This example of a 30-hour TAF was released on November 5, 2008, at 1730 UTC: TAF KXYZ 051730Z 0518/0624 31008KT 3SM -SHRA
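The encoded header of the example report above can be split mechanically. The toy parser below handles only the leading fields of this one example (station, issue time, validity period, wind); real TAF grammars carry many more groups, and the field names here are choices of this illustration:

```python
import re

# Header layout assumed from the example: "TAF KXYZ 051730Z 0518/0624 31008KT ..."
# station (4 letters), issue time DDHHMMZ, validity DDHH/DDHH, wind dddffKT.
TAF_HEADER = re.compile(
    r"TAF\s+(?P<station>[A-Z]{4})\s+"
    r"(?P<day>\d{2})(?P<hour>\d{2})(?P<minute>\d{2})Z\s+"
    r"(?P<valid_from>\d{4})/(?P<valid_to>\d{4})\s+"
    r"(?P<wind_dir>\d{3})(?P<wind_kt>\d{2,3})KT"
)

def parse_taf_header(report):
    """Split the leading fields of a TAF report (toy sketch, not a full decoder)."""
    m = TAF_HEADER.match(report)
    if not m:
        raise ValueError("not a recognizable TAF header")
    g = m.groupdict()
    return {
        "station": g["station"],
        "issued_day_utc": g["day"],
        "issued_time_utc": g["hour"] + g["minute"],
        "valid": (g["valid_from"], g["valid_to"]),           # day+hour pairs
        "wind": (int(g["wind_dir"]), int(g["wind_kt"])),     # degrees, knots
    }
```

Applied to the example, this yields station KXYZ, issued on the 5th at 1730 UTC, valid from day 05 hour 18 to day 06 hour 24, with wind from 310° at 8 knots.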
https://en.wikipedia.org/wiki/Digest%20access%20authentication
Digest access authentication is one of the agreed-upon methods a web server can use to negotiate credentials, such as username or password, with a user's web browser. This can be used to confirm the identity of a user before sending sensitive information, such as online banking transaction history. It applies a hash function to the username and password before sending them over the network. In contrast, basic access authentication uses the easily reversible Base64 encoding instead of hashing, making it non-secure unless used in conjunction with TLS. Technically, digest authentication is an application of MD5 cryptographic hashing with usage of nonce values to prevent replay attacks. It uses the HTTP protocol. Overview Digest access authentication was originally specified by (An Extension to HTTP: Digest Access Authentication). RFC 2069 specifies roughly a traditional digest authentication scheme with security maintained by a server-generated nonce value. The authentication response is formed as follows (where HA1 and HA2 are names of string variables): HA1 = MD5(username:realm:password) HA2 = MD5(method:digestURI) response = MD5(HA1:nonce:HA2) An MD5 hash is a 16-byte value. The HA1 and HA2 values used in the computation of the response are the hexadecimal representation (in lowercase) of the MD5 hashes respectively. RFC 2069 was later replaced by (HTTP Authentication: Basic and Digest Access Authentication). RFC 2617 introduced a number of optional security enhancements to digest authentication; "quality of protection" (qop), nonce counter incremented by client, and a client-generated random nonce. These enhancements are designed to protect against, for example, chosen-plaintext attack cryptanalysis. If the algorithm directive's value is "MD5" or unspecified, then HA1 is HA1 = MD5(username:realm:password) If the algorithm directive's value is "MD5-sess", then HA1 is HA1 = MD5(MD5(username:realm:password):nonce:cnonce) If the qop directive's value is "
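The HA1/HA2/response recipe above is straightforward to reproduce with a standard MD5 implementation. The sketch below covers the older RFC 2069 form and the RFC 2617 qop="auth" form (the "MD5-sess" variant is omitted); the function names are illustrative:

```python
import hashlib

def md5_hex(data: str) -> str:
    """Lowercase hexadecimal MD5, as digest authentication requires."""
    return hashlib.md5(data.encode("utf-8")).hexdigest()

def digest_response(username, realm, password, method, uri, nonce,
                    nc=None, cnonce=None, qop=None):
    """Compute the digest response for algorithm "MD5".

    With qop/nc/cnonce supplied, this follows the RFC 2617 qop="auth"
    form; otherwise it falls back to the RFC 2069 form shown above.
    """
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    if qop:
        return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")
```

With the well-known example parameters from RFC 2617 (user "Mufasa", realm "testrealm@host.com", password "Circle Of Life", method GET on /dir/index.html), the qop="auth" form reproduces the documented response 6629fae49393a05397450978507c4ef1.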
https://en.wikipedia.org/wiki/Named%20set%20theory
Named set theory is a branch of theoretical mathematics that studies the structures of names. The named set is a theoretical concept that generalizes the structure of a name described by Frege. Its generalization bridges the descriptivist theory of a name, and its triad structure (name, sensation and reference), with mathematical structures that define mathematical names using triplets. It deploys the former to view the latter at a higher abstract level that unifies a name and its relationship to a mathematical structure as a constructed reference. This enables all names in science and technology to be treated as named sets or as systems of named sets. Informally, named set theory is a generalization that studies collections of objects (possibly a single object) connected to other objects (possibly to a single object). The paradigmatic example of a named set is a collection of objects connected to its name. Mathematical examples of named sets are coordinate spaces (objects are points and coordinates are names of these points), vector fields on manifolds (objects are points of the manifold and vectors assigned to points are names of these points), binary relations between two sets (objects are elements of the first set and elements of the second set are names) and fiber bundles (objects form a topological space, names form another topological space and the connection is a continuous projection). The language of named set theory can be used in the definitions of all of these abstract objects. History In the 20th century, many generalizations of sets were invented, e.g., fuzzy sets (Zadeh, 1965), or rediscovered, e.g., multisets (Knuth, 1997). As a result, these generalizations created a unification problem in the foundation of mathematics. The concept of a named set was created as a solution to this problem. Its generalization of mathematical structures allowed for the unification of all known generalizations of sets. Later it was demonstrated that all basic mathematical
https://en.wikipedia.org/wiki/Bertrand%20paradox%20%28probability%29
The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités (1889), as an example to show that the principle of indifference may not produce definite, well-defined results for probabilities if it is applied uncritically when the domain of possibilities is infinite. Bertrand's formulation of the problem The Bertrand paradox is generally presented as follows: Consider an equilateral triangle inscribed in a circle. Suppose a chord of the circle is chosen at random. What is the probability that the chord is longer than a side of the triangle? Bertrand gave three arguments (each using the principle of indifference), all apparently valid, yet yielding different results: The "random endpoints" method: Choose two random points on the circumference of the circle and draw the chord joining them. To calculate the probability in question imagine the triangle rotated so its vertex coincides with one of the chord endpoints. Observe that if the other chord endpoint lies on the arc between the endpoints of the triangle side opposite the first point, the chord is longer than a side of the triangle. The length of the arc is one third of the circumference of the circle, therefore the probability that a random chord is longer than a side of the inscribed triangle is 1/3. The "random radial point" method: Choose a radius of the circle, choose a point on the radius and construct the chord through this point and perpendicular to the radius. To calculate the probability in question imagine the triangle rotated so a side is perpendicular to the radius. The chord is longer than a side of the triangle if the chosen point is nearer the center of the circle than the point where the side of the triangle intersects the radius. The side of the triangle bisects the radius, therefore the probability a random chord is longer than a side of the inscribed triangle is 1/2. The "random midpoint" method:
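Each of the three selection procedures is easy to simulate, and the simulations reproduce three different answers, which is the heart of the paradox. A Monte Carlo sketch on a unit circle (sample size and helper names are choices of this illustration): a chord is longer than the inscribed equilateral triangle's side exactly when its length exceeds sqrt(3).

```python
import math
import random

SIDE = math.sqrt(3.0)  # side of the inscribed equilateral triangle, radius = 1

def random_endpoints():
    """Chord between two uniformly random points on the circumference."""
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2.0 * abs(math.sin((a - b) / 2.0))  # chord length

def random_radial_point():
    """Chord perpendicular to a radius at a uniformly random distance."""
    d = random.uniform(0, 1)                   # distance of chord from center
    return 2.0 * math.sqrt(1.0 - d * d)

def random_midpoint():
    """Chord whose midpoint is uniform in the disk (rejection sampling)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        d2 = x * x + y * y
        if d2 <= 1.0:
            return 2.0 * math.sqrt(1.0 - d2)

def estimate(method, trials=200_000):
    """Fraction of sampled chords longer than the triangle's side."""
    return sum(method() > SIDE for _ in range(trials)) / trials
```

Run with a fixed seed, the three estimates converge to 1/3, 1/2, and 1/4 respectively: three internally consistent answers to the same loosely posed question.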
https://en.wikipedia.org/wiki/Statistical%20finance
Statistical finance is the application of econophysics to financial markets. Instead of the normative roots of finance, it uses a positivist framework. It includes exemplars from statistical physics with an emphasis on emergent or collective properties of financial markets. Empirically observed stylized facts are the starting point for this approach to understanding financial markets. Stylized facts Stock markets are characterised by bursts of price volatility. Price changes are less volatile in bull markets and more volatile in bear markets. Price change correlations are stronger with higher volatility, and their auto-correlations die out quickly. Almost all real data have more extreme events than a Gaussian distribution would suggest. Volatility correlations decay slowly. Trading volumes have memory the same way that volatilities do. Past price changes are negatively correlated with future volatilities. Research objectives Statistical finance is focused on three areas: Empirical studies focused on the discovery of interesting statistical features of financial time-series data aimed at extending and consolidating the known stylized facts. The use of these discoveries to build and implement models that better price derivatives and anticipate stock price movement with an emphasis on non-Gaussian methods and models. The study of collective and emergent behaviour in simulated and real markets to uncover the mechanisms responsible for the observed stylized facts with an emphasis on agent-based models. Financial econometrics also has a focus on the first two of these three areas. However, there is almost no overlap or interaction between the community of statistical finance researchers (who typically publish in physics journals) and the community of financial econometrics researchers (who typically publish in economics journals). Behavioral finance and statistical finance Behavioural finance attempts to explain price anomalies in terms of the biased behaviour of individuals, m
https://en.wikipedia.org/wiki/Segment%20protection
Segment protection is a type of backup technique that can be used in most networks. It can be implemented as a dedicated backup or as a shared backup protection. Overlapping segments and non-overlapping segments are allowed; each provides different advantages. Technique Terms Working path - is the chosen route from source to destination. Segment protection path - is the working path in which the broken segment has been replaced by its protected segment. Primary segment - is a segment of the working path. Protected segment - is the backup path of one segment. End-to-end protection - is the protection of one segment where the source and destination are also the end points of the backup protection. Examples In the "Working path" animation on the right it can be seen that for a chosen route the primary path becomes the working path. This example illustrates that the source (node A) is routed to B, then C, D, E, and lastly the destination (node F). We can see that segment protection has been implemented. Segment 1 consists of nodes A, B, C, and D, while segment 2 consists of nodes C, D, E, and F. Let's assume that link B-C failed. Nodes B and C know that the link between them is down, so they signal to their neighboring nodes that a link is down and to move to a backup path. Node A sends its traffic over to node D directly. Node D then sends the traffic over its route to E then finally destination F. Note: in this case the segment protection path for segment 1 does not contain any intermediate nodes; this is usually not the case, but the example would follow respectively. Overlapping vs. non-overlapping Overlapping and non-overlapping segment protection have one main difference but provide different protections at different costs. The diagrams to the right, "overlapping protection" and "non-overlapping protection", illustrate the difference between the two. The overlapping scheme makes sure that there is at least one link that is protected by two segments, while the non-overlapping scheme begins
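The rerouting step in the example can be sketched as a small data-structure exercise: find the protected segment covering the failed link, then splice its backup path into the working path. The node names mirror the text; the backup path for the second segment (via a node "G") and the data layout are hypothetical choices of this illustration.

```python
# Working path A-B-C-D-E-F with two overlapping protected segments.
WORKING_PATH = ["A", "B", "C", "D", "E", "F"]
SEGMENTS = [
    # Segment 1: backup A->D with no intermediate nodes, as in the text.
    {"primary": ["A", "B", "C", "D"], "backup": ["A", "D"]},
    # Segment 2: backup via a hypothetical node "G" (not in the source text).
    {"primary": ["C", "D", "E", "F"], "backup": ["C", "G", "F"]},
]

def reroute(path, failed_link, segments):
    """Replace the segment containing the failed link with its backup path."""
    u, v = failed_link
    for seg in segments:
        p = seg["primary"]
        for i in range(len(p) - 1):
            if {p[i], p[i + 1]} == {u, v}:          # failed link lies in this segment
                start, end = path.index(p[0]), path.index(p[-1])
                return path[:start] + seg["backup"] + path[end + 1:]
    raise ValueError("failed link is not covered by any protected segment")
```

For the failure of link B-C described above, the spliced path is A-D-E-F: traffic skips the broken segment entirely and rejoins the working path at node D.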
https://en.wikipedia.org/wiki/Artificial%20heart
An artificial heart is a device that replaces the heart. Artificial hearts are typically used to bridge the time to heart transplantation, or to permanently replace the heart in the case that a heart transplant (from a deceased human or, experimentally, from a deceased genetically engineered pig) is impossible. Although other similar inventions preceded it from the late 1940s, the first artificial heart to be successfully implanted in a human was the Jarvik-7 in 1982, designed by a team including Willem Johan Kolff, William DeVries and Robert Jarvik. An artificial heart is distinct from a ventricular assist device (VAD; for either one or both of the ventricles, the heart's lower chambers), which can be a permanent solution also, or the intra-aortic balloon pump – both devices are designed to support a failing heart. It is also distinct from a cardiopulmonary bypass machine, which is an external device used to provide the functions of both the heart and lungs, used only for a few hours at a time, most commonly during cardiac surgery. It is also distinct from a ventilator, used to support failing lungs, or the extracorporeal membrane oxygenation (ECMO), which is used to support those with both inadequate heart and lung function for up to days or weeks, unlike the bypass machine. History Origins A synthetic replacement for a heart remains a long-sought "holy grail" of modern medicine. The obvious benefit of a functional artificial heart would be to lower the need for heart transplants because the demand for organs always greatly exceeds supply. Although the heart is conceptually a pump, it embodies subtleties that defy straightforward emulation with synthetic materials and power supplies. Consequences of these issues include severe foreign-body rejection and external batteries that limit mobility. These complications limited the lifespan of early human recipients from hours to days. The artificial heart has seen many innovations through the years, and each new
https://en.wikipedia.org/wiki/ECLR-attributed%20grammar
ECLR-attributed grammars are a special type of attribute grammars. They are a variant of LR-attributed grammars where an equivalence relation on inherited attributes is used to optimize attribute evaluation. EC stands for equivalence class. Rie is based on ECLR-attributed grammars. External links http://www.is.titech.ac.jp/~sassa/lab/rie-e.html M. Sassa, H. Ishizuka and I. Nakata: ECLR-attributed grammars: a practical class of LR-attributed grammars. Inf. Process. Lett. 24 (1987), 31–41. M. Sassa, H. Ishizuka and I. Nakata: Rie, a Compiler Generator Based on a One-pass-type Attribute Grammar. Software—practice and experience 25:3 (March 1995), 229–250. Formal languages Compiler construction
https://en.wikipedia.org/wiki/List%20of%20speech%20recognition%20software
Speech recognition software is available for many computing platforms, operating systems, use models, and software licenses. Here is a listing of such, grouped in various useful ways. Acoustic models and speech corpus (compilation) The following list presents notable speech recognition software engines with a brief synopsis of characteristics. Macintosh Cross-platform web apps based on Chrome The following list presents notable speech recognition web apps that operate in the Chrome browser. They make use of the HTML5 Web Speech API. Mobile devices and smartphones Many mobile phone handsets, including feature phones and smartphones such as iPhones and BlackBerrys, have basic dial-by-voice features built in. Many third-party apps have implemented natural-language speech recognition support, including: Windows Windows built-in speech recognition The Windows Speech Recognition version 8.0 by Microsoft comes built into Windows Vista, Windows 7, Windows 8 and Windows 10. Speech Recognition is available only in English, French, Spanish, German, Japanese, Simplified Chinese, and Traditional Chinese, and only in the corresponding version of Windows, meaning that you cannot use the speech recognition engine in one language if you use a version of Windows in another language. Windows 7 Ultimate and Windows 8 Pro allow you to change the system language, and therefore change which speech engine is available. Windows Speech Recognition evolved into Cortana, a personal assistant included in Windows 10. Windows 7, 8, 10, 11 third-party speech recognition Braina – Dictate into third party software and websites, fill web forms and execute vocal commands. Dragon NaturallySpeaking from Nuance Communications – Successor to the older DragonDictate product. Focus on dictation. 64-bit Windows support since version 10.1. Tazti – Create speech command profiles to play PC games and control applications – programs. Create speech commands to open files, folders, webpages, a
https://en.wikipedia.org/wiki/Bokosuka%20Wars
is a 1983 action-strategy role-playing video game developed by Kōji Sumii (住井浩司) and released by ASCII for the Sharp X1 computer, followed by ports to the MSX, FM-7, NEC PC-6001, NEC PC-8801 and NEC PC-9801 computer platforms, as well as an altered version released for the Family Computer console and later the Virtual Console service. It revolves around a leader who must lead an army in phalanx formation across a battlefield in real-time against overwhelming enemy forces while freeing and recruiting soldiers along the way, with each unit able to gain experience and level up through battle. The player must make sure that the leader stays alive, until the army reaches the enemy castle to defeat the leader of the opposing forces. The game was responsible for laying the foundations for the tactical role-playing game genre, or the "simulation RPG" genre as it is known in Japan, with its blend of role-playing and strategy game elements. The game has also variously been described as an early example of an action role-playing game, an early prototype real-time strategy game, and a unique reverse tower defense game. In its time, the game was considered a major success in Japan. Release Originally developed in 1983 for the Sharp X1 computer, it won ASCII Entertainment's first "Software Contest" and was sold boxed by them that year. An MSX port was then released in 1984, followed in 1985 by versions for the S1, PC-6000mkII, PC-8801, PC-9801, FM-7 and the Family Computer (the latter released on December 14, 1985). LOGiN Magazine's November 1984 issue featured a sequel for the X1 entitled New Bokosuka Wars with the source code included. With all-new enemy characters and redesigned items and traps, the level of difficulty became more balanced. It was also included in Tape Login Magazine's November 1984 issue, but never sold in any other form. The PC-8801 version used to be sold as a download from Enterbrain and was ported for the i-Mode service in 2004. The Famicom version wa