https://en.wikipedia.org/wiki/Ethnopsychopharmacology
A growing body of research has begun to highlight differences in the way racial and ethnic groups respond to psychiatric medication. It has been noted that there are "dramatic cross-ethnic and cross-national variations in the dosing practices and side-effect profiles in response to practically all classes of psychotropics."

Differences in drug metabolism
Drug metabolism is controlled by a number of specific enzymes, and the action of these enzymes varies among individuals. For example, most individuals show normal activity of the IID6 isoenzyme that is responsible for the metabolism of many tricyclic antidepressant medications and most antipsychotic drugs. However, studies have found that one-third of Asian Americans and African Americans have a genetic alteration that decreases the metabolic rate of the IID6 isoenzyme, leading to a greater risk of side effects and toxicity. The CYP2D6 enzyme, important for the way in which the liver clears many drugs from the body, varies greatly between individuals in ways that can be ethnically specific. Though enzyme activity is genetically influenced, it can also be altered by cultural and environmental factors such as diet, the use of other medications, alcohol, and disease states.

Differences in pharmacodynamics
If two individuals have the same blood level of a medication, there may still be differences in the way the body responds due to pharmacodynamic differences; pharmacodynamic responses may also be influenced by racial and cultural factors.

Cultural factors
In addition to biology and environment, culturally determined attitudes toward illness and its treatment may affect how an individual responds to psychiatric medication. Some cultures see suffering and illness as unavoidable and not amenable to medication, while others treat symptoms with polypharmacy, oft
https://en.wikipedia.org/wiki/Statistical%20energy%20analysis
Statistical energy analysis (SEA) is a method for predicting the transmission of sound and vibration through complex structural acoustic systems. The method is particularly well suited for quick system level response predictions at the early design stage of a product, and for predicting responses at higher frequencies. In SEA a system is represented in terms of a number of coupled subsystems and a set of linear equations are derived that describe the input, storage, transmission and dissipation of energy within each subsystem. The parameters in the SEA equations are typically obtained by making certain statistical assumptions about the local dynamic properties of each subsystem (similar to assumptions made in room acoustics and statistical mechanics). These assumptions significantly simplify the analysis and make it possible to analyze the response of systems that are often too complex to analyze using other methods (such as finite element and boundary element methods). History The initial derivation of SEA arose from independent calculations made in 1959 by Richard Lyon and Preston Smith as part of work concerned with the development of methods for analyzing the response of large complex aerospace structures subjected to spatially distributed random loading. Lyon's calculation showed that under certain conditions, the flow of energy between two coupled oscillators is proportional to the difference in the oscillator energies (suggesting a thermal analogy exists in structural-acoustic systems). Smith's calculation showed that a structural mode and a diffuse reverberant sound field attain a state of 'equipartition of energy' as the damping of the mode is reduced (suggesting a state of thermal equilibrium can exist in structural-acoustic systems). The extension of the two oscillator results to more general systems is often referred to as the modal approach to SEA. While the modal approach provides physical insights into the mechanisms that govern energy flow it
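The coupling relation above (energy flow proportional to the difference in subsystem energies) leads directly to the linear SEA power-balance equations. The following minimal Python sketch solves them for two coupled subsystems; the frequency, loss factors, and input powers are made-up illustrative values, not data for any real structure:

```python
import math

# Hypothetical SEA parameters for two coupled subsystems (illustrative only).
omega = 2 * math.pi * 1000.0  # band centre frequency, rad/s
eta1, eta2 = 0.01, 0.01       # damping loss factors
eta12, eta21 = 0.005, 0.005   # coupling loss factors
P1, P2 = 1.0, 0.0             # input powers, W (only subsystem 1 is driven)

# Power balance for each subsystem:
#   P_i = omega*(eta_i + eta_ij)*E_i - omega*eta_ji*E_j
a11, a12 = omega * (eta1 + eta12), -omega * eta21
a21, a22 = -omega * eta12, omega * (eta2 + eta21)

# Solve the 2x2 linear system for the subsystem energies.
det = a11 * a22 - a12 * a21
E1 = (P1 * a22 - a12 * P2) / det
E2 = (a11 * P2 - a21 * P1) / det

# Net coupled power flows "downhill", proportional to the energy difference,
# which is the thermal analogy noted in Lyon's calculation.
P12 = omega * eta12 * E1 - omega * eta21 * E2
print(E1, E2, P12)
```

With only subsystem 1 driven, the driven subsystem ends up with the higher energy, and the net power flows from 1 to 2.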
https://en.wikipedia.org/wiki/Hole%20punching%20%28networking%29
Hole punching (or sometimes punch-through) is a technique in computer networking for establishing a direct connection between two parties in which one or both are behind firewalls or behind routers that use network address translation (NAT). To punch a hole, each client connects to an unrestricted third-party server that temporarily stores external and internal address and port information for each client. The server then relays each client's information to the other, and using that information each client tries to establish direct connection; as a result of the connections using valid port numbers, restrictive firewalls or routers accept and forward the incoming packets on each side. Hole punching does not require any knowledge of the network topology to function. ICMP hole punching, UDP hole punching and TCP hole punching respectively use Internet Control Message, User Datagram and Transmission Control Protocols. Overview Networked devices with public or globally accessible IP addresses can create connections between one another easily. Clients with private addresses may also easily connect to public servers, as long as the client behind a router or firewall initiates the connection. However, hole punching (or some other form of NAT traversal) is required to establish a direct connection between two clients that both reside behind different firewalls or routers that use network address translation (NAT). Both clients initiate a connection to an unrestricted server, which notes endpoint and session information including public IP and port along with private IP and port. The firewalls also note the endpoints in order to allow responses from the server to pass back through. The server then sends each client's endpoint and session information to the other client, or peer. Each client tries to connect to its peer through the specified IP address and port that the peer's firewall has opened for the server. The new connection attempt punches a hole in the client's fir
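The handshake described above can be sketched with plain UDP sockets. In the demo below both "peers" live on localhost and read each other's address directly; in a real deployment each peer would learn the other's public address and port from the rendezvous server (the server exchange is the assumption elided here):

```python
import socket

def make_peer():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))   # OS assigns an ephemeral port
    s.settimeout(2.0)
    return s

a, b = make_peer(), make_peer()
addr_a, addr_b = a.getsockname(), b.getsockname()

# Step 1: each peer sends a datagram toward the other. Behind a NAT this
# outbound packet is what "punches the hole": it creates the mapping that
# lets the peer's packets back in.
a.sendto(b"punch from A", addr_b)
b.sendto(b"punch from B", addr_a)

# Step 2: each peer now receives the other's datagram directly,
# with no relay in the data path.
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b, msg_at_a)
```

The same send-then-receive pattern is what both real peers perform simultaneously after the server relays the endpoint information.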
https://en.wikipedia.org/wiki/Overlay%20transport%20virtualization
Overlay transport virtualization (OTV) is a Cisco proprietary protocol for relaying layer 2 communications between layer 3 computer networks. See also Distributed Overlay Virtual Ethernet (DOVE) Generic Routing Encapsulation (GRE) IEEE 802.1ad, an Ethernet networking standard, also known as provider bridging, Stacked VLANs, or simply QinQ. NVGRE, a similar competing specification Virtual Extensible LAN (VXLAN) Virtual LAN (VLAN) External links Cisco Overlay Transport Virtualization overview Network protocols Tunneling protocols
https://en.wikipedia.org/wiki/Data%20stream%20management%20system
A data stream management system (DSMS) is a computer software system to manage continuous data streams. It is similar to a database management system (DBMS), which is, however, designed for static data in conventional databases. A DBMS also offers flexible query processing so that the information needed can be expressed using queries. However, in contrast to a DBMS, a DSMS executes a continuous query that is not only performed once, but is permanently installed. Therefore, the query is continuously executed until it is explicitly uninstalled. Since most DSMS are data-driven, a continuous query produces new results as long as new data arrive at the system. This basic concept is similar to complex event processing, so the two technologies are partially converging.

Functional principle
One important feature of a DSMS is the ability to handle potentially infinite and rapidly changing data streams while offering flexible processing, even though resources such as main memory are limited. The following table provides various principles of DSMS and compares them to traditional DBMS.

Processing and streaming models
One of the biggest challenges for a DSMS is to handle potentially infinite data streams using a fixed amount of memory and no random access to the data. There are different approaches to limit the amount of data in one pass, which can be divided into two classes: on the one hand, there are compression techniques that try to summarize the data, and on the other hand, there are window techniques that try to partition the data into (finite) parts.

Synopses
The idea behind compression techniques is to maintain only a synopsis of the data, but not all (raw) data points of the data stream. The algorithms range from selecting random data points (called sampling) to summarization using histograms, wavelets or sketching. One simple example of a compression is the continuous calculation of an average. Instead of memorizing each data point, th
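The running-average synopsis mentioned above needs only two numbers of state, regardless of how many stream elements arrive. A minimal Python sketch (class and method names are my own, not from any DSMS product):

```python
class RunningMean:
    """Constant-memory synopsis of a stream: keeps only a count and a sum,
    never the raw data points."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def update(self, x):
        # One pass, O(1) memory: each arriving element updates the synopsis
        # and is then discarded.
        self.n += 1
        self.total += x

    @property
    def value(self):
        return self.total / self.n if self.n else float("nan")

m = RunningMean()
for x in [2.0, 4.0, 9.0]:   # stand-in for an unbounded stream
    m.update(x)
print(m.value)  # 5.0
```

A continuous query over the stream would read `m.value` whenever a new result is due, which is exactly the data-driven behaviour described above.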
https://en.wikipedia.org/wiki/Multisync%20monitor
A multiple-sync (multisync) monitor, also known as a multiscan or multimode monitor, is a raster-scan analog video monitor that can properly synchronise with multiple horizontal and vertical scan rates. In contrast, fixed frequency monitors can only synchronise with a specific set of scan rates. They are generally used for computer displays, but sometimes for television, and the terminology is mostly applied to CRT displays although the concept applies to other technologies. Multiscan computer monitors appeared during the mid 1980s, offering flexibility as computer video hardware shifted from producing a single fixed scan rate to multiple possible scan rates. "MultiSync" specifically was a trademark of one of NEC's first multiple-sync monitors. Computers History Early home computers output video to ordinary televisions or composite monitors, utilizing television display standards such as NTSC, PAL or SECAM. These display standards had fixed scan rates, and only used the vertical and horizontal sync pulses embedded in the video signals to ensure synchronization, not to set the actual scan rates. Early dedicated computer monitors still often relied on fixed scan rates. IBM's original 1981 PC, for instance, was sold with a choice of two video cards (MDA and CGA) which were intended for use with custom IBM monitors which still used fixed scan rates. The CGA timings were identical to NTSC television, whereas the MDA card used a custom timing for higher resolution to provide better text quality. Early Macintosh monitors also used fixed scan rates. In 1984, IBM's EGA added a second resolution which necessitated the use of a monitor supporting two scan rates, the original CGA rate as well as a second scan rate for the new video modes. This monitor as well as others that could be manually switched between these two sync rates were known as dual-scan displays. The NEC Multisync was released in 1985 for use with the IBM PC, supporting a wide range of sync frequencies in
https://en.wikipedia.org/wiki/Hole%20argument
In general relativity, the hole argument is an apparent paradox that much troubled Albert Einstein while developing his famous field equations. Some philosophers of physics take the argument to raise a problem for manifold substantialism, a doctrine that the manifold of events in spacetime is a "substance" which exists independently of the metric field defined on it or the matter within it. Other philosophers and physicists disagree with this interpretation, and view the argument as a confusion about gauge invariance and gauge fixing instead. Einstein's hole argument In a usual field equation, knowing the source of the field, and the boundary conditions, determines the field everywhere. For example, if we are given the current and charge density and appropriate boundary conditions, Maxwell's equations determine the electric and magnetic fields. They do not determine the vector potential though, because the vector potential depends on an arbitrary choice of gauge. Einstein noticed that if the equations of gravity are generally covariant, then the metric cannot be determined uniquely by its sources as a function of the coordinates of spacetime. As an example: consider a gravitational source, such as the Sun. Then there is some gravitational field described by a metric g(r). Now perform a coordinate transformation r r' where r' is the same as r for points which are inside the Sun but r' is different from r outside the Sun. The coordinate description of the interior of the Sun is unaffected by the transformation, but the functional form of the metric g' for the new coordinate values outside the Sun is changed. Due to the general covariance of the field equations, this transformed metric g' is also a solution in the untransformed coordinate system. This means that one source, the Sun, can be the source of many seemingly different metrics. The resolution is immediate: any two fields which only differ by such a "hole" transformation are physically equivalent, just as t
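The transformation in the argument can be stated compactly in modern notation. The sketch below uses standard textbook symbols (an active diffeomorphism φ that is the identity outside the hole), not notation from the text above:

```latex
G_{\mu\nu}[g] = 8\pi\, T_{\mu\nu}
\quad\Longrightarrow\quad
G_{\mu\nu}[\phi^{*}g] = 8\pi\, \phi^{*}T_{\mu\nu}.
```

Inside the hole, where the sources vanish and hence $\phi^{*}T = T$, both $g$ and $\phi^{*}g$ solve the field equations for the same sources, yet differ as functions of the coordinates; treating them as physically equivalent is the resolution described above.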
https://en.wikipedia.org/wiki/List%20of%20taxa%20named%20after%20human%20genitals
This is a list of species, genera, and other taxa named after human genitals.

Plants
Families: Orchidaceae. The type genus is Orchis, whose name comes from the Ancient Greek (), literally meaning "testicle", because of the shape of the twin tubers in some species of Orchis.
Genera: Amorphophallus, Clitoria, Orchis
Species: Alysicarpus vaginalis, Baumea vaginalis, Chenopodium vulvaria, Festuca vaginalis, Pontederia vaginalis
Varieties: Capsicum annum annum var. annum 'penis pepper'

Fungi
Orders: Phallales
Families: Phallaceae
Genera: Phallus
Species: Amanita phalloides, Amanita vaginata

Animals
Subspecies: Muntiacus muntjak vaginalis

General
Pubescens. The word originates from the Latin pubes, "adult, full-grown"; "genital area, groin" (e.g., Pubis); "the down or soft hair which begins to grow on young persons when they come to the age of puberty". The use of the term in biology to refer to hairiness or soft down is recorded since 1760 for plants and since 1826 for insects.
Vaginalis. The common specific name is derived from the Latin vagina, originally meaning "sheath, scabbard, covering; sheath of an ear of grain, hull, husk." The specific epithet may refer to a sheathed trait or habit of an organism (e.g. Alysicarpus vaginalis), or may refer to resemblance/relation to the vagina (e.g. Gardnerella vaginalis)
https://en.wikipedia.org/wiki/28th%20meridian%20west
The meridian 28° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, the Atlantic Ocean, the Azores, the Southern Ocean, and Antarctica to the South Pole. The 28th meridian west forms a great circle with the 152nd meridian east.

From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 28th meridian west passes through:
- Arctic Ocean
- Moore Glacier (Northern Peary Land)
- Frederick E. Hyde Fjord
- Melville Land (Southern Peary Land)
- Independence Fjord
- Mainland, the island of Milne Land and the mainland again
- Atlantic Ocean
- Island of Graciosa, Azores
- Atlantic Ocean
- Island of São Jorge, Azores
- Atlantic Ocean, passing just east of Pico Island, Azores, and just east of Leskov Island
- Southern Ocean
- Antarctica, claimed by both (Argentine Antarctica) and (British Antarctic Territory)
https://en.wikipedia.org/wiki/WJXT
WJXT (channel 4) is an independent television station in Jacksonville, Florida, United States. It is owned by Graham Media Group alongside CW affiliate WCWJ (channel 17). The two stations share studios at 4 Broadcast Place on the south bank of the St. Johns River in Jacksonville; WJXT's transmitter is located on Anders Boulevard in the city's Killarney Shores section.

History
As a CBS affiliate, WJXT originally signed on the air on September 15, 1949, as WMBR-TV. It was Jacksonville's first television station and the second television station in Florida, after WTVJ (also on channel 4, now an NBC owned-and-operated station on channel 6) in Miami–Fort Lauderdale, broadcasting as a primary CBS affiliate on VHF channel 4. The station was co-owned alongside WMBR radio (1460 AM, now WQOP; and 96.1 FM, now WEJZ). Though the station was originally a primary CBS affiliate, it also maintained secondary affiliations with NBC, ABC and the DuMont Television Network. In 1953, the WMBR stations were purchased by The Washington Post Company. WMBR-TV dropped the DuMont affiliation in 1955, less than a year before the network ceased operations. Since its only competition in the Jacksonville market came from UHF station WJHP-TV (which signed on in 1953 and went dark three years later), channel 4 had a virtual television monopoly in northern Florida until September 1957, when it lost the NBC affiliation to upstart WFGA (channel 12, now WTLV). The Washington Post Company sold WMBR-AM-FM in 1958, while it kept the television station, whose callsign it changed to the current WJXT. WJXT remained a primary CBS and secondary ABC affiliate until WJKS-TV (channel 17, now CW sister station WCWJ) took the ABC affiliation upon its sign-on in February 1966, leaving WJXT exclusively with CBS. For much of its tenure as a CBS affiliate, WJXT was the only station affiliated with the network located between Savannah, Georgia, and Orlando, Florida, and was thus carried on many cable systems between Jacksonvil
https://en.wikipedia.org/wiki/Control%20volume
In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity, through which the continuum (a continuous medium such as a gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.

Overview
Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model. One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system. In continuum mechanics the conservation equations (for instance, the Navier–Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of th
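The volume-wise formulation described above is conventionally written via the Reynolds transport theorem; the sketch below uses standard notation (ρ the density, φ a conserved quantity per unit mass, u the flow velocity, n the outward normal), not symbols defined in the text:

```latex
\frac{D}{Dt}\int_{V_m(t)} \rho\,\phi \,dV
  \;=\; \int_{V} \frac{\partial(\rho\phi)}{\partial t}\,dV
  \;+\; \oint_{S} \rho\,\phi\,(\mathbf{u}\cdot\mathbf{n})\,dA ,
```

where $V_m(t)$ is the material volume instantaneously coinciding with the fixed control volume $V$ and $S$ is the control surface. At steady state the first term on the right vanishes, and with $\phi = 1$ the relation reduces to the statement above that mass entering the control volume equals mass leaving it.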
https://en.wikipedia.org/wiki/Hippocrates%20Prize%20for%20Poetry%20and%20Medicine
The Hippocrates Prize for Poetry and Medicine was founded in 2009 by Donald Singer and Michael Hulse. The founders "wished to draw together national and international perspectives on three major historical and contemporary themes uniting the disciplines of poetry and medicine: medicine as inspiration for the writings of poets; effects of poetic creativity on the experience of illness by patients, their families, friends, and carers; and poetry as therapy". Background The Hippocrates Prize for Poetry and Medicine provides international awards for unpublished poems in English by any living poet. There are seven main awards in the Hippocrates Prize for Poetry and Medicine. Three awards are given for international health students or international Health Service-related professionals, including clinicians, educators, researchers, and biomedical scientists. Three awards are given for open international entries. The International Hippocrates Prize for Young Poets was launched in 2012, and is also given for an unpublished poem on a medical theme. The Hippocrates Prize is awarded at an annual international conference on poetry and medicine. The 2010 and 2011 International Conferences on Poetry and Medicine and Hippocrates Awards generated the International Hippocrates Research Forum for Poetry and Medicine. The Hippocrates poetry and medicine initiative was awarded the 2011 Times Higher Education award for Excellence and Innovation. Hippocrates Prize conferences The 2017 Hippocrates Prize for Poetry and Medicine was awarded at Harvard Medical School in partnership with its Arts and Humanities Initiative. The 2018 Hippocrates Prize for Poetry and Medicine was awarded in Chicago in partnership with the Center for Bioethics and Medical Humanities at Northwestern University's Feinberg School of Medicine, the Poetry Foundation, the Arts and Humanities Initiative at Harvard Medical School and the Fellowship of Postgraduate Medicine. The 2019 Hippocrates Prize for Poetry
https://en.wikipedia.org/wiki/Carl%20Johnson%20%28Grand%20Theft%20Auto%29
Carl "CJ" Johnson is a fictional character and the playable protagonist of the 2004 video game Grand Theft Auto: San Andreas, the fifth main installment in Rockstar Games' Grand Theft Auto series. He is voiced by Young Maylay, who also served as the likeness for the character. In the game's storyline, Carl is depicted as the underboss of the Grove Street Families, a fictional street gang based in his home city of Los Santos, San Andreas (a fictional parody of Los Angeles, California). The gang is led by his older brother, "Sweet" Johnson, with whom Carl became estranged after the death of their younger brother, Brian, in a gang-related attack prior to the events of the game. Feeling his life in Los Santos is unpromising, Carl eventually decides to move to Liberty City in 1987, only to return home five years later after his mother, Beverly, is similarly killed. While seeking to make amends with friends and family for abandoning them, Carl returns to his previous gangster lifestyle as he embarks on an eventful journey across the entire state of San Andreas, which sees him making new allies and clashing with several powerful criminal organisations and corrupt authorities. Although initially portrayed as somewhat naïve, clumsy, and selfish, Carl develops over the course of San Andreas' storyline, both on a professional level, becoming a successful criminal and businessman, and on an emotional level, as he learns to appreciate his roots and those around him. Carl received critical acclaim, with praise going to his complexity, lack of stereotype and his sense of conscience, and is regarded as one of the greatest video game characters of all time. Concept and design Portrayal Carl's physical appearance is modeled after Los Angeles-based rapper and actor Young Maylay, who also provided the character's voice and motion-capture work. When asked about the character model for Carl, Young Maylay stated that the development team took "very professional" photographs of him to
https://en.wikipedia.org/wiki/Human%20female%20sexuality
Human female sexuality encompasses a broad range of behaviors and processes, including female sexual identity and sexual behavior, the physiological, psychological, social, cultural, political, and spiritual or religious aspects of sexual activity. Various aspects and dimensions of female sexuality, as a part of human sexuality, have also been addressed by principles of ethics, morality, and theology. In almost any historical era and culture, the arts, including literary and visual arts, as well as popular culture, present a substantial portion of a given society's views on human sexuality, which includes both implicit (covert) and explicit (overt) aspects and manifestations of feminine sexuality and behavior. In most societies and legal jurisdictions, there are legal bounds on what sexual behavior is permitted. Sexuality varies across the cultures and regions of the world, and has continually changed throughout history, and this also applies to female sexuality. Aspects of female sexuality include issues pertaining to biological sex, body image, self-esteem, personality, sexual orientation, values and attitudes, gender roles, relationships, activity options, and communication. While most women are heterosexual, significant minorities are homosexual or varying degrees of bisexual. Bisexual females are more common than bisexual males. Physiological General Sexual activity can encompass various sexually stimulating factors (physiological stimulation or psychological stimulation), including sexual fantasies and different sex positions, or the use of sex toys. Foreplay may precede some sexual activities, often leading to sexual arousal of the partners. It is also common for people to be sexually satisfied by being kissed, touched erotically, or held. Orgasm Orgasm, or sexual climax, is the sudden discharge of accumulated sexual tension during the sexual response cycle, resulting in rhythmic muscular contractions in the pelvic region characterized by an intense sen
https://en.wikipedia.org/wiki/Goma%20%28software%29
Goma is an open-source, parallel, and scalable multiphysics software package for modeling and simulation of real-life physical processes, with a basis in computational fluid dynamics for problems with evolving geometry. It solves problems in all branches of mechanics, including fluids, solids, and thermal analysis. Goma uses advanced numerical methods, focusing on the low-speed flow regime with coupled phenomena for manufacturing and performance applications. It also provides a flexible software development environment for specialty physics. Goma was created by Sandia National Laboratories and is currently supported by both Sandia and the University of New Mexico. Capabilities Goma is a finite element program which solves problems from all branches of mechanics, including fluid mechanics, solid mechanics, chemical reactions and mass transport, and energy transport. The conservation principles for momentum, mass, species, and energy, together with material constitutive relations, can be described by partial differential equations. The equations are made discrete for solution on a digital computer with the finite element method in space and the finite difference method in time. The resulting nonlinear, time-dependent, algebraic equations are solved with a full Newton-Raphson method. The linearized equations are solved with direct or Krylov-based iterative solvers. The simulations can be run on a single processor or on multiple processors in parallel using domain decomposition, which can greatly speed up engineering analysis. Example applications include, but are not limited to, coating and polymer processing flows, super-alloy processing, welding/soldering, electrochemical processes, and solid-network or solution film drying. A full description of Goma's capabilities can be found in Goma's capabilities document. Goma is frequently used in conjunction with other software packages. Cubit is typically used to generate computational meshes, while ParaView
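The full Newton-Raphson iteration described above can be illustrated on a toy two-equation nonlinear system. This is a hedged sketch of the numerical idea only (it is not Goma's API, and a real run would hand the linearized solve to a sparse direct or Krylov solver rather than Cramer's rule):

```python
def newton(F, J, x, tol=1e-12, max_iter=50):
    """Full Newton-Raphson for a 2x2 nonlinear system F(x) = 0."""
    for _ in range(max_iter):
        r = F(x)
        if max(abs(v) for v in r) < tol:
            break
        # Solve the linearized system J(x) dx = -F(x); in Goma this step
        # would use a direct or Krylov-based iterative solver.
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx0 = (-r[0] * d + b * r[1]) / det
        dx1 = (-a * r[1] + c * r[0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Toy "discretized equations": x^2 + y^2 = 4 and x*y = 1.
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]   # analytic Jacobian

sol = newton(F, J, [2.0, 0.0])
print(sol)
```

The quadratic convergence of the full Newton step is what makes it attractive for the strongly coupled, time-dependent systems Goma targets, at the price of assembling the full Jacobian each iteration.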
https://en.wikipedia.org/wiki/Hermite%20interpolation
In numerical analysis, Hermite interpolation, named after Charles Hermite, is a method of polynomial interpolation, which generalizes Lagrange interpolation. Lagrange interpolation allows computing a polynomial of degree less than that takes the same value at given points as a given function. Instead, Hermite interpolation computes a polynomial of degree less than such that the polynomial and its first derivatives have the same values at given points as a given function and its first derivatives. Hermite's method of interpolation is closely related to Newton's interpolation method, in that both are derived from the calculation of divided differences. However, there are other methods for computing a Hermite interpolating polynomial. One can use linear algebra, by taking the coefficients of the interpolating polynomial as unknowns, and writing as linear equations the constraints that the interpolating polynomial must satisfy. For another method, see .

Statement of the problem
Hermite interpolation consists of computing a polynomial of degree as low as possible that matches an unknown function both in observed value, and the observed value of its first derivatives. This means that values must be known. The resulting polynomial has a degree less than . (In a more general case, there is no need for to be a fixed value; that is, some points may have more known derivatives than others. In this case the resulting polynomial has a degree less than the number of data points.)

Let us consider a polynomial of degree less than with indeterminate coefficients; that is, the coefficients of are new variables. Then, by writing the constraints that the interpolating polynomial must satisfy, one gets a system of linear equations in unknowns. In general, such a system has exactly one solution. Charles Hermite proved that this is effectively the case here, as soon as the are pairwise different, and provided a method for computing it, which is described below.
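The divided-difference connection to Newton's method can be made concrete for the common case of matching values and first derivatives. A sketch in Python (function names are my own): each node is entered twice in the difference table, and where two entries of the table would divide by zero, the given derivative is used instead.

```python
def hermite_interpolate(xs, ys, dys):
    """Newton-form Hermite interpolation from values and first derivatives.

    Builds the divided-difference table with each node doubled; where two
    equal nodes meet, the divided difference is the supplied derivative.
    Returns a function evaluating the interpolating polynomial.
    """
    n = len(xs)
    z = [x for x in xs for _ in (0, 1)]          # each node repeated twice
    q = [[0.0] * (2 * n) for _ in range(2 * n)]  # divided-difference table
    for i in range(n):
        q[2 * i][0] = q[2 * i + 1][0] = ys[i]
        q[2 * i + 1][1] = dys[i]                 # f[x_i, x_i] = f'(x_i)
        if i > 0:
            q[2 * i][1] = (q[2 * i][0] - q[2 * i - 1][0]) / (z[2 * i] - z[2 * i - 1])
    for j in range(2, 2 * n):
        for i in range(j, 2 * n):
            q[i][j] = (q[i][j - 1] - q[i - 1][j - 1]) / (z[i] - z[i - j])
    coeffs = [q[i][i] for i in range(2 * n)]     # Newton-form coefficients

    def p(x):
        total, prod = 0.0, 1.0
        for k in range(2 * n):
            total += coeffs[k] * prod
            prod *= x - z[k]
        return total
    return p

# f(x) = x^3 with f'(x) = 3x^2, sampled at x = 0 and x = 1: the degree-3
# Hermite interpolant reproduces x^3 exactly.
p = hermite_interpolate([0.0, 1.0], [0.0, 1.0], [0.0, 3.0])
print(p(0.5))  # 0.125
```

With two nodes and one derivative each, four conditions determine a polynomial of degree at most three, matching the degree count stated above.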
https://en.wikipedia.org/wiki/Router%20on%20a%20stick
A router on a stick, also known as a one-armed router, is a router that has a single physical or logical connection to a network. It is a method of inter-VLAN routing where one router is connected to a switch via a single cable. The router has physical connections to the broadcast domains where one or more VLANs require routing between them. Devices on separate VLANs, as in a typical local area network, are unable to communicate with each other. Therefore, it is often used to forward traffic between locally attached hosts on separate logical routing domains or to facilitate routing table administration, distribution and relay.

Details
One-armed routers that perform traffic forwarding are often implemented on VLANs. They use a single Ethernet network interface port that is part of two or more Virtual LANs, enabling them to be joined. A VLAN allows multiple virtual LANs to coexist on the same physical LAN. This means that two machines attached to the same switch, but assigned to different VLANs, cannot send Ethernet frames to each other even though they pass over the same wires. If they need to communicate, then a router must be placed between the two VLANs to forward packets, just as if the two LANs were physically isolated. The only difference is that the router in question may contain only a single Ethernet network interface controller (NIC) that is part of both VLANs. Hence, "one-armed". While uncommon, hosts on the same physical medium may be assigned with addresses and to different networks. A one-armed router could be assigned addresses for each network and be used to forward traffic between locally distinct networks and to remote networks through another gateway. One-armed routers are also used for administration purposes such as route collection, multi-hop relay and looking glass servers. All traffic goes over the trunk twice, so the theoretical maximum sum of upload and download speed is the line rate. For a two-armed configuration, uploading does not need to impact downloa
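On a Linux machine acting as the one-armed router, the single physical interface is typically split into 802.1Q-tagged sub-interfaces, one per VLAN. A hedged configuration sketch using iproute2; the interface name `eth0` and the VLAN IDs and address ranges are assumptions chosen for illustration:

```shell
# Trunk port carries VLANs 10 and 20: create one tagged sub-interface per VLAN.
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20

# Give the router one address in each VLAN's subnet (illustrative ranges).
ip addr add 192.168.10.1/24 dev eth0.10
ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up

# Enable IP forwarding so traffic can hairpin between the two VLANs
# over the same physical link.
sysctl -w net.ipv4.ip_forward=1
```

Hosts in VLAN 10 would then use 192.168.10.1 as their default gateway (and VLAN 20 hosts 192.168.20.1); every inter-VLAN packet enters and leaves through the same trunk, which is the doubled-traffic effect noted above.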
https://en.wikipedia.org/wiki/Ioctl
In computing, ioctl (an abbreviation of input/output control) is a system call for device-specific input/output operations and other operations which cannot be expressed by regular system calls. It takes a parameter specifying a request code; the effect of a call depends completely on the request code. Request codes are often device-specific. For instance, a CD-ROM device driver which can instruct a physical device to eject a disc would provide an ioctl request code to do so. Device-independent request codes are sometimes used to give userspace access to kernel functions which are only used by core system software or still under development. The ioctl system call first appeared in Version 7 of Unix under that name. It is supported by most Unix and Unix-like systems, including Linux and macOS, though the available request codes differ from system to system. Microsoft Windows provides a similar function, named "DeviceIoControl", in its Win32 API. Background Conventional operating systems can be divided into two layers, userspace and the kernel. Application code such as a text editor resides in userspace, while the underlying facilities of the operating system, such as the network stack, reside in the kernel. Kernel code handles sensitive resources and implements the security and reliability barriers between applications; for this reason, user mode applications are prevented by the operating system from directly accessing kernel resources. Userspace applications typically make requests to the kernel by means of system calls, whose code lies in the kernel layer. A system call usually takes the form of a "system call vector", in which the desired system call is indicated with an index number. For instance, exit() might be system call number 1, and write() number 4. The system call vector is then used to find the desired kernel function for the request. In this way, conventional operating systems typically provide several hundred system calls to the userspace. Thoug
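As a concrete illustration, Python exposes ioctl through its standard-library fcntl module; the sketch below issues the TIOCGWINSZ request code to ask a terminal for its window size, a device-specific operation with no dedicated system call of its own (Unix-only; the fallback branch just handles non-terminal environments):

```python
import fcntl
import struct
import sys
import termios

def terminal_size(fd):
    """Issue the TIOCGWINSZ ioctl: the tty driver fills in a
    struct winsize (rows, cols, xpixel, ypixel) that we unpack."""
    buf = struct.pack("HHHH", 0, 0, 0, 0)            # zeroed buffer for the driver
    buf = fcntl.ioctl(fd, termios.TIOCGWINSZ, buf)   # kernel writes the result back
    rows, cols, _, _ = struct.unpack("HHHH", buf)
    return rows, cols

if sys.stdout.isatty():
    rows, cols = terminal_size(sys.stdout.fileno())
    print(f"terminal is {rows} rows x {cols} columns")
else:
    print("stdout is not a terminal; nothing to query")
```

The request code (here termios.TIOCGWINSZ) selects the operation; the same fcntl.ioctl entry point serves every device-specific request.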
https://en.wikipedia.org/wiki/Fish%20sauce
Fish sauce is a liquid condiment made from fish or krill that have been coated in salt and fermented for up to two years. It is used as a staple seasoning in East Asian cuisine and Southeast Asian cuisine, particularly Myanmar, Cambodia, Laos, Philippines, Thailand, and Vietnam. Some garum-related fish sauces have been used in the West since Roman times. Due to its ability to add a savory umami flavor to dishes, it has been embraced globally by chefs and home cooks. The umami flavor in fish sauce is due to its glutamate content. Fish sauce is used as a seasoning during or after cooking, and as a base in dipping sauces. Soy sauce is regarded by some in the West as a vegetarian alternative to fish sauce, though they are very different in flavor. History Asia Sauces that included fermented fish parts with other ingredients such as meat and soy bean were recorded in China 2,300 years ago. During the Zhou dynasty of ancient China, fish fermented with soybeans and salt was used as a condiment. By the time of the Han dynasty, soy beans were fermented without the fish into soy paste and its by-product soy sauce, with fermented fish-based sauces developing separately into fish sauce. A fish sauce, called kôechiap in Hokkien Chinese, might be the precursor of ketchup. By 50-100 BC, demand for fish sauces and fish pastes in China had fallen drastically, with fermented bean products becoming a major trade commodity. Fish sauce, however, developed massive popularity in Southeast Asia. Food scholars traditionally divide East Asia into two distinct condiment regions, separated by a bean-fish divide: Southeast Asia, mainly using fermented fish (Vietnam, Thailand, Cambodia), and Northeast Asia, using mainly fermented beans (China, Korea, Japan). Fish sauce re-entered China in the 17th and 18th centuries, brought from Vietnam and Cambodia by Chinese traders up the coast of the southern provinces Guangdong and Fujian. Europe Fish sauces were widely used in ancient Mediterra
https://en.wikipedia.org/wiki/Haboush%27s%20theorem
In mathematics, Haboush's theorem, often still referred to as the Mumford conjecture, states that for any semisimple algebraic group G over a field K, and for any linear representation ρ of G on a K-vector space V, given v ≠ 0 in V that is fixed by the action of G, there is a G-invariant polynomial F on V, without constant term, such that F(v) ≠ 0. The polynomial can be taken to be homogeneous, in other words an element of a symmetric power of the dual of V, and if the characteristic is p>0 the degree of the polynomial can be taken to be a power of p. When K has characteristic 0 this was well known; in fact Weyl's theorem on the complete reducibility of the representations of G implies that F can even be taken to be linear. Mumford's conjecture about the extension to prime characteristic p was proved by W. J. Haboush, about a decade after the problem had been posed by David Mumford, in the introduction to the first edition of his book Geometric Invariant Theory. Applications Haboush's theorem can be used to generalize results of geometric invariant theory from characteristic 0, where they were already known, to characteristic p>0. In particular Nagata's earlier results together with Haboush's theorem show that if a reductive group (over an algebraically closed field) acts on a finitely generated algebra then the fixed subalgebra is also finitely generated. Haboush's theorem implies that if G is a reductive algebraic group acting regularly on an affine algebraic variety, then disjoint closed invariant sets X and Y can be separated by an invariant function f (this means that f is 0 on X and 1 on Y). C.S. Seshadri (1977) extended Haboush's theorem to reductive groups over schemes. It follows from the work of Nagata, Haboush, and Popov that the following conditions are equivalent for an affine algebraic group G over a field K: G is reductive (its unipotent radical is trivial). For any non-zero invariant vector in a rational representation of G, there is an invariant homogeneou
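Restated compactly in standard notation (this transcribes the prose statement above; Sym^d denotes the d-th symmetric power):

```latex
% Haboush's theorem: G semisimple over K, \rho \colon G \to \mathrm{GL}(V),
% and v \in V \setminus \{0\} fixed by the action of G. Then
\exists\, d \ge 1,\quad
\exists\, F \in \bigl(\operatorname{Sym}^{d} V^{*}\bigr)^{G}
\quad \text{with} \quad F(v) \neq 0,
% and when \operatorname{char} K = p > 0, the degree d may be taken
% to be a power of p.
```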
https://en.wikipedia.org/wiki/Essential%20manifold
In geometry, an essential manifold is a special type of closed manifold. The notion was first introduced explicitly by Mikhail Gromov. Definition A closed manifold M is called essential if its fundamental class [M] defines a nonzero element in the homology of its fundamental group π, or more precisely in the homology of the corresponding Eilenberg–MacLane space K(π, 1), via the natural homomorphism H_n(M) → H_n(K(π, 1)) induced by the classifying map of M, where n is the dimension of M. Here the fundamental class is taken in homology with integer coefficients if the manifold is orientable, and in coefficients modulo 2, otherwise. Examples All closed surfaces (i.e. 2-dimensional manifolds) are essential with the exception of the 2-sphere S2. Real projective space RPn is essential since the inclusion RPn → RP∞ is injective in homology, where RP∞ = K(Z/2Z, 1) is the Eilenberg–MacLane space of the finite cyclic group of order 2. All compact aspherical manifolds are essential (since being aspherical means the manifold itself is already a K(π, 1)). In particular all compact hyperbolic manifolds are essential. All lens spaces are essential. Properties The connected sum of essential manifolds is essential. Any manifold which admits a map of nonzero degree to an essential manifold is itself essential.
https://en.wikipedia.org/wiki/Sum%20coloring
In graph theory, a sum coloring of a graph is a labeling of its vertices by positive integers, with no two adjacent vertices having equal labels, that minimizes the sum of the labels. The minimum sum that can be achieved is called the chromatic sum of the graph. Chromatic sums and sum coloring were introduced by Supowit in 1987 using non-graph-theoretic terminology, and first studied in graph theoretic terms by Ewa Kubicka (independently of Supowit) in her 1989 doctoral thesis. Obtaining the chromatic sum may require using more distinct labels than the chromatic number of the graph, and even when the chromatic number of a graph is bounded, the number of distinct labels needed to obtain the optimal chromatic sum may be arbitrarily large. Computing the chromatic sum is NP-hard. However it may be computed in linear time for trees and pseudotrees, and in polynomial time for outerplanar graphs. There is a constant-factor approximation algorithm for interval graphs and for bipartite graphs. The interval graph case remains NP-hard. It is the case arising in Supowit's original application in VLSI design, and also has applications in scheduling.
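The claim that an optimal sum coloring may need more labels than the chromatic number can be checked by brute force on a tiny graph. The double-star example below is illustrative, not from the text, and the exponential solver is only meant for such toy inputs:

```python
from itertools import product

def chromatic_sum(n, edges, max_label):
    """Brute-force the minimum label sum over all proper colorings of an
    n-vertex graph that use labels 1..max_label. Exponential in n, so
    suitable only for very small demonstration graphs."""
    best = None
    for labels in product(range(1, max_label + 1), repeat=n):
        if all(labels[u] != labels[v] for u, v in edges):
            s = sum(labels)
            if best is None or s < best:
                best = s
    return best

# Double star: centers 0 and 1 joined by an edge, three leaves on each.
# Its chromatic number is 2, yet restricting to two labels gives a worse
# sum than allowing a third label (centers take 2 and 3, leaves take 1).
double_star = [(0, 1)] + [(0, v) for v in (2, 3, 4)] + [(1, v) for v in (5, 6, 7)]
print(chromatic_sum(8, double_star, 2))  # 12 (best achievable with 2 labels)
print(chromatic_sum(8, double_star, 3))  # 11 (three labels do strictly better)
```

So the chromatic sum of this bipartite tree is 11, attained only with three distinct labels.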
https://en.wikipedia.org/wiki/Ectoderm
The ectoderm is one of the three primary germ layers formed in early embryonic development. It is the outermost layer, and is superficial to the mesoderm (the middle layer) and endoderm (the innermost layer). It emerges and originates from the outer layer of germ cells. The word ectoderm comes from the Greek ektos meaning "outside", and derma meaning "skin". Generally speaking, the ectoderm differentiates to form epithelial and neural tissues (spinal cord, peripheral nerves and brain). This includes the skin, linings of the mouth, anus, nostrils, sweat glands, hair and nails, and tooth enamel. Other types of epithelium are derived from the endoderm. In vertebrate embryos, the ectoderm can be divided into two parts: the dorsal surface ectoderm also known as the external ectoderm, and the neural plate, which invaginates to form the neural tube and neural crest. The surface ectoderm gives rise to most epithelial tissues, and the neural plate gives rise to most neural tissues. For this reason, the neural plate and neural crest are also referred to as the neuroectoderm. History Heinz Christian Pander, a Baltic German–Russian biologist, has been credited with the discovery of the three germ layers that form during embryogenesis. Pander received his doctorate in zoology from the University of Würzburg in 1817. He began his studies in embryology using chicken eggs, which allowed for his discovery of the ectoderm, mesoderm and endoderm. Due to his findings, Pander is sometimes referred to as the "founder of embryology". Pander's work on the early embryo was continued by a Prussian–Estonian biologist named Karl Ernst von Baer. Baer took Pander's concept of the germ layers and, through extensive research of many different species, was able to extend this principle to all vertebrates. Baer also received credit for the discovery of the blastula. Baer published his findings, including his germ layer theory, in a textbook which translates to On the Development of
https://en.wikipedia.org/wiki/Covid-Organics
Covid-Organics (CVO) is an Artemisia-based drink that Andry Rajoelina, president of Madagascar, claims can prevent and cure Coronavirus disease 2019 (COVID-19). The drink is produced from a species under the Artemisia genus from which artemisinin is extracted for malaria treatment. No publicly available clinical trial data supports the safety or efficacy of this drink. Covid-Organics was developed and produced in Madagascar by the Malagasy Institute of Applied Research. Madagascar was the first country to decide to integrate Artemisia into COVID-19 treatment when the NGO Maison de l'Artemisia France contacted numerous African countries during the COVID-19 pandemic. At least one researcher from another part of Africa, Dr. Jérôme Munyangi of the Democratic Republic of the Congo, contributed. Some of the research on Artemisia, led by African scientists, had been carried out in France and Canada. On 20 April 2020, Rajoelina announced in a television broadcast that his country had found a "preventive and curative" cure for COVID-19. Rajoelina publicly sipped from a bottle of Covid-Organics and ordered a nationwide distribution to families. As of 2022, Covid-Organics is not recommended by the WHO. World Health Organization On 20 May 2020, Rajoelina announced on his Twitter account that the World Health Organization (WHO) would sign a confidentiality agreement with Madagascar regarding the formulation of CVO in order to perform clinical observation. On 21 May 2020, WHO director general Tedros Adhanom confirmed his video conference with Rajoelina, and that the WHO would cooperate with Madagascar on research and development of COVID-19 therapy. The WHO does not recommend the use of non-pharmaceutical Artemisia plant matter. The official position of WHO is that it "supports scientifically-proven traditional medicine" and "recognizes that traditional, complementary and alternative medicine has many advantages". Controversy A wide range of scientific criticism followed the launc
https://en.wikipedia.org/wiki/Heat%20loss%20due%20to%20linear%20thermal%20bridging
The heat loss due to linear thermal bridging (H_TB) is a physical quantity used when calculating the energy performance of buildings. It appears in both United Kingdom and Irish methodologies. Calculation The calculation of the heat loss due to linear thermal bridging is relatively simple, given by the formula below: H_TB = y × ΣA_exp In the formula, y = 0.08 if Accredited Construction Details are used, and y = 0.15 otherwise, and ΣA_exp is the sum of all the exposed areas of the building envelope,
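A minimal sketch of the calculation, assuming the SAP-style convention in which the linear-bridging allowance is a fixed factor y times the total exposed envelope area, with y = 0.08 when Accredited Construction Details are used and y = 0.15 otherwise (those y-values are an assumption taken from the UK SAP methodology, not stated explicitly in this excerpt):

```python
def heat_loss_linear_bridging(total_exposed_area_m2: float,
                              accredited_details: bool) -> float:
    """H_TB = y * sum(A_exp), in W/K.

    Assumed y-values (UK SAP convention, not given in the excerpt):
    y = 0.08 W/m2K with Accredited Construction Details, 0.15 otherwise.
    """
    y = 0.08 if accredited_details else 0.15
    return y * total_exposed_area_m2

# A hypothetical dwelling with 250 m2 of exposed envelope area:
print(heat_loss_linear_bridging(250.0, True))   # about 20.0 W/K with details
print(heat_loss_linear_bridging(250.0, False))  # about 37.5 W/K without
```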
https://en.wikipedia.org/wiki/Timeline%20of%20cryptography
Below is a timeline of notable events related to cryptography. B.C. 36th century The Sumerians develop cuneiform writing and the Egyptians develop hieroglyphic writing. 16th century The Phoenicians develop an alphabet 600-500 Hebrew scholars make use of simple monoalphabetic substitution ciphers (such as the Atbash cipher) c. 400 Spartan use of scytale (alleged) c. 400 Herodotus reports use of steganography in reports to Greece from Persia (tattoo on shaved head) 100-1 A.D.- Notable Roman ciphers such as the Caesar cipher. 1–1799 A.D. 801–873 A.D. Cryptanalysis and frequency analysis leading to techniques for breaking monoalphabetic substitution ciphers are developed in A Manuscript on Deciphering Cryptographic Messages by the Muslim mathematician, Al-Kindi (Alkindus), who may have been inspired by textual analysis of the Qur'an. He also covers methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. 1355-1418 Ahmad al-Qalqashandi writes Subh al-a 'sha, a 14-volume encyclopedia including a section on cryptology, attributed to Ibn al-Durayhim (1312–1361). The list of ciphers in this work include both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter. It also included an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word. 1450 The Chinese develop wooden block movable type printing. 1450–1520 The Voynich manuscript, an example of a possibly encoded illustrated book, is written. 1466 Leon Battista Alberti invents polyalphabetic cipher, also first known mechanical cipher machine 1518 Johannes Trithemius' book on cryptology 1553 Bellaso invents Vigenère cipher 1585 Vigenère's book on ciphers 1586 Cryptanalysis used by spymaster Sir Francis Walsingham to implicate Mary, Queen of Scots, in the Babingto
https://en.wikipedia.org/wiki/BlankOn%20Linux
BlankOn Linux is a Debian-based Linux distribution made in Indonesia. This distribution was developed by the BlankOn Development Team with support from the Indonesian Linux Mobilization Foundation (YPLI) since 2004. History In April 2021, BlankOn 12 Beta was released to the public.
https://en.wikipedia.org/wiki/Render%20output%20unit
In computer graphics, the render output unit (ROP) or raster operations pipeline is a hardware component in modern graphics processing units (GPUs) and one of the final steps in the rendering process of modern graphics cards. The pixel pipelines take pixel (each pixel is a dimensionless point) and texel information and process it, via specific matrix and vector operations, into a final pixel or depth value; this process is called rasterization. Thus, ROPs control antialiasing, when more than one sample is merged into one pixel. The ROPs perform the transactions between the relevant buffers in the local memory – this includes writing or reading values, as well as blending them together. Dedicated antialiasing hardware used to perform hardware-based antialiasing methods like MSAA is contained in ROPs. All data rendered has to travel through the ROP in order to be written to the framebuffer, from there it can be transmitted to the display. Therefore, the ROP is where the GPU's output is assembled into a bitmapped image ready for display. Historically the number of ROPs, texture mapping units (TMUs), and shader processing units/stream processors have been equal. However, from 2004, several GPUs have decoupled these areas to allow optimum transistor allocation for application workload and available memory performance. As the trend continues, it is expected that graphics processors will continue to decouple the various parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip makers to build a modular line-up, where the top-end GPUs are essentially using the same logic as the low-end products. See also Graphics pipeline Rendering (computer graphics) Execution unit
https://en.wikipedia.org/wiki/Map%20%28mathematics%29
In mathematics, a map or mapping is a function in its general sense. These terms may have originated from the process of making a geographical map: mapping the Earth's surface to a sheet of paper. The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial. In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory. Maps as functions In many branches of mathematics, the term map is used to mean a function, sometimes with a specific property of particular importance to that branch. For instance, a "map" is a "continuous function" in topology, a "linear transformation" in linear algebra, etc. Some authors, such as Serge Lang, use "function" only to refer to maps in which the codomain is a set of numbers (i.e. a subset of R or C), and reserve the term mapping for more general functions. Maps of certain kinds are the subjects of many important theories. These include homomorphisms in abstract algebra, isometries in geometry, operators in analysis and representations in group theory. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. A partial map is a partial function. Related terms such as domain, codomain, injective, and continuous can be applied equally to maps and functions, with the same meaning. All these usages can be applied to "maps" as general functions or as functions with special properties. As morphisms In category theory, "map" is often used as a synonym for "morphism" or "arrow", which is a structure-respecting function and thus may imply more structure than "function" does. For example, a morphism in a concrete category
https://en.wikipedia.org/wiki/Coliform%20index
The coliform index is a rating of the purity of water based on a count of fecal bacteria. It is one of many tests done to assure sufficient water quality. Coliform bacteria are microorganisms that primarily originate in the intestines of warm-blooded animals. By testing for coliforms, especially the well known Escherichia coli (E. coli), which is a thermotolerant coliform, one can determine if the water has possibly been exposed to fecal contamination; that is, whether it has come in contact with human or animal feces. It is important to know this because many disease-causing organisms are transferred from human and animal feces to water, from where they can be ingested by people and infect them. Water that has been contaminated by feces usually contains pathogenic bacteria, which can cause disease. Some types of coliforms cause disease, but the coliform index is primarily used to judge if other types of pathogenic bacteria are likely to be present in the water. The coliform index is used because it is difficult to test for pathogenic bacteria directly. There are many different types of disease-causing bacteria, and they are usually present in low numbers which do not always show up in tests. Thermotolerant coliforms are present in higher numbers than individual types of pathogenic bacteria and they can be tested relatively easily. However, the coliform index is far from perfect. Thermotolerant coliforms can survive in water on their own, especially in tropical regions, so they do not always indicate fecal contamination. Furthermore, they do not give a good indication of how many pathogenic bacteria are present in the water, and they give no idea at all of whether there are pathogenic viruses or protozoa which also cause diseases and are rarely tested for. Therefore, it does not always give accurate or useful results regarding the purity of water. See also Indicator organism Bacteriological water analysis
https://en.wikipedia.org/wiki/Negative%20energy
Negative energy is a concept used in physics to explain the nature of certain fields, including the gravitational field and various quantum field effects. Gravitational energy Gravitational energy, or gravitational potential energy, is the potential energy a massive object has because it is within a gravitational field. In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. As two objects move apart and the distance between them approaches infinity, the gravitational force between them approaches zero from the positive side of the real number line and the gravitational potential approaches zero from the negative side. Conversely, as two massive objects move towards each other, the motion accelerates under gravity, causing an increase in the (positive) kinetic energy of the system; in order to conserve the total energy, the gravitational potential energy of the system decreases by the same amount, becoming more negative. A universe in which positive energy dominates will eventually collapse in a "Big Crunch", while an "open" universe in which negative energy dominates will either expand indefinitely or eventually disintegrate in a "big rip". In the zero-energy universe model ("flat" or "Euclidean"), the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly cancelled out by its negative energy in the form of gravity. It is unclear which, if any, of these models accurately describes the real universe. Black hole ergosphere For a classically rotating black hole, the rotation creates an ergosphere outside the event horizon, in which spacetime itself begins to rotate, in a phenomenon known as frame-dragging. Since the ergosphere is outside the event horizon, particles can escape from it. Within the ergosphere, a pa
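The sign convention described above can be written out explicitly in the standard Newtonian form, with G the gravitational constant and masses m1, m2 a distance r apart:

```latex
U(r) = -\,\frac{G\, m_1 m_2}{r},
\qquad U(r) \to 0^{-} \ \text{as } r \to \infty,
\qquad \lvert F(r) \rvert = \frac{G\, m_1 m_2}{r^{2}} \to 0^{+} \ \text{as } r \to \infty .
```

With this choice of zero point at infinite separation, the potential energy is negative at every finite separation, exactly as the prose states.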
https://en.wikipedia.org/wiki/Marcus%20Garvey
Marcus Mosiah Garvey Jr. (17 August 1887 – 10 June 1940) was a Jamaican political activist. He was the founder and first President-General of the Universal Negro Improvement Association and African Communities League (UNIA-ACL, commonly known as UNIA), through which he declared himself Provisional President of Africa. Ideologically a black nationalist and Pan-Africanist, his ideas came to be known as Garveyism. Garvey was born into a moderately prosperous Afro-Jamaican family in Saint Ann's Bay and was apprenticed into the print trade as a teenager. Working in Kingston, he got involved in trade unionism before living briefly in Costa Rica, Panama, and England. On returning to Jamaica, he founded the UNIA in 1914. In 1916, he moved to the United States and established a UNIA branch in New York City's Harlem district. Emphasising unity between Africans and the African diaspora, he campaigned for an end to European colonial rule in Africa and advocated the political unification of the continent. He envisioned a unified Africa as a one-party state, governed by himself, that would enact laws to ensure black racial purity. Although he never visited the continent, he was committed to the Back-to-Africa movement, arguing that part of the diaspora should migrate there. Garveyist ideas became increasingly popular and the UNIA grew in membership. His black separatist views—and his relationship with white racists like the Ku Klux Klan (KKK) in the interest of advancing their shared goal of racial separatism—caused a division between Garvey and other prominent African-American civil rights activists such as W. E. B. Du Bois who promoted racial integration. Believing that black people needed to be financially independent from white-dominated societies, Garvey launched various businesses in the U.S., including the Negro Factories Corporation and Negro World newspaper. In 1919, he became President of the Black Star Line shipping and passenger company, designed to forge a link betwe
https://en.wikipedia.org/wiki/Azoic%20hypothesis
The Azoic hypothesis (sometimes referred to as the Abyssus theory) is a superseded scientific theory proposed by Edward Forbes in 1843, stating that the abundance and variety of marine life decreased with increasing depth and, by extrapolation of his own measurements, Forbes calculated that marine life would cease to exist below . Overview The theory was based upon Forbes' findings aboard HMS Beacon, a survey ship to which he had been appointed naturalist by the ship's commander Captain Thomas Graves. With Forbes aboard, HMS Beacon set sail around the Aegean Sea on 17 April 1841, from Malta. It was at this point that Forbes began to take dredging samples at various depths of the ocean; he observed that samples from greater depths displayed a narrower diversity of creatures which were generally smaller in size. Forbes reported his findings from the Aegean Sea in his 1843 report to the British Association entitled Report on the Mollusca and Radiata of the Aegean Sea. His findings were widely accepted by the scientific community and were bolstered by other scientific figures of the time. David Page (1814–1879), a respected geologist, reinforced the theory by stating that "according to experiment, water at the depth of 1000 feet is compressed th of its own bulk; and at this rate of compression we know that at great depths animal and vegetable life as known to us cannot possibly exist – the extreme depressions of seas being thus, like the extreme elevations of the land, barren and lifeless solitudes." The theory was not disproven until the late 1860s when biologist Michael Sars, Professor of Zoology at Christiania (now Oslo) University, discovered life at a depth greater than 300 fathoms. Sars listed 427 animal species which had been found along the Norwegian coast at a depth of 450 fathoms, and gave a description of a crinoid Rhizocrinus lofotensis which his son had recovered from a depth of 300 fathoms in Lofoten. In 1869, Charles Wyville Thomson dredged marine life from a de
https://en.wikipedia.org/wiki/Naked-back%20knifefish
The naked-back knifefishes are a family (Gymnotidae) of knifefishes found only in fresh waters of Central America and South America. All have organs adapted to electroreception. The family has about 43 valid species in two genera. These fish are nocturnal and mostly occur in quiet waters from deep rivers to swamps. In strongly flowing waters, they may bury themselves. Physical characteristics Like the other gymnotiforms, gymnotids have classic knifefish bodies. The body is long and eel-like, the dorsal fin and pelvic fins are absent, and the anal fin is extremely long and used for movement. The members of Electrophorus, the electric eels, produce both strong (up to 600 volts) and weak (<1 V) electric discharges, for use in predation and communication/navigation, respectively. The electric eel is the largest of the gymnotiform fishes, growing up to more than length. Species of Gymnotus range from about in total length. These knifefishes also use electricity to assist in their movement and navigation in the water due to their limited vision. Genera According to FishBase, there are 43 species in two genera: Electrophorus (3 species) Gymnotus (40 species) Historically, Electrophorus was in a separate family Electrophoridae and ITIS continues to do so, but this is contradicted by available evidence and not followed by other authorities.
https://en.wikipedia.org/wiki/J%C3%BCrgen%20Gmehling
Jürgen Gmehling (born January 13, 1946, in Duisburg) is a retired German professor of technical and industrial chemistry at the Carl von Ossietzky University of Oldenburg. Biography His career started with an apprenticeship as a laboratory assistant at the Duisburg copper works before he studied chemical engineering at the engineering school in Essen and then chemistry in Dortmund and Clausthal. He received his diploma from the University of Dortmund in 1970 and his PhD (Dr. rer. nat., inorganic chemistry) in 1973. After this he worked as a scientific coworker in Dortmund before he became a private lecturer and, after his habilitation, an assistant professor. Gmehling was appointed a full professor for technical chemistry at the University of Oldenburg in 1989 and retired in 2011. Fields of research Gmehling's main focus is process development. This includes the development of software for process synthesis and process simulation as well as measurement, collection, and estimation of thermophysical properties of pure components and component mixtures. The following list summarizes fields of his scientific work but is in no way complete. Measurements Phase equilibrium data (vapor-liquid equilibria, liquid-liquid equilibria, solid–liquid equilibria, gas solubilities, heats of mixing, activity coefficients and more) Data collection Gmehling began in the 1970s with the systematic evaluation of the scientific literature, aiming to build a data bank for vapor-liquid equilibria. These data were needed for the development of a new method for the prediction of activity coefficients named UNIFAC. This data bank is still named the Dortmund Data Bank. Development of estimation and correlation models Gmehling developed with colleagues models for the estimation of several thermodynamic and thermophysical properties: Activity coefficient models like UNIFAC (see also group contribution method) and extensions. For the further development of these widely used methods Gmeh
https://en.wikipedia.org/wiki/Ahlfors%20measure%20conjecture
In mathematics, the Ahlfors conjecture, now a theorem, states that the limit set of a finitely generated Kleinian group is either the whole Riemann sphere, or has measure 0. The conjecture was introduced by Ahlfors, who proved it in the case that the Kleinian group has a fundamental domain with a finite number of sides. proved the Ahlfors conjecture for topologically tame groups, by showing that a topologically tame Kleinian group is geometrically tame, so the Ahlfors conjecture follows from Marden's tameness conjecture that hyperbolic 3-manifolds with finitely generated fundamental groups are topologically tame (homeomorphic to the interior of compact 3-manifolds). This latter conjecture was proved, independently, by and by . also showed that in the case when the limit set is the whole sphere, the action of the Kleinian group on the limit set is ergodic.
https://en.wikipedia.org/wiki/Nils%20Dencker
Nils Jonas Dencker (born 14 December 1953 in Lund) is a Swedish mathematician. Dencker earned his doctorate from Lund University in 1981 under supervision of Lars Hörmander, and is a professor of mathematics at Lund University. From 1981 until 1983, he was the C. L. E. Moore instructor at the Massachusetts Institute of Technology. He won the 2005 Clay Research Award for his proof of the Nirenberg–Treves conjecture. Dencker has been a member of the Royal Swedish Academy of Sciences since 2008. He was an invited speaker at the European Congress of Mathematics in Amsterdam in 2008, and at the International Congress of Mathematicians in Hyderabad, India, in 2010. In 2012 he became a fellow of the American Mathematical Society.
https://en.wikipedia.org/wiki/Proprietary%20firmware
Proprietary firmware is any firmware that has had its use, private modification, copying, or republishing restricted by the producer. Proprietors may enforce restrictions by technical means, such as restricting access to source code or restricting firmware replacement (by withholding the complete tooling that may be necessary to recompile and replace the firmware), or by legal means, such as copyright and patents. Alternatives to proprietary firmware may be free (libre) or open-source. Distribution Proprietary firmware (and especially the microcode) is much more difficult to avoid than proprietary software or even proprietary device drivers, because the firmware is usually very specific to the manufacturer of each device (often being unique for each model), and the programming documentation and complete specifications that would be necessary to create a replacement are often withheld by the hardware manufacturer. Many open-source operating systems reluctantly choose to include proprietary firmware files in their distributions simply to make their device drivers work, because manufacturers try to save money by removing flash memory or EEPROM from their devices, requiring the operating system to upload the firmware each time the device is used. However, in order to do so, the operating system still has to have distribution rights for this proprietary microcode. Security concerns Proprietary firmware poses a significant security risk to the user because of the direct memory access (DMA) architecture of modern computers and the potential for DMA attacks. Theo de Raadt of OpenBSD suggests that wireless firmware is kept proprietary because of poor design quality and firmware defects. Mark Shuttleworth of Ubuntu suggests that "it's reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies". The security and r
https://en.wikipedia.org/wiki/Fermentation
Fermentation is a metabolic process that produces chemical changes in organic substances through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage. The science of fermentation is known as zymology. In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically. Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans. Industrial fermentation is a broader term used for the process of applying microbes for the large-scale production of chemicals, biofuels, enzymes, proteins and pharmaceuticals. Definitions and etymology Below are some definitions of fermentation ranging from informal, general usages to more scientific definitions. Preservation methods for food via microorganisms (general use). Any large-scale microbial process occurring with or without air (common definition used in industry, also known as industrial fermentation). Any process that produces alcoholic beverages or acidic dairy products (general use). Any energy-releasing metabolic process that takes place only under anaerobic conditions (somewhat scientific). Any metabolic process that releases energy from a sugar or other organic molecule, does not require oxygen or an electron transport system, and uses an organic molecule as the final electron acceptor (most scientific). The word "ferment" is derived from the
https://en.wikipedia.org/wiki/TuVox
TuVox is a company that produces VXML-based telephone speech-recognition applications to replace DTMF touch-tone systems for their clients. History TuVox was founded in 2001 by Steven S. Pollock and Ashok Khosla, formerly of Apple Computer Corporation and Claris Corporation. Since then, TuVox has grown to over 150 employees and has US offices in Cupertino, California and Boca Raton, Florida as well as international offices in London, Vancouver and Sydney. In 2005, TuVox acquired the customers and hosting facilities of Net-By-Tel. In 2007, the company raised $20m for its speech recognition, and phone menu software. On July 22, 2010, West Interactive — a subsidiary of West Corporation — announced its acquisition of TuVox. Customers TuVox clients include 1-800-Flowers.com, AMC Entertainment, American Airlines, British Airways, M&T Bank, Canon Inc., Gateway, Inc., Motorola, Progress Energy Inc., Telecom New Zealand, Time, Inc., BECU, Virgin America and USAA.
https://en.wikipedia.org/wiki/Communications%20in%20Statistics
Communications in Statistics is a peer-reviewed scientific journal that publishes papers related to statistics. It is published by Taylor & Francis in three series, Theory and Methods, Simulation and Computation, and Case Studies, Data Analysis and Applications. Communications in Statistics – Theory and Methods This series started publishing in 1970 and publishes papers related to statistical theory and methods. It publishes 20 issues each year. Based on Web of Science, the five most cited papers in the journal are: Kulldorff M. A spatial scan statistic, 1997, 982 cites. Holland PW, Welsch RE. Robust regression using iteratively reweighted least-squares, 1977, 526 cites. Sugiura N. Further analysts of the data by Akaike's information criterion and the finite corrections, 1978, 490 cites. Hosmer DW, Lemeshow S. Goodness of fit tests for the multiple logistic regression model, 1980, 401 cites. Iman RL, Conover WJ. Small sample sensitivity analysis techniques for computer models. with an application to risk assessment, 1980, 312 cites. Abstracting and indexing Communications in Statistics – Theory and Methods is indexed in the following services: Current Index to Statistics Science Citation Index Expanded Zentralblatt MATH Communications in Statistics – Simulation and Computation This series started publishing in 1972 and publishes papers related to computational statistics. It publishes 6 issues each year. Based on Web of Science, the five most cited papers in the journal are: Iman RL, Conover WJ. A distribution-free approach to inducing rank correlation among input variables, 1982, 519 cites. Wolfinger R. Covariance structure selection in general mixed models, 1993, 248 cites. Helland IS, On the structure of partial least squares regression, 1988, 246 cites. McCulloch JH. Simple consistent estimators of stable distribution parameters, 1986, 191 cites. Sullivan Pepe M, Anderson GL. A cautionary note on inference for marginal regression models wi
https://en.wikipedia.org/wiki/Locallife
Locallife is an on-line directory that represents local communities in multiple countries. Locallife hosts local maps & directions, special coupon offers and hyperlinks to business websites. History and spread Locallife began in the United Kingdom in August 1999. The company continued to expand throughout the UK both physically and on the Internet. Locallife launched the first franchise in autumn of 2004. In 2008, the first location for Locallife USA was established at SkySong in Scottsdale, Arizona. Locallife spread across the United States, France, and New Zealand. In the United States, Locallife directories represent over 12 million businesses. Locallife hosts approximately 320 local community online directories in the United Kingdom, 380 in France, and more in New Zealand. Locallife for users Locallife is a network of sites that constitutes a guide for both visitors and residents of a particular area. On local sites, users browse menus of restaurants, retailers, contractors, professional services, details of schools, and much more. The individual Locallife sites are identical in design and layout. Locallife websites are organized into nine main categories, which are color-coded for clarity and ease of use.
https://en.wikipedia.org/wiki/Jet%20Propulsion%20Laboratory%20Development%20Ephemeris
Jet Propulsion Laboratory Development Ephemeris (abbreviated JPL DE(number), or simply DE(number)) designates one of a series of mathematical models of the Solar System produced at the Jet Propulsion Laboratory in Pasadena, California, for use in spacecraft navigation and astronomy. The models consist of numeric representations of positions, velocities and accelerations of major Solar System bodies, tabulated at equally spaced intervals of time, covering a specified span of years. Barycentric rectangular coordinates of the Sun, eight major planets and Pluto, and geocentric coordinates of the Moon are tabulated. History There have been many versions of the JPL DE, from the 1960s through the present, in support of both robotic and crewed spacecraft missions. Available documentation is limited, but it is known that DE69 was announced in 1969 to be the third release of the JPL Ephemeris Tapes, and was a special purpose, short-duration ephemeris. The then-current JPL Export Ephemeris was DE19. These early releases were distributed on magnetic tape. In the days before personal computers, computers were large and expensive, and numerical integrations such as these were run by large organizations with ample resources. The JPL ephemerides prior to DE405 were integrated on a Univac mainframe in double precision. For instance, DE102, which was created in 1977, took six million steps and ran for nine days on a Univac 1100/81. DE405 was integrated on a DEC Alpha in quadruple precision. In the 1970s and early 1980s, much work was done in the astronomical community to update the astronomical almanacs from the theoretical work of the 1890s to modern, relativistic theory. From 1975 through 1982, six ephemerides were produced at JPL using the modern techniques of least-squares adjustment of numerically-integrated output to high precision data: DE96 in Nov. 1975, DE102 in Sep. 1977, DE111 in May 1980, DE118 in Sep. 1981, and DE200 in 1982. DE102 was the first numerically integrated so-called
https://en.wikipedia.org/wiki/Fock%20matrix
In the Hartree–Fock method of quantum mechanics, the Fock matrix is a matrix approximating the single-electron energy operator of a given quantum system in a given set of basis vectors. It is most often formed in computational chemistry when attempting to solve the Roothaan equations for an atomic or molecular system. The Fock matrix is actually an approximation to the true Hamiltonian operator of the quantum system. It includes the effects of electron-electron repulsion only in an average way. Because the Fock operator is a one-electron operator, it does not include the electron correlation energy. The Fock matrix is defined by the Fock operator. In its general form the Fock operator writes F = h + Σ_{i=1..N} (J_i − K_i), where i runs over the total N spin orbitals. In the closed-shell case, it can be simplified by considering only the spatial orbitals, noting that the Coulomb terms are duplicated and the exchange terms are null between different spins. For the restricted case which assumes closed-shell orbitals and single-determinantal wavefunctions, the Fock operator for the i-th electron is given by F(i) = h(i) + Σ_{j=1..n/2} [2J_j(i) − K_j(i)], where: F(i) is the Fock operator for the i-th electron in the system, h(i) is the one-electron Hamiltonian for the i-th electron, n is the number of electrons and n/2 is the number of occupied orbitals in the closed-shell system, J_j(i) is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system, K_j(i) is the exchange operator, defining the quantum effect produced by exchanging two electrons. The Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron. For systems with unpaired electrons there are many choices of Fock matrices. See also Hartree–Fock method Unrestricted Hartree–Fock Restricted open-shell Hartree–Fock
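The closed-shell expression above can be illustrated numerically. The following NumPy sketch is not from the article: it uses random toy integrals (no real molecule) and the standard restricted AO-basis contraction F_{μν} = h_{μν} + Σ_{λσ} P_{λσ}[(μν|σλ) − ½(μλ|σν)], where P = 2 C_occ C_occᵀ is the closed-shell density matrix; all names and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nocc = 4, 2  # toy basis size and number of doubly occupied orbitals

# Toy symmetric one-electron (core) Hamiltonian h.
h = rng.standard_normal((n, n))
h = 0.5 * (h + h.T)

# Toy two-electron integrals (mu nu | lam sig), symmetrized to the usual
# 8-fold permutational symmetry of real orbitals.
eri = rng.standard_normal((n, n, n, n))
eri = 0.5 * (eri + eri.transpose(1, 0, 2, 3))  # (mu nu| = (nu mu|
eri = 0.5 * (eri + eri.transpose(0, 1, 3, 2))  # |lam sig) = |sig lam)
eri = 0.5 * (eri + eri.transpose(2, 3, 0, 1))  # bra-ket exchange

# Closed-shell density matrix P = 2 C_occ C_occ^T from random orthonormal orbitals.
C = np.linalg.qr(rng.standard_normal((n, n)))[0]
P = 2.0 * C[:, :nocc] @ C[:, :nocc].T

# Coulomb and exchange contractions, then F = h + J - K/2
# (the factors 2 and 1/2 reproduce the 2J - K of the operator above).
J = np.einsum('ls,mnsl->mn', P, eri)
K = np.einsum('ls,mlsn->mn', P, eri)
F = h + J - 0.5 * K
print(np.allclose(F, F.T))  # the Fock matrix is symmetric
```

In a real Hartree–Fock calculation F depends on P, which in turn comes from the eigenvectors of F, so this build step is iterated to self-consistency; the sketch shows a single build only.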
https://en.wikipedia.org/wiki/Quantum%20Computation%20Language
Quantum Computation Language (QCL) is one of the first implemented quantum programming languages. The most important feature of QCL is the support for user-defined operators and functions. Its syntax resembles the syntax of the C programming language and its classical data types are similar to primitive data types in C. One can combine classical code and quantum code in the same program. The language was created to explore programming concepts for quantum computers. The QCL library provides standard quantum operators used in quantum algorithms such as: Controlled-not with many target qubits, Hadamard operation on many qubits, Phase and controlled phase. Quantum algorithms for addition, multiplication and exponentiation with binary constants (all modulo n) The quantum Fourier transform Syntax Data types Quantum - qureg, quvoid, quconst, quscratch, qucond Classical - int, real, complex, boolean, string, vector, matrix, tensor Function types qufunct - Pseudo-classic operators. Can only change the permutation of basis states. operator - General unitary operators. Can change the amplitude. procedure - Can call measure, print, and dump inside this function. This function is non-invertible. Built-in functions Quantum qufunct - Fanout, Swap, Perm2, Perm4, Perm8, Not, CNot operator - Matrix2x2, Matrix4x4, Matrix8x8, Rot, Mix, H, CPhase, SqrtNot, X, Y, Z, S, T procedure - measure, dump, reset Classical Arithmetic - sin, cos, tan, log, sqrt, ... Complex - Re, Im, conj Examples The basic built-in quantum data type in QCL is the qureg (quantum register). It can be interpreted as an array of qubits (quantum bits).

qureg x1[2]; // 2-qubit quantum register x1
qureg x2[2]; // 2-qubit quantum register x2
H(x1);       // Hadamard operation on x1
H(x2[1]);    // Hadamard operation on the second qubit of the register x2

Since the qcl interpreter uses the qlib simulation library, it is possible to observe the internal state of the quantum machine during execution of the quantum progra
https://en.wikipedia.org/wiki/SimPy
SimPy is a process-based discrete-event simulation framework based on standard Python. It enables users to model active components such as customers, vehicles, or agents as simple Python generator functions. SimPy is released as open source software under the MIT License. The first version was released in December 2002. Its event dispatcher is based on Python's generators and can be used for asynchronous networking or to implement multi-agent systems (with both simulated and real communication). Simulations can be performed “as fast as possible”, in real time (wall clock time) or by manually stepping through the events. Though it is theoretically possible to do continuous simulations with SimPy, it lacks features to support them. However, for simulations with a fixed step size where processes don't interact with each other or with shared resources, a simple while loop is sufficient. Additionally, SimPy provides different types of shared resources to simulate congestion points that have limited capacity, such as servers, checkout counters, and tunnels. In version 3.1 and above, SimPy offers monitoring capabilities to assist in collecting statistics about processes and resources. SimPy 3.0 requires Python 3, while SimPy 4.0 requires Python 3.6+. The SimPy distribution contains tutorials, documentation, and examples. Example The following is a SimPy simulation showing a clock process that prints the current simulation time at each step:

>>> import simpy
>>>
>>> def clock(env, name, tick):
...     while True:
...         print(name, env.now)
...         yield env.timeout(tick)
...
>>> env = simpy.Environment()
>>> env.process(clock(env, 'fast', 0.5))
<Process(clock) object at 0x...>
>>> env.process(clock(env, 'slow', 1))
<Process(clock) object at 0x...>
>>> env.run(until=2)
fast 0
slow 0
fast 0.5
slow 1
fast 1.0
fast 1.5
https://en.wikipedia.org/wiki/Allen%20Brain%20Atlas
The Allen Mouse and Human Brain Atlases are projects within the Allen Institute for Brain Science which seek to combine genomics with neuroanatomy by creating gene expression maps for the mouse and human brain. They were initiated in September 2003 with a $100 million donation from Paul G. Allen and the first atlas went public in September 2006. To date, seven brain atlases have been published: Mouse Brain Atlas, Human Brain Atlas, Developing Mouse Brain Atlas, Developing Human Brain Atlas, Mouse Connectivity Atlas, Non-Human Primate Atlas, and Mouse Spinal Cord Atlas. There are also three related projects with data banks: Glioblastoma, Mouse Diversity, and Sleep. It is the hope of the Allen Institute that their findings will help advance various fields of science, especially those surrounding the understanding of neurobiological diseases. The atlases are free and available for public use online. History In 2001, Paul Allen gathered a group of scientists, including James Watson and Steven Pinker, to discuss the future of neuroscience and what could be done to enhance neuroscience research (Jones 2009). During these meetings David Anderson from the California Institute of Technology proposed the idea that a three-dimensional atlas of gene expression in the mouse brain would be of great use to the neuroscience community. The project was set in motion in 2003 with a 100 million dollar donation by Allen through the Allen Institute for Brain Science. The project used a technique for mapping gene expression developed by Gregor Eichele and colleagues at the Max Planck Institute for Biophysical Chemistry in Goettingen, Germany. The technique uses colorimetric in situ hybridization to map gene expression. The project set a 3-year goal of finishing the project and making it available to the public. An initial release of the first atlas, the mouse brain atlas, occurred in December 2004. Subsequently, more data for this atlas was released in stages. The final genome-wide data set
https://en.wikipedia.org/wiki/Faculty%20of%20Mathematics%2C%20University%20of%20Cambridge
The Faculty of Mathematics at the University of Cambridge comprises the Department of Pure Mathematics and Mathematical Statistics (DPMMS) and the Department of Applied Mathematics and Theoretical Physics (DAMTP). It is housed in the Centre for Mathematical Sciences site in West Cambridge, alongside the Isaac Newton Institute. Many distinguished mathematicians have been members of the faculty. Some current members DPMMS Béla Bollobás John Coates Thomas Forster Timothy Gowers Peter Johnstone Imre Leader Gabriel Paternain Statistical Laboratory John Aston Geoffrey Grimmett Frank Kelly Ioannis Kontoyiannis Richard Nickl James Norris Richard Samworth David Spiegelhalter Richard Weber DAMTP Gary Gibbons Julia Gog, professor of mathematical biology Raymond E. Goldstein Rich Kerswell Paul Linden Michael Green Peter Haynes, fluid dynamicist John Hinch, fluid dynamicist, retired 2014 Richard Jozsa Hugh Osborn John Papaloizou Malcolm Perry David Tong, theoretical physicist Paul Townsend Grae Worster, editor for the Journal of Fluid Mechanics Mihaela van der Schaar Carola-Bibiane Schönlieb Pure Mathematics and Mathematical Statistics The Department of Pure Mathematics and Mathematical Statistics (DPMMS) was created in 1964 under the headship of Sir William Hodge. It was housed in a converted warehouse at 16 Mill Lane, adjacent to its sister department DAMTP, until its move around 2000 to the present Centre for Mathematical Sciences where it occupies Pavilions C, D, and E. Heads of department 1964–1969 W. V. D. Hodge 1969–1984 J. W. S. Cassels 1984–1991 D. J. H. Garling 1991–1997 John H. Coates 1997–2002 W. B. R. Lickorish 2002–2007 Geoffrey Grimmett 2007–2014 Martin Hyland 2014–2018 Gabriel Paternain 2018–2023 James Norris 2023- Ivan Smith Statistical Laboratory The Statistical Laboratory is a Sub-Department of DPMMS. It was created in 1947 with accommodation in a "temporary hut", and was established on 21 March 1953 within the Faculty of Mathematics. It moved in 19
https://en.wikipedia.org/wiki/Nucleon%20spin%20structure
Nucleon spin structure describes the partonic structure of nucleon (proton and neutron) intrinsic angular momentum (spin). The key question is how the nucleon's spin, whose magnitude is 1/2ħ, is carried by its constituent partons (quarks and gluons). It was originally expected before the 1980s that quarks carry all of the nucleon spin, but later experiments contradicted this expectation. In the late 1980s, the European Muon Collaboration (EMC) conducted experiments that suggested the spin carried by quarks is not sufficient to account for the total spin of the nucleons. This finding astonished particle physicists at that time, and the problem of where the missing spin lies is sometimes referred to as the proton spin crisis. Experimental research on these topics has been continued by the Spin Muon Collaboration (SMC) and the COMPASS experiment at CERN, experiments E142, E143, E154 and E155 at SLAC, HERMES at DESY, experiments at JLab and RHIC, and others. Global analysis of data from all major experiments confirmed the original EMC discovery and showed that the quark spin contributes only about 30% of the total spin of the nucleon. A major topic of modern particle physics is to find the missing angular momentum, which is believed to be carried either by gluon spin, or by gluon and quark orbital angular momentum. This fact is expressed by the sum rule 1/2 = (1/2)ΔΣ + ΔG + L_q + L_g, where (1/2)ΔΣ is the contribution of the quark spins, ΔG is the gluon spin contribution, and L_q and L_g are the quark and gluon orbital angular momenta. The gluon spin components are being measured by many experiments. Quark and gluon angular momenta will be studied by measuring so-called generalized parton distributions (GPD) through deeply virtual Compton scattering (DVCS) experiments, conducted at CERN (COMPASS) and at Jefferson Lab, among other laboratories. External links Polarized colliders may prove to be the key in mapping out proton spin structure Dr. Deshpande research webpage Spin Muon Collaboration HERMES COMPASS The Spin Structure of the Nucleon - Status and Recent Results
https://en.wikipedia.org/wiki/Pulsar%20timing%20array
A pulsar timing array (PTA) is a set of galactic pulsars that is monitored and analysed to search for correlated signatures in the pulse arrival times on Earth. As such, they are galactic-sized detectors. Although there are many applications for pulsar timing arrays, the best known is the use of an array of millisecond pulsars to detect and analyse long-wavelength (i.e., low-frequency) gravitational wave background. Such a detection would entail a detailed measurement of a gravitational wave (GW) signature, like the GW-induced quadrupolar correlation between arrival times of pulses emitted by different millisecond pulsar pairings that depends only on the pairings' angular separations in the sky. Larger arrays may be better for GW detection because the quadrupolar spatial correlations induced by GWs can be better sampled by many more pulsar pairings. With such a GW detection, millisecond pulsar timing arrays would open a new low-frequency window in gravitational-wave astronomy to peer into potential ancient astrophysical sources and early Universe processes, inaccessible by any other means. Overview The proposal to use pulsars as gravitational wave (GW) detectors was originally made by Sazhin and Detweiler in the late 1970s. The idea is to treat the solar system barycenter and a galactic pulsar as opposite ends of an imaginary arm in space. The pulsar acts as the reference clock at one end of the arm sending out regular signals which are monitored by an observer on Earth. The effect of a passing long-wavelength GW would be to perturb the galactic spacetime and cause a small change in the observed time of arrival of the pulses. In 1983, Hellings and Downs extended this idea to an array of pulsars and found that a stochastic background of GWs would produce a distinctive GW signature: a quadrupolar spatial correlation between arrival times of pulses emitted by different millisecond pulsar pairings that depends only on the pairing's angular separation in the sky as vi
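The quadrupolar spatial correlation described above is the Hellings–Downs curve. As an illustrative sketch (the code is not from the article), the expected correlation between timing residuals of two distinct pulsars separated by angle θ, for an isotropic stochastic background, is Γ(θ) = 1/2 + (3x/2)·ln x − x/4 with x = (1 − cos θ)/2:

```python
import math

def hellings_downs(theta):
    """Expected timing-residual correlation for two distinct pulsars
    separated by angle theta (radians), isotropic GW background."""
    x = (1.0 - math.cos(theta)) / 2.0
    if x == 0.0:
        return 0.5  # limit as theta -> 0 for two distinct pulsars
    return 0.5 + 1.5 * x * math.log(x) - 0.25 * x

# Nearby pairs correlate at ~0.5, antipodal pairs at 0.25, and the
# curve dips negative with its minimum near ~82 degrees of separation.
for deg in (0, 60, 90, 120, 180):
    print(deg, round(hellings_downs(math.radians(deg)), 4))
```

Searching for this specific angular pattern across many pulsar pairs, rather than for a signal in any single pulsar, is what distinguishes a genuine GW background from pulsar-intrinsic noise or clock errors.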
https://en.wikipedia.org/wiki/Atriplex%20semibaccata
Atriplex semibaccata, commonly known as Australian saltbush, berry saltbush, or creeping saltbush, is a species of flowering plant in the family Amaranthaceae and is endemic to Australia. It is a perennial herb native to Western Australia, South Australia, Queensland and New South Wales, but has been introduced into other states and to overseas countries. It flowers and fruits in spring, and propagates from seed when the fruit splits open. This species of saltbush is adapted to inconsistent rainfall, temperature and humidity extremes and to poor soil. It is used for rehabilitation, medicine, as a cover crop and for fodder. Its introduction to other countries has had an environmental and economic impact on them. Description Atriplex semibaccata is a taproot perennial herb, that has prostrated and decumbent characteristics. Native to Australia and widespread in all mainland Australian states, A. semibaccata thrives in harsh and saline conditions. A. semibaccata is often mat-forming or semi-erect and can grow 40–80 cm tall, spanning a diameter of 1.5-2m. Its slender branches arise from a woody taproot. Leaves are white scruffy, subsessile (small stalk) and are spatulate or obovate (oblong or elliptic) when the plant is young. Leaves develop a green to grey-green colour, with a length of 5-30mm and a width of 2-9mm, where the base is tapered and tip obtuse. Leaves are thin, oblong-elliptic, obtuse and have short petiolate (1–2 cm). Staminate flowers are tiny, terminal and 1.5mm wide, whereas pistillate flowers cluster distally from leaves. A. semibaccata is monoecious. Fruiting bracteoles are red or orange when mature, as well as having a convex and rhombic shape (diamond like appearance). Fruits are succulent, united at base, margin toothed, sessile and are a length of 4-6mm. A. semibaccata is seed propagated and seeds are dimorphic. Black seeds are 1.5-1.7mm, while brown seeds are 2mm in size. It can be used as fodder and is useful for degraded or salt affected l
https://en.wikipedia.org/wiki/Karel%20Hrb%C3%A1%C4%8Dek
Karel Hrbáček (born 1944) is professor emeritus of mathematics at City College of New York. He specializes in mathematical logic, set theory, and non-standard analysis. Early life and education Karel studied at Charles University with Petr Vopěnka, looking at large cardinal numbers. He was awarded the degree RNDr. Before his appointment at CCNY he was an exchange fellow at University of California, Berkeley and a research associate at Rockefeller University. In 1980 he received an award from the Mathematical Association of America for his article on Non-standard Set Theory. Selected publications 1999: (with Thomas Jech) Introduction to Set Theory, Third edition. Monographs and Textbooks in Pure and Applied Mathematics, 220. Marcel Dekker 1992: (with David Ballard) "Standard foundations for nonstandard analysis", Journal of Symbolic Logic 57(2): 741–748 1979: "Nonstandard set theory", American Mathematical Monthly 86(8): 659–677 1978: "Axiomatic foundations for nonstandard analysis", Fundamenta Mathematicae 98(1): 1–19
https://en.wikipedia.org/wiki/Debye%20sheath
The Debye sheath (also electrostatic sheath) is a layer in a plasma which has a greater density of positive ions, and hence an overall excess positive charge, that balances an opposite negative charge on the surface of a material with which it is in contact. Such a layer is several Debye lengths thick, a value whose size depends on various characteristics of the plasma (e.g. temperature, density, etc.). A Debye sheath arises in a plasma because the electrons usually have a temperature on the order of magnitude of that of the ions or greater, and are much lighter. Consequently, they are faster than the ions by at least a factor of √(m_i/m_e), the square root of the ion-to-electron mass ratio. At the interface to a material surface, therefore, the electrons will fly out of the plasma, charging the surface negative relative to the bulk plasma. Due to Debye shielding, the scale length of the transition region will be the Debye length λ_D. As the potential increases, more and more electrons are reflected by the sheath potential. An equilibrium is finally reached when the potential difference is a few times the electron temperature. The Debye sheath is the transition from a plasma to a solid surface. Similar physics is involved between two plasma regions that have different characteristics; the transition between these regions is known as a double layer, and features one positive and one negative layer. Description Sheaths were first described by American physicist Irving Langmuir. In 1923 he wrote: "Electrons are repelled from the negative electrode while positive ions are drawn towards it. Around each negative electrode there is thus a sheath of definite thickness containing only positive ions and neutral atoms. [..] Electrons are reflected from the outside surface of the sheath while all positive ions which reach the sheath are attracted to the electrode. [..] it follows directly that no change occurs in the positive ion current reaching the electrode. The electrode is in fact perfectly screened from the discharg
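The sheath scale set by the Debye length can be computed directly from the standard expression λ_D = √(ε₀·k_B·T_e / (n_e·e²)). The sketch below is illustrative only; the plasma parameters (density 10¹⁶ m⁻³, electron temperature 2 eV, typical of a laboratory discharge) are invented for the example, not taken from the article.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
E = 1.602176634e-19      # elementary charge, C

def debye_length(n_e, T_e_eV):
    """Electron Debye length in metres for electron density n_e (m^-3)
    and electron temperature T_e_eV (eV); k_B*T = e*T_e_eV joules."""
    return math.sqrt(EPS0 * (T_e_eV * E) / (n_e * E**2))

# Example: n_e = 1e16 m^-3, T_e = 2 eV -> a sheath scale of order 100 um.
lam = debye_length(1e16, 2.0)
print(f"{lam * 1e6:.1f} micrometres")
```

Since the sheath is only several Debye lengths thick, this shows why sheaths are microscopically thin compared with typical device dimensions, yet still control the particle and energy fluxes to the wall.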
https://en.wikipedia.org/wiki/Multiplicative%20number%20theory
Multiplicative number theory is a subfield of analytic number theory that deals with prime numbers and with factorization and divisors. The focus is usually on developing approximate formulas for counting these objects in various contexts. The prime number theorem is a key result in this subject. The Mathematics Subject Classification for multiplicative number theory is 11Nxx. Scope Multiplicative number theory deals primarily in asymptotic estimates for arithmetic functions. Historically the subject has been dominated by the prime number theorem, first by attempts to prove it and then by improvements in the error term. The Dirichlet divisor problem that estimates the average order of the divisor function d(n) and Gauss's circle problem that estimates the average order of the number of representations of a number as a sum of two squares are also classical problems, and again the focus is on improving the error estimates. The distribution of prime numbers among residue classes modulo an integer is an area of active research. Dirichlet's theorem on primes in arithmetic progressions shows that there are an infinity of primes in each co-prime residue class, and the prime number theorem for arithmetic progressions shows that the primes are asymptotically equidistributed among the residue classes. The Bombieri–Vinogradov theorem gives a more precise measure of how evenly they are distributed. There is also much interest in the size of the smallest prime in an arithmetic progression; Linnik's theorem gives an estimate. The twin prime conjecture, namely that there are an infinity of primes p such that p+2 is also prime, is the subject of active research. Chen's theorem shows that there are an infinity of primes p such that p+2 is either prime or the product of two primes. Methods The methods belong primarily to analytic number theory, but elementary methods, especially sieve methods, are also very important. The large sieve and exponential sums are usually considered p
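The Dirichlet divisor problem mentioned above concerns the average order of d(n): the summatory function satisfies Σ_{n≤x} d(n) = x·ln x + (2γ − 1)x + O(√x), where γ is the Euler–Mascheroni constant. As an illustrative check (not from the article), the sum can be computed exactly via the lattice-point identity Σ_{n≤x} d(n) = Σ_{k≤x} ⌊x/k⌋ and compared with the main term:

```python
import math

def divisor_summatory(x):
    """D(x) = sum of d(n) for n <= x, counted as lattice points under xy = x."""
    return sum(x // k for k in range((1), x + 1))

x = 10_000
exact = divisor_summatory(x)
gamma = 0.5772156649015329  # Euler-Mascheroni constant
approx = x * math.log(x) + (2 * gamma - 1) * x
print(exact, round(approx))  # the discrepancy is the O(sqrt(x)) error term
```

Improving the exponent in that error term below 1/2 (the current record is due to refinements of exponential-sum methods) is exactly the kind of problem the section describes.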
https://en.wikipedia.org/wiki/Automatic%20number%20announcement%20circuit
An automatic number announcement circuit (ANAC) is a component of a central office of a telephone company that provides a service to installation and service technicians to determine the telephone number of a telephone line. The facility has a telephone number that may be called to listen to an automatic announcement that includes the caller's telephone number. The ANAC facility is useful primarily during the installation of landline telephones to quickly identify one of multiple wire pairs in a bundle or at a termination point. Operation By connecting a test telephone set, a technician calls the local telephone number of the automatic number announcement service. This call is connected to equipment at the central office that uses automatic equipment to announce the telephone number of the line calling in. The main purpose of this system is to allow telephone company technicians to identify the telephone line they are connected to. Automatic number announcement systems are based on automatic number identification. Because they are intended for use by phone company technicians, ANAC systems bypass customer features such as unlisted numbers, caller ID blocking, and outgoing call blocking. Installers of multi-line business services where outgoing calls from all lines display the company's main number on call display can use ANAC to identify a specific line in the system, even if CID displays every line as "line one". Most ANAC systems are provider-specific in each wire center, while others are regional or state-/province- or area-code-wide. No official lists of ANAC numbers are published, as telephone companies guard against abuse that would interfere with availability for installers. Exchange prefixes for testing The North American Numbering Plan reserves the exchange (central office) prefixes 958 and 959 for plant testing purposes. Code 959 with three or four additional digits is dedicated for access to office test lines in local exchange carrier and interoffice c
https://en.wikipedia.org/wiki/Suzanne%20Scarlata
Suzanne Frances Scarlata is the Richard Whitcomb Professor at Worcester Polytechnic Institute. She is known for her work on how cells respond to hormones and neurotransmitters. She is an elected fellow of the American Association for the Advancement of Science. Education and career Scarlata grew up in Philadelphia and received a B.A. from Temple University in 1979. She went on to earn her Ph.D. from the University of Illinois, Urbana-Champaign. After her Ph.D. she accepted a position at AT&T Bell Laboratories where she worked on methods for testing circuit boards. She then moved to New York City where she worked at Cornell University Medical College. In 1991 she moved to Stony Brook University where she remained for 24 years. In 2016 she moved to Worcester Polytechnic Institute where, as of 2022, she is the Richard Whitcomb Professor of Chemistry and Biochemistry. In 2016, Scarlata was elected president of the Biophysical Society. Research In her own words, Scarlata is "fascinated by the way that cells grow, move, or die depending on their environment". Her early research examined the motion of fluorophores. She went on to examine histones under high pressure, the compression of lipid membranes, and the binding affinities of compounds within lipids. Scarlata is also interested in the use of enzymes to alter materials used in building construction. In 2021, Scarlata was involved in a research project that used the enzyme carbonic anhydrase to fix cracks in concrete. Selected publications Awards and honors In 2020 Scarlata was elected a fellow of the American Association for the Advancement of Science.
https://en.wikipedia.org/wiki/Methylglyoxal%20pathway
The methylglyoxal pathway is an offshoot of glycolysis found in some prokaryotes, which converts glucose into methylglyoxal and then into pyruvate. However, unlike glycolysis, the methylglyoxal pathway does not produce adenosine triphosphate (ATP). The pathway is named after the substrate methylglyoxal, which has three carbons and two carbonyl groups, one on the 1st carbon and one on the 2nd carbon. Methylglyoxal is, however, a reactive aldehyde that is very toxic to cells; it can inhibit growth in E. coli at millimolar concentrations. The excessive intake of glucose by a cell is the most important process for the activation of the methylglyoxal pathway. The Methylglyoxal pathway The methylglyoxal pathway is activated by the increased cellular uptake of carbon-containing molecules such as glucose, glucose-6-phosphate, lactate, or glycerol. Methylglyoxal is formed from dihydroxyacetone phosphate (DHAP) by the enzyme methylglyoxal synthase, giving off a phosphate group. Methylglyoxal is then converted into one of two different products, either D-lactate or L-lactate. Methylglyoxal reductase and aldehyde dehydrogenase convert methylglyoxal into lactaldehyde and, eventually, L-lactate. If methylglyoxal enters the glyoxalase pathway, it is converted into lactoylglutathione and eventually D-lactate. Both D-lactate and L-lactate are then converted into pyruvate. The pyruvate that is created most often goes on to enter the Krebs cycle (Weber 711–13). Enzymes and regulation The potentially hazardous effects of methylglyoxal require regulation of the reactions with this substrate. Synthesis of methylglyoxal is regulated by levels of DHAP and phosphate concentrations. High concentrations of DHAP encourage methylglyoxal synthase to produce methylglyoxal, while high phosphate concentrations inhibit the enzyme, and therefore the production of more methylglyoxal. The enzyme triose phosphate isomerase affects the levels of DHAP by converting glyceraldehyde 3-phosphat
https://en.wikipedia.org/wiki/Runtime%20%28program%20lifecycle%20phase%29
In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. A runtime error is detected during or after the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Many other runtime errors exist and are handled differently by different programming languages, such as division by zero errors, domain errors, array subscript out of bounds errors, arithmetic underflow errors, several types of overflow errors, and many other runtime errors generally considered software bugs, which may or may not be caught and handled by any particular computer language. Implementation details When a program is to be executed, a loader first performs the necessary memory setup and links the program with any dynamically linked libraries it needs, and then the execution begins starting from the program's entry point. In some cases, a language or implementation will have these tasks done by the language runtime instead, though this is unusual in mainstream languages on common consumer operating systems. Some program debugging can only be performed (or is more efficient or accurate when performed) at runtime. Logic errors and array bounds checking are examples. For this reason, some programming bugs are not discovered until the program is tested in a production environment with real data, despite sophisticated compile-time checking and pre-release testing. In this case, the end-user may encounter a "runtime error" message. Application errors (exceptions) Exception handling is one language feature
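The compile-time versus runtime distinction can be illustrated with a short sketch in Python (an illustrative example, not taken from the article): a function with a latent division-by-zero bug is accepted when it is defined, and the error only surfaces at the moment the faulty call actually executes.

```python
# A division by zero is a classic runtime error: the definition below is
# accepted without complaint, and the error is only detected during execution.
def divide(a, b):
    return a / b  # no error at definition time

try:
    divide(1, 0)  # the error surfaces here, while the program is running
    caught = None
except ZeroDivisionError as exc:
    caught = type(exc).__name__

print(caught)
```

By contrast, a syntax error in the same file would be reported before any line of the program ran at all, which is the behavior the article attributes to compile-time checking.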
https://en.wikipedia.org/wiki/Convolution%20power
In mathematics, the convolution power is the n-fold iteration of the convolution with itself. Thus if x is a function on Euclidean space Rd and n is a natural number, then the convolution power is defined by x*n = x ∗ x*(n−1) with x*0 = δ0, where ∗ denotes the convolution operation of functions on Rd and δ0 is the Dirac delta distribution. This definition makes sense if x is an integrable function (in L1), a rapidly decreasing distribution (in particular, a compactly supported distribution) or is a finite Borel measure. If x is the distribution of a random variable on the real line, then the nth convolution power of x gives the distribution of the sum of n independent random variables with identical distribution x. The central limit theorem states that if x is in L1 and L2 with mean zero and variance σ2, then the appropriately rescaled convolution powers x*n tend weakly to the standard normal distribution, whose cumulative distribution function on the real line is Φ. In some cases, it is possible to define powers x*t for arbitrary real t > 0. If μ is a probability measure, then μ is infinitely divisible provided there exists, for each positive integer n, a probability measure μ1/n such that (μ1/n)*n = μ. That is, a measure is infinitely divisible if it is possible to define all nth roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory of stochastic processes. Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidates for measures having such a logarithm are those of (generalized) Poisson type. In fact, the Lévy–Khinchin theorem states that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to the vague topology, of the class of Poisson measures. Many applications of the convolution power rely on being able to define the analog of
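For discrete probability distributions, the convolution power can be computed directly from the definition. The sketch below (illustrative only; the helper names `convolve` and `conv_power` are assumptions, not standard library functions) shows that the 2nd convolution power of a fair-die distribution is the distribution of the sum of two independent rolls:

```python
# Discrete convolution of two probability vectors p and q.
def convolve(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# n-fold convolution power; n = 0 gives the discrete delta at 0,
# mirroring the x*0 = delta_0 convention in the definition above.
def conv_power(p, n):
    result = [1.0]
    for _ in range(n):
        result = convolve(result, p)
    return result

die = [1.0 / 6] * 6              # fair die, faces relabeled 0..5
two_rolls = conv_power(die, 2)   # distribution of the sum of two rolls
```

The resulting vector has 11 entries (sums 0 through 10), sums to 1, and assigns probability 1/36 to the extreme outcomes, as expected for two independent identically distributed rolls.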
https://en.wikipedia.org/wiki/Glycerol%20dialkyl%20glycerol%20tetraether
Glycerol dialkyl glycerol tetraether lipids (GDGTs) are a class of membrane lipids synthesized by archaea and some bacteria, making them useful biomarkers for these organisms in the geological record. Their presence, structure, and relative abundances in natural materials can be useful as proxies for temperature, terrestrial organic matter input, and soil pH for past periods in Earth history. Some structural forms of GDGT form the basis for the TEX86 paleothermometer. Isoprenoid GDGTs, now known to be synthesized by many archaeal classes, were first discovered in extremophilic archaea cultures. Branched GDGTs, likely synthesized by acidobacteriota, were first discovered in a natural Dutch peat sample in 2000. Chemical structure The two primary structural classes of GDGTs are isoprenoid (isoGDGT) and branched (brGDGT), which refer to differences in the carbon skeleton structures. Isoprenoid compounds are numbered -0 through -8, with the numeral representing the number of cyclopentane rings present within the carbon skeleton structure. The exception is crenarchaeol, a Nitrososphaerota product with one cyclohexane ring moiety in addition to four cyclopentane rings. Branched GDGTs have zero, one, or two cyclopentane moieties and are further classified based on the positioning of their branches. They are numbered with Roman numerals and letters, with -I indicating structures with four modifications (i.e. either a branch or a cyclopentane moiety), -II indicating structures with five modifications, and -III indicating structures with six modifications. The suffix a after the Roman numeral means one of the modifications is a cyclopentane moiety; b means two modifications are cyclopentane moieties. For example, GDGT-IIb is a compound with three branches and two cyclopentane moieties (a total of five modifications). GDGT membranes form as monolayers with ether bonds to glycerol, as opposed to the bilayers with ester bonds found in eukaryotes and most bacteria. Biologi
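The brGDGT naming convention described above reduces to a small lookup: the Roman numeral fixes the total number of modifications, and the letter suffix fixes how many of those are cyclopentane moieties. A minimal sketch (the function name and dictionary encoding are illustrative assumptions, not established nomenclature software):

```python
# Roman numeral -> total modifications (branches + cyclopentane moieties),
# letter suffix -> number of cyclopentane moieties, per the convention above.
MODIFICATIONS = {"I": 4, "II": 5, "III": 6}
RING_SUFFIX = {"": 0, "a": 1, "b": 2}

def brgdgt_counts(numeral, suffix=""):
    total = MODIFICATIONS[numeral]
    rings = RING_SUFFIX[suffix]
    return {"branches": total - rings, "cyclopentane": rings}

# GDGT-IIb: five modifications, two of them cyclopentane moieties
print(brgdgt_counts("II", "b"))
```

Applied to the article's own example, GDGT-IIb resolves to three branches and two cyclopentane moieties.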
https://en.wikipedia.org/wiki/Grundlagen%20der%20Mathematik
Grundlagen der Mathematik (English: Foundations of Mathematics) is a two-volume work by David Hilbert and Paul Bernays. Originally published in 1934 and 1939, it presents fundamental mathematical ideas and introduced second-order arithmetic. Publication history 1934/1939 (Vol. I, II) First German edition, Springer 1944 Reprint of first edition by J. W. Edwards, Ann Arbor, Michigan. 1968/1970 (Vol. I, II) Second revised German edition, Springer 1979/1982 (Vol. I, II) Russian translation of 1968/1970, Nauka Publ., Moscow 2001/2003 (Vol. I, II) French translation, L’Harmattan, Paris 2011/2013 (Parts A and B of Vol. I, prefaces and sections 1-5) English translation of 1968 and 1934, bilingual with German facsimile on the left-hand sides. The Hilbert Bernays Project is producing an English translation. See also Hilbert–Bernays paradox
https://en.wikipedia.org/wiki/Breeding%20bird%20survey
A breeding bird survey monitors the status and trends of bird populations. Data from the survey are an important source for the range maps found in field guides. The North American Breeding Bird Survey is a joint project of the United States Geological Survey (USGS) and the Canadian Wildlife Service. The UK Breeding Bird Survey is administered by the British Trust for Ornithology, the Joint Nature Conservation Committee, and the Royal Society for the Protection of Birds. The results of the BBS are valuable for evaluating increases and decreases in bird populations and ranges, which is key to bird conservation. The BBS was designed to provide a continent-wide perspective of population change. History The North American Breeding Bird Survey was launched in 1966 after the concept of a continental monitoring program for all breeding birds had been developed by Chandler Robbins and his associates from the Migratory Bird Population Station. The program was developed in Laurel, Maryland. In the first year of its existence there were nearly 600 surveys conducted east of the Mississippi River. One year later, in 1967, the survey spread to the Great Plains states and by 1968 almost 2000 routes had been established across southern Canada and 48 American states. As more birders were introduced to this program, the number of active BBS routes continued to increase. In the 1980s, the Breeding Bird Survey expanded to include Yukon, the Northwest Territories of Canada, and Alaska. Additionally, the number of routes in established states has increased. Currently, there are approximately 3700 active BBS routes in the United States and Canada, of which approximately 2900 are surveyed on an annual basis. The density of the routes varies greatly across the continent and the largest number of routes can be found in the New England and Mid-Atlantic states. Many bird watchers participate in these surveys as they find the experience rewarding. Future plans for the BBS include expanding coverage
https://en.wikipedia.org/wiki/Differentiation%20rules
This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus. Elementary rules of differentiation Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers (C). Constant term rule For any real number c, if f is the constant function given by f(x) = c, then f′(x) = 0. Proof Let c ∈ R and f(x) = c. By the definition of the derivative, f′(x) = lim_{h→0} (f(x + h) − f(x))/h = lim_{h→0} (c − c)/h = 0. This shows that the derivative of any constant function is 0. Intuitive (geometric) explanation The derivative of the function at a point is the slope of the line tangent to the curve at the point. The slope of the constant function is zero, because the tangent line to the constant function is horizontal and its angle is zero. In other words, the value of the constant function, y, will not change as the value of x increases or decreases. Differentiation is linear For any functions f and g and any real numbers a and b, the derivative of the function h(x) = a f(x) + b g(x) with respect to x is: h′(x) = a f′(x) + b g′(x). In Leibniz's notation this is written as: d(af + bg)/dx = a df/dx + b dg/dx. Special cases include: The constant factor rule (af)′ = a f′; The sum rule (f + g)′ = f′ + g′; The difference rule (f − g)′ = f′ − g′. The product rule For the functions f and g, the derivative of the function h(x) = f(x) g(x) with respect to x is h′(x) = f′(x) g(x) + f(x) g′(x). In Leibniz's notation this is written d(fg)/dx = (df/dx) g + f (dg/dx). The chain rule The derivative of the function h(x) = f(g(x)) is h′(x) = f′(g(x)) · g′(x). In Leibniz's notation, this is written as: dh/dx = (df/dg) · (dg/dx), often abridged to df/dx = (df/dg)(dg/dx). Focusing on the notion of maps, and the differential being a map D, this is written in a more concise way as: D(f ∘ g) = ((Df) ∘ g) · Dg. The inverse function rule If the function f has an inverse function g, meaning that g(f(x)) = x and f(g(y)) = y, then g′(y) = 1/f′(g(y)). In Leibniz notation, this is written as dx/dy = 1/(dy/dx). Power laws, polynomials, quotients, and reciprocals The polynomial or elementary power rule If f(x) = x^r, for any real number r, then f′(x) = r x^(r−1). When r = 1 this becomes the special case that if f(x) = x, then f′(x) = 1. Combining the power rule with the sum and constant multiple rules permit
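The product and chain rules above can be checked numerically with a central finite difference. The sketch below is illustrative only (the step size and tolerance are assumptions, and a numeric check is evidence, not a proof), using nothing beyond the standard library:

```python
import math

# Central finite-difference approximation of f'(x), accurate to O(h^2).
def d(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = lambda x: x ** 2
x0 = 1.3

# Product rule: (f*g)'(x) = f'(x) g(x) + f(x) g'(x)
prod_lhs = d(lambda x: f(x) * g(x), x0)
prod_rhs = d(f, x0) * g(x0) + f(x0) * d(g, x0)

# Chain rule: (f(g(x)))' = f'(g(x)) * g'(x)
chain_lhs = d(lambda x: f(g(x)), x0)
chain_rhs = d(f, g(x0)) * d(g, x0)
```

At the sample point the two sides of each rule agree to well within the finite-difference error, which is the expected behavior for smooth functions like sin and x².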
https://en.wikipedia.org/wiki/National%20Biodiversity%20Centre%20%28Singapore%29
The National Biodiversity Centre (NBC; Chinese: 国家生物多样性中心) is a branch of the National Parks Board and serves as Singapore's one-stop centre for biodiversity-related information and activities. It manages all available information and data on biodiversity in Singapore. Diverse biodiversity-related information and data are currently generated, stored and updated by different organisations and individuals. The National Biodiversity Centre will maximize the usefulness of such information and data by linking them in a single meta-database. Having complete and up-to-date information is crucial for many decision-making processes involving biodiversity. This hub of biodiversity information and data at the National Biodiversity Centre will also allow knowledge gaps to be better identified and addressed. The National Biodiversity Centre takes responsibility for the conservation of both terrestrial and marine flora and fauna in Singapore and represents the National Parks Board in its role as the government's scientific authority on nature conservation. The National Biodiversity Centre also represents Singapore in various biodiversity-related international and regional conventions, including the Convention on Biological Diversity, the ASEAN Centre for Biodiversity, the ASEAN Working Group on Nature Conservation and Biodiversity, and ASEANET. History The National Biodiversity Centre was formerly the Conservation Branch of the National Parks Board. On 1 April 2003, it was renamed the Biodiversity Centre. On 22 May 2006, the National Biodiversity Reference Centre was established. It was first mentioned to the public at the Biodiversity of Singapore Symposium in 2003 by Vivian Balakrishnan. In January 2008, it was renamed the National Biodiversity Centre. Organisation structure The National Biodiversity Centre is a branch of the Conservation Division of the National Parks Board. NBC consists of four departments: the Coastal and Marine Environment Program Office, International
https://en.wikipedia.org/wiki/Simple%20sequence%20length%20polymorphism
Simple Sequence Length Polymorphisms (SSLPs) are used as genetic markers with polymerase chain reaction (PCR). An SSLP is a type of polymorphism: a difference in DNA sequence amongst individuals. SSLPs are repeated sequences over varying base lengths in intergenic regions of deoxyribonucleic acid (DNA). Variance in the length of SSLPs can be used to understand genetic variation between two individuals in a certain species. Applications An example of the usage of SSLPs (microsatellites) is seen in a study by Rosenberg et al., where SSLPs were used to cluster different continental populations of human beings. The study was critical to Nicholas Wade's New York Times Bestseller, Before the Dawn: Recovering the Lost History of Our Ancestors. Rosenberg Study Rosenberg studied 377 SSLPs in 1000 people in 52 different regions of the world. By using PCR and cluster analysis, Rosenberg was able to group individuals that had the same SSLPs. These SSLPs were extremely useful to the experiment because they do not affect the phenotypes of the individuals, thus being unaffected by natural selection.
https://en.wikipedia.org/wiki/Jasmine%20Directory
Jasmine Directory is a human-edited web directory providing websites and businesses categorized topically and regionally. It offers thirteen topic-based categories and one region-based category with hand-picked and reviewed user-suggested resources. Jasmine Directory was founded in 2009 by Pécsi András and Robert Gomboș and is headquartered in Valley Cottage, New York. It won eight prizes during 2013–14 for its editorial discretion and manually added resources. Because editors manually add about 90% of the resources, the directory has been described as useful for search engine optimization in Google search results. History The directory was launched in 2009 at the Budapest University of Technology and Economics by Pécsi András and Robert Gomboș. It operates from Valley Cottage, New York. Operation and structure Jasmine Directory lists educational resources and businesses of public interest, which can be filtered based on business category. Jasmine Directory's editors manually add resources to the index, which according to co-founder Gomboș represents 90% of its listings. Businesses and site owners can also suggest their websites for review by paying a "suggestion fee"; however, inclusion is not guaranteed if the suggested resources do not comply with the editorial guidelines, in which case the review fee is refunded. The annual fee for standard listings is $59. Site owners do not pay for placement; rather, the fee covers the editorial and administrative effort to review and create a listing. The directory labels listings chosen by editors with an "EP" mark to separate those websites from ones submitted by site owners. To suggest a resource for inclusion, users may select an appropriate category; they can also customize their listing by adding their websites' social media fan pages and contact information, based on which the business's Google Maps location will be generated accordingly. Once the listing is posted, its publisher can edit the details any time, but an upgrade feature is also available
https://en.wikipedia.org/wiki/Treehouse%20%28company%29
Treehouse (or Teamtreehouse) is an online technology school that offers beginner to advanced courses in web design, web development, mobile development and game development. Its courses are aimed at beginners looking to learn computer coding skills for a career in the tech industry. The Treehouse learning program includes videos combined with interactive quizzes and code challenges. Treehouse Tracks are guided curricula, composed of courses that train students in large topic areas. Treehouse for Teams is designed to assist businesses, organizations, schools and community programs in technology training. As of 2011, companies including Simple and LivingSocial used Treehouse to recruit new employees based on their progress and achievements on Treehouse. History Ryan and Gillian Carson founded Treehouse in 2011, a project that emerged from Carson's previous company, Carsonified, and its video-tutorial service Think Vitamin Membership. Carson redesigned and rebranded the service as Treehouse because the name "reflects the wonder of learning as a child." In 2011, Treehouse opened their first office in Orlando, Florida. In 2012 they opened their second office and moved Treehouse HQ to Portland, Oregon. In April 2012, Treehouse raised $4.75M in funding. In April 2013, Treehouse closed a US$7 million Series-B fundraising round led by Social+Capital and Kaplan, bringing its total raised capital to $12.6 million. In July 2013, Treehouse released its first iPad app for accessing Treehouse's content. Treehouse also released an Android application in 2014 and added a course for Apple's Swift programming language. At the time, Carson was the CEO of Treehouse. In May 2016, Treehouse announced the launch of the Techdegree Program. The Techdegree program is a guided learning program that is designed to help students prepare for entry-level development jobs. There are currently Techdegree programs available in Front End Web Development, Full Stack JavaScript, User Experi
https://en.wikipedia.org/wiki/Martin%20and%20Mitchell%20defection
In September 1960, two U.S. National Security Agency (NSA) cryptologists, William Hamilton Martin and Bernon F. Mitchell, defected to the Soviet Union. A secret 1963 NSA study said that: "Beyond any doubt, no other event has had, or is likely to have in the future, a greater impact on the Agency's security program." Martin and Mitchell met while serving in the U.S. Navy in Japan in the early 1950s and both joined the NSA on the same day in 1957. They defected together to the Soviet Union in 1960 and, at a Moscow press conference, revealed and denounced various U.S. policies, especially provocative incursions into the air space of other nations and spying on America's own allies. Underscoring their apprehension of nuclear war, they said: "we would attempt to crawl to the moon if we thought it would lessen the threat of an atomic war." Within days of the press conference, citing a trusted source, Congressman Francis E. Walter, chairman of the House Un-American Activities Committee (HUAC), said Martin and Mitchell were "sex deviates", prompting sensational press coverage. U.S. officials at the National Security Council privately shared their assumption that the two were part of a traitorous homosexual network. Classified NSA investigations, on the other hand, determined the pair had "greatly inflated opinions concerning their intellectual attainments and talents" and had defected to satisfy social aspirations. The House Un-American Activities Committee publicly intimated its interpretation of the relationship between Martin and Mitchell as homosexual and that reading guided the Pentagon's discussion of the defection for decades. Early lives and careers William Hamilton Martin (May 27, 1931 – January 17, 1987) was born in Columbus, Georgia. His family soon moved to Washington state where his father was president of the Ellensburg Chamber of Commerce. He graduated from Ellensburg High School after two years. After studies at Central Washington College of Education (n
https://en.wikipedia.org/wiki/.debug%20info
Introduction The .debug_info section of an ELF file contains information generated by compilers to describe the source code for debugging, recording each symbol together with its type, scope, file, line number, etc. The .debug_info section is one of the main components of DWARF debugging information. It is generated by a compiler when the -g switch or one of its variants is used. Other debug ELF sections
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2032
In molecular biology, glycoside hydrolase family 32 is a family of glycoside hydrolases, which are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. Family 32 glycosyl hydrolases comprise two distinct domains: an N-terminal domain, which forms a five-bladed beta-propeller, and a C-terminal domain, which forms a beta-sandwich structure.
https://en.wikipedia.org/wiki/Application%20Defined%20Network
Application Defined Network (ADN) is an enterprise data network that uses virtual networks and security components to provide a dedicated logical network for each application. This allows customized security and network policies to be created to meet the requirements of that specific application. ADN technology allows for a simple physical architecture with fewer devices, and less device configuration and integration. ADN solutions simplify businesses' need to securely deploy multiple applications across the enterprise footprint and partner networks, regardless of where the application resides. ADN platforms provide policy-based, application-specific delivery to corporate data centers, cloud services and third-party networks securely and cost-effectively. Some ADN solutions integrate 3G/4G wireless backup services to enable a second internet connection automatically and instantly when connectivity is lost on the primary access connection. The ADN design provides an application-to-application (A2A) based model that evolves enterprise networks beyond the site-to-site (S2S) private model. ADN fundamentals ADN solutions address the need to enable multiple different applications, such as guest Wi-Fi (hotspot) access, while securing regulated applications, such as payment, on the same network. Traditionally, in site-to-site networks, having multiple applications introduces security policy conflicts. Technologies such as guest Wi-Fi, mobile payment and cloud services open the traditional private network to outside security threats and create complexity in security policies and network administration. ADNs can be customized with security features that address specific application needs. They can also be enhanced with performance and reliability features such as traffic management for application prioritization and fail-over for back-up connection services. Complexity breeds vulnerability. Application Defined Networks (ADNs) reduce complexity and the resulting costs of multiple devi
https://en.wikipedia.org/wiki/Lamberto%20Cesari
Lamberto Cesari (23 September 1910 – 12 March 1990) was an Italian mathematician naturalized in the United States, known for his work on the theory of surface area, the theory of functions of bounded variation, the theory of optimal control and the stability theory of dynamical systems: in particular, by extending the concept of Tonelli plane variation, he succeeded in introducing the class of functions of bounded variation of several variables in its full generality. Biography In 1933, he was awarded his laurea degree at the Scuola Normale Superiore in Pisa under the direction of Leonida Tonelli. After a period of study from 1934 to 1935 in Germany, in Munich, under the direction of Constantin Carathéodory, he went back to Pisa at the Scuola Normale Superiore for a year, and then to Rome at the Istituto Nazionale per le Applicazioni del Calcolo, at the time directed by Mauro Picone. From 1938 to 1946 he went back as a professore incaricato at Pisa University: in 1947 he was at the University of Bologna as a professor of mathematical analysis. In 1948 he went to the United States as a visiting professor at the Institute for Advanced Study in Princeton, at Purdue University in Lafayette, at the University of California, Berkeley, and at the University of Wisconsin–Madison. In 1960 he was appointed as a professor of mathematical analysis at the University of Michigan at Ann Arbor, where he remained until his retirement in 1981. In 1976 he became a citizen of the United States, while keeping close scientific contacts with the Italian mathematical community. The Lamberto Cesari chair The department of Mathematics at the University of Michigan honored the memory of Lamberto Cesari with the creation of a professorship chair. Work Research activity He is remembered for his achievements on the Plateau's problem, on the theory of parametric minimal surfaces, on the Lebesgue area of continuous surfaces and on related variational problems: he also worked in the fi
https://en.wikipedia.org/wiki/Digifold
Digifold is a four- and six-corner folding device for box gluers and associated machinery. Development began in the late 1990s, and it was first exhibited at the IPEX 2002 trade show. A system using a Siemens S7-200 PLC and Siemens servo controllers was developed throughout the late 1990s, into the year 2000. It was the first system in the box folding industry to use carbon fibre as a construction material (used in the drive shafts), and advanced aluminium (aluminum) alloys in the clamping devices. Although it was not the first servo-driven backfolding system, it used advanced materials to improve speed and reduce cost over comparable systems (notably from Jagenberg, Bobst and others).
https://en.wikipedia.org/wiki/List%20of%20numbers
This is a list of notable numbers and articles about notable numbers. The list does not contain all numbers in existence as most of the number sets are infinite. Numbers may be included in the list based on their mathematical, historical or cultural notability, but all numbers have qualities which could arguably make them notable. Even the smallest "uninteresting" number is paradoxically interesting for that very property. This is known as the interesting number paradox. The definition of what is classed as a number is rather diffuse and based on historical distinctions. For example, the pair of numbers (3,4) is commonly regarded as a number when it is in the form of a complex number (3+4i), but not when it is in the form of a vector (3,4). This list will also be categorised with the standard convention of types of numbers. This list focuses on numbers as mathematical objects and is not a list of numerals, which are linguistic devices: nouns, adjectives, or adverbs that designate numbers. The distinction is drawn between the number five (an abstract object equal to 2+3), and the numeral five (the noun referring to the number). Natural numbers The natural numbers are a subset of the integers and are of historical and pedagogical value as they can be used for counting and often have ethno-cultural significance (see below). Beyond this, natural numbers are widely used as a building block for other number systems including the integers, rational numbers and real numbers. Natural numbers are those used for counting (as in "there are six (6) coins on the table") and ordering (as in "this is the third (3rd) largest city in the country"). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". Defined by the Peano axioms, the natural numbers form an infinitely large set. Often referred to as "the naturals", the natural numbers are usually symbolised by a boldface N (or blackboard bold ℕ, Unicode U+2115). The incl
https://en.wikipedia.org/wiki/Lymph%20node%20stromal%20cell
Lymph node stromal cells are essential to the structure and function of the lymph node whose functions include: creating an internal tissue scaffold for the support of hematopoietic cells; the release of small molecule chemical messengers that facilitate interactions between hematopoietic cells; the facilitation of the migration of hematopoietic cells; the presentation of antigens to immune cells at the initiation of the adaptive immune system; and the homeostasis of lymphocyte numbers. Stromal cells originate from multipotent mesenchymal stem cells. Structure Lymph nodes are enclosed in an external fibrous capsule, from which thin walls of sinew called trabeculae penetrate into the lymph node, partially dividing it. Beneath the external capsule and along the courses of the trabeculae, are peritrabecular and subcapsular sinuses. These sinuses are cavities containing macrophages (specialised cells which help to keep the extracellular matrix in order). The interior of the lymph node has two regions: the cortex and the medulla. In the cortex, lymphoid tissue is organized into nodules. In the nodules, T lymphocytes are located in the T cell zone. B lymphocytes are located in the B cell follicle. The primary B cell follicle matures in germinal centers. In the medulla are hematopoietic cells (which contribute to the formation of the blood) and stromal cells. Near the medulla is the hilum of lymph node. This is the place where blood vessels enter and leave the lymph node and lymphatic vessels leave the lymph node. Lymph vessels entering the node do so along the perimeter (outer surface). Function The lymph nodes, the spleen and Peyer's patches, together are known as secondary lymphoid organs. Lymph nodes are found between lymphatic ducts and blood vessels. Afferent lymphatic vessels bring lymph fluid from the peripheral tissues to the lymph nodes. The lymph tissue in the lymph nodes consists of immune cells (95%), for example lymphocytes, and stromal cells (1% to
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles%20%28Z%29
The index of physics articles is split into multiple pages due to its size. To navigate by individual letter use the table of contents below. Z Z' boson Z(4430) Z-pinch ZEPLIN-III ZETA (fusion reactor) ZEUS (particle detector) ZINDO ZPPR Zakharov system Zakharov–Schulman system Zastruga Zeeman effect Zeitschrift für Angewandte Mathematik und Physik Zeitschrift für Physik C Zeldovich pancake Zemax Zero-dispersion wavelength Zero-energy universe Zero-lift drag coefficient Zero-mode waveguide Zero-phonon line and phonon sideband Zero-point energy Zero differential overlap Zero gravity (disambiguation) Zero lift axis Zero sound Zeroth law of thermodynamics Zeta function regularization Zevatron Ze'ev Lev Zhang Jie (scientist) Zhao Jiuzhang Zhores Alferov Zhou Guangzhao Zhou Peiyuan Zhu Guangya Zhurnal Eksperimental'noi i Teoreticheskoi Fiziki Zia Mian Zimm–Bragg model Zinc sulfide Zinovii Shulman Zip-cord Zirconium alloy Zitterbewegung Zoltán Lajos Bay Zonal and poloidal Zonal flow (plasma) Zone melting Zone plate Zone valve Zoé (reactor) Z Pulsed Power Facility Zsolt Bor Zubbles Zygmunt Florenty Wróblewski Zénobe Gramme Indexes of physics articles
https://en.wikipedia.org/wiki/Artificial%20plants
Artificial plants are imitations of natural plants used for commercial or residential decoration. They are sometimes made for scientific purposes (the collection of glass flowers at Harvard University, for example, illustrates the flora of the United States). Artificial plants vary widely from mass-produced varieties that are distinguishable from real plants by casual observation to highly detailed botanical or artistic specimens. Materials used in their manufacture have included painted linen and shavings of stained horn in ancient Egypt, gold and silver in ancient Rome, rice-paper in China, silkworm cocoons in Italy, colored feathers in South America, and wax and tinted shells. Modern techniques involve carved or formed soap, nylon netting stretched over wire frames, ground clay, and mass-produced injection plastic mouldings. Polyester has been the main material for manufacturing artificial flowers since the 1970s. Most artificial flowers in the market nowadays are made of polyester fabric. Production The industry is now highly specialized with several different manufacturing processes. Hundreds of artificial flower factories in the Pearl River delta area of Guangdong province in China have been built since the early 1980s. Thousands of 40-foot containers of polyester flowers and plants are exported to many countries every year. Polyester and paper Five main processes may be distinguished: The first step consists of putting the polyester fabric in gelatine in order to stiffen it. The second consists of cutting up the various polyester fabrics and materials employed into shapes suitable for forming the leaves, petals, etc.; this may be done with scissors, but is more often done with stamps that can cut through a dozen or more thicknesses at one blow. Next, the veins of the leaves are impressed by means of silk screen printing with a dye, and the petals are given their natural rounded forms by goffering irons of various shapes. The next step is to assemble
https://en.wikipedia.org/wiki/Moons%20of%20Haumea
The outer Solar System planetoid Haumea has two known moons, Hiʻiaka and Namaka, named after Hawaiian goddesses. These small moons were discovered in 2005, from observations of Haumea made at the large telescopes of the W. M. Keck Observatory in Hawaii. Haumea's moons are unusual in a number of ways. They are thought to be part of its extended collisional family, which formed billions of years ago from icy debris after a large impact disrupted Haumea's ice mantle. Hiʻiaka, the larger, outermost moon, has large amounts of pure water ice on its surface, which is rare among Kuiper belt objects. Namaka, about one tenth the mass, has an orbit with surprising dynamics: it is unusually eccentric and appears to be greatly influenced by the larger satellite. History Two small satellites were discovered around Haumea (which was at that time still designated 2003 EL61) through observations using the W.M. Keck Observatory by a Caltech team in 2005. The outer and larger of the two satellites was discovered 26 January 2005, and formally designated S/2005 (2003 EL61) 1, though nicknamed "Rudolph" by the Caltech team. The smaller, inner satellite of Haumea was discovered on 30 June 2005, formally termed S/2005 (2003 EL61) 2, and nicknamed "Blitzen". On 7 September 2006, both satellites were numbered and admitted into the official minor planet catalogue as (136108) 2003 EL61 I and II, respectively. The permanent names of these moons were announced, together with that of 2003 EL61, by the International Astronomical Union on 17 September 2008: (136108) Haumea I Hiʻiaka and (136108) Haumea II Namaka. Each moon was named after a daughter of Haumea, the Hawaiian goddess of fertility and childbirth. Hiʻiaka is the goddess of dance and patroness of the Big Island of Hawaii, where the Mauna Kea Observatory is located. Nāmaka is the goddess of water and the sea; she cooled her sister Pele's lava as it flowed into the sea, turning it into new land. In her legend, Haumea's many children c
https://en.wikipedia.org/wiki/Katharine%20Bishop
Katharine Julia Scott Bishop (June 23, 1889 – September 20, 1975) was a trained anatomist, medical physician, researcher and educator best known for co-discovering Vitamin E. Early life In 1889, Bishop was born in New York as Katharine Scott, to Walter and Katherine Emma (Campbell) Scott. She attended Somerville High School (Massachusetts) and later received her undergraduate degree from Wellesley College in 1910. After taking premedical courses at Radcliffe College, Bishop went on to graduate from The Johns Hopkins University School of Medicine, earning her medical degree in 1915. Discovery of Vitamin E After graduating from medical school, Bishop moved to Berkeley to teach histology in the anatomy department at the University of California Medical School, where she remained until 1923. During this time, Bishop did her medical research with the anatomist and endocrinologist Herbert McLean Evans. Together, they published a monograph on the vital staining of connective tissue cells. The discovery of Vitamin E came as a result of a study of the reproductive cycle of rats. After establishing a standard diet that maintained the rats' regular reproductive cycle, Bishop and Evans started experimenting with dietary deficiencies. In 1923, they found a previously unknown factor that is vital for reproduction. When the rats were fed a diet in which lard was the only source of fat, they grew healthily, but the female rats were unable to carry babies to full term because their placentas broke down, and the male rats became sterile because the sperm-forming cells in their testes deteriorated. Initially calling it "Factor X", Bishop and Evans narrowed its source down to the lipid extract of lettuce and wheat germ. The factor was later named "Vitamin E", taking the next letter after Vitamin D. Later life From 1924 to 1929, Bishop worked as a histopathologist at the George Williams Hooper Foundation for Medical Research in San Francisco. After her marriage and birth of her two daughters, she
https://en.wikipedia.org/wiki/Magnitude%20%28mathematics%29
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. In physics, magnitude can be defined as quantity or distance. History The Greeks distinguished between several types of magnitude, including: Positive fractions Line segments (ordered by length) Plane figures (ordered by area) Solids (ordered by volume) Angles (ordered by angular magnitude) They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and magnitude is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes. Numbers The magnitude of any number x is usually called its absolute value or modulus, denoted by |x|. Real numbers The absolute value of a real number r is defined by: |r| = r if r ≥ 0, and |r| = −r if r < 0. Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70. Complex numbers A complex number z may be viewed as the position of a point P in a 2-dimensional space, called the complex plane. The absolute value (or modulus) of z may be thought of as the distance of P from the origin of that space. The formula for the absolute value of z = a + bi is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space: |z| = √(a² + b²), where the real numbers a and b are the real part and the imaginary part of z, respectively. For instance, the modulus of −3 + 4i is √((−3)² + 4²) = 5. Alternatively, the magnitude of a complex number z may be defined as the square root of the product of itself and its complex conjugate z̄: |z| = √(z·z̄), where for any complex number z = a + bi, its complex conjugate is z̄ = a − bi (so that z·z̄ = a² + b²). Vector spaces Euclidean vector space A Euclidean vector represents the p
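The definitions of absolute value and complex modulus can be checked directly in code; this is just a numerical illustration of the formulas, using Python's built-in `abs`:

```python
import math

# Absolute value of a real number: its distance from zero.
assert abs(70) == abs(-70) == 70

# Modulus of a complex number z = a + bi is sqrt(a^2 + b^2),
# the Euclidean distance of the point (a, b) from the origin.
z = complex(-3, 4)
assert abs(z) == math.hypot(z.real, z.imag) == 5.0

# Equivalently, |z| = sqrt(z * conj(z)), since z * conj(z) = a^2 + b^2.
assert math.isclose(abs(z), math.sqrt((z * z.conjugate()).real))
print(abs(z))  # 5.0
```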
https://en.wikipedia.org/wiki/Amiga%20programming%20languages
This article deals with programming languages used in the Amiga line of computers, running the AmigaOS operating system and its derivatives AROS and MorphOS. It is a split of the main article Amiga software. See also the related articles Amiga productivity software, Amiga music software, Amiga Internet and communications software and Amiga support and maintenance software for other information regarding software that runs on Amiga. History Many games and software titles, especially in the early years of the Amiga, were written to directly access the hardware instead of using the operating system for graphics and input. Consequently, games could achieve much faster and smoother game-play, but at the cost of compatibility with newer Amiga models. Cross-platform libraries and programming facilities Several cross-platform libraries and facilities are available for Amiga: MUI and ReAction are Amiga standard object-oriented systems for building graphical interfaces. SDL libraries are widely used in all modern Amiga systems. Cairo support is built into AmigaOS 4.1 and MorphOS 3.0. Anti-Grain Geometry. CLib2 is a portable ISO C (1994) runtime library for the Amiga. Allegro Library has been ported to AmigaOS 4 and MorphOS. An Amiga port of wxWidgets, wxWidgets-AOS, is being worked on. Gallium3D is now part of the AROS Icaros Desktop Live Distro. OpenAL, a free software cross-platform audio API designed for efficient rendering of multichannel three-dimensional positional audio, is available for MorphOS and any AmigaOS version 3 and later revisions. AROS and MorphOS support the FreeType library in various projects, including the version released with the Origyn Web Browser. The FLTK "Fast, Light Toolkit" version for AmigaOS 4.0 is almost complete and offers all the functionality of the official 1.1.6 version, including the standard and plastic schemes. For many years Amiga lacked a complete integrated development environment (IDE). This changed in 2005–2006 when Cubic IDE was created, based on th
https://en.wikipedia.org/wiki/Scarlet%20elf%20cap
Scarlet elf cap or Scarlet elf cup may refer to one of two small scarlet fungi:
https://en.wikipedia.org/wiki/Biermann%20battery
In astrophysics, the Biermann battery is a process by which a weak seed magnetic field can be generated from zero initial conditions. The relative motion between electrons and ions is driven by rotation. The process was discovered by Ludwig Biermann in 1950.
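The mechanism can be summarized with the standard baroclinic source term; the following derivation sketch (Gaussian units) is a common textbook form and is offered as an assumption, since the excerpt itself gives no equations:

```latex
% Hedged sketch: the electron pressure term in the generalized Ohm's law
% gives an electric field
\mathbf{E} = -\frac{\nabla p_e}{e\,n_e},
% and Faraday's law then yields a source term for the magnetic field:
\frac{\partial \mathbf{B}}{\partial t}
  = -c\,\nabla \times \mathbf{E}
  = \frac{c\,k_B}{e\,n_e}\,\nabla T_e \times \nabla n_e ,
% which is nonzero only when the electron temperature and density
% gradients are not parallel, so a field can grow from zero.
```

In this picture the "relative motion between electrons and ions driven by rotation" is what misaligns the two gradients.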
https://en.wikipedia.org/wiki/Cleft%20palate%20incidence%20by%20population
Cleft lip and/or palate is a congenital abnormality that is seen frequently around the world. On average, about 1 in every 500-750 live births results in a cleft (Hardin-Jones, Karnell, & Peterson-Falzone, 2001). Furthermore, in the U.S., the prevalence of cleft lip with or without cleft palate (CL +/- P) is 2.2 to 11.7 per 10,000 births. Cleft palate alone (CP) has a prevalence rate of 5.5 to 6.6 per 10,000 births (Forrester & Merz, 2004). Cleft of the lip, palate, or both is one of the most common congenital abnormalities and has a birth prevalence rate ranging from 1/1000 to 2.69/1000 amongst different parts of the world (McLeod, Saeed, & Arana-Urioste, 2004). Africans and African Americans A look into the prevalence rates of different cultures in the U.S. when compared to country of origin begins with Africans and African Americans. One in every 2,500 African Americans is born with a cleft (Suleiman, Hamzah, Abusalab, & Samaan, 2005). African Americans have a lower prevalence rate of CL +/- P when compared to Caucasians: prevalence rates of 0.61 per 1,000 and 1.05 per 1,000 live births, respectively, were reported by Croen, Shaw, Wasserman and Tolarova (1998). In Malawi there is a reported low prevalence rate for cleft lip and/or palate, 0.7 per 1,000 live births (Chisi, Igbibi, & Msamati, 2000). Suleiman et al. (2005) found that the prevalence rate of clefting among a group of Sudanese hospital new-borns in the city of Khartoum is 0.9 per 1,000 live births. Mestizo Americans Mestizo Americans are people of mixed European and Native American origins from Mexico, Central America and South America, and the Caribbean (Meyerson, 1990). The prevalence among Mestizo Americans is lower than among Caucasians and Native Americans, yet still higher than among African Americans (Croen et al., 1998). Mestizo Americans have a prevalence of clefting of 9.7 per 10,000 live births (Kirby, Petrini & Alter, 2000). In Sucre, Bolivia the prevalence rate of CL +/- P is 1.23 per 1,000 live b
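The figures above mix "per 1,000" and "per 10,000" bases, which makes direct comparison awkward; a small conversion helper puts them on a common scale. The numbers are those quoted in the text; the function name is an illustrative choice.

```python
def per_10k(rate: float, base: int) -> float:
    """Convert a rate expressed per `base` live births to per 10,000."""
    return rate * 10_000 / base

# Rates quoted in the text, normalized to "per 10,000 live births".
rates = {
    "African Americans, CL +/- P (Croen et al., 1998)": per_10k(0.61, 1_000),
    "Caucasians, CL +/- P (Croen et al., 1998)": per_10k(1.05, 1_000),
    "Malawi (Chisi et al., 2000)": per_10k(0.7, 1_000),
    "Khartoum, Sudan (Suleiman et al., 2005)": per_10k(0.9, 1_000),
    "Mestizo Americans (Kirby et al., 2000)": per_10k(9.7, 10_000),
}
for group, r in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{group}: {r:.1f} per 10,000 live births")
```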
https://en.wikipedia.org/wiki/Aerodynamic%20center
In aerodynamics, the torques or moments acting on an airfoil moving through a fluid can be accounted for by the net lift and net drag applied at some point on the airfoil, and a separate net pitching moment about that point whose magnitude varies with the choice of where the lift is chosen to be applied. The aerodynamic center is the point at which the pitching moment coefficient for the airfoil does not vary with lift coefficient (i.e. angle of attack), making analysis simpler: dC_m/dC_L = 0, where C_L is the aircraft lift coefficient. The lift and drag forces can be applied at a single point, the center of pressure, about which they exert zero torque. However, the location of the center of pressure moves significantly with a change in angle of attack and is thus impractical for aerodynamic analysis. Instead the aerodynamic center is used, and as a result the incremental lift and drag due to a change in angle of attack acting at this point is sufficient to describe the aerodynamic forces acting on the given body. Theory Within the assumptions embodied in thin airfoil theory, the aerodynamic center is located at the quarter-chord (25% chord position) on a symmetric airfoil, while it is close but not exactly equal to the quarter-chord point on a cambered airfoil. From thin airfoil theory: c_l = c_l0 + 2πα, where c_l is the section lift coefficient and α is the angle of attack in radians, measured relative to the chord line; and c_m,c/4 = c_m0, where c_m,c/4 is the moment coefficient taken at the quarter-chord point and c_m0 is a constant. Differentiating with respect to angle of attack gives dc_m,c/4/dα = 0, so the moment about the quarter-chord point does not vary with angle of attack. For symmetrical airfoils c_m0 = 0, so the aerodynamic center is at 25% of chord. But for cambered airfoils the aerodynamic center can be slightly less than 25% of the chord from the leading edge, which depends on the slope of the moment coefficient, dc_m/dα. These results are calculated using thin airfoil theory, so their use is warranted only when the assumptions of thin airfoil theory are realistic. In precision experimentation with real airfoils and ad
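The definition can be made concrete numerically: transferring the moment from the aerodynamic center to any other reference point x_P (in chord fractions) gives C_m,P = C_m,ac + C_L·(x_P − x_ac), so dC_m,P/dC_L = x_P − x_ac and the AC can be recovered from two measurements. The airfoil values below are made-up illustrative assumptions under thin-airfoil conventions, not data from the article.

```python
import math

x_ac_true = 0.25   # thin airfoil theory: aerodynamic center at quarter chord
c_m_ac = -0.05     # assumed constant moment about the AC (cambered airfoil)

def c_l(alpha_rad: float) -> float:
    """Thin-airfoil lift slope: C_l = 2*pi*alpha (symmetric case assumed)."""
    return 2 * math.pi * alpha_rad

def c_m_about(x_p: float, alpha_rad: float) -> float:
    """Moment coefficient about x_p, via the moment-transfer relation."""
    return c_m_ac + c_l(alpha_rad) * (x_p - x_ac_true)

# Estimate dC_m/dC_L about a reference point from two angles of attack,
# then recover the aerodynamic center as x_ac = x_ref - dC_m/dC_L.
x_ref = 0.0  # leading edge
a1, a2 = math.radians(2), math.radians(6)
slope = (c_m_about(x_ref, a2) - c_m_about(x_ref, a1)) / (c_l(a2) - c_l(a1))
x_ac_est = x_ref - slope
print(round(x_ac_est, 4))  # 0.25
```

This two-point procedure mirrors how dC_m/dC_L is estimated from wind-tunnel polars.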
https://en.wikipedia.org/wiki/Timeline%20of%20plant%20evolution
This article attempts to place key plant innovations in a geological context. It concerns itself only with novel adaptations and events that had a major ecological significance, not those that are of solely anthropological interest. The timeline displays a graphical representation of the adaptations; the text attempts to explain the nature and robustness of the evidence. Plant evolution is an aspect of the study of biological evolution, predominantly involving evolution of plants suited to live on land, greening of various land masses by the filling of their niches with land plants, and diversification of groups of land plants. Earliest plants In the strictest sense, the name plant refers to those land plants that form the clade Embryophyta, comprising the bryophytes and vascular plants. However, the clade Viridiplantae or green plants includes some other groups of photosynthetic eukaryotes, including green algae. It is widely believed that land plants evolved from a group of charophytes, most likely simple single-celled terrestrial algae similar to extant Klebsormidiophyceae. Chloroplasts in plants evolved from an endosymbiotic relationship between a cyanobacterium, a photosynthesising prokaryote and a non-photosynthetic eukaryotic organism, producing a lineage of photosynthesizing eukaryotic organisms in marine and freshwater environments. These earliest photosynthesizing single-celled autotrophs evolved into multicellular organisms such as the Charophyta, a group of freshwater green algae. Fossil evidence of plants begins around 3000 Ma with indirect evidence of oxygen-producing photosynthesis in the geological record, in the form of chemical and isotopic signatures in rocks and fossil evidence of colonies of cyanobacteria, photosynthesizing prokaryotic organisms. Cyanobacteria use water as a reducing agent, producing atmospheric oxygen as a byproduct, and they thereby profoundly changed the early reducing atmosphere of the earth to one in which modern
https://en.wikipedia.org/wiki/ShotCode
ShotCode is a circular barcode created by High Energy Magic of Cambridge University. It uses a dartboard-like circle, with a bullseye in the centre and datacircles surrounding it. The technology reads databits from these datacircles by measuring the angle and distance from the bullseye for each. ShotCodes are designed to be read with a regular camera (including those found on mobile phones and webcams) without the need to purchase other specialised hardware. ShotCodes differ from matrix barcodes in that they do not store regular data - rather, they store a look-up number consisting of 40 bits of data. This links to a server that holds information regarding a mapped URL, which the reading device can connect to in order to download said data. History ShotCode was created in 1999 at the University of Cambridge during research into a low-cost vision-based method to track locations, which produced TRIPCode as a result. TRIPCode was used to track printed paper badges in real time with webcams. Cambridge researchers later put it to another use, reading barcodes with mobile phone cameras, employing TRIPCode in a round barcode that was named SpotCode. High Energy Magic was founded in 2003 to commercialise research from the University of Cambridge Computer Laboratory and Laboratory for Communications Engineering. Bango.net, a mobile company, used SpotCode in its ads in 2004. In 2005 High Energy Magic Ltd. sold the entire SpotCode IPR to OP3, after which the name was changed from SpotCode to ShotCode. Heineken was the first company to officially use the ShotCode technology. ShotCode's software The software used to read a ShotCode captured by a mobile camera is called 'ShotReader'. It is lightweight, at only around 17 kB. It 'reads' the camera's picture of a ShotCode in real time and prompts the browser to navigate to a particular site. The last website update was from 2007, suggesting that updates for phones based on Android and iPhone will not be availa
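The indirection described above (the code stores only a 40-bit look-up number, and a server maps it to a URL) can be sketched as follows; the table, IDs and URLs are hypothetical stand-ins for the resolving server, not real ShotCode data.

```python
MAX_ID = 2**40 - 1  # a ShotCode carries 40 bits of data, not the URL itself

url_table = {  # hypothetical stand-in for the server-side mapping
    0x0000AB12CD: "https://example.com/promo",
    0x0000000001: "https://example.com/landing",
}

def resolve(code_id: int) -> str:
    """Map a decoded 40-bit look-up number to its registered URL."""
    if not 0 <= code_id <= MAX_ID:
        raise ValueError("ShotCode IDs are 40-bit numbers")
    return url_table.get(code_id, "unknown code")

print(resolve(0x0000AB12CD))  # https://example.com/promo
```

The design trade-off is that the printed code stays tiny and camera-friendly, but every scan requires a network round trip to the mapping server.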
https://en.wikipedia.org/wiki/Persistence%20%28discontinuity%29
Persistence determines the possibilities of relative movement along a discontinuity in a soil or rock mass in geotechnical engineering. Discontinuities are usually differentiated into persistent, non-persistent, and abutting discontinuities (figure). Persistent discontinuity A persistent discontinuity is a continuous plane in a soil or rock mass. Shear displacement takes place if the shear stress along the discontinuity plane exceeds the shear strength of the discontinuity plane. Non-persistent discontinuity A non-persistent discontinuity ends in intact soil or rock. Before movement of the material on both sides of a non-persistent discontinuity is possible, the discontinuity has to extend and break through intact material. As intact material virtually always has far higher shear strength than the discontinuity, a non-persistent discontinuity will have larger shear strength than a persistent discontinuity. Abutting discontinuity An abutting discontinuity abuts against another discontinuity. Abutting discontinuities might continue on the other side of the intersecting discontinuity, though with a displacement, giving so-called 'stepped planes'. Shear displacement along the discontinuity can take place if the shear strength along the discontinuity plane is exceeded and the blocks of material against which the discontinuity abuts can move or break. Anisotropic persistence A discontinuity might be persistent in the dip direction but not persistent perpendicular to the dip direction, or vice versa. See also Discontinuity (geotechnical engineering) Rock mechanics Shear strength (discontinuity)
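The slip criterion stated above (shear displacement occurs when shear stress exceeds shear strength) can be sketched numerically. The Mohr-Coulomb strength model and all parameter values below are illustrative assumptions of this sketch, not content from the text.

```python
import math

def shear_strength(cohesion_kpa: float, sigma_n_kpa: float, phi_deg: float) -> float:
    """Mohr-Coulomb model (an assumed model): tau_f = c + sigma_n * tan(phi)."""
    return cohesion_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

def slips(tau_kpa: float, cohesion_kpa: float, sigma_n_kpa: float, phi_deg: float) -> bool:
    """Shear displacement takes place when shear stress exceeds shear strength."""
    return tau_kpa > shear_strength(cohesion_kpa, sigma_n_kpa, phi_deg)

# A persistent discontinuity (low cohesion) slips under a load that an
# intact-rock bridge (much higher cohesion) easily resists:
print(slips(150.0, cohesion_kpa=20.0, sigma_n_kpa=200.0, phi_deg=30.0))   # True
print(slips(150.0, cohesion_kpa=500.0, sigma_n_kpa=200.0, phi_deg=30.0))  # False
```

This is why a non-persistent discontinuity, which must break through intact material before slipping, has the larger effective shear strength.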
https://en.wikipedia.org/wiki/Synantherology
Synantherology is a branch of botany that deals with the study of the plant family Asteraceae (also called Compositae). The name of the field refers to the fused anthers possessed by members of the family, and recalls an old French name, synantherées, for the family. Although many of the plants of the Asteraceae were described for the European community at least as long ago as Theophrastus, an organization of the family into tribes, which remained largely stable throughout the 20th century, was published in 1873 by George Bentham. In a 1970 article titled "The New Synantherology", Harold E. Robinson advocated greater attention to microstructures (studied with the compound light microscope). He was not the first, as Alexandre de Cassini and others of the 19th century split species based on fine distinctions of microstructure, a tendency which Bentham found excessive. Noted United States synantherologists include: T. M. Barkley V. A. Funk D. J. Keil R. M. King Harold E. Robinson J. A. Soule T. F. Stuessy Billie Lee Turner Sr.
https://en.wikipedia.org/wiki/Jessie%20Stevenson%20Kovalenko%20Medal
The Jessie Stevenson Kovalenko Medal is awarded every two years by the US National Academy of Sciences "for important contributions to the medical sciences." It was first awarded in 1952 and involves a prize of $25,000 plus $50,000 for research. The Kovalenko Fund was donated by Michael S. Kovalenko in 1949 to the National Academy of Sciences in memory of his wife, Jessie Stevenson Kovalenko. Recipients See also List of medicine awards Prizes named after people
https://en.wikipedia.org/wiki/S%C3%ADragon
Síragon, C.A. is a Venezuelan manufacturer and assembler of computer hardware and other electronic products such as digital cameras, tablet computers and LCD televisions. Síragon also designs and manufactures its own RAM, flash memory and printed circuit boards. The company was created in an alliance between Venezuelan and Japanese investors. Its plant is located in the North Industrial Zone of Valencia, Carabobo, in Venezuela. In November 2009, Síragon started to distribute its product line in Argentina, allied with the Argentinian computer wholesale vendor Greentech. Síragon manufactures its own designs and also builds, under license, all-in-one computers from the Brazilian company Itautec. Síragon products are all manufactured in Venezuela. At the 2012 International Consumer Electronics Show, Síragon formally announced its intent to enter the US market by the end of 2012. Síragon is engaged in a design partnership with BMW, for which it manufactures electronics and collaborates on electronic designs. Síragon currently holds the third-largest share of the electronics market in Venezuela. Products Digital cameras Video cameras Desktop computers Laptop computers Netbooks Computer servers LCD televisions and monitors LED televisions Plasma TV screens Sound systems Peripherals Tablet computers
https://en.wikipedia.org/wiki/Unified%20Code%20for%20Units%20of%20Measure
The Unified Code for Units of Measure (UCUM) is a system of codes for unambiguously representing measurement units. Its primary purpose is machine-to-machine communication rather than communication between humans. The code set includes all units defined in ISO 1000, ISO 2955-1983, ANSI X3.50-1986, HL7 and ENV 12435, and explicitly and verifiably addresses the naming conflicts and ambiguities in those standards in order to resolve them. It provides for representations of units in 7-bit ASCII for machine-to-machine communication, with unambiguous mapping between case-sensitive and case-insensitive representations. A reference open-source implementation is available as a Java applet; an OSGi-based implementation is also available at the Eclipse Foundation. Base units Units are represented in UCUM with reference to a set of seven base units. The UCUM base units are the metre for measurement of length, the second for time, the gram for mass, the coulomb for charge, the kelvin for temperature, the candela for luminous intensity, and the radian for plane angle. The UCUM base units form a set of mutually independent dimensions as required by dimensional analysis. Some of the UCUM base units are different from the SI base units. UCUM is compatible with, but not isomorphic to, SI. There are four differences between the two sets of base units: The gram is the base unit of mass instead of the kilogram, since in UCUM base units do not have prefixes. Electric charge is the base quantity for electromagnetic phenomena instead of electric current, since the elementary charge of electrons is more fundamental physically. The mole is dimensionless in UCUM, since it can be defined in terms of the Avogadro number. The radian is a distinct base unit for plane angle, to distinguish angular velocity from rotational frequency and to distinguish the radian from the steradian for solid angles. Metric and non-metric units Each unit represented in UCUM is identified as either "metric" or "non-metric". Metric un
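The requirement that the seven base units form mutually independent dimensions can be illustrated with simple dimensional bookkeeping: represent each unit as a vector of exponents over the base units and multiply by adding exponents. The exponent-dictionary representation and helper names here are illustrative choices, not part of the UCUM specification.

```python
# The seven UCUM base units: metre, second, gram, coulomb, kelvin,
# candela, radian.  (Note charge, not current; gram, not kilogram.)
BASE = ("m", "s", "g", "C", "K", "cd", "rad")

def dim(**exps: int) -> dict:
    """A dimension as exponents over the UCUM base units."""
    assert all(u in BASE for u in exps)
    return {u: exps.get(u, 0) for u in BASE}

def mul(a: dict, b: dict) -> dict:
    """Multiplying quantities adds their dimension exponents."""
    return {u: a[u] + b[u] for u in BASE}

velocity = dim(m=1, s=-1)
time = dim(s=1)
# velocity * time has the dimension of length:
assert mul(velocity, time) == dim(m=1)

# Unlike SI, charge (C) rather than current is the electromagnetic base,
# so current is the derived dimension C/s:
current = dim(C=1, s=-1)
print(mul(current, time) == dim(C=1))  # True
```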
https://en.wikipedia.org/wiki/List%20of%20religious%20flags%20of%20Vietnam
The following is a list of flags used by the various religious communities that inhabit the country of Vietnam. Five-colour and festival flags This section lists Vietnamese five-colour, or festival, flags; those that incorporate imagery of specific religions are listed in their respective sections. Family name flags The family flag (Cờ họ tộc) is considered one of the most sacred symbols of a family, symbolising the spirit, will, affection and strength of the family's unity. Family flags are typically hung in front of or inside spaces near roads, at temples and family mausoleums, and on the occasions of deaths, anniversaries, and the Tết Nguyên Đán holiday. Most family flags are designed based on the structural principles of the traditional five-colour flag, with the square in the same red colour, and in its centre the family name (surname) is typically written in Chinese script in the colour yellow. The most common style of writing the family name is Khải thư, but in cases when the character is featured on both sides of the flag, the obverse side typically features Khải thư while the reverse side typically features Chữ Triện. Not all family flags maintain the five-colour scheme of traditional flags, as some feature only 4 colours. Taoist flags Five elements flags Taoist temple flags Buddhist flags Đạo Hòa Hảo Followers of the Hòa Hảo denomination of Buddhism use a plain brown (maroon) flag. The colour is of particular importance to the community because altars are made by placing a similar brown cloth on the wall to mark the point faced during prayers, and the habits of the Hòa Hảo clergy are also brown in colour. In Vietnam, the Hòa Hảo religious flag is usually accompanied by the national Vietnamese flag. Among the Vietnamese diaspora, the Hòa Hảo religious flag is typically used together with the pre-1975 flag of South Vietnam and the flag of the United States. Christian flags Catholic flags Vietnamese Catholics have adopted localised
https://en.wikipedia.org/wiki/Trends%20in%20International%20Mathematics%20and%20Science%20Study
The IEA's Trends in International Mathematics and Science Study (TIMSS) is a series of international assessments of the mathematics and science knowledge of students around the world. The participating students come from a diverse set of educational systems (countries or regional jurisdictions of countries) in terms of economic development, geographical location, and population size. In each of the participating educational systems, a minimum of 4,000 to 5,000 students is evaluated. Contextual data about the conditions in which participating students learn mathematics and science are collected from the students and their teachers, their principals, and their parents via questionnaires. TIMSS is one of the studies established by IEA aimed at allowing educational systems worldwide to compare students' educational achievement and learn from the experiences of others in designing effective education policy. This assessment was first conducted in 1995, and has been administered every four years thereafter. Therefore, some of the participating educational systems have trend data across assessments from 1995 to 2019. TIMSS assesses 4th and 8th grade students, while TIMSS Advanced assesses students in the final year of secondary school in advanced mathematics and physics. Definition of Terms "Eighth grade" in the United States is approximately 13–14 years of age and equivalent to: Year 9 (Y9) in England and Wales 2nd Year (S2) in Scotland 2nd Year in the Republic of Ireland 1st Year in South Africa Form 2 in Hong Kong 4ème in France Year 9 in New Zealand Form 2 in Malaysia "Fourth grade" in the United States is approximately equivalent to 9–10 years of age and equivalent to: Year 5 (Y5) in England and Wales Primary 6 (P6) in Scotland Group 6 in the Netherlands CM1 in France Fourth Class in the Republic of Ireland Standard 3 or Year 5 in New Zealand History A precursor to TIMSS was the First International Mathematics Study (FIMS) performed in 1964 in 11
https://en.wikipedia.org/wiki/History%20of%20electronic%20engineering
This article details the history of electronics engineering. Chambers Twentieth Century Dictionary (1972) defines electronics as "The science and technology of the conduction of electricity in a vacuum, a gas, or a semiconductor, and devices based thereon".

Electronics engineering as a profession sprang from technological improvements in the telegraph industry during the late 19th century and in the radio and telephone industries during the early 20th century. People gravitated to radio, attracted by the technical fascination it inspired, first in receiving and then in transmitting. Many who went into broadcasting in the 1920s had become "amateurs" in the period before World War I.

The modern discipline of electronics engineering was to a large extent born out of telephone-, radio-, and television-equipment development and the large amount of electronic-systems development during World War II of radar, sonar, communication systems, and advanced munitions and weapon systems. In the interwar years, the subject was known as radio engineering. The word electronics began to be used in the 1940s, and in the late 1950s the term electronics engineering started to emerge.

Electronic laboratories (Bell Labs, for instance), created and subsidized by large corporations in the radio, television, and telephone-equipment industries, began churning out a series of electronic advances. The electronics industry was revolutionized by the inventions of the first transistor in 1948, the integrated circuit chip in 1959, and the silicon MOSFET (metal–oxide–semiconductor field-effect transistor) in 1959.

In the UK, the subject of electronics engineering became distinct from electrical engineering as a university-degree subject around 1960. (Before this time, students of electronics and related subjects like radio and telecommunications had to enroll in the electrical engineering department of the university, as no university had departments of electronics. Electrical engineering was the nea
https://en.wikipedia.org/wiki/List%20of%20participants%20in%20the%20Evolving%20Genes%20and%20Proteins%20symposium
This is a list of scientists who participated in the 1964 Evolving Genes and Proteins symposium, a landmark event in the history of molecular evolution. The symposium, supported by the National Science Foundation, took place on September 17 and September 18, 1964 at the Institute of Microbiology of Rutgers University. A summary of the proceedings was published in Science, and the full proceedings were edited by Vernon Bryson and Henry J. Vogel and published in 1965.
https://en.wikipedia.org/wiki/Star%20Trek%3A%20Bridge%20Crew
Star Trek: Bridge Crew is a virtual-reality action-adventure video game developed by Red Storm Entertainment and published by Ubisoft for Microsoft Windows, PlayStation 4, and Oculus Quest.

Plot
Star Trek: Bridge Crew takes place in the timeline established in the 2009 Star Trek film and sees the Starfleet ship USS Aegis searching for a new homeworld for the Vulcans after the destruction of their planet. The ship heads for a region of space called 'The Trench', which is occupied by Klingons.

Gameplay
The game is played through four roles: captain, tactical officer, engineer, and helm officer. The captain is the only role to which mission objectives are directly displayed; they are responsible for communicating these to the crew and issuing orders to accomplish them. The helm officer controls the ship's course and travel between regions through impulse or warp drive. The tactical officer is in charge of sensors and weapons. The engineer manages the ship's power distribution and supervises repairs. Each role except the captain may be occupied by a human player or by an NPC indirectly controlled by the captain. Both story and randomly generated missions exist. In December 2017, the developers modified the game so that it could be played without a virtual-reality headset; before then, a headset was required.

Development
The game was developed by Red Storm Entertainment and published by Ubisoft. Series actors Karl Urban, LeVar Burton, and Jeri Ryan appeared at E3 2016 to promote the game during Ubisoft's press conference. A new trailer was showcased at CES 2017. The game was released on May 30, 2017.

Reception
Star Trek: Bridge Crew received "generally positive" reviews, according to review aggregator Metacritic. Eurogamer ranked it 42nd on their list of the "Top 50 Games of 2017", while GamesRadar+ ranked it 25th on their list of the 25 Best Games of 2017. Many reviews compared it to Artemis: Spaceship Bridge Simulator, an indie game ins
https://en.wikipedia.org/wiki/Oracle%20RAC
In database computing, Oracle Real Application Clusters (RAC) — an option for the Oracle Database software produced by Oracle Corporation and introduced in 2001 with Oracle9i — provides software for clustering and high availability in Oracle database environments. Oracle Corporation includes RAC with the Enterprise Edition, provided the nodes are clustered using Oracle Clusterware.

Functionality
Oracle RAC allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, thus providing clustering. In a non-RAC Oracle database, a single instance accesses a single database. The database consists of a collection of data files, control files, and redo logs located on disk. The instance comprises the collection of Oracle-related memory and background processes that run on a computer system. In an Oracle RAC environment, two or more instances concurrently access a single database. This allows an application or user to connect to either computer and have access to a single coordinated set of data. The instances are connected to each other through an "interconnect", which enables all the instances to stay in sync when accessing the data.

Aims
The main aim of Oracle RAC is to implement a clustered database that provides performance, scalability, resilience, and high availability of data at the instance level.

Implementation
Oracle RAC depends on the infrastructure component Oracle Clusterware to coordinate multiple servers and their sharing of data storage. The FAN (Fast Application Notification) technology detects down-states. RAC administrators can use the srvctl tool to manage RAC configurations.

Cache Fusion
Prior to Oracle 9, network-clustered Oracle databases used a storage device as the data-transfer medium (meaning that one node would write a data block to disk and another node would read that data from the same disk), which had the inherent disadvantage of lackluster performance. Oracle 9i addressed this issue: RAC uses a dedicated
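To illustrate the srvctl administration tool mentioned above, the following is a minimal sketch of common commands for managing a RAC configuration. It assumes a host with Oracle Clusterware installed; the database name ORCL and instance name ORCL2 are hypothetical placeholders, not names from the article.

```shell
# Hedged sketch: srvctl ships with Oracle Clusterware and only runs on a
# configured cluster node. "ORCL" and "ORCL2" are hypothetical names.

srvctl status database -d ORCL           # report which instances are running
srvctl start instance -d ORCL -i ORCL2   # start one instance on its node
srvctl stop database -d ORCL             # shut down all instances cleanly
```

Because srvctl acts cluster-wide through Clusterware, these commands affect instances on any node, not just the one where they are issued.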
https://en.wikipedia.org/wiki/Neocatastrophism
Neocatastrophism is the hypothesis that life-exterminating events such as gamma-ray bursts have acted as a galactic regulation mechanism in the Milky Way upon the emergence of complex life in its habitable zone. It is one of several proposed solutions to the Fermi paradox, since it provides a mechanism which would have delayed the advent of intelligent beings in local galaxies near Earth.

The problem
It is estimated that Earth-like planets in the Milky Way started forming 9 billion years ago, and that their median age is 6.4 ± 0.7 Ga. Moreover, 75% of stars in the galactic habitable zone are older than the Sun. This makes potential planets with evolved intelligent life more likely than not to be older than the Earth (4.54 Ga). This creates an observational dilemma, since interstellar travel (even of the "slow" kind that is nearly within the reach of present Earth technology) could in theory, if it had arisen elsewhere, take only 5 to 50 million years to colonize the galaxy. This leads to a conundrum first posed in 1950 by the physicist Enrico Fermi in his namesake paradox: "Why are no aliens or their artifacts physically here?"

The neocatastrophism resolution
The hypothesis posits that astrobiological evolution is subject to regulation mechanisms that arrest or postpone the advent of complex creatures capable of interstellar communication and travel technology. These regulation mechanisms act to temporarily sterilize planets of biology in the galactic habitable zone. The main proposed regulation mechanism is gamma-ray bursts. Part of the neocatastrophism hypothesis is that stellar evolution produces a decreasing frequency of such catastrophic events, increasing the length of the "window" in which intelligent life might arise as galaxies age. According to modeling, this creates the possibility of a phase transition at which point a galaxy turns from a place that is essentially dead (with a few pockets of simple life) to one that is crowd
https://en.wikipedia.org/wiki/Europa%20%28consort%20of%20Zeus%29
In Greek mythology, Europa (Greek: Eurṓpē) was a Phoenician princess from Tyre, Lebanon, and the mother of King Minos of Crete. The continent of Europe may be named after her. The story of her abduction by Zeus in the form of a bull was a Cretan story; as classicist Károly Kerényi points out, "most of the love-stories concerning Zeus originated from more ancient tales describing his marriages with goddesses. This can especially be said of the story of Europa."

Europa's earliest literary reference is in the Iliad, which is commonly dated to the 8th century BC. Another early reference to her is in a fragment of the Hesiodic Catalogue of Women, discovered at Oxyrhynchus. The earliest vase-painting securely identifiable as Europa dates from the mid-7th century BC.

Etymology
Greek Εὐρώπη (Eurṓpē) contains the elements εὐρύς (eurus), "wide, broad", and ὤψ/ὠπ-/ὀπτ- (ōps/ōp-/opt-), "eye, face, countenance". "Broad" has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion. It is common in ancient Greek mythology and geography to identify lands or rivers with female figures. Thus, Europa is first used in a geographic context in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BC by Anaximander and Hecataeus.

The weakness of an etymology from εὐρύς (eurus) is, first, that the -u stem of εὐρύς disappears in Εὐρώπη (Europa) and, second, that the expected form εὐρυώπη (euryopa), which retains the -u stem, in fact exists. An alternative suggestion, due to Ernest Klein and Giovanni Semerano (1966), attempts to connect the name with a Semitic term for "west": Akkadian erebu, "to go down, set" (in reference to the sun), and Phoenician 'ereb, "evening; west", which would parallel occident (the resemblance to Erebus, from PIE *h1regʷos, "darkness", is, however, accidental). Barry (1999) adduces the word Ereb on an Assyrian stele with the meaning of "night", "[the country of] suns