source | text |
|---|---|
https://en.wikipedia.org/wiki/International%20Union%20of%20Nutritional%20Sciences | The International Union of Nutritional Sciences (IUNS) is an international non-governmental organization established in 1946 devoted to the advancement of nutrition.
Its mission and objectives are:
To promote the advancement of nutrition science, research, and development through international cooperation.
To encourage communication and collaboration among nutrition scientists as well as to disseminate information in nutritional science through modern communication technology.
Since its 1946 foundation, the membership has grown to include 83 national adhering bodies and 17 affiliations.
IUNS International Congresses
Governing Council
The Council consists of five Officers (the President, President-Elect, Vice-President, Secretary-General, and Treasurer), the Immediate Past-President, and six Council members.
IUNS's current council consists of the following:
President: Alfredo Martinez Hernandez (Spain);
Vice President: V. Prakash (India);
President Elect: Lynette M. Neufeld (Canada);
Secretary General: Catherine Geissler (UK);
Treasurer: Helmut Heseker (Germany);
Member: Hyun-Sook Kim (Korea);
Member: Ali Dhansay (South Africa);
Member: Benjamin Caballero (United States);
Member: Francis Zotor (Ghana);
Member: Andrew Prentice (UK);
Member: Teruo Miyazawa (Japan);
Immediate Past-President: Anna Lartey (Ghana).
Headquarters
IUNS is registered in Vienna, Austria.
Secretariat
IUNS;
The Nutrition Society;
Boyd Orr House, 10 Cambridge Court;
210 Shepherds Bush Road;
London;
UK;
W6 7NJ
See also |
https://en.wikipedia.org/wiki/American%20Society%20of%20Animal%20Science | The American Society of Animal Science (ASAS) is a non-profit professional organization for the advancement of livestock, companion animals, exotic animals and meat science. Founded in 1908, ASAS is headquartered in Champaign, Illinois.
ASAS members are involved in university research, education, and extension as well as in the feed, pharmaceutical, and other animal-related industries. Disciplines include nutrition, reproductive physiology, genetics, and behavior of food-producing animals and processing of meat-based products, including beef, pork, and veal.
Official ASAS Mission: "The American Society of Animal Science is a membership society that supports the careers of scientists and animal producers in the United States and internationally. The American Society of Animal Science fosters the discovery, sharing and application of scientific knowledge concerning the responsible use of animals to enhance human life and well-being."
History
Organizing ASAS (originally called the American Society of Animal Nutrition) began on July 28, 1908, at Cornell University in Ithaca, New York. A committee of animal nutritionists decided to present a plan for the new society during the International Livestock Exposition in Chicago that fall. When the society first officially gathered on November 26, 1908, 33 charter members represented 17 state experiment stations, the U.S. Department of Agriculture and Canada. The goals of the new society were: "(1) to improve the quality of investigation in animal nutrition, (2) to promote more systematic and better correlated study of feeding problems, and (3) to facilitate personal interaction between investigators in this field." During the first year, the society had 100 members join.
At the society business meeting in 1912, the members made plans to broaden the membership base. On November 30, 1915, members changed the society name from the American Society of Animal Nutrition to the American Society of Animal Production. Members pa |
https://en.wikipedia.org/wiki/Ombrotrophic | Ombrotrophic ("cloud-fed", from Ancient Greek ὄμβρος (ómbros) meaning "rain" and τροφή (trophḗ) meaning "food") refers to soils or vegetation which receive all of their water and nutrients from precipitation, rather than from streams or springs. Such environments are hydrologically isolated from the surrounding landscape, and since rain is acidic and very low in nutrients, they are home to organisms tolerant of acidic, low-nutrient environments. The vegetation of ombrotrophic peatlands is often bog, dominated by Sphagnum mosses. The hydrology of these environments is directly related to their climate, as precipitation is the water and nutrient source, and temperatures dictate how quickly water evaporates from these systems.
Ombrotrophic circumstances may occur even in landscapes composed of limestone or other nutrient-rich substrates – for example, in high-rainfall areas, limestone boulders may be capped by acidic ombrotrophic bog vegetation. Epiphytic vegetation (plants growing on other plants) is ombrotrophic.
In contrast to ombrotrophic environments, minerotrophic environments are those where the water supply comes mainly from streams or springs. This water has flowed over or through rocks, often acquiring dissolved chemicals which raise the nutrient levels and reduce the acidity, leading to different vegetation such as fen or poor fen.
See also
Chalk heath
Mire
Notes |
https://en.wikipedia.org/wiki/Ed%20Pegg%20Jr. | Edward Taylor Pegg Jr. (born December 7, 1963) is an expert on mathematical puzzles and is a self-described recreational mathematician. He wrote an online puzzle column called Ed Pegg Jr.'s Math Games for the Mathematical Association of America during the years 2003–2007. His puzzles have also been used by Will Shortz on the puzzle segment of NPR's Weekend Edition Sunday. He was a fan of Martin Gardner and regularly participated in Gathering 4 Gardner conferences. In 2009 he teamed up with Tom M. Rodgers and Alan Schoen to edit two Gardner tribute books.
Pegg received a master's degree in mathematics from the University of Colorado at Colorado Springs, writing his thesis on the subject of fair dice. In 2000, he left NORAD to join Wolfram Research, where he collaborated on A New Kind of Science (NKS). In 2004 he started assisting Eric W. Weisstein at Wolfram MathWorld. He has made contributions to several hundred MathWorld articles. He was one of the chief consultants for Numb3rs. |
https://en.wikipedia.org/wiki/Self-protein | Self-protein refers to all proteins endogenously produced by DNA-level transcription and translation within an organism of interest. This does not include proteins synthesized due to viral infection, but may include those synthesized by commensal bacteria within the intestines. Proteins that are not created within the body of the organism of interest, but nevertheless enter through the bloodstream, a breach in the skin, or a mucous membrane, may be designated as “non-self” and subsequently targeted and attacked by the immune system. Tolerance to self-protein is crucial for overall wellbeing; when the body erroneously identifies self-proteins as “non-self”, the subsequent immune response against endogenous proteins may lead to the development of an autoimmune disease.
Examples
Of note, the list provided above is not exhaustive; the list does not mention all possible proteins targeted by the provided autoimmune diseases.
Identification by the immune system
Autoimmune responses and diseases are primarily instigated by T lymphocytes that are incorrectly screened for reactivity to self-protein during cell development.
During T-cell development, early T-cell progenitors first move via chemokine gradients from the bone marrow into the thymus, where T-cell receptors are randomly rearranged at the gene level to allow for T-cell receptor generation. These T-cells have the potential to bind to anything, including self-proteins.
The immune system must differentiate the T-cells that have receptors capable of binding to self versus non-self proteins; T-cells that can bind to self-proteins must be destroyed to prevent development of an autoimmune disorder. In a process known as “Central Tolerance”, T-cells are exposed to cortical epithelial cells that express a variety of different major histocompatibility complexes (MHC) of both class 1 and class 2, which have the ability to bind to T-cell receptors of CD8+ cytotoxic T-cells, and CD4+ helper T-cells, respectively. The T-ce |
https://en.wikipedia.org/wiki/Quadratic%20growth | In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position. "Quadratic growth" often means more generally "quadratic growth in the limit", as the argument or sequence position goes to infinity – in big Theta notation, $f(x) = \Theta(x^2)$. This can be defined either continuously (for a real-valued function of a real variable) or discretely (for a sequence of real numbers, i.e., a real-valued function of an integer or natural number variable).
Examples
Examples of quadratic growth include:
Any quadratic polynomial.
Certain integer sequences such as the triangular numbers. The $n$th triangular number has value $n(n+1)/2$, approximately $n^2/2$.
For a real function of a real variable, quadratic growth is equivalent to the second derivative being constant (i.e., the third derivative being zero), and thus functions with quadratic growth are exactly the quadratic polynomials, as these are the kernel of the third derivative operator $d^3/dx^3$. Similarly, for a sequence (a real function of an integer or natural number variable), quadratic growth is equivalent to the second finite difference being constant (the third finite difference being zero), and thus a sequence with quadratic growth is also a quadratic polynomial. Indeed, an integer-valued sequence with quadratic growth is a polynomial in the zeroth, first, and second binomial coefficient with integer values. The coefficients can be determined by taking the Taylor polynomial (if continuous) or Newton polynomial (if discrete).
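A quick check of the finite-difference criterion in Python, using the triangular numbers from above (the code is illustrative, not from the article):

```python
# Triangular numbers T(n) = n*(n+1)/2 grow quadratically.
T = [n * (n + 1) // 2 for n in range(8)]

def diff(seq):
    """First finite difference of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

print(T)              # [0, 1, 3, 6, 10, 15, 21, 28]
print(diff(T))        # [1, 2, 3, 4, 5, 6, 7]   -- grows linearly
print(diff(diff(T)))  # [1, 1, 1, 1, 1, 1]      -- constant second difference
```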
Algorithmic examples include:
The amount of time taken in the worst case by certain algorithms, such as insertion sort, as a function of the input length.
The numbers of live cells in space-filling cellular automaton patterns such as the breeder, as a function of the number of time steps for which the pattern is simulated.
Metcalfe's law stating that the value of a communications network grows quadratically as a function of its number of |
https://en.wikipedia.org/wiki/Wi-Fi%20Protected%20Setup | Wi-Fi Protected Setup (WPS), originally Wi-Fi Simple Config, is a network security standard used to create a secure wireless home network.
Created by Cisco and introduced in 2006, the point of the protocol is to allow home users who know little of wireless security and may be intimidated by the available security options to set up Wi-Fi Protected Access, as well as making it easy to add new devices to an existing network without entering long passphrases. It is used by devices made by HP, Brother and Canon for their printers. WPS is a wireless method used to connect certain Wi-Fi devices, such as printers and security cameras, to the Wi-Fi network without using any password; some devices instead connect by entering a WPS PIN. Wi-Fi Protected Setup also allows the owner of Wi-Fi privileges to block other users from using the household Wi-Fi, or to allow them access, by pressing the WPS button on the home router.
A major security flaw was revealed in December 2011 that affects wireless routers with the WPS PIN feature, which most recent models have enabled by default. The flaw allows a remote attacker to recover the WPS PIN in a few hours with a brute-force attack and, with the WPS PIN, the network's WPA/WPA2 pre-shared key (PSK). Users have been urged to turn off the WPS PIN feature, although this may not be possible on some router models.
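The attack is feasible because the protocol confirms each half of the eight-digit PIN separately and the final digit is a checksum, which collapses the search space. A back-of-the-envelope Python sketch (the split sizes reflect the published 2011 analysis):

```python
# Back-of-the-envelope for the 2011 WPS PIN attack:
# the protocol confirms the first and second halves of the PIN separately,
# and the 8th digit is a checksum of the first seven.
naive_space = 10 ** 8          # guessing all 8 digits at once
first_half = 10 ** 4           # digits 1-4, confirmed independently
second_half = 10 ** 3          # digits 5-7 (digit 8 is a checksum)

print(naive_space)                   # 100,000,000 combinations
print(first_half + second_half)      # only 11,000 attempts worst case
```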
Modes
The standard emphasizes usability and security, and allows four modes in a home network for adding a new device to the network:
PIN method: a PIN has to be read from either a sticker or a display on the new wireless device. This PIN must then be entered at the "representant" of the network, usually the network's access point. Alternately, a PIN provided by the access point may be entered into the new device. This method is the mandatory baseline mode; every device must support it. The Wi-Fi Direct specific |
https://en.wikipedia.org/wiki/DEPTHX | The Deep Phreatic Thermal Explorer (DEPTHX) is an autonomous underwater vehicle designed and built by Stone Aerospace, an aerospace engineering firm based in Austin, Texas. It was designed to autonomously explore and map underwater sinkholes in northern Mexico, as well as collect water and wall core samples. This could be achieved via an autonomous form of navigation known as A-Navigation. The DEPTHX vehicle was the first of three vehicles to be built by Stone Aerospace which were funded by NASA with the goal of developing technology that can explore the oceans of Jupiter's moon Europa to look for extraterrestrial life.
DEPTHX was a collaborative project for which Stone Aerospace was the principal investigator. Co-investigators included Carnegie Mellon University, which was responsible for the navigation and guidance software, the Southwest Research Institute, which built the vehicle's science payload, and research scientists from the University of Texas at Austin, the Colorado School of Mines, and NASA Ames Research Center.
History
In 1999, Bill Stone had been involved in an underwater surveying project in Wakulla Springs, Florida. For that project, Stone had devised a digital wall mapper that was propelled by a diver propulsion vehicle and steered by divers which was designed to create a 3-D map of Wakulla Springs using an array of sonars, as well as a suite of other sophisticated sensors. The success of this project, the Wakulla Springs 2 Project, attracted the interest of planetary scientist Dan Durda from the Southwest Research Institute, who wished to create a similar piece of technology to explore the oceans of Europa, but one that could drive itself autonomously. Stone accepted the challenge, and several collaborative proposals were submitted to NASA. It wasn't until 2003 that NASA would finally fund DEPTHX as a three-year, $5 million project.
The vehicle underwent several different design concepts over the next couple of years as engineers at St |
https://en.wikipedia.org/wiki/Educational%20Broadband%20Service | The Educational Broadband Service (EBS) was formerly known as the Instructional Television Fixed Service (ITFS). ITFS was a band of twenty microwave TV channels available to be licensed by the U.S. Federal Communications Commission (FCC) to local credit-granting educational institutions. It was designed to serve as a means for educational institutions to deliver live or pre-recorded instructional television to multiple sites within school districts and to higher education branch campuses. In recognition of the variety and quantity of video materials required to support instruction at numerous grade levels and in a range of subjects, licensees were typically granted a group of four channels. Its low capital and operating costs as compared to broadcast television, technical quality that compared favorably with broadcast television, and its multi-channel-per-licensee feature made ITFS an extremely cost-effective vehicle for the delivery of educational television materials.
The FCC changed the name of this service to the Educational Broadband Service (EBS) and changed the allocation so each licensee would not have four 6 MHz wide channels but instead would have one 6 MHz channel and one 15 MHz wide "channel" (three contiguous 5 MHz channels). There are currently several hundred EBS systems in operation delivering schedules of live and pre-recorded instruction.
History
Initial FCC authorization
The FCC initially authorized ITFS, in 1963, to operate using a one-way, analog, line-of-sight technology. Typical installations included up to four transmitters multiplexed through a single broadcast antenna with directional receive antennas at each receive site. Receive site installations included equipment to down convert the microwave channels for viewing on standard television receivers. In typical installations, the down converted ITFS signals were distributed to classrooms over multi-channel closed-circuit television systems.
FCC allows leasing
In the late 1970s |
https://en.wikipedia.org/wiki/Geometrical%20properties%20of%20polynomial%20roots | In mathematics, a univariate polynomial of degree $n$ with real or complex coefficients has $n$ complex roots, if counted with their multiplicities. They form a multiset of $n$ points in the complex plane. This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.
Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for computing their computational complexity.
Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree $n$ with real coefficients, which is less than $\frac{2}{\pi}\ln n + 1$ for $n$ sufficiently large.
In this article, a polynomial that is considered is always denoted

$$p(x) = a_0 + a_1 x + \cdots + a_n x^n,$$

where $a_0, \ldots, a_n$ are real or complex numbers and $a_n \neq 0$; thus $n$ is the degree of the polynomial.
Continuous dependence on coefficients
The roots of a polynomial of degree $n$ depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem. This is true also for multiple roots, but some care is needed for the proof.
A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial). A consequence is that, for classical numeric root-finding algorithms, the problem of approximating the roots given the coefficients is ill-conditioned for many inputs.
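A small NumPy sketch of this ill-conditioning, using Wilkinson's polynomial (NumPy is assumed, and the perturbation size is chosen close to Wilkinson's original $2^{-23}$ on the $x^{19}$ coefficient):

```python
import numpy as np

# Wilkinson's polynomial: p(x) = (x - 1)(x - 2) ... (x - 20),
# expanded from its roots into coefficient form.
coeffs = np.poly(np.arange(1, 21))

# Tiny perturbation of the x^19 coefficient (Wilkinson used 2**-23).
perturbed = coeffs.copy()
perturbed[1] += 1e-7

print(sorted(np.roots(coeffs).real)[-3:])   # ~[18, 19, 20], as expected
print(abs(np.roots(perturbed).imag).max())  # large: some roots became complex
```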
Conjugation
The complex conjugate root theorem states that if the coefficients $a_0, \ldots, a_n$ of a polynomial are real, then the non-real roots appear in pairs of the form $a \pm ib$.
It follows that the roots of a polynomial with real coefficients are mirror-symmetric with resp |
https://en.wikipedia.org/wiki/Risk-based%20testing | Risk-based testing (RBT) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software, based on the risk of failure, taking into account the importance of each function and the likelihood or impact of its failure. In theory, there are an infinite number of possible tests. Risk-based testing uses risk (re-)assessments to steer all phases of the test process, i.e., test planning, test design, test implementation, test execution and test evaluation. This includes, for instance, the ranking of tests and subtests by functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective.
Assessing risks
Comparing the changes between two releases or versions is key in order to assess risk.
Evaluating critical business modules is a first step in prioritizing tests, but it does not include the notion of evolutionary risk. This is then expanded using two methods: change-based testing and regression testing.
Change-based testing allows test teams to assess changes made in a release and then prioritize tests towards modified modules.
Regression testing ensures that a change, such as a bug fix, did not introduce new faults into the software under test. One of the main reasons for regression testing is to determine whether a change in one part of the software has any effect on other parts of the software.
These two methods permit test teams to prioritize tests based on risk, change, and criticality of business modules. Certain technologies can make this kind of test strategy very easy to set up and to maintain with software changes.
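A toy sketch of such a prioritization in Python; the field names, 1-5 scales, and the boost factor for changed modules are invented for illustration, not taken from any RBT standard:

```python
# Rank tests by a likelihood * impact risk score (both on 1-5 scales),
# boosting modules that changed in the current release.
tests = [
    {"name": "login",    "likelihood": 4, "impact": 5, "changed": True},
    {"name": "reports",  "likelihood": 2, "impact": 3, "changed": False},
    {"name": "checkout", "likelihood": 3, "impact": 5, "changed": True},
]

def risk_score(t):
    score = t["likelihood"] * t["impact"]
    if t["changed"]:          # change-based testing: prioritize modified modules
        score *= 1.5
    return score

for t in sorted(tests, key=risk_score, reverse=True):
    print(f'{t["name"]}: {risk_score(t)}')
```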
Types of risk
Risk can be identified as the probability that an undetected software bug may have a negative impact on the user of a system.
The methods assess risks along a variety of dimensions:
Business or operational
High use of a subsystem, function or feature
Criticality of a subsystem, |
https://en.wikipedia.org/wiki/Reef%20Ball%20Foundation | Reef Ball Foundation, Inc. is a 501(c)(3) non-profit organization that functions as an international environmental non-governmental organization. The foundation uses reef ball artificial reef technology, combined with coral propagation, transplant technology, public education, and community training to build, restore and protect coral reefs. The foundation has established "reef ball reefs" in 59 countries. Over 550,000 reef balls have been deployed in more than 4,000 projects.
History
Reef Ball Development Group was founded in 1993 by Todd Barber, with the goal of helping to preserve and protect coral reefs for the benefit of future generations. Barber witnessed his favorite coral reef on Grand Cayman destroyed by Hurricane Gilbert, and wanted to do something to help increase the resiliency of eroding coral reefs. Barber and his father patented the idea of building reef substrate modules with a central inflatable bladder, so that the modules would be buoyant, making them easy to deploy by hand or with a small boat, rather than requiring heavy machinery.
Over the next few years, with the help of research colleagues at the University of Georgia, Nationwide Artificial Reef Coordinators and the Florida Institute of Technology (FIT), Barber, his colleagues, and business partners worked to perfect the design. In 1997, Kathy Kirbo established The Reef Ball Foundation, Inc. as a non-profit organization; its charter members were Todd Barber (chairman), Kathy Kirbo (founding executive director and board secretary), Larry Beggs (vice president), Eric Krasle (treasurer), and Jay Jorgensen. Reef balls can be found in almost every coastal state in the United States, and on every continent including Antarctica. The foundation has expanded the scope of its projects to include coral rescue, propagation and transplant operations, beach restorations, mangrove restorations and nursery dev |
https://en.wikipedia.org/wiki/Vibrational%20circular%20dichroism | Vibrational circular dichroism (VCD) is a spectroscopic technique which detects differences in attenuation of left and right circularly polarized light passing through a sample. It is the extension of circular dichroism spectroscopy into the infrared and near infrared ranges.
Because VCD is sensitive to the mutual orientation of distinct groups in a molecule, it provides three-dimensional structural information. Thus, it is a powerful technique as VCD spectra of enantiomers can be simulated using ab initio calculations, thereby allowing the identification of absolute configurations of small molecules in solution from VCD spectra. Among such quantum computations of VCD spectra resulting from the chiral properties of small organic molecules are those based on density functional theory (DFT) and gauge-including atomic orbitals (GIAO). A simple example of the experimental results obtained by VCD is the spectral data obtained within the carbon-hydrogen (C-H) stretching region of 21 amino acids in heavy water solutions. Measurements of vibrational optical activity (VOA) thus have numerous applications, not only for small molecules, but also for large and complex biopolymers such as muscle proteins (myosin, for example) and DNA.
Vibrational modes
Theory
While the fundamental quantity associated with the infrared absorption is the dipole strength, the differential absorption is also proportional to the rotational strength, a quantity which depends on both the electric and magnetic dipole transition moments. Sensitivity of the handedness of a molecule toward circularly polarized light results from the form of the rotational strength. A rigorous theoretical development of VCD was developed concurrently by the late Professor P.J. Stephens, FRS, at the University of Southern California, and the group of Professor A.D. Buckingham, FRS, at Cambridge University in the UK, and first implemented analytically in the Cambridge Analytical Derivative Package (CADPAC) by |
https://en.wikipedia.org/wiki/Assertion%20definition%20language | The Assertion Definition Language (ADL) is a specification language providing predicate-logic-based descriptions of the behaviour, as well as the interfaces, of computer software.
English language support
ADL uses function pre- and postconditions to specify interfaces and is designed to provide an intermediary between informal English language specifications and formal programmatic test specifications.
Tool support exists both to convert ADL specifications into the English language, and to generate test systems against which implementation code can be verified.
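ADL's own syntax is not shown here; as a purely illustrative analogue, the Python sketch below wraps a function with the kind of pre- and postcondition an ADL specification would state (the decorator name and tolerance are invented):

```python
# Illustrative pre/postcondition checking in Python; this is NOT ADL syntax,
# only an analogue of the specification style ADL expresses for interfaces.
def sqrt_spec(f):
    def wrapped(x):
        assert x >= 0, "precondition: x must be non-negative"
        result = f(x)
        assert abs(result * result - x) < 1e-9, "postcondition: result^2 == x"
        return result
    return wrapped

@sqrt_spec
def my_sqrt(x):
    return x ** 0.5

print(my_sqrt(2.0))   # passes both checks
```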
History
ADL was developed cooperatively by The Open Group and SunTest of Sun Microsystems.
See also
Formal methods
Formal specification |
https://en.wikipedia.org/wiki/Urban%20ecosystem | In ecology, urban ecosystems are considered an ecosystem functional group within the intensive land-use biome. They are structurally complex ecosystems with a highly heterogeneous and dynamic spatial structure that is created and maintained by humans. They include cities, smaller settlements and industrial areas that are made up of diverse patch types (e.g. buildings, paved surfaces, transport infrastructure, parks and gardens, refuse areas). Urban ecosystems rely on large subsidies of imported water, nutrients, food and other resources. Compared to other natural and artificial ecosystems, human population density is high, and their interaction with the different patch types produces emergent properties and complex feedbacks among ecosystem components.
In socioecology, urban areas are considered part of a broader social-ecological system in which urban landscapes and urban human communities interact with other landscape elements. Urbanization has large impacts on human and environmental health, and the study of urban ecosystems has led to proposals for sustainable urban designs and approaches to development of city fringe areas that can help reduce negative impact on surrounding environments and promote human well-being.
Urban ecosystem research
Urban ecology is a relatively new field, so the research done in it is not yet extensive. While there is still plenty of room for growth, some key issues and biases within the current research need to be addressed.
The article "A Review of Urban Ecosystem Services: Six Key Challenges for Future Research" addresses the issue of geographical bias. According to this article, there is a significant geographical bias "towards the northern hemisphere". The article states that case study research is done primarily in the United States and China. It goes on to explain how future research would benefit from a more geographically diverse a |
https://en.wikipedia.org/wiki/Home%20automation%20for%20the%20elderly%20and%20disabled | Home automation for the elderly and disabled focuses on making it possible for older adults and people with disabilities to remain at home, safe and comfortable. Home automation is becoming a viable option for older adults and people with disabilities who would prefer to stay in the comfort of their homes rather than move to a healthcare facility. This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards old people and people with disabilities.
Concept
There are two basic forms of home automation systems for the elderly: embedded health systems and private health networks. Embedded health systems integrate sensors and microprocessors in appliances, furniture, and clothing which collect data that is analyzed and can be used to diagnose diseases and recognize risk patterns. Private health networks implement wireless technology to connect portable devices and store data in a household health database. Due to the need for more healthcare options for the aging population "there is a significant interest from industry and policy makers in developing these technologies".
Home automation is implemented in the homes of older adults and people with disabilities in order to maintain their independence and safety, while also saving the costs and anxiety of moving to a health care facility. For those with disabilities, smart homes offer an opportunity for independence by providing emergency assistance systems, security features, fall prevention, automated timers, and alerts, and by allowing monitoring by family members via an internet connection.
Telehealth implementation
Background
Telehealth is the use of electronic technology services to provide patient care and improve the healthcare delivery system. The term is often confused with telemedicine, which specifically involves remote clinical services of healthcare delivery. Telehealth is the delivery of remote clinical and non-clinical serv |
https://en.wikipedia.org/wiki/Scratch%20%28programming%20language%29 | Scratch is a high-level block-based visual programming language and website aimed primarily at children as an educational tool, with a target audience of ages 8 to 16. Users on the site, called Scratchers, can create projects on the website using a block-like interface. Projects can be exported to standalone HTML5, Android apps, Bundle (macOS) and EXE files using external tools. Scratch was conceived and designed through collaborative National Science Foundation grants awarded to Mitchell Resnick and Yasmin Kafai. The service is developed by the MIT Media Lab, and has been translated into 70+ languages, and is used in most parts of the world. Scratch is taught and used in after-school centers, schools, and colleges, as well as other public knowledge institutions. As of 15 February 2023, community statistics on the language's official website show more than 123 million projects shared by over 103 million users, over 804 million total projects ever created (including unshared projects), and more than 95 million monthly website visits.
Scratch takes its name from a technique used by disk jockeys called "scratching", where vinyl records are clipped together and manipulated on a turntable to produce different sound effects and music. Like scratching, the website lets users mix together different media (including graphics, sound, and other programs) in creative ways by creating and 'remixing' projects, like video games, animations, music, and simulations.
Scratch 3.0
User interface
The Scratch interface is divided into three main sections: a stage area, block palette, and a coding area to place and arrange the blocks into scripts that can be run by pressing the green flag or clicking on the code itself. Users may also create their own code blocks and they will appear in "My Blocks".
The stage area features the results (e.g., animations, turtle graphics, either in a small or normal size, with a full-screen option also available) and all sprites' thumbnails being list |
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Tate%20height | In number theory, the Néron–Tate height (or canonical height) is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron and John Tate.
Definition and properties
Néron defined the Néron–Tate height as a sum of local heights. Although the global Néron–Tate height is quadratic, the constituent local heights are not quite quadratic. Tate (unpublished) defined it globally by observing that the logarithmic height $h_L$ associated to a symmetric invertible sheaf $L$ on an abelian variety $A$ is "almost quadratic," and used this to show that the limit

$$\hat h_L(P) = \lim_{N\to\infty} \frac{h_L(NP)}{N^2}$$

exists, defines a quadratic form on the Mordell–Weil group of rational points, and satisfies

$$\hat h_L(P) = h_L(P) + O(1),$$

where the implied constant is independent of $P$. If $L$ is anti-symmetric, that is $[-1]^*L \cong L^{-1}$, then the analogous limit

$$\hat h_L(P) = \lim_{N\to\infty} \frac{h_L(NP)}{N}$$

converges and satisfies $\hat h_L(P) = h_L(P) + O(1)$, but in this case $\hat h_L$ is a linear function on the Mordell–Weil group. For general invertible sheaves, one writes $L^{\otimes 2} = (L \otimes [-1]^*L) \otimes (L \otimes ([-1]^*L)^{-1})$ as a product of a symmetric sheaf and an anti-symmetric sheaf, and then

$$\hat h_L(P) = \tfrac{1}{2}\,\hat h_{L\otimes[-1]^*L}(P) + \tfrac{1}{2}\,\hat h_{L\otimes([-1]^*L)^{-1}}(P)$$

is the unique quadratic function satisfying $\hat h_L(P) = h_L(P) + O(1)$.
The Néron–Tate height depends on the choice of an invertible sheaf $L$ on the abelian variety, although the associated bilinear form depends only on the image of $L$ in the Néron–Severi group of $A$. If the abelian variety $A$ is defined over a number field $K$ and the invertible sheaf is symmetric and ample, then the Néron–Tate height is positive definite in the sense that it vanishes only on torsion elements of the Mordell–Weil group $A(K)$. More generally, $\hat h_L$ induces a positive definite quadratic form on the real vector space $A(K) \otimes \mathbb{R}$.
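For reference, the bilinear form mentioned above is recovered from the quadratic height by the standard polarization identity (stated here for concreteness, not quoted from the article):

$$\langle P, Q \rangle = \tfrac{1}{2}\bigl(\hat h_L(P+Q) - \hat h_L(P) - \hat h_L(Q)\bigr), \qquad \hat h_L(P) = \langle P, P \rangle.$$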
On an elliptic curve, the Néron–Severi group is of rank one and has a unique ample generator, so this generator is often used to define the Néron–Tate height, which is then denoted $\hat h$ without reference to a particular line bundle. (However, the height that naturally appears in the statement of the Birch and Swinnerton-Dyer conjecture is twice this height.) On abelian varieties of higher dimension, there |
https://en.wikipedia.org/wiki/Singapore%20math | Singapore math (or Singapore maths in British English) is a teaching method based on the national mathematics curriculum used for first through sixth grade in Singaporean schools. The term was coined in the United States to describe an approach originally developed in Singapore to teach students to learn and master fewer mathematical concepts at greater detail as well as having them learn these concepts using a three-step learning process: concrete, pictorial, and abstract. In the concrete step, students engage in hands-on learning experiences using physical objects which can be everyday items such as paper clips, toy blocks or math manipulatives such as counting bears, link cubes and fraction discs. This is followed by drawing pictorial representations of mathematical concepts. Students then solve mathematical problems in an abstract way by using numbers and symbols.
The development of Singapore math began in the 1980s when Singapore's Ministry of Education developed its own mathematics textbooks that focused on problem solving and developing thinking skills. Outside Singapore, these textbooks were adopted by several schools in the United States and in other countries such as Canada, Israel, the Netherlands, Indonesia, Chile, Jordan, India, Pakistan, Thailand, Malaysia, Japan, South Korea, the Philippines and the United Kingdom. Early adopters of these textbooks in the U.S. included parents interested in homeschooling as well as a limited number of schools. These textbooks became more popular since the release of scores from international education surveys such as Trends in International Mathematics and Science Study (TIMSS) and Programme for International Student Assessment (PISA), which showed Singapore at the top three of the world since 1995. U.S. editions of these textbooks have since been adopted by a large number of school districts as well as charter and private schools.
History
Before the development of its own mathematics textbooks in the 1980s, Singapo |
https://en.wikipedia.org/wiki/Biosocial%20theory | Biosocial Theory is a theory in behavioral and social science that describes personality disorders and mental illnesses and disabilities as biologically-determined personality traits reacting to environmental stimuli.
Biosocial Theory also explains the shift from evolution to culture when it comes to gender and mate selection. Biosocial Theory in motivational psychology identifies the differences between males and females concerning physical strength and reproductive capacity, and how these differences interact with expectations from society about social roles. This interaction produces the differences we see in gender.
Description
M. M. Linehan wrote in her 1993 book, Cognitive–Behavioral Treatment of Borderline Personality Disorder, that "the biosocial theory suggests that BPD is a disorder of self-regulation, and particularly of emotional regulation, which results from biological irregularities combined with certain dysfunctional environments, as well as from their interaction and transaction over time".
The biological part of the model involves the idea that emotional sensitivity is inborn. Just as we have different sensitivities in our pain tolerance, in our skin, or in our digestion, we also have different sensitivities in our emotional reactions. This is part of our genetic makeup, but this alone does not cause difficulties or pathologies. It is the transaction between the biological and the social part, especially with invalidating environments, that brings problems. An invalidating environment is one in which the individuals do not fit, so it invalidates their emotions and experiences. It does not need to be an abusive environment; invalidation can occur in subtle ways. Emotional sensitivity plus invalidating environments cause pervasive emotion dysregulation, which is the font of many psychopathologies.
According to a 1999 article published by McLean Hospital,
See also
Biocultural anthropology
Biosocial criminology
Sociobiology |
https://en.wikipedia.org/wiki/Software%20analysis%20pattern | Software analysis patterns or analysis patterns in software engineering are conceptual models, which capture an abstraction of a situation that can often be encountered in modelling. An analysis pattern can be represented as "a group of related, generic objects (meta-classes) with stereotypical attributes (data definitions), behaviors (method signatures), and expected interactions defined in a domain-neutral manner."
Overview
Martin Fowler defines a pattern as an "idea that has been useful in one practical context and will probably be useful in others". He further explains that an analysis pattern is a pattern "that reflects conceptual structures of business processes rather than actual software implementations". An example:
Martin Fowler describes this pattern as one that "captures the memory of something interesting which affects the domain".
Describing an analysis pattern
While doing analysis, we are trying to understand the problem. Fowler does not detail in his book a formal way to write or to describe analysis patterns. Suggestions have since been made to adopt a consistent and uniform format for describing them. Most are based on the work of Erich Gamma, Frank Buschmann and Christopher Alexander on patterns (in architecture or computer science). One of them, proposed by Hahsler, has the following structure:
Pattern name: a pattern name should really reflect the meaning of what it is abstracting. It should be simple so that one can refer to it during analysis.
Intent: the intent aims to describe the goal the pattern is trying to achieve. It should also describe the problem it tries to solve.
Motivation: "A scenario that illustrates the problem and how the analysis pattern contributes to the solution in the concrete scenario"
Forces and context: "Discussion of forces and tensions which should be resolved by the analysis pattern"
Solution: "Description of solution and of the balance of forces achieved by the analysis pattern in the sce |
https://en.wikipedia.org/wiki/Visual%20MODFLOW | Visual MODFLOW (VMOD) is a graphical user interface (GUI) for the open source groundwater modeling engine MODFLOW. VMOD was developed by Waterloo Hydrogeologic and first released in 1994 as the first commercially available GUI for MODFLOW. In May 2012, a .NET version of the software was rebranded as Visual MODFLOW Flex. The program includes proprietary extensions, such as MODFLOW-SURFACT, MT3DMS (mass-transport 3D multi-species) and a three-dimensional model explorer. Visual MODFLOW supports MODFLOW-2000, MODFLOW-2005, MODFLOW-NWT, MODFLOW-LGR, MODFLOW-SURFACT, and SEAWAT.
The software is used primarily by hydrogeologists to simulate groundwater flow and contaminant transport.
History
The original version of Visual MODFLOW, developed for DOS by Nilson Guiguer, Thomas Franz and Bob Cleary, was released in August 1994. It was based on the USGS MODFLOW-88 and MODPATH code, and resembled the FLOWPATH program developed by Waterloo Hydrogeologic Inc. The first Windows based version was released in 1997; the main programmers were Sergei Schmakov, Alexander Liftshits, and Sean Wilson. A .NET version that included non-grid-based, graphical conceptual modelling features was released in 2012.
On January 10, 2005, Waterloo Hydrogeologic was acquired by Schlumberger's Water Services Technology Group.
On May 1, 2012, Waterloo Hydrogeologic released Visual MODFLOW Flex.
On March 13, 2015, Waterloo Hydrogeologic was acquired by Nova Metrix. |
https://en.wikipedia.org/wiki/List%20of%20Bluetooth%20protocols | The wireless data exchange standard Bluetooth uses a variety of protocols. Core protocols are defined by the trade organization Bluetooth SIG. Additional protocols have been adopted from other standards bodies. This article gives an overview of the core protocols and those adopted protocols that are widely used.
Bluetooth is split into two parts: a "controller stack" containing the timing-critical radio interface, and a "host stack" dealing with high-level data. The controller stack is generally implemented in a low-cost silicon device containing the Bluetooth radio and a microprocessor. The host stack is generally implemented as part of an operating system, or as an installable package on top of an operating system. For integrated devices such as Bluetooth headsets, the host stack and controller stack can be run on the same microprocessor to reduce mass production costs; this is known as a hostless system.
Controller stack
Asynchronous Connection-Less [logical transport] (ACL)
The normal type of radio link used for general data packets using a polling TDMA scheme to arbitrate access. It can carry packets of several types, which are distinguished by:
length (1, 3, or 5 time slots depending on required payload size)
Forward error correction (optionally reducing the data rate in favour of reliability)
modulation (Enhanced Data Rate packets allow up to triple data rate by using a different RF modulation for the payload)
A connection must be explicitly set up and accepted between two devices before packets can be transferred.
ACL packets are retransmitted automatically if unacknowledged, allowing for correction of a radio link that is subject to interference. For isochronous data, the number of retransmissions can be limited by a flush timeout; but without using the L2CAP retransmission and flow control mode or eL2CAP, a higher layer must handle the packet loss.
ACL links are disconnected if there is nothing received for the supervision timeout period; the defa |
https://en.wikipedia.org/wiki/Misuse%20detection | Misuse detection actively works against potential insider threats to vulnerable computer data.
Misuse
Misuse detection is an approach to detecting computer attacks. In a misuse detection approach, abnormal system behaviour is defined first, and then all other behaviour is defined as normal. It stands against the anomaly detection approach which utilizes the reverse: defining normal system behaviour first and defining all other behaviour as abnormal.
With misuse detection, anything not known is normal. An example of misuse detection is the use of attack signatures in an intrusion detection system. Misuse detection has also been used more generally to refer to all kinds of computer misuse.
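A minimal Python sketch of signature-based misuse detection; the two signatures are invented examples, not taken from any real IDS ruleset:

```python
import re

# Events matching a known attack signature are flagged;
# everything else is treated as normal traffic.
SIGNATURES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b.*\bSELECT\b)", re.I),
    "path_traversal": re.compile(r"\.\./"),
}

def classify(event: str) -> str:
    for name, pattern in SIGNATURES.items():
        if pattern.search(event):
            return f"ALERT: {name}"
    return "normal"   # anything not matching a known signature

print(classify("GET /index.html"))                   # normal
print(classify("GET /?id=1 UNION SELECT password"))  # ALERT: sql_injection
```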
Theory
In theory, misuse detection assumes that abnormal behaviour has a simple-to-define model. Its advantage is the simplicity of adding known attacks to the model. Its disadvantage is its inability to recognize unknown attacks. |
https://en.wikipedia.org/wiki/Deniable%20authentication | In cryptography, deniable authentication refers to message authentication between a set of participants where the participants themselves can be confident in the authenticity of the messages, but it cannot be proved to a third party after the event.
In practice, deniable authentication between two parties can be achieved through the use of message authentication codes (MACs) by making sure that if an attacker is able to decrypt the messages, they would also know the MAC key as part of the protocol, and would thus be able to forge authentic-looking messages. For example, in the Off-the-Record Messaging (OTR) protocol, MAC keys are derived from the asymmetric decryption key through a cryptographic hash function. In addition to that, the OTR protocol also reveals used MAC keys as part of the next message, after they have already been used to authenticate previously received messages, and will not be re-used.
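A conceptual Python sketch of why a shared MAC key yields deniability (this is not the OTR protocol itself; the key derivation and message are illustrative):

```python
import hashlib, hmac, os

# Both parties derive the MAC key from shared secret material, so either
# party (or anyone later given the key) could have forged the tag.
shared_secret = os.urandom(32)            # stand-in for a DH-derived secret
mac_key = hashlib.sha256(shared_secret).digest()

message = b"meet at noon"
tag = hmac.new(mac_key, message, hashlib.sha256).hexdigest()

# The recipient verifies the tag -- but since they also hold mac_key, the
# tag proves nothing to a third party: the recipient could have made it too.
check = hmac.new(mac_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, check)
print("tag verified:", tag[:16], "...")
```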
See also
Deniable encryption
Plausible deniability
Malleability
Undeniable signature |
https://en.wikipedia.org/wiki/Walt%20Dawson | Walter Dawson (born April 26, 1982, in Portland, Oregon) is an Alzheimer's disease activist. He is the son of
British immigrant Cecil Dawson and Oregon native Clara Dawson. As a young boy, Dawson captured the attention of America's leaders and national media by undertaking a letter writing campaign on behalf of his father and others with Alzheimer's.
In 1992, Dawson became a national spokesperson for the Alzheimer's Association. In this role, Dawson traveled to Washington, D.C. several times to testify before the United States Senate and House of Representative committees about his family's experiences. While in Washington, Dawson was granted access to several senior legislators and public officials including President Bill Clinton and Vice-President Al Gore.
Overview
Dawson began his letter-writing campaign after the cost of his father's long-term care placed the family in serious financial peril. National Public Radio became the first nationwide media outlet to support his original letter-writing campaign, after Dawson (aged 9) read one of his letters on the air. Soon afterwards, NBC, CBS and Nickelodeon picked up the story of Dawson's campaign on behalf of his father and other people with Alzheimer's disease. They covered the Dawson family and their struggle for health care reform over numerous programs, gaining national exposure for the issues that mattered to people with the disease and their families.
Early adulthood
While an undergraduate at the University of Portland, Dawson was elected student body vice-president and served as President of the Student Senate. |
https://en.wikipedia.org/wiki/Franco-British%20Nuclear%20Forum | The first meeting of the Franco–British Nuclear Forum was held in Paris in November 2007, chaired by the Minister for Energy and the French Industry Minister. Its working groups focused on specific areas for collaboration. A follow-up meeting in London was planned for March 2008, but did not take place.
See also
Nuclear power in the United Kingdom
Nuclear power in France
External links
BBC: France and UK boost nuclear ties |
https://en.wikipedia.org/wiki/Protistology | Protistology is a scientific discipline devoted to the study of protists, a highly diverse group of eukaryotic organisms. All eukaryotes apart from animals, plants and fungi are considered protists. Its field of study therefore overlaps with the more traditional disciplines of phycology, mycology, and protozoology, just as protists embrace mostly unicellular organisms described as algae, some organisms regarded previously as primitive fungi, and protozoa ("animal" motile protists lacking chloroplasts).
They are a paraphyletic group with very diverse morphologies and lifestyles. Their sizes range from unicellular picoeukaryotes only a few micrometres in diameter to multicellular marine algae several metres long.
History
The history of the study of protists has its origins in the 17th century. Since the beginning, the study of protists has been intimately linked to developments in microscopy, which have allowed important advances in the understanding of these organisms due to their generally microscopic nature. Among the pioneers was Anton van Leeuwenhoek, who observed a variety of free-living protists and in 1674 named them “very little animalcules”.
During the 19th century, studies on the Infusoria were dominated by Christian Gottfried Ehrenberg and Félix Dujardin.
The term "protozoology" has become dated as understanding of the evolutionary relationships of the eukaryotes has improved, and is frequently replaced by the term "protistology". For example, the Society of Protozoologists, founded in 1947, was renamed International Society of Protistologists in 2005. However, the older term is retained in some cases (e.g., the Polish journal Acta Protozoologica).
Journals and societies
Dedicated academic journals include:
Archiv für Protistenkunde, 1902-1998, Germany (renamed Protist, 1998-);
Archives de la Societe Russe de Protistologie, 1922-1928, Russia;
Journal of Protozoology, 1954-1993, USA (renamed Journal of Eukaryotic Microbiology, 1993-);
Acta Protoz |
https://en.wikipedia.org/wiki/Enterprise%20Privacy%20Authorization%20Language | Enterprise Privacy Authorization Language (EPAL) is a formal language for writing enterprise privacy policies to govern data handling practices in IT systems according to fine-grained positive and negative authorization rights. It was submitted by IBM to the World Wide Web Consortium (W3C) in 2003 to be considered for recommendation. In 2004, a lawsuit was filed by Zero-Knowledge Systems claiming that IBM breached a copyright agreement from when the two companies worked together in 2001–2002 to create the Privacy Rights Markup Language (PRML). EPAL is based on PRML, which is why Zero-Knowledge argued it should be a co-owner of the standard.
See also
XACML - eXtensible Access Control Markup Language, a standard by OASIS. |
https://en.wikipedia.org/wiki/Deuterated%20chloroform | Deuterated chloroform, also known as chloroform-d, is the organic compound with the formula CDCl3 (also written C(2H)Cl3). Deuterated chloroform is a common solvent used in NMR spectroscopy. The properties of CDCl3 and of CHCl3 (chloroform) are virtually identical.
Preparation
Deuterated chloroform is commercially available. It is more easily produced and less expensive than deuterated dichloromethane. Deuterochloroform is produced by the reaction of hexachloroacetone with deuterium oxide, using pyridine as a catalyst. The large difference in boiling points between the starting material and product facilitate purification by distillation.
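Written as a balanced overall equation (reconstructed from the reagents named above, with the deuterated trichloroacetic acid intermediate decarboxylating in situ):

$$(\mathrm{CCl_3})_2\mathrm{CO} + \mathrm{D_2O} \xrightarrow{\text{pyridine}} 2\,\mathrm{CDCl_3} + \mathrm{CO_2}$$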
NMR solvent
In proton NMR spectroscopy, deuterated solvent (enriched to >99% deuterium) is typically used to avoid recording a large interfering signal or signals from the proton(s) (i.e., hydrogen-1) present in the solvent itself. If nondeuterated chloroform (containing a full equivalent of protium) were used as solvent, the solvent signal would almost certainly overwhelm and obscure any nearby analyte signals. In addition, modern instruments usually require the presence of deuterated solvent, as the field frequency is locked using the deuterium signal of the solvent to prevent frequency drift. Commercial chloroform-d does, however, still contain a small amount (0.2% or less) of non-deuterated chloroform; this results in a small singlet at 7.26 ppm, known as the residual solvent peak, which is frequently used as an internal chemical shift reference.
In carbon-13 NMR spectroscopy, the sole carbon in deuterated chloroform shows a triplet at a chemical shift of 77.16 ppm with the three peaks being about equal size, resulting from splitting by spin coupling to the attached spin-1 deuterium atom (CHCl3 has a chemical shift of 77.36 ppm).
Deuterated chloroform is a general purpose NMR solvent, as it is not very chemically reactive and unlikely to exchange its deuterium with its solute, and its low boiling point allows for easy sample recovery. It is, however, incompatible wi |
https://en.wikipedia.org/wiki/MacScan | MacScan is anti-malware software for macOS developed by SecureMac.
Features
MacScan runs on Apple macOS. It scans for and removes malware (including spyware, Trojan horses, keystroke loggers, and tracking cookies). It also scans for remote administration programs, like Apple Remote Desktop, allowing users to verify that such programs are installed only with their authorization.
The full version is available as shareware.
Unlike other anti-malware applications available for Mac OS X (and other systems), MacScan scans exclusively for malware that affects Macs, as opposed to scanning for all forms of known threats, which would include Windows malware. Given that there is considerably less macOS malware than Windows-based malware, MacScan's definition files are smaller and more optimized.
See also
List of Macintosh software |
https://en.wikipedia.org/wiki/List%20of%20WiMAX%20networks | The following is a list of WiMAX networks.
Standards
IEEE 802.16 - called fixed WiMAX because it provides a static connection without handover.
IEEE 802.16e - called mobile WiMAX because it allows handovers between base stations.
IEEE 802.16m - advanced air interface with data rates of 100 Mbit/s mobile and 1 Gbit/s fixed.
Networks
Africa
Americas
Asia & Oceania
Europe
See also
List of LTE networks |
https://en.wikipedia.org/wiki/Dragon%20Slayer%20%28video%20game%29 | Dragon Slayer is an action role-playing game, developed by Nihon Falcom and designed by Yoshio Kiya. It was originally released in 1984 for the PC-8801, PC-9801, X1 and FM-7, and became a major success in Japan. It was followed by an MSX port published by Square in 1985 (making it one of the first titles to be published by Square), a Super Cassette Vision port by Epoch in 1986 and a Game Boy port by the same company in 1990 under the name Dragon Slayer I. A version for the PC-6001mkII was in development but was never released. A remake of Dragon Slayer is included in the Falcom Classics collection for the Sega Saturn. Dragon Slayer began the Dragon Slayer series, a banner which encompasses a number of popular Falcom titles, such as Dragon Slayer II: Xanadu, Sorcerian, and Legacy of the Wizard. It also includes Dragon Slayer: The Legend of Heroes, which would later spawn over a dozen entries across multiple subseries.
Gameplay
Dragon Slayer is an early example of the action role-playing game genre, which it laid the foundations for. Building on the prototypical action role-playing elements of Panorama Toh (1983), created by Yoshio Kiya and Nihon Falcom, as well as Namco's The Tower of Druaga (1984), Dragon Slayer is often considered the first true action role-playing game. In contrast to earlier turn-based roguelikes, Dragon Slayer was a dungeon crawl role-playing game that was entirely real-time with action-oriented combat, combining arcade-style action mechanics with traditional role-playing mechanics. Dragon Slayer featured an in-game map to help with the dungeon-crawling, required item management due to the inventory being limited to one item at a time, and featured item-based puzzles similar to The Legend of Zelda. Dragon Slayer's overhead action-RPG formula was used in many later games. Along with its competitor, Hydlide, Dragon Slayer laid the foundations for the action RPG genre, including franchises such a |
https://en.wikipedia.org/wiki/Ultimate%20reality | Ultimate reality is "the supreme, final, and fundamental power in all reality". This heavily overlaps with the concept of the Absolute in certain philosophies.
Buddhism
In Theravada Buddhism, Nirvana is ultimate reality. Nirvana is described in negative terms; it is unconstructed and unconditioned. In some strands of Mahayana Buddhism, the Buddha-nature or the Dharmakaya is seen as ultimate reality. Other strands of Buddhism reject the notion of ultimate reality, regarding any existent as empty (sunyata) of inherent existence (svabhava).
Confucianism and Chinese theology
In Confucianism and general Chinese theology, Tian connotes the highest principle of creation, monistic in both structure and nature. This conception of Tian evolved over time: in the earliest Confucian canonical texts (such as the Analects of Confucius), Tian was a transcendent universal creator and ruler similar to that of the Hellenistic philosophies and Abrahamic traditions. During the Neo-Confucianism of the Song dynasty, Tian became the will and embodiment of the "natural order" of things, the universal principle guiding the cosmos.
Hellenistic philosophy
There have generally been ideas of an impersonal supreme force or ultimate reality in Hellenistic philosophy, such as among the Stoics, whose physics pantheistically identified the universe with God, rationally creating the cosmos with his pneuma, ordering the cosmos with his logos, and destroying the cosmos in ekpyrosis, only to start the process of rebirth all over again. Among the Platonists of all generations, the highest reality is the Form of the Good or The One, an ineffable and transcendent first principle that is both the origin and end of all things.
Hinduism
In Hinduism, Brahman connotes the highest universal principle, the ultimate reality in the universe. In major schools of Hindu philosophy, it is the material, efficient, formal and final cause of all that exists. It is the pervasive, genderless, infinite, eternal truth an |
https://en.wikipedia.org/wiki/Forward%20kinematics | In robot kinematics, forward kinematics refers to the use of the kinematic equations of a robot to compute the position of the end-effector from specified values for the joint parameters.
The kinematics equations of the robot are used in robotics, computer games, and animation. The reverse process, that computes the joint parameters that achieve a specified position of the end-effector, is known as inverse kinematics.
Kinematics equations
The kinematics equations for the series chain of a robot are obtained using a rigid transformation [Z] to characterize the relative movement allowed at each joint and a separate rigid transformation [X] to define the dimensions of each link. The result is a sequence of rigid transformations alternating joint and link transformations from the base of the chain to its end link, which is equated to the specified position for the end link:

[T] = [Z1][X1][Z2][X2]…[Xn-1][Zn],

where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the serial chain.
Link transformations
In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z] and link matrices [X] to standardize the coordinate frame for spatial linkages. This convention positions the joint frame so that it consists of a screw displacement along the Z-axis,

[Zi] = Trans(di) Rot(θi)   (a translation di and a rotation θi about the joint's Z-axis),

and it positions the link frame so it consists of a screw displacement along the X-axis,

[Xi] = Trans(ai,i+1) Rot(αi,i+1)   (a translation ai,i+1 and a rotation αi,i+1 about the link's X-axis).

Using this notation, each link transformation along the serial chain can be described by the coordinate transformation

[Ti] = [Zi][Xi],

where θi, di, αi,i+1 and ai,i+1 are known as the Denavit-Hartenberg parameters.
Kinematics equations revisited
The kinematics equations of a serial chain of n links, with joint parameters θi, are given by

[T] = [T1(θ1)][T2(θ2)]⋯[Tn(θn)],

where [Ti(θi)] is the transformation matrix from the frame of link i−1 to link i. In robotics, these are conventionally described by Denavit–Hartenberg parameters.
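To make the composition of link transforms concrete, here is a minimal Python sketch (using NumPy) that builds each Denavit-Hartenberg matrix and multiplies them into the end-effector transform [T]; the two-joint planar arm at the end is an invented example, not one from the article.

```python
# Forward kinematics from Denavit-Hartenberg parameters.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform for one joint/link pair, i.e. [Zi][Xi]."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Compose the per-link transforms from the base to the end link."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Planar arm with two revolute joints, link lengths 1.0 and 0.5, both at 45 deg.
params = [(np.pi / 4, 0.0, 1.0, 0.0), (np.pi / 4, 0.0, 0.5, 0.0)]
print(forward_kinematics(params)[:3, 3])  # end-effector position ~[0.707, 1.207, 0.0]
```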
Denavit-Hartenberg matrix
The matrices associated with these operations |
https://en.wikipedia.org/wiki/Selective%20adsorption | In surface science, selective adsorption is the effect in which minima associated with bound-state resonances appear in the specular intensity of atom-surface scattering.
In crystal growth, selective adsorption refers to the phenomenon where adsorbing molecules attach preferentially to certain crystal faces.
An example of selective adsorption can be demonstrated in the growth of Rochelle salt crystals. If copper ions are added to the solution during the growth process, the growth of some crystal faces slows down, as copper apparently becomes a barrier to adsorption. By then adding sodium hydroxide to the solution, the preferred crystal faces change once again.
Discovery
Pronounced intensity minima were first observed in 1930 by Immanuel Estermann, Otto Frisch, and Otto Stern during a series of gas-surface interaction experiments attempting to demonstrate the wave nature of atoms and molecules. The phenomenon was explained in 1936 by John Lennard-Jones and Devonshire in terms of resonant transitions to bound surface states.
Significance
The selective adsorption binding energies can supply information on the gas-surface interaction potentials by yielding the vibrational energy spectrum of the gas atom bound to the surface. Since the 1970s, the effect has been studied extensively, both theoretically and experimentally, and energy levels measured with this technique are available for many systems. |
https://en.wikipedia.org/wiki/Tonic%20%28physiology%29 | Tonic in physiology refers to a physiological response which is slow and may be graded. The term is typically used in opposition to a fast response. For instance, tonic muscles are contrasted with the more typical and much faster twitch muscles, while tonic sensory nerve endings are contrasted with the much faster phasic sensory nerve endings.
Tonic muscles
Tonic muscles are much slower than twitch fibers in terms of time from stimulus to full activation, time to full relaxation upon cessation of stimuli, and maximal shortening velocity. These muscles are rarely found in mammals (only in the muscles moving the eye and in the middle ear), but are common in reptiles and amphibians.
Tonic sensory receptors
Tonic sensory input adapts slowly to a stimulus and continues to produce action potentials over the duration of the stimulus. In this way it conveys information about the duration of the stimulus. In contrast, phasic receptors adapt rapidly to a stimulus. The response of the cell diminishes very quickly and then stops. It does not provide information on the duration of the stimulus; instead some of them convey information on rapid changes in stimulus intensity and rate. Examples of tonic receptors are pain receptors, the joint capsule, muscle spindle, and the Ruffini corpuscle.
See also
Tonic-clonic seizure |
https://en.wikipedia.org/wiki/Giovanni%20%28meteorology%29 | Giovanni is a Web interface that allows users to analyze NASA's gridded data from various satellite and surface observations.
Giovanni lets researchers examine data on atmospheric chemistry, atmospheric temperature, water vapor and clouds, atmospheric aerosols, precipitation, and ocean chlorophyll and surface temperature. The primary data consist of global gridded data sets with reduced spatial resolution. Basic analytical functions performed by Giovanni are carried out by the Grid Analysis and Display System (GrADS).
Giovanni is an acronym for GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure.
It allows access to data from multiple remote sites, supports multiple data formats, including Hierarchical Data Format (HDF), HDF-EOS, network Common Data Form (netCDF), GRIdded Binary (GRIB), and binary, and offers multiple plot types, including area, time, Hovmöller, and image animation. |
https://en.wikipedia.org/wiki/360-degree%20video | 360-degree videos, also known as surround video, immersive video or spherical video, are video recordings where a view in every direction is recorded at the same time, shot using an omnidirectional camera or a collection of cameras. During playback on a normal flat display, the viewer has control of the viewing direction, as with a panorama. The video can also be played on displays or projectors arranged in a sphere or some part of a sphere.
Creation
360-degree video is typically recorded using either a special rig of multiple cameras, or using a dedicated camera that contains multiple camera lenses embedded into the device, and recording overlapping angles simultaneously. Specialized omnidirectional cameras and rigs have been developed for the purpose of recording 360-degree video, including rigs such as GoPro's Omni and Odyssey (which consist of multiple action cameras installed within a frame), and contained cameras like the Nokia OZO. There have also been handheld dual-lens cameras such as the Ricoh Theta S, Samsung Gear 360, Garmin VIRB 360, and the Kogeto Dot 360—a panoramic camera lens accessory for smartphone cameras.
This separate footage is stitched into one spherical video piece, and the color and contrast of each shot is calibrated to be consistent with the others. This process is done either by the camera itself, or using specialized software that can analyze common visuals and audio to synchronize and link the different camera feeds together. Generally, the only area that cannot be viewed is the view toward the camera support.
360-degree video is typically formatted in an equirectangular projection and is either monoscopic, with one image directed to both eyes, or stereoscopic, viewed as two distinct images directed individually to each eye for a 3D effect. Due to this projection and stitching, equirectangular video exhibits a lower quality in the middle of the image than at the top and bottom. Spherical videos are frequently in curvilinear perspective wit |
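Because the equirectangular projection described above is a linear mapping of longitude and latitude to pixel columns and rows, the view-direction-to-pixel conversion a player performs can be sketched in a few lines of Python; the sign and wrap conventions below are assumptions for illustration, not any particular player's standard.

```python
# Map a viewing direction (yaw, pitch in radians) to pixel coordinates
# in an equirectangular frame of size width x height.
import math

def direction_to_equirect(yaw, pitch, width, height):
    u = (yaw / (2 * math.pi) + 0.5) * width   # longitude -> column
    v = (0.5 - pitch / math.pi) * height      # latitude  -> row
    return u, v

print(direction_to_equirect(0.0, 0.0, 3840, 1920))          # centre of the frame
print(direction_to_equirect(math.pi / 2, 0.0, 3840, 1920))  # 90 degrees to the right
```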
https://en.wikipedia.org/wiki/Reverse%20short-channel%20effect | In MOSFETs, reverse short-channel effect (RSCE) is an increase of threshold voltage with decreasing channel length; this is the opposite of the usual short-channel effect. The difference comes from changes in doping profiles used in modern small device manufacturing.
RSCE is a result of non-uniform channel doping (halo doping) in modern processes. To combat drain-induced barrier lowering (DIBL), the MOSFET substrate near the source and drain regions is heavily doped (p+ in the case of NMOS and n+ in the case of PMOS) to reduce the width of the depletion region in the vicinity of the source/substrate and drain/substrate junctions (the term halo doping describes the limitation of this heavy doping to the immediate vicinity of the junctions). At short channel lengths the halo doping of the source overlaps that of the drain, increasing the substrate doping concentration in the channel area, and thus increasing the threshold voltage. This increased threshold voltage requires a larger gate voltage for channel inversion. As channel length is increased, however, the halo-doped regions become separated and the doping mid-channel approaches a lower background level dictated by the body doping. This reduction in average channel doping concentration means Vth initially falls as channel length increases, but approaches a constant value, independent of channel length, for large enough lengths.
See also
Short-channel effect |
https://en.wikipedia.org/wiki/Drain-induced%20barrier%20lowering | Drain-induced barrier lowering (DIBL) is a short-channel effect in MOSFETs referring originally to a reduction of threshold voltage of the transistor at higher drain voltages.
In a classic planar field-effect transistor with a long channel, the bottleneck in channel formation occurs far enough from the drain contact that it is electrostatically shielded from the drain by the combination of the substrate and gate, and so classically the threshold voltage was independent of drain voltage.
In short-channel devices this is no longer true: The drain is close enough to gate the channel, and so a high drain voltage can open the bottleneck and turn on the transistor prematurely.
The origin of the threshold decrease can be understood as a consequence of charge neutrality: the Yau charge-sharing model.
The combined charge in the depletion region of the device and that in the channel of the device is balanced by three electrode charges: the gate, the source and the drain. As drain voltage is increased, the depletion region of the p-n junction between the drain and body increases in size and extends under the gate, so the drain assumes a greater portion of the burden of balancing depletion region charge, leaving a smaller burden for the gate. As a result, the charge present on the gate retains charge balance by attracting more carriers into the channel, an effect equivalent to lowering the threshold voltage of the device.
In effect, the channel becomes more attractive for electrons. In other words, the potential energy barrier for electrons in the channel is lowered. Hence the term "barrier lowering" is used to describe these phenomena. Unfortunately, it is not easy to come up with accurate analytical results using the barrier lowering concept.
Barrier lowering increases as channel length is reduced, even at zero applied drain bias, because the source and drain form pn junctions with the body, and so have built-in depletion layers associated with them that become |
https://en.wikipedia.org/wiki/Regius%20Professor%20of%20Botany%20%28Aberdeen%29 | Regius Professor of Botany is a regius professorship at the University of Aberdeen in Scotland.
List of Regius Professors of Botany
1860 to 1877: George Dickie
1877 to 1919: James W. H. Trail
1920 to 1933: William Grant Craib
1934 to 1959: James Robert Matthews
1959 to 1981: Paul Egerton Weatherley
1982 to 1988: Charles Henry Gimingham
1996 to 2010: Ian Alexander |
https://en.wikipedia.org/wiki/Keidel%20vacuum | The Keidel vacuum tube was a type of blood-collecting device, first manufactured by Hynson, Westcott and Dunning in around 1922. It was one of the first evacuated blood-collection systems, predating the better-known Vacutainer. Its primary use was to test for syphilis and typhoid fever.
Process
Essentially, the Keidel vacuum consists of a sealed ampule, with or without a culture medium. Connected to the ampule was a short rubber tube with a needle at the end, capped with a small glass tube. Inserting the needle into the vein crushes the ampule, creating a vacuum and forcing blood into the container. Typically, a prominent vein in the forearm such as the median cubital vein would suffice, although the Keidel vacuum could take blood from any prominent peripheral vein. The concept did not become popular until World War II, when quick and efficient first aid care was necessary on the battlefield. As a result, the Vacutainer became the foremost device used for blood collection.
See also
Phlebotomy
Fingerprick |
https://en.wikipedia.org/wiki/International%20Food%20Information%20Council | Founded in 1985, the International Food Information Council (IFIC) is a nonprofit organization supported by the food, beverage, and agricultural industries.
According to the Center for Media and Democracy, "In reality, IFIC is a public relations arm of the food, beverage and agricultural industries, which provide the bulk of its funding." The vast majority of organizational revenues are generated by membership fees. The members of the IFIC consist of companies with food and food-related sales; companies such as packaging or equipment suppliers, service providers, design firms, inspection/testing organizations, and canning/bottling companies with an interest in nutrition and food safety issues; and non-industry organizations, such as research institutions, foundations, and associations, with an interest in nutrition and food safety issues. |
https://en.wikipedia.org/wiki/Michael%20Dhuey | Michael Joseph Dhuey (born July 20, 1958, in Milwaukee, Wisconsin, United States) is an electrical and computer engineer.
Information
He is chiefly known as the co-creator (with Ron Hochsprung) of the Macintosh II computer in 1987, the first Macintosh computer with expansion slots. He was also one of the two hardware engineers (with Tony Fadell) who developed the hardware for the original iPod in 2001, particularly the battery.
He began programming at age 14 at the University of Wisconsin–Milwaukee and by age 15 was working professionally as a programmer at Northwestern Mutual Life Insurance. He received his computer engineering degree in 1980 from the University of Wisconsin-Madison. He worked at Apple Computer from 1980 to 2005. He is currently employed at Cisco Systems, where he has worked on the Cisco TelePresence remote conferencing system.
Design News nominated him for "Engineer of the Year" in 2006 and 2007. |
https://en.wikipedia.org/wiki/Multimedia%20Acceleration%20eXtensions | The Multimedia Acceleration eXtensions or MAX are instruction set extensions to the Hewlett-Packard PA-RISC instruction set architecture (ISA). MAX was developed to improve the performance of multimedia applications that were becoming more prevalent during the 1990s.
MAX instructions operate on 32- or 64-bit SIMD data types consisting of multiple 16-bit integers packed in general purpose registers. The available functionality includes additions, subtractions and shifts.
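As a rough illustration of what packing multiple 16-bit integers into one general-purpose register buys, the Python sketch below emulates a lane-wise parallel add on a 64-bit word using only ordinary integer arithmetic; it mimics the effect of a packed-add instruction, not MAX's actual encodings or semantics.

```python
# Emulate a packed 16-bit add (four lanes in one 64-bit word).
LANE_HIGH = 0x8000_8000_8000_8000   # high bit of each 16-bit lane
MASK64 = 0xFFFF_FFFF_FFFF_FFFF

def pack4(a, b, c, d):
    return ((a & 0xFFFF) << 48) | ((b & 0xFFFF) << 32) | ((c & 0xFFFF) << 16) | (d & 0xFFFF)

def padd16(x, y):
    """Add the four 16-bit lanes of x and y independently, with wrap-around."""
    # Add the low 15 bits of every lane, then patch the lane high bits in
    # with XOR so that carries never cross a lane boundary.
    low = (x & ~LANE_HIGH & MASK64) + (y & ~LANE_HIGH & MASK64)
    return (low ^ ((x ^ y) & LANE_HIGH)) & MASK64

x = pack4(1, 2, 0xFFFF, 100)
y = pack4(10, 20, 1, 200)
print(hex(padd16(x, y)))  # lanes: 0x000b, 0x0016, 0x0000 (0xFFFF+1 wraps), 0x012c
```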
The first version, MAX-1, was for the 32-bit PA-RISC 1.1 ISA. The second version, MAX-2, was for the 64-bit PA-RISC 2.0 ISA.
Notability
The approach is notable because the set of instructions is much smaller than in other multimedia CPUs, and also more general-purpose. The small set and simplicity of the instructions reduce the recurring costs of the electronics, as well as the costs and difficulty of the design. The general-purpose nature of the instructions increases their overall value. These instructions require only small changes to a CPU's arithmetic-logic unit. A similar design approach promises to be a successful model for the multimedia instructions of other CPU designs. The set is also small because the CPU already included powerful shift and bit-manipulation instructions: "Shift pair" which shifts a pair of registers, "extract" and "deposit" of bit fields, and all the common bit-wise logical operations (and, or, exclusive-or, etc.).
This set of multimedia instructions has proven its performance, as well. In 1996 the 64-bit "MAX-2" instructions enabled real-time performance of MPEG-1 and MPEG-2 video while increasing the area of a RISC CPU by only 0.2%.
Implementations
MAX-1 was first implemented in the PA-7100LC in 1994. It is usually credited as the first SIMD extension to an ISA. MAX-2 was first implemented in the PA-8000 microprocessor released in 1996.
The basic approach to the arithmetic in MAX-2 is to " |
https://en.wikipedia.org/wiki/AERONET | AERONET (AErosol RObotic NETwork) is a network of ground-based sun photometers which measure atmospheric aerosol properties. The measurement system is a solar-powered CIMEL Electronique 318A spectral radiometer that measures Sun and sky radiances at a number of fixed wavelengths within the visible and near-infrared spectrum. There is one sea-based reading location aboard the E/V Nautilus, the exploration vessel operated by Dr. Robert Ballard and the Sea Research Foundation. Two readings per day are taken aboard the ship while it is in operation.
AERONET provides continuous cloud-screened observations of spectral aerosol optical depth (AOD), precipitable water, and inversion aerosol products in diverse aerosol regimes. Inversion products are retrieved from almucantar scans of radiance as a function of scattering angle and include products such as aerosol volume size distribution, aerosol complex refractive index, optical absorption (single scattering albedo) and the aerosol scattering phase function. All these products represent an average of the total aerosol column within the atmosphere.
The aerosol properties are retrieved via an inversion algorithm developed by Dubovik and King (2000).
Further algorithms were developed, for example, by Dubovik et al. (2006) to take into account non-spherical shapes of aerosol particles such as mineral dust.
AERONET is an observing system in the NOAA Observing System Architecture.
See also
Aerosol
Angstrom exponent |
https://en.wikipedia.org/wiki/National%20Centers%20for%20Biomedical%20Computing | The National Centers for Biomedical Computing (NCBCs) are part of the U.S. National Institutes of Health plan to develop and implement the core of a universal computing infrastructure that is urgently needed to speed progress in biomedical research. Their mission is to create innovative software programs and other tools that will enable the biomedical community to integrate, analyze, model, simulate, and share data on human health and disease.
Recognizing the potential benefits to human health that can be realized from applying and advancing the field of biomedical computing, the Biomedical Information Science and Technology Initiative (BISTI) was launched at the NIH in April 2000. This initiative is aimed at making optimal use of computer science and technology to address problems in biology and medicine. The full text of the original BISTI Report is available.
As of April 2016, the web site for the National Centers for Biomedical Computing (http://www.ncbcs.org) is no longer managed by that organization, though many of the centers still are supported.
Current Centers
Center for Computational Biology
National Center for Biomedical Ontology
Simbios: Physics-based Simulation of Biological Structures, Stanford University
National Center for Integrative Biomedical Informatics
National Center for Multi-Scale Study of Cellular Networks
National Alliance for Medical Imaging Computing
See also
Biositemaps
Biomedical Computation Review, a quarterly magazine created by Simbios to help build community among the diverse disciplines that participate in the field.
External links
NIH Roadmap National Centers for Biomedical Computing, archive.org
National Center for Multi-Scale Study of Cellular Networks
National Alliance for Medical Imaging Computing
Genomics
Proteomics
Medical research institutes in the United States
Bioinformatics organizations
National Institutes of Health |
https://en.wikipedia.org/wiki/FLUXNET | FLUXNET is a global network of micrometeorological tower sites that use eddy covariance methods to measure the exchanges of carbon dioxide, water vapor, and energy between the biosphere and atmosphere. FLUXNET is a global 'network of regional networks' that serves to provide an infrastructure to compile, archive and distribute data for the scientific community. The most recent FLUXNET data product, FLUXNET2015, is hosted by the Lawrence Berkeley National Laboratory (USA) and is publicly available for download. Currently there are over 1000 active and historic flux measurement sites.
FLUXNET works to ensure that different flux networks are calibrated to facilitate comparison between sites, and it provides a forum for the distribution of knowledge and data between scientists. Researchers also collect data on site vegetation, soil, trace gas fluxes, hydrology, and meteorological characteristics at the tower sites.
History and Background
FLUXNET started in 1997 and has grown from a handful of sites in North America and Europe to more than 260 registered sites world-wide. Today, FLUXNET consists of regional networks in North America (AmeriFlux, Fluxnet-Canada, NEON), South America (LBA), Europe (CarboEuroFlux, ICOS), Australasia (OzFlux), Asia (ChinaFlux and AsiaFlux) and Africa (AfriFlux). At each tower site, the eddy covariance flux measurements are made every 30 minutes and are integrated on daily, monthly and annual time scales. The spatial scale of the footprint at each tower site ranges between 200 m and a kilometer.
An overarching intent of FLUXNET, and its regional partners, is to provide data that can be used to validate terrestrial carbon fluxes derived from sensors on NASA satellites, such as TERRA and AQUA, and from biogeochemical models. To achieve this overarching goal, the objectives and priorities of FLUXNET have evolved as the network has grown and matured. During the initial stages of FLUXNET, the priority of our |
https://en.wikipedia.org/wiki/Digital%20media%20player | A digital media player (also sometimes known as a streaming device or streaming box) is a type of consumer electronics device designed for the storage, playback, or viewing of digital media content. They are typically designed to be integrated into a home cinema configuration, and attached to a television or AV receiver or both.
The term is most synonymous with devices designed primarily for the consumption of content from streaming media services such as internet video, including subscription-based over-the-top content services. These devices usually have a compact form factor (either as a compact set-top box, or a dongle designed to plug into an HDMI port), and contain a 10-foot user interface with support for a remote control and, in some cases, voice commands, as control schemes. Some services may support remote control on digital media players using their respective mobile apps, while Google's Chromecast ecosystem is designed around integration with the mobile apps of content services.
A digital media player's operating system may provide a search engine for locating content available across multiple services and installed apps. Many digital media players offer internal access to digital distribution platforms, where users can download or purchase content such as films, television episodes, and apps. In addition to internet sources, digital media players may support the playback of content from other sources, such as external media (including USB drives or memory cards), or streamed from a computer or media server. Some digital media players may also support video games, though their complexity (which can range from casual games to ports of larger games) depends on operating system and hardware support, and besides those marketed as microconsoles, are not usually promoted as the device's main function.
Digital media players do not usually include a tuner for receiving terrestrial television, nor disc drives for Blu-rays or DVD. Some devices, such as standalo |
https://en.wikipedia.org/wiki/Ruppeiner%20geometry | Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model.
This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics: there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold), and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor gij in the distance formula (line element) between two equilibrium states,

ds² = gij dx^i dx^j,

where the matrix of coefficients gij is the symmetric metric tensor called the Ruppeiner metric, defined as the negative Hessian of the entropy function,

gij = −∂²S/∂x^i ∂x^j,  with x = (U, N^a),

where U is the internal energy (mass) of the system and N^a refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher-Rao metric used in mathematical statistics.
The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative.
The Ruppeiner metric is conformally related to the Weinhold metric via

ds²_Ruppeiner = (1/T) ds²_Weinhold,
where T is the temperature of the system under consideration. Proof of the conformal relation can be easily done when one writes down the first law of thermodynamics (dU=TdS+...) in differential form with a few manipulations. The Weinhold geometry is also considered a |
https://en.wikipedia.org/wiki/Eurekster | Eurekster was a New Zealand-based company that built social search engines for use on websites, which were referred to as "swickis" (for "search plus wiki"). The company was based in Christchurch, with an office in San Francisco, California. It was co-founded by Grant Ryan and Steven Marder, who served as its chief scientist and CEO, respectively. Ryan is also the co-founder and chairman of the Christchurch-based company SLI Systems, which specializes in search engines that learn from users. According to Marder, "Eurekster pioneered vertical, social search..."
Eurekster launched to the public on 21 January 2004.
In 2007, Eurekster hosted around 100,000 swickis for various websites, which total approximately 20 million searches per month, or around 800,000 searches per day.
The company shut down sometime after 2010.
Praise
In May 2006, Red Herring selected Eurekster as one of its favorite companies pushing the technological limits in North America.
On 17 January 2007, Eurekster was named one of the 100 best companies by the AlwaysOn Media 100. The selection was made by focusing on "innovation, market potential, commercialization, stakeholder value creation, and media attention or 'buzz'". |
https://en.wikipedia.org/wiki/Histoplasma%20capsulatum | Histoplasma capsulatum is a species of dimorphic fungus. Its sexual form is called Ajellomyces capsulatus. It can cause pulmonary and disseminated histoplasmosis.
H. capsulatum is "distributed worldwide, except in Antarctica, but most often associated with river valleys" and occurs chiefly in the "Central and Eastern United States" followed by "Central and South America, and other areas of the world". It is most prevalent in the Ohio and Mississippi River valleys. It was discovered by Samuel Taylor Darling in 1906.
Growth and morphology
H. capsulatum is an ascomycetous fungus closely related to Blastomyces dermatitidis. It is potentially sexual, and its sexual state, Ajellomyces capsulatus, can readily be produced in culture, though it has not been directly observed in nature. H. capsulatum groups with B. dermatitidis and the South American pathogen Paracoccidioides brasiliensis in the recently recognized fungal family Ajellomycetaceae. It is dimorphic and switches from a mould-like (filamentous) growth form in the natural habitat to a small, budding yeast form in the warm-blooded animal host.
Like B. dermatitidis, H. capsulatum has two mating types, "+" and "–". The great majority of North American isolates belongs to a single genetic type, but a study of multiple genes suggests a recombining, sexual population. A recent analysis has suggested that the prevalent North American genetic type and a less common type should be considered separate phylogenetic species, distinct from H. capsulatum isolates obtained in Central and South America and other parts of the world. These entities are temporarily designated NAm1 (the rare type, which includes a famous experimental isolate designated "the Downs strain") and NAm2 (the common type). As yet, no well-established clinical or geographic distinction is seen between these two genetic groups.
In its asexual form, the fungus grows as a colonial microfungus strongly similar in macromorphology to B. dermatitidis. A micros |
https://en.wikipedia.org/wiki/Marcel%20Vigneron | Marcel Vigneron is an American chef. He was runner-up of the second season of Top Chef, which aired in 2006–2007. From 2011 on, he had multiple other television appearances and operated a catering firm.
Early life
Vigneron is originally from Bainbridge Island, Washington.
Vigneron attended the Culinary Institute of America (CIA) in New York and achieved his associate degree in Culinary Arts. There, Vigneron met fellow chef Spike Mendelsohn. The two played a lot of frisbee together and became best friends; they would later compete together on the 5th and 8th seasons of The Next Iron Chef. At the CIA Vigneron enrolled in the teaching assistant program, where he served as the sous chef to Dwayne Lipuma at the school's Ristorante Caterina de’ Medici.
Top Chef
Vigneron appeared in season two of Bravo's reality series Top Chef, which was filmed in 2006, and aired in late 2006 and early 2007. At the time of his appearance on Top Chef, he was a Master Cook at Joël Robuchon in Las Vegas, Nevada.
On the show, he became known for his molecular gastronomy techniques, especially his use of foams. He also notably clashed with many of the show's other contestants, culminating in an incident in which several of the show's contestants egged on contestant Cliff Crooks to pin down Vigneron and shave his head. This led to Cliff being kicked off the show.
Several Top Chef viewers blogged about discrepancies in the sequence of events relating to the hair-shaving incident, including one clip that shows contestant Elia Aboumrad during the shave attempt with all of her hair intact, despite being shown shaving her head earlier in the sequence. Activity in the blogosphere eventually attracted the attention of entertainment news outlets, some of which commented that the creative editing was done in an attempt to downplay interpersonal conflicts. Vigneron characterized the event as more like a drunken assault, and confirmed that the attack on him came before the other contestants shaved t |
https://en.wikipedia.org/wiki/Allysine | Allysine is a derivative of lysine that features a formyl group in place of the terminal amine. The free amino acid does not exist, but the allysine residue does. It is produced by aerobic oxidation of lysine residues by the enzyme lysyl oxidase. The transformation is an example of a post-translational modification. The semialdehyde form exists in equilibrium with a cyclic derivative.
Allysine is involved in the production of elastin and collagen. Increased allysine concentration in tissues has been correlated to the presence of fibrosis.
Allysine residues react with sodium 2-naphthol-6-sulfonate to produce a fluorescent bis-naphthol-allysine product. In another assay, allysine-containing proteins are reduced with sodium borohydride to give a peptide containing the 6-hydroxynorleucine (6-hydroxy-2-aminocaproic acid) residue, which (unlike allysine) is stable to proteolysis.
See also
Saccharopine |
https://en.wikipedia.org/wiki/Isodesmosine | Isodesmosine is a lysine derivative found in elastin. It is an isomeric pyridinium-based amino acid resulting from the condensation of four lysine residues between elastin proteins by lysyl oxidase. Such cross-links are ideal biomarkers for monitoring elastin turnover because they are found only in mature elastin in mammals.
See also
Desmosine |
https://en.wikipedia.org/wiki/Lead%20scandium%20tantalate | Lead scandium tantalate (PST) is a mixed oxide of lead, scandium, and tantalum. It has the formula Pb(Sc0.5Ta0.5)O3. It is a ceramic material with a perovskite structure, where the Sc and Ta atoms at the B site have an arrangement that is intermediate between ordered and disordered configurations and can be fine-tuned with thermal treatment. It is ferroelectric at temperatures below its Curie point, and is also piezoelectric. Like the structurally similar lead zirconate titanate and barium strontium titanate, PST can be used for the manufacture of uncooled focal-plane-array infrared imaging sensors for thermal cameras. |
https://en.wikipedia.org/wiki/Commercial%20Standard%20Digital%20Bus | The Commercial Standard Digital Bus (CSDB) is a multidrop bus, formerly known as the Collins Standard Digital Bus. The maximum speed is 50 kbit/s.
Most civilian aircraft use one of three serial buses: the Commercial Standard Digital Bus (CSDB), ARINC 429, or AS-15531.
The Commercial Standard Digital Bus is a two-wire asynchronous broadcast data transmission bus. Data is transmitted over an interconnecting cable by devices that comply with Electronic Industries Association (EIA) RS-422A; that is, the physical layer is EIA-422.
Messages on the CSDB consist of one address byte followed by any number of data bytes. |
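Given that layout (one address byte followed by data bytes), framing and parsing a CSDB-style message can be sketched as follows; the address value and payload here are invented, and real CSDB equipment fixes the byte count per message type.

```python
# Minimal CSDB-style message framing: address byte + data bytes.
def encode(address: int, data: bytes) -> bytes:
    return bytes([address & 0xFF]) + data

def decode(frame: bytes) -> tuple[int, bytes]:
    return frame[0], frame[1:]

frame = encode(0xA3, bytes([0x01, 0x02, 0x03]))
addr, payload = decode(frame)
print(hex(addr), payload.hex())  # 0xa3 010203
```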
https://en.wikipedia.org/wiki/AMELY | Amelogenin, Y isoform is a protein that in humans is encoded by the AMELY gene. AMELY is located on the Y chromosome and encodes a form of amelogenin. Amelogenin is an extracellular matrix protein involved in biomineralization during tooth enamel development.
Clinical significance
Mutations in the related AMELX gene on the X chromosome cause X-linked amelogenesis imperfecta. |
https://en.wikipedia.org/wiki/Duplex%20perception | Duplex perception refers to the linguistic phenomenon whereby "part of the acoustic signal is used for both a speech and a nonspeech percept." A listener is presented with two simultaneous, dichotic stimuli. One ear receives an isolated third-formant transition that sounds like a nonspeech chirp. At the same time the other ear receives a base syllable. This base syllable consists of the first two formants, complete with formant transitions, and the third formant without a transition. Normally, there would be peripheral masking in such a binaural listening task but this does not occur. Instead, the listener's percept is duplex, that is, the completed syllable is perceived and the nonspeech chirp is heard at the same time. This is interpreted as being due to the existence of a special speech module.
The phenomenon was discovered in 1974 by Timothy C. Rand at the Haskins Laboratories associated with Yale University.
Duplex perception was argued to be evidence for the existence of distinct systems for general auditory perception and speech perception. Notably, the same phenomenon can also be obtained with non-speech stimuli such as slamming doors.
See also
McGurk effect |
https://en.wikipedia.org/wiki/Volume%20testing | Volume testing belongs to the group of non-functional tests, a group of tests that are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data, in order to assess the system's performance with that amount of data in the database. Volume testing is regarded by some as a type of capacity testing, and is often deemed necessary because other types of tests normally use only small amounts of data. It is the only type of test which checks the ability of a system to handle large pools of data; for example, it can be used to stress a database to its maximum limit. While the amount is, in generic terms, usually the database size, it could also be the size of an interface file that is the subject of volume testing. For example, if one wants to volume test an application with a specific database size, the database is expanded to that size and the application's performance is then tested on it. Another example is when the application is required to interact with an interface file (any file, such as .dat or .xml); this interaction could be reading from and/or writing to the file. A sample file of the intended size is then created and used to test the application's performance. |
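As a sketch of the interface-file scenario above, the snippet below writes a test file of a chosen size and times a stand-in process_file routine; both the routine and the sizes are hypothetical placeholders for the real application under test.

```python
# Minimal volume test: generate a large input file, then time its processing.
import os, tempfile, time

def process_file(path):              # placeholder for the system under test
    with open(path, "rb") as f:
        while f.read(1 << 20):       # consume the file in 1 MiB chunks
            pass

def volume_test(size_mb):
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(1 << 20) * size_mb)   # size_mb MiB of test data
        path = f.name
    try:
        start = time.perf_counter()
        process_file(path)
        print(f"{size_mb} MiB processed in {time.perf_counter() - start:.2f} s")
    finally:
        os.remove(path)

volume_test(50)   # real volume tests would scale this up considerably
```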
https://en.wikipedia.org/wiki/Non-functional%20testing | Non-functional testing is the testing process of a software application, web application or system for its non-functional requirements: the way a system operates, rather than specific behaviours of that system. This is in contrast to functional testing, which tests against functional requirements that describe the functions of a system and its components.
The names of many non-functional tests are often used interchangeably because of the overlap in scope between various non-functional requirements. For example, software performance is a broad term that includes many specific requirements like reliability and scalability.
Non-functional testing includes:
Accessibility testing
Baseline testing
Compliance testing
Documentation testing
Endurance testing or reliability testing
Load testing
Localization testing and Internationalization testing
Performance testing
Recovery testing
Resilience testing
Security testing
Scalability testing
Stress testing
Usability testing
Volume testing |
https://en.wikipedia.org/wiki/Scalability%20testing | Scalability testing is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capability.
Performance, scalability and reliability testing are usually grouped together by software quality analysts.
The main goals of scalability testing are to determine the user limit for the web application and to ensure that the end-user experience, under a high load, is not compromised. One example is checking whether a web page can be accessed in a timely fashion with a limited delay in response. Another goal is to check whether the server can cope with a heavy load, i.e., whether it will crash.
Depending on the application being tested, different parameters are tested. If a webpage is being tested, the highest possible number of simultaneous users would be tested. Also depending on the application being tested are the attributes that are tested - these can include CPU usage, network usage or user experience.
Successful testing will expose most of the issues, which could be related to the network, database or hardware/software.
Creating a scalability test
When creating a new application, it is difficult to accurately predict the number of users in 1, 2 or even 5 years. Although an estimate can be made, it is not a definite number. An issue with an increasing number of users is that it can create new areas of failure. For example, if you have 100,000 new visitors, it's not just access to the application that could be a problem; you might also experience issues with the database where you need to store all the data of these new customers.
Increment loads
This is why when creating a scalability test, it is important to scale up in increments. These steps can be split into small, medium and high loads.
We must scale up in increments, as each stage tests a different aspect. Small loads ensure the system functions as it should on a basic level. Medium loads test that the system can function at its expected level. High loads tes |
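A minimal sketch of these incremental load steps against a hypothetical local endpoint follows; a real scalability test would use a dedicated load-testing tool and track more than worst-case latency.

```python
# Ramp load in increments (small, medium, high) and record worst latency.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical application under test

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

for users in (10, 100, 1000):    # small, medium and high load steps
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(one_request, range(users)))
    print(f"{users:5d} users: worst latency {max(latencies):.3f} s")
```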
https://en.wikipedia.org/wiki/Gittins%20index | The Gittins index is a measure of the reward that can be achieved through a given stochastic process with certain properties, namely: the process has an ultimate termination state and evolves with an option, at each intermediate state, of terminating. Upon terminating at a given state, the reward achieved is the sum of the probabilistic expected rewards associated with every state from the actual terminating state to the ultimate terminal state, inclusive. The index is a real scalar.
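To make the definition concrete, the sketch below computes a Gittins index for one state of a small Markov reward process via Whittle's retirement-option characterization (bisection over a retirement reward M, with value iteration inside); the two-state chain at the bottom is an invented example.

```python
# Gittins index via the retirement-option formulation.
import numpy as np

def gittins_index(P, r, state, beta=0.9, tol=1e-9):
    """Index of `state` for transition matrix P, reward vector r, discount beta."""
    n = len(r)

    def continuing_is_strictly_better(M):
        # Value iteration for V(s) = max(M, r(s) + beta * sum_s' P[s, s'] V(s')).
        V = np.full(n, M, dtype=float)
        while True:
            V_new = np.maximum(M, r + beta * (P @ V))
            if np.max(np.abs(V_new - V)) < tol:
                return V_new[state] > M + tol
            V = V_new

    # The index equals (1 - beta) * M*, where M* is the smallest retirement
    # reward at which retiring immediately is optimal in `state`.
    lo, hi = r.min() / (1 - beta), r.max() / (1 - beta)
    for _ in range(60):              # bisection on M
        mid = 0.5 * (lo + hi)
        if continuing_is_strictly_better(mid):
            lo = mid
        else:
            hi = mid
    return (1 - beta) * hi

# State 0 pays 1 and may fall into absorbing state 1, which pays 0.
P = np.array([[0.5, 0.5], [0.0, 1.0]])
r = np.array([1.0, 0.0])
print(gittins_index(P, r, state=0))  # ~1.0: state 0 pays 1 per step while occupied
```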
Terminology
To illustrate the theory we can take two examples from a developing sector, such as electricity-generating technologies: wind power and wave power. If we are presented with the two technologies when they are both proposed as ideas, we cannot say which will be better in the long run, as we have no data, as yet, on which to base our judgments. It would be easy to say that wave power would be too problematic to develop, as it seems easier to put up many wind turbines than to make long floating generators, tow them out to sea and lay the cables necessary.
If we were to make a judgment call at that early time in development we could be condemning one technology to being put on the shelf and the other would be developed and put into operation. If we develop both technologies we would be able to make a judgment call on each by comparing the progress of each technology at a set time interval such as every three months. The decisions we make about investment in the next stage would be based on those results.
In a 1979 paper called Bandit Processes and Dynamic Allocation Indices, John C. Gittins suggests a solution for problems such as this. He takes the two basic functions of a "scheduling problem" and a "multi-armed bandit" problem and shows how these problems can be solved using dynamic allocation indices. He first takes the "scheduling problem" and reduces it to a machine which has to perform jobs and has a set time period, every hour or day for example, to finish each job in |
https://en.wikipedia.org/wiki/Differential%20item%20functioning | Differential item functioning (DIF) is a statistical characteristic of an item that shows the extent to which the item might be measuring different abilities for members of separate subgroups. Average item scores for subgroups having the same overall score on the test are compared to determine whether the item is measuring in essentially the same way for all subgroups. The presence of DIF requires review and judgment, and it does not necessarily indicate the presence of bias. DIF analysis provides an indication of unexpected behavior of items on a test. An item does not display DIF merely because people from different groups have different probabilities of giving a certain response; it displays DIF if and only if people from different groups with the same underlying true ability have different probabilities of giving a certain response. Common procedures for assessing DIF are Mantel-Haenszel, item response theory (IRT) based methods, and logistic regression.
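Of the procedures just named, logistic regression is the simplest to sketch: regress the item response on the matching score, group membership, and their interaction; the group term then indicates uniform DIF and the interaction term nonuniform DIF. The snippet below runs this screen on synthetic data (all names, sizes and effect values are invented).

```python
# Logistic-regression DIF screen on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
total = rng.normal(size=n)              # matching variable (ability proxy)
group = rng.integers(0, 2, size=n)      # 0 = reference group, 1 = focal group
# Simulate an item with uniform DIF: the focal group finds it 0.5 logits harder.
logit = 1.2 * total - 0.5 * group
resp = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

df = pd.DataFrame({"resp": resp, "total": total, "group": group})
fit = smf.logit("resp ~ total + group + total:group", data=df).fit(disp=0)
# `group` coefficient ~ uniform DIF; `total:group` ~ nonuniform DIF.
print(fit.params[["group", "total:group"]])
```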
Description
DIF refers to differences in the functioning of items across groups, oftentimes demographic, which are matched on the latent trait or more generally the attribute being measured by the items or test. It is important to note that when examining items for DIF, the groups must be matched on the measured attribute, otherwise this may result in inaccurate detection of DIF. In order to create a general understanding of DIF or measurement bias, consider the following example offered by Osterlind and Everson (2009). In this case, Y refers to a response to a particular test item which is determined by the latent construct being measured. The latent construct of interest is referred to as theta (θ) where Y is an indicator of θ which can be arranged in terms of the probability distribution of Y on θ by the expression f(Y)|θ. Therefore, response Y is conditional on the latent trait (θ). Because DIF examines differences in the conditional probabilities of Y between groups, let us label the groups as the "reference" and "fo |
https://en.wikipedia.org/wiki/Boundary%20scan%20description%20language | Boundary scan description language (BSDL) is a hardware description language for electronics testing using JTAG. It has been added to the IEEE Std. 1149.1, and BSDL files are increasingly well supported by JTAG tools for boundary scan applications, and by test case generators.
BSDL overview
BSDL was originally a subset of VHDL. Since IEEE 1149.1-2013, however, it is no longer a "proper" subset of VHDL, but it is still considered VHDL-based. It is formally defined in IEEE Standard 1149.1 Annex B. Each BSDL file describes one version of an IC and contains as many package pin maps as are available for a particular die. This is necessary because, for example, two different BGA packages will have different balls; even if a ball has the same name it may be bonded to a different signal on the other package, and sometimes bondings change between revisions.
Each digital signal (pin or ball) on the package is defined, as are the registers and opcodes used in an IEEE 1149.1, IEEE 1149.6, IEEE 1149.8.1, IEEE 1532 or IEEE 1149.4 compliant IC. There is one instruction register, a bypass register of at least 1 bit, one boundary scan register and optionally a 32-bit device_id register. The registers other than the instruction register are called TDRs or Test Data Registers. The boundary scan register (BSR) is unique in that it is the register which is also mapped to the I/O of the device. Many of the BSDL definitions are sets of single long string constants.
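As an illustration of that long-string-constant style, an instruction opcode definition for a hypothetical device named example_ic might look like the following sketch (illustrative only, not drawn from a real BSDL file):

```vhdl
attribute INSTRUCTION_LENGTH of example_ic : entity is 4;
attribute INSTRUCTION_OPCODE of example_ic : entity is
  "BYPASS (1111)," &
  "EXTEST (0000)," &
  "SAMPLE (0001)," &
  "IDCODE (0010)";
```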
Note that registers not involved in boundary scan are often not defined. Instructions that are not publicly defined are included in the INSTRUCTION_PRIVATE section. Microprocessor register descriptions in BSDL typically do not include enough information to aid in building a 1149.1 based emulator or debugger.
External links
Free BSDL Compiler - Validates Grammar, Semantics and Syntax according to IEEE standard rules
Free public library of BSDL files for many devices
BSDL Tutorial
BSDL Files
BSDL & SVF file formats, functions and feat |
https://en.wikipedia.org/wiki/Local%20independence | Within statistics, local independence is the underlying assumption of latent variable models.
The observed items are conditionally independent of each other given an individual score on the latent variable(s). This means that the latent variable explains why the observed items are related to one another. This can be explained by the following example.
Example
Local independence can be explained by an example of Lazarsfeld and Henry (1968). Suppose that a sample of 1000 people was asked whether they read journals A and B. Their responses were as follows:

                  Read B   Did not read B   Total
 Read A              260              240     500
 Did not read A      140              360     500
 Total               400              600    1000
One can easily see that the two variables (reading A and reading B) are strongly related, and thus dependent on each other. Readers of A tend to read B more often (52%) than non-readers of A (28%). If reading A and B were independent, then the formula P(A&B) = P(A)×P(B) would hold. But 260/1000 does not equal 500/1000 × 400/1000 = 200/1000. Thus, reading A and B are statistically dependent on each other.
If the analysis is extended to also look at the education level of these people, the following tables are found.

 High education     Read B   Did not read B   Total
 Read A                240              160     400
 Did not read A         60               40     100
 Total                 300              200     500

 Low education      Read B   Did not read B   Total
 Read A                 20               80     100
 Did not read A         80              320     400
 Total                 100              400     500
Again, if reading A and B were independent, then P(A&B) = P(A)×P(B) would hold separately for each education level. And, in fact, 240/500 = 300/500×400/500 and 20/500 = 100/500×100/500. Thus, if a separation is made between people with high and low education backgrounds, there is no dependence between readership of the two journals. That is, reading A and B are independent once educational level is taken into consideration. The educational level 'explains' the difference in reading of A and B. If educational level is never actually observed or known, it may still appear as a latent variable in the model.
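The arithmetic above can be checked mechanically. The short sketch below tests P(A&B) = P(A)×P(B) on the whole sample and then within each education level, using the counts from the tables.

```python
# Check (conditional) independence of journal readership.
def independent(n_ab, n_a, n_b, n, tol=1e-12):
    return abs(n_ab / n - (n_a / n) * (n_b / n)) < tol

print(independent(260, 500, 400, 1000))  # False: A and B are dependent overall
print(independent(240, 400, 300, 500))   # True: independent given high education
print(independent(20, 100, 100, 500))    # True: independent given low education
```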
See also
Conditional independence |
https://en.wikipedia.org/wiki/Oxidative%20deamination | Oxidative deamination is a form of deamination that generates α-keto acids and other oxidized products from amine-containing compounds, and occurs primarily in the liver. Oxidative deamination is stereospecific: it is catalyzed by either L- or D-amino acid oxidase, each of which acts on the corresponding stereoisomer of its substrate; L-amino acid oxidase is present only in the liver and kidney. Oxidative deamination is an important step in the catabolism of amino acids, generating a more metabolizable form of the amino acid, and also generating ammonia as a toxic byproduct. The ammonia generated in this process can then be neutralized into urea via the urea cycle.
Much of the oxidative deamination occurring in cells involves the amino acid glutamate, which can be oxidatively deaminated by the enzyme glutamate dehydrogenase (GDH), using NAD or NADP as a coenzyme. This reaction generates α-ketoglutarate (α-KG) and ammonia. Glutamate can then be regenerated from α-KG via the action of transaminases or aminotransferase, which catalyze the transfer of an amino group from an amino acid to an α-keto acid. In this manner, an amino acid can transfer its amine group to glutamate, after which GDH can then liberate ammonia via oxidative deamination. This is a common pathway during amino acid catabolism.
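For reference, the overall GDH reaction described above follows the standard textbook stoichiometry:

L-glutamate + H2O + NAD(P)+ ⇌ α-ketoglutarate + NH4+ + NAD(P)H + H+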
Another enzyme responsible for oxidative deamination is monoamine oxidase, which catalyzes the deamination of monoamines via the addition of oxygen. This generates the corresponding ketone- or aldehyde-containing form of the molecule, along with ammonia. Monoamine oxidases MAO-A and MAO-B play vital roles in the degradation and inactivation of monoamine neurotransmitters such as serotonin and epinephrine. Monoamine oxidases are important drug targets, targeted by MAO inhibitors (MAOIs) such as selegiline. Glutamate dehydrogenase plays an important role in oxidative deamination |
https://en.wikipedia.org/wiki/Serial%20Vector%20Format | Serial Vector Format (SVF) is a file format that contains boundary scan vectors to be sent to an electronic circuit using a JTAG interface. Boundary scan vectors consist of the following data:
Stimulus data: This is data to be sent to a device or electronic circuit
Expected response: This is the data the device or circuit is expected to send back if there is no error
Mask data: Defines which bits in the expected response are valid; other bits of the device's response are unknown and must be ignored when comparing the expected response and the data returned from the circuit
Additional information on how to send the data (e.g. maximum clock frequency)
The SVF standard was jointly developed by Texas Instruments and Teradyne. Control over the format has since been handed off to boundary-scan solution provider ASSET InterTech. The most recent revision is Revision E.
SVF files are used to transfer boundary scan data between tools. As an example a VHDL compiler may create an SVF file that is read by a tool for programming CPLDs.
The SVF file is defined as an ASCII file that consists of a set of SVF statements. The maximum number of characters allowed on a line is 256, although one SVF statement can span more than one line. Each statement consists of a command and associated parameters. Each SVF statement is terminated by a semicolon. SVF is not case sensitive. Comments can be inserted into a SVF file after an exclamation point ‘!’ or a pair of slashes ‘//’. Either ‘//’ or ‘!’ will comment out the remainder of the line.
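A minimal, hand-written SVF fragment illustrating these rules might look like this (all device-specific values are invented):

```
! Load an 8-bit instruction, then scan a 16-bit data register and
! compare the masked response.
TRST OFF;
ENDIR IDLE;
ENDDR IDLE;
FREQUENCY 1E6 HZ;
SIR 8 TDI (AA);
SDR 16 TDI (1234) TDO (5678) MASK (FFFF);
```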
SVF commands
ENDDR: Specifies default end state for DR scan operations.
ENDIR: Specifies default end state for IR scan operations.
FREQUENCY: Specifies maximum test clock frequency for IEEE 1149.1 bus operations.
HDR: (Header Data Register) Specifies a header pattern that is prepended to the beginning of subsequent DR scan operations.
HIR: (Header Instruction Register) Specifies a header pattern that is prepended to the beginning of subse |
https://en.wikipedia.org/wiki/Talarian | Talarian was a provider of real-time infrastructure software. Now part of TIBCO, it was a veteran provider of message-oriented middleware.
Talarian was a member of the Business Integration Group (BIG), the Internet Protocol Multicast Initiative (IPMI), the Securities Industry Middleware Council (SIMC), the Object Management Group (OMG), and the Internet Engineering Task Force (IETF).
SmartSockets
SmartSockets was the main product of Talarian. It is a real-time message-oriented middleware (MOM) which is scalable and fault-tolerant. Its programming model is built specifically to offer high-speed interprocess communication (IPC) for multiprocessor architectures, scalability and reliability.
It supports a variety of communication paradigms including publish-subscribe, adaptive multicast, redundant connections, peer-to-peer, and RPC.
Included as part of the SmartSockets package are graphical tools for monitoring and debugging applications.
It is supported on a wide range of platforms:
HP-UX
AIX
Linux
Compaq NSK/OSS
Tru64
OpenVMS
Windows
Solaris
Irix
VxWorks
Applications using SmartSockets can be developed with the following languages:
C API
C++ Class Libraries
ActiveX Components
Java Class Library
SmartSockets is now a product of TIBCO. See acquisition below.
Acquisition by TIBCO
In January 2002, TIBCO acquired Talarian for approximately $115 million. Talarian had been TIBCO's primary competitor in the delivery of high-performance messaging solutions.
TIBCO paid $5.30 per share, half in stock and half in cash, for each of Talarian's outstanding shares.
Customers
Talarian customers were large end-users, OEMs and systems integrators in need of solutions where real-time data flow supports high information volumes.
Notable customers included:
Boeing
AT&T
Hewlett-Packard
Cisco
MTR
New York Stock Exchange
NASA's ground control station for the Hubble Space Telescope |
https://en.wikipedia.org/wiki/List%20of%20semiconductor%20IP%20core%20vendors | The following is a list of notable vendors in the business of licensing IP cores
Analog-to-Digital Converters
S3 Group
Cadence Design Systems
Cosmic Circuits
Dolphin Integration
Synopsys
Broadband modem and error correction
Cadence Design Systems
CEVA, Inc.
IMEC
On2 Technologies (through acquisition of Hantro)
Synopsys (through acquisition of Virage Logic)
Tensilica (now part of Cadence Design Systems)
Digital to Analog Converters
S3 Group
Cadence Design Systems
Cosmic Circuits (now part of Cadence Design Systems)
Dolphin Integration
Digital Signal Processors
Synopsys - ARC
Tensilica - Xtensa (now part of Cadence Design Systems)
DRAM
DRAM controllers
Actel
Altera
Arm
Barco Silex
Cadence Design Systems (through acquisition of Denali Software)
Faraday Technology
Lattice Semiconductor
Rambus
Synopsys
Xilinx
DRAM PHYs
Arm
Cadence Design Systems (through acquisition of Denali Software)
Synopsys (through acquisition of Virage Logic)
High-Bandwidth Memory - HBM PHYs
eSilicon
Rambus
Synopsys
Hybrid Memory Cube - HMC Controllers
Open-Silicon
University of Heidelberg
Communication IP
Network-on-Chip (NoC) / On-Chip Interconnect
Arteris IP
Arm
Bluetooth SW Stack, Link Layer and PHY
Arm (through acquisition of Dicentric and Sunrise Micro Devices)
Ethernet PHY
Arm (through acquisition of Artisan Components)
Cadence Design Systems
V by One
Socionext - HV Series
General purpose microprocessors
Arm - Arm Cortex and Neoverse processors
CEVA, Inc. - CEVA-X DSP
Dolphin Integration - 8051, 80251
eSi-RISC - eSi-RISC
Freescale and others - ColdFire
IBM and others - PowerPC
Infineon Technologies - Tricore
MIPS - MIPS
OpenCores - OpenRISC
Renesas - SuperH
Andes Technology, Codasip, SiFive and others - RISC-V
Socionext - Fujitsu_FR
Sun Microsystems and others - OpenSPARC
Synopsys - ARC
Tensilica - Xtensa (now part of Cadence Design Systems)
Western Design Center - 6502, 65816, 65xx
Xilinx - MicroBlaze
G |
https://en.wikipedia.org/wiki/Stream%20ripping | Stream ripping (also called stream recording) is the process of saving data streams to a file. The process is sometimes referred to as destreaming.
Stream ripping is most often referred to in the context of saving audio or video from streaming media websites and services such as YouTube outside of the officially provided means of offline playback (if any), using unsanctioned software and tools. This is often prohibited under each website or service's Terms of Use.
Legality
The Recording Industry Association of America (RIAA) has taken stances against tools that are, in particular, used to rip content from YouTube, citing that their use to download music from the website and convert it to audio formats constitutes a violation of its members' copyrights. The RIAA has targeted various stream ripping websites (including the websites themselves, and listings for them via search engines) under the anti-circumvention provisions of the U.S. Digital Millennium Copyright Act (DMCA), under its claim that a "rolling cipher" used by YouTube to generate the URL for the video file itself constitutes a technical protection measure, since it is "intended to inhibit direct access to the underlying YouTube video files, thereby preventing or inhibiting the downloading, copying, or distribution of the video files". Unlike the more common forms of takedowns performed under the Online Copyright Infringement Liability Limitation Act, there is no scheme of counter-notices for such takedowns. These actions have faced criticism from observers noting that there are legitimate uses for these services beyond ripping music, such as downloading video content needed to exercise one's right to fair use, or explicit rights of reuse (such as free content licenses) granted by a content creator.
In October 2020, the RIAA similarly issued takedowns to code hosting service GitHub targeting youtube-dl, an open source tool for similar purposes, also citing circumvention of the aforementioned "rolling cipher |
https://en.wikipedia.org/wiki/Staling | Staling, or "going stale", is a chemical and physical process in bread and similar foods that reduces their palatability. Stale bread is dry and hard, making it suitable for different culinary uses than fresh bread. Countermeasures and destaling techniques may reduce staling.
Mechanism and effects
Staling is a chemical and physical process in bread and similar foods that reduces their palatability. Staling is not simply a drying-out process due to evaporation. One important mechanism is the migration of moisture from the starch granules into the interstitial spaces, degelatinizing the starch; stale bread's leathery, hard texture results from the starch amylose and amylopectin molecules realigning themselves, causing recrystallisation.
Stale bread
Stale bread is dry and hard. Bread will stale even in a moist environment, and stales most rapidly at temperatures just above freezing. While bread that has been frozen when fresh may be thawed acceptably, bread stored in a refrigerator will have increased staling rates.
Culinary uses
Many classic dishes rely upon otherwise unpalatable stale bread. Examples include bread sauce, bread dumplings, and flummadiddle, an early American savoury pudding. There are also many types of bread soups such as wodzionka (in Silesian cuisine) and ribollita (in Italian cuisine). An often-sweet dish is bread pudding. Cubes of stale bread can be dipped in cheese fondue, or seasoned and baked in the oven to become croutons, suitable for scattering in salads or on top of soups. Slices of stale bread soaked in an egg and milk mixture and then fried turn into French toast (known in French as pain perdu - lost bread). In Spanish and Portuguese cuisines migas is a breakfast dish using stale bread, and in Tunisian cuisine leblebi is a soup of chickpeas and stale bread.
Stale bread or breadcrumbs made from it can be used to "stretch" meat in dishes such as haslet (a type of meatloaf in British cuisine, or meatloaf itself) and garbure (a stew |
https://en.wikipedia.org/wiki/Specific%20ultraviolet%20absorbance | Specific ultraviolet absorbance (SUVA) is the absorbance of ultraviolet light in a water sample at a specified wavelength, normalized for dissolved organic carbon (DOC) concentration. SUVA at specific wavelengths is used analytically to measure the aromatic character of dissolved organic matter by detecting the density of electron conjugation associated with aromatic bonds.
Derivation
To derive SUVA, the absorbance of UVC light (a UV spectrum subtype) at 254 nm or 280 nm is first measured in units of absorbance per metre of path length; the sample must often be diluted with ultrapure water because absorbance can be high. Because increasing dissolved organic carbon concentration increases absorbance in the UV range, the UV absorbance is normalized to the concentration of dissolved organic carbon in mg per L to reveal differences in the aromatic quality of the water.
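A minimal sketch of that calculation (illustrative values, assuming a 1 cm cuvette):

```python
def suva254(absorbance_254, path_length_cm, doc_mg_per_l):
    """SUVA254 in L/(mg*m): UV absorbance at 254 nm converted to
    per-metre path length, then normalized by DOC concentration."""
    absorbance_per_m = (absorbance_254 / path_length_cm) * 100  # cm^-1 -> m^-1
    return absorbance_per_m / doc_mg_per_l

# e.g. A254 = 0.15 in a 1 cm cuvette with DOC = 5 mg/L
print(suva254(0.15, 1.0, 5.0))  # -> 3.0 L/(mg*m)
```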
Aromatic character is used in the study of dissolved organic matter from mineral or organic soils as an assay of whether the dissolved organic carbon in the water is labile (a ready source of energy) or comes from a relatively old, recalcitrant source of carbon. However, although SUVA is a good indicator of aromaticity, caution must be used when inferring reactivity.
Measures of water purity often rely on measuring turbidity, not aromaticity. |
https://en.wikipedia.org/wiki/Kinematic%20diagram | In mechanical engineering, a kinematic diagram or kinematic scheme (also called a joint map or skeleton diagram) illustrates the connectivity of links and joints of a mechanism or machine rather than the dimensions or shape of the parts. Often links are presented as geometric objects, such as lines, triangles or squares, that support schematic versions of the joints of the mechanism or machine.
For example, the figures show the kinematic diagrams (i) of the slider-crank that forms a piston and crank-shaft in an engine, and (ii) of the first three joints for a PUMA manipulator.
(Figure: a PUMA robot and its kinematic diagram.)
Linkage graph
A kinematic diagram can be formulated as a graph by representing the joints of the mechanism as vertices and the links as edges of the graph. This version of the kinematic diagram has proven effective in enumerating kinematic structures in the process of machine design.
An important consideration in this design process is the degree of freedom of the system of links and joints, which is determined using the Chebychev–Grübler–Kutzbach criterion.
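For planar mechanisms the criterion reads M = 3(N − 1) − 2·J1 − J2; a small sketch (the slider-crank counts are the standard textbook ones):

```python
def planar_mobility(n_links, j1, j2):
    """Chebychev-Grubler-Kutzbach criterion for planar mechanisms.
    n_links: number of links including the frame
    j1: one-degree-of-freedom joints (pins, sliders)
    j2: two-degree-of-freedom (higher-order) joints"""
    return 3 * (n_links - 1) - 2 * j1 - j2

# Slider-crank: 4 links (frame, crank, connecting rod, slider),
# 3 revolute joints + 1 prismatic joint, no higher-order joints.
print(planar_mobility(4, 4, 0))  # -> 1 degree of freedom
```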
Elements of machines
Elements of kinematic diagrams include the frame, which is the frame of reference for all the moving components, as well as links (kinematic pairs) and joints. Primary joints include pins, sliders and other elements that allow pure rotation or pure linear motion. Higher-order joints also exist that allow a combination of rotation and linear motion. Kinematic diagrams also include points of interest and other important components.
See also
Free body diagram
Kinematic synthesis
Left-hand–right-hand activity chart |
https://en.wikipedia.org/wiki/Toshiba%20T1000 | The Toshiba T1000 is a discontinued laptop computer manufactured by the Toshiba Corporation in 1987. It has a similar specification to the IBM PC Convertible, with a 4.77 MHz 80C88 processor, 512 KB of RAM, and a monochrome CGA-compatible LCD. Unlike the Convertible, it includes a standard serial port and parallel port, connectors for an external monitor, and a real-time clock.
Unusually for an IBM compatible PC, the T1000 contains a 256 KB ROM with a copy of MS-DOS 2.11. This acts as a small, read-only hard drive. Alternative operating systems can still be loaded from the floppy drive, or (if present) the RAM disk.
Along with the T1200 and earlier T1100, the Toshiba T1000 was one of the early computers to feature a "laptop" form factor and battery-powered operation.
Reception
PC Magazine in 1988 named the Toshiba T1000 an "Editor's Choice" among 12 tested portable computers. One reviewer called it "the first real DOS laptop" and a plausible replacement for his Tandy 200, while another praised its durability after 60,000 miles of traveling and "incredible bargain" $800 street price. BYTE in 1989 listed the T1000 as among the "Excellence" winners of the BYTE Awards, stating that it "takes portability to the limit ... as self-contained as you can get and still have a real computer that can handle real-world workloads." Noting that it was available for as little as $850, the magazine reported that "Many of us are in love with this one." In the same issue, Jerry Pournelle praised it as a "little gem". While acknowledging that it cost more than the TRS-80 Model 100 and NEC PC-8201, he believed that "you get quite a lot for the added weight and price", and reported that "Many writers swear by the T1000. David Drake loves his."
Specification
Software Compatibility
Compatible with software written for the IBM PC/XT using a color graphics adapter (CGA) display
Interfaces
RGB (CGA) color video port
Composite monochrome (B&W) video port
RS-232-C serial port
Parallel port |
https://en.wikipedia.org/wiki/Monad%20%28philosophy%29 | The term monad is used in some cosmic philosophy and cosmogony to refer to a most basic or original substance. As originally conceived by the Pythagoreans, the Monad is the Supreme Being, divinity or the totality of all things. According to some philosophers of the early modern period, most notably Gottfried Wilhelm Leibniz, there are infinite monads, which are the basic and immaterial elementary particles, or simplest units, that make up the universe.
Historical background
According to Hippolytus, the worldview was inspired by the Pythagoreans, who called the first thing that came into existence the "monad", which begat (bore) the dyad (from the Greek word for two), which begat the numbers, which begat the point, begetting lines or finiteness, etc. It meant divinity, the first being, or the totality of all beings, referring in cosmogony (creation theories) variously to source acting alone and/or an indivisible origin and equivalent comparators.
Pythagorean and Neoplatonic philosophers like Plotinus and Porphyry condemned Gnosticism (see Neoplatonism and Gnosticism) for its treatment of the monad.
In his Latin treatise, Alan of Lille affirms "God is an intelligible sphere, whose center is everywhere and whose circumference is nowhere." The French philosopher Rabelais ascribed this proposition to Hermes Trismegistus.
The symbolism is a free exegesis related to the Christian Trinity. Alan of Lille cites Trismegistus' Book of the Twenty-Four Philosophers, where it says a Monad can uniquely beget another Monad; many followers of this religion saw in this the coming into being of God the Son from God the Father, whether by way of generation or by way of creation. This statement is also shared by the pagan author of the Asclepius, which has sometimes been identified with Trismegistus.
The Book of the Twenty-Four Philosophers completes the scheme adding that the ardor of the second Monad to the first Monad would be the Holy Ghost. It closes a physical circle in a logi |
https://en.wikipedia.org/wiki/Audiokinetic%20Wwise | Wwise (Wave Works Interactive Sound Engine) is Audiokinetic's software for interactive media and video games, available for free to non-commercial users and under license for commercial video game developers. It features an audio authoring tool and a cross-platform sound engine.
Description
The Wwise authoring application uses a graphical interface to centralize all aspects of audio creation. The functionality in this interface allows sound designers to:
Import audio files for use in video games
Apply audio plug-in effects
Mix in real-time
Define game states
Simulate audio environments
Manage sound integration
Apply the Windows Spatial Audio API, or Dolby Atmos.
Wwise allows for on-the-fly audio authoring directly in game. Over a local network, users can create, audition, and tweak sound effects and subtle sound behaviors while the game is being played on another host.
Wwise also includes the following components:
Cross-platform sound engine (Wwise Authoring)
Multichannel Creator (allows creation of multichannel audio)
Plug-in architecture for source, effect, and source control plug-ins, part of Wwise Launcher
SoundFrame API
Wave Viewer (allows for sampling of WAV audio files)
Supported operating systems
Wwise supports the following platforms:
Adoption by video games
Recent titles which have used Audiokinetic include:
Commercial game engine integration
Wwise is intended to be compatible with proprietary and commercial engines.
Unreal Engine 3
Unreal Engine 4
Unity
Cocos2d-x
CryEngine
Orochi 3
Gamebryo
Fox Engine
Autodesk Stingray
Open 3D Engine (which superseded Amazon Lumberyard)
Theatre
One play has used Wwise and its Interactive Music capabilities for live performance:
Dom Duardos by Gil Vicente, co-produced by Companhia Contigo Teatro and Grupo de Mímica e Teatro Oficina Versus, with music by Pedro Macedo Camacho
See also
FMOD
OpenAL
Sound design
Video game development |
https://en.wikipedia.org/wiki/GRAIL | The Gravity Recovery and Interior Laboratory (GRAIL) was an American lunar science mission in NASA's Discovery Program which used high-quality gravitational field mapping of the Moon to determine its interior structure. The two small spacecraft GRAIL A (Ebb) and GRAIL B (Flow) were launched on 10 September 2011 aboard a single launch vehicle: the most powerful configuration of a Delta II, the 7920H-10. GRAIL A separated from the rocket about nine minutes after launch; GRAIL B followed about eight minutes later. They arrived at their orbits around the Moon 25 hours apart. The first probe entered orbit on 31 December 2011 and the second followed on 1 January 2012. The two spacecraft impacted the lunar surface on 17 December 2012.
Overview
Maria Zuber of the Massachusetts Institute of Technology was GRAIL's principal investigator. NASA's Jet Propulsion Laboratory managed the project. NASA budgeted US$496 million for the program to include spacecraft and instrument development, launch, mission operations, and science support. Upon launch the spacecraft were named GRAIL A and GRAIL B and a contest was opened to school children to select names. Nearly 900 classrooms from 45 states, Puerto Rico and the District of Columbia, participated in the contest. The winning names, Ebb and Flow, were suggested by 4th grade students at Emily Dickinson Elementary School in Bozeman, Montana.
Each spacecraft transmitted and received telemetry from the other spacecraft and Earth-based facilities. By measuring the change in distance between the two spacecraft, the gravity field and geological structure of the Moon were obtained. The two spacecraft were able to detect very small changes in the distance between one another: changes as small as one micrometre were detectable and measurable. The gravitational field of the Moon was thus mapped in unprecedented detail.
Objectives
Map the structure of the lunar crust and lithosphere
Understand the asymmetric thermal evolution of the Moon |
https://en.wikipedia.org/wiki/Victor%20Flynn | Eugene Victor Flynn is an American-born mathematician. He is currently a professor of mathematics at the University of Oxford.
Biography
Flynn was born in Washington, D.C., the son of academic James Flynn who took up a position at the University of Otago. He first studied at the University of Otago, before taking a PhD at Trinity College, Cambridge, supervised by J. W. S. Cassels. He then spent a year as an assistant professor at the University of Michigan, returning to Cambridge as a research fellow at Robinson College. He then moved to the University of Liverpool, including four years as head of the pure mathematics department there. In 2005 he left Liverpool to move to the University of Oxford; he took up a fellowship at New College in October 2005 and was appointed a university professor of mathematics in October 2006.
His fields of specialisation are the arithmetic of elliptic curves and algebraic geometry.
Family
Flynn's father, James Flynn, was primarily involved in the research of intelligence and is noteworthy for his work on the Flynn effect; he died in 2020. Victor Flynn's parents met on a picket line protesting against segregation in the USA. Their daughter Natalie Flynn is a clinical psychologist in Auckland, New Zealand. |
https://en.wikipedia.org/wiki/Parthenogenesis | Parthenogenesis (from the Greek parthenos, 'virgin', and genesis, 'origin') is a natural form of asexual reproduction in which growth and development of embryos occur in a gamete (egg or sperm) without combining with another gamete (e.g., egg and sperm fusing). In animals, parthenogenesis means development of an embryo from an unfertilized egg cell. In plants, parthenogenesis is a component process of apomixis. In algae, parthenogenesis can mean the development of an embryo from either an individual sperm or an individual egg.
Parthenogenesis occurs naturally in some plants, algae, invertebrate animal species (including nematodes, some tardigrades, water fleas, some scorpions, aphids, some mites, some bees, some Phasmatodea and parasitic wasps) and a few vertebrates (such as some fish, amphibians, reptiles and birds). This type of reproduction has been induced artificially in a few species including fish, amphibians, and mice.
Normal egg cells form in the process of meiosis and are haploid, with half as many chromosomes as their mother's body cells. Haploid individuals, however, are usually non-viable, and parthenogenetic offspring usually have the diploid chromosome number. Depending on the mechanism involved in restoring the diploid number of chromosomes, parthenogenetic offspring may have anywhere between all and half of the mother's alleles. In some types of parthenogenesis the offspring having all of the mother's genetic material are called full clones and those having only half are called half clones. Full clones are usually formed without meiosis. If meiosis occurs, the offspring will get only a fraction of the mother's alleles since crossing over of DNA takes place during meiosis, creating variation.
Parthenogenetic offspring in species that use either the XY or the X0 sex-determination system have two X chromosomes and are female. In species that use the ZW sex-determination system, they have either two Z chromosomes (male) or two W chromosomes (mostly non-viable but rarely a female), or th |
https://en.wikipedia.org/wiki/MEF%20Forum | MEF, founded in 2001, is a nonprofit international industry consortium of network, cloud, and technology providers. MEF, originally known as the Metro Ethernet Forum, was dedicated to Carrier Ethernet networks and services. In recent years it significantly broadened its scope, which now includes underlay connectivity services such as Optical, Carrier Ethernet, and IP, along with overlay digital services such as SD-WAN, as well as APIs to support orchestration of the service lifecycle (termed Lifecycle Service Orchestration, or LSO, APIs, based on MEF 55, Lifecycle Service Orchestration (LSO): Reference Architecture and Framework) for connectivity and digital services. Along with this change in scope, MEF re-branded from the "Metro Ethernet Forum" to simply "MEF". "MEF Forum" is MEF's legal name.
The forum is composed of service providers, incumbent local exchange carriers, network equipment vendors, cloud providers and other related organizations, within the information and communications technology industry, that share an interest in connectivity services, digital services, automation, orchestration and standardization to pragmatically enhance and accelerate the industry's digital transformation. There are approximately 200 MEF members, many of which have achieved MEF 3.0 certification of their MEF-standardized services or technology.
MEF comprises multiple technical committees to develop, evolve and promote the adoption of MEF standard services and interfaces. The forum regularly makes recommendations to, and collaborates with, existing standards bodies, such as the Internet Engineering Task Force (IETF) and the Institute of Electrical and Electronics Engineers (IEEE).
History
MEF was preceded by the Ethernet in the First Mile Alliance (EFMA), also a nonprofit international industry consortium, which was established in 2001 to promote standards-based Ethernet in the First Mile (EFM) technologies and products and position EFM as a networking technolog |
https://en.wikipedia.org/wiki/Kevin%20Buzzard | Kevin Mark Buzzard (born 21 September 1968) is a British mathematician and currently a professor of pure mathematics at Imperial College London. He specialises in arithmetic geometry and the Langlands program.
Biography
While attending the Royal Grammar School, High Wycombe he competed in the International Mathematical Olympiad, where he won a bronze medal in 1986 and a gold medal with a perfect score in 1987.
He obtained a B.A. degree (Parts I & II) in Mathematics at Trinity College, Cambridge, where he was Senior Wrangler (achiever of the highest mark), and went on to complete the C.A.S.M. He then completed his dissertation, entitled The levels of modular representations, under the supervision of Richard Taylor, for which he was awarded a Ph.D. degree.
He took a lectureship at Imperial College London in 1998, a readership in 2002, and was appointed to a professorship in 2004. From October to December 2002 he held a visiting professorship at Harvard University, having previously worked at the Institute for Advanced Study, Princeton (1995), the University of California Berkeley (1996-7), and the Institute Henri Poincaré in Paris (2000).
He was awarded a Whitehead Prize by the London Mathematical Society in 2002 for "his distinguished work in number theory", and the Senior Berwick Prize in 2008.
In 2017, he launched an ongoing formalization project and blog involving the Lean theorem prover and has since promoted the use of computer proof assistants in future mathematics research. He gave a plenary lecture at the International Congress of Mathematicians in 2022 on the topic.
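As a flavor of what such formalization looks like, here is a tiny Lean 4 proof (an illustrative example only, not drawn from Buzzard's projects):

```lean
-- The proof assistant mechanically checks that this statement follows
-- from the library lemma Nat.add_comm.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```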
He was the PhD supervisor to musician Dan Snaith, also known as Caribou, who received a PhD in mathematics from Imperial College London for his work on Overconvergent Siegel Modular Symbols. |
https://en.wikipedia.org/wiki/Angular%20velocity%20tensor | The angular velocity tensor is a skew-symmetric matrix defined by:
The scalar elements above correspond to the angular velocity vector components .
This is an infinitesimal rotation matrix.
The linear mapping Ω acts as a cross product: $\Omega \mathbf{r} = \boldsymbol{\omega} \times \mathbf{r}$,
where $\mathbf{r}$ is a position vector.
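A quick numerical check of this identity (a sketch; NumPy's cross product serves as the reference):

```python
import numpy as np

def omega_tensor(w):
    """Skew-symmetric angular velocity tensor built from w = (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w = np.array([1.0, 2.0, 3.0])
r = np.array([0.5, -1.0, 2.0])
assert np.allclose(omega_tensor(w) @ r, np.cross(w, r))  # Omega r = w x r
```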
When multiplied by a time difference, it results in the angular displacement tensor.
Calculation of angular velocity tensor of a rotating frame
A vector $\mathbf{r}$ undergoing uniform circular motion around a fixed axis satisfies $\frac{d\mathbf{r}}{dt} = \boldsymbol{\omega} \times \mathbf{r} = \Omega \mathbf{r}$.
Let $A(t)$ be the orientation matrix of a frame, whose columns $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$ are the moving orthonormal coordinate vectors of the frame. We can obtain the angular velocity tensor Ω(t) of A(t) as follows:
The angular velocity must be the same for each of the column vectors $\mathbf{e}_i$, so we have $\frac{dA}{dt} = \Omega A$,
which holds even if A(t) does not rotate uniformly. Therefore, the angular velocity tensor is:
$$\Omega = \frac{dA}{dt} A^{-1} = \frac{dA}{dt} A^{\mathsf{T}},$$
since the inverse of an orthogonal matrix $A$ is its transpose $A^{\mathsf{T}}$.
Properties
In general, the angular velocity in an n-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor.
This tensor Ω will have $n(n-1)/2$ independent components, which is the dimension of the Lie algebra of the Lie group of rotations of an n-dimensional inner product space.
Duality with respect to the velocity vector
In three dimensions, angular velocity can be represented by a pseudovector because second rank tensors are dual to pseudovectors in three dimensions. Since the angular velocity tensor Ω = Ω(t) is a skew-symmetric matrix,
$$\Omega = \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix},$$
its Hodge dual is a vector, which is precisely the previous angular velocity vector $\boldsymbol{\omega}$.
Exponential of Ω
If we know an initial frame A(0) and we are given a constant angular velocity tensor Ω, we can obtain A(t) for any given t. Recall the matrix differential equation $\frac{dA}{dt} = \Omega A$.
This equation can be integrated to give $A(t) = e^{\Omega t} A(0)$,
which shows a connection with the Lie group of rotations.
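A sketch of this integration using the matrix exponential (SciPy's expm):

```python
import numpy as np
from scipy.linalg import expm

# Constant rotation about the z-axis at 1 rad/s: A(t) = expm(Omega*t) @ A(0)
Omega = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])
A_t = expm(Omega * (np.pi / 2)) @ np.eye(3)  # a quarter turn
assert np.allclose(A_t @ A_t.T, np.eye(3))   # A(t) remains orthogonal
```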
Ω is skew-symmetric
We prove that the angular velocity tensor is skew-symmetric. |
https://en.wikipedia.org/wiki/Toshiba%20T1200 | The Toshiba T1200 is a discontinued laptop that was manufactured by the Toshiba Corporation, first made in 1987. It is an upgraded version of the Toshiba T1100 Plus.
It is equipped with an Intel 80C86 processor and RAM of which 384 KB can be used for LIM EMS or as a RAM disk, a CGA graphics card, one 720 KB 3.5" floppy drive and one 20 MB hard drive (some models had two floppy drives and no hard drive controller card). MS-DOS 3.30 is included with the laptop. It is the first laptop with a swappable battery pack. Its original price was US$6,499.
The T1200's hard drive has an unusual 26-pin interface made by JVC, incompatible with ST506/412 or ATA interfaces. Floppy drives are connected using similar 26-pin connectors.
The computer has many unique functions, such as Hard RAM: a small part of RAM is battery-backed and can be used as a non-volatile hard drive. Another function allows the user to suspend the system or control power to the hard drive (which still depends on the hard disk's on/off switch).
The Toshiba T1200xe is an upgraded model of this laptop, which contained a 12 MHz 80C286 processor and a 20 or 40 MB hard disk drive. It also has 1 MB of RAM expandable to 5 MB. The floppy drive was also upgraded from 720 KB to 1.44 MB.
See also
Toshiba T1100
Toshiba T1000
Toshiba T3100
Toshiba T1000LE |
https://en.wikipedia.org/wiki/Eumycetozoa | Eumycetozoa (), or true slime molds, is a diverse group of protists that behave as slime molds and develop fruiting bodies, either as sorocarps or as sporocarps. It is a monophyletic group or clade within the phylum Amoebozoa that contains the myxogastrids, dictyostelids and protosporangiids.
Characteristics
Eumycetozoa is a clade that includes three groups of amoebozoan protists: Myxogastria, Dictyostelia and Protosporangiida—also known as Myxomycetes, Dictyosteliomycetes and Ceratiomyxomycetes, respectively. It is defined on a node-based approach as the least inclusive clade containing the species Dictyostelium discoideum (a dictyostelid), Physarum polycephalum (a myxogastrid) and Ceratiomyxa fruticulosa (a protosporangiid).
All known members of Eumycetozoa generate fruiting bodies, either as sorocarps (in dictyostelids) or as sporocarps (in myxogastrids and protosporangiids). Within their life cycle, they may appear as single haploid amoeboid cells (in dictyostelids), or as flagellated amoebae with two cilia that give rise to obligate amoebae with no cilia, from which the sporocarps develop (in myxogastrids and protosporangiids).
The flagellated amoebae of myxogastrids and protosporangiids and non-flagellated amoebae of dictyostelids have a flat cell shape. They form wide pseudopodia with acutely pointed subpseudopodia (i.e. smaller pseudopodia that grow beneath). Unlike other amoebae, the pseudopodia lack a prominent streaming of granular cytoplasm.
In eumycetozoans where sexual reproduction is well studied, the zygote cannibalizes haploid amoebae.
Evolution
Eumycetozoa is a well supported clade within Amoebozoa. In independent phylogenetic analyses, it has been consistently recovered as the sister group to Archamoebae. The Eumycetozoa+Archamoebae clade is, in turn, the sister group to Variosea. Within Eumycetozoa, Dictyostelia has a basal position while Myxogastria and Protosporangiida form a clade. Together, these three groups are part of the large |
https://en.wikipedia.org/wiki/Hypoelliptic%20operator | In the theory of partial differential equations, a partial differential operator $P$ defined on an open subset $U \subset \mathbb{R}^n$
is called hypoelliptic if for every distribution $u$ defined on an open subset $V \subset U$ such that $Pu$ is $C^\infty$ (smooth), $u$ must also be $C^\infty$.
If this assertion holds with $C^\infty$ replaced by real-analytic, then $P$ is said to be analytically hypoelliptic.
Every elliptic operator with $C^\infty$ coefficients is hypoelliptic. In particular, the Laplacian is an example of a hypoelliptic operator (the Laplacian is also analytically hypoelliptic). In addition, the operator for the heat equation
$$P(u) = u_t - k\,u_{xx} \quad (k > 0)$$
is hypoelliptic but not elliptic. However, the operator for the wave equation
$$P(u) = u_{tt} - c^2\,u_{xx} \quad (c \neq 0)$$
is not hypoelliptic. |
https://en.wikipedia.org/wiki/Institute%20for%20Operations%20Research%20and%20the%20Management%20Sciences | The Institute for Operations Research and the Management Sciences (INFORMS) is an international society for practitioners in the fields of operations research (O.R.), management science, and analytics. It was established in 1995 with the merger of the Operations Research Society of America (ORSA) and The Institute of Management Sciences (TIMS).
The INFORMS Roundtable includes institutional members from operations research departments at major organizations.
INFORMS administers the honor society Omega Rho.
See also
Institute of Industrial Engineers |
https://en.wikipedia.org/wiki/IBM%20System/7 | The IBM System/7 was a computer system designed for industrial control, announced on October 28, 1970 and first shipped in 1971. It was a 16-bit machine and one of the first made by IBM to use novel semiconductor memory, instead of magnetic core memory conventional at that date.
IBM had earlier products in the industrial control market, notably the IBM 1800 which appeared in 1964. However, there was minimal resemblance in architecture or software between the 1800 series and the System/7.
System/7 was designed and assembled in Boca Raton, Florida.
Hardware architecture
The processor designation for the system was IBM 5010. There were 8 registers which were mostly general purpose (capable of being used equally in instructions) although R0 had some extra capabilities for indexed memory access or system I/O. Later models may have been faster, but the versions existing in 1973 had register to register operation times of 400 ns, memory read operations at 800 ns, memory write operations at 1.2 µs, and direct IO operations were generally 2.2 μs. The instruction set would be familiar to a modern RISC programmer, with the emphasis on register operations and few memory operations or fancy addressing modes. For example, the multiply and divide instructions were done in software and needed to be specifically built into the operating system to be used.
The machine was physically compact for its day, designed around chassis/gate configurations shared with other IBM machines such as the 3705 communications controller; a typical configuration would take up one or two racks about high, and the smallest System/7s were only about high. The usual console device was a Teletype Model 33 ASR (designated as the IBM 5028), which was also how the machine would generally read its boot loader sequence. Since the semiconductor memory emptied when it lost power (in those days, losing memory when you switched off the power was regarded as a novelty) and the S/7 didn't have ROM, the machi |
https://en.wikipedia.org/wiki/Live%20sound%20mixing | Live sound mixing is the blending of multiple sound sources by an audio engineer using a mixing console or software. Sounds that are mixed include those from instruments and voices picked up by microphones (for drum kit, lead vocals and acoustic instruments like piano or saxophone) or by pickups (for instruments such as electric bass), and pre-recorded material, such as songs on CD or a digital audio player. Individual sources are typically equalised to adjust the bass and treble response and routed to effect processors to ultimately be amplified and reproduced via a loudspeaker system. The live sound engineer listens and balances the various audio sources in a way that best suits the needs of the event.
Equipment
Audio equipment is usually connected together in a sequence known as the signal chain. In live sound situations, this consists of input transducers like microphones, pickups, and DI boxes. These devices are connected, often via multicore cable, to individual channels of a mixing console. Each channel on a mixing console typically has a vertical "channel strip", a column of knobs and buttons used to adjust the level and the bass, middle register and treble of the signal. The audio console also typically allows the engineer to add effects units to each channel (addition of reverb, etc.) before they are electrically summed (blended together). A live audio mixer, in essence, blends many different signals together and then sends that blended signal to the outputs (speakers).
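At its core, that summing stage is just a gain-weighted addition of signals; a simplified sketch (synthetic sine waves stand in for microphone inputs):

```python
import numpy as np

def mix(sources, gains):
    """Sum gain-weighted channels into one output bus, mimicking the
    electrical summing a console performs after per-channel adjustments."""
    out = np.zeros_like(sources[0])
    for signal, gain in zip(sources, gains):
        out += gain * signal
    return out

t = np.linspace(0.0, 1.0, 48000)          # one second at 48 kHz
vocal = np.sin(2 * np.pi * 440 * t)       # placeholder "vocal" channel
bass = np.sin(2 * np.pi * 110 * t)        # placeholder "bass" channel
front_of_house = mix([vocal, bass], gains=[0.8, 0.5])
```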
EQ mixing involves adjusting each sound source's equalization (EQ) settings to achieve a desired sound, by adjusting the frequency, amplitude, and resonance of each sound source. EQ can be used to add clarity and depth to the mix, as well as to customize the sound of each instrument or voice.
Audio signal processing may be applied to (inserted on) individual inputs, groups of inputs, or the entire output mix, using processors that are internal |
https://en.wikipedia.org/wiki/Behavioral%20modeling%20in%20computer-aided%20design | In computer-aided design, behavioral modeling is a high-level circuit modeling technique where behavior of logic is modeled.
The Verilog-AMS and VHDL-AMS languages are widely used to model logic behavior.
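To make the level-of-abstraction distinction concrete, here is a sketch in Python rather than an HDL (it contrasts describing what logic does with describing how gates are wired; it is not Verilog-AMS or VHDL-AMS syntax):

```python
# Behavioral model: describe *what* a full adder does.
def full_adder_behavioral(a, b, cin):
    total = a + b + cin
    return total & 1, total >> 1            # (sum bit, carry bit)

# Structural model: describe *how* the gates are interconnected.
def full_adder_structural(a, b, cin):
    s1 = a ^ b                              # first XOR gate
    carry = (a & b) | (s1 & cin)            # AND and OR gates
    return s1 ^ cin, carry                  # second XOR gate

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    assert full_adder_behavioral(*bits) == full_adder_structural(*bits)
```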
Other modeling approaches
RTL modeling: logic is modeled at the register level.
Structural modeling: logic is modeled at both the register level and the gate level. |
https://en.wikipedia.org/wiki/Stotting | Stotting (also called pronking or pronging) is a behavior of quadrupeds, particularly gazelles, in which they spring into the air, lifting all four feet off the ground simultaneously. Usually, the legs are held in a relatively stiff position. Many explanations of stotting have been proposed, though for several of them there is little evidence either for or against.
The question of why prey animals stot has been investigated by evolutionary biologists including John Maynard Smith, C. D. Fitzgibbon, and Tim Caro; all of them conclude that the most likely explanation given the available evidence is that it is an honest signal to predators that the stotting animal would be difficult to catch. Such a signal is called "honest" as it is not deceptive in any way, and would benefit both predator and prey: the predator as it avoids a costly and unproductive chase, and the prey as it does not get chased.
Etymology
Stot is a common Scots and Northern English verb meaning "bounce" or "walk with a bounce". Uses in this sense include stotting a ball off a wall, and rain stotting off a pavement. Pronking comes from the Afrikaans verb pronk-, which means "show off" or "strut", and is a cognate of the English verb "prance".
Taxonomic distribution
Stotting occurs in several North American ungulates, including mule deer, pronghorn, and Columbian black-tailed deer, when a predator is particularly threatening, and in a variety of ungulate species from Africa, including Thomson's gazelle and springbok. It is also said to occur in the blackbuck, a species found in India.
Stotting occurs in domesticated livestock such as sheep and goats, typically only in young animals.
Possible explanations
Stotting makes a prey animal more visible, and uses up time and energy that could be spent on escaping from the predator. Since it is dangerous, the continued performance of stotting by prey animals must bring some benefit to the animal (or its family group) performing the behavior. Sev |
https://en.wikipedia.org/wiki/Snell%27s%20window | Snell's window (also called Snell's circle or optical man-hole) is a phenomenon by which an underwater viewer sees everything above the surface through a cone of light about 96 degrees wide. This phenomenon is caused by refraction of light entering water, and is governed by Snell's law. The area outside Snell's window will either be completely dark or show a reflection of underwater objects by total internal reflection.
Underwater photographers sometimes compose photographs from below such that their subjects fall inside Snell's window, which backlights and focuses attention on the subjects.
Image formation
Under ideal conditions, an observer looking up at the water surface from underneath sees a perfectly circular image of the entire above-water hemisphere—from horizon to horizon. Due to refraction at the air/water boundary, Snell's window compresses a 180° angle of view above water to a 97° angle of view below water, similar to the effect of a fisheye lens. The brightness of this image falls off to nothing at the circumference/horizon because more of the incident light at low grazing angles is reflected rather than refracted (see Fresnel equations). Refraction is very sensitive to any irregularities in the flatness of the surface (such as ripples or waves), which will cause local distortions or complete disintegration of the image. Turbidity in the water will veil the image behind a cloud of scattered light. |
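The window's angular width follows directly from the critical angle of total internal reflection; a minimal sketch:

```python
import math

n_air, n_water = 1.000, 1.333
critical = math.degrees(math.asin(n_air / n_water))  # ~48.6 degrees
window = 2 * critical                                # full cone width
print(f"Snell's window spans about {window:.1f} degrees")  # ~97.2
```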
https://en.wikipedia.org/wiki/Inocybe%20hystrix | Inocybe hystrix is an agaric fungus in the family Inocybaceae. It forms mycorrhiza with surrounding deciduous trees. Fruit bodies are usually found growing alone or in small groups on leaf litter during autumn months. Unlike many Inocybe species, Inocybe hystrix is densely covered in brown scales, a characteristic that aids in identification. The mushroom also has a spermatic odour that is especially noticeable when the mushroom is damaged or crushed.
Like many other Inocybe mushrooms, Inocybe hystrix contains dangerous amounts of muscarine and should not be consumed.
Taxonomy
The species was first described in 1838 by Elias Fries under the name Agaricus hystrix. Finnish mycologist Petter Karsten later (1879) transferred it to Inocybe.
Description
Fruit bodies have convex to plano-convex caps measuring in diameter. The caps are dry with scales that can be either erect or flat on the surface. The colour is brown in the centre, becoming paler towards the edges. The flesh is white, and has a spermatic odour and mild taste. The gills are closely spaced, white to dull brown, and have fringed edges. The stipe measures long by thick, and is roughly the same width throughout its length; like the cap, it is scaly.
The spore print is cinnamon brown. The spores are roughly almond-shaped, smooth, inamyloid, and measure 8–12.5 by 5–6.5 µm. Clamp connections are present in the hyphae.
The species is poisonous.
Habitat and distribution
In North America and Europe, Inocybe hystrix grows in deciduous forest, especially beech. In Costa Rica, it is found in the Cordillera Talamanca, where it associates with Quercus costaricensis at elevations around .
See also
List of Inocybe species |
https://en.wikipedia.org/wiki/Pandemic%20severity%20index | The pandemic severity index (PSI) was a proposed classification scale for reporting the severity of influenza pandemics in the United States. The PSI was accompanied by a set of guidelines intended to help communicate appropriate actions for communities to follow in potential pandemic situations. Released by the United States Department of Health and Human Services (HHS) on February 1, 2007, the PSI was designed to resemble the Saffir-Simpson Hurricane Scale classification scheme. The index was replaced by the Pandemic Severity Assessment Framework in 2014, which uses quadrants based on transmissibility and clinical severity rather than a linear scale.
Development
The PSI was developed by the Centers for Disease Control and Prevention (CDC) as a new pandemic influenza planning tool for use by states, communities, businesses and schools, as part of a drive to provide more specific community-level prevention measures. Although designed for domestic implementation, the HHS has not ruled out sharing the index and guidelines with interested international parties.
The index and guidelines were developed by applying principles of epidemiology to data from the history of the last three major flu pandemics and seasonal flu transmission, mathematical models, and input from experts and citizen focus groups. Many "tried and true" practices were combined in a more structured manner:
Context
During the onset of a growing pandemic, local communities cannot rely upon widespread availability of antiviral drugs and vaccines (See Influenza research).
The goal of the index is to provide guidance as to what measures various organizations can enact that will slow down the progression of a pandemic, easing the burden of stress upon community resources while definite solutions, like drugs and vaccines, can be brought to bear on the situation. The CDC expects adoption of the PSI will allow early co-ordinated use of community mitigation measures to affect pandemic progression.
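A sketch of the index's category logic, keyed to case fatality ratio (the thresholds below follow the commonly cited 2007 scheme but should be treated as illustrative):

```python
def psi_category(cfr_percent):
    """Pandemic severity index category from case fatality ratio (%)."""
    if cfr_percent < 0.1:
        return 1
    if cfr_percent < 0.5:
        return 2
    if cfr_percent < 1.0:
        return 3
    if cfr_percent < 2.0:
        return 4
    return 5

print(psi_category(2.5))  # a 1918-like CFR falls in category 5
```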
Guideline |
https://en.wikipedia.org/wiki/Sony%20CCD-VX3 | The Sony CCD-VX3 (often referred to simply as the VX-3) was a Hi-8 camcorder noteworthy for being the first to feature dichroic (prismatic) imaging. It was released to the North American market in 1992 at a street cost of about US$3,500. The PAL version as well as the Japanese version had the model name CCD-VX1.
The image is created using three 1/3" CCD chips by prismatically splitting the optics into red, green, and blue channels and processing each of them individually; this preserves quality, especially with red hues. The camera imaged at 410,000 pixels with a horizontal resolution of better than 530 lines.
During the mid-1990s, Sony dropped Hi-8 in favor of the emerging DV format, and as a result the VX-3 was discontinued in September 1995. However, the VX-3 went on to serve as the framework for a line of professional DV cameras, including the DCR-VX1000, DCR-VX9000, and DSR-200.
External links
Sony CCD-VX1 (Official webpage) |
https://en.wikipedia.org/wiki/Risk%20dominance | Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction (i.e. is less risky). This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.
The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in Figure 1 is a well-known game-theoretic dilemma called stag hunt. The rationale behind it is that communal action (hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with others. Like the Prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible commitments.
Formal definition
The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A > B, D > C, and for player 2 (columns): a > b, d > c. |
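Assuming the conventional cell labeling for Figure 2 (row H against column H pays (A, a), row G against column G pays (D, d), with B, C, b, c the off-diagonal payoffs), Harsanyi and Selten's criterion compares the Nash products of deviation losses; a sketch with hypothetical stag-hunt payoffs:

```python
def nash_products(A, B, C, D, a, b, c, d):
    """Deviation-loss (Nash) products for the two pure equilibria."""
    hh = (A - B) * (a - b)   # product for the (Hunt, Hunt) equilibrium
    gg = (D - C) * (d - c)   # product for the (Gather, Gather) equilibrium
    return hh, gg

# Hypothetical payoffs: (Hunt,Hunt)=(4,4), (Hunt,Gather)=(0,3),
# (Gather,Hunt)=(3,0), (Gather,Gather)=(3,3).
hh, gg = nash_products(A=4, B=3, C=0, D=3, a=4, b=3, c=0, d=3)
print("(Hunt,Hunt)" if hh > gg else "(Gather,Gather)", "is risk dominant")
# -> (Gather,Gather) is risk dominant, though (Hunt,Hunt) is payoff dominant.
```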
https://en.wikipedia.org/wiki/Forward%E2%80%93backward%20algorithm | The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions $o_{1:T} = o_1, \dots, o_T$, i.e. it computes, for all hidden state variables $X_t \in \{X_1, \dots, X_T\}$, the distribution $P(X_t \mid o_{1:T})$. This inference task is usually called smoothing. The algorithm makes use of the principle of dynamic programming to efficiently compute the values that are required to obtain the posterior marginal distributions in two passes. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm.
The term forward–backward algorithm is also used to refer to any algorithm belonging to the general class of algorithms that operate on sequence models in a forward–backward manner. In this sense, the descriptions in the remainder of this article refer only to one specific instance of this class.
Overview
In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all $t \in \{1, \dots, T\}$, the probability of ending up in any particular state given the first $t$ observations in the sequence, i.e. $P(X_t \mid o_{1:t})$. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point $t$, i.e. $P(o_{t+1:T} \mid X_t)$. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence:
$$P(X_t \mid o_{1:T}) = P(X_t \mid o_{1:t}, o_{t+1:T}) \propto P(o_{t+1:T} \mid X_t)\, P(X_t \mid o_{1:t})$$
The last step follows from an application of Bayes' rule and the conditional independence of $o_{t+1:T}$ and $o_{1:t}$ given $X_t$.
As outlined above, the algorithm involves three steps:
computing forward probabilities
computing backward probabilities
computing smoothed values.
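A compact sketch of all three steps for a discrete HMM (unscaled probabilities, so suitable only for short sequences; production code would normalize each step to avoid underflow):

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Posterior marginals P(X_t | o_1..o_T) for a discrete HMM.
    A: (S,S) transitions, B: (S,O) emissions, pi: (S,) initial dist."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))               # forward probabilities
    beta = np.zeros((T, S))                # backward probabilities

    alpha[0] = pi * B[:, obs[0]]           # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta[T - 1] = 1.0                      # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    gamma = alpha * beta                   # combine and normalize
    return gamma / gamma.sum(axis=1, keepdims=True)

A = np.array([[0.7, 0.3], [0.3, 0.7]])     # toy two-state model
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(forward_backward([0, 0, 1], A, B, pi))
```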
The forward and backward steps may also be called "forward message pass" and "backward message pass" - these terms are due to the message-passing used in general belief propagation approaches. At each single observation in the sequence, |
https://en.wikipedia.org/wiki/Distance-bounding%20protocol | Distance bounding protocols are cryptographic protocols that enable a verifier V to establish an upper bound on the physical distance to a prover P.
They are based on timing the delay between sending out challenge bits and receiving back the corresponding response bits. The delay time for responses enables V to compute an upper bound on the distance: the round-trip delay time multiplied by the speed of light, divided by two. The computation is based on the fact that electromagnetic waves travel nearly at the speed of light, but cannot travel faster.
Distance bounding protocols can have different applications. For example, when a person conducts a cryptographic identification protocol at an entrance to a building, the access control computer in the building would like to be ensured that the person giving the responses is no more than a few meters away.
RF Implementation
The distance bound computed by a radio frequency distance bounding protocol is very sensitive to even the slightest processing delay. This is because any delay introduced, anywhere in the system, will be multiplied by approximately 299,792,458 m/s (the speed of light) in order to convert time into distance. This means that even delays on the order of nanoseconds will result in significant errors in the distance bound (a timing error of 1 ns corresponds to a distance error of 15 cm).
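A sketch of the bound and of that sensitivity:

```python
C = 299_792_458.0  # speed of light, m/s

def distance_upper_bound(round_trip_s, processing_delay_s=0.0):
    """Upper bound on the prover's distance: the signal travels out and
    back, so halve (round-trip time minus known processing delay) * c."""
    return C * (round_trip_s - processing_delay_s) / 2

print(distance_upper_bound(2e-9))        # 2 ns round trip -> ~0.30 m
print(C * 1e-9 / 2)                      # 1 ns timing error -> ~0.15 m
```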
Because of the extremely tight timing constraints and the fact that a distance bounding protocol requires the prover to apply an appropriate function to the challenge sent by the verifier, it is not trivial to implement distance bounding in actual physical hardware. Conventional radios have processing times that are orders of magnitude too large, even if the function applied is a simple XOR.
In 2010, Rasmussen and Capkun devised a way for the prover to apply a function using pure analog components. The result is a circuit whose processing delay is below 1 nanosecond from receiving a challenge till sending back the respo |
https://en.wikipedia.org/wiki/Mouth | The mouth is the body orifice through which many animals ingest food and vocalize. The body cavity immediately behind the mouth opening, known as the oral cavity, is also the first part of the alimentary canal which leads to the pharynx and the gullet. In tetrapod vertebrates, the mouth is bounded on the outside by the lips and cheeks — thus the oral cavity is also known as the buccal cavity (from Latin bucca, meaning "cheek") — and contains the tongue on the inside. Except for some groups like birds and lissamphibians, vertebrates usually have teeth in their mouths, although some fish species have pharyngeal teeth instead of oral teeth.
Most bilaterian phyla, including arthropods, molluscs and chordates, have a two-opening gut tube with a mouth at one end and an anus at the other. Which end forms first in ontogeny is a criterion used to classify bilaterian animals into protostomes and deuterostomes.
Development
In the first multicellular animals, there was probably no mouth or gut and food particles were engulfed by the cells on the exterior surface by a process known as endocytosis. The particles became enclosed in vacuoles into which enzymes were secreted and digestion took place intracellularly. The digestive products were absorbed into the cytoplasm and diffused into other cells. This form of digestion is used nowadays by simple organisms such as Amoeba and Paramecium and also by sponges which, despite their large size, have no mouth or gut and capture their food by endocytosis.
However, most animals have a mouth and a gut, the lining of which is continuous with the epithelial cells on the surface of the body. A few animals which live parasitically originally had guts but have secondarily lost these structures. The original gut of diploblastic animals probably consisted of a mouth and a one-way gut. Some modern invertebrates still have such a system: food being ingested through the mouth, partially broken down by enzymes secreted in the gut, and t |