| source | text |
|---|---|
https://en.wikipedia.org/wiki/American%20Society%20for%20Reproductive%20Immunology | The American Society for Reproductive Immunology, or ASRI, is a US-based organization of scientists from around the world interested in reproductive immunology. It was founded in 1981 and is the oldest society of its kind. Its official scientific journal is the American Journal of Reproductive Immunology.
ASRI encompasses scientists in areas of study such as molecular biology, microbiology, mucosal immunology, genetics, pediatrics, infectious diseases, endocrinology, obstetrics, gynecology, pathology, veterinary medicine and animal science. The ASRI brings together clinicians and basic scientists to discuss contemporary topics in reproductive immunology.
ASRI's objectives are achieved in two ways. First, the society is the official sponsor of the American Journal of Reproductive Immunology, which is one of two journals devoted to publishing reproductive immunology research. Prior to the founding of the journal, there were no publishing outlets specifically oriented towards reproductive immunology. Second, the society has held an annual meeting since its inception in 1981 to promote exchange of scientific information and to foster mentorship. The abstracts from these meetings are published as a part of the American Journal of Reproductive Immunology and many of the plenary papers are included in the journal as well. |
https://en.wikipedia.org/wiki/School%20Mathematics%20Study%20Group | The School Mathematics Study Group (SMSG) was an American academic think tank focused on the subject of reform in mathematics education. Directed by Edward G. Begle and financed by the National Science Foundation, the group was created in the wake of the Sputnik crisis in 1958 and tasked with creating and implementing mathematics curricula for primary and secondary education, which it did until its termination in 1977.
The efforts of the SMSG yielded a reform in mathematics education known as New Math which was promulgated in a series of reports, culminating in a series published by Random House called the New Mathematical Library (Vol. 1 is Ivan Niven's Numbers: Rational and Irrational). In the early years, SMSG also produced a set of draft textbooks in typewritten paperback format for elementary, middle and high school students.
Perhaps the most authoritative collection of materials from the School Mathematics Study Group is now housed in the Archives of American Mathematics in the University of Texas at Austin's Center for American History.
See also
Foundations of geometry
Further reading
1958 Letter from Ralph A. Raimi to Fred Quigley concerning the New Math
Whatever Happened to the New Math by Ralph A. Raimi
Some Technical Commentaries on Mathematics Education and History by Ralph A. Raimi
External links
The SMSG Collection at The Center for American History at UT
Archives of American Mathematics at the Center for American History at UT
Mathematics education
Curricula |
https://en.wikipedia.org/wiki/Marvel%20%28food%29 | Marvel is a United Kingdom brand of dried milk powder, now marketed by Premier Foods.
History and packaging
The product was launched in 1964 and is sold in foil-coated cardboard drums, with the contents sealed under a tear-off foil lid, and in sachets.
To make milk from the powder, it is measured into a jug or bowl, then cold water is added and stirred in until all the powder has dissolved.
See also
List of dried foods
External links
Marvel on the Premier Foods website
Dried foods
Brand name dairy products
Marvel |
https://en.wikipedia.org/wiki/Brining | In food processing, brining is treating food with brine or coarse salt which preserves and seasons the food while enhancing tenderness and flavor with additions such as herbs, spices, sugar, caramel or vinegar. Meat and fish are typically brined for less than twenty-four hours while vegetables, cheeses and fruit are brined in a much longer process known as pickling. Brining is similar to marination, except that a marinade usually includes a significant amount of acid, such as vinegar or citrus juice. Brining is also similar to curing, which usually involves significantly drying the food, and is done over a much longer time period.
Meat
Brining is typically a process in which meat is soaked in a salt water solution similar to marination before cooking. Meat is soaked anywhere from 30 minutes to several days. The brine may be seasoned with spices and herbs. The amount of time needed to brine depends on the size of the meat: more time is needed for a large turkey compared to a broiler fryer chicken. Similarly, a large roast must be brined longer than a thin cut of meat.
Dry brining
Brining can also be achieved by covering the meat in dry coarse salt and leaving it to rest for several hours. The salt draws moisture from the interior of the meat to the surface, where it mixes with the salt and is then reabsorbed, essentially brining the meat in its own juices. The salt rub is then rinsed off and discarded before cooking.
Food scientists have two theories about the brining effect, but which one is correct is still under debate.
The brine surrounding the cells has a higher concentration of salt than the fluid within the cells, but the cell fluid has a higher concentration of other solutes. This causes salt ions to diffuse into the cell, while the solutes in the cells cannot diffuse through the cell membranes into the brine. The increased salinity of the cell fluid causes the cell to absorb water from the brine via osmosis.
The salt introduced into the cell denatu |
https://en.wikipedia.org/wiki/A%20New%20Era%20of%20Thought | A New Era of Thought is a non-fiction work written by Charles Howard Hinton, published in 1888 and reprinted in 1900 by Swan Sonnenschein & Co. Ltd., London. A New Era of Thought is about the fourth dimension and its implications for human thinking. It influenced the work of P.D. Ouspensky, particularly his book Tertium Organum where it is frequently quoted; Scientific American writer Martin Gardner, who mentioned this book in some of his articles; and Rudy Rucker's The Fourth Dimension. It is prefaced by Alicia Boole and H. John Falk. A New Era of Thought is inspired by Plato's allegory of the cave and is influenced by the works of Immanuel Kant, Carl Friedrich Gauss and Nikolai Lobachevsky. The book has xvi + 230 pages.
Synopsis
A New Era of Thought consists of two parts. The first part is a collection of philosophical and mathematical essays on the fourth dimension. These essays are somewhat disconnected. They teach the possibility of thinking four-dimensionally and about the religious and philosophical insights thus obtainable. In the second part Hinton develops a system of coloured cubes. These cubes serve as a model for acquiring four-dimensional perception as a basis for four-dimensional thinking. This part describes how to visualize a tesseract by looking at several 3-D cross sections of it. The system of cubic models in A New Era of Thought is a forerunner of the cubic models in Hinton's book The Fourth Dimension.
Contents
Preface
Table of Contents
Introductory Note to Part I
Part I
Introduction
Chapter I.
Scepticism and Science.
Beginning of Knowledge.
Chapter II.
Apprehension of Nature.
Intelligence.
Study of Arrangement or Shape.
Chapter III.
The Elements of Knowledge.
Chapter IV.
Theory and Practice.
Chapter V.
Knowledge: Self-Elements.
Chapter VI.
Function of Mind.
Space against Metaphysics.
Self-Limitations and its Test.
A Plane World.
Chapter VII.
Self Elements in our Consciousness.
Chapter VIII.
Relation of Lower and Higher S |
https://en.wikipedia.org/wiki/Suppressor-inducer%20T%20cell | Suppressor-inducer T cells are a specific subset of CD4+ T helper cells that "induce" CD8+ cytotoxic T cells to become "suppressor" cells. Suppressor T cells are also known as CD25+ Foxp3+ regulatory T cells (nTregs), and reduce inflammation. |
https://en.wikipedia.org/wiki/Asilomar%20Conference%20on%20Recombinant%20DNA | The Asilomar Conference on Recombinant DNA was an influential conference organized by Paul Berg, Maxine Singer, and colleagues to discuss the potential biohazards and regulation of biotechnology, held in February 1975 at a conference center at Asilomar State Beach, California. A group of about 140 professionals (primarily biologists, but also including lawyers and physicians) participated in the conference to draw up voluntary guidelines to ensure the safety of recombinant DNA technology. The conference also placed scientific research more into the public domain, and can be seen as applying a version of the precautionary principle.
The effects of these guidelines are still being felt through the biotechnology industry and the participation of the general public in scientific discourse. Due to potential safety hazards, scientists worldwide had halted experiments using recombinant DNA technology, which entailed combining DNAs from different organisms. After the establishment of the guidelines during the conference, scientists continued with their research, which increased fundamental knowledge about biology and the public's interest in biomedical research.
Background: recombinant DNA technology
Recombinant DNA technology arose as a result of advances in biology that began in the 1950s and '60s. During these decades, a tradition of merging the structural, biochemical and informational approaches to the central problems of classical genetics became more apparent. Two main underlying concepts of this tradition were that genes consisted of DNA and that DNA encoded information that determined the processes of replication and protein synthesis. These concepts were embodied in the model of DNA produced through the combined efforts of James Watson, Francis Crick, and Rosalind Franklin. Further research on the Watson-Crick model yielded theoretical advances that were reflected in new capacities to manipulate DNA. One of these capacities was recombinant DNA technology.
Exper |
https://en.wikipedia.org/wiki/Algorithm%20characterizations | Algorithm characterizations are attempts to formalize the word algorithm. Algorithm does not have a generally accepted formal definition. Researchers are actively working on this problem. This article will present some of the "characterizations" of the notion of "algorithm" in more detail.
The problem of definition
Over the last 200 years, the definition of the algorithm has become more complicated and detailed as researchers have tried to pin down the term. Indeed, there may be more than one type of "algorithm". But most agree that an algorithm has something to do with defining generalized processes for the creation of "output" integers from other "input" integers ("input parameters" that are arbitrary and infinite in extent, or limited in extent but still variable) by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can perform with paper and pencil.
The most common number-manipulation schemes—both in formal mathematics and in routine life—are: (1) the recursive functions calculated by a person with paper and pencil, and (2) the Turing machine or its Turing equivalents—the primitive register-machine or "counter-machine" model, the random-access machine model (RAM), the random-access stored-program machine model (RASP) and its functional equivalent "the computer".
When we are doing "arithmetic" we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade school, for example, adding and subtracting.
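The grade-school arithmetic the paragraph refers to can be framed as primitive recursive functions built from nothing but zero, successor, and recursion. The sketch below is illustrative only (the function names are our own, not from the article):

```python
# Addition and multiplication defined as primitive recursive functions,
# using only the successor operation and recursion -- the shorthand
# "recursive functions" a person could in principle carry out on paper.

def succ(n: int) -> int:
    """Successor: the single primitive step available."""
    return n + 1

def add(a: int, b: int) -> int:
    """add(a, 0) = a;  add(a, succ(b)) = succ(add(a, b))."""
    return a if b == 0 else succ(add(a, b - 1))

def mul(a: int, b: int) -> int:
    """mul(a, 0) = 0;  mul(a, succ(b)) = add(a, mul(a, b))."""
    return 0 if b == 0 else add(a, mul(a, b - 1))

print(add(3, 4))  # 7
print(mul(3, 4))  # 12
```

Each call unwinds to a finite sequence of successor steps, which is exactly the "finite collection of rules" picture of an algorithm.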
The proofs that every "recursive function" we can calculate by hand we can compute by machine and vice versa—note the usage of the words calculate versus compute—is remarkable. But this equivalence together with the thesis (unproven assertion) that this includes every calculation/computation indicates why so much emphasis has been placed upon the use of Turing-equivalent machines in the definition of specific algorithms, and why the definition of "algorithm" itself ofte |
https://en.wikipedia.org/wiki/State%20machine%20replication | In computer science, state machine replication (SMR) or state machine approach is a general method for implementing a fault-tolerant service by replicating servers and coordinating client interactions with server replicas. The approach also provides a framework for understanding and designing replication management protocols.
Problem definition
Distributed service
A distributed service is described in terms of clients and services. Each service comprises one or more servers and exports operations that clients invoke by making requests. Although using a single, centralized server is the simplest way to implement a service, the resulting service can only be as fault tolerant as the processor executing that server. If this level of fault tolerance is unacceptable, then multiple servers that fail independently can be used. Usually, replicas of a single server are executed on separate processors of a distributed system, and protocols are used to coordinate client interactions with these replicas.
State machine
For the subsequent discussion a State Machine will be defined as the following tuple of values (see also Mealy machine and Moore machine):
A set of States
A set of Inputs
A set of Outputs
A transition function (Input × State → State)
An output function (Input × State → Output)
A distinguished State called Start.
A State Machine begins at the State labeled Start. Each Input received is passed through the transition and output function to produce a new State and an Output. The State is held stable until a new Input is received, while the Output is communicated to the appropriate receiver.
This discussion requires a State Machine to be deterministic: multiple copies of the same State Machine begin in the Start state, and receiving the same Inputs in the same order will arrive at the same State having generated the same Outputs.
Typically, systems based on State Machine Replication voluntarily restrict their implementations to use finite-state machines to simplify error recovery.
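The tuple definition and the determinism requirement can be sketched directly. In this illustrative example (names and the toy transition rules are our own), two replicas fed the same inputs in the same order end in the same state with the same outputs:

```python
# A minimal deterministic state machine: a transition function
# (Input x State -> State), an output function (Input x State -> Output),
# and a distinguished Start state. Two replicas stay in lockstep when
# given identical input sequences -- the property SMR relies on.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class StateMachine:
    transition: Callable[[str, str], str]   # (input, state) -> new state
    output: Callable[[str, str], str]       # (input, state) -> output
    state: str = "Start"
    outputs: List[str] = field(default_factory=list)

    def step(self, inp: str) -> str:
        out = self.output(inp, self.state)       # produce the Output
        self.state = self.transition(inp, self.state)  # move to new State
        self.outputs.append(out)
        return out

# A toy counter machine: "inc" moves Start -> S1 -> S2 -> ...
def trans(inp: str, state: str) -> str:
    n = 0 if state == "Start" else int(state[1:])
    return f"S{n + 1}" if inp == "inc" else state

def out(inp: str, state: str) -> str:
    return f"ack:{state}"

replica_a = StateMachine(trans, out)
replica_b = StateMachine(trans, out)
for inp in ["inc", "inc", "noop", "inc"]:
    replica_a.step(inp)
    replica_b.step(inp)

# Determinism: same inputs, same order -> same state and same outputs.
assert replica_a.state == replica_b.state == "S3"
assert replica_a.outputs == replica_b.outputs
```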
Fault T |
https://en.wikipedia.org/wiki/Center%20for%20Gravitational%20Wave%20Astronomy | The Center for Gravitational Wave Astronomy (CGWA) is a research center at the University of Texas Rio Grande Valley (UTRGV). Its core mission is "to further scientific research and education in gravitational wave astronomy". It was founded in 2003 at the UTRGV's predecessor institution UT Brownsville, through a grant from NASA's University Research Centers program.
Research at the CGWA includes the study of gravitational waves and the astrophysics of gravitational wave sources. The center has hosted several international conferences.
Computing facilities at the CGWA include a 41-node cluster "Funes" which was installed at the beginning of 2004, and an older cluster, "Lobizon", with 96 nodes. These computers are used mainly for source modeling and for numerical relativity simulations. |
https://en.wikipedia.org/wiki/Identity%20Commissioner | The Identity Commissioner (officially known as the National Identity Scheme Commissioner) was an independent regulator in the United Kingdom, based in London and appointed under the Identity Cards Act 2006.
Following the formation of the Conservative – Liberal Democrat Coalition after the 2010 General Election, it was announced that the ID cards scheme was to be scrapped. The Identity Documents Act 2010 abolished ID cards and the Office of the Identity Commissioner.
First commissioner appointed
The Identity Commissioner was appointed by the Home Secretary and reported at least annually to Parliament on his oversight of the National Identity Service (previously known as National Identity Scheme). The first Commissioner, Sir Joseph Pilling, took office on 1 October 2009.
Role and powers of the Identity Commissioner
Under the Identity Cards Act 2006 the Identity Commissioner had the power to review matters relating to the National Identity Service including:
(i) the arrangements made by the Secretary of State for carrying out his functions under the Identity Cards Act;
(ii) the arrangements made by persons for processing information which has been provided to them under powers in the Act; and
(iii) the use of identity cards.
The Identity Commissioner was also obliged to review arrangements:
(i) for securing the confidentiality and integrity of the National Identity Register; and
(ii) for dealing with complaints about the way in which the Secretary of State carries out his functions relating to the Scheme.
Annual Report 2009
On 25 February 2010, as required by the 2006 Act, the Home Secretary laid before Parliament the Identity Commissioner's first annual report, outlining the work of the Commissioner since the creation of the Office of the Identity Commissioner on 1 October 2009. The report is available on the Identity Commissioner's website. |
https://en.wikipedia.org/wiki/David%20Conlon | David Conlon (born 1982) is an Irish mathematician who is a Professor of Mathematics at the California Institute of Technology. His research interests are in Hungarian-style combinatorics, particularly Ramsey theory, extremal graph theory, combinatorial number theory, and probabilistic methods in combinatorics. He proved the first superpolynomial improvement on the Erdős–Szekeres bound on diagonal Ramsey numbers. He won the European Prize in Combinatorics in 2011 for his work in Ramsey theory and for his progress on Sidorenko's conjecture, and the Whitehead Prize in 2019.
Life
Conlon represented Ireland in the International Mathematical Olympiad in 1998 and 1999. He was an undergraduate in Trinity College Dublin, where he was elected a Scholar in 2001 and graduated in 2003. He earned a PhD from Cambridge University in 2009.
In 2019 he moved to California Institute of Technology, having been a fellow of Wadham College, Oxford and Professor of Discrete Mathematics in the Mathematics Institute at the University of Oxford.
Conlon has worked in Ramsey theory, and he proved the first superpolynomial improvement on the Erdős–Szekeres bound on diagonal Ramsey numbers. He won the European Prize in Combinatorics in 2011, for his work in Ramsey theory and for his progress on Sidorenko's conjecture that, for any bipartite graph H, uniformly random graphons have the fewest subgraphs isomorphic to H when the edge density is fixed. He was awarded the Whitehead Prize in 2019 "in recognition of his many contributions to combinatorics". |
https://en.wikipedia.org/wiki/HP-35 | The HP-35 was Hewlett-Packard's first pocket calculator and the world's first scientific pocket calculator: a calculator with trigonometric and exponential functions. It was introduced in 1972.
History
In about 1970 HP co-founder Bill Hewlett challenged his co-workers to create a "shirt-pocket sized HP-9100". At the time, slide rules were the only practical portable devices for performing trigonometric and exponential functions, as existing pocket calculators could only perform addition, subtraction, multiplication, and division. Like HP's first scientific calculator, the desktop 9100A, it used reverse Polish notation (RPN) rather than what came to be called "algebraic" entry. The "35" in the calculator's name came from the number of keys.
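RPN entry works by pushing operands onto a stack and having each operator consume the values already entered. A minimal evaluator sketches the idea (illustrative only; the real HP-35 used a fixed four-register stack, not an unbounded one):

```python
# A toy reverse Polish notation (RPN) evaluator: operands are pushed
# onto a stack; each operator pops its two arguments and pushes the
# result, so no parentheses or "equals" key are needed.

def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # second operand entered
            a = stack.pop()          # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 entered RPN-style, as on an RPN calculator:
print(eval_rpn(["3", "4", "+", "2", "*"]))  # 14.0
```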
The original HP-35 was available from 1972 to 1975. In 2007 HP announced the release of the "retro"-look HP 35s to commemorate the 35th anniversary of the launch of the original HP-35.
The HP-35 was named an IEEE Milestone in 2009.
Description
The calculator used a traditional floating decimal display for numbers that could be displayed in that format, but automatically switched to scientific notation for other numbers. The fifteen-digit LED display was capable of displaying a ten-digit mantissa plus its sign and a decimal point and a two-digit exponent plus its sign. The display used a unique form of multiplexing, illuminating a single LED segment at a time rather than a single LED digit, because HP research had shown that this method was perceived by the human eye as brighter for equivalent power. Light-emitting diodes were relatively new at the time and were much dimmer than high-efficiency diodes developed in subsequent decades.
The calculator used three "AA"-sized NiCd batteries assembled into a removable proprietary battery pack. Replacement battery packs are no longer available, leaving existing HP-35 calculators to rely on AC power, or their users to rebuild the battery packs themse |
https://en.wikipedia.org/wiki/Edge%20computing | Edge computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.
The origins of edge computing lie in content distribution networks that were created in the late 1990s to serve web and video content from edge servers that were deployed close to users. In the early 2000s, these networks evolved to host applications and application components on edge servers, resulting in the first commercial edge computing services that hosted applications such as dealer locators, shopping carts, real-time data aggregators, and ad insertion engines.
The Internet of things (IoT) is an example of edge computing, although a common misconception is that edge computing and IoT are synonymous.
Definition
One definition of edge computing is the use of any type of computer program that delivers low latency nearer to the requests. Karim Arabi, in an IEEE DAC 2014 Keynote and subsequently in an invited talk at MIT's MTL Seminar in 2015, defined edge computing broadly as all computing outside the cloud happening at the edge of the network, and more specifically in applications where real-time processing of data is required. Thus, edge computing does not have the climate-controlled advantages of data centers despite the large amount of processing power necessary.
The term is often used as a synonym for fog computing, which is especially apt for small deployments. However, when the deployment is large, e.g., for smart cities, fog computing can be a distinct layer between the edge and the cloud; in such deployments the edge layer is likewise distinct, with its own specific responsibilities.
According to The State of the Edge report, edge computing concentrates on servers "in proximity to the last mile network". Alex Reznik, Chair of the ETS |
https://en.wikipedia.org/wiki/Multi-state%20modeling%20of%20biomolecules | Multi-state modeling of biomolecules refers to a series of techniques used to represent and compute the behaviour of biological molecules or complexes that can adopt a large number of possible functional states.
Biological signaling systems often rely on complexes of biological macromolecules that can undergo several functionally significant modifications that are mutually compatible. Thus, they can exist in a very large number of functionally different states. Modeling such multi-state systems poses two problems: The problem of how to describe and specify a multi-state system (the "specification problem") and the problem of how to use a computer to simulate the progress of the system over time (the "computation problem"). To address the specification problem, modelers have in recent years moved away from explicit specification of all possible states, and towards rule-based modeling that allow for implicit model specification, including the κ-calculus, BioNetGen, the Allosteric Network Compiler and others. To tackle the computation problem, they have turned to particle-based methods that have in many cases proved more computationally efficient than population-based methods based on ordinary differential equations, partial differential equations, or the Gillespie stochastic simulation algorithm. Given current computing technology, particle-based methods are sometimes the only possible option. Particle-based simulators further fall into two categories: Non-spatial simulators such as StochSim, DYNSTOC, RuleMonkey, and NFSim
and spatial simulators, including Meredys, SRSim and MCell. Modelers can thus choose from a variety of tools; the best choice depending on the particular problem. Development of faster and more powerful methods is ongoing, promising the ability to simulate ever more complex signaling processes in the future.
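For concreteness, the Gillespie stochastic simulation algorithm mentioned above can be sketched for a single reaction A → B. This is an illustrative toy (the rate constant, counts, and function names are arbitrary choices of ours, not from any of the cited tools):

```python
# A bare-bones Gillespie simulation of the reaction A -> B:
# draw an exponentially distributed waiting time from the total
# propensity, fire one reaction event, and repeat.

import random

def gillespie(a_count: int, k: float = 0.1, t_end: float = 50.0, seed: int = 1):
    random.seed(seed)
    t, b_count = 0.0, 0
    while a_count > 0:
        propensity = k * a_count                 # total propensity k*[A]
        t += random.expovariate(propensity)      # time to next event
        if t > t_end:
            break
        a_count -= 1                             # one A converts to B
        b_count += 1
    return a_count, b_count

a, b = gillespie(100)
assert a + b == 100   # molecule count is conserved
```

With many mutually compatible modification sites, the number of distinct species (and hence propensities to track) explodes combinatorially, which is precisely why rule-based and particle-based approaches are attractive.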
Introduction
Multi-state biomolecules in signal transduction
In living cells, signals are processed by networks of proteins that can act a |
https://en.wikipedia.org/wiki/Cyproconazole | Cyproconazole is an agricultural fungicide of the class of azoles, used on cereal crops, coffee, sugar beet, fruit trees and grapes, on sod farms and golf courses and on wood as a preservative. It was introduced to the market in 1994 by Sandoz (whose agrochemical business became part of Syngenta in 2000).
Mechanism of action
Cyproconazole inhibits demethylation, a particular step in the biosynthesis of sterols, which are components of fungal cell membranes. It thus affects fungal growth, but not fungal sporulation. For this reason it must be applied early in the infection, when fungal growth is at its maximum; in late infections fungal growth slows down and the agent is ineffective.
Use
Formulations
Many different formulations exist with imazalil, difenoconazole, prochloraz, propiconazole, chlorothalonil, cyprodinil, fludioxonil, azoxystrobin, and copper. In wood preservatives it is mixed with didecyldimethylammonium chloride.
It is the active ingredient in two foliar fungicides for soybeans in the U.S., Alto X, and mixed with azoxystrobin in Quadris Xtra, both by Syngenta.
It is also manufactured by Bayer CropScience and Dow AgroSciences.
Application
Cyproconazole is used against powdery mildew, rust on cereals and apple scab, and applied by air or on the ground to cereal crops, coffee, sugar beet, fruit trees and grapes.
It controls the following pests: Puccinia graminis, Puccinia spp., Pseudocercosporella herpotrichoides and Septoria species. It can be used on above-ground wood to protect it from decay fungi, as an alternative to chromated copper arsenate. It was originally marketed for use on sod farms and golf courses.
In the U.S., chemigation is allowed with less than a half-inch of water applied, aerial spraying requires a minimum of 5 gallons per acre, and ground application is adequate for coverage and canopy penetration. The re-entry interval is 12 hours. Reapplication within 30 days of harvest is not permitted.
Hazards
The European Community classifies cyproconazole into carcinogen category 3 as li |
https://en.wikipedia.org/wiki/Related-key%20attack | In cryptography, a related-key attack is any form of cryptanalysis where the attacker can observe the operation of a cipher under several different keys whose values are initially unknown, but where some mathematical relationship connecting the keys is known to the attacker. For example, the attacker might know that the last 80 bits of the keys are always the same, even though they don't know, at first, what the bits are. This appears, at first glance, to be an unrealistic model; it would certainly be unlikely that an attacker could persuade a human cryptographer to encrypt plaintexts under numerous secret keys related in some way.
KASUMI
KASUMI is an eight round, 64-bit block cipher with a 128-bit key. It is based upon MISTY1 and was designed to form the basis of the 3G confidentiality and integrity algorithms.
Mark Blunden and Adrian Escott described differential related-key attacks on five and six rounds of KASUMI. Differential attacks were introduced by Biham and Shamir. Related-key attacks were first introduced by Biham. Differential related-key attacks are discussed in Kelsey et al.
WEP
An important example of a cryptographic protocol that failed because of a related-key attack is Wired Equivalent Privacy (WEP) used in Wi-Fi wireless networks. Each client Wi-Fi network adapter and wireless access point in a WEP-protected network shares the same WEP key. Encryption uses the RC4 algorithm, a stream cipher. It is essential that the same key never be used twice with a stream cipher. To prevent this from happening, WEP includes a 24-bit initialization vector (IV) in each message packet. The RC4 key for that packet is the IV concatenated with the WEP key. WEP keys have to be changed manually and this typically happens infrequently. An attacker therefore can assume that all the keys used to encrypt packets share a single WEP key. This fact opened up WEP to a series of attacks which proved devastating. The simplest to understand uses the fact that the 24-bit IV on |
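WEP's per-packet key construction, the IV concatenated with the shared WEP key, is itself a related-key structure: every per-packet key shares the same secret suffix, and the attacker even sees the differing prefix. A sketch (illustrative only; the example key is made up, and real WEP details such as RC4 itself and the CRC integrity check are omitted):

```python
# Sketch of WEP per-packet key derivation: RC4 key = IV || WEP key.
# The 24-bit IV is sent in the clear, so related keys with a known
# 3-byte difference and identical 40-bit suffix are handed to the
# attacker by design; with only 2**24 IVs, repeats are inevitable.

WEP_KEY = bytes.fromhex("0badc0ffee")        # 40-bit shared key (example)

def per_packet_key(iv: bytes) -> bytes:
    assert len(iv) == 3                      # 24-bit initialization vector
    return iv + WEP_KEY                      # concatenation, as in WEP

k1 = per_packet_key(b"\x00\x00\x01")
k2 = per_packet_key(b"\x00\x00\x02")

# The two RC4 keys differ only in their first three (public) bytes:
assert k1[3:] == k2[3:] and k1[:3] != k2[:3]
```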
https://en.wikipedia.org/wiki/Folkewall | The Folkewall is a construction with the dual functions of growing plants and purifying greywater. It was designed by Folke Günther in Sweden.
Inspired by the "Sanitas wall" at Dr Gösta Nilsson's Sanitas farm project in Botswana, this technique makes efficient use of space by fulfilling two essential functions: vertical plant growing and purification of greywater. This system is also known as a living wall or green wall.
Design
The basic design is a wall of hollow concrete slabs, with compartments opening on one or both sides of the wall. The hollows are filled with inert material like gravel, expanded clay aggregate, perlite, or vermiculite. It is designed to let the water trickle over the longest possible treatment path along the length of the wall among the pebbles.
The water is brought in at the top, and percolates following a zigzag pattern inside the wall. As it does so, the plant roots grow among the inert material and extract nutrients from the water. A film of beneficial bacteria grows over the pebbles, releasing the nutrients in the percolating greywater. At the bottom of the wall a container collects the purified water, which can then be used for non-potable household use, for watering the garden, or it can be returned to the top of the wall.
Other considerations
Plants used: since the harvesting of the plants is part of the purification process, fast-growing, herbaceous crops are particularly suited for the Folkewall. Annual food crops are suitable; perennials such as trees and shrubs should be avoided.
Greywater: The water feeding the plants in the wall must be free of heavy metals and/or unsafe pollutants, notably human waste.
Advantages
Better use of greywater: most of the evaporation happens through the plants' leaves, which makes the method especially useful in arid climates.
More efficient use of the area. For example in greenhouses or other glazed areas where a wall is used as a greywater purific |
https://en.wikipedia.org/wiki/Content%20delivery%20platform | A content delivery platform (CDP) is a software as a service (SaaS) content service, similar to a content management system (CMS), that utilizes embedded software code to deliver web content. Instead of the installation of software on client servers, a CDP feeds content through embedded code snippets, typically via JavaScript widget, Flash widget or server-side Ajax.
Content delivery platforms are not content delivery networks, which are utilized for large web media and do not depend on embedded software code. A CDP is utilized for all types of web content, even text-based content.
Alternatively, a content delivery platform can be utilized to import a variety of syndicated content into one central location and then re-purposed for web syndication.
The term content delivery platform was coined by Feed.Us software architect John Welborn during a presentation to the Chicago Web Developers Association.
In late 2007, two blog comment services launched utilizing CDP-based services. Intense Debate and Disqus both employ JavaScript widgets to display and collect blog comments on websites.
See also
Web content management system
Viddler, YouTube, Ustream embeddable streaming video |
https://en.wikipedia.org/wiki/Shadow%20stack | In computer security, a shadow stack is a mechanism for protecting a procedure's stored return address, such as from a stack buffer overflow. The shadow stack itself is a second, separate stack that "shadows" the program call stack. In the function prologue, a function stores its return address to both the call stack and the shadow stack. In the function epilogue, a function loads the return address from both the call stack and the shadow stack, and then compares them. If the two records of the return address differ, then an attack is detected; the typical course of action is simply to terminate the program or alert system administrators about a possible intrusion attempt. A shadow stack is similar to stack canaries in that both mechanisms aim to maintain the control-flow integrity of the protected program by detecting attacks that tamper with a stored return address during an exploitation attempt.
Shadow stacks can be implemented by recompiling programs with modified prologues and epilogues, by dynamic binary rewriting techniques to achieve the same effect, or with hardware support. Unlike the call stack, which also stores local program variables, passed arguments, spilled registers and other data, the shadow stack typically just stores a second copy of a function's return address.
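The prologue/epilogue scheme can be illustrated with a small simulation (an illustrative Python sketch only — real shadow stacks operate on machine return addresses via compiler instrumentation or hardware, and all names here are hypothetical):

```python
class ShadowStackError(Exception):
    """Raised when the call-stack return address diverges from its shadow copy."""

def simulate_call(call_stack, shadow_stack, return_address):
    # Function prologue: store the return address on BOTH stacks.
    call_stack.append(return_address)
    shadow_stack.append(return_address)

def simulate_return(call_stack, shadow_stack):
    # Function epilogue: load both copies and compare before "returning".
    addr = call_stack.pop()
    shadow = shadow_stack.pop()
    if addr != shadow:
        # A mismatch means the on-stack copy was overwritten (e.g. by a
        # stack buffer overflow); terminate rather than jump to it.
        raise ShadowStackError(f"return address tampered: {addr:#x} != {shadow:#x}")
    return addr

call, shadow = [], []
simulate_call(call, shadow, 0x401000)
call[-1] = 0xDEADBEEF          # attacker overwrites the stored return address
try:
    simulate_return(call, shadow)
except ShadowStackError:
    print("attack detected")
```

Note that the shadow stack holds only return addresses, mirroring the point above that, unlike the call stack, it does not store local variables or arguments.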
Shadow stacks provide more protection for return addresses than stack canaries, which rely on the secrecy of the canary value and are vulnerable to non-contiguous write attacks. Shadow stacks themselves can be protected with guard pages or with information hiding, such that an attacker would also need to locate the shadow stack to overwrite a return address stored there.
Like stack canaries, shadow stacks do not protect stack data other than return addresses, and so offer incomplete protection against security vulnerabilities that result from memory safety errors.
In 2016, Intel announced upcoming hardware support for shadow stacks with their Control-flow Enforcement Tech |
https://en.wikipedia.org/wiki/Boletus%20nobilissimus | Boletus nobilissimus is an edible basidiomycete mushroom, of the genus Boletus in the family Boletaceae. Long considered a variety of the European Boletus edulis, it was recognized as a species in its own right in 2000, with a 2010 molecular study finding that it is most closely related to B. atkinsonii and B. quercophilus of Costa Rica, and then to B. barrowsii of the western United States. It is found in abundance in open oak forests after heavy rains and warm weather (30°C or more).
Morphology
Cap
The cap is 9.5 to 15 cm in diameter, initially convex in shape, becoming broadly convex to plane as it ages. The surface is dry and finely hairy, yellow brown to vinaceous brown, then dark brown. The thick flesh is white and does not turn blue when bruised.
Pores
The pores are white when young, becoming yellowish or brownish yellow to greenish olivaceous, and are unchanged when bruised.
Stipe
From 8 to 12 cm long; 1-3 cm thick, dry, solid; whitish or brownish; club shaped to bulbous with strongly raised reticulation.
Spore print
The spore print is yellowish-brown.
Spores
Ellipsoid to subfusiform, smooth, pale yellow, 11.5-13.5 x 4-5 µm.
Habitat and distribution
Forms mycorrhizae with hardwoods, especially oak and beech in the presence of pines; single, scattered, or gregarious, in summer and fall; collected in New England, New York, and other eastern parts of the United States, with distribution limits unknown.
https://en.wikipedia.org/wiki/Tinc%20%28protocol%29 | Tinc is an open-source, self-routing, mesh networking protocol and software implementation used for compressed and encrypted virtual private networks. It was started in 1998 by Guus Sliepen, Ivo Timmermans, and Wessel Dankers, and released as a GPL-licensed project.
Platforms
Tinc is available on Linux, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Mac OS X, Microsoft Windows, Solaris, iOS (jailbroken only), and Android, with full support for IPv6.
Future goals
The authors of Tinc have goals of providing a platform that is secure, stable, reliable, scalable, easily configurable, and flexible.
Embedded technologies
Tinc uses OpenSSL or LibreSSL as the encryption library and gives the options of compressing communications with zlib for "best compression" or LZO for "fast compression".
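The trade-off between the two compression options can be sketched with Python's standard zlib binding (an illustration only — the mapping of tinc's "best compression" setting to zlib level 9 is an assumption, and LZO has no stdlib binding, so only the zlib side is shown):

```python
import zlib

payload = b"packet payload " * 100   # repetitive data compresses well

fast = zlib.compress(payload, level=1)   # cheaper CPU, typically larger output
best = zlib.compress(payload, level=9)   # "best compression": smallest output

# Compression is lossless: decompressing restores the original payload.
assert zlib.decompress(best) == payload
print(len(payload), len(fast), len(best))
```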
Projects that use tinc
Freifunk has tinc enabled in their routers as of October 2006.
OpenWrt has an installable package for tinc.
OPNsense, an open source router and firewall distribution, has a plugin for Tinc.
pfSense has an installable package in the 2.3 release.
Tomato variants Shibby and FreshTomato include Tinc support.
NYC Mesh uses tinc to connect parts of the mesh over the public internet that would be otherwise out of range.
See also
stunnel, encrypts any TCP connection (single port service) over SSL
OpenVPN, an open source SSL VPN solution
VTUN, an open source SSL VPN solution that can bridge Ethernet |
https://en.wikipedia.org/wiki/History%20of%20cardiopulmonary%20resuscitation | The history of cardiopulmonary resuscitation (CPR) can be traced as far back as the literary works of ancient Egypt (c. 2686 – c. 2181 BCE). However, it was not until the 18th century that credible reports of cardiopulmonary resuscitation began to appear in the medical literature.
Mouth-to-mouth ventilation has been used for centuries as an element of CPR, but it fell out of favor in the late 19th century with the widespread adoption of manual resuscitative techniques such as the Marshall Hall method, Silvester's method, the Shafer method and the Holger Nielsen technique. The technique of mouth-to-mouth ventilation would not come back into favor until the late 1950s, after its "accidental rediscovery" by James Elam.
The modern elements of resuscitation for sudden cardiac arrest include CPR (consisting of ventilation of the lungs and chest compressions), defibrillation and emergency medical services (the means to bring these techniques to the patient quickly).
Earliest descriptions
The earliest references to CPR can be found in ancient Egyptian literature of the Old Kingdom of Egypt, in which Isis resurrected Osiris (her slain brother and husband) with the breath of life.
Other early references from the Iron Age can be found in the Bible. For example, according to the Genesis creation narrative, God breathed life into the nostrils of the first man. Later, according to the first Book of Kings, the prophet Elijah resuscitated a Phoenician boy in the city of Zarephath; this is the first instance of resurrection of the dead recorded in the Bible. In the second Book of Kings, Elisha (the disciple and protégé of Elijah) successfully performed mouth-to-mouth resuscitation on another apparently dead child, this time in the village of Shunem.
Renaissance
Burhan-ud-din Kermani, a physician in 15th century Persia, described his approach to the treatment of ghashy (cardiac and respiratory insufficiency), which involved moving the victim's arms and expanding and compressi |
https://en.wikipedia.org/wiki/Caldicellulosiruptor%20bescii | Caldicellulosiruptor bescii is a species of thermophilic, anaerobic cellulolytic bacteria. It was isolated from a geothermally heated freshwater pool in the Valley of Geysers on the Kamchatka Peninsula in Russia in 1990. The species was originally named Anaerocellum thermophilum, but reclassified in 2010, based on genomic data.
Biofuel production
C. bescii is commonly used to generate microbial biofuel. Although growth at temperatures up to 85 degrees Celsius has been noted, the optimum growth temperature is 75 degrees Celsius. C. bescii was originally placed in Anaerocellum thermophilum on the basis of growth physiology, cell wall type, and morphology; 16S rRNA sequencing later showed distinguishable differences responsible for its placement in the Caldicellulosiruptor clade. C. bescii is a Gram-positive, rod-shaped bacterium notable for its ability to use a variety of polymeric carbohydrates and di- and monosaccharides to produce H2, acetate, lactate, and trace amounts of ethanol.
C. bescii has been selected for study by Oak Ridge National Laboratory and the University of Georgia's Department of Genetics for its ability to degrade cellulose. Through plasmid recombination, the gene encoding lactate dehydrogenase has been deleted, causing the microbe to produce elevated levels of acetate and H2. Bi-functional acetaldehyde/alcohol dehydrogenase genes from Clostridium thermocellum allow for the conversion of sugars to ethanol. This genetically modified strain is able to convert biomass composed of switchgrass to ethanol.
Diversity
C. bescii has the highest growth temperature out of nine different isolates in the genus of Caldicellulosiruptor. It can grow at temperatures up to 90°C with an optimum growth temperature of 75°C. In 1990, C. bescii was described formally and the type strain was deposited as DSM 6725. Shortly after, C. bescii was classified as a member of a new genus Anaerocellum and named Anaerocellum thermophilum, strain Z-1320. |
https://en.wikipedia.org/wiki/Dream | A dream is a succession of images, ideas, emotions, and sensations that usually occur involuntarily in the mind during certain stages of sleep. Humans spend about two hours dreaming per night, and each dream lasts around 5 to 20 minutes, although the dreamer may perceive the dream as being much longer than this.
The content and function of dreams have been topics of scientific, philosophical and religious interest throughout recorded history. Dream interpretation, practiced by the Babylonians in the third millennium BCE and even earlier by the ancient Sumerians, figures prominently in religious texts in several traditions, and has played a lead role in psychotherapy. The scientific study of dreams is called oneirology. Most modern dream study focuses on the neurophysiology of dreams and on proposing and testing hypotheses regarding dream function. It is not known where in the brain dreams originate, if there is a single origin for dreams or if multiple regions of the brain are involved, or what the purpose of dreaming is for the body or mind.
The human dream experience and what to make of it has undergone sizable shifts over the course of history. Long ago, according to writings from Mesopotamia and Ancient Egypt, dreams dictated post-dream behaviors to an extent that was sharply reduced in later millennia. These ancient writings about dreams highlight visitation dreams, where a dream figure, usually a deity or a prominent forebear, commands the dreamer to take specific actions, and which may predict future events. Framing the dream experience varies across cultures as well as through time.
Dreaming and sleep are intertwined. Dreams occur mainly in the rapid-eye movement (REM) stage of sleep—when brain activity is high and resembles that of being awake. Because REM sleep is detectable in many species, and because research suggests that all mammals experience REM, linking dreams to REM sleep has led to conjectures that animals dream. However, humans dream during n |
https://en.wikipedia.org/wiki/Ichthyology | Ichthyology is the branch of zoology devoted to the study of fish, including bony fish (Osteichthyes), cartilaginous fish (Chondrichthyes), and jawless fish (Agnatha). According to FishBase, 33,400 species of fish had been described as of October 2016, with approximately 250 new species described each year.
Etymology
The word is derived from the Greek words ἰχθύς, ikhthus, meaning "fish"; and λογία, logia, meaning "to study".
History
The study of fish dates from the Upper Paleolithic Revolution (with the advent of "high culture"). The science of ichthyology was developed in several interconnecting epochs, each with various significant advancements.
The study of fish owes its origins to humans' desire to feed, clothe, and equip themselves with useful implements. According to Michael Barton, a prominent ichthyologist and professor at Centre College, "the earliest ichthyologists were hunters and gatherers who had learned how to obtain the most useful fish, where to obtain them in abundance, and at what times they might be the most available". Early cultures manifested these insights in abstract and identifiable artistic expressions.
1500 BC–40 AD
Informal, scientific descriptions of fish are represented within the Judeo-Christian tradition. The Old Testament laws of kashrut forbade the consumption of fish without scales or appendages. Theologians and ichthyologists believe that the apostle Peter and his contemporaries harvested the fish that are today sold in modern industry along the Sea of Galilee, presently known as Lake Kinneret. These fish include cyprinids of the genera Barbus and Mirogrex, cichlids of the genus Sarotherodon, and Mugil cephalus of the family Mugilidae.
335 BC–80 AD
Aristotle incorporated ichthyology into formal scientific study. Between 333 and 322 BC, he provided the earliest taxonomic classification of fish, accurately describing 117 species of Mediterranean fish. Furthermore, Aristotle documented anatomical and behavioral differe |
https://en.wikipedia.org/wiki/Union%20of%20Agricultural%2C%20Food%20Processing%20and%20Tobacco%20Workers%20of%20Yugoslavia | The Union of Agricultural, Food Processing and Tobacco Workers of Yugoslavia () was a trade union representing workers in several related industries in Yugoslavia.
The union was founded in 1959, when the Union of Agricultural Workers and Employees merged with the Union of Food and Tobacco Workers. Like both its predecessors, it affiliated to the Confederation of Trade Unions of Yugoslavia. By 1965, it claimed 373,000 members and was led by Ilija Tepavac.
In 1990, the union split in several smaller unions, including the Croatian Union of Workers in Agriculture, Food, Tobacco and Water Management. |
https://en.wikipedia.org/wiki/SPIKES | The SPIKES protocol is a method used in clinical medicine to break bad news to patients and families. As receiving bad news can cause distress and anxiety, clinicians need to deliver the news carefully. By using the SPIKES method for introducing and communicating information to patients and their families, it can aid in the presentation of the material. The SPIKES method is helpful in providing an organized manner of communication during situations that are typically complex and difficult to communicate. According to research related to the SPIKES method, important factors to consider when using this protocol involve empathy, acknowledgement and validation of feelings, providing information about intervention and treatment, and ensuring that the patient understands the news being delivered.
The protocol was first proposed in 2000 by Baile et al, in the context of oncology.
The name SPIKES is an acronym, where the letters stand for:
S: setting, i.e. setting up the consultation appropriately:
→ This entails never giving bad news over the phone or in a hallway. One may consider sitting in a private space or room with no distractions, so that the message being delivered remains the focus; televisions and cellphones should be turned off. Ensure that you face both the patient and the family, and establish a therapeutic alliance or connection through eye contact and physical touch, i.e. holding a hand or touching an arm.
P: perception, i.e. assessing the patient's perception of the situation
→ Begin by asking the patient what they believe is going on. This not only allows you to find out what they know about the situation, but also engages the patient. It shows them that what they think matters, and forms a starting point for how to proceed. It is important to listen to what the patient tells you, as this is the first opportunity to correct any misconceptions right away.
I: invitation, i.e. prompting the patient to invite the clinician to deliver the |
https://en.wikipedia.org/wiki/CryptoBuddy | CryptoBuddy is a simple software application for the encryption and compression of computer files to make them safe and secure. The application uses a 64-bit block cipher algorithm for encryption and a proprietary compression algorithm. The CryptoBuddy software is also used as part of the CryptoStick encryption device from Research Triangle Software, Inc. The software was released for public use on June 12, 2002. |
https://en.wikipedia.org/wiki/List%20of%20automation%20protocols | This is a list of communication protocols used for the automation of processes (industrial or otherwise), such as for building automation, power-system automation, automatic meter reading, and vehicular automation.
Process automation protocols
AS-i – Actuator-sensor interface, a low level 2-wire bus establishing power and communications to basic digital and analog devices
BSAP – Bristol Standard Asynchronous Protocol, developed by Bristol Babcock Inc.
CC-Link Industrial Networks – Supported by the CLPA
CIP (Common Industrial Protocol) – can be treated as application layer common to DeviceNet, CompoNet, ControlNet and EtherNet/IP
ControlNet – an implementation of CIP, originally by Allen-Bradley
DeviceNet – an implementation of CIP, originally by Allen-Bradley
DF-1 - used by Allen-Bradley ControlLogix, CompactLogix, PLC-5, SLC-500, and MicroLogix class devices
DNP3 - a protocol used to communicate by industrial control and utility SCADA systems
DirectNet – Koyo / Automation Direct proprietary, yet documented PLC interface
EtherCAT
Ethernet Global Data (EGD) – GE Fanuc PLCs (see also SRTP)
EtherNet/IP – IP stands for "Industrial Protocol". An implementation of CIP, originally created by Rockwell Automation
Ethernet Powerlink – an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG).
FINS, Omron's protocol for communication over several networks, including Ethernet.
FOUNDATION fieldbus – H1 & HSE
HART Protocol
HostLink Protocol, Omron's protocol for communication over serial links.
Interbus, Phoenix Contact's protocol for communication over serial links, now part of PROFINET IO
MECHATROLINK – open protocol originally developed by Yaskawa, supported by the MMA
MelsecNet, and MelsecNet II, /B, and /H, supported by Mitsubishi Electric.
Modbus PEMEX
Modbus Plus
Modbus RTU or ASCII or TCP
MPI – Multi Point Interface
OSGP – The Open Smart Grid Protocol, a widely used protocol for smart grid devices built on ISO/IEC 14908.1
OpenADR – Open Automated D |
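Several of the serial protocols in the list above, Modbus RTU among them, rely on a frame-level checksum so a receiver can reject corrupted frames. A hedged sketch of the CRC-16 used by Modbus RTU (the request bytes are hypothetical; the polynomial and init value are the standard CRC-16/MODBUS parameters):

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Appending the CRC low byte then high byte makes the CRC of the whole
# frame zero, which is how a receiver can validate an RTU frame in one pass.
frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])  # hypothetical read request
crc = crc16_modbus(frame)
wire = frame + bytes([crc & 0xFF, crc >> 8])
assert crc16_modbus(wire) == 0
```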
https://en.wikipedia.org/wiki/Symbol%20group | A symbol group is a form of franchise of convenience shops, found primarily in the United Kingdom and Ireland. They do not own or operate shops, but act as suppliers to independent shops which then trade under a common banner.
Unlike other forms of franchise, they have expanded primarily by selling their services to existing shops, rather than by actively developing new outlets. Examples of such franchises are Spar, Londis, Nisa Local and Centra.
Groups
Symbol groups include:
Centra
SuperValu
Mace
Spar
Londis - 1,800 shops (part of Booker Group)
Costcutter - 2,600 shops
Premier Stores - 3,400 shops (part of Booker Group)
Nisa - 2,400 shops
Booker Group is a wholly owned subsidiary of Tesco.
Market
In 2014, the Institute of Grocery Distribution (IGD) reported that the symbol group market is worth £15.5bn, with a 42% share of the UK convenience market through 17,080 shops.
In the 2010s there was significant consolidation in the sector, as Tesco purchased Booker and the Co-operative Group purchased Nisa.
See also
Co-op Food, which has a similar corporate structure, although it is not usually considered a symbol group.
Edeka, a German grocery chain which has a structure similar to a symbol group |
https://en.wikipedia.org/wiki/List%20of%20natural%20phenomena | A natural phenomenon is an observable event which is not man-made. Examples include: sunrise, weather, fog, thunder, tornadoes; biological processes, decomposition, germination; physical processes, wave propagation, erosion; tidal flow, and natural disasters such as electromagnetic pulses, volcanic eruptions, hurricanes and earthquakes.
History
Natural phenomena have been observed throughout history as countless events and features created by nature.
Physical phenomena
The act of:
Freezing
Boiling
Gravity
Magnetism
Gallery
Chemical phenomena
Oxidation
Fire
Rusting
Biological phenomena
Metabolism
Catabolism
Anabolism
Decomposition – by which organic substances are broken down into a much simpler form of matter
Fermentation – converts sugar to acids, gases and/or alcohol.
Growth
Birth
Death
Population decrease
Gallery
Astronomical phenomena
Supernova
Gamma ray bursts
Quasars
Blazars
Pulsars
Cosmic microwave background radiation.
Geological phenomena
Mineralogic phenomena
Lithologic phenomena
Rock types
Igneous rock
Igneous formation processes
Sedimentary rock
Sedimentary formation processes (sedimentation)
Quicksand
Metamorphic rock
Endogenic phenomena
Plate tectonics
Continental drift
Earthquake
Oceanic trench
Phenomena associated with igneous activity
Geysers and hot springs
Bradyseism
Volcanic eruption
Earth's magnetic field
Exogenic phenomena
Slope phenomena
Slump
Landslide
Weathering phenomena
Erosion
Glacial and peri-glacial phenomena
Glaciation
Moraines
Hanging valleys
Atmospheric phenomena
Impact phenomena
Impact crater
Coupled endogenic-exogenic phenomena
Orogeny
Drainage development
Stream capture
Gallery
Meteorological phenomena
Violent meteorological phenomena are called storms. Regular, cyclical phenomena include seasons and atmospheric circulation. Climate change is often semi-regular.
Atmospheric optical phenomena
Oceanographic
Oceanographic phenomena inc |
https://en.wikipedia.org/wiki/Antenna%20rotator | An antenna rotator (or antenna rotor) is a device used to change the orientation, within the horizontal plane, of a directional antenna. Most antenna rotators have two parts, the rotator unit and the controller. The controller is normally placed near the equipment which the antenna is connected to, while the rotator is mounted on the antenna mast directly below the antenna.
Rotators are commonly used in amateur radio and military communications installations. They are also used with TV and FM antennas, where stations are available from multiple directions, as the cost of a rotator is often significantly less than that of installing a second antenna to receive stations from multiple directions.
Rotators are manufactured for different sizes of antennas and installations. For example, a consumer TV antenna rotator has enough torque to turn a TV/FM or small ham antenna. These units typically cost around US$70.
Heavy-duty ham rotators are designed to turn extremely large, heavy, high frequency (shortwave) beam antennas, and cost hundreds or possibly thousands of dollars.
At the center of the accompanying picture is an AzEl rotator installation, so named for its control of both the azimuth and the elevation components of the direction of an antenna system or array. Such antenna configurations are used in, for example, amateur radio satellite or moon-bounce communications.
An open hardware AzEl rotator system is provided by the SatNOGS Groundstation project.
The Alliance Manufacturing Co. of Alliance, Ohio, and the Astatic Corporation of Conneaut, Ohio, manufactured popular radio and TV booster and rotary antenna systems. These products were heavily advertised for radio use in newspapers starting in the early 1940s, and for use with commercial television sets from 1949 into the 1960s. Cinécraft Productions, a pioneer in early TV advertising, produced six commercials for the Astatic Booster TV in 1949 and 112 for the Alliance Tenna-Roto |
https://en.wikipedia.org/wiki/Lufotrelvir | Lufotrelvir (PF-07304814) is an antiviral drug developed by Pfizer which acts as a 3CL protease inhibitor. It is a prodrug with the phosphate group being cleaved in vivo to yield the active agent PF-00835231. Lufotrelvir is in human clinical trials for the treatment of COVID-19, and shows good activity against COVID-19 including several variant strains, but unlike the related drug nirmatrelvir it is not orally active and must be administered by intravenous infusion, and so has been the less favoured candidate for clinical development overall.
See also
3CLpro-1
Bemnifosbuvir
Baloxavir marboxil
Favipiravir
GC376
GRL-0617
Molnupiravir
Remdesivir
Ribavirin
Rupintrivir
S-217622
Triazavirin |
https://en.wikipedia.org/wiki/Sergei%20Winogradsky | Sergei Nikolaievich Winogradsky (or Vinohradsky; published under the name of Sergius Winogradsky or M. S. Winogradsky from Ukrainian: Сергій Миколайович Виноградський Russian: Сергей Николаевич Виноградский 1 September 1856 – 25 February 1953) was a Ukrainian and Russian microbiologist, ecologist and soil scientist who pioneered the cycle-of-life concept. Winogradsky discovered the first known form of lithotrophy during his research with Beggiatoa in 1887. He reported that Beggiatoa oxidized hydrogen sulfide (H2S) as an energy source and formed intracellular sulfur droplets. This research provided the first example of lithotrophy, but not autotrophy.
His research on nitrifying bacteria would report the first known form of chemoautotrophy, showing how a lithotroph fixes carbon dioxide (CO2) to make organic compounds.
He is best known in school science as the inventor of the Winogradsky Column technique for the study of sediment microbes.
Biography
Winogradsky was born in Kyiv, Russian Empire, to a family of wealthy lawyers. Among his paternal ancestors were Cossack atamans (hetmans in Ukrainian), and on the maternal side, the hetman family Skoropadsky. In this early stage of his life, Winogradsky was "strictly devoted to the orthodox faith", though he later became irreligious.
After graduating from the 2nd Kyiv Gymnasium in 1873, he began studying law, but he entered the Imperial Conservatoire of Music in St Petersburg in 1875 to study piano. However, after two years of music training, he entered the University of Saint Petersburg in 1877 to study chemistry under Nikolai Menshutkin and botany under Andrei Sergeevich Famintzin.
He received a diploma in 1881 and stayed at the St. Petersburg University for a degree of Master of Science in botany in 1884. In 1885, he began work at the University of Strasbourg under the renowned botanist Anton de Bary; Winogradsky became renowned for his work on sulfur bacteria.
In 1888, he relocated to Zurich, where he began i |
https://en.wikipedia.org/wiki/Chronom%C3%A8tre%20of%20Louli%C3%A9 | The chronomètre is a precursor of the metronome. It was invented circa 1694 by Étienne Loulié to record the preferred tempo of pieces of music.
The Device
Musician Étienne Loulié collaborated with mathematician Joseph Sauveur on the education of Philippe, Duke of Chartres, who subsequently asked the pair to work together on a scientific study of acoustics sponsored by the Royal Academy of Science circa 1694.
To measure scientifically the number of beats per second caused by different dissonances, they used the "seconds pendulum" invented by Galileo earlier in the century. It was doubtless these experiments, on top of his lessons to Chartres, that gave Loulié the idea for his chronomètre, a precursor of the metronome.
In his Éléments (Paris: Ballard, 1696) — which resumes the lessons Loulié had given to Chartres and is dedicated to the prince — Loulié described this invention, complete with an engraving of the device. (A translation of Loulié's description is provided below.)
The device is basically a Galilean seconds pendulum disguised as a classical column. It consists of a six-foot-tall vertical "ruler" marked off in inches, with a little peg-hole at every inch. From the right-angle bar that protrudes at the capital of the Ionic column hangs a string with a plumb bob at the end. The length of the string — and therefore the speed of the pendulum swings — can be adjusted by moving the peg at the other end of the string up or down the ruler and inserting it into one peg-hole or another. The shorter the string, the more rapid the swings; the longer the string, the slower the swings.
To specify the tempo of a piece, the composer could henceforth test the tempo at a variety of peg holes and, having determined the right tempo, could mark at the top of a piece the note value that represented the musical beat, plus the number of the hole into which the peg had been inserted.
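The physics behind the peg holes is the simple-pendulum law: the swing time grows with the square root of the string length, so halving the length does not halve the swing time. A short illustrative calculation (not from Loulié's text; taking the pre-metric French inch, the pouce, as ≈2.707 cm is an assumption):

```python
import math

G = 9.81            # gravitational acceleration, m/s^2
INCH_M = 0.02707    # one French pouce in metres (assumed conversion)

def half_period(length_inches):
    """Time of one swing (half a full period) of a simple pendulum."""
    L = length_inches * INCH_M
    return math.pi * math.sqrt(L / G)

# A "seconds pendulum" beats once per second, at roughly 36-37 French inches;
# quartering the length (36 -> 9) only halves the swing time.
for inches in (9, 18, 36, 72):
    print(f"{inches:2d} in -> {half_period(inches):.2f} s per swing")
```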
Sauveur subsequently criticized the device because it was measured in inches, wh |
https://en.wikipedia.org/wiki/Bicoid%203%E2%80%B2-UTR%20regulatory%20element | The bicoid 3′-UTR regulatory element is an mRNA regulatory element that controls the gene expression of the bicoid protein in fruitfly Drosophila melanogaster.
The structured RNA element consists of four domains (denoted as II, III, IV and V) in the 3′UTR of the mRNA. It is essential for the correct transport and localisation of bicoid mRNA during oocyte and embryo differentiation, which has been studied most thoroughly in the development of Drosophila melanogaster (fruitfly) larvae. |
https://en.wikipedia.org/wiki/The%20Mathematics%20Enthusiast | The Mathematics Enthusiast is a triannual peer-reviewed open access academic journal covering undergraduate mathematics and mathematics education, including historical, philosophical, and cross-cultural perspectives on mathematics. It is hosted by ScholarWorks at the University of Montana. The journal was established in 2004 and its founding editor-in-chief is Bharath Sriraman. The journal exists as an independent entity in order to give authors full copyright over their articles, and is not affiliated with any commercial publishing companies.
Abstracting and indexing
The journal is abstracted and indexed in Academic Search Complete, Emerging Sources Citation Index, PsycINFO, and Scopus. |
https://en.wikipedia.org/wiki/Centre%20d%27immunologie%20de%20Marseille-Luminy | The Centre d’Immunologie de Marseille-Luminy (CIML) was founded in 1976 and has been described by AERES, an independent evaluation agency, as "without doubt one of the best immunology centers of excellence in Europe". The CIML addresses all areas of contemporary immunology; it is located in Marseille in the South of France.
Function
The institute has 17 research teams, with 250 staff including 185 scientists, students, and post-docs from 24 countries. It offers Masters and PhD programs.
The CIML has 90 academic collaborations and 21 industrial partners in France, Europe, and worldwide, and has formed several spin-offs, including: Innate Pharma, Ipsogen (Quiagen), and Immunotech (Beckman-Coulter).
The institute has published over 400 scientific publications in the last 5 years, including 145 in journals with an impact factor ≥ 10.
It is located on a science campus that is home to more than 1,500 researchers, 10,000 students, and 15 biotech companies.
Directors
François Kourilsky, 1976–1977
Michel Fougerau 1978-1980
François Kourilsky, 1981–1984
Pierre Golstein, 1985-1988
Bertrand Jordan, 1989–1990
Michel Pierres, 1991–1994
Bernard Malissen, 1995-2005
Jean Pierre Gorvel 2006-2008
Eric Vivier, 2008 - 2017
Advances in immunology made through discoveries at the CIML
Early work at the CIML centered on T cells. The study of their antigen receptors led to the discovery of chromosomal inversion during the formation of the T cell receptor (TCR). Researchers at the CIML also published the first nucleotide sequence of a gene encoding a human major histocompatibility complex (MHC) molecule and described how the TCR recognizes its MHC ligand. The functions of these T cells were also investigated, leading in particular to the identification of Granzyme A and GZMB (then called CTLA-1 and CTLA-3) and the demonstration of their role in the perforin-granzyme-based mechanism of T-cell-mediated cytotoxicity, and to the discovery of the second, Fas ligand/Fa
https://en.wikipedia.org/wiki/Small%20intestinal%20bacterial%20overgrowth | Small intestinal bacterial overgrowth (SIBO), also termed bacterial overgrowth, or small bowel bacterial overgrowth syndrome (SBBOS), is a disorder of excessive bacterial growth in the small intestine. Unlike the colon (or large bowel), which is rich with bacteria, the small bowel usually has fewer than 100,000 organisms per millilitre. Patients with bacterial overgrowth typically develop symptoms which may include nausea, bloating, vomiting, diarrhea, malnutrition, weight loss and malabsorption, which is caused by a number of mechanisms.
The diagnosis of bacterial overgrowth is made by a number of techniques, with the gold standard being an aspirate from the jejunum that grows in excess of 10⁵ bacteria per millilitre. Risk factors for the development of bacterial overgrowth include dysmotility; anatomical disturbances in the bowel, including fistulae, diverticula and blind loops created after surgery, and resection of the ileo-cecal valve; gastroenteritis-induced alterations to the small intestine; and the use of certain medications, including proton pump inhibitors.
Small bowel bacterial overgrowth syndrome is treated with an elemental diet or antibiotics, which may be given in a cyclic fashion to prevent tolerance to the antibiotics, sometimes followed by prokinetic drugs to prevent recurrence if dysmotility is a suspected cause.
Signs and symptoms
Symptoms traditionally linked to SIBO include bloating, diarrhea, constipation, and abdominal pain/discomfort. Steatorrhea may be seen in more severe cases.
Bacterial overgrowth can cause a variety of symptoms, many of which are also found in other conditions, making the diagnosis challenging at times. Many of the symptoms are due to malabsorption of nutrients due to the effects of bacteria which either metabolize nutrients or cause inflammation of the small bowel, impairing absorption. The symptoms of bacterial overgrowth include nausea, flatus, constipation, bloating, abdominal distension, abdominal pain or discom |
https://en.wikipedia.org/wiki/Honda%20U3-X | The Honda U3-X is an experimental self-balancing one-wheeled personal transporter shown in 2009.
History
It was unveiled by Honda's CEO on September 24, 2009, and it was announced that it would be shown at the 2009 Tokyo Motor Show. Time magazine called it one of the 50 best inventions of 2009.
In April 2010, Honda engineers did a short demonstration of two of the devices in Times Square, New York City.
In May 2010, Honda representatives demonstrated the U3-X at the Honda Collection Hall in Motegi, Tochigi, Japan.
Honda presented the Honda UNI-CUB, a successor to this device, at the Osaka Motor Show 2013.
Design and operation
Honda developed the U3-X with technology originally created for ASIMO, its bipedal humanoid robot project. Honda states that the "U" stands for unicycle and for universal. Its top speed is similar to that of the Toyota Winglet.
The Honda U3-X is a compact experimental device that fits between the rider's legs and provides free movement in all directions, just as in human walking: forward, backward, side-to-side, and diagonally. It uses the Honda Omni-Traction (HOT) drive system to move in any lateral direction. The system uses multiple small-diameter motorised wheels connected inline to form one large-diameter wheel. Rotating the large wheel moves the U3-X forward and backward, while rotating the small wheels moves it side-to-side. Combining these movements moves the U3-X diagonally.
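How the two wheel motions compose can be illustrated with a toy calculation (the function name and the 1:1 speed scaling are invented for illustration, not from Honda): equal longitudinal and lateral components yield a 45° diagonal.

```python
import math

def planar_motion(large_wheel: float, small_wheels: float):
    """Compose the longitudinal component (large wheel) and the lateral
    component (small inline wheels) into one planar velocity vector.
    Units and the 1:1 scaling are illustrative."""
    vx, vy = large_wheel, small_wheels
    speed = math.hypot(vx, vy)
    heading_deg = math.degrees(math.atan2(vy, vx))
    return speed, heading_deg

# Large wheel only -> straight ahead; both together -> diagonal
print(planar_motion(1.0, 0.0))  # heading 0 degrees
print(planar_motion(1.0, 1.0))  # heading 45 degrees
```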
Honda has not announced whether the vehicle will be offered for public sale, nor at what price.
Specifications
Honda stated the U3-X key specifications as follows:
Length
Width (stowed)
Height (stowed)
Weight
Top speed
Drive system: Omni Traction Drive System
Battery type: Lithium-ion battery
Operation time: ≈ 1 hour
See also
Toyota Winglet
Segway PT |
https://en.wikipedia.org/wiki/Reststrahlen%20effect | The reststrahlen effect (German: “residual rays”) is a reflectance phenomenon in which electromagnetic radiation within a narrow energy band cannot propagate within a given medium due to a change in refractive index concurrent with the specific absorbance band of the medium in question; this narrow energy band is termed the reststrahlen band.
As a result of this inability to propagate, normally incident reststrahlen-band radiation is strongly or totally reflected by that medium.
The energies at which reststrahlen bands occur vary and are particular to the individual compound.
Numerous physical attributes of a compound affect the appearance of the reststrahlen band, including the phonon band gap, particle or grain size, the presence of strongly absorbing species, and optically opaque bands in the infrared.
Appearance
"The term Reststrahlen was coined following the observation by Heinrich Rubens (more than a century ago) that repeated reflection of an infrared beam at the surface of a given material suppresses radiation at all wavelengths except for certain spectral intervals. The measured intensity for these special intervals (the Reststrahlen range) indicates a reflectance of up to 80% or even more, while the maximum reflectance due to infrared bands of dielectric materials are usually <10%. After four reflections, the intensity of the latter is reduced by a factor of 10⁻⁴ compared to the intensity of the incident radiation, while the light in the Reststrahlen range can maintain 40% of its original intensity by the time it reaches the detector. Obviously, this contrast increases with the number of reflections and explains the observation made by Rubens and the term Reststrahlen (residual rays) used to describe this spectral selection."
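The arithmetic in the quoted passage is easy to verify: with per-bounce reflectances of 10% and 80%, four reflections give the quoted factors.

```python
# Intensity surviving four successive reflections, using the per-bounce
# reflectances quoted above: <10% for an ordinary dielectric infrared band,
# ~80% for a reststrahlen band.
ordinary = 0.10 ** 4       # 1e-4 of the incident intensity
reststrahlen = 0.80 ** 4   # about 0.41, i.e. ~40% survives

print(f"ordinary band:     {ordinary:.1e}")
print(f"reststrahlen band: {reststrahlen:.2f}")
```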
Reststrahlen bands manifest in diffuse reflectance infrared absorption spectra as complete band reversal, or in infrared emission spectra as a minimum in emissivity.
Application
The reststrahlen eff |
https://en.wikipedia.org/wiki/Extract | An extract (essence) is a substance made by extracting a part of a raw material, often by using a solvent such as ethanol, oil or water. Extracts may be sold as tinctures, absolutes or in powder form.
The aromatic principles of many spices, nuts, herbs, fruits, etc., and some flowers, are marketed as extracts, among the best known of true extracts being almond, cinnamon, cloves, ginger, lemon, nutmeg, orange, peppermint, pistachio, rose, spearmint, vanilla, violet, rum, and wintergreen.
Extraction techniques
Most natural essences are obtained by extracting the essential oil from the feedstock, such as blossoms, fruit, and roots, or from intact plants through multiple techniques and methods:
Expression (juicing, pressing) involves physically pressing material from the feedstock, and is used when the oil is plentiful and easily obtained, as with citrus peels, olives, and grapes.
Absorption (steeping, decoction) extracts by soaking the material in a solvent, as with vanilla beans or tea leaves.
Maceration softens and degrades the material without heat, normally using oils, as for peppermint extract and winemaking.
Distillation (a separation process) creates a higher concentration of the extract by heating the material to a specific boiling point, then collecting and condensing the vapour, leaving the unwanted material behind, as for lavender extract.
The distinctive flavors of nearly all fruits are desirable adjuncts to many food preparations, but only a few are practical sources of sufficiently concentrated flavor extract, such as from lemons, oranges, and vanilla beans.
Artificial extracts
The majority of concentrated fruit flavors, such as banana, cherry, peach, pineapple, raspberry, and strawberry, are produced by combining a variety of esters with special oils. Suitable coloring is generally obtained by the use of dyes. Among the esters most generally employed are ethyl acetate and ethyl butyrate. The chief factors |
https://en.wikipedia.org/wiki/Ancient%20UNIX | Ancient UNIX is any early release of the Unix code base prior to Unix System III, particularly the Research Unix releases prior to and including Version 7 (the base for UNIX/32V as well as later developments of AT&T Unix).
After the publication of the Lions' book, work was undertaken to release earlier versions of the codebase. SCO first released the code under a limited educational license.
Later, in January 2002, Caldera International (now SCO Group) relicensed (but has not made available) several versions under the four-clause BSD license, namely:
Research Unix (early versions only):
Version 1 Unix
Version 2 Unix
Version 3 Unix
Version 4 Unix
Version 5 Unix
Version 6 Unix
Version 7 Unix
UNIX/32V
There has been no widespread use of the code, but it can be run on emulator systems; Version 5 Unix runs on the Nintendo Game Boy Advance using the SIMH PDP-11 emulator. Version 6 Unix provides the basis for the MIT xv6 teaching system, which updates that version to ANSI C and the x86 or RISC-V platforms.
The BSD vi text editor is based on code from the ed line editor in those early Unixes. Therefore, "traditional" vi could not be distributed freely, and various work-alikes (such as nvi) were created. Now that the original code is no longer encumbered, the "traditional" vi has been adapted for modern Unix-like operating systems.
SCO Group, Inc. was previously called Caldera International. As a result of the SCO Group, Inc. v. Novell, Inc. case, Novell, Inc. was found to not have transferred the copyrights of UNIX to SCO Group, Inc. Concerns have been raised regarding the validity of the Caldera license.
The Unix Heritage Society
The Unix Heritage Society was founded by Warren Toomey.
First edition Unix was restored to a usable state by a restoration team from the Unix Heritage Society in 2008. The restoration process started with paper listings of the source code which were in Unix PDP-11 assembly language. |
https://en.wikipedia.org/wiki/Open%20energy%20system%20models | Open energy system models are energy system models that are open source. However, some of them may use third party proprietary software as part of their workflows to input, process, or output data. Preferably, these models use open data, which facilitates open science.
Energy system models are used to explore future energy systems and are often applied to questions involving energy and climate policy. The models themselves vary widely in terms of their type, design, programming, application, scope, level of detail, sophistication, and shortcomings. For many models, some form of mathematical optimization is used to inform the solution process.
Energy regulators and system operators in Europe and North America began adopting open energy system models for planning purposes in the early 2020s. Open models and open data are increasingly being used by government agencies to guide the development of net-zero public policy as well (with examples indicated throughout this article). Companies and engineering consultancies are likewise adopting open models for analysis (again, see below).
General considerations
Organization
The open energy modeling projects listed here fall exclusively within the bottom-up paradigm, in which a model is a relatively literal representation of the underlying system.
Several drivers favor the development of open models and open data. There is an increasing interest in making public policy energy models more transparent to improve their acceptance by policymakers and the public. There is also a desire to leverage the benefits that open data and open software development can bring, including reduced duplication of effort, better sharing of ideas and information, improved quality, and wider engagement and adoption. Model development is therefore usually a team effort and constituted as either an academic project, a commercial venture, or a genuinely inclusive community initiative.
This article does not cover projects which simply make their sour |
https://en.wikipedia.org/wiki/Vlei | A vlei () is a shallow minor lake, mostly of a seasonal or intermittent nature. It even might refer to seasonal ponds or marshy patches where frogs and similar marsh dwellers breed. Commonly, vleis vary in their extent, or even in the presence or absence of water, according to the fall of rain or dryness of the season. In terms of water salinity, vleis may be freshwater, saltwater, or brackish. Over time a vlei may degrade into a salt pan or clay pan, such as Dead Vlei or Sossusvlei.
Ecology
Vleis of various types can be of considerable local ecological importance, harboring many endemic and migratory species.
Most vleis are too minor to be granted recognition in the form of a name. However, some major vleis are accorded names, for example Rondevlei and Zeekoevlei in the Cape Peninsula, which are permanent bodies of water. Indeed, Rondevlei is home to hippopotamus.
The term is the basis of various biological common names, such as
vlei rat, rodents in the genus Otomys
vleiroos (literally "marsh rose")
vleikuiken (literally "vlei chick")
vlei frog.
Etymology
The word is used predominantly in South Africa. It is an Afrikaans word derived from the Middle Dutch word for "valley". In Afrikaans, however, its meaning shifted to that of the shallow lake; the Afrikaans and modern Dutch word for "valley" is vallei.
The North American placename vlaie is cognate, having the same Middle Dutch derivation.
https://en.wikipedia.org/wiki/Chain-ladder%20method | The chain-ladder or development method is a prominent actuarial loss reserving technique.
The chain-ladder method is used in both the property and casualty and health insurance fields. Its intent is to estimate incurred but not reported claims and project ultimate loss amounts.
The primary underlying assumption of the chain-ladder method is that historical loss development patterns are indicative of future loss development patterns.
Methodology
According to Jacqueline Friedland's "Estimating Unpaid Claims Using Basic Techniques," there are seven steps to apply the chain-ladder technique:
Compile claims data in a development triangle
Calculate age-to-age factors
Calculate averages of the age-to-age factors
Select claim development factors
Select tail factor
Calculate cumulative claim development factors
Project ultimate claims
Age-to-age factors, also called loss development factors (LDFs) or link ratios, represent the ratio of loss amounts from one valuation date to another, and they are intended to capture growth patterns of losses over time. These factors are used to project where the ultimate amount of losses will settle.
Example
Firstly, losses (either reported or paid) are compiled into a triangle, where the rows represent accident years and the columns represent valuation dates. For example, the entry '43,169,009' represents loss amounts related to claims occurring in 1998, valued as of 24 months.
Next, age-to-age factors are determined by calculating the ratio of losses at subsequent valuation dates. From 24 months to 36 months, accident year 1998 losses increased from 43,169,009 to 45,568,919, so the corresponding age-to-age factor is 45,568,919 / 43,169,009 = 1.056. A "tail factor" is selected (in this case, 1.000) to project from the latest valuation age to ultimate.
Finally, averages of the age-to-age factors are calculated. Judgmental selections are made after observing several averages. The age-to-age factors are then multiplied together t |
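The procedure above can be sketched in a few lines of Python on a small invented triangle (the figures are illustrative, not the article's 1998 data); volume-weighted averages serve as the selected age-to-age factors, and the tail factor is taken as 1.000.

```python
# Cumulative reported losses by accident year (rows) and valuation age
# (columns: 12, 24, 36 months). All figures are invented for illustration.
triangle = {
    2020: [1000, 1500, 1650],   # fully developed
    2021: [1100, 1700],
    2022: [1200],
}

n = len(triangle)

# Volume-weighted age-to-age (link ratio) factors per development age
factors = []
for age in range(n - 1):
    num = sum(l[age + 1] for l in triangle.values() if len(l) > age + 1)
    den = sum(l[age] for l in triangle.values() if len(l) > age + 1)
    factors.append(num / den)

tail = 1.000  # selected tail factor: no development past the last age

# Cumulative development factors (CDFs) from each age to ultimate
cdfs, acc = [], tail
for f in reversed(factors):
    acc *= f
    cdfs.insert(0, acc)
cdfs.append(tail)

# Project ultimate losses: latest diagonal value times its CDF
ultimates = {yr: losses[-1] * cdfs[len(losses) - 1]
             for yr, losses in triangle.items()}
print(ultimates)
```

The least mature year (2022) receives the largest cumulative factor, which is the essence of the method's assumption that past development patterns persist.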
https://en.wikipedia.org/wiki/LivePerson | LivePerson is a global technology company that develops conversational commerce and AI software.
Headquartered in New York City, LivePerson is best known as the developer of the Conversational Cloud, a software platform that allows consumers to message with brands.
In 2018, the company announced its AI offering, allowing customers to create AI-powered chatbots to answer consumer messages, alongside human customer service staff.
History
LivePerson was founded in 1995 by Robert LoCascio. In April 2000, the company completed an initial public offering on the NASDAQ; in March 2011 its shares also began trading on the Tel Aviv Stock Exchange, where they are included in the TA-100 Index and the TA BlueTech Index.
Acquisitions
Products and services
The Conversational Cloud: a cloud-based customer messaging platform. Businesses can communicate with customers on web, mobile, and social channels.
LP Insights: analyzes customers' chat transcripts, as structured and unstructured data, to provide actionable insights.
See also
Tech companies in the New York metropolitan area |
https://en.wikipedia.org/wiki/ETen%20Chinese%20System | ETen Chinese System (倚天中文系統) was the most popular DOS-compatible traditional Chinese operating system before Chinese Windows 95.
DOS did not support Chinese characters, which are not in Extended ASCII. Many companies in Taiwan, such as Kuo Chiao (國喬) and Acer, developed their own IBM PC compatible traditional Chinese systems running on DOS, which were mutually incompatible with one another.
The developer of the ETen system, E-TEN, earned its early profits from sales of its plug-in-card-based Chinese system hardware. Its software-only Chinese systems were widely copied by traditional Chinese users and software pirates, which was difficult for E-TEN to control. Most traditional Chinese products of the time were compatible with the ETen system.
When Microsoft developed Chinese Windows 3.1 and Windows 95, traditional Chinese software developers and users shifted from DOS to Windows. The last version of the ETen system was a Chinese Windows-compatible version. ETen and other traditional Chinese systems are now used in a few DOS-based POS systems.
https://en.wikipedia.org/wiki/Relativistic%20disk | In general relativity, the relativistic disk expression refers to a class of axi-symmetric self-consistent solutions to Einstein's field equations corresponding to the gravitational field generated by axi-symmetric isolated sources. To find such solutions, one must correctly pose and jointly solve the 'outer' problem, a boundary value problem for the vacuum Einstein field equations whose solution determines the external field, and the 'inner' problem, whose solution determines the structure and dynamics of the matter source in its own gravitational field. Physically reasonable solutions must satisfy some additional conditions such as finiteness and positiveness of mass, a physically reasonable kind of matter, and finite geometrical size. Exact solutions describing relativistic static thin disks as their sources were first studied by Bonnor and Sackfield and by Morgan and Morgan. Subsequently, several classes of exact solutions corresponding to static and stationary thin disks have been obtained by different authors.
https://en.wikipedia.org/wiki/Sergio%20Focardi | Sergio Focardi (1932 – 22 June 2013) was an Italian physicist and professor emeritus at the University of Bologna.
He led the Bologna section of the Italian National Institute for Nuclear Physics and the Faculty of Mathematical, Physical and Natural Sciences at the University of Bologna.
In the early 1960s Focardi spent time at CERN in Geneva.
He was a member of the President's Board of the Italian Physical Society.
From 1992 he had been working on cold fusion with nickel-hydrogen reactors. From 2007 until his death, Focardi collaborated with inventor Andrea Rossi on the development of the Energy Catalyzer (E-Cat).
Studies on nickel-hydrogen exothermal systems
In the early 1990s Sergio Focardi, together with physicists Roberto Habel and Francesco Piantelli, started to develop a nickel-hydrogen exothermal reactor. The results of their research were presented in 1994 and published in the peer-reviewed scientific journal Il Nuovo Cimento A.
See also
Energy Catalyzer |
https://en.wikipedia.org/wiki/Emma%20Previato | Emma Previato (November 29, 1952 – June 29, 2022) was a professor of mathematics at Boston University. Her research concerned algebraic geometry and partial differential equations.
Career
Previato received her Ph.D. from Harvard University in 1983 under David Mumford. She was a faculty member at Boston University.
Previato founded Boston University's chapters of the Mathematical Association of America and of the Association for Women in Mathematics.
Awards and honors
In 2003, she received the Mathematical Association of America Northeastern Section's Award for Distinguished College or University Teaching of Mathematics for her work in and out of the classroom, especially her mentoring of students.
In 2012, Previato became a fellow of the American Mathematical Society.
Selected publications
Previato, Emma. Hyperelliptic quasiperiodic and soliton solutions of the nonlinear Schrödinger equation. Duke Mathematical Journal 52 (1985), no. 2, 329–377.
Adams, M. R.; Harnad, J.; Previato, E. Isospectral Hamiltonian flows in finite and infinite dimensions. I. Generalized Moser systems and moment maps into loop algebras. Communications in Mathematical Physics 117 (1988), no. 3, 451–500.
Eilbeck, J. C.; Enolski, V. Z.; Matsutani, S.; Ônishi, Y.; Previato, E. Abelian functions for trigonal curves of genus three. International Mathematics Research Notices 2008, no. 1, Art. ID rnm 140, 38 pp. |
https://en.wikipedia.org/wiki/MAC%20times | MAC times are pieces of file system metadata which record when certain events pertaining to a computer file occurred most recently. The events are usually described as "modification" (the data in the file was modified), "access" (some part of the file was read), and "metadata change" (the file's permissions or ownership were modified), although the acronym is derived from the "mtime", "atime", and "ctime" structures maintained by Unix file systems. Windows file systems do not update ctime when a file's metadata is changed, instead using the field to record the time when a file was first created, known as "creation time" or "birth time". Some other systems also record birth times for files, but there is no standard name for this metadata; ZFS, for example, stores birth time in a field called "crtime". MAC times are commonly used in computer forensics. The name Mactime was originally coined by Dan Farmer, who wrote a tool with the same name.
Modification time (mtime)
A file's modification time describes when the content of the file most recently changed. Because most file systems do not compare data written to a file with what is already there, if a program overwrites part of a file with the same data as previously existed in that location, the modification time will be updated even though the contents did not technically change.
Access time (atime)
A file's access time identifies when the file was most recently opened for reading. Access times are usually updated even if only a small portion of a large file is examined. A running program can maintain a file as "open" for some time, so the time at which a file was opened may differ from the time data was most recently read from the file.
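These timestamps can be inspected with Python's standard os.stat. The sketch below writes a temporary file, then makes a metadata-only change (chmod) so that, on Unix, ctime advances while mtime stays put; whether atime updates on reads depends on mount options such as relatime.

```python
import os
import tempfile
import time

# Create a file and capture its initial MAC times.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

st = os.stat(path)
print("mtime:", st.st_mtime)  # content last modified
print("atime:", st.st_atime)  # last read (update policy is mount-dependent)
print("ctime:", st.st_ctime)  # metadata change on Unix; creation time on Windows

time.sleep(0.01)
os.chmod(path, 0o600)         # metadata-only change: content untouched

st2 = os.stat(path)
print("mtime unchanged:", st2.st_mtime == st.st_mtime)
print("ctime advanced: ", st2.st_ctime >= st.st_ctime)
os.unlink(path)
```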
Because some computer configurations are much faster at reading data than at writing it, updating access times after every read operation can be very expensive. Some systems mitigate this cost by storing access times at a coarser granularity than other times; by rounding access |
https://en.wikipedia.org/wiki/Reverse%20migration%20%28immunology%29 | Within molecular and cell biology, reverse migration is the phenomenon in which some neutrophils migrate away from the inflammation site, against the chemokine gradient, during inflammation resolution. The activation of in vivo inflammatory pathways (such as hypoxia-inducible factor, HIF), altered this behavior of reverse migration. |
https://en.wikipedia.org/wiki/Digital%20materialization | Digital materialization (DM)
can loosely be defined as two-way direct communication or conversion between matter and information that enables people to exactly describe, monitor, manipulate and create any arbitrary real object. DM is a general paradigm alongside a specified framework that is suitable for computer processing and includes: holistic, coherent, volumetric modeling systems; symbolic languages that are able to handle infinite degrees of freedom and detail in a compact format; and the direct interaction and/or fabrication of any object at any spatial resolution without the need for “lossy” or intermediate formats.
DM systems possess the following attributes:
realistic - correct spatial mapping of matter to information
exact - exact language and/or methods for input from and output to matter
infinite - ability to operate at any scale and define infinite detail
symbolic - accessible to individuals for design, creation and modification
Such an approach can be applied not only to tangible objects but also to the conversion of things such as light and sound to and from information and matter. Systems to digitally materialize light and sound already largely exist (e.g. photo editing, audio mixing) and have been quite effective, but the representation, control and creation of tangible matter is poorly supported by computational and digital systems.
Commonplace computer-aided design and manufacturing systems currently represent real objects as "2.5 dimensional" shells. In contrast, DM proposes a deeper understanding and sophisticated manipulation of matter by directly using rigorous mathematics as complete volumetric descriptions of real objects. By utilizing technologies such as Function representation (FRep) it becomes possible to compactly describe and understand the surface and internal structures or properties of an object at an infinite resolution. Thus models can accurately represent matter across all scales making it possible to capture |
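A toy function-representation (FRep) sketch can make the contrast with boundary shells concrete (the shapes and helper names are invented for illustration): a solid is the point set where f(x, y, z) ≥ 0, so set operations reduce to min/max over the defining functions and the model carries volumetric detail at any sampling resolution.

```python
# FRep-style implicit solids: f(x, y, z) >= 0 means "inside".
def sphere(r):
    return lambda x, y, z: r * r - (x * x + y * y + z * z)

def intersect(f, g):
    # Set-theoretic intersection via min()
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

def subtract(f, g):
    # f minus g: inside f AND outside g
    return lambda x, y, z: min(f(x, y, z), -g(x, y, z))

# A hollow shell: a unit sphere with a smaller sphere carved out.
# Unlike a surface mesh, its internal structure is defined everywhere.
shell = subtract(sphere(1.0), sphere(0.5))

print(shell(0.0, 0.0, 0.0) >= 0)    # centre lies in the cavity -> False
print(shell(0.75, 0.0, 0.0) >= 0)   # inside the wall -> True
```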
https://en.wikipedia.org/wiki/Polyvinyl%20alcohol | Poly(vinyl alcohol) (PVOH, PVA, or PVAl) is a water-soluble synthetic polymer. It has the idealized formula [CH2CH(OH)]n. It is used in papermaking, textile warp sizing, as a thickener and emulsion stabilizer in polyvinyl acetate (PVAc) adhesive formulations, in a variety of coatings, and in 3D printing. It is colourless (white) and odorless. It is commonly supplied as beads or as solutions in water. Without an externally added crosslinking agent, PVA solution can be gelled through repeated freezing and thawing, yielding strong, ultrapure, biocompatible hydrogels that have been used for applications such as vascular stents, cartilage replacements, and contact lenses.
Although polyvinyl alcohol is often referred to by the acronym PVA, more generally PVA refers to polyvinyl acetate, which is commonly used as a wood adhesive, sealer and water soluble plastic.
Uses
PVA is used in a variety of medical applications because of its biocompatibility, low tendency for protein adhesion, and low toxicity. Specific uses include cartilage replacements, contact lenses, and eye drops. Polyvinyl alcohol is used as an aid in suspension polymerizations. Its largest application in China is its use as a protective colloid to make PVAc dispersions. In Japan its major use is the production of Vinylon fiber. This fiber is also manufactured in North Korea for self-sufficiency reasons, because no oil is required to produce it. Another application is photographic film.
PVA-based polymers are used widely in additive manufacturing. For example, 3D printed oral dosage forms demonstrate great potential in the pharmaceutical industry. It is possible to create drug-loaded tablets with modified drug-release characteristics where PVA is used as a binder substance.
Medically, PVA-based microparticles have received FDA 510(k) approval for use as embolisation particles for peripheral hypervascular tumors. It may also be used as the embolic agent in uterine fibroid embolization (UFE)
https://en.wikipedia.org/wiki/Black%20box%20group | In computational group theory, a black box group (black-box group) is a group G whose elements are encoded by bit strings of length N, and group operations are performed by an oracle (the "black box"). These operations include:
taking a product g·h of elements g and h,
taking an inverse g⁻¹ of element g,
deciding whether g = 1.
This class is defined to include both the permutation groups and the matrix groups. The upper bound on the order of G given by |G| ≤ 2^N shows that G is finite.
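As an illustrative sketch (the choice of the cyclic group Z/12 and the 8-bit encoding are arbitrary, not from the source), the three oracle operations can be written so that a caller only ever sees opaque bit strings:

```python
# Black-box oracle for Z/12: elements are 8-bit strings; the encoding is
# internal to the oracle and hidden from callers, who may only use
# product, inverse, and the identity test.
n, N = 12, 8

def encode(k):
    # Internal to the oracle: map an integer to a length-N bit string.
    return format(k % n, f"0{N}b")

def product(g, h):
    return encode(int(g, 2) + int(h, 2))

def inverse(g):
    return encode(-int(g, 2))

def is_identity(g):
    return int(g, 2) % n == 0

g = encode(5)
print(is_identity(product(g, inverse(g))))   # g * g^-1 = 1 -> True
```

The bound |G| ≤ 2^N follows immediately: there are only 2^N distinct bit strings of length N available to encode elements.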
Applications
The black box groups were introduced by Babai and Szemerédi in 1984. They were used as a formalism for (constructive) group recognition and property testing. Notable algorithms include Babai's algorithm for finding random group elements, the product replacement algorithm, and testing group commutativity.
Many early algorithms in CGT, such as the Schreier–Sims algorithm, require a permutation representation of a group and thus are not black box. Many other algorithms require finding element orders. Since there are efficient ways of finding the order of an element in a permutation group or in a matrix group (a method for the latter is described by Celler and Leedham-Green in 1997), a common recourse is to assume that the black box group is equipped with a further oracle for determining element orders.
See also
Implicit graph
Matroid oracle
Notes |
https://en.wikipedia.org/wiki/Virivore | Virivore (equivalently virovore) comes from the English prefix viro- meaning virus, derived from the Latin word for poison, and the suffix -vore from the Latin word vorare, meaning to eat, or to devour; therefore, a virivore is an organism that consumes viruses. Virivory is a well-described process in which organisms, primarily heterotrophic protists, but also some metazoans consume viruses.
Viruses are considered a top predator in marine environments, as they can lyse microbes and release nutrients (i.e. the viral shunt). Viruses also play an important role in the structuring of microbial trophic relationships and regulation of carbon flow.
Discovery
The first described virovore was a small marine flagellate that was shown to ingest and digest virus particles. Subsequently, numerous studies directly and indirectly demonstrated the consumption of virions. In 2022, DeLong et al. showed that over the course of two days the ciliates Halteria and Paramecium reduced chlorovirus plaque-forming units by up to two orders of magnitude, supporting the idea that nutrients were transferred from the viruses to consumers.
Furthermore, the Halteria population grew with chlorovirus as the only source of nutrition, and grew minimally in its absence. The Paramecium population, however, did not differ in growth when fed chloroviruses compared to the control group. Since the Paramecium population size remained constant in the presence of only chloroviruses, this indicated that Paramecium can maintain its population size, but not grow, using chlorovirus as the sole carbon source. These data showed that some grazers can grow on viruses, but this does not apply to all grazers. It was estimated that Halteria consumed between 10,000 and 1,000,000 viruses per day. It is known that small protists, such as Halteria and Paramecium, are consumed by zooplankton, indicating the movement of viral-derived energy and matter up the aquatic food web. This contradic
https://en.wikipedia.org/wiki/Luminous%20efficiency%20function | A luminous efficiency function or luminosity function represents the average spectral sensitivity of human visual perception of light. It is based on subjective judgements of which of a pair of different-colored lights is brighter, to describe relative sensitivity to light of different wavelengths. It is not an absolute reference to any particular individual, but is a standard observer representation of visual sensitivity of theoretical human eye. It is valuable as a baseline for experimental purposes, and in colorimetry. Different luminous efficiency functions apply under different lighting conditions, varying from photopic in brightly lit conditions through mesopic to scotopic under low lighting conditions. When not specified, the luminous efficiency function generally refers to the photopic luminous efficiency function.
The CIE photopic luminous efficiency function ȳ(λ) or V(λ) is a standard function established by the Commission Internationale de l'Éclairage (CIE) and standardized in collaboration with the ISO, and may be used to convert radiant energy into luminous (i.e., visible) energy. It also forms the central color matching function in the CIE 1931 color space.
Details
There are two luminous efficiency functions in common use. For everyday light levels, the photopic luminosity function best approximates the response of the human eye. For low light levels, the response of the human eye changes, and the scotopic curve applies. The photopic curve is the CIE standard curve used in the CIE 1931 color space.
The luminous flux (or visible power) in a light source is defined by the photopic luminosity function. The following equation calculates the total luminous flux in a source of light:

Φv = 683.002 lm/W × ∫ ȳ(λ) Φe,λ(λ) dλ (integrated over all wavelengths)

where
Φv is the luminous flux, in lumens;
Φe,λ is the spectral radiant flux, in watts per nanometre;
ȳ(λ), also known as V(λ), is the luminosity function, dimensionless;
λ is the wavelength, in nanometres.
Formally, the integral is the inner product of the luminosity func |
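The weighted integral above can be evaluated numerically. The sketch below is a rough illustration only: it approximates V(λ) with a Gaussian peaking at 555 nm (an assumed stand-in for the tabulated CIE curve, not the real data) and integrates a flat 1 W spectrum over 400 to 700 nm.

```python
import numpy as np

# Crude Gaussian stand-in for the CIE photopic luminosity function V(lambda),
# peaking at 1.0 near 555 nm (the real curve is tabulated, not Gaussian).
def V_approx(lam_nm):
    return np.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def luminous_flux(lam_nm, phi_e):
    """Phi_v = 683 lm/W * integral of V(lam) * Phi_e,lam dlam (trapezoid rule)."""
    y = V_approx(lam_nm) * phi_e
    return 683.0 * np.sum((y[1:] + y[:-1]) / 2 * np.diff(lam_nm))

# Example: 1 W of radiant power spread evenly over 400-700 nm.
lam = np.linspace(400.0, 700.0, 301)
spd = np.full_like(lam, 1.0 / 300.0)   # W/nm; integrates to 1 W
print(round(luminous_flux(lam, spd)))  # roughly 250-260 lm
```

Note that the same 1 W concentrated at 555 nm would yield the full 683 lm; spreading the power into wavelengths the eye weights poorly reduces the luminous flux accordingly.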
https://en.wikipedia.org/wiki/FHIPEP%20protein%20family | In molecular biology, the FHIPEP protein family (Flagellar/Hr/Invasion Proteins Export Pore family) consists of a number of proteins that constitute the type III secretion (or signal peptide-independent) pathway apparatus. This mechanism translocates proteins lacking an N-terminal signal peptide across the cell membrane in one step, as it does not require an intermediate periplasmic process to cleave the signal peptide. It is a common pathway amongst Gram-negative bacteria for secreting toxic and flagellar proteins.
The pathway apparatus comprises three components: two within the inner membrane and one within the outer. An FHIPEP protein is located within the inner membrane, although it is unknown which component it constitutes. FHIPEP proteins all have about 700 amino acid residues. Within the sequence, the N terminus is highly conserved and hydrophobic, suggesting that this terminus is embedded within the membrane, with 6-8 transmembrane (TM) domains, while the C terminus is less conserved and appears to be devoid of TM regions. It is possible that members of the FHIPEP family serve as pores for the export of specific proteins.
https://en.wikipedia.org/wiki/Grunsky%20matrix | In complex analysis and geometric function theory, the Grunsky matrices, or Grunsky operators, are infinite matrices introduced in 1939 by Helmut Grunsky. The matrices correspond to either a single holomorphic function on the unit disk or a pair of holomorphic functions on the unit disk and its complement. The Grunsky inequalities express boundedness properties of these matrices, which in general are contraction operators or in important special cases unitary operators. As Grunsky showed, these inequalities hold if and only if the holomorphic function is univalent. The inequalities are equivalent to the inequalities of Goluzin, discovered in 1947. Roughly speaking, the Grunsky inequalities give information on the coefficients of the logarithm of a univalent function; later generalizations by Milin, starting from the Lebedev–Milin inequality, succeeded in exponentiating the inequalities to obtain inequalities for the coefficients of the univalent function itself. The Grunsky matrix and its associated inequalities were originally formulated in a more general setting of univalent functions between a region bounded by finitely many sufficiently smooth Jordan curves and its complement: the results of Grunsky, Goluzin and Milin generalize to that case.
Historically the inequalities for the disk were used in proving special cases of the Bieberbach conjecture up to the sixth coefficient; the exponentiated inequalities of Milin were used by de Branges in the final solution.
A detailed exposition using these methods can be found in . The Grunsky operators and their Fredholm determinants are also related to spectral properties of bounded domains in the complex plane. The operators have further applications in conformal mapping, Teichmüller theory and conformal field theory.
Grunsky Matrix
If f(z) is a holomorphic univalent function on the unit disk, normalized so that f(0) = 0 and f′(0) = 1, the function
g(z) = f(z⁻¹)⁻¹ = 1/f(1/z)
is a non-vanishing univalent function on |z| > 1 having a simple pol |
https://en.wikipedia.org/wiki/5-lipoxygenase-activating%20protein | Arachidonate 5-lipoxygenase-activating protein also known as 5-lipoxygenase activating protein, or FLAP, is a protein that in humans is encoded by the ALOX5AP gene.
Function
FLAP is necessary for the activation of 5-lipoxygenase and therefore for the production of leukotrienes, 5-hydroxyeicosatetraenoic acid, 5-oxo-eicosatetraenoic acid, and specialized pro-resolving mediators of the lipoxin and resolvin classes. It is an integral protein within the nuclear membrane. FLAP is necessary for the synthesis of leukotrienes, lipid mediators of inflammation involved in respiratory and cardiovascular diseases. FLAP functions as a membrane anchor for 5-lipoxygenase and as an arachidonic acid-binding protein. How FLAP activates 5-lipoxygenase is not completely understood, but there is a physical interaction between the two. Each FLAP monomer consists of 4 transmembrane alpha helices, and three monomers assemble into a trimer forming a barrel. The barrel is about 60 Å high and 36 Å wide.
Clinical significance
Leukotrienes, which require the FLAP protein to be synthesized, have an established pathological role in allergic and respiratory diseases. Animal and human genetic evidence suggests they may also have an important role in atherosclerosis, myocardial infarction, and stroke. The structure of FLAP provides a tool for the development of novel therapies for respiratory and cardiovascular diseases and for the design of focused experiments to probe the cell biology of FLAP and its role in leukotriene biosynthesis.
Inhibitors
AM-679
MK-886
Veliflapon (BAY X1005) |
https://en.wikipedia.org/wiki/Dark0de | dark0de, also known as Darkode, is a cybercrime forum and black marketplace described by Europol as "the most prolific English-speaking cybercriminal forum to date". The site, which was launched in 2007, serves as a venue for the sale and trade of hacking services, botnets, malware, stolen personally identifiable information, credit card information, hacked server credentials, and other illicit goods and services.
History
In early 2013, the site came under a large DDoS attack and moved from bulletproof hosting provider Santrex to Off-shore, the latter a participant in the Stophaus campaign against Spamhaus. The site has had an ongoing feud with security researcher Brian Krebs.
In April 2014, various site users were attacked via the Heartbleed exploit, which gave the attackers access to private areas of the site.
Takedown
The forum was the target of Operation Shrouded Horizon, an international law enforcement effort led by the Federal Bureau of Investigation which culminated in the site's seizure and the arrests of several of its members in July 2015. According to the FBI, the case is "believed to be the largest-ever coordinated law enforcement effort directed at an online cyber criminal forum". Announcing the 12 charges issued in the United States, U.S. Attorney David Hickton called the site "a cyber hornet's nest of criminal hackers" and "the most sophisticated English-speaking forum for criminal computer hackers in the world" which "represented one of the gravest threats to the integrity of data on computers in the United States".
On Monday, September 21, 2015, Daniel Placek appeared on the podcast Radiolab discussing his role in starting Darkode and his eventual cooperation with the United States government in its efforts to take down the site.
Revivals
Only two weeks after the announcement of the raid, the site reappeared with increased security, employing blockchain-based authentication and operating on the Tor anonymity network. Researchers from MalwareTech suggested the relau |
https://en.wikipedia.org/wiki/John%20Endler | John Arthur Endler (born 1947) is a Canadian ethologist and evolutionary biologist noted for his work on the adaptation of vertebrates to their unique perceptual environments, and the ways in which animal sensory capacities and colour patterns co-evolve.
Education and early life
Born in Canada, Endler took his PhD degree at the University of Edinburgh in Scotland.
Career and research
After his PhD, Endler worked at Princeton University (1973-1979), the University of Utah (1979-1986), the University of California, Santa Barbara (1986-2006), the James Cook University of North Queensland, Australia and is currently working at Deakin University in Victoria, Australia. In 2006 he was appointed as an Anniversary Professor of Animal Behaviour in the School of Psychology at the University of Exeter, England. In 2007 he was elected as a Fellow of the American Academy of Arts and Sciences. In 2009 he joined the Centre for Integrative Biology at Deakin University (Australia) where he is an Alfred Deakin Professor. In 2012 he was elected as a Fellow of the Australian Academy of Science. In April 2020 Endler was elected a Fellow of the Royal Society (FRS).
Endler has carried out extensive work on guppies, including in 1975 rediscovering the species now known to aquarists as Endler's guppy, in his honour; this brightly coloured fish is sometimes regarded as a geographical variant of the common guppy Poecilia reticulata, but is now usually treated as a separate species, Poecilia wingei. Although it had been recorded before Endler's discovery, it had not been properly studied and documented. Among biologists, however, he is better known for his experimental work on inducing small-scale evolution in the laboratory. In addition to his work on guppies he has studied many other species, including investigating the bower-building behaviour of bowerbirds in North Queensland, Australia.
In 2008 the European Research Council announced that he was among the first cohort of Life Sci |
https://en.wikipedia.org/wiki/Annals%20of%20Physics | Annals of Physics is a monthly peer-reviewed scientific journal covering all aspects of physics. It was established in 1957 and is published by Elsevier. The editor-in-chief is Neil Turok (University of Edinburgh School of Physics and Astronomy).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.73. |
https://en.wikipedia.org/wiki/Hartogs%27s%20theorem%20on%20separate%20holomorphicity | In mathematics, Hartogs's theorem is a fundamental result of Friedrich Hartogs in the theory of several complex variables. Roughly speaking, it states that a 'separately analytic' function is continuous. More precisely, if F is a function of n complex variables which is analytic in each variable zi, 1 ≤ i ≤ n, while the other variables are held constant, then F is a continuous function.
A corollary is that the function F is then in fact an analytic function in the n-variable sense (i.e. that locally it has a Taylor expansion). Therefore, 'separate analyticity' and 'analyticity' are coincident notions, in the theory of several complex variables.
Starting with the extra hypothesis that the function is continuous (or bounded), the theorem is much easier to prove and in this form is known as Osgood's lemma.
There is no analogue of this theorem for real variables. If we assume that a function
f : R² → R
is differentiable (or even analytic) in each variable separately, it is not true that f will necessarily be continuous. A counterexample in two dimensions is given by
f(x, y) = xy / (x² + y²).
If in addition we define f(0, 0) = 0, this function has well-defined partial derivatives in x and y at the origin, but it is not continuous at the origin. (Indeed, the limits along the lines y = x and y = −x are not equal, so there is no way to extend the definition of f to include the origin and have the function be continuous there.)
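The standard two-variable counterexample f(x, y) = xy/(x² + y²), with f(0, 0) = 0, can be checked numerically: approaching the origin along y = x gives 1/2 at every scale, while along y = −x it gives −1/2, so no continuous extension exists.

```python
def f(x, y):
    # Separately analytic in x and in y, but discontinuous at the origin.
    return 0.0 if x == 0 and y == 0 else x * y / (x**2 + y**2)

# Approach the origin along y = x and y = -x: the values never converge
# to a common limit, so f cannot be made continuous at (0, 0).
for t in (1e-1, 1e-4, 1e-8):
    print(f(t, t), f(t, -t))  # 0.5 -0.5 at every scale
```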
https://en.wikipedia.org/wiki/Blagger%20%28video%20game%29 | Blagger is a platform game created by Antony Crowther for the Commodore 64 and released by Alligata in 1983. A BBC Micro port was released the same year, followed by ports for the Acorn Electron, Amstrad CPC (through Amsoft) and MSX in 1984, the Commodore 16 and Plus/4 in 1985, and the Amstrad PCW in 1987. In some countries the game was released under the name Gangster.
Son of Blagger was released in 1984, with a third and final title, Blagger Goes to Hollywood, released in 1985. Another sequel, known as New Blagger but developed as Blagger 2 and a direct continuation of the original, was produced in 1985 but not released.
Gameplay
The game is divided into a series of single-screen levels. The goal of the player on each screen is to manipulate Blagger, a burglar, to collect the scattered keys and then reach the safe. The keys must be collected and the safe opened in a limited amount of time. Blagger can walk left and right, and jump left, right and up. The jumping action is in a fixed pattern and cannot be altered once initiated. Gameplay involves learning the best order in which to collect the keys, and good timing of movement and jumping.
Not all platforms are permanent; some decay once Blagger has stepped on them. Other platforms serve to move Blagger in a particular direction. Blagger will die if he touches cacti, one of the moving enemy obstacles of the level, or if he falls more than a certain distance. The moving enemies vary from level to level, and include cars, aliens, mad hatters, and giant mouths. The movement of the enemies is in a fixed pattern, generally travelling from one point to another and back again.
The BBC and Electron versions feature floating "RG"s as hazards (R.G. being the initials of the programmer of those versions, R.S. Goodley).
Reception |
https://en.wikipedia.org/wiki/Egalitarian%20item%20allocation | Egalitarian item allocation, also called max-min item allocation, is a fair item allocation problem in which the fairness criterion follows the egalitarian rule. The goal is to maximize the minimum value of an agent. That is, among all possible allocations, the goal is to find an allocation in which the smallest value of an agent is as large as possible. In case there are two or more allocations with the same smallest value, then the goal is to select, from among these allocations, the one in which the second-smallest value is as large as possible, and so on (by the leximin order). Therefore, an egalitarian item allocation is sometimes called a leximin item allocation.
The special case in which the value of each item j to each agent is either 0 or some constant vj is called the Santa Claus problem: Santa Claus has a fixed set of gifts, and wants to allocate them among children such that the least-happy child is as happy as possible.
Some related problems are:
Multiway number partitioning with the max-min objective corresponds to a special case in which all agents have the same valuations. An even more special case is the partition problem, which corresponds to the case of two agents. Even this special case is NP-hard in general.
Unrelated-machines scheduling is a dual problem, in which the goal is to minimize the maximum value.
Maximin share item allocation is a different problem, in which the goal is not to attain an optimal solution, but rather to find any solution in which each agent receives a value above a certain threshold.
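For tiny instances, the leximin objective can be computed by brute force: enumerate every assignment of items to agents and keep the one whose sorted value vector is lexicographically largest. The sketch below is exponential-time Python with a made-up valuation matrix, intended only to make the leximin order concrete.

```python
from itertools import product

def leximin_allocation(valuations):
    """Brute-force egalitarian (leximin) allocation.

    valuations[i][j] = value of item j to agent i. Tries every assignment
    of items to agents; keeps the one whose sorted value vector is
    lexicographically largest (max-min first, then second-smallest, ...).
    Exponential time: only suitable for very small instances.
    """
    n_agents, n_items = len(valuations), len(valuations[0])
    best, best_key = None, None
    for assign in product(range(n_agents), repeat=n_items):
        values = [0] * n_agents
        for item, agent in enumerate(assign):
            values[agent] += valuations[agent][item]
        key = sorted(values)  # compare smallest value first
        if best_key is None or key > best_key:
            best, best_key = assign, key
    return best, best_key

# Santa Claus-style example: 2 children, 3 gifts.
assign, values = leximin_allocation([[5, 3, 2], [4, 4, 1]])
print(assign, values)  # (0, 1, 1) [5, 5]
```

Here the leximin optimum gives item 0 to agent 0 and items 1 and 2 to agent 1, so both agents receive value 5; any other split leaves some agent with less.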
Normalization
There are two variants of the egalitarian rule:
absolute egalitarian (or absolute leximin), where the maximization uses the nominal values of the agents;
relative egalitarian (or relative leximin) where the maximization uses their normalized values - bundle value divided by value of all items.
The two rules are equivalent when the agents' valuations are already normalized, that is, all agents assign the same val |
https://en.wikipedia.org/wiki/Gennadii%20Rubinstein | Gennadii Shlemovich Rubinstein (Russian: Геннадий Шлемович Рубинштейн) was a Russian mathematician. His research focused on mathematical programming and operations research. His name is associated with the Kantorovich–Rubinstein metric, also commonly known as the Wasserstein distance, used in optimal transport.
Gennadii Rubinstein received his doctorate from St. Petersburg State University in 1956, under the supervision of Leonid V. Kantorovich.
Alternate form of the first name: Gennady.
Alternate forms of the last name: Rubinšteĭn, Rubinshtein.
Biography
Selected publications
See also
List of Russian mathematicians |
https://en.wikipedia.org/wiki/Physisorption | Physisorption, also called physical adsorption, is a process in which the electronic structure of the atom or molecule is barely perturbed upon adsorption.
Overview
The fundamental interacting force of physisorption is the van der Waals force. Even though the interaction energy is very weak (~10–100 meV), physisorption plays an important role in nature. For instance, the van der Waals attraction between surfaces and foot-hairs of geckos (see Synthetic setae) provides the remarkable ability to climb up vertical walls. Van der Waals forces originate from the interactions between induced, permanent or transient electric dipoles.
In comparison with chemisorption, in which the electronic structure of bonding atoms or molecules is changed and covalent or ionic bonds form, physisorption does not result in changes to the chemical bonding structure. In practice, the categorisation of a particular adsorption as physisorption or chemisorption depends principally on the binding energy of the adsorbate to the substrate, with physisorption being far weaker on a per-atom basis than any type of connection involving a chemical bond.
Modeling by image charge
To give a simple illustration of physisorption, we can first consider an adsorbed hydrogen atom in front of a perfect conductor, as shown in Fig. 1. A nucleus with positive charge is located at R = (0, 0, Z), and the position coordinate of its electron, r = (x, y, z) is given with respect to the nucleus. The adsorption process can be viewed as the interaction between this hydrogen atom and its image charges of both the nucleus and electron in the conductor. As a result, the total electrostatic energy is the sum of attraction and repulsion terms:
The first term is the attractive interaction of nucleus and its image charge, and the second term is due to the interaction of the electron and its image charge. The repulsive interaction is shown in the third and fourth terms arising from the interaction between the nucleus and |
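A much simpler single-charge version of this picture (not the full four-term hydrogen-atom expression described above) is the image interaction energy of one point charge q at distance d from a perfect conductor, V(d) = −q²/(16πε₀d) in SI units. The Python sketch below evaluates it for an electron at a few assumed distances.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C

def image_energy_eV(d_m, q=E):
    """Image-charge energy V(d) = -q^2 / (16*pi*eps0*d), returned in eV."""
    return -q**2 / (16 * math.pi * EPS0 * d_m) / E

# The attraction weakens slowly (~1/d) with distance from the surface.
for d_nm in (0.5, 1.0, 3.0):
    print(d_nm, round(image_energy_eV(d_nm * 1e-9), 3))  # about -0.36 eV at 1 nm
```

This bare image term overestimates real physisorption energies at short range, since it omits the repulsive terms and screening treated in the full model above; it only illustrates the attractive image contribution.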
https://en.wikipedia.org/wiki/2-Methylfuran | 2-Methylfuran, also known by the older name sylvane, is a flammable, water-insoluble liquid with a chocolate odor, found naturally in myrtle and Dutch lavender, used as a FEMA GRAS flavoring substance, and with potential for use in alternative fuels.
Manufacture
2-Methylfuran is an article of commerce (a chemical intermediate) and is normally manufactured by catalytic hydrogenolysis of furfuryl alcohol, or via a hydrogenation-hydrogenolysis sequence from furfural in the vapor phase.
See also
Swiftfuel |
https://en.wikipedia.org/wiki/Distributed%20temperature%20sensing | Distributed temperature sensing systems (DTS) are optoelectronic devices which measure temperatures by means of optical fibres functioning as linear sensors. Temperatures are recorded along the optical sensor cable, thus not at points, but as a continuous profile. A high accuracy of temperature determination is achieved over great distances. Typically, DTS systems can locate the temperature to a spatial resolution of 1 m with accuracy to within ±1 °C at a temperature resolution of 0.01 °C. Measurement distances of greater than 30 km can be monitored and some specialised systems can provide even tighter spatial resolutions. Thermal changes along the optical fibre cause a local variation in the refractive index, which in turn leads to the inelastic scattering of the light propagating through it. Heat is held in the form of molecular or lattice vibrations in the material. Molecular vibrations at high frequencies (10 THz) are responsible for Raman scattering. Low frequency vibrations (10–30 GHz) cause Brillouin scattering. Energy is exchanged between the light travelling through the fibre and the material itself, causing a frequency shift in the incident light. This frequency shift can then be used to measure temperature changes along the fibre.
Measuring principle—Raman effect
Physical measurement dimensions, such as temperature or pressure and tensile forces, can affect glass fibres and locally change the characteristics of light transmission in the fibre. As a result of the damping of the light in the quartz glass fibres through scattering, the location of an external physical effect can be determined so that the optical fibre can be employed as a linear sensor.
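In Raman-based DTS, temperature is commonly recovered from the ratio of anti-Stokes to Stokes backscatter intensities via the textbook relation R(T) = (λS/λAS)⁴ · exp(−hcΔν̃/kBT), neglecting differential fibre attenuation. The sketch below uses a 1064 nm pump and the roughly 440 cm⁻¹ Raman shift of silica as illustrative, assumed values.

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants
DELTA_NU = 4.4e4          # assumed Raman shift of silica, ~440 cm^-1, in m^-1
LAM_S, LAM_AS = 1.116e-6, 1.016e-6  # Stokes/anti-Stokes bands, 1064 nm pump

def antistokes_stokes_ratio(T):
    """Textbook single-ended Raman DTS relation; attenuation ignored."""
    return (LAM_S / LAM_AS) ** 4 * math.exp(-H * C * DELTA_NU / (KB * T))

def temperature_from_ratio(R):
    """Invert the relation above to recover absolute temperature (K)."""
    return H * C * DELTA_NU / (KB * math.log((LAM_S / LAM_AS) ** 4 / R))

R = antistokes_stokes_ratio(300.0)
print(round(temperature_from_ratio(R), 3))  # recovers 300.0 K
```

Because the exponent hcΔν̃/kB corresponds to only a few hundred kelvin for silica, the anti-Stokes intensity is strongly temperature dependent, which is what makes the ratio usable as a thermometer; real instruments additionally calibrate out the wavelength-dependent fibre loss.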
Optical fibres are made from doped quartz glass. Quartz glass is a form of silicon dioxide (SiO2) with amorphous solid structure. Thermal effects induce lattice oscillations within the solid. When light falls onto these thermally excited molecular oscillations, an interaction occurs between the light particles (ph |
https://en.wikipedia.org/wiki/Rohit%20Pappu | Rohit Pappu is an Indian-born computational and theoretical biophysicist. He is the Gene K. Beare Distinguished Professor of Engineering and the director of the Center for Science & Engineering of Living Systems (CSELS) at Washington University in St. Louis.
Education and career
Pappu did his undergraduate work in physics, mathematics, and electronics at the St. Joseph's College, Bangalore. He received an M.S. in solid-state physics in 1992 and Ph.D. in biological physics in 1996, both at Tufts University where he worked on theoretical aspects of protein folding. He spent two years as a postdoctoral fellow at Washington University in St. Louis with Jay Ponder and then from 1998 to 2001 he was a postdoctoral fellow with George Rose at Johns Hopkins University. He joined Washington University in St. Louis as an Assistant Professor in Biomedical Engineering in 2001, becoming Associate Professor in 2007 and Professor in 2011. He was inducted as the Edwin H. Murty Professor of Engineering in 2015, and as the Gene K. Beare Distinguished Professor in the Fall of 2021.
Research
Pappu uses theoretical, computational, and experimental approaches to study intrinsically disordered proteins in the context of normal cellular function and neurodegenerative diseases (notably Huntington's disease). He has made major contributions to understanding the driving forces associated with protein aggregation, and how the linear amino acid sequence of a disordered protein determines its conformational behaviour, with a particular focus on the role of polar and charged amino acids. With postdoctoral fellow Rahul Das, Pappu discovered that the patterning of charged residues has a major impact on the conformational ensemble of a disordered protein. More recently, his work has focussed on the polymer physics of biological phase transitions to understand the theoretical and molecular underpinnings that drive intracellular phase separation.
Pappu was named a fellow of the American Associati |
https://en.wikipedia.org/wiki/Provider-independent%20address%20space | A provider-independent address space (PI) is a block of IP addresses assigned by a regional Internet registry (RIR) directly to an end-user organization. The user must contract with a local Internet registry (LIR) through an Internet service provider to obtain routing of the address block within the Internet.
Provider-independent addresses offer end-users the opportunity to change service providers without renumbering of their networks and to use multiple access providers in a multi-homed configuration. However, provider-independent blocks may increase the burden on global routers, as the opportunity for efficient route aggregation through Classless Inter-Domain Routing (CIDR) may not exist.
IPv4 assignments
One of the RIRs is the RIPE NCC. The RIPE NCC can no longer assign IPv4 provider-independent (PI) address space, as it is now using the last of the IPv4 address space that it holds. IPv4 address space from this last pool is allocated according to section 5.1 of "IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region". IPv4 provider-aggregatable (PA) address space can only be allocated to RIPE NCC members.
IPv6 assignments
In April 2009 RIPE accepted a policy proposal of January 2006 to assign provider-independent IPv6 prefixes. Assignments are taken from the address range and have a minimum size of a prefix.
See also
Multihoming |
https://en.wikipedia.org/wiki/Helmut%20Gr%C3%B6ttrup | Helmut Gröttrup (12 February 1916 – 4 July 1981) was a German engineer, rocket scientist and inventor of the smart card. During World War II, he worked in the German V-2 rocket program under Wernher von Braun. From 1946 to 1950 he headed a group of 170 German scientists who were forced to work for the Soviet rocketry program under Sergei Korolev. After returning to West Germany in December 1953, he developed data processing systems, contributed to early commercial applications of computer science and coined the German term "Informatik". In 1967 Gröttrup invented the smart card as a "forgery-proof key" for secure identification and access control (ID card) or storage of a secure key, also including inductive coupling for near-field communication (NFC). From 1970 he headed a start-up division of Giesecke+Devrient for the development of banknote processing systems and machine-readable security features.
Education
Helmut Gröttrup's father Johann Gröttrup (1881 – 1940) was a mechanical engineer. He worked full-time at the Bund der technischen Angestellten und Beamten (Butab), a federation for technical staff and officials of the social democratic trade union in Berlin. His mother Thérèse Gröttrup (1894 – 1981), born Elsen, was active in the peace movement. Johann Gröttrup lost his job in 1933 when the Nazi Party came into power.
From 1935 to 1939 Helmut Gröttrup studied applied physics at the Technical University of Berlin and wrote his thesis with Professor Hans Geiger, the co-inventor of the Geiger counter. He also worked for Manfred von Ardenne's research laboratory Forschungslaboratorium für Elektronenphysik.
German rocketry program
From December 1939, Helmut Gröttrup worked in the German V-2 rocket program at the Peenemünde Army Research Center with Walter Dornberger and Wernher von Braun. In December 1940, he was made department head under Ernst Steinhoff for developing remote guidance and control systems.
Since October 1943 Gröttrup had been under SD surveillan |
https://en.wikipedia.org/wiki/Microdot | A microdot is text or an image substantially reduced in size to prevent detection by unintended recipients. Microdots are normally circular and around in diameter but can be made into different shapes and sizes and made from various materials such as polyester or metal. The name comes from the fact that the microdots have often been about the size and shape of a typographical dot, such as a period or the tittle of a lowercase i or j. Microdots are, fundamentally, a steganographic approach to message protection.
History
In 1870 during the Franco-Prussian War, Paris was under siege and messages were sent by carrier pigeon. Parisian photographer René Dagron used microfilm to permit each pigeon to carry a high volume of messages, as pigeons can carry little weight.
Improvement in technology since then has made even more miniaturization possible.
At the International Congress of Photography in Paris in 1925, Emanuel Goldberg presented a method of producing extreme reduction microdots using a two-stage process. First, an initial reduced negative was made, then the image of the negative was projected from the eyepiece of a modified microscope onto a collodion emulsion where the microscope specimen slide would be. The reduction was such that a page of text would be legibly reproduced on a surface of 0.01 mm². This density is comparable to the entire text of the Bible fifty times over in one square inch. Goldberg's "Mikrat" (microdot) was prominently reported at the time in English, French and German publications.
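The density claim can be sanity-checked with a little arithmetic; the 1,200-page Bible length below is an assumed round figure, not from the source.

```python
# One page per 0.01 mm^2, converted to pages per square inch, then to
# copies of a Bible assumed (for illustration) to run about 1,200 pages.
MM2_PER_IN2 = 25.4 ** 2               # 645.16 mm^2 per square inch
pages_per_in2 = MM2_PER_IN2 / 0.01    # 64,516 pages
bible_copies = pages_per_in2 / 1200   # about 54 copies
print(round(pages_per_in2), round(bible_copies))  # 64516 54
```

Roughly fifty-odd copies per square inch, consistent with the "fifty times over" comparison in the text.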
A technique comparable to modern microdots for steganographic purposes was first used in Germany between World War I and World War II. It was also later used by many countries to pass messages through insecure postal channels. Later microdot techniques used film with aniline dye, rather than silver halide layers, as this was even harder for counter-espionage agents to find.
A popular article on espionage by J. Edgar Hoover in the Reader's Digest in 1946 attri |
https://en.wikipedia.org/wiki/West%20number | The West number is an empirical parameter used to characterize the performance of Stirling engines and other Stirling systems. It is very similar to the Beale number, in that a larger number indicates higher performance; however, the West number includes temperature compensation. The West number is often used to approximate the power output of a Stirling engine. The average value is about 0.25 for a wide variety of engines, although it may range up to 0.35, particularly for engines operating with a high temperature differential.
The West number may be defined as:

Wn = Wo (TH + TK) / (P V f (TH − TK)) = Bn (TH + TK) / (TH − TK)

where
Wn is the West number
Wo is the power output of the engine (watts)
P is the mean average gas pressure (Pa) or (MPa, if volume is in cm3)
V is swept volume of the expansion space (m3, or cm³, if pressure is in MPa)
f is the engine cycle frequency (Hz)
TH is the absolute temperature of the expansion space or heater (kelvins)
TK is the absolute temperature of the compression space or cooler (kelvins)
Bn is the Beale number for an engine operating between temperatures TH and TK
When the Beale number is known, but the West number is not known, it is possible to calculate it. First calculate the West number at the temperatures TH and TK for which the Beale number is known, and then use the resulting West number to calculate output power for other temperatures.
To estimate the power output of a new engine design, nominal values are assumed for the West number, pressure, swept volume and frequency, and the power is calculated as follows:

Wo = Wn P V f (TH − TK) / (TH + TK)
For example, with an absolute temperature ratio of 2, the portion of the equation representing temperature correction equals 1/3. With a temperature ratio of 3, the temperature term is 1/2. This factor accounts for the difference between the West equation, and the Beale equation in which this temperature term is taken as a constant. Thus, the Beale number is typically in the range of 0.10 to 0.15, which is about 1/3 to 1/2 the value of the West number. |
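The power estimate can be expressed as a one-line function. The engine parameters below are invented nominal values; with P in MPa and V in cm³ the product P·V·f conveniently comes out in watts.

```python
def west_power(Wn, P_mpa, V_cm3, f_hz, Th, Tk):
    """Wo = Wn * P * V * f * (Th - Tk) / (Th + Tk); MPa * cm^3 = J."""
    return Wn * P_mpa * V_cm3 * f_hz * (Th - Tk) / (Th + Tk)

# Nominal design: Wn = 0.25, 1 MPa mean pressure, 100 cm^3 swept volume,
# 20 Hz, 900 K heater, 300 K cooler (temperature ratio 3, so factor 1/2).
print(west_power(0.25, 1.0, 100.0, 20.0, 900.0, 300.0))  # 250.0 W
```

With the temperature ratio of 3 used here, the correction factor is (900 − 300)/(900 + 300) = 1/2, matching the worked example in the text.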
https://en.wikipedia.org/wiki/List%20of%20biochemists | This is a list of biochemists. It should include those who have been important to the development or practice of biochemistry. Their research or applications have made significant contributions in the area of basic or applied biochemistry.
A
John Jacob Abel (1857–1938). American biochemist and pharmacologist. He founded and chaired the first department of pharmacology in the United States at the University of Michigan.
Robert Abeles (1926–2000). American biological chemist at Brandeis University. Member Natl. Acad. Sci. USA.
John Abelson (b. 1938). American biologist at Caltech, with expertise in biophysics, biochemistry, and genetics, and known for work on RNA splicing.
Gary Ackers (1939–2011). American Professor of Biochemistry and Molecular Biophysics at Washington University in St. Louis, who worked on thermodynamic linkage analysis of biological macromolecules.
Gilbert Smithson Adair FRS (1896–1979). British protein chemist at the University of Cambridge, the first to identify cooperative binding, in the context of oxygen binding to haemoglobin.
Julius Adler (b. 1930). American Professor of Biochemistry and Genetics at the University of Wisconsin–Madison, known for work on chemotaxis.
David Agard. American Professor of Biochemistry and Biophysics at UC San Francisco, whose research is focussed on understanding the basic principles of macromolecular structure and function. Member Natl. Acad. Sci. USA.
Natalie Ahn. Professor of Chemistry and Biochemistry at the University of Colorado at Boulder, whose research is focussed on understanding the mechanisms of cell signalling, with a speciality in phosphorylation and cancers. Member Natl. Acad. Sci. USA.
Bruce Alberts (b. 1938). American biochemist at UC San Francisco, known for his work on the protein complexes that enable chromosome replication, for his role in science public policy, and as an original author of Molecular Biology of the Cell. Member Natl. Acad. Sci. USA.
Robert Alberty (1921–2014). American physical bioch |
https://en.wikipedia.org/wiki/Skyline%20%28construction%20set%29 | American Skyline was a construction set sold in the late 1950s and early 1960s by Elgo Plastics/Halsam Products Company from Chicago, Illinois. With an American Skyline set, its owner could piece together models of high-rise city buildings.
Set contents
The set consisted of a collection of three different types of plastic parts: column segments, vertical panels (which included windows and doors), and floor panels. Doors were simple plastic pieces and did not open; they came in single, double, and four-door designs. Windows came in single, double, triple, and seven-window designs, as well as a unique bay style with single and double large open windows (which could also be used as doorways to divide rooms if one wished).
There were also base blocks, step blocks and rails, which were used in the foundations of the structures being constructed; the step blocks were also used in other parts of the structures. The pieces all tend to be fairly durable except for the columns, which tend to have sides broken off after many years of use. The roofing and floor bases were thin plastic sheets with a checkerboard motif in white and brown on one side and blank white on the other. The column segments interlocked to form stacks, and each stack presented four tracks running its length. The vertical panel pieces had edges that could slide into the tracks; panels slid into adjacent tracks of the same column stood at right angles to each other. The floor/roof panels had corners cut in such a way that each corner could be held in place between two column segments. Also included were flag poles; the flags themselves were printed on the back page of the instruction booklet, to be cut out, folded, and simply pasted to the poles.
Set versions
Sets were sold in six different versions and sizes. They originally came in flat boxes, then in cardboard canisters with a metal top. Note: (Set No. 96 |
https://en.wikipedia.org/wiki/Almgren%E2%80%93Pitts%20min-max%20theory | In mathematics, the Almgren–Pitts min-max theory (named after Frederick J. Almgren, Jr. and his student Jon T. Pitts) is an analogue of Morse theory for hypersurfaces.
The theory started with the efforts for generalizing George David Birkhoff's method for the construction of simple closed geodesics on the sphere, to allow the construction of embedded minimal surfaces in arbitrary 3-manifolds.
It has played roles in the solutions to a number of conjectures in geometry and topology found by Almgren and Pitts themselves and also by other mathematicians, such as Mikhail Gromov, Richard Schoen, Shing-Tung Yau, Fernando Codá Marques, André Neves, Ian Agol, among others.
Description and basic concepts
The theory allows the construction of embedded minimal hypersurfaces through variational methods.
In his PhD thesis, Almgren proved that the m-th homotopy group of the space of flat k-dimensional cycles on a closed Riemannian manifold M is isomorphic to the (m+k)-th homology group of M. This result is a generalization of the Dold–Thom theorem, which can be thought of as the k=0 case of Almgren's theorem. The existence of non-trivial homotopy classes in the space of cycles suggests the possibility of constructing minimal submanifolds as saddle points of the volume function, as in Morse theory. In his subsequent work Almgren used these ideas to prove that for every k=1,...,n-1 a closed n-dimensional Riemannian manifold contains a stationary integral k-dimensional varifold, a generalization of a minimal submanifold that may have singularities. Allard showed that such generalized minimal submanifolds are regular on an open and dense subset.
In the 1980s Almgren's student Jon Pitts was able to greatly improve the regularity theory of minimal submanifolds obtained by Almgren in the case of codimension 1. He showed that when the dimension n of the manifold is between 3 and 6 the minimal hypersurface obtained using Almgren's min-max method is smooth. A key new idea in |
https://en.wikipedia.org/wiki/Rosemary%20Roberts | Rosemary A. Roberts is a statistics educator who led the creation of the AP Statistics course and exam for US secondary school students, and who later chaired the Statistical Education Section of the American Statistical Association.
Educated in England and Canada, she spent many years working in the US before her 2013 retirement.
Roberts earned a bachelor's degree in England, at the University of Reading. She completed her Ph.D. in statistics at the University of Waterloo. She joined the mathematics department at Bowdoin College in 1984, retired in 2013, and is now a professor emeritus there.
In 1987, with Tom Moore of Grinnell College, she co-founded the Statistics in the Liberal Arts Workshop (SLAW), an annual meeting of statisticians at liberal arts colleges held every summer at Grinnell.
With Ann E. Watkins, Chris Olsen, and Richard Scheaffer, she is a coauthor of The Teacher's Guide for AP Statistics (The College Board, 1997).
Roberts was elected as a Fellow of the American Statistical Association in 1997. |
https://en.wikipedia.org/wiki/Societ%C3%A0%20Entomologica%20Italiana | La Società Entomologica Italiana, the Italian Entomological Society, is Italy’s foremost society devoted to the study of insects. The society is famous for promoting applied entomology and many of its past members have saved millions from deadly diseases such as malaria.
History
The society was founded on 31 October 1869, near the "Regio Museo di Storia Naturale", the Royal Natural History Museum (effectively "Museo zoologico de La Specola") in Florence.
The Society had been promoted almost two years earlier by a group of Italian and other scientists from various institutions across Italy. On 1 January 1868, the 21 members of a committee called the "Comitato dei Promotori della Società Entomologica Italiana" signed a "manifesto" letter. Coordinated by Alexander Enrico Haliday were four academic associates: Emilio Cornalia, then director of the Museo civico di Storia naturale di Milano and author of works of applied entomology such as "La Monografia del bombice del gelso" (1856); Giovanni Passerini, university professor of botany at the Università di Parma; Paolo Savi, director of the Museo zoologico dell'Università di Pisa, author of "Ornitologia Toscana" (Tuscany Birds, 1827–1831), promoter of the first congress of Italian scientists (Primo Congresso degli Scienziati Italiani, Pisa, 1839) and author of notes on breeding Samia cynthia, an alternative producer of optimal-quality "shantung" silk; and Achille Costa, holder of the first chair of entomology and director of the Museo zoologico dell'Università di Napoli. Adolfo Targioni Tozzetti and Pietro Stefanelli are also listed as members of the Comitato. Ferdinando Maria Piccioli was an editor.
The founding of the society was a part of the Risorgimento. In 1922 it moved to Genoa, to the Museo Civico di Storia Naturale di Genova, where it has been based ever since.
La Società Entomologica Italiana collaborates with Unione Zoologica Italiana, the Italian Zoological Society in maintaining a website listing |
https://en.wikipedia.org/wiki/Herpetosiphon | Herpetosiphon is a genus of bacteria in the family Herpetosiphonaceae.
Phylogeny |
https://en.wikipedia.org/wiki/X-FEN | X-FEN (formerly FRC-FEN) is an extension of Forsyth–Edwards Notation (FEN) introduced by Reinhard Scharnagl in 2003. It was designed to be able to represent all possible positions in Fischer random chess (FRC) and Capablanca random chess (CRC). It is fully backward compatible with FEN.
X-FEN definition
X-FEN is based on traditional FEN. It differs only in the way that castling and en passant tags are used. Moreover, 10×8 positions which use princess (knight+bishop) and empress (knight+rook) compound pieces are supported.
X-FEN inside of PGN
Games are translated into Portable Game Notation (PGN) format. Each game's starting position must be stored in the PGN for FRC and CRC (but not for traditional chess). Storing the starting position is accomplished with a SetUp tag and an FEN string using the definitions for traditional chess games.
Encoding en-passant
The specification of a target square for an en passant capture differs slightly from standard FEN. FEN records the square just behind any pawn that has made a two-square push forward in the latest move. As such, whenever a pawn makes a two-square move, the en passant square is recorded. For example, in the sample game, FEN includes the square e3 as an en passant square after White makes the first move of the game 1. e4. This is somewhat misleading, as no en passant captures can be made by Black from the position.
X-FEN, on the other hand, includes only true en passant squares. That is, X-FEN records a value in the field for an en passant square only if there are one or more enemy pawns on the same rank on an adjacent file. Thus, after 1.e4, the field for the en passant square is left blank, as Black cannot make an en passant capture. However, it is possible that even if an X-FEN records an en passant square, making that capture would be illegal, because after the capture the king of the capturing player would be in check.
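The rule above can be made concrete with a small sketch. The helper below is illustrative (the function and parameter names are mine, not from the X-FEN specification): it records an en passant square only when an enemy pawn stands on an adjacent file on the same rank as the pawn that just made a double push.

```python
# X-FEN-style en passant test: only record the square if at least one
# enemy pawn sits on an adjacent file, same rank, as the pushed pawn.
FILES = "abcdefgh"

def xfen_ep_square(landing_square, enemy_pawn_squares):
    """landing_square: where the double-pushed pawn landed, e.g. 'e4'.
    Returns the square behind it (e.g. 'e3') if a pseudo-legal en
    passant capture exists, else None."""
    file_idx = FILES.index(landing_square[0])
    rank = int(landing_square[1])
    neighbours = {FILES[i] + str(rank)
                  for i in (file_idx - 1, file_idx + 1) if 0 <= i < 8}
    if neighbours & set(enemy_pawn_squares):
        # Square "behind" the pawn: rank 3 for a white push to rank 4,
        # rank 6 for a black push to rank 5.
        behind = rank - 1 if rank == 4 else rank + 1
        return landing_square[0] + str(behind)
    return None

# After 1.e4 from the initial position, no black pawn is on d4 or f4,
# so X-FEN leaves the field blank (standard FEN would record e3):
print(xfen_ep_square("e4", {"a7", "b7", "c7", "d7", "e7", "f7", "g7", "h7"}))  # None
print(xfen_ep_square("e4", {"d4"}))  # e3
```

As the text notes, this check is only pseudo-legal: even when a square is recorded, the capture may still be illegal if it would expose the capturing side's king to check.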
Encoding castling rights
"Kk" identifies the ability of g-castling (or i-castlin |
https://en.wikipedia.org/wiki/81%20%28number%29 | 81 (eighty-one) is the natural number following 80 and preceding 82.
In mathematics
81 is:
the square of 9 and the second fourth power of a prime: 81 = 3⁴.
with an aliquot sum of 40; its aliquot sequence (81, 40, 50, 43, 1, 0) passes through three composite numbers before reaching the prime 43, placing 81 in the 43-aliquot tree.
a perfect totient number like all powers of three.
a heptagonal number.
a centered octagonal number.
a tribonacci number.
an open meandric number.
the ninth member of the Mian-Chowla sequence.
a palindromic number in bases 8 (121₈) and 26 (33₂₆).
a Harshad number in bases 2, 3, 4, 7, 9, 10 and 13.
one of three non-trivial numbers (the other two are 1458 and 1729) which, when its digits (in decimal) are added together, produces a sum which, when multiplied by its reversed self, yields the original number:
8 + 1 = 9
9 × 9 = 81 (although this case is somewhat degenerate, as the sum has only a single digit).
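The three numbers with this property can be recovered by brute force. A search bound of 10,000 suffices: for any larger n, the digit sum times its reversal cannot reach n (a 4-digit number has digit sum at most 36, and 36 × 63 = 2268).

```python
# Find n such that digitsum(n) * reverse(digitsum(n)) == n,
# excluding the trivial case n = 1.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def reversed_int(n):
    return int(str(n)[::-1])

matches = [n for n in range(2, 10_000)
           if digit_sum(n) * reversed_int(digit_sum(n)) == n]
print(matches)  # [81, 1458, 1729]
```

For 81 the reversal step is degenerate (the digit sum 9 is its own reversal), which is the caveat the text makes.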
The reciprocal of 81 is 0.012345679 recurring, missing only the digit "8" from the complete set of digits. This is an example of the general rule that, in base b, the expansion of 1/(b−1)² repeats the digits 0, 1, 2, ..., b−3, b−1, omitting only the digit b−2.
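A short long-division routine illustrates the rule (the function name is mine): generate the repeating digits of 1/(b−1)² in base b and observe that the digit b−2 never appears.

```python
# Repeating digits of 1/(b-1)**2 in base b, by long division.
def repetend(base):
    d = (base - 1) ** 2
    digits, remainder, seen = [], 1, set()
    while remainder not in seen:       # stop when the remainder cycles
        seen.add(remainder)
        remainder *= base
        digits.append(remainder // d)
        remainder %= d
    return digits

print(repetend(10))  # [0, 1, 2, 3, 4, 5, 6, 7, 9]  -> 1/81 = 0.012345679...
print(repetend(8))   # [0, 1, 2, 3, 4, 5, 7]        -> 1/49 in octal, no digit 6
```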
In astronomy
Messier object M81, a magnitude 8.5 spiral galaxy in the constellation Ursa Major, also known as Bode's Galaxy, and the first of what is known as the M81 Group of galaxies
The New General Catalogue object NGC 81, a spiral galaxy in the constellation Andromeda
In other fields
Eighty-one is also:
The number of squares on a shogi playing board
The year AD 81, 81 BC, or 1981.
The atomic number of thallium
The symbolic number of the Hells Angels Motorcycle Club. 'H' and 'A' are the 8th and 1st letter of the alphabet, respectively.
The title of a short film by Stephen Burke: 81
The model number of Sinclair ZX81
The number of the department in France called Tarn
The code for international direct dial phone calls to Japan
"+81" is a song by Japanese metalcore band Crystal Lake.
One of two ISBN Group Identifiers for books published in India
The number of stanzas or chapters in the T |
https://en.wikipedia.org/wiki/Counterexample | A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that "student John Smith is not lazy" is a counterexample to the generalization "students are lazy", and both a counterexample to, and disproof of, the universal quantification "all students are lazy."
In mathematics, the term "counterexample" is also used (by a slight abuse) to refer to examples which illustrate the necessity of the full hypothesis of a theorem. This is most often done by considering a case where a part of the hypothesis is not satisfied and the conclusion of the theorem does not hold.
In mathematics
In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples.
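Counterexample hunting is often mechanical. As a sketch (using Euler's classic example, which is not from this article): the polynomial n² + n + 41 yields primes for n = 0 through 39, so one might conjecture it always does; a brute-force search finds the first failure.

```python
# Search for a counterexample to "n**2 + n + 41 is prime for all n >= 0".
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

counterexample = next(n for n in range(100) if not is_prime(n * n + n + 41))
print(counterexample, counterexample**2 + counterexample + 41)  # 40 1681
```

The counterexample n = 40 gives 1681 = 41², disproving the conjecture while showing it holds on a long initial segment, just as the rectangle example below disproves one conjecture and motivates a weaker one.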
Rectangle example
Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. She conjectures that "All rectangles are squares", and she is interested in knowing whether this statement is true or false.
In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every squa |
https://en.wikipedia.org/wiki/Mega-telomere | A mega-telomere (also known as an ultra-long telomere or a class III telomere), is an extremely long telomere sequence that sits on the end of chromosomes and prevents the loss of genetic information during cell replication. Like regular telomeres, mega-telomeres are made of a repetitive sequence of DNA and associated proteins, and are located on the ends of chromosomes. However, mega-telomeres are substantially longer than regular telomeres, ranging in size from 50 kilobases to several megabases (for comparison, the normal length of vertebrate telomeres is usually between 10 and 20 kilobases).
Telomeres act like protective caps for the chromosome. During cell division, a cell will make copies of its DNA. The enzymes in the cell that are responsible for copying the DNA cannot copy the very ends of the chromosomes. This is sometimes called the "end replication problem". If a cell did not contain telomeres, genetic information from the DNA on the ends of chromosomes would be lost with each division. However, because chromosomes have telomeres or mega-telomeres on their ends, repetitive non-essential sequences of DNA are lost instead (See: Telomere shortening). While the chromosomes in most eukaryotic organisms are capped with telomeres, mega-telomeres are only found in a few species, such as mice and some birds. The specific function of mega-telomeres in vertebrate cells is still unclear.
Discovery
Telomeric regions of DNA were first identified in the late 1970s (See: Discovery of Telomeric DNA). However, extremely long regions of telomere sequence were not recognized in vertebrates until over a decade later. These sequences, which ranged from 30 to 150 kilobases in size, were first identified in laboratory mice by David Kipling and Howard Cooke in 1990.
In 1994, extremely long telomeric regions were identified in chickens. Telomeric sequences ranging from 20 kilobases to several megabases have also been identified in several species of birds. These large regions |
https://en.wikipedia.org/wiki/Bayliss%20and%20Starling%20Society | The Bayliss and Starling Society was founded in 1979 as a forum for research scientists with specific interests in the chemistry, physiology and function of central and autonomic peptides.
The society was named in honour of William Bayliss and Ernest Starling, who discovered the gastrointestinal peptide secretin in 1902 and coined the term hormone in 1905.
The society's main objective was to "advance education and science by the promotion, for the benefit of the public, the study of the chemistry, physiology and disorders of central and peripheral regulating peptides and by the dissemination of the results of such study and research." In doing so, the Society promoted research into peptides and facilitated scientists with research interests in peptides by aiding in the organisation of symposia and relevant conferences.
Additionally the Society offered the John Calam Travelling Fellowship Award for members who wanted to attend national and international academic conferences or visit laboratories to gain experience in new techniques to facilitate their research.
The Bayliss and Starling Society merged with The Physiological Society in 2014. |
https://en.wikipedia.org/wiki/Transduction%20%28physiology%29 | In physiology, transduction is the translation of arriving stimulus into an action potential by a sensory receptor. It begins when stimulus changes the membrane potential of a receptor cell.
A receptor cell converts the energy in a stimulus into an electrical signal. Receptors are broadly split into two main categories: exteroceptors, which receive external sensory stimuli, and interoceptors, which receive internal sensory stimuli.
Transduction and the senses
The visual system
In the visual system, sensory cells called rod and cone cells in the retina convert the physical energy of light signals into electrical impulses that travel to the brain. The light causes a conformational change in a protein called rhodopsin. This conformational change sets in motion a series of molecular events that result in a reduction of the electrochemical gradient of the photoreceptor. The decrease in the electrochemical gradient causes a reduction in the electrical signals going to the brain. Thus, in this example, more light hitting the photoreceptor results in the transduction of a signal into fewer electrical impulses, effectively communicating that stimulus to the brain. In rods, the change in neurotransmitter release is mediated through a second messenger system; because of this indirect coupling, the response of the rods to a change in light intensity is much slower than expected for a process associated with the nervous system.
The auditory system
In the auditory system, sound vibrations (mechanical energy) are transduced into electrical energy by hair cells in the inner ear. Sound vibrations from an object cause vibrations in air molecules, which in turn, vibrate the ear drum. The movement of the eardrum causes the bones of the middle ear (the ossicles) to vibrate. These vibrations then pass into the cochlea, the organ of hearing. Within the cochlea, the hair cells on the sensory epithelium of the organ of Corti bend and cause movement |
https://en.wikipedia.org/wiki/Flag%20of%20Martinique | The flag of Martinique consists of a red triangle at the hoist, with two horizontal bands, the upper green and the lower black. It was adopted on 2 February 2023. The flag of France, its parent country, is also flown with official standing due to Martinique's status as a French overseas department/region. The assembly of Martinique flies a flag with the collectivity's logo on it to represent the government.
In 2018, the assembly held a competition to create a flag and anthem for the island, but 2½ years following the presentation of the winners, the flag and anthem were annulled by the local administrative tribunal, as the method of their selection was not deemed within the responsibilities of the council. Then in 2022, the island began a new public vote on an official flag and anthem. The winners were announced on 16 January 2023, but the selected flag design would be withdrawn from consideration at the request of the designer following accusations of plagiarism. Instead, it was decided that the runner-up design, an established flag used by nationalists, would be considered for adoption on 2 February 2023. It was adopted by the assembly with 44 votes in favour and one abstention.
2022 flag consultation
Prior to 2023, Martinique did not have its own flag. In 2018, the local council held a competition to create a flag and anthem for the island, but 2½ years following the presentation of the winners, the flag and anthem were annulled by the local administrative tribunal, as the method of their selection was not deemed within the responsibilities of the council. In 2022, the island began a public vote on an official flag and anthem. However, turnout for the first phase, which narrowed the choices down to two options, was very low, with only 19,084 voting for a flag and 9,294 for the anthem out of an eligible population of around 300,000. The winners, announced 16 January 2023, were the hummingbird design for the flag and "Ansanm" for the anthem, representing 72.84 |
https://en.wikipedia.org/wiki/Ljubljana%20graph | In the mathematical field of graph theory, the Ljubljana graph is an undirected bipartite graph with 112 vertices and 168 edges, rediscovered in 2002 and named after Ljubljana (the capital of Slovenia).
It is a cubic graph with diameter 8, radius 7, chromatic number 2 and chromatic index 3. Its girth is 10 and there are exactly 168 cycles of length 10 in it. There are also 168 cycles of length 12.
Construction
The Ljubljana graph is Hamiltonian and can be constructed from the LCF notation : [47, -23, -31, 39, 25, -21, -31, -41, 25, 15, 29, -41, -19, 15, -49, 33, 39, -35, -21, 17, -33, 49, 41, 31, -15, -29, 41, 31, -15, -25, 21, 31, -51, -25, 23, 9, -17, 51, 35, -29, 21, -51, -39, 33, -9, -51, 51, -47, -33, 19, 51, -21, 29, 21, -31, -39]2.
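The LCF construction can be checked directly in code. In LCF notation [a₁, ..., a₅₆]², the 112 vertices form a Hamiltonian cycle and vertex i additionally gets a chord to (i + aᵢ mod 56) mod 112; the sketch below verifies the resulting graph is cubic with 168 edges.

```python
# Build the Ljubljana graph from its LCF notation and sanity-check it.
LCF = [47, -23, -31, 39, 25, -21, -31, -41, 25, 15, 29, -41, -19, 15,
       -49, 33, 39, -35, -21, 17, -33, 49, 41, 31, -15, -29, 41, 31,
       -15, -25, 21, 31, -51, -25, 23, 9, -17, 51, 35, -29, 21, -51,
       -39, 33, -9, -51, 51, -47, -33, 19, 51, -21, 29, 21, -31, -39]

n = 2 * len(LCF)  # exponent 2 in the notation -> 112 vertices
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}               # Hamiltonian cycle
edges |= {frozenset((i, (i + LCF[i % 56]) % n)) for i in range(n)}    # chords

degree = {v: sum(v in e for e in edges) for v in range(n)}
print(len(edges), set(degree.values()))  # 168 {3}
```

Each chord is generated twice, once from either endpoint, so the 112 chord entries collapse to 56 distinct edges; together with the 112 cycle edges this gives the 168 edges stated above.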
The Ljubljana graph is the Levi graph of the Ljubljana configuration, a quadrangle-free configuration with 56 lines and 56 points. In this configuration, each line contains exactly 3 points, each point belongs to exactly 3 lines and any two lines intersect in at most one point.
Algebraic properties
The automorphism group of the Ljubljana graph is a group of order 168. It acts transitively on the edges of the graph but not on its vertices: there are symmetries taking every edge to any other edge, but none taking every vertex to any other vertex. Therefore, the Ljubljana graph is a semi-symmetric graph, the third smallest possible cubic semi-symmetric graph after the Gray graph on 54 vertices and the Iofinova–Ivanov graph on 110 vertices.
The characteristic polynomial of the Ljubljana graph is
History
The Ljubljana graph was first published in 1993 by Brouwer, Dejter and Thomassen as a self-complementary subgraph of the Dejter graph.
As early as 1972, Bouwer mentioned a 112-vertex edge- but not vertex-transitive cubic graph found by R. M. Foster, which nonetheless remained unpublished. Conder, Malnič, Marušič, Pisanski and Potočnik rediscovered this 112-vertex graph in 2002 and named it the Ljubljana graph after the capital of Slovenia. |
https://en.wikipedia.org/wiki/Alessandro%20Faedo | Alessandro Faedo (18 November 1913 – 15 June 2001) (also known as Alessandro Carlo Faedo or Sandro Faedo) was an Italian mathematician and politician, born in Chiampo. He is known for his work in numerical analysis, leading to the Faedo–Galerkin method: he was one of the pupils of Leonida Tonelli and, after his death, he succeeded him on the chair of mathematical analysis at the University of Pisa, becoming dean of the faculty of sciences and then rector and exerting a strong positive influence on the development of the university.
Selected publications
Scientific works
Historical, commemorative and survey works
"Leonida Tonelli and the Pisa mathematical school" (English translation of the title) is a survey of the work of Tonelli in Pisa and his influence on the development of the school, presented at the international congress held in Rome on 6–9 May 1985 to celebrate the centenary of the births of Mauro Picone and Leonida Tonelli.
A brief commemorative historical paper describes the events which led Ennio De Giorgi to hold a chair at the Scuola Normale Superiore.
See also
Calculus of variations
Fichera's existence principle
Variational method
Ritz method
Notes |
https://en.wikipedia.org/wiki/The%20Bridges%20Organization | The Bridges Organization is an organization that was founded in Kansas, United States, in 1998 with the goal of promoting interdisciplinary work in mathematics and art. The Bridges Conference is an annual conference on connections between art and mathematics. The conference features papers, educational workshops, an art exhibition, a mathematical poetry reading, and a short movie festival.
List of Bridges conferences |
https://en.wikipedia.org/wiki/Nuclear%20weapons%20in%20popular%20culture | Since their public debut in August 1945, nuclear weapons and their potential effects have been a recurring motif in popular culture, to the extent that the decades of the Cold War are often referred to as the "atomic age".
Images of nuclear weapons
The atomic bombings of Hiroshima and Nagasaki ushered in the "atomic age", and the bleak pictures of the bombed-out cities released shortly after the end of World War II became symbols of the power and destruction of the new weapons (notably, the first pictures released were taken only from a distance and did not contain any human bodies; such pictures would only be released in later years).
The first pictures released of a nuclear explosion—the blast from the Trinity test—focused on the fireball itself; later pictures would focus primarily on the mushroom cloud that followed. Since the United States began a regular program of nuclear testing in the late 1940s, continuing through the 1950s (and matched by the Soviet Union), the mushroom cloud has served as a symbol of the weapons themselves.
Pictures of nuclear weapons themselves (the actual casings) were not made public until 1960, and even those were only mock-ups of the "Fat Man" and "Little Boy" weapons dropped on Japan—not the more powerful weapons developed more recently. Diagrams of the general principles of operation of thermonuclear weapons have been available in very general terms since at least 1969 in at least two encyclopedia articles, and open literature research into inertial confinement fusion has been at least richly suggestive of how the "secondary" and "interstage" components of thermonuclear weapons work.
In general, however, the design of nuclear weapons has been the most closely guarded secret until long after the secrets had been independently developed—or stolen—by all the major powers and a number of lesser ones. It is generally possible to trace US knowledge of foreign progress in nuclear weapons technology by reading the US Department of |
https://en.wikipedia.org/wiki/Stalking | Stalking is unwanted and/or repeated surveillance by an individual or group toward another person. Stalking behaviors are interrelated to harassment and intimidation and may include following the victim in person or monitoring them. The term stalking is used with some differing definitions in psychiatry and psychology, as well as in some legal jurisdictions as a term for a criminal offense.
According to a 2002 report by the U.S. National Center for Victims of Crime, "virtually any unwanted contact between two people that directly or indirectly communicates a threat or places the victim in fear can be considered stalking", although the rights afforded to victims may vary depending on jurisdiction.
Definitions
A 1995 research paper titled "Stalking Strangers and Lovers" was among the first to use the term "stalking" to describe the common occurrence of men who aggressively pursue their former female partner after a breakup. Prior to that paper, instead of the term "stalking", people more commonly used the terms "female harassment", "obsessive following" or "psychological rape".
The difficulties associated with defining this term exactly (or defining it at all) are well documented.
Having been used since at least the 16th century to refer to a prowler or a poacher (Oxford English Dictionary), the term stalker was initially used by media in the 20th century to describe people who pester and harass others, initially with specific reference to the harassment of celebrities by strangers who were described as being "obsessed". This use of the word appears to have been coined by the tabloid press in the United States. With time, the meaning of stalking changed and incorporated individuals being harassed by their former partners. Pathé and Mullen describe stalking as "a constellation of behaviours in which an individual inflicts upon another repeated unwanted intrusions and communications". Stalking can be defined as the willful and repeated following, watching or |
https://en.wikipedia.org/wiki/Mathematics%20education%20in%20the%20United%20States | Mathematics education in the United States varies considerably from one state to the next, and even within a single state. However, with the adoption of the Common Core Standards in most states and the District of Columbia beginning in 2010, mathematics content across the country has moved into closer agreement for each grade level. The SAT, a standardized university entrance exam, has been reformed to better reflect the contents of the Common Core. However, many students take alternatives to the traditional pathways, including accelerated tracks. As of 2023, twenty-seven states require students to pass three math courses before graduation from high school, and seventeen states and the District of Columbia require four.
Compared to other developed countries in the Organisation for Economic Co-operation and Development (OECD), the average level of mathematical literacy of American students is mediocre. As in many other countries, math scores dropped even further during the COVID-19 pandemic. Secondary-school algebra proves to be the turning point of difficulty many students struggle to surmount, and as such, many students are ill-prepared for collegiate STEM programs, or future high-skilled careers. Meanwhile, the number of eighth-graders enrolled in Algebra I has fallen between the early 2010s and early 2020s. Across the United States, there is a shortage of qualified mathematics instructors. Despite their best intentions, parents may transmit their mathematical anxiety to their children, who may also have school teachers who fear mathematics. About one in five American adults are functionally innumerate. While an overwhelming majority agree that mathematics is important, many, especially the young, are not confident of their own mathematical ability.
Curricular content and standards
Each U.S. state sets its own curricular standards, and details are usually set by each local school district. Although there are no federal standards, since 2015 most states have bas |
https://en.wikipedia.org/wiki/Systemic%20disease | A systemic disease is one that affects a number of organs and tissues, or affects the body as a whole.
Examples
Mastocytosis, including mast cell activation syndrome and eosinophilic esophagitis
Chronic fatigue syndrome
Systemic vasculitis e.g. SLE, PAN
Sarcoidosis – a disease that mainly affects the lungs, brain, joints and eyes, found most often in young African-American women.
Hypothyroidism – a condition in which the thyroid gland produces too little thyroid hormone.
Diabetes mellitus – an imbalance in blood glucose (sugar) levels.
Fibromyalgia
Ehlers-Danlos syndromes – an inherited connective tissue disorder with multiple subcategories
Adrenal insufficiency – where the adrenal glands don't produce enough steroid hormones
Coeliac disease – an autoimmune disease triggered by gluten consumption, which may involve several organs and cause a variety of symptoms, or be completely asymptomatic.
Ulcerative colitis – an inflammatory bowel disease
Crohn's disease – an inflammatory bowel disease
Hypertension (high blood pressure)
Metabolic syndrome
AIDS – a disease caused by a virus that cripples the body's immune defenses.
Graves' disease – a thyroid disorder, most often in women, which can cause a goiter (swelling in the front part of the neck) and protruding eyes.
Systemic lupus erythematosus – a connective tissue disorder involving mainly the skin, joints and kidneys.
Rheumatoid arthritis – an inflammatory disease which mainly attacks the joints, but can also affect a person's skin, eyes, lungs and mouth.
Atherosclerosis – a hardening of the arteries
Sickle cell disease – an inherited blood disorder that can block circulation throughout the body, primarily affecting people of sub-Saharan origin.
Myasthenia gravis
Systemic sclerosis
Sinusitis
Sjogren's syndrome – an autoimmune disease that primarily attacks the lacrimal and salivary glands, but also impacts other organs such as the lungs, kidneys, liver, and nervous system.
Detection
Getting a regular eye e |
https://en.wikipedia.org/wiki/Stirling%20permutation | In combinatorial mathematics, a Stirling permutation of order k is a permutation of the multiset 1, 1, 2, 2, ..., k, k (with two copies of each value from 1 to k) with the additional property that, for each value i appearing in the permutation, the values between the two copies of i are larger than i. For instance, the 15 Stirling permutations of order three are
1,1,2,2,3,3; 1,2,2,1,3,3; 2,2,1,1,3,3;
1,1,2,3,3,2; 1,2,2,3,3,1; 2,2,1,3,3,1;
1,1,3,3,2,2; 1,2,3,3,2,1; 2,2,3,3,1,1;
1,3,3,1,2,2; 1,3,3,2,2,1; 2,3,3,2,1,1;
3,3,1,1,2,2; 3,3,1,2,2,1; 3,3,2,2,1,1.
The number of Stirling permutations of order k is given by the double factorial (2k − 1)!!. Stirling permutations were introduced by Gessel and Stanley in order to show that certain numbers (the numbers of Stirling permutations with a fixed number of descents) are non-negative. They chose the name because of a connection to certain polynomials defined from the Stirling numbers, which are in turn named after 18th-century Scottish mathematician James Stirling.
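The count (2k − 1)!! can be checked directly: every Stirling permutation of order k arises uniquely by inserting the adjacent pair k, k into one of the 2k − 1 gaps of an order-(k − 1) permutation, so the counts multiply as 1 · 3 · 5 · ⋯ · (2k − 1). A minimal Python sketch of this recursion (function names here are illustrative, not from any standard library):

```python
from math import prod

def stirling_permutations(k):
    """Generate all Stirling permutations of order k by inserting the
    adjacent pair (k, k) into each of the 2k - 1 gaps of every
    Stirling permutation of order k - 1."""
    if k == 0:
        return [[]]
    result = []
    for p in stirling_permutations(k - 1):
        for i in range(len(p) + 1):
            result.append(p[:i] + [k, k] + p[i:])
    return result

def is_stirling(p):
    """Check the defining property: every value strictly between the two
    copies of i in the permutation is larger than i."""
    for i in set(p):
        first = p.index(i)
        last = len(p) - 1 - p[::-1].index(i)
        if any(x < i for x in p[first + 1:last]):
            return False
    return True

perms = stirling_permutations(3)
print(len(perms))                                   # 15
print(len(perms) == prod(range(2 * 3 - 1, 0, -2)))  # (2k-1)!! = 5!! -> True
print(all(is_stirling(p) for p in perms))           # True
```

For k = 3 this reproduces exactly the 15 permutations listed above; each insertion of the adjacent pair k, k preserves the defining property, which is why the validity check passes by construction.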
Stirling permutations may be used to describe the sequences by which it is possible to construct a rooted plane tree with k edges by adding leaves one by one to the tree. For, if the edges are numbered by the order in which they were inserted, then the sequence of numbers in an Euler tour of the tree (formed by doubling the edges of the tree and traversing the children of each node in left to right order) is a Stirling permutation. Conversely every Stirling permutation describes a tree construction sequence, in which the next edge closer to the root from an edge labeled i is the one whose pair of values most closely surrounds the pair of i values in the permutation.
Stirling permutations have been generalized to the permutations of a multiset with more than two copies of each value. Researchers have also studied the number of Stirling permutations that avoid certain patterns.
See also
Langford pairing, a different type of permutation of the same multiset |
https://en.wikipedia.org/wiki/Cockade | A cockade is a knot of ribbons, or other circular- or oval-shaped symbol of distinctive colours which is usually worn on a hat or cap.
The word cockade derives from the French cocarde, from Old French coquarde, feminine of coquard (vain, arrogant), from coc (cock), of imitative origin. The earliest documented use was in 1709.
Eighteenth century
In the 18th and 19th centuries, coloured cockades were used in Europe to show the allegiance of their wearers to some political faction, or to show their rank or to indicate a servant's livery. Because individual armies might wear a variety of differing regimental uniforms, cockades were used as an effective and economical means of national identification.
A cockade was pinned on the side of a man's tricorne or cocked hat, or on his lapel. Women could also wear it on their hat or in their hair.
In pre-revolutionary France, the cockade of the Bourbon dynasty was all white. In the Kingdom of Great Britain, supporters of a Jacobite restoration wore white cockades, while the recently established Hanoverian monarchy used a black cockade. The Hanoverians also accorded all German nobility the right to wear the black cockade in the United Kingdom.
During the 1780 Gordon Riots in London, the blue cockade became a symbol of anti-government feelings and was worn by most of the rioters.
During the American Revolution, the Continental Army initially wore cockades of various colors as an ad hoc form of rank insignia, as described by General George Washington.
Before long, however, the Continental Army reverted to wearing the black cockade it had inherited from the British. Later, when France became an ally of the United States, the Continental Army pinned the white cockade of the French Ancien Régime onto their old black cockade; the French reciprocally pinned the black cockade onto their white cockade, as a mark of the French-American alliance. The black-and-white cockade thus became known as the "Union Cockade".
In the Storming of the B |
https://en.wikipedia.org/wiki/Shooting%20ratio | The shooting ratio or "Bertolo code" in filmmaking and television production is the ratio between the total duration of its footage created for possible use in a project and that which appears in its final cut.
A film with a shooting ratio of 2:1 would have shot twice as much footage as was used in the film. In real terms, this means that 120 minutes of footage would have been shot to produce a film 60 minutes in length.
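The arithmetic is a simple proportion, sketched below as two hypothetical helper functions (the names `total_footage` and `shooting_ratio` are illustrative, not from any production tool):

```python
def total_footage(final_cut_minutes, ratio):
    """Minutes of footage shot for a final cut at an N:1 shooting ratio,
    where `ratio` is the N on the left-hand side."""
    return final_cut_minutes * ratio

def shooting_ratio(footage_minutes, final_cut_minutes):
    """Shooting ratio (the N in N:1) from shot vs. final-cut durations."""
    return footage_minutes / final_cut_minutes

print(total_footage(60, 2))       # 120 -- the 2:1 example above
print(shooting_ratio(120, 60))    # 2.0 -- recovering the ratio
```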
While shooting ratios can vary greatly between productions, a typical shooting ratio for a production using film stock will be between 6:1 and 10:1, whereas a similar production using video is likely to be much higher. This is a direct result of the significant difference in price between video tape stock and film stock and the necessary processing. Although the decisions, styles and preferences of the filmmakers can affect the shooting ratio of a project greatly, the nature of the production (genre, form, single camera, multi-camera, etc.) strongly shapes the typical range of the ratios seen – documentary films typically have the highest (often exceeding 100:1 following the rise of video and digital media) and animated films have the lowest (typically as close to 1:1 as possible, since the creation of footage frame by frame makes the time costs of animation extremely high compared to live action). Animated productions will often shoot acting reference (by animators, of themselves and/or others), location reference, and performance reference (taken of voice actors), but these pieces of reference footage are not regarded as counting towards the shooting ratio, as they were never intended to appear in the projects they were created for. Audition footage, screen tests, and location reference are similarly not counted towards a narrative film's shooting ratio, live action or animated, for the same reason. Since a documentary may potentially use any footage that is shot at any point for any reason, documentary productions do not have similar e