https://en.wikipedia.org/wiki/Forest%E2%80%93savanna%20mosaic
Forest–savanna mosaic is a transitory ecotone between the tropical moist broadleaf forests of Equatorial Africa and the drier savannas and open woodlands to the north and south of the forest belt. The forest–savanna mosaic consists of drier forests, often gallery forest, interspersed with savannas and open grasslands. Flora This band of marginal savannas bordering the dense dry forest extends from the Atlantic coast of Guinea to South Sudan, corresponding to a climatic zone with relatively high rainfall, between 800 and 1400 mm. It is an often inextricable complex of secondary forests and mixed savannas, resulting from intense degradation of primary forests by fire and clearing. The vegetation ceases to have an evergreen character, and becomes more and more seasonal. The geographical distribution of Faidherbia albida, a species of acacia, marks out the Guinean savanna zone, together with the zone of tree and shrub forest–savanna and a good part of the dense dry forest of predominantly deciduous trees. Ecoregions The World Wildlife Fund recognizes several distinct forest-savanna mosaic ecoregions: The Guinean forest–savanna mosaic is the transition between the Upper and Lower Guinean forests of West Africa and the West Sudanian savanna. The ecoregion extends from Senegal on the west to the Cameroon Highlands on the east. The Dahomey Gap is a region of Togo and Benin where the forest-savanna mosaic extends to the coast, separating the Upper and Lower Guinean forests. The Northern Congolian forest–savanna mosaic lies between the Congolian forests of Central Africa and the East Sudanian savanna. It extends from the Cameroon Highlands in the west to the East African Rift in the east, encompassing portions of Cameroon, Central African Republic, Democratic Republic of the Congo, and southwestern Sudan. The Western Congolian forest–savanna mosaic lies southwest of the Congolian forest belt, covering portions of southern Gabon, southern Republic of the Co
https://en.wikipedia.org/wiki/Convex%20conjugate
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). It allows in particular for a far reaching generalization of Lagrangian duality. Definition Let X be a real topological vector space and let X* be the dual space to X. Denote by ⟨·,·⟩ : X* × X → R the canonical dual pairing, which is defined by (x*, x) ↦ x*(x). For a function f : X → R ∪ {−∞, +∞} taking values on the extended real number line, its convex conjugate is the function f* : X* → R ∪ {−∞, +∞} whose value at x* ∈ X* is defined to be the supremum: f*(x*) := sup { ⟨x*, x⟩ − f(x) : x ∈ X }, or, equivalently, in terms of the infimum: f*(x*) := −inf { f(x) − ⟨x*, x⟩ : x ∈ X }. This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. Examples For more examples, see the table of selected convex conjugates. The convex conjugate of an affine function f(x) = ⟨a, x⟩ − b is f*(x*) = b if x* = a, and +∞ otherwise. The convex conjugate of a power function f(x) = (1/p)|x|^p, 1 < p < ∞, is f*(x*) = (1/q)|x*|^q, where 1/p + 1/q = 1. The convex conjugate of the absolute value function f(x) = |x| is f*(x*) = 0 if |x*| ≤ 1, and +∞ otherwise. The convex conjugate of the exponential function f(x) = e^x is f*(x*) = x* ln x* − x* for x* ≥ 0 (with 0 ln 0 taken to be 0), and +∞ for x* < 0. The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger as the Legendre transform is only defined for positive real numbers. Connection with expected shortfall (average value at risk) See this article for example. Let F denote a cumulative distribution function of a random variable X. Then (integrating by parts), has the convex conjugate Ordering A particular interpretation has the transform as this is a nondecreasing rearrangement of the initial function f; in particular, for f nondecreasing. Properties The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function. Order reversing Declare that f ≤ g if and only if f(x) ≤ g(x) for all x. The
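As a brief worked illustration of the definition above (an example added here, not part of the source article), the conjugate of the quadratic f(x) = x²/2 on the real line can be computed directly:

```latex
% Worked example (illustrative): the convex conjugate of f(x) = x^2/2 on \mathbb{R}.
\[
  f^{*}(x^{*}) \;=\; \sup_{x \in \mathbb{R}} \bigl( x^{*}x - \tfrac{1}{2}x^{2} \bigr).
\]
% The expression inside the supremum is concave in x and is maximized where its
% derivative vanishes, i.e. at x = x^{*}, which gives
\[
  f^{*}(x^{*}) \;=\; (x^{*})^{2} - \tfrac{1}{2}(x^{*})^{2} \;=\; \tfrac{1}{2}(x^{*})^{2},
\]
% so f(x) = x^2/2 is its own conjugate, in agreement with the classical Legendre transform.
```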
https://en.wikipedia.org/wiki/System%20G%20%28supercomputer%29
System G is a cluster supercomputer at Virginia Tech consisting of 324 Apple Mac Pro computers with a total of 2592 processing cores. It was finished in November 2008 and ranked 279 in that month's edition of TOP500, running at 16.78 teraflops and peaking at 22.94 teraflops. It now runs at a "sustained (Linpack) performance of 22.8 TFlops". It transmits data between nodes over Gigabit Ethernet and 40Gbit/s Infiniband. Mac Pro Nodes Each of the 324 Mac Pro machines contains two quad-core 2.8 GHz Xeon processors and 8 gigabytes of RAM. Namesake System G's name stems from its homage to System X and to its focus on green computing—the cluster has thousands of power and thermal sensors to test high performance computing at low power requirements and is the largest power-aware research system in the world.
https://en.wikipedia.org/wiki/The%20Tower%20of%20Druaga
The Tower of Druaga is a 1984 arcade action role-playing maze game developed and published in Japan by Namco. Controlling the golden-armored knight Gilgamesh, the player is tasked with scaling 60 floors of the titular tower in an effort to rescue the maiden Ki from Druaga, a demon with eight arms and four legs, who plans to use an artifact known as the Blue Crystal Rod to enslave all of mankind. It ran on the Namco Super Pac-Man arcade hardware, modified with a horizontal-scrolling video system used in Mappy. Druaga was designed by Masanobu Endo, best known for creating Xevious (1983). It was conceived as a "fantasy Pac-Man" with combat and puzzle solving, taking inspiration from games such as Wizardry and Dungeons & Dragons, along with Mesopotamian, Sumerian and Babylonian mythology. It began as a prototype game called Quest with interlocking mazes, revised to run on an arcade system; the original concept was scrapped because Endo disliked the heavy use of role-playing elements, and the project instead became a more action-oriented game. In Japan, The Tower of Druaga was widely successful, attracting millions of fans for its use of secrets and hidden items. It is cited as an important game of its genre for laying down the foundation for future games, as well as inspiring the idea of sharing tips with friends and guidebooks. Druaga is noted as being influential for many games to follow, including Ys, Hydlide, Dragon Slayer and The Legend of Zelda. The success of the game in Japan inspired several ports for multiple platforms, as well as spawning a massive franchise known as the Babylonian Castle Saga, including multiple sequels, spin-offs, literature and an anime series produced by Gonzo. However, the 2009 Wii Virtual Console release in North America was met with a largely negative reception for its obtuse design, which many said made it near-impossible to finish without a guidebook, alongside its high difficulty and controls. Gameplay The Tower of Druaga is an action role-playing maze video game. C
https://en.wikipedia.org/wiki/John%20Fairclough
Sir John Whitaker Fairclough (23 August 1930 – 5 June 2003) was a British computer designer, and later government policy advisor. Education John Fairclough was educated at Thirsk Grammar School and then studied electrical engineering at Manchester University, before undertaking national service with the RAF. Career In 1954, he joined the Ferranti computer department and in 1957 he moved to IBM, including working in Raleigh, North Carolina, USA. He returned to the UK to become Managing Director of IBM Hursley near Winchester in 1974. During 1986–90, Fairclough was Chief Scientific Adviser to the UK Conservative government led by Margaret Thatcher. He left the Cabinet Office and was knighted in 1990. That year, he joined the Board of NM Rothschild and Sons, becoming Chairman of its venture capital section. He was also involved with a number of start-up companies. He was President of the British Computer Society (1997–98). Personal life He married his first wife, Margaret Harvey, in 1954. After her death in 1996, he married his second wife, Karen, in 2000. He had two sons and a daughter from his first marriage.
https://en.wikipedia.org/wiki/Kummer%27s%20function
In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer. Kummer's function is defined by The duplication formula is . Compare this to the duplication formula for the polylogarithm: An explicit link to the polylogarithm is given by
https://en.wikipedia.org/wiki/Point%20Cloud%20Library
The Point Cloud Library (PCL) is an open-source library of algorithms for point cloud processing tasks and 3D geometry processing, such as occur in three-dimensional computer vision. The library contains algorithms for filtering, feature estimation, surface reconstruction, 3D registration, model fitting, object recognition, and segmentation. Each module is implemented as a smaller library that can be compiled separately (for example, libpcl_filters, libpcl_features, libpcl_surface, ...). PCL has its own data format for storing point clouds - PCD (Point Cloud Data), but also allows datasets to be loaded and saved in many other formats. It is written in C++ and released under the BSD license. These algorithms have been used, for example, for perception in robotics to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them. PCL requires several third-party libraries to function, which must be installed. Most mathematical operations are implemented using the Eigen library. The visualization module for 3D point clouds is based on VTK. Boost is used for shared pointers and the FLANN library for quick k-nearest neighbor search. Additional libraries such as Qhull, OpenNI, or Qt are optional and extend PCL with additional features. PCL is cross-platform software that runs on the most commonly used operating systems: Linux, Windows, macOS and Android. The library is fully integrated with the Robot Operating System (ROS) and provides support for OpenMP and Intel Threading Building Blocks (TBB) libraries for multi-core parallelism. The library is constantly updated and expanded, and its use in various industries is constantly growing. For example, PCL participated in the Google Summer of Code 2020 initiative with three projects. One was the extension of PCL for u
https://en.wikipedia.org/wiki/Third%20Party%20Control%20Protocol
Third Party Control Protocol (TPCP) is a client-server protocol with three types of primitives: Request (used by the client), Notify (used by the server to send state information to the client), and Response (sent as a response to a request). TPCP is the protocol used for third-party call control (3pcc). TPCP is the protocol used by the controller while communicating with the control server over the control let. TPCP is used to initiate, control and observe sessions between remote parties.
https://en.wikipedia.org/wiki/Apocholate%20citrate%20agar
Apocholate citrate agar (ACA) is a selective environment used to isolate Shigella and Salmonella bacteria. The name derives from apocholate and citrate in agar.
https://en.wikipedia.org/wiki/Treynor%20ratio
The Treynor reward to volatility model (sometimes called the reward-to-volatility ratio or Treynor measure), named after Jack L. Treynor, is a measurement of the returns earned in excess of that which could have been earned on an investment that has no diversifiable risk (e.g., Treasury bills or a completely diversified portfolio), per unit of market risk assumed. The Treynor ratio relates excess return over the risk-free rate to the additional risk taken; however, systematic risk is used instead of total risk. The higher the Treynor ratio, the better the performance of the portfolio under analysis. Formula T = (r_i − r_f) / β_i, where: T is the Treynor ratio, r_i is portfolio i's return, r_f is the risk-free rate, and β_i is portfolio i's beta. Example Taking the equation detailed above, let us assume that the expected portfolio return is 20%, the risk free rate is 5%, and the beta of the portfolio is 1.5. Substituting these values, we get the following: T = (0.20 − 0.05) / 1.5 = 0.10. Limitations Like the Sharpe ratio, the Treynor ratio (T) does not quantify the value added, if any, of active portfolio management. It is a ranking criterion only. A ranking of portfolios based on the Treynor Ratio is only useful if the portfolios under consideration are sub-portfolios of a broader, fully diversified portfolio. If this is not the case, portfolios with identical systematic risk, but different total risk, will be rated the same. But the portfolio with a higher total risk is less diversified and therefore has a higher unsystematic risk which is not priced in the market. An alternative method of ranking portfolio management is Jensen's alpha, which quantifies the added return as the excess return above the security market line in the capital asset pricing model. As these two methods both determine rankings based on systematic risk alone, they will rank portfolios identically. See also Bias ratio (finance) Hansen-Jagannathan bound Jensen's alpha Modern portfolio theory Modigliani risk-adjusted performance Omega ratio Shar
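A minimal sketch of the computation, reproducing the article's example figures (the function name and the Python setting are assumptions added here, not from the source):

```python
# Minimal sketch of the Treynor ratio: excess return over the risk-free rate
# per unit of systematic risk (beta). Figures follow the example above.

def treynor_ratio(portfolio_return, risk_free_rate, beta):
    """T = (r_i - r_f) / beta_i"""
    return (portfolio_return - risk_free_rate) / beta

print(treynor_ratio(0.20, 0.05, 1.5))   # 0.10
```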
https://en.wikipedia.org/wiki/Contact%20dynamics
Contact dynamics deals with the motion of multibody systems subjected to unilateral contacts and friction. Such systems are omnipresent in many multibody dynamics applications. Consider for example Contacts between wheels and ground in vehicle dynamics Squealing of brakes due to friction induced oscillations Motion of many particles, spheres which fall in a funnel, mixing processes (granular media) Clockworks Walking machines Arbitrary machines with limit stops, friction. Anatomic tissues (skin, iris/lens, eyelids/anterior ocular surface, joint cartilages, vascular endothelium/blood cells, muscles/tendons, et cetera) In the following it is discussed how such mechanical systems with unilateral contacts and friction can be modeled and how the time evolution of such systems can be obtained by numerical integration. In addition, some examples are given. Modeling The two main approaches for modeling mechanical systems with unilateral contacts and friction are the regularized and the non-smooth approach. In the following, the two approaches are introduced using a simple example. Consider a block which can slide or stick on a table (see figure 1a). The motion of the block is described by the equation of motion, whereas the friction force is unknown (see figure 1b). In order to obtain the friction force, a separate force law must be specified which links the friction force to the associated velocity of the block. Non-smooth approach A more sophisticated approach is the non-smooth approach, which uses set-valued force laws to model mechanical systems with unilateral contacts and friction. Consider again the block which slides or sticks on the table. The associated set-valued friction law of type Sgn is depicted in figure 3. Regarding the sliding case, the friction force is given. Regarding the sticking case, the friction force is set-valued and determined according to an additional algebraic constraint. To conclude, the non-smooth approach changes the underlying
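The contrast between the two modeling approaches can be sketched in a few lines of code. The sketch below is illustrative only: the function names, the tanh smoothing used for the regularized law, and the parameter values are assumptions, not taken from the source. It merely shows that a regularized law returns a single smooth force value at every velocity, while the set-valued Sgn law returns an interval of admissible forces when the block sticks.

```python
import numpy as np

# Illustrative sketch (assumed parameters): Coulomb friction for the sliding block.

def friction_regularized(v, mu=0.3, normal_force=10.0, eps=1e-3):
    """Regularized approach: smooth, single-valued approximation of the Sgn law."""
    return -mu * normal_force * np.tanh(v / eps)

def friction_set_valued(v, mu=0.3, normal_force=10.0):
    """Non-smooth approach: the set of admissible friction forces (an interval)."""
    f_max = mu * normal_force
    if v > 0:
        return (-f_max, -f_max)      # sliding: the force is uniquely given
    if v < 0:
        return (f_max, f_max)
    return (-f_max, f_max)           # sticking: any force in this interval is admissible
```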
https://en.wikipedia.org/wiki/Seed%20ball
Seed balls, also known as earth balls or , consist of seeds rolled within a ball of clay and other matter to assist germination. They are then thrown into vacant lots and over fences as a form of 'guerilla gardening'. Matter such as humus and compost are often placed around the seeds to provide microbial inoculants. Cotton-fibres or liquefied paper are sometimes added to further protect the clay ball in particularly harsh habitats. An ancient technique, it was re-discovered by Japanese natural farming pioneer Masanobu Fukuoka. Development of technique The technique for creating seed balls was rediscovered by Japanese natural farming pioneer Masanobu Fukuoka. The technique was also used, for instance, in ancient Egypt to repair farms after the annual spring flooding of the Nile. Masanobu Fukuoka developed his technique during the period of the Second World War, while working in a Japanese government lab as a plant scientist on the mountainous island of Shikoku. He wanted to find a technique that would increase food production without taking away from the land already allocated for traditional rice production which thrived in the volcanic rich soils of Japan. Construction To make a seed ball, generally about five measures of red clay by volume are combined with one measure of seeds. The balls are formed between 10 mm and 80 mm (about " to 3") in diameter. After the seed balls have been formed, they must dry for 24–48 hours before use. Seed bombing Seed bombing is the practice of introducing vegetation to land by throwing or dropping seed balls. It is used in modern aerial seeding as a way to deter seed predation. It has also been popularized by green movements such as guerrilla gardening as a way to introduce new plants to an environment. Guerrilla gardening The term "seed green-aide" was first used by Liz Christy in 1973 when she started the Green Guerillas. The first seed green-aides were made from condoms filled with tomato seeds, and fertilizer. They were t
https://en.wikipedia.org/wiki/Process%20analytical%20chemistry
Process analytical chemistry (PAC) is the application of analytical chemistry with specialized techniques, algorithms, and sampling equipment for solving problems related to chemical processes. It is a specialized form of analytical chemistry used for process manufacturing similar to process analytical technology (PAT) used in the pharmaceutical industry. The chemical processes are for production and quality control of manufactured products, and process analytical technology is used to determine the physical and chemical composition of the desired products during a manufacturing process. It was first mentioned in the chemical literature in 1946 (1,2). Process sampling Process analysis initially involved sampling the variety of process streams or webs and transporting samples to quality control or central analytical service laboratories. Time delays for analytical results due to sample transport and analytical preparation steps negated the value of many chemical analyses for purposes other than product release. Over time it was understood that real-time measurements provided timely information about a process, which was far more useful for high efficiency and quality. The development of real-time process analysis has provided information for process optimization during any manufacturing process. The journal Analytical Chemistry publishes a biennial review of the most recent developments in the field. The first real-time measurements in a production environment were made with modified laboratory instrumentation; in recent times specialized process and handheld instrumentation has been developed for immediate analysis. Applications Process analytical chemistry involves the following sub-disciplines of analytical chemistry: microanalytical systems, nanotechnology, chemical detection, electrochemistry or electrophoresis, chromatography, spectroscopy, mass spectrometry, process chemometrics, process control, flow injection analysis, ultrasound, and handheld s
https://en.wikipedia.org/wiki/Taiwan%20sleeper%20shark
The Taiwan sleeper shark (Somniosus cheni) is a small sleeper shark from the western North Pacific Ocean around Taiwan. It is only known from a single adult specimen, a pregnant female with 33 embryos, which was caught in 2017.
https://en.wikipedia.org/wiki/Coat%20of%20arms%20of%20Cantabria
The coat of arms of Cantabria has a rectangular shield, round in base (also called Spanish shield in heraldry) and the field is party en fess. In field azure, a tower or crenellated and masoned, port and windows azure, to its right a ship in natural colours that with its bow has broken a chain sable going from the tower to the dexter flank of the shield. At the base, sea waves argent and azure, all surmounted in chief by two male heads, severed and haloed. In field gules, a disc-shaped stele with geometric ornaments of the kind of the Cantabrian steles of Barros or Lombera. The crest is a closed royal crown, a circle of jeweled gold, made up of eight rosettes in the shape of acanthus leaves, only five visible, interpolated with pearls, and with half-arches topped with pearls rising from each leaf and converging in an orb azure, with submeridian and equator or, topped with cross or. The crown is lined in gules. The coat of arms was designed by a commission of experts made up of members of the Royal Academy of History. After long debates they decided to have two differentiated parts: one historical and hagiographic, and the other characteristic of the region. The historic part of the first field shows the emblem of the conquest of Seville by Cantabrian mariners in 1248, with the tower (representing the Torre del Oro) and the ship breaking the chain boom that blocked the way through the river Guadalquivir. It symbolizes the eight centuries of activity that characterised maritime Cantabria. The hagiographic references consist in the heads of the martyr saints Emeterius and Celedonius, representing the unity of the territory under their patronage. The second field shows the image of one of the most important legacies left by the primitive people who inhabited the region: the giant steles of the Cantabri. The Stele of Barros (discovered in the town of the same name) was taken as the model. The official coat of arms of Cantabria is completed with the inclusion of the Spanish royal crow
https://en.wikipedia.org/wiki/Excess%20post-exercise%20oxygen%20consumption
Excess post-exercise oxygen consumption (EPOC, informally called afterburn) is a measurably increased rate of oxygen intake following strenuous activity. In historical contexts the term "oxygen debt" was popularized to explain or perhaps attempt to quantify anaerobic energy expenditure, particularly as regards lactic acid/lactate metabolism; in fact, the term "oxygen debt" is still widely used to this day. However, direct and indirect calorimeter experiments have definitively disproven any association of lactate metabolism as causal to an elevated oxygen uptake. In recovery, oxygen (EPOC) is used in the processes that restore the body to a resting state and adapt it to the exercise just performed. These include: hormone balancing, replenishment of fuel stores, cellular repair, innervation, and anabolism. Post-exercise oxygen consumption replenishes the phosphagen system. New ATP is synthesized and some of this ATP donates phosphate groups to creatine until ATP and creatine levels are back to resting state levels again. Another use of EPOC is to fuel the body’s increased metabolism from the increase in body temperature which occurs during exercise. EPOC is accompanied by an elevated consumption of fuel. In response to exercise, fat stores are broken down and free fatty acids (FFA) are released into the blood stream. In recovery, the direct oxidation of free fatty acids as fuel and the energy consuming re-conversion of FFAs back into fat stores both take place. Duration of the effect The EPOC effect is greatest soon after the exercise is completed and decays to a lower level over time. One experiment, involving exertion above baseline, found EPOC increasing metabolic rate to an excess level that decays to 13% three hours after exercise, and 4% after 16 hours, for the studied exercise dose. Another study, specifically designed to test whether the effect existed for more than 16 hours, conducted tests for 48 hours after the conclusion of the exercise and found measu
https://en.wikipedia.org/wiki/Wait/walk%20dilemma
The wait/walk dilemma occurs when waiting for a bus at a bus stop, when the duration of the wait may exceed the time needed to arrive at a destination by another means, especially walking. Some work on this problem was featured in the 2008 "Year in Ideas" issue of The New York Times Magazine. Research The dilemma has been studied in an unpublished report entitled "Walk Versus Wait: The Lazy Mathematician Wins." Anthony B. Morton's paper "A Note on Walking Versus Waiting" supports and extends Chen et al.'s results. Ramnik Arora's "A Note on Walk versus Wait: Lazy Mathematician Wins" discusses what he believes to be some of the errors in Chen et al.'s argument; the result of Chen et al.'s paper still holds following Arora's alleged corrections. As early as 1990, writer Tom Parker had observed that "walking is faster than waiting for a bus if you're going less than a mile". As an undergraduate mathematics major at Harvard, Scott D. Kominers first began fixating on the problem while walking from MIT to Harvard, which are more than a mile apart in Cambridge, Massachusetts along MBTA bus route 1. He enlisted the help of Caltech physics major Justin G. Chen and Harvard statistics major Robert W. Sinnott to perform the analysis. Their paper concludes that it is usually mathematically quicker to wait for the bus, at least for a little while. But once made, the decision to walk should be final instead of waiting again at subsequent stops. The paper also showed potential applications to the field of cryptography. Interstellar travel The corresponding problem in interstellar travel is called the wait calculation, which tries to determine the optimal time to wait for technological progress to improve spaceship speeds before committing to the journey. See also Bus bunching Rendezvous problem
https://en.wikipedia.org/wiki/Girdling
Girdling, also called ring-barking, is the circumferential removal or injury of the bark (consisting of cork cambium or "phellogen", phloem, cambium and sometimes also the xylem) of a branch or trunk of a woody plant. Girdling prevents the tree from sending nutrients from its foliage to its roots, resulting in the death of the tree over time, and can also prevent flow of nutrients in the other direction depending on how much of the xylem is removed. A branch completely girdled will fail and when the main trunk of a tree is girdled, the entire tree will die, if it cannot regrow from above to bridge the wound. Human practices of girdling include forestry, horticulture, and vandalism. Foresters use the practice of girdling to thin forests. Extensive cankers caused by certain fungi, bacteria or viruses can girdle a trunk or limb. Animals such as rodents will girdle trees by feeding on outer bark, often during winter under snow. Girdling can also be caused by herbivorous mammals feeding on plant bark and by birds and insects, both of which can effectively girdle a tree by boring rows of adjacent holes. Orchardists use girdling as a cultural technique to yield larger fruit or to set fruit. In viniculture (grape cultivation) the technique is also called cincturing. Forestry and horticulture Like all vascular plants, trees use two vascular tissues for transportation of water and nutrients: the xylem (also known as the wood) and the phloem (the innermost layer of the bark). Girdling results in the removal of the phloem, and death occurs from the inability of the leaves to transport sugars (primarily sucrose) to the roots. In this process, the xylem is left untouched, and the tree can usually still temporarily transport water and minerals from the roots to the leaves. Trees normally sprout shoots below the wound; if not, the roots die. Death occurs when the roots can no longer produce ATP and transport nutrients upwards through the xylem. The formation of new shoots
https://en.wikipedia.org/wiki/Interval%20scheduling
Interval scheduling is a class of problems in computer science, particularly in the area of algorithm design. The problems consider a set of tasks. Each task is represented by an interval describing the time in which it needs to be processed by some machine (or, equivalently, scheduled on some resource). For instance, task A might run from 2:00 to 5:00, task B might run from 4:00 to 10:00 and task C might run from 9:00 to 11:00. A subset of intervals is compatible if no two intervals overlap on the machine/resource. For example, the subset {A,C} is compatible, as is the subset {B}; but neither {A,B} nor {B,C} are compatible subsets, because the corresponding intervals within each subset overlap. The interval scheduling maximization problem (ISMP) is to find a largest compatible set, i.e., a set of non-overlapping intervals of maximum size. The goal here is to execute as many tasks as possible, that is, to maximize the throughput. It is equivalent to finding a maximum independent set in an interval graph. A generalization of the problem considers machines/resources. Here the goal is to find compatible subsets whose union is the largest. In an upgraded version of the problem, the intervals are partitioned into groups. A subset of intervals is compatible if no two intervals overlap, and moreover, no two intervals belong to the same group (i.e., the subset contains at most a single representative of each group). Each group of intervals corresponds to a single task, and represents several alternative intervals in which it can be executed. The group interval scheduling decision problem (GISDP) is to decide whether there exists a compatible set in which all groups are represented. The goal here is to execute a single representative task from each group. GISDPk is a restricted version of GISDP in which the number of intervals in each group is at most k. The group interval scheduling maximization problem (GISMP) is to find a largest compatible set - a set of non-overl
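The maximization problem (ISMP) admits the classic earliest-finish-time greedy solution; the excerpt does not spell the algorithm out, so the following is a standard textbook sketch (added here for illustration) using the tasks A, B, C from the example above:

```python
# Standard earliest-finish-time greedy for the interval scheduling
# maximization problem (illustrative sketch, not code from the source).

def max_compatible(intervals):
    """Return a largest set of pairwise non-overlapping intervals."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

tasks = {"A": (2, 5), "B": (4, 10), "C": (9, 11)}
print(max_compatible(tasks.values()))     # [(2, 5), (9, 11)], i.e. {A, C}
```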
https://en.wikipedia.org/wiki/Signal-to-noise%20statistic
In mathematics the signal-to-noise statistic distance between two vectors a and b with mean values μ_a and μ_b and standard deviations σ_a and σ_b respectively is: D_sn(a, b) = (μ_a − μ_b) / (σ_a + σ_b). In the case of Gaussian-distributed data and unbiased class distributions, this statistic can be related to classification accuracy given an ideal linear discrimination, and a decision boundary can be derived. This distance is frequently used to identify vectors that have significant difference. One usage is in bioinformatics to locate genes that are differentially expressed in microarray experiments. See also Distance Uniform norm Manhattan distance Signal-to-noise ratio Signal to noise ratio (imaging)
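A short numerical sketch of the statistic as defined above (the sample vectors and the use of NumPy are illustrative assumptions, not part of the source):

```python
import numpy as np

# Signal-to-noise statistic: difference of means divided by the sum of the
# standard deviations. The data are made up for illustration, e.g. expression
# values of one gene in two sample classes.

def snr_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

class_1 = [2.1, 2.4, 1.9, 2.2]
class_2 = [3.0, 3.3, 2.8, 3.1]
print(snr_distance(class_1, class_2))
```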
https://en.wikipedia.org/wiki/Aliivibrio%20fischeri
Aliivibrio fischeri (also called Vibrio fischeri) is a Gram-negative, rod-shaped bacterium found globally in marine environments. This species has bioluminescent properties, and is found predominantly in symbiosis with various marine animals, such as the Hawaiian bobtail squid. It is heterotrophic, oxidase-positive, and motile by means of a single polar flagellum. Free-living A. fischeri cells survive on decaying organic matter. The bacterium is a key research organism for examination of microbial bioluminescence, quorum sensing, and bacterial-animal symbiosis. It is named after Bernhard Fischer, a German microbiologist. Ribosomal RNA comparison led to the reclassification of this species from the genus Vibrio to the newly created Aliivibrio in 2007. However, the name change is not generally accepted by researchers, most of whom still publish under the name Vibrio fischeri (see Google Scholar for 2018–2019). Genome The genome for A. fischeri was completely sequenced in 2004 and consists of two chromosomes, one smaller and one larger. Chromosome 1 has 2.9 million base pairs (Mbp) and chromosome 2 has 1.3 Mbp, bringing the total genome to 4.2 Mbp. A. fischeri has the lowest G+C content of 27 Vibrio species, but is still most closely related to the higher-pathogenicity species such as V. cholerae. The genome for A. fischeri also carries mobile genetic elements. Ecology A. fischeri are globally distributed in temperate and subtropical marine environments. They can be found free-floating in oceans, as well as associated with marine animals, sediment, and decaying matter. A. fischeri have been most studied as symbionts of marine animals, including squids in the genera Euprymna and Sepiola, where A. fischeri can be found in the squids' light organs. This relationship has been best characterized in the Hawaiian bobtail squid (Euprymna scolopes), where A. fischeri is the only species of bacteria inhabiting the squid's light organ. Symbiosis with the Hawaiian bobtail squid A. fischeri coloniz
https://en.wikipedia.org/wiki/Systems%20modeling%20language
The systems modeling language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities. Contrast with UML SysML offers several systems engineering specific improvements over UML, which was developed as a software modeling language. These improvements include the following: SysML's diagrams express systems engineering concepts better, owing to the removal of UML's software-centric restrictions and the addition of two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. Consequent to these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities. SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs. SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis. SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE
https://en.wikipedia.org/wiki/Allium%20roseum
Allium roseum, commonly called rosy garlic, is an edible, Old World species of wild garlic. It is native to the Mediterranean region and nearby areas, with a natural range extending from Portugal and Morocco to Turkey and the Palestine region. It is cultivated widely, and has become naturalised in scattered locations in other regions outside its natural range. Description Allium roseum grows naturally to about high in well-drained soils, and in Europe blooms from late spring to early summer. The inflorescences of A. roseum are umbels. The loose, fragrant florets are about long, having six pinkish to lilac tepals. The smell and flavour of the bulb are powerful enough to drive squirrels and browsing deer away from gardens, where they are planted as ornamental flowers. For this reason, they are suitable as companion plants to tulips and similar species. Taxonomy Allium roseum was originally described and published by Carl Linnaeus in his Species Plantarum in 1753. Subspecies and varieties Numerous names have been proposed at the subspecies and varietal levels within the species, but only a few are currently accepted: Allium roseum subsp. gulekense Koyuncu & Eker - Turkey Allium roseum subsp. roseum - most of species range Allium roseum var. roseum - most of species range Allium roseum var. tourneuxii Boiss. - Israel, Palestine, Egypt, Libya, Tunisia, Algeria formerly included Allium roseum var. cassium, now called Allium cassium Allium roseum subsp. permixtum, now called Allium permixtum Allium roseum subsp. persicum, now called Allium tripedale Allium roseum var. puberulum, now called Allium cassium
https://en.wikipedia.org/wiki/Pongine%20gammaherpesvirus%202
Pongine gammaherpesvirus 2 (PoHV-2), commonly known as orangutan herpesvirus, is a species of virus in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. It infects orangutans (Pongo).
https://en.wikipedia.org/wiki/Porous%20set
In mathematics, a porous set is a concept in the study of metric spaces. Like the concepts of meagre and measure zero sets, a porous set can be considered "sparse" or "lacking bulk"; however, porous sets are not equivalent to either meagre sets or measure zero sets, as shown below. Definition Let (X, d) be a complete metric space and let E be a subset of X. Let B(x, r) denote the closed ball in (X, d) with centre x ∈ X and radius r > 0. E is said to be porous if there exist constants 0 < α < 1 and r0 > 0 such that, for every 0 < r ≤ r0 and every x ∈ X, there is some point y ∈ X with B(y, αr) ⊆ B(x, r) \ E. A subset of X is called σ-porous if it is a countable union of porous subsets of X. Properties Any porous set is nowhere dense. Hence, all σ-porous sets are meagre sets (or of the first category). If X is a finite-dimensional Euclidean space Rn, then porous subsets are sets of Lebesgue measure zero. However, there does exist a non-σ-porous subset P of Rn which is of the first category and of Lebesgue measure zero. This is known as Zajíček's theorem. The relationship between porosity and being nowhere dense can be illustrated as follows: if E is nowhere dense, then for x ∈ X and r > 0, there is a point y ∈ X and s > 0 such that B(y, s) ⊆ B(x, r) \ E. However, if E is also porous, then it is possible to take s = αr (at least for small enough r), where 0 < α < 1 is a constant that depends only on E.
https://en.wikipedia.org/wiki/Crystallography%20and%20NMR%20system
CNS or Crystallography and NMR system, is a software library for computational structural biology. It is an offshoot of X-PLOR and uses much of the same syntax. It is used in the fields of X-ray crystallography and NMR spectroscopy of biological macromolecules.
https://en.wikipedia.org/wiki/List%20of%20Pakistani%20flags
This is a list of flags used in Pakistan. National flag Government flags Civil ensign Civil air ensign Provincial and territorial flags Military Naval rank flags Political flags Political parties Opposition/Rebel flag Historical flags Pre-colonial states British India Princely states of Pakistan Former national flag proposals See also National Flag of Pakistan
https://en.wikipedia.org/wiki/Statisticians%27%20and%20engineers%27%20cross-reference%20of%20statistical%20terms
The following terms are used by electrical engineers in statistical signal processing studies instead of typical statistician's terms. In other engineering fields, particularly mechanical engineering, uncertainty analysis examines systematic and random components of variations in measurements associated with physical experiments. Notes
https://en.wikipedia.org/wiki/Coherent%20duality
In mathematics, coherent duality is any of a number of generalisations of Serre duality, applying to coherent sheaves, in algebraic geometry and complex manifold theory, as well as some aspects of commutative algebra that are part of the 'local' theory. The historical roots of the theory lie in the idea of the adjoint linear system of a linear system of divisors in classical algebraic geometry. This was re-expressed, with the advent of sheaf theory, in a way that made an analogy with Poincaré duality more apparent. Then according to a general principle, Grothendieck's relative point of view, the theory of Jean-Pierre Serre was extended to a proper morphism; Serre duality was recovered as the case of the morphism of a non-singular projective variety (or complete variety) to a point. The resulting theory is now sometimes called Serre–Grothendieck–Verdier duality, and is a basic tool in algebraic geometry. A treatment of this theory, Residues and Duality (1966) by Robin Hartshorne, became a standard reference. One concrete spin-off was the Grothendieck residue. To go beyond proper morphisms, as for the versions of Poincaré duality that are not for closed manifolds, requires some version of the compact support concept. This was addressed in SGA2 in terms of local cohomology, and Grothendieck local duality; and subsequently. The Greenlees–May duality, first formulated in 1976 by Ralf Strebel and in 1978 by Eben Matlis, is part of the continuing consideration of this area. Adjoint functor point of view While Serre duality uses a line bundle or invertible sheaf as a dualizing sheaf, the general theory (it turns out) cannot be quite so simple. (More precisely, it can, but at the cost of imposing the Gorenstein ring condition.) In a characteristic turn, Grothendieck reformulated general coherent duality as the existence of a right adjoint functor f^!, called the twisted or exceptional inverse image functor, to the higher direct image with compact support functor Rf_!. Higher direct images a
https://en.wikipedia.org/wiki/Fc%CE%B1/%CE%BCR
Fcα/μR, also known as CD351 (Cluster of Differentiation 351), is an Fc receptor that binds IgM with high affinity and IgA with a 10-fold lower affinity. In mice the receptor is expressed on macrophages, follicular dendritic cells, marginal zone B cells, follicular B cells, and kidney tubular epithelial cells. In humans expression has been described on intestinal lamina propria cells, Paneth cells, follicular dendritic cells in tonsils, activated macrophages and some types of pre-germinal centre IgD+/CD38+ B cells.
https://en.wikipedia.org/wiki/List%20of%20electrical%20phenomena
This is a list of electrical phenomena. Electrical phenomena are a somewhat arbitrary division of electromagnetic phenomena. Some examples are: Biefeld–Brown effect — Thought by the person who coined the name, Thomas Townsend Brown, to be an anti-gravity effect, it is generally attributed to electrohydrodynamics (EHD) or sometimes electro-fluid-dynamics, a counterpart to the well-known magneto-hydrodynamics. Bioelectrogenesis — The generation of electricity by living organisms. Capacitive coupling — Transfer of energy within an electrical network or between distant networks by means of displacement current. Contact electrification — The phenomenon of electrification by contact. When two objects are touched together, they sometimes become spontaneously charged (one acquiring a negative charge, the other a positive charge). Corona effect — Build-up of charges in a high-voltage conductor (common in AC transmission lines), which ionizes the air and produces visible light, usually purple. Dielectric polarization — Orientation of charges in certain insulators inside an external static electric field, such as when a charged object is brought close, which produces an electric field inside the insulator. Direct current — (old: galvanic current) or "continuous current"; the continuous flow of electricity through a conductor such as a wire from high to low potential. Electromagnetic induction — Production of a voltage by a time-varying magnetic flux. Electroluminescence — The phenomenon wherein a material emits light in response to an electric current passed through it, or to a strong electric field. Electrostatic induction — Redistribution of charges in a conductor inside an external static electric field, such as when a charged object is brought close. Electrical conduction — The movement of electrically charged particles through a transmission medium. Electric shock — Physiological reaction of a biological organism to the passage of electric current through its body. Ferranti effect
https://en.wikipedia.org/wiki/MBus%20%28SPARC%29
MBus is a computer bus designed and implemented by Sun Microsystems for communication between high speed computer system components, such as the central processing unit, motherboard and main memory. SBus is used in the same machines to connect add-on cards to the motherboard. MBus was first used in Sun's first multiprocessor SPARC-based system, the SPARCserver 600MP series (launched in 1991), and later found use in the SPARCstation 10 and SPARCstation 20 workstations. The bus permits the integration of several microprocessors on a single motherboard, in a multiprocessing configuration with up to eight CPUs packaged in detachable MBus modules. In practice, the number of processors per MBus is limited to four. Single processor systems were also sold that use the MBus protocol internally, but with the CPUs permanently attached to the motherboard to lower manufacturing costs. MBus specifies a 64-bit datapath, which uses 36-bit physical addressing, giving an address space of 64 GB. The transfer rate is 80 MB/s sustained (320 MB/s peak) at 40 MHz, or 100 MB/s (400 MB/s peak) at 50 MHz. Bus controlling is done by an arbiter. Interrupt, reset, and timeout logic are also specified. Related buses Several related buses were also developed: XBus XBus is a packet-switched bus used in the SPARCserver 1000, SPARCcenter 2000 and Cray CS6400. This corresponds to the circuit-switched MBus, with identical electrical characteristics and physical form factor but an incompatible signalling protocol. KBus KBus is a high-speed interconnection system for linking multiple MBuses, used in Solbourne Computer Series 6 and Series 7 computer systems. History The MBus standard was cooperatively developed by Sun and Ross Technology and released in 1991. Manufacturers who produced computer systems using the MBus included Sun, Ross Technology, Hyundai/Axil, Fujitsu, Solbourne Computer, Tatung, GCS, Auspex, ITRI, ICL, Cray, Amdahl, Themis, DTK and Kamstrup. See also List of device bandw
https://en.wikipedia.org/wiki/Isothermal%20coordinates
In mathematics, specifically in differential geometry, isothermal coordinates on a Riemannian manifold are local coordinates where the metric is conformal to the Euclidean metric. This means that in isothermal coordinates, the Riemannian metric locally has the form g = φ (dx₁² + ⋯ + dxₙ²), where φ is a positive smooth function. (If the Riemannian manifold is oriented, some authors insist that a coordinate system must agree with that orientation to be isothermal.) Isothermal coordinates on surfaces were first introduced by Gauss. Korn and Lichtenstein proved that isothermal coordinates exist around any point on a two dimensional Riemannian manifold. By contrast, most higher-dimensional manifolds do not admit isothermal coordinates anywhere; that is, they are not usually locally conformally flat. In dimension 3, a Riemannian metric is locally conformally flat if and only if its Cotton tensor vanishes. In dimensions > 3, a metric is locally conformally flat if and only if its Weyl tensor vanishes. Isothermal coordinates on surfaces In 1822, Carl Friedrich Gauss proved the existence of isothermal coordinates on an arbitrary surface with a real-analytic Riemannian metric, following earlier results of Joseph Lagrange in the special case of surfaces of revolution. The construction used by Gauss made use of the Cauchy–Kowalevski theorem, so that his method is fundamentally restricted to the real-analytic context. Following innovations in the theory of two-dimensional partial differential equations by Arthur Korn, Leon Lichtenstein found in 1916 the general existence of isothermal coordinates for Riemannian metrics of lower regularity, including smooth metrics and even Hölder continuous metrics. Given a Riemannian metric on a two-dimensional manifold, the transition function between isothermal coordinate charts, which is a map between open subsets of R², is necessarily angle-preserving. The angle-preserving property together with orientation-preservation is one characterization (among many) of
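A standard concrete example (added here for illustration, not part of the excerpt) is the unit sphere with a point removed: stereographic projection provides isothermal coordinates, in which the round metric is visibly conformal to the flat metric.

```latex
% Illustrative example: stereographic coordinates on the unit sphere are isothermal.
% Projecting from the north pole onto the plane with coordinates (x, y),
% the round metric takes the conformal form
\[
  g \;=\; \frac{4}{\left(1 + x^{2} + y^{2}\right)^{2}}\,\bigl(dx^{2} + dy^{2}\bigr),
\]
% i.e. g = \varphi\,(dx^2 + dy^2) with the positive smooth function
% \varphi(x, y) = 4 / (1 + x^2 + y^2)^2.
```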
https://en.wikipedia.org/wiki/Tarantool
Tarantool is an in-memory computing platform with a flexible data schema, best used for creating high-performance applications. Its two main parts are an in-memory database and a Lua application server. Tarantool maintains data in memory and ensures crash resistance with write-ahead logging and snapshotting. It includes a Lua interpreter and interactive console, but also accepts connections from programs in several other languages. History Mail.Ru, one of the largest Internet companies in Russia, started the project in 2008 as part of the development of the Moy Mir (My World) social network. In 2010 it hired a former technical lead from MySQL to head the project. Open-source contributors have been active especially in the area of external-language connectors for C, Perl, PHP, Python, Ruby, and node.js. Tarantool became part of the Mail.Ru backbone, used for dynamic content such as user sessions, unsent instant messages, task queues, and a caching layer for traditional relational databases such as MySQL or PostgreSQL. By 2014 Tarantool had also been adopted by the social network services Badoo and Odnoklassniki (the latter affiliated with Mail.Ru since 2010). Properties All data is maintained in memory (RAM), with data persistence ensured by write-ahead logging and snapshotting, and for those reasons some industry observers have compared Tarantool to Membase. Replication is asynchronous and failover (getting one Tarantool server to take over from another) is possible either from a replica server or from a "hot standby" server. There are no locks. Tarantool uses Lua-style coroutines and asynchronous I/O. The result is that application programs or stored procedures must be written with cooperative multitasking in mind, rather than the more popular preemptive multitasking. For database storage the basic unit is a tuple. Tuples in tuple sets play the same role as rows in tables for relational databases. Tuples have an arbitrary number of fields, and fields d
https://en.wikipedia.org/wiki/Tunneling%20nanotube
A tunneling nanotube (TNT) or membrane nanotube is a term that has been applied to protrusions that extend from the plasma membrane which enable different animal cells to touch over long distances, sometimes over 100 μm between T cells. Two types of structures have been called nanotubes. The first type are less than 0.7 micrometers in diameter, contain actin and carry portions of plasma membrane between cells in both directions. The second type are larger (>0.7 μm), contain both actin and microtubules, and can carry components of the cytoplasm such as vesicles and organelles between cells, including whole mitochondria. The diameter of TNTs ranges from 50 to 200 nm and they can reach lengths of several cell diameters. These structures may be involved in cell-to-cell communication, transfer of nucleic acids such as mRNA and miRNA between cells in culture or in a tissue, and the spread of pathogens or toxins such as HIV and prions. TNTs have observed lifetimes ranging from a few minutes up to several hours, and several proteins have been implicated in their formation or inhibition. History Membrane nanotubes were first described in a 1999 Cell article examining the development of Drosophila melanogaster wing imaginal discs. More recently, a Science article published in 2004 described structures that connected various types of immune cells together, as well as connections between cells in tissue culture. Since these publications, more TNT-like structures have been recorded, containing varying levels of F-actin, microtubules and other components, but remaining relatively homogenous in terms of composition. Formation Several mechanisms may be involved in nanotube formation. These include molecular controls as well as cell-to-cell interactions. Two primary mechanisms for TNT formation have been proposed. The first involves cytoplasmic protrusions extending from one cell to another, where they fuse with the membrane of the target cell. The other is that, as two previou
https://en.wikipedia.org/wiki/Path%20explosion
In computer science, path explosion is a fundamental problem that limits the scalability and/or completeness of certain kinds of program analyses, including fuzzing, symbolic execution, and path-sensitive static analysis. Path explosion refers to the fact that the number of control-flow paths in a program grows exponentially ("explodes") with an increase in program size and can even be infinite in the case of programs with unbounded loop iterations. Therefore, any program analysis that attempts to explore control-flow paths through a program will either have exponential runtime in the length of the program (or potentially even failure to terminate on certain inputs), or will have to choose to analyze only a subset of all possible paths. When an analysis only explores a subset of all paths, the decision of which paths to analyze is often made heuristically.
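To make the exponential growth concrete (a toy illustration, not drawn from the source): a program consisting of n independent, sequential if-statements has 2^n control-flow paths, so even modest programs overwhelm any analysis that tries to enumerate paths exhaustively.

```python
# Toy illustration of path explosion: n independent branches yield 2**n paths.

def count_paths(n_branches):
    return 2 ** n_branches

for n in (1, 10, 20, 30):
    print(f"{n} branches -> {count_paths(n)} paths")
# 30 branches already give over a billion paths.
```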
https://en.wikipedia.org/wiki/Repeating%20waveforms
Repeating waveforms is a technique for digital synthesis common in PC sound cards. The waveform amplitude values are stored in a buffer memory, which is addressed by a phase generator. When addressed, the retrieved value is used as the basis of the synthesized sound. In the phase generator, a value proportional to the desired signal frequency is periodically added to an accumulator. The high-order bits of the accumulator form the output address, while the typically larger number of bits in the accumulator and addition value results in an arbitrarily high frequency resolution.
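A compact software sketch of the scheme just described (the table size, accumulator width, and sample rate below are illustrative assumptions): a tuning word proportional to the desired frequency is added to an accumulator each sample, and the accumulator's high-order bits index the stored waveform.

```python
import math

# Illustrative phase-accumulator (wavetable) synthesis sketch.
TABLE_BITS = 8                    # 2**8 = 256 stored amplitude values
ACC_BITS = 24                     # a wider accumulator gives fine frequency resolution
table = [math.sin(2 * math.pi * i / 2**TABLE_BITS) for i in range(2**TABLE_BITS)]

def synthesize(freq_hz, sample_rate_hz, n_samples):
    """Generate n_samples of the stored waveform at approximately freq_hz."""
    tuning_word = round(freq_hz * 2**ACC_BITS / sample_rate_hz)
    acc, out = 0, []
    for _ in range(n_samples):
        out.append(table[acc >> (ACC_BITS - TABLE_BITS)])   # high-order bits form the address
        acc = (acc + tuning_word) & (2**ACC_BITS - 1)       # wrap like a hardware accumulator
    return out

samples = synthesize(440.0, 48_000.0, 64)   # 64 samples of a 440 Hz tone
```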
https://en.wikipedia.org/wiki/Finite%20thickness
In formal language theory, in particular in algorithmic learning theory, a class C of languages has finite thickness if every string is contained in at most finitely many languages in C. This condition was introduced by Dana Angluin as a sufficient condition for C being identifiable in the limit. The related notion of M-finite thickness Given a language L and an indexed class C = { L1, L2, L3, ... } of languages, a member language Lj ∈ C is called a minimal concept of L within C if L ⊆ Lj, but not L ⊊ Li ⊆ Lj for any Li ∈ C. The class C is said to satisfy the MEF-condition if every finite subset D of a member language Li ∈ C has a minimal concept Lj ⊆ Li. Symmetrically, C is said to satisfy the MFF-condition if every nonempty finite set D has at most finitely many minimal concepts in C. Finally, C is said to have M-finite thickness if it satisfies both the MEF- and the MFF-condition. Finite thickness implies M-finite thickness. However, there are classes that are of M-finite thickness but not of finite thickness (for example, any class of languages C = { L1, L2, L3, ... } such that L1 ⊆ L2 ⊆ L3 ⊆ ...).
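For a quick illustration of the definition (an example added here, not taken from the source):

```latex
% Illustrative example of finite thickness. Over an alphabet \Sigma, let
\[
  C \;=\; \{\, \{w\} \;:\; w \in \Sigma^{*} \,\},
\]
% the class of all singleton languages. Every string w belongs to exactly one
% member of C, namely \{w\}, so C has finite thickness. By contrast, an infinite
% increasing chain L_1 \subseteq L_2 \subseteq L_3 \subseteq \cdots places each
% string of L_1 in infinitely many members, so such a class need not have finite
% thickness (though, as noted above, it can still have M-finite thickness).
```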
https://en.wikipedia.org/wiki/Union%20of%20Workers%20in%20Food%20and%20Allied%20Industries
The Union of Workers in Food and Allied Industries (, GLB) was a trade union representing workers in food production, tobacco manufacture, and related industries, in Austria. The union was founded by the Austrian Trade Union Federation in 1945. By 1990, it had 39,517 members. The following year, it merged with the Union of Agricultural and Forestry Workers, to form the Union of Agriculture, Food and Allied Industries. Presidents 1945: Karl Mantler 1960: Josef Staribacher 1989: Leopold Simperl
https://en.wikipedia.org/wiki/Greater%20tubercle
The greater tubercle of the humerus is the outward part of the upper end of that bone, adjacent to the large rounded prominence of the humeral head. It provides attachment points for the supraspinatus, infraspinatus, and teres minor muscles, three of the four muscles of the rotator cuff, a muscle group that stabilizes the shoulder joint. In doing so, the tubercle acts as a location for the transfer of forces from the rotator cuff muscles to the humerus. Structure The upper surface of the greater tubercle is rounded, and marked by three flat impressions: the highest ("superior facet") gives insertion to the supraspinatus muscle. the middle ("middle facet") gives insertion to the infraspinatus muscle. the lowest ("inferior facet"), and the body of the bone for about 2.5 cm, gives insertion to the teres minor muscle. The lateral surface of the greater tubercle is convex, rough, and continuous with the lateral surface of the body of the humerus. It can be described as having a cranial and a caudal part. Between the greater tubercle and the lesser tubercle is the bicipital groove (intertubercular sulcus). Function All three of the muscles that attach to the greater tubercle are part of the rotator cuff, a muscle group that stabilizes the shoulder joint. The greater tubercle therefore acts as a location for the transfer of forces from the rotator cuff muscles to the humerus. The fourth muscle of the rotator cuff (subscapularis muscle) does not attach to the greater tubercle, but instead attaches to the lesser tubercle. Clinical significance The greater tubercle is usually the easiest part of the humerus to palpate. It can be a useful surface landmark during surgery. Additional images
https://en.wikipedia.org/wiki/Koala%20emblems%20and%20popular%20culture
Koala emblems and popular culture deals with the uses which have been made of the image of the Koala, such as coins, emblems, logos, mascots and in the naming of sports teams. Australian emblems and logos The Koala is the official fauna emblem of Queensland, Australia. The Koala is the official fauna emblem for the Wildlife Preservation Society of Queensland. Rugby union team Queensland Reds has the Koala as its logo. Koala Corporation makes changing tables that are seen in public restrooms. United States mascots The Koala is the official mascot of Columbia College, a formerly women-only college in Columbia, South Carolina. The Koala is a student newspaper at the University of California, San Diego. Lumpy The Koala is a mascot for WWE Super Show-Down. In popular culture Blinky Bill is the koala star of several books, TV shows, a movie and games. Nutsy is Blinky's friend and later adopted sister in several books, TV shows, a movie and games. Mrs. Koala is Blinky's mother in several books, TV shows, a movie. Bunyip Bluegum is a koala in The Magic Pudding. Buster Moon in Sing and its sequel. Nigel, an eccentric British koala, in the 2006 Disney animated film The Wild. The Australian version of the American Disney computer-animated film Zootopia has a koala as a newscaster character. South Korean boyband BTS collaborated with Line Friends and released a set of characters called BT21; one of these characters, created by Namjoon, is a light blue koala named Koya. TV and films Star Trek: Lower Decks episode "Moist Vessel" reveals that the entire universe of the Star Trek franchise is carried in the back of a giant cosmic koala. Qantas airlines used a Koala who continually complains about the airline's reliability in a series of television commercials. An Australian children's show has animated characters headed by The Koala Brothers. Coojee Bear was the koala friend of Australian entertainer Rolf Harris in his 1960s UK television shows. In the animated series American Dad!
https://en.wikipedia.org/wiki/Ahmed%20Abbes
Ahmed Abbes (born 24 May 1970) is a Tunisian-French mathematician and a director of research at the Institut des Hautes Études Scientifiques (IHÉS). He is known for his work in arithmetic geometry. Early life and education Abbes was born on 24 May 1970 in Sfax, Tunisia. Abbes received a bronze medal in 1988 and a silver medal in 1989 at the International Mathematical Olympiad while representing Tunisia. Abbes has both French and Tunisian citizenship. Abbes studied at the École Normale Supérieure from 1990 to 1994 and then received his doctorate from Paris-Sud University in 1995 under the supervision of Lucien Szpiro, with the thesis Théorie d'Arakelov et courbes modulaires on Arakelov theory and modular curves. At Paris-Sud, Michel Raynaud was one of his mentors. Abbes received his habilitation in 2003. Career Abbes was a post-doctoral researcher at the Institut des Hautes Études Scientifiques (IHÉS) from 1995 to 1996 and was also a post-doctoral researcher at the Max Planck Institute for Mathematics in 1996. From 1996 to 2007, he was a Chargé de recherche at the CNRS at Paris-Sud University. From 2007 to 2011, he was a CNRS Director of Research (2nd class) at the University of Rennes 1. In 2011, he moved to the IHÉS where he was a CNRS Director of Research (2nd class) until 2013 and where he has been a CNRS Director of Research (1st class) since 2013. Abbes was an editor for Astérisque from 2010 to 2018 and is the co-editor-in-chief of the Tunisian Journal of Mathematics. Abbes is a Coordinator of the Tunisian Campaign for the Academic and Cultural Boycott of Israel (TACBI). He is also a Secretary of the French Association of Academics for Respect for International Law in Palestine (AURDIP). Research Abbes's research concerns the geometric and cohomological properties of sheaves on manifolds over perfect fields of positive characteristic and p-adic fields. He has worked on a p-adic Simpson correspondence and other topics in p-adic Hodge theory with Michel Gros. Awards In 2005,
https://en.wikipedia.org/wiki/Silurian%20hypothesis
The Silurian hypothesis is a thought experiment which assesses modern science's ability to detect evidence of a prior advanced civilization, perhaps several million years ago. The most probable cues for such a civilization could be carbon, radioactive elements or temperature variation. The name "Silurian" derives from the eponymous sapient species from the BBC science fiction series Doctor Who, who in the series established an advanced civilization prior to humanity. Astrophysicists Adam Frank and Gavin Schmidt proposed the "Silurian Hypothesis" in a 2018 paper, exploring the possibility of detecting an advanced civilization before humans in the geological record. They argued that there has been sufficient fossil carbon to fuel an industrial civilization since the Carboniferous Period (~350 million years ago). However, finding direct evidence, such as technological artifacts, is unlikely due to the rarity of fossilization and Earth's exposed surface. Instead, researchers might find indirect evidence, such as climate changes, anomalies in sediment, or traces of nuclear waste. The hypothesis also speculates that artifacts from past civilizations could be found on the Moon and Mars, where erosion and tectonic activity are less likely to erase evidence. The concept of pre-human civilizations has been explored in popular culture, including novels, television shows, and short stories. Explanation The idea was presented in a 2018 paper by Adam Frank, an astrophysicist at the University of Rochester, and Gavin Schmidt, director of the Goddard Institute for Space Studies. Frank and Schmidt imagined an advanced civilization before humans and pondered whether it would "be possible to detect an industrial civilization in the geological record". They argue as early as the Carboniferous period (~350 million years ago) "there has been sufficient fossil carbon to fuel an industrial civilization comparable with our own". However, they also wrote: "While we strongly doubt that any
https://en.wikipedia.org/wiki/SETD6
SET domain containing 6 is a protein in humans that is encoded by the SETD6 gene. SETD6 monomethylates the RelA subunit of nuclear factor kappa B (NF-κB). RelA mono-methylation at lysine 310 (RelAK310me1) leads to the constitutive repression of RelA target genes by recruiting the PKMT G9a-like protein (GLP), which catalyzes H3K9me2 and leads to chromatin silencing and gene repression. In response to stimulation with TNFα and lipopolysaccharide, phosphorylation of RelA at serine 311 (RelAS311ph) by PKCzeta physically blocks the interaction between GLP and RelAK310me1, leading to transcription activation. Methylation of PAK4 by SETD6 promotes activation of the Wnt/β-catenin pathway: SETD6 binds and methylates PAK4 both in vitro and in cells at chromatin, and depletion of SETD6 in various cell lines leads to a dramatic reduction in the expression of Wnt/β-catenin target genes. SETD6 binds to but does not methylate DJ1. Under basal conditions, SETD6 and DJ1 associate with chromatin, which prevents DJ1 from activating Nrf2 transcriptional activity. In response to oxidative stress, SETD6 mRNA and protein levels are dramatically reduced. SETD6 specifically binds and methylates PLK1 during mitosis at K209 and K413. Depletion of SETD6, as well as the double substitution of the lysine residues (K209/413R), leads to elevation in PLK1 catalytic activity, leading to the acceleration of the different mitotic steps, ending with early cytokinesis.
https://en.wikipedia.org/wiki/Startle%20response
In animals, including humans, the startle response is a largely unconscious defensive response to sudden or threatening stimuli, such as sudden noise or sharp movement, and is associated with negative affect. Usually the onset of the startle response is a startle reflex reaction. The startle reflex is a brainstem reflectory reaction (reflex) that serves to protect vulnerable parts, such as the back of the neck (whole-body startle) and the eyes (eyeblink) and facilitates escape from sudden stimuli. It is found across many different species, throughout all stages of life. A variety of responses may occur depending on the affected individual's emotional state, body posture, preparation for execution of a motor task, or other activities. The startle response is implicated in the formation of specific phobias. Startle reflex Neurophysiology A startle reflex can occur in the body through a combination of actions. A reflex from hearing a sudden loud noise will happen in the primary acoustic startle reflex pathway consisting of three main central synapses, or signals that travel through the brain. First, there is a synapse from the auditory nerve fibers in the ear to the cochlear root neurons (CRN). These are the first acoustic neurons of the central nervous system. Studies have shown a direct correlation to the amount of decrease of the startle to the number of CRNs that were killed. Second, there is a synapse from the CRN axons to the cells in the nucleus reticularis pontis caudalis (PnC) of the brain. These are neurons that are located in the pons of the brainstem. A study done to disrupt this portion of the pathway by the injection of PnC inhibitory chemicals has shown a dramatic decrease in the amount of startle by about 80 to 90 percent. Third, a synapse occurs from the PnC axons to the motor neurons in the facial motor nucleus or the spinal cord that will directly or indirectly control the movement of muscles. The activation of the facial motor nucleus causes a j
https://en.wikipedia.org/wiki/Power%20Wheels
Power Wheels is a brand of battery-powered ride-on toy cars for kids ages one to seven years old. Power Wheels ride-ons are built with kid-sized, realistic features – in some cases, real working features like FM radios, opening/closing doors and hoods, and both forward and reverse motion. History The product itself was created by an Italian company, Peg Perego, which started up in 1949. Peg Perego eventually began using gel cell batteries in their wheeled machines, and the product line was launched. The Power Wheels brand name dates back to 1984, when San Francisco-based toy company Kransco acquired Pines of America, makers of battery-powered vehicles for children. Two years later, Kransco renamed the line "Power Wheels". By 1990, sales of the battery-powered vehicles reached over 1,000,000 per year. In 1994, the Power Wheels line was bought by Mattel, which placed it under their Fisher-Price subsidiary. With the addition of new vehicle licenses the new Power Wheels lines did well. In 1999, Fisher-Price announced the Harley-Davidson Motorcycle Ride-On – which contributed to a year of record sales for the entire product line. Power Wheels vehicles Power Wheels ride-on cars, trucks and motorcycles have been sold with more than 100 model names. The latest line of Power Wheels features small scale versions of popular real world vehicles, including the Jeep Wrangler, Jeep Hurricane, Ford F-150, Ford Mustang, Kawasaki KFX quad, Harley-Davidson motorcycle, Cadillac Escalade EXT as well as Lightning McQueen from Pixar's film Cars, and a Thomas the Tank Engine with a circle of track. Safety recalls The first recall in 1991 involved the 18 Volt Porsche 911, in which the contacts in the foot pedal switch could weld together in use. If this were to happen, the motor would remain running and the vehicle would continue moving forward, unable to stop. A new accelerator pedal was fitted that eliminated the possibility of welded contacts. In 1998, Fisher-Price undertook
https://en.wikipedia.org/wiki/Tendinopathy
Tendinopathy is a type of tendon disorder that results in pain, swelling, and impaired function. The pain is typically worse with movement. It most commonly occurs around the shoulder (rotator cuff tendinitis, biceps tendinitis), elbow (tennis elbow, golfer's elbow), wrist, hip, knee (jumper's knee, popliteus tendinopathy), or ankle (Achilles tendinitis). Causes may include an injury or repetitive activities. Less common causes include infection, arthritis, gout, thyroid disease, diabetes and the use of quinolone antibiotic medicines. Groups at risk include people who do manual labor, musicians, and athletes. Diagnosis is typically based on symptoms, examination, and occasionally medical imaging. A few weeks following an injury, little inflammation remains, with the underlying problem related to weak or disrupted tendon fibrils. Treatment may include rest, NSAIDs, splinting, and physiotherapy. Less commonly steroid injections or surgery may be done. About 80% of patients recover completely within six months. Tendinopathy is relatively common. Older people are more commonly affected. It results in a large amount of missed work. Signs and symptoms Symptoms include tenderness on palpation, swelling, and pain, often when exercising or with a specific movement. Cause Causes may include an injury or repetitive activities. Groups at risk include people who do manual labor, musicians, and athletes. Less common causes include infection, arthritis, gout, thyroid disease, and diabetes. Despite injury to the tendon, healing is possible through rehabilitation therapy and/or surgery. Obesity, or more specifically, adiposity or fatness, has also been linked to an increasing incidence of tendinopathy. Quinolone antibiotics are associated with increased risk of tendinitis and tendon rupture. A 2013 review found the incidence of tendon injury among those taking fluoroquinolones to be between 0.08 and 0.2%. Fluoroquinolones most frequently affect large load-beari
https://en.wikipedia.org/wiki/Stylomastoid%20artery
The stylomastoid artery enters the stylomastoid foramen and supplies the tympanic cavity, the tympanic antrum and mastoid cells, and the semicircular canals. It is a branch of the posterior auricular artery, and thus part of the external carotid arterial system. In the young subject a branch from this vessel forms, with the anterior tympanic artery from the internal maxillary, a vascular circle, which surrounds the tympanic membrane, and from which delicate vessels ramify on that membrane. It anastomoses with the superficial petrosal branch of the middle meningeal artery by a twig which enters the hiatus canalis facialis.
https://en.wikipedia.org/wiki/Index%20of%20mechanical%20engineering%20articles
This is an alphabetical list of articles pertaining specifically to mechanical engineering. For a broad overview of engineering, please see List of engineering topics. For biographies please see List of engineers. A Acceleration – Accuracy and precision – Actual mechanical advantage – Aerodynamics – Agitator (device) – Air handler – Air conditioner – Air preheater – Allowance – American Machinists' Handbook – American Society of Mechanical Engineers – Ampere – Applied mechanics – Antifriction – Archimedes' screw – Artificial intelligence – Automaton clock – Automobile – Automotive engineering – Axle – Air Compressor B Backlash – Balancing – Beale Number – Bearing – Belt (mechanical) – Bending – Biomechatronics – Bogie – Brittle – Buckling – Bus – Bushing – Boilers & boiler systems – BIW – C CAD – CAM – CAID – Calculator – Calculus – Car handling – Carbon fiber – Classical mechanics – Clean room design – Clock – Clutch – CNC – Coefficient of thermal expansion – Coil spring – Combustion – Composite material – Compression ratio – Compressive strength – Computational fluid dynamics – Computer – Computer-aided design – Computer-aided industrial design – Computer-numerically controlled – Conservation of mass – Constant-velocity joint – Constraint – Continuum mechanics – Control theory – Corrosion – Cotter pin – Crankshaft – Cybernetics – D Damping ratio – Deformation (engineering) – Delamination – Design – Diesel Engine – Differential – Dimensionless number – Diode – Diode laser – Drafting – Drifting – Driveshaft – Dynamics – Design for Manufacturability for CNC machining – E Elasticity – Elasticity tensor – Electric motor – Electrical engineering – Electrical circuit – Electrical network – Electromagnetism – Electronic circuit – Electronics – Energy – Engine – Engineering – Engineering cybernetics – Engineering drawing – Engineering economics – Engineering ethics – Engineering management – Engineering society – Exploratory engineering – F (Fits and tolerances) – Fa
https://en.wikipedia.org/wiki/Electron-rich
Electron-rich is jargon that is used in multiple related meanings with either or both kinetic and thermodynamic implications: with regards to electron-transfer, electron-rich species have low ionization energy and/or are reducing agents. Tetrakis(dimethylamino)ethylene is an electron-rich alkene because, unlike ethylene, it forms an isolable radical cation. In contrast, the electron-poor alkene tetracyanoethylene is an electron acceptor, forming an isolable radical anion. with regards to acid-base reactions, electron-rich species have high pKa's and react with weak Lewis acids. with regards to nucleophilic substitution reactions, electron-rich species are relatively strong nucleophiles, as judged by rates of attack by electrophiles. For example, compared to benzene, pyrrole is more rapidly attacked by electrophiles. Pyrrole is therefore considered to be an electron-rich aromatic ring. Similarly, benzene derivatives with electron-donating groups (EDGs) are attacked by electrophiles faster than benzene itself. The electron-donating vs electron-withdrawing influence of various functional groups has been extensively parameterized in linear free energy relationships. with regards to Lewis acidity, electron-rich species are strong Lewis bases. See also Electron-withdrawing group
https://en.wikipedia.org/wiki/Expiration%20date
An expiration date or expiry date is a previously determined date after which something should no longer be used, either by operation of law or by exceeding the anticipated shelf life for perishable goods. Expiration dates are applied to selected food products and to some other manufactured products like infant car seats where the age of the product may impact its safe use. The legal definition and usage of terms will vary between countries and products. Different terms may be used for products that tend to spoil and those that tend to be shelf-stable. The term Use by is often applied to products such as milk and meat that are more likely to spoil and can become dangerous to those eating them. Such products should not be consumed past the date shown. The term Best before is often applied to products that may deteriorate slightly in quality, but are unlikely to become dangerous as a result, such as dried foods. Such products can be eaten after their Best before date at the discretion of the consumer. Storage and handling conditions will affect whether and when an item will spoil, so there is inherent variability in dating. A time temperature indicator is a sensing label or device that indicates whether a product has been exposed to dangerously high or low temperatures. These indicators are often used for determining whether a product is spoiled due to external factors even if it is before the expiration date. Arbitrary expiration dates are also commonly applied by companies to product coupons, promotional offers and credit cards. In these contexts, the expiration date is chosen for business reasons or to provide some security function rather than any product safety concern. Expiration date is often abbreviated EXP or ED. Terms Use by Generally, foods that have a use by date written on the packaging should not be eaten after the specified date. This term is generally applied to foods that may go bad due to physical instability, chemical spoilage, bacterial s
https://en.wikipedia.org/wiki/Melarsoprol
Melarsoprol is an arsenic-containing medication used for the treatment of sleeping sickness (African trypanosomiasis). It is specifically used for second-stage disease caused by Trypanosoma brucei rhodesiense when the central nervous system is involved. For Trypanosoma brucei gambiense, eflornithine or fexinidazole is usually preferred. It is effective in about 95% of people. It is given by injection into a vein. Melarsoprol has a high number of side effects. Common side effects include brain dysfunction, numbness, rashes, and kidney and liver problems. About 1-5% of people die during treatment, although this is tolerated due to sleeping sickness itself having a practically 100% mortality rate when untreated. In those with glucose-6-phosphate dehydrogenase (G6PD) deficiency, red blood cell breakdown may occur. It has not been studied in pregnancy. It works by blocking pyruvate kinase, an enzyme required for aerobic metabolism by the parasite. Melarsoprol has been used medically since 1949. It is on the World Health Organization's List of Essential Medicines. In regions of the world where the disease is common, melarsoprol is provided for free by the World Health Organization. It is not commercially available in Canada or the United States. In the United States, it may be obtained from the Centers for Disease Control and Prevention, while in Canada it is available from Health Canada. Medical uses People diagnosed with trypanosome-caused disease should be treated with an anti-trypanosomal. Treatment is based on stage, 1 or 2, and parasite, T. b. rhodesiense or T. b. gambiense. In stage 1 disease, trypanosomes are present only in the peripheral circulation. In stage 2 disease, trypanosomes have crossed the blood-brain barrier and are present in the central nervous system. The following are considerable treatment options: Melarsoprol is a treatment used during the second stage of the disease. So far, it is the only treatment available for late-stage T. b. rhodesien
https://en.wikipedia.org/wiki/Core%20model
In set theory, the core model is a definable inner model of the universe of all sets. Even though set theorists refer to "the core model", it is not a uniquely identified mathematical object. Rather, it is a class of inner models that under the right set-theoretic assumptions have very special properties, most notably covering properties. Intuitively, the core model is "the largest canonical inner model there is" (Ernest Schimmerling and John R. Steel) and is typically associated with a large cardinal notion. If Φ is a large cardinal notion, then the phrase "core model below Φ" refers to the definable inner model that exhibits the special properties under the assumption that there does not exist a cardinal satisfying Φ. The core model program seeks to analyze large cardinal axioms by determining the core models below them. History The first core model was Kurt Gödel's constructible universe L. Ronald Jensen proved the covering lemma for L in the 1970s under the assumption of the non-existence of zero sharp, establishing that L is the "core model below zero sharp". The work of Solovay isolated another core model L[U], for U an ultrafilter on a measurable cardinal (and its associated "sharp", zero dagger). Together with Tony Dodd, Jensen constructed the Dodd–Jensen core model ("the core model below a measurable cardinal") and proved the covering lemma for it and a generalized covering lemma for L[U]. Mitchell used coherent sequences of measures to develop core models containing multiple or higher-order measurables. Still later, the Steel core model used extenders and iteration trees to construct a core model below a Woodin cardinal. Construction of core models Core models are constructed by transfinite recursion from small fragments of the core model called mice. An important ingredient of the construction is the comparison lemma that allows giving a wellordering of the relevant mice. At the level of strong cardinals and above, one constructs an intermediate count
https://en.wikipedia.org/wiki/Computational%20indistinguishability
In computational complexity and cryptography, two families of distributions are computationally indistinguishable if no efficient algorithm can tell the difference between them except with negligible probability. Formal definition Let {Dn} and {En} be two distribution ensembles indexed by a security parameter n (which usually refers to the length of the input); we say they are computationally indistinguishable if for any non-uniform probabilistic polynomial time algorithm A, the quantity |Pr[A(x) = 1 : x ← Dn] − Pr[A(x) = 1 : x ← En]| is a negligible function in n; this relationship is denoted Dn ≈ En. In other words, every efficient algorithm A's behavior does not significantly change when given samples according to Dn or En in the limit as n → ∞. Another interpretation of computational indistinguishability is that polynomial-time algorithms actively trying to distinguish between the two ensembles cannot do so: any such algorithm will only perform negligibly better than if one were to just guess. Related notions Implicit in the definition is the condition that the algorithm, A, must decide based on a single sample from one of the distributions. One might conceive of a situation in which the algorithm trying to distinguish between two distributions could access as many samples as it needed. Hence two ensembles that cannot be distinguished by polynomial-time algorithms looking at multiple samples are deemed indistinguishable by polynomial-time sampling. If the polynomial-time algorithm can generate samples in polynomial time, or has access to a random oracle that generates samples for it, then indistinguishability by polynomial-time sampling is equivalent to computational indistinguishability.
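To make the distinguishing advantage tangible, here is a toy Python sketch (not from the article; the two sample distributions and the distinguisher are arbitrary choices) that estimates |Pr[A(D) = 1] − Pr[A(E) = 1]| by Monte Carlo sampling:

import random

def sample_D():
    # Example ensemble D: an unbiased bit
    return random.getrandbits(1)

def sample_E():
    # Example ensemble E: a slightly biased bit
    return 1 if random.random() < 0.55 else 0

def distinguisher(x):
    # A trivial distinguisher: output the sample itself
    return x

def advantage(num_trials=100_000):
    """Estimate |Pr[A(D)=1] - Pr[A(E)=1]| by sampling each ensemble."""
    hits_d = sum(distinguisher(sample_D()) for _ in range(num_trials))
    hits_e = sum(distinguisher(sample_E()) for _ in range(num_trials))
    return abs(hits_d - hits_e) / num_trials

print(advantage())   # roughly 0.05 here; for indistinguishable ensembles this would be negligible in n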
https://en.wikipedia.org/wiki/List%20of%20numeral%20system%20topics
This is a list of Wikipedia articles on topics of numeral systems and numeric representations. See also: computer numbering formats and number names. Arranged by base Radix, radix point, mixed radix, base (mathematics) Unary numeral system (base 1) Binary numeral system (base 2) Negative base numeral system (base −2) Ternary numeral system (base 3) Balanced ternary numeral system (base 3) Negative base numeral system (base −3) Quaternary numeral system (base 4) Quater-imaginary base (base 2i) Quinary numeral system (base 5) Senary numeral system (base 6) Septenary numeral system (base 7) Octal numeral system (base 8) Nonary (novenary) numeral system (base 9) Decimal (denary) numeral system (base 10) Negative base numeral system (base −10) Duodecimal (dozenal) numeral system (base 12) Hexadecimal numeral system (base 16) Vigesimal numeral system (base 20) Sexagesimal numeral system (base 60) Arranged by culture Other Numeral system topics
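To make the less familiar entries concrete, the following small Python sketch (an illustration, not part of the list itself) converts an integer to its digit string in a negative base such as −2 or −10:

def to_negative_base(n, base):
    """Return the digit string of integer n in the given negative base (base <= -2)."""
    if base >= -1:
        raise ValueError("base must be -2 or smaller")
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, remainder = divmod(n, base)
        if remainder < 0:        # force a non-negative digit
            remainder -= base    # base is negative, so this adds |base|
            n += 1
        digits.append(str(remainder))
    return "".join(reversed(digits))

print(to_negative_base(6, -2))   # '11010': 1*16 + 1*(-8) + 0*4 + 1*(-2) + 0*1 = 6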
https://en.wikipedia.org/wiki/Deep-sea%20fish
Deep-sea fish are fish that live in the darkness below the sunlit surface waters, that is below the epipelagic or photic zone of the sea. The lanternfish is, by far, the most common deep-sea fish. Other deep sea fishes include the flashlight fish, cookiecutter shark, bristlemouths, anglerfish, viperfish, and some species of eelpout. Only about 2% of known marine species inhabit the pelagic environment. This means that they live in the water column as opposed to the benthic organisms that live in or on the sea floor. Deep-sea organisms generally inhabit bathypelagic (1000–4000 m deep) and abyssopelagic (4000–6000 m deep) zones. However, characteristics of deep-sea organisms, such as bioluminescence, can be seen in the mesopelagic (200–1000 m deep) zone as well. The mesopelagic zone is the disphotic zone, meaning light there is minimal but still measurable. The oxygen minimum layer exists somewhere between a depth of 700 m and 1000 m deep depending on the place in the ocean. This area is also where nutrients are most abundant. The bathypelagic and abyssopelagic zones are aphotic, meaning that no light penetrates this area of the ocean. These zones make up about 75% of the inhabitable ocean space. The epipelagic zone (0–200 m) is the area where light penetrates the water and photosynthesis occurs. This is also known as the photic zone. Because this typically extends only a few hundred meters below the water, the deep sea, about 90% of the ocean volume, is in darkness. The deep sea is also an extremely hostile environment, with temperatures that rarely rise above a few degrees Celsius and can fall below 0 °C (with the exception of hydrothermal vent ecosystems, which can exceed 350 °C, or 662 °F), low oxygen levels, and pressures between 20 and 1000 atm (between 2 and 100 MPa). Evolution The earliest known records of deep-sea fish are trace fossils of feeding and swimming behavior attributed to unidentified neoteleosts (referable to the ichnogenera Piscichnus and Undichna), from the Early Cretaceou
https://en.wikipedia.org/wiki/Color%20normalization
Color normalization is a topic in computer vision concerned with artificial color vision and object recognition. In general, the distribution of color values in an image depends on the illumination, which may vary depending on lighting conditions, cameras, and other factors. Color normalization allows for object recognition techniques based on color to compensate for these variations. Main concepts Color constancy Color constancy is a feature of the human internal model of perception, which provides humans with the ability to assign a relatively constant color to objects even under different illumination conditions. This is helpful for object recognition as well as identification of light sources in an environment. For example, humans see an object as approximately the same color whether the sun is bright or dim. Applications Color normalization has been used for object recognition on color images in the fields of robotics, bioinformatics and general artificial intelligence, when it is important to remove all intensity values from the image while preserving color values. One example is in the case of a scene shot by a surveillance camera over the day, where it is important to remove shadows or lighting changes on same-color pixels and recognize the people that passed. Another example is automated screening tools used for the detection of diabetic retinopathy as well as molecular diagnosis of cancer states, where it is important to include color information during classification. Known issues The main issue with certain applications of color normalization is that the end result looks unnatural or too distant from the original colors. In cases where there is a subtle variation between important aspects, this can be problematic. More specifically, the side effect can be that pixels become divergent and do not reflect the actual color value of the image. A way of combating this issue is to use color normalization in combination with thresholding to correct
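As a simple illustrative technique in this family (one of several possible normalizations, not necessarily the one used by any particular system mentioned here), the following Python sketch converts RGB pixels to chromaticity coordinates, which removes overall intensity while preserving relative color:

import numpy as np

def chromaticity_normalize(image):
    """Map each RGB pixel of an H x W x 3 array to r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)."""
    image = image.astype(np.float64)
    total = image.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    return image / total

# A bright and a dim pixel of the "same" color normalize to the same value.
pixels = np.array([[[200, 100, 50], [20, 10, 5]]], dtype=np.uint8)
print(chromaticity_normalize(pixels))
# both pixels become approximately [0.571, 0.286, 0.143]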
https://en.wikipedia.org/wiki/Peetre%20theorem
In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it. This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications. The original Peetre theorem Let M be a smooth manifold and let E and F be two vector bundles on M. Let Γ∞(E) and Γ∞(F) be the spaces of smooth sections of E and F. An operator D : Γ∞(E) → Γ∞(F) is a morphism of sheaves which is linear on sections such that the support of D is non-increasing: supp Ds ⊆ supp s for every smooth section s of E. The original Peetre theorem asserts that, for every point p in M, there is a neighborhood U of p and an integer k (depending on U) such that D is a differential operator of order k over U. This means that D factors through a linear mapping iD from the k-jet of sections of E into the space of smooth sections of F: D = iD ∘ jk, where jk : Γ∞(E) → Γ∞(JkE) is the k-jet operator and iD : JkE → F is a linear mapping of vector bundles. Proof The problem is invariant under local diffeomorphism, so it is sufficient to prove it when M is an open set in Rn and E and F are trivial bundles. At this point, it relies primarily on two lemmas: Lemma 1. If the hypotheses of the theorem are satisfied, then for every x∈M and C > 0, there exists a neighborhood V of x and a positive integer k such that for any y∈V\{x} and for any section s of E whose k-jet vanishes at y (jk s(y) = 0), we have |Ds(y)|<C. Lemma 2. The first lemma is sufficient to prove the theorem. We begin with the proof of Lemma 1. Suppose the lemma is false. Then there
https://en.wikipedia.org/wiki/DuPont%20analysis
DuPont analysis (also known as the DuPont identity, DuPont equation, DuPont framework, DuPont model or the DuPont method) is a tool in financial analysis, where return on equity, ROE, is separated into its components, useful in several contexts. This "decomposition" of ROE allows management to focus individually on the key metrics of financial performance, and thereby to identify strengths and weaknesses within the company that should be addressed. It similarly allows investors to compare the operational efficiency of two comparable firms. The name derives from the DuPont company, which began using this formula in the 1920s. A DuPont explosives salesman, Donaldson Brown, submitted an internal efficiency report to his superiors in 1912 that contained the formula. Basic formula The DuPont analysis breaks down ROE into three component parts, which may then be managed individually: Profitability: measured by profit margin Asset efficiency: measured by asset turnover Financial leverage: measured by equity multiplier. Equivalently, ROE = (Net income / Sales) × (Sales / Total assets) × (Total assets / Shareholders' equity) = Profit margin × Asset turnover × Equity multiplier. ROE analysis The DuPont analysis breaks down ROE (that is, the returns that investors receive from a single dollar of equity) into three distinct elements. This analysis enables the manager or analyst to understand the source of superior (or inferior) return by comparison with companies in similar industries (or between industries). The DuPont analysis is less useful for industries such as investment banking, in which the underlying elements are not meaningful. Variations of the DuPont analysis have been developed for industries where the elements are weakly meaningful. Examples follow: High margin industries Some industries, such as fashion, may derive a substantial portion of their competitive advantage from selling at a higher margin, rather than higher sales. For high-end fashion brands, increasing sales without sacrificing margin may be critical. The DuPont analysis allows analysts to determine which of the e
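To make the three-factor decomposition concrete, here is a minimal Python sketch with illustrative figures only (the numbers are hypothetical, not from the article):

def dupont(net_income, sales, total_assets, equity):
    """Return (profit_margin, asset_turnover, equity_multiplier, roe)."""
    profit_margin = net_income / sales
    asset_turnover = sales / total_assets
    equity_multiplier = total_assets / equity
    roe = profit_margin * asset_turnover * equity_multiplier
    return profit_margin, asset_turnover, equity_multiplier, roe

# Hypothetical firm: 12m net income, 100m sales, 80m total assets, 40m equity.
pm, at, em, roe = dupont(12, 100, 80, 40)
print(pm, at, em, roe)               # 0.12, 1.25, 2.0, 0.30
assert abs(roe - 12 / 40) < 1e-12    # identical to the direct ratio net income / equity

The assertion illustrates why the decomposition is called an identity: multiplying the three factors simply cancels sales and total assets, leaving net income over equity.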
https://en.wikipedia.org/wiki/Open%20Graphics%20Project
The Open Graphics Project (OGP) was founded with the goal to design an open-source hardware / open architecture and standard for graphics cards, primarily targeting free software / open-source operating systems. The project created a reprogrammable development and prototyping board and had aimed to eventually produce a full-featured and competitive end-user graphics card. OGD1 The project's first product was a PCI graphics card dubbed OGD1, which used a field-programmable gate array (FPGA) chip. Although the card could not compete with graphics cards on the market at the time in terms of performance or functionality, it was intended to be useful as a tool for prototyping the project's first application-specific integrated circuit (ASIC) board, as well as for other professionals needing programmable graphics cards or FPGA-based prototyping boards. It was also hoped that this prototype would attract enough interest to gain some profit and attract investors for the next card, since it was expected to cost around US$2,000,000 to start the production of a specialized ASIC design. PCI Express and/or Mini-PCI variations were planned to follow. The OGD1 began shipping in September 2010, some six years after the project began and 3 years after the appearance of the first prototypes. Full specifications will be published and open-source device drivers will be released. All RTL will be released. Source code to the device drivers and BIOS will be released under the MIT and BSD licenses. The RTL (in Verilog) used for the FPGA and the RTL used for the ASIC are planned to be released under the GNU General Public License (GPL). It has 256 MiB of DDR RAM, is passively cooled, and follows the DDC, EDID, DPMS and VBE VESA standards. TV-out is also planned. Versioning schema Versioning schema for OGD1 will go like this: {Root Number} – {Video Memory}{Video Output Interfaces}{Special Options e.g.: A1 OGA firmware installed} OGD1 components Main components of OGD1 graphics car
https://en.wikipedia.org/wiki/Nettime
Nettime is an internet mailing list proposed in 1995 by Geert Lovink and Pit Schultz (then half-jokingly called "the nettime brothers") at the second meeting of the "Medien Zentral Kommittee" during the Venice Biennale. Since 1998, Ted Byfield and Felix Stalder have moderated the main list, coordinated moderation of other lists in the nettime "family," and maintained the site as their nexus. The name nettime was chosen as a statement against space metaphors such as cyberspace, dominant at the time. The time of nettime is a social time, it is subjective and intensive, with condensation and extractions, segmented by social events like conferences and little meetings, and text gatherings for export into the paper world. Most people still like to read a text printed on wooden paper, more than transmitted via waves of light. Nettime is not the same time like geotime, or the time clocks go. Everyone who programs or often sits in front of a screen knows about the phenomena of being out of time, time on the net consists of different speeds, computers, humans, software, bandwidth, the only way to see a continuity of time on the net is to see it as a asynchronous network of synchronized time zones. Nettime has been widely recognized for its seminal role stimulating and disseminating ideas about Netzkritik or Net Critique, net.art, and tactical media and pioneered practices such as "collaborative filtering". For example, in 2004 nettime was nominated for an Ars Electronica Golden Nica award. However, the moderators refuse to speak or act as representatives of an organization, preferring instead to serve inasmuch as possible as coordinators of a loose or "headless" collective. The list and related meetings were a strong influence on Bruce Sterling's 1996 science fiction novel Holy Fire. Initially, it was both part of an early wave of, and served as an inspiration for, a number of related efforts such as Blast (1995–1998), Rhizome (1996–present), Fibreculture (2001–present),
https://en.wikipedia.org/wiki/Inelastic%20mean%20free%20path
The inelastic mean free path (IMFP) is an index of how far an electron on average travels through a solid before losing energy. If a monochromatic, primary beam of electrons is incident on a solid surface, the majority of incident electrons lose their energy because they interact strongly with matter, leading to plasmon excitation, electron-hole pair formation, and vibrational excitation. The intensity of the primary electrons, I0, is damped as a function of the distance, d, into the solid. The intensity decay can be expressed as follows: I(d) = I0 exp(−d/λ), where I(d) is the intensity after the primary electron beam has traveled through the solid to a distance d. The parameter λ, termed the inelastic mean free path (IMFP), is defined as the distance an electron beam can travel before its intensity decays to 1/e of its initial value. (Note that this equation is closely related to the Beer–Lambert law.) The inelastic mean free path of electrons can roughly be described by a universal curve that is the same for all materials. The knowledge of the IMFP is indispensable for several electron spectroscopy and microscopy measurements. Applications of the IMFP in XPS In the following, the IMFP is employed to calculate the effective attenuation length (EAL), the mean escape depth (MED) and the information depth (ID). Besides, one can utilize the IMFP to make matrix corrections for the relative sensitivity factor in quantitative surface analysis. Moreover, the IMFP is an important parameter in Monte Carlo simulations of photoelectron transport in matter. Calculations of the IMFP Calculations of the IMFP are mostly based on the algorithm (full Penn algorithm, FPA) developed by Penn, experimental optical constants or calculated optical data (for compounds). The FPA considers an inelastic scattering event and the dependence of the energy-loss function (ELF) on momentum transfer, which describes the probability for inelastic scattering as a function of momentum transfer. Experimental measurements of the I
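As a small numerical illustration of the exponential attenuation law above (the IMFP value and depths are arbitrary example numbers, not taken from the article), the following Python sketch computes the surviving intensity fraction at a few depths:

import math

def intensity_fraction(depth_nm, imfp_nm):
    """Fraction of primary electron intensity remaining at depth d: I(d)/I0 = exp(-d/lambda)."""
    return math.exp(-depth_nm / imfp_nm)

imfp = 2.0   # nm, an assumed order-of-magnitude value for illustration
for d in (0.5, 2.0, 6.0):
    print(f"d = {d} nm -> I/I0 = {intensity_fraction(d, imfp):.3f}")
# at d equal to the IMFP the fraction is 1/e (about 0.368), matching the definition above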
https://en.wikipedia.org/wiki/Fluid%20theory%20of%20electricity
Fluid theories of electricity are outdated theories that postulated one or more electrical fluids which were thought to be responsible for many electrical phenomena in the history of electromagnetism. The "two-fluid" theory of electricity, created by Charles François de Cisternay du Fay, postulated that electricity was the interaction between two electrical 'fluids.' An alternate simpler theory was proposed by Benjamin Franklin, called the unitary, or one-fluid, theory of electricity. This theory claimed that electricity was really one fluid, which could be present in excess, or absent from a body, thus explaining its electrical charge. Franklin's theory explained how charges could be dispelled (such as those in Leyden jars) and how they could be passed through a chain of people. The fluid theories of electricity eventually became updated to include the effects of magnetism, and electrons (upon their discovery). Fluid theories In the 1700s many physical phenomena were thought of in terms of an aether, which was a fluid that could permeate matter. This idea had been used for centuries, and was the basis of thinking about physical phenomena, such as electricity, as liquids. Other 18th century examples of fluid models are Lavoisier's caloric and the magnetic fluids of Coulomb and Aepinus. Two-fluid theory By the 18th century, one of a few theories explaining observed electrical phenomena was the two-fluid theory. This theory is generally attributed to Charles François de Cisternay du Fay. du Fay's theory suggested that electricity was composed of two liquids, which could flow through solid bodies. One liquid carried a positive charge, and the other a negative charge. When these two liquids came into contact with one another, they would produce a neutral charge. This theory dealt mainly with explaining electrical attraction and repulsion, rather than how an object could be charged or discharged. du Fay observed this while repeating an experiment created by Otto von G
https://en.wikipedia.org/wiki/Lucky%20number
In number theory, a lucky number is a natural number in a set which is generated by a certain "sieve". This sieve is similar to the Sieve of Eratosthenes that generates the primes, but it eliminates numbers based on their position in the remaining set, instead of their value (or position in the initial set of natural numbers). The term was introduced in 1956 in a paper by Gardiner, Lazarus, Metropolis and Ulam. In the same work they also suggested calling another sieve, "the sieve of Josephus Flavius" because of its similarity with the counting-out game in the Josephus problem. Lucky numbers share some properties with primes, such as asymptotic behaviour according to the prime number theorem; also, a version of Goldbach's conjecture has been extended to them. There are infinitely many lucky numbers. Twin lucky numbers and twin primes also appear to occur with similar frequency. However, if Ln denotes the n-th lucky number, and pn the n-th prime, then Ln > pn for all sufficiently large n. Because of their apparent similarities with the prime numbers, some mathematicians have suggested that some of their common properties may also be found in other sets of numbers generated by sieves of a certain unknown form, but there is little theoretical basis for this conjecture. The sieving process Begin with the list of natural numbers and remove every second number, leaving only the odd numbers; the second surviving number is 3, so next remove every third remaining number, and the next survivor after 3 is 7, so then remove every seventh remaining number. Continue removing the nth remaining numbers, where n is the next number in the list after the last surviving number. Next in this example is 9. One way that the application of the procedure differs from that of the Sieve of Eratosthenes is that for n being the number being multiplied on a specific pass, the first number eliminated on the pass is the n-th remaining number that has not yet been eliminated, as opposed to the number 2n. That is to say, the list of numbers this sieve counts through is different on each pass (for example 1, 3, 7, 9, 13, 15, 19... on the third pass), whereas in the Sieve of Eratosthenes, the sieve always counts through the entire original list (1, 2
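The sieving process described above can be expressed directly in code; the following Python sketch is an illustration (the cutoff of 100 is arbitrary) and reproduces the familiar initial lucky numbers:

def lucky_numbers(limit):
    """Return the lucky numbers up to `limit` by repeatedly deleting every
    n-th surviving number, where n is the next survivor after the last one used."""
    survivors = list(range(1, limit + 1, 2))   # first pass: remove every 2nd number
    i = 1                                      # index of the next sieving number
    while i < len(survivors) and survivors[i] <= len(survivors):
        n = survivors[i]
        # keep elements whose 1-based position is not a multiple of n
        survivors = [x for pos, x in enumerate(survivors, start=1) if pos % n != 0]
        i += 1
    return survivors

print(lucky_numbers(100))
# [1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99]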
https://en.wikipedia.org/wiki/Unrestricted%20algorithm
An unrestricted algorithm is an algorithm for the computation of a mathematical function that puts no restrictions on the range of the argument or on the precision that may be demanded in the result. The idea of such an algorithm was put forward by C. W. Clenshaw and F. W. J. Olver in a paper published in 1980. In developing algorithms for computing the values of a real-valued function g(x) of a real variable x, a "restricted" algorithm has the error that can be tolerated in the result specified in advance, and an interval of the real line is specified on which the values of the function are to be evaluated; different algorithms may have to be applied for evaluating the function outside that interval. An unrestricted algorithm envisages a situation in which a user may stipulate the value of x and also the precision required in g(x) quite arbitrarily. The algorithm should then produce an acceptable result without failure.
https://en.wikipedia.org/wiki/Synthetic%20ecosystems
Synthetic ecosystems are on-chip integrated devices where cellular cultures (individuals) and ecosystem services - such as the renewal of growth, delivery of regulatory signals as well as removal of waste - are patterned into an integrated fluidic device using principles of landscape ecology, physiology and cell signaling.
https://en.wikipedia.org/wiki/Solid%20light
Solid light, often referred to in media as "hard light" or "hard-light", is a hypothetical material, made of light in a solidified state. It has been theorized that this could exist, and experiments claim to have created solid photonic matter or molecules by inducing strong interaction between photons. Potential applications of this could include logic gates for quantum computers and room-temperature superconductor development. Experiments In theory, photons, the particles that make up forms of electromagnetic radiation like light, may be attracted in a nonlinear medium. The MIT-Harvard Center for Ultracold Atoms conducted experiments in the 2010s. Single photons were fired from weak lasers into a dense cloud of rubidium cooled to near absolute zero. The speed of light in the cloud was about 100,000 times slower than in a vacuum. Within the cloud, photons lost energy and gained mass. The conditions allowed photons to attract and bind to other photons, and exit the cloud as molecules. Reportedly, photon pairs were observed in 2013, and triplets in 2018. Fiction Solid light appears in several video game franchises, including Halo, Portal, and Overwatch. In Portal 2 sunlight is used to create "hard light bridges", which act as solid semi-transparent walkways or barriers. In Overwatch the fictional Vishkar Corporation uses solid light as a construction material. In Halo solid light is the foundation of Forerunner weapons and many of their utilitarian devices like retractable bridges. Solid holograms appear many times in Star Trek. In "Red Dwarf", the character Rimmer is a hologram who obtains a "hard light drive", allowing him to become tangible. In DC Comics' Green Lantern, the various Lantern Corps use solid light constructs. Solid light is the main superpower of the Marvel Superheroine Ms Marvel. See also Jaynes–Cummings model Macroscopic quantum self-trapping
https://en.wikipedia.org/wiki/Early%20Career%20Life%20Scientist%20Award
The ASCB Early Career Life Scientist Award is awarded by the American Society for Cell Biology to an outstanding scientist who earned their doctorate no more than 12 years earlier and who has served as an independent investigator for no more than seven years. The winner speaks at the ASCB Annual Meeting and receives a monetary prize. Awardees Source: American Society for Cell Biology 2020 James Olzmann 2019 Cigall Kadoch 2018 Sergiu Pasca 2017 Meng Wang 2016 Bo Huang; Valentina Greco 2015 Vladimir Denic 2014 Manuel Thery 2013 Douglas B. Weibel 2012 Iain Cheeseman 2012 Gia Voeltz 2011 Maxence V. Nachury 2010 Anna Kashina 2009 Martin W. Hetzer 2008 Arshad B. Desai 2007 Abby Dernburg 2006 Karsten Weis 2005 Eva Nogales 2004 No award this year 2003 Frank Gertler 2002 Kathleen Collins and Benjamin Cravatt 2001 Daphne Preuss 2000 Erin O'Shea 1999 Raymond Deshaies See also List of biology awards
https://en.wikipedia.org/wiki/Anacardic%20acids
Anacardic acids are phenolic lipids, chemical compounds found in the shell of the cashew nut (Anacardium occidentale). An acid form of urushiol, they also cause an allergic skin rash on contact, known as urushiol-induced contact dermatitis. Anacardic acid is a yellow liquid. It is partially miscible with ethanol and ether, but nearly immiscible with water. Chemically, anacardic acid is a mixture of several closely related organic compounds. Each consists of a salicylic acid substituted with an alkyl chain that has 15 or 17 carbon atoms. The alkyl group may be saturated or unsaturated; anacardic acid is a mixture of saturated and unsaturated molecules. The exact mixture depends on the species of the plant. The 15-carbon unsaturated side chain compound found in the cashew plant is lethal to Gram-positive bacteria. Used in folk medicine for tooth abscesses, it is also active against acne, some insects, tuberculosis, and MRSA. It is primarily found in foods such as cashew nuts, cashew apples, and cashew nutshell oil, but also in mangos and Pelargonium geraniums. Experimental antibacterial properties The side chain with three unsaturated bonds was the most active against Streptococcus mutans, the tooth decay bacterium, in test tube experiments. The number of unsaturated bonds was not material against Cutibacterium acnes, the acne bacterium. Eichbaum claims that a solution of one part anacardic acid to 200,000 parts water to as low as one part in 2,000,000 is lethal to Gram-positive bacteria in 15 minutes in vitro. Somewhat higher concentrations killed the tubercle bacteria of tuberculosis in 30 minutes. Heating these anacardic acids converts them to the alcohols (cardanols) with reduced activity compared to the acids. Decarboxylation, such as through heating done in most commercial oil processing, results in compounds with significantly reduced activity. It is said that the people of the Gold Coast (now Ghana) use cashew leaves and bark for a toothache. Industrial uses Anacardic acid is t
https://en.wikipedia.org/wiki/Entropy%20estimation
In various science/engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation it is useful to estimate the differential entropy of a system or process, given some observations. The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks. The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate, although the nature of the (suspected) distribution of the data may also be a factor. Histogram estimator The histogram approach uses the idea that the differential entropy of a probability distribution f for a continuous random variable X can be approximated by first approximating f with a histogram of the observations, and then finding the discrete entropy of a quantization of X with bin probabilities given by that histogram. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution, with the density in the i-th bin estimated as the fraction of observations falling in that bin divided by wi, the width of the i-th bin. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory. A method better suited for multidimensional probability density functions (pdf) is to first make a pdf estimate with some method, and then, from the pdf estimate, compute the entropy. A useful pdf estimate method is e.g. Gaussian mixture modeling (GMM), where the expectation maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdf's approximating the data pdf. Estimates based on sample-spacings If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) t
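A minimal Python sketch of the histogram estimator described above (the bin count, random seed and sample data are arbitrary choices for the illustration):

import numpy as np

def histogram_entropy(samples, bins=30):
    """Estimate differential entropy (in nats) of 1-D samples from a histogram."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    probs = counts / counts.sum()          # bin probabilities
    nonzero = probs > 0
    # discrete entropy of the quantization, corrected by the bin widths so that
    # each bin contributes -p * log(p / w), i.e. minus the log of the estimated density
    return -np.sum(probs[nonzero] * np.log(probs[nonzero] / widths[nonzero]))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(histogram_entropy(x))   # close to 0.5*log(2*pi*e) ~ 1.419 for a standard normal

As the text notes, the estimate is biased; widening or narrowing the bins trades bias against variance.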
https://en.wikipedia.org/wiki/CNGB1
Cyclic nucleotide gated channel beta 1, also known as CNGB1, is a human gene encoding an ion channel protein. See also Cyclic nucleotide-gated ion channel
https://en.wikipedia.org/wiki/Join%20and%20meet
In mathematics, specifically order theory, the join of a subset S of a partially ordered set P is the supremum (least upper bound) of S, denoted ⋁S, and similarly, the meet of S is the infimum (greatest lower bound), denoted ⋀S. In general, the join and meet of a subset of a partially ordered set need not exist. Join and meet are dual to one another with respect to order inversion. A partially ordered set in which all pairs have a join is a join-semilattice. Dually, a partially ordered set in which all pairs have a meet is a meet-semilattice. A partially ordered set that is both a join-semilattice and a meet-semilattice is a lattice. A lattice in which every subset, not just every pair, possesses a meet and a join is a complete lattice. It is also possible to define a partial lattice, in which not all pairs have a meet or join but the operations (when defined) satisfy certain axioms. The join/meet of a subset of a totally ordered set is simply the maximal/minimal element of that subset, if such an element exists. If a subset S of a partially ordered set P is also an (upward) directed set, then its join (if it exists) is called a directed join or directed supremum. Dually, if S is a downward directed set, then its meet (if it exists) is a directed meet or directed infimum. Definitions Partial order approach Let A be a set with a partial order ≤, and let x and y be two elements of A. An element m of A is called the meet (or greatest lower bound or infimum) of x and y and is denoted by x ∧ y, if the following two conditions are satisfied: m ≤ x and m ≤ y (that is, m is a lower bound of x and y). For any w in A, if w ≤ x and w ≤ y, then w ≤ m (that is, m is greater than or equal to any other lower bound of x and y). The meet need not exist, either since the pair has no lower bound at all, or since none of the lower bounds is greater than all the others. However, if there is a meet of x and y, then it is unique, since if both m and m′ are greatest lower bounds of x and y, then m ≤ m′ and m′ ≤ m, and thus m = m′. If not all pairs of elements from A have a meet, then the meet can still be seen as a partial binary operation on A. If the meet does exist the
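For a concrete example not drawn from the article: in the set of positive integers partially ordered by divisibility, every pair has a meet (the greatest common divisor) and a join (the least common multiple), so this poset is a lattice. A short Python sketch makes the two operations explicit:

from math import gcd

def meet(x, y):
    """Meet (greatest lower bound) under divisibility: the gcd."""
    return gcd(x, y)

def join(x, y):
    """Join (least upper bound) under divisibility: the lcm."""
    return x * y // gcd(x, y)

print(meet(12, 18), join(12, 18))   # 6 24: 6 divides both 12 and 18, and both divide 24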
https://en.wikipedia.org/wiki/Liquid%20vapor%20display
Liquid Vapor Display (LVD) is a type of display system and is especially considered in economical display technologies. These employ a reflective passive display principle and depend on the presence of ambient lights for their operation. The figure shows the structure of a typical LVD cell. The principles of the LVD were first described in 1973. It consists of a transparent volatile liquid encased between two glass plates and side spacers. The rear glass plate has a black background and the front glass surface in contact with the liquid is roughened, so that the liquid wets it; essentially, in its simplest form, an LVD consists of a roughened glass surface wet with a transparent volatile liquid of the same refractive index as that of the glass. The rear surface is blackened. The transparent electrode is heated by using a voltage drive, which is the basis of display function. In the OFF condition of display with no voltage applied across the transparent electrode, the viewer sees the black background through the front transparent glass electrode and the liquid. To achieve the ON condition of the display, a voltage is applied to the transparent electrode. This causes sufficient heat in electrode, which evaporates the liquid in contact with it, and a combination of vapor film and vapor bubbles is formed around the roughened glass surface. As the refractive index of vapor is approximately 1, there is a discontinuity established at the interface between the front glass plate and the liquid, this causes the incoming light to scatter before reaching the black background, thus making it a simple display device. The organic liquid selected for LVD should have the following features: 1. Refractive index close to that of the glass plate. 2. Minimum energy for vaporizing the liquid in contact with the roughened surface. The electrical heating of a thin film of liquid adjacent to the roughened surface using transparent electrodes and the applied voltage, makes it an unusu
https://en.wikipedia.org/wiki/Siege%20%28software%29
Siege is a Hypertext Transfer Protocol (HTTP) and HTTPS load testing and web server benchmarking utility developed by Jeffrey Fulmer. It was designed to let web developers measure the performance of their code under stress, to see how it will stand up to load on the internet. It is licensed under the GNU General Public License (GNU GPL) open-source software license, which means it is free to use, modify, and distribute. Siege can stress a single URL or it can read many URLs into memory and stress them simultaneously. It supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. Performance measures Performance measures include elapsed time of the test, the amount of data transferred (including headers), the response time of the server, its transaction rate, its throughput, its concurrency and the number of times it returned OK. These measures are quantified and reported at the end of each run. This is a sample of siege output: Ben: $ siege -u shemp.whoohoo.com/Admin.jsp -d1 -r10 -c25 ..Siege 2.65 2006/05/11 23:42:16 ..Preparing 25 concurrent users for battle. The server is now under siege...done Transactions: 250 hits Elapsed time: 14.67 secs Data transferred: 448,000 bytes Response time: 0.43 secs Transaction rate: 17.04 trans/sec Throughput: 30538.51 bytes/sec Concurrency: 7.38 Status code 200: 250 Successful transactions: 250 Failed transactions: 0 Siege has essentially three modes of operation: regression, internet simulation and brute force. It can read a large number of URLs from a configuration file and run through them incrementally (regression) or randomly (internet simulation). Or the user may simply pound a single URL with a runtime configuration at the command line (brute force). Platform support Siege was written on Linux and has been successfully ported to AIX, BSD, HP-UX, and Solaris. It compiles on most UNIX System V variants and on most newer BSD systems.
https://en.wikipedia.org/wiki/Critical%20dimension
In the renormalization group analysis of phase transitions in physics, a critical dimension is the dimensionality of space at which the character of the phase transition changes. Below the lower critical dimension there is no phase transition. Above the upper critical dimension the critical exponents of the theory become the same as those in mean field theory. An elegant criterion to obtain the critical dimension within mean field theory is due to V. Ginzburg. Since the renormalization group sets up a relation between a phase transition and a quantum field theory, this has implications for the latter and for our larger understanding of renormalization in general. Above the upper critical dimension, the quantum field theory which belongs to the model of the phase transition is a free field theory. Below the lower critical dimension, there is no field theory corresponding to the model. In the context of string theory the meaning is more restricted: the critical dimension is the dimension at which string theory is consistent assuming a constant dilaton background without additional confounding permutations from background radiation effects. The precise number may be determined by the required cancellation of conformal anomaly on the worldsheet; it is 26 for the bosonic string theory and 10 for superstring theory. Upper critical dimension in field theory Determining the upper critical dimension of a field theory is a matter of linear algebra. It is worthwhile to formalize the procedure because it yields the lowest-order approximation for scaling and essential input for the renormalization group. It also reveals conditions to have a critical model in the first place. A Lagrangian may be written as a sum of terms, each consisting of an integral over a monomial of coordinates x and fields φ(x). Examples are the standard φ⁴-model and the isotropic Lifshitz tricritical point. This simple structure may be compatible with a scale
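As a sketch of the power counting just described (standard material, shown here only for the φ⁴ example and not quoted from the article): assigning lengths dimension −1, requiring both the kinetic term and the interaction term of the action to be dimensionless fixes the scaling dimension of the field and of the coupling u, and the coupling becomes marginal exactly at the upper critical dimension.

\left[\int d^d x\, (\nabla\varphi)^2\right] = 0 \;\Longrightarrow\; [\varphi] = \tfrac{d-2}{2},
\qquad
\left[\int d^d x\, u\, \varphi^4\right] = 0 \;\Longrightarrow\; [u] = d - 4[\varphi] = 4 - d .

The coupling u is dimensionless (marginal) at d = 4, which is the upper critical dimension of the φ⁴ model; below d = 4 it is relevant and mean field exponents no longer apply.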
https://en.wikipedia.org/wiki/Peter%20Scheiber
Peter Scheiber was a classically trained musician and audio engineer. He was considered to be the originator of multichannel matrix audio formats, which use a mathematical formula to convert four audio channels into two and back again. Scheiber was also the inventor of the 360-degree spatial decoder. Like Lou Dorren, Scheiber was an early pioneer of multi-channel sound. It has been written that Scheiber pioneered the surround sound technology that is used in theaters today and referred to as Dolby Surround. In matrix quadraphonic systems four channels are converted (encoded) down to two channels. These two matrixed channels are recorded onto tape or vinyl record. Reproduction occurs via a two-channel stereo transmission medium – in most cases a vinyl record – after which the channels are decoded back to four and reproduced via four loudspeakers. Musician Scheiber, an Oberlin College music graduate, obtained a full scholarship to study with the first-chair players of the Boston Symphony at Tanglewood. He was 22 years of age when he got to study with the Chicago Symphony's first bassoonist. He also played first-chair in the Chicago Chamber Orchestra. During his professional career, he played with the Ottawa Philharmonic and Dallas Symphony orchestras. Around 1977 his bassoon was stolen from the trunk of his car and, according to a May 2007 article in Indianapolis Monthly, he never replaced it. Later, when called on to play, there would be reasons not to, such as a missing reed or missing music. Audio career Peter Scheiber was born in Croton-on-Hudson in New York in 1935. He grew up in Peekskill. From an early age, passionate about music and technology, he had a workbench in his bedroom for experimenting with his gadgets. He later earned a scholarship at Tanglewood Music Center and played with the Chicago Symphony. Later, as a professional, he was a member of orchestras in Ottawa and Texas. In 1967 Scheiber, then a 32-year-old bassoonist, came up with the idea of encoding fou
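A purely generic sketch of the 4-2-4 matrix principle described above; the mixing coefficients below are invented for illustration and are not Scheiber's actual encoding matrix. Four channels are mixed down to two by a 2x4 matrix, and a 4x2 decode matrix (here the pseudoinverse) recovers an approximation, which also shows why a rank-2 medium cannot carry four fully independent channels.

import numpy as np

encode = np.array([
    [1.0, 0.0, 0.707,  0.707],   # left-total  = LF + 0.707*LB + 0.707*RB (illustrative only)
    [0.0, 1.0, 0.707, -0.707],   # right-total = RF + 0.707*LB - 0.707*RB (illustrative only)
])
decode = np.linalg.pinv(encode)          # 4x2 decode matrix

quad = np.array([1.0, 0.2, 0.5, 0.0])    # sample LF, RF, LB, RB levels
stereo = encode @ quad                   # the two matrixed channels (e.g. the record groove)
recovered = decode @ stereo              # decoded back to four channels

print(stereo)
print(recovered)  # only approximate: separation between the four channels is limited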
https://en.wikipedia.org/wiki/FMRFamide%20in%20Biomphalaria%20glabrata
FMRFamide, a neuropeptide involved in cardiac activity regulation, is found in Biomphalaria glabrata, a species of a freshwater snail best known for its role as the intermediate host for the human-infecting trematode parasite Schistosoma mansoni. This freshwater snail species is used as a model organism, in other words, a non-human species which is extensively studied to understand a biological phenomenon, with the expectation that discoveries made in the model will provide insight into the workings of other organisms. Model organisms are in vivo models and are widely used to research human disease when human experimentation would be unfeasible or unethical. Relevance This snail has been studied in relation to human pathology and the epidemiology of schistosomiasis. S. mansoni is known to change its host’s (B. glabrata’s) behavior via the upregulation/downregulation of neuropeptides such as schistosomin and NPY, and some studies have reported that FMRFamide is aminergic, and may be implicated in the secretion of molecules to respond to infection with parasites. The ganglionic central nervous system (CNS) of B. glabrata consists of paired cerebral, pedal, pleural, parietal, and buccal ganglia, and one unpaired visceral ganglion. FMRFamide is concentrated in the cerebral and visceral ganglia, although evidence from current research suggests that FMRFamide moves downward of the head-foot region of the snail as embryonic development proceeds. The exact role of FMRFamide during early development of the embryonic central nervous system is not well studied. Detection of this neuropeptide is important because its expression lays down the foundation of the CNS in the early stages of development in invertebrates. In recent years, neuromodulatory actions of FMRFamide in invertebrates have become more apparent. This is in part due to the extensive studies done on the Planorbidae and Lymnaeidae families of pond snails. FMRFamide expression in B. glabra
https://en.wikipedia.org/wiki/Iterative%20and%20incremental%20development
Iterative and incremental development is any combination of both iterative design or iterative method and incremental build model for development. Usage of the term began in software development, with a long-standing combination of the two terms iterative and incremental having been widely suggested for large development efforts. For example, the 1985 DOD-STD-2167 mentions (in section 4.1.2): "During software development, more than one iteration of the software development cycle may be in progress at the same time." and "This process may be described as an 'evolutionary acquisition' or 'incremental build' approach." In software, the relationship between iterations and increments is determined by the overall software development process. Overview The basic idea behind this method is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing software developers to take advantage of what was learned during development of earlier parts or versions of the system. Learning comes from both the development and use of the system, where possible key steps in the process start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added. The procedure itself consists of the initialization step, the iteration step, and the Project Control List. The initialization step creates a base version of the system. The goal for this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes items such as new features to be implemented and areas of redesi
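The procedure outlined above can be read as a loop over a task list. The following Python sketch is purely schematic; every function and data value is invented for illustration and is not part of any real framework or standard.

# Schematic sketch of initialization step, iteration step, and project control list.
def build_initial_version(core):
    return {"features": list(core)}            # initialization: a simple base version the user can react to

def design_and_implement(system, feature):
    system["features"].append(feature)         # iteration step: modify the design, add one capability
    return system

def analyse_feedback(system):
    return []                                  # learning from development and use could add new tasks here

def iterative_incremental_development(requirements, max_iterations=10):
    system = build_initial_version(requirements[:1])
    control_list = list(requirements[1:])      # project control list: record of tasks still to be performed
    iteration = 0
    while control_list and iteration < max_iterations:
        task = control_list.pop(0)
        system = design_and_implement(system, task)
        control_list.extend(analyse_feedback(system))
        iteration += 1
    return system

print(iterative_incremental_development(["login", "search", "reports"]))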
https://en.wikipedia.org/wiki/Air%20core%20gauge
An air core gauge is a specific type of rotary actuator in an analog display gauge that allows an indicator to rotate a full 360 degrees. It is used in gauges and displays, most commonly automotive instrument clusters. The air core gauge is a type of "air-core motor". It may be considered a "gauge movement" or "pointer indication device". Background There are four common types of rotary actuators: Physical gauges, in which the needle is attached directly to the value being measured; for example, a mechanical pressure gauge Analog volt meters or d'Arsonval movements, which consist of a coil and a permanent magnet Stepper motors, which move in one-notch increments or steps Air-core motors, as described below. Construction and operation The air core gauge consists of two independent, perpendicular coils surrounding a hollow chamber. A needle shaft protrudes into the chamber, where a permanent magnet is affixed to the shaft. When current flows through the perpendicular coils, their magnetic fields superimpose and the magnet is free to align with the combined fields. A typical air core gauge has four terminals, two for each coil. The two coils are identified as the sine coil and the cosine coil. Theory The direction of the overall magnetic field is approximately θ = arctan(I_sin / I_cos), where I_sin and I_cos are the coils' sine and cosine currents respectively. The permanent magnet aligns itself with that field, eventually settling near θ. In this way, by proportioning the current through each coil, the needle can reach all 360° of rotation. Example If the sin coil current is 29 mA and the cos current is 50 mA: The coil current ratio is 0.58, and arctan 0.58 ≈ 30 degrees. Drivers Air core gauges require special electronics to properly drive the coils. Some driver integrated circuits have a serial input data port and two pairs of output lines. One pair of the output lines drives the sin coil and one pair drive
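A small numerical sketch of the relationship just described, assuming ideal coils and ignoring pointer damping; the function name is illustrative. Using atan2 rather than a plain arctangent keeps the sign of each current and so covers the full 360-degree range.

import math

def needle_angle_degrees(i_sin_mA, i_cos_mA):
    # needle settles approximately at arctan(I_sin / I_cos); atan2 preserves the quadrant
    return math.degrees(math.atan2(i_sin_mA, i_cos_mA)) % 360

print(needle_angle_degrees(29, 50))    # ≈ 30.1 degrees, matching the worked example above
print(needle_angle_degrees(-29, -50))  # ≈ 210 degrees: reversing both currents points the needle the opposite way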
https://en.wikipedia.org/wiki/B-Dienst
The B-Dienst (, observation service), also called xB-Dienst, X-B-Dienst and χB-Dienst, was a Department of the German Naval Intelligence Service (, MND III) of the OKM, that dealt with the interception and recording, decoding and analysis of the enemy, in particular British radio communications before and during World War II. B-Dienst worked on cryptanalysis and deciphering (decrypting) of enemy and neutral states' message traffic and security control of Kriegsmarine key processes and machinery. "The ultimate goal of all evaluation was recognizing the opponent's goal by pro-active identification of data." B-Dienst was instrumental in moulding Wehrmacht operations during the Battles of Norway and France in spring 1940, primarily due to the cryptanalysis successes it had achieved against early and less secure British Naval ciphers. B-Dienst broke British Naval Combined Cypher No. 3 in October 1941, which was used to encrypt all communications between naval personnel, for Allied North Atlantic convoys. This enabled B-Dienst to provide valuable signals intelligence for the German Navy in the Battle of the Atlantic. The intelligence flow largely ended when the Admiralty introduced Naval Cipher No. 5 on 10 June 1943. The new cipher became secure in January 1944 with the introduction of the Stencil Subtractor system which was used to recipher it. Background The B-Dienst unit began as the German Radio Monitoring Service, or educational and news analysis service () by the end of World War I, in 1918, as part of the navy of the German Empire. A counterpart to the B service on the British side was the Y-service or Y Service. The Y was onomatopoeic for the initial syllable of the word wireless, similar to the B initial for the German service. Little was known outside about the internal organization and workings of the B-Dienst section. After the armistice of Italy (Armistice of Cassibile), officers of the Italian naval communications intelligence (SIM, ) in conversation
https://en.wikipedia.org/wiki/Whole%20Building%20Design%20Guide
The Whole Building Design Guide or WBDG is a United States guidance resource, described by the Federal Energy Management Program as "a complete internet resource to a wide range of building-related design guidance, criteria and technology", and meets the requirements in guidance documents for Executive Order 13123. The WBDG is based on the premise that to create a successful high-performance building, one must apply an integrated design and team approach in all phases of a project, including planning, design, construction, operations and maintenance. The WBDG is managed by the National Institute of Building Sciences. History The WBDG was initially designed to serve U.S. Department of Defense (DOD) construction programs. A 2003 DOD memorandum named WBDG the “sole portal to design and construction criteria produced by the U.S. Army Corps of Engineers (USACE), Naval Facilities Engineering Command (NAVFAC), and U.S. Air Force.” Since then, WBDG has expanded to serve all building industry professionals. The majority of its 500,000 monthly users are from the private sector. The WBDG draws information from the Construction Criteria Base and a privately owned database run by Information Handling Services. A significant amount of the Whole Building Design Guide content is organized into three categories: Design Guidance, Project Management, and Operations and Maintenance. It is structured to give WBDG visitors a broad understanding first, then increasingly specific information targeted towards building industry professionals. The WBDG is the resource that federal agencies look to for policy and technical guidance on Federal High Performance and Sustainable Buildings. In addition, the WBDG contains online tools, the original Construction Criteria Base, Building Information Modeling guides and libraries, a database of select case studies, federal mandates and other resources. The WBDG also provides over 70 online continuing education courses for architects and other build
https://en.wikipedia.org/wiki/Citrate%20test
The citrate test detects the ability of an organism to use citrate as the sole source of carbon and energy. Principle Bacteria are inoculated on a medium containing sodium citrate and a pH indicator such as bromothymol blue. The medium also contains inorganic ammonium salts, which are utilized as the sole source of nitrogen. Use of citrate involves the enzyme citrate lyase, which breaks down citrate to oxaloacetate and acetate. Oxaloacetate is further broken down to pyruvate and carbon dioxide (CO2). Production of sodium bicarbonate (NaHCO3) as well as ammonia (NH3) from the use of sodium citrate and ammonium salts results in an alkaline pH. This results in a change of the medium's colour from green (neutral) to blue (alkaline). Bacterial colonies are picked up with a straight wire and inoculated onto a slope of Simmons citrate agar and incubated overnight at 37 °C. Inoculating from a broth culture is not recommended because the inoculum would be too heavy. If the organism has the ability to use citrate, the medium usually changes its colour from green to blue, though growth on the medium even without colour change is considered a positive result. An observation of no growth is a negative result. Examples: Escherichia coli: Negative Klebsiella pneumoniae: Positive Frateuria aurantia: Positive
https://en.wikipedia.org/wiki/Birbeck%20granules
Birbeck granules, also known as Birbeck bodies, are rod shaped or "tennis-racket" cytoplasmic organelles with a central linear density and a striated appearance. First described in 1961 (where they were simply termed "characteristic granules"), they are solely found in Langerhans cells. Although part of normal Langerhans cell histology, they also provide a mechanism to differentiate Langerhans cell histiocytoses (which are a group of rare conditions collectively known as histiocytoses) from proliferative disorders caused by other cell lines. Formation is induced by langerin. Function The function of Birbeck granules is debated, but one theory is that they migrate to the periphery of the Langerhans cells and release their contents into the extracellular matrix. Another theory is that the Birbeck granule functions in receptor-mediated endocytosis, similar to clathrin-coated pits. History Birbeck granules were discovered by Michael Stanley Clive Birbeck (1925–2005), a British scientist and electron microscopist who worked at the Chester Beatty Cancer Research Institute or Institute of Cancer Research, London from 1950 until 1981.
https://en.wikipedia.org/wiki/Marginal%20structural%20model
Marginal structural models are a class of statistical models used for causal inference in epidemiology. Such models handle the issue of time-dependent confounding in the evaluation of the efficacy of interventions: by inverse probability weighting for receipt of treatment, they allow estimation of average causal effects. For instance, in a study of the effect of zidovudine on AIDS-related mortality, the CD4 lymphocyte count is used for treatment indication, is influenced by treatment, and affects survival. Time-dependent confounders are typically highly prognostic of health outcomes and applied in dosing or indication for certain therapies, such as body weight or lab values such as alanine aminotransferase or bilirubin. The first marginal structural models were introduced in 2000. The works of James Robins, Babette Brumback, and Miguel Hernán provided an intuitive theory and easy-to-implement software which made them popular for the analysis of longitudinal data.
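An illustrative sketch, not from the article, of inverse-probability-of-treatment weighting at a single time point; real marginal structural model analyses compute such weights across repeated visits and multiply them over time. The data, column names and treatment model below are invented for illustration only.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
cd4 = rng.normal(350, 100, n)                                 # time-dependent confounder (e.g. CD4 count)
treat = rng.binomial(1, 1 / (1 + np.exp((cd4 - 350) / 50)))   # sicker patients are treated more often
df = pd.DataFrame({"cd4": cd4, "treated": treat})

# Denominator model: probability of the received treatment given the confounder
p_denom = LogisticRegression().fit(df[["cd4"]], df["treated"]).predict_proba(df[["cd4"]])[:, 1]
# Numerator for stabilized weights: marginal probability of treatment
p_num = df["treated"].mean()

prob_received = np.where(df["treated"] == 1, p_denom, 1 - p_denom)
prob_marginal = np.where(df["treated"] == 1, p_num, 1 - p_num)
df["sw"] = prob_marginal / prob_received   # stabilized weights, later used in a weighted outcome model

print(df["sw"].describe())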
https://en.wikipedia.org/wiki/Delegated%20Path%20Validation
Delegated Path Validation (DPV) is a method for offloading to a trusted server the work involved in validating a public key certificate. Combining certificate information supplied by the DPV client with certificate path and revocation status information obtained by itself, a DPV server is able to apply complex validation policies that are prohibitive for each client to perform. The requirements for DPV are described in RFC 3379. See also Delegated Path Discovery Cryptographic protocols
https://en.wikipedia.org/wiki/Extrafarma
Extrafarma is a drugstore chain owned by Ultrapar. The company is among the top 10 largest pharmacy chains in Brazil, with stores located throughout the north, northeast and southern regions of the country. The company has more than 400 stores in 10 states and more than 7,000 employees. History Pedro de Castro Lazera founded the company Imifarma on 2 December 1960. It was initially focused on the drug distribution market. In the 1990s, Imifarma started to operate in the retail market through its own network of pharmacies under the name Extrafarma. The store chain began in the city of Belém and expanded to other areas in the state of Pará and in the neighboring state of Amapá. Later, it expanded to the states of Maranhão, Ceará, Piauí and Rio Grande do Norte. In 2013, Extrafarma was acquired by Ultra. With the acquisition, Ultra entered the pharmaceutical retail industry and made Extrafarma its third distribution business and specialty retail chain, along with Ipiranga and Ultragaz. The acquisition also meant that Extrafarma would expand and open new pharmacy stores inside Ipiranga gas stations and Ultragaz resellers. Awards ADVB-PA Award – Association of Sales and Marketing Managers from Brazil: Top Environmental Company Award, 2012.
https://en.wikipedia.org/wiki/Netsh
In computing, netsh, or network shell, is a command-line utility included in Microsoft's Windows NT line of operating systems beginning with Windows 2000. It allows local or remote configuration of network devices such as the interface. Overview A common use of netsh is to reset the TCP/IP stack to default, known-good parameters, a task that in Windows 98 required reinstallation of the TCP/IP adapter. netsh, among many other things, also allows the user to change the IP address on their machine. Starting from Windows Vista, one can also edit wireless settings (for example, SSID) using netsh. netsh can also be used to read information from the IPv6 stack. The command netsh winsock reset can be used to reset TCP/IP problems when communicating with a networked device.
https://en.wikipedia.org/wiki/Aspergillus%20nidulans
Aspergillus nidulans (also called Emericella nidulans when referring to its sexual form, or teleomorph) is one of many species of filamentous fungi in the phylum Ascomycota. It has been an important research organism for studying eukaryotic cell biology for over 50 years, being used to study a wide range of subjects including recombination, DNA repair, mutation, cell cycle control, tubulin, chromatin, nucleokinesis, pathogenesis, metabolism, and experimental evolution. It is one of the few species in its genus able to form sexual spores through meiosis, allowing crossing of strains in the laboratory. A. nidulans is a homothallic fungus, meaning it is able to self-fertilize and form fruiting bodies in the absence of a mating partner. It has septate hyphae with a woolly colony texture and white mycelia. The green colour of wild-type colonies is due to pigmentation of the spores, while mutations in the pigmentation pathway can produce other spore colours. Genome The A. nidulans genome was sequenced in a collaboration between Monsanto and the Broad Institute. A sequence with 13-fold coverage was publicly released in March 2003; analysis of the annotated genome was published in Nature in December 2005. It is 30 million base pairs in size and is predicted to contain around 9,500 protein-coding genes on eight chromosomes. Recently, several caspase-like proteases were isolated from A. nidulans samples under which programmed cell death had been induced. Findings such as these play a key role in determining the evolutionary conservation of the mitochondrion within the eukaryotic cell, and its role as an ancient alphaproteobacterium capable of inducing cell death. Sexual reproduction Sexual reproduction occurs in two fundamentally different ways. This is by outcrossing (heterothallic sex), in which two distinct individuals contribute nuclei, or by homothallic sex or self-fertilization (selfing) in which both nuclei are derived from the same individual. Selfing in A. ni
https://en.wikipedia.org/wiki/Translational%20Health%20Science%20and%20Technology%20Institute
Translational Health Science and Technology Institute (THSTI) is a society registered under the Societies Registration Act XXI of 1860. It is also an autonomous institute of the Department of Biotechnology, Ministry of Science and Technology, Government of India. It was set up in 2009 at Gurgaon and is now located in NCR Biotech Science Cluster, Faridabad along with the Regional Center for Biotechnology, Advanced Technology Platforms Center, Small Animal Facility, and Bio-incubator. Envisioned by former secretary of DBT, M. K. Bhan, the centre was created to enable faster transition of lab research to market. Pramod Garg is the executive director of THSTI. THSTI has six intramural centers, namely Vaccine & Infectious Disease Research Centre (VIDRC), Pediatric Biology Centre (PBC), Centre for Bio-design & Diagnostics (CBD), Centre for Human Microbial Ecology (CHME), Policy Centre for Biomedical Research (PCBR), and Drug Discovery Research Centre (DDRC). Vaccine & Infectious Disease Research Centre (VIDRC) is engaged in development of technologies pertaining to prophylaxis, treatment and diagnosis of infections caused by JEV, DENV, HIV, Rotavirus, Mycobacterium tuberculosis, HEV. In 2009, HIV Vaccine Translational Research (HVTR) laboratory was established in collaboration with International AIDS Vaccine Initiative, USA for developing efficient immunogens to be used in immunogenic composition against HIV. The laboratory works in collaboration with the US-based Scripps Research Institute, New York-based Weill Cornell Medical College, Amsterdam-based Academic Medical Center, and Johannesburg-based National Institute of Communicable Diseases. In collaboration with Department of Biotechnology, Bharat Biotech International Limited, PATH and CHRD-SAS, VIDRC was also engaged in the phase III randomized, double-blind placebo controlled trial to evaluate the non-interference in the immune response of three doses of ORV 116E (Rotavac) to antigens contained in childhood vaccines
https://en.wikipedia.org/wiki/Rhizosoleniaceae
Rhizosoleniaceae is a family of diatoms belonging to the order Rhizosoleniales. Genera: Calyptrella Castillo, 1996 Dactyliosolen A.F.Castracane, 1886 Guinardia H.Peragallo, 1892 Henseniella F.Schütt ex G.B.De Toni, 1894 Neocalyprella Hernàndez-Becerril Neocalyptrella Castillo, 1997 Proboscia B.G.Sundstrom, 1986 Pseudosolenia B.G.Sundstrom, 1986 Rhizosolenia T.Brightwell, 1858 Urosolenia F.E.Round & R.M.Crawford, 1990
https://en.wikipedia.org/wiki/United%20States%20Army%20Combat%20Capabilities%20Development%20Command
The Combat Capabilities Development Command (DEVCOM, also known as CCDC; formerly the United States Army Research, Development, and Engineering Command (RDECOM)) is a subordinate command of the U.S. Army Futures Command. RDECOM was tasked with "creating, integrating, and delivering technology-enabled solutions" to the U.S. Army. It is headquartered at Aberdeen Proving Ground in Maryland. Role and organization CCDC formerly described its role as "the Army's enabling command in the development and delivery of capabilities that empower, unburden and protect the Warfighter." It conducts and sponsors scientific research in areas important to the Army, develops scientific discoveries into new technologies, engineers technologies into new equipment and capabilities, and works with the U.S. Army Training and Doctrine Command to help requirements writers define the future needs of the Army. CCDC is headquartered at Aberdeen Proving Ground. Before 1 November 2019, Major-General Cedric T. Wins was the commanding general, assisted by Brigadier-General Vincent F. Malone as deputy commanding general and Command Sergeant-Major Jon R. Stanley as command sergeant major. They oversaw one laboratory and six major centers: US Army CCDC Army Research Laboratory (CCDC ARL) – formerly Army Research Laboratory US Army CCDC Chemical Biological Center (CCDC CBC) – formerly Edgewood Chemical Biological Center US Army CCDC Soldier Center (CCDC SC) – formerly Natick Soldier Research, Development and Engineering Center US Army CCDC Ground Vehicle System Center (CCDC GVSC) – formerly Tank Automotive Research, Development and Engineering Center US Army CCDC Aviation & Missile Center (CCDC AvMC) – formerly Aviation and Missile Research, Development and Engineering Center US Army CCDC Armaments Center (CCDC AC) – formerly Army Armaments Research, Development and Engineering Center US Army CCDC C5ISR Center (CCDC C5ISRC) – formerly Communications-Electronics Research, Development and Engineering Ce
https://en.wikipedia.org/wiki/Canonical%20ensemble
In statistical mechanics, a canonical ensemble is the statistical ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at a fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ in total energy. The principal thermodynamic variable of the canonical ensemble, determining the probability distribution of states, is the absolute temperature (symbol: T). The ensemble typically also depends on mechanical variables such as the number of particles in the system (symbol: N) and the system's volume (symbol: V), each of which influence the nature of the system's internal states. An ensemble with these three parameters is sometimes called the NVT ensemble. The canonical ensemble assigns a probability P to each distinct microstate given by the following exponential: P = e^((F − E)/(k T)), where E is the total energy of the microstate and k is the Boltzmann constant. The number F is the free energy (specifically, the Helmholtz free energy) and is a constant for the ensemble. However, the probabilities and F will vary if different N, V, T are selected. The free energy F serves two roles: first, it provides a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); second, many important ensemble averages can be directly calculated from the function F(N, V, T). An alternative but equivalent formulation for the same concept writes the probability as P = e^(−E/(k T))/Z, using the canonical partition function Z rather than the free energy. The equations below (in terms of free energy) may be restated in terms of the canonical partition function by simple mathematical manipulations. Historically, the canonical ensemble was first described by Boltzmann (who called it a holode) in 1884 in a relatively unknown paper. It was later reformulated and extensively investigated by Gibbs in 1902. Applicability of canonical ensemble The canonical ensemble is the
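For reference, a short sketch (standard textbook material, not quoted from the article) of how the free-energy and partition-function formulations are related through the normalization condition:

P = e^{(F - E)/(k T)}, \qquad
\sum_{\text{microstates}} P = 1
\;\Longrightarrow\;
e^{-F/(k T)} = \sum_{\text{microstates}} e^{-E/(k T)} \equiv Z(N, V, T),
\qquad
P = \frac{e^{-E/(k T)}}{Z}, \quad F = -k T \ln Z .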
https://en.wikipedia.org/wiki/List%20of%20text%20editors
The following is a list of notable text editors. Graphical and text user interface The following editors can either be used with a graphical user interface or a text user interface. Graphical user interface Text user interface System default Others vi clones Sources: No user interface (editor libraries/toolkits) ASCII and ANSI art Editors that are specifically designed for the creation of ASCII and ANSI text art. ACiDDraw – designed for editing ASCII text art. Supports ANSI color (ANSI X3.64) JavE – ASCII editor, portable to any platform running a Java GUI PabloDraw – ANSI/ASCII editor allowing multiple users to edit via TCP/IP network connections TheDraw – ANSI/ASCII text editor for DOS and PCBoard file format support ASCII font editors FIGlet – for creating ASCII art text TheDraw – DOS ANSI/ASCII text editor with built-in editor and manager of ASCII fonts PabloDraw – .NET text editor designed for creating ANSI and ASCII art Historical Visual and full-screen editors Line editors See also Comparison of text editors Editor war Line editor List of HTML editors List of word processors Outliner, a specialized type of word processor Source code editor Notes Text editors
https://en.wikipedia.org/wiki/Stuttgart%20Database%20of%20Scientific%20Illustrators%201450%E2%80%931950
The Stuttgart Database of Scientific Illustrators 1450–1950 (abbreviated DSI) is an online repository of bibliographic data about people who illustrated published scientific works from the time of the invention of the printing press, around 1450, until 1950; the latter cut-off chosen with the intention of excluding currently-active illustrators. The database includes those who worked in a variety of fields, including anatomical, astronomical, botanical, zoological and medical illustration. The database is hosted by the University of Stuttgart. Content is displayed in English, and is free to access. As of October 2023, the database includes over 13,000 illustrators. The site is searchable by 20 fields. Suggestions for additional entries, or amendments, may be submitted by members of the public, but are subject to editorial review before inclusion.
https://en.wikipedia.org/wiki/Clascal
Clascal is an object-oriented programming language (and associated discontinued compiler) developed in 1983 by the Personal Office Systems (POS) division (later renamed The Lisa Division, then later The 32-Bit Systems Division) of Apple Computer. Clascal was used to program applications for the Lisa Office System, the operating environment of the Lisa. Developed as an extension of Lisa Pascal, which in turn harked back to the UCSD Pascal model originally implemented on the Apple II, the language was strongly influenced by the Xerox Palo Alto Research Center (PARC) release of Smalltalk-80, v1 (which had been formerly ported to the Lisa), and by Modula. According to Larry Tesler, Clascal was developed as a replacement for Apple's version of Smalltalk, which was "too slow" and because the experience offered by the Smalltalk syntax was too unfamiliar for most people. Clascal was the basis for Object Pascal on the Apple Macintosh in 1985. With the demise of the Lisa in 1986, Pascal and Object Pascal continued to be used in the Macintosh Programmer's Workshop for systems and application development for several more years, until it was finally supplanted by the languages C and C++. The MacApp application framework was based on Toolkit originally written in Clascal. Object Pascal, in turn, served as the basis for Borland's Delphi.
https://en.wikipedia.org/wiki/Efficient%20approximately%20fair%20item%20allocation
When allocating objects among people with different preferences, two major goals are Pareto efficiency and fairness. Since the objects are indivisible, there may not exist any fair allocation. For example, when there is a single house and two people, every allocation of the house will be unfair to one person. Therefore, several common approximations have been studied, such as maximin-share fairness (MMS), envy-freeness up to one item (EF1), proportionality up to one item (PROP1), and equitability up to one item (EQ1). The problem of efficient approximately fair item allocation is to find an allocation that is both Pareto-efficient (PE) and satisfies one of these fairness notions. The problem was first presented in 2016 and has attracted considerable attention since then. Setting There is a finite set of objects, denoted by M. There are n agents. Each agent i has a value-function Vi, that assigns a value to each subset of objects. The goal is to partition M into n subsets, X1,...,Xn, and give each subset Xi to agent i, such that the allocation is both Pareto-efficient and approximately fair. There are various notions of approximate fairness. Efficient approximately envy-free allocation An allocation is called envy-free (EF) if every agent believes that the value of his/her share is at least as high as that of any other agent. It is called envy-free up to one item (EF1) if, for every two agents i and j, if at most one item is removed from the bundle of j, then i does not envy j. Formally: for every two agents i and j, there exists an item g in Xj such that Vi(Xi) ≥ Vi(Xj \ {g}). Some early algorithms could find an approximately fair allocation that satisfies a weak form of efficiency, but not PE. The round-robin procedure returns a complete EF1 allocation with additive utilities. The envy-graph procedure returns a complete EF1 allocation for arbitrary monotone preference relations. Both are guaranteed to return an allocation with no envy-cycles. However, the allocation is not guaranteed to be Pareto-efficient. The Approximate-CEEI mechanism ret
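A small sketch of the round-robin procedure mentioned above, for agents with additive utilities; the agent names, item names and values are invented for illustration. Each agent, in a fixed order, picks their most valuable remaining item, which yields a complete EF1 allocation for additive utilities.

def round_robin(items, values, order):
    # values[agent][item] is the agent's additive utility for that item
    remaining = set(items)
    bundles = {agent: [] for agent in order}
    turn = 0
    while remaining:
        agent = order[turn % len(order)]
        best = max(remaining, key=lambda g: values[agent][g])  # best remaining item for this agent
        bundles[agent].append(best)
        remaining.remove(best)
        turn += 1
    return bundles

values = {
    "Alice": {"a": 5, "b": 3, "c": 2, "d": 1},
    "Bob":   {"a": 4, "b": 4, "c": 3, "d": 2},
}
print(round_robin(["a", "b", "c", "d"], values, ["Alice", "Bob"]))
# {'Alice': ['a', 'c'], 'Bob': ['b', 'd']} -- EF1: removing 'a' from Alice's bundle removes Bob's envy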
https://en.wikipedia.org/wiki/Mechanome
The mechanome consists of the body, or ome, of data including cell and molecular processes relating to force and mechanical systems at molecular, cellular and tissue length scales - the fundamental "machine code" structures of the cell. The mechanome encompasses biological motors, like kinesin, myosin, RNAP, and the ribosome; mechanical structures, like actin or the cytoskeleton; and also proteomic and genomic components that are mechanosensitive and are involved in the response of cells to externally applied force. A definition of the "Mechanome" extending to cell/organ/body, given by Prof. Roger Kamm at the 5th World Congress of Biomechanics in Munich, includes understanding: The complete state of stress existing from tissues to cells to molecules. The biological state that results from the distribution of forces. Requires knowledge of the distribution of force throughout the cell/organ/body, the functional interactions between these stresses and the fundamental biological processes. The mechanome seeks to understand the fundamental physical-mechanical processes and events that affect biological function. An example at the molecular level includes the common structural designs used by kinesin and myosin motor proteins (such as dimer formation and mechanochemical cycles) that control their function and lead to properties such as processivity. The mechanome assembles the common features of these motors regardless of the "track" (microtubules, actin filaments, nucleotide based structures, membranes) they move on. A cytoskeletal example includes structures such as actin filament networks and bundles that can form from a variety of actin binding proteins that cross-link or bundle actin filaments leading to common mechanical changes of these structures. A cell machinery example includes common structures such as contractile ring formation formed by both actin and tubulin type structures leading to the same mechanical result of cell division. In order to respond to loading ce
https://en.wikipedia.org/wiki/Advertising%20in%20biology
Advertising in biology means the use of displays by organisms such as animals and plants to signal their presence for some evolutionary reason. Such signalling may be honest, used to attract other organisms, as when flowers use bright colours, patterns, and scent to attract pollinators such as bees; or, again honestly, to warn off other organisms, as when distasteful animals use warning coloration to prevent attacks from potential predators. Such honest advertising benefits both the sender and the receiver. Other organisms may advertise dishonestly; in Batesian mimicry, edible animals more or less accurately mimic distasteful animals to reduce their own risk of being attacked by predators. In plants Insect-pollinated flowers use bright colours, patterns, rewards of nectar and pollen, and scent to attract pollinators such as bees. Some also use drugs such as caffeine to encourage bees to return more often. Advertising is influenced by sexual selection: in dioecious plants like sallow, the male flowers are brighter yellow (the colour of their pollen) and have more scent than female flowers. Honey bees are more attracted by the brighter male flowers, but not by their scent. Many flowers that are adapted for pollination by birds produce copious quantities of nectar and advertise this with their red coloration. Insects see red less well than other colours, and the plant needs to devote its energy to attracting birds that can act as pollinators rather than insects that cannot. In fact, the Canary Island endemic Echium wildpretii has two subspecies, a red-flowered one on Tenerife which is mainly pollinated by birds, and a pink-flowered one on La Palma which is pollinated by insects. In animals Advertising takes a variety of forms in animals. Breeding adults often display to attract a mate. Breeding males of sexually dimorphic birds, such as peacocks, birds of paradise and bower birds, have elaborate plumage, song, and behaviour. These evolved through sexual sele
https://en.wikipedia.org/wiki/Hilbert%20spectrum
The Hilbert spectrum (sometimes referred to as the Hilbert amplitude spectrum), named after David Hilbert, is a statistical tool that can help in distinguishing among a mixture of moving signals. The spectrum itself is decomposed into its component sources using independent component analysis. The separation of the combined effects of unidentified sources (blind signal separation) has applications in climatology, seismology, and biomedical imaging. Conceptual summary The Hilbert spectrum is computed by way of a 2-step process consisting of: Preprocessing a signal to separate it into intrinsic mode functions using a mathematical decomposition such as singular value decomposition (SVD) or empirical mode decomposition (EMD); Applying the Hilbert transform to the results of the above step to obtain the instantaneous frequency spectrum of each of the components. The Hilbert transform defines the imaginary part of the function to make it an analytic function (sometimes referred to as a progressive function), i.e. a function whose signal strength is zero for all frequency components less than zero. With the Hilbert transform, the singular vectors give instantaneous frequencies that are functions of time, so that the result is an energy distribution over time and frequency. The result is an ability to capture time-frequency localization to make the concept of instantaneous frequency and time relevant (the concept of instantaneous frequency is otherwise abstract or difficult to define for all but monocomponent signals). Definition For a given signal x(t) decomposed (with, for example, Empirical Mode Decomposition) to x(t) = Σ_{j=1..N} C_j(t) + r(t), where N is the number of intrinsic mode functions C_j(t) that x(t) consists of and r(t) is the residual, each component can be written as an analytic signal a_j(t) e^{iθ_j(t)}. The instantaneous angular frequency is then defined as ω_j(t) = dθ_j(t)/dt. From this, we can define the Hilbert spectrum for C_j(t) as H_j(ω, t) = a_j(t) along the curve ω = ω_j(t) and zero elsewhere. The Hilbert spectrum of x(t) is then given by H(ω, t) = Σ_{j=1..N} H_j(ω, t). Marginal Hilbert Spectrum A two dimensional representation of a Hilbert Spectrum, called the Marginal Hilbert Spectrum, is defined as h(ω) = ∫ H(ω, t) dt, where
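A minimal numerical sketch of the second step, assuming the intrinsic mode functions have already been obtained by EMD or another decomposition; the two "IMFs" below are synthetic sinusoids invented for illustration, and the marginal spectrum is approximated with a simple histogram.

import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 5 * t)]  # stand-ins for EMD output

freqs, amps = [], []
for c in imfs:
    z = hilbert(c)                                   # analytic signal c(t) + i*H[c](t)
    amplitude = np.abs(z)                            # instantaneous amplitude a(t)
    phase = np.unwrap(np.angle(z))                   # instantaneous phase theta(t)
    inst_freq = np.gradient(phase, t) / (2 * np.pi)  # instantaneous frequency in Hz
    freqs.append(inst_freq)
    amps.append(amplitude)

# Crude marginal Hilbert spectrum: accumulate amplitude over time into frequency bins
edges = np.arange(0, 100, 1.0)
marginal, _ = np.histogram(np.concatenate(freqs), bins=edges, weights=np.concatenate(amps))
print(edges[np.argmax(marginal)])   # ≈ 50 Hz, the dominant component in this synthetic example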
https://en.wikipedia.org/wiki/Xcode
Xcode is Apple's integrated development environment (IDE) for macOS, used to develop software for macOS, iOS, iPadOS, watchOS, tvOS, and visionOS. It was initially released in late 2003; the latest stable release is version 15, released on September 18, 2023, and is available free of charge via the Mac App Store and the Apple Developer website. Registered developers can also download preview releases and prior versions of the suite through the Apple Developer website. Xcode includes command-line tools which enable UNIX-style development via the Terminal app in macOS. They can also be downloaded and installed without the GUI. Before Xcode, Apple offered developers Project Builder and Interface Builder to develop Mac OS X applications. Major features Xcode supports source code for the programming languages: C, C++, Objective-C, Objective-C++, Java, AppleScript, Python, Ruby, ResEdit (Rez), and Swift, with a variety of programming models, including but not limited to Cocoa, Carbon, and Java. Third parties have added support for GNU Pascal, Free Pascal, Ada, C#, Go, Perl, and D. Xcode can build fat binary (universal binary) files containing code for multiple architectures with the Mach-O executable format. These helped ease the transitions from 32-bit PowerPC to 64-bit PowerPC, from PowerPC to Intel x86, from 32-bit to 64-bit Intel, and most recently from Intel x86 to Apple silicon by allowing developers to distribute a single application to users and letting the operating system automatically choose the appropriate architecture at runtime. Using the iOS SDK, tvOS SDK, and watchOS SDK, Xcode can also be used to compile and debug applications for iOS, iPadOS, tvOS, and watchOS. Xcode includes the GUI tool Instruments, which runs atop a dynamic tracing framework, DTrace, created by Sun Microsystems and released as part of OpenSolaris. Xcode also integrates built-in support for source code management using the Git version control system and protocol, allowing the use