https://en.wikipedia.org/wiki/Inferior%20thyroid%20veins | The inferior thyroid veins are usually two, frequently three or four, in number. They arise in the venous plexus on the thyroid gland, communicating with the middle and superior thyroid veins. While the superior and middle thyroid veins serve as direct tributaries to the internal jugular vein, the inferior thyroid veins drain directly into the brachiocephalic veins.
The inferior thyroid veins form a plexus in front of the trachea, behind the sternothyroid muscle. From this plexus, the left vein descends and joins the left brachiocephalic vein, and the right vein passes obliquely downward and to the right across the brachiocephalic artery to open into the right brachiocephalic vein, just at its junction with the superior vena cava; sometimes the right and left veins open by a common trunk in the latter situation.
The inferior thyroid veins receive esophageal, tracheal, and inferior laryngeal veins, and are provided with valves at their terminations in the brachiocephalic veins.
Additional images |
https://en.wikipedia.org/wiki/Applix%201616 | The Applix 1616 was a kit computer with a Motorola 68000 CPU, produced by a small company called Applix in Sydney, Australia, from 1986 to the early 1990s. It ran a custom multitasking multiuser operating system that was resident in ROM. A version of Minix was also ported to the 1616, as was the MGR Window System. Andrew Morton, designer of the 1616 and one of the founders of Applix, later became the maintainer of the 2.6 version of the Linux kernel.
History
Paul Berger and Andrew Morton formed the Australian company Applix Pty. Ltd. in approximately 1984 to sell a Z80 card they had developed for the Apple IIc that allowed it to run CP/M. This product was not a commercial success, but Paul later proposed they develop a Motorola 68000-based personal computer for sale in kit form.
The project was presented to Jon Fairall, then editor of the Australia and New Zealand electronics magazine Electronics Today International, and in December 1986, the first of four construction articles was published as "Project 1616", with the series concluding in June 1987. In October and November 1987, a disk controller card was also published as "Project 1617".
Over the next decade, about 400 1616s were sold.
Applix Pty. Ltd., was in no way related to the North American company of the same name that produced Applixware.
Hardware
Main board
The main board contains:
a Motorola 68000 running at 7.5 MHz, or a 68010 running at 15 MHz.
512 kibibytes of Dynamic RAM
between 64 kibibytes and 256 kibibytes of ROM
on board bit mapped colour graphics (no "text" mode), with timing provided by a Motorola 6845 CRT controller. The video could produce 320x200 in 16 colours, or 640x200 in a palette of 4 colours out of 16, with a later modification providing a 960x512 monochrome mode. The frame buffer resided in system memory and video refresh provided DRAM refresh cycles. The video output was able to drive CGA, EGA, MGA and multisync monitors.
dual RS-232 serial ports using a Zilog Z8530.
a |
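The memory cost of these video modes follows from simple arithmetic, and explains how the frame buffer fit comfortably in system RAM (an illustrative calculation, not from the original article):

```python
def framebuffer_bytes(width, height, colours):
    """Bytes needed for a packed-pixel frame buffer with the given palette size."""
    bits_per_pixel = (colours - 1).bit_length()  # e.g. 16 colours -> 4 bits/pixel
    return width * height * bits_per_pixel // 8

# Both colour modes need the same 32,000 bytes -- a small fraction of the
# 512 KiB of system DRAM that also held the frame buffer.
print(framebuffer_bytes(320, 200, 16))  # 32000
print(framebuffer_bytes(640, 200, 4))   # 32000
print(framebuffer_bytes(960, 512, 2))   # 61440 (later monochrome mode)
```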
https://en.wikipedia.org/wiki/Virtual%20instrumentation | Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.
Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e.g., an analog-to-digital converter can act as the hardware component of a virtual oscilloscope, and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.
The newly updated |
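The software-replaces-hardware idea can be made concrete with a small sketch: once raw samples exist (from any ADC), oscilloscope-style measurements are just computations. This is a hypothetical illustration with a simulated signal, not code from any instrumentation product:

```python
import math

def scope_measurements(samples, sample_rate):
    """Compute basic oscilloscope-style readings from raw ADC samples."""
    vpp = max(samples) - min(samples)
    # Estimate frequency by counting rising zero crossings.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / sample_rate
    return vpp, crossings / duration

# Simulated 50 Hz sine wave sampled at 10 kHz for one second.
rate = 10_000
wave = [math.sin(2 * math.pi * 50 * t / rate + 0.1) for t in range(rate)]
vpp, freq = scope_measurements(wave, rate)
print(round(vpp, 2), round(freq))  # 2.0 50
```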
https://en.wikipedia.org/wiki/HealthBoards | HealthBoards is a long-running social networking support group website. It consists of over 280 Internet message boards for patient-to-patient health support (also referred to as a virtual community or an online health community). HealthBoards was one of the first stand-alone health community websites. Health communities prior to it had generally been part of large web portals (WebMD, Yahoo, iVillage, etc.). HealthBoards members post messages to share information and support on a wide range of health issues such as cancer, back pain, autism, and women's health. As of October 2013, the site had over 1 million registered members, 5 million posted messages, and over 10 million monthly visitors.
History
HealthBoards was founded in 1998 by Charles Simmons, a software engineer in Los Angeles, California. In 1997, after experiencing a variety of symptoms for which doctors had no explanation, Simmons turned to the Web for answers and support. When he did not find online support groups in the areas he needed, he realized that there was a need for a health support website covering a wide range of health topics. After a year of development, HealthBoards was launched on July 26, 1998, with 70 message boards. The original site was developed using custom Perl software written by Simmons. HealthBoards quickly gained popularity. In January 2001, the site began using an internet forum software package called UBB. By November 2003, HealthBoards had reached 100,000 members. Due to considerable growth in traffic and problems with UBB, the site was transitioned to VBulletin 3.0, a more robust internet forum software system. After 2003 HealthBoards experienced its most rapid growth and became one of the largest health communities on the Web. In 2005 HealthBoards was rated as one of the top 20 health websites by Consumer Reports Health WebWatch.
Selection for inclusion as a "Top 20" site was based solely on web traffic volume. These sites were then evaluated using criteria developed |
https://en.wikipedia.org/wiki/Data%20portability | Data portability is a concept intended to protect users from having their data stored in "silos" or "walled gardens" that are incompatible with one another, i.e. closed platforms, thus subjecting them to vendor lock-in and making the creation of data backups or the moving of accounts between services difficult.
Data portability requires common technical standards to facilitate the transfer from one data controller to another, such as the ability to export user data into a user-accessible local file, thus promoting interoperability and facilitating searchability with sophisticated tools such as grep.
Data portability applies to personal data. It involves access to the personal data without implying data ownership per se.
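The "export to a user-accessible local file" requirement can be sketched in a few lines: writing account data to an open, text-based format such as JSON makes it searchable with ordinary tools. The data and field names here are hypothetical:

```python
import json
import os
import tempfile

# Hypothetical account data held by a service.
user_data = {
    "username": "alice",
    "posts": ["hello world", "moving to a new service"],
    "contacts": ["bob", "carol"],
}

# Export to a local file the user can read, back up, or import elsewhere.
path = os.path.join(tempfile.gettempdir(), "export.json")
with open(path, "w") as f:
    json.dump(user_data, f, indent=2)

# Because the export is plain text, simple tools can search it
# (the equivalent of `grep "new service" export.json`).
with open(path) as f:
    print("new service" in f.read())  # True
```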
Development
At the global level there are proponents who see the protection of digital data as a human right. Thus, in an emerging civil society draft declaration, one finds mention of the following concepts and statutes: Right to Privacy on the Internet, Right to Digital Data Protection, Rights to Consumer Protection on the Internet – United Nations Guidelines for Consumer Protection.
At the regional level there are at least three main jurisdictions where data rights are seen differently: China and India, the United States and the European Union. In the latter, personal data was given special protection under the 2018 General Data Protection Regulation (GDPR).
The GDPR thus became the fifth of the 24 types of legislation listed in Annex 1 Table of existing and proposed European Directives and Regulations in relation to data.
Personal data are the basis for behavioral advertising, and early in the 21st century their value began to grow exponentially, at least as measured in the market capitalization of the major platforms holding personal data on their respective users. European Union regulators reacted to this perceived power imbalance between platforms and users, although much still hinges on the terms of consent given by users to the platforms. The c |
https://en.wikipedia.org/wiki/Polymer%20adsorption | Adsorption is the adhesion of ions or molecules onto the surface of another phase. Adsorption may occur via physisorption and chemisorption. Ions and molecules can adsorb to many types of surfaces including polymer surfaces. A polymer is a large molecule composed of repeating subunits bound together by covalent bonds. In dilute solution, polymers form globule structures. When a polymer adsorbs to a surface that it interacts favorably with, the globule is essentially squashed, and the polymer has a pancake structure.
Polymer versus non-polymer surfaces
Polymer surfaces differ from non-polymer surfaces in that the subunits that make up the surface are covalently bonded to one another. Non-polymer surfaces can be bound by ionic bonds, metallic bonds or intermolecular forces (IMFs). In a two component system, non-polymer surfaces form when a positive net amount of energy is required to break self-interactions and form non-self-interactions. Therefore, the energy of mixing (ΔmixG) is positive. This amount of energy, as described by interfacial tension, varies for different combinations of materials. However, with polymer surfaces, the subunits are covalently bonded together and the bulk phase of the solid surface does not allow for surface tension to be measured directly. The intermolecular forces between the large polymer molecules are difficult to calculate and cannot be determined as easily as non-polymer surface molecular interactions. The covalently bonded subunits form a surface with differing properties as compared to non-polymer surfaces. Some examples of polymer surfaces include: polyvinyl chloride (PVC), nylon, polyethylene (PE), and polypropylene (PP). Polymer surfaces have been analyzed using a variety of techniques, including: scanning electron microscopy, scanning tunneling microscopy, and infrared spectroscopy.
Adsorption isotherms
The adsorption process can be characterized by determining what amount of the ions or molecules are adsorbed to the surfa |
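A standard example of such a characterization (added here for illustration; the truncated passage above does not name it) is the Langmuir isotherm, which relates the fractional surface coverage theta to the solute concentration c through an equilibrium constant K:

```latex
\theta = \frac{Kc}{1 + Kc}
```

At low concentration the coverage grows linearly (theta is approximately Kc), while at high concentration the surface saturates and theta approaches 1.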
https://en.wikipedia.org/wiki/Information%20architecture | Information architecture (IA) is the structural design of shared information environments; the art and science of organizing and labelling websites, intranets, online communities and software to support usability and findability; and an emerging community of practice focused on bringing principles of design, architecture and information science to the digital landscape. Typically, it involves a model or concept of information that is used and applied to activities which require explicit details of complex information systems. These activities include library systems and database development.
Definition
Information architecture has somewhat different meanings in different branches of information systems or information technology:
The structural design of shared information environments.
The art and science of organizing and labeling web sites, intranets, online communities, and software to support findability and usability.
An emerging community of practice focused on bringing principles of design and architecture to the digital landscape.
The combination of organization, labeling, search and navigation systems within websites and intranets.
Extracting required parameters/data of Engineering Designs in the process of creating a knowledge-base linking different systems and standards.
A blueprint and navigational aid to the content of information-rich systems.
A subset of data architecture where usable data (a.k.a. information) is designed or arranged in the fashion most useful or empirically holistic to the users of this data.
The practice of organizing the information / content / functionality of a web site so that it presents the best user experience it can, with information and services being easily usable and findable (as applied to web design and development).
The conceptual framework surrounding information, providing context, awareness of location and sustainable structure.
Debate
The difficulty in establishing a common definition |
https://en.wikipedia.org/wiki/Mammoplasia | Mammoplasia is the normal or spontaneous enlargement of human breasts. Mammoplasia occurs normally during puberty and pregnancy in women, as well as during certain periods of the menstrual cycle. When it occurs in males, it is called gynecomastia and is considered to be pathological. When it occurs in females and is extremely excessive, it is called macromastia (also known as gigantomastia or breast hypertrophy) and is similarly considered to be pathological. Mammoplasia may be due to breast engorgement, which is temporary enlargement of the breasts caused by the production and storage of breast milk in association with lactation and/or galactorrhea (excessive or inappropriate production of milk). Mastodynia (breast tenderness/pain) frequently co-occurs with mammoplasia.
During the luteal phase (latter half) of the menstrual cycle, due to increased mammary blood flow and/or premenstrual fluid retention caused by high circulating concentrations of estrogen and/or progesterone, the breasts temporarily increase in size, and this is experienced by women as fullness, heaviness, swollenness, and a tingling sensation.
Mammoplasia can be an effect or side effect of various drugs, including estrogens, antiandrogens such as spironolactone, cyproterone acetate, bicalutamide, and finasteride, growth hormone, and drugs that elevate prolactin levels such as D2 receptor antagonists like antipsychotics (e.g., risperidone), metoclopramide, and domperidone and certain antidepressants like selective serotonin reuptake inhibitors (SSRIs) and tricyclic antidepressants (TCAs). The risk appears to be less with serotonin-norepinephrine reuptake inhibitors (SNRIs) like venlafaxine. The "atypical" antidepressants mirtazapine and bupropion do not increase prolactin levels (bupropion may actually decrease prolactin levels), and hence there may be no risk with these agents. Other drugs that have been associated with mammoplasia include D-penicillamine, bucillamine, neothetazone, ciclosporin, |
https://en.wikipedia.org/wiki/Exoplanet%20orbital%20and%20physical%20parameters | This page describes exoplanet orbital and physical parameters.
Orbital parameters
Most known extrasolar planet candidates have been discovered using indirect methods and therefore only some of their physical and orbital parameters can be determined. For example, out of the six independent parameters that define an orbit, the radial-velocity method can determine four: semi-major axis, eccentricity, longitude of periastron, and time of periastron. Two parameters remain unknown: inclination and longitude of the ascending node.
Distance from star and orbital period
There are exoplanets that are much closer to their parent star than any planet in the Solar System is to the Sun, and there are also exoplanets that are much further from their star. Mercury, the closest planet to the Sun at 0.4 astronomical units (AU), takes 88 days for an orbit, but the smallest known orbits of exoplanets have orbital periods of only a few hours, see Ultra-short period planet. The Kepler-11 system has five of its planets in smaller orbits than Mercury's. Neptune is 30 AU from the Sun and takes 165 years to orbit it, but there are exoplanets that are thousands of AU from their star and take tens of thousands of years to orbit, e.g. GU Piscium b.
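The period-distance figures quoted here follow Kepler's third law: for a planet of negligible mass, the orbital period in years equals the semi-major axis in AU raised to the power 3/2, scaled by the star's mass. A quick illustrative check against the Solar System values above:

```python
def orbital_period_years(a_au, star_mass_solar=1.0):
    """Kepler's third law: P^2 = a^3 / M, with P in years, a in AU, M in solar masses."""
    return (a_au ** 3 / star_mass_solar) ** 0.5

print(round(orbital_period_years(0.387) * 365.25))  # Mercury: 88 (days)
print(round(orbital_period_years(30.1)))            # Neptune: 165 (years)
```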
The radial-velocity and transit methods are most sensitive to planets with small orbits. The earliest discoveries such as 51 Peg b were gas giants with orbits of a few days. These "hot Jupiters" likely formed further out and migrated inwards.
The direct imaging method is most sensitive to planets with large orbits, and has discovered some planets that have planet–star separations of hundreds of AU. However, protoplanetary disks are usually only around 100 AU in radius, and core accretion models predict giant planet formation to be within 10 AU, where the planets can coalesce quickly enough before the disk evaporates.
Very-long-period giant planets may have been rogue planets that were captured, or formed close-in and gravitationally scattered |
https://en.wikipedia.org/wiki/Ly6/plaur%20domain%20containing%205 | LY6/PLAUR domain containing 5 is a protein that in humans is encoded by the LYPD5 gene. |
https://en.wikipedia.org/wiki/Off-line%20programming%20%28robotics%29 | Off-line programming (OLP) is a robot programming method where the robot program is created independent from the actual robot cell. The robot program is then uploaded to the real industrial robot for execution. In off-line programming, the robot cell is represented through a graphical 3D model in a simulator. Nowadays OLP and robotics simulator tools help robot integrators create the optimal program paths for the robot to perform a specific task. Robot movements, reachability analysis, collision and near-miss detection and cycle time reporting can be included when simulating the robot program.
OLP does not interfere with production as the program for the robot is created outside the production process on an external computer.
This method contrasts with the traditional on-line programming of industrial robots, in which the robot teach pendant is used to program the robot manually.
The time for the adoption of new programs can be cut from weeks to a single day, enabling the robotization of short-run production.
Software
Examples of software and hardware supporting off-line programming:
SprutCAM
RobotMaster
PQArt |
https://en.wikipedia.org/wiki/Tulip%20System-1 | The Tulip System I is a 16-bit personal computer based on the Intel 8086 and made by Tulip Computers, formerly an import company for the Exidy Sorcerer, called Compudata Systems.
Its Motorola 6845-based video display controller could display 80×24 text in 8 different fonts to support different languages, including a Videotex-based font of 2×3 pseudo-graphic symbols that allowed 160×72 pixel graphics in text mode.
The video display generator could also display graphics at 384×288 or 768×288 (color) or 768×576 (monochrome) pixel resolution using its built-in NEC 7220 video display coprocessor. The 7220 had hardware-supported drawing functions: alongside an advanced set of bit-block transfers, it could generate lines, arcs, circles, ellipses, elliptical arcs, filled arcs, filled circles, filled ellipses, and filled elliptical arcs, among many other commands.
Its memory can be upgraded in units of 128 KB up to 896 KB (much more than the 640 KB of the original PC).
It included a SASI hard disk interface (a predecessor of the SCSI-standard) and was optionally delivered with a 5 MB or 10 MB hard disk. The floppy disk size was 400 KB (10 sectors, instead of 8 or 9 with the IBM PC) or 800 KB (80 tracks).
It runs at 8 MHz with a true 16-bit CPU, almost twice the speed of the IBM PC XT, which was launched only a few months earlier in July 1983. It can optionally take an Intel 8087 math coprocessor, which increased floating-point speed to over 200 kflops, near mainframe performance at the time.
After initially using CP/M-86, it quickly switched to generic MS-DOS 2.00. A rudimentary IBM BIOS emulator allowed the user to run WordStar and a few other pieces of IBM PC software, and Compudata B.V. shipped WordStar and some other titles adapted for this computer. Compudata B.V. also provided programming support with MS-Basic, MS-Pascal and MS-Fortran.
On a private basis, TeX and Turbo Pascal were ported to the Tulip System I. |
https://en.wikipedia.org/wiki/Maltoside | A maltoside is a glycoside with maltose as the glycone (sugar) functional group. Among the most common are alkyl maltosides, which contain hydrophobic alkyl chains as the aglycone. Given their amphiphilic properties, these comprise a class of detergents, where variation in the alkyl chain confers a range of detergent properties including CMC and solubility. Maltosides are most often used for the solubilization and purification of membrane proteins.
History
In 1980 Ferguson-Miller et al. at Michigan State developed n-dodecyl-β-D-maltopyranoside (DDM) as part of a successful effort to purify an active, stable, monodisperse form of cytochrome c oxidase. Maltosides have been used extensively to stabilize membrane proteins for biophysical and structural studies.
Table of detergent properties |
https://en.wikipedia.org/wiki/Apomixis | In botany, apomixis is asexual development of seed or embryo without fertilization. However, other definitions include replacement of the seed by a plantlet or replacement of the flower by bulbils.
Apomictically produced offspring are genetically identical to the parent plant, except in nonrecurrent apomixis. The word's etymology is Greek for "away from" + "mixing".
Normal asexual reproduction of plants, such as propagation from cuttings or leaves, has never been considered to be apomixis. In contrast to parthenocarpy, which involves seedless fruit formation without fertilization, apomictic fruits have viable seeds containing a proper embryo, with asexual origin.
In flowering plants, the term "apomixis" is used in a restricted sense to mean agamospermy, i.e., clonal reproduction through seeds. Although agamospermy could theoretically occur in gymnosperms, it appears to be absent in that group.
Apogamy is a related term that has had various meanings over time. In plants with independent gametophytes (notably ferns), the term is still used interchangeably with "apomixis", and both refer to the formation of sporophytes by parthenogenesis of gametophyte cells.
Male apomixis (paternal apomixis) involves replacement of the genetic material of an egg by the genetic material of the pollen.
Some authors included all forms of asexual reproduction within apomixis, but that generalization of the term has since died out.
Evolution
Because apomictic plants are genetically identical from one generation to the next, each lineage has some of the characters of a true species, maintaining distinctions from other apomictic lineages within the same genus, while having much smaller differences than is normal between species of most genera. They are therefore often called microspecies. In some genera, it is possible to identify and name hundreds or even thousands of microspecies, which may be grouped together as species aggregates, typically listed in floras with the convention "Genus s |
https://en.wikipedia.org/wiki/Gymnogongrus%20griffithsiae | Gymnogongrus griffithsiae is a small uncommon seaweed.
Description
This small alga grows to 5 cm long from a small disc. The fronds are erect, stiff and branch dichotomously in one plane, the tips a little flattened. In colour it is dark purplish brown. The structure is multiaxial, with elongated cells surrounded by cortical cells.
Reproduction
Male spermatangia are unknown. Carpotetrasporangia, that is, sporangia containing four spores, are borne in a carposporophyte outgrowth which develops during the year.
Distribution
Found in Great Britain and Ireland with a southern range, extending as far north as Lough Swilly. In the North Atlantic it occurs from the Azores in Europe to North America, where it ranges from Massachusetts to Virginia.
Habitat
The plants grow in rock pools of the lower littoral and in the upper sublittoral.
Possible confusion
This species is similar to Ahnfeltia plicata which usually has wiry irregular branching.
Further reading
Guiry, M.D., Irvine, L.M., Morton, O. 1981. Notes on Irish marine algae: 4. Gymnogongrus devoniensis (Greville) Schotter (Rhodophyta). Irish Naturalists' Journal 20: 288–292. |
https://en.wikipedia.org/wiki/Wildlife%20radio%20telemetry | Wildlife radio telemetry is a tool used to track the movement and behavior of animals. This technique uses the transmission of radio signals to locate a transmitter attached to the animal of interest. It is often used to obtain location data on the animal's preferred habitat, home range, and to understand population dynamics. The different types of radio telemetry techniques include very high frequency (VHF) transmitters, global positioning system (GPS) tracking, and satellite tracking. Recent advances in technology have improved radio telemetry techniques by increasing the efficacy of data collection. However, studies involving radio telemetry should be reviewed in order to determine if newer techniques, such as collars that transmit the location to the operator via satellites, are actually required to accomplish the goals of the study.
Transmitters
The operator attaches a transmitter to an animal that gives off unique electromagnetic radio signals, which allows the animal to be located. Transmitters are available in a variety of forms and consist of an antenna, a power source, and the electronics required to produce a signal. Transmitters are chosen based on the behavior, size, and life history of the specific species being studied. In order to reduce the impact of the transmitter on the animal's behavior and quality of life, transmitters typically weigh no more than five percent of the animal's body weight. Unfortunately, the smaller the transmitter, the weaker and shorter-lived it is. Transmitters are often designed to fall off the animal at the conclusion of the study due to the unlikelihood of recapturing the tagged animals.
Large animals require transmitters in the form of collars, which leave room for the animal to grow without falling off. Ear tag transmitters are commonly attached to the ear of large animals that have changing neck sizes. Lightweight, adhesive transmitters are glued to the backs of smaller animals, such as bats. Necklace packs are tran |
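The five-percent guideline mentioned above is straightforward to apply when planning a study (an illustrative helper, not from the article):

```python
def max_transmitter_mass(body_mass_g, limit_fraction=0.05):
    """Heaviest transmitter allowed under the rule-of-thumb mass limit."""
    return body_mass_g * limit_fraction

# A 20 g bat can carry at most a 1 g adhesive tag; a 5 kg animal up to 250 g.
print(round(max_transmitter_mass(20), 3))    # 1.0
print(round(max_transmitter_mass(5000), 3))  # 250.0
```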
https://en.wikipedia.org/wiki/Cold%20Atom%20Laboratory | The Cold Atom Laboratory (CAL) is an experimental instrument on board the International Space Station (ISS), launched in 2018. It creates an extremely cold microgravity environment in order to study the behaviour of atoms under these conditions.
Timeline
The CAL was developed at JPL in Pasadena, California. It was originally scheduled for launch to the International Space Station (ISS) in June 2017. It was then delayed until a scheduled launch on a SpaceX CRS-12 rocket in August 2017. It was finally launched on May 21, 2018. The initial mission had a duration of 12 months with up to five years of extended operation.
In January 2020 it underwent hardware upgrades, which were carried out over an eight-day period by astronauts Christina Koch and Jessica Meir under the supervision of ground controllers. The upgrade included an atom interferometer which can be used to study the equivalence principle.
In July 2021, another upgrade by astronaut Megan McArthur gave CAL the ability to work with ultracold potassium atoms in addition to rubidium atoms.
Purpose
The instrument creates extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates (BECs) that are an order of magnitude colder than those created in laboratories on Earth. In a space-based laboratory, interaction times of up to 10 seconds and temperatures as low as 1 picokelvin are achievable, which could lead to the exploration of unknown quantum mechanical phenomena and tests of some of the most fundamental laws of physics. These experiments are best done in a freely falling environment, because it is more conducive to the uninhibited formation of BECs. Ground-based experiments suffer from the condensate interacting asymmetrically with the apparatus, interfering with the time evolution of the condensate. In orbit, experiments can last much longer because freefall is sustained indefinitely. NASA's JPL scientists state that the CAL investigation could advance knowledge in the devel |
https://en.wikipedia.org/wiki/ARM%20architecture%20family | ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of RISC instruction set architectures (ISAs) for computer processors. Arm Ltd. develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs.
Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems. However, ARM processors are also used for desktops and servers, including the world's fastest supercomputer (Fugaku) from 2020 to 2022. With over 230 billion ARM chips produced, ARM is the most widely used family of instruction set architectures.
There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. Arm Ltd. has also released a series of additional instruction sets for different roles; the "Thumb" extension adds both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance.
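The 64 MB limit follows directly from the address width: a 26-bit address reaches 2^26 distinct bytes. A quick illustrative check:

```python
# A 26-bit address space covers 2**26 byte addresses = 64 MiB;
# widening to 32 bits (as in ARMv3) raises the reach to 4 GiB.
print(2**26 // (1024 * 1024))        # 64  (MiB)
print(2**32 // (1024 * 1024 * 1024)) # 4   (GiB)
```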
History
BBC Micro
Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to |
https://en.wikipedia.org/wiki/Teichichnus | Teichichnus is an ichnogenus with a distinctive form produced by the stacking of thin 'tongues' of sediment, atop one another. They are believed to be fodinichnia, with the organism adopting the habit of retracing the same route through varying heights of the sediment, which would allow it to avoid going over the same area. These 'tongues' are often quite sinuous, reflecting perhaps a more nutrient-poor environment in which the feeding animals had to cover a greater area of sediment, in order to acquire sufficient nourishment. Teichichnus is recognized as a series of tightly packed, concave-up laminae, and lacks an outside border or lining, which distinguishes Teichichnus from the Diplocraterion ichnogenus.
External links
Trace fossils
Paleozoology |
https://en.wikipedia.org/wiki/Utricle%20%28ear%29 | The utricle and saccule are the two otolith organs in the vertebrate inner ear. They are part of the balancing system (membranous labyrinth) in the vestibule of the bony labyrinth (small oval chamber). They use small stones and a viscous fluid to stimulate hair cells to detect motion and orientation. The utricle detects linear accelerations and head-tilts in the horizontal plane. The word utricle comes from Latin utriculus, "small bag".
Structure
The utricle is larger than the saccule and is of an oblong form, compressed transversely, and occupies the upper and back part of the vestibule, lying in contact with the recessus ellipticus and the part below it.
Macula
The macula of utricle (macula acustica utriculi) is a small (2 by 3 mm) thickening lying horizontally on the floor of the utricle where the epithelium contains vestibular hair cells that allow a person to perceive changes in horizontal acceleration as well as the effects of gravity; it receives the utricular filaments of the acoustic nerve.
The hair cells are mechanoreceptors which have 40 to 70 stereocilia and only one true cilium called a kinocilium. The kinocilium is the only sensory aspect of the hair cell and is what causes hair cell polarization. The tips of these stereocilia and kinocilium are embedded in a gelatinous layer, which together with the statoconia form the otolithic membrane. This membrane is weighted with calcium carbonate-protein granules called otoliths. The otolithic membrane adds weight to the tops of the hair cells and increases their inertia. The addition in weight and inertia is vital to the utricle's ability to detect linear acceleration, as described below, and to determine the orientation of the head. When the head is tilted such that gravity pulls on the statoconia, the gelatinous layer is pulled in the same direction also, causing the sensory hairs to bend. Labyrinthine activity responsible for the nystagmus induced by off-vertical axis rotation arises in the otolith organs and couples to the oculomotor |
https://en.wikipedia.org/wiki/Onion%20gravy | Onion gravy is a type of gravy prepared with onion. Various types of onions are used in its preparation. Some preparations caramelize the onions. Onion gravy may be served to accompany many foods, such as pork, beef steak, meatloaf, hamburger, bangers and mash, hot dogs, and French fries, among others. Vegan onion gravy also exists, which may use seitan cooking broth in its preparation. Premade mixes and formulations also exist, such as solid sauce bars.
Ingredients
Primary ingredients include onion, broth or stock, such as beef or chicken stock, and flour. Sweet onion is used in some versions, and some versions incorporate beer or red wine in the gravy. Additional ingredients may include cream, garlic, bread crumbs, butter, vegetable oil, and brown sugar, among others. Various herbs and spices may be used, such as salt, pepper, sage, oregano, and thyme.
See also
List of gravies
List of onion dishes
List of sauces
Onion sauce |
https://en.wikipedia.org/wiki/Multicistronic%20message | Multicistronic message is an archaic term for polycistronic mRNA. Monocistronic, bicistronic and tricistronic are also used to describe mRNA with single, double and triple coding areas (exons).
Note that the base word cistron is no longer used in genetics, and has been replaced by intron and exon in eukaryotic mRNA. However, the mRNA found in bacteria is mainly polycistronic. This means that a single bacterial mRNA strand can be translated into several different proteins. This occurs when spacer sequences separate the individual coding regions, with each spacer containing a Shine-Dalgarno sequence upstream of the next start codon.
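The mechanism above can be sketched computationally. The toy scan below (hypothetical sequence and a simplified consensus motif, not a real gene-finding method) reports start codons that lie a short distance downstream of a Shine-Dalgarno-like motif, written in the DNA alphabet:

```python
import re

def find_cistrons(mrna, sd_motif="AGGAGG", max_gap=12):
    """Toy scan of a bacterial mRNA (DNA alphabet): report start codons (ATG)
    that lie shortly downstream of a Shine-Dalgarno-like motif."""
    starts = []
    for m in re.finditer(sd_motif, mrna):
        window = mrna[m.end():m.end() + max_gap]
        atg = window.find("ATG")
        if atg != -1:
            starts.append(m.end() + atg)   # 0-based position of the start codon
    return starts

# Two cistrons, each preceded by its own SD-like motif:
seq = "AGGAGGTTACATGAAATAACC" + "AGGAGGCCAATGGGGTAA"
print(find_cistrons(seq))   # → [10, 30]
```

Each reported position marks a separate ribosome entry point, which is how one polycistronic transcript yields several proteins.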
RNA |
https://en.wikipedia.org/wiki/Stains-all | Stains-all is a carbocyanine dye, which stains anionic proteins, nucleic acids, anionic polysaccharides and other anionic molecules.
Properties
Stains-all is metachromatic and changes its color depending on the molecules it contacts. The detection limit for phosphoproteins is below 1 ng after one hour of staining, and for anionic polysaccharides between 10 and 500 ng. Highly anionic proteins are stained blue, proteoglycans purple and anionic proteins pink. RNA is stained bluish-purple with a detection limit of 90 ng, and DNA is stained blue with a detection limit of 3 ng.
Stains-all is light sensitive; therefore, the staining is performed in the absence of light and photographed immediately. Staining of proteins can be improved by a subsequent silver stain. The analogue Ethyl-Stains-all has properties similar to those of Stains-all, with differences in solubility and staining behaviour.
Applications
Stains-all stains nucleic acids, anionic proteins, anionic polysaccharides such as alginate and pectinate, hyaluronic acid and dermatan sulfate, heparin, heparan sulfate and chondroitin sulfate. It is used in SDS-PAGE, agarose gel electrophoresis and histologic staining, e.g. staining of growth lines in bones. |
https://en.wikipedia.org/wiki/Optimus%20platform | Optimus is a Process Integration and Design Optimization (PIDO) platform developed by Noesis Solutions. Noesis Solutions takes part in key research projects, such as PHAROS and MATRIX.
Optimus allows the integration of multiple engineering software tools (CAD, Multibody dynamics, finite elements, computational fluid dynamics, ...) into a single and automated workflow. Once a simulation process is captured in a workflow, Optimus will direct the simulations to explore the design space and to optimize product designs for improved functional performance and lower cost, while also minimizing the time required for the overall design process.
Process integration
The Optimus GUI enables the creation of a graphical simulation workflow. A set of functions supports the integration of both commercial and in-house software. A simple workflow can cover a single simulation program, whereas more advanced workflows can include multiple simulation programs. These workflows may contain multiple branches, each with one or more simulation programs, and may include special statements that define looping and conditional branching.
Optimus’ workflow execution mechanism can range from a step-by-step review of the simulation process up to deployment on a large (and possibly heterogeneous) computation cluster. Optimus is integrated with several resource management systems to support parallel execution on a computational cluster.
Design optimization
Optimus includes a wide range of methods and models to help solve design optimization problems:
Design of Experiments (DOE)
Response Surface Modeling (RSM)
Numerical optimization, based on local or global algorithms, both for single or multiple objectives with continuous and/or discrete design variables
Design of Experiments (DOE)
Design of Experiments (DOE) defines an optimal set of experiments in the design space in order to obtain the most relevant and accurate design information at minimal cost. Optimus supports the following DOE m |
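The simplest DOE scheme, a full-factorial design, runs one experiment per combination of factor levels. A minimal sketch (the factor names and levels below are hypothetical, and this is not Optimus code):

```python
from itertools import product

def full_factorial(levels):
    """Full-factorial design: one experiment per combination of factor levels.
    `levels` maps each factor name to its list of candidate values."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

# Hypothetical factors for a design study:
runs = full_factorial({"thickness_mm": [1.0, 2.0],
                       "material": ["steel", "aluminium"],
                       "load_kN": [10, 20]})
print(len(runs))   # → 8 (i.e. 2 * 2 * 2 experiments)
```

Full factorials grow exponentially with the number of factors, which is why DOE methods such as fractional factorials or Latin hypercubes are used to cover the design space with fewer runs.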
https://en.wikipedia.org/wiki/Tymovirales | Tymovirales is an order of viruses with five families. The group consists of viruses which have positive-sense, single-stranded RNA genomes. Their genetic material is protected by a special coat protein.
Description
Tymoviruses are mainly plant pathogens first described in 2004. They are characterised by similarities in their replication-associated polyproteins. These account for the majority of their genomic coding capacity. They are considered to form a group, phylogenetically, referred to as flexiviruses, with filamentous virions. |
https://en.wikipedia.org/wiki/Armorial%20of%20Albania | Heraldry, as a scholarly discipline that deals with the study and origin of various symbols and elements, emerged in Albania towards the end of the 13th century. Over time, it has evolved as an inseparable component of European heraldry, encompassing its advancements, shifts and accomplishments.
The earliest evidence in the usage of coats of arms can be traced to the formative period of the Principality of Arbanon with the Gropa ruling family. This practice continued in uninterrupted succession across various medieval Albanian lineages and patronymic families, namely the Albani, Angeli, Arianiti, Balsa, Beçikemi, Dukagjini, Durazzo, Dushmani, Kastrioti, Mataranga, Muzaka, Skuraj, Spani, Spata, Thopia, Zaharia, Zenevisi and numerous others.
Medieval rule
Albanian nobility
Italy
Dalmatia
Moldavia & Romania
Greece
Ottoman Empire
For over 500 years, Albanians were an integral and indispensable component of the Ottoman Empire, with far-reaching contributions to politics, the economy, the military, administration and the judiciary. At least 32 of the grand viziers who served this vast empire were of full or partial Albanian descent.
Egypt
Stratioti
Following the Ottoman conquest during the Middle Ages, the heraldic tradition became localized primarily in the coastal cities that fell under Venetian rule. While Ulcin and Tivar succumbed in 1571, Parga, in present-day Greece, remained under Venetian control for extended periods (1447–1537; 1561–1797). The city's prosperity peaked between 1572 and 1797. Its affiliation with Venice facilitated the emergence of a class of captains and stratioti, primarily engaged in furthering the Republic's interests. According to H. G. Ströhl, one of the notable families among the captains of Parga that possessed heraldic coats of arms was the Barbati.
Republic of Venice
Holy Roman Empire
Modern rule
National defence
Military coats of arms
Police coats of arms
Local |
https://en.wikipedia.org/wiki/Centre%20for%20Marine%20Living%20Resources%20%26%20Ecology | The Centre for Marine Living Resources & Ecology (CMLRE) is a research institute in Kochi, Kerala under the Ministry of Earth Sciences, Government of India with a mandate to study the marine living resources. Today, apart from implementing various research projects of the ministry, the institute also manages and operates the Fishery Oceanographic Research Vessel (FORV) Sagar Sampada.
History
The institute has its origins in the Sagar Sampada cell, which was established under the then Department of Ocean Development, DOD (upgraded to the Ministry of Earth Sciences in 2006) for managing and co-ordinating activities of FORV Sagar Sampada. At the beginning of the 9th Five Year Plan of the Government of India in 1998, the Marine Living Resources Programme (MLR Programme) was formulated by the DOD with a view to promoting ocean development activities in the country, which inter alia included mapping of the living resources, preparing an inventory of commercially exploitable marine living resources, their optimum utilization through ecosystem management, and R&D in basic sciences on marine living resources and ecology. With the objective of implementing this programme, the Sagar Sampada cell was upgraded to the Centre for Marine Living Resources and Ecology. To this day, the research vessel Sagar Sampada serves as the backbone of the MLR research activities co-ordinated by CMLRE.
During the 9th five-year plan (1998-2002), the Centre co-ordinated the first systematic study of marine life along the Indian shelf waters, along the eastern and western coasts of India. The environmental characteristics of this region and the phytoplankton, zooplankton, marine benthos, fishery resources etc. of this region were systematically characterized for the first time. During the 10th five-year plan (2002-2007) the exploration was extended to the continental slope regions, particularly in the case of marine benthos and fisheries. Research thrust was also placed on studies of harmful alg |
https://en.wikipedia.org/wiki/InterPro | InterPro is a database of protein families, protein domains and functional sites in which identifiable features found in known proteins can be applied to new protein sequences in order to functionally characterise them.
The contents of InterPro consist of diagnostic signatures and the proteins that they significantly match. The signatures consist of models (simple types such as regular expressions, or more complex ones such as hidden Markov models) which describe protein families, domains or sites. Models are built from the amino acid sequences of known families or domains and they are subsequently used to search unknown sequences (such as those arising from novel genome sequencing) in order to classify them. Each of the member databases of InterPro contributes towards a different niche, from very high-level, structure-based classifications (SUPERFAMILY and CATH-Gene3D) through to quite specific sub-family classifications (PRINTS and PANTHER).
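As an illustration of the simplest signature type, a PROSITE-style pattern can be translated into an ordinary regular expression and matched against a sequence. The pattern and sequence below are invented for the example; this is not InterPro code, and only a subset of the PROSITE syntax is handled:

```python
import re

def prosite_to_regex(pattern):
    """Translate a PROSITE-style pattern (e.g. 'C-x(2,4)-C') into a Python
    regular expression. Handles plain residues, x (any residue), [..] residue
    classes, {..} exclusions, and (n) / (n,m) repeat counts only."""
    rx = []
    for elem in pattern.split("-"):
        m = re.fullmatch(r"(\w|\[\w+\]|\{\w+\})(?:\((\d+)(?:,(\d+))?\))?", elem)
        core, lo, hi = m.groups()
        if core == "x":
            core = "."                             # x = any residue
        elif core.startswith("{"):
            core = "[^" + core[1:-1] + "]"         # {ED} = any residue except E, D
        if lo:
            core += "{" + lo + ("," + hi if hi else "") + "}"
        rx.append(core)
    return "".join(rx)

rx = prosite_to_regex("C-x(2,4)-C-x(3)-[LIVMFYWC]")
print(rx)                                          # → C.{2,4}C.{3}[LIVMFYWC]
print(re.search(rx, "MKACGWCAAALTTT") is not None) # → True
```

Regular-expression signatures give exact yes/no matches, which is why member databases complement them with profile and HMM models that score partial or divergent matches.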
InterPro's intention is to provide a one-stop-shop for protein classification, where all the signatures produced by the different member databases are placed into entries within the InterPro database. Signatures which represent equivalent domains, sites or families are put into the same entry and entries can also be related to one another. Additional information such as a description, consistent names and Gene Ontology (GO) terms are associated with each entry, where possible.
Data contained in InterPro
InterPro contains three main entities: proteins, signatures (also referred to as "methods" or "models") and entries. The proteins in UniProtKB are also the central protein entities in InterPro. Information regarding which signatures significantly match these proteins are calculated as the sequences are released by UniProtKB and these results are made available to the public (see below). The matches of signatures to proteins are what determine how signatures are integrated together into InterPro entries: comparativ |
https://en.wikipedia.org/wiki/Plane%20wave%20expansion%20method | Plane wave expansion method (PWE) refers to a computational technique in electromagnetics to solve the Maxwell's equations by formulating an eigenvalue problem out of the equation. This method is popular among the photonic crystal community as a method of solving for the band structure (dispersion relation) of specific photonic crystal geometries. PWE is traceable to analytical formulations, and is useful in calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in time-harmonic form, with non-dispersive media (a reformulation of the method, named Inverse Dispersion, allows frequency-dependent refractive indices).
Principles
Plane waves are solutions to the homogeneous Helmholtz equation, and form a basis to represent fields in periodic media. The treatment of PWE as applied to photonic crystals given here primarily follows Danner's tutorial.
The electric or magnetic fields are expanded for each field component in terms of the Fourier series components along the reciprocal lattice vector. Similarly, the dielectric permittivity (which is periodic along reciprocal lattice vector for photonic crystals) is also expanded through Fourier series components.
with the Fourier series coefficients being the K numbers subscripted by m and n respectively, and the reciprocal lattice vectors G determined by the periodicity of the crystal. In real modeling, the range of components considered is reduced to a finite −N_max ≤ m, n ≤ N_max instead of the ideal, infinite expansion.
Using these expansions in either of the curl–curl relations, e.g. ∇ × (ε_r⁻¹ ∇ × H) = (ω/c)² H, and simplifying under the assumptions of a source-free, linear, and non-dispersive region, we obtain the eigenvalue relations which can be solved.
Example for 1D case
Consider a y-polarized, z-propagating electric wave incident on a 1D DBR that is periodic only in the z-direction and homogeneous along x and y, with a lattice period of a. The expansions then simplify to E_y(z) = e^{ikz} Σ_m K_m^{E_y} e^{iG_m z} and ε_r(z) = Σ_n K_n^{ε} e^{iG_n z}, with G_m = 2πm/a.
The constitutive eigenvalue equation we finally have t |
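For the 1D case, the plane-wave expansion leads to the generalized eigenvalue problem (k + G_m)² a_m = (ω/c)² Σ_n ε_{m−n} a_n, which can be solved numerically. The sketch below is not from Danner's tutorial and uses arbitrary DBR parameters; it truncates the expansion at ±N and obtains the Fourier coefficients of ε(z) by FFT:

```python
import numpy as np
from scipy.linalg import eigh

def pwe_bands_1d(k, eps1=13.0, eps2=1.0, d1=0.2, a=1.0, N=15):
    """Band frequencies (omega * a / (2*pi*c)) of a 1D DBR at Bloch wavenumber
    k (in units of 2*pi/a), via a plane-wave expansion truncated at -N..N."""
    m = np.arange(-N, N + 1)
    G = 2.0 * np.pi * m / a                        # reciprocal lattice vectors
    # Fourier coefficients of the periodic permittivity, sampled on a fine grid
    Nz = 4096
    z = np.arange(Nz) * a / Nz
    eps_c = np.fft.fft(np.where(z < d1, eps1, eps2)) / Nz
    B = eps_c[(m[:, None] - m[None, :]) % Nz]      # Toeplitz matrix eps_{m-n}
    A = np.diag((2.0 * np.pi * k / a + G) ** 2)    # (k + G_m)^2 on the diagonal
    lam = eigh(A, B, eigvals_only=True)            # eigenvalues (omega/c)^2
    return np.sqrt(np.abs(lam)) * a / (2.0 * np.pi)

# Uniform-medium sanity check: the lowest band must follow omega = c*k/sqrt(eps)
print(pwe_bands_1d(0.25, eps1=4.0, eps2=4.0)[0])   # → 0.125
```

For a uniform medium (eps1 = eps2) the Toeplitz matrix collapses to ε·I and the lowest band reduces to the free dispersion ω = ck/√ε, a convenient check on the truncation; with contrasting permittivities a band gap opens at the zone edge.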
https://en.wikipedia.org/wiki/Nutrient%20canal | All bones possess larger or smaller foramina (openings) for the entrance of blood-vessels; these are known as the nutrient foramina, and are particularly large in the shafts of the larger long bones, where they lead into a nutrient canal, which extends into the medullary cavity.
The nutrient canal (foramen) is directed away from the growing end of bone.
The growing ends of bones in upper limb are upper end of humerus and lower ends of radius and ulna. In lower limb, the lower end of femur and upper end of tibia are the growing ends. The nutrient arteries along with nutrient veins pass through this canal.
A nutrient canal is found in long bones, in the mandible, and in dental alveoli. In long bones the nutrient canal is found in the shaft. |
https://en.wikipedia.org/wiki/Scaffolding%20%28bioinformatics%29 | Scaffolding is a technique used in bioinformatics. It is defined as follows:
Link together a non-contiguous series of genomic sequences into a scaffold, consisting of sequences separated by gaps of known length. The sequences that are linked are typically contiguous sequences corresponding to read overlaps.

When creating a draft genome, individual reads of DNA are first assembled into contigs, which, by the nature of their assembly, have gaps between them. The next step is to bridge the gaps between these contigs to create a scaffold. This can be done using either optical mapping or mate-pair sequencing.
Assembly software
The sequencing of the Haemophilus influenzae genome marked the advent of scaffolding. That project generated a total of 140 contigs, which were oriented and linked using paired end reads. The success of this strategy prompted the creation of the software, Grouper, which was included in genome assemblers. Until 2001, this was the only scaffolding software. After the Human Genome Project and Celera proved that it was possible to create a large draft genome, several other similar programs were created. Bambus was created in 2003 and was a rewrite of the original grouper software, but afforded researchers the ability to adjust scaffolding parameters. This software also allowed for optional use of other linking data, such as contig order in a reference genome.
Algorithms used by assembly software are very diverse, and can be classified as based on iterative marker ordering, or graph based. Graph-based applications have the capacity to order and orient over 10,000 markers, compared to the maximum of 3,000 markers that iterative marker-ordering applications can handle. Algorithms can be further classified as greedy, non-greedy, conservative, or non-conservative. Bambus uses a greedy algorithm, so called because it joins together the contigs with the most links first. The algorithm used by Bambus 2 removes repetitive contigs before orienting and ordering them in |
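A greedy scaffolder in the spirit of Bambus can be sketched as follows. The contig names and link counts are hypothetical, and a real scaffolder also tracks contig orientation and gap sizes:

```python
def greedy_scaffold(links):
    """Greedy scaffolding sketch: join the contig pair with the most mate-pair
    links first, skipping joins that would branch a scaffold (degree > 2) or
    close it into a cycle (tracked with union-find)."""
    parent, degree = {}, {}

    def find(c):
        parent.setdefault(c, c)
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path halving
            c = parent[c]
        return c

    joins = []
    for (u, v), n in sorted(links.items(), key=lambda kv: -kv[1]):
        if degree.get(u, 0) < 2 and degree.get(v, 0) < 2 and find(u) != find(v):
            parent[find(u)] = find(v)
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
            joins.append((u, v, n))
    return joins

# Hypothetical link counts between five contigs:
links = {("c1", "c2"): 12, ("c2", "c3"): 9, ("c1", "c3"): 2, ("c4", "c5"): 7}
print(greedy_scaffold(links))   # → [('c1', 'c2', 12), ('c2', 'c3', 9), ('c4', 'c5', 7)]
```

Note how the weakly supported ("c1", "c3") link is discarded: it would close the c1–c2–c3 chain into a cycle, which is the kind of conflict a greedy strategy resolves simply by processing the strongest evidence first.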
https://en.wikipedia.org/wiki/Gaussian%20fixed%20point | A Gaussian fixed point is a fixed point of the renormalization group flow which is noninteracting in the sense that it is described by a free field theory. The word Gaussian comes from the fact that the probability distribution is Gaussian at the Gaussian fixed point. This means that Gaussian fixed points are exactly solvable (trivially solvable in fact). Slight deviations from the Gaussian fixed point can be described by perturbation theory.
See also
UV fixed point
IR fixed point
Quantum triviality |
https://en.wikipedia.org/wiki/Major%20thirds%20tuning | Among alternative tunings for guitar, a major-thirds tuning is a regular tuning in which each interval between successive open strings is a major third ("M3" in musical abbreviation). Other names for major-thirds tuning include major-third tuning, M3 tuning, all-thirds tuning, and augmented tuning. By definition, a major-third interval separates two notes that differ by exactly four semitones (one-third of the twelve-note octave).
The Spanish guitar's tuning mixes four perfect fourths (five semitones) and one major third, the latter occurring between the G and B strings:
E–A–D–G–B–E.
This tuning, which is used for acoustic and electric guitars, is called "standard" in English, a convention that is followed in this article. While standard tuning is irregular, mixing four fourths and one major third, M3 tunings are regular: Only major-third intervals occur between the successive strings of the M3 tunings, for example, the open augmented C tuning.
G–C–E–G–C–E.
For each M3 tuning, the open strings form an augmented triad in two octaves.
For guitars with six strings, every major-third tuning repeats its three open-notes in two octaves, so providing many options for fingering chords. By repeating open-string notes and by having uniform intervals between strings, major-thirds tuning simplifies learning by beginners. These features also facilitate advanced guitarists' improvisation, precisely the aim of jazz guitarist Ralph Patt when he began popularizing major-thirds tuning between 1963 and 1964.
Avoiding standard tuning's irregular intervals
In standard tuning, the successive open-strings mix two types of intervals, four perfect-fourths and the major third between the G and B strings:
E2–A2–D3–G3–B3–E4.
Only major thirds occur as open-string intervals for major-thirds tuning, which is also called "major-third tuning", "all-thirds tuning", and "M3 tuning". The most viable M3 tunings are:
E2–G#2–C3–E3–G#3–C4
F2–A2–C#3–F3–A3–C#4
F#2–A#2–D3–F#3–A#3–D4
G2–B2–D#3–G3–B3–D#4 |
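Because a major third is exactly four semitones, the open strings of any M3 tuning can be generated mechanically. The sketch below uses MIDI note numbers for the semitone arithmetic (E2 = 40):

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def m3_tuning(root_midi, strings=6):
    """Open strings of a major-thirds tuning: each successive string is
    4 semitones (a major third) above the previous one."""
    def name(midi):
        return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return [name(root_midi + 4 * i) for i in range(strings)]

print(m3_tuning(40))   # → ['E2', 'G#2', 'C3', 'E3', 'G#3', 'C4']
```

The output spans 20 semitones and repeats each of its three pitch classes (E, G#, C) in two octaves, matching the augmented-triad structure described above.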
https://en.wikipedia.org/wiki/Gadd45 | The Growth Arrest and DNA Damage or gadd45 genes, including GADD45A (originally termed gadd45), GADD45B (originally termed MyD118), and GADD45G (originally termed CR6), are implicated as stress sensors that modulate the response of mammalian cells to genotoxic/physiological stress, and modulate tumor formation. Gadd45 proteins interact with other proteins implicated in stress responses, including PCNA, p21, Cdc2/CyclinB1, MEKK4, and p38 kinase.
GADD45 proteins regulate differentiation at the two cell stage of embryogenesis, a key stage of zygotic genome activation. GADD45 likely acts by promoting TET-mediated DNA demethylation leading to the induction of expression of genes necessary for zygote activation.
Overexpression of the GADD45 gene in the Drosophila melanogaster nervous system significantly increases longevity. This longevity increase can be attributed to more efficient recognition and repair of spontaneous DNA damages generated by physiological processes and environmental factors.
History
Gadd45a was discovered and characterized in the laboratory of Dr. Albert J. Fornace Jr. in 1988.
Gadd45b (MyD118) was discovered and characterized in the laboratories of Drs. Dan A. Liebermann and Barbara Hoffman in 1991.
Gadd45g (CR6) was discovered and characterized in the laboratories of Drs. Kenneth Smith, Dan A. Liebermann, and Barbara Hoffman in 1993 and 1999.
See also
GADD45A
GADD45B
GADD45G |
https://en.wikipedia.org/wiki/Eukaryogenesis | Eukaryogenesis, the process which created the eukaryotic cell and lineage, is a milestone in the evolution of life, since eukaryotes include all complex cells and almost all multicellular organisms. The process is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor (FECA). This cell had a new level of complexity and capability, with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose and peroxisomes. It evolved into a population of single-celled organisms that included the last eukaryotic common ancestor (LECA), gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms.
Context
Life arose on Earth once it had cooled enough for oceans to form. The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly-dividing ancestors arose the eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi.
Symbiogenesis
According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that |
https://en.wikipedia.org/wiki/Devitrification | Devitrification is the process of crystallization in a formerly crystal-free (amorphous) glass. The term is derived from the Latin vitreus, meaning glassy and transparent.
Devitrification in glass art
Devitrification occurs in glass art during the firing process of fused glass whereby the surface of the glass develops a whitish scum, crazing, or wrinkles instead of a smooth glossy shine, as the molecules in the glass change their structure into that of crystalline solids. While this condition is normally undesired, in glass art it is possible to use devitrification as a deliberate artistic technique.
Causes of devitrification, commonly referred to as "devit", can include holding a high temperature for too long, which causes the nucleation of crystals. The presence of foreign residue such as dust on the surface of the glass or inside the kiln prior to firing can provide nucleation points where crystals can propagate easily. The chemical compositions of some types of glass make them more vulnerable to devitrification than others; for example, a high lime content can be a factor in inducing this condition. In general, opaque glass devitrifies easily, since crystals are already present in the glass to give it its opaque appearance, which raises the chance of further crystallization.
Techniques for avoiding devitrification include cleaning the glass surfaces of dust or unwanted residue, and allowing rapid cooling once the piece reaches the desired temperature, until the temperature approaches the annealing temperature. Devit spray can be purchased to apply to the surfaces of the glass pieces prior to firing which is supposed to help prevent devitrification, however there is disagreement over the long term effectiveness of this solution and whether it should be used as a substitute for proper firing techniques.
Once devit has occurred, there are techniques that can be attempted to fix it, with varying degrees of success. One technique is to cover the surface with a sheet of clear glass |
https://en.wikipedia.org/wiki/Campylobacter%20jejuni | Campylobacter jejuni () is a species of pathogenic bacteria, one of the most common causes of food poisoning in Europe and in the US. The vast majority of cases occur as isolated events, not as part of recognized outbreaks. Active surveillance through the Foodborne Diseases Active Surveillance Network (FoodNet) indicates that about 20 cases are diagnosed each year for each 100,000 people in the US, while many more cases are undiagnosed or unreported; the CDC estimates a total of 1.5 million infections every year. The European Food Safety Authority reported 246,571 cases in 2018, and estimated approximately nine million cases of human campylobacteriosis per year in the European Union.
Campylobacter is a genus of bacteria that is among the most common causes of bacterial infections in humans worldwide. Campylobacter means "curved rod", deriving from the Greek kampylos (curved) and baktron (rod). Of its many species, C. jejuni is considered one of the most important from both a microbiological and public health perspective.
C. jejuni is commonly associated with poultry, and is also commonly found in animal feces. Campylobacter is a helical-shaped, non-spore-forming, Gram-negative, microaerophilic, nonfermenting motile bacterium with a single flagellum at one or both poles, which are also oxidase-positive and grow optimally at 37 to 42 °C. When exposed to atmospheric oxygen, C. jejuni is able to change into a coccal form. This species of pathogenic bacteria is one of the most common causes of human gastroenteritis in the world. Food poisoning caused by Campylobacter species can be severely debilitating, but is rarely life-threatening. It has been linked with subsequent development of Guillain–Barré syndrome, which usually develops two to three weeks after the initial illness. Individuals with recent C. jejuni infections develop Guillain-Barré syndrome at a rate of 0.3 per 1000 infections, about 100 times more often than the general population. Another chronic conditio |
https://en.wikipedia.org/wiki/Feud%20%28video%20game%29 | Feud is an adventure game designed by John Pickford for Binary Design and published in 1987 as the first game under the Bulldog Software label of Mastertronic. Versions were released for the Amiga, Amstrad CPC, Atari 8-bit family, Atari ST, Commodore 64, MS-DOS, MSX, and ZX Spectrum. The player takes on the role of the sorcerer Learic, and must fight his evil twin Leanoric.
Gameplay
The only real enemy is Leanoric. To defeat him, the player must collect the many herbs scattered across the map and mix them in a cauldron to make offensive and defensive spells. The spells vary from fireballs and lightning to invisibility and even turning peaceful villagers into zombies. A compass indicates Leanoric's location. Several of the herbs are found in a garden tended by a gardener. The gardener, though slow-moving, is also able to inflict damage on Learic.
Leanoric, as a non-player character, must do the same, collecting herbs to mix in his cauldron before hunting the player down to attack.
Development
After developing Zub, John Pickford went on to design a game that he wasn't going to program. This required designing the game on paper before development started and overseeing the work of a different programming team.
Reception
Reviewer "Ben" for CRASH wrote, "What a way to kick off a new label! Feud is completely brilliant. I love original games, so it is a real pleasure to see a cheapie that’s as ‘new’ in concept as this — and as playable." Robert Swan, in his favorable review for Atari User magazine, stated that the game has "fantastic graphics, great sound, addictive gameplay and plenty of action-packed screens." He also praised the game's low price. |
https://en.wikipedia.org/wiki/Clarke%27s%20generalized%20Jacobian | In mathematics, Clarke's generalized Jacobian is a generalization of the Jacobian matrix of a smooth function to non-smooth functions. It was introduced by Francis H. Clarke. |
https://en.wikipedia.org/wiki/Charles%20Sims%20%28mathematician%29 | Charles Coffin Sims (April 14, 1937 – October 23, 2017) was an American mathematician best known for his work in group theory. Together with Donald G. Higman he discovered the Higman–Sims group, one of the sporadic groups. The permutation group software developed by Sims also led to the proof of existence of the Lyons group (also known as the Lyons–Sims group) and the O'Nan group (also known as the O'Nan–Sims group).
Sims was born and raised in Elkhart, Indiana, and received his B.S. from the University of Michigan. He did his graduate studies at Harvard University, where he was a student of John G. Thompson and received his Ph.D. degree in 1963. In his thesis, he enumerated p-groups, giving sharp asymptotic upper and lower bounds. Sims was one of the founders of computational group theory and is the eponym of the Schreier–Sims algorithm. He was a faculty member at the Department of Mathematics at Rutgers University from 1965 to 2007. During that period he served, in particular, as Department Chair (1982–84) and Associate Provost for Computer Planning (1984–87). Sims retired from Rutgers in 2007 and moved to St. Petersburg, Florida.
In 2012, he became a fellow of the American Mathematical Society.
See also
Higman–Sims graph
Prevalence of p-groups
Sims conjecture |
https://en.wikipedia.org/wiki/Preaortic%20lymph%20nodes | The preaortic lymph nodes lie in front of the aorta, and may be divided into celiac lymph nodes, superior mesenteric lymph nodes, and inferior mesenteric lymph nodes groups, arranged around the origins of the corresponding arteries.
The celiac lymph nodes are grouped into three sets: the gastric, hepatic and splenic lymph nodes. These groups also form their own subgroups.
The superior mesenteric lymph nodes are grouped into three sets: the mesenteric, ileocolic and mesocolic lymph nodes.
The inferior mesenteric lymph nodes have a subgroup of pararectal lymph nodes.
The preaortic lymph nodes receive a few vessels from the lateral aortic lymph nodes, but their principal afferents are derived from the organs supplied by the three arteries with which they are associated: the celiac, superior mesenteric, and inferior mesenteric arteries.
Some of their efferents pass to the retroaortic lymph nodes, but the majority unite to form the intestinal lymph trunk, which enters the cisterna chyli.
Additional images |
https://en.wikipedia.org/wiki/Invadopodia | Invadopodia are actin-rich protrusions of the plasma membrane that are associated with degradation of the extracellular matrix in cancer invasiveness and metastasis. Very similar to podosomes, invadopodia are found in invasive cancer cells and are important for their ability to invade through the extracellular matrix, especially in cancer cell extravasation.
Invadopodia are generally visualized by the holes they create in ECM (fibronectin, collagen etc.)-coated plates, in combination with immunohistochemistry for the invadopodia localizing proteins such as cortactin, actin, Tks5 etc. Invadopodia can also be used as a marker to quantify the invasiveness of cancer cell lines in vitro using a hyaluronic acid hydrogel assay.
History and controversy
In the early 1980s, researchers noticed protrusions coming from the ventral membrane of cells that had been transformed by the Rous Sarcoma Virus and that they were at the sites of cell-to-extracellular matrix (ECM) adhesion. They termed these structures podosomes, or cellular feet, but it was later noticed that degradation of the ECM was occurring at these sites and the name invadopodia was coined to highlight the invasive nature of these protrusions. Since then, researchers have often used the two names interchangeably, but it is generally accepted that podosomes are the structures involved in normal biological processes (as when immune cells must cross tissue barriers or in bone remodeling) and invadopodia are the structures in invading cancer cells. However, there remains controversy around this nomenclature, with some scientists arguing that the two are different enough to be considered distinct structures, while others argue that invadopodia are simply dysregulated podosomes and that cancer cells do not simply "invent" new mechanisms.
Due to this confusion and the high similarity between the two structures, many have begun to group the two under the collective term invadosomes.
Structure and formation
Invadopodia have an |
https://en.wikipedia.org/wiki/Lea%20test | The LEA Vision Test System is a series of pediatric vision tests designed specifically for children who do not know how to read the letters of the alphabet that are typically used in eye charts. There are numerous variants of the LEA test which can be used to assess the visual capabilities of near vision and distance vision, as well as several other aspects of occupational health, such as contrast sensitivity, visual field, color vision, visual adaptation, motion perception, and ocular function and accommodation (eye).
History
The first version of the LEA test was developed in 1976 by Finnish pediatric ophthalmologist Lea Hyvärinen, MD, PhD. Dr. Hyvärinen completed her thesis on fluorescein angiography and helped start the first clinical laboratory in that area while serving as a fellow at the Wilmer Eye Institute of Johns Hopkins Hospital in 1967. During her time with the Wilmer Institute, she became interested in vision rehabilitation and assessment and has been working in that field since the 1970s, training rehabilitation teams, designing new visual assessment devices, and teaching. The first test within the LEA Vision Test System that Dr. Hyvärinen created was the classic LEA Symbols Test, followed shortly by the LEA Numbers Test, which was used in comparison studies within the field of occupational medicine.
Accuracy
Among the array of visual assessment picture tests that exist, the LEA symbols tests are the only tests that have been calibrated against the standardized Landolt C vision test symbol. The Landolt C is an optotype that is used throughout most of the world as the standardized symbol for measuring visual acuity. It is identical to the "C" that is used in the traditional Snellen chart.
In addition to this, the LEA symbols test has been experimentally verified to be both a valid and reliable measure of visual acuity. As is desirable of a good vision test, each of the four optotypes used in the symbols test has been proven to measure visual acuity sim |
https://en.wikipedia.org/wiki/Georges%20Henri%20Halphen | Georges-Henri Halphen (; 30 October 1844, Rouen – 23 May 1889, Versailles) was a French mathematician. He was known for his work in geometry, particularly in enumerative geometry and the singularity theory of algebraic curves, in algebraic geometry. He also worked on invariant theory and projective differential geometry.
Biography
He did his studies at École Polytechnique (X 1862), where he graduated in 1866. He continued his education at École d'Application de l'Artillerie et du Génie de Metz. As a lieutenant of artillery he was sent first to Auxonne and then to Strasbourg. In 1872, Halphen settled in Paris, where he became a lecturer at the École Polytechnique and began his scientific studies. He completed his dissertation in 1878. In 1872 he married Rose Marguerite Aron, with whom he had eight children, four sons and four daughters. Of the four sons, three joined the military and two of them died in World War I. Louis Halphen (1880–1950) was a French historian specializing in medieval history; Charles Halphen (1885–1915) was deputy secretary of the Société mathématique de France. One of his grandsons was Étienne Halphen (1911–1954), who did significant work in applied statistics.
Awards
Georges-Henri Halphen received the Steiner Prize of the Prussian Academy of Sciences in 1880 along with Max Noether. In 1881 Halphen received the Grand Prix of the Académie des sciences for his work on linear differential equations: Mémoire sur la Reduction des Equations Différentielles Linéaires aux Formes Intégrales. He received the Prix Poncelet in 1883 and the Prix Petit d'Ormoy in 1885. He was elected to the Académie des sciences in 1886 in the Section de Géométrie, replacing the deceased Jean Claude Bouquet. In 1887 Halphen was elected to the Accademia dei Lincei in Rome.
Works
Oeuvres de G.H. Halphen, in 4 vols. edited by Camille Jordan, Henri Poincaré, Charles Émile Picard with assistance from Ernest Vessiot, 1916, 1918, 1921, 1924, Paris, France: Gauthier-Villars
Trai |
https://en.wikipedia.org/wiki/Ehrhart%20polynomial | In mathematics, an integral polytope has an associated Ehrhart polynomial that encodes the relationship between the volume of a polytope and the number of integer points the polytope contains. The theory of Ehrhart polynomials can be seen as a higher-dimensional generalization of Pick's theorem in the Euclidean plane.
These polynomials are named after Eugène Ehrhart who studied them in the 1960s.
Definition
Informally, if P is a polytope and tP is the polytope formed by expanding P by a factor of t in each dimension, then L(P, t) is the number of integer lattice points in tP.
More formally, consider a lattice Λ in Euclidean space R^n and a d-dimensional polytope P in R^n with the property that all vertices of the polytope are points of the lattice. (A common example is Λ = Z^n and a polytope P for which all vertices have integer coordinates.) For any positive integer t, let tP be the t-fold dilation of P (the polytope formed by multiplying each vertex coordinate, in a basis for the lattice, by a factor of t), and let

L(P, t) = #(tP ∩ Λ)

be the number of lattice points contained in the polytope tP. Ehrhart showed in 1962 that L is a rational polynomial of degree d in t, i.e. there exist rational numbers L_0(P), …, L_d(P) such that:

L(P, t) = L_d(P) t^d + L_{d−1}(P) t^{d−1} + ⋯ + L_0(P)

for all positive integers t.
The Ehrhart polynomial of the interior of a closed convex polytope P can be computed as:

L(int(P), t) = (−1)^d L(P, −t),

where d is the dimension of P. This result is known as Ehrhart–Macdonald reciprocity.
Examples
Let P be a d-dimensional unit hypercube whose vertices are the integer lattice points all of whose coordinates are 0 or 1. In terms of inequalities,

P = {x ∈ R^d : 0 ≤ x_i ≤ 1 for 1 ≤ i ≤ d}.

Then the t-fold dilation of P is a cube with side length t, containing (t + 1)^d integer points. That is, the Ehrhart polynomial of the hypercube is L(P, t) = (t + 1)^d. Additionally, if we evaluate L(P, t) at negative integers, then

L(P, −t) = (−1)^d (t − 1)^d = (−1)^d L(int(P), t),
as we would expect from Ehrhart–Macdonald reciprocity.
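The hypercube case can be checked by brute force; the sketch below (illustrative code, with helper names of my own choosing) counts lattice points in dilations of the unit d-cube and verifies both the Ehrhart polynomial and Ehrhart–Macdonald reciprocity for small d and t:

```python
from itertools import product

def cube_lattice_points(d, t):
    """Count integer points in the t-fold dilation of the unit d-cube,
    i.e. points x in Z^d with 0 <= x_i <= t, by direct enumeration."""
    return sum(1 for _ in product(range(t + 1), repeat=d))

def cube_interior_points(d, t):
    """Count integer points strictly inside the dilated cube: 0 < x_i < t."""
    return sum(1 for _ in product(range(1, t), repeat=d))

for d in range(1, 4):
    for t in range(1, 6):
        # Ehrhart polynomial of the unit cube: L(P, t) = (t + 1)^d
        assert cube_lattice_points(d, t) == (t + 1) ** d
        # Reciprocity: L(int(P), t) = (-1)^d * L(P, -t) = (t - 1)^d
        assert cube_interior_points(d, t) == (-1) ** d * (-t + 1) ** d
```

Enumeration is exponential in d, of course; the point of the Ehrhart polynomial is precisely that these counts collapse to a degree-d polynomial in t.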
Many other figurate numbers can be expressed as Ehrhart polynomials. For instance, the square pyramidal numbers are given by the Ehrhart polynomials of a square pyramid with an integer unit square as its ba |
https://en.wikipedia.org/wiki/Rubblization | Rubblization is a construction and engineering technique that involves saving time and transportation costs by reducing existing concrete into rubble at its current location rather than hauling it to another location. Rubblization has two primary applications: creating a base for new roadways and decommissioning nuclear power plants.
Road construction
In road construction, a worn-out Portland cement concrete can be rubblized and then overlaid with a new surface, usually asphalt concrete. Specialized equipment breaks up the old roadway into small pieces to make a base for new pavement. This saves the expense of transporting the old pavement to a disposal site, and purchasing/transporting new base materials for the replacement paving. The result is a smoother pavement surface than would be obtained if a layer of asphalt were to be applied to the unbroken concrete surface. The technique has been used on roads since the late 1990s, and is also being used for concrete airport runways.
The rubblizing process provides many benefits versus other methods of road rehabilitation, such as crack-and-seat or removal and replacement of a concrete surface, including: rubblizing a concrete surface is 52% less expensive than removing and replacing the concrete; rubblizing reduces road reconstruction time from days of lane closures to hours, providing large savings to contractors and reducing the impact on the travelling public; and rubblization is an environmentally friendly "green" process.
Nuclear power plant decommissioning
Rubblization is used in decommissioning of nuclear power plants. As with other decommissioning techniques, all equipment from buildings is removed and the surfaces are decontaminated. The difference with rubblization is that above-grade structures, including the concrete containment building, are demolished into rubble and buried in the structure's foundation below ground. The site surface is then covered, regraded, and landscaped for unrestricted use. This saves the exp |
https://en.wikipedia.org/wiki/Intercast | Intercast was a short-lived technology developed in 1996 by Intel for broadcasting information such as web pages and computer software along with a single television channel. It required a compatible TV tuner card installed in a personal computer and a decoding program called Intel Intercast Viewer. The data for Intercast was embedded in the vertical blanking interval (VBI) of the video signal carrying the Intercast-enabled program, at a maximum of 10.5 kilobytes/sec in 10 of the 45 lines of the VBI.
With Intercast, a computer user could watch the TV broadcast in one window of the Intercast Viewer while viewing HTML web pages in another window. Users could also download software transmitted via Intercast. Most often the web pages received were relevant to the television program being broadcast, such as extra information relating to a television program, or extra news headlines and weather forecasts during a newscast. Intercast can be seen as a more modern version of teletext.
The Intercast Viewer software was bundled with several TV tuner cards at the time, such as the Hauppauge Win-TV card. Also at the time of Intercast's introduction, Compaq offered some models of computers with built-in TV tuners installed with the Intercast Viewer software.
Upon its debut, Intercast was used by several TV networks, such as NBC, CNN, The Weather Channel, and MTV Networks.
On June 25, 1996, Intel and NBC announced an arrangement which enabled users to watch coverage of the 1996 Summer Olympics and other programming from NBC News.
Intel discontinued support for Intercast a couple of years later.
NBC's series Homicide: Life on the Street was a show that was Intercast-enabled. |
https://en.wikipedia.org/wiki/Roman%20Jackiw | Roman Wladimir Jackiw (; ; 8 November 1939 – 14 June 2023) was a Polish-born American theoretical physicist and Dirac Medallist.
Biography
Jackiw was born in Lubliniec, Poland, in 1939 to a Ukrainian family; the family later moved to Austria and Germany before settling in New York City when he was about 10.
Jackiw earned his undergraduate degree from Swarthmore College and his PhD from Cornell University in 1966 under Hans Bethe and Kenneth Wilson. He was a professor at the Massachusetts Institute of Technology Center for Theoretical Physics from 1969 until his retirement in 2019, after which he retained his affiliation in emeritus status.
Jackiw co-discovered the chiral anomaly, which is also known as the Adler–Bell–Jackiw anomaly. In 1969, he and John Stewart Bell published their explanation, which was later expanded and clarified by Stephen L. Adler, of the observed decay of a neutral pion into two photons. This decay is forbidden by a symmetry of classical electrodynamics, but Bell and Jackiw showed that this symmetry cannot be preserved at the quantum level. Their introduction of an "anomalous" term from quantum field theory required that the sum of the charges of the elementary fermions had to be zero. This work also gave important support to the colour theory of quarks.
Jackiw is also known for Jackiw–Teitelboim gravity, a theory of gravity with one dimension each of space and time that includes a dilaton field. Sometimes known as the R = T model or as JT gravity, it is used to model some aspects of near-extremal black holes.
Jackiw married fellow physicist So-Young Pi, daughter of Korean writer Pi Chun-deuk. One of Jackiw's sons is Stefan Jackiw, an American violinist. The other is Nicholas Jackiw, a software designer known for inventing The Geometer's Sketchpad. His daughter, Simone Ahlborn, is an educator at Moses Brown School in Providence, Rhode Island.
Jackiw died 14 June 2023, at the age of 83.
Awards
Heineman Prize, 1995
On 26 May 2000, Jackiw received an honorary |
https://en.wikipedia.org/wiki/R-parity | R-parity is a concept in particle physics. In the Minimal Supersymmetric Standard Model, baryon number and lepton number are no longer conserved by all of the renormalizable couplings in the theory. Since baryon number and lepton number conservation have been tested very precisely, these couplings need to be very small in order not to be in conflict with experimental data. R-parity is a symmetry acting on the Minimal Supersymmetric Standard Model (MSSM) fields that forbids these couplings and can be defined as

P_R = (−1)^(3B + L + 2s)

or, equivalently, as

P_R = (−1)^(3(B − L) + 2s),

where s is spin, B is baryon number, and L is lepton number. All Standard Model particles have R-parity of +1 while supersymmetric particles have R-parity of −1.
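As a quick illustration of the formula (a toy sketch; the function name and the use of exact fractions are my own), R-parity can be evaluated directly from B, L, and spin:

```python
from fractions import Fraction

def r_parity(B, L, twice_spin):
    """P_R = (-1)^(3(B - L) + 2s); spin is passed as the integer 2s so that
    half-integer spins stay exact. B and L may be Fractions (e.g. 1/3 for quarks)."""
    exponent = 3 * (B - L) + twice_spin  # an integer for physical states
    return -1 if exponent % 2 else 1

# Standard Model particles come out +1, their superpartners -1:
assert r_parity(Fraction(0), Fraction(1), 1) == 1     # electron (s = 1/2)
assert r_parity(Fraction(0), Fraction(1), 0) == -1    # selectron (s = 0)
assert r_parity(Fraction(1, 3), Fraction(0), 1) == 1  # quark
assert r_parity(Fraction(1, 3), Fraction(0), 0) == -1 # squark
```

Note the two definitions in the text agree because their exponents differ by 4L, which is always even.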
Note that there are different forms of parity in physics with different effects and principles; this R-parity should not be confused with any other parity.
Dark matter candidate
With R-parity being preserved, the lightest supersymmetric particle (LSP) cannot decay. This lightest particle (if it exists) may therefore account for the observed missing mass of the universe that is generally called dark matter. In order to fit observations, it is assumed that this particle has a mass of 100 GeV to 1 TeV, is neutral and only interacts through weak interactions and gravitational interactions. It is often called a weakly interacting massive particle or WIMP.
Typically the dark matter candidate of the MSSM is a mixture of the electroweak gauginos and Higgsinos and is called a neutralino. In extensions to the MSSM it is possible to have a sneutrino be the dark matter candidate. Another possibility is the gravitino, which only interacts via gravitational interactions and does not require strict R-parity.
R-parity violating couplings of the MSSM
The renormalizable R-parity violating couplings of the MSSM are the superpotential terms λ L L E^c, λ' L Q D^c, and λ'' U^c D^c D^c.

λ'' U^c D^c D^c violates baryon number (B) by 1 unit.

The strongest constraint involving this coupling alone is from the non-observation of neutron–antineutron oscillations.

λ' L Q D^c violates lepton number (L) by 1 unit.
The strongest constraint involving this co |
https://en.wikipedia.org/wiki/Decodoku | Decodoku is a set of online citizen science games based on quantum error correction. The project is supported by the NCCR QSIT and the University of Basel, and allows the public to get involved with quantum error correction research.
The games present the clues left in a quantum computer when errors occur, and encourage the players to work out how best to correct them. These puzzles are presented in a manner similar to typical casual puzzle games, like 2048, Threes or Sudoku, with the scientific background explained via the project website and YouTube channel. Thus far three games have been released: Decodoku, Decodoku:Puzzles and Decodoku:Colors. |
https://en.wikipedia.org/wiki/AngelHack | AngelHack is an American company based in San Francisco that primarily organizes and hosts hackathons for other companies.
History
Founded in 2011, AngelHack distinguished itself from other hackathon organizers by coordinating global hackathons which took place simultaneously in different places.
The company now brands itself as a developer ecosystem. As of 2023, it is headed by Justin Ng.
Users
AngelHack claims over 200,000 members and hosts various educational events for developers.
Controversies
One of AngelHack's co-founders, Greg Gopman, was sued in 2014 by a fellow AngelHack co-founder for allegedly using the company's finances for "elaborate vacations" in Thailand and Colombia. After leaving AngelHack, Gopman has since garnered controversy for referring to homeless people in San Francisco as "hyenas." |
https://en.wikipedia.org/wiki/Convolute%20%28botany%29 | Convolute as a verb literally means to "roll together" or "roll around", from the Latin convolvere. In general application the word can mean to "tangle" or "complicate", but in botanical descriptions convolute usually is an adjective from the Latin convolutus, meaning "rolled around". It commonly refers to a special class of imbricate structures — those where the overlapping edges of leaves, scales or similar elements are spirally wrapped, each scale having one edge within the previous scale and one outside the next scale. In the family Restionaceae the leaf sheaths commonly are convolute in this sense. However in structures such as a spathe, where there is only one element, a convolute (or "convolutive") element is spirally wrapped around itself or its branch. This is common in the buds of leaves and inflorescences of members of the family Araceae. |
https://en.wikipedia.org/wiki/Cryptography | Cryptography, or cryptology (from "hidden, secret"; and graphein, "to write", or -logia, "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Cryptography prior to the modern age was effectively synonymous with encryption, converting readable information (plaintext) to unintelligible nonsense text (ciphertext), which can only be read by reversing the process (decryption). The sender of an encrypted (coded) message shares the decryption (decoding) technique only with the intended recipients to preclude access from adversaries. The cryptography literature often uses the names "Alice" (or "A") for the sender, "Bob" (or "B") for the intended recipient, and "Eve" (or "E") for the eavesdropping adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, cryptography methods have become increasingly complex and their applications more varied.
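As a minimal illustration of the sender/recipient scheme just described (a toy example of my own devising, emphatically not a secure algorithm), a repeating-key XOR cipher lets "Alice" and "Bob" share one key while "Eve" sees only ciphertext:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Encryption and decryption are the same operation, since x ^ k ^ k == x."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"meet me at noon"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)          # Alice encrypts
assert xor_cipher(ciphertext, key) == plaintext  # Bob decrypts with the shared key
assert ciphertext != plaintext                   # Eve intercepts only unreadable bytes
```

Real systems replace this with vetted ciphers such as AES; repeating-key XOR falls immediately to frequency analysis, which is exactly the kind of attack cryptanalysis studies.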
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in actual practice by any adversary. While it is theoretically possible to break into a well-designed system, it is infeasible in actual pract |
https://en.wikipedia.org/wiki/Berkeley%20cardinal | In set theory, Berkeley cardinals are certain large cardinals suggested by Hugh Woodin in a seminar at the University of California, Berkeley in about 1992.
A Berkeley cardinal is a cardinal κ in a model of Zermelo–Fraenkel set theory with the property that for every transitive set M that includes κ and α < κ, there is a nontrivial elementary embedding of M into M with α < critical point < κ. Berkeley cardinals are a strictly stronger cardinal axiom than Reinhardt cardinals, implying that they are not compatible with the axiom of choice.
A weakening of being a Berkeley cardinal is that for every binary relation R on Vκ, there is a nontrivial elementary embedding of (Vκ, R) into itself. This implies that we have elementary embeddings

j1, j2, j3, ...
j1: (Vκ, ∈) → (Vκ, ∈),
j2: (Vκ, ∈, j1) → (Vκ, ∈, j1),
j3: (Vκ, ∈, j1, j2) → (Vκ, ∈, j1, j2),
and so on. This can be continued any finite number of times, and to the extent that the model has dependent choice, transfinitely. Thus, plausibly, this notion can be strengthened simply by asserting more dependent choice.
While all these notions are incompatible with Zermelo–Fraenkel set theory (ZFC), their consequences do not appear to be false. There is no known inconsistency with ZFC in asserting that, for example:
For every ordinal λ, there is a transitive model of ZF + Berkeley cardinal that is closed under λ sequences.
See also
List of large cardinal properties |
https://en.wikipedia.org/wiki/Flavensomycin | Flavensomycin is an antibiotic and fungicide with the molecular formula C47H64NO14. Flavensomycin was first isolated in 1957 from a culture of Streptomyces tanashiensis bacteria. |
https://en.wikipedia.org/wiki/Akshay%20Venkatesh | Akshay Venkatesh (born 21 November 1981) is an Australian mathematician and a professor (since 15 August 2018) at the School of Mathematics at the Institute for Advanced Study. His research interests are in the fields of counting, equidistribution problems in automorphic forms and number theory, in particular representation theory, locally symmetric spaces, ergodic theory, and algebraic topology.
He was the first Australian to have won medals at both the International Physics Olympiad and International Mathematical Olympiad, which he did at the age of 12.
In 2018, he was awarded the Fields Medal for his synthesis of analytic number theory, homogeneous dynamics, topology, and representation theory. He is the second Australian and the second person of Indian descent to win the Fields Medal. He was on the Mathematical Sciences jury for the Infosys Prize in 2020.
Early years
Akshay Venkatesh was born in Delhi, India, and his family emigrated to Perth in Western Australia when he was two years old. He attended Scotch College. His mother, Svetha, is a computer science professor at Deakin University. A child prodigy, Akshay attended extracurricular training classes for gifted students in the state mathematical olympiad program, and in 1993, whilst aged only 11, he competed at the 24th International Physics Olympiad in Williamsburg, Virginia, winning a bronze medal. The following year, he switched his attention to mathematics and, after placing second in the Australian Mathematical Olympiad, he won a silver medal in the 6th Asian Pacific Mathematics Olympiad, before winning a bronze medal at the 1994 International Mathematical Olympiad held in Hong Kong. He completed his secondary education the same year, turning 13 before entering the University of Western Australia as its youngest ever student. Venkatesh completed the four-year course in three years and became, at 16, the youngest person to earn First Class Honours in pure mathematics from the university. He was aw |
https://en.wikipedia.org/wiki/Pleomorphism%20%28microbiology%29 | In microbiology, pleomorphism (from Ancient Greek , pléō, "more", and , morphḗ, form), also pleiomorphism, is the ability of some microorganisms to alter their morphology, biological functions or reproductive modes in response to environmental conditions. Pleomorphism has been observed in some members of the Deinococcaceae family of bacteria. The modern definition of pleomorphism in the context of bacteriology is based on variation of morphology or functional methods of the individual cell, rather than a heritable change of these characters as previously believed.
Bacteria
In the first decades of the 20th century, the term "pleomorphism" was used to refer to the idea that bacteria change morphology, biological systems, or reproductive methods dramatically according to environmental cues. This claim was controversial among microbiologists of the time, and split them into two schools: the monomorphists, who opposed the claim, and the pleomorphists, such as Antoine Béchamp, Ernst Almquist, Günther Enderlein, Albert Calmette, Gastons Naessens, Royal Raymond Rife, and Lida Mattman, who supported it. According to a 1997 journal article by Milton Wainwright, a British microbiologist, pleomorphism of bacteria lacked wide acceptance among modern microbiologists of the time.
Monomorphic theory, supported by Louis Pasteur, Rudolf Virchow, Ferdinand Cohn, and Robert Koch, emerged to become the dominant paradigm in modern medical science: it is now almost universally accepted that each bacterial cell is derived from a previously existing cell of practically the same size and shape. However it has recently been shown that certain bacteria are capable of dramatically changing shape.
Sergei Winogradsky took a middle-ground stance in the pleomorphism controversy. He agreed with the monomorphic school of thought, but disagreed with some of the foundational microbiological beliefs that the prominent monomorphists Cohn and Koch held. Winogradsky published a literature review t |
https://en.wikipedia.org/wiki/Microwave%20transmission | Microwave transmission is the transmission of information by electromagnetic waves with wavelengths in the microwave frequency range of 300 MHz to 300 GHz (1 m - 1 mm wavelength) of the electromagnetic spectrum. Microwave signals are normally limited to the line of sight, so long-distance transmission using these signals requires a series of repeaters forming a microwave relay network. It is possible to use microwave signals in over-the-horizon communications using tropospheric scatter, but such systems are expensive and generally used only in specialist roles.
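The line-of-sight limit described above is what forces relay networks: the radio horizon grows only with the square root of antenna height. A rough sketch (the 4/3 effective-Earth-radius factor is a common engineering approximation, and the function name is my own):

```python
import math

EARTH_RADIUS_KM = 6371.0

def radio_horizon_km(antenna_height_m, k=4.0 / 3.0):
    """Approximate distance to the radio horizon for an antenna at the given
    height, using the effective-Earth-radius model (k = 4/3 accounts for
    typical atmospheric refraction bending microwaves slightly earthward)."""
    effective_radius_km = k * EARTH_RADIUS_KM
    return math.sqrt(2.0 * effective_radius_km * antenna_height_m / 1000.0)

# Two towers of equal height can see each other over roughly twice one
# horizon distance, which is why relay chains space repeaters tens of km apart.
span_km = 2 * radio_horizon_km(100)  # about 80 km for 100 m towers
```

This is a geometric idealization; terrain, Fresnel-zone clearance, and weather all shorten practical link lengths.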
Although an experimental microwave telecommunication link across the English Channel was demonstrated in 1931, the development of radar in World War II provided the technology for practical exploitation of microwave communication. During the war, the British Army introduced the Wireless Set No. 10, which used microwave relays to multiplex eight telephone channels over long distances. A link across the English Channel allowed General Bernard Montgomery to remain in continual contact with his group headquarters in London.
In the post-war era, the development of microwave technology was rapid, which led to the construction of several transcontinental microwave relay systems in North America and Europe. In addition to carrying thousands of telephone calls at a time, these networks were also used to send television signals for cross-country broadcast, and later, computer data. Communication satellites took over the television broadcast market during the 1970s and 80s, and the introduction of long-distance fibre optic systems in the 1980s and especially 90s led to the rapid rundown of the relay networks, most of which are abandoned.
In recent years, there has been an explosive increase in use of the microwave spectrum by new telecommunication technologies such as wireless networks, and direct-broadcast satellites which broadcast television and radio directly into consumers' homes. Larger line-of-sight links are |
https://en.wikipedia.org/wiki/Anterior%20intermuscular%20septum%20of%20leg | The anterior intermuscular septum of leg or anterior crural intermuscular septum is a band of fascia which separates the lateral from the anterior compartment of leg.
The deep fascia of leg gives off from its deep surface, on the lateral side of the leg, two strong intermuscular septa, the anterior and posterior peroneal septa, which enclose the peroneus longus and brevis, and separate them from the muscles of the anterior and posterior crural regions, and several more slender processes which enclose the individual muscles in each region.
See also
Posterior intermuscular septum of leg |
https://en.wikipedia.org/wiki/Epsilon%20calculus | In logic, Hilbert's epsilon calculus is an extension of a formal language by the epsilon operator, where the epsilon operator substitutes for quantifiers in that language as a method leading to a proof of consistency for the extended formal language. The epsilon operator and epsilon substitution method are typically applied to a first-order predicate calculus, followed by a demonstration of consistency. The epsilon-extended calculus is further extended and generalized to cover those mathematical objects, classes, and categories for which there is a desire to show consistency, building on previously-shown consistency at earlier levels.
Epsilon operator
Hilbert notation
For any formal language L, extend L by adding the epsilon operator to redefine quantification:

(∃x) A(x) ⇔ A(ϵx A)  and  (∀x) A(x) ⇔ A(ϵx (¬A)).
The intended interpretation of ϵx A is some x that satisfies A, if it exists. In other words, ϵx A returns some term t such that A(t) is true, otherwise it returns some default or arbitrary term. If more than one term can satisfy A, then any one of these terms (which make A true) can be chosen, non-deterministically. Equality is required to be defined under L, and the only rules required for L extended by the epsilon operator are modus ponens and the substitution of A(t) to replace A(x) for any term t.
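Over a finite domain the operator's behavior can be mimicked directly; this toy sketch (the names and the particular default choice are mine) returns a witness when one exists, so that "∃x A(x)" holds exactly when A(ϵx A) does:

```python
def epsilon(domain, A):
    """Hilbert-style choice operator over a finite domain: return some term
    satisfying the predicate A, or an arbitrary default (here, the first
    element) when no term satisfies A."""
    for t in domain:
        if A(t):
            return t
    return next(iter(domain))  # arbitrary default; any fixed choice works

D = range(10)
is_big = lambda x: x > 6

# Existential quantification reduces to a single substitution:
exists_big = is_big(epsilon(D, is_big))                       # a witness exists
no_witness = (lambda A: A(epsilon(D, A)))(lambda x: x > 100)  # none exists
assert exists_big and not no_witness
```

The non-determinism in the formal calculus (any satisfying term may be chosen) is resolved here by simply taking the first match.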
Bourbaki notation
In tau-square notation from N. Bourbaki's Theory of Sets, the quantifiers are defined as follows:

(∃x)A ⇔ (τx(A) | x)A and (∀x)A ⇔ ¬(∃x)(¬A),

where A is a relation in L, x is a variable, and τx(A) juxtaposes a τ at the front of A, replaces all instances of x with □, and links them back to the τ. Then, letting Y be an assembly, (Y|x)A denotes the replacement of all variables x in A with Y.
This notation is equivalent to the Hilbert notation and is read the same. It is used by Bourbaki to define cardinal assignment since they do not use the axiom of replacement.
Defining quantifiers in this way leads to great inefficiencies. For instance, the expansion of Bourbaki's original definition of the number one, using this notation, has length approxi |
https://en.wikipedia.org/wiki/Palmar%20branch%20of%20the%20median%20nerve | The palmar branch of the median nerve is a branch of the median nerve which arises at the distal part of the forearm.
Branches
It pierces the palmar carpal ligament, and divides into a lateral and a medial branch;
The lateral branch supplies the skin over the ball of the thumb, and communicates with the volar branch of the lateral antebrachial cutaneous nerve.
The medial branch supplies the skin of the palm and communicates with the palmar cutaneous branch of the ulnar nerve.
Clinical significance
Unlike most of the median nerve innervation of the hand, the palmar branch travels superficial to the flexor retinaculum of the hand. Therefore, this portion of the median nerve usually remains functional in carpal tunnel syndrome.
Additional images |
https://en.wikipedia.org/wiki/Human%20Cell%20Atlas | The Human Cell Atlas is a project to describe all cell types in the human body. The initiative was announced by a consortium after its inaugural meeting in London in October 2016, which established the first phase of the project. Aviv Regev and Sarah Teichmann defined the goals of the project at that meeting, which was convened by the Broad Institute, the Wellcome Trust Sanger Institute and Wellcome Trust. Regev and Teichmann lead the project.
Description
The Human Cell Atlas will catalogue a cell based on several criteria, specifically the cell type, its state, its location in the body, the transitions it undergoes, and its lineage. It will gather data from existing research, and integrate it with data collected in future research projects. Among the data it will collect is the fluxome, genome, metabolome, proteome, and transcriptome.
Its scope is to categorize the 37 trillion cells of the human body to determine which genes each cell expresses by sampling cells from all parts of the body.
All aspects of the project will be made "available to the public for free", including software and results.
By April 2018, the project included more than 480 researchers conducting 185 projects.
Funding
In October 2017, the Chan Zuckerberg Initiative announced funding for 38 projects related to the Human Cell Atlas. Among them was a grant of undisclosed value to the Zuckerman Institute of the Columbia University Medical Center at Columbia University. The grant, titled "A strategy for mapping the human spinal cord with single cell resolution", will fund research to identify and catalogue gene activity in all spinal cord cells. The Translational Genomics Research Institute received a grant to develop a standard for the "processing and storage of solid tissues for single-cell RNA sequencing", compared to the typical practice of relying on the average of sequencing multiple cells. Project home pages are available at the Chan Zuckerberg Initiative's website.
The program is also |
https://en.wikipedia.org/wiki/Identity%20preservation | Identity preservation is the practice of tracking the details of agricultural shipments so that the specific characteristics of each shipment are known. Identity preserved (IP) is the designation given to such bulk commodities marketed in a manner that isolates and preserves the identity of a shipment, presumably because of unique characteristics that have value otherwise lost through commingling during normal storage, handling and shipping procedures. The concept of IP has been accorded greater importance with the introduction of genetically modified organisms into agriculture.
Technical and managerial techniques are used to track and document the paths that agricultural products move in the production process. A fully integrated IP system might track and document a commodity’s seed characteristics, initial planting, growing conditions, harvesting, shipping, storage, processing, packaging, and ultimate sale to the consumer. Separating organic products from conventionally raised ones is one type of IP system. IP systems are a central component of value chains. |
https://en.wikipedia.org/wiki/Equivalent%20dose | Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body, namely the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv).
Application
To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose.
Equivalent dose is designated by the ICRP as a "limiting quantity"; to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". This is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects.
Calculation
Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR which is dependent on the type and energy of the radiation R.
The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation.
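The calculation described above, H_T = Σ_R w_R · D_T,R, can be sketched in a few lines of Python (the weighting factors below are illustrative values following ICRP recommendations; neutron factors are energy-dependent and omitted):

```python
def equivalent_dose(absorbed_doses_gy, weighting_factors):
    """Equivalent dose H_T (Sv): sum over radiation types R of w_R * D_T,R (Gy)."""
    return sum(weighting_factors[r] * d for r, d in absorbed_doses_gy.items())

# Illustrative radiation weighting factors w_R (ICRP-style values):
W_R = {"photon": 1, "electron": 1, "proton": 2, "alpha": 20}

# A tissue receiving 2 mGy from photons and 0.1 mGy from alpha particles:
doses_gy = {"photon": 0.002, "alpha": 0.0001}
print(equivalent_dose(doses_gy, W_R))  # 0.002*1 + 0.0001*20 = 0.004 Sv
```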
The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness, whic |
https://en.wikipedia.org/wiki/Buffer-gas%20trap | The buffer-gas trap (BGT) is a device used to accumulate positrons (the antiparticles of electrons) efficiently while minimizing positron loss due to annihilation, which occurs when an electron and positron collide and the energy is converted to gamma rays. The BGT is used for a variety of research applications, particularly those that benefit from specially tailored positron gases, plasmas and/or pulsed beams. Examples include use of the BGT to create antihydrogen and the positronium molecule.
Design and operation
The schematic design of a BGT is illustrated in Fig. 1. It consists of a specially designed (Penning or Penning–Malmberg) type electromagnetic trap. Positrons are confined in a vacuum inside an electrode structure consisting of a stack of hollow, cylindrical metal electrodes such as that shown in Fig. 2. A uniform axial magnetic field inhibits positron motion radially, and voltages imposed on end electrodes prevent axial loss. Such traps are renowned for their good confinement properties for particles (such as positrons) of a single sign of charge.
Given a trap designed for good confinement, a remaining challenge is to efficiently fill the device. In the BGT, this is accomplished using a series of inelastic collisions with a molecular gas. In a positron-molecule collision, annihilation is much less probable than energy loss due to electronic or vibrational excitation. The BGT has a stepped potential well (Fig. 1) with regions at successively lower gas pressure. Electronic excitation of molecular nitrogen (N2) in the highest-pressure region is used to trap the positrons. This process is repeated until the particles are in a sufficiently low-pressure environment and the annihilation time is acceptably long. The particles cool to the ambient gas temperature due to inelastic vibrational and rotational collisions.
Trap efficiency is typically 5 – 30%, but can be as much as 40%. Positronium (Ps) formation via charge-exchange (e.g., e++ N2-> N2++ Ps) is a ma |
https://en.wikipedia.org/wiki/Sample%20and%20Data%20Relationship%20Format | The Sample and Data Relationship Format (SDRF) is part of the MAGE-TAB standard for communicating the results
of microarray investigations, including all information required for MIAME compliance.
An SDRF file is a tab-delimited file describing the relationships between samples, arrays, data, and other objects used or produced in a microarray investigation.
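As a sketch of that tab-delimited layout, the following writes a minimal, hypothetical SDRF fragment with Python's csv module (the sample names and array accession are invented; the column headings follow MAGE-TAB conventions):

```python
import csv
import io

# Each row traces one sample through to its processed data file.
rows = [
    ["Source Name", "Characteristics[organism]", "Array Design REF", "Derived Array Data File"],
    ["sample1", "Homo sapiens", "A-AFFY-44", "sample1_norm.txt"],
    ["sample2", "Homo sapiens", "A-AFFY-44", "sample2_norm.txt"],
]
buf = io.StringIO()
csv.writer(buf, delimiter="\t", lineterminator="\n").writerows(rows)
print(buf.getvalue())
```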
For simple experimental designs, constructing the SDRF file is straightforward, and even complex loop designs can be expressed in this format. |
https://en.wikipedia.org/wiki/Coordinatograph | A coordinatograph is an instrument which mechanically plots X and Y coordinates onto a surface, such as in compiling maps or in plotting control points such as in electronic circuit design.
One historic application of a coordinatograph was a machine that precisely placed and cut rubylith to create photomasks for early integrated circuits, including some of the earliest generations of the modern PC microprocessor. The coordinatograph-produced layout would then be photographically reduced 100:1 to create the production photomask.
See also
Cartography
Cartometry
Photolithography
Etching (microfabrication)
Design for manufacturability
Semiconductor device fabrication |
https://en.wikipedia.org/wiki/The%20Economics%20of%20Ecosystems%20and%20Biodiversity | The Economics of Ecosystems and Biodiversity (TEEB) was a study led by Pavan Sukhdev from 2007 to 2011. It is an international initiative to draw attention to the global economic benefits of biodiversity. Its objective is to highlight the growing cost of biodiversity loss and ecosystem degradation and to draw together expertise from the fields of science, economics and policy to enable practical actions. TEEB aims to assess, communicate and mainstream the urgency of actions through its five deliverables—D0: science and economic foundations, policy costs and costs of inaction, D1: policy opportunities for national and international policy-makers, D2: decision support for local administrators, D3: business risks, opportunities and metrics and D4: citizen and consumer ownership.
One motive for the study was to establish an objective global standard basis for natural capital accounting. Estimates put the current cost of biodiversity and ecosystem damage at over US$2 trillion (for the largest 3,000 companies, according to Trucost), with some estimates as high as US$6 trillion per year, and project it to reach 18% of global economic output by 2050. The World Bank in particular has led recent efforts to include the cost of biodiversity and climate harm in national accounts.
Its sponsors declared TEEB to be a "major international initiative to draw attention to the global economic benefits of biodiversity, to highlight the growing costs of biodiversity loss and ecosystem degradation, and to draw together expertise from the fields of science, economics and policy to enable practical actions moving forward." In October 2010 it released its report "Mainstreaming the Economics of Nature: a synthesis of the approach, conclusions and recommendations of TEEB" and launched the Bank of Natural Capital to communicate its findings to the general public.
History
The TEEB study was launched by Germany and the European Commission in response to a proposal by the G8+5 Environment Ministers in Potsdam, |
https://en.wikipedia.org/wiki/Flags%20of%20Asia | This is a gallery of international and national flags used in Asia.
Supranational and international flags
An incomplete list of flags representing intra-Asian international and supranational organisations, which omits intercontinental organisations such as the United Nations:
Flags of Asian sovereign states
Disputed or partially recognised states
Flags of Asian dependencies
Flags of Asian sub-divisions
China
Georgia
Iraq
Japan
Korea
Philippines
Russia
Uzbekistan
Flags of Asian cities
Flags of cities with over 1 million inhabitants.
Historical flags
Notes
See also
Flags of Africa
Flags of Europe
Flags of Oceania
Flags of North America
Flags of South America
Lists of flags of Asian countries
List of Afghan flags
List of Armenian flags
List of Azerbaijani flags
List of Bangladeshi flags
List of Bhutanese flags
List of Bruneian flags
List of Cambodian flags
List of Chinese flags
List of Cypriot flags
List of East Timorese flags
List of Egyptian flags
List of flags of Georgia (country)
List of Indian flags
List of flags of Indonesia
List of Iranian flags
List of flags of Iraq
List of flags of Israel
List of Japanese flags
List of Kazakh flags
List of North Korean flags
List of South Korean flags
List of Kuwaiti flags
List of Kyrgyz flags
List of flags of Laos
List of Malaysian flags
List of flags of the Maldives
List of Mongolian flags
List of Burmese flags
List of flags of Nepal
List of Omani flags
List of Pakistani flags
List of Palestinian flags
List of flags of the Philippines
List of Qatari flags
List of Russian flags
List of Saudi Arabian flags
List of Singaporean flags
List of Sri Lankan flags
List of Taiwanese flags
List of Thai flags
List of Turkish flags
List of Turkmen flags
List of Uzbek flags
List of flags of Vietnam
Asia
Symbols of Asia
Asia |
https://en.wikipedia.org/wiki/Telecom%20Gold | Telecom Gold (sometimes also known as BT Gold) was an early commercial electronic mail service launched by British Telecom in 1982. It was based on Prime minicomputers running Dialcom software under a customised version of PRIMOS. (ITT Dialcom was later acquired by BT in 1986.) The system offered various services, including e-mail to and from other Telecom Gold users and those of Dialcom services in other countries, and other e-mail systems such as Sprint and integration with telex, fax, online databases and an experimental OCR system for a short while. Later, X.400 functionality was added.
Users would dial into the system using a conventional modem and terminal emulator. Alternatively, users could dial a local number and connect via the PSS X.25 network. The X.400 services also had a Mail User Agent which ran on IBM PCs and compatibles.
The UK data centre was originally located in the basement of Beckett House 60-68 St Thomas St, Bermondsey, London, SE1 3QU but later moved to a custom built facility at Oxgate Centre, Oxgate Ln, London NW2 7JA which now houses LDEX1.
The service eventually became obsolete with the growth of the Internet in 1996.
Although BT continued to market the service, it decided not to develop its Telecom Gold successor, Mailbox, into an Internet Service Provider when it became clear that people wanted to connect to the Internet during the early to mid 1990s. Instead, BT decided to launch a new Internet Service Provider, called BTnet, in 1994, and within two years, Mailbox had ceased to exist.
During the 1980s, BT Gold hosted one of the first online communities. Users communicated using a noticeboard and via a simple chat facility which allowed real-time conversations to take place. The BT Gold community was worldwide, but the majority of users were in London and would meet regularly at "eyeballs" (a term coined from CB radio usage).
See also
Robert Schifreen & Steve Gold, alleged hackers of Prince Philip's Telecom Gold mailbox in 1985 |
https://en.wikipedia.org/wiki/Algorithm%20Queen | Algorithm Queen is a 2022 painting of Queen Elizabeth II by Ai-Da, a humanoid robot credited with being the world's first ultra-realistic robot artist. Ai-Da painted the Queen in celebration of her Platinum Jubilee.
Description
Algorithm Queen was layered and scaled to produce the final multi-dimensional portrait of the monarch. The portrait will be exhibited publicly in London later in 2022.
Ai-Da said, "I'd like to thank Her Majesty the Queen for her dedication, and for the service she gives to so many people. She is an outstanding, courageous woman who is utterly committed to public service. I think she's an amazing human being, and I wish The Queen a very happy Platinum Jubilee".
Aidan Meller, the robot's creator, said the first portrait of the Queen by a robot provided an opportunity to think about "all that has changed during the Queen's life”. He said, "We are excited Ai-Da Robot has made history just in time for the Queen's Jubilee".
Jonathan Jones, The Guardian's art critic, said the painting showed the Queen's eyes with "a vacant, not quite human look. The mixture of leaden accuracy and, at the same time, complete lack of emphasis, feeling or conviction in Ai-Da's depiction of Her Maj is a telling glimpse of the limits of the AI 'art' genre. The machine records, but does not see. Because it has no conscious mind, let alone emotions". |
https://en.wikipedia.org/wiki/Atmospheric%20Circulation%20Reconstructions%20over%20the%20Earth | The Atmospheric Circulation Reconstructions over the Earth, ACRE, is an international science project, begun in 2008, that recovers historical weather observations to reconstruct past global and local weather patterns and so support meteorological reanalysis. The project aims to collect weather data from the past 250 years by linking international meteorological organisations to support data recovery projects and the imaging and digitisation of historical meteorological observations made at, for example, inland stations, lighthouses, or by ships at sea or in ports. The project aims to create historical datasets that are spatially and temporally complete, so as to be of value at a local, or regional level, as well as on a global scale. ACRE aims to recover millions of historic weather observations. This data will be deposited into two databases,
ISPD - the International Surface Pressure Databank,
ICOADS - the International Comprehensive Ocean-Atmosphere Data Set.
This data will also be used to build a global dataset of historical weather reconstructions based on a grid of two degrees of latitude by two of longitude at six hourly intervals, entitled the 20th Century Reanalysis, or 20CR. Version one of 20CR, covering 1891 to 2008, was released in Autumn 2009. Version two, covering 1871 to 2010, appeared in December 2011. Version 3, 20CRv3, going back to 1836, was published in October 2019. It is intended that all the data recovered and the 20CR will be made freely available.
Partners
The project has nine core partners,
The Met Office Hadley Centre,
NOAA Earth System Research Laboratory,
Cooperative Institute for Research in Environmental Sciences,
NOAA National Climatic Data Center,
International Environmental Data Rescue Organization (IEDRO),
The British Library,
The University of Sussex,
The University of Giessen,
The University of Bern,
The University of Southern Queensland.
In addition to the core partners, some 35 projects and other organisations are involved o |
https://en.wikipedia.org/wiki/Fibred%20category | Fibred categories (or fibered categories) are abstract entities in mathematics used to provide a general framework for descent theory. They formalise the various situations in geometry and algebra in which inverse images (or pull-backs) of objects such as vector bundles can be defined. As an example, for each topological space there is the category of vector bundles on the space, and for every continuous map from a topological space X to another topological space Y is associated the pullback functor taking bundles on Y to bundles on X. Fibred categories formalise the system consisting of these categories and inverse image functors. Similar setups appear in various guises in mathematics, in particular in algebraic geometry, which is the context in which fibred categories originally appeared. Fibered categories are used to define stacks, which are fibered categories (over a site) with "descent". Fibrations also play an important role in categorical semantics of type theory, and in particular that of dependent type theories.
Fibred categories were introduced by Alexander Grothendieck, and developed in more detail by Jean Giraud.
Background and motivations
There are many examples in topology and geometry where some types of objects are considered to exist on or above or over some underlying base space. The classical examples include vector bundles, principal bundles, and sheaves over topological spaces. Another example is given by "families" of algebraic varieties parametrised by another variety. Typical of these situations is that, to a suitable type of map between base spaces X and Y, there is a corresponding inverse image (also called pull-back) operation taking the considered objects defined on Y to the same type of objects on X. This is indeed the case in the examples above: for example, the inverse image of a vector bundle on Y is a vector bundle on X.
Moreover, it is often the case that the considered "objects on a base space" form a category, or in other words have maps (morphisms) between them. In |
https://en.wikipedia.org/wiki/Function%20%28biology%29 | In evolutionary biology, function is the reason some object or process occurred in a system that evolved through natural selection. That reason is typically that it achieves some result, such as that chlorophyll helps to capture the energy of sunlight in photosynthesis. Hence, the organism that contains it is more likely to survive and reproduce, in other words the function increases the organism's fitness. A characteristic that assists in evolution is called an adaptation; other characteristics may be non-functional spandrels, though these in turn may later be co-opted by evolution to serve new functions.
In biology, function has been defined in many ways. In physiology, it is simply what an organ, tissue, cell or molecule does.
In the philosophy of biology, talk of function inevitably suggests some kind of teleological purpose, even though natural selection operates without any goal for the future. All the same, biologists often use teleological language as a shorthand for function. In contemporary philosophy of biology, there are three major accounts of function in the biological world: theories of causal role, selected effect, and goal contribution.
In pre-evolutionary biology
In physiology, a function is an activity or process carried out by a system in an organism, such as sensation or locomotion in an animal. This concept of function as opposed to form (respectively Aristotle's ergon and morphê) was central in biological explanations in classical antiquity. In more modern times it formed part of the 1830 Cuvier–Geoffroy debate, where Cuvier argued that an animal's structure was driven by its functional needs, while Geoffroy proposed that each animal's structure was modified from a common plan.
In evolutionary |
https://en.wikipedia.org/wiki/Knee%20dislocation | A knee dislocation is an injury in which there is disruption of the knee joint between the tibia and the femur. Symptoms include pain and instability of the knee. Complications may include injury to an artery, most commonly the popliteal artery behind the knee, or compartment syndrome.
About half of cases are the result of major trauma and about half the result of minor trauma. About 50% of the time, the joint spontaneously reduces before arrival at hospital. Typically there is a tear of the anterior cruciate ligament, posterior cruciate ligament, and either the medial collateral ligament or lateral collateral ligament. If the ankle–brachial pressure index is less than 0.9, CT angiography is recommended to detect blood vessel injury. Otherwise repeated physical exams may be sufficient. More recently, the FAST-D protocol, assessing the posterior tibial and dorsalis pedis arteries for a ‘tri-phasic wave pattern’ with ultrasound, has been shown to be reliable in ruling out significant arterial injury.
If the joint remains dislocated, reduction and splinting is indicated; this is typically carried out under procedural sedation. If signs of arterial injury are present, immediate surgery is generally recommended. Multiple surgeries may be required. In just over 10% of cases, an amputation of part of the leg is required.
Knee dislocations are rare, occurring in about 1 per 100,000 people per year. Males are more often affected than females. Younger adults are most often affected. Descriptions of this injury date back to at least 20 BC by Meges of Sidon.
Signs and symptoms
Symptoms include knee pain. The joint may also have lost its normal shape and contour. A joint effusion may, or may not, be present.
Complications
Complications may include injury to the artery behind the knee (popliteal artery) in about 20% of cases or compartment syndrome. Damage to the common peroneal nerve or tibial nerve may also occur. Nerve problems, if they occur, often persist to a varia |
https://en.wikipedia.org/wiki/Bolometer | A bolometer is a device for measuring radiant heat by means of a material having a temperature-dependent electrical resistance. It was invented in 1878 by the American astronomer Samuel Pierpont Langley.
Principle of operation
A bolometer consists of an absorptive element, such as a thin layer of metal, connected to a thermal reservoir (a body of constant temperature) through a thermal link. The result is that any radiation impinging on the absorptive element raises its temperature above that of the reservoir – the greater the absorbed power, the higher the temperature. The intrinsic thermal time constant, which sets the speed of the detector, is equal to the ratio of the heat capacity of the absorptive element to the thermal conductance between the absorptive element and the reservoir. The temperature change can be measured directly with an attached resistive thermometer, or the resistance of the absorptive element itself can be used as a thermometer. Metal bolometers usually work without cooling. They are produced from thin foils or metal films. Today, most bolometers use semiconductor or superconductor absorptive elements rather than metals. These devices can be operated at cryogenic temperatures, enabling significantly greater sensitivity.
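The intrinsic thermal time constant described above, τ = C/G, can be sketched numerically (the component values below are illustrative, not taken from the text):

```python
def thermal_time_constant(heat_capacity, thermal_conductance):
    """tau = C / G: sets the speed of a bolometer's thermal response.

    heat_capacity C in J/K (absorptive element),
    thermal_conductance G in W/K (link to the reservoir)."""
    return heat_capacity / thermal_conductance

# Illustrative values for a small cryogenic bolometer:
tau = thermal_time_constant(1e-12, 1e-10)  # C = 1 pJ/K, G = 100 pW/K
print(tau)  # 0.01 s: absorbed heat decays back to the reservoir on this timescale
```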
Bolometers are directly sensitive to the energy left inside the absorber. For this reason they can be used not only for ionizing particles and photons, but also for non-ionizing particles, any sort of radiation, and even to search for unknown forms of mass or energy (like dark matter); this lack of discrimination can also be a shortcoming. The most sensitive bolometers are very slow to reset (i.e., return to thermal equilibrium with the environment). On the other hand, compared to more conventional particle detectors, they are extremely efficient in energy resolution and in sensitivity. They are also known as thermal detectors.
Langley's bolometer
The first bolometers made by Langley consisted of two steel, platinum, o |
https://en.wikipedia.org/wiki/AiScaler | aiScaler Ltd. is a multinational software company founded in 2008. It develops application delivery controllers designed to allow dynamic web pages to scale content by intelligently caching frequently requested content. A number of websites in the Alexa top 1000 use aiScaler to manage their traffic.
aiScaler software can be deployed either on public cloud computing platforms such as Amazon Web Services or private virtual environments. aiScaler software is considered an edge device as it proxies traffic, augmenting or replacing content delivery networks endpoints.
History
aiScaler started as a project in 1994 by the web development company WBS. The project was called "Jxel", short for Java Accelerator. The technology was Java-based and intended to be run on a Java Virtual Machine sharing the same computer system as the HTTP server. It was rewritten in 2009 in the C programming language, occupying its own dedicated server. The new software ran on Linux only, taking advantage of changes in the input/output model based on epoll. In July 2008, aiScaler Ltd acquired all technology of WBS for $3.8 million.
Until 2013, aiScaler was known as "aiCache", producing a product called aiScaler. The company took over the name of its main product, phasing out the brand name aiCache.
Products
All aiScaler products can be categorized as Application Delivery Controllers
aiScaler is an HTTP accelerator that provides application delivery control, in addition to scaling and acceleration of content delivery
aiProtect offers protection against DDoS attacks and SQL injections
aiMobile is a Mobile content management system
aiCDN is a cloud-based Application Delivery Network that allows scaling of dynamic web applications.
aiScaler and Dell offer a hardware Application Delivery Controller, which fits in a standard rack unit server rack.
aiScaler is based on epoll technology allowing it to employ a right-threaded (only the specified number of workers process requests, n |
https://en.wikipedia.org/wiki/Singular%20integral | In mathematics, singular integrals are central to harmonic analysis and are intimately connected with the study of partial differential equations. Broadly speaking a singular integral is an integral operator
whose kernel function K : Rn×Rn → R is singular along the diagonal x = y. Specifically, the singularity is such that |K(x, y)| is of size |x − y|−n asymptotically as |x − y| → 0. Since such integrals may not in general be absolutely integrable, a rigorous definition must define them as the limit of the integral over |y − x| > ε as ε → 0, but in practice this is a technicality. Usually further assumptions are required to obtain results such as their boundedness on Lp(Rn).
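In the standard notation of Calderón–Zygmund theory, the operator in question has the form:

```latex
T(f)(x) = \int K(x,y)\, f(y)\, dy
```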
The Hilbert transform
The archetypal singular integral operator is the Hilbert transform H. It is given by convolution against the kernel K(x) = 1/(πx) for x in R. More precisely,
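The standard principal-value formula is:

```latex
H(f)(x) = \frac{1}{\pi}\,\lim_{\varepsilon \to 0} \int_{|x-y| > \varepsilon} \frac{f(y)}{x-y}\, dy
```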
The most straightforward higher dimension analogues of these are the Riesz transforms, which replace K(x) = 1/x with
where i = 1, ..., n and x_i is the i-th component of x in Rn. All of these operators are bounded on Lp and satisfy weak-type (1, 1) estimates.
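The Riesz kernels replacing 1/x above are, in standard notation (a dimensional normalizing constant c_n is often included):

```latex
K_i(x) = \frac{x_i}{|x|^{n+1}}, \qquad i = 1, \dots, n
```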
Singular integrals of convolution type
A singular integral of convolution type is an operator T defined by convolution with a kernel K that is locally integrable on Rn\{0}, in the sense that
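The local integrability condition usually imposed here is, in standard form:

```latex
\sup_{R > 0} \int_{R < |x| < 2R} |K(x)|\, dx < \infty
```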
Suppose that the kernel satisfies:
The size condition on the Fourier transform of K
The smoothness condition: for some C > 0,
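Reconstructed in standard form, these two conditions read:

```latex
\text{1. (size)} \quad \|\hat{K}\|_{L^\infty(\mathbf{R}^n)} < \infty,
\qquad
\text{2. (smoothness)} \quad \sup_{y \neq 0} \int_{|x| > 2|y|} |K(x - y) - K(x)|\, dx \le C
```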
Then it can be shown that T is bounded on Lp(Rn) and satisfies a weak-type (1, 1) estimate.
Property 1. is needed to ensure that convolution with the tempered distribution p.v. K given by the principal value integral
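The principal value integral referred to is, in standard notation:

```latex
\operatorname{p.v.}\, K[\varphi] = \lim_{\varepsilon \to 0} \int_{|x| > \varepsilon} K(x)\, \varphi(x)\, dx
```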
is a well-defined Fourier multiplier on L2. Neither of the properties 1. or 2. is necessarily easy to verify, and a variety of sufficient conditions exist. Typically in applications, one also has a cancellation condition
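In standard form the cancellation condition is:

```latex
\int_{a < |x| < b} K(x)\, dx = 0 \quad \text{for all } 0 < a < b
```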
which is quite easy to check. It is automatic, for instance, if K is an odd function. If |
https://en.wikipedia.org/wiki/Inelastic%20scattering | In chemistry, nuclear physics, and particle physics, inelastic scattering is a process in which the kinetic energy of a particle or a system of particles changes after a collision. Formally, the kinetic energy of the incident particle is not conserved (in contrast to elastic scattering). In an inelastic scattering process, some of the energy of the incident particle is lost or gained. Although inelastic scattering is historically related to the concept of inelastic collision in dynamics, the two concepts are quite distinct; inelastic collision in dynamics refers to processes in which the total macroscopic kinetic energy is not conserved. In general, scattering due to inelastic collisions will be inelastic, but, since elastic collisions often transfer kinetic energy between particles, scattering due to elastic collisions can also be inelastic, as in Compton scattering, where the two particles exchange energy in the collision, leaving one with less energy than it had.
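As a concrete illustration of inelastic photon scattering, the Compton wavelength shift of a photon scattered by a free electron can be computed directly (a standard formula, sketched here rather than taken from the text):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
C = 2.99792458e8        # speed of light, m/s

def compton_shift(theta_rad):
    """Wavelength gained by the photon when scattered through angle theta:
    delta_lambda = (h / (m_e * c)) * (1 - cos(theta))."""
    return (H / (M_E * C)) * (1 - math.cos(theta_rad))

# At 90 degrees the shift equals the electron's Compton wavelength, about 2.43 pm:
print(compton_shift(math.pi / 2))
```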
Electrons
When an electron is the incident particle, the probability of inelastic scattering, depending on the energy of the incident electron, is usually smaller than that of elastic scattering. Thus in the case of gas electron diffraction (GED), reflection high-energy electron diffraction (RHEED), and transmission electron diffraction, because the energy of the incident electron is high, the contribution of inelastic electron scattering can be ignored. Deep inelastic scattering of electrons from protons provided the first direct evidence for the existence of quarks.
Photons
When a photon is the incident particle, there is an inelastic scattering process called Raman scattering. In this scattering process, the incident photon interacts with matter (gas, liquid, and solid) and the frequency of the photon is shifted towards red or blue. A red shift can be observed when part of the energy of the photon is transferred to the interacting matter, where it adds to its internal energy in a process calle |
https://en.wikipedia.org/wiki/Mating%20plug | A mating plug, also known as a copulation plug, sperm plug, vaginal plug, or sphragis (Latin, from Greek σφραγίς sphragis, "a seal"), is a gelatinous secretion used in the mating of some species. It is deposited by a male into a female genital tract, such as the vagina, and later hardens into a plug or glues the tract together. While females can expel the plugs afterwards, the male's sperm still gets a time advantage in getting to the egg, which is often the deciding factor in fertilization.
The mating plug plays an important role in sperm competition and may serve as an alternative and more advantageous strategy to active mate guarding. In some species, such a passive mate-guarding strategy may reduce selection on large male size. Such a strategy may be advantageous because it would allow a male to increase reproductive success by spending more time pursuing new female mates rather than active mate guarding.
Composition
The mating plug of the Bombus terrestris was chemically analyzed and found to consist of palmitic acid, linoleic acid, oleic acid, stearic acid, and cycloprolylproline. It was found that the acids (without cycloprolylproline) were sufficient by themselves to create the plug. Researchers hypothesize that cycloprolylproline reduces female receptivity to further breeding.
Occurrence in nature
Mating plugs are used by many species, including several primates, kangaroos, bees, reptiles, rats, rodents, scorpions, mice, and spiders.
Use of a mating plug as a strategy for reproductive success can also be seen in a few taxa of Lepidoptera and other insects and is often associated with pupal mating. For example, to protect their paternity, male variable checkerspot butterflies pass a mating plug into the genital opening of females to prevent them from remating.
The Heliconius charithonia butterfly uses a mating plug in the form of a spermatophore that provides predatory defense chemicals and protein sources for developing eggs. It also acts as an anaphr |
https://en.wikipedia.org/wiki/Weili%20Dai | Weili Dai () is a Chinese-born American businesswoman. She is the co-founder, former director, and former president of Marvell Technology Group. Dai is a successful female entrepreneur, and is the only female co-founder of a major semiconductor company. In 2015, she was listed as the 95th richest woman in the world by Forbes. Her estimated net worth is US$1.6 billion as of December 2021.
Early life
Dai was born in Shanghai, China, where she played semi-professional basketball before moving to the US at the age of 17. She has a bachelor's degree in computer science from the University of California, Berkeley.
Career
Dai was involved in software development and project management at Canon Research Center America, Inc.
Dai co-founded the American semiconductor company Marvell in 1995 with her husband Sehat Sutardja. She directed Marvell's rise to become a large company. While at Marvell, Dai worked on strategic partnerships and marketed Marvell's technology for use in products across several markets. She has also worked to increase access to technology in the developing world and has served as an ambassador of opportunity between the US and China.
Dai served as chief operating officer, executive vice president, and general manager of the Communications Business Group at Marvell. She was corporate secretary of the board, and a director of the board at Marvell Technology Group Ltd.
Dai promoted partnership with the One Laptop Per Child program (OLPC) and women in science, technology, engineering, and mathematics (STEM) fields.
She sits on the board of the disaster relief organization Give2Asia and was named to the Committee of 100, an organization of prominent Chinese Americans. The Sutardja Dai Hall at her alma mater, UC Berkeley, was named for Dai along with her husband Sehat Sutardja, CEO of Marvell, and Pantas Sutardja, CTO of Marvell. Sutardja Dai Hall is home to the Center for Information Technology Research in the Interest of Society (CITRIS). In 2015, Dai was named to the Globa
https://en.wikipedia.org/wiki/Marker%20gene | In biology, a marker gene may have several meanings. In nuclear biology and molecular biology, a marker gene is a gene used to determine if a nucleic acid sequence has been successfully inserted into an organism's DNA. In particular, there are two sub-types of these marker genes: a selectable marker and a marker for screening. In metagenomics and phylogenetics, a marker gene is an orthologous gene group which can be used to delineate between taxonomic lineages.
Selectable marker
A selectable marker protects the organism from a selective agent that would normally kill it or prevent its growth. In a transformation reaction, depending on the transformation efficiency, only one in several million to billion cells may take up DNA. Rather than checking every single cell, scientists use a selective agent to kill all cells that do not contain the foreign DNA, leaving only the desired ones.
Antibiotics are the most common selective agents. In bacteria, antibiotics are used almost exclusively. In plants, antibiotics that kill the chloroplast are often used as well, although tolerance to salts and growth-inhibiting hormones is becoming more popular. In mammals, resistance to antibiotics that would kill the mitochondria is used as a selectable marker.
Screenable marker
A screenable marker will make cells containing the gene look different. There are three types of screening commonly used:
Green fluorescent protein makes cells glow green under UV light. A specialized microscope is required to see individual cells. Yellow and red versions are also available, so scientists can look at multiple genes at once. It is commonly used to measure gene expression.
GUS assay (using β-glucuronidase) is an excellent method for detecting a single cell by staining it blue without using any complicated equipment. The drawback is that the cells are killed in the process. It is particularly common in plant science.
Blue white screen is used in both bacteria and eukaryotic cells. The bacte |
https://en.wikipedia.org/wiki/Intermetacarpal%20joints | The intermetacarpal joints are the joints in the hand formed between the metacarpal bones. The bases of the second, third, fourth and fifth metacarpal bones articulate with one another by small surfaces covered with cartilage. The metacarpal bones are connected together by dorsal, palmar, and interosseous ligaments.
The dorsal metacarpal ligaments (ligamenta metacarpalia dorsalia) and palmar metacarpal ligaments (ligamenta metacarpalia palmaria) pass transversely from one bone to another on the dorsal and palmar surfaces.
The interosseous metacarpal ligaments (ligamenta metacarpalia interossea) connect their contiguous surfaces, just distal to their collateral articular facets.
The synovial membrane for these joints is continuous with that of the carpometacarpal joints.
See also
Transverse metacarpal ligament |
https://en.wikipedia.org/wiki/Marina%20Bosi | Marina Bosi is a Consulting Professor at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). Originally a flutist and flute teacher, she is known for her work on digital audio coding formats.
Education
Marina Bosi was born near Milan and raised in Florence. She studied the flute with Severino Gazzelloni, and earned a diploma in the flute at the Conservatory of Music in Florence. She then taught flute at the Conservatory of Music in Venice. She later went back to school at the University of Florence where she graduated with a doctorate in physics. Her dissertation (developed and implemented through research at IRCAM in Paris) was “Design of a High-Speed Computer System for the Processing of Musical Sound".
Career
She served as chief technology officer at MPEG LA and as a vice president at Digital Theater Systems (DTS). At Dolby Laboratories she helped to develop the AC-2, AC-3, and MPEG-2 Advanced Audio Coding technologies. She has also worked on devising standards for audio and video technology and digital content. Bosi was also a part of the research team that created the 5.1 channel Dolby Digital format.
Bosi came to the United States to be a visiting scholar at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). In the early 1990s, she developed Stanford's first course in digital audio coding, which eventually led to the publication of a textbook in the area. She is a founding member of the Digital Media Project and serves on its board of directors.
She is a past president of the Audio Engineering Society and has received the AES Board of Governors and Fellowship awards. In 2019, Marina Bosi was presented with the AES Silver Medal Award "in recognition of outstanding achievements in the development and standardization of audio and video coding and of secure digital rights management."
Selected publications
Marina Bosi and Richard E. Goldberg, Introduction to Digital Audio Coding and Standards, |
https://en.wikipedia.org/wiki/Ash%20Archive | The Ash Archive is a project founded in 2019 to restore ash trees to the landscape in England. English ash trees experienced massive dieback beginning in 2012 as a result of a fungal pathogen, Hymenoscyphus fraxineus. The archive contains over 3,000 trees, all of which were propagated from the shoots of trees that had demonstrated some resistance to the fungus. The archive was established with £1.9 million (about USD 2.5 million) in government funding, and followed a five-year project to identify ash trees that were resistant to the fungus. One of the final trees in the archive was planted in January 2020 by Nicola Spence, the Chief Plant Health Officer of the UK government. Spence said, "I’m delighted to acknowledge the successes of the Ash Archive project and welcome the International Year of Plant Health by planting an ash dieback-tolerant tree."
The Ash Archive trees were planted in the county of Hampshire in an unspecified location by the Future Trees Trust. Propagated shoots came from trees in East Anglia. All the trees will be monitored for five years to identify those that are the most resistant to disease. These will form the basis of the future breeding program. |
https://en.wikipedia.org/wiki/Anode%20ray | An anode ray (also positive ray or canal ray) is a beam of positive ions that is created by certain types of gas-discharge tubes. They were first observed in Crookes tubes during experiments by the German scientist Eugen Goldstein, in 1886. Later work on anode rays by Wilhelm Wien and J. J. Thomson led to the development of mass spectrometry.
Anode ray tube
Goldstein used a gas-discharge tube which had a perforated cathode. When an electrical potential of several thousand volts is applied between the cathode and anode, faint luminous "rays" are seen extending from the holes in the back of the cathode. These rays are beams of particles moving in a direction opposite to the "cathode rays", which are streams of electrons which move toward the anode. Goldstein called these positive rays Kanalstrahlen, "channel rays", or "canal rays", because these rays passed through the holes or channels in the cathode.
The process by which anode rays are formed in a gas-discharge anode ray tube is as follows. When the high voltage is applied to the tube, its electric field accelerates the small number of ions (electrically charged atoms) always present in the gas, created by natural processes such as radioactivity. These collide with atoms of the gas, knocking electrons off them and creating more positive ions. These ions and electrons in turn strike more atoms, creating more positive ions in a chain reaction. The positive ions are all attracted to the negative cathode, and some pass through the holes in the cathode. These are the anode rays.
By the time they reach the cathode, the ions have been accelerated to a sufficient speed such that when they collide with other atoms or molecules in the gas they excite the species to a higher energy level. In returning to their former energy levels these atoms or molecules release the energy that they had gained. That energy gets emitted as light. This light-producing process, called fluorescence, causes a glow in the region behind the cath |
https://en.wikipedia.org/wiki/Fast%20Library%20for%20Number%20Theory | The Fast Library for Number Theory (FLINT) is a C library for number theory applications. The two major areas of functionality currently implemented in FLINT are polynomial arithmetic over the integers and a quadratic sieve. The library is designed to be compiled with the GNU Multi-Precision Library (GMP) and is released under the GNU General Public License. It is developed by William Hart of the University of Kaiserslautern (formerly University of Warwick) and David Harvey of University of New South Wales (formerly Harvard University) to address the speed limitations of the PARI and NTL libraries.
Design Philosophy
Asymptotically Fast Algorithms
Implementations Fast as or Faster than Alternatives
Written in Pure C
Reliance on GMP
Extensively Tested
Extensively Profiled
Support for Parallel Computation
Functionality
Polynomial Arithmetic over the Integers
Quadratic Sieve |
https://en.wikipedia.org/wiki/Maxtor | Maxtor was an American computer hard disk drive manufacturer. Founded in 1982, it was the third largest hard disk drive manufacturer in the world before being purchased by Seagate in 2006.
History
Overview
In 1981, three former IBM employees began searching for funding, and Maxtor was founded the following year. In 1983, Maxtor shipped its first product, the Maxtor XT-1140. In 1985, Maxtor filed its initial public offering and started trading on the New York Stock Exchange as "MXO." Maxtor bought hard drive manufacturer MiniScribe in 1990. Maxtor was getting close to bankruptcy in 1992 and closed its engineering operations in San Jose, California, in 1993. In 1996, Maxtor introduced its DiamondMax line of hard drives with DSP-based architecture. In 2000, Maxtor acquired Quantum's hard drive division, which gave Maxtor the ATA/133 hard drive interface and helped Maxtor revive its server hard drive market. In 2006, Maxtor was acquired by Seagate.
Early financing
The Maxtor founders, James McCoy, Jack Swartz, and Raymond Niedzwiecki—graduates of the San Jose State University School of Engineering and former employees of IBM—began the search for funding in 1981. In early 1982, B.J. Cassin and Chuck Hazel (Bay Partners) provided the initial $3 million funding and the company officially began operations on July 1, 1982. In February 1983, it shipped its first product to Convergent Technology and immediately received an additional $5.5 million in its second round of funding. The company also began negotiations with the EDB (Economic Development Board) of Singapore for favorable terms before committing to Singapore as its offshore manufacturing location. The DBS (Development Bank of Singapore) agreed to provide financing to help grow the company in Singapore. In 1983, the company established a liaison and procurement office in Tokyo, headed by Tatsuya Yamamoto.
Maxtor's product architecture used eight disks; 15 surfaces recorded data and the final surface was where th |
https://en.wikipedia.org/wiki/FreeDOS | FreeDOS (formerly Free-DOS and PD-DOS) is a free software operating system for IBM PC compatible computers. It intends to provide a complete MS-DOS-compatible environment for running legacy software and supporting embedded systems.
FreeDOS can be booted from a floppy disk or USB flash drive. It is designed to run well under virtualization or x86 emulation.
Unlike most versions of MS-DOS, FreeDOS is composed of free software, licensed under the terms of the GNU General Public License. However, other packages that form part of the FreeDOS project include non-GPL software considered worthy of preservation, such as 4DOS, which is distributed under a modified MIT License.
History
The FreeDOS project began on 29 June 1994, after Microsoft announced it would no longer sell or support MS-DOS. Jim Hall – who at the time was a student – posted a manifesto proposing the development of PD-DOS, a public domain version of DOS. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. Between them, a kernel (by Villani), the COMMAND.COM command line interpreter (by Villani and Norman), and core utilities (by Hall) were created by pooling code they had written or found available. For some time, the project was maintained by Morgan "Hannibal" Toal. There have been many official pre-release distributions of FreeDOS before the final FreeDOS 1.0 distribution. GNU/DOS, an unofficial distribution of FreeDOS, was discontinued after version 1.0 was released.
Blinky the Fish is the mascot of FreeDOS. He was designed by Bas Snabilie.
Distribution
FreeDOS 1.1, released on 2 January 2012, is available for download as a CD-ROM image: a limited install disc that only contains the kernel and basic applications, and a full disc that contains many more applications (games, networking, development, etc.); the full disc is no longer available, having been superseded by a newer, fuller image for version 1.2. The legacy version 1.0 (2006) consisted of two CDs, one of which was an 8 MB install CD targeted at regular
https://en.wikipedia.org/wiki/Ehud%20de%20Shalit | Ehud de Shalit (; born 16 March 1955) is an Israeli number theorist and professor at the Hebrew University of Jerusalem.
Biography
Ehud de Shalit was born in Rehovot. His father was Amos de-Shalit. He completed his B.Sc. at the Hebrew University in 1975, and his Ph.D. at Princeton University in 1984 under the supervision of Andrew Wiles.
Academic career
De Shalit joined the faculty of Hebrew University in 1987 and was promoted to full professor in 2001. He is an editor for the Israel Journal of Mathematics.
Published works |
https://en.wikipedia.org/wiki/Index%20Translationum | The Index Translationum is UNESCO's database of book translations. Books have been translated for thousands of years, with no central record of the fact. The League of Nations established a record of translations in 1932. In 1946, the United Nations superseded the League and UNESCO was assigned the Index. In 1979, the records were computerised.
Since the Index counts translations of individual books, an author of many books each translated a few times can rank higher than an author of a few books translated many times. So, for example, while the Bible is the single most translated book in the world, it does not rank in the top ten of the index. The Index counts the Walt Disney Company, which employs many writers, as a single author. Authors with similar names are sometimes merged into one entry; for example, the ranking for "Hergé" applies not only to the author of The Adventures of Tintin (Hergé), but also to B.R. Hergehahn, Elisabeth Herget, and Douglas Hergert. Hence, the top authors, as the Index presents them, come from a database query whose results require interpretation.
According to the Index, Agatha Christie remains the most-translated individual author.
Statistics
Source: UNESCO
Top 10 Authors
Top 10 Countries
Top 10 Target Languages
Top 10 Original Languages
See also
UNESCO Collection of Representative Works, UNESCO's program for funding the translation of works
List of literary works by number of translations |
https://en.wikipedia.org/wiki/Constrictivity | Constrictivity is a dimensionless parameter used to describe transport processes (often molecular diffusion) in porous media.
Constrictivity is considered to depend on the ratio of the diameter of the diffusing particle to the pore diameter. The value of constrictivity is always less than 1. Constrictivity is defined not for a single pore, but as a parameter of the entire pore space under consideration.
The resistance to transport in porous media increases because the viscosity of the fluid (which fills the pores) increases in the vicinity of the pore walls (Renkin effect; see also electroviscous effects). This effect is important in very narrow pores and in pores that narrow to roughly the diameter of the diffusing particles. Constrictivity must be distinguished from the effects of Knudsen diffusion, which occurs when the particle interacts with the pore walls more than it does with other particles, owing to its long mean free path and the narrowness of the pores. Constrictivity, on the other hand, depends on the influence of the pore walls on the fluid filling the pores.
There are a number of empirical formulas used to estimate the value of constrictivity. For simple pore geometries, constrictivity can be inferred from the geometry of the porous media. In practice, the constrictivity together with the porosity and tortuosity are often used in models as purely empirical parameters to establish the effective diffusivities in porous media.
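As a concrete illustration of that last point, one widely used empirical combination scales the bulk diffusion coefficient by porosity and constrictivity and divides by the square of the tortuosity. Conventions vary between authors (some divide by tortuosity rather than its square), so treat this as a sketch rather than the definitive model:

```python
def effective_diffusivity(d_free, porosity, constrictivity, tortuosity):
    """Sketch of a common empirical model: D_eff = D * eps * delta / tau**2.

    d_free:         diffusion coefficient in the bulk fluid
    porosity:       volume fraction of pore space (0..1)
    constrictivity: dimensionless, always less than 1
    tortuosity:     >= 1 for any real pore network
    """
    if not 0.0 < constrictivity < 1.0:
        raise ValueError("constrictivity is dimensionless and always below 1")
    if tortuosity < 1.0:
        raise ValueError("tortuosity cannot be below 1")
    return d_free * porosity * constrictivity / tortuosity**2
```

A narrower pore network (smaller constrictivity) or a more winding one (larger tortuosity) lowers the effective diffusivity, matching the qualitative picture above.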
Footnotes
Sources
P. Grathwohl: Diffusion in natural porous media: Contaminant transport, sorption/desorption and dissolution kinetics. Kluwer Academic Publishers, 1998,
R. K. M. Thambynayagam: The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill, 2011,
van Brakel, J., Heertjes, P. M. (1974): Analysis of diffusion in macroporous media in terms of a porosity, a tortuosity and a constrictivity factor. Int. J. Heat Mass Transfer, 17: 1093–1103
https://en.wikipedia.org/wiki/Observer%20effect%20%28information%20technology%29 | In information technology, the observer effect is the impact on the behaviour of a computer process caused by the act of observing the process while it is running.
For example: if a process uses a log file to record its progress, the process could slow down. Furthermore, the act of viewing the file while the process is running could cause an I/O error in the process, which could, in turn, cause it to stop. Another example would be observing the performance of a CPU by running both the observed and observing programs on the same CPU, which will lead to inaccurate results because the observer program itself affects the CPU performance (modern, heavily cached and pipelined CPUs are particularly affected by this kind of observation).
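The logging example can be sketched in a few lines (the function and variable names are illustrative, not from the article): the observed run computes the same result, but the bookkeeping itself consumes time, and on a shared CPU it would also perturb whatever it is trying to measure.

```python
import time

def busy_work(n, log=None):
    """Sum 0..n-1; optionally record progress, which is itself an observation."""
    total = 0
    for i in range(n):
        total += i
        if log is not None:
            log.append(f"step {i}: total={total}")  # the cost of observing
    return total

n = 200_000
t0 = time.perf_counter()
plain = busy_work(n)
t_plain = time.perf_counter() - t0

log = []
t0 = time.perf_counter()
observed = busy_work(n, log)
t_observed = time.perf_counter() - t0

assert plain == observed  # the answer is unchanged; only the timing differs
print(f"unobserved: {t_plain:.4f}s  observed: {t_observed:.4f}s")
```

Writing the log to disk instead of a list, or viewing the file mid-run as the article describes, would magnify the effect further.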
The observer effect can have either a positive or a negative impact on the behaviour of a computer process. An example of a positive impact is the class of software bugs known as Heisenbugs, which diminish or change their negative behavior when observation mechanisms, such as debugging, are enabled. Such bugs are therefore unusually difficult to isolate.
https://en.wikipedia.org/wiki/MTS%20system%20architecture | MTS System Architecture describes the software organization of the Michigan Terminal System, a time-sharing computer operating system in use from 1967 to 1999 on IBM S/360-67, IBM System/370, and compatible computers.
Overview
The University of Michigan Multi-Programming Supervisor (UMMPS), has complete control of the hardware and manages a collection of job programs. One of the job programs is MTS, the job program with which most users interact. MTS operates as a collection of command language subsystems (CLSs). One of the CLSs allows for the execution of user programs. MTS provides a collection of system subroutines that are available to CLSs, user programs, and MTS itself. Among other things these system subroutines provide standard access to Device Support Routines (DSRs), the components that perform device dependent input/output.
Organization
The system is organized as a set of independent components with well-defined interfaces between them.
This idea is, of course, neither new nor unique; but MTS components are generally larger, interfaces between components more rigid, and a component communicates with fewer other components than in many systems. As a result, components are more independent of each other and it is easier to replace one component without affecting others.
The interface with the supervisor is the same for all components and very few special cases are allowed; for example, all input/output operations are done using the same supervisor facilities whether the input/output is for a card reader, a paging device, or any other device. Most access to supervisor services is via system subroutines that issue the necessary Supervisor Call instructions (SVCs) rather than by direct use of SVCs. Control blocks are accessed only indirectly by calls to subroutines within the component that "owns" the control block.
The interfaces used by user programs are the cleanest of all. User programs may never refer directly to any system control block (neither |
https://en.wikipedia.org/wiki/Stored-program%20computer | A stored-program computer is a computer that stores program instructions in electronically or optically accessible memory. This contrasts with systems that stored the program instructions with plugboards or similar mechanisms.
The definition is often extended with the requirement that the treatment of programs and data in memory be interchangeable or uniform.
Description
In principle, stored-program computers have been designed with various architectural characteristics. A computer with a von Neumann architecture stores program data and instruction data in the same memory, while a computer with a Harvard architecture has separate memories for storing program and data. However, the term stored-program computer is sometimes used as a synonym for the von Neumann architecture. Jack Copeland considers that it is "historically inappropriate, to refer to electronic stored-program digital computers as 'von Neumann machines'". Hennessy and Patterson wrote that the early Harvard machines were regarded as "reactionary by the advocates of stored-program computers".
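The shared-memory idea can be illustrated with a toy interpreter (entirely hypothetical, not any historical machine): the program's instructions and its data occupy cells of the same memory list, so in principle a program could even rewrite its own instructions.

```python
# Toy von Neumann-style machine (illustrative sketch, not a historical design):
# instructions and data share one uniform memory array.
def run(memory, pc=0):
    acc = 0
    while True:
        op, arg = memory[pc]
        if op == "LOAD":
            acc = memory[arg]       # fetch a data cell
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc       # write a data cell
        elif op == "HALT":
            return acc
        pc += 1

# Cells 0-3 hold the program; cells 4-6 hold data -- the same memory.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
result = run(mem)   # result == 5, and mem[6] now holds 5
```

A Harvard-style machine would instead keep the instruction tuples and the numeric data in two separate stores, so the program could never address its own instructions as data.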
History
The concept of the stored-program computer can be traced back to the 1936 theoretical concept of a universal Turing machine. Von Neumann was aware of this paper, and he impressed it on his collaborators.
Many early computers, such as the Atanasoff–Berry computer, were not reprogrammable. They executed a single hardwired program. As there were no program instructions, no program storage was necessary. Other computers, though programmable, stored their programs on punched tape, which was physically fed into the system as needed.
In 1936, Konrad Zuse anticipated in two patent applications that machine instructions could be stored in the same storage used for data.
The University of Manchester's Baby is generally recognized as the world's first electronic computer that ran a stored program—an event that occurred on 21 June 1948. However, the Baby was not regarded as a full-fledged computer, but more a proof o
https://en.wikipedia.org/wiki/Industry%20Dive | Industry Dive is an online business-to-business news organization, with an estimated 13 million readers across more than 25 industries, including banking and waste management. Since 2022, it has been owned by Informa plc, which bought its majority stake from Falfurrias Capital Partners for about $530 million.
Industry Dive writes for busy executives who read on their mobile phones. The company has reported revenues of $30 million to $60 million, mostly from selling ads. It has more than 300 employees, including 80 journalists and 12 engineers. Its headquarters is in Washington, D.C.
History
Industry Dive was formed in 2012 by Sean Griffey (president), Eli Dickinson (chief technology officer) and Ryan Willumson (chief revenue officer) and funded with $900,000 from private investors in 2012 and 2013. The company started by covering five industries: construction, education, marketing, utility, and waste.
In 2016, it began its Dive Awards to recognize the most innovative and disruptive businesses. Industry Dive's revenues quadrupled from 2015 to 2018, putting it in the top half of the Deloitte Technology Fast 500 and the top 20 percent of the Inc top 5000 list. In 2019, Falfurrias Capital Partners acquired a majority stake in the company. ID's content marketing clients included IBM, Siemens, and UPS.
In 2020, DCA Live named Industry Dive to its "Red Hot Companies" list, which recognizes the D.C. area's fastest-growing companies. In the same year, Industry Dive acquired CFO. In 2021, Industry Dive acquired PharmaVOICE.
In 2022, it was purchased by Informa plc, which bought its majority stake from Falfurrias Capital Partners for about $530 million.
Publications
Industry Dive operates through a number of industry verticals, each with their own website:
Agriculture Dive
Automotive Dive
Banking Dive
BioPharma Dive
CFO
CFO Dive
CIO Dive
Construction Dive
C-Store Dive
Cybersecurity Dive
Education Dive
Facilities Dive
Fashion Dive
Food Dive
Grocery Dive
Healthcare Dive
Higher |
https://en.wikipedia.org/wiki/Flash%20Gordon%20%28video%20game%29 | Flash Gordon is a video game based on a comic strip character of the same name. The game was published in 1986 by Mastertronic for the Amstrad CPC, Commodore 64, ZX Spectrum, and MSX personal computers.
It features three individual levels. The first is set on the jungle world of Arboria, in which Flash Gordon has to traverse a jungle-like maze to escape. The second level is a beat 'em up in which Flash fights Prince Barin. The final level is a 3D-style shooter in which Flash flies a rocket cycle in pursuit of Ming the Merciless.
Reception
Zzap!64 praised the Commodore 64 version of the game. Reviewers appreciated the gameplay variety offered by the three different sections of the game, and the quality of graphics and sound. It was rated 89% overall. |
https://en.wikipedia.org/wiki/Immunosenescence | Immunosenescence is the gradual deterioration of the immune system, brought on by natural age advancement. A 2020 review concluded that the adaptive immune system is affected more than the innate immune system. Immunosenescence involves both the host's capacity to respond to infections and the development of long-term immune memory. Age-associated immune deficiency is found in both long- and short-lived species as a function of their age relative to life expectancy rather than elapsed time. It has been studied in animal models including mice, marsupials and monkeys. Immunosenescence is a contributory factor to the increased frequency of morbidity and mortality among the elderly. Along with anergy and T-cell exhaustion, immunosenescence belongs among the major immune system dysfunctional states. However, while T-cell anergy is a reversible condition, as of 2020 no techniques for immunosenescence reversal had been developed.
Immunosenescence is not a random deteriorative phenomenon, rather it appears to inversely recapitulate an evolutionary pattern. Most of the parameters affected by immunosenescence appear to be under genetic control. Immunosenescence can be envisaged as the result of the continuous challenge of the unavoidable exposure to a variety of antigens such as viruses and bacteria.
Age-associated decline in immune function
Aging of the immune system is a controversial phenomenon. Senescence refers to replicative senescence from cell biology, which describes the condition when the upper limit of cell divisions (Hayflick limit) has been exceeded, and such cells commit apoptosis or lose their functional properties. Immunosenescence generally means a robust shift in both structural and functional parameters that has a clinically relevant outcome. Thymus involution is probably the most relevant factor responsible for immunosenescence. Thymic involution is common in most mammals; in humans it begins after puberty, as the immunological defense against most nov |
https://en.wikipedia.org/wiki/Spin%20model | A spin model is a mathematical model used in physics primarily to explain magnetism. Spin models may be either classical or quantum mechanical in nature. Spin models have been studied in quantum field theory as examples of integrable models. Spin models are also used in quantum information theory and in computability theory in theoretical computer science. The theory of spin models is a far-reaching and unifying topic that cuts across many fields.
Introduction
In ordinary materials, the magnetic dipole moments of individual atoms produce magnetic fields that cancel one another, because each dipole points in a random direction. Ferromagnetic materials below their Curie temperature, however, exhibit magnetic domains in which the atomic dipole moments are locally aligned, producing a macroscopic, non-zero magnetic field from the domain. These are the ordinary "magnets" with which we are all familiar.
The study of the behavior of such "spin models" is a thriving area of research in condensed matter physics. For instance, the Ising model describes spins (dipoles) that have only two possible states, up and down, whereas in the Heisenberg model the spin vector is allowed to point in any direction. In certain magnets, the magnetic dipoles are only free to rotate in a 2D plane, a system which can be adequately described by the so-called xy-model.
The lack of a unified theory of magnetism forces scientists to model magnetic systems with one of these spin models, or a combination of them, in order to understand the intricate behavior of atomic magnetic interactions. Numerical implementation of these models has led to several interesting results, such as quantitative research in the theory of phase transitions.
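The numerical study of phase transitions mentioned above is often done with Monte Carlo sampling. As a hedged illustration (a minimal sketch, not taken from the article), the following simulates the 2D Ising model with the Metropolis algorithm on a small periodic lattice; the lattice size, temperature, and sweep count are arbitrary choices for demonstration:

```python
import numpy as np

def ising_metropolis(L=10, T=1.0, sweeps=200, seed=0):
    """Metropolis sampling of the 2D Ising model (J=1, periodic boundaries).

    Returns the magnetization per spin after `sweeps` full lattice sweeps.
    """
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)  # start fully magnetized
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # sum of the four nearest neighbours (periodic wrap-around)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nn  # energy cost of flipping spin (i, j)
            # accept the flip with the Metropolis probability min(1, exp(-dE/T))
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
    return spins.mean()

m = ising_metropolis()
```

At T = 1.0, well below the critical temperature (about 2.269 in these units), the magnetization stays close to its fully ordered value; near and above the critical temperature it collapses toward zero, which is how the phase transition shows up numerically.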
Quantum
A quantum spin model is a quantum Hamiltonian model that describes a system of spins, either interacting or non-interacting. Such models are an active area of research in the fields of strongly correlated electron systems, quantum information theory, and quantum c
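A standard example of such a quantum Hamiltonian is the Heisenberg exchange interaction H = S₁·S₂ between two spin-1/2 particles. The sketch below (an illustrative construction, not from the article; it assumes the convention S = σ/2 with ħ = 1) builds the Hamiltonian with Kronecker products and diagonalizes it, recovering the familiar singlet eigenvalue -3/4 and triply degenerate triplet eigenvalue 1/4:

```python
import numpy as np

# Spin-1/2 operators S = sigma/2 (assumed convention, hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def heisenberg_two_site():
    """Eigenvalues of H = S1 . S2 for two spin-1/2 particles.

    The two-site Hilbert space is the tensor product of the single-spin
    spaces, so each term S1^a S2^a becomes a Kronecker product.
    """
    H = sum(np.kron(s, s) for s in (sx, sy, sz))
    return np.linalg.eigvalsh(H)  # sorted real eigenvalues of a Hermitian matrix

evals = heisenberg_two_site()
```

The spectrum follows from S₁·S₂ = (S_tot² − S₁² − S₂²)/2: the total-spin-0 singlet gives (0 − 3/4 − 3/4)/2 = −3/4 and the total-spin-1 triplet gives (2 − 3/4 − 3/4)/2 = 1/4. Larger lattices are built the same way, which is why the Hilbert-space dimension, and the difficulty of exact diagonalization, grows exponentially with the number of spins.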
https://en.wikipedia.org/wiki/Z4%20%28computer%29 | The Z4 was arguably the world's first commercial digital computer, and is the oldest surviving programmable computer. It was designed and manufactured by early computer scientist Konrad Zuse's company Zuse Apparatebau for an order placed by Henschel & Son in 1942. It was only partially assembled in Berlin, then completed in Göttingen, and was not delivered before the defeat of Nazi Germany in 1945. The Z4 was Zuse's final target for the Z3 design. Like the earlier Z2, it combined mechanical memory with electromechanical logic, so it was not a true electronic computer.
Construction
The Z4 was very similar to the Z3 in its design but was significantly enhanced in a number of respects. The memory consisted of 32-bit rather than 22-bit floating point words. The Program Construction Unit (Planfertigungsteil) punched the program tapes, making programming and correcting programs for the machine much easier by the use of symbolic operations and memory cells. Numbers were entered and output as decimal floating-point even though the internal working was in binary. The machine had a large repertoire of instructions including square root, MAX, MIN and sine. Conditional tests included tests for infinity. When delivered to ETH Zurich in 1950 the machine had a conditional branch facility added and could print on a Mercedes typewriter. There were two program tapes where the second could be used to hold a subroutine. (Originally six were planned.)
In 1944, Zuse was working on the Z4 with around two dozen people, including Wilfried de Beauclair. Some engineers who worked at the telecommunications facility of the OKW also worked for Zuse as a secondary occupation. Also in 1944 Zuse transformed his company to the Zuse KG (Kommanditgesellschaft, i.e. a limited partnership) and planned to manufacture 300 computers. This way he could also request additional staff and scientists as a contractor in the Emergency Fighter Program. Zuse's company also cooperated with Alwin Wal |