source | text |
|---|---|
https://en.wikipedia.org/wiki/OpenLDI | OpenLDI (Open LVDS Display Interface) is a high-bandwidth digital-video interface standard for connecting graphics/video processors to flat panel LCD monitors. Although the promoter group originally designed it for the desktop computer-to-monitor application, the majority of applications today are industrial display connections. For example, displays in medical imaging, machine vision, and construction equipment use OpenLDI chipsets.
OpenLDI is based on the FPD-Link specification, which has been the de facto standard for transferring graphics and video data through notebook computer hinges since the late 1990s. Both OpenLDI and FPD-Link use low-voltage differential signaling (LVDS) as the physical-layer signaling, and the three terms have mistakenly been used synonymously. (FPD-Link and OpenLDI are largely compatible beyond the physical layer, specifying the same serial data streams.)
The OpenLDI standard was promoted by National Semiconductor, Texas Instruments, Silicon Graphics (SGI) and others. OpenLDI wasn't used in many of the intended applications after losing the computer-to-monitor interconnect application to a competing standard, Digital Visual Interface (DVI).
The SGI 1600SW was the only monitor produced in significant quantities with an OpenLDI connection, though it had minor differences from the final published standards. The 1600SW used a 36-pin MDR36 male connector with a pinout that differs from that of the 36-pin Centronics-style connector in the OpenLDI standard.
Sony produced some VAIO displays and laptops using the standard.
(According to the SGI 1600SW entry, a few other displays were made by various manufacturers using the OpenLDI standard.)
See also
VGA |
https://en.wikipedia.org/wiki/Fine%20chemical | In chemistry, fine chemicals are complex, single, pure chemical substances, produced in limited quantities in multipurpose plants by multistep batch chemical or biotechnological processes. They are described by exacting specifications, used for further processing within the chemical industry and sold for more than $10/kg (see the comparison of fine chemicals, commodities and specialties). The class of fine chemicals is subdivided either on the basis of the added value (building blocks, advanced intermediates or active ingredients), or the type of business transaction, namely standard or exclusive products.
Fine chemicals are produced in limited volumes (< 1000 tons/year) and at relatively high prices (> $10/kg) according to exacting specifications, mainly by traditional organic synthesis in multipurpose chemical plants. Biotechnical processes are gaining ground. Fine chemicals are used as starting materials for specialty chemicals, particularly pharmaceuticals, biopharmaceuticals and agrochemicals. Custom manufacturing for the life science industry plays a big role; however, a significant portion of the fine chemicals total production volume is manufactured in-house by large users. The industry is fragmented and extends from small, privately owned companies to divisions of big, diversified chemical enterprises. The term "fine chemicals" is used in distinction to "heavy chemicals", which are produced and handled in large lots and are often in a crude state.
Since the late 1970s, fine chemicals have become an important part of the chemical industry. Their global total production value of $85 billion is split about 60-40 between in-house production in the life-science industry—the products' main consumers—and companies producing them for sale. The latter pursue both a "supply push" strategy, whereby standard products are developed in-house and offered ubiquitously, and a "demand pull" strategy, whereby products or services determined by the customer are provided excl |
https://en.wikipedia.org/wiki/Loom | A loom is a device used to weave cloth and tapestry. The basic purpose of any loom is to hold the warp threads under tension to facilitate the interweaving of the weft threads. The precise shape of the loom and its mechanics may vary, but the basic function is the same.
Etymology and usage
The word "loom" derives from the Old English geloma, formed from ge- (perfective prefix) and loma, a root of unknown origin; the whole word geloma meant a utensil, tool, or machine of any kind. In 1404 "lome" was used to mean a machine to enable weaving thread into cloth.
By 1838 "loom" had gained the additional meaning of a machine for interlacing thread.
Weaving Loom
Weaving is done by intersecting the longitudinal threads, the warp, i.e. the ones stretched on the loom (from the Proto-Indo-European *werp, "to bend") with the transverse threads, the weft, i.e. "that which is woven".
The major components of the loom are the warp beam, heddles, harnesses or shafts (as few as two, four is common, sixteen not unheard of), shuttle, reed and takeup roll. In the loom, yarn processing includes shedding, picking, battening and taking-up operations. These are the principal motions.
Shedding. Shedding is the raising of part of the warp yarn to form a shed (the vertical space between the raised and unraised warp yarns), through which the filling yarn, carried by the shuttle, can be inserted, forming the weft. On the modern loom, simple and intricate shedding operations are performed automatically by the heddle or heald frame, also known as a harness. This is a rectangular frame to which a series of wires, called heddles or healds, are attached. The yarns are passed through the eye holes of the heddles, which hang vertically from the harnesses. The weave pattern determines which harness controls which warp yarns, and the number of harnesses used depends on the complexity of the weave. Two common methods of controlling the heddles are dobbies and a Jacquard Head.
Picking. As the harness |
https://en.wikipedia.org/wiki/Origami | Origami (折り紙) is the Japanese art of paper folding. In modern usage, the word "origami" is often used as an inclusive term for all folding practices, regardless of their culture of origin. The goal is to transform a flat square sheet of paper into a finished sculpture through folding and sculpting techniques. Modern origami practitioners generally discourage the use of cuts, glue, or markings on the paper. Origami folders often use the Japanese word kirigami to refer to designs which use cuts.
In the detailed Japanese classification, origami is divided into stylized ceremonial origami (儀礼折り紙, girei origami) and recreational origami (遊戯折り紙, yūgi origami), and only recreational origami is generally recognized as origami. In Japan, ceremonial origami is generally called "origata" (:ja:折形) to distinguish it from recreational origami. The term "origata" is one of the old terms for origami.
The small number of basic origami folds can be combined in a variety of ways to make intricate designs. The best-known origami model is the Japanese paper crane. In general, these designs begin with a square sheet of paper whose sides may be of different colors, prints, or patterns. Traditional Japanese origami, which has been practiced since the Edo period (1603–1867), has often been less strict about these conventions, sometimes cutting the paper or using nonsquare shapes to start with. The principles of origami are also used in stents, packaging, and other engineering applications.
Etymology
The Japanese word origami itself is a compound of two smaller Japanese words: "ori" (root verb "oru"), meaning to fold, and "kami", meaning paper. Until recently, not all forms of paper folding were grouped under the word origami. Before that, paper folding for play was known by a variety of names, including "orikata" or "origata" (折形), "orisue" (折据), "orimono" (折物), "tatamigami" (畳紙) and others.
History
Distinct paperfolding traditions arose in Europe, China, and Japan which have been well-documented by his |
https://en.wikipedia.org/wiki/Home%20care%20in%20the%20United%20States | Home care (also referred to as domiciliary care, social care, or in-home care) is supportive care provided in the home. Care may be provided by licensed healthcare professionals who meet medical treatment needs or by professional caregivers who provide daily assistance to ensure the activities of daily living (ADLs) are met. In-home medical care is often and more accurately referred to as home health care or formal care. Home health care is distinct from non-medical care, custodial care, and private-duty care, which refer to assistance and services provided by persons who are not nurses, doctors, or other licensed medical personnel. For patients recovering from surgery or illness, home care may include rehabilitative therapies. For terminally ill patients, home care may include hospice care.
Home health services help adults, seniors, and pediatric clients who are recovering after a hospital or facility stay, or need additional support to remain safely at home and avoid unnecessary hospitalization. These Medicare-certified services may include short-term nursing, rehabilitative, therapeutic, and assistive home health care. This care is provided by registered nurses (RNs), licensed practical nurses (LPNs), physical therapists (PTs), occupational therapists (OTs), speech-language pathologists (SLPs), unlicensed assistive personnel (UAPs), home health aides (HHAs), home care agencies (HCAs) and medical social workers (MSWs) as a limited number of visits of up to one hour each, addressed primarily through the Medicare Home Health benefit. Paid individual providers can also provide health services through programs such as California's In-Home Supportive Services (IHSS), or may be paid privately.
The largest segment of home care consists of licensed and unlicensed non-medical personnel, including caregivers who assist the care seeker. Care assistants may help the individual with daily tasks such as bathing, cleaning the home, preparing meals and offering the recipient support and |
https://en.wikipedia.org/wiki/Interrupts%20in%2065xx%20processors | The 65xx family of microprocessors, consisting of the MOS Technology 6502 and its derivatives, the WDC 65C02, WDC 65C802 and WDC 65C816, and CSG 65CE02, all handle interrupts in a similar fashion. There are three hardware interrupt signals common to all 65xx processors and one software interrupt, the BRK instruction. The WDC 65C816 adds a fourth hardware interrupt—ABORT, useful for implementing virtual memory architectures—and the COP software interrupt instruction (also present in the 65C802), intended for use in a system with a coprocessor of some type (e.g., a floating point processor).
Interrupt types
The hardware interrupt signals are all active low, and are as follows:
RESET: a reset signal, level-triggered
NMI: a non-maskable interrupt, edge-triggered
IRQ: a maskable interrupt, level-triggered
ABORT: a special-purpose, non-maskable interrupt (65C816 only, see below), level-triggered
The detection of a RESET signal causes the processor to enter a system initialization period of six clock cycles, after which it sets the interrupt request disable flag in the status register and loads the program counter with the values stored at the processor initialization vector ($FFFC–$FFFD) before commencing execution. If operating in native mode, the 65C816/65C802 are switched back to emulation mode and stay there until returned to native mode under software control.
The detection of an NMI or IRQ signal, as well as the execution of a BRK instruction, will cause the same overall sequence of events, which are, in order:
The processor completes the current instruction and updates registers or memory as required before responding to the interrupt.
65C816/65C802 when operating in native mode: The program bank register (PB, the A16–A23 part of the address bus) is pushed onto the hardware stack.
The most significant byte (MSB) of the program counter (PC) is pushed onto the stack.
The least significant byte (LSB) of the program counter is pushed onto the stack.
The status register is pushed onto the stack.
The interrupt di |
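The stack-push sequence above translates directly into emulator logic. Below is a minimal, hypothetical sketch (the CPU model, field names, and memory layout are illustrative assumptions, not code from any real emulator):

```java
// Illustrative sketch of the 65xx interrupt-entry sequence described above.
// Simplified: decimal-mode and break-flag details are omitted.
class Cpu65xx {
    int pc, pb, status;          // program counter, program bank, status register
    boolean nativeMode;          // 65C816/65C802 native vs. emulation mode
    int sp = 0x01FF;             // stack pointer (page 1 on the 6502)
    int[] memory = new int[0x10000];

    void push(int value) { memory[sp--] = value & 0xFF; }

    // Respond to a (non-RESET) hardware interrupt whose handler address is
    // stored little-endian at `vector`.
    void serviceInterrupt(int vector) {
        if (nativeMode) push(pb);  // program bank register, native mode only
        push(pc >> 8);             // most significant byte of the program counter
        push(pc & 0xFF);           // least significant byte of the program counter
        push(status);              // status register
        status |= 0x04;            // set the interrupt-disable flag
        pc = memory[vector] | (memory[vector + 1] << 8); // jump to the handler
    }
}
```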
https://en.wikipedia.org/wiki/Sensory%20maps%20and%20brain%20development | Sensory maps and brain development is a concept in neuroethology that links the development of the brain over an animal’s lifetime with the spatial organization and patterning of the animal’s sensory processing. Sensory maps are the representations of sense organs as organized maps in the brain, and such mapping is a fundamental organization of sensory processing. Sensory maps are not always close to an exact topographic projection of the senses. The fact that the brain is organized into sensory maps has wide implications for processing; for example, lateral inhibition and coding for space are byproducts of mapping. The developmental process of an organism guides sensory map formation, though the details are not yet known. The development of sensory maps requires learning, long-term potentiation, experience-dependent plasticity, and innate characteristics. There is significant evidence for experience-dependent development and maintenance of sensory maps, and there is growing evidence on the molecular, synaptic and computational bases of experience-dependent development.
Sensory maps
List of known sensory maps:
Somatotopic maps: homunculus, rat barrel cortex, star-nosed mole nose
Retinotopic maps: visual field position, orientation, direction, spatial frequency
Tonotopic maps: interaural time difference, frequency tonotopic maps of the cochlea
Computational maps
The computational map is the “key building block in the infrastructure of information processing by the nervous system.” Computation, defined as the transformation of the representation of information, is the essence of brain function. Computational maps are involved in processing sensory information and motor programming, and they contain derived information that is accessible to higher-order processing regions. The first computational map to be proposed was the Jeffress model (1948), which stated that the computation of sound localization was dependent upon timing differences of sensory input. Since th |
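As an aside on the Jeffress model mentioned above, its core idea (an array of coincidence detectors, one per candidate interaural delay) can be sketched in a few lines. This toy Java illustration is an assumption, not code from any source:

```java
// Toy Jeffress-style model: each "detector" corresponds to one candidate
// interaural delay; the delay with the strongest coincidence (correlation)
// between the left- and right-ear signals encodes the sound's location.
public class JeffressModel {
    static int bestDelay(double[] left, double[] right, int maxDelay) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int d = -maxDelay; d <= maxDelay; d++) {      // one detector per delay
            double score = 0;
            for (int t = Math.max(0, -d); t < left.length && t + d < right.length; t++) {
                score += left[t] * right[t + d];           // coincidence strength
            }
            if (score > bestScore) { bestScore = score; best = d; }
        }
        return best; // interaural time difference, in samples
    }
}
```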
https://en.wikipedia.org/wiki/D-Orbit | D-Orbit is a private aerospace company headquartered in Italy with subsidiaries in Portugal, the UK and the US.
D-Orbit is mainly active in the space tug, also known as orbital transfer vehicle (OTV), market. While this concept has existed for several decades, it is only in the last few years that more examples have been produced and used.
D-Orbit has been operating commercial ION missions since September 2020, deploying satellites for customers like Planet Labs, EnduroSat, Elecnor Deimos, University of Southern California, SatRevolution, and Kleos, and operating payloads for the German HPS, High Performance Space Structure Systems, the Instituto de Astrofísica de Canarias (IAC), and the Swiss data security company Cysec SA.
History
D-Orbit was founded in 2011 by Luca Rossettini, currently serving as chief executive officer (CEO), and Renato Panesi, currently serving as chief commercial officer (CCO).
The company's initial focus was the development of a smart and autonomous decommissioning motor for satellites and launcher stages called D3 (D-Orbit Decommissioning Device). In 2015, the D3 project was partially funded by the European Union under the framework of Horizon 2020.
This provided the origin of the D-Orbit name, being just a contraction of the term "de-orbit", which denotes an orbital manoeuvre that pulls a spacecraft out of its operational orbit and inserts it into a reentry trajectory that will eventually cause it to burn up upon atmospheric entry.
In 2017, the company began the development of ION Satellite Carrier, an orbital transfer vehicle able to host a batch of satellites, transport them across orbits, and release each one of them, individually, into a custom orbital slot and operate third-party payloads.
The OTV performed its first commercial mission in September 2020.
In 2022, the company planned to go public via a SPAC at a valuation of $1.4bn; however, this was cancelled.
In June 2022, the company gained an award of around 1.95 million E |
https://en.wikipedia.org/wiki/Nick%20Mathewson | Nick Mathewson is an American computer scientist and co-founder of The Tor Project. He, along with Roger Dingledine, began working on onion routing shortly after they graduated from Massachusetts Institute of Technology (MIT) in the early 2000s. He is also known by his pseudonym nickm. Mathewson and Dingledine were the focus of increased media attention after the leak of the NSA's highly classified documents by Edward Snowden, and the subsequent public disclosure of the operation of XKeyscore, which targeted one of The Tor Project's onion servers along with the Mixminion remailer, both of which were run at MIT.
Education
Mathewson graduated from MIT in 2002, earning a Bachelor of Science degree in Computer Science. He later earned a Master of Engineering in Computer Science and Linguistics from MIT.
Works
The Tor Project
Tor was developed by Mathewson, along with his two colleagues, under a contract from the United States Naval Research Laboratory. Mathewson is also the lead developer responsible for the security, design, and maintenance of the Tor protocol, as well as for issuing security patches.
libevent
He is also the primary maintainer for libevent, an event notification library used by some prominent applications like Google Chrome, Transmission and also Tor.
Honors
Mathewson, along with the other two developers of the Tor Project (Roger Dingledine and Paul Syverson), was recognized in 2012 by Foreign Policy magazine as #78 in its list of the top 100 global thinkers of the year.
Selected publications |
https://en.wikipedia.org/wiki/Generics%20in%20Java | Generics are a facility of generic programming that were added to the Java programming language in 2004 within version J2SE 5.0. They were designed to extend Java's type system to allow "a type or method to operate on objects of various types while providing compile-time type safety". Compile-time type safety was not fully achieved, however: it was shown in 2016 that it is not guaranteed in all cases.
The Java collections framework supports generics to specify the type of objects stored in a collection instance.
In 1998, Gilad Bracha, Martin Odersky, David Stoutamire and Philip Wadler created Generic Java, an extension to the Java language to support generic types. Generic Java was incorporated in Java with the addition of wildcards.
Hierarchy and classification
According to the Java Language Specification:
A type variable is an unqualified identifier. Type variables are introduced by generic class declarations, generic interface declarations, generic method declarations, and by generic constructor declarations.
A class is generic if it declares one or more type variables. It defines one or more type variables that act as parameters. A generic class declaration defines a set of parameterized types, one for each possible invocation of the type parameter section. All of these parameterized types share the same class at runtime.
An interface is generic if it declares one or more type variables. It defines one or more type variables that act as parameters. A generic interface declaration defines a set of types, one for each possible invocation of the type parameter section. All parameterized types share the same interface at runtime.
A method is generic if it declares one or more type variables. These type variables are known as the formal type parameters of the method. The form of the formal type parameter list is identical to a type parameter list of a class or interface.
A constructor can be declared as generic, independently of whether the class that the cons |
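A short example makes the quoted specification concrete. The following sketch (illustrative, standard Java) declares a generic class with one type variable, a generic method with its own formal type parameter, and shows that all parameterized types share the same class at runtime:

```java
import java.util.ArrayList;
import java.util.List;

// Box<T> declares one type variable, T, so it is a generic class.
class Box<T> {
    private T value;
    void set(T value) { this.value = value; }
    T get() { return value; }

    // A generic method: U is its formal type parameter.
    static <U> List<U> singleton(U element) {
        List<U> list = new ArrayList<>();
        list.add(element);
        return list;
    }
}

public class GenericsDemo {
    public static void main(String[] args) {
        Box<String> a = new Box<>();
        a.set("hello");                    // compile-time type safety: Strings only
        Box<Integer> b = new Box<>();
        b.set(42);
        List<String> one = Box.singleton("x");
        // All parameterized types share the same class at runtime (type erasure):
        System.out.println(a.getClass() == b.getClass()); // prints true
        System.out.println(one);                          // prints [x]
    }
}
```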
https://en.wikipedia.org/wiki/Scaffold%20protein | In biology, scaffold proteins are crucial regulators of many key signalling pathways. Although scaffolds are not strictly defined in function, they are known to interact and/or bind with multiple members of a signalling pathway, tethering them into complexes. In such pathways, they regulate signal transduction and help localize pathway components (organized in complexes) to specific areas of the cell such as the plasma membrane, the cytoplasm, the nucleus, the Golgi, endosomes, and the mitochondria.
History
The first signaling scaffold protein discovered was the Ste5 protein from the yeast Saccharomyces cerevisiae. Three distinct domains of Ste5 were shown to associate with the protein kinases Ste11, Ste7, and Fus3 to form a multikinase complex.
Function
Scaffold proteins act in at least four ways: tethering signaling components, localizing these components to specific areas of the cell, regulating signal transduction by coordinating positive and negative feedback signals, and insulating correct signaling proteins from competing proteins.
Tethering signaling components
This particular function is considered a scaffold's most basic function. Scaffolds assemble signaling components of a cascade into complexes. This assembly may be able to enhance signaling specificity by preventing unnecessary interactions between signaling proteins, and enhance signaling efficiency by increasing the proximity and effective concentration of components in the scaffold complex. A common example of how scaffolds enhance specificity is a scaffold that binds a protein kinase and its substrate, thereby ensuring specific kinase phosphorylation. Additionally, some signaling proteins require multiple interactions for activation and scaffold tethering may be able to convert these interactions into one interaction that results in multiple modifications. Scaffolds may also be catalytic as interaction with signaling proteins may result in allosteric changes of these signaling component |
https://en.wikipedia.org/wiki/See%2C%20amid%20the%20Winter%27s%20Snow | "See, amid the Winter's Snow", also known as "The Hymn for Christmas", is an English Christmas carol, written by Edward Caswall and first published in 1858. In 1871 Sir John Goss composed a hymn tune for it, "Humility", and as "Hymn for Christmas Day", it was included in Christmas Carols New and Old, the anthology edited by Henry Ramsden Bramley and John Stainer.
History
Caswall wrote "See, amid the winter's snow" shortly after converting from the Church of England to the Roman Catholic Church and joining the Oratory of Saint Philip Neri. The hymn was first published in 1858 as part of The Masque of Mary and Other Poems by Caswall. In 1871, John Goss wrote the tune "Humility" specifically for the carol. Later that year, Bramley and Stainer selected "See, amid the winter's snow" to be published nationwide in their "Christmas Carols New and Old" hymn book. It was selected for inclusion as one of the carols that had "proved their hold upon the popular mind". While the carol became popular, a number of verses were cut from later publications of "See, amid the Winter's Snow". This includes the original final verse about the Virgin Mary, which was often cut out of non-Catholic hymnals.
The artist Edward Dalziel used the words of this hymn below his engraving of the English downland with animals, even though the engraving did not have any snow in it.
The tune has been re-used in a variety of social protest and union songs in the late 20th century, beginning with "Coal, Not Dole", written in the mid-1980s by Kay Sutcliffe about the closing of the Kent coal fields to a tune by Paul Abrahams, but later reset to Goss's tune at the suggestion of John Tams and recorded by Coope Boyes and Simpson. Shelley Posen wrote "No More Fish, No Fishermen" in 1996 about the end of the cod fishery in Newfoundland. Australian John Warner wrote "Bring out the Banners" on the 150th anniversary of Australia's eight-hour work day rule in 1996.
Compo |
https://en.wikipedia.org/wiki/Science%2C%20Technology%2C%20Engineering%20and%20Mathematics%20Network | The Science, Technology, Engineering and Mathematics Network or STEMNET is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work.
History
It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council.
Function
Its chief aim is to interest children in science, technology, engineering and mathematics. The intention is that primary school children develop an interest in these subjects, that secondary school pupils then choose science A levels, and that this leads on to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres.
STEM ambassadors
To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell.
Funding
STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it receives funding from the Department for Children, Schools and Families and Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) of both of the new departments.
See also
The WISE Campaign
Engineering and Physical Sciences Research Council
National Centre for Excellence in Teaching Mathematics
Association for Science Education
Glossary of areas of mathematics
Glossary of astronomy
Glossary of biology
Glossary of chemistry
Glossary of engineering
Glossary of physics |
https://en.wikipedia.org/wiki/Kernel%20page-table%20isolation | Kernel page-table isolation (KPTI or PTI, previously called KAISER) is a Linux kernel feature that mitigates the Meltdown security vulnerability (affecting mainly Intel's x86 CPUs) and improves kernel hardening against attempts to bypass kernel address space layout randomization (KASLR). It works by better isolating user space and kernel space memory. KPTI was merged into Linux kernel version 4.15, and backported to Linux kernels 4.14.11, 4.9.75, and 4.4.110. Windows and macOS released similar updates. KPTI does not address the related Spectre vulnerability.
Background on KAISER
The KPTI patches were based on KAISER (short for Kernel Address Isolation to have Side-channels Efficiently Removed), a technique conceived in 2016 and published in June 2017, before Meltdown was known. KAISER makes it harder to defeat KASLR, a 2014 mitigation for a much less severe issue.
In 2014, the Linux kernel adopted kernel address space layout randomization (KASLR), which makes it more difficult to exploit other kernel vulnerabilities and which relies on kernel address mappings remaining hidden from user space. Despite prohibiting access to these kernel mappings, it turns out that there are several side-channel attacks in modern processors that can leak the location of this memory, making it possible to work around KASLR.
KAISER addressed these problems in KASLR by eliminating some sources of address leakage. Whereas KASLR merely prevents address mappings from leaking, KAISER also prevents the data from leaking, thereby covering the Meltdown case.
KPTI is based on KAISER. Without KPTI enabled, whenever executing user-space code (applications), Linux would also keep its entire kernel memory mapped in page tables, although protected from access. The advantage is that when the application makes a system call into the kernel or an interrupt is received, kernel page tables are always present, so most context switching-related overheads (TLB flush, page-table swapping, etc) can |
https://en.wikipedia.org/wiki/GNU%20variants | GNU variants (also called GNU distributions or distros for short) are operating systems based upon the GNU operating system (the Hurd kernel, the GNU C library, system libraries and application software like GNU coreutils, bash, GNOME, the Guix package manager, etc). According to the GNU project and others, these also include most operating systems using the Linux kernel and a few others using BSD-based kernels.
GNU users usually obtain their operating system by downloading GNU distributions, which are available for a wide variety of systems ranging from embedded devices (for example, LibreCMC) and personal computers (for example, Debian GNU/Hurd) to powerful supercomputers (for example, Rocks Cluster Distribution).
Hurd kernel
Hurd is the official kernel developed for the GNU system (before Linux-libre also became an official GNU package). Debian GNU/Hurd was discussed for release as a technology preview with Debian 7.0 Wheezy; however, these plans were discarded due to the immature state of the system. The maintainers of Debian GNU/Hurd nevertheless decided to publish an unofficial release on the release date of Debian 7.0. Debian GNU/Hurd is not yet considered to provide the performance and stability expected from a production system. Among the open issues are the incomplete implementation of Java and X.org graphical user interfaces and limited hardware driver support. About two thirds of the Debian packages have been ported to Hurd.
Arch Hurd is a derivative work of Arch Linux, porting it to the GNU Hurd system with packages optimised for the Intel P6 architecture. Their goal is to provide an Arch-like user environment (BSD-style init scripts, pacman package manager, rolling releases, and a simple setup) on the GNU Hurd, which is stable enough for at least occasional use. Currently it provides a LiveCD for evaluation purposes and installation guides for LiveCD and conventional installation.
Linux kernel
The term GNU/Linux or GNU+Linux is used by the FSF and its s |
https://en.wikipedia.org/wiki/Amino%20acid%20kinase | In molecular biology, the amino acid kinase domain is a protein domain. It is found in protein kinases with various specificities, including the aspartate, glutamate and uridylate kinase families. In prokaryotes and plants the synthesis of the essential amino acids lysine and threonine is predominantly regulated by feed-back inhibition of aspartate kinase (AK) and dihydrodipicolinate synthase (DHPS). In Escherichia coli, thrA, metLM, and lysC encode aspartokinase isozymes that show feedback inhibition by threonine, methionine, and lysine, respectively. The lysine-sensitive isoenzyme of aspartate kinase from spinach leaves has a subunit composition of 4 large and 4 small subunits.
In plants although the control of carbon fixation and nitrogen assimilation has been studied in detail, relatively little is known about the regulation of carbon and nitrogen flow into amino acids. The metabolic regulation of expression of an Arabidopsis thaliana aspartate kinase/homoserine dehydrogenase (AK/HSD) gene, which encodes two linked key enzymes in the biosynthetic pathway of aspartate family amino acids has been studied. The conversion of aspartate into either the storage amino acid asparagine or aspartate family amino acids may be subject to a coordinated, reciprocal metabolic control, and this biochemical branch point is a part of a larger, coordinated regulatory mechanism of nitrogen and carbon storage and utilization. |
https://en.wikipedia.org/wiki/Connect%20%28organization%29 | Connect is a non-profit serving the San Diego and Southern California region. Connect elevates innovators and entrepreneurs throughout their growth journey by providing educational programming, mentorship, networking events, and access to capital. The current CEO is Mike Krenn.
Background
Founded at the University of California, San Diego (UC San Diego), Connect spun out of the university in 2005. Connect was founded in 1985 by Irwin M. Jacobs, co-founder and board member of Qualcomm Incorporated; Richard Atkinson, president emeritus, University of California (and former chancellor, UC San Diego); Lea Rudee, founding dean, UC San Diego School of Engineering; Mary Lindenstein Walshok, associate vice chancellor of extended studies and public programs at UC San Diego; Buzz Woolley, president of Girard Capital/Girard Foundation; David Hale, chairman of Hale BioPharma Ventures LLC; Dan Pegg, former president and CEO of San Diego Regional Economic Development Corporation; and Bob Weaver of Deloitte & Touche.
In 1986 UC San Diego recruited William (Bill) Otterson, chairman and CEO of Cipher Data Products, to head Connect. Over the following 13 years, Otterson built Connect by bringing together local entrepreneurs, academics and out-of-area venture capitalists through a variety of programs centered on innovation. Today Connect is an internationally renowned program that has been modeled in almost 40 regions around the world, including New York City, the UK, Denmark, Finland, Sweden, and Australia.
In May 2019, Connect merged with San Diego Venture Group (SDVG), led by then SDVG President, Mike Krenn.
Programs
Connect offers programs in the areas of research institution support, access to capital, entrepreneur mentorship, business development, and education on capital structure. Connect's lead program is Springboard, which offers free hands-on mentoring by veterans for innovators at the innovation, technology transfer, commercialization, transition and internati |
https://en.wikipedia.org/wiki/Web%20IDL | Web IDL is an interface description language (IDL) format for describing APIs (application programming interfaces) that are intended to be implemented in web browsers. Its adoption was motivated by the desire to improve the interoperability of web programming interfaces by specifying how languages such as ECMAScript should bind these interfaces.
Description
Web IDL is an IDL variant with:
A number of features that allow one to more easily describe the behavior of common script objects in a web context.
A mapping of how interfaces described with Web IDL correspond to language constructs within an ECMAScript execution environment.
Web specifications had been specified using OMG IDL since 1998, first with the DOM Level 1 specification. However, interfaces defined using OMG IDL were not able to specify behavior for JavaScript precisely, leading to issues with interoperability. WebIDL improved on this status quo by providing data types and binding specifications that make the intended behavior in JavaScript clearer.
Status of Web IDL specifications
The first edition of the Web IDL specification became a Candidate Recommendation on 19 April 2012 and a W3C Recommendation on 15 December 2016. For many years, the Editor's Draft of a potential second edition was what most new web specifications referenced. On 5 October 2021, the Editor's Draft was moved to the WHATWG as the Web IDL Living Standard per an update to the agreement between the W3C and WHATWG.
Usage
The W3C Wiki has a list of W3C Specifications that use Web IDL, and nearly all WHATWG specifications use it.
The Chromium Project has a page about using WebIDL to specify interfaces in Blink.
Mozilla uses Web IDL in their software creation process, mapping implementations to Web IDL specs.
When WebKit is built, the IDL files are parsed and code is generated to bind interfaces to implementations.
In the ES operating system, every system API is defined in Web IDL, and can be invoked from JavaScript directly. |
https://en.wikipedia.org/wiki/K-regular%20sequence | In mathematics and theoretical computer science, a k-regular sequence is a sequence satisfying linear recurrence equations that reflect the base-k representations of the integers. The class of k-regular sequences generalizes the class of k-automatic sequences to alphabets of infinite size.
Definition
There exist several characterizations of k-regular sequences, all of which are equivalent. Some common characterizations are as follows. For each, we take R′ to be a commutative Noetherian ring and we take R to be a ring containing R′.
k-kernel
Let k ≥ 2. The k-kernel of the sequence s(n) is the set of subsequences K_k(s) = { (s(k^e n + r))_{n ≥ 0} : e ≥ 0 and 0 ≤ r ≤ k^e − 1 }.
The sequence is (R′, k)-regular (often shortened to just "k-regular") if the R′-module generated by K_k(s) is a finitely-generated R′-module.
In the special case when R′ = R is a field, the sequence is (R′, k)-regular if K_k(s) is contained in a finite-dimensional vector space over R′.
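A standard worked example (well known in the literature, though not part of this excerpt): the binary digit-sum sequence is 2-regular.

```latex
% Let s(n) be the number of ones in the binary expansion of n. Then
\[
  s(2n) = s(n), \qquad s(2n + 1) = s(n) + 1,
\]
% so every subsequence in the 2-kernel
\[
  K_2(s) = \{\, (s(2^e n + r))_{n \ge 0} : e \ge 0,\ 0 \le r \le 2^e - 1 \,\}
\]
% is a \mathbb{Z}-linear combination of s(n) and the constant sequence 1.
% The module generated by the kernel is therefore finitely generated,
% and s is (\mathbb{Z}, 2)-regular.
```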
Linear combinations
A sequence s(n) is k-regular if there exists an integer E such that, for all e_j > E and 0 ≤ r_j ≤ k^{e_j} − 1, every subsequence of s of the form s(k^{e_j} n + r_j) is expressible as an R′-linear combination Σ_i c_{ij} s(k^{f_{ij}} n + b_{ij}), where c_{ij} is an integer, f_{ij} ≤ E, and 0 ≤ b_{ij} ≤ k^{f_{ij}} − 1.
Alternatively, a sequence s(n) is k-regular if there exist an integer r and subsequences s_1(n), ..., s_r(n) such that, for all 1 ≤ i ≤ r and 0 ≤ a ≤ k − 1, every sequence s_i(kn + a) in the k-kernel K_k(s) is an R′-linear combination of the subsequences s_i(n).
Formal series
Let x_0, ..., x_{k−1} be a set of k non-commuting variables and let τ be a map sending a natural number n to the string x_{a_0} ... x_{a_{e−1}}, where the base-k representation of n is the string a_{e−1} ... a_0. Then a sequence s(n) is k-regular if and only if the formal series Σ_{n ≥ 0} s(n) τ(n) is R′-rational.
Automata-theoretic
The formal series definition of a k-regular sequence leads to an automaton characterization similar to Schützenberger's matrix machine.
History
The notion of k-regular sequences was first investigated in a pair of papers by Allouche and Shallit. Prior to this, Berstel a |
https://en.wikipedia.org/wiki/Parrondo%27s%20paradox | Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is:
There exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately.
Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed. Winning strategies consisting of various combinations of losing strategies were explored in biology before Parrondo's paradox was published.
Illustrative examples
According to Harmer and Abbott, other than the last two examples (i.e. the saw-tooth example and the coin-tossing example), the other examples do not truly exhibit Parrondo's paradox in its strictest definition. In particular, since Parrondo's paradox was physically motivated from the flashing Brownian ratchet, a skip free process must be used to properly preserve this ratchet action and so only a +1 or -1 unit payoff structure can be used. See the coin-tossing example for an explicit illustration of this.
The simple example
Consider two games Game A and Game B, with the following rules:
In Game A, you lose $1 every time you play.
In Game B, you count how much money you have left — if it is an even number you win $3, otherwise you lose $5.
Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). It shou |
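A direct simulation of these rules (a minimal sketch; the class name and the 100-round horizon are arbitrary choices) shows the alternating BABABA... sequence winning:

```java
// Game A always loses $1; Game B wins $3 if the current capital is even,
// otherwise loses $5. Alternating B, A, B, A, ... from $100 is a net winner.
public class Parrondo {
    public static void main(String[] args) {
        int money = 100;
        for (int round = 0; round < 100; round++) {
            if (round % 2 == 0) {
                money += (money % 2 == 0) ? 3 : -5;  // Game B
            } else {
                money -= 1;                          // Game A
            }
        }
        // Capital enters every Game B even, so each BA pair nets +$2.
        System.out.println(money); // prints 200
    }
}
```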
https://en.wikipedia.org/wiki/Engineering%20tolerance | Engineering tolerance is the permissible limit or limits of variation in:
a physical dimension;
a measured value or physical property of a material, manufactured object, system, or service;
other measured values (such as temperature, humidity, etc.);
in engineering and safety, a physical distance or space (tolerance), as in a truck (lorry), train or boat under a bridge as well as a train in a tunnel (see structure gauge and loading gauge);
in mechanical engineering, the space between a bolt and a nut or a hole, etc.
Dimensions, properties, or conditions may have some variation without significantly affecting functioning of systems, machines, structures, etc. A variation beyond the tolerance (for example, a temperature that is too hot or too cold) is said to be noncompliant, rejected, or exceeding the tolerance.
Considerations when setting tolerances
A primary concern is to determine how wide the tolerances may be without affecting other factors or the outcome of a process. This can be done by the use of scientific principles, engineering knowledge, and professional experience. Experimental investigation is very useful for investigating the effects of tolerances: design of experiments, formal engineering evaluations, etc.
A good set of engineering tolerances in a specification, by itself, does not imply that compliance with those tolerances will be achieved. Actual production of any product (or operation of any system) involves some inherent variation of input and output. Measurement error and statistical uncertainty are also present in all measurements. With a normal distribution, the tails of measured values may extend well beyond plus and minus three standard deviations from the process average. Appreciable portions of one (or both) tails might extend beyond the specified tolerance.
The process capability of systems, materials, and products needs to be compatible with the specified engineering tolerances. Process controls must be in place and an effective qu |
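The compatibility between process spread and tolerance limits discussed above is conventionally quantified by the process-capability indices. These standard formulas are not part of the excerpt; USL and LSL denote the upper and lower specification limits, and μ and σ the process mean and standard deviation:

```latex
\[
  C_p = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma},
  \qquad
  C_{pk} = \min\!\left(\frac{\mathrm{USL} - \mu}{3\sigma},\;
                       \frac{\mu - \mathrm{LSL}}{3\sigma}\right).
\]
% C_p compares the tolerance width to the natural 6-sigma process spread;
% C_pk additionally penalizes a process mean that is off-center.
```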
https://en.wikipedia.org/wiki/Epoophoron | The epoophoron or epoöphoron (also called organ of Rosenmüller or the parovarium) is a remnant of the mesonephric tubules that can be found next to the ovary and fallopian tube.
Anatomy
It may contain 10–15 transverse small ducts or tubules that lead to the Gartner's duct (also longitudinal duct of epoophoron) that represents the caudal remnant of the mesonephric duct and passes through the broad ligament and the lateral wall of the cervix and vagina.
The epoophoron is a homologue to the epididymis in the male.
While the epoophoron is located in the lateral portion of the mesosalpinx and mesovarium, the paroophoron (residual remnant of that part of the mesonephric duct that forms the paradidymis in the male) lies more medially in the mesosalpinx.
Histology
It has a unique histological profile.
Clinical significance
Clinically the organ may give rise to a local paraovarian cyst or adenoma.
See also
List of homologues of the human reproductive system
Vesicular appendages of epoophoron |
https://en.wikipedia.org/wiki/Ideal%20class%20group | In number theory, the ideal class group (or class group) of an algebraic number field K is the quotient group J_K/P_K, where J_K is the group of fractional ideals of the ring of integers of K, and P_K is its subgroup of principal ideals. The class group is a measure of the extent to which unique factorization fails in the ring of integers of K. The order of the group, which is finite, is called the class number of K.
The theory extends to Dedekind domains and their fields of fractions, for which the multiplicative properties are intimately tied to the structure of the class group. For example, the class group of a Dedekind domain is trivial if and only if the ring is a unique factorization domain.
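A classic worked example of what the class group measures (standard number theory, not part of the excerpt):

```latex
% In K = \mathbb{Q}(\sqrt{-5}), whose ring of integers is \mathbb{Z}[\sqrt{-5}],
\[
  6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}),
\]
% where all four factors are irreducible and no two are associates, so
% unique factorization fails. Correspondingly the class group of K is
% nontrivial: it has order 2, i.e. the class number of K is 2.
```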
History and origin of the ideal class group
Ideal class groups (or, rather, what were effectively ideal class groups) were studied some time before the idea of an ideal was formulated. These groups appeared in the theory of quadratic forms: in the case of binary integral quadratic forms, as put into something like a final form by Carl Friedrich Gauss, a composition law was defined on certain equivalence classes of forms. This gave a finite abelian group, as was recognised at the time.
Later Ernst Kummer was working towards a theory of cyclotomic fields. It had been realised (probably by several people) that failure to complete proofs in the general case of Fermat's Last Theorem by factorisation using the roots of unity was for a very good reason: a failure of unique factorization – i.e., the fundamental theorem of arithmetic – to hold in the rings generated by those roots of unity was a major obstacle. Out of Kummer's work for the first time came a study of the obstruction to the factorisation. We now recognise this as part of the ideal class group: in fact Kummer had isolated the p-torsion in that group for the field of pth roots of unity, for any prime number p, as the reason for the failure of the standard method of attack on the Fermat problem (see regular prime).
Somewhat later ag |
https://en.wikipedia.org/wiki/Minkowski%20Portal%20Refinement | The Minkowski Portal Refinement collision detection algorithm is a technique for determining whether two convex shapes overlap.
The algorithm was created by Gary Snethen in 2006 and was first published in Game Programming Gems 7. The algorithm was used in Tomb Raider: Underworld and other games created by Crystal Dynamics and its sister studios within Eidos Interactive.
MPR, like its cousin GJK, relies on shapes that are defined using support mappings. This allows the algorithm to support a limitless variety of shapes that are problematic for other algorithms. Support mappings require only a single mathematical function to represent a point, line segment, disc, cylinder, cone, ellipsoid, football, bullet, frustum or most any other common convex shape. Once a set of basic primitives have been created, they can easily be combined with one another using operations such as sweep, shrink-wrap and affine transformation.
Unlike GJK, MPR does not provide the shortest distance between separated shapes. However, according to its author, MPR is simpler, more numerically robust and handles translational sweeping with very little modification. This makes it well-suited for games and other real-time applications.
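To make the notion of support mappings concrete, here is a minimal Java sketch (the interfaces and names are illustrative assumptions, not the XenoCollide API): a sphere's support function, and the support of the Minkowski difference of two shapes, the construction on which both MPR and GJK operate:

```java
// A convex shape is represented by a single function returning its
// farthest point in a given (nonzero) direction.
interface Support {
    double[] support(double[] dir);
}

class Sphere implements Support {
    final double[] center; final double radius;
    Sphere(double[] c, double r) { center = c; radius = r; }
    public double[] support(double[] d) {
        double len = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        return new double[] { center[0] + radius * d[0] / len,
                              center[1] + radius * d[1] / len,
                              center[2] + radius * d[2] / len };
    }
}

// Support of the Minkowski difference A - B, used when testing two shapes
// for overlap: support_{A-B}(d) = support_A(d) - support_B(-d).
class MinkowskiDifference implements Support {
    final Support a, b;
    MinkowskiDifference(Support a, Support b) { this.a = a; this.b = b; }
    public double[] support(double[] d) {
        double[] pa = a.support(d);
        double[] pb = b.support(new double[] { -d[0], -d[1], -d[2] });
        return new double[] { pa[0] - pb[0], pa[1] - pb[1], pa[2] - pb[2] };
    }
}
```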
External links
Snethen, Gary (2008) "Complex Collision Made Simple", Game Programming Gems 7, 165–178
Snethen, Gary (2008) "XenoCollide Homepage"
Open source implementation: libccd
Geometric algorithms
Convex geometry |
https://en.wikipedia.org/wiki/Nest%20Wifi | Nest Wifi, its predecessor the Google Wifi, and the Nest Wifi's successor, the Nest Wifi Pro, are a line of mesh-capable wireless routers and add-on points developed by Google as part of the Google Nest family of products. The first generation was announced on October 4, 2016, and released in the United States on December 5, 2016. The second generation, distinct in being released as two separate offerings, a "router" and "point", were announced at the Pixel 4 hardware event on October 15, 2019, and was released in the United States on November 4, 2019. The third generation was announced on October 4, 2022, two days prior to the Pixel 7 Fall 2022 event. This generation returned to a single model, doing away with the "router/point" variants, and was released in the United States on October 27, 2022.
The Nest Wifi aims to provide enhanced Wi-Fi coverage through the setup of multiple Nest Wifi devices in a home. Nest Wifi automatically switches between access points depending on signal strength.
History
First generation
Android Police reported in September 2016 that Google was preparing to introduce a mesh-capable wireless router with enhanced range, along with its October 4 date of announcement and US$129 price point. Google Wifi was officially announced on October 4, 2016, with expected availability in the United States in December. The device became available in the United States on December 5, 2016, in the United Kingdom on April 6, 2017, in Canada on April 28, 2017, in France and Germany on June 26, 2017, in Australia on July 20, 2017, in Hong Kong and Singapore on August 30, 2017, and in the Philippines on June 26, 2018.
The first generation Google Wifi features 802.11ac connectivity with 2.4 GHz and 5 GHz channels, 2x2 antennas, and support for beamforming. It has two gigabit Ethernet ports, and contains a quad-core processor with 512 MB RAM and 4 GB flash memory. Wi-Fi access can be controlled through a companion mobile app.
In 2020, Google relaunched the firs |
https://en.wikipedia.org/wiki/Darkon%20%28unparticle%29 | Darkon is a hypothetical scalar unparticle, introduced as an addition to the Minimal Supersymmetric Standard Model, that is a dark matter candidate.
History
A. Zee and V. Silveira were the first to consider the darkon field as dark matter, in 1985. This approach was then used by several other groups of physicists.
Concept
In addition to the Standard Model particles, the model contains the darkon, a real singlet field. To play the role of dark matter, the darkon field must interact weakly with the standard matter field sector and should not rapidly decay into other particles. The simplest way of introducing the darkon is to demand that darkons can only be annihilated or created in pairs, and to make the darkon stable against decay.
See also
Lightest supersymmetric particle
WIMPs
SUSY
Physics beyond the Standard Model |
https://en.wikipedia.org/wiki/Temporal%20resolution | Temporal resolution (TR) refers to the discrete resolution of a measurement with respect to time.
Physics
Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution, due to Heisenberg's uncertainty principle. In some contexts, such as particle physics, this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution.
Technology
Computing
In another context, there is often a tradeoff between temporal resolution and computer storage. A transducer may be able to record data every millisecond, but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes.
Electronic displays
In some applications, temporal resolution may instead be equated to the sampling period, or its inverse, the refresh rate, or update frequency in Hertz, of a TV, for example.
The temporal resolution is distinct from temporal uncertainty. This would be analogous to conflating image resolution with optical resolution. One is discrete, the other, continuous.
Temporal resolution is, in a sense, the 'time' dual of the 'space' resolution of an image. In a similar way, the sample rate is equivalent to the pixel pitch on a display screen, whereas the optical resolution of a display screen is equivalent to temporal uncertainty.
Note that both these forms of resolution (image space and time) are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. Both an image and an oscilloscope capture can have a signal-to-noise ratio, since both also have a measurement resolution.
Oscilloscopy
An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a m |
https://en.wikipedia.org/wiki/Two-Higgs-doublet%20model | The two-Higgs-doublet model (2HDM) is an extension of the Standard Model of particle physics. 2HDM models are one of the natural choices for beyond-SM models containing two Higgs doublets instead of just one. There are also models with more than two Higgs doublets, for example three-Higgs-doublet models etc.
The addition of the second Higgs doublet leads to a richer phenomenology, as there are five physical scalar states, viz. the CP-even neutral Higgs bosons h and H (where H is heavier than h by convention), the CP-odd pseudoscalar A, and two charged Higgs bosons H±. The discovered Higgs boson is measured to be CP-even, so one can map either h or H to the observed Higgs. A special case occurs when cos(β − α) = 0, the alignment limit, in which the lighter CP-even Higgs boson h has couplings exactly like the SM Higgs boson. In another such limit, where cos(β − α) = 1, the heavier CP-even boson H is SM-like, leaving h to be lighter than the discovered Higgs; however, it is important to note that experiments have strongly pointed towards a value for sin(β − α) that is close to 1.
Such a model can be described in terms of six physical parameters: four Higgs masses (m_h, m_H, m_A, m_H±), the ratio of the two vacuum expectation values (tan β) and the mixing angle (α) which diagonalizes the mass matrix of the neutral CP-even Higgses. The SM uses only 2 parameters: the mass of the Higgs and its vacuum expectation value.
The masses of the H and A bosons could be below 1 TeV and the CMS has conducted searches around this range but no significant excess above the standard model prediction has been observed.
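For reference, the parameter conventions assumed in the preceding paragraphs can be written out explicitly (these are the conventional 2HDM definitions, not quoted from the excerpt):

```latex
% v_1 and v_2 are the vacuum expectation values of the two doublets:
\[
  \tan\beta = \frac{v_2}{v_1},
  \qquad
  v_1^2 + v_2^2 = v^2 \approx (246\ \mathrm{GeV})^2 .
\]
% In the alignment limit \cos(\beta - \alpha) \to 0, the lighter CP-even
% state h couples exactly like the Standard Model Higgs boson.
```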
Classification
Two-Higgs-doublet models can introduce flavor-changing neutral currents which have not been observed so far. The Glashow-Weinberg condition, requiring that each group of fermions (up-type quarks, down-type quarks and charged leptons) couples exactly to one of the two doublets, is sufficient to avoid the prediction of flavor-changing neutral currents.
Depending on which type of fermions couples to which doubl |
https://en.wikipedia.org/wiki/Dublin%20University%20Zoological%20Association | The Dublin University Zoological Association was founded in 1853 to promote zoological studies in Ireland. Dublin University is now Trinity College Dublin.
It commenced proceedings in the Natural History Review in 1854.
Notable members
Robert Ball
Edward Perceval Wright
George Henry Kinahan
Robert Warren
William Archer
Samuel Haughton
George James Allman
Alexander Henry Haliday |
https://en.wikipedia.org/wiki/Numerical%20taxonomy | Numerical taxonomy is a classification system in biological systematics which deals with the grouping by numerical methods of taxonomic units based on their character states. It aims to create a taxonomy using numeric algorithms like cluster analysis rather than using subjective evaluation of their properties. The concept was first developed by Robert R. Sokal and Peter H. A. Sneath in 1963 and later elaborated by the same authors. They divided the field into phenetics in which classifications are formed based on the patterns of overall similarities and cladistics in which classifications are based on the branching patterns of the estimated evolutionary history of the taxa.
Although intended as an objective method, in practice the choice and implicit or explicit weighting of characteristics is influenced by available data and research interests of the investigator. What was made objective was the introduction of explicit steps to be used to create dendrograms and cladograms using numerical methods rather than subjective synthesis of data.
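As a toy illustration of grouping taxonomic units numerically, the sketch below clusters taxa by overall similarity of binary character states with single-linkage agglomerative clustering (the data, distance measure, and linkage choice are illustrative assumptions, not a prescribed method):

```java
import java.util.ArrayList;
import java.util.List;

public class NumericTaxonomyDemo {
    // Distance = fraction of mismatched character states between two taxa.
    static double dist(int[] a, int[] b) {
        int mismatches = 0;
        for (int i = 0; i < a.length; i++) if (a[i] != b[i]) mismatches++;
        return (double) mismatches / a.length;
    }

    public static void main(String[] args) {
        int[][] taxa = { {1,1,0,0}, {1,1,1,0}, {0,0,1,1} }; // character states
        List<List<Integer>> clusters = new ArrayList<>();
        for (int i = 0; i < taxa.length; i++) clusters.add(new ArrayList<>(List.of(i)));

        while (clusters.size() > 1) {   // repeatedly merge the two closest clusters
            int bi = 0, bj = 1;
            double best = Double.MAX_VALUE;
            for (int i = 0; i < clusters.size(); i++)
                for (int j = i + 1; j < clusters.size(); j++)
                    for (int p : clusters.get(i))
                        for (int q : clusters.get(j)) {
                            double d = dist(taxa[p], taxa[q]); // single linkage
                            if (d < best) { best = d; bi = i; bj = j; }
                        }
            System.out.println("merge " + clusters.get(bi) + " + "
                               + clusters.get(bj) + " at distance " + best);
            clusters.get(bi).addAll(clusters.remove(bj));   // record the merge
        }
    }
}
```

Each merge corresponds to a node of the resulting dendrogram.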
See also
Computational phylogenetics
Taxonomy (biology) |
https://en.wikipedia.org/wiki/Lazar%20Lyusternik | Lazar Aronovich Lyusternik (also Lusternik, Lusternick, Ljusternik; 31 December 1899 – 22 July 1981) was a Soviet mathematician. He is famous for his work in topology and differential geometry, to which he applied the variational principle. Using the theory he introduced, together with Lev Schnirelmann, he proved the theorem of the three geodesics, a conjecture by Henri Poincaré that every convex body in three dimensions has at least three simple closed geodesics. The ellipsoid with distinct but nearly equal axes is the critical case, with exactly three closed geodesics.
The Lusternik–Schnirelmann theory, as it is called now, is based on the previous work by Poincaré, George David Birkhoff, and Marston Morse. It has led to numerous advances in differential geometry and topology. For this work Lyusternik received the Stalin Prize in 1946. In addition to serving as a professor of mathematics at Moscow State University, Lyusternik also worked at the Steklov Mathematical Institute (RAS) from 1934 to 1948 and the Lebedev Institute of Precise Mechanics and Computer Engineering (IPMCE) from 1948 to 1955.
He was a student of Nikolai Luzin. In 1930 he became one of the initiators of the Egorov affair and then one of the participants in the notorious political persecution of his teacher Nikolai Luzin known as the Luzin affair.
See also
Lusternik–Schnirelmann category
Lyusternik's generalization of the Brunn–Minkowski theorem |
https://en.wikipedia.org/wiki/SEG-Y | The SEG-Y (sometimes SEG Y) file format is one of several standards developed by the Society of Exploration Geophysicists (SEG) for storing geophysical data. It is an open standard, and is controlled by the SEG Technical Standards Committee, a non-profit organization.
History
The format was originally developed in 1973 to store single-line seismic reflection digital data on magnetic tapes. The specification was published in 1975.
The format and its name evolved from the SEG "Ex" or Exchange Tape Format. However, since its release, there have been significant advancements in geophysical data acquisition, such as 3-dimensional seismic techniques and high speed, high capacity recording.
The most recent revision of the SEG-Y format was published in 2017, named the rev 2.0 specification. It still features certain legacies of the original format (referred to as rev 0), such as an optional SEG-Y tape label, the main 3200 byte textual EBCDIC character encoded tape header and a 400 byte binary header.
Data structure
[Figure: byte-stream structure of a SEG-Y file, with rev 1 Extended Textual File Header records.]
Since the first SEG-Y standard was published, many companies dealing with seismic data have produced variants of the SEG-Y standard which have run contrary to the aims of defining a standard for universal interchange, thus generally causing confusion and delay when data received by a company in expected SEG-Y format turns out to be a variant of that format. Initially, many of these derived from the fact that the format was based on the de facto standard of using IBM computers for digital processing where character data was coded in EBCDIC and number data in IBM Floating Point, whereas processing systems in use quickly evolved based on ASCII character and IEEE number representations.
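For orientation, a minimal sketch of reading the two file headers described above (the 3200-byte EBCDIC textual header and the 400-byte binary header) might look as follows; the byte offsets are the commonly documented rev 0/rev 1 positions, the file name is a placeholder, and ASCII-variant files would need a different decoding.

```python
# Sketch: read the SEG-Y textual (EBCDIC) and binary file headers.
import struct

with open("survey.sgy", "rb") as f:          # hypothetical file name
    textual = f.read(3200)                   # 40 "card images" of 80 characters
    binary = f.read(400)                     # binary file header

# Decode the EBCDIC textual header (code page 037); some variants use ASCII.
for i in range(40):
    print(textual[80 * i:80 * (i + 1)].decode("cp037"))

# Commonly used big-endian fields (offsets relative to the binary header):
sample_interval, = struct.unpack(">h", binary[16:18])    # in microseconds
samples_per_trace, = struct.unpack(">h", binary[20:22])
format_code, = struct.unpack(">h", binary[24:26])        # 1 = IBM float, 5 = IEEE
print(sample_interval, samples_per_trace, format_code)
```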
Even before the SEG-Y standard was agreed and published, earlier seismic data format standards published by the SEG such as SEG-A, SEG-B and SEG-C were modified by seismic |
https://en.wikipedia.org/wiki/Regional%20Playback%20Control | RPC-1 and RPC-2 are designations applied to firmware for DVD drives. Older DVD drives use RPC-1 firmware, which allows DVDs from any region to play. Newer drives use RPC-2 firmware, which enforces DVD region coding at the hardware level. See DVD region code#Computer DVD drives for further information.
Some RPC-2 drives can be converted to RPC-1 with the same features as before by using alternative firmware on the drive, or on some drives by setting a secret flag in the drive's EEPROM.
Computer storage media
DVD |
https://en.wikipedia.org/wiki/Katsuya%20Eda | is a mathematician, currently a professor at Waseda University. His research centers on set theory and its applications, particularly in algebraic topology. He has done a great deal of work on the fundamental group of the Hawaiian earring and related subjects.
External links
Eda's home page at Waseda University
Living people
21st-century Japanese mathematicians
Set theorists
Topologists
Academic staff of Waseda University
Year of birth missing (living people)
Place of birth missing (living people) |
https://en.wikipedia.org/wiki/Amy%20C.%20King | Amy Catheryne Patterson King (December 30, 1928 – June 7, 2014) was an American mathematician and mathematics educator who became Foundation Professor of mathematics at Eastern Kentucky University, and was recognized for her distinguished teaching by the Kentucky Section of the Mathematical Association of America.
Personal life
Amy Catheryne Patterson was born on December 30, 1928, in Douglas, Wyoming. She married Don King, who became a professor of dentistry; they had no children. Her brother, James D. Patterson, was also an academic, a professor of physics at the Florida Institute of Technology.
She became a member of the Centenary United Methodist Church of Lexington, Kentucky. She died on June 7, 2014, of burns and smoke inhalation from a fire at her home in Lexington.
Education and career
King was a graduate of the University of Missouri. She earned a master's degree from Wichita State University in 1960, with the master's thesis Selected methods for solving eigenvalue problems by variational procedures. She joined the Mathematical Association of America in 1961, and coauthored the 1963 book Pathways to Probability: History of the Mathematics of Certainty and Chance with Cecil Byron Read, published by Holt, Rinehart and Winston. She became a mathematics instructor, including stints at Wichita State University, Washburn University, the University of Kansas, and the University of Kentucky.
In later life she returned to graduate study, working with S. M. Shah at the University of Kentucky on transcendental functions of bounded index. She completed her Ph.D. in 1970, with the dissertation A Class of Entire Functions of Bounded Index and Radii of Univalence of Some Functions of Zero Order supervised by Shah. Her work in the early 1970s also included surveying the contributions of women in mathematics.
She taught mathematics at Eastern Kentucky University from 1972 until her retirement as Foundation Professor emerita in 1998.
Recognition
King became the inaugur |
https://en.wikipedia.org/wiki/Problem-based%20learning | Problem-based learning (PBL) is a student-centered pedagogy in which students learn about a subject through the experience of solving an open-ended problem found in trigger material. The PBL process does not focus on problem solving with a defined solution, but it allows for the development of other desirable skills and attributes. This includes knowledge acquisition, enhanced group collaboration and communication.
The PBL process was developed for medical education and has since been broadened in applications for other programs of learning. The process allows for learners to develop skills used for their future practice. It enhances critical appraisal, literature retrieval and encourages ongoing learning within a team environment.
The PBL tutorial process often involves working in small groups of learners. Each student takes on a role within the group that may be formal or informal and the role often alternates. It is focused on the student's reflection and reasoning to construct their own learning.
The Maastricht seven-jump process involves clarifying terms, defining problem(s), brainstorming, structuring and hypothesis, learning objectives, independent study and synthesising. In short, it is identifying what they already know, what they need to know, and how and where to access new information that may lead to the resolution of the problem.
The role of the tutor is to facilitate learning by supporting, guiding, and monitoring the learning process. The tutor aims to build students' confidence when addressing problems, while also expanding their understanding. This process is based on constructivism. PBL represents a paradigm shift from traditional teaching and learning philosophy, which is more often lecture-based.
The constructs for teaching PBL are very different from traditional classroom or lecture teaching and often require more preparation time and resources to support small group learning.
Meaning
Wood (2003) defines problem-based learning as a proce |
https://en.wikipedia.org/wiki/Anterior%20medial%20malleolar%20artery | The anterior medial malleolar artery (medial anterior malleolar artery, internal malleolar artery) is an artery in the ankle. It arises about 5 cm above the ankle joint from the anterior tibial artery.
The anterior medial malleolar artery passes behind the tendons of the extensor hallucis longus and tibialis anterior muscles, to the medial side of the ankle, upon which it ramifies, anastomosing with branches of the posterior tibial and medial plantar arteries and with the medial calcaneal from the posterior tibial. |
https://en.wikipedia.org/wiki/History%20of%20the%20Actor%20model | In computer science, the Actor model, first published in 1973, is a mathematical model of concurrent computation.
Event orderings versus global state
A fundamental challenge in defining the Actor model is that it provides no global states, so a computational step cannot be defined as going from one global state to the next, as had been done in all previous models of computation.
In 1963 in the field of Artificial Intelligence, John McCarthy introduced situation variables in logic in the Situational Calculus. In McCarthy and Hayes 1969, a situation is defined as "the complete state of the universe at an instant of time." In this respect, the situations of McCarthy are not suitable for use in the Actor model since it has no global states.
From the definition of an Actor, it can be seen that numerous events take place: local decisions, creating Actors, sending messages, receiving messages, and designating how to respond to the next message received. Partial orderings on such events have been axiomatized in the Actor model and their relationship to physics explored (see Actor model theory).
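A minimal sketch of these event types (asynchronous sends, receives, local decisions, and designating the behavior for the next message) is given below; it is an illustrative rendering in Python threads, not Hewitt's formal model.

```python
# Minimal actor sketch: a private mailbox, local state, no shared global state.
import queue
import threading
import time

class Actor:
    def __init__(self, behavior):
        self.mailbox = queue.Queue()
        self.behavior = behavior
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)            # asynchronous message send

    def _run(self):
        while True:
            msg = self.mailbox.get()     # receive the next message
            # A behavior may return a replacement behavior, designating how
            # to respond to the *next* message received.
            self.behavior = self.behavior(self, msg) or self.behavior

def counter(state=0):
    def behave(actor, msg):
        nonlocal state
        state += msg                     # a local decision on local state
        print("count =", state)
    return behave

a = Actor(counter())
a.send(1)
a.send(2)
time.sleep(0.1)                          # let the daemon thread drain the mailbox
```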
Relationship to physics
According to Hewitt (2006), the Actor model is based on physics, in contrast with other models of computation that were based on mathematical logic, set theory, algebra, etc. Physics influenced the Actor model in many ways, especially quantum physics and relativistic physics. One issue is what can be observed about Actor systems. The question does not have an obvious answer because it poses both theoretical and observational challenges similar to those that had arisen in constructing the foundations of quantum physics. In concrete terms for Actor systems, typically we cannot observe the details by which the arrival order of messages for an Actor is determined (see Indeterminacy in concurrent computation). Attempting to do so affects the results and can even push the indeterminacy elsewhere (see, e.g., metastability in electronics). |
https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-third%20problem | Hilbert's twenty-third problem is the last of the Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert's other 22 problems, his 23rd is not so much a specific "problem" as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state of the art (in 1900) of the theory of the calculus of variations, with some introductory comments decrying the lack of work that had been done on the theory in the mid-to-late 19th century.
Original statement
The problem statement begins with the following paragraph:
So far, I have generally mentioned problems as definite and special as possible.... Nevertheless, I should like to close with a general problem, namely with the indication of a branch of mathematics repeatedly mentioned in this lecture-which, in spite of the considerable advancement lately given it by Weierstrass, does not receive the general appreciation which, in my opinion, it is due—I mean the calculus of variations.
Calculus of variations
Calculus of variations is a field of mathematical analysis that deals with maximizing or minimizing functionals, which are mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. The interest is in extremal functions that make the functional attain a maximum or minimum value – or stationary functions – those where the rate of change of the functional is zero.
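In the standard modern notation (not taken from Hilbert's statement), a typical functional on curves and the first-order condition its stationary functions satisfy can be written as:

```latex
% A functional J on curves y(x), and the Euler–Lagrange equation
% characterizing its stationary functions:
J[y] = \int_{a}^{b} L\bigl(x,\, y(x),\, y'(x)\bigr)\, dx ,
\qquad
\frac{\partial L}{\partial y} - \frac{d}{dx}\,\frac{\partial L}{\partial y'} = 0 .
```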
Progress
Following the problem statement, David Hilbert, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions to the calculus of variations. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an al |
https://en.wikipedia.org/wiki/IEC%2061108 | IEC 61108 is a collection of IEC standards for "Maritime navigation and radiocommunication equipment and systems - Global navigation satellite systems (GNSS)".
The 61108 standards are developed in Working Group 4 (WG 4A) of Technical Committee 80 (TC80) of the IEC.
Sections of IEC 61108
Standard IEC 61108 is divided into four parts:
Part 1: Global positioning system (GPS) - Receiver equipment - Performance standards, methods of testing and required test results
Part 2: Global navigation satellite system (GLONASS) - Receiver equipment - Performance standards, methods of testing and required test results
Part 3: Galileo receiver equipment - Performance requirements, methods of testing and required test results
Part 4: Shipborne DGPS and DGLONASS maritime radio beacon receiver equipment - Performance requirements, methods of testing and required test results
History
On 1 December 2000, the International Maritime Organization (IMO) adopted three resolutions regarding the performance standards for shipborne GNSS receivers; a fourth, on Galileo receiver equipment, followed on 5 December 2006.
IMO Resolutions
The IMO resolutions are:
IMO RESOLUTION MSC.112(73) GLOBAL POSITIONING SYSTEM (GPS) RECEIVER EQUIPMENT
IMO RESOLUTION MSC.113(73) GLONASS RECEIVER EQUIPMENT
IMO RESOLUTION MSC.114(73) DGPS AND DGLONASS MARITIME RADIO BEACON RECEIVER EQUIPMENT
IMO RESOLUTION MSC.233(82) GALILEO RECEIVER EQUIPMENT (adopted on 5 December 2006) |
https://en.wikipedia.org/wiki/Global%20optimization | Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function $g(x)$ is equivalent to the minimization of the function $f(x) := (-1) \cdot g(x)$.
Given a possibly nonlinear and non-convex continuous function $f \colon \Omega \subset \mathbb{R}^n \to \mathbb{R}$ with the global minimum $f^*$ and the set of all global minimizers $X^*$ in $\Omega$, the standard minimization problem can be given as
$$\min_{x \in \Omega} f(x),$$
that is, finding $f^*$ and a global minimizer in $X^*$; where $\Omega$ is a (not necessarily convex) compact set defined by inequalities $g_i(x) \geqslant 0,\ i = 1, \ldots, r$.
Global optimization is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward by using classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges.
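A minimal sketch of this contrast uses SciPy's differential evolution (a stochastic global optimizer) against a plain local search on the multimodal Rastrigin benchmark; the function, bounds, and starting point are illustrative choices, not from the article.

```python
# Local vs. global search on a non-convex test function.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def rastrigin(x):
    # Many local minima; global minimum 0 at x = 0.
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2

local = minimize(rastrigin, x0=[4.0, 4.0])           # may stop at a local minimum
best = differential_evolution(rastrigin, bounds, seed=0)

print("local search :", round(local.fun, 4))
print("global search:", round(best.fun, 4))
```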
Applications
Typical examples of global optimization applications include:
Protein structure prediction (minimize the energy/free energy function)
Computational phylogenetics (e.g., minimize the number of character transformations in the tree)
Traveling salesman problem and electrical circuit design (minimize the path length)
Chemical engineering (e.g., analyzing the Gibbs energy)
Safety verification, safety engineering (e.g., of mechanical structures, buildings)
Worst-case analysis
Mathematical problems (e.g., the Kepler conjecture)
Object packing (configuration design) problems
The starting point of several molecular dynamics simulations consists of an initial optimization of the energy of the system to be simulated.
Spin glasses
Calibration of radio propagation models and of many other models in the sciences and engineering
Curve fitting like non- |
https://en.wikipedia.org/wiki/NetSpot | NetSpot is a software tool for wireless network assessment, scanning, and surveys, analyzing Wi-Fi coverage and performance. It runs on Mac OS X 10.6+ and Windows 7, 8 and 10. NetSpot supports 802.11n, 802.11a, 802.11b, and 802.11g wireless networks and uses the standard Wi-Fi network adapter and its AirPort interface to map radio signal strength and other wireless network parameters, and to build reports on them. NetSpot was released in August 2011.
Functions
NetSpot provides wireless site-survey features for Wi-Fi networks and maps coverage of a living area, office space, buildings, etc. It provides visual data to help analyze radio signal leaks, discover noise sources, map channel use, and optimize access point locations. The application can also perform Wi-Fi network planning: the collected data help to select channels and placements for new hotspots. Survey reports can be generated in PDF format.
Usual uses
Mapping Wi-Fi
Mapping Wi-Fi signal strength
Optimizing networks
Trouble-shooting networks
Visualizing wireless networks
Diagnosing signal problems
Analyzing wireless network coverage
Release history
See also
iStumbler – an open-source utility to find wireless networks and devices in Mac OS X
KisMAC – a wireless network discovery tool for Mac OS X
WiFi Explorer – a wireless network scanner for Mac OS X |
https://en.wikipedia.org/wiki/Lister%20Medal |
The Lister Medal is an award presented by the Royal College of Surgeons of England in recognition of contributions to surgical science. It is named after the English surgeon Joseph Lister (1827-1912), whose work on antiseptics established the basis of modern sterile surgery.
The medal has its origins in the Lister Memorial Fund, started by the Royal Society, which was raised by public subscription after Lister's death, with the object of creating a lasting mark of respect to his memory. In 1920, the Royal College of Surgeons of England became the trustees and administrators of the fund. They were entrusted with the task of awarding a monetary prize and a bronze medal (gold since 1984) every three years, irrespective of nationality, to those who had made outstanding contributions to surgical science. The triennial award is decided by a committee representing the Royal Society, the Royal College of Surgeons of England, the Royal College of Surgeons in Ireland, the University of Edinburgh, and the University of Glasgow.
The Lister Medal, although it is not always awarded to a surgeon, is one of the most prestigious honours a surgeon can receive. The obverse of the medal consists of a representation of a bust of Lord Lister. The reverse side has the recipient's name across the centre, and around the edge of the medal is text naming the award along with a dedication.
On the occasion of the award, the medallist delivers the Lister Oration (sometimes called the "Lister Memorial Lecture"). The first award was announced in 1924, with the presentation and the lecture taking place the following year. The most recent award was made in 2015, with a total of 27 people having received the medal to date.
Medallists
Notes
See also
List of medicine awards |
https://en.wikipedia.org/wiki/Gynophore | A gynophore is the stalk of certain flowers which supports the gynoecium (the ovule-producing part of a flower), elevating it above the branching points of other floral parts.
Plant genera that have flowers with gynophores include Telopea, Peritoma arborea and Brachychiton. |
https://en.wikipedia.org/wiki/Gynecologic%20pathology | Gynecologic pathology is the medical pathology subspecialty dealing with the study and diagnosis of disease involving the female genital tract. A physician who practices gynecologic pathology is a gynecologic pathologist. The term originates from the Greek gyno- (gynaikos), meaning "woman", and the suffix -ology, meaning "study of".
Gynecologic pathologists specialize in the tissue-based diagnosis of diseases of the female reproductive system. This includes neoplastic diseases of the vulva, vagina, cervix, endometrium, fallopian tube, uterus, and ovary, as well as non-neoplastic diseases of these structures.
In the United States, gynecologic pathology training typically involves obtaining a medical doctorate, followed by residency in anatomic pathology or combined anatomic and clinical pathology certified by the American Board of Pathology. Fellowship training in surgical pathology or gynecologic pathology provides additional credentials toward a career as a gynecological pathologist.
See also
Gynecological pathology, including diseases of the female genital tract and the placenta
Anatomic pathology
Cytopathology |
https://en.wikipedia.org/wiki/ApplianSys | ApplianSys, founded in 2000, is a privately held venture capital-backed technology company based in Coventry, United Kingdom. It designs, builds and markets Internet server appliances that are deployed in more than 150 countries. Forrester Research have listed ApplianSys as being a key vendor in the worldwide IP Address Management market, with its DNS engine used in a third of all GPRS networks.
Products
ApplianSys' portfolio of appliances include more than 20 models split across a range including DNSBOX (DNS, DHCP and IP Address Management), CACHEBOX (Web cache, Proxy Server, WAN Optimization and Content Filtering) and EDUGATEBOX (Gateway (telecommunications) appliance for schools that are connecting to the internet for the first time).
DNSBOX
The DNSBOX range was launched in 2001. It is divided into 4 series:
Management appliances use a combination of open source and proprietary software, developed by ApplianSys and Nixu.
According to IDC's 2007 IPAM report the average DNSBOX customer manages 15,000 IP addresses.
CACHEBOX
CACHEBOX is a dedicated web caching proxy appliance with software editions targeted at Education, SME, Corporate/Governmental and ISP markets. Five models provide different performance levels:
420 offers the highest performance and storage in the range with support for more than 6,000 HTTP requests per second. CACHEBOX420 is typically used by ISP networks or in the core of large enterprise/school networks, often as part of a distributed caching service with smaller units deployed closer to users.
310 offers high performance and storage with support for more than 3,600 HTTP requests per second. CACHEBOX310 is typically used by ISP networks or in large enterprise/school networks.
210 and 230 models are high performance 1U rackmount appliances used primarily in ISP networks and medium-large schools. They employ technologies such as Solid State Drives to support more than 2,500 HTTP requests per second. Multiple devices can be clustered t |
https://en.wikipedia.org/wiki/Acrophobia%20%28game%29 | Acrophobia is an online multiplayer word game. The game was originally conceived by Andrea Shubert, and programmed by Kenrick Mock and Michelle Hoyle in 1995. Originally available over Internet Relay Chat, the game has since been developed into a number of variants, as a download, playable through a browser, via Twitter or through Facebook.
Background
Shubert, who created the game in the mid-to-late 1990s, later developed a "spiritual successor" called TAG: The Acronym Game for the startup gaming company play140.
Game play
Players enter a channel hosted by a bot which runs the game. In each round, the bot generates a random acronym. Players compete by racing to create the most coherent or humorous sentence that fits the acronym - in essence, a backronym. After a set amount of time expires, each player then votes anonymously via the bot for their favorite answer (aside from their own).
Points are awarded to the most popular backronym. Bonus points also may be given based on the fastest response and for voting for the winning option. Some implementations give the speed bonus to the player with the first answer that received at least one vote; this is to discourage players from quickly entering gibberish just to be the first. Bonus points for voting for the winner helps discourage players from intentionally voting for poor answers to avoid giving votes to answers that might beat their own.
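A toy sketch of one round under the scoring just described (popularity points, a speed bonus gated on receiving at least one vote, and a bonus for voting for the winner); all names and point values here are invented for illustration, not the game's exact rules.

```python
# Toy Acrophobia round; names and point values are illustrative only.
import random
import string

acronym = "".join(random.choices(string.ascii_uppercase, k=4))  # bot-generated
print("Acronym:", acronym)

# (player, answer) in submission order; real answers must fit the acronym.
answers = [("ann", "first submitted backronym"),
           ("bob", "second submitted backronym")]
votes = {"ann": "bob", "bob": "ann", "cal": "bob"}   # voter -> author voted for

tally = {}
for voter, author in votes.items():
    if voter != author:                  # players may not vote for themselves
        tally[author] = tally.get(author, 0) + 1

winner = max(tally, key=tally.get)
points = {winner: 10}                    # popularity points for the winner

# Speed bonus: earliest answer that drew at least one vote, which
# discourages entering gibberish just to be first.
for player, _ in answers:
    if tally.get(player, 0) > 0:
        points[player] = points.get(player, 0) + 2
        break

# Bonus for voting for the winning answer, discouraging tactical voting.
for voter, author in votes.items():
    if author == winner:
        points[voter] = points.get(voter, 0) + 1

print(points)
```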
Some versions of the game were criticized for the ease with which players could disrupt games with obscenities, and the anonymous nature of the site meant that there were no repercussions for this behavior. Usually, nonsense backronyms score low, and the most humorous-sounding backronym that effectively makes a sentence from the initials wins. Some rounds may have a specific topic that the answers should fit, although enforcement of the topic depends solely on the other players' willingness to vote for off-topic answers.
Acrophobia was commended as an online game that showed t |
https://en.wikipedia.org/wiki/Freebox | The Freebox is an ADSL-VDSL-FTTH modem and a set-top box that the French Internet service provider named Free (part of the Iliad group) provides to its DSL-FTTH subscribers.
Its main use is as a high-end fixed and wireless modem (802.11g MIMO), but it also allows Free to offer additional services over ADSL, such as IPTV including high definition (1080p), video recording with timeshifting capabilities, digital radio, and VoIP telephone service via one RJ-11 connector (the first version came with two such jacks, but only one was ever activated).
The Freebox is provided free to subscribers; its value is 190 euros, according to the operator. It is delivered with a remote control, a multimedia box equipped with a 250 GB hard drive, and accessories (cables and filters). At the end of Q2 2005, more than 1.1 million subscribers were equipped with the Freebox. According to the company's official results publications, the two-million mark was reached in September 2006.
V6 generation, Freebox Révolution
The sixth generation device is called the Freebox Révolution or V6 (Version 6). It was launched in early 2011. It is composed of a pair of devices: the ADSL modem/router and the IPTV set top box/media player. The boxes were designed by Philippe Starck.
The Freebox Server device
The Freebox server is a DSL modem, a router, a Wi-Fi hot spot, a NAS (250 GB hard drive), a DECT base with up to 8 connected DECT phone sets, and a digital video recorder for TNT also known as DVB-T and IPTV. As the firmware is updated, its functionalities increase. Most notably:
An external hard drive can be connected to its USB and/or eSATA port. However, some TV channels cannot be recorded to an external hard drive due to copyright policy limitations. Such limitations do not apply to channels recorded from TNT.
The video formats supported are quite wide in range, including MP4, H.264, MP2, MKV, AVI and others. Some formats are not supported, though firmware updates may increase the numbe |
https://en.wikipedia.org/wiki/Harry%20R.%20Lewis | Essentially all of Lewis's career has been at Harvard, where he has been honored for his "particularly distinguished contributions to undergraduate teaching"; his students have included future entrepreneurs Bill Gates and Mark Zuckerberg, and numerous future faculty members at Harvard and other schools.
The website "Six Degrees to Harry Lewis", created by Zuckerberg while at Harvard, was a precursor to Facebook.
Education and career
Lewis was born in Boston and grew up in Wellesley, Massachusetts. His parents were physicians: his father was a hospital chief of anesthesiology and his mother the head of the Dever State School for disabled children. His father was a World War II veteran and the son of a German Lutheran father and a Russian Jewish mother. After graduating summa cum laude at the end of the eleventh grade at Boston's Roxbury Latin School he entered Harvard College, where he was for a time a third-string lacrosse goalie.
Lewis has said that he discovered "I wasn't a real mathematician [once] I got out of the amateur leagues of high school mathematics", but was "tremendously excited" by the computer-science research at Harvard.
As a senior he lectured a graduate class using a computer-graphics program, SHAPESHIFTER, which he had developed for displaying complex-plane images on a cathode ray tube. SHAPESHIFTER automatically recognized formulas and commands hand-entered via a stylus on a RAND tablet, and could be "trained" to recognize the handwriting of individual users.
There being no degree program in computer science per se at Harvard at the time, in 1968 Lewis received his BA (summa, Quincy House) in applied mathematics and was elected to Phi Beta Kappa.
After serving for two years in the United States Public Health Service Commissioned Corps as a commissioned officer in the role of mathematician and computer scientist for the National Institutes of Health in Bethesda, Maryland,
he spent a year in Europe as a Frederick Sheldon Traveling Fellow.
He then returned to Harvard, where he ear |
https://en.wikipedia.org/wiki/NT%20%28cassette%29 | NT (sometimes marketed under the name Scoopman) is a digital memo recording system introduced by Sony in 1992.
The NT system was introduced to compete with the Microcassette, introduced by Olympus, and the Mini-Cassette, by Philips.
Design
The system was an R-DAT based system which stored memos using helical scan on special microcassettes with a tape width of 2.5 mm and a recording capacity of up to 120 minutes, similar to Digital Audio Tape. The cassettes are offered in three versions: the Sony NTC-60, -90, and -120, each describing the length of time (in minutes) the cassette can record.
NT stands for Non-Tracking, meaning the head does not precisely follow the tracks on the tape. Instead, the head moves over the tape at approximately the correct angle and speed, but performs more than one pass over each track. The data in each track is stored on the tape in blocks with addressing information that enables reconstruction in memory from several passes. This considerably reduced the required mechanical precision, reducing the complexity, size, and cost of the recorder.
Another feature of NT cassettes is Non-Loading, which means instead of having a mechanism to pull the tape out of the cassette and wrap it around the drum, the drum is pushed inside the cassette to achieve the same effect. This also significantly reduces the complexity, size, and cost of the mechanism.
Audio sampling is in stereo at 32 kHz with 12 bit nonlinear quantization, corresponding to 17 bit linear quantization. Data written to the tape is packed into data blocks and encoded with LDM-2 low deviation modulation.
Uses
The Sony NT-1 Digital Micro Recorder, introduced in 1992, features a real-time clock that records a time signal on the digital track along with the sound data, making it useful for journalism, police and legal work. Due to the machine's buffer memory, it is capable of automatically reversing the tape direction at the end of the reel without an interruption in |
https://en.wikipedia.org/wiki/G%C3%A1bor%20Sz%C3%A9kelyhidi | Gábor Székelyhidi (born 30 June 1981 in Debrecen) is a Hungarian mathematician, specializing in differential geometry.
Gábor Székelyhidi, the brother of László Székelyhidi, graduated from Trinity College, Cambridge with a bachelor's degree in 2002 (Part III of the Mathematical Tripos in 2003, with honours) and received his PhD from Imperial College London in 2006 under the supervision of Simon Donaldson, with the thesis Extremal metrics and K-stability. Székelyhidi was a postdoc at Harvard University and was from 2008 to 2011 Ritt Assistant Professor at Columbia University. At the University of Notre Dame he became an assistant professor in 2011, an associate professor in 2014, and in 2016 a full professor.
His research deals with geometric analysis and complex differential geometry (Kähler manifolds), including the existence of canonical metrics (such as extremal Kähler and Kähler-Einstein metrics) on projective manifolds, and the relations between extremal metrics and K-stability for polarised varieties and especially Fano varieties.
In 2014 he was an invited speaker at the International Congress of Mathematicians in Seoul.
Selected publications
An introduction to extremal Kaehler metrics (pdf) |
https://en.wikipedia.org/wiki/Double%20Helix%20Medal | The Double Helix Medal has been awarded annually since 2006 by Cold Spring Harbor Laboratory (CSHL) to individuals who have positively impacted human health by raising awareness and funds for biomedical research. At the inaugural dinner, Muhammad Ali received the first Double Helix Medal for his fight against Parkinson's disease. Other notable recipients include founders of Autism Speaks Suzanne and Bob Wright; former Paramount Pictures head Sherry Lansing who produced the Stand Up to Cancer telethon; Evelyn Lauder who founded the Breast Cancer Research Foundation; Hank Greenberg of the Starr Foundation, which is one of the largest supporters of scientific research; Marilyn and Jim Simons, the world's largest individual supporters of autism research; David H. Koch who has donated over $300 million to biomedical research; and prominent scientists and Nobel laureates.
The Double Helix Medal is named for the iconic structure of the DNA molecule, discovered by James D. Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin. The study of DNA is central to biological research, and is at the heart of work at CSHL.
The annual New York City gala at which the medals are awarded was sparked by philanthropist Cathy Cyphers Soref, an Honorary Director of the Cold Spring Harbor Laboratory Association.
Medal recipients
2022:
Albert Bourla, Ph.D.
Jennifer Doudna, Ph.D.
2021:
Reggie Jackson
Leonard S. Schleifer, M.D., Ph.D.
George D. Yancopoulos, M.D., Ph.D.
2020:
2019:
Boomer Esiason
Nancy Wexler, Ph.D.
2018:
Priscilla Chan and Mark Zuckerberg
Larry Norton, M.D.
2017:
Tom Brokaw
Helen & Charles Dolan
2016:
Alan Alda
P. Roy Vagelos
2015:
David Botstein
Katie Couric
Anne Wojcicki
2014:
Andrew Solomon
Matthew Meselson
Marlo Thomas
November 4, 2013:
Peter Neufeld
Robin Roberts
Barry C. Scheck
November 28, 2012:
Michael J. Fox
Arthur D. Levinson
Mary D. Lindsay
November 15, 2011:
Kareem Abdul-Jabbar
Temple Grandin
Harold E. Varmus
Novem |
https://en.wikipedia.org/wiki/American%20Board%20of%20Anesthesiology | The American Board of Anesthesiology sets standards and exams for the accreditation of board-certified anesthesiologists coming to the end of their residency. It is one of the 24 medical specialty boards that constitute the American Board of Medical Specialties.
Former Directors include
Rolland John Whitacre: famous for his design of a subarachnoid needle tip
Edward Boyce Tuohy: famous for his design of an epidural needle
See also
American Osteopathic Board of Anesthesiology |
https://en.wikipedia.org/wiki/Gloeobacter | Gloeobacter is a genus of cyanobacteria. It is the sister group to all other cyanobacteria. Gloeobacter is unique among cyanobacteria in not having thylakoids, which are characteristic of all other cyanobacteria and chloroplasts. Instead, the light-harvesting complexes (also called phycobilisomes), which consist of different proteins, sit on the inner face of the plasma membrane, facing the cytoplasm. Consequently, the proton gradient in Gloeobacter is formed across the plasma membrane, whereas in other cyanobacteria and chloroplasts it forms across the thylakoid membrane.
The whole genomes of G. violaceus (strain PCC 7421) and of G. kilaueensis have been sequenced. Many genes for photosystems I and II were found to be missing, likely related to the fact that photosynthesis in these bacteria takes place not in a thylakoid membrane, as in other cyanobacteria, but in the plasma membrane.
Description
Gloeobacter violaceus produces several pigments, including chlorophyll a, β-carotene, oscillol diglycoside, and echinenone. The purple coloration is due to the relatively low chlorophyll content. G. kilaueensis grows with a few other bacteria as a purple-colored biofilm around 0.5 mm thick. Cultivated colonies are dark purple, smooth, shiny, and raised. G. kilaueensis is mostly unicellular, capsule-shaped, about 3.5×1.5 µm, and embedded in mucus. The cells divide across their width. Cells stain Gram-negative and lack vancomycin resistance. They are not motile and do not glide. Growth ceases in complete darkness, so Gloeobacter is very likely obligately photoautotrophic.
Species and distribution
Gloeobacter violaceus was found on a limestone rock in the Swiss canton Obwalden.
G. kilaueensis occurred within a lava cave at the Kilauea-caldera on Hawaii. It grew there at a temperature around 30 °C at very high humidity, with moisture condensing and dripping off the biofilm.
Gloeobacter could have split off from the other cyanobacteria between 3.7 and 3.2 billion years ago. The species |
https://en.wikipedia.org/wiki/Digifant%20engine%20management%20system | The Digifant engine management system is an electronic engine control unit (ECU), which monitors and controls the fuel injection and ignition systems in petrol engines, designed by Volkswagen Group, in cooperation with Robert Bosch GmbH.
Digifant is the outgrowth of the Digijet fuel injection system first used on water-cooled Volkswagen A2 platform-based models.
History
Digifant was introduced in 1986 on the 2.1 litre Volkswagen Type 2 (T3) (Vanagon in the US) engine. This system combined digital fuel control as used in the earlier Digi-Jet systems with a new digital ignition system. The combination of fuel injection control and ignition control is the reason for the name "Digifant II" on the first version produced. Digifant as used in Volkswagen Golf and Volkswagen Jetta models simplified several functions, and added knock sensor control to the ignition system. Other versions of Digifant appeared on the Volkswagen Fox, Corrado, Volkswagen Transporter (T4) (known as the Eurovan in North America), as well as 1993 and later production versions of the rear-engined Volkswagen Beetle, sold only in Mexico. Lower-power versions (without a knock sensor), supercharged, and 16-valve variants were produced. Nearly exclusive to the European market, Volkswagen AG subsidiary Audi AG also used the Digifant system, namely in its 2.0 E variants of the Audi 80 and Audi 100.
Digifant is an engine management system designed originally to take advantage of the first generation of newly developed digital signal processing circuits. Production changes and updates were made to keep the system current with the changing California and federal emissions requirements. Updates were also made to allow integration of other vehicle systems into the scope of engine operation.
Changes in circuit technology, design, and processing speed, along with evolving emissions standards, resulted in the development of new engine management systems. These new systems incorporated adaptive learning fuz |
https://en.wikipedia.org/wiki/Herschel%20Leibowitz | Scholar, educator, and philanthropist Herschel Leibowitz is widely recognized for his research in visual perception and for his symbiotic approach to conducting research that both advanced theory and helped in the understanding and relief of societal problems. His research on transportation safety included studies of nearsightedness during night driving, vision during civil twilight, an illusion that underlies the behavior of motorists involved in auto-train collisions, susceptibility of pilots to illusions caused by visual-vestibular interactions, and the design of aircraft instrument panels.
Life, education and career
Herschel Leibowitz was the only child of Lewis and Nettie Wolfson Leibowitz. He was born and raised in York, Pennsylvania and attended school in York. He later earned his B.A. at the University of Pennsylvania and M.A. (Experimental Psychology) and Ph.D. (Physiology) at Columbia University.
Leibowitz's early studies at University of Pennsylvania were interrupted by World War II. He served in the U.S. Army during World War II, 75th Infantry Division, European Theater, and fought in the Battle of the Bulge. He studied at the Sorbonne after his military service, and then resumed his studies at the University of Pennsylvania. He later earned his M.A. (Experimental Psychology) and his Ph.D. (Physiology) from Columbia University under the guidance of Clarence Graham. Leibowitz's dissertation explored the effect of pupil size on visual acuity for photometrically matched stimuli.
In 1949, he married the former Eileen Wirtshafter. They had two children, Marjorie (1950) and Michael (1953).
He began his career as a faculty member in the Department of Neurophysiology at the University of Wisconsin (1951–1960). Following this, Leibowitz was an advisory psychologist and manager of behavioral research at IBM (1960–1962). He returned to academia in 1962 as a member of the Department of Psychology at The Pennsylvania State University where he was named Evan |
https://en.wikipedia.org/wiki/SecureDrop | SecureDrop is a free software platform for secure communication between journalists and sources (whistleblowers). It was originally designed and developed by Aaron Swartz and Kevin Poulsen under the name DeadDrop. James Dolan also co-created the software.
History
After Aaron Swartz's death, the first instance of the platform was launched under the name Strongbox by staff at The New Yorker on 15 May 2013. The Freedom of the Press Foundation took over development of DeadDrop under the name SecureDrop, and has since assisted with its installation at several news organizations, including ProPublica, The Guardian, The Intercept, and The Washington Post.
Security
SecureDrop uses the anonymity network Tor to facilitate communication between whistleblowers, journalists, and news organizations. SecureDrop sites are therefore only accessible as onion services in the Tor network. After a user visits a SecureDrop website, they are given a randomly generated code name, which is used when sending information to a particular author or editor via uploads. Investigative journalists can contact the whistleblower via SecureDrop messaging; the whistleblower must therefore take note of their random code name.
The system utilizes private, segregated servers that are in the possession of the news organization. Journalists use two USB flash drives and two personal computers to access SecureDrop data. The first personal computer accesses SecureDrop via the Tor network, and the journalist uses the first flash drive to download encrypted data from the SecureDrop server. The second personal computer does not connect to the Internet, and is wiped during each reboot. The second flash drive contains a decryption code. The first and second flash drives are inserted into the second personal computer, and the material becomes available to the journalist. The personal computer is shut down after each use.
Freedom of the Press Foundation has stated it will have the SecureDrop code and sec |
https://en.wikipedia.org/wiki/Shimeji | Shimeji is a group of edible mushrooms native to East Asia, but also found in northern Europe. Hon-shimeji (Lyophyllum shimeji) is a mycorrhizal fungus and difficult to cultivate. Other species are saprotrophs, and buna-shimeji (Hypsizygus tessulatus) is now widely cultivated. Shimeji is rich in umami-tasting compounds such as guanylic acid, glutamic acid, and aspartic acid.
Species
Several species are sold as shimeji mushrooms. All are saprotrophic except Lyophyllum shimeji.
Mycorrhizal
Hon-shimeji, Lyophyllum shimeji
The cultivation methods have been patented by several groups, such as Takara Bio and Yamasa, and the cultivated hon-shimeji is available from several manufacturers in Japan.
Saprotrophic
Buna-shimeji (lit. beech shimeji), Hypsizygus tessulatus, also known in English as the brown beech or brown clamshell mushroom.
Hypsizygus marmoreus is a synonym of Hypsizygus tessulatus. Cultivation of Buna-shimeji was first patented by Takara Shuzo Co., Ltd. in 1972 as hon-shimeji and the production started in 1973 in Japan. Now, several breeds are widely cultivated and sold fresh in markets.
Bunapi-shimeji, known in English as the white beech or white clamshell mushroom.
Bunapi was selected from UV-irradiated buna-shimeji ('hokuto #8' x 'hokuto #12') and the breed was registered as 'hokuto shiro #1' by Hokuto Corporation.
Hatake-shimeji, Lyophyllum decastes.
Shirotamogidake, Hypsizygus ulmarius.
These two species had been also sold as hon-shimeji.
Velvet pioppino (alias velvet pioppini, black poplar mushroom), Agrocybe aegerita.
Shimeji health benefits
Shimeji mushrooms contain minerals like potassium, phosphorus, magnesium, zinc, and copper. Shimeji mushrooms lower the cholesterol level of the body. This mushroom is rich in the glycoprotein HM-3A, marmorin, beta-(1-3)-glucan, hypsiziprenol, and hypsin, and is therefore a potential natural anticancer agent. Shimeji mushrooms contain angiotensin I-converting enzym |
https://en.wikipedia.org/wiki/Ammonolysis | In chemistry, ammonolysis (/am·mo·nol·y·sis/) is the process of splitting ammonia into NH2− and H+. Ammonolysis reactions can be conducted with organic compounds to produce amines (molecules containing a nitrogen atom with a lone pair, :N), or with inorganic compounds to produce nitrides. This reaction is analogous to hydrolysis, in which water molecules are split. Similar to water, liquid ammonia also undergoes auto-ionization, 2 NH3 ⇌ NH4+ + NH2−, where the equilibrium constant is K = 1.9 × 10−38.
Organic compounds such as alkyl halides, hydroxyls (hydroxyl nitriles and carbohydrates), carbonyl (aldehydes/ketones/esters/alcohols), and sulfur (sulfonyl derivatives) can all undergo ammonolysis in liquid ammonia.
Organic synthesis
Mechanism: ammonolysis of esters
This mechanism is similar to the hydrolysis of esters: the ammonia attacks the electrophilic carbonyl carbon, forming a tetrahedral intermediate. The re-formation of the C=O double bond ejects the alkoxide. The alkoxide then deprotonates the nitrogen, giving an alcohol and an amide as products.
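The overall transformation can be summarized by the standard textbook equation (R and R′ denote generic organic groups):

```latex
% Overall ammonolysis of an ester: ammonia displaces the alkoxide,
% giving an amide and an alcohol.
\mathrm{RCOOR'} + \mathrm{NH_3} \longrightarrow \mathrm{RCONH_2} + \mathrm{R'OH}
```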
Of haloalkanes
On heating a haloalkane and concentrated ammonia in a sealed tube with ethanol, a series of amines are formed along with their salts. The tertiary amine is usually the major product.
$\ce{NH3 ->[RX] RNH2 ->[RX] R2NH ->[RX] R3N ->[RX] R4N+}$
This is known as Hofmann's ammonolysis.
Of alcohols
Alcohols can also undergo ammonolysis when in the presence of ammonia. An example is the conversion of phenol to aniline, catalyzed by stannic chloride.
$\ce{ROH + NH3 ->[SnCl4] RNH2 + H2O}$
Of carbonyl compounds
The reaction between a ketone and ammonia results in an imine and byproduct water. This reaction is water sensitive and thus drying agents such as aluminum chloride or a Dean–Stark apparatus must be employed to remove water. The resulting imine will react and decompose back into the ketone and the ammonia when in the presence of water. This is due to the fact that this reaction is reversible:
R2 |
https://en.wikipedia.org/wiki/Pointwise | In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value $f(x)$ of some function $f.$ An important class of pointwise concepts is the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise.
Pointwise operations
Formal definition
A binary operation $o\colon Y \times Y \to Y$ on a set $Y$ can be lifted pointwise to an operation $O\colon Y^X \times Y^X \to Y^X$ on the set $Y^X$ of all functions from $X$ to $Y$ as follows: Given two functions $f_1\colon X \to Y$ and $f_2\colon X \to Y$, define the function $f_1 O f_2\colon X \to Y$ by $(f_1 O f_2)(x) = f_1(x) \mathbin{o} f_2(x)$ for all $x \in X$.
Commonly, $o$ and $O$ are denoted by the same symbol. A similar definition is used for unary operations $o$, and for operations of other arity.
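As a small illustration of this lifting (an illustrative sketch, not from the article), the definition translates directly into code:

```python
# Lift a binary operation on Y to functions X -> Y, pointwise.
import operator

def pointwise(op):
    """Return the pointwise lift O of a binary operation op."""
    return lambda f, g: (lambda x: op(f(x), g(x)))

f = lambda x: x * x
g = lambda x: x + 1

h = pointwise(operator.add)(f, g)   # h(x) = f(x) + g(x)
print(h(3))                         # 9 + 4 = 13
```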
Examples
The pointwise sum, product, and scalar multiple of functions are defined by $(f+g)(x) = f(x) + g(x)$, $(f \cdot g)(x) = f(x) \cdot g(x)$, and $(\lambda \cdot f)(x) = \lambda \cdot f(x)$, where $f, g\colon X \to \mathbb{R}$ and $\lambda \in \mathbb{R}$.
See also pointwise product, and scalar.
An example of an operation on functions which is not pointwise is convolution.
Properties
Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain.
If $E$ is some algebraic structure, the set of all functions from $X$ to the carrier set of $E$ can be turned into an algebraic structure of the same type in an analogous way.
Componentwise operations
Componentwise operations are usually defined on vectors, where vectors are elements of the set $F^n$ for some natural number $n$ and some field $F$. If we denote the $i$-th component of any vector $v$ as $v_i$, then componentwise addition is $(u+v)_i = u_i + v_i$.
Componentwise operations can be defined on matrices. Matrix addition, where $(A + B)_{ij} = A_{ij} + B_{ij}$, is a componentwise operation, while matrix multiplication is not.
A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector $v$ corresponds to the function $f\colon \{1, \ldots, n\} \to F$ such that $f(i) = v_i$, and any componentwise operation on vectors is the pointwise operation on the functions corresponding to those vectors.
Pointwise relations
In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B ca |
https://en.wikipedia.org/wiki/Inocybe%20hystrix | Inocybe hystrix is an agaric fungus in the family Inocybaceae. It forms mycorrhiza with surrounding deciduous trees. Fruit bodies are usually found growing alone or in small groups on leaf litter during autumn months. Unlike many Inocybe species, Inocybe hystrix is densely covered in brown scales, a characteristic that aids in identification. The mushroom also has a spermatic odour that is especially noticeable when the mushroom is damaged or crushed.
Like many other Inocybe mushrooms, Inocybe hystrix contains dangerous amounts of muscarine and should not be consumed.
Taxonomy
The species was first described in 1838 by Elias Fries under the name Agaricus hystrix. Finnish mycologist Petter Karsten later (1879) transferred it to Inocybe.
Description
Fruit bodies have convex to plano-convex caps measuring in diameter. The caps are dry with scales that can be either erect or flat on the surface. The colour is brown in the centre, becoming paler towards the edges. The flesh is white, and has a spermatic odour and mild taste. The gills are closely spaced, white to dull brown, and have fringed edges. The stipe measures long by thick, and is roughly the same width throughout its length; like the cap, it is scaly.
The spore print is cinnamon brown. Spores are roughly almond-shaped, smooth, inamyloid, and measure 8–12.5 by 5–6.5 µm. Clamp connections are present in the hyphae.
The species is poisonous.
Habitat and distribution
In North America and Europe, Inocybe hystrix grows in deciduous forest, especially beech. In Costa Rica, it is found in the Cordillera Talamanca, where it associates with Quercus costaricensis at elevations around .
See also
List of Inocybe species |
https://en.wikipedia.org/wiki/The%20Ultimate%20Resource | The Ultimate Resource is a 1981 book written by Julian Lincoln Simon challenging the notion that humanity was running out of natural resources. It was revised in 1996 as The Ultimate Resource 2.
Overview
The overarching thesis on why there is no resource crisis is that as a particular resource becomes more scarce, its price rises. This price rise creates an incentive for people to discover more of the resource, ration and recycle it, and eventually, develop substitutes. The "ultimate resource" is not any particular physical object but the capacity for humans to invent and adapt.
Scarcity
The work opens with an explanation of scarcity, noting its relation to price; high prices denote relative scarcity and low prices indicate abundance. Simon usually measures prices in wage-adjusted terms, since this is a measure of how much labor is required to purchase a fixed amount of a particular resource. Since prices for most raw materials (e.g., copper) have fallen between 1800 and 1990 (adjusting for wages and adjusting for inflation), Simon argues that this indicates that those materials have become less scarce.
Forecasting
Simon makes a distinction between "engineering" and "economic" forecasting. Engineering forecasting consists of estimating the known physical quantity of a resource, extrapolating the rate of use from current use, and subtracting the one from the other. Simon argues that these simple analyses are often wrong. While focusing only on proven resources is helpful in a business context, it is not appropriate for economy-wide forecasting. There exist undiscovered sources, sources not yet economically feasible to extract, sources not yet technologically feasible to extract, and ignored resources that could prove useful but are not yet worth trying to discover.
To counter the problems of engineering forecasting, Simon proposes economic forecasting, which proceeds in three steps in order to capture, in part, the unknowns the engineering method leaves out (p 27) |
https://en.wikipedia.org/wiki/Measurable%20acting%20group | In mathematics, a measurable acting group is a special group that acts on some space in a way that is compatible with structures of measure theory. Measurable acting groups are found in the intersection of measure theory and group theory, two sub-disciplines of mathematics. Measurable acting groups are the basis for the study of invariant measures in abstract settings, most famously the Haar measure, and the study of stationary random measures.
Definition
Let $(G, \mathcal{G}, \odot)$ be a measurable group, where $\mathcal{G}$ denotes the $\sigma$-algebra on $G$ and $\odot$ the group law. Let further $(S, \mathcal{S})$ be a measurable space, and let $\mathcal{G} \otimes \mathcal{S}$ be the product $\sigma$-algebra of the $\sigma$-algebras $\mathcal{G}$ and $\mathcal{S}$.
Let $G$ act on $S$ with group action
$$\Phi \colon G \times S \to S, \qquad (g, s) \mapsto g\,s.$$
If $\Phi$ is a measurable function from $(G \times S,\ \mathcal{G} \otimes \mathcal{S})$ to $(S, \mathcal{S})$, then it is called a measurable group action. In this case, the group $G$ is said to act measurably on $S$.
Example: Measurable groups as measurable acting groups
One special case of measurable acting groups are measurable groups themselves. If $S = G$, $\mathcal{S} = \mathcal{G}$, and the group action is the group law, then a measurable group is a group $G$, acting measurably on $G$. |
https://en.wikipedia.org/wiki/Hockey-stick%20identity | In combinatorial mathematics, the hockey-stick identity, Christmas stocking identity, boomerang identity, Fermat's identity or Chu's Theorem, states that if $n \geq r \geq 0$ are integers, then
$$\sum_{i=r}^{n} \binom{i}{r} = \binom{n+1}{r+1}.$$
The name stems from the graphical representation of the identity on Pascal's triangle: when the addends represented in the summation and the sum itself are highlighted, the shape revealed is vaguely reminiscent of those objects (see hockey stick, Christmas stocking).
Formulations
Using sigma notation, the identity states
$$\sum_{i=r}^{n} \binom{i}{r} = \binom{n+1}{r+1} \qquad \text{for } n, r \in \mathbb{N},\; n \geq r,$$
or equivalently, the mirror-image by the substitution $j \to i - r$:
$$\sum_{j=0}^{n-r} \binom{r+j}{j} = \binom{n+1}{n-r} \qquad \text{for } n, r \in \mathbb{N},\; n \geq r.$$
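The identity is easy to check numerically; a small sketch using Python's math.comb:

```python
# Verify sum_{i=r}^{n} C(i, r) == C(n+1, r+1) over a small range.
from math import comb

for n in range(12):
    for r in range(n + 1):
        assert sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)
print("identity holds for all 0 <= r <= n < 12")
```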
Proofs
Generating function proof
We have
$$X^{r} + X^{r+1} + \cdots + X^{n} = \frac{X^{n+1} - X^{r}}{X - 1}.$$
Let $X = 1 + x$, and compare coefficients of $x^{r}$: on the left, the coefficient of $x^{r}$ in $(1+x)^{i}$ is $\binom{i}{r}$, giving $\sum_{i=r}^{n} \binom{i}{r}$ in total; on the right, $\frac{(1+x)^{n+1} - (1+x)^{r}}{x}$, the coefficient of $x^{r}$ equals the coefficient of $x^{r+1}$ in $(1+x)^{n+1}$, namely $\binom{n+1}{r+1}$.
Inductive and algebraic proofs
The inductive and algebraic proofs both make use of Pascal's identity:
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}.$$
Inductive proof
This identity can be proven by mathematical induction on $n$.
Base case
Let $n = r$; then
$$\sum_{i=r}^{r} \binom{i}{r} = \binom{r}{r} = 1 = \binom{r+1}{r+1}.$$
Inductive step
Suppose, for some $n \in \mathbb{N}$ with $n \geq r$,
$$\sum_{i=r}^{n} \binom{i}{r} = \binom{n+1}{r+1}.$$
Then
$$\sum_{i=r}^{n+1} \binom{i}{r} = \sum_{i=r}^{n} \binom{i}{r} + \binom{n+1}{r} = \binom{n+1}{r+1} + \binom{n+1}{r} = \binom{n+2}{r+1},$$
where the last step uses Pascal's identity.
Algebraic proof
We use a telescoping argument to simplify the computation of the sum: writing $\binom{i}{r} = \binom{i+1}{r+1} - \binom{i}{r+1}$ (Pascal's identity), we get
$$\sum_{i=r}^{n} \binom{i}{r} = \sum_{i=r}^{n} \left[ \binom{i+1}{r+1} - \binom{i}{r+1} \right] = \binom{n+1}{r+1} - \binom{r}{r+1} = \binom{n+1}{r+1},$$
since $\binom{r}{r+1} = 0$.
Combinatorial proofs
Proof 1
Imagine that we are distributing $n$ indistinguishable candies to $k$ distinguishable children. By a direct application of the stars and bars method, there are
$$\binom{n+k-1}{k-1}$$
ways to do this. Alternatively, we can first give $0 \leq i \leq n$ candies to the oldest child so that we are essentially giving $n-i$ candies to $k-1$ kids and again, with stars and bars and double counting, we have
$$\binom{n+k-1}{k-1} = \sum_{i=0}^{n} \binom{n+k-2-i}{k-2},$$
which simplifies to the desired result by taking $x = n+k-2$ and $r = k-2$, and noticing (with the substitution $t = n+k-2-i$) that
$$\binom{x+1}{r+1} = \sum_{t=r}^{x} \binom{t}{r}.$$
Proof 2
We can form a committee of size $r+1$ from a group of $n+1$ people in
$$\binom{n+1}{r+1}$$
ways. Now we hand out the numbers $1, 2, \ldots, n+1$ to the people. We can divide this into $n-r+1$ disjoint cases. In general, in case $x$, $0 \leq x \leq n-r$, person $x+1$ is on the committee and persons $1, \ldots, x$ are not on the committee, so that person $x+1$ is the lowest-numbered member. The rest of the committee can then be chosen in
$$\binom{n-x}{r}$$
ways. Now we can sum the values of these $n-r+1$ disjoint cases, and using double counting we obtain
$$\binom{n+1}{r+1} = \sum_{x=0}^{n-r} \binom{n-x}{r} = \sum_{i=r}^{n} \binom{i}{r}.$$
See also
Pascal's identity
Pascal's triangle
Leibniz triangle
Vandermonde's identity
Faulhaber's formula, for sums of arbitrary polynomials. |
https://en.wikipedia.org/wiki/Bootstrapping%20node | A bootstrapping node, also known as a rendezvous host, is a node in an overlay network that provides initial configuration information to newly joining nodes so that they may successfully join the overlay network. Bootstrapping nodes are predominantly found in decentralized peer-to-peer (P2P) networks because of the dynamically changing identities and configurations of member nodes in these networks.
Overview
When attempting to join a P2P network, specific discovery or membership protocols (or other configuration information) may be required, and, if a newly joining node is unaware of these protocols, the newly established joining node will not be able to communicate with other nodes and ultimately join the network. Furthermore, these protocols and configuration requirements may dynamically change as the infrastructure and membership of the P2P network evolves. Therefore, there is a need to be able to dynamically inform a newly joining node of the required protocols and configurations.
Identifying a bootstrapping node
Several methods may be used by a joining node to identify bootstrapping nodes:
A joining node may have been pre-configured with the static addresses of the bootstrapping nodes. In such a case, the bootstrapping nodes' addresses cannot change, and these nodes should therefore be fault-tolerant members of the network that do not leave it.
Alternatively, the bootstrap node can be identified via a DNS service, where a domain name resolves to one of the bootstrapping nodes' addresses. This allows the bootstrapping nodes' addresses to change as needed.
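A minimal sketch of the DNS approach (the domain and port below are hypothetical placeholders, not a real service):

```python
# Resolve a bootstrap domain to candidate bootstrapping-node addresses.
import socket

BOOTSTRAP_DOMAIN = "bootstrap.example-overlay.net"   # hypothetical
PORT = 4000                                          # hypothetical

def bootstrap_candidates():
    """Return sockaddr tuples a joining node could try in turn."""
    infos = socket.getaddrinfo(BOOTSTRAP_DOMAIN, PORT, type=socket.SOCK_STREAM)
    return [info[4] for info in infos]

for addr in bootstrap_candidates():
    print("candidate bootstrapping node:", addr)
```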
Configuration information provided
The objective of the bootstrapping node is to provide newly joining nodes with sufficient configuration information so that the new node may then successfully join the network and access resources, such as shared content. Discovery protocol information can instruct the new node how to discover peers on the network. Membership protocol information can instru |
https://en.wikipedia.org/wiki/Influenza%20non-structural%20protein | Influenza non-structural protein (NS1) is a homodimeric RNA-binding protein found in influenza virus that is required for viral replication. NS1 binds the poly(A) tails of mRNAs, keeping them in the nucleus. NS1 also inhibits pre-mRNA splicing by binding tightly to a specific stem-bulge of U6 snRNA. |
https://en.wikipedia.org/wiki/Cobordism | In mathematics, cobordism is a fundamental equivalence relation on the class of compact manifolds of the same dimension, set up using the concept of the boundary (French bord, giving cobordism) of a manifold. Two manifolds of the same dimension are cobordant if their disjoint union is the boundary of a compact manifold one dimension higher.
The boundary of an (n + 1)-dimensional manifold W is an n-dimensional manifold ∂W that is closed, i.e., with empty boundary. In general, a closed manifold need not be a boundary: cobordism theory is the study of the difference between all closed manifolds and those that are boundaries. The theory was originally developed by René Thom for smooth manifolds (i.e., differentiable), but there are now also versions for piecewise linear and topological manifolds.
A cobordism between manifolds M and N is a compact manifold W whose boundary is the disjoint union of M and N, ∂W = M ⊔ N.
Cobordisms are studied both for the equivalence relation that they generate, and as objects in their own right. Cobordism is a much coarser equivalence relation than diffeomorphism or homeomorphism of manifolds, and is significantly easier to study and compute. It is not possible to classify manifolds up to diffeomorphism or homeomorphism in dimensions ≥ 4 – because the word problem for groups cannot be solved – but it is possible to classify manifolds up to cobordism. Cobordisms are central objects of study in geometric topology and algebraic topology. In geometric topology, cobordisms are intimately connected with Morse theory, and h-cobordisms are fundamental in the study of high-dimensional manifolds, namely surgery theory. In algebraic topology, cobordism theories are fundamental extraordinary cohomology theories, and categories of cobordisms are the domains of topological quantum field theories.
Definition
Manifolds
Roughly speaking, an n-dimensional manifold M is a topological space locally (i.e., near each point) homeomorphic to an open subset of Euclid |
https://en.wikipedia.org/wiki/Genome-wide%20complex%20trait%20analysis | Genome-wide complex trait analysis (GCTA), also known as genome-based restricted maximum likelihood (GREML), is a statistical method for heritability estimation in genetics, which quantifies the total additive contribution of a set of genetic variants to a trait. GCTA is typically applied to common single nucleotide polymorphisms (SNPs) on a genotyping array (or "chip") and thus termed "chip" or "SNP" heritability.
GCTA operates by directly quantifying the chance genetic similarity of unrelated individuals and comparing it to their measured similarity on a trait; if two unrelated individuals are relatively similar genetically and also have similar trait measurements, then the measured genetics are likely to causally influence that trait, and the correlation can to some degree tell how much. This can be illustrated by plotting the squared pairwise trait differences between individuals against their estimated degree of relatedness. GCTA makes a number of modeling assumptions and whether/when these assumptions are satisfied continues to be debated.
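The moment-based intuition behind this comparison can be sketched in a few lines of Python. Note the hedges: GCTA itself fits a REML model rather than the simple Haseman–Elston-style regression shown here, and real analyses use standardized SNP dosages rather than the Gaussian stand-ins below:
```python
import numpy as np

rng = np.random.default_rng(0)
n, m, h2 = 1000, 5000, 0.5           # individuals, SNPs, true heritability

Z = rng.standard_normal((n, m))      # stand-in for standardized genotypes
beta = rng.standard_normal(m) * np.sqrt(h2 / m)
y = Z @ beta + rng.standard_normal(n) * np.sqrt(1 - h2)

A = Z @ Z.T / m                      # genetic relationship matrix (GRM)

# Regress phenotype cross-products on pairwise relatedness; the slope
# estimates the SNP heritability (about 0.5 in this simulation).
iu = np.triu_indices(n, k=1)
slope, _ = np.polyfit(A[iu], np.outer(y, y)[iu], 1)
print(f"estimated SNP heritability ~ {slope:.2f}")
```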
The GCTA framework has also been extended in a number of ways: quantifying the contribution from multiple SNP categories (i.e. functional partitioning); quantifying the contribution of Gene-Environment interactions; quantifying the contribution of non-additive/non-linear effects of SNPs; and bivariate analyses of multiple phenotypes to quantify their genetic covariance (co-heritability or genetic correlation).
GCTA estimates have implications for the potential for discovery from Genome-wide Association Studies (GWAS) as well as the design and accuracy of polygenic scores. GCTA estimates from common variants are typically substantially lower than other estimates of total or narrow-sense heritability (such as from twin or kinship studies), which has contributed to the debate over the Missing heritability problem.
History
Estimation in biology/animal breeding using standard ANOVA/REML methods of variance components such as heritability, |
https://en.wikipedia.org/wiki/KCNC3 | Potassium voltage-gated channel, Shaw-related subfamily, member 3, also known as KCNC3 or Kv3.3, is a protein that in humans is encoded by the KCNC3 gene.
Function
The Shaker gene family of Drosophila encodes components of voltage-gated potassium channels and comprises four subfamilies. Based on sequence similarity, this gene is similar to one of these subfamilies, namely the Shaw subfamily. The protein encoded by this gene belongs to the delayed rectifier class of channel proteins and is an integral membrane protein that mediates the voltage-dependent potassium ion permeability of excitable membranes.
Clinical significance
KCNC3 is associated with spinocerebellar ataxia type 13.
See also
Voltage-gated potassium channel |
https://en.wikipedia.org/wiki/Traian%20Herseni | Traian Herseni (February 18, 1907 – July 17, 1980) was a Romanian social scientist, journalist, and political figure. First noted as a favorite disciple of Dimitrie Gusti, he helped establish the Romanian school of rural sociology in the 1920s and early '30s, and took part in interdisciplinary study groups and field trips. A prolific essayist and researcher, he studied isolated human groups across the country, trying to define relations between sociology, ethnography, and cultural anthropology, with an underlying interest in sociological epistemology. He was particularly interested in the peasant cultures and pastoral society of the Făgăraș Mountains. Competing with Anton Golopenția for the role of Gusti's leading disciple, Herseni emerged as the winner in 1937; from 1932, he also held a teaching position at the University of Bucharest.
Herseni became a committed eugenicist and racial scientist, who discarded a moderate left-wing stance to embrace fascism, and parted ways with Gusti over his support for the Iron Guard. He was nevertheless protected during the anti-Guard backlash of 1938, when Gusti made him a clerk within the Social Service, part of the National Renaissance Front apparatus. A leading functionary and ideologue of the fascist National Legionary State, and a figure of cultural and political importance under dictator Ion Antonescu, he proposed the compulsory sterilization of "inferior races", and wrote praises of Nazi racial policy. Indicted by the communist regime in 1951, he spent four years in prison. He made a slow return to favors as a researcher for the Romanian Academy, participating in the resumption of sociological research, as well as experimenting in social psychology and pioneering industrial sociology.
Formally a partisan of Marxism-Leninism after 1956, Herseni was more genuinely committed to national communism. The national communist policies instituted during the late 1960s allowed him to revisit some of his controversial theses about t |
https://en.wikipedia.org/wiki/MERMOZ | MERMOZ (also, MERMOZ project and Monitoring planEtary suRfaces with Modern pOlarimetric characteriZation) is an astrobiology project designed to remotely detect biosignatures of life. Detection is based on molecular homochirality, a characteristic property of the biochemicals of life. The aim of the project is to remotely identify and characterize life on the planet Earth from space, and to extend this technology to other solar system bodies and exoplanets. The project began in 2018, and is a collaboration of the University of Bern, University of Leiden and Delft University of Technology.
According to a member of the research team, “When light is reflected by biological matter, a part of the light’s electromagnetic waves will travel in either clockwise or counterclockwise spirals ... This phenomenon is called circular polarization and is caused by the biological matter’s homochirality.” These unique spirals of light indicate living materials; whereas, non-living materials do not reflect such unique spirals of light, according to the researchers.
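In Stokes-parameter terms, this signal is a nonzero circular component V in the reflected light. A toy sketch of the fractional circular polarization V/I, computed from intensities measured behind right- and left-circular analyzers (sign conventions vary by instrument, and the example numbers are invented):
```python
def circular_polarization_degree(i_rcp, i_lcp):
    """Fractional circular polarization V/I from intensities behind
    right- and left-circular analyzers: V = I_RCP - I_LCP, I = I_RCP + I_LCP."""
    return (i_rcp - i_lcp) / (i_rcp + i_lcp)

# Vegetation imprints only a very small circular signal; these numbers
# are illustrative placeholders, not measured values.
print(circular_polarization_degree(1.0001, 0.9999))  # ~1e-4
```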
The research team conducted feasibility studies, using a newly designed detection instrument, based on circular spectropolarimetry, and named FlyPol+ (an upgrade from the original FlyPol), by flying in a helicopter at an altitude of and velocity of for 25 minutes. The results were successful in remotely detecting living material, and quickly (within seconds) distinguishing living material from non-living material. The researchers concluded: "Circular spectropolarimetry can be a powerful technique to detect life beyond Earth, and we emphasize the potential of utilizing circular spectropolarimetry as a remote sensing tool to characterize and monitor in detail the vegetation physiology and terrain features of Earth itself."
The researchers next expect to scan the Earth from the International Space Station (ISS) with their detection instruments. One consequence of further successful studies is a possible pathfinder space m |
https://en.wikipedia.org/wiki/Human%20skeletal%20changes%20due%20to%20bipedalism | The evolution of human bipedalism, which began in primates approximately four million years ago, or as early as seven million years ago with Sahelanthropus, or approximately twelve million years ago with Danuvius guggenmosi, has led to morphological alterations to the human skeleton including changes to the arrangement, shape, and size of the bones of the foot, hip, knee, leg, and the vertebral column. These changes allowed for the upright gait to be overall more energy efficient in comparison to quadrupeds. The evolutionary factors that produced these changes have been the subject of several theories that correspond with environmental changes on a global scale.
Energy efficiency
Human walking is about 75% less costly than both quadrupedal and bipedal walking in chimpanzees. Some hypotheses have supported that bipedalism increased the energetic efficiency of travel and that this was an important factor in the origin of bipedal locomotion. Humans save more energy than quadrupeds when walking but not when running. Human running is 75% less efficient than walking. A 1980 study reported that walking in living hominin bipeds is noticeably more efficient than walking in living hominin quadrupeds, but the costs of quadrupedal and bipedal travel are the same.
Foot
Human feet evolved enlarged heels. The human foot evolved as a platform to support the entire weight of the body, rather than acting as a grasping structure, as it did in early hominids. Humans therefore have smaller toes than their bipedal ancestors. This includes a non-opposable hallux, which is relocated in line with the other toes. The push off would also require all the toes to be slightly bent up.
Humans have a foot arch rather than being flat footed. When non-human hominids walk upright, weight is transmitted from the heel, along the outside of the foot, and then through the middle toes while a human foot transmits weight from the heel, along the outside of the foot, across the ball of the foot and fina |
https://en.wikipedia.org/wiki/Google%20Apps%20Script | Google Apps Script is a scripting platform developed by Google for light-weight application development in the Google Workspace platform. Google Apps Script was initially developed by Mike Harm as a side project while working as a developer on Google Sheets. Google Apps Script was first publicly announced in May 2009 when a beta testing program was announced by Jonathan Rochelle, then Product Manager for Google Docs. In August 2009 Google Apps Script was subsequently made available to all Google Apps Premier and Education Edition customers. It is based on JavaScript 1.6, but also includes some portions of 1.7 and 1.8 and a subset of the ECMAScript 5 API. Apps Script projects run server-side on Google's infrastructure. According to Google, Apps Script "provides easy ways to automate tasks across Google products and third party services." Apps Script is also the tool that powers the add-ons for Google Docs, Sheets and Slides.
Benefits
Google Apps Script is based on JavaScript 1.6 and a selection of JavaScript 1.7 and 1.8. It features a cloud-based debugger for debugging App Scripts in the web browser. It can be used to create simple tools for an organization's internal consumption. It can be used to perform simple system administration tasks. It features a community-based support model.
Limitations
Google Apps Script has some processing limitations. As a cloud-based service, Apps Script limits the time that a user's script may run, as well as limiting access to Google services. Currently, Google Apps Script does not allow direct connection to internal (behind-the-firewall) corporate databases, which is key to building business apps. However, this can be overcome via the use of the JDBC service if connections are allowed from Google servers to the internal database server. Similarly, lack of other connectivity, such as LDAP connectivity, limits the level to which GAS can be used in the enterprise. Due to the cloud nature of Apps Script, functions related to date and t |
https://en.wikipedia.org/wiki/Cray%20S-MP | The Cray S-MP was a multiprocessor server computer sold by Cray Research from 1992 to 1993. It was based on the Sun SPARC microprocessor architecture and could be configured with up to eight 66 MHz BIT B5000 processors. Optionally, a Cray APP matrix co-processor cluster could be added to an S-MP system.
The S-MP was originally designed by FPS Computing as the FPS Model 500EA. FPS were acquired by Cray Research in 1991, becoming Cray Research Superservers Inc., and the Model 500EA was relaunched by Cray in 1992 as the S-MP.
The S-MP was a short-lived model, and was superseded by the Cray CS6400. |
https://en.wikipedia.org/wiki/Jacobi%20method%20for%20complex%20Hermitian%20matrices | In mathematics, the Jacobi method for complex Hermitian matrices is a generalization of the Jacobi iteration method. The Jacobi iteration method is also explained in "Introduction to Linear Algebra" by .
Derivation
The complex unitary rotation matrices R_pq can be used for Jacobi iteration of complex Hermitian matrices in order to find a numerical estimation of their eigenvectors and eigenvalues simultaneously.
Similar to the Givens rotation matrices, the R_pq are defined as:
Each rotation matrix, R_pq, will modify only the pth and qth rows or columns of a matrix M if it is applied from the left or right, respectively:
A Hermitian matrix, H, is defined by the conjugate-transpose symmetry property:
By definition, the conjugate transpose of a complex unitary rotation matrix, R, is its inverse and also a complex unitary rotation matrix:
Hence, the complex equivalent Givens transformation of a Hermitian matrix H is also a Hermitian matrix similar to H:
The elements of T can be calculated by the relations above. The important elements for the Jacobi iteration are the following four:
Each Jacobi iteration with R_Jpq generates a transformed matrix, T_J, with (T_J)_pq = 0. The rotation matrix R_Jpq is defined as a product of two complex unitary rotation matrices.
where the phase terms θ1 and θ2 are given by:
Finally, it is important to note that the product of two complex rotation matrices for given angles θ1 and θ2 cannot be transformed into a single complex unitary rotation matrix R_pq(θ). The product of two complex rotation matrices is given by: |
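Independent of the specific R_pq parameterization used by the method, the overall scheme can be sketched in Python with NumPy. In this sketch each 2×2 Hermitian (p, q) subproblem is diagonalized with the unitary eigenvector matrix returned by numpy.linalg.eigh, which plays the role of the combined rotation:
```python
import numpy as np

def jacobi_hermitian(H, sweeps=10):
    """Cyclic Jacobi-style diagonalization of a Hermitian matrix H.

    Each step applies a unitary similarity that zeroes the (p, q) and
    (q, p) entries; repeated sweeps drive T toward a diagonal matrix.
    Returns (approximate eigenvalues, accumulated eigenvector matrix).
    """
    T = np.array(H, dtype=complex)
    n = T.shape[0]
    V = np.eye(n, dtype=complex)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(T[p, q]) < 1e-14:
                    continue
                idx = np.ix_([p, q], [p, q])
                _, U = np.linalg.eigh(T[idx])   # unitary 2x2 rotation
                R = np.eye(n, dtype=complex)
                R[idx] = U
                T = R.conj().T @ T @ R          # zeroes T[p, q] and T[q, p]
                V = V @ R
    return np.real(np.diag(T)), V
```
Comparing the output against numpy.linalg.eigh on random Hermitian test matrices is a quick sanity check for the iteration.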
https://en.wikipedia.org/wiki/Brain%20size | The size of the brain is a frequent topic of study within the fields of anatomy, biological anthropology, animal science and evolution. Measuring brain size and cranial capacity is relevant both to humans and other animals, and can be done by weight or volume via MRI scans, by skull volume, or by neuroimaging intelligence testing. The relationship between brain size and intelligence remains a controversial although frequently investigated question.
Humans
In humans, the right cerebral hemisphere is typically larger than the left, whereas the cerebellar hemispheres are typically closer in size. The adult human brain weighs on average about . In men the average weight is about 1370 g and in women about 1200 g. The volume is around 1260 cm3 in men and 1130 cm3 in women, although there is substantial individual variation. Yet another study argued that human brain weight is 1300-1400 g for adults and 350-400 g for newborns. There is a range of volumes and weights, not just one number that one can definitively rely on, as with body mass. It is also important to note that variation between individuals within a species is not as important as variation between species, as overall the within-species differences are much smaller. The mechanisms of interspecific and intraspecific variation also differ.
Variation and evolution
From early primates to hominids and finally to Homo sapiens, the brain is progressively larger, with the exception of the extinct Neanderthals, whose brain size exceeded that of modern Homo sapiens. The volume of the human brain has increased as humans have evolved (see Homininae), starting from about 600 cm3 in Homo habilis up to 1680 cm3 in Homo neanderthalensis, which was the hominid with the biggest brain size. Some data suggest that the average brain size has decreased since then, including a study concluding the decrease "was surprisingly recent, occurring in the last 3,000 years". However, a reanalysis of the same data suggests that brain size has not decreased, and tha |
https://en.wikipedia.org/wiki/Commissure | A commissure () is the location at which two objects abut or are joined. The term is used especially in the fields of anatomy and biology.
The most common usage of the term refers to the brain's commissures, of which there are five. Such a commissure is a bundle of commissural fibers as a tract that crosses the midline at its level of origin or entry (as opposed to a decussation of fibers that cross obliquely). The five are the anterior commissure, posterior commissure, corpus callosum, commissure of fornix (hippocampal commissure), and habenular commissure. They consist of fibre tracts that connect the two cerebral hemispheres and span the longitudinal fissure. In the spinal cord there are the anterior white commissure, and the gray commissure. Commissural neurons refer to neuronal cells that grow their axons across the midline of the nervous system within the brain and the spinal cord.
Commissure also often refers to cardiac anatomy of heart valves. In the heart, a commissure is the area where the valve leaflets abut. When such an abutment is abnormally stiffened or even fused, valvular stenosis results, sometimes requiring commissurotomy.
The term may also refer to the junction of the upper and lower lips (see labial commissure of mouth).
It may refer to the junction of the upper and lower mandibles of a bird's beak, or alternately, to the full-length apposition of the closed mandibles, from the corners of the mouth to the tip of the beak.
It may refer to the nasal and temporal meeting points of the upper and lower eyelids (the medial and lateral canthi).
In female genitalia, the joining points of the two folds of the labia majora create two commissures - the anterior commissure just anterior to the prepuce of the clitoris, and the posterior commissure of the labia majora, directly posterior to the frenulum of the labia minora and anterior to the perineal raphe.
In biology, the meeting of the two valves of a brachiopod or clam is a commissure; in botany, |
https://en.wikipedia.org/wiki/Deoxyadenosine%20triphosphate | Deoxyadenosine triphosphate (dATP) is a nucleotide used in cells for DNA synthesis (or replication), as a substrate of DNA polymerase.
Deoxyadenosine triphosphate is produced from DNA by the action of nuclease P1, adenylate kinase, and pyruvate kinase.
Health effects
High levels of dATP can be toxic and result in impaired immune function, since dATP acts as a noncompetitive inhibitor for the DNA synthesis enzyme ribonucleotide reductase. Patients with adenosine deaminase deficiency (ADA) tend to have elevated intracellular dATP concentrations because adenosine deaminase normally curbs adenosine levels by converting it into inosine. Deficiency of this deaminase also causes immunodeficiency.
In cardiac myosin, dATP is an alternative to ATP as an energy substrate for facilitating cross-bridge formation.
See also
Adenosine triphosphate (ATP)
Adenosine deaminase deficiency (ADA)
Dilated cardiomyopathy (DCM) |
https://en.wikipedia.org/wiki/Penile%20discharge | Penile discharge is fluid that comes from the urethra at the end of the penis that is not urine, pre-ejaculate or semen.
Common causes include infections due to gonorrhea, chlamydia, or trichomoniasis. In gonorrhea the discharge may be white, yellow, or green.
A swab of the discharge is usually performed.
Treatment depends on the cause. Spread of infection is reduced by also treating sexual contacts.
Risk factors include being a sexually active man under the age of 25, having a recent new sexual partner, or having unprotected sex.
Definition and clinical features
Penile discharge is liquid from the urethra at the end of the penis that is not urine or semen. The dripping of clear fluid (pre-ejaculate) when sexually excited is normal.
There may be pain or burning when passing urine, soreness inside the penis or feeling of wanting to pass urine frequently.
Causes
Common causes include infections due to gonorrhea, chlamydia, or trichomoniasis.
Other causes include:
Non-specific urethritis
Acute prostatitis
Infection under the foreskin
Warts at the opening of the urethra
Herpes simplex virus ulcer at the opening of the urethra
Object in the urethra or recent surgical procedure.
A bloody discharge may be a sign of urethral cancer.
Evaluation
A swab of the discharge is usually performed. Other investigations may include tests for HIV, hepatitis and syphilis.
Men who have sex with men may also need to have throat and rectal swabs.
Treatment
Treatment depends on the cause and any antibiotic prescribed depends on which infection is found. Spread of infection is reduced by informing sexual partners so that they can also be treated, and not having sex (including oral or anal) until tests are completed and seven days have passed after treatment.
Epidemiology
Risk factors include being a sexually active man under the age of 25, having a recent new sexual partner, having unprotected sex (without a condom), or the presence of any sexually transmitted infection. |
https://en.wikipedia.org/wiki/Multivision%20%28television%20technology%29 | MultiVision was one of the earliest implementations of PIP (picture-in-picture) television available for purchase by users, pioneered by engineer George Schnurle III and sold by the San Jose, California-based company Multivision Products Inc.
The original MultiVision model was a box that measured by and was high. It required a VCR to operate and used its own tuner and the VCR to display two television channels. The television antenna was plugged into the MultiVision unit, which was then plugged into the television receiver's antenna input. The program selected on the MultiVision tuner was displayed in a small window inserted into the main TV picture at a position selected by the user. It also functioned as a switching device to connect additional peripherals (such as a laserdisc player) and offered audio outputs to connect external speakers and provide stereo sound. For monaural broadcasts and VHS tapes, the device could provide synthesized stereo audio.
The MultiVision 3.1 model was an unusually shaped device, similar in size to the original, that lacked any form of controls on the device itself. It used its own two tuners and/or a VCR and/or other devices to display two video sources at once. The tunerless MultiVision 1.1 model looked virtually identical to the 3.1 except in rear view, and featured four sets of composite video plus left and right audio inputs, plus switchable external audio and video processor loops. Both provided composite and left and right audio outputs for TV input.
On the 1.1 and 3.1 models, the audio could be set in sync to either the main source or the PIP or selected independently. The 1.1 model's remote had 12 color-coded buttons, 4 each for the main picture, PIP picture, and audio, and like the 3.1's remote included other buttons for swapping main and inset picture, PIP on/off, PIP size, PIP position, audio sync on or off, mute, and more. Their remotes featured angled output ends, which facilitated accurate button selection whilst reclined. |
https://en.wikipedia.org/wiki/Memory%20module | In computing, a memory module or RAM (random-access memory) stick is a printed circuit board on which memory integrated circuits are mounted. Memory modules permit easy installation and replacement in electronic systems, especially computers such as personal computers, workstations, and servers. The first memory modules were proprietary designs that were specific to a model of computer from a specific manufacturer. Later, memory modules were standardized by organizations such as JEDEC and could be used in any system designed to use them.
Types of memory module include:
TransFlash Memory Module
SIMM, a single in-line memory module
DIMM, dual in-line memory module
Rambus memory modules are a subset of DIMMs, but are normally referred to as RIMMs
SO-DIMM, small outline DIMM, a smaller version of the DIMM, used in laptops
Compression Attached Memory Module, thinner than SO-DIMM
Distinguishing characteristics of computer memory modules include voltage, capacity, speed (i.e., bit rate), and form factor.
For economic reasons, the large (main) memories found in personal computers, workstations, and non-handheld game-consoles (such as PlayStation and Xbox) normally consist of dynamic RAM (DRAM). Other parts of the computer, such as cache memories normally use static RAM (SRAM). Small amounts of SRAM are sometimes used in the same package as DRAM. However, since SRAM has high leakage power and low density, die-stacked DRAM has recently been used for designing multi-megabyte sized processor caches.
Physically, most DRAM is packaged in black epoxy resin.
General DRAM formats
Dynamic random access memory is produced as integrated circuits (ICs) bonded and mounted into plastic packages with metal pins for connection to control signals and buses. In early use individual DRAM ICs were usually either installed directly to the motherboard or on ISA expansion cards; later they were assembled into multi-chip plug-in modules (DIMMs, SIMMs, etc.). Some standard module types ar |
https://en.wikipedia.org/wiki/History%20of%20IBM | International Business Machines (IBM) is a multinational corporation specializing in computer technology and information technology consulting. Headquartered in Armonk, New York, United States, the company traces its roots to the amalgamation of various enterprises dedicated to automating routine business transactions, notably pioneering punched card-based data tabulating machines and time clocks. In 1911, these entities were unified under the umbrella of the Computing-Tabulating-Recording Company (CTR).
Thomas J. Watson (1874–1956) assumed the role of General Manager within the company in 1914 and ascended to the position of President in 1915. By 1924, the company rebranded as "International Business Machines." IBM diversified its offerings to include electric typewriters and other office equipment. Watson, a proficient salesman, aimed to cultivate a highly motivated, well-compensated sales force capable of devising solutions for clients unacquainted with the latest technological advancements.
In the 1940s and 1950s, IBM initiated its initial forays into computing, which constituted incremental improvements to the prevailing card-based system. A pivotal moment arrived in the 1960s with the introduction of the System/360 family of mainframe computers. IBM provided a comprehensive spectrum of hardware, software, and service agreements, fostering client loyalty and solidifying its moniker "Big Blue." The customized nature of end-user software, tailored by in-house programmers for a specific brand of computers, deterred brand switching due to its associated costs. Despite challenges posed by clone makers like Amdahl and legal confrontations, IBM leveraged its esteemed reputation, assuring clients with both hardware and system software solutions, earning acclaim as one of the esteemed American corporations during the 1970s and 1980s.
However, IBM encountered difficulties in the late 1980s and 1990s, marked by substantial losses surpassing $8 billion in 1993. The main |
https://en.wikipedia.org/wiki/List%20of%20unexplained%20sounds | The following is a list of unidentified, or formerly unidentified, sounds. All of the sound files in this article have been sped up by at least a factor of 16 to increase intelligibility by condensing them and raising the frequency from infrasound to a more audible and reproducible range.
Unidentified sounds
The following unidentified sounds have been detected by the U.S. National Oceanic and Atmospheric Administration (NOAA) using its Equatorial Pacific Ocean autonomous hydrophone array.
Upsweep
Upsweep is an unidentified sound detected on the American NOAA's equatorial autonomous hydrophone arrays. This sound was present when the Pacific Marine Environmental Laboratory began recording its sound surveillance system, SOSUS, in August 1991. It consists of a long train of narrow-band upsweeping sounds of several seconds in duration each. The source level is high enough to be recorded throughout the Pacific.
The sound appears to be seasonal, generally reaching peaks in spring and autumn, but it is unclear whether this is due to changes in the source or seasonal changes in the propagation environment. The source can be roughly located at , between New Zealand and South America. Scientists/researchers of NOAA speculate the sound to be underwater volcanic activity. The Upsweep's level of sound (volume) has been declining since 1991, but it can still be detected on NOAA's equatorial autonomous hydrophone arrays.
Whistle
This sound, dubbed the Whistle, was recorded by the eastern Pacific autonomous hydrophone deployed at on July 7, 1997 at 07:30 GMT. According to NOAA, the Whistle is similar to volcanogenic sounds previously recorded in the Mariana volcanic arc of the Pacific Ocean. NOAA also stated that locating the source of an event requires at least three recording instruments, and since the Whistle was only recorded on the NW hydrophone, the sound could have traveled a great distance from its source volcano before detection.
NOAA (formerly unidentified)
Bloop |
https://en.wikipedia.org/wiki/Ritipenem | Ritipenem is a penem class antimicrobial agent. Ritipenem is manufactured by Tanabe Seiyaku in the ritipenem acoxil prodrug form, which can be taken orally. It is not FDA-approved in the United States as of 2008. |
https://en.wikipedia.org/wiki/Behavior%20modification%20facility | A behavior modification facility (or youth residential program) is a residential educational and treatment institution enrolling adolescents who are perceived as displaying antisocial behavior, in an attempt to alter their conduct.
Due to irregular licensing rules across countries and states, as well as ambiguity regarding the labels that facilities use themselves, it is hard to gauge how widespread the facilities are. The facilities are part of what has been called the Troubled Teen Industry. Programs in the United States have been controversial due to widespread allegations of abuse and trauma imposed on the adolescents who are enrolled, as well as deceptive marketing practices aimed at parents. Critics say the facilities do not use evidence-based treatments.
Methodologies used in such programs
Practices and service quality in such programs vary greatly. The behavior modification methodologies used vary, but a combination of positive and negative reinforcement is typically used. Often these methods are delivered in a contingency management format such as a point system or level system. Such methodology has been found to be highly effective in the treatment of disruptive disorders (see the meta-analysis of Chen & Ma, 2007).
Positive reinforcement mechanisms include points, rewards and signs of status, while punishment procedures may include time-outs, point deductions, reversal of status, prolonged stays at a facility, physical restraint, or even corporal punishment. Research showed that time-out length was not a factor, and suggestions were made to limit time-outs to five-minute durations. A newer approach uses graduated sanctions. Staff appear easily trained in behavioral intervention; such training is maintained, leads to improved consumer outcomes, and reduces turnover. More restrictive punishment procedures are in general less appealing to staff and administrators.
Behavioral programs were found to lessen the need for medication. Several studies h |
https://en.wikipedia.org/wiki/Image%20histogram | An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image a viewer will be able to judge the entire tonal distribution at a glance.
Image histograms are present on many modern digital cameras. Photographers can use them as an aid to show the distribution of tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows. This is less useful when using a raw image format, as the dynamic range of the displayed image may only be an approximation to that in the raw file.
The horizontal axis of the graph represents the tonal variations, while the vertical axis represents the total number of pixels in that particular tone.
The left side of the horizontal axis represents the dark areas, the middle represents mid-tone values and the right hand side represents light areas. The vertical axis represents the size of the area (total number of pixels) that is captured in each one of these zones.
Thus, the histogram for a very dark image will have most of its data points on the left side and center of the graph.
Conversely, the histogram for a very bright image with few dark areas and/or shadows will have most of its data points on the right side and center of the graph.
Image manipulation and histograms
Image editors typically create a histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness or tonal value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Histogram equalization is a popular example of these algorithms. Improvements in picture brightness and contrast can thus be obtained.
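A minimal NumPy sketch of both steps for an 8-bit grayscale image, counting pixels per tonal value and remapping through the normalized cumulative distribution (it assumes a non-constant image, so the normalization below is well defined):
```python
import numpy as np

def equalize(gray):
    """Histogram-equalize an 8-bit grayscale image (2-D uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256)    # pixels per tonal value
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[gray] * 255).astype(np.uint8)          # remap every pixel
```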
In the field of computer vision, image histograms can be useful tools f |
https://en.wikipedia.org/wiki/Data%20Transfer%20Project | The Data Transfer Project (DTP) is an open-source initiative which features data portability between multiple online platforms. The project was launched and introduced by Google on July 20, 2018, and has currently partnered with Facebook, Microsoft, Twitter, and Apple.
Background
The project was formed by the Google Data Liberation Front in 2017, hoping to provide a platform that could allow individuals to move their online data between different platforms, without the need to download and re-upload it. The ecosystem is achieved by extracting files through the various APIs released by online platforms and translating the data so that it is compatible with other platforms. Similarly, the Data Transfer Project is currently being used as a part of Google Takeout and a similar program in Facebook (called "Access your information"), allowing the two personal data downloading services to be compatible with each other. This allows data to be easily transferred between the two platforms.
On July 20, 2018, the joint project was announced. The source code, which has been uploaded to GitHub, was mainly written by Google and Microsoft's engineers.
On July 30, 2019, Apple announced that it will be joining the project, allowing data portability in iCloud.
Implementations
On December 2, 2019, Facebook announced the ability for users to transfer photos and videos to Google Photos, originally available only in a select few countries. This expanded over the following months, and on June 4, 2020, Facebook announced full global availability of this feature.
See also
Data portability
Google Takeout |
https://en.wikipedia.org/wiki/Coccolithophore | Coccolithophores, or coccolithophorids, are single-celled organisms which are part of the phytoplankton, the autotrophic (self-feeding) component of the plankton community. They form a group of about 200 species, and belong either to the kingdom Protista, according to Robert Whittaker's five-kingdom system, or clade Hacrobia, according to a newer biological classification system. Within the Hacrobia, the coccolithophores are in the phylum or division Haptophyta, class Prymnesiophyceae (or Coccolithophyceae). Coccolithophores are almost exclusively marine, are photosynthetic, and exist in large numbers throughout the sunlight zone of the ocean.
Coccolithophores are the most productive calcifying organisms on the planet, covering themselves with a calcium carbonate shell called a coccosphere. However, the reasons they calcify remain elusive. One key function may be that the coccosphere offers protection against microzooplankton predation, which is one of the main causes of phytoplankton death in the ocean.
Coccolithophores are ecologically important, and biogeochemically they play significant roles in the marine biological pump and the carbon cycle. Depending on habitat, they can produce up to 40 percent of the local marine primary production. They are of particular interest to those studying global climate change because, as ocean acidity increases, their coccoliths may become even more important as a carbon sink. Management strategies are being employed to prevent eutrophication-related coccolithophore blooms, as these blooms lead to a decrease in nutrient flow to lower levels of the ocean.
The most abundant species of coccolithophore, Emiliania huxleyi, belongs to the order Isochrysidales and family Noëlaerhabdaceae. It is found in temperate, subtropical, and tropical oceans. This makes E. huxleyi an important part of the planktonic base of a large proportion of marine food webs. It is also the fastest growing coccolithophore in laboratory cultures. It is studi |
https://en.wikipedia.org/wiki/Spiraea%20%C3%97%20cinerea | Spiraea × cinerea is a species of flowering plant in the rose family. It is a hybrid of garden origin (S. hypericifolia × S. cana). Growing to tall and wide, this compact deciduous shrub bears small, lanceolate leaves and multiple white blooms along its arching stems in spring.
The Latin specific epithet cinerea means “the colour of ash”.
The cultivar ‘Grefsheim’ is widely grown as a garden plant. Hardy down to , it is easy to grow in a sunny mixed planting. It has gained the Royal Horticultural Society’s Award of Garden Merit. |
https://en.wikipedia.org/wiki/Fractal-generating%20software | Fractal-generating software is any type of graphics software that generates images of fractals. There are many fractal generating programs available, both free and commercial. Mobile apps are available to play or tinker with fractals. Some programmers create fractal software for themselves because of the novelty and because of the challenge in understanding the related mathematics. The generation of fractals has led to some very large problems for pure mathematics.
Fractal generating software creates mathematical beauty through visualization. Modern computers may take seconds or minutes to complete a single high resolution fractal image. Images are generated for both simulation (modeling) and random fractals for art. Fractal generation used for modeling is part of realism in computer graphics. Fractal generation software can be used to mimic natural landscapes with fractal landscapes and scenery generation programs. Fractal imagery can be used to introduce irregularity to an otherwise sterile computer generated environment.
Fractals are generated in music visualization software, screensavers and wallpaper generators. This software presents the user with a more limited range of settings and features, sometimes relying on a series of pre-programmed variables. Because complex images can be generated from simple formulas, fractals are often used in the demoscene. The generation of fractals such as the Mandelbrot set is time-consuming and requires many computations, so it is often used in benchmarking devices.
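As an illustration of the kind of computation involved, a short escape-time sketch for the Mandelbrot set in Python with NumPy (resolution, viewport and iteration count are arbitrary choices):
```python
import numpy as np

def mandelbrot(width=800, height=600, max_iter=100):
    """Return per-pixel escape-time iteration counts for the Mandelbrot set."""
    x = np.linspace(-2.5, 1.0, width)
    y = np.linspace(-1.25, 1.25, height)
    c = x[None, :] + 1j * y[:, None]       # one complex number per pixel
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for i in range(max_iter):
        active = np.abs(z) <= 2.0          # points that have not escaped yet
        z[active] = z[active] ** 2 + c[active]
        counts[active] = i
    return counts                          # map counts to colors to render
```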
History
The generation of fractals by calculation without computer assistance was undertaken by German mathematician Georg Cantor in 1883 to create the Cantor set. Throughout the following years, mathematicians have postulated the existence of numerous fractals. Some were conceived before the naming of fractals in 1975, for example, the Pythagoras tree by Dutch mathematics teacher Albert E. Bosman in 1942.
The development of the first fractal generating softw |
https://en.wikipedia.org/wiki/Titus%20%28dinosaur%29 | Titus is an obsidian black skeleton of a Tyrannosaurus rex discovered in Montana's Hell Creek Formation in 2014 and excavated in 2018.
Titus was on display as the centrepiece of an exhibition at the Nottingham Natural History Museum, England, from July 2021 to August 2022. According to the Nottingham City Council, it is also a rare instance of an actual Tyrannosaurus fossil leaving North America. The exhibit includes 3D scanned replicas of the skeleton, which visitors can inspect and handle. He is named after the protagonist in Shakespeare's Titus Andronicus. The owner of Titus remains anonymous.
Description
The mounted Titus skeleton measures high and long. The skeleton comprises 59 preserved elements, representing about 20% of the bones in an adult T. rex. External bone inspection has revealed injuries to Titus' right tibia (possibly a claw or bite wound); a deformed toe on the right foot; and a bitten and healed tail. The bite wound near the end of the tail indicates a possible attack by another Tyrannosaurus.
Discovery
In September 2014, commercial paleontologist Craig Pfister first discovered the remains of Titus near Ekalaka, Carter County, Montana. The site was an ancient river channel whence the specimen may have been transported in a flood event which also winnowed the skeleton and may in part explain why only 20% of the bones were preserved. Pfister originally found a broken tibia, and said he knew right away that it belonged to a Tyrannosaurus rex, but was sidetracked by the discovery of a nearby Triceratops. Excavation of the specimen began in 2018, and took 18 months.
Reconstruction and exhibition
The bones of Titus were shipped to conservationist Nigel Larkin in the United Kingdom, who assessed and conserved the bones. Larkin reconstructed the mount using a cast of the Tyrannosaurus specimen Stan to supplement the known bones of "Titus", after scanning the bones using photog |
https://en.wikipedia.org/wiki/Equivalence%20test | Equivalence tests are a variety of hypothesis tests used to draw statistical inferences from observed data. In these tests, the null hypothesis is defined as an effect large enough to be deemed interesting, specified by an equivalence bound. The alternative hypothesis is any effect that is less extreme than said equivalence bound. The observed data are statistically compared against the equivalence bounds. If the statistical test indicates the observed data is surprising, assuming that true effects are at least as extreme as the equivalence bounds, a Neyman-Pearson approach to statistical inferences can be used to reject effect sizes larger than the equivalence bounds with a pre-specified Type 1 error rate.
Equivalence testing originates from the field of clinical trials. One application, known as a non-inferiority trial, is used to show that a new drug that is cheaper than available alternatives works as well as an existing drug. In essence, equivalence tests consist of calculating a confidence interval around an observed effect size and rejecting effects more extreme than the equivalence bound when the confidence interval does not overlap with the equivalence bound. In two-sided tests, both upper and lower equivalence bounds are specified. In non-inferiority trials, where the goal is to test the hypothesis that a new treatment is not worse than existing treatments, only a lower equivalence bound is specified. Equivalence tests can be performed in addition to null-hypothesis significance tests. This might prevent common misinterpretations of p-values larger than the alpha level as support for the absence of a true effect. Furthermore, equivalence tests can identify effects that are statistically significant but practically insignificant, whenever effects are statistically different from zero, but also statistically smaller than any effect size deemed worthwhile (see the first figure). Equivalence tests were originally used in areas such as pharmaceutics, fr |
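A minimal one-sample TOST sketch in Python with SciPy, following the confidence-interval logic described above; the equivalence bounds low and high are study-specific choices, not fixed by the method:
```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided t-tests for equivalence of a mean to the (low, high) band.

    Equivalence is declared when both one-sided p-values fall below alpha,
    equivalently when the 100*(1 - 2*alpha)% CI lies inside the bounds.
    """
    n, mean, se = len(x), np.mean(x), stats.sem(x)
    p_lower = stats.t.sf((mean - low) / se, df=n - 1)    # H0: mean <= low
    p_upper = stats.t.cdf((mean - high) / se, df=n - 1)  # H0: mean >= high
    return max(p_lower, p_upper) < alpha
```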
https://en.wikipedia.org/wiki/Lymphocyte%20function-associated%20antigen%201 | Lymphocyte function-associated antigen 1 (LFA-1) is an integrin found on lymphocytes and other leukocytes. LFA-1 plays a key role in emigration, which is the process by which leukocytes leave the bloodstream to enter the tissues. LFA-1 also mediates firm arrest of leukocytes. Additionally, LFA-1 is involved in the process of cytotoxic T cell mediated killing as well as antibody mediated killing by granulocytes and monocytes. As of 2007, LFA-1 has 6 known ligands: ICAM-1, ICAM-2, ICAM-3, ICAM-4, ICAM-5, and JAM-A. LFA-1/ICAM-1 interactions have recently been shown to stimulate signaling pathways that influence T cell differentiation. LFA-1 belongs to the integrin superfamily of adhesion molecules.
Structure
LFA-1 is a heterodimeric glycoprotein with non-covalently linked subunits. LFA-1 has two subunits designated as the alpha subunit and beta subunit. The alpha subunit was named αL in 1983. The alpha subunit is designated CD11a; and the beta subunit, unique to leukocytes, is beta-2 or CD18. The ICAM binding site is on the alpha subunit. The general binding region of the alpha subunit is the I-domain. Due to the presence of a divalent cation site in the I-domain, the specific binding site is often referred to as the metal-ion dependent adhesion site (MIDAS).
Activation
In an inactive state, LFA-1 rests in a bent conformation and has a low affinity for ICAM binding. This bent conformation conceals the MIDAS. Chemokines stimulate the activation process of LFA-1. The activation process begins with the activation of Rap1, an intracellular g-protein. Rap1 assists in breaking the constraint between the alpha and beta subunits of LFA-1. This induces an intermediate extended conformation. The conformational change stimulates a recruitment of proteins to form an activation complex. The activation complex further destabilizes the alpha and beta subunits. Chemokines also stimulate an I-like domain on the beta subunit, which causes the MIDAS site on the beta subunit to bind |
https://en.wikipedia.org/wiki/George%20Lusztig | George Lusztig (born Gheorghe Lusztig; May 20, 1946) is a Romanian-born American mathematician and Abdun Nur Professor at the Massachusetts Institute of Technology (MIT). He was a Norbert Wiener Professor in the Department of Mathematics from 1999 to 2009.
Education and career
Born in Timișoara to a Hungarian-Jewish family, he did his undergraduate studies at the University of Bucharest, graduating in 1968. Later that year he left Romania for the United Kingdom, where he spent several months at the University of Warwick and Oxford University. In 1969 he moved to the United States, where he went to work for two years with Michael Atiyah at the Institute for Advanced Study in Princeton, New Jersey. He received his PhD in mathematics in 1971 after completing a doctoral dissertation, titled "Novikov's higher signature and families of elliptic operators", under the supervision of William Browder and Michael Atiyah.
Lusztig worked for almost seven years at the University of Warwick. His involvement at the university encompassed a Research Fellowship, (1971–72); lecturer in Mathematics, (1972–74); and Professor of Mathematics, (1974–78). In 1978, he accepted a chair at MIT.
Contributions
He is known for his work on representation theory, in particular for the objects closely related to algebraic groups, such as finite reductive groups, Hecke algebras, p-adic groups, quantum groups, and Weyl groups. He essentially paved the way for modern representation theory. This has included fundamental new concepts, including the character sheaves, the Deligne–Lusztig varieties, and the Kazhdan–Lusztig polynomials.
Awards and honors
In 1983, Lusztig was elected as a fellow of the Royal Society. In 1985 Lusztig won the Cole Prize (Algebra). He was elected to the National Academy of Sciences in 1992. He received the Brouwer Medal in 1999, the National Order of Faithful Service in 2003 and the Leroy P. Steele Prize for Lifetime Achievement in Mathematics in 2008. In 2012, he became a f |
https://en.wikipedia.org/wiki/Osmotic%20concentration | Osmotic concentration, formerly known as osmolarity, is the measure of solute concentration, defined as the number of osmoles (Osm) of solute per litre (L) of solution (osmol/L or Osm/L). The osmolarity of a solution is usually expressed as Osm/L (pronounced "osmolar"), in the same way that the molarity of a solution is expressed as "M" (pronounced "molar"). Whereas molarity measures the number of moles of solute per unit volume of solution, osmolarity measures the number of osmoles of solute particles per unit volume of solution. This value allows the measurement of the osmotic pressure of a solution and the determination of how the solvent will diffuse across a semipermeable membrane (osmosis) separating two solutions of different osmotic concentration.
Unit
The unit of osmotic concentration is the osmole. This is a non-SI unit of measurement that defines the number of moles of solute that contribute to the osmotic pressure of a solution. A milliosmole (mOsm) is 1/1,000 of an osmole. A microosmole (μOsm) (also spelled micro-osmole) is 1/1,000,000 of an osmole.
Types of solutes
Osmolarity is distinct from molarity because it measures osmoles of solute particles rather than moles of solute. The distinction arises because some compounds can dissociate in solution, whereas others cannot.
Ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. For example, sodium chloride (NaCl) dissociates into Na+ and Cl− ions. Thus, for every 1 mole of NaCl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/L NaCl solution is a 2 osmol/L NaCl solution). Both sodium and chloride ions affect the osmotic pressure of the solution.
Another example is magnesium chloride (MgCl2), which dissociates into Mg2+ and 2Cl− ions. For every 1 mole of MgCl2 in the solution, there are 3 osmoles of solute particles.
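These worked examples reduce to multiplying the molar concentration by the number of particles each formula unit yields; a tiny Python helper, assuming the idealization of complete dissociation:
```python
def osmolarity(molarity, particles_per_formula_unit):
    """Osmoles per litre, under the assumption of complete dissociation."""
    return molarity * particles_per_formula_unit

print(osmolarity(1.0, 2))  # 1 mol/L NaCl  -> 2 osmol/L (Na+ and Cl-)
print(osmolarity(1.0, 3))  # 1 mol/L MgCl2 -> 3 osmol/L (Mg2+ and 2 Cl-)
```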
Nonionic compounds do not dissociate |
https://en.wikipedia.org/wiki/Excess-3 | Excess-3, 3-excess or 10-excess-3 binary code (often abbreviated as XS-3, 3XS or X3), shifted binary or Stibitz code (after George Stibitz, who built a relay-based adding machine in 1937) is a self-complementary binary-coded decimal (BCD) code and numeral system. It is a biased representation. Excess-3 code was used on some older computers as well as in cash registers and hand-held portable electronic calculators of the 1970s, among other uses.
Representation
Biased codes are a way to represent values with a balanced number of positive and negative numbers using a pre-specified number N as a biasing value. Biased codes (and Gray codes) are non-weighted codes. In excess-3 code, numbers are represented as decimal digits, and each digit is represented by four bits as the digit value plus 3 (the "excess" amount):
The smallest binary number, 0000, represents the smallest value (0 − 3 = −3).
The greatest binary number, 1111, represents the largest value (2^4 − 1 − 3 = 12).
To encode a number such as 127, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010).
Excess-3 arithmetic uses different algorithms than normal non-biased BCD or binary positional system numbers. After adding two excess-3 digits, the raw sum is excess-6. For instance, after adding 1 (0100 in excess-3) and 2 (0101 in excess-3), the sum looks like 6 (1001 in excess-3) instead of 3 (0110 in excess-3). To correct this problem, after adding two digits, it is necessary to remove the extra bias by subtracting binary 0011 (decimal 3 in unbiased binary) if the resulting digit is less than decimal 10, or subtracting binary 1101 (decimal 13 in unbiased binary) if an overflow (carry) has occurred. (In 4-bit binary, subtracting binary 1101 is equivalent to adding 0011 and vice versa.)
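A digit-by-digit Python sketch of this correction rule; digits are handled as integers, and the function returns the decimal carry together with the corrected 4-bit excess-3 code:
```python
def excess3_add_digit(a, b, carry_in=0):
    """Add decimal digits a and b in excess-3; return (carry_out, code).

    The raw 4-bit sum of two excess-3 codes is biased by 6, so it is
    corrected by subtracting 3 (no carry) or, on overflow, by adding 3
    to the wrapped 4-bit result (equivalent to subtracting binary 1101).
    """
    raw = (a + 3) + (b + 3) + carry_in   # both operands carry the +3 bias
    if raw > 15:                         # 4-bit overflow: decimal carry out
        return 1, (raw - 16) + 3
    return 0, raw - 3

print(excess3_add_digit(1, 2))  # (0, 6): code 0110, digit 3 in excess-3
print(excess3_add_digit(5, 7))  # (1, 5): code 0101, digit 2 with carry 1
```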
Motivation
The primary advantage of excess-3 coding over non-biased coding is that a decimal number can be nines' complemented (for subtraction) as easily as a binary number can be ones' complemented: just by inverting all bits. Also, when |
https://en.wikipedia.org/wiki/Code%20page%201043 | Code page 1043 (CCSID 1043), also known as Traditional Chinese PC Data Extended, is a single byte character set (SBCS) used by IBM in its PC DOS operating system.
This code page is intended for use with code page 927 (Traditional Chinese double byte character set). It is an extension of Code page 904.
Codepage layout |
https://en.wikipedia.org/wiki/Reports%20of%20Streptococcus%20mitis%20on%20the%20Moon | As part of the Apollo 12 mission, the camera from the Surveyor 3 probe was brought back from the Moon to Earth. On analyzing the camera, it was found that the common bacterium Streptococcus mitis was alive on it. This was attributed by NASA to the camera not having been sterilized on Earth prior to its launch two and a half years previously. However, a later study showed that the scientists analysing the camera on its return to Earth used procedures inadequate to prevent recontamination, for instance working with their arms exposed rather than covering their entire bodies as modern scientists would. There may also have been opportunities for contamination during the return mission, as the camera was returned in a porous bag rather than the airtight containers used for lunar sample return. As a result, the source of the contamination remains controversial.
History
Since the Apollo Program, there has been at least one independent investigation into the validity of the NASA claim. Leonard D. Jaffe, a Surveyor program scientist and custodian of the Surveyor 3 parts brought back from the Moon, stated in a letter to the Planetary Society that a member of his staff reported that a "breach of sterile procedure" took place at just the right time to produce a false positive result. One of the implements being used to scrape samples off the Surveyor parts was laid down on a non-sterile laboratory bench, and then was used to collect surface samples for culturing. Jaffe wrote, "It is, therefore, quite possible that the microorganisms were transferred to the camera after its return to Earth, and that they had never been to the Moon." In 2007, NASA funded an archival study that sought the film of the camera-body microbial sampling, to confirm the report of a breach in sterile technique.
The bacterial test is now non-repeatable because the parts were subsequently taken out of quarantine and fully re-exposed to terrestrial conditions (the Surveyor 3 camera is now |
https://en.wikipedia.org/wiki/Insulated%20shipping%20container | Insulated shipping containers are a type of packaging used to ship temperature sensitive products such as foods, pharmaceuticals, organs, blood, biologic materials, vaccines and chemicals. They are used as part of a cold chain to help maintain product freshness and efficacy. The term can also refer to insulated intermodal containers or insulated swap bodies.
Construction
A variety of constructions have been developed. An insulated shipping container might be constructed of:
a vacuum flask, similar to a "thermos" bottle
fabricated thermal blankets or liners
molded expanded polystyrene foam (EPS, styrofoam), similar to a cooler
other molded foams such as polyurethane, polyethylene
sheets of foamed plastics
Vacuum Insulated Panels (VIPs)
reflective materials: (metallised film)
bubble wrap or other gas filled panels
other packaging materials and structures
Some are designed for single use while others are returnable for reuse. Some insulated containers are decommissioned refrigeration units. Some empty containers are sent to the shipper disassembled or “knocked down”, assembled and used, then knocked down again for easier return shipment.
Shipping containers are available for maintaining cryogenic temperatures, with the use of liquid nitrogen. Some carriers offer these as a specialized service.
Use
Insulated shipping containers are part of a comprehensive cold chain which controls and documents the temperature of a product through its entire distribution cycle. The containers may be used with a refrigerant or coolant such as:
block or cube ice, slurry ice
dry ice
Gel or ice packs (often formulated for specific temperature ranges)
Phase change materials (PCMs)
Some products (such as frozen meat) have sufficient thermal mass to contribute to temperature control, so no excess coolant is required.
A digital Temperature data logger or a time temperature indicator is often enclosed to monitor the temperature inside the container for its entire shipme |
https://en.wikipedia.org/wiki/Nerdfighteria | Nerdfighteria is a mainly online-based community subculture that originated on YouTube in 2007, when the VlogBrothers (John and Hank Green) rose to prominence in the YouTube community. As their popularity grew, so did coverage on Nerdfighteria, whose followers are individually known as Nerdfighters. The term was coined when John saw a copy of the arcade game Aero Fighters and misread the title as Nerd Fighters.
Hank Green describes it as "a community that sprung up around our videos, and basically we just get together and try to do awesome things and have a good time and fight against world suck". He defines "world suck" as "the amount of suck in the world". The Greens established The Foundation to Decrease World Suck, in order to raise funds and launch projects that would help a variety of causes. Nerdfighters believe in fighting world suck, promoting education, freedom of speech and the use of the intellect in modern society. Nerdfighters and the Green brothers have collaborated on many projects such as the charitable drive, Project for Awesome which launched in 2007, and VidCon, the convention focusing on topics surrounding the world of digital media. Nerdfighters have been documented by websites such as The Hollywood Reporter, and The Wall Street Journal, with a following estimated to be in the millions.
Community topics
Nerdfighteria is known for its online collaborative nature: forums, spinoff blogs, meet-ups, and charitable events have been spawned by its members. Instances of the community collaborating can be observed in the creation of college campus groups at universities such as the University of Maryland, Texas Christian University, the University of British Columbia, and the University of California, Los Angeles. Another Nerdfighter club was founded at Auburn University, in which the members have stated their desire to do charity work with The Humane Society and This Star Won't Go Out.
The Nerdfighter subculture was able to force the release of the |