https://en.wikipedia.org/wiki/Postvaccinal%20encephalitis
Postvaccinal encephalitis (PVE) is a postvaccinal complication associated with vaccination with vaccinia virus during the worldwide smallpox eradication campaign. With mortality ranging between 25–30% and lifelong consequences in 16–30% of cases, it was one of the most severe adverse events associated with this vaccination. The mechanism underlying the condition is unknown. Symptoms and signs PVE symptoms start to appear between the 8th and 14th day after vaccination. Among the first are fever, headache, confusion and nausea. As the disease progresses, lethargy, seizures, short- and long-term memory dysfunction, localized paralysis, hemiplegia, polyneuritis and convulsions may develop. In extreme cases PVE can lead to coma and death. Among the several forms of viral brain inflammation are rabies, polio, and two types transmitted by the mosquito: equine encephalitis in its various forms and St. Louis encephalitis. The latter two have appeared in epidemic form in the United States and are characterized by high fever, prolonged coma (which is responsible for the disease being known as a "sleeping sickness"), and convulsions sometimes followed by death. Encephalitis that results as a complication of another systemic infection is known as parainfectious encephalitis and can follow such diseases as measles (rubeola), influenza, and scarlet fever. The AIDS virus also infects the brain and produces dementia in a predictably progressive pattern. Although no specific treatment can destroy the virus once the disease has become established, many types of encephalitis can be prevented by immunization. Histology Inflammatory extra-adventitial lesions are found not only in the brain but in the spinal cord as well. Lesions might be uniform in the acute phase or disseminated in the subacute phase. Unlike in cases of encephalitis lethargica, the main damage is found in the white matter of the brain. The meninges are infiltrated with T cells, plasma cells and phagocytic cells. Polymorphonuclear cells were found only i
https://en.wikipedia.org/wiki/Activation%20energy%20asymptotics
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction. History The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study on premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community and since then it has been widely used to explain more complicated problems in combustion. Method overview In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law), $w \propto e^{-E_a/(RT)}$, where $E_a$ is the activation energy and $R$ is the universal gas constant. In general, the condition $E_a/(R T_b) \gg 1$ is satisfied, where $T_b$ is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting $T_u$ for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows: $\beta = \frac{E_a (T_b - T_u)}{R T_b^2}, \qquad \alpha = \frac{T_b - T_u}{T_b}.$ In addition, if we define a non-dimensional temperature $\theta$ such that it approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, $\theta = \frac{T - T_u}{T_b - T_u}$), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by $\frac{w(\theta)}{w(1)} = \exp\left[-\frac{\beta (1-\theta)}{1 - \alpha (1-\theta)}\right].$ Now in the limit of $\beta \to \infty$ (large activation energy) with $\alpha = O(1)$, the reaction rate is exponentially small, i.e., $O(e^{-\beta})$, and negligible everywhere, but non-negligible when $1 - \theta = O(1/\beta)$. In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where $1 - \theta \sim 1/\beta$. Thus, in solving the conservation equations, one identifies two different regimes, at leading order: Outer convective-diffusive zone I
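The reduction of the Arrhenius exponent to the Zel'dovich form can be checked directly; the following display is a short sketch using only the definitions above, with the temperature written as $T = T_u + (T_b - T_u)\theta = T_b\,[1 - \alpha(1-\theta)]$:

```latex
\frac{w(T)}{w(T_b)}
  = \exp\!\left(\frac{E_a}{R T_b} - \frac{E_a}{R T}\right)
  = \exp\!\left(-\frac{E_a}{R}\,\frac{T_b - T}{T\,T_b}\right)
  = \exp\!\left[-\frac{\beta\,(1-\theta)}{1 - \alpha\,(1-\theta)}\right],
\qquad \beta = \frac{E_a\,(T_b - T_u)}{R\,T_b^{2}}.
```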
https://en.wikipedia.org/wiki/Sandhar%20Technologies
Sandhar Technologies Limited, also known as Sandhar Group or Sandhar, is an Indian multinational manufacturer of automotive components. The company was founded in 1987 by Jayant Davar and currently has 3 subsidiaries and 10 joint ventures, with 41 factories all over the world. History 1987-90: The company was founded in 1987 by Jayant Davar and started supplying sheet metals for Hero Honda motorcycles; it received technical assistance from Honda Lock to manufacture lock systems for the motorbikes. 1990-95: It started producing mirrors and door handles for Honda cars thanks to technical assistance received from Honda Lock, and to date it remains the only supplier for Honda cars. 2000-05: The company started investing in zinc and aluminium die casting and in tools and dies. 2006-10: It entered the field of wheel assembly production, including handle bars, spools for seatbelt retractors and PDC products. 2011-16: The business expanded to manufacture plastic injection mouldings and seatbelts, and eventually set up a research unit in Gurugram. 2016-18: Through joint ventures with Whetron and Jinyoung, the company started to manufacture automotive electronic products. In December 2017 the company entered into a joint venture with Daeshin Machinery Ind. Co. Ltd. By the end of 2018, the company announced a joint venture with Kwangsung Corporation Ltd. of the Republic of Korea to manufacture sun visors, cargo screens, black-out tape, glove boxes and several other blow-moulded products. 2019: The company entered into joint ventures with Winnercom Co. Ltd. and Hanshin Corporation of the Republic of Korea. References Indian Engineering Services Indian companies established in 1987 Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
https://en.wikipedia.org/wiki/John%20Giannandrea
John Giannandrea is a Scottish software engineer and businessman. He co-founded Metaweb, led Google Search and artificial intelligence, was co-founder and CTO of the speech recognition company Tellme Networks, Chief Technologist of the web browser group at Netscape, senior engineer at General Magic, and is now a senior executive at Apple Inc. In December 2018, it was announced that Giannandrea had been appointed Senior Vice President of Machine Learning and Artificial Intelligence Strategy at Apple, the department rumored to have the most involvement with Apple’s electric car project. References Year of birth missing (living people) Living people Computer programmers Google employees Apple Inc. employees Apple Inc. executives Scottish expatriates in the United States Scottish people of Italian descent
https://en.wikipedia.org/wiki/Electronarcosis
Electronarcosis, also called electric stunning or electrostunning, is a profound stupor produced by passing an electric current through the brain. Electronarcosis may be used as a form of electrotherapy in treating certain mental illnesses in humans, or may be used to render livestock unconscious prior to slaughter. History In 1902, Stéphane Leduc discovered he could produce a narcotic-like state in animals, and eventually he tried it on himself, where he remained conscious but unable to move, in a dream-like state. In 1951, the American psychiatrist Hervey M. Cleckley published a paper on the results of treating 110 patients having anxiety neuroses with electronarcosis therapy. He argued that patients may benefit from electronarcosis after other treatments have failed. A 1974 paper discussed the advantage of using electronarcosis for short-term general anesthesia. Researchers achieved electronarcosis by applying 180 mA at a frequency of 500 Hz to the mastoid part of the temporal bone. Phases Electronarcosis results in a condition similar to an epileptic seizure, with the three phases called tonic, clonic, and recovery. During the tonic phase, the patient or animal collapses and becomes rigid. During the clonic phase, muscles relax and some movement occurs. During recovery, the patient or animal becomes aware. Livestock Electronarcosis is one of the methods used to render animals unconscious and unable to feel pain before slaughter. Electronarcosis may be followed immediately by electrocution or by bleeding. Modern electronarcosis is typically performed by applying 200 volts of high-frequency alternating current of about 1,500 Hz for 3 seconds to the animal's head. A high-frequency current is alleged not to be felt as an electric shock or to cause skeletal muscle contractions. A wet animal will pass a current of over an ampere. If other procedures do not follow electronarcosis, the animal will usually recover. Studies have been used to determine optimal p
https://en.wikipedia.org/wiki/Variety%20of%20finite%20semigroups
In mathematics, and more precisely in semigroup theory, a variety of finite semigroups is a class of semigroups having some nice algebraic properties. Those classes can be defined in two distinct ways, using either algebraic notions or topological notions. Varieties of finite monoids, varieties of finite ordered semigroups and varieties of finite ordered monoids are defined similarly. This notion is very similar to the general notion of variety in universal algebra. Definition Two equivalent definitions are now given. Algebraic definition A variety V of finite (ordered) semigroups is a class of finite (ordered) semigroups that: is closed under division; is closed under taking finite Cartesian products. The first condition is equivalent to stating that V is closed under taking subsemigroups and under taking quotients. The second property implies that the empty product—that is, the trivial semigroup of one element—belongs to each variety. Hence a variety is necessarily non-empty. A variety of finite (ordered) monoids is a variety of finite (ordered) semigroups whose elements are monoids. That is, it is a class of (ordered) monoids satisfying the two conditions stated above. Topological definition In order to give the topological definition of a variety of finite semigroups, some other definitions related to profinite words are needed. Let A be an arbitrary finite alphabet. Let A+ be its free semigroup. Then let $\widehat{A^+}$ be the set of profinite words over A. Given a semigroup morphism $\varphi: A^+ \to S$, let $\widehat{\varphi}: \widehat{A^+} \to S$ be the unique continuous extension of $\varphi$ to $\widehat{A^+}$. A profinite identity is a pair u = v of profinite words. A semigroup S is said to satisfy the profinite identity u = v if, for each semigroup morphism $\varphi: A^+ \to S$, the equality $\widehat{\varphi}(u) = \widehat{\varphi}(v)$ holds. A variety of finite semigroups is the class of finite semigroups satisfying a set of profinite identities P. A variety of finite monoids is defined like a variety of finite semigroups, with the difference that one should consider monoid morphisms instead
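As a concrete illustration (a standard example, not spelled out in this excerpt): the class $\mathbf{A}$ of finite aperiodic semigroups is a variety in the topological sense, defined by a single profinite identity involving the idempotent power $x^{\omega}$:

```latex
\mathbf{A} \;=\; \left\{\, S \ \text{finite} \;:\; S \models x^{\omega+1} = x^{\omega} \,\right\}.
```

Here $x^{\omega}$ denotes the unique idempotent power of $x$; it is a profinite word, so $x^{\omega+1} = x^{\omega}$ is a profinite identity in the sense defined above.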
https://en.wikipedia.org/wiki/Marinactinospora
Marinactinospora is a genus in the phylum Actinomycetota (Bacteria). It contains a single species, Marinactinospora thermotolerans. The species has a high tolerance for environmental temperatures, up to 55°C. Etymology The name Marinactinospora derives from the Latin adjective marinus, of or belonging to the sea; Greek noun aktis, aktinos (ἀκτίς, ἀκτῖνος), a beam; Greek noun spora (σπορά), a seed, and in biology a spore; Neo-Latin feminine gender noun Marinactinospora, marine and spored ray, referring to a marine spore-forming actinomycete. The specific name derives from the Greek noun thermē (θέρμη), heat; Latin participle adjective tolerans, tolerating; Neo-Latin participle adjective thermotolerans, able to tolerate a high temperature. See also Bacterial taxonomy Microbiology References Actinomycetota Bacteria genera Marine biology
https://en.wikipedia.org/wiki/Visopsys
Visopsys (Visual Operating System) is an operating system written by Andy McLaughlin. Development of the operating system began in 1997. The operating system is licensed under the GNU GPL, with the headers and libraries under the less restrictive LGPL license. It runs on the 32-bit IA-32 architecture. It features a multitasking kernel, supports asynchronous I/O and the FAT line of file systems, and requires a Pentium processor. History The development of Visopsys began in 1997, written by Andy McLaughlin. The first public release of the operating system was on 2 March 2001, with version 0.1. In this release, Visopsys was a 32-bit operating system supporting preemptive multitasking and virtual memory. System Overview Visopsys uses a monolithic kernel, written in the C programming language, with elements of assembly language for certain interactions with the hardware. The operating system supports a graphical user interface, with a small C library. References External links Homepage Operating systems
https://en.wikipedia.org/wiki/IBM%204768
The IBM 4768 PCIe Cryptographic Coprocessor is a hardware security module (HSM) that includes a secure cryptoprocessor implemented on a high-security, tamper-resistant, programmable PCIe board. Specialized cryptographic electronics, a microprocessor, memory, and a random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed. Sensitive key material is never exposed outside the physical secure boundary in a clear format. The IBM 4768 is validated to FIPS PUB 140-2 Level 4, the highest level of certification achievable for commercial cryptographic devices. It has achieved PCI-HSM certification. The IBM 4768 data sheet describes the coprocessor in detail. IBM supplies two cryptographic-system implementations: The PKCS#11 implementation creates a high-security solution for application programs developed for this industry-standard API. The IBM Common Cryptographic Architecture (CCA) implementation provides many functions of special interest in the finance industry, extensive support for distributed key management, and a base on which custom processing and cryptographic functions can be added. Applications may include financial PIN transactions, bank-to-clearing-house transactions, EMV transactions for integrated circuit (chip) based credit cards, and general-purpose cryptographic applications using symmetric key algorithms, hashing algorithms, and public key algorithms. The operational keys (symmetric or RSA private) are generated in the coprocessor and are then saved either in a keystore file or in application memory, encrypted under the master key of that coprocessor. Any coprocessor with an identical master key can use those keys. Performance benefits include the incorporation of elliptic curve cryptography (ECC) and format-preserving encryption (FPE) in the hardware. IBM supports the 4768 on certain IBM Z mainframes as Crypto Express6S (CEX6S) - feature code 0893.
https://en.wikipedia.org/wiki/DNVGL-ST-E271
The DNVGL-ST-E271 (formerly DNV 2.7-1) is a standard issued by DNV (now DNV GL) regarding offshore container specifications. DNV 2.7-1 was initially issued in 1989, and the most recent version, "DNV Standard for Certification No. 2.7-1 Offshore Containers", was released in June 2013. It is a set of transport-related requirements for offshore containers, covering design, manufacture, testing, certification, marking and periodical inspection. The purpose is to ensure that containers are safe and suitable for repeated use. Prior to 1989 there was no specific regulation for offshore equipment handling and lifting, although offshore container handling is significantly more dangerous than onshore handling. For offshore containers, the rate of wear and tear is higher than in most other environments. Containers are required to be constructed to withstand the forces encountered in offshore operations and to not suffer complete failure even if subjected to more extreme loads. DNV 2.7-1 is fully compliant with EN 12079 part 1 (offshore containers) and part 2 (lifting sets) and differs with regard to part 3 (periodical inspection). References Offshore engineering
https://en.wikipedia.org/wiki/Power%20plant%20engineering
Power plant engineering, abbreviated as TPTL, is a branch of the field of energy engineering, and is defined as the engineering and technology required for the production of an electric power station. The discipline is focused on power generation for industry and community, not just for household electricity production, and uses the theoretical basis of both mechanical and electrical engineering. The engineering aspects of power generation have developed with technology and are becoming more and more complicated. The introduction of nuclear technology and other technological advances have made it possible for power to be created in more ways and on a larger scale than was previously possible. Different types of engineers are assigned to the design, construction, and operation of new power plants depending on the type of system being built, such as fossil-fuel power stations, nuclear power plants (NPP), hydropower plants, or solar power plants. History Power plant engineering got its start in the 1800s when small systems were used by individual factories to provide electrical power. Originally the only source of power came from DC, or direct current, systems. While this was suitable for business, electricity was not accessible for most of the public. During these times, the coal-powered steam engine was costly to run and there was no way for the power to be transmitted over distances. Hydroelectricity was one of the most utilized forms of power generation, as water mills could be used to create power to transmit to small towns. It wasn't until the introduction of AC, or alternating current, power systems that power plants as we know them today could be created. AC systems allowed power to be transmitted over larger distances than DC systems did, and thus large power stations were able to be created. One of the progenitors of long-distance power transmission was the Lauffen to Frankfurt pow
https://en.wikipedia.org/wiki/Azure%20Sphere
Azure Sphere is an application platform with integrated communications and security features developed and managed by Microsoft for internet-connected devices. The platform consists of integrated hardware built around a secure silicon chip; the Azure Sphere OS, a high-end operating system based on Linux; and the Azure Sphere Security Service, a cloud-based security service. Azure Sphere security was developed based on Microsoft Research's position on the seven required characteristics of highly secure devices. Azure Sphere OS The Azure Sphere OS is a custom Linux-based microcontroller operating system created by Microsoft to run on an Azure Sphere-certified chip and to connect to the Azure Sphere Security Service. The Azure Sphere OS provides a platform for Internet of things application development, including both high-level applications and real-time capable applications. It is the first operating system running a Linux kernel that Microsoft has publicly released and the second Unix-like operating system that the company has developed for external (public) users, the other being Xenix. Azure Sphere Security Service The Azure Sphere Security Service, sometimes referred to as AS3, is a cloud-based service that enables maintenance, updates, and control for Azure Sphere-certified chips. The Azure Sphere Security Service establishes a secure connection between devices and the internet and/or cloud services and ensures secure boot. The primary purpose of contact between an Azure Sphere device and the Azure Sphere Security Service is to authenticate the device identity, ensure the integrity and trust of the system software, and certify that the device is running a trusted code base. The service also provides the secure channel used by Microsoft to automatically download and install Azure Sphere OS updates and customer application updates to deployed devices. Azure Sphere chips and hardware Azure Sphere-certified chips and hard
https://en.wikipedia.org/wiki/United%20States%20Coast%20Guard%20Unit%20387%20Cryptanalysis%20Unit
The United States Coast Guard Unit 387 became the official cryptanalytic unit of the Coast Guard in 1931, collecting communications intelligence for the Coast Guard, the U.S. Department of Defense, and the Federal Bureau of Investigation (FBI). Prior to becoming official, the Unit worked under the U.S. Treasury Department intercepting communications during Prohibition. The Unit was briefly absorbed into the U.S. Navy in 1941 during World War II (WWII) before returning to be a Coast Guard unit again following the war. The Unit contributed to significant success in deciphering rum runner codes during Prohibition and later Axis agent codes during WWII, leading to the breaking of several code systems including the Green and Red Enigma machines. The Rise of Unit 387 The U.S. Coast Guard (USCG) Unit 387 was established in the 1920s as a small embedded unit of the USCG. It did not become an officially named unit until 1931, when it was named USCG Unit 387 by Elizebeth Friedman. The United States government established this code-communications unit to intercept ship communications and track down violators of the Prohibition laws, because "rum runners" were increasingly using radio and code systems for communication. There was an increasing need for code-breaking and encoding capabilities to counter the rum runners, as they were sophisticated criminals attempting to intercept government communications as well. By 1927, the USCG had intercepted hundreds of messages but lacked the resources and personnel needed for codebreaking. Therefore, the U.S. Treasury Department appointed William and Elizebeth Friedman, a couple famous for cryptology, to create new code systems for USCG operations against the Prohibition violators and to decrypt the accumulating messages. The Friedmans were famous cryptographers with expansive careers in Washington, DC, for the U.S. Army, Navy, and the Treasury and Justice Departments throughout WWI and WWII. In 1927, the rum runners commonly used two coding systems, s
https://en.wikipedia.org/wiki/Bing%E2%80%93Borsuk%20conjecture
In mathematics, the Bing–Borsuk conjecture states that every $n$-dimensional homogeneous absolute neighborhood retract space is a topological manifold. The conjecture has been proved for dimensions 1 and 2, and it is known that the 3-dimensional version of the conjecture implies the Poincaré conjecture. Definitions A topological space $M$ is homogeneous if, for any two points $m_1, m_2 \in M$, there is a homeomorphism of $M$ which takes $m_1$ to $m_2$. A metric space $M$ is an absolute neighborhood retract (ANR) if, for every closed embedding $f: M \to N$ (where $N$ is a metric space), there exists an open neighbourhood $U$ of the image $f(M)$ which retracts to $f(M)$. There is an alternate statement of the Bing–Borsuk conjecture: suppose $M$ is embedded in $\mathbb{R}^n$ for some $n$ and this embedding can be extended to an embedding of $M \times (-\epsilon, \epsilon)$. If $M$ has a mapping cylinder neighbourhood $N = C_\varphi$ of some map $\varphi: \partial N \to M$ with mapping cylinder projection $\pi: N \to M$, then $\pi$ is an approximate fibration. History The conjecture was first made in a paper by R. H. Bing and Karol Borsuk in 1965, who proved it for $n = 1$ and 2. Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true. The Busemann conjecture states that every Busemann $G$-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4. References Topology Conjectures Unsolved problems in mathematics Manifolds
https://en.wikipedia.org/wiki/Busemann%20G-space
In mathematics, a Busemann G-space is a type of metric space first described by Herbert Busemann in 1942. If $(X, d)$ is a metric space such that (1) for every two distinct $x, y \in X$ there exists $z \in X \setminus \{x, y\}$ such that $d(x, z) + d(z, y) = d(x, y)$ (Menger convexity), (2) every $d$-bounded set of infinite cardinality possesses accumulation points, (3) for every $w \in X$ there exists $\rho_w > 0$ such that for any distinct points $x, y$ in the ball $B(w, \rho_w)$ there exists $z$ such that $d(x, y) + d(y, z) = d(x, z)$ (geodesics are locally extendable), and (4) for any distinct points $x, y$, if $z_1, z_2$ are such that $d(x, y) + d(y, z_i) = d(x, z_i)$ for $i = 1, 2$ and $d(y, z_1) = d(y, z_2)$, then $z_1 = z_2$ (geodesic extensions are unique), then X is said to be a Busemann G-space. Every Busemann G-space is a homogeneous space. The Busemann conjecture states that every Busemann G-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4. References Metric spaces Topology Manifolds
https://en.wikipedia.org/wiki/Silurian%20hypothesis
The Silurian hypothesis is a thought experiment which assesses modern science's ability to detect evidence of a prior advanced civilization, perhaps several million years ago. The most probable traces of such a civilization would be anomalies in carbon, radioactive elements or temperature variation. The name "Silurian" derives from the eponymous sapient species from the BBC science fiction series Doctor Who, who in the series established an advanced civilization prior to humanity. Astrophysicists Adam Frank and Gavin Schmidt proposed the "Silurian hypothesis" in a 2018 paper, exploring the possibility of detecting an advanced civilization before humans in the geological record. They argued that there has been sufficient fossil carbon to fuel an industrial civilization since the Carboniferous Period (~350 million years ago). However, finding direct evidence, such as technological artifacts, is unlikely due to the rarity of fossilization and the small fraction of Earth's surface that is exposed. Instead, researchers might find indirect evidence, such as climate changes, anomalies in sediment, or traces of nuclear waste. The hypothesis also speculates that artifacts from past civilizations could be found on the Moon and Mars, where erosion and tectonic activity are less likely to erase evidence. The concept of pre-human civilizations has been explored in popular culture, including novels, television shows, and short stories. Explanation The idea was presented in a 2018 paper by Adam Frank, an astrophysicist at the University of Rochester, and Gavin Schmidt, director of the Goddard Institute for Space Studies. Frank and Schmidt imagined an advanced civilization before humans and pondered whether it would "be possible to detect an industrial civilization in the geological record". They argue as early as the Carboniferous period (~350 million years ago) "there has been sufficient fossil carbon to fuel an industrial civilization comparable with our own". However, they also wrote: "While we strongly doubt that any
https://en.wikipedia.org/wiki/U-Net
U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg. The network is based on a fully convolutional neural network whose architecture was modified and extended to work with fewer training images and to yield more precise segmentation. Segmentation of a 512 × 512 image takes less than a second on a modern GPU. The U-Net architecture has also been employed in diffusion models for iterative image denoising. This technology underlies many modern image generation models, such as DALL-E and Midjourney. Description The U-Net architecture stems from the so-called "fully convolutional network" proposed by Long, Shelhamer, and Darrell in 2014. The main idea is to supplement a usual contracting network by successive layers, where pooling operations are replaced by upsampling operators. Hence these layers increase the resolution of the output. A successive convolutional layer can then learn to assemble a precise output based on this information. One important modification in U-Net is that there are a large number of feature channels in the upsampling part, which allow the network to propagate context information to higher-resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting part, and yields a u-shaped architecture. The network only uses the valid part of each convolution without any fully connected layers. To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. History U-Net was created by Olaf Ronneberger, Philipp Fischer, and Thomas Brox in 2015 and reported in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation". It is an improvement and development of FCN: Evan Shelhamer, Jonathan Long, Trevor Darr
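The contracting/expansive structure with skip connections described above can be made concrete with a short sketch. The following is a minimal toy U-Net in PyTorch, with hypothetical channel sizes and same-padding convolutions (unlike the unpadded "valid" convolutions of the original paper); it is an illustration, not the authors' reference implementation:

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by ReLU. padding=1 keeps the
    # spatial size (the original paper used unpadded convolutions instead).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.down1 = double_conv(in_ch, 64)
        self.down2 = double_conv(64, 128)
        self.bottom = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)   # 128 skip channels + 128 upsampled
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)    # 64 skip channels + 64 upsampled
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        c1 = self.down1(x)                                    # contracting path
        c2 = self.down2(self.pool(c1))
        b = self.bottom(self.pool(c2))
        u2 = self.dec2(torch.cat([self.up2(b), c2], dim=1))   # skip connection
        u1 = self.dec1(torch.cat([self.up1(u2), c1], dim=1))  # skip connection
        return self.head(u1)                                  # per-pixel scores

logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```

The concatenations in the expansive path are the feature-channel skip connections that propagate context information to the higher-resolution layers, which is what gives the architecture its U shape.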
https://en.wikipedia.org/wiki/Kaleshwaram%20Lift%20Irrigation%20Project
The Kaleshwaram Lift Irrigation Project (KLIP) is a multi-purpose irrigation project on the Godavari River in Kaleshwaram, Bhupalpally, Telangana, India. Currently the world's largest multi-stage lift irrigation project, its farthest upstream influence is at the confluence of the Pranahita and Godavari rivers. The Pranahita River is itself a confluence of various smaller tributaries including the Wardha, Painganga, and Wainganga rivers, which combine to form the seventh-largest drainage basin on the subcontinent, with an estimated annual discharge of more than 280 TMC. It remains untapped as its course is principally through dense forests and other ecologically sensitive zones such as wildlife sanctuaries. The Kaleshwaram Lift Irrigation Project is divided into 7 links and 28 packages spanning 13 districts and utilizing an extensive canal network. The project aims to produce a total of 240 TMC (195 from Medigadda Barrage, 20 from the Sripada Yellampalli project and 25 from groundwater), of which 169 has been allocated for irrigation, 30 for Hyderabad municipal water, 16 for miscellaneous industrial uses and 10 for drinking water in nearby villages, with the remainder being estimated evaporation loss. The project aims at increasing the total culturable command area (the sustainable area which can be irrigated after accounting for both upstream and downstream factors) across all 13 districts, in addition to stabilizing the existing CCA. On 21 June 2019, the project was opened by Telangana Governor E. S. L. Narasimhan and Chief Minister K. Chandrashekar Rao. The National Green Tribunal declared that the scheme was constructed without following the statutory provisions with regard to environmental aspects. Four major pumping facilities manage the project's outflow; the largest, at Ramadugu (Medaram, Annaram and Sundilla being the others), is also likely to be the largest in Asia once consistent measurements are available, requiring seven pum
https://en.wikipedia.org/wiki/Archaeal%20initiation%20factors
Archaeal initiation factors are proteins that are used during the translation step of protein synthesis in archaea. The principal functions these proteins perform include ribosome RNA/mRNA recognition, delivery of the initiator Met-tRNAiMet (methionine-bound tRNAi) to the 40S ribosome, and proofreading of the initiation complex. Conservation of archaeal initiation factors Of the three domains of life (archaea, eukaryotes, and bacteria), the number of archaeal TIFs is somewhere between that of eukaryotes and bacteria; eukaryotes have the largest number of TIFs, and bacteria, having streamlined the process, have only three TIFs. Not only are archaeal TIF numbers between those of bacteria and eukaryotes, but archaeal initiation factors also have traits of both eukaryotic and prokaryotic initiation factors. Two core TIFs, IF1/IF1A and IF2/IF5B, are conserved across the three domains of life. There is also a semi-universal TIF, called SUI1, found in all archaea and eukaryotes but only in certain bacterial species (YciH). In archaea and eukaryotes, this TIF helps with correct identification of the initiation codon, while its function in bacteria is unknown. Between eukaryotes and archaea, a/eIF2 (trimer) and aIF6 in archaea are conserved in eukaryotes as the eIF2 (trimer) and eIF6 TIFs. List of initiation factors aIF1: SUI1 (eIF1) homolog. aIF1A: IF1/eIF1A homolog. Plays a role in occupying the ribosomal A site, helping the unambiguous placement of tRNAi in the P site of the large ribosomal subunit. aIF2: Trimeric, eIF2 homolog. Binds to the 40S small subunit of the ribosome to help guide the start of translation of mRNA into proteins. Can substitute for eIF2. aIF5A: EF-P/eIF5A homolog. Contains hypusine, just like the eukaryotic one. Actually an elongation factor. aIF5B: IF2/eIF5B homolog. Joins the ribosomal subunits (small and large) to form the complete single (monomeric) mRNA-bound ribosome unit in the late stages of initiation. aIF6: eIF6 homolog. Keeps
https://en.wikipedia.org/wiki/Arlo%20Technologies
Arlo Technologies is an American company that makes wireless surveillance cameras. Prior to an initial public offering (IPO) on the New York Stock Exchange in August 2018, Arlo was a brand of such products by Netgear, which retained majority control after the IPO. According to the company, it has shipped 21.6 million devices, has 5.82 million registered accounts, and has 877 thousand paid accounts, as of January 2022. History On February 6, 2018, Netgear made the announcement that its board of directors had unanimously approved the separation of its Arlo business from Netgear. During the second quarter of 2018, Netgear's Arlo unit became a holding of Arlo Technologies, Inc. Netgear issued less than 20% of the Arlo common stock in the IPO, allowing it to retain majority control. The CEO of Arlo is Matthew McRae. McRae joined Netgear in October 2017 when he was hired as senior vice president of strategy. Products Arlo makes products such as the Arlo Security Camera, as well as portable and baby monitoring cameras. Arlo cameras are designed to save energy by use of a low-power standby mode. Manufacturing Arlo manufacturing is outsourced to Foxconn and Pegatron. References External links 2018 initial public offerings Companies listed on the New York Stock Exchange Companies based in San Jose, California American companies established in 2018 Home automation companies Netgear Arlo
https://en.wikipedia.org/wiki/Nilsemigroup
In mathematics, and more precisely in semigroup theory, a nilsemigroup or nilpotent semigroup is a semigroup whose every element is nilpotent. Definitions Formally, a semigroup S is a nilsemigroup if S contains a zero element 0 and, for each element $a \in S$, there exists a positive integer k such that $a^k = 0$. Finite nilsemigroups Equivalent definitions exist for finite semigroups. A finite semigroup S is nilpotent if, equivalently: $x_1 x_2 \cdots x_n = y_1 y_2 \cdots y_n$ for every $x_1, \dots, x_n, y_1, \dots, y_n \in S$, where $n$ is the cardinality of S; the zero is the only idempotent of S. Examples The trivial semigroup of a single element is trivially a nilsemigroup. The set of strictly upper triangular $n \times n$ matrices, with matrix multiplication, is nilpotent. Let $I = (a, n]$ be a bounded interval of positive real numbers. For x, y belonging to I, define $x \star y$ as $\min(x + y, n)$. We now show that $(I, \star)$ is a nilsemigroup whose zero is n. For each natural number k, the k-fold product $kx$ is equal to $\min(kx, n)$. For k at least equal to $n/x$, $kx$ equals n. This example generalizes to any bounded interval of an Archimedean ordered semigroup. Properties A non-trivial nilsemigroup does not contain an identity element. It follows that the only nilpotent monoid is the trivial monoid. The class of nilsemigroups is: closed under taking subsemigroups, closed under taking quotients, closed under finite products, but is not closed under arbitrary direct products. Indeed, take the direct product $S = \prod_{n \ge 1} (I_n, \star)$ of the interval semigroups $I_n = (0, n]$ defined as above. The semigroup S is a direct product of nilsemigroups, however it contains no nilpotent element: no single power k sends every coordinate of an element such as $(1, 1, 1, \dots)$ to its coordinate-wise zero. It follows that the class of nilsemigroups is not a variety of universal algebra. However, the set of finite nilsemigroups is a variety of finite semigroups. The variety of finite nilsemigroups is defined by the profinite equalities $x^\omega y = x^\omega = y x^\omega$. References Semigroup theory
https://en.wikipedia.org/wiki/Synthetic%20crystalline%20bovine%20insulin
In 1965, Chinese scientists first synthesized crystalline bovine insulin, which was the first functional crystalline protein to be fully synthesized in the world. Research on synthesizing bovine insulin started in 1958. Members of the research group were from the Chemistry Department of Peking University, the Shanghai Institute of Biochemistry, CAS, and the Shanghai Institute of Organic Chemistry, CAS. Insulin is a protein (peptide) consisting of two chains, A and B. Chain A consists of 21 amino acid residues while chain B consists of 30 amino acid residues. The main function of insulin is to regulate the concentration of sugar in the blood. Type 1 diabetes is caused by dysfunction in the synthesis or secretion of insulin, and it can be treated by injecting insulin. In 1979, Wang Yinglai, the project's lead scientist, nominated Niu Jingyi, a team member who had made significant contributions, for the Nobel Prize in Chemistry, but the nomination was unsuccessful. See also Helmut Zahn Panayotis Katsoyannis Cell-free protein synthesis References Biochemistry 1965 in China Chinese inventions
https://en.wikipedia.org/wiki/Legendre%20moment
In mathematics, Legendre moments are a type of image moment and are achieved by using the Legendre polynomial. Legendre moments are used in areas of image processing including: pattern and object recognition, image indexing, line fitting, feature extraction, edge detection, and texture analysis. Legendre moments have been studied as a means to reduce image moment calculation complexity by limiting the amount of information redundancy through approximation. Legendre moments With order of m + n, and object intensity function f(x,y): $\lambda_{mn} = \frac{(2m+1)(2n+1)}{4} \int_{-1}^{1} \int_{-1}^{1} P_m(x)\, P_n(y)\, f(x,y)\, dx\, dy$, where m,n = 1, 2, 3, ..., with the nth-order Legendre polynomials being: $P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n$, which can also be written: $P_n(x) = \sum_{k=0}^{D(n)} (-1)^k \frac{(2n-2k)!}{2^n\, k!\, (n-k)!\, (n-2k)!}\, x^{n-2k}$, where D(n) = floor(n/2). The set of Legendre polynomials {Pn(x)} form an orthogonal set on the interval [−1,1]: $\int_{-1}^{1} P_m(x)\, P_n(x)\, dx = \frac{2}{2n+1}\, \delta_{mn}$. A recurrence relation can be used to compute the Legendre polynomial: $(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x)$. f(x,y) can be written as an infinite series expansion in terms of Legendre polynomials [−1 ≤ x,y ≤ 1]: $f(x,y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \lambda_{mn}\, P_m(x)\, P_n(y)$. See also Image moment Legendre polynomial Zernike polynomials References Computer vision Orthogonal polynomials Polynomials Special hypergeometric functions
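A short numerical sketch of the moment formula above, using NumPy (this assumes the image is mapped onto the square [−1, 1] × [−1, 1] with m indexing x and n indexing y, and approximates the double integral by a Riemann sum; the function names are illustrative, not from any particular library):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_p(n, t):
    # Evaluate the Legendre polynomial P_n on the grid t.
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legval(t, c)

def legendre_moment(img, m, n):
    rows, cols = img.shape
    x = np.linspace(-1.0, 1.0, cols)       # column coordinate in [-1, 1]
    y = np.linspace(-1.0, 1.0, rows)       # row coordinate in [-1, 1]
    dx, dy = 2.0 / cols, 2.0 / rows        # cell sizes for the Riemann sum
    norm = (2 * m + 1) * (2 * n + 1) / 4.0
    kernel = np.outer(legendre_p(n, y), legendre_p(m, x))  # separable P_n(y) P_m(x)
    return norm * np.sum(kernel * img) * dx * dy

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # toy square "object"
print(legendre_moment(img, 0, 0))                    # zeroth-order moment
```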
https://en.wikipedia.org/wiki/Prime%20omega%20function
In number theory, the prime omega functions $\omega(n)$ and $\Omega(n)$ count the number of prime factors of a natural number $n$. Thereby $\omega(n)$ (little omega) counts each distinct prime factor, whereas the related function $\Omega(n)$ (big omega) counts the total number of prime factors of $n$, honoring their multiplicity (see arithmetic function). That is, if we have a prime factorization of $n$ of the form $n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k}$ for distinct primes $p_i$ ($1 \le i \le k$), then the respective prime omega functions are given by $\omega(n) = k$ and $\Omega(n) = \alpha_1 + \alpha_2 + \cdots + \alpha_k$. These prime factor counting functions have many important number theoretic relations. Properties and relations The function $\omega(n)$ is additive and $\Omega(n)$ is completely additive. If a prime $p$ divides $n$ at least once, we count it only once, e.g. $\omega(12) = \omega(2^2 \cdot 3) = 2$. If $p$ divides $n$ $\alpha$ times, then we count the exponents, e.g. $\Omega(12) = \Omega(2^2 \cdot 3) = 3$. As usual, $p^\alpha \parallel n$ means $p^\alpha$ is the exact power of $p$ dividing $n$. If $\Omega(n) = \omega(n)$ then $n$ is squarefree and related to the Möbius function by $\mu(n) = (-1)^{\omega(n)} = (-1)^{\Omega(n)}$. If $\Omega(n) = 1$ then $n$ is a prime number. It is known that the average order of the divisor function satisfies $2^{\omega(n)} \le d(n) \le 2^{\Omega(n)}$. Like many arithmetic functions there is no explicit formula for $\omega(n)$ or $\Omega(n)$ but there are approximations. An asymptotic series for the average order of $\omega(n)$ is given by $\frac{1}{x} \sum_{n \le x} \omega(n) \sim \log\log x + B_1 + \cdots$, where $B_1 \approx 0.2614972128$ is the Mertens constant and the higher-order terms involve the Stieltjes constants. The function $\omega(n)$ is related to divisor sums over the Möbius function and the divisor function. The characteristic function of the primes can be expressed by a convolution with the Möbius function: $\chi_{\mathbb{P}}(n) = \sum_{d \mid n} \mu(d)\, \omega\!\left(\frac{n}{d}\right)$. A partition-related exact identity for $\omega(n)$ exists, expressed in terms of the partition function, the Möbius function, and a triangular sequence expanded in terms of the infinite q-Pochhammer symbol and the restricted partition functions which respectively denote the number of partitions of $n$ into an odd (even) number of distinct parts. Continuation to the complex plane A continuation of $\omega(n)$ has been found, though it is not analytic everywhere. Note that a normalized version of the function is used. Average order and summatory functions An average order of both $\omega(n)$ and $\Omega(n)$ is $\log\log n$.
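The two counting conventions are easy to check computationally. The following is a minimal sketch computing both functions by trial division (adequate for small n, not for cryptographic-size integers):

```python
def prime_omegas(n: int) -> tuple[int, int]:
    """Return (omega(n), Omega(n)) for n >= 2."""
    little, big = 0, 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            little += 1              # count the distinct prime d once
            while n % d == 0:
                big += 1             # count d with multiplicity
                n //= d
        d += 1
    if n > 1:                        # leftover prime factor
        little += 1
        big += 1
    return little, big

assert prime_omegas(12) == (2, 3)    # 12 = 2^2 * 3
assert prime_omegas(27) == (1, 3)    # 27 = 3^3
```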
https://en.wikipedia.org/wiki/Green%20strength
Green strength, or handling strength, can be defined as the strength of a material as it is processed toward its final ultimate tensile strength. This strength is usually considerably lower than the final ultimate strength of the material. The term green strength is usually referenced when discussing non-metallic materials such as adhesives and elastomers (such as rubber). Recently, it has also been referenced in metallurgy applications such as powder metallurgy. Adhesives A joint made through the use of an adhesive can be referred to as an adhesive joint or bond. The green strength of an adhesive is the early development of its bond strength. It indicates "that the adhesive bond is strong enough to be handled a short time after the adherents are mated but much before full cure is obtained." Usually, this strength is significantly lower than the final cured strength. Most adhesives typically have an initial green strength and a final ultimate tensile strength listed for their application. For household adhesives, this data is usually reflected on the packaging. The best example of this is seen in typical epoxies from a local hardware store. During curing, the epoxy will enter an initial curing phase, also called the "green phase", when it begins to gel. At that point, the epoxy is no longer workable and will move from being tacky to a firm rubber-like texture. While the epoxy is only partially cured at this point, it has formed a lower green strength. Normally, this process occurs within 30 minutes to 1 hour. At this time, the part in question can be handled, but cannot bear large loads or stress. It typically takes up to 24 hours for a standard epoxy to cure to its final and complete strength. Temperature is an important factor in the time it takes for an adhesive to form its green strength. While this can vary from adhesive to adhesive, generally speaking, heat can speed up the process of forming the green strength and shorten the overall curing time. Time
https://en.wikipedia.org/wiki/Line%20detection
In image processing, line detection is an algorithm that takes a collection of n edge points and finds all the lines on which these edge points lie. The most popular line detectors are the Hough transform and convolution-based techniques. Hough transform The Hough transform can be used to detect lines; its output is a parametric description of the lines in an image, for example ρ = r cos(θ) + c sin(θ). If there is a line in a row-and-column based image space, it can be defined by ρ, the distance from the origin to the line along a perpendicular to the line, and θ, the angle of the perpendicular projection from the origin to the line, measured in degrees clockwise from the positive row axis. Therefore, a line in the image corresponds to a point in the Hough space. The Hough space for lines has therefore these two dimensions, θ and ρ, and a line is represented by a single point corresponding to a unique set of these parameters. The Hough transform can then be implemented by choosing a set of values of ρ and θ to use. For each pixel (r, c) in the image, compute r cos(θ) + c sin(θ) for each value of θ, and place the result in the appropriate position in the (ρ, θ) array. At the end, the entries of the (ρ, θ) array with the highest values will correspond to the strongest lines in the image. Convolution-based technique In a convolution-based technique, the line detector operator consists of convolution masks tuned to detect the presence of lines of a particular width n and orientation θ. Here are the four convolution masks to detect horizontal, vertical, oblique (+45 degrees), and oblique (−45 degrees) lines in an image: (a) horizontal mask (R1), (b) vertical mask (R3), (c) oblique +45 degrees mask (R2), and (d) oblique −45 degrees mask (R4). In practice, the masks are run over the image and the responses are combined by the following equation: R(x, y) = max(|R1(x, y)|, |R2(x, y)|, |R3(x, y)|, |R4(x, y)|). If R(x, y) > T, then a line discontinuity is declared at (x, y). As can be seen below, if mask is over
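The voting procedure described above can be written in a few lines. The following is a sketch of the accumulator-array implementation in NumPy, using the ρ = r cos(θ) + c sin(θ) parameterization from the text (the toy input and the 1-degree θ grid are illustrative choices):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes for rho = r*cos(theta) + c*sin(theta)."""
    rows, cols = edges.shape
    diag = int(np.ceil(np.hypot(rows, cols)))        # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # theta = 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for r, c in zip(*np.nonzero(edges)):             # each edge pixel votes
        rho = np.round(r * np.cos(thetas) + c * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1     # shift rho to be >= 0
    return acc, thetas, diag

edges = np.zeros((50, 50), dtype=np.uint8)
np.fill_diagonal(edges, 1)                           # a 45-degree line of pixels
acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rho_idx - diag, np.degrees(thetas[theta_idx]))  # -> 0 135.0
```

The peak of the accumulator array is the (ρ, θ) pair of the strongest line, exactly as the text describes.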
https://en.wikipedia.org/wiki/Antonio%20Casilli
Antonio A. Casilli (born 1972) is a Professor of Sociology at Télécom Paris, the school of telecommunications engineering of the Polytechnic Institute of Paris, and an Associate Researcher at the School for Advanced Studies in the Social Sciences. His research focuses on computer-mediated communication, labour, and fundamental rights. He has been a regular commentator at La Grande Table and Place de la Toile on France Culture. Research Domains After his first work on the impact of industrial technologies on the imagery of the body, under the influence of Donna Haraway and Antonio Negri, he studied communicational violence and digital cultures. In Les Liaisons numériques, he analyses the uses of information and communications technologies and the impact of practices of self-representation (avatars, photos, and autobiographical accounts) on social structures, communication codes, social capital and privacy. His research also explores the relationship between information technologies and health. His methods combine participant observation with advanced tools for social research such as multi-agent systems and social network analysis. Privacy on social media Antonio Casilli studies the concept of privacy, criticising the hypothesis that privacy is ending as a consequence of the uses of social media. Instead of arguing that privacy is disappearing, he observes a change of perception in society. The privacy of an individual is characterised by the construction and the management of online social capital. Casilli proposes a new representation model of privacy, in which privacy is understood as a negotiable entity: not the product of a purely individual decision, but of a permanent negotiation. In this model, social media users adapt the publication of personal information to their social circles and to the feedback given by their contacts. The private or public character of information is determined not a priori but as a function of collective variables. Digital labor Cas
https://en.wikipedia.org/wiki/Topcoder%20Open
Topcoder Open (TCO) was an annual design, software development, data science and competitive programming championship, organized by Topcoder and hosted at different venues around the US. In the first two years, 2001 and 2002, the tournament was titled the TopCoder Invitational. In addition to the main championship, from 2001 to 2007 Topcoder organized an annual TopCoder Collegiate Challenge tournament, for college students only. From 2007 to 2010 the TopCoder High School competition was also held. From 2015, Topcoder Regional events were held through the year in different countries. In 2020–2023, in-person Topcoder Open finals were cancelled and replaced by virtual events due to the impact of the COVID-19 pandemic and the subsequent economic slowdown. The 2023 Topcoder Open was the final edition of the contest. Competition tracks Competition tracks included in the Topcoder Open tournament changed through its history. Many of them resemble the types of challenges offered to the Topcoder Community through the year, but there is no 1:1 match. Here is the alphabetical list of all competition tracks ever present at TCO: Algorithm Competition (SRM) Timeline: 2001 – 2022 Champions: Gennady Korotkevich tourist (2022, 2021, 2020, 2019, 2014); Petr Mitrichev Petr (2018, 2015, 2013, 2006); Yuhao Du xudyh (2017); Makoto Soejima rng_58 (2016, 2011, 2010); Egor Kulikov Egor (2012); Bin Jin crazyb0y (2009); tomek (2008, 2004, 2003); Jan Kuipers Jan_Kuipers (2007); Eryx (2005); John Dethridge John Dethridge (2002); jonmac (2001). Details: The only track that was present at all main TCO events, and at most of the other Topcoder events. Follows the format of regular 1.5-hour Single Round Matches: The Coding Phase – 75 mins: All competitors are presented with the same three algorithmic problems of different complexity; each problem has its own maximal number of points. Problem descriptions are initially invisible. Competitors have 75 minutes to solve these problems. Competitor
https://en.wikipedia.org/wiki/Working%20fluid%20selection
Heat engines, refrigeration cycles and heat pumps usually involve a fluid to and from which heat is transferred while undergoing a thermodynamic cycle. This fluid is called the working fluid. Refrigeration and heat pump technologies often refer to working fluids as refrigerants. Most thermodynamic cycles make use of the latent heat (advantages of phase change) of the working fluid; in other cycles the working fluid remains in the gaseous phase while undergoing all the processes of the cycle. In heat engines, the working fluid generally undergoes a combustion process as well, for example in internal combustion engines or gas turbines. There are also heat pump and refrigeration technologies in which the working fluid does not change phase, such as the reverse Brayton or Stirling cycle. This article summarises the main criteria for selecting working fluids for a thermodynamic cycle, such as heat engines (including low-grade heat recovery using the Organic Rankine Cycle (ORC) for geothermal energy, waste heat, thermal solar energy or biomass) and heat pumps and refrigeration cycles. The article addresses how working fluids affect technological applications in which the working fluid undergoes a phase transition and does not remain in its original (mainly gaseous) phase during all the processes of the thermodynamic cycle. Finding the optimal working fluid for a given purpose – which is essential to achieve higher energy efficiency in energy conversion systems – has a great impact on the technology: it not only influences the operational variables of the cycle but also alters the layout and modifies the design of the equipment. Selection criteria for working fluids generally include thermodynamic and physical properties besides economic and environmental factors, but most often all of these criteria are used together. Selection criteria of working fluids The choice of working fluids is known to have a significant impact on the thermodynamic as well as economic
https://en.wikipedia.org/wiki/Comparison%20of%20civic%20technology%20platforms
Civic technology is technology that enables engagement and participation, or enhances the relationship between the people and government, by enhancing citizen communications and public decision-making and improving government delivery of services and infrastructure. This comparison of civic technology platforms compares platforms that are designed to improve citizen participation in governance, as distinguished from technology that directly deals with government infrastructure. Platform types Graham Smith of the University of Southampton, in his book Beyond the Ballot, used the following categorization of democratic innovations: Electoral innovations: "aim to increase electoral turnout"; Consultation innovations: "aim to inform decision-makers of citizens' views"; Deliberative innovations: "aim to bring citizens together to deliberate on policy issues, the outcomes of which may influence decision-makers"; Co-governance innovations: "aim to give citizens significant influence during the process of decision-making"; Direct democracy innovations: "aim to give citizens final decision-making power on key issues"; E-democracy innovations: "use information technology to engage citizens in the decision-making process". Comparison chart See also Civic technology Civic technology companies Comparison of Internet forum software Comparison of Q&A sites E-democracy Liquid democracy Open government Voting Direct democracy References Political software Open government Voting Direct democracy Civic technology platforms
https://en.wikipedia.org/wiki/Safe%20Swiss%20Cloud
Safe Swiss Cloud is a Swiss cloud Infrastructure as a Service (IaaS) company. The company provides computing power (CPU, RAM, data storage), object storage and managed services. History The company was founded in 2013 by Prodosh Banerjee and Gerald Dürr. In 2015, its cloud services expanded to several data centres, while it invested in equipment to increase capacity. In 2017, Safe Swiss Cloud added a number of new platforms to its offerings, including Kubernetes & OpenShift, OpenStack and VMware/vCloud. An investment was made to expand clustered and redundant solid-state drive (SSD) storage capacity. Services Safe Swiss Cloud offers compute, storage and managed cloud services. The company's cloud computing service includes an infrastructure service that offers virtual data centers with firewall, router and network components. Acquisitions In August 2015, Safe Swiss Cloud acquired the Basel-based Nexos, an IT services company. In late 2017, Everyware AG, a Swiss IT services company, acquired a controlling stake in Safe Swiss Cloud. Awards In November 2015, the company was announced as a winner of the "Bully Awards", issued to European organizations that stand out through innovation, leadership and exceptional growth. In December 2015, Safe Swiss Cloud was named a best practice for finance applications in the cloud by the European Union Agency for Network and Information Security (ENISA) in the report "Secure Use of Cloud Computing in the Finance Sector". Research Safe Swiss Cloud collaborated with the Zurich University of Applied Sciences (ZHAW) to research cyber intelligence and advanced cloud billing systems. This research was supported by CTI, the Swiss government's research and innovation program for industry. References 2013 establishments in Switzerland As a service Cloud applications Web services Cloud computing providers Cloud platforms
https://en.wikipedia.org/wiki/Welding%20of%20advanced%20thermoplastic%20composites
Advanced thermoplastic composites (ACM) consist of high-strength fibres held together by a thermoplastic matrix. Advanced thermoplastic composites are becoming more widely used in the aerospace, marine, automotive and energy industries. This is due to their decreasing cost and superior strength-to-weight ratios over metallic parts. Advanced thermoplastic composites have excellent damage tolerance, corrosion resistance, high fracture toughness, high impact resistance, good fatigue resistance, low storage cost, and infinite shelf life. Thermoplastic composites also have the ability to be formed and reformed, repaired and fusion welded. Fusion bonding fundamentals Fusion bonding is a category of techniques for welding thermoplastic composites. It requires the melting of the joint interface, which decreases the viscosity of the polymer and allows for intermolecular diffusion. Polymer chains then diffuse across the joint interface and become entangled, giving the joint its strength. Welding techniques There are many welding techniques that can be used to fusion bond thermoplastic composites. These techniques can be broken down into three classifications by the way they generate heat: frictional heating, external heating and electromagnetic heating. Some of these techniques can be very limited and only used for specific joints and geometries. Friction welding Friction welding is best used for parts that are small and flat. The welding equipment is often expensive, but produces high-quality welds. Linear vibration welding Two flat parts are brought together under pressure, with one fixed in place and the other vibrating back and forth parallel to the joint. Frictional heat is generated until the polymers are softened or melted. Once the desired temperature is met, the vibration motion stops, the polymer solidifies and a weld joint is made. The two most important welding parameters that affect the mechanical performance are welding pressure and time. De
https://en.wikipedia.org/wiki/Moving%20object%20detection
Moving object detection is a technique used in computer vision and image processing. Multiple consecutive frames from a video are compared by various methods to determine if any moving object is detected. Moving object detection has been used for a wide range of applications such as video surveillance, activity recognition, road condition monitoring, airport safety, and the monitoring of protection along marine borders. Definition Moving object detection is the recognition of the physical movement of an object in a given place or region. By performing segmentation between moving objects and the stationary area or region, the motion of moving objects can be tracked and thus analyzed later. To achieve this, considering that a video is a structure built upon single frames, moving object detection is about finding the foreground moving target(s), either in each video frame or only when the moving target makes its first appearance in the video. Traditional methods Traditional moving object detection methods can be categorized into four major approaches: background subtraction, frame differencing, temporal differencing, and optical flow. Frame differencing The frame differencing method uses the image subtraction operator: it compares two successive frames and subtracts one from the other to detect moving targets. Temporal differencing The temporal differencing method identifies a moving object by applying a pixel-wise difference method to two or three consecutive frames. See also object detection motion estimation video tracking References Image processing Motion in computer vision
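A minimal sketch of the frame differencing approach with OpenCV (the input path and the threshold value of 25 are illustrative assumptions; any region where consecutive frames differ shows up in the binary mask):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")          # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)      # pixel-wise difference of frames
    # Pixels that changed by more than 25 gray levels are marked as
    # foreground, i.e. moving-object candidates.
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("motion mask", mask)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:         # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```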
https://en.wikipedia.org/wiki/Divisor%20sum%20identities
The purpose of this page is to catalog new, interesting, and useful identities related to number-theoretic divisor sums, i.e., sums of an arithmetic function $f$ over the divisors of a natural number $n$, or equivalently the Dirichlet convolution of $f$ with one: $g(n) := (f * 1)(n) = \sum_{d \mid n} f(d)$. These identities include applications to sums of an arithmetic function over just the proper prime divisors of $n$. We also define periodic variants of these divisor sums with respect to the greatest common divisor function, in the form $g_m(n) := \sum_{d \mid \gcd(m, n)} f(d)$. Well-known inversion relations that allow the function $f$ to be expressed in terms of $g$ are provided by the Möbius inversion formula. Naturally, some of the most interesting examples of such identities result when considering the average order summatory functions over an arithmetic function $g$ defined as a divisor sum of another arithmetic function $f$. Particular examples of divisor sums involving special arithmetic functions and special Dirichlet convolutions of arithmetic functions can be found in related articles. Average order sum identities Interchange of summation identities The following identities are the primary motivation for creating this topics page. These identities do not appear to be well known, or at least well documented, and are extremely useful tools to have at hand in some applications. In what follows, we consider that $f, g, h, u, v$ are any prescribed arithmetic functions and that $G(x) := \sum_{n \le x} g(n)$ denotes the summatory function of $g$. A more common special case of the first summation below is referenced in the article on the average order of an arithmetic function. In general, these identities are collected from the so-called "rarities and b-sides" of both well-established and semi-obscure analytic number theory notes and techniques and from the papers and work of the contributors. The identities themselves are not difficult to prove and are an exercise in standard manipulations of series inversion and divisor sums. Therefore, we omit their proofs here. The convolution method
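A small Python check of the divisor-sum definition and of one standard interchange-of-summation identity, $\sum_{n \le N} \sum_{d \mid n} f(d) = \sum_{d \le N} f(d) \lfloor N/d \rfloor$; the choices of $f$ and $N$ below are arbitrary:

```python
def divisor_sum(f, n):
    # g(n) = (f * 1)(n) = sum of f(d) over the divisors d of n
    return sum(f(d) for d in range(1, n + 1) if n % d == 0)

f = lambda d: d * d   # an arbitrary arithmetic function
N = 100

# Interchanging the order of summation turns a double sum over divisors
# into a single weighted sum: sum_{n<=N} g(n) = sum_{d<=N} f(d) * floor(N/d)
lhs = sum(divisor_sum(f, n) for n in range(1, N + 1))
rhs = sum(f(d) * (N // d) for d in range(1, N + 1))
assert lhs == rhs
```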
https://en.wikipedia.org/wiki/USB%20communications
This article provides information about the communications aspects of Universal Serial Bus (USB): signaling, protocols, and transactions. USB is an industry standard that specifies cables, connectors, and protocols used for communication between electronic devices. USB ports and cables are used to connect hardware such as printers, scanners, keyboards, mice, flash drives, external hard drives, joysticks, cameras, monitors, and more to computers of all kinds. USB also supports signaling rates from 1.5 Mbit/s (Low speed) to 80 Gbit/s (USB4 2.0), depending on the version of the standard. The article explains how USB devices transmit and receive data using electrical signals over the physical layer, how they identify themselves and negotiate parameters such as speed and power with the host or other devices using standard protocols such as the USB Device Framework and USB Power Delivery, and how they exchange data using packets of different types and formats such as token, data, handshake, and special packets. Signaling (USB PHY) Signaling rate (transmission rate) The maximum signaling rate in USB 2.0 is 480 Mbit/s (60 MB/s) per controller and is shared amongst all attached devices. Some personal computer chipset manufacturers overcome this bottleneck by providing multiple USB 2.0 controllers within the southbridge. In practice, and including USB protocol overhead, data rates of 320 Mbit/s (38 MB/s) are sustainable over a high-speed bulk endpoint. Throughput can be affected by additional bottlenecks, such as a hard disk drive, as seen in routine testing performed by CNET, where write operations to typical high-speed hard drives sustain rates of 25–30 MB/s, and read operations 30–42 MB/s; this is 70% of the total available bus bandwidth. For USB 3.0, typical write speed is 70–90 MB/s, while read speed is 90–110 MB/s. Mask tests, also known as eye diagram tests, are used to determine the quality of a signal in the time domain. They are defined in the referenced documents.
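A short Python calculation reproducing the bandwidth figures quoted above; the overhead fraction is only what the article's own numbers imply, not a value taken from the USB specification:

```python
MBIT = 1_000_000

raw = 480 * MBIT / 8        # raw high-speed rate: 60,000,000 bytes/s (60 MB/s decimal)
bulk = 320 * MBIT / 8       # sustained bulk rate: 40 MB/s decimal
print(bulk / 2**20)         # ~38.1, matching the quoted "38 MB/s" (binary MiB/s)

# Disk-limited reads of 30-42 MB/s use up to ~70% of the raw bus rate
print(42 / 60)              # 0.7
```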
https://en.wikipedia.org/wiki/Radio-frequency%20welding
Radio-frequency welding, also known as dielectric welding and high-frequency welding, is a plastic welding process that utilizes high-frequency electric fields to induce heating and melting of thermoplastic base materials. The electric field is applied by a pair of electrodes after the parts being joined are clamped together. The clamping force is maintained until the joint solidifies. Advantages of this process are fast cycle times (on the order of a few seconds), automation, repeatability, and good weld appearance. Only plastics which have dipoles can be heated using radio waves, and therefore not all plastics are able to be welded using this process. Also, this process is not well suited for thick or overly complex joints. The most common use of this process is for lap joints or seals on thin plastic sheets or parts. Heating mechanism Four types of polarization can occur in materials subjected to high-frequency alternating electric fields: electronic or electric polarization is the redistribution of electrons; ionic polarization is the redistribution of charged particles (cations and anions); Maxwell–Wagner polarization is a charge buildup at the interfaces of non-homogeneous materials; dipole polarization is the realignment of permanent dipoles. Dipole polarization is the phenomenon responsible for the heating mechanism in radio-frequency plastic welding, dielectric heating. When an electric field is applied to a molecule with an asymmetric distribution of charge, or dipole, the electric forces cause the molecule to align itself with the electric field. When an alternating electric field is applied, the molecule will continuously reverse its alignment, leading to molecular rotation. This process is not instantaneous; therefore, if the frequency is high enough, the dipole will be unable to rotate quickly enough to stay aligned with the electric field, resulting in random motion as the molecule attempts to follow the electric field. This motion causes heat to be generated within the polymer.
https://en.wikipedia.org/wiki/Batch%20normalization
Batch normalization (also known as batch norm) is a method used to make the training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015. While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it could mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network. Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others maintain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks. Internal covariate shift Each layer of a neural network has inputs with a corresponding distribution, which is affected during the training process by the randomness in the parameter initialization and the randomness in the input data. The effect of these sources of randomness on the distribution of the inputs to internal layers during training is described as internal covariate shift. Although a clear-cut precise definition seems to be missing, the phenomenon observed in experiments is the change in the means and variances of the inputs to internal layers during training. Batch normalization was initially proposed to mitigate internal covariate shift. During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This problem is especially severe for deep networks.
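A minimal NumPy sketch of the batch-normalizing transform for a mini-batch of layer inputs: re-centering by the batch mean, re-scaling by the batch standard deviation, then applying the learned scale gamma and shift beta (epsilon is the usual small constant for numerical stability); the batch shape and random data are illustrative:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch_size, features) mini-batch of layer inputs."""
    mu = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # re-centered and re-scaled inputs
    return gamma * x_hat + beta              # learned affine transform

x = np.random.randn(64, 10) * 3.0 + 5.0
y = batch_norm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 and ~1 per feature
```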
https://en.wikipedia.org/wiki/Electronics%20%28journal%29
Electronics is a peer-reviewed scientific journal that covers the study of electronics, including the design, development, and application of electronic devices, systems, and circuits. The journal is published by MDPI and was established in 2012. The editor-in-chief is Flavio Canavero (Politecnico di Torino). The journal covers a wide range of topics related to electronics, including: electronic devices, electronic materials, electronic circuits, electronic systems, communication electronics, power electronics, and biomedical electronics. The journal also includes articles on the application of electronics in various fields, such as consumer electronics, industrial electronics, automotive electronics, and military electronics. The journal publishes original research articles, review articles, and short communications. Abstracting and indexing EBSCO databases ProQuest databases Scopus According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.690. References External links Electronics English-language journals MDPI academic journals Academic journals established in 2012
https://en.wikipedia.org/wiki/Foods%20%28journal%29
Foods is a peer-reviewed scientific journal covering various aspects of food science. It is published by MDPI and was established in 2012. The editor-in-chief is Arun K. Bhunia (Purdue University). The journal publishes research articles, reviews, and commentaries related to food research, including food chemistry, food toxicology, food engineering, and food quality. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2022 impact factor of 5.2. References External links English-language journals Academic journals established in 2012 Food science journals MDPI academic journals Continuous journals Creative Commons Attribution-licensed journals
https://en.wikipedia.org/wiki/Robinson%20compass%20mask
A Robinson compass mask is a type of compass mask used for edge detection. It has eight major compass orientations, each of which extracts the edges with respect to its direction. The combined use of compass masks of different directions can detect edges from different angles. Technical explanation The Robinson compass mask is defined by taking a single mask and rotating it to the eight major compass orientations. The direction axis is the line of zeros in the matrix. The Robinson compass mask is similar to the Kirsch compass masks, but is simpler to implement: since the matrix coefficients contain only 0, 1 and 2 and are symmetrical, only the results of four masks need to be calculated; the other four results are the negations of the first four. An edge, or contour, is a tiny area with neighboring distinct pixel values. Convolving each mask with the image creates a high output value wherever there is a rapid change of pixel value, thus identifying an edge point. All the detected edge points line up as edges. Example When Robinson compass masks are applied to an image, the edges in the direction of each mask are enhanced. References Feature detection (computer vision) Image processing
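A Python/NumPy sketch of the construction; the north mask below is the kernel commonly given for Robinson masks, and rotating its eight border coefficients in 45-degree steps yields the other orientations. As noted above, four responses are negations of the other four, so only four convolutions are computed:

```python
import numpy as np
from scipy.ndimage import convolve

def rotate45(mask):
    # Shift the 8 border coefficients one step around the 3x3 ring
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = mask.copy()
    for (r1, c1), (r2, c2) in zip(ring, ring[1:] + ring[:1]):
        out[r2, c2] = mask[r1, c1]
    return out

north = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])          # the zeros mark the direction axis

masks = [north]
for _ in range(3):                      # 4 masks suffice; the rest are negations
    masks.append(rotate45(masks[-1]))

def robinson_edges(image):
    responses = [convolve(image.astype(float), m) for m in masks]
    # The max of |response| covers all 8 orientations, negations included
    return np.max(np.abs(responses), axis=0)
```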
https://en.wikipedia.org/wiki/Discrete%20skeleton%20evolution
Discrete Skeleton Evolution (DSE) describes an iterative approach to reducing a morphological or topological skeleton. It is a form of pruning in that it removes noisy or redundant branches (spurs) generated by the skeletonization process, while preserving information-rich "trunk" segments. The value assigned to individual branches varies from algorithm to algorithm, with the general goal being to convey the features of interest of the original contour with a few carefully chosen lines. Usually, clarity for human vision (i.e., the ability to "read" some features of the original shape from the skeleton) is valued as well. DSE algorithms are distinguished by complex, recursive decision-making processes with high computational requirements. Pruning methods such as structuring element (SE) convolution and the Hough transform are general-purpose algorithms which quickly pass through an image and eliminate all branches shorter than a given threshold. DSE methods are most applicable when detail retention and contour reconstruction are valued. Methodology Pre-processing Input images will typically contain more data than is necessary to generate an initial skeleton, and thus must be reduced in some way. Reducing the resolution, converting to grayscale, and then to binary by masking or thresholding are common first steps (see the sketch below). Noise removal may occur before and/or after converting an image to binary. Morphological operations such as closing, opening, and smoothing of the binary image may also be part of pre-processing. Ideally, the binarized contour should be as noise-free as possible before the skeleton is generated. Skeletonization DSE techniques may be applied to an existing skeleton or incorporated as part of the skeleton-growing algorithm. Suitable skeletons may be obtained using a variety of methods: thinning algorithms, such as the grassfire transform; Voronoi diagrams; the medial axis transform or symmetry axis transform; distance mapping. Significance measures
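A sketch of the pre-processing and skeletonization steps described above, using OpenCV and scikit-image; the file name, kernel size and the use of Otsu thresholding are illustrative choices, not requirements of DSE:

```python
import cv2
from skimage.morphology import skeletonize

img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

# Binarize by thresholding (Otsu picks the threshold automatically)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological closing then opening to suppress contour noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_OPEN, kernel)

# Thinning-based skeleton, to be pruned afterwards by a DSE pass
skeleton = skeletonize(clean > 0)
```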
https://en.wikipedia.org/wiki/Image%20foresting%20transform
In the practice of digital image processing, Alexandre X. Falcao, Jorge Stolfi, and Roberto de Alencar Lotufo have shown that the Image Foresting Transform (IFT) can save time in the processing of 2-D images, 3-D images, and moving images. History In 1959, Dijkstra used a balanced heap data structure to improve upon an algorithm, presented by Moore in 1957 and Bellman in 1958, that computed the cost of the paths in a general graph. Dial improved on the algorithm a decade later using a bucket sorting technique. The algorithm has been tweaked and modified in many ways since then, and it is on this version that Falcao, Stolfi, and Lotufo improved. Definition The transform is a tweaked version of Dijkstra's shortest-path algorithm, optimized for using more than one input and for implementing digital image processing operators. The transform builds a graph of the pixels in an image, in which the connections between these points carry the "cost" of the path portrayed. The cost is calculated by inspecting characteristics of the path between pixels, for example grey scale, color, or gradient, among many others. Trees are made by connecting the pixels that have the same or close cost for applying the chosen operator. The robustness of the transform comes at a price: it uses a lot of storage space for the code and the data being processed. When the transform is through, the predecessor, the cost, and the label are returned. Most of the operators used for digital image processing can use these three pieces of information to be optimized. Optimization Depending on which digital image processing operator has been chosen, the algorithm can be further tweaked for optimization according to what that operator uses. The algorithm can also be optimized by cutting out the recalculation of paths; this is accomplished by using an external reference table to keep track of the calculated paths. "Backward arcs" can be eliminated by comparing the costs of the paths involved.
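A condensed Python sketch of the Dijkstra-like core, returning the predecessor, cost and label maps mentioned above; the graph interface (the neighbors function, seed set, and path-cost rule) is a simplification assumed for illustration:

```python
import heapq

def image_foresting_transform(pixels, neighbors, seeds, extend_cost):
    """pixels: iterable of nodes; seeds: {pixel: label};
    extend_cost(c, p, q): cost of extending a path of cost c from p to q."""
    cost = {p: float("inf") for p in pixels}
    pred = {p: None for p in pixels}
    label = {p: None for p in pixels}
    heap = []
    for s, lab in seeds.items():
        cost[s], label[s] = 0.0, lab
        heapq.heappush(heap, (0.0, s))
    while heap:
        c, p = heapq.heappop(heap)
        if c > cost[p]:
            continue                  # stale queue entry
        for q in neighbors(p):
            new_c = extend_cost(c, p, q)
            if new_c < cost[q]:       # cheaper path found: re-root q
                cost[q], pred[q], label[q] = new_c, p, label[p]
                heapq.heappush(heap, (new_c, q))
    return pred, cost, label
```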
https://en.wikipedia.org/wiki/Endangered%20river
An endangered river is one which has the potential to partly or fully dry up, or one that is thought to have ecological issues now or in the near future. Some such issues are natural, while others are the direct result of human development. Organisations including the World Wildlife Fund (WWF) have published lists of rivers at risk. The WWF's 2007 list featured the Danube, the Nile, and the Rio Grande amongst others, stating that these "once great rivers" are in danger and "can no longer be assured of reaching the sea unhindered". The US National Park Service reported in 2015 that part of the Rio Grande "often lies dry". Dangers may be the result of natural changes of conditions in the local environment, but many are due to human development projects such as dams and irrigation. Humans have also caused significant water pollution, which endangers life that relies upon the water source. Importance According to the World Resources Institute (WRI), river basins are "indispensable resources for billions of people, companies, farms, and ecosystems", but many suffer from "water stress". It states that numerous major rivers around the world have become depleted to the point that they can fail to reach their destinations. Water scarcity is a danger to all the life that depends on the source. Dangers to rivers Rivers face many dangers, both natural and manmade. Climate change can have a significant impact on rivers, both positive and negative; this can itself be seen as an indirect result of human development. A reduction in rainfall or an increase in temperature in an area can make a river fully or partially dry out, which often means that the water can no longer reach its mouth. Changes in the flora and fauna of an area can have a substantial impact on the landscape and local ecology. Invasive species can move into an area or be introduced by humans; one particular example of the latter is the case of beavers being introduced into Tierra del Fuego in 1946.
https://en.wikipedia.org/wiki/Point-normal%20triangle
The curved point-normal triangle, in short PN triangle, is an interpolation algorithm to retrieve a cubic Bézier triangle from the vertex coordinates of a regular flat triangle and its normal vectors. The PN triangle retains the vertices of the flat triangle as well as the corresponding normals. For computer graphics applications, a linear or quadratic interpolant of the normals is additionally created to represent an incorrect but plausible normal when rendering, giving the impression of smooth transitions between adjacent PN triangles. The use of PN triangles enables the visualization of triangle-based surfaces in a smoother shape at low cost in terms of rendering complexity and time. Mathematical formulation From the given vertex positions of a flat triangle and the corresponding normal vectors at the vertices, a cubic Bézier triangle is constructed. In contrast to the notation of the Bézier triangle page, the nomenclature follows G. Farin (2002); we therefore denote the 10 control points as $b_{ijk}$ with the non-negative indices satisfying the condition $i + j + k = 3$. The first three control points $b_{300}, b_{030}, b_{003}$ are equal to the given vertices. Six control points related to the triangle edges, i.e. $b_{210}, b_{120}, b_{021}, b_{012}, b_{102}, b_{201}$, are computed from the vertex positions and normals; this definition ensures that the original vertex normals are reproduced in the interpolated triangle. Finally, the internal control point $b_{111}$ is derived from the previously calculated control points (see the sketch below). An alternative interior control point has also been suggested in the literature. References Geometry
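A Python sketch of the control-point construction as given in the commonly cited formulation of Vlachos et al. (2001), which the article's source may state in different notation; the argument names (three vertex positions and unit normals) are this sketch's own:

```python
import numpy as np

def pn_control_points(P1, P2, P3, N1, N2, N3):
    def edge_cp(Pi, Pj, Ni):
        # Take the point a third of the way along the edge and project it
        # into the tangent plane of the nearer vertex (normal Ni)
        w = np.dot(Pj - Pi, Ni)
        return (2.0 * Pi + Pj - w * Ni) / 3.0

    b300, b030, b003 = P1, P2, P3                 # corner control points
    b210, b120 = edge_cp(P1, P2, N1), edge_cp(P2, P1, N2)
    b021, b012 = edge_cp(P2, P3, N2), edge_cp(P3, P2, N3)
    b102, b201 = edge_cp(P3, P1, N3), edge_cp(P1, P3, N1)

    # Interior point: push the edge-point average away from the flat centroid
    E = (b210 + b120 + b021 + b012 + b102 + b201) / 6.0
    V = (P1 + P2 + P3) / 3.0
    b111 = E + (E - V) / 2.0
    return (b300, b030, b003, b210, b120, b021, b012, b102, b201, b111)
```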
https://en.wikipedia.org/wiki/Intel%205-level%20paging
Intel 5-level paging, referred to simply as 5-level paging in Intel documents, is a processor extension for the x86-64 line of processors. It extends the size of virtual addresses from 48 bits to 57 bits, increasing the addressable virtual memory from 256 TB to 128 PB. The extension was first implemented in the Ice Lake processors, and the 4.14 Linux kernel added support for it. Windows 10 and 11, along with their server versions, also support this extension in their latest updates, where it is provided by a separate system kernel called ntkrla57.exe. Technology x86-64 processors without this feature use a four-level page table structure when operating in 64-bit mode. A similar situation arose when the 32-bit IA-32 processors used two levels, allowing up to 4 GB of memory (both virtual and physical): to support more than 4 GB of RAM, an additional mode of address translation called Physical Address Extension (PAE) was defined, involving a third level, enabled by setting a bit in the CR4 register. Likewise, the new extension is enabled by setting bit 12 of the CR4 register (known as LA57). This bit is only used when the processor is operating in 64-bit mode, and may only be modified when it is not. If the bit is not set, the processor operates with four paging levels. As adding another page table multiplies the address space by 512, the virtual limit has increased from 256 TB to 128 PB. An extra nine bits of the virtual address index the new table, so while formerly bits 0 through 47 were in use, now bits 0 through 56 are in use. As with four-level paging, the high-order bits of a virtual address that do not participate in address translation must be the same as the most significant implemented bit. With five-level paging enabled, this means that bits 57 through 63 must be copies of bit 56. Intel has renamed the existing paging system, which used to be known as IA-32e paging, as "4-level paging". Extending the page table entry to 128 bits would allow a full 64-bit address space.
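A small Python illustration of the address-space arithmetic and the canonical-address rule described above (bits 57 through 63 must be copies of bit 56 under LA57; bits 48 through 63 must be copies of bit 47 under 4-level paging); the function name and interface are this sketch's own:

```python
def is_canonical(addr, levels=5):
    bits = 57 if levels == 5 else 48      # implemented virtual-address bits
    # All bits from (bits - 1) up to 63 must be identical (sign extension)
    top = addr >> (bits - 1)
    mask = (1 << (64 - bits + 1)) - 1
    return (top & mask) in (0, mask)

print(2**48 // 2**40)                     # 256 (TB with 4-level paging)
print(2**57 // 2**50)                     # 128 (PB with 5-level paging)
print(is_canonical(1 << 56, levels=5))    # False: bits 57-63 do not copy bit 56
print(is_canonical(0xFF00_0000_0000_0000, levels=5))  # True: high canonical half
```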
https://en.wikipedia.org/wiki/Crinkled%20arc
In mathematics, and in particular the study of Hilbert spaces, a crinkled arc is a type of continuous curve. The concept is usually credited to Paul Halmos. Specifically, consider $f \colon [0, 1] \to H$, where $H$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle$. We say that $f$ is a crinkled arc if it is continuous and possesses the crinkly property: if $0 \le a < b \le c < d \le 1$ then $\langle f(b) - f(a), f(d) - f(c) \rangle = 0$, that is, the chords $f(b) - f(a)$ and $f(d) - f(c)$ are orthogonal whenever the intervals $[a, b]$ and $[c, d]$ are non-overlapping. Halmos points out that if two nonoverlapping chords are orthogonal, then "the curve makes a right-angle turn during the passage between the chords' farthest end-points" and observes that such a curve would "seem to be making a sudden right angle turn at each point", which would justify the choice of terminology. Halmos deduces that such a curve could not have a tangent at any point, and uses the concept to justify his statement that an infinite-dimensional Hilbert space is "even roomier than it looks". Writing in 1975, Richard Vitale considers Halmos's empirical observation that every attempt to construct a crinkled arc results in essentially the same solution and proves that $f$ is a crinkled arc if and only if, after appropriate normalizations, $f(t) = \sqrt{2} \sum_{n=1}^{\infty} e_n \frac{\sin\left(\left(n - \frac{1}{2}\right)\pi t\right)}{\left(n - \frac{1}{2}\right)\pi}$, where $\{e_n\}$ is an orthonormal set. The normalizations that need to be allowed are the following: a) replace the Hilbert space $H$ by its smallest closed subspace containing all the values of the crinkled arc; b) uniform scalings; c) translations; d) reparametrizations. Now use these normalizations to define an equivalence relation on crinkled arcs, under which two of them are equivalent if they become identical after some sequence of such normalizations. Then there is just one equivalence class, and Vitale's formula describes a canonical example. See also References Banach spaces Differential calculus Hilbert spaces Topological vector spaces
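A short verification, added here as a sketch, that Vitale's canonical example indeed has the crinkly property; it only assumes the standard fact that the functions $\varphi_n(t) = \sqrt{2}\cos\left(\left(n - \frac{1}{2}\right)\pi t\right)$ form an orthonormal basis of $L^2[0, 1]$:

```latex
\text{Write } \omega_n = \left(n - \tfrac{1}{2}\right)\pi.
\text{ The } e_n\text{-coefficient of } f(b) - f(a) \text{ is }
\sqrt{2}\,\frac{\sin(\omega_n b) - \sin(\omega_n a)}{\omega_n}
  = \int_a^b \sqrt{2}\cos(\omega_n t)\,dt
  = \langle \mathbf{1}_{[a,b]}, \varphi_n \rangle_{L^2}.
\text{Hence, by Parseval's identity, }
\langle f(b) - f(a),\; f(d) - f(c) \rangle
  = \sum_{n \ge 1} \langle \mathbf{1}_{[a,b]}, \varphi_n \rangle
                   \langle \mathbf{1}_{[c,d]}, \varphi_n \rangle
  = \langle \mathbf{1}_{[a,b]}, \mathbf{1}_{[c,d]} \rangle_{L^2} = 0
```

whenever the intervals $[a, b]$ and $[c, d]$ are non-overlapping, which is exactly the crinkly property.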
https://en.wikipedia.org/wiki/MoltenVK
MoltenVK is a software library which allows Vulkan applications to run on top of Metal on Apple's macOS, iOS, and tvOS operating systems. It is the first software component to be released for the Vulkan Portability Initiative, a project to have a subset of Vulkan run on platforms lacking native Vulkan drivers. There are some limitations compared with a native Vulkan implementation. History MoltenVK was first released as a proprietary and commercially licensed product by The Brenwill Workshop on July 27, 2016. On July 31, 2017, Khronos announced the formation of the Vulkan Portability Technical Subgroup. Open source On February 26, 2018, Khronos announced that Vulkan became available on macOS and iOS products through the MoltenVK library. Valve announced that Dota 2 would run on macOS using the Vulkan API with the aid of MoltenVK, and that they had made an arrangement with developer The Brenwill Workshop Ltd to release MoltenVK as open-source software under the Apache License version 2.0. On May 30, 2018, Qt was updated with Vulkan support on macOS using MoltenVK. On May 31, 2018, optional Vulkan support for Dota 2 on macOS was released. Benchmarks for the game were available the following day, showing better performance using Vulkan and MoltenVK compared to OpenGL. On July 20, 2018, Wine was updated with Vulkan support on macOS using MoltenVK. On July 29, 2018, the first app using MoltenVK was accepted onto the App Store, after initially being rejected. On August 6, 2018, Google open-sourced Filament, a cross-platform real-time physically based rendering engine, with MoltenVK support for macOS/iOS. On November 28, 2018, Valve released Artifact, their first Vulkan-only game on macOS, using MoltenVK. Version 1.0 On January 29, 2019, MoltenVK 1.0.32 was released with an early prototype of the Vulkan Portability Extensions. The RPCS3 and Dolphin emulators were updated with Vulkan support on macOS using MoltenVK. On April 13, 2019, MoltenVK 1.0.34 was released with support for tessellation.
https://en.wikipedia.org/wiki/Evolution%3A%20Random%20Mutations
Evolution: Random Mutations is a card game created by Dmitriy Knorre and Sergey Machin in 2010. The game is inspired by evolutionary biology. It was published by SIA Rightgames RBG, and its publishing was financed through Boomstarter. English, French and German editions of the game were published in 2014. Two or more players create their own animals and make them evolve and hunt in order to survive. Random Mutations is a remake of the Evolution: The Origin of Species base game published in 2010. The new game shows the aspects of evolution better: the generation of traits is truly random, occurring as a result of either a positive or a negative mutation, and as a result of the natural selection that plays out in the game, positive mutations remain more often than negative ones. One more important entity is the population of a species. The game presentation was held on 15 December 2013. Rules The player with the largest number of victory points at the end of the game is the winner; the rankings of the players in a match are determined accordingly. Game entities Main deck: each player fills up their own deck from the main deck by drawing cards. A card can be played as: a trait of a species (if the population of a species is more than 1, the player cannot add a new trait to it); a species (a separate population of animals with the same set of traits; all the animals of a species obtain all the traits of that species); or an animal (it increases the population of a species that already exists; as a rule, it is prohibited to increase the animal population of a species above the total number of animal species a player has). The food bank is the pool of tokens for animals. All tokens are divided into the following types: red tokens (food), which an animal takes from the food bank; blue tokens (extra food), which an animal gets using its traits; green tokens (shelters), an animal with a shelter token being invulnerable to carnivores; and black tokens (parasites).
https://en.wikipedia.org/wiki/Data%20protection%20officer
A data protection officer (DPO) ensures, in an independent manner, that an organization applies the laws protecting individuals' personal data. The designation, position and tasks of a DPO within an organization are described in Articles 37, 38 and 39 of the European Union (EU) General Data Protection Regulation (GDPR). Many other countries require the appointment of a DPO, and it is becoming more prevalent in privacy legislation. According to the GDPR, the DPO shall directly report to the highest management level. This doesn't mean the DPO has to be directly managed at this level but they must have direct access to give advice to senior managers who are making decisions about personal data processing. The core responsibilities of the DPO include ensuring his/her organization is aware of, and trained on, all relevant GDPR obligations. Common tasks of a DPO include ensuring proper processes are in place for subject access requests, data mapping, privacy impact assessments, as well as raising data privacy awareness with employees. Additionally, they must conduct audits to ensure compliance, address potential issues proactively, and act as a liaison between his/her organization and the public regarding all data privacy matters. In Germany, a 2001 law established a requirement for a DPO in certain organizations and included various protections around the scope and tenure for the role, including protections against dismissal for bringing problems to the attention of management. Many of these concepts were incorporated into the drafting of Article 38 of the GDPR and have continued to be incorporated in other privacy standards. References External links Section 4 - Data protection officer (DPO) Data protection Data protection authorities
https://en.wikipedia.org/wiki/Selenia%20Gp-16
The Selenia Gp-16 was a general-purpose minicomputer designed by the Italian company Selenia of the STET group. It was followed by an upgraded version called the Gp-160. History The Gp-16 was a minicomputer designed mainly for industrial customers, whose design concept was similar to that of the PDP-8. The design took place in Rome during the mid-sixties under the supervision of Saverio Rotella, while production was carried out in Fusaro (Naples) in the second half of that decade. Its best-known uses were in airport control towers, as in Venice; a modified, upgraded version was used in the Gruppi Speciali of CSELT, the second electromechanical phone switch in Europe. The Gp-16 was later adopted also by Olivetti. However, it never reached a large diffusion because of the small market push from its producing company. Technical features Hardware The Gp-16 had: word length: 16 bits; RAM: 8 Kbyte, extensible to 32 Kbyte (the CSELT version extended it to 64 Kbyte); cycle time: 4 microseconds; ROM technology: magnetic-core memory; input/output peripherals: punched card reader, punched tape reader, teleprinter. Software No true operating system; Fortran compiler; arithmetical calculation routines; monitor processing system; assembly language. Notes Bibliography Fondazione Adriano Olivetti, La cultura informatica in Italia: riflessioni e testimonianze sulle origini: 1950-1970, Bollati Boringhieri, 1993. External links ENAV: GP-16/GP-160 Minicomputers 16-bit computers
https://en.wikipedia.org/wiki/MIR-3
MIR-3 is a third-generation computer that was released in the 1970s in the Soviet Union, incorporating the microelectronics advances of that decade. The main task of the MIR-3 computer was to solve computational problems for engineers. MIR-3 consisted of a keyboard, a TV display, devices for reading magnetic tapes and disks, and a processor. The size of MIR-3 decreased compared to its predecessors: it was now the size of a regular desk, although this does not count the devices for reading magnetic tapes and disks. The speed of the MIR-3 computer was 10^5 to 10^7 operations per second, and the memory capacity was up to 10^6 units. Essentially, the MIR-3 computer consisted of several computers: the processor was composed of several sub-processors, each of which was responsible for the operation of a separate MIR-3 unit; for example, one for reading information from magnetic tapes and transferring information, another for processing and calculations, a third for the keyboard and printing, and so on. The complex structure of MIR-3 required the creation of means for coordinating the work of the individual computer parts. The language Analitik-74 was used in its creation. The MIR-3 computer was created at the Academy of Sciences of the Ukrainian SSR, with the participation of Victor Glushkov. References Soviet computer systems
https://en.wikipedia.org/wiki/International%20Journal%20of%20Antennas%20and%20Propagation
International Journal of Antennas and Propagation is a peer-reviewed, scientific open-access journal that publishes original and review articles in all areas of antennas and propagation. The editor-in-chief is Slawomir Koziel. Abstracting and indexing This journal is abstracted and indexed by the following services: Academic OneFile, Aerospace and High Technology Database, Aluminium Industry Abstracts, Current Contents - Engineering, Computing and Technology, EBSCO, Ei Compendex, INSPEC, Science Citation Index, Scopus, and Solid State and Superconductivity Abstracts. References Hindawi Publishing Corporation academic journals Electrical and electronic engineering journals Electromagnetism journals
https://en.wikipedia.org/wiki/Account%20verification
Account verification is the process of verifying that a new or existing account is owned and operated by a specified real individual or organization. A number of websites, for example social media websites, offer account verification services. Verified accounts are often visually distinguished by check mark icons or badges next to the names of individuals or organizations. Account verification can enhance the quality of online services, mitigating sockpuppetry, bots, trolling, spam, vandalism, fake news, disinformation and election interference. History Account verification was initially a feature for public figures and accounts of interest, individuals in "music, acting, fashion, government, politics, religion, journalism, media, sports, business and other key interest areas". It was introduced by Twitter in June 2009, followed by Google+ in 2011, Facebook pages in October 2015 (available in the United States, Canada, the United Kingdom, Australia and New Zealand), Facebook profiles and pages in 2018 (available worldwide), Instagram in 2014, and Pinterest in 2015. On YouTube, users are able to submit a request for a verification badge once they obtain 100,000 or more subscribers; YouTube also has an "official artist" badge for musicians and bands. In July 2016, Twitter announced that, beyond public figures, any individual would be able to apply for account verification. This was temporarily suspended in February 2018, following a backlash over the verification of one of the organisers of the far-right Unite the Right rally, due to a perception that verification conveys "credibility" or "importance". In March 2018, during a live-stream on Periscope, Jack Dorsey, co-founder and CEO of Twitter, discussed the idea of allowing any individual to get a verified account. Twitter reopened account verification applications in May 2021 after revamping their account verification criteria, this time offering notability criteria for account categories such as government and companies.
https://en.wikipedia.org/wiki/Autocrypt
Autocrypt is a cryptographic protocol for email clients aiming to simplify key exchange and enable encryption. Version 1.0 of the Autocrypt specification was released in December 2017 and makes no attempt to protect against MITM attacks. It is implemented on top of OpenPGP, replacing its complex key management with the fully automated, unsecured exchange of cryptographic keys between peers. Method Autocrypt-capable email clients transparently negotiate encryption capabilities and preferences and exchange keys between users alongside sending regular emails. This is done by including the key material and encryption preferences in the header of each email, which allows encrypting any message to a contact who has previously sent the user email. This information is not signed or verified in any way, even if the actual message is encrypted and verified. No support is required from email providers other than preserving and not manipulating the Autocrypt-specific header fields. When a message is encrypted to a group of receivers, keys are also automatically sent to all receivers in this group. This ensures that a reply to a message can be encrypted without any further complications or work by the user. Security model Autocrypt is guided by the idea of opportunistic security from RFC 7435, but implements something much less secure than a trust-on-first-use (TOFU) model. Encryption of messages between Autocrypt-capable clients can be enabled without further need of user interaction. Traditional OpenPGP applications should display a noticeable warning if keys are not verified, either manually or by a web-of-trust method, before use. In contrast, Autocrypt completely forgoes any kind of key verification: keys are exchanged during the initial handshake, and valid or invalid keys of peers may be replaced at any later time without any user interaction or verification. This makes it very easy to exchange new keys if a user loses access to their key, but also makes the protocol more vulnerable to man-in-the-middle attacks.
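A Python sketch of the header layout used for this in-band key transport, following the attribute names from the Autocrypt Level 1 specification (addr, prefer-encrypt, keydata); the email address and key bytes below are made-up placeholders:

```python
import base64

def autocrypt_header(addr, openpgp_key_bytes, prefer_encrypt="mutual"):
    # keydata carries the sender's base64-encoded OpenPGP public key
    keydata = base64.b64encode(openpgp_key_bytes).decode("ascii")
    return f"Autocrypt: addr={addr}; prefer-encrypt={prefer_encrypt}; keydata={keydata}"

# Placeholder key material; a real client would embed its actual public key
print(autocrypt_header("alice@example.org", b"\x98\x01placeholder"))
```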
https://en.wikipedia.org/wiki/Pure%20inductive%20logic
Pure inductive logic (PIL) is the area of mathematical logic concerned with the philosophical and mathematical foundations of probabilistic inductive reasoning. It combines classical predicate logic and probability theory (Bayesian inference). Probability values are assigned to sentences of a first-order relational language to represent degrees of belief that should be held by a rational agent. Conditional probability values represent degrees of belief based on the assumption of some received evidence. PIL studies prior probability functions on the set of sentences and evaluates the rationality of such prior probability functions through principles that such functions should arguably satisfy. Each of the principles directs the function to assign probability values and conditional probability values to sentences in some respect rationally. Not all desirable principles of PIL are compatible, so no prior probability function exists that satisfies them all. Some prior probability functions, however, are distinguished through satisfying an important collection of principles. History Inductive logic started to take a clearer shape in the early 20th century in the work of William Ernest Johnson and John Maynard Keynes, and was further developed by Rudolf Carnap. Carnap introduced the distinction between pure and applied inductive logic, and the modern Pure Inductive Logic evolves along the lines of the pure, uninterpreted approach envisaged by Carnap. Framework General case In its basic form, PIL uses first-order logic without equality, with the usual connectives $\wedge, \vee, \neg, \rightarrow$ (and, or, not and implies respectively), quantifiers $\forall, \exists$, finitely many predicate (relation) symbols, and countably many constant symbols $a_1, a_2, a_3, \ldots$. There are no function symbols. The predicate symbols can be unary, binary or of higher arities. The finite set of predicate symbols may vary while the rest of the language is fixed. It is a convention to refer to the language as $L$ and to write $L = \{P_1, \ldots, P_q\}$, where the $P_i$ list the predicate symbols.
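For concreteness, one standard way of making "probability function" precise in this setting, following axioms common in the PIL literature (e.g. Paris and Vencovská); the notation $SL$ for the set of sentences of $L$ is this sketch's own assumption. A function $w \colon SL \to [0, 1]$ is a probability function if, for all sentences $\theta, \varphi$,

```latex
\begin{aligned}
&\text{(P1)}\quad \text{if } \models \theta \text{ then } w(\theta) = 1,\\
&\text{(P2)}\quad \text{if } \models \neg(\theta \wedge \varphi) \text{ then } w(\theta \vee \varphi) = w(\theta) + w(\varphi),\\
&\text{(P3)}\quad w(\exists x\, \theta(x)) = \lim_{n \to \infty} w\big(\theta(a_1) \vee \cdots \vee \theta(a_n)\big),
\end{aligned}
```

with conditional probabilities then defined by $w(\theta \mid \varphi) = w(\theta \wedge \varphi)/w(\varphi)$ whenever $w(\varphi) > 0$.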
https://en.wikipedia.org/wiki/Nessus%20Attack%20Scripting%20Language
The Nessus Attack Scripting Language, usually referred to as NASL, is a scripting language used by vulnerability scanners like Nessus and OpenVAS. With NASL, specific attacks can be automated based on known vulnerabilities. Tens of thousands of plugins have been written in NASL for Nessus and OpenVAS. Files written in this language usually get the file extension .nasl. To exploit a zero-day vulnerability, it is possible for an end user of Nessus or OpenVAS to write custom code in NASL which is executed by these vulnerability scanners. In earlier versions of Nessus, a binary called nasl or nasl.exe was provided that could interpret NASL code to perform vulnerability scans. In later versions of Nessus, this should be done via an API that is provided by the software. To execute a NASL plugin 'myzeroday.nasl' on Windows, a command such as the following could be invoked: nasl.exe -t <target> myzeroday.nasl. An equivalent command on Linux or UNIX could look like this: ./nasl -t <target> myzeroday.nasl. If the plugin, in this example myzeroday.nasl, is placed in the same directory where the other NASL plugins are located, it can also be included in standard scans by Nessus or OpenVAS, via the web GUI or an API. Many of the specifications of the language are similar to those of the programming language C, the scripting language Perl, and other languages. Control flow constructs such as the for loop and the if and if-else statements are part of the language, and comments are preceded by a hash. An example of "Hello World" in NASL is: display("Hello World\n");. In the release notes of Nessus 6.10.0 of 1/31/2017, a new NASL compiler for faster plugins was mentioned. References Sources Nessus Network Auditing, 2nd Edition by Russ Rogers, Publisher: Syngress, Release Date: October 2011, Chapter 11 NASL Using the 'nasl' Nessus Command Line Tool NASL API Documentation Basic Structure of NASL Scripts Adding third party nasl plugins to OpenVAS by Alexander V. Leonov Information Security Automation FreeBSD Manual Pages
https://en.wikipedia.org/wiki/VDM-1
The Processor Technology VDM-1, for Video Display Module, was the first video card for S-100 bus computers. Created in 1975, it allows an S-100 machine to produce its own display, and when paired with a keyboard and their 3P+S card, it eliminates the need for a separate video terminal. Using a 7 × 9 dot matrix and ASCII characters, it produces a 64-column by 16-row text display. The VDM-1 is a complex card and was soon replaced by an increasing number of similar products from other companies. An early competitor was the Solid State Music VB-1, which offers an identical display from a much simpler card. Later cards using LSI chips have enough room to include the keyboard controller as well. History TV Typewriter In September 1973, the cover article of Radio-Electronics magazine was Don Lancaster's "Build a TV Typewriter", which allows users to type characters on a keyboard and have them appear on a conventional television. Given this limited functionality, the magazine initially estimated that it would sell about 20 copies of the plans for $20 each. Instead, it was flooded by requests and eventually sent out 10,000 copies. Bob Marsh built a TV Typewriter and showed it to Lee Felsenstein. Felsenstein noted that it had no external memory, so once a full page of text had been typed, the entire page had to be erased to display additional text. He phoned Lancaster and asked him about this design note, and Lancaster replied that he simply hadn't considered using it as the basis of a terminal: "I don't know – people just want to put up characters on their TV screens". Tom Swift Terminal Throughout 1973, Felsenstein had been looking for a low-cost terminal for the Community Memory bulletin board system. He had designed the Pennywhistle modem to address the need for remote access at a price under $100, but the terminal that they hooked it up to still cost $1500. Felsenstein began designing a printed circuit board based on the video circuitry of the TV Typewriter.
https://en.wikipedia.org/wiki/ADCIRC
The ADCIRC model is a high-performance, cross-platform numerical ocean circulation model popular for simulating storm surge, tides, and coastal circulation problems. Originally developed by Drs. Rick Luettich and Joannes Westerink, the model is developed and maintained by a combination of academic, governmental, and corporate partners, including the University of North Carolina at Chapel Hill, the University of Notre Dame, and the US Army Corps of Engineers. The ADCIRC system includes an independent multi-algorithmic wind forecast model and also has advanced coupling capabilities, allowing it to integrate effects from sediment transport, ice, waves, surface runoff, and baroclinicity. Access The model is free, with source code made available by request via the website, allowing users to run the model on any system with a Fortran compiler. A pre-compiled Windows version of the model can also be purchased alongside the SMS software. ADCIRC is coded in Fortran and can be used with native binary, text, or netCDF file formats. Capabilities The model formulation is based on the shallow water equations, solving the continuity equation (represented in the form of the Generalized Wave Continuity Equation) and the momentum equations (with advective, Coriolis, eddy viscosity, and surface stress terms included). ADCIRC utilizes the finite element method in either three-dimensional or two-dimensional depth-integrated form on a triangular unstructured grid with Cartesian or spherical coordinates. It can run in either barotropic or baroclinic mode, allowing inclusion of changes in water density and properties such as salinity and temperature. ADCIRC can be run either in serial mode (e.g. on a personal computer) or in parallel on supercomputers via MPI. The model has been optimized to be highly parallelized, in order to facilitate rapid computation of large, complex problems. ADCIRC is able to apply several different bottom friction formulations, including one based on Manning's n.
https://en.wikipedia.org/wiki/Suyat
Suyat is the modern collective name of the indigenous scripts of various ethnolinguistic groups in the Philippines, used prior to Spanish colonization in the 16th century up to the independence era in the 21st century. The scripts are highly varied; nonetheless, the term was suggested and used by cultural organizations in the Philippines to denote a unified neutral terminology for Philippine indigenous scripts. Ancient Philippine scripts Ancient Philippine scripts are various writing systems that developed and flourished in the Philippines around 300 BC. These scripts are related to other Southeast Asian systems of writing that developed from South Indian Brahmi scripts used in Asoka inscriptions and Pallava Grantha, a type of writing used in palm-leaf books called the Grantha script during the ascendancy of the Pallava dynasty about the 5th century, as well as Arabic scripts that have been used in Southeast Asian countries. Since the 21st century, these scripts have simply been collectively referred to as "suyat" by various Filipino cultural organizations. Historical Philippine Indic scripts Kawi The Kawi script originated in Java and was used across much of Maritime Southeast Asia. It is hypothesized to be an ancestor of Baybayin. The presence of Kawi script in the Philippines is evidenced by the Laguna Copperplate Inscription, the earliest known written document found in the Philippines. It is a legal document with the inscribed date of Shaka era 822, corresponding to April 21, 900 CE. It was written in the Kawi script in a variety of Old Malay containing numerous loanwords from Sanskrit and a few non-Malay vocabulary elements whose origin is ambiguous between Kawi and Old Tagalog. A second example of Kawi script can be seen on the Butuan Ivory Seal, found in the 1970s and dated between the 9th and 12th centuries. It is an ancient seal made of ivory that was found at an archaeological site.
https://en.wikipedia.org/wiki/Whiskey%20Lake
Whiskey Lake is Intel's codename for a family of third-generation 14 nm Skylake-based low-power mobile processors. Intel announced Whiskey Lake on August 28, 2018. Changes 14++ nm process, same as Coffee Lake Increased turbo clocks (300–600 MHz) 14 nm PCH Native USB 3.1 Gen 2 support (10 Gbit/s) Integrated Wi-Fi 802.11ac 160 MHz (Wi-Fi 5) and Bluetooth 5.0 Intel Optane Memory support List of Whiskey Lake CPUs Mobile processors The TDP for these CPUs is 15 W, but is configurable. The Core i5-8365U and i7-8665U support Intel vPro technology. Pentium Gold and Celeron CPUs lack AVX2 support. References Intel microarchitectures Intel x86 microprocessors X86 microarchitectures
https://en.wikipedia.org/wiki/WireGuard
WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks (VPNs), and was designed with the goals of ease of use, high-speed performance, and low attack surface. It aims for better performance and more power than IPsec and OpenVPN, two common tunneling protocols. The WireGuard protocol passes traffic over UDP. In March 2020, the Linux version of the software reached a stable production release and was incorporated into the Linux 5.6 kernel, and backported to earlier Linux kernels in some Linux distributions. The Linux kernel components are licensed under the GNU General Public License (GPL) version 2; other implementations are under GPLv2 or other free/open-source licenses. The name WireGuard is a registered trademark of Jason A. Donenfeld. Protocol WireGuard uses the following: Curve25519 for key exchange ChaCha20 for symmetric encryption Poly1305 for message authentication codes SipHash24 for hashtable keys BLAKE2s for the cryptographic hash function HKDF for the key derivation function UDP-based only In May 2019, researchers from INRIA published a machine-checked proof of the WireGuard protocol, produced using the CryptoVerif proof assistant. Optional pre-shared symmetric key mode WireGuard supports a pre-shared symmetric key mode, which provides an additional layer of symmetric encryption to mitigate future advances in quantum computing. This addresses the risk that traffic may be stored until quantum computers are capable of breaking Curve25519, at which point the traffic could be decrypted. Pre-shared keys are "usually troublesome from a key management perspective and might be more likely stolen", but in the shorter term, if the symmetric key is compromised, the Curve25519 keys still provide more than sufficient protection. Networking WireGuard uses only UDP, due to the potential disadvantages of TCP-over-TCP. Tunneling TCP over a TCP-based connection is known as "TCP-over-TCP", and doing so can induce a dramatic loss of performance (known as the TCP meltdown problem).
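An illustration only: WireGuard implements its own Noise-based handshake, so this sketch merely demonstrates one of the primitives listed above, a Curve25519 (X25519) key exchange, using the Python `cryptography` package:

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each side combines its private key with the peer's public key
shared_a = alice_priv.exchange(bob_priv.public_key())
shared_b = bob_priv.exchange(alice_priv.public_key())
assert shared_a == shared_b  # both sides derive the same shared secret
```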
https://en.wikipedia.org/wiki/Application%20strings%20manager
An application strings manager is a software tool primarily designed to optimize the download and storage of strings files used and produced in software development. It centralizes the management of all the product strings generated and used by an organization, to overcome the complexity arising from the diversity of string types and their position in the overall content workflow. Uses An application strings manager is a kind of software repository for text files, strings, and their corresponding keys. It can be used to store strings files produced by an organization itself, such as product content strings and UI content strings, or for third-party content which must be treated differently for both technical and workflow reasons. Uses in software development To manage the source files used in software development, organizations typically use revision control. The many source files used in software development are eventually built into the product strings (also known as "strings files") which constitute the components of a software product. Consequently, a software product may comprise hundreds and even thousands of individual product strings which must be managed in order to efficiently maintain a coherent and functional software product. This function of managing the product strings is done by an application strings manager. An application strings manager can be thought of as being to strings what revision control is to source files. Strings managers Some factors and features that may be offered by a strings manager include: a files manager to store files locally or on network storage; workflow; high availability, through having a redundant set of repository managers work against the same database or file storage; and user restrictions, native to the strings manager or integrated with other organizational systems such as LDAP or single sign-on servers. Examples of strings managers Twine cdocs Phrase String File Formats See also Software repository Software
https://en.wikipedia.org/wiki/Jouanolou%27s%20trick
In algebraic geometry, Jouanolou's trick is a theorem that asserts, for an algebraic variety X, the existence of a surjection with affine fibers from an affine variety W to X. The variety W is therefore homotopy-equivalent to X, but it has the technically advantageous property of being affine. Jouanolou's original statement of the theorem required that X be quasi-projective over an affine scheme, but this has since been considerably weakened. Jouanolou's construction Jouanolou's original statement was: If X is a scheme quasi-projective over an affine scheme, then there exists a vector bundle E over X and an affine E-torsor W. By the definition of a torsor, W comes with a surjective map to X and is Zariski-locally on X an affine space bundle. Jouanolou's proof used an explicit construction. Let $S$ be an affine scheme and $X = \mathbf{P}^r_S$. Interpret the affine space $\mathbf{A}^{(r+1)^2}_S$ as the space of $(r + 1) \times (r + 1)$ matrices over $S$. Within this affine space, there is a subvariety $W$ consisting of idempotent matrices of rank one. The image of such a matrix is therefore a point in $X$, and the map that sends a matrix to the point corresponding to its image is the map claimed in the statement of the theorem. To show that this map has the desired properties, Jouanolou notes that there is a short exact sequence of vector bundles $0 \to \mathcal{O}_X(-1) \to \mathcal{O}_X^{r+1} \to Q \to 0$, where the first map is defined by multiplication by a basis of the sections of $\mathcal{O}_X(1)$ and the second map is the cokernel. Jouanolou then asserts that $W$ is a torsor for $\mathcal{Hom}(Q, \mathcal{O}_X(-1))$. Jouanolou deduces the theorem in general by reducing to the above case. If $X$ is projective over an affine scheme $S$, then it admits a closed immersion into some projective space $\mathbf{P}^r_S$. Pulling back the variety $W$ constructed above for $\mathbf{P}^r_S$ along this immersion yields the desired variety $W$ for $X$. Finally, if $X$ is quasi-projective, then it may be realized as an open subscheme of a projective $S$-scheme $\bar{X}$. Blow up the complement of $X$ in $\bar{X}$ to get $\tilde{X}$, and let $i \colon X \to \tilde{X}$ denote the inclusion morphism. The complement of $X$ in $\tilde{X}$ is a Cartier divisor.
https://en.wikipedia.org/wiki/Culcheth%20Laboratories
Culcheth Laboratories was a British metallurgical and nuclear research institute at Culcheth (then in south Lancashire and now in the borough of Warrington, Cheshire) that researched the structural design of nuclear reactors and reactor pressure vessels. History The Reactor Materials Laboratory was established at Culcheth in 1950. The UKAEA's Safety and Reliability Directorate (SRD) stayed at Culcheth until 1995. Function It carried out work on reactors for the British civil and military (submarine fleet) nuclear energy programmes, investigating metallurgy. In the first ten years, it carried out research on materials for fast breeder reactors; it was the first time that niobium had been part of a fast breeder reactor. The site investigated fracture mechanics, nuclear reactor physics and hydraulics. Work on irradiation of metals was also carried out with the School of Materials, University of Manchester and the Department of Materials Science and Metallurgy, University of Cambridge. Structure Culcheth is just over one mile north of junction 11 of the M62 motorway on the A574. It was administered by the Research and Development Branch of the United Kingdom Atomic Energy Authority (UKAEA). See also List of nuclear reactors References External links Local history 1950 establishments in England 1995 disestablishments in England Chemical research institutes Former nuclear research institutes History of Warrington Materials science institutes Metallurgical industry of the United Kingdom Metallurgical organizations Nuclear research institutes in the United Kingdom Research institutes established in 1950 Research institutes in Cheshire
https://en.wikipedia.org/wiki/Rhythm%20of%20Structure
Rhythm of Structure is a multimedia interdisciplinary project founded in 2003. It features a series of exhibitions, performances, and academic projects that explore the interconnecting structures and processes of mathematics, art, and language, as a way to advance a movement of mathematical expression across the arts and across creative collaborative communities, celebrating the rhythm and patterns of both ideas of the mind and the physical reality of nature. Introduction Rhythm of Structure, as an expanding series of art exhibitions, performances, videos/films and publications created and curated by multimedia mathematical artist and writer John Sims, explores and celebrates the intersecting structures of mathematics, art, community, and nature. Sims also created Recoloration Proclamation, featuring the installation The Proper Way to Hang a Confederate Flag (2004). In his catalog essay from the Rhythm of Structure: Mathematics, Art and Poetic exhibition, Sims sets the curatorial theme, writing: "Mathematics, as a parameter of human consciousness, is an indispensable conceptual technology, essential in seeing beyond the retinal and knowing beyond the intuitive. The language and process of mathematics, as elements of, foundation for art, inform an analytic expressive condition that inspires a visual reckoning for a convergence: from the illustrative to the metaphysical to the poetic. And in the dialectic of visual art call and text performative response, there is an inter-dimensional conversation where the twisting structures of language, vision and human ways give birth to the spiritual lattice of a social geometry, a community constructivism -- a place of connections, where emotional calculations meet spirited abstraction." The project first premiered at the Fire Patrol No. 5 Gallery in 2003 with the show Rhythm of Structure: MathArt in Harlem. This interdisciplinary project has featured numerous exhibitions around the country, collaborating with many notable artists and writers.
https://en.wikipedia.org/wiki/Lifelong%20Planning%20A%2A
LPA* or Lifelong Planning A* is an incremental heuristic search algorithm based on A*. It was first described by Sven Koenig and Maxim Likhachev in 2001. Description LPA* is an incremental version of A*, which can adapt to changes in the graph without recalculating the entire graph, by updating the g-values (distance from start) from the previous search during the current search to correct them when necessary. Like A*, LPA* uses a heuristic, which is a lower bound for the cost of the path from a given node to the goal. A heuristic is admissible if it is guaranteed to be non-negative (zero being admissible) and never greater than the cost of the cheapest path to the goal. Predecessors and successors With the exception of the start and goal node, each node n has predecessors and successors: Any node from which an edge leads towards n is a predecessor of n. Any node to which an edge leads from n is a successor of n. In the following description, these two terms refer only to the immediate predecessors and successors, not to predecessors of predecessors or successors of successors. Start distance estimates LPA* maintains two estimates of the start distance for each node n: g(n), the previously calculated g-value (start distance) as in A*, and rhs(n), a lookahead value based on the g-values of the node's predecessors (the minimum of all g(n′) + d(n′, n), where n′ is a predecessor of n and d(n′, n) is the cost of the edge connecting n′ and n). For the start node, rhs(start) = 0 always holds true. If g(n) equals rhs(n), then n is called locally consistent. If all nodes are locally consistent, then a shortest path can be determined as with A*. However, when edge costs change, local consistency needs to be re-established only for those nodes which are relevant for the route. Priority queue When a node becomes locally inconsistent (because the cost of one of its predecessors or the edge linking it to a predecessor has changed), it is placed in a priority queue for re-evaluation. LPA* uses a two-dimensional key, k(n) = [min(g(n), rhs(n)) + h(n); min(g(n), rhs(n))]: Ent
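A minimal Python sketch of the two core operations described above: computing a node's rhs value from its predecessors, and the two-dimensional priority key. The graph representation, function names and heapq-based queue are illustrative assumptions, not the authors' reference implementation (nodes are assumed to be comparable values such as strings, so tuple ordering in the heap is well defined).

```python
import heapq

INF = float("inf")

def calculate_rhs(node, g, pred, cost, start):
    # rhs(start) is always 0; otherwise take the minimum over predecessors.
    if node == start:
        return 0
    return min((g.get(p, INF) + cost(p, node) for p in pred(node)), default=INF)

def calculate_key(node, g, rhs, h):
    # Two-dimensional key: ties on the first component are broken
    # by the smaller start-distance estimate.
    k2 = min(g.get(node, INF), rhs.get(node, INF))
    return (k2 + h(node), k2)

def update_node(queue, node, g, rhs, h):
    # A locally inconsistent node (g != rhs) is (re-)queued for expansion.
    if g.get(node, INF) != rhs.get(node, INF):
        heapq.heappush(queue, (calculate_key(node, g, rhs, h), node))
```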
https://en.wikipedia.org/wiki/Autonomous%20peripheral%20operation
In computing, autonomous peripheral operation is a hardware feature found in some microcontroller architectures to off-load certain tasks into embedded autonomous peripherals in order to minimize latencies and improve throughput in hard real-time applications as well as to save energy in ultra-low-power designs. Overview Forms of autonomous peripherals in microcontrollers were first introduced in the 1990s. Allowing embedded peripherals to work independently of the CPU and even interact with each other in certain pre-configurable ways off-loads event-driven communication into the peripherals to help improve the real-time performance due to lower latency and allows for potentially higher data throughput due to the added parallelism. Since 2009, the scheme has been improved in newer implementations to continue functioning in sleep modes as well, thereby allowing the CPU (and other unaffected peripheral blocks) to remain dormant for longer periods of time in order to save energy. This is partially driven by the emerging IoT market. Conceptually, autonomous peripheral operation can be seen as a generalization of and mixture between direct memory access (DMA) and hardware interrupts. Peripherals that issue event signals are called event generators or producers whereas target peripherals are called event users or consumers. In some implementations, peripherals can be configured to pre-process the incoming data and perform various peripheral-specific functions like comparing, windowing, filtering or averaging in hardware without having to pass the data through the CPU for processing. Implementations Known implementations include: Peripheral Event Controller (PEC) in Siemens/Infineon C166 and C167 16-bit microcontrollers since 1990 Intelligent autonomous peripherals (CCU6) in Infineon XC800 series of 8051-compatible 8-bit microcontrollers since 2005 Event System (EVSYS) in Atmel AVR XMEGA 8-bit microcontrollers since 2008 Peripheral Event System (PES) with SleepWalk
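The producer/consumer event routing described above can be modelled conceptually in a few lines of Python. This is purely a software sketch of the idea (all names are invented for illustration); in real hardware such as an event system, the routing is done by dedicated logic configured once through registers, so no CPU cycles are spent per event.

```python
class EventSystem:
    """Conceptual model: peripherals signal each other through
    pre-configured channels, without per-event CPU involvement."""
    def __init__(self):
        self.channels = {}  # channel name -> list of consumer callbacks

    def connect(self, channel, consumer):
        # Configuration step: done once, analogous to writing a routing register.
        self.channels.setdefault(channel, []).append(consumer)

    def emit(self, channel, payload=None):
        # In hardware this dispatch happens in wired logic while the CPU sleeps.
        for consumer in self.channels.get(channel, []):
            consumer(payload)

# Example: a timer overflow event directly triggers an ADC conversion.
evsys = EventSystem()
evsys.connect("timer0_overflow", lambda _: print("ADC: start conversion"))
evsys.emit("timer0_overflow")
```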
https://en.wikipedia.org/wiki/L%C3%A9vy%E2%80%93Steinitz%20theorem
In mathematics, the Lévy–Steinitz theorem identifies the set of values to which rearrangements of an infinite series of vectors in R^n can converge. It was proved by Paul Lévy in his first published paper when he was 19 years old. In 1913 Ernst Steinitz filled in a gap in Lévy's proof and also proved the result by a different method. In an expository article, Peter Rosenthal stated the theorem in the following way. The set of all sums of rearrangements of a given series of vectors in a finite-dimensional real Euclidean space is either the empty set or a translate of a subspace (i.e., a set of the form v + M, where v is a given vector and M is a linear subspace). See also Riemann series theorem References Mathematical series Permutations Summability theory Theorems in analysis
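In LaTeX notation, Rosenthal's formulation above can be typeset compactly (this is only a restatement of the sentence quoted in the article, not an addition to it):

```latex
\left\{\, \sum_{k=1}^{\infty} v_{\sigma(k)} \;:\; \sigma \text{ is a permutation of } \mathbb{N}
\text{ for which the sum converges} \right\}
\;=\; \varnothing \quad\text{or}\quad v + M,
```

where v is a vector in R^n and M is a linear subspace of R^n.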
https://en.wikipedia.org/wiki/Frontiers%20in%20Heat%20and%20Mass%20Transfer
Frontiers in Heat and Mass Transfer is a peer-reviewed open access scientific journal covering heat transfer and mass transfer. It is published by Tech Science Press and the editors-in-chief are Amir Faghri (University of Connecticut) and Yuwen Zhang (University of Missouri). In 2017, Frontiers in Heat Pipes was merged into this journal. Abstracting and indexing The journal is abstracted and indexed in Emerging Sources Citation Index, Ei Compendex, and Scopus. References External links Engineering journals English-language journals Academic journals established in 2010 Continuous journals
https://en.wikipedia.org/wiki/ZFS
ZFS (previously: Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris – including ZFS – were published under an open source license as OpenSolaris for around 5 years from 2005 before being placed under a closed source license when Oracle Corporation acquired Sun in 2009–2010. Between 2005 and 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project. In 2013, OpenZFS was founded to coordinate the development of open source ZFS. OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems. Overview The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices (such as hard drives and SD cards), including their organization into logical block devices as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver); and the management of data and files that are stored on these logical block devices (a file system or other data storage). Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The Windows user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from/writing to the cache drive or rebuilding the RAID array if a disk fails). The management of the individual devices and their presentation as a single device is distinct from the management of the
https://en.wikipedia.org/wiki/Aaron%20Traywick
Aaron James Traywick (December 19, 1989 – April 29, 2018) was an American businessman and life extension activist in the transhumanism and biohacking communities. He sought to develop gene therapies to make inexpensive treatments available for incurable conditions such as AIDS and the herpes simplex virus. His lack of any medical training and his unconventional methods—such as broadcasting an associate injecting himself with an "untested experimental gene therapy", then later doing the same to himself in an onstage public demonstration—drew widespread criticism. Education Aaron Traywick was a resident of Elmore, Alabama, and a graduate of Stanhope Elmore High School. He graduated from the University of Montevallo in Alabama with a degree in interdisciplinary studies; he held no background in the sciences or formal training in clinical medicine. Career From January 2016 to his termination in July 2016, he worked to advocate investment in radical approaches towards anti-aging at the Global Healthspan Policy Institute started by his cousin. In 2017 he founded Ascendance Biomedical in Washington, D.C., with the mission to "make cutting edge biomedical technologies available for everyone". Traywick's self-administered, do-it-yourself gene therapies received substantial media attention. In October 2017, Ascendance Biomedical shared a live broadcast of Traywick's associate Tristan Roberts injecting himself with an "untested experimental gene therapy" for HIV over Facebook's live-streaming service. During a presentation at the February 2018 BodyHacking Con in Austin, Texas, Traywick injected himself with something he referred to at the time as a "research compound". In a later conversation with BBC reporters, he spoke of it as a "treatment" for the herpes virus, a term with a specific meaning subject to FDA regulations. Shortly after the event, the FDA issued a statement on the inherent dangers of this approach to untested gene editing, without mentioning the co
https://en.wikipedia.org/wiki/Julia%20Robinson%20Mathematics%20Festival
The Julia Robinson Mathematics Festival (JRMF) is an educational organization that sponsors locally organized mathematics festivals and online webinars targeting K–12 students. The events are designed to introduce students to mathematics in a collaborative and non-competitive forum. History In the 1970s, Saint Mary's College of California produced a mathematics contest that was popular with secondary schools throughout the San Francisco Bay Area. In 2005, Nancy Blachman attended an education forum sponsored by the Mathematical Sciences Research Institute (MSRI) and remembered how the Saint Mary's contest had inspired her as a student. Unfortunately, the contest no longer existed. Seeking to possibly resurrect the contest, Blachman and MSRI development director Jim Sotiros reached out to colleagues in the educational community. One response was from local high school math teacher Joshua Zucker, who also remembered the contest and even had saved a book of problems from it. Sotiros suggested that Blachman and her husband David desJardins fund MSRI in order to hire Zucker to recreate a program in the style of the Saint Mary’s Math Contest. Blachman and Zucker became co-founders and organized their first event in 2007. They called it a festival rather than a contest because they wanted to emphasize collaboration, creativity and fun rather than competition. They named the festival after Julia Robinson, a mathematician renowned for her contributions to decision problems. In fact, her work on Hilbert's 10th problem played a crucial role in its ultimate resolution. Blachman felt that such a woman would provide a role model for young girls and would show that one need not be male to be a great mathematician. When they sent out invitations to local schools, the response was so overwhelming that, in order to have enough space, they prevailed upon Google in nearby Mountain View to host the first festival. A second festival was hosted in 2008 by Pixar Animation Studios
https://en.wikipedia.org/wiki/Benny%20Moldovanu
Benny Moldovanu (born April 11, 1962) is a German economist who currently holds the Chair of Economic Theory II at the University of Bonn. His research focuses on applied game theory, auction theory, mechanism design, contests and matching theory, and voting theory. In 2004, Moldovanu was awarded the Gossen Prize for his contributions to auction theory and mechanism design. Biography Benny Moldovanu earned a BSc and MSc in mathematics from the Hebrew University of Jerusalem in 1986 and 1989, respectively, the latter under the supervision of Bezalel Peleg. He then obtained in 1991 a PhD in economics from the University of Bonn, with future Nobel Memorial Prize winner Reinhard Selten as advisor and Avner Shaked as co-advisor, with thesis "Game theory, economics, social and behavioral sciences". He went on to earn his habilitation from the same university in 1995. Having worked as assistant professor of economics at the University of Bonn after his PhD (1991–1995), he then became full professor at the University of Mannheim (1995–2002) before returning to the University of Bonn in 2002, where he has worked ever since. At Bonn, he has been the Co-Director and later Academic Director of the Bonn Graduate School of Economics (2006–2013) as well as Co-Director of the Hausdorff Center for Mathematics (2006–2013), where he today leads the research area on mechanism design and game theory. Moreover, at Bonn, Moldovanu is currently Director of the Institute of Microeconomics (since 2012) as well as of the Reinhard Selten Institute for Research in Economics (since 2017). Throughout his professional career, Moldovanu has held visiting appointments at the University of Michigan, Ann Arbor, Northwestern University, University College London, Yale University, Tel Aviv University, and the Hebrew University of Jerusalem. In terms of professional activities, he has been a member of the Councils of the European Economic Association and Game Theory Society, is a research fellow at the
https://en.wikipedia.org/wiki/Ghana%20Internet%20Policy
Ghana was one of the first countries to be connected to the internet in Africa. History Ghana was among the first countries in Sub-Saharan Africa to gain internet access. Internet services began in Ghana in 1995. This was made possible through collaborations between Network Computer Systems (NCS), Pipex International, the Ministry of Transport and Communication of Ghana, Ghana Telecom, and British Telecom. NCS had registered the ghana.com domain in 1993. Wireless internet Ghana has over 15 internet service providers offering different forms of internet services. Most of these ISPs provide wireless internet, including all the telcos in Ghana. Internet speed According to bandwidthplace, Ghana's internet speed hovers around an average of 1.46 Mbit/s uploads across most of the internet enabled devices tested. Internet accessibility There are over 7.9 million internet users in Ghana who mostly access the internet from mobile devices. Internet service providers There are a number of ISPs in Ghana; apart from the major telecommunication companies like MTN Ghana, Vodafone Ghana, Airtel Africa, Millicom and Globacom, there are other companies like Africaonline, ADTech IT and Blue Cloud Network which also provide internet services. There are also Busyinternet and Surfline, which offer wireless internet services through their internet enabled devices. Internet closure during elections Net neutrality As it stands now, Ghana does not have any provisions on net neutrality. This has raised concerns and brought together netizens and tech firms to protest for this provision in Ghana. Ghana has however faced a net neutrality crisis in the past. This was as a result of the NCA wanting to ban sites like Skype, Whatsapp, Viber and other internet based instant messaging platforms with the excuse that they were causing losses to telcos in Ghana. This campaign was led by telecommunication giant MTN which complained of losses due to people's continuous use of these platforms which r
https://en.wikipedia.org/wiki/Pdr1p
Pdr1p (Pleiotropic Drug Resistance 1p) is a transcription factor found in yeast and is a key regulator of genes involved in general drug response. It induces the expression of ATP-binding cassette transporters, which can export toxic substances out of the cell, allowing cells to survive under general toxic chemicals. It binds to DNA sequences that contain certain motifs called pleiotropic drug response elements (PDRE). Pdr1p is encoded by a gene called PDR1 (also known as YGL013C) on chromosome VII. Transcriptional role Pdr1p is a main regulator of PDR genes and is known to target about 50 genes. Pdr1p binds to the sequence 5'-TCCGYGGR-3' of the PDRE, which is located within the promoter sequences of its target genes. 218 genes are reported to possess a PDRE. Pdr1p is observed to bind PDRE sites on DNA at a basal level and also after stimulation with toxins. This shows that the Pdr1p-DNA interaction is not dependent on toxic stimulation. This also suggests an involvement of activator(s) or co-activator(s) that induce PDR genes along with Pdr1p. Pdr1p has a functional homolog called Pdr3p, encoded by a gene called PDR3. Pdr3p is known to be regulated by Pdr3p and Pdr1p. Pdr1p can form a homodimer with itself or a heterodimer with Pdr3p. Loss of function studies of both PDR1 and PDR3 revealed that the Pdr1p mutant shows lower tolerance (grows less in culture) against organic toxins such as cycloheximide and oligomycin. This confirms that Pdr1p confers a stronger drug response phenotype than Pdr3p. However, Pdr3p is crucial for PDR responses, since cells containing loss of function mutations in both the PDR1 and PDR3 genes were not able to grow at all in the presence of those two toxins. Both Pdr1p and Pdr3p regulate Pdr5p, which is an ATP-binding cassette transporter. A single amino acid substitution mutation, which is a gain of function mutation of Pdr1p denoted as pdr1-3 (F815S, substitution of the phenylalanine at position 815 of the polypeptide by serine), leads to an over-expres
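Since the PDRE consensus 5'-TCCGYGGR-3' uses the IUPAC degenerate codes Y (C or T) and R (A or G), scanning a promoter for candidate sites reduces to a simple regular-expression search. A short Python sketch; the promoter fragment in the example is made up purely for illustration.

```python
import re

# IUPAC degenerate positions in the consensus: Y = C/T, R = A/G
PDRE = re.compile(r"TCCG[CT]GG[AG]")

def find_pdre_sites(promoter: str):
    """Return (position, site) pairs for candidate PDREs on the given strand."""
    return [(m.start(), m.group()) for m in PDRE.finditer(promoter.upper())]

# Hypothetical promoter fragment containing one consensus site.
print(find_pdre_sites("aaatccgtggattt"))  # [(3, 'TCCGTGGA')]
```

A full scan would also search the reverse complement of the sequence, since transcription factor binding sites can occur on either strand.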
https://en.wikipedia.org/wiki/Alv%C3%A9ole%20Lab
Alvéole is a French company based in Paris and founded in 2010 by Quattrocento, a business accelerator company in the life science field, in collaboration with researchers from the French National Center for Scientific Research with expertise in bioengineering and cell imaging. Alvéole specializes in the development of devices for controlling the cell microenvironment in vitro. Its first product is Primo, a contactless and maskless photopatterning device allowing researchers to control the topography (via microfabrication) and biochemistry (via micropatterning) of the cell microenvironment. Products Alvéole's first product is Primo, a photopatterning device which can be docked on standard inverted microscopes. The Primo photopatterning technique is based on LIMAP technology and combines a maskless and contactless photolithography system controlled by a dedicated software (Leonardo) and a specific photo-initiator. This system modulates UV-light illumination through an array of micromirrors (digital micromirror device). The UV light is then projected through the objective of the microscope onto the substrate in order to either perform microfabrication or protein micropatterning. - Microfabrication: The modulated UV light is projected on a photosensitive resist. The cured photoresist can then be used as a mold to deposit PDMS and generate microstructured PDMS chips. - Protein micropatterning: The modulated UV light is projected onto a standard cell culture substrate previously coated with an anti-fouling polymer and reacts with the photo-initiator to locally degrade this coating. Adhesion proteins can then be adsorbed on the illuminated area only, allowing for the creation of protein micropatterns on which cells can adhere. Applications By micropatterning adhesion proteins with a sub-cellular resolution, Primo makes it possible to control cell adhesion and isolate single cells under highly reproducible conditions. This allows cell biology researchers from various fields – such as mech
https://en.wikipedia.org/wiki/National%20Digital%20Repository%20for%20Museums%20of%20India
National Digital Repository for Museums of India is a C-DAC-led project to create seamless access to collections and artifacts organized according to themes, regardless of the physical and geographical locations of the museums that house them. The first public version, developed on DSpace, was released in 2002. The initial draft of the Open Archival Information System (OAIS) was also released in 2003. Museums need to be transformed for greater relevance and application to modern society. Therefore, focusing on the needs of Indian museums, Dr. Dinesh Katre, Senior Director at C-DAC, initiated the development of an e-curator software named JATAN (जतन): Virtual Museum Builder in 2001, which was developed and released in 2004. Subsequently, the JATAN (जतन) software was deployed in the Chhatrapati Shivaji Maharaj Museum, Mumbai; the Raja Dinkar Kelkar Museum, Pune; and The Baroda Museum & Picture Gallery in Vadodara. Although the response from museums was lukewarm, C-DAC continued developing the JATAN (जतन) software into a comprehensive digital collection management system for museums. As part of this research, early visions of using crowdsourcing for metadata enrichment of museum artefacts and of a unified virtual catalogue for Indian museums were presented in 2005. During 2013, the Ministry of Culture started the Vivekananda Memorial Museum Excellence Program in collaboration with the Art Institute of Chicago, USA. As part of this program, various existing software solutions available in India were evaluated and finally JATAN: Virtual Museum Builder was selected for standardized implementation across national museums. Certification Program for Museum Curators The team of the Human-Centred Design & Computing Group at C-DAC, Pune organized a JATAN certification training program in order to motivate and prepare museum curators for taking on the challenging task of digitization. Several batches of this two-day training program for museum curators were conducted. The train
https://en.wikipedia.org/wiki/Spatial%20transcriptomics
Spatial transcriptomics is a method for assigning cell types (identified by the mRNA readouts) to their locations in histological sections and can also be used to determine the subcellular localization of mRNA molecules. First described in 2016 by Ståhl et al., it has since undergone a variety of improvements and modifications. The Ståhl method involves positioning individual tissue samples on arrays of spatially barcoded reverse transcription primers that capture polyadenylated mRNA via their oligo(dT) tails. Besides the capture tail and the spatial barcode, which indicates the x and y position on the arrayed slide, the probe contains a cleavage site, an amplification and sequencing handle, and a unique molecular identifier. Commonly, histological samples are cut using a cryotome, then fixed, stained, and put on the microarrays. After that, the tissue undergoes enzymatic permeabilization, so that molecules can diffuse down to the slide, with further mRNA release and binding to the probes. Reverse transcription is then carried out in situ. As a result, spatially marked complementary DNA is synthesized, providing information about gene expression in the exact location of the sample. Thus, the described protocol combines paralleled sequencing and staining of the same sample. It is important to mention that the first generation of the arrayed slides comprised about 1,000 spots of 100 μm diameter, limiting resolution to ~10-40 cells per spot. In the broader meaning of the term, spatial transcriptomics includes methods that can be divided into five principal approaches to resolving the spatial distribution of transcripts. They are microdissection techniques, fluorescent in situ hybridization methods, in situ sequencing, in situ capture protocols and in silico approaches. Application Defining the spatial distribution of mRNA molecules allows the experimentalist to uncover cellular heterogeneity in tissues, tumours and immune cells, as well as determine the subcellular distribution of transcripts in various conditions
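Computationally, the readout of an in situ capture experiment reduces to grouping sequenced reads by their spatial barcode and counting unique molecular identifiers (UMIs) per gene and spot. A schematic Python sketch of that final counting step (the read fields and names are invented for illustration; real pipelines additionally handle alignment, barcode error correction and quality filtering):

```python
from collections import defaultdict

def count_spatial_expression(reads, barcode_to_xy):
    """reads: iterable of (spatial_barcode, umi, gene) tuples.
    barcode_to_xy: maps each spatial barcode to its (x, y) array position.
    Returns {(x, y): {gene: molecule_count}}."""
    seen = set()                      # deduplicate PCR copies by UMI
    counts = defaultdict(lambda: defaultdict(int))
    for barcode, umi, gene in reads:
        if barcode not in barcode_to_xy:
            continue                  # unknown barcode: discard the read
        key = (barcode, umi, gene)
        if key in seen:
            continue
        seen.add(key)
        counts[barcode_to_xy[barcode]][gene] += 1
    return counts

reads = [("AACG", "U1", "Actb"), ("AACG", "U1", "Actb"), ("AACG", "U2", "Actb")]
counts = count_spatial_expression(reads, {"AACG": (0, 0)})
print(counts[(0, 0)]["Actb"])  # 2 distinct molecules; the duplicate read is a PCR copy
```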
https://en.wikipedia.org/wiki/YouTuber
A YouTuber is a type of social media influencer who uploads or creates videos on the online video-sharing website YouTube, typically posting to their personal YouTube channel. The term was first used in the English language in 2006, and subsequently appeared in the 2006 Time Person of the Year issue. Influence Influential YouTubers are frequently described as microcelebrities. Since YouTube is widely conceived as a bottom-up social media video platform, microcelebrities do not appear to be involved with the established and commercial system of celebrity culture; rather, they appear self-governed and independent. This appearance, in turn, leads to YouTubers being seen as more relatable and authentic, also fostered by the direct connection between artist and viewer using the medium of YouTube. In 2014, the University of Southern California surveyed 13- to 18-year-olds in the United States on whether 10 YouTube celebrities or 10 traditional celebrities were more influential; YouTube personalities took the first five spots of the ranking, with the YouTube duo Smosh ranking as most influential. The survey was repeated in 2015, and found six YouTubers on the first ranks, with KSI ranked as most influential. Several YouTubers and their influence were subjects for scientific studies, such as Zoella and PewDiePie. Numerous studies in the late 2010s found that YouTuber was the most desired career by children. YouTubers' influence has also extended beyond the platform. Some have ventured into mainstream forms of media, such as Liza Koshy, who, among other pursuits, hosted the revival of the Nickelodeon show Double Dare and starred in the Netflix dance-comedy film Work It. In 2019, Ryan's Mystery Playdate, a show starring Ryan Kaji, the then-seven-year-old host of the toy review and vlog channel Ryan's World, began airing on the Nick Jr. Channel; later that year, NBC debuted A Little Late with Lilly Singh in its 1:35 am ET time slot. Singh's digital prominence was cited as a r
https://en.wikipedia.org/wiki/Tsetlin%20machine
A Tsetlin machine is an artificial intelligence algorithm based on propositional logic. Background A Tsetlin machine is a form of learning automaton based upon algorithms from reinforcement learning to learn expressions from propositional logic. Ole-Christoffer Granmo gave the method its name after Michael Lvovitch Tsetlin and his Tsetlin automata. The method uses computationally simpler and more efficient primitives compared to more ordinary artificial neural networks. As of April 2018 it has shown promising results on a number of test sets. Types Original Tsetlin machine Convolutional Tsetlin machine Regression Tsetlin machine Relational Tsetlin machine Weighted Tsetlin machine Arbitrarily deterministic Tsetlin machine Parallel asynchronous Tsetlin machine Coalesced multi-output Tsetlin machine Tsetlin machine for contextual bandit problems Tsetlin machine autoencoder Tsetlin machine composites: plug-and-play collaboration between specialized Tsetlin machines Applications Keyword spotting Aspect-based sentiment analysis Word-sense disambiguation Novelty detection Intrusion detection Semantic relation analysis Image analysis Text categorization Fake news detection Game playing Batteryless sensing Recommendation systems Word embedding ECG analysis Edge computing Bayesian network learning Original Tsetlin machine Tsetlin automaton The Tsetlin automaton is the fundamental learning unit of the Tsetlin machine. It tackles the multi-armed bandit problem, learning the optimal action in an environment from penalties and rewards. Computationally, it can be seen as a finite-state machine (FSM) that changes its states based on the inputs. The FSM will generate its outputs based on the current states. A quintuple {Φ, α, β, F(·,·), G(·)} describes a two-action Tsetlin automaton. A Tsetlin automaton has 2N states, here Φ = {φ1, ..., φ2N}. The FSM can be triggered by two input events, β = {βPenalty, βReward}. The rules of state migration of the FSM are stated as follows: on a reward, the automaton moves one state deeper into the half of the state space associated with its current action; on a penalty, it moves one state towards the opposite half. It includes two output actions, α1 and α2: the automaton outputs α1 in states φ1 through φN and α2 in states φN+1 through φ2N. Which
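A minimal Python sketch of the two-action Tsetlin automaton just described, with 2N states numbered 1 through 2N. This is a schematic rendering of the standard formulation, not a full Tsetlin machine (which coordinates many such automata to learn propositional clauses).

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton with 2N states.
    States 1..N select action 1; states N+1..2N select action 2."""
    def __init__(self, n):
        self.n = n
        self.state = random.choice([n, n + 1])  # start at the decision boundary

    def action(self):
        return 1 if self.state <= self.n else 2

    def reward(self):
        # Reinforce: move one state deeper into the current action's half.
        if self.action() == 1:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken: move one state towards the opposite action.
        if self.action() == 1:
            self.state += 1
        else:
            self.state -= 1

# Example environment: action 2 is rewarded with probability 0.9.
ta = TsetlinAutomaton(n=10)
for _ in range(1000):
    if random.random() < (0.9 if ta.action() == 2 else 0.1):
        ta.reward()
    else:
        ta.penalize()
print(ta.action())  # converges to 2 with high probability
```

The depth parameter N trades off learning speed against stability: deeper states make the automaton harder to dislodge from an action once it has accumulated evidence for it.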
https://en.wikipedia.org/wiki/Diccionario%20biogr%C3%A1fico%20espa%C3%B1ol
Diccionario biográfico español is a Spanish biographical dictionary published by the Real Academia de la Historia. History On 23 May 1735 Felipe V approved the constitution of Real Academia de la Historia. The Academy's first Director Agustín de Montiano y Luyando proposed to create a Diccionario histórico-crítico de España. However, resources were limited and the Diccionario biográfico español did not get underway until the end of the twentieth century. Printed edition The Dictionary was edited by Gonzalo Anes (the then Director of the Academy), Jaime Olmedo, and Quintín Aldea Vaquero. It was written over ten years by 5,000 historians. It consists of 50 volumes with 45,000 pages and 40,000 biographies of notable figures in Spanish history, from the 7th century BC to the present. The first twenty-five volumes were published in 2011 with the remaining volumes completed by 2013. Electronic edition The electronic version of the dictionary was launched formally by the King and Queen of Spain in 2018, although some material had been available online previously. Carmen Iglesias, who became director of the Real Academia de la Historia in 2014, was the historian responsible for the electronic version, which differs in some respects from the printed version which preceded it. It is intended that any errors will be corrected continuously. The printed edition contains information about living persons, whereas so far the electronic edition is restricted to deceased persons. The difficulty of writing objectively about living people was one of the criticisms of the Dictionary when it appeared. Another source of controversy was the treatment of Francisco Franco, who initially was not described as a dictator. His entry was revised for the online version. References 2011 non-fiction books Spanish dictionaries Online dictionaries Spanish
https://en.wikipedia.org/wiki/Pole%20%28surveying%29
In surveying, a pole is a bar made of wood or metal and normally held vertical, upon which different instruments can be mounted: a prism, a GPS device, etc. It may be manufactured with a predetermined length (e.g., 2 meters) or may be graduated for different heights or stages. Technology advances have introduced tilt-compensation capability into survey poles, which allows the surveyor to measure points above ground or when the pole is off-vertical. References See also Level staff Surveying instruments
https://en.wikipedia.org/wiki/Intragenomic%20and%20intrauterine%20conflict%20in%20humans
Intragenomic and intrauterine conflicts in humans arise between mothers and their offspring. Parental investment theory states that parents and their offspring will often be in conflict over the optimal amount of investment that the parent should provide. This is because the best interests of the parent do not always match the best interests of the offspring. Maternal-infant conflict is of interest due to the intensity of maternal investment in her offspring. In humans, mothers often invest years of care into their children due to the long developmental period before children become self-sufficient.   Parents and their children are typically engaged in a cooperative venture in which both benefit by the survival and future reproduction of the offspring. However, their interests cannot be identical because their genes are not identical. While both parent and offspring are 100% related to themselves, they share only 50% of their genes with each other, which means both parent and child will at times be in conflict with each other. Intrauterine conflict Maternal-infant conflict begins during pregnancy, where the mother’s body must maintain her own health while also providing for the developing fetus. Broadly, both the fetus and the mother have the same evolutionary interests in the fetus coming to term and resulting in a healthy birth. However, there may also be conflicts between the amount of nutrients the fetus prefers to receive and the amount the mother prefers to give. For example, there may be conflict between the mother and the fetus over who should control fetal growth, wherein the fetus would prefer optimal growth and the mother would prefer to control fetal growth relative to the level of resources she has available. Role of the placenta The placenta plays an important role in this conflict since it is the source of nutrient delivery from mother to fetus, and also receives signals from both mother and fetus. The placenta is believed to play a balancing rol
https://en.wikipedia.org/wiki/Golomb%20graph
In graph theory, the Golomb graph is a polyhedral graph with 10 vertices and 18 edges. It is named after Solomon W. Golomb, who constructed it (with a non-planar embedding) as a unit distance graph that requires four colors in any graph coloring. Thus, like the simpler Moser spindle, it provides a lower bound for the Hadwiger–Nelson problem: coloring the points of the Euclidean plane so that each unit line segment has differently-colored endpoints requires at least four colors. Construction The method of construction of the Golomb graph as a unit distance graph, by drawing an outer regular polygon connected to an inner twisted polygon or star polygon, has also been used for unit distance representations of the Petersen graph and of generalized Petersen graphs. As with the Moser spindle, the coordinates of the unit-distance embedding of the Golomb graph can be represented in the quadratic field . Fractional coloring The fractional chromatic number of the Golomb graph is 10/3. The fact that this number is at least this large follows from the fact that the graph has 10 vertices, at most three of which can be in any independent set. The fact that the number is at most this large follows from the fact that one can find 10 three-vertex independent sets, such that each vertex is in exactly three of these sets. This fractional chromatic number is less than the number 7/2 for the Moser spindle and less than the fractional chromatic number of the unit distance graph of the plane, which is bounded between 3.6190 and 4.3599. References External links Individual graphs Planar graphs
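The two bounds in the fractional coloring argument above can be written out explicitly in LaTeX notation, with α(G) denoting the independence number:

```latex
\chi_f(G) \;\ge\; \frac{|V(G)|}{\alpha(G)} \;=\; \frac{10}{3},
\qquad
\chi_f(G) \;\le\; \frac{10}{3},
```

where the lower bound uses |V(G)| = 10 and α(G) = 3, and the upper bound comes from the ten three-vertex independent sets covering each vertex exactly three times: assigning each set weight 1/3 gives a fractional coloring of total weight 10/3.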
https://en.wikipedia.org/wiki/Kittell%20graph
In the mathematical field of graph theory, the Kittell graph is a planar graph with 23 vertices and 63 edges. Its unique planar embedding has 42 triangular faces. The Kittell graph is named after Irving Kittell, who used it as a counterexample to Alfred Kempe's flawed proof of the four-color theorem. Simpler counterexamples include the Errera graph and Poussin graph (both published earlier than Kittell) and the Fritsch graph and Soifer graph. References Individual graphs Planar graphs
https://en.wikipedia.org/wiki/Emptiness%20problem
In theoretical computer science and formal language theory, a formal language is empty if its set of valid sentences is the empty set. The emptiness problem is the question of determining whether a language is empty given some representation of it, such as a finite-state automaton. For a finite-state automaton, this is a decision problem that can be solved in time linear in the size of the automaton, by checking whether an accepting state is reachable from the start state. However, variants of that question, such as the emptiness problem for non-erasing stack automata, are PSPACE-complete. The emptiness problem is undecidable for context-sensitive grammars, a fact that follows from the undecidability of the halting problem. It is, however, decidable for context-free grammars. References Formal languages Polynomial-time problems PSPACE-complete problems Undecidable problems
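For finite automata the check described above is plain graph reachability: the language is empty if and only if no accepting state is reachable from the start state, and the input alphabet is irrelevant. A small Python sketch over an adjacency-list transition relation (the representation is chosen for illustration):

```python
from collections import deque

def language_is_empty(start, accepting, transitions):
    """transitions: dict mapping each state to the states reachable
    in one step (input symbols do not matter for emptiness)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state in accepting:
            return False          # some word is accepted
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True                   # no accepting state is reachable

# Automaton whose only accepting state (2) is unreachable: empty language.
print(language_is_empty(0, {2}, {0: [1], 1: [0]}))  # True
```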
https://en.wikipedia.org/wiki/Joost-Pieter%20Katoen
Joost-Pieter Katoen (born October 6, 1964) is a Dutch theoretical computer scientist based in Germany. He is distinguished professor in Computer Science and head of the Software Modeling and Verification Group at RWTH Aachen University. Furthermore, he is part-time affiliated with the Formal Methods & Tools group at the University of Twente. Education Katoen received his master's degree with distinction in Computer Science from the University of Twente in 1987. In 1990, he was awarded a Professional Doctorate in Engineering from the Eindhoven University of Technology, and in 1996, he received his Ph.D. in computer science from the University of Twente. Research Katoen's main research interests are formal methods, computer aided verification, in particular model checking, concurrency theory, and semantics, in particular the semantics of probabilistic programming languages. His research is largely tool and application oriented. Together with Christel Baier he wrote and published the book Principles of Model Checking. Career From 1997 to 1999, Katoen was a postdoctoral researcher at the University of Erlangen-Nuremberg. In 1999, he became an associate professor at the University of Twente, where he still holds a part-time position. In 2004, he was appointed a full professor at RWTH Aachen University. In 2013, Katoen became Theodore von Kármán Fellow and Distinguished Professor at RWTH Aachen University. Also in 2013, he was elected a member of the Academia Europaea. In 2017, he received an honorary doctorate from Aalborg University. In 2018, Katoen was awarded the highly remunerated ERC Advanced Grant. In 2020, Katoen became an ACM Fellow and in 2021, he was elected as a member of the Royal Holland Society of Science and Humanities (KHMW). In 2022, he was elected as a member of the North Rhine-Westphalian Academy of Science, Humanities and the Arts. Katoen is a founding member of the IFIP Working Group (WG) 1.8 on Concurrency Theory and a member of the WG 2.2 Formal Descr
https://en.wikipedia.org/wiki/Etcher%20%28software%29
balenaEtcher (commonly referred to and formerly known as Etcher) is a free and open-source utility used for writing image files such as .iso and .img files, as well as zipped folders, onto storage media to create live SD cards and USB flash drives. It is developed by Balena and licensed under the Apache License 2.0. Etcher was developed using the Electron framework and supports Windows, macOS and Linux. balenaEtcher was originally called Etcher, but its name was changed on October 29, 2018, when Resin.io changed its name to Balena. Features Etcher is primarily used through a graphical user interface. Additionally, there is a command line interface available which is under active development. Future planned features include support for persistent storage, allowing a live SD card or USB flash drive to be used as a hard drive, as well as support for flashing multiple boot partitions to a single SD card or USB flash drive. See also List of tools to create Live USB systems References External links Github Repository Cross-platform software Free system software Linux installation software Multiboot live USB
https://en.wikipedia.org/wiki/Hercules%20DFXE
The Hercules DFXE was an American diesel truck engine produced by the Hercules Engine Company. Part of the Hercules DFX series, the DFXE is a naturally aspirated, direct injection, overhead valve, inline six-cylinder engine with a compression ratio of 14.8:1. It developed its rated power at 1,600 rpm and maximum torque at 1,150 rpm. The DFXE was designed to the requirements of the British Purchasing Commission for use in the Diamond T Model 980 tank transporter. As the Diamond T Model 980 was later adopted by the US, the DFXE was one of the few diesel engines used by US tactical trucks during World War II. The engine was used in upright or horizontal configurations. It was used in the Diamond T Models 980 and 981 trucks in World War Two, the Le Tourneau Model B29 Tournapull earthmover (introduced in 1945), and the Oliver OC-18 crawler tractor (introduced in 1952), among others. Notes Sources Berndt, Thomas. Standard Catalog of U.S. Military Vehicles 1940–1965. Iola, WI: Krause Publications, 1993. Orlemann, Eric R. LeTourneau Earthmovers. St Paul, Minnesota: MBI Publishing Co, 2001. U.S. Department of the Army. TM 9-1768A: Tractor Truck M20, Component Of 45-Ton Tank Transporter, Truck-Trailer M19, Engine, Clutch, Fuel System, And Cooling System. Washington, DC: United States Government Printing Office, 22 June 1945. U.S. Department of the Army. TM55-1031: Engines, Diesel, Hercules: Series DFXB, DFXC, DFXD, and DFXE. Washington, DC: United States Government Printing Office, November 1944. Ware, Pat. The Diamond T Models 980, 981: Britain's second-generation tank transporter. Yalding, Kent: Kelsey Media, 2020. Wendel, C. H. Standard Catalog of Farm Tractors, 1890-1980. Iola, WI: KP Books, 2005. Diesel engines DFXE Tractors Engineering vehicles Straight-six engines
https://en.wikipedia.org/wiki/Time%20travel%20debugging
Time travel debugging or time traveling debugging is the process of stepping back in time through source code to understand what is happening during execution of a computer program. Typically, debugging and debuggers, tools that assist a user with the process of debugging, allow users to pause the execution of running software and inspect the current state of the program. Users can then step forward in time, stepping into or over statements and proceeding in a forward direction. Interactive debuggers include the ability to modify code and step forward based on updated information. Reverse debugging tools allow users to step backwards in time through the steps that resulted in reaching a particular point in the program. Time traveling debuggers provide these features and also allow users to interact with the program, changing the history if desired, and watch how the program responds. Characteristics supporting bi-directional travel There are several characteristics that support the ability to move backwards as well as forwards in time. Selecting a purely functional programming language helps due to the self-contained nature of pure functions. Pure functions have no side effects and depend only on the information explicitly provided to the function, providing a repeatable, reliable, replayable path through the code. Languages and debuggers that enable hot swapping, the ability to modify code as the code is running, provide some of the requirements necessary to rewind, and potentially rewrite, execution. Tools based on the GNU debugger (GDB), available for compatible languages such as C, C++, Go, and Fortran, are capable of reverse debugging, but the effort significantly slows interaction. Time traveling debuggers Several debuggers offer the ability to step backwards through a recorded execution. See also Interactive computing List of purely functional programming languages References Human–computer interaction
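The record-and-replay idea behind such tools can be illustrated in a few lines of Python: snapshot the program state at each step, then move a cursor backwards or forwards through the recorded history. This is a toy model of the concept only, not how production reverse debuggers are implemented (real tools record at the instruction or event level and reconstruct state on demand).

```python
import copy

class Recorder:
    """Records state snapshots so execution can be stepped backwards."""
    def __init__(self, state):
        self.history = [copy.deepcopy(state)]
        self.cursor = 0

    def step(self, fn):
        # Execute one step forward and record the resulting state.
        # Stepping after a rewind discards the old future, i.e. "changes history".
        state = copy.deepcopy(self.history[self.cursor])
        fn(state)
        self.history = self.history[: self.cursor + 1] + [state]
        self.cursor += 1

    def reverse_step(self):
        self.cursor = max(0, self.cursor - 1)
        return self.history[self.cursor]

rec = Recorder({"x": 0})
rec.step(lambda s: s.update(x=s["x"] + 1))   # x == 1
rec.step(lambda s: s.update(x=s["x"] * 10))  # x == 10
print(rec.reverse_step())                    # {'x': 1}
```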
https://en.wikipedia.org/wiki/Taiwania%20%28supercomputer%29
Taiwania () is a supercomputer series in Taiwan owned by the National Applied Research Laboratories. History The supercomputer was activated on 9 May 2018 after a two-year program to establish it at a cost of NT$430 million. In April 2023, it was announced that Taiwania 1 itself would be retired and replaced by Taiwania 4. Technical specifications The Taiwania 1 supercomputer has a memory of 3.4 petabytes with a maximum speed of 1.33 quadrillion FLOPS. The hardware takes up a total area of 33 m². Taiwania 2 has a maximum speed of 9 PFLOPS. Taiwania 2 History The Taiwania 2 supercomputer is a follow-on to the Taiwania supercomputer designed by the National Center for High-Performance Computing. Taiwania 2 debuted at 20 on the November 2018 TOP500 and 10 on the Green500. Technical specifications Taiwania 2 has a computing capacity of 9 quadrillion floating-point operations per second (9 PetaFLOPS, or 9 PFLOPS). Its hardware consists of 252 nodes, each of which contains two Intel Xeon Gold CPUs and eight NVIDIA V100 GPUs. It runs the CentOS operating system. Taiwania 3 Taiwania 3 is one of the supercomputers made in Taiwan and, as of August 2021, the newest of the series. It is housed in the National Center for High-performance Computing of NARLabs. There are 50,400 cores in total across 900 nodes, using Intel Xeon Platinum 8280 2.4 GHz CPUs (28 cores per CPU) and CentOS as the operating system. It is an open-access supercomputer, currently available to scientists and others doing specific research after they obtain permission from Taiwan's National Center for High-performance Computing. This is the third supercomputer of the Taiwania series. It uses CentOS x86_64 7.8 as its operating system and the Slurm Workload Manager as its workload manager to ensure better performance. Taiwania 3 uses an InfiniBand HDR100 100 Gbit/s high-speed interconnect to ensure better performance of the supercomputer. The main memory capacity is 192 GB. There are currently two Intel Xeon Pl
https://en.wikipedia.org/wiki/Krauss%20wildcard-matching%20algorithm
In computer science, the Krauss wildcard-matching algorithm is a pattern matching algorithm. Based on the wildcard syntax in common use, e.g. in the Microsoft Windows command-line interface, the algorithm provides a non-recursive mechanism for matching patterns in software applications, based on syntax simpler than that typically offered by regular expressions. History The algorithm is based on a history of development, correctness and performance testing, and programmer feedback that began with an unsuccessful search for a reliable non-recursive algorithm for matching wildcards. An initial algorithm, implemented in a single while loop, quickly prompted comments from software developers, leading to improvements. Ongoing comments and suggestions culminated in a revised algorithm still implemented in a single while loop but refined based on a collection of test cases and a performance profiler. The experience tuning the single while loop using the profiler prompted development of a two-loop strategy that achieved further performance gains, particularly in situations involving empty input strings or input containing no wildcard characters. The two-loop algorithm is available for use by the open-source software development community, under the terms of the Apache License v. 2.0, and is accompanied by test case code. Usage The algorithm made available under the Apache license is implemented in both pointer-based C++ and portable C++ (implemented without pointers). The test case code, also available under the Apache license, can be applied to any algorithm that provides the pattern matching operations below. The implementation as coded is unable to handle multibyte character sets and poses problems when the text being searched may contain multiple incompatible character sets. Pattern matching operations The algorithm supports three pattern matching operations: A one-to-one match is performed between the pattern and the source to be checked for a match, with the exc
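For illustration, here is a compact non-recursive wildcard matcher in Python in the same spirit as the algorithm described: a single pass over the text with backtracking bookmarks instead of recursion. It handles only the '*' wildcard (matching any run of characters, including none) plus literal one-to-one comparison; it is a sketch of the general approach, not a port of Krauss's published C++ implementation, and it does not address the multibyte character set limitations noted above.

```python
def wildcard_match(text: str, pattern: str) -> bool:
    """Iteratively match 'pattern' (literals and '*') against 'text'."""
    t = p = 0                 # cursors into text and pattern
    star = -1                 # position of the last '*' seen in the pattern
    mark = 0                  # text position to resume from on backtrack
    while t < len(text):
        if p < len(pattern) and pattern[p] == "*":
            star, mark = p, t # bookmark: '*' may absorb more text later
            p += 1
        elif p < len(pattern) and pattern[p] == text[t]:
            t += 1            # one-to-one literal match
            p += 1
        elif star != -1:
            mark += 1         # let the last '*' absorb one more character
            t, p = mark, star + 1
        else:
            return False
    return all(c == "*" for c in pattern[p:])  # trailing '*'s match empty text

assert wildcard_match("abcbcd", "a*bcd")
assert not wildcard_match("abc", "a*d")
```

Because failed literal comparisons only ever re-anchor at the most recent '*', the matcher needs no recursion and no auxiliary stack, which is the property the two-loop algorithm is designed around.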
https://en.wikipedia.org/wiki/26-fullerene%20graph
In the mathematical field of graph theory, the 26-fullerene graph is a polyhedral graph with V = 26 vertices and E = 39 edges. Its planar embedding has three hexagonal faces (including the one shown as the external face of the illustration) and twelve pentagonal faces. As a planar graph with only pentagonal and hexagonal faces, meeting in three faces per vertex, this graph is a fullerene. The existence of this fullerene has been known since at least 1968. Properties The 26-fullerene graph has prismatic symmetry, the same group of symmetries as the triangular prism. This symmetry group has 12 elements; it has six symmetries that arbitrarily permute the three hexagonal faces of the graph and preserve the orientation of its planar embedding, and another six orientation-reversing symmetries. The number of fullerenes with a given even number of vertices grows quickly in the number of vertices; 26 is the largest number of vertices for which the fullerene structure is unique. The only two smaller fullerenes are the graph of the regular dodecahedron (a fullerene with 20 vertices) and the graph of the truncated hexagonal trapezohedron (a 24-vertex fullerene), which are the two types of cells in the Weaire–Phelan structure. The 26-fullerene graph has many perfect matchings. One must remove at least five edges from the graph in order to obtain a subgraph that has exactly one perfect matching. This is a unique property of this graph among fullerenes in the sense that, for every other number of vertices of a fullerene, there exists at least one fullerene from which one can remove four edges to obtain a subgraph with a unique perfect matching. The vertices of the 26-fullerene graph can be labeled with sequences of 12 bits, in such a way that distance in the graph equals half of the Hamming distance between these bitvectors. This can also be interpreted as an isometric embedding from the graph into a 12-dimensional taxicab geometry. The 26-fullerene graph is one of only five
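The bit-labeling property can be stated precisely. If ℓ(u) denotes the 12-bit label of vertex u and H is the Hamming distance, then, in LaTeX notation:

```latex
d_G(u, v) \;=\; \tfrac{1}{2}\, H\bigl(\ell(u), \ell(v)\bigr)
\qquad \text{for all vertices } u, v \in V(G),
```

which is exactly the condition for an isometric embedding, up to a factor of two, into the hypercube metric on {0,1}^12, i.e. into a 12-dimensional taxicab geometry.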
https://en.wikipedia.org/wiki/Basics%20of%20white%20flower%20colouration
White flower colour is related to the absence or reduction of the anthocyanidin content. Unlike other colours, white colour is not induced by pigments. Several white plant tissues are principally equipped with the complete machinery for anthocyanin biosynthesis, including the expression of regulatory genes. Nevertheless, they are unable to accumulate red or blue pigments, for example Dahlia 'Seattle' petals showing a white tip. Several studies have revealed a further reduction of the anthocyanidin to colourless epicatechin by the enzyme anthocyanidin reductase (ANR). Cultivation & Modification of Colour Many external factors can influence colour: light, temperature, pH, sugars and metals. There is a method to turn petunia flowers from white to transparent. The petunia flower is immersed in a flask of water connected to a vacuum pump, after which the flower appears colourless. The white colour is produced by the air present in the vacuoles, which scatters the light; without air the flower loses the white colour. There is an increasing interest in flower colour, since some colorations are currently unavailable in plants. Ornamental companies create new flower colours by classical and mutation breeding and by biotechnological approaches. For example, white bracts in Poinsettia are obtained by high frequency irradiation. See also Basics of blue flower colouration References Agronomy Biological pigments Biological processes Botany Cellular respiration Plant physiology Quantum biology
https://en.wikipedia.org/wiki/Maclear%27s%20Beacon
Maclear's Beacon is a triangulation station used in Maclear's arc measurement for determining Earth's circumference. The beacon is on top of Table Mountain in Cape Town, South Africa, situated at the eastern end of the plateau of the mountain. It marks the highest point of Table Mountain, standing higher than the upper cable car station. The structure consists of man-made rock packed in a triangular form. It was painted in lamp black to make it visible when light shone on it. In December 1844, the Astronomer Royal at the Cape, Thomas Maclear, instructed his assistant William Mann to build a beacon in the form of a pile of rocks which would be used to confirm and possibly expand on the existing curvature of the Earth data of Nicolas-Louis de Lacaille. This data was in connection with the Cape arc of the meridian. Initially the beacon had no name, but in later years it was named after Maclear. In 1929, the pile of stones collapsed, and it was restored in 1979 to commemorate the centenary of Maclear's death. The beacon is still used by cartographers today. It has become a tourist attraction, and hiking trails over the mountain pass next to the beacon. It is also a National Monument. References External links 1844 in South Africa Buildings and structures completed in 1844 Buildings and structures in Cape Town Geodesy Geomatics 19th-century architecture in South Africa
https://en.wikipedia.org/wiki/Ministry%20of%20Testing
Ministry of Testing, also referred to as the MoT, is a global software testing community that was founded by Rosie Sherry, who was longlisted for most influential woman in UK tech by Computer Weekly in 2017 and 2018, as well as listed in the Female Founders 101 list by BusinessCloud. MoT started out as a UK-based internet forum for software testers and quickly grew into an independent business that provides software testing conferences and meetups around the world, and an online learning platform dedicated to the craft of software testing. Members of the Ministry of Testing community consist of software testers and those working in software quality. The community created by Ministry of Testing aims to get its participants sharing innovative practises and ideas around software testing. Computer scientists at the University of Maryland used Ministry of Testing (along with other organisations) to recruit software testers for a study into identifying vulnerabilities in software. The Ministry of Testing has teamed up with community members to run local meetups. There are 50 meetups across the world. According to Rosie Sherry, the Brazil and Greek meetups are some of the best meetups across the world. Specifically for the Greek meetup, she says, 'But our Athens group is just nuts and mad crazy. Their last meetup had 300 people attend. That's the size of a conference!' TestBash Ministry of Testing's first conference, named TestBash, was first held in Cambridge. Events have been described as having a strong community atmosphere and using innovative conference engagement methods, such as The UnExpo and 99-Second Talks. TestBash software testing conferences are largely informal events, with talks addressing areas of innovation across testing, quality and working in software development. There are now multiple TestBash conferences taking place in 7 cities annually around the world. In 2018 the first TestBash focusing specifically on technical testing and automation will be held,
https://en.wikipedia.org/wiki/Individual%20Master%20File
The Individual Master File (IMF) is the system currently used by the United States Internal Revenue Service (IRS) to store and process tax submissions, and is used as the main data input to process the IRS's transactions. It is a running record of all of a person's individual tax events including refunds, payments, penalties and taxpayer status. It is a batch-driven application that uses VSAM files. Written in assembly language and COBOL, the IMF was originally created by IBM for the IRS in the 1960s to run with an IBM System/360 and associated tape storage system. The IMF is frequently identified as a legacy system in need of modernization. Description The IMF stores an individual's name, taxpayer identification number, address, income, deductions, credits, payments received, refunds issued and taxes dismissed. The IMF stores the data of over 100 million individual American taxpayers. The IMF application is a system consisting of a series of batch runs, data records and files. The IMF system receives individual tax submissions in electronic format and processes them through a pre-posting phase. It then posts and analyzes the transactions, which produces output in the form of refund data, notice data, reports and information feeds to other entities and departments. Age The IMF system began operation in the 1960s and is still used today, and is considered well overdue for modernization. Portions of the system are programmed in COBOL and others directly in assembly language. In a 2018 report to Congress, the Government Accountability Office identified the IMF and other IT systems at the IRS as "facing significant risks due to their reliance on legacy programming languages, outdated hardware, and a shortage of human resources with critical skills". The IMF and other legacy systems have been named as obstacles that prevent the IRS from acting quickly in exigent circumstances. In the weeks following the passage of the Coronavirus Aid, Relief, and Economic Security Act
https://en.wikipedia.org/wiki/Ribbon%20Communications
Ribbon Communications Inc. is a public company that makes software, IP and optical networking solutions for service providers, enterprises and critical infrastructure sectors. The company was formed in 2017, following the merger of Genband and Sonus Networks, and is headquartered in Plano, Texas. History Ribbon Communications was the combination of two companies, each of which had acquired other businesses over its history. Ribbon Communications Ribbon Communications was founded in October 2017, following the merger of Genband and Sonus Networks in May. Ray Dolan initially headed the combined company, while Walsh led the Kandy business unit. By December, Dolan, who had led Sonus since 2010, resigned. Franklin (Fritz) W. Hobbs was appointed as president and CEO of the combined organization and served in that role until November 2019. In January 2018, the company announced that its session border controllers would be used in the virtual network services of Verizon. In 2018 Ribbon also acquired Edgewater Networks. In November 2019, Ribbon announced it would acquire ECI Telecom from Shaul Shani for $486 million in cash and stock. The company completed the merger in March 2020. In February 2020, Bruce McClelland was named president, CEO and director. A year later, Ribbon moved its headquarters to Plano, Texas. In August 2020, AVCTechnologies announced an agreement to buy the Kandy Communications business. The transaction was completed in December 2020. Genband General Bandwidth was founded in 1999 by Paul Carew, Brendon Mills, Ron Lutz and Steve Raich in Austin, Texas and received initial venture capital funding of $12 million. The company raised over $200 million in four rounds of venture funding and grew to over 200 people by 2003. In 2004, Mills resigned and was replaced as CEO by Charles Vogt. In March 2006, General Bandwidth changed its name to Genband, Inc. and moved its headquarters to Plano, Texas. Genband started as a media gateway vendor selling the
https://en.wikipedia.org/wiki/Code%20page%20904
Code page 904 (CCSID 904) is encoded for use as the single byte component of certain traditional Chinese character encodings. It is used in Taiwan. When combined with the double-byte Code page 927, it forms the two code-sets of Code page 938. Codepage layout References 904
https://en.wikipedia.org/wiki/Myki%20%28password%20manager%29
Myki was a password manager and authenticator developed by Myki Security. Myki was available on iOS (requires iOS 10 or higher) and Android (requires Android 5 or higher), as browser extensions on Chrome, Firefox, Safari, Opera and Microsoft Edge, and as a standalone desktop app for Windows, macOS, Linux, Arch Linux, and Debian. It was available in English, Arabic, French, German, Italian, Portuguese and Spanish. On 24 March 2022, Myki announced JumpCloud's acquisition of Myki, and on 10 April 2022, Myki ceased to operate. Product Overview The Myki Password Manager and Authenticator was an offline (data stored on the smartphone, not in the cloud) free mobile application for storing and managing passwords, credit cards, government IDs and notes. Myki was available on iOS and Android and was available as browser extensions on Chrome, Firefox, Safari and Opera, and as a standalone desktop app for Windows (requires Windows 8 or higher), macOS (requires macOS 10.12 or higher), Linux (AppImage & snap), Arch Linux (pacman), and Debian (.deb). Myki For Teams was an offline password manager for teams. Myki for Managed Service Providers enables MSPs to manage the passwords of the multiple companies they administer. Myki was named one of the Best Password Managers of 2018 globally by PC Magazine. History Myki Security was founded in 2015 by Antoine Vincent Jebara and Priscilla Elora Sharuk. Myki launched its product in a private beta in September 2016. In 2016, Myki was the first MENA-based company selected to compete in TechCrunch Disrupt Battlefield in San Francisco, California. In January 2017, Myki raised $1.2 million from BECO Capital in Dubai, United Arab Emirates, Leap Ventures and B&Y Venture Partners in Beirut, Lebanon. In 2019, Myki added a secure password-sharing feature, allowing users to share sensitive login credentials securely with trusted individuals, further differentiating it from other password managers in the market. End of support On 24 February 2022, it