https://en.wikipedia.org/wiki/Markov%20odometer | In mathematics, a Markov odometer is a certain type of topological dynamical system. It plays a fundamental role in ergodic theory and especially in orbit theory of dynamical systems, since a theorem of H. Dye asserts that every ergodic nonsingular transformation is orbit-equivalent to a Markov odometer.
The basic example of such a system is the "nonsingular odometer", an additive topological group defined on the product space of discrete spaces, induced by addition defined as , where . This group can be endowed with the structure of a dynamical system; the result is a conservative dynamical system.
The general form, called the "Markov odometer", can be constructed through a Bratteli–Vershik diagram, defining a Bratteli–Vershik compactum together with a corresponding transformation.
Nonsingular odometers
Several kinds of non-singular odometers may be defined.
These are sometimes referred to as adding machines.
The simplest is illustrated with the Bernoulli process. This is the set of all infinite strings in two symbols, here denoted by endowed with the product topology. This definition extends naturally to a more general odometer defined on the product space
for some sequence of integers with each
The odometer for for all is termed the dyadic odometer, the von Neumann–Kakutani adding machine or the dyadic adding machine.
The topological entropy of every adding machine is zero. Any continuous map of an interval with a topological entropy of zero is topologically conjugate to an adding machine, when restricted to its action on the topologically invariant transitive set, with periodic orbits removed.
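The carry-based addition underlying these adding machines can be sketched on a finite truncation of the product space. The snippet below is an illustrative sketch only (the function name and finite-list representation are conventions chosen here, not standard notation): each application adds 1 to the lowest coordinate and propagates carries, with base 2 in every coordinate giving the dyadic odometer.

```python
def odometer(x, base=None):
    """Add 1 with carry to the digit sequence x = (x_0, x_1, ...),
    least-significant coordinate first, as a finite truncation of
    the infinite product space."""
    if base is None:
        base = [2] * len(x)          # dyadic case: every coordinate is 0 or 1
    y = list(x)
    for i in range(len(y)):
        y[i] += 1
        if y[i] < base[i]:           # no carry needed: stop here
            return y
        y[i] = 0                     # carry into the next coordinate
    return y                         # overflow wraps around to (0, 0, ..., 0)

# Dyadic odometer orbit of (0,0,0): visits all 8 points of the truncation.
x = [0, 0, 0]
orbit = []
for _ in range(8):
    orbit.append(tuple(x))
    x = odometer(x)
print(orbit[:3])   # -> [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
```

On the full infinite product space the map has no overflow and every orbit is dense, which is the source of the minimality of the odometer.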
Dyadic odometer
The set of all infinite strings in two symbols has a natural topology, the product topology, generated by the cylinder sets. The product topology extends to a Borel sigma-algebra; let denote that algebra. Individual points are denoted as
The Bernoulli process is conventionally endowed with a collection of measure |
https://en.wikipedia.org/wiki/Ambient%20video | Ambient video is a genre of video that puts emphasis on atmosphere over traditional storytelling or news content. Its purpose is to create a soothing, pleasant environment through the use of imagery. A typical video might include footage of waves rolling over the beach accompanied by soft ambient music. Some additional examples might include movement of water, sunrise or sunset, or slow movement such as that of a jellyfish. Ambient video can be the subject of focus or alternatively run unobtrusively in the background.
Uses
Because of its ability to transform spaces, ambient video is often used for meditation or relaxation. In addition to providing visual focus, ambient video can figuratively transport viewers to new locations and, although often accompanied by music, is not dependent on it.
Although ambient video often serves as background art, its visual nature allows it to also be used:
as a meditation tool
to explore new locations
as an aid for concentration or productivity
to create atmosphere at home or in public places
as a calming backdrop in homes or public places (such as airports, hotels, hospitals, restaurants, shopping venues and so forth)
History of ambient video
Ambient video is a new art form made possible by the increasing availability of large-scale, high-definition video display units. Introduced in the late 1990s, the first flat wide-screen televisions were expensive, costing as much as $15,000 in 1997. As costs rapidly declined, by 2008 larger screens became affordable and available for average consumers, increasing the demand for new content.
In addition, the number of televisions per household increased. According to the U.S. Energy Information Administration, in 1997 30% of households had three or more televisions compared to 39% in 2015.
See also
Ambient music
Brian Eno
Meditation
Widescreen
References
Ambient music
Video |
https://en.wikipedia.org/wiki/SAE%20J306 | SAE J306 is a standard that defines the viscometric properties of automotive gear oils. It is maintained by SAE International. Key parameters for this standard are the kinematic viscosity of the gear oil, the maximum temperature at which the oil has a viscosity of 150,000 cP, and a measure of its shear stability through the KRL test.
References
Lubrication
Gear oils
Automotive standards
Viscosity |
https://en.wikipedia.org/wiki/Stanford%20arm | The Stanford arm is an industrial robot with six degrees of freedom, designed at Stanford University by Victor Scheinman in 1969.
The Stanford arm is a serial manipulator whose kinematic chain consists of two revolute joints at the base, a prismatic joint, and a spherical joint. Because its chain combines both revolute and prismatic kinematic pairs, it is often used as an educational example in robot kinematics.
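The first three joints described above — two revolute joints followed by a prismatic joint — act roughly like spherical coordinates for the wrist centre. The sketch below is illustrative only: a full Stanford-arm model uses Denavit–Hartenberg parameters with link offsets, which are ignored here, and the function name is an invention of this example.

```python
from math import cos, sin

def stanford_wrist_position(theta1, theta2, d3):
    """Wrist-centre position for a simplified Stanford-arm model:
    revolute azimuth theta1, revolute elevation theta2, and
    prismatic extension d3 (link offsets ignored, so the three
    joints reduce to spherical coordinates)."""
    x = d3 * cos(theta2) * cos(theta1)
    y = d3 * cos(theta2) * sin(theta1)
    z = d3 * sin(theta2)
    return (x, y, z)

print(stanford_wrist_position(0.0, 0.0, 1.0))  # -> (1.0, 0.0, 0.0)
```

Under this simplification the revolute joints choose a direction and the prismatic joint chooses a radius, which is why the design reaches any point in its workspace without the elbow singularities of an all-revolute arm.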
References
Robotic manipulators |
https://en.wikipedia.org/wiki/Death%20of%20Elaine%20Herzberg | The death of Elaine Herzberg (August 2, 1968 – March 18, 2018) was the first recorded case of a pedestrian fatality involving a self-driving car, after a collision that occurred late in the evening of March 18, 2018. Herzberg was pushing a bicycle across a four-lane road in Tempe, Arizona, United States, when she was struck by an Uber test vehicle, which was operating in self-drive mode with a human safety backup driver sitting in the driving seat. Herzberg was taken to the local hospital where she died of her injuries.
Following the fatal incident, the National Transportation Safety Board (NTSB) issued a series of recommendations and sharply criticized Uber. The company suspended testing of self-driving vehicles in Arizona, where such testing had been sanctioned since August 2016. Uber chose not to renew its permit for testing self-driving vehicles in California when it expired at the end of March 2018. Uber resumed testing in December 2018.
In March 2019, Arizona prosecutors ruled that Uber was not criminally responsible for the crash. The back-up driver of the vehicle was charged with negligent homicide.
While Herzberg was the first pedestrian killed by a self-driving car, a driver had been killed by a semi-autonomous car almost two years earlier. A reporter for The Washington Post compared Herzberg's fate with that of Bridget Driscoll who, in the United Kingdom in 1896, was the first pedestrian to be killed by an automobile.
The Arizona incident has magnified the importance of collision avoidance systems for self-driving vehicles.
Collision summary
Herzberg was crossing Mill Avenue (North) from west to east, approximately south of the intersection with Curry Road, outside the designated pedestrian crosswalk, close to the Red Mountain Freeway. She was pushing a bicycle laden with shopping bags, and had crossed at least two lanes of traffic when she was struck at approximately 9:58 pm MST (UTC−07:00) by a prototype Uber self-driving car based on a Volvo XC90 |
https://en.wikipedia.org/wiki/Kagome%20metal | In solid-state physics, the kagome metal or kagome magnet is a type of ferromagnetic quantum material. The atomic lattice in a kagome magnet has layered overlapping triangles and large hexagonal voids, akin to the kagome pattern in traditional Japanese basket-weaving. This geometry induces a flat electronic band structure with Dirac crossings, in which the low-energy electron dynamics correlate strongly.
Electrons in a kagome metal experience a "three-dimensional cousin of the quantum Hall effect": magnetic effects require electrons to flow around the kagome triangles, akin to superconductivity. This phenomenon occurs in many materials at low temperatures and high external field, but, unlike superconductivity, materials are known in which the effect remains under standard conditions.
The first room-temperature, vanishing-external-field kagome magnet discovered was the intermetallic , as shown in 2011. Many others have since been found. Kagome magnets occur in a variety of crystal and magnetic structures, generally featuring a 3d-transition-metal kagome lattice with in-plane period ~5.5 Å. Examples include antiferromagnet , paramagnet , ferrimagnet , hard ferromagnet (and Weyl semimetal) , and soft ferromagnet . Until 2019, all known kagome materials contained the heavy element tin, which has a strong spin–orbit coupling, but potential kagome materials under study () included magnetically doped Weyl-semimetal , and the class (A = Cs, Rb, K). Although most research on kagome magnets has been performed on Fe3Sn2, it has since been discovered that in fact exhibits a structure much closer to the ideal kagome lattice.
A kagome lattice harbors massive Dirac fermions, Berry curvature, band gaps, and spin–orbit activity, all of which are conducive to the Hall Effect and zero-energy-loss electric currents. These behaviors are promising for the development of technologies in quantum computing, spin superconductors, and low power electronics. in p |
https://en.wikipedia.org/wiki/Agglomerated%20food%20powder | Agglomerated food powder is produced by agglomeration, a unit operation during which native particles are assembled to form bigger agglomerates, in which the original particles can still be distinguished. Agglomeration can be achieved through processes that use liquid as a binder (wet methods) or methods that do not involve any binder (dry methods).
Description
The liquid used in wet methods can be added directly to the product or via a humid environment. Using a fluidized bed dryer and multiple step spray drying are two examples of wet methods while roller compacting and extrusion are two examples of dry methods.
Advantages of agglomeration for food include:
Dust reduction: Dust reduction is achieved when the smallest particles (or "fines") in the product are combined into larger particles.
Improved flow: Flow improvement occurs as the larger, and sometimes more spherical, particles more easily pass over each other than the smaller or more irregularly-shaped particles in the original material.
Improved dispersion and/or solubility: Improved dispersion and solubility is sometimes achieved with instantization, in which the solubility of a product allows it to dissolve instantly upon addition to water. For a powder to be considered instant, it should complete wetting, sinking, dispersing, and dissolving within a few seconds. Non-fat dry milk and high-quality protein powders are good examples of instant powders.
Optimized bulk density: Consistent bulk density is important in accurate and consistent filling of packaging.
Improved product characteristics
Increased homogeneity of the finished product, reducing segregation of fine particles (such as powdered vitamins or spray-dried flavors) from larger particles (such as granulated sugars or acids). As a powder is agitated, smaller particles fall to the bottom and larger ones rise to the top. Agglomeration can reduce the range of particle sizes present in the product, reducing segregation.
Disadvantages of food aggl |
https://en.wikipedia.org/wiki/Windows%20Server%202019 | Windows Server 2019 is the ninth version of the Windows Server operating system by Microsoft, as part of the Windows NT family of operating systems. It is the second version of the server operating system based on the Windows 10 platform, after Windows Server 2016. It was announced on March 20, 2018 for the first Windows Insider preview release, and was released internationally on October 2, 2018. It was succeeded by Windows Server 2022 on August 18, 2021.
Development and release
Windows Server 2019 was announced on March 20, 2018, and the first Windows Insider preview version was released on the same day. It was released for general availability on October 2 of the same year.
On October 6, 2018, distribution of Windows 10 version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. The software product life cycle for Server 2019 was reset in accordance with the new release date.
Editions
Windows Server 2019 consists of the following editions:
Windows Server 2019 Essentials - intended for companies with up to 25 employees; memory-limited.
Windows Server 2019 Standard - intended for companies with more than 25 employees or more than 1 server to separate server roles.
Windows Server 2019 Datacenter - is mainly used for placing multiple virtual machines on a physical host.
Features
Windows Server 2019 has the following new features:
Container services:
Support for Kubernetes (stable; v1.14)
Support for Tigera Calico (an open-source networking and security solution for containers, virtual machines, and native host-based workloads)
Linux containers on Wind |
https://en.wikipedia.org/wiki/Conversable | Conversable is a SaaS based Artificial Intelligence (AI) powered conversational platform, headquartered in Austin, Texas. It allows customers to create intelligent, automated response flows through conversations in any messaging channel or voice platforms. It has offices in Austin and Dallas.
Companies using the Conversable platform include Budweiser, Wingstop, Pizza Hut, T.G.I. Friday's, Sam’s Club, Shake Shack, CES, Whole Foods and 7-Eleven.
History
In 2015, Conversable Inc. was founded by Ben Lamm, founder and CEO of digital creative design studio Chaotic Moon Studios, which was acquired by Accenture, and Andrew Busey, former CEO & co-founder of social game company Challenge Games Inc., which was acquired by Zynga. It received funding of $2 million from angel investors.
In March 2017, Conversable launched a new product called AQUA (Answer Questions Using AI), which is a business intelligence (BI) platform.
In 2018, Conversable was acquired by LivePerson to "help LivePerson continue to accelerate its goal of providing conversational commerce products to customers," according to CEO Robert LoCascio.
Overview
It helps companies deliver on-demand content, customer self-service, and conversational commerce via messaging channels and voice applications.
Partnership
The company partnered with Phobio in January 2018. It has also partnered with Olo, Hinduja Global Solutions, Booz Allen Hamilton, Ernst & Young, Mindtree, WPP and Pactera.
References
External links
Official website
Internet properties established in 2015
AI companies
Computer-related introductions in 2015
Companies based in Austin, Texas
Companies based in Dallas
Instant messaging |
https://en.wikipedia.org/wiki/DNS%20over%20HTTPS | DNS over HTTPS (DoH) is a protocol for performing remote Domain Name System (DNS) resolution via the HTTPS protocol. A goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data by man-in-the-middle attacks by using the HTTPS protocol to encrypt the data between the DoH client and the DoH-based DNS resolver. By March 2018, Google and the Mozilla Foundation had started testing versions of DNS over HTTPS. In February 2020, Firefox switched to DNS over HTTPS by default for users in the United States.
An alternative to DoH is the DNS over TLS (DoT) protocol, a similar standard for encrypting DNS queries, differing only in the methods used for encryption and delivery. Whether either protocol is superior with regard to privacy and security is a matter of debate; some argue that the merits of each depend on the specific use case.
Technical details
DoH is a proposed standard, published as RFC 8484 (October 2018) by the IETF. It uses HTTP/2 and HTTPS, and supports the wire format DNS response data, as returned in existing UDP responses, in an HTTPS payload with the MIME type application/dns-message. If HTTP/2 is used, the server may also use HTTP/2 server push to send values that it anticipates the client may find useful in advance.
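The HTTPS payload carries ordinary DNS wire-format messages, so a DoH client builds the same bytes it would otherwise send over UDP. The sketch below (an illustration, not a complete client — the helper name is invented and no network I/O is performed) encodes a query for an A record; RFC 8484 recommends message ID 0 so responses cache well, and for the GET method the message is base64url-encoded into a `?dns=` parameter.

```python
import struct
import base64

def build_dns_query(name, qtype=1):
    """Encode a DNS wire-format query, the payload DoH sends with
    Content-Type application/dns-message.  qtype 1 = A record."""
    header = struct.pack(">HHHHHH",
                         0,       # ID = 0, per RFC 8484 guidance
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no other records
    # A domain name is a sequence of length-prefixed labels ending
    # in the zero-length root label.
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS = IN
    return header + question

msg = build_dns_query("example.com")
# GET form: base64url without padding, as required by RFC 8484.
print(base64.urlsafe_b64encode(msg).rstrip(b"="))
```

The same bytes would be POSTed unencoded with the `application/dns-message` media type; only the transport wrapping differs between the two HTTP methods.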
DoH is a work in progress. Even though the IETF has published RFC 8484 as a proposed standard and companies are experimenting with it, the IETF has yet to determine how it should best be implemented. The IETF is evaluating a number of approaches for how best to deploy DoH and is looking to set up a working group, Adaptive DNS Discovery (ADD), to do this work and develop a consensus. In addition, other industry working groups such as the Encrypted DNS Deployment Initiative, have been formed to "define and adopt DNS encryption technologies in a manner that ensures the continued high performance, resiliency, stability and security of the Internet's critical namespace and n |
https://en.wikipedia.org/wiki/Resource%20Unit | Resource Unit (RU) is a unit in OFDMA terminology used in 802.11ax WLAN to denote a group of 78.125 kHz bandwidth subcarriers (tones) used in both downlink (DL) and uplink (UL) transmissions. With OFDMA, different transmit powers may be applied to different RUs. There is a maximum of 9 RUs for a 20 MHz bandwidth, 18 in the case of 40 MHz, and more in the case of 80 or 160 MHz bandwidth. RUs enable an access point to serve multiple WLAN stations simultaneously and efficiently.
Description
In the older WLAN standard (802.11ac), only a single station is allowed to transmit uplink at any one point in time, although multi-user downlink (DL-MU-MIMO) from the AP to non-AP stations has been supported through MIMO beamforming. The more stations active in the network, the longer each station must wait before being allowed to transmit, so overall wireless traffic gets slower.
802.11ax WLAN is the first WLAN standard to use OFDMA to enable transmissions from multiple users simultaneously (called High Efficiency Multi-User [HE-MU] access). In OFDMA, a symbol is constructed of subcarriers, whose total number defines the physical-layer PDU bandwidth. Each user is assigned a different subset of subcarriers to achieve simultaneous data transmission in a multi-user (MU) environment. The more subcarriers are used, the longer the symbol duration becomes, which means that the overall rate of information remains the same. For example, a 20 MHz OFDMA bandwidth has a total of 256 subcarriers (tones), which are grouped into sub-channels (or Resource Units).
There are three subcarrier types used in OFDMA WLAN:
Data subcarrier; used for actual data transmission
Pilot subcarrier; used for phase information and parameter tracking
Unused subcarrier which is neither data nor pilot subcarrier. This includes DC, Guard band and null subcarriers.
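Because every tone has the same 78.125 kHz spacing, the raw spectral span of an RU follows directly from its tone count. The snippet below is a back-of-the-envelope sketch (the function name is an invention of this example, and guard/null tones that pad an RU out to a full channel are not counted):

```python
SUBCARRIER_SPACING_KHZ = 78.125   # 802.11ax tone spacing

def ru_bandwidth_mhz(tones):
    """Raw bandwidth spanned by an RU with the given number of tones
    (data + pilot subcarriers; guard and null tones excluded)."""
    return tones * SUBCARRIER_SPACING_KHZ / 1000

for tones in (26, 52, 106, 242):
    print(tones, "tones ->", ru_bandwidth_mhz(tones), "MHz")
# A 26-tone RU spans 2.03125 MHz; a 242-tone RU spans 18.90625 MHz,
# roughly a full 20 MHz channel once guard/null tones are added.
```

This also shows why 20 MHz fits at most nine 26-tone RUs: 9 × 26 = 234 data/pilot tones out of the 256-tone total, with the remainder used for DC, guard, and null subcarriers.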
There are a few RUs currently defined: 26-tone RU, 52-tone RU, 106-tone RU, 242-tone RU (primary channel), 484-tone RU, 99 |
https://en.wikipedia.org/wiki/Lists%20of%20individual%20animals | There are several lists of individual animals on Wikipedia. These are lists of notable, non-fictional, specific animals (as opposed to groups of categories of animals).
List of individual apes
Oldest hominoids
List of individual bears
List of giant pandas
List of individual birds
List of individual cats
List of oldest cats
List of individual dogs
List of oldest dogs
List of individual elephants
List of animals in film and television
List of animals awarded human credentials
List of historical horses
List of leading Thoroughbred racehorses
List of racehorses
List of individual bovines
List of individual cetaceans
List of captive orca
List of individual monkeys
List of wealthiest animals
List of wolves
List of individual pigs |
https://en.wikipedia.org/wiki/Engineers%20Registration%20Board | The Engineers Registration Board (ERB), is a statutory authority established in 1969, under the Engineers Registration Act (ERA) Cap 271, whose mission is to regulate and supervise the profession of engineering in Uganda.
Location
, the headquarters and offices of the ERB are temporarily housed at the offices of the Uganda Ministry of Works and Transport at Kyambogo, while efforts to secure a permanent location are ongoing, with the help of the Uganda Investment Authority. The geographical coordinates of the ERB headquarters are: 00°20'24.0"N, 32°37'37.0"E (Latitude: 0.339999; Longitude: 32.626949).
Overview
Under its mandate, the ERB is authorized to (a) register, (b) de-register, (c) restore registration, (d) suspend registration, (e) hold inquiries, (f) hear appeals and (g) appear as respondent in cases brought against it in the High Court. It is also mandated to advise the government regarding the engineering sector.
The board is appointed by the Ugandan Minister of Works and Transport in consultation with the Uganda Institute of Professional Engineers (UIPE), the professional body of engineers in the country, which is guaranteed positions on the board.
The 17th Board was named on 15 March 2018, by the Ugandan Minister of Works and Transport, Engineer Monica Azuba Ntege. The ERB members named are:
Michael Odongo: Chairman
Henry Francis Okinyal: Vice Chairman
Andrew Kitaka: Member
Elias Bahanda
Michael Pande
Florence Lubwama Kiyimba
Peter Balimunsi
Ronald Namugera: Board Registrar
Registered engineers in Uganda enjoy cross-border reciprocity of recognition of credentials in the member countries of the East African Community (Burundi, Kenya, Rwanda, Tanzania and South Sudan).
, there were 842 registered engineers, of whom 774 were Uganda nationals with full operational licences and 68 were foreigners with temporary registration. 720 of the registered engineers are concentrated in Kampala, with only 105 scattered across the remaining 120 |
https://en.wikipedia.org/wiki/List%20of%20early%20third%20generation%20computers | This list of early third generation computers tabulates those computers using monolithic integrated circuits (ICs) as their primary logic elements, from small-scale integration (SSI) CPUs to large-scale integration (LSI) CPUs. Computers primarily using ICs first came into use about 1961, for military applications. With the availability of reliable, low-cost ICs in the mid-1960s, commercial third generation computers using ICs started to appear.
The fourth generation computers began with the shipment of CPS-1, the first commercial microprocessor microcomputer in 1972 and for the purposes of this list marks the end of the "early" third generation computer era. Note that third generation computers were offered well into the 1990s.
The list is organized by delivery year to customers or production/operational date. In some cases only the first computer from any one manufacturer is listed. Computers announced, but never completed, are not included. Computers without documented manual input (keyboard/typewriter/control unit) are also not included.
Aerospace and military computers (1961-1971)
1961
Semiconductor Network Computer (Molecular Electronic Computer, Mol-E-Com), first monolithic integrated circuit general purpose computer (built for demonstration purposes, programmed to simulate a desk calculator) was built by Texas Instruments for the US Air Force.
1962
Martin MARTAC 420 (Fairchild Micrologic)
AC Spark Plug MAGIC (Fairchild Micrologic)
Librascope L-90 series (silicon planar epitaxial semiconductor IC)
1963
UNIVAC 1824
Autonetics D37 (Solid Circuit, Texas Instruments)
1965
Apollo Guidance Computer First installation
Burroughs D84
Litton L-304 - TTL IC
Honeywell ALERT - HLTTL IC
Autonetics D26 - DTL IC
1967
Ballistic Research Laboratories Electronic Scientific Computer Model II (BRLESC II)
CDC 449
CP-823/U
1970
AN/UYK-7
Rolm 1601 (AN/UYK-12(V)), Feb 1970
1971
AN/GYK-12 Militarized version of Litton L-3050
Commercial computers (1965-1971)
This t |
https://en.wikipedia.org/wiki/Cultural%20depictions%20of%20weasels | Weasels are mammals belonging to the family Mustelidae and the genus Mustela, which includes stoats, least weasels, ferrets, and minks, among others. Different species of weasel have lived alongside humans on every continent except Antarctica and Australia, and have been assigned a wide range of folkloric and mythical meanings.
Early history
Çatalhöyük
Excavations in Çatalhöyük, the earliest known agricultural settlement in Turkey (dating to 7000-6000 BC), uncovered animal parts which were incorporated into the structure of homes in what may have been a ritual practice. The teeth of weasels, alongside the teeth of foxes, tusks of wild boars, the claws of bears, and vulture beaks, were found embedded in plaster wall decorations.
In antiquity
Greece and Rome
Weasels were likely seen as pests in Ancient Greece and Rome. There are modern claims that the Ancient Greeks and Romans kept weasels as house pets. Plutarch and Cicero both refer to weasels as household pets in their writing.
They were also thought to have anti-venom properties: Pliny the Elder details a recipe for an antidote for asp venom made from crushed weasels, and writes about a least weasel slaying a basilisk in his Natural History: "To this dreadful monster the effluvium of the weasel is fatal, a thing that has been tried with success, for kings have often desired to see its body when killed; so true is it that it has pleased Nature that there should be nothing without its antidote. The animal is thrown into the hole of the basilisk, which is easily known from the soil around it being infected. The weasel destroys the basilisk by its odour, but dies itself in this struggle of nature against its own self."
Japan
The Kamaitachi (鎌鼬) is a Japanese yōkai that is said to take the form of a weasel who either has sharp nails or sickles in place of paws.
In Tōhoku, injuries received from a kamaitachi can be healed by burning an old calendar and putting it on the wound. In Shin'etsu, there is a folk bel |
https://en.wikipedia.org/wiki/Cascade%20Lake | Cascade Lake is an Intel codename for a 14 nm server, workstation and enthusiast processor microarchitecture, launched in April 2019. In Intel's process–architecture–optimization model, Cascade Lake is an optimization of Skylake. Intel states that this will be their first microarchitecture to support 3D XPoint-based memory modules. It also features Deep Learning Boost instructions and mitigations for Meltdown and Spectre. Intel officially launched new Xeon Scalable SKUs on February 24, 2020.
Variants
Server: Cascade Lake-SP, Cascade Lake-AP
Workstation: Cascade Lake-W
Enthusiast: Cascade Lake-X
List of Cascade Lake processors
Cascade Lake-X (Enthusiast)
Cascade Lake-AP (Advanced Performance)
Cascade Lake-AP is branded as Xeon Platinum 9200 series and all SKUs are soldered to the motherboard. These CPUs will not work with Optane Memory.
Xeon Platinum 9200 Series
Cascade Lake-SP (Scalable)
Xeon Platinum series
Xeon Gold 6200 series
Bolded denotes new SKUs released February 24, 2020.
Xeon Gold 5200 Series
Bolded denotes new SKUs released February 24, 2020.
Xeon Silver series
Bolded denotes new SKUs released February 24, 2020.
Xeon Bronze series
Bolded denotes new SKUs released February 24, 2020.
Cascade Lake-W (Workstation)
Xeon W-3200 series
Xeon W-2200 series
See also
List of Intel Cascade Lake-based Xeon microprocessors
References
Skylake microarchitecture
Intel microarchitectures
Transactional memory
X86 microarchitectures |
https://en.wikipedia.org/wiki/Apache%20Celix | Apache Celix is an open-source implementation of the OSGi specification adapted to C and C++ developed by the Apache Software Foundation. The project aims to provide a framework to develop (dynamic) modular software applications using component and/or service-oriented programming.
Apache Celix is primarily developed in C and adds an additional abstraction, in the form of a library, to support C++.
Modularity in Apache Celix is achieved by supporting - run-time installed - bundles. Bundles are zip files and can contain software modules in the form of shared libraries. Modules can provide and request dynamic services, for and from other modules, by interacting with a provided bundle context. Services in Apache Celix are "plain old" structs with function pointers or "plain old C++ Objects" (POCO).
History
Apache Celix entered the Apache Incubator in November 2010 and graduated from it to a Top-Level Project in July 2014.
References
"Prose in this article was copied from this source, which is released under an Apache License, Version 2.0"
External links
Celix
Embedded systems
Free software
Service-oriented architecture-related products |
https://en.wikipedia.org/wiki/Structural%20battery | Structural batteries are multifunctional materials or structures, capable of acting as an electrochemical energy storage system (i.e. batteries) while possessing mechanical integrity.
They help save weight and are useful in transport applications such as electric vehicles and drones, because of their potential to improve system efficiencies. Two main types of structural batteries can be distinguished: embedded batteries and laminated structural electrodes.
Embedded batteries
Embedded batteries represent multifunctional structures where lithium-ion battery cells are efficiently embedded into a composite structure, and more often sandwich structures. In a sandwich design, state-of-the-art lithium-ion batteries are embedded forming a core material and bonded in between two thin and strong face sheets (e.g. aluminium). In-plane and bending loads are carried by face sheets while the battery core takes up transverse shear and compression loads as well as storing the electrical energy. The multifunctional structure can then be used as a load-bearing as well as an energy storage material.
Laminated structural electrodes
In laminated structural electrodes the electrode material possesses an intrinsic load-bearing and energy storage function. Such batteries are also called massless batteries, since in theory vehicle body parts could also store energy, adding no extra weight to the vehicle because separate batteries would not be needed. An example of such batteries is one based on a zinc anode, a manganese oxide cathode and a fiber/polymer composite electrolyte. The structural electrolyte enables stable charge and discharge performance. This assembly has been demonstrated in an unmanned aerial vehicle. A commonly proposed structural battery is based on a carbon fiber reinforced polymer (CFRP) concept. Here, carbon fibers serve simultaneously as electrodes and structural reinforcement. The lamina is composed of carbon fibers that are embedded in a matrix ma
https://en.wikipedia.org/wiki/Ancient%20protein | Ancient proteins are complex mixtures, and the term palaeoproteomics is used to characterise the study of proteomes in the past. Ancient proteins have been recovered from a wide range of archaeological materials, including bones, teeth, eggshells, leathers, parchments, ceramics, painting binders and well-preserved soft tissues such as intestines. These preserved proteins have provided valuable information about taxonomic identification, evolutionary history (phylogeny), diet, health, disease, technology and social dynamics in the past.
Like modern proteomics, the study of ancient proteins has also been enabled by technological advances. Various analytical techniques, for example, amino acid profiling, racemisation dating, immunodetection, Edman sequencing, peptide mass fingerprinting, and tandem mass spectrometry have been used to analyse ancient proteins. The introduction of high-performance mass spectrometry (for example, Orbitrap) in 2000 has revolutionised the field, since the entire preserved sequences of complex proteomes can be characterised.
Over the past decade, the study of ancient proteins has evolved into a well-established field in archaeological science. However, like the research of aDNA (ancient DNA preserved in archaeological remains), it has been limited by several challenges such as the coverage of reference databases, identification, contamination and authentication. Researchers have been working on standardising sampling, extraction, data analysis and reporting for ancient proteins. Novel computational tools such as de novo sequencing and open research may also improve the identification of ancient proteomes.
History: the pioneers of ancient protein studies
Philip Abelson, Edgar Hare and Thomas Hoering
Abelson, Hare and Hoering led the study of ancient proteins between the 1950s and the early 1970s. Abelson directed the Geophysical Laboratory at the Carnegie Institution (Washington, DC) between 1953 and 1971, and he was the firs |
https://en.wikipedia.org/wiki/Juan%20Jos%C3%A9%20Holzinger | Juan José Holzinger (?–1864) was a German-born mining engineer who served as a colonel in the Mexican Army during the Texas Revolution.
Holzinger first came to Mexico in 1825 as a mining engineer for a British company. He served with Santa Anna when the latter led a Federalist revolt. During the Texas Revolution, Holzinger was credited with saving some Texan prisoners. Afterward he moved to Vera Cruz, where he became a major landowner.
References
1864 deaths
German emigrants to Mexico
Mexican military personnel
People of Mexican side in the Texas Revolution |
https://en.wikipedia.org/wiki/Internet%20Video%20Coding | Internet Video Coding (ISO/IEC 14496-33, MPEG-4 IVC) is a video coding standard. IVC was created by MPEG, and was intended to be a royalty-free video coding standard for use on the Internet, as an alternative to non-free formats such as AVC and HEVC. As such, IVC was designed to only use (mostly old) coding techniques which were not covered by royalty-requiring patents.
According to a blog post by MPEG founder and chairman Leonardo Chiariglione in 2018, "IVC is practically dead." He said that three companies had made statements equivalent to "I may have patents and I am willing to license them at FRAND terms" covering IVC, meaning that implementations might have to pay money to the companies. These statements meant that IVC was not clearly a royalty-free video coding format; those companies would need to be contacted to determine whether they had essential patents and to determine the terms for their use which might involve the payment of some fees.
The ITU-T/ITU-R/ISO/IEC patent policy defines three types of patent licensing declarations. The goal for IVC was to use only techniques whose patents were declared under type 1 (royalty-free), while the three companies said they may have patents under type 2 (possibly requiring royalty payments). The text of the code of practice is as follows:
2.1 The patent holder is willing to negotiate licences free of charge with other parties on a non-discriminatory basis on reasonable terms and conditions. Such negotiations are left to the parties concerned and are performed outside ITU-T/ITU-R/ISO/IEC.
2.2 The patent holder is willing to negotiate licences with other parties on a non-discriminatory basis on reasonable terms and conditions. Such negotiations are left to the parties concerned and are performed outside ITU-T/ITU-R/ISO/IEC.
2.3 The patent holder is not willing to comply with the provisions of either paragraph 2.1 or paragraph 2.2; in such case, the Recommendation | Deliverable shall not include provisions depending on the patent.
History
MPEG i |
https://en.wikipedia.org/wiki/Broadcast%20to%20Allied%20Merchant%20Ships | Broadcast to Allied Merchant Ships (BAMS) was a protocol and system of broadcasts for Allied merchant ship convoys that was used during World War II to provide for the transmission of official messages to merchant ships in any part of the world. The BAMS system was designed to provide communication through the best employment of the radio stations available.
Background
On the outbreak of World War II, the British Admiralty took over control from the GPO, and the embryo merchant ship broadcast system, called GBMS, came into force. Ships listened at routine times to the Rugby Radio Station and to area stations, otherwise keeping watch on the international distress frequency of 500 kHz. After the fall of France, the Admiralty assumed control of all Allied merchant shipping, which then complied with British procedures. When America entered the war, the world was divided into two strategic zones, the Admiralty being responsible for merchant shipping in one and the United States Navy in the other.
The GBMS organisation proved to be inadequate for the efficient clearance of traffic for a number of reasons, including poor coverage by wireless telegraphy (W/T) stations, obsolescent equipment, and many ships being able to listen only during single- or two-operator watch periods. The system gradually improved, and from 1942 all Allied merchant ships had to have two radio technicians on board, with more modern equipment being fitted to ships.
In 1942, the GBMS system was superseded by the combined Anglo-American BAMS system, and the addition of US Navy W/T stations improved the previously poor coverage. For ship-to-shore communications during radio silence, ships in convoy passed any essential messages through their escort for transmission. The Commodore's and Vice-Commodore's ships, rescue ships, merchant aircraft carriers, and ships fitted with Huff-Duff were fitted, when possible, for intercommunication with the escort vessels.
In 1943 Rodger Winn persuaded the Admiralty that German code breaking in World War II wa |
https://en.wikipedia.org/wiki/Samantha%20Fox%20Strip%20Poker | Samantha Fox Strip Poker is a 1986 erotic video game developed by Software Communications and published by Martech. It was published on the Commodore 64, Amstrad CPC, BBC Micro, MSX, and ZX Spectrum.
It is one of the first erotic video games to include a real human being. It is part of a theme of erotic games where players complete difficult tasks and are rewarded with nudity.
Gameplay
The player plays 5-card or 7-card stud poker against British model and singer Samantha Fox. Winning hands result in her taking off her clothes until she is topless.
Development
The video game was programmed by Wolfgang Smith, with the graphics edited by Malcolm Smith. The author of the music is Rob Hubbard, credited with the name John York. The music includes a cover of "The Entertainer" by Scott Joplin and "The Stripper" by David Rose.
Reception
ZZap!64 felt the music was well-suited to the style of game. Commodore Format magazine thought that the idea of anybody using the game as a way to experience titillating content was depressing due to the required amount of effort from the player.
Uvejuegos thought the game was a prime example of how strange the 1980s were. Der Spiegel placed the game within the sub-genre of early pixelated digi-ladies of dubious beauty, along with Artworx's Strip Poker (1984).
Reviews
Jeux & Stratégie #40
Jeux & Stratégie HS #3
References
External links
World of Spectrum
Happy Computer
Tilt
1986 video games
Amstrad CPC games
BBC Micro and Acorn Electron games
Commodore 64 games
Erotic video games
MSX games
Video games developed in the United Kingdom
Video games scored by Rob Hubbard
ZX Spectrum games
Video games based on musicians
Martech games
Poker video games |
https://en.wikipedia.org/wiki/Dereverberation | Dereverberation is the process by which the effects of reverberation are removed from sound, after such reverberant sound has been picked up by microphones. Dereverberation is a subtopic of acoustic digital signal processing and is most commonly applied to speech but also has relevance in some aspects of music processing. Dereverberation of audio (speech or music) is the counterpart of blind deconvolution of images, although the techniques used are usually very different. Reverberation itself is caused by sound reflections in a room (or other enclosed space) and is quantified by the room reverberation time and the direct-to-reverberant ratio. The effect of dereverberation is to increase the direct-to-reverberant ratio so that the sound is perceived as closer and clearer.
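The two quantities just mentioned can be made concrete with a short numerical sketch. The impulse-response shape and all numbers below are illustrative assumptions, not measurements of any real room:

```python
import math

# Synthetic room impulse response (RIR): a unit direct-path tap followed
# by an exponentially decaying tail of reflections (purely illustrative).
rir = [1.0] + [0.5 * math.exp(-k / 40.0) for k in range(1, 200)]

# Direct-to-reverberant ratio (DRR): direct-path energy over the energy
# of all later reflections, expressed in decibels.
direct_energy = rir[0] ** 2
tail_energy = sum(h * h for h in rir[1:])
drr_db = 10 * math.log10(direct_energy / tail_energy)

# A microphone in this room picks up the dry signal convolved with the RIR.
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

wet = convolve([1.0, -0.5, 0.25], rir)
print(round(drr_db, 2), len(wet))
```

A dereverberation algorithm aims to raise the DRR by attenuating the tail energy relative to the direct path.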
A main application of dereverberation is in hands-free phones and desktop conferencing terminals because, in these cases, the microphones are not close to the source of sound – the talker’s mouth – but at arm’s length or further distance. As well as telecommunications, dereverberation is importantly applied in automatic speech recognition because speech recognizers are usually error-prone in reverberant scenarios.
Dereverberation became established as a topic of scientific research in the years 2000 to 2005, although a few notable early articles exist. The first scientific text book on the topic was published in 2010. A global scientific study sponsored by the IEEE Technical Committee for Audio and Acoustic Signal Processing took place in 2014.
Three different approaches can be followed to perform dereverberation. In the first approach, reverberation is cancelled by exploiting a mathematical model of the acoustic system (or room) and, after estimation of the room acoustic model parameters, forming an estimate for the original signal. In the second approach, reverberation is suppressed by treating it as a type of (convolutional) noise and performing a de-noising process specifically ad |
https://en.wikipedia.org/wiki/Solaristor | A solaristor (from SOLAR cell transISTOR) is a compact two-terminal self-powered phototransistor. The two-in-one transistor plus solar cell achieves its high-low current modulation by a memristive effect in the flow of photogenerated carriers. The term was coined by Dr Amador Perez-Tomas, working in collaboration with other ICN2 researchers, in 2018 when they demonstrated the concept in a ferroelectric-oxide/organic bulk heterojunction solar cell.
Principle of operation
In a basic solaristor embodiment, the self-powered transistor effect is achieved by the integration of a light absorber layer (a material that absorbs photon energy) in series with a functional semiconductor transport layer whose internal conductivity or contact resistance can be modified externally.
Light absorber (solar cell element)
In general, the light absorber is a semiconductor p–n junction that:
Efficiently harvests photons at various visible wavelengths by the photovoltaic effect.
Splits photo-generated excitons into free electrons and holes.
Brings these free electrons and holes toward their respective outer electrodes by means of an internal field.
Additionally, in thin-film solar cells, buffer electron and hole semiconductor transport layers are introduced at the respective metal electrodes to avoid electron-hole recombination and to remove the metal/absorber Schottky barrier.
Conductivity modulator (transistor element)
A solaristor effect is achieved by modifying the internal field properties or the overall conductivity of the solar cell.
Ferroelectric solaristors. One possibility is the use of ferroelectric semiconductors as transport layers. A ferroelectric layer can be seen as a semiconductor with switchable surface charge polarity. Because of this tuneable dipole effect, ferroelectrics bend their electronic band structure and offsets with respect to adjacent metals and/or semiconductors when switching the ferroelectric polarization so that the overall conductivity can be tu |
https://en.wikipedia.org/wiki/PostScript%20Latin%201%20Encoding | The PostScript Latin 1 Encoding (often spelled ISOLatin1Encoding) is one of the character sets (or encoding vectors) used by Adobe Systems' PostScript (PS) since 1984 (1982). In 1995, IBM assigned code page 1277 (CCSID 1277) to this character set. It is a superset of ISO 8859-1.
Code page layout
References
(NB. This book is informally called "red book" due to its red cover.)
(NB. This edition also contains a description of Display PostScript, which is no longer discussed in the third edition.)
Character sets |
https://en.wikipedia.org/wiki/IVPN | IVPN is a VPN service offered by IVPN Limited (formerly Privatus Limited) based in Gibraltar. Launched in 2009, IVPN operates using the WireGuard, OpenVPN, and IKEv2 protocols.
Features
Privatus Limited has been independently audited by Cure53, undergoing both a no-logging audit and a comprehensive penetration test. They accept Bitcoin, Monero, PayPal, credit cards, and cash as payment methods, and all of their clients are open source. IVPN is supported by FlashRouters, a company that specializes in providing custom-flashed routers for VPN users.
Reception
In a September 2017 review by PCWorld, IVPN received 4 1/2 out of 5 stars and was awarded their Editor's Choice Award in 2017.
The service has also been reviewed by PC Magazine, and TorrentFreak.
See also
Comparison of virtual private network services
Internet privacy
Encryption
Secure communication
List of free and open-source Android applications
References
External links
Internet privacy
Virtual private network services |
https://en.wikipedia.org/wiki/Helen%20Campbell%20D%27Olier | Helen Campbell D’Olier (1829–1887) was a Scottish artist best known for her reproductions from early Christian illuminated manuscripts.
Life and family
Born in Edinburgh, the daughter of James Lawson, she received some instruction in drawing and painting from William Simpson RSA. In 1849 she moved to Ireland, having married Dublin barrister John Rutherford D’Olier (1816–1899). They had two children, Helen Lawson D'Olier (d. 1948) and Isaac Matthew D'Olier (1850–1913). D'Olier died on 29 June 1887.
Artistic work
While she did produce some traditional landscape paintings, it is for her skill in reproducing illuminations from medieval manuscripts, particularly the Book of Kells, that this artist is best remembered. Her copies, on vellum, of the illustrations in the 9th-century Book of Kells are noted for their accuracy and fidelity. D'Olier spent many years copying these illustrations, at a time when copyists were permitted to work directly from the manuscript itself. It can be seen from her surviving drawings that D'Olier had been permitted to trace the designs directly from the pages of the manuscript.
Apart from her meticulous reproductions, D'Olier researched the history of the manuscripts, particularly the origins of the pigments used to decorate them. In 1884 she lectured on the subject in Alexandra College, showing magic-lantern slides of her work. Some of her illustrations were included in a lecture on the Book of Kells given by Professor J.D. Westwood in Oxford, which was published in 1887. Her work was also included in a paper read to the Dublin Society in 1887 by Professor W. J. Hartley. She is known to have worked on the Lindisfarne Gospels and the Bodleian's Liber Sacramentorum. Her work was exhibited in the Dublin Exhibitions of 1861 and 1872.
In 1914 Sir Edward Sullivan (1852-1928) produced a very popular book on the Book of Kells with 24 full-colour plates. At the time it was technically easier to reproduce drawings than photographs |
https://en.wikipedia.org/wiki/Frontier%20%28supercomputer%29 | Hewlett Packard Enterprise Frontier, or OLCF-5, is the world's first and fastest exascale supercomputer, hosted at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, United States and first operational in 2022. It is based on the Cray EX and is the successor to Summit (OLCF-4). At its debut, Frontier was ranked the world's fastest supercomputer. Frontier achieved an Rmax of 1.102 exaFLOPS, which is 1.102 quintillion floating-point operations per second, using AMD CPUs and GPUs. Measured at 62.86 gigaflops/watt, Frontier topped the Green500 list for most efficient supercomputer, until it was dethroned (in efficiency) by the Flatiron Institute's Henri supercomputer in November 2022.
Design
Frontier uses 9,472 AMD Epyc 7453s "Trento" 64 core 2 GHz CPUs (606,208 cores) and 37,888 Radeon Instinct MI250X GPUs (8,335,360 cores). They can perform double precision operations at the same speed as single precision.
"Trento" is an optimized 3rd Gen EPYC CPU ("Milan"), which itself is based on the Zen 3 microarchitecture.
It occupies 74 rack cabinets. Each cabinet hosts 64 blades, each consisting of 2 nodes.
Blades are interconnected by an HPE Slingshot 64-port switch that provides 12.8 terabits/second of bandwidth. Groups of blades are linked in a dragonfly topology with at most three hops between any two nodes. Cabling is either optical or copper, customized to minimize cable length. Total cabling runs . Frontier is liquid-cooled, allowing 5x the density of air-cooled architectures.
Each node consists of one CPU, 4 GPUs and 4 terabytes of flash memory. Each GPU has 128 GB of RAM soldered onto it.
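The cabinet, blade, and node counts above are mutually consistent, as a quick arithmetic check shows (the figure of 220 compute units per MI250X is inferred from the quoted totals, not taken from an official specification):

```python
# Cross-check of the Frontier figures quoted above.
cabinets, blades_per_cabinet, nodes_per_blade = 74, 64, 2
nodes = cabinets * blades_per_cabinet * nodes_per_blade

assert nodes == 9_472                # one CPU per node -> 9,472 CPUs
assert nodes * 64 == 606_208         # CPU cores (64 cores per CPU)
assert nodes * 4 == 37_888           # GPUs (4 per node)
assert nodes * 4 * 220 == 8_335_360  # GPU compute units (assumed 220 each)
print(nodes)
```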
Frontier has coherent interconnects between CPUs and GPUs, allowing GPU memory to be accessed coherently by code running on the Epyc CPUs.
Frontier uses an internal 75 TB/s read / 35 TB/s write / 15 billion IOPS flash storage system, along with the 700 PB Orion site-wide Lustre filesystem.
Frontier consumes 21 megawatts (MW) (compared to its predecessor Summit's 13 MW); it has been estimated that the n |
https://en.wikipedia.org/wiki/Beau%20Geste%20hypothesis | The Beau Geste hypothesis in animal behaviour is a hypothesis that seeks to explain why some avian species have such elaborate song repertoires for the purpose of territorial defence. The hypothesis takes its name from the 1924 book Beau Geste and was coined by John Krebs in 1977.
Background
The Beau Geste hypothesis was coined by Krebs in 1977 to explain why various avian species have such large song repertoires. The hypothesis proposes that avian species use such large song repertoires for a number of possible reasons, such as territorial defence and testing the competition within a new habitat.
The name of the hypothesis comes from the book Beau Geste, originally published in 1924. The book tells the story of three English brothers who all enlisted in the French Foreign Legion and ended up in a desert battle against a Tuareg army. They were greatly outnumbered, and in order to create the illusion that they had more men than they actually had, they took whatever dead soldiers they could find and propped them up along the walls of the fortress.
Non-avian species
The hypothesis has also been invoked in research into amphibian vocalizations, for example in Boophis madagascariensis, an endemic species of tree frog found in Madagascar, where the Beau Geste hypothesis is used to give one explanation of why the species has such a large vocal repertoire. There has been some support for the theory, in that the frogs use a wide variety of songs to give invading frogs the illusion that the territory they are trying to enter is already full of competing frogs.
The Beau Geste hypothesis has also been found to explain vocalizations within some cricket species, such as the bush cricket, where males use a wide variety of songs to assess the amount of competition in a given area. When males are present in an area with a large number of other males, their vocal repertoires are much smaller than when in an area with only a f |
https://en.wikipedia.org/wiki/Metric%20temporal%20logic | Metric temporal logic (MTL) is a special case of temporal logic. It is an extension of temporal logic in which temporal operators are replaced by time-constrained versions of the until, next, since and previous operators. It is a linear-time logic that assumes both the interleaving and fictitious-clock abstractions. It is defined over a point-based weakly-monotonic integer-time semantics.
MTL has been described as a prominent specification formalism for real-time systems. Full MTL over infinite timed words is undecidable.
Syntax
The full metric temporal logic is defined similarly to linear temporal logic, where an interval of non-negative real numbers is added to the temporal modal operators U and S. Formally, MTL is built up from:
a finite set of propositional variables AP,
the logical operators ¬ and ∨, and
the temporal modal operator (pronounced " until in ."), with an interval of non-negative numbers.
the temporal modal operator (pronounced " since in ."), with as above.
When the subscript is omitted, it is implicitly equal to .
Note that the next operator N is not considered to be a part of MTL syntax. It will instead be defined from other operators.
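As a concrete illustration of the time-constrained until modality, here is a toy evaluator over a finite timed word in the pointwise semantics. The data representation and the non-strict reading of until used here are assumptions made for the sketch, not part of the standard definition:

```python
# Toy evaluator for the MTL "until" modality over a finite timed word.
# A timed word is a list of (timestamp, set_of_propositions) pairs.

def until(word, i, phi1, phi2, lo, hi):
    """Check phi1 U_[lo,hi] phi2 at position i (non-strict semantics)."""
    t_i = word[i][0]
    for j in range(i, len(word)):
        t_j, props_j = word[j]
        if lo <= t_j - t_i <= hi and phi2 in props_j:
            # phi2 holds at j within the interval; phi1 must hold
            # at every position from i up to (but excluding) j.
            if all(phi1 in word[k][1] for k in range(i, j)):
                return True
    return False

# Example: "p until q", with q required between 2 and 4 time units.
w = [(0.0, {"p"}), (1.0, {"p"}), (3.0, {"q"}), (5.0, set())]
print(until(w, 0, "p", "q", 2, 4))   # True: q at time 3 lies in [2, 4]
print(until(w, 0, "p", "q", 0, 1))   # False: no q within [0, 1]
```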
Past and Future
The past fragment of metric temporal logic, denoted past-MTL, is defined as the restriction of full metric temporal logic without the until operator. Similarly, the future fragment of metric temporal logic, denoted future-MTL, is defined as the restriction of full metric temporal logic without the since operator.
Depending on the author, MTL is defined either as the future fragment of MTL, in which case full-MTL is called MTL+Past, or as full-MTL. In order to avoid ambiguity, this article uses the names full-MTL, past-MTL and future-MTL. When a statement holds for all three logics, the name MTL will simply be used.
Model
Let intuitively represent a set of points in time. Let be a function which associates a letter to each moment . A model of an MTL formula is
such a |
https://en.wikipedia.org/wiki/Group%20functor | In mathematics, a group functor is a group-valued functor on the category of commutative rings. Although it is typically viewed as a generalization of a group scheme, the notion itself involves no scheme theory. Because of this feature, some authors, notably Waterhouse and Milne (who followed Waterhouse), develop the theory of group schemes based on the notion of group functor instead of scheme theory.
A formal group is usually defined as a particular kind of a group functor.
Group functor as a generalization of a group scheme
A scheme may be thought of as a contravariant functor from the category of S-schemes to the category of sets that satisfies the gluing axiom; this perspective is known as the functor of points. Under this perspective, a group scheme is a contravariant functor from to the category of groups that is a Zariski sheaf (i.e., satisfying the gluing axiom for the Zariski topology).
For example, if Γ is a finite group, then consider the functor that sends Spec(R) to the set of locally constant functions on it. For example, the group scheme
can be described as the functor
If we take a ring, for example, , then
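For illustration, a standard example of a group scheme presented as a functor (offered here as an assumption; the example elided above may differ) is the multiplicative group:

```latex
\mathbb{G}_m \;=\; \operatorname{Spec}\mathbb{Z}[t,\,t^{-1}],
\qquad
\mathbb{G}_m(R) \;=\; R^{\times} \;=\; \{\, x \in R \mid x \text{ is a unit} \,\},
```

so that taking $R = \mathbb{C}$ gives $\mathbb{G}_m(\mathbb{C}) = \mathbb{C}^{\times}$, the multiplicative group of nonzero complex numbers.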
Group sheaf
It is useful to consider a group functor that respects a topology (if any) on the underlying category; namely, one that is a sheaf. A group functor that is a sheaf is called a group sheaf. The notion appears in particular in the discussion of a torsor (where a choice of topology is an important matter).
For example, a p-divisible group is an example of an fppf group sheaf (a group sheaf with respect to the fppf topology).
See also
automorphism group functor
Notes
References
Algebraic geometry |
https://en.wikipedia.org/wiki/Necrobiome | The necrobiome has been defined as the community of species associated with decaying corpse remains. The process of decomposition is complex. Microbes decompose cadavers, but other organisms including fungi, nematodes, insects, and larger scavenger animals also contribute. Once the immune system is no longer active, microbes colonizing the intestines and lungs decompose their respective tissues and then travel throughout the body via the blood and lymphatic systems to break down other tissue and bone. During this process, gases are released as a by-product and accumulate, causing bloating. Eventually, the gases seep through the body's wounds and natural openings, providing a way for some microbes to exit from the inside of the cadaver and inhabit the outside. The microbial communities colonizing the internal organs of a cadaver are referred to as the thanatomicrobiome. The region outside of the cadaver that is exposed to the external environment is referred to as the epinecrotic portion of the necrobiome, and is especially important when determining the time and location of death for an individual. Different microbes play specific roles during each stage of the decomposition process. The microbes that will colonize the cadaver and the rate of their activity are determined by the cadaver itself and the cadaver's surrounding environmental conditions.
History
There is textual evidence that human cadavers were first studied around the third century BC to gain an understanding of human anatomy. Many of the first human cadaver studies took place in Italy, where the earliest record of determining the cause of death from a human corpse dates back to 1286. However, understanding of the human body progressed slowly, in part because the spread of Christianity and other religious beliefs resulted in human dissection becoming illegal. Thus, non-human animals were solely dissected for anatomical understanding until the 13th century when officials realized human cadavers were ne |
https://en.wikipedia.org/wiki/The%20Mandarin%20%28website%29 | The Mandarin is an Australian online magazine established in 2014. The site, which reports news of interest to Australian public sector managers, is published by Tom Burton and owned by Private Media.
Background
The Mandarin was launched as a part of Private Media's "broader focus on business and the government sector"; Ken Henry, Lucy Turnbull, and Graeme Samuel served on the website's initial advisory committee.
As of 2020, the managing editor was Chris Johnson. Its reporting has been referenced by The New Daily and the Sydney Morning Herald.
References
External links
Mandarin Official Website
Club Management News
Australian news websites
Magazines established in 2014
2014 establishments in Australia
Business magazines published in Australia
Online magazines |
https://en.wikipedia.org/wiki/Multi-homogeneous%20B%C3%A9zout%20theorem | In algebra and algebraic geometry, the multi-homogeneous Bézout theorem is a generalization to multi-homogeneous polynomials of Bézout's theorem, which counts the number of isolated common zeros of a set of homogeneous polynomials. This generalization is due to Igor Shafarevich.
Motivation
Given a polynomial equation or a system of polynomial equations it is often useful to compute or to bound the number of solutions without computing explicitly the solutions.
In the case of a single equation, this problem is solved by the fundamental theorem of algebra, which asserts that the number of complex solutions is bounded by the degree of the polynomial, with equality, if the solutions are counted with their multiplicities.
In the case of a system of polynomial equations in unknowns, the problem is solved by Bézout's theorem, which asserts that, if the number of complex solutions is finite, their number is bounded by the product of the degrees of the polynomials. Moreover, if the number of solutions at infinity is also finite, then the product of the degrees equals the number of solutions counted with multiplicities and including the solutions at infinity.
However, it is rather common that the number of solutions at infinity is infinite. In this case, the product of the degrees of the polynomials may be much larger than the number of roots, and better bounds are useful.
The multi-homogeneous Bézout theorem provides such a better bound when the unknowns may be split into several subsets such that the degree of each polynomial in each subset is lower than the total degree of the polynomial. For example, let be polynomials of degree two which are of degree one in indeterminate and also of degree one in (that is, the polynomials are bilinear). In this case, Bézout's theorem bounds the number of solutions by
while the multi-homogeneous Bézout theorem gives the bound (using Stirling's approximation)
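The elided bounds can be illustrated numerically. Assuming the classical bilinear case with the n unknowns split evenly into two groups (this even split is an assumption of the sketch), the Bézout bound is 2^n while the multi-homogeneous bound is the central binomial coefficient, which by Stirling's approximation grows like 2^n divided by a factor of order √n:

```python
from math import comb, pi, sqrt

for n in (4, 8, 12, 16):
    bezout = 2 ** n                          # product of the total degrees
    multi = comb(n, n // 2)                  # multi-homogeneous bound
    stirling = 2 ** n * sqrt(2 / (pi * n))   # Stirling estimate of comb(n, n/2)
    print(n, bezout, multi, round(stirling, 1))
```

For n = 16 this already gives 65,536 versus 12,870, showing how much sharper the multi-homogeneous bound can be.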
Statement
A multi-homogeneous polynomial is a polynomial that is homoge |
https://en.wikipedia.org/wiki/Plant%20genetic%20resources | Plant genetic resources describe the variability within plants that comes from human and natural selection over millennia. Their intrinsic value mainly concerns agricultural crops (crop biodiversity).
According to the 1983 revised International Undertaking on Plant Genetic Resources for Food and Agriculture of the Food and Agriculture Organization (FAO), plant genetic resources are defined as the entire generative and vegetative reproductive material of species with economical and/or social value, especially for the agriculture of the present and the future, with special emphasis on nutritional plants.
In the State of the World’s Plant Genetic Resources for Food and Agriculture (1998) the FAO defined Plant Genetic Resources for Food and Agriculture (PGRFA) as the diversity of genetic material contained in traditional varieties and modern cultivars as well as crop wild relatives and other wild plant species that can be used now or in the future for food and agriculture.
History
The first use of plant genetic resources dates to more than 10,000 years ago, when farmers selected from the genetic variation they found in wild plants to develop their crops. As human populations moved to different climates and ecosystems, taking the crops with them, the crops adapted to the new environments, developing, for example, genetic traits providing tolerance to conditions such as drought, water logging, frost and extreme heat. These traits - and the plasticity inherent in having wide genetic variability - are important properties of plant genetic resources.
In recent centuries, although humans had been prolific in collecting exotic flora from all corners of the globe to fill their gardens, it wasn’t until the early 20th century that the widespread and organized collection of plant genetic resources for agricultural use began in earnest. Russian geneticist Nikolai Vavilov, considered by some as the father of plant genetic resources, realized the value of genetic variability for |
https://en.wikipedia.org/wiki/Two-tree%20broadcast | The two-tree broadcast (abbreviated 2tree-broadcast or 23-broadcast) is an algorithm that implements a broadcast communication pattern on a distributed system using message passing.
A broadcast is a commonly used collective operation that sends data from one processor to all other processors.
The two-tree broadcast communicates concurrently over two binary trees that span all processors. This achieves full usage of the bandwidth in the full-duplex communication model while having a startup latency logarithmic in the number of partaking processors.
The algorithm can also be adapted to perform a reduction or prefix sum.
Algorithm
A broadcast sends a message from a specified root processor to all other processors.
Binary tree broadcasting uses a binary tree to model the communication between the processors.
Each processor corresponds to one node in the tree, and the root processor is the root of the tree.
To broadcast a message , the root sends to its two children (child nodes). Each processor waits until it receives and then sends to its children. Because leaves have no children, they don't have to send any messages.
The broadcasting process can be pipelined by splitting the message into blocks, which are then broadcast consecutively.
In such a binary tree, the leaves of the tree only receive data, but never send any data themselves. If the communication is bidirectional (full-duplex), meaning each processor can send a message and receive a message at the same time, the leaves only use one half of the available bandwidth.
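The plain binary-tree broadcast described above can be sketched as a round-based simulation. The implicit array layout, in which the children of processor i are 2i+1 and 2i+2, is an assumption of this sketch, not part of the algorithm's definition:

```python
# Simulation of a binary-tree broadcast: the root (processor 0) holds the
# message, and in each round every processor that has it forwards it to
# its children. This is an illustrative sketch, not an MPI implementation.
def binary_tree_broadcast(num_procs, message):
    received = {0: message}        # root already holds the message
    frontier = [0]
    rounds = 0
    while frontier:
        next_frontier = []
        for p in frontier:
            for child in (2 * p + 1, 2 * p + 2):
                if child < num_procs:
                    received[child] = received[p]   # parent sends to child
                    next_frontier.append(child)
        frontier = next_frontier
        if next_frontier:
            rounds += 1
    return received, rounds

received, rounds = binary_tree_broadcast(7, "m")
print(len(received), rounds)   # all 7 processors reached in 2 rounds
```

With p processors this takes on the order of log2(p) rounds; the two-tree variant described next removes the wasted leaf bandwidth by running a second, complementary tree concurrently.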
The idea of the two-tree broadcast is to use two binary trees and and communicate on both concurrently.
The trees are constructed so that the interior nodes of one tree correspond to leaf nodes of the other tree.
The data that has to be broadcast is split into blocks of equal size.
In each step of the algorithm, each processor receives one block and sends the previous block to one of its children in the tree in which it is an interior node.
|
https://en.wikipedia.org/wiki/Streamlining%20theory | Genomic streamlining is a theory in evolutionary biology and microbial ecology that suggests that there is a reproductive benefit to prokaryotes having a smaller genome size with less non-coding DNA and fewer non-essential genes. There is a lot of variation in prokaryotic genome size, with the smallest free-living cell's genome being roughly ten times smaller than that of the largest prokaryote. Two of the bacterial taxa with the smallest genomes are Prochlorococcus and Pelagibacter ubique, both highly abundant marine bacteria commonly found in oligotrophic regions. Similar reduced genomes have been found in uncultured marine bacteria, suggesting that genomic streamlining is a common feature of bacterioplankton. This theory is typically used with reference to free-living organisms in oligotrophic environments.
Overview
Genome streamlining theory states that certain prokaryotic genomes tend to be small in size in comparison to other prokaryotes, and all eukaryotes, due to selection against the retention of non-coding DNA. The known advantages of small genome size include faster genome replication for cell division, fewer nutrient requirements, and easier co-regulation of multiple related genes, because gene density typically increases with decreased genome size. This means that an organism with a smaller genome is likely to be more successful, or have higher fitness, than one hindered by excessive amounts of unnecessary DNA, leading to selection for smaller genome sizes.
Some mechanisms that are thought to underlie genome streamlining include deletion bias and purifying selection. Deletion bias is the phenomenon in bacterial genomes where the rate of DNA loss is naturally higher than the rate of DNA acquisition. This is a passive process that simply results from the difference in these two rates. Purifying selection is the process by which extraneous genes are selected against, making organisms lacking this genetic material more successful by effectively reducing their |
https://en.wikipedia.org/wiki/A%20Primer%20of%20Real%20Functions | A Primer of Real Functions is a revised edition of a classic Carus Monograph on the theory of functions of a real variable. It is authored by R. P. Boas, Jr and updated by his son Harold P. Boas.
References
1960 non-fiction books
Mathematics textbooks
Functions and mappings |
https://en.wikipedia.org/wiki/Electro-biochemical%20reactor | Electro-biochemical reactor (EBR) is a type of a bioreactor used in water treatment. EBR is a high-efficiency denitrification, metals, and inorganics removal technology that provides electrons directly to the EBR bioreactor as a substitute for using excess electron donors and nutrients. It was patented by INOTEC, a bioremediation company based in Salt Lake City, UT.
The EBR technology is based on the principle that microbes mediate the removal of metal and inorganic contaminants through electron transfer (redox processes). In conventional bioreactors, these electrons are provided by excess organic electron donors (e.g., organic carbon sources such as methanol or glucose). They require excess nutrients/chemicals to compensate for inefficient and variable electron availability needed to adjust reactor ORP chemistry, compensate for system sensitivity (fluctuation), and achieve more consistent constituent removal. The Electro-Biochemical Reactor directly supplies the needed electrons to the reactor and microbes, using a low applied potential across the reactor cell (1–3 V) at low milliampere levels. As a comparison, one molecule of glucose, often used as a cost-effective electron donor, can provide up to 24 electrons under complete glucose metabolism, while a current of 1 mA provides 6.2×10^15 electrons every second. The small amount of power required can even come from a small solar/battery source.
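The quoted electron rate follows directly from dividing the applied current by the elementary charge; a quick back-of-envelope check (the constant and function name below are ours, not from the article):

```python
# Electrons delivered per second by a given current: I / e
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per electron

def electrons_per_second(current_amps):
    return current_amps / ELEMENTARY_CHARGE

print(f"{electrons_per_second(1e-3):.2e}")  # 1 mA -> about 6.24e+15 per second
```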
The EBR systems have been successfully demonstrated in the mining and power generation sectors to remove nitrate, nitrite, selenium, cadmium, molybdenum, nickel, tin, uranium, zinc, antimony, copper, lead, silver, vanadium, and mercury.
References
Bioreactors |
https://en.wikipedia.org/wiki/MAP-Seq | MAPseq, or Multiplexed Analysis of Projections by Sequencing, is an RNA-Seq-based method for high-throughput mapping of neuronal projections. It was developed by Anthony M. Zador and his team at Cold Spring Harbor Laboratory and published in Neuron, a Cell Press journal.
The method works by uniquely labeling neurons in a source region by injecting a viral library encoding a diverse collection of RNA sequences ("barcodes"). The barcode mRNA is expressed at high levels and transported into the axon terminals at distal target projection regions. Following this, the cells from source and putative target regions of interest are harvested, and their RNA is extracted and sequenced. By matching the presence of the unique "barcode" in the source and target tissue, one can map the projections of neurons in a one-to-many fashion.
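The matching step described above can be sketched in a few lines; the function, region names, and barcode strings below are hypothetical illustrations, not part of the published protocol:

```python
# Map each source-region barcode to the target regions whose sequenced
# reads contain it, giving a one-to-many projection pattern per neuron.

def map_projections(source_barcodes, target_reads):
    """target_reads: dict of region name -> set of barcodes found there."""
    return {bc: [region for region, reads in target_reads.items() if bc in reads]
            for bc in source_barcodes}

source = ["ACGTAC", "TTGCAA"]  # barcodes recovered from the source region
targets = {"cortex": {"ACGTAC"}, "striatum": {"ACGTAC", "TTGCAA"}}
print(map_projections(source, targets))
# -> {'ACGTAC': ['cortex', 'striatum'], 'TTGCAA': ['striatum']}
```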
See also
RNA-Seq
Patch-sequencing
References
Molecular biology
Neuroscience
External links
RNA sequencing
Molecular biology techniques |
https://en.wikipedia.org/wiki/Food%20and%20biological%20process%20engineering | Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply.
Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones.
Concepts
Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones.
Food science
The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety.
Genetic engineering
Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind.
Among the most notable applications of |
https://en.wikipedia.org/wiki/PCVC%20Speech%20Dataset | The PCVC (Persian Consonant Vowel Combination) Speech Dataset is a Modern Persian speech corpus for speech recognition and also speaker recognition. The dataset contains sound samples of Modern Persian consonant–vowel phoneme combinations from different speakers. Every sound sample contains just one consonant and one vowel, so it is effectively labeled at the phoneme level. The dataset covers 23 Persian consonants and 6 vowels, and the sound samples are all possible combinations of vowels and consonants (138 samples for each speaker). The sample rate of all speech samples is 48,000 Hz, meaning there are 48,000 samples in every second of audio. Every sound sample starts with the consonant and then continues with the vowel. On average, 0.5 second of each sample is speech and the rest is silence; each sound sample ends with silence. All sound samples are denoised with an "Adaptive noise reduction" algorithm.
Compared to the Farsdat speech dataset and the Persian speech corpus, it is easier to use because it is prepared as .mat data files. It is also organized around phoneme-based separation, and all samples are denoised.
Contents
The corpus is downloadable from its Kaggle web page, and contains the following:
.mat data files of sound samples in a 23×6×30000 matrix, in which 23 is the number of consonants, 6 is the number of vowels, and 30000 is the length of each sound sample.
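Indexing that matrix can be sketched as follows. A real file would be read with something like scipy.io.loadmat; a synthetic NumPy array stands in here, because the variable name inside the .mat files is not given above:

```python
import numpy as np

SAMPLE_RATE = 48_000                  # samples per second, per the corpus
samples = np.zeros((23, 6, 30_000))   # placeholder for the loaded matrix

consonant, vowel = 4, 2               # an arbitrary consonant-vowel pair
waveform = samples[consonant, vowel]  # one utterance: 30000 samples
duration_s = waveform.shape[0] / SAMPLE_RATE
print(f"{duration_s:.3f} s")          # -> 0.625 s
```

Each recorded utterance is therefore 0.625 s long at 48 kHz, consistent with roughly 0.5 s of speech plus trailing silence.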
See also
Comparison of datasets in machine learning
References
External links
The Kaggle page of PCVC speech dataset
PCVC Paper on ResearchGate
Datasets in machine learning
Speech recognition
Speaker recognition
Persian language
Speech synthesis |
https://en.wikipedia.org/wiki/SpaceX%20fairing%20recovery%20program | The SpaceX fairing recovery program was an experimental program by SpaceX, begun in 2017 in an effort to determine if it might be possible to economically recover and reuse expended launch vehicle payload fairings from suborbital space. The experimental program became an operational program as, by late 2020, the company was routinely recovering fairings from many flights, and by 2021 was successfully refurbishing and reflying previously flown fairings on the majority of its satellite launches.
During the early years of the program, SpaceX attempted to catch the descending payload fairings, under parachute, in a very large net on a moving ship in the Atlantic Ocean east of the Space Coast of Florida. Two former platform supply vessels—Ms.Tree, formerly known as Mr.Steven, and its sister ship, Ms.Chief—were chartered by SpaceX and used 2018–2021 as experimental platforms for recovery of rocket fairings from Falcon 9 orbital launch trajectories. These fast ships were retrofitted with large nets intended to catch fairings—and prevent the fairings from making contact with seawater—as part of an iterative development program to create technology that would eventually allow rocket payload fairings to be economically reused and reflown. Ms.Tree was used for SpaceX Falcon 9 fairing recovery experiments on a number of occasions in 2018 and early 2019, while named Mr.Steven. Ms.Tree first successfully caught a fairing on 25 June 2019 during Falcon Heavy launch 3, which carried the DoD's STP-2 mission. This was the ship's first fairing recovery voyage after its renaming, change of ownership, and net upgrade. By 2020, the program reached operational status where fairings from most Falcon 9 satellite launches were recovered, either "in the net" or from the water, and for the first time, both fairing halves of a single flight were caught in the nets of two different ships. The final fairing that was successfully caught in a net was in October 2020.
In early 2021, the nets were |
https://en.wikipedia.org/wiki/Harrison%27s%20rule | Harrison's rule is an observation in evolutionary biology by Launcelot Harrison which states that in comparisons across closely related species, host and parasite body sizes tend to covary positively.
Parasite species' body size increases with host species' body size
Launcelot Harrison, an Australian authority in zoology and parasitology, published a study in 1915 concluding that host and parasite body sizes tend to covary positively, a covariation later dubbed as 'Harrison's rule'. Harrison himself originally proposed it to interpret the variability of congeneric louse species. However, subsequent authors verified it for a wide variety of parasitic organisms including nematodes, rhizocephalan barnacles, fleas, lice, ticks, parasitic flies and mites, as well as herbivorous insects associated with specific host plants.
The variability of parasite species' body size increases with host species' body size
Robert Poulin observed that in comparisons across species, the variability of parasite body size also increases with host body size.
Greater variation is naturally expected to accompany greater mean body sizes due to an allometric power-law scaling effect. However, Poulin attributed parasites' increasing body-size variability to biological causes, so the expected increase is greater than that produced by scaling alone.
Recently, Harnos et al. applied phylogenetically controlled statistical methods to test Harrison's rule and Poulin's Increasing Variance Hypothesis in avian lice. Their results indicate that the three major families of avian lice (Ricinidae, Menoponidae, Philopteridae) follow Harrison's rule, and two of them (Menoponidae, Philopteridae) also follow Poulin's supplement to it.
Implications
The allometry between host and parasite body sizes constitutes an evident aspect of host–parasite coevolution. The slope of this relationship is a taxon-specific character. Parasites' body size is known to covary positively with fecund |
https://en.wikipedia.org/wiki/451%20Group | 451 Group is a New York City-based technology industry research firm. Through its Uptime Institute operating unit, the company provides research for data center operators. In December 2019, 451 Group sold an operating division, 451 Research, to information and analytics company S&P Global.
History
451 Group acquired Uptime Institute in 2009. The company subsequently acquired:
consumer spending research firm ChangeWave Research in 2011
events company Tech Touchstone in 2013
mobile communications research firm Yankee Group in 2013
IT professional community Wisegate in 2017.
The company's 451 Research division was acquired by S&P Global Market Intelligence on December 6, 2019, according to a company press release.
Business
Uptime Institute is the operating division of the 451 Group. It is an American professional services organization best known for its "Tier Standard" and the associated certification of data center compliance with the standard.
Founded in 1987 by Kenneth G. Brill, the Uptime Institute was established as an industry proponent to help owners and operators quantify and qualify their ability to provide a predictable level of performance from data centers, regardless of the status of external factors, such as power utilities.
451 Research
451 Research was formerly part of 451 Group. Until being acquired by S&P Global in December 2019, it was an information technology industry analyst firm, headquartered in New York with offices in London, Boston, Washington DC, and San Francisco.
The company claimed over 250 employees, over 100 industry analysts and over 1000 clients. The company produced qualitative and quantitative research across thirteen research channels, targeting service providers, technology vendors, enterprise IT leaders and financial professionals.
See also
High availability
Downtime
Uptime
References
External links
Information technology companies of the United States
Consulting firms established in 1993
International i |
https://en.wikipedia.org/wiki/TRENDnet | TRENDnet is a global manufacturer of computer networking products headquartered in Torrance, California, in the United States. It sells networking and surveillance products especially in the small to medium business (SMB) and home user market segments.
History
The company was founded in 1990 by Pei Huang and Peggy Huang.
Vulnerabilities
In September 2013, the Federal Trade Commission (FTC) brought an enforcement action against TRENDnet alleging that the company marketed its SecurView IP cameras describing them as "secure", when in fact the software allowed online viewing by anyone with the camera's IP address.
The FTC approved a final settlement with TRENDnet in February 2014. In January 2018, TRENDnet launched 4K UHD PoE surveillance cameras with covert IR LEDs.
References
External links
Electronics companies established in 1990
Networking companies of the United States
Networking hardware
Routers (computing)
Wireless networking
1990 establishments in California |
https://en.wikipedia.org/wiki/360%20video%20projection | A 360 video projection is any of many ways to map a spherical field of view to a flat image. It is used to encode and deliver the effect of a spherical, 360-degree image to viewers such as needed for 360-degree videos and for virtual reality. A 360 video projection is a specialized form of a map projection, with characteristics tuned for the efficient representation, transmission, and display of 360° fields of view.
Different projections
Equirectangular
An equirectangular projection simply maps the yaw and pitch (longitude and latitude) of a sphere linearly to a rectangular image. It produces a signature curved look. In addition, the distribution of pixel density (which can be visualized with Tissot's indicatrix) is suboptimal, with the usually more important "equator" getting the lowest density.
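The linear yaw/pitch-to-pixel mapping can be sketched in a few lines; the axis conventions chosen here are one common choice, and actual tools differ in orientation and origin:

```python
import math

def equirect_pixel(yaw_rad, pitch_rad, width, height):
    """Map yaw in [-pi, pi) and pitch in [-pi/2, pi/2] to pixel coords."""
    x = (yaw_rad + math.pi) / (2 * math.pi) * width   # longitude -> column
    y = (math.pi / 2 - pitch_rad) / math.pi * height  # latitude -> row
    return x, y

# The view centre (yaw = 0, pitch = 0) lands in the middle of the frame.
print(equirect_pixel(0.0, 0.0, 3840, 1920))  # -> (1920.0, 960.0)
```

Because every row spans the full 360° of yaw, rows near the poles pack the same pixel count into a much smaller circle of directions, which is exactly the density imbalance noted above.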
Cube Map
Cube mapping records the environment as the six faces of a cube. The image distortion is markedly reduced, especially when looking at the faces head-on. Still, the edges and corners of faces receive more pixels than the center.
Equi-Angular Cubemap (EAC)
The Equi-Angular Cubemap (EAC) projection is a variant of the cubemap that distributes the pixels evenly by angle. This keeps the density of information consistent, regardless of which direction the viewer is looking. It was detailed by Google on March 14, 2017. In January 2018 the company started using the spherical projection to stream 360 degree videos on YouTube.
GoPro adopted the EAC format in 2019 when it released the GoPro MAX. The company noted that EAC let it use 25% fewer pixels by packing the equivalent of 5376×2688 pixels into an EAC projection of 4032×2688 pixels. This projection was then split horizontally into two streams of 4032×1344 and encoded; these could be decoded by regular UHD decoders.
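The quoted saving checks out arithmetically:

```python
# GoPro's figures: equivalent equirectangular resolution vs. EAC frame.
eac_pixels = 4032 * 2688
equirect_pixels = 5376 * 2688
print(1 - eac_pixels / equirect_pixels)  # -> 0.25, i.e. 25% fewer pixels
```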
Mainstream video tools have not yet added support for EAC formats such as GoPro's .360. However, a custom fork of FFmpeg and a tool called max2sphere do enable .360 processing.
Pyramid format
|
https://en.wikipedia.org/wiki/IDA%20Indoor%20Climate%20and%20Energy | IDA Indoor Climate and Energy (IDA ICE) is a building performance simulation (BPS) software tool. IDA ICE is a simulation application for the multi-zonal and dynamic study of indoor climate phenomena as well as energy use. The implemented models are state of the art; many studies show that simulation results and measured data compare well.
User interface
The user interface of IDA ICE makes it easy to create simple cases but also offers the flexibility to go into detail for advanced studies. Many inputs are adaptable to local requirements, such as climate data, material data, system components or result reports. IDA ICE provides a 3D environment for geometry modeling; the table-based input of boundary conditions provides good visual feedback and enables efficient quality checks. A simple procedure for calculating and reporting cooling, heating, air demand, and energy, together with a built-in version-handling system, makes it efficient to compare different systems and results.
Advanced daylight calculations are achieved by interfacing the Radiance lighting simulation tool, with result visualization in the 3D environment. A module for Appendix G of ASHRAE 90.1-2010 is available; this is used, for example, in LEED and BREEAM. The integrated radiosity method with single reflection and one measuring point can be used for whole-year daylight analysis and allows modeling daylight-based control strategies (e.g., shading devices, artificial lighting).
There is also the "Early Stage Building Optimization" (ESBO) user interface which makes it possible for users to experiment with variations in both buildings and systems at an early stage with a minimum of user input. A full range of component models for renewable energy studies is available, with boreholes, stratified tanks, heat pumps, solar collectors, CHP, PV, wind turbines, etc.
An interface with OpenFOAM for detailed CFD studies is in development.
Input
IDA ICE supports IFC BIM models generated by tools such as ArchiCAD, Revi |
https://en.wikipedia.org/wiki/Glossary%20of%20Broken%20Dreams | Glossary of Broken Dreams is a 2018 Austrian/American documentary film directed by Johannes Grenzfurthner. The essayistic feature film tries to present an overview of political concepts such as freedom, privacy, identity, resistance, etc.
Grenzfurthner calls his project an "ideotaining cinematic revue" about political ideas he considers important. Grenzfurthner cites frustration about the current level of political debate as a primary influence for making the film. He couldn't tolerate "ignorant and topically abusive comments on the 'Internet' anymore." So he teamed up with writer and activist Ishan Raval to "explain, re-evaluate, and sometimes sacrifice political golden calves of discourse."
The film features performances by Amber Benson, Max Grodenchik, Jason Scott Sadofsky and others.
The film was produced by art group monochrom.
Concept
Johannes Grenzfurthner, who defines himself as a 'lumpennerd' in the film's intro, functions as a storyteller and host who guides the viewer through the narrative. Glossary of Broken Dreams does not make use of classic documentary-style interviews and aesthetics. Instead, the film presents different political and philosophical concepts in the form of short films and essayistic chapters featuring fictional characters. These bizarre and exaggerated characters are performed by actors, voice performers and musicians (for example FM4's Hannes Duscher and Roland Gratzer). The film can be seen in the tradition of reflexive documentary films and performative documentary films.
In an interview with Film Threat, Grenzfurthner describes his project as follows:
It's a peculiar film for nerds of a peculiar set of interests, but at the same time it's talking about topics that are so goddamn important that more people should know about it. I guess that's why I made it. No idea if there is even a target audience for it, but one can try. There is a lot to process. And cat meme drunk masses will probably not even scratch the surface. But my |
https://en.wikipedia.org/wiki/ROCm | ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), heterogeneous computing. It offers several programming models: HIP (GPU-kernel-based programming), OpenMP/Message Passing Interface (MPI) (directive-based programming), OpenCL.
ROCm is free, libre and open-source software (except the GPU firmware blobs) and is distributed under various licenses. ROCm is short for Radeon Open Compute platform.
Background
The first GPGPU software stack from ATI/AMD was Close to Metal, which became Stream.
ROCm was launched around 2016 with the Boltzmann Initiative. ROCm stack builds upon previous AMD GPU stacks, some tools trace back to GPUOpen, others to the Heterogeneous System Architecture (HSA).
Heterogeneous System Architecture Intermediate Language
HSAIL was aimed at producing a middle-level, hardware-agnostic intermediate representation, that could be JIT-compiled to the eventual hardware (GPU, FPGA...) using the appropriate finalizer. This approach was dropped for ROCm: now it builds only GPU code, using LLVM, and its AMDGPU backend that was upstreamed, although there is still research on such enhanced modularity with LLVM MLIR.
Programming abilities
ROCm as a stack ranges from the kernel driver to the end-user applications.
AMD has introductory videos about AMD GCN hardware, and ROCm programming via its learning portal.
One of the best technical introductions to the stack and to ROCm/HIP programming remains, to date, to be found on Reddit.
High-level programming
HIP programming
HIP (HCC) kernel language
Memory allocation
NUMA
Heterogeneous Memory Model and Shared Virtual Memory
ROCm code objects
Compute/Graphics interop
Low-level programming
Hardware support
ROCm is primarily targeted at discrete professional GPUs, but unofficial support includes Vega-family and RDNA 2 consumer GPU |
https://en.wikipedia.org/wiki/IBM%204765 | The IBM 4765 PCIe Cryptographic Coprocessor is a hardware security module (HSM) that includes a secure cryptoprocessor implemented on a high-security, tamper resistant, programmable PCIe board. Specialized cryptographic electronics, microprocessor, memory, and random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed.
The IBM 4765 is validated to FIPS PUB 140-2 Level 4, the highest level of certification achievable for commercial cryptographic devices. The IBM 4765 data sheet describes the coprocessor in detail.
IBM supplies two cryptographic-system implementations:
The PKCS#11 implementation creates a high-security solution for application programs developed for this industry-standard API.
The IBM Common Cryptographic Architecture (CCA) implementation provides many functions of special interest in the finance industry, extensive support for distributed key management, and a base on which custom processing and cryptographic functions can be added.
Toolkits for custom application development are also available.
Applications may include financial PIN transactions, bank-to-clearing-house transactions, EMV transactions for integrated circuit (chip) based credit cards, and general-purpose cryptographic applications using symmetric key algorithms, hashing algorithms, and public key algorithms.
The operational keys (symmetric or RSA private) are generated in the coprocessor and are then saved either in a keystore file or in application memory, encrypted under the master key of that coprocessor. Any coprocessor with an identical master key can use those keys.
Supported systems
IBM supports the 4765 on IBM Z, IBM POWER Systems, and IBM-approved x86 servers (Linux or Microsoft Windows).
IBM Z: Crypto Express4S (CEX4S) / Crypto Express3C (CEX3C) - feature code 0865
IBM POWER systems: feature codes EJ27, EJ28, and EJ29
x86: Machine type-model 4765-001
History
|
https://en.wikipedia.org/wiki/InstantTV | InstantTV is a cloud software digital video recorder (DVR) operated by RecordTV Pte Ltd based in Singapore. The company was founded by Carlos Nicholas Fernandes in 2007 and previously offered services as RecordTV.com.
History
RecordTV.com was originally a US-based company that provided cloud-based recording of any and all cable TV channels its founder, David Simon, had subscribed to, by any user on the Internet. The MPAA sued Simon for copyright infringement. Simon initially hired Ira Rothken to defend against the litigation, but eventually gave up, settled and decided to sell its assets.
Launch in Singapore
Fernandes purchased the assets of RecordTV.com from Simon, but then invented (and patented) "A System and Method for recording television and/or radio programmes via the Internet". Among other things, the RecordTV.com service was restricted to Singapore users alone, and users were able to record only Singapore Free-to-Air content, which was broadcast by Singapore's state-owned broadcaster, MediaCorp. Shortly thereafter, on 24 July 2007 and 27 September 2007, RecordTV.com received two cease and desist letters from MediaCorp alleging infringement.
RecordTV Pte Ltd vs. MediaCorp litigation
RecordTV Pte Ltd refused to comply with the demand to shut down its website. After the first cease and desist letter from MediaCorp, RecordTV's lawyers wrote back, alleging that MediaCorp's move was "calculated to stifle innovation and the growth of a new industry" according to an article by Singapore Press Holdings that was republished. After the second, September cease and desist letter was received, RecordTV preemptively sued MediaCorp for groundless threats of copyright infringement proceedings, claimed S$30.5 million in damages and continued to operate its website.
One of the core legal issues that arose was whether RecordTV.com was recording content on behalf of its users or whether the users were using RecordTV.com to record TV shows they would otherwise be entitled to |
https://en.wikipedia.org/wiki/Korean%20Biology%20Olympiad | The Korean Biology Olympiad (KBO) is a biology olympiad held by Korean Biology Educational Society. The top four finalists become eligible to join the International Biology Olympiad.
See also
List of biology awards
References
Biology awards
Education competitions in South Korea |
https://en.wikipedia.org/wiki/Distributed-element%20circuit | Distributed-element circuits are electrical circuits composed of lengths of transmission lines or other distributed components. These circuits perform the same functions as conventional circuits composed of passive components, such as capacitors, inductors, and transformers. They are used mostly at microwave frequencies, where conventional components are difficult (or impossible) to implement.
Conventional circuits consist of individual components manufactured separately then connected together with a conducting medium. Distributed-element circuits are built by forming the medium itself into specific patterns. A major advantage of distributed-element circuits is that they can be produced cheaply as a printed circuit board for consumer products, such as satellite television. They are also made in coaxial and waveguide formats for applications such as radar, satellite communication, and microwave links.
A phenomenon commonly used in distributed-element circuits is that a length of transmission line can be made to behave as a resonator. Distributed-element components which do this include stubs, coupled lines, and cascaded lines. Circuits built from these components include filters, power dividers, directional couplers, and circulators.
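As a concrete instance of a line section acting as a resonator, a stub resonates when its length is a quarter of the wavelength on the line; the sketch below uses the standard transmission-line relation f = v / (4L), with illustrative values (the function name and example dimensions are ours, not from the article):

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_resonance_hz(length_m, eff_permittivity=1.0):
    """Fundamental resonance of a quarter-wave stub of the given length."""
    v = C0 / eff_permittivity ** 0.5  # phase velocity on the line
    return v / (4 * length_m)

# A 7.5 mm air-dielectric stub resonates near 10 GHz.
print(f"{quarter_wave_resonance_hz(0.0075) / 1e9:.2f} GHz")
```

A higher effective permittivity slows the wave and lowers the resonant frequency for the same physical length, which is why printed circuit substrates shrink distributed-element components.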
Distributed-element circuits were studied during the 1920s and 1930s but did not become important until World War II, when they were used in radar. After the war their use was limited to military, space, and broadcasting infrastructure, but improvements in materials science in the field soon led to broader applications. They can now be found in domestic products such as satellite dishes and mobile phones.
Circuit modelling
Distributed-element circuits are designed with the distributed-element model, an alternative to the lumped-element model in which the passive electrical elements of electrical resistance, capacitance and inductance are assumed to be "lumped" at one point in space in a resistor, capacitor or inductor, respe |
https://en.wikipedia.org/wiki/Extensive%20category | In mathematics, an extensive category is a category C with finite coproducts that are disjoint and well-behaved with respect to pullbacks. Equivalently, C is extensive if the coproduct functor from the product of the slice categories C/X × C/Y to the slice category C/(X + Y) is an equivalence of categories for all objects X and Y of C.
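Written out, the functor in question takes a pair of objects over X and over Y to their coproduct over X + Y:

```latex
\begin{aligned}
  +\colon C/X \times C/Y &\longrightarrow C/(X+Y),\\
  \bigl(A \to X,\ B \to Y\bigr) &\longmapsto \bigl(A + B \to X + Y\bigr),
\end{aligned}
```

and C is extensive precisely when this functor is an equivalence of categories for every pair of objects X and Y.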
Examples
The categories Set and Top of sets and topological spaces, respectively, are extensive categories. More generally, the category of presheaves on any small category is extensive.
The category CRingop of affine schemes is extensive.
References
External links
Category theory |
https://en.wikipedia.org/wiki/Paul%20Benioff | Paul Anthony Benioff (May 1, 1930 – March 29, 2022) was an American physicist who helped pioneer the field of quantum computing. Benioff was best known for his research in quantum information theory during the 1970s and 80s that demonstrated the theoretical possibility of quantum computers by describing the first quantum mechanical model of a computer. In this work, Benioff showed that a computer could operate under the laws of quantum mechanics by describing a Schrödinger equation description of Turing machines. Benioff's body of work in quantum information theory encompassed quantum computers, quantum robots, and the relationship between foundations in logic, math, and physics.
Early life and education
Benioff was born on May 1, 1930, in Pasadena, California. His father, Hugo Benioff, was a professor of seismology at the California Institute of Technology, and his mother, Alice Pauline Silverman, received a master's degree in English from the University of California, Berkeley. He married Hannelore Benioff.
Benioff also attended Berkeley, where he earned an undergraduate degree in botany in 1951. After a two-year stint working in nuclear chemistry for Tracerlab, he returned to Berkeley. In 1959, he obtained his PhD in nuclear chemistry.
Career
In 1960, Benioff spent a year at the Weizmann Institute of Science in Israel as a postdoctoral fellow. He then spent six months at the Niels Bohr Institute in Copenhagen as a Ford Fellow. In 1961, he began a long career at Argonne National Laboratory, first with its Chemistry Division and later in 1978 in the lab's Environmental Impact Division. Benioff remained at Argonne until he retired in 1995. He continued to conduct research at the laboratory as a post-retirement emeritus scientist for the Physics Division until his death.
In addition, Benioff taught the foundations of quantum mechanics as a visiting professor at Tel Aviv University in 1979, and he worked as a visiting scientist at CNRS Marseilles in 1979 and 1 |
https://en.wikipedia.org/wiki/ACM%20Doctoral%20Dissertation%20Award | The ACM Doctoral Dissertation Award is awarded annually by the Association for Computing Machinery to the authors of the best doctoral dissertations in computer science and computer engineering. The award is accompanied by a prize of US$20,000 and winning dissertations are published in the ACM Digital Library. Honorable mentions are awarded $10,000. Financial support is provided by Google. The number of awarded dissertations may vary year-to-year.
ACM also awards the ACM India Doctoral Dissertation Award. Several Special Interest Groups (SIGs) award a Doctoral Dissertation Award.
Recipients
See also
List of computer science awards
List of engineering awards
References
External links
ACM Doctoral Dissertation Award Winners on acm.org
ACM Doctoral Dissertation Awards with affiliations
Theoretical computer science
Computer science awards |
https://en.wikipedia.org/wiki/Tolerogenic%20dendritic%20cell | Tolerogenic dendritic cells (also known as tol-DCs, tDCs, or DCregs) are a heterogeneous pool of dendritic cells with immunosuppressive properties that prime the immune system into a tolerogenic state against various antigens. These tolerogenic effects are mostly mediated through the regulation of T cells, such as the induction of T cell anergy, T cell apoptosis, and Tregs. Tol-DCs also shift the local microenvironment toward a tolerogenic state by producing anti-inflammatory cytokines.
Tol-DCs are not lineage-specific, and their immunosuppressive function is due to their state of activation and/or differentiation. In general, the properties of all types of dendritic cells can be strongly affected by the local microenvironment, such as the presence of pro- or anti-inflammatory cytokines; the tolerogenic properties of tol-DCs are therefore often context-dependent and can eventually even be overridden into a pro-inflammatory phenotype.
Tolerogenic DCs present a potential strategy for the treatment of autoimmune diseases, allergic diseases, and transplant rejection. Moreover, antigen-specific tolerance in humans can be induced in vivo via vaccination with antigen-pulsed, ex vivo-generated tolerogenic DCs. For that reason, tolerogenic DCs are a promising therapeutic tool.
Dendritic cells
Dendritic cells (DCs) were first discovered and described in 1973 by Ralph M. Steinman. They represent a bridge between innate and adaptive immunity and play a key role in the regulation of the initiation of immune responses. DCs populate almost all body surfaces. They do not kill pathogens directly; instead, they take up antigens and degrade them to peptides through their proteolytic activity. They then present these peptides in complexes together with their MHC molecules on their cell surface. DCs are also the only cell type which can activate naïve T cells and induce antigen-specific immune responses.
Their role is therefore crucially important in the balance between tolerance and immune response.
Tolerogenic dendritic |
https://en.wikipedia.org/wiki/Medical%20Devices%20Park%2C%20Hyderabad | Medical Devices Park, Hyderabad is a medical devices industrial estate located in Hyderabad, Telangana, India. It is the largest such park in India, spread over 250 acres. The dedicated park's ecosystem supports medical technology innovation and manufacturing.
History
The Park was inaugurated on 17 June 2017 near Hyderabad at Sultanpur in Patancheru of Sangareddy district by the Minister for Industries, K. T. Rama Rao.
References
2017 establishments in Telangana
Economy of Hyderabad, India
Economy of Telangana
Industries in Hyderabad, India
Industrial parks in India
Science parks in India
Research institutes in Hyderabad, India
High-technology business districts in India
Biotechnology |
https://en.wikipedia.org/wiki/Solar%20Decathlon%20Africa | The Solar Decathlon AFRICA is an international competition that challenges collegiate teams to design and build houses powered exclusively by the sun. The winner of the competition is the team that is able to score the most points in ten contests.
On November 15, 2016, the Moroccan Ministry of Energy, Mines, Water, and Sustainable development; the Moroccan Research Institute in Solar Energy and New Energies (IRESEN); and the United States Department of Energy signed a memorandum of understanding to collaborate on the development of Solar Decathlon Africa, a competition that will integrate unique local and regional characteristics while following the philosophy, principles, and model of the U.S. Department of Energy Solar Decathlon. The competition is planned for September 2019.
This competition takes place during even years, alternating with the U.S.-based competition, Solar Decathlon by agreement between the United States and Moroccan governments.
Solar Decathlon Africa 2019
The 2019 edition of the Solar Decathlon Africa will take place in Ben Guerir, Morocco.
Participants:
Contests
Architecture
Engineering and Construction
Market Appeal
Comfort Conditions
Appliances
Sustainability
Home life and entertainment
Communication and Social Awareness
Electrical Energy and Balance
Innovation
See also
Solar Decathlon
Solar Decathlon China
Solar Decathlon Europe
Solar Decathlon Middle East
Solar Decathlon Latin America and Caribbean
References
External links
Solar Decathlon Africa Main Website
U.S. Department of Energy Solar Decathlon
Solar Decathlon Europe 2019
Solar Decathlon Europe 2014
Solar Decathlon Latin America and Caribbean
Solar Decathlon Middle East
Solar Decathlon China
Sustainable building
Building engineering
Sustainable architecture
Low-energy building
Energy conservation
Sustainable building in Africa
Solar Decathlon |
https://en.wikipedia.org/wiki/Singularity%20%28software%29 | Singularity is a free and open-source computer program that performs operating-system-level virtualization, also known as containerization.
One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world.
The need for reproducibility requires the ability to use containers to move applications from system to system.
Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.
In 2021 the Singularity open source project split into two projects called Apptainer and SingularityCE.
History
Singularity began as an open-source project in 2015, when a team of researchers at Lawrence Berkeley National Laboratory, led by Gregory Kurtzer, developed the initial version written in the C programming language and released it under the BSD license.
By the end of 2016, many developers from different research facilities joined forces with the team at Lawrence Berkeley National Laboratory to further the development of Singularity.
Singularity quickly attracted the attention of computing-heavy scientific institutions worldwide:
Stanford University Research Computing Center deployed Singularity on their XStream and Sherlock clusters
National Institutes of Health installed Singularity on Biowulf, their 95,000+ core/30 PB Linux cluster
Various sites of the Open Science Grid Consortium including Fermilab started adopting Singularity; by April 2017, Singularity was deployed on 60% of the Open Science Grid network.
For two years in a row, in 2016 and 2017, Singularity was recognized by HPCwire editors as "One of five new technologies to watch".
In 2017 Singularity also won the first place for the category "Best HPC Programming Tool or Technology".
Based on the data entered on a voluntary basis in a public registry, the Singularity user base was estimated to be greate |
https://en.wikipedia.org/wiki/Profinite%20word | In mathematics, more precisely in formal language theory, the profinite words are a generalization of the notion of finite words into a complete topological space. This notion allows the use of topology to study languages and finite semigroups. For example, profinite words are used to give an alternative characterization of the algebraic notion of a variety of finite semigroups.
Definition
Let A be an alphabet. The set of profinite words over A consists of the completion of a metric space whose domain is the set of words over A. The distance used to define the metric is given using a notion of separation of words. Those notions are now defined.
Separation
Let M and N be monoids, and let p and q be elements of the monoid M. Let φ be a morphism of monoids from M to N. It is said that the morphism φ separates p and q if φ(p) ≠ φ(q). For example, the morphism sending a word to the parity of its length separates the words ababa and abaa. Indeed, their lengths, 5 and 4, do not have the same parity.
It is said that N separates p and q if there exists a morphism of monoids φ from M to N that separates p and q. Using the previous example, the two-element group Z/2Z separates ababa and abaa. More generally, Z/nZ separates any two words whose lengths are not congruent modulo n. In general, any two distinct words can be separated, using the monoid whose elements are the factors of p plus a fresh element 0. The morphism sends prefixes of p to themselves and everything else to 0.
Distance
The distance between two distinct words p and q is defined as the inverse of the size of the smallest monoid N separating p and q. Thus, the distance between ababa and abaa is 1/2. The distance of p to itself is defined as 0.
This distance d is an ultrametric, that is, d(p, r) ≤ max(d(p, q), d(q, r)). Furthermore, it satisfies d(pr, qr) ≤ d(p, q) and d(rp, rq) ≤ d(p, q).
Since any word p can be separated from any other word using a monoid with |p|+1 elements, where |p| is the length of p, it follows that the distance between p and any other word is at least 1/(|p|+1). Thus the topology defined by this metric is discrete.
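The separation and distance definitions above can be illustrated with a short sketch (the helper names `parity` and `separates` are ours, chosen for this example):

```python
def parity(word: str) -> int:
    """Morphism from A* onto Z/2Z: a word maps to the parity of its length."""
    return len(word) % 2

def separates(phi, p: str, q: str) -> bool:
    """A monoid morphism phi separates p and q when phi(p) != phi(q)."""
    return phi(p) != phi(q)

# ababa (length 5) and abaa (length 4) have different parities,
# so the two-element group Z/2Z separates them.
assert separates(parity, "ababa", "abaa")

# No one-element monoid separates anything, so Z/2Z is a smallest
# separating monoid; under the definition above, d(ababa, abaa) = 1/2.
d_ababa_abaa = 1 / 2
```

Words of the same length, such as aa and bb, are not separated by this particular morphism; a larger monoid is needed, which is exactly why their distance is smaller.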
Profinite topology
The profinite completion of , de |
https://en.wikipedia.org/wiki/AggregateIQ | AggregateIQ (AIQ), previously known as SCL Canada, is a Canadian political consultancy and technology company based in Victoria, British Columbia.
History
AIQ was founded in 2013 by Zack Massingham, a former university administrator, and Jeff Silvester. As of February 2017, AIQ employed 20 people and was based in downtown Victoria, British Columbia.
AIQ has attracted controversy over its involvement in the Vote Leave and BeLeave campaigns in 2016 and the Cambridge Analytica scandal that broke out in 2018.
Two years after the Brexit vote in 2016, it was revealed that AggregateIQ had been paid £3.5 million by four pro-Brexit campaigning groups - Vote Leave, BeLeave, Veterans for Britain, and Northern Ireland's Democratic Unionist Party - to design software aimed at aggregating personal data and influencing voters through messaging on social media. Under UK law, co-ordination between groups during an election is prohibited. In May 2018, a Facebook executive testified before the House of Commons Select Committee for Digital, Culture, Media and Sport that Vote Leave and BeLeave were targeting exactly the same audiences on Facebook via AIQ.
Prior to the Brexit campaign, AIQ had worked with John Bolton before he became Donald Trump's national security adviser, and with US senators Thom Tillis and Ted Cruz on their senatorial campaigns. As part of Cambridge Analytica's work for the Cruz campaign, AIQ created Ripon, a customized campaign software platform that became the prototype used by pro-Brexit campaign groups, including VoteLeave and BeLeave.
On 6 April 2018, Facebook suspended AggregateIQ from its platform due to concerns over its possible affiliation with SCL Group, the parent company of Cambridge Analytica. Facebook stated, "In light of recent reports that AggregateIQ may be affiliated with SCL and may, as a result, have improperly received FB user data, we have added them to the list of entities we have suspended from our platform while we investigate."
On 2 |
https://en.wikipedia.org/wiki/Okta%2C%20Inc. | Okta, Inc. (formerly Saasure Inc.) is an American identity and access management company based in San Francisco. It provides cloud software that helps companies manage and secure user authentication into applications, and helps developers build identity controls into applications, websites, web services, and devices. It was founded in 2009 and had its initial public offering in 2017, being valued at over $6 billion.
Products and services
Okta sells 10 products, including Single Sign-On, Universal Directory, Advanced Server Access (formerly ScaleFT), API Access Management, Authentication, User Management, B2B Integration, Multi-factor Authentication, Lifecycle Management, and Access Gateway.
Okta sells six services, including a single sign-on service that allows users to log into a variety of systems using one centralized process. For example, the company claims the ability to log into Gmail, Workday, Salesforce and Slack with one login. It also offers API authentication services.
Okta's services are built on top of the Amazon Web Services cloud.
Okta primarily targets enterprise businesses. Claimed customers as of 2020 include Zoominfo, JetBlue, Nordstrom, MGM Resorts International, and the U.S. Department of Justice.
Okta runs an annual “Oktane” user conference, which in 2018 featured former US President Barack Obama as a keynote speaker.
Operations
Okta is headquartered in San Francisco. It also has offices in San Jose, Bellevue, Toronto, Washington D.C., Chicago, Bengaluru, London, Amsterdam, Sydney, Paris, and Stockholm.
History
Okta was co-founded in 2009 by Todd McKinnon and Frederic Kerrest, who previously worked together at Salesforce.
In 2015, the company raised US$75 million in venture capital from Andreessen Horowitz, Greylock Partners, and Sequoia Capital, at a total initial valuation of US$1.2 billion.
In 2017, Okta's initial public offering priced at $17.00 per share, trading up on its first day, to raise an additional US$187 million. At |
https://en.wikipedia.org/wiki/Tenosynovial%20giant%20cell%20tumor | Tenosynovial giant cell tumor (TGCT) is a group of rare, typically non-malignant tumors of the joints. TGCT tumors often develop from the lining of joints (also known as synovial tissue).
Common symptoms of TGCT include swelling, pain, stiffness and reduced mobility in the affected joint or limb. This group of tumors can be divided into different subsets according to their site, growth pattern, and prognosis. Localized TGCT is sometimes referred to as giant cell tumor of the tendon sheath; diffuse TGCT is also called pigmented villonodular synovitis (PVNS).
Classification
Classification for TGCT encompasses two subtypes that can be divided according to site – within a joint (intra-articular) or outside of the joint (extra-articular) – and growth pattern (localized or diffuse) of the tumor(s). Localized and diffuse subsets of TGCT differ in their prognosis, clinical presentation, and biological behavior, but share a similar manner of disease development.
Localized TGCT
Localized TGCT is sometimes referred to as localized pigmented villonodular synovitis (L-PVNS), giant cell tumor of the tendon sheath (GCT-TS), nodular tenosynovitis, localized nodular tenosynovitis, and L-TGCT.
The localized form of TGCT is more common. Localized TGCT tumors are typically small (0.5 cm–4 cm), develop over years, are benign and non-destructive to the surrounding tissue, and may recur in the affected area. The most common symptom is painless swelling. Localized TGCT most often occurs in the fingers, but can also occur in other joints.
Diffuse TGCT
Diffuse TGCT is sometimes referred to as pigmented villonodular synovitis (PVNS), conventional PVNS, and D-TGCT.
Diffuse TGCT occurs less frequently and is locally aggressive (in some cases, tumors may infiltrate surrounding soft tissue). It most commonly affects people under 40 years old, though the age of occurrence varies. Diffuse TGCT may occur inside a joint (intra-articular) or outside of a joint (extra-articular). Intra-articular tumors |
https://en.wikipedia.org/wiki/Multi-focus%20image%20fusion | Multi-focus image fusion is a multiple image compression technique using input images with different focus depths to make one output image that preserves all information.
Overview
In recent years, image fusion has been used in many applications such as remote sensing, surveillance, medical diagnosis, and photography applications.
Two major applications of image fusion in photography are fusion of multi-focus images and multi-exposure images.
The main idea of image fusion is gathering the important and essential information from the input images into one single image which ideally contains all of the information of the input images. The research history of image fusion spans over 30 years and many scientific papers. Image fusion generally has two aspects: image fusion methods and objective evaluation metrics.
In visual sensor networks (VSN), the sensors are cameras which record images and video sequences. In many applications of VSN, a camera cannot give a perfect illustration including all details of the scene, because of the limited depth of field of its optical lens. Therefore, only the objects located at the focal distance of the camera are focused and clear, while other parts of the image are blurred.
VSN captures images with different depths of focus using several cameras. Due to the large amount of data generated by cameras compared to other sensors such as pressure and temperature sensors and some limitations of bandwidth, energy consumption and processing time, it is essential to process the local input images to decrease the amount of transmitted data.
Much research on multi-focus image fusion has been done in recent years and can be classified into two categories: transform and spatial domains. Commonly used transforms for image fusion are Discrete cosine transform (DCT) and Multi-Scale Transform (MST). Recently, Deep learning (DL) has been thriving in several image processing and computer vision applications.
Multi-Focus image fusion in the spat |
https://en.wikipedia.org/wiki/DNS%20over%20TLS | DNS over TLS (DoT) is a network security protocol for encrypting and wrapping Domain Name System (DNS) queries and answers via the Transport Layer Security (TLS) protocol. The goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data via man-in-the-middle attacks. The well-known port number for DoT is 853.
While DNS-over-TLS is applicable to any DNS transaction, it was first standardized for use between stub or forwarding resolvers and recursive resolvers, in May 2016. Subsequent IETF efforts specify the use of DoT between recursive and authoritative servers ("Authoritative DNS-over-TLS" or "ADoT") and a related implementation between authoritative servers (Zone Transfer-over-TLS or "xfr-over-TLS").
Server software
BIND supports DoT connections as of version 9.17. Earlier versions offered DoT capability by proxying through stunnel. Unbound has supported DNS over TLS since 22 January 2023. Unwind has supported DoT since 29 January 2023. With Android Pie's support for DNS over TLS, some ad blockers now support using the encrypted protocol as a relatively easy way to access their services versus any of the various work-around methods typically used such as VPNs and proxy servers.
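To make the forwarding setup concrete, here is a minimal Unbound configuration sketch for sending all queries upstream over TLS on port 853; the upstream address, authentication name, and certificate-bundle path are illustrative examples, not recommendations, and the exact options available depend on the Unbound version.

```
server:
    # CA bundle used to validate the upstream resolver's TLS certificate
    # (path varies by distribution)
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."                  # forward everything
    forward-tls-upstream: yes  # use DoT to the upstream resolver
    # IP@port#authentication-name of a DoT-capable upstream (example)
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
```

The `#authentication-name` suffix lets Unbound verify that the certificate presented on port 853 matches the expected hostname, which is what defends against the man-in-the-middle attacks described above.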
Client software
Android clients running Android 9 (Pie) or newer support DNS over TLS and will use it by default if the network infrastructure, for example the ISP, supports it.
In April 2018, Google announced that Android Pie will include support for DNS over TLS, allowing users to set a DNS server phone-wide on both Wi-Fi and mobile connections, an option that was historically only possible on rooted devices. DNSDist, from PowerDNS, also announced support for DNS over TLS in version 1.3.0.
Linux and Windows users can use DNS over TLS as a client through the NLnet Labs stubby daemon or Knot Resolver. Alternatively they may install getdns-utils to use DoT directly with the getdns_query tool. The unbound DNS resolver by NL |
https://en.wikipedia.org/wiki/Garfield%20Electronics%20Doctor%20Click | The Doctor Click is a rhythm controller manufactured by the American company Garfield Electronics. It was released in 1982.
In the pre-MIDI era, the Doctor Click enabled various different synthesizers and drum machines to communicate with each other.
It features two independent channels.
There are also footswitch inputs for Play, Reset and Enter.
Modulation
The unit features modulation control for VCO, VCF and VCA sections of synthesizers.
Design
The unit is constructed in a metal case, uses microswitches and a screen for reading the tempo.
Programming
The device can be step-programmed by selecting Timebase 12 and using the Step button to enter the step count for each note.
References
Electronic dance music
Audio engineering |
https://en.wikipedia.org/wiki/Flatiron%20School | Flatiron School is an educational organization founded in 2012 by Adam Enbar and Avi Flombaum. The organization is based in New York City and teaches software engineering, computer programming, data science, product design, and cybersecurity engineering. In 2017, the company was sued for making false statements about the earning potential of its graduates. It was acquired by WeWork in 2017 and sold to Carrick Capital Partners in 2020.
History
Flatiron School was founded in 2012 by Adam Enbar and Avi Flombaum.
In 2017, the New York State Attorney General sued Flatiron School for operating without a license and making false statements about the earning potential of its graduates. The two parties reached a $375,000 settlement. Flatiron School claimed a 98.5% employment rate but this included apprentices and freelance workers, while the claimed average salary of $74,447 only included graduates in full-time employment.
In 2018, Yale University announced a collaboration with the Flatiron School during Yale's "Summer Session" — together, the institutions offered a Web Development Bootcamp for Summer 2019, which offered two Yale College credits for students.
The organization has made efforts to promote parity in tech, working with other companies to sponsor course scholarships for women, LGBTQ+ people, and members of underserved communities.
Takeovers and acquisitions
Flatiron School was acquired by WeWork, a collaborative workspace company, in October 2017. Following the acquisition, they launched Access Labs, a joint effort to make tech education accessible to low-income earners in New York. In August 2018, Flatiron School acquired Designation, a Chicago-based UX/UI design school, and expanded design courses elsewhere in December 2018.
Since being acquired by WeWork, the company has expanded, opening campuses in Atlanta, Austin, Chicago, Dallas, Denver, Houston, London, San Francisco, Seattle, and Washington, D.C.
In 2020, WeWork sold Flatiron School to Carrick C |
https://en.wikipedia.org/wiki/Solar%20Decathlon%20China | The Solar Decathlon China (SDC) is a cooperative student competition in China focused on the design and construction of sustainable housing. It was instituted in 2011 during the Strategic Economic Dialogue between China and the United States. Competitions took place in 2013, 2018 and 2022.
Solar Decathlon China 2018
The 2018 edition took place in Dezhou, in the province of Shandong.
The top finishers were:
South China University of Technology and Polytechnic University of Turin
Tsinghua University
There was a tie for third place, with Southeast University and Technical University of Braunschweig on par with Shandong University, Xiamen University, National Institute of Applied Sciences of Rennes, University of Rennes 1/Superior School of Engineering of Rennes, University of Rennes 2/Institute of Management and Urbanism of Rennes, High School Joliot Curie of Rennes, Technical School of Compagnons du Devoir of Rennes, European Academy of Art in Brittany (EESAB), National School of Architecture of Brittany
The other participating teams were:
Beijing Jiaotong University
The University of Hong Kong
College of Management Academic Studies (COMAS), Afeka College
McGill University and Concordia University
New Jersey Institute of Technology and Fujian University of Technology
Indian Institute of Technology Bombay
Shenyang Institute of Engineering
Shanghai Jiaotong University and University of Illinois at Urbana-Champaign
Hunan University
Seoul National University, Sung Kyun Kwan University, AJOU University
Shanghai University of Engineering Science
Tongji University and Technical University of Darmstadt
University of Toronto, Ryerson University, Seneca College
University of Nottingham, Ningbo, China
Xi'an University of Architecture and Technology
Xi'an Jiaotong University and Western New England University
Yantai University and Illinois Institute of Technology
Istanbul Technical Univer |
https://en.wikipedia.org/wiki/SEA-PHAGES | SEA-PHAGES stands for Science Education Alliance-Phage Hunters Advancing Genomics and Evolutionary Science; it was formerly called the National Genomics Research Initiative. This was the first initiative launched by the Howard Hughes Medical Institute (HHMI) Science Education Alliance (SEA) by their director Tuajuanda C. Jordan in 2008 to improve the retention of Science, technology, engineering, and mathematics (STEM) students. SEA-PHAGES is a two-semester undergraduate research program administered by the University of Pittsburgh's Graham Hatfull's group and the Howard Hughes Medical Institute's Science Education Division. Students from over 100 universities nationwide engage in authentic individual research that includes a wet-bench laboratory and a bioinformatics component.
Curriculum
During the first semester of this program, classes of around 18-24 undergraduate students work under the supervision of one or two university faculty members and a graduate student assistant—who have completed two week-long training workshops—to isolate and characterize their own personal bacteriophage that infects a specific bacterial host cell from local soil samples. Once students have successfully isolated a phage, they are able to classify them by visualizing them through Electron microscope (EM) images. Also, DNA is extracted and purified by the students, and one sample is sent for sequencing to be ready for the second semester's curriculum.
The second semester consists of the annotation of the genome the class sent to be sequenced. In that case, students work together to evaluate the genes for start-stop coordinates, ribosome-binding sites, and possible functions of those proteins in which the sequence codes. Once the annotation is completed, it is submitted to the National Center for Biotechnology Information's (NCBI) DNA sequence database GenBank. If there is still time in the semester or the sent DNA was not able to be sequenced, the class could request genome file fro |
https://en.wikipedia.org/wiki/IonQ | IonQ is a quantum computing hardware and software company based in College Park, Maryland. They are developing a general-purpose trapped ion quantum computer and software to generate, optimize, and execute quantum circuits.
History
IonQ was co-founded by Christopher Monroe and Jungsang Kim, professors at the University of Maryland and Duke University, respectively, in 2015, with the help of Harry Weller and Andrew Schoen, partners at venture firm New Enterprise Associates.
The company is an offshoot of the co-founders’ 25 years of academic research in quantum information science. Monroe's quantum computing research began as a Staff Researcher at the National Institute of Standards and Technology (NIST) with Nobel-laureate physicist David Wineland where he led a team using trapped ions to produce the first controllable qubits and the first controllable quantum logic gate, culminating in a proposed architecture for a large-scale trapped ion computer.
Kim and Monroe began collaborating formally as a result of larger research initiatives funded by the Intelligence Advanced Research Projects Activity (IARPA). They wrote a review paper for Science Magazine entitled Scaling the Ion Trap Quantum Processor, pairing Monroe's research in trapped ions with Kim’s focus on scalable quantum information processing and quantum communication hardware.
This research partnership became the seed for IonQ’s founding. In 2015, New Enterprise Associates invested $2 million to commercialize the technology Monroe and Kim proposed in their Science paper.
In 2016, they brought on David Moehring from IARPA—where he was in charge of several quantum computing initiatives—to be the company’s chief executive. In 2017, they raised a $20 million series B, led by GV (formerly Google Ventures) and New Enterprise Associates, the first investment GV has made in quantum computing technology. They began hiring in earnest in 2017, with the intent to bring an offering to market by late 2018.
In May 2019 |
https://en.wikipedia.org/wiki/Cardinal%20tree | A cardinal tree (or trie) of degree k, by analogy with cardinal numbers and by opposition with ordinal trees, is a rooted tree in which each node has k positions for an edge to a child. Each node has up to k children and each child of a given node is labeled by a unique integer from the set {1, 2, . . . , k}. For instance, a binary tree is a cardinal tree of degree 2.
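The definition can be sketched as a node holding a fixed array of k child slots, so that a child occupies a specific labeled position and other positions may stay empty (class and method names here are illustrative):

```python
class CardinalNode:
    """Node of a cardinal tree of degree k: a fixed array of k child
    positions, each of which is either empty (None) or holds a child."""

    def __init__(self, k: int, value=None):
        self.value = value
        self.children = [None] * k  # positions 1..k stored at indices 0..k-1

    def set_child(self, i: int, node: "CardinalNode") -> None:
        """Attach a child at position i, with i in {1, ..., k}."""
        self.children[i - 1] = node

# A binary tree is the k = 2 case: position 1 is the left child,
# position 2 the right; here only the right child is present.
root = CardinalNode(2, "root")
root.set_child(2, CardinalNode(2, "right-only"))
```

The point of the fixed slots is that "a node with only a right child" and "a node with only a left child" are distinct shapes, which is exactly what distinguishes cardinal from ordinal trees.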
References
Data types
Knowledge representation
Abstract data types |
https://en.wikipedia.org/wiki/Ordinal%20tree | An ordinal tree, by analogy with an ordinal number, is a rooted tree of arbitrary degree in which the children of each node are ordered, so that one refers to the ith child in the sequence of children of a node.
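The definition above can be sketched as a node whose children form an ordered sequence of arbitrary length, addressed by position rather than by a fixed slot (names are illustrative):

```python
class OrdinalNode:
    """Node of an ordinal tree: children are an ordered sequence with no
    fixed upper bound, so one refers to the i-th child in that sequence."""

    def __init__(self, value=None):
        self.value = value
        self.children = []  # ordered; arbitrary degree

    def add_child(self, node: "OrdinalNode") -> None:
        self.children.append(node)

    def ith_child(self, i: int) -> "OrdinalNode":
        """Return the i-th child, counting from 1."""
        return self.children[i - 1]
```

Unlike a cardinal tree, a node with a single child has only one possible shape here: the child is simply the first child, with no notion of a "left" or "right" position.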
See also
Cardinal tree
References
Data types
Knowledge representation
Abstract data types |
https://en.wikipedia.org/wiki/Earth%20section%20paths | Earth section paths are plane curves defined by the intersection of an earth ellipsoid and a plane (ellipsoid plane sections). Common examples include the great ellipse (containing the center of the ellipsoid) and normal sections (containing an ellipsoid normal direction). Earth section paths are useful as approximate solutions for geodetic problems, the direct and inverse calculation of geographic distances. The rigorous solution of geodetic problems involves skew curves known as geodesics.
Inverse problem
The inverse problem for earth sections is: given two points, P1 and P2, on the surface of the reference ellipsoid, find the length, s12, of the short arc of a spheroid section from P1 to P2, and also find the departure and arrival azimuths (angle from true north) of that curve, α1 and α2. The figure to the right illustrates the notation used here. Let Pk have geodetic latitude φk and longitude λk (k = 1, 2). This problem is best solved using analytic geometry in earth-centered, earth-fixed (ECEF) Cartesian coordinates.
Let R1 and R2 be the ECEF coordinates of the two points, computed using the geodetic to ECEF transformation discussed here.
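The geodetic-to-ECEF transformation can be sketched as follows, assuming WGS84 ellipsoid parameters (the constant and function names are ours, chosen for this example):

```python
import math

# WGS84 parameters (assumed for illustration)
A = 6378137.0            # semi-major axis, meters
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float = 0.0):
    """Convert geodetic latitude/longitude (degrees) and ellipsoidal
    height (meters) to earth-centered, earth-fixed (ECEF) coordinates."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

At the equator this returns a point at one semi-major axis from the center, and at the poles a point at one semi-minor axis, which is a quick sanity check on the formula.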
Section plane
To define the section plane, select any third point R0 not on the line from R1 to R2. Choosing R0 to be on the surface normal at P1 will define the normal section at P1. If R0 is the origin, then the earth section is the great ellipse. (The origin would be co-linear with 2 antipodal points, so a different point must be used in that case). Since there are infinitely many choices for R0, the above problem is really a class of problems (one for each plane). Let R0 be given. To put the equation of the plane into the standard form, lx + my + nz = d, where l² + m² + n² = 1, requires the components of a unit vector, N, normal to the section plane. These components may be computed as follows: the vector from R0 to R1 is V0 = R1 − R0, and the vector from R1 to R2 is V1 = R2 − R1. Therefore, N = unit(V0 × V1), where unit(V) is the unit vector in the direction of V. The orientation convention used here is that N points to the left of the path. If this is no |
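Treating R0, R1, and R2 as ECEF position vectors (an assumption matching the notation of this section), the unit normal and plane constant can be computed with a cross product; this is an illustrative sketch, not the article's reference implementation:

```python
import numpy as np

def section_plane(r0, r1, r2):
    """Unit normal n_hat and constant d of the plane through the ECEF
    points R0, R1, R2, in the form n_hat . r = d."""
    v0 = np.asarray(r1, dtype=float) - np.asarray(r0, dtype=float)  # R0 -> R1
    v1 = np.asarray(r2, dtype=float) - np.asarray(r1, dtype=float)  # R1 -> R2
    n = np.cross(v0, v1)
    n_hat = n / np.linalg.norm(n)          # unit normal to the section plane
    d = float(n_hat @ np.asarray(r1, dtype=float))  # plane passes through R1
    return n_hat, d
```

Choosing R0 at the origin makes the plane contain the ellipsoid's center, recovering the great-ellipse case described above.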
https://en.wikipedia.org/wiki/Solar%20Decathlon%20Middle%20East | The 2018 edition of the Solar Decathlon Middle East will take place in Dubai, United Arab Emirates.
Solar Decathlon Middle East 2018
The teams selected to compete in Solar Decathlon Middle East 2018 are:
: Team Aqua Green Ajman University of Science and Technology (United Arab Emirates)
: American University in Dubai (United Arab Emirates)
: American University of Ras AlKhaimah (United Arab Emirates)
: Dhofar University (Oman)
: Eindhoven University of Technology (Netherlands)
: Heriot-Watt University Dubai (United Arab Emirates)
: D'Annunzio University of Chieti–Pescara; University of Pisa; Università degli Studi della Campania Luigi Vanvitelli; University of Sassari (Italy)
: Universiti Sains Islam Malaysia - Malaysia; University of Technology – Malaysia (Malaysia)
, : Team EFdeN Signature Ion Mincu University of Architecture and Urbanism; Technical University of Civil Engineering of Bucharest; Birla Institute of Technology and Science, Pilani – Dubai Campus; Politehnica University of Bucharest (Romania-United Arab Emirates)
: National Chiao Tung University (Taiwan)
: National University of Sciences and Technology (Pakistan)
: New York University Abu Dhabi (United Arab Emirates)
: Qatar University (Qatar)
: Sapienza University of Rome (Italy)
: University of Wollongong (Australia)
: The Petroleum Institute; Zayed University (United Arab Emirates)
, : University of Sharjah; University of Ferrara (United Arab Emirates-Italy)
, , : University of Bordeaux; Amity University; An-Najah National University; Arts & Métiers Paris Tech; Bordeaux School of Architecture (France-United Arab Emirates-Palestine)
: Virginia Tech (United States)
: King Saud University (Kingdom of Saudi Arabia)
: The University of Jordan (Jordan)
: University of Belgrade (Serbia)
See also
Solar Decathlon
Solar Decathlon Africa
Solar Decathlon China
Solar Decathlon Europe
Solar Decathlon Latin America and Caribbean
References
External links
U.S. Department of Energy |
https://en.wikipedia.org/wiki/Solar%20Decathlon%20Latin%20America%20and%20Caribbean | The Solar Decathlon is an initiative of the Department of Energy of the United States (DOE) in which universities around the world compete with the design and construction of sustainable housing that works 100% with solar energy. It is called "Decathlon" since universities and their prototypes are evaluated in 10 criteria: architecture, engineering and construction, energy efficiency, energy consumption, comfort, sustainability, positioning, communications, urban design and feasibility, and innovation.
Solar Decathlon Latin American and Caribbean 2019
The 2019 edition of the Solar Decathlon Latin America and Caribbean will take place in Cali, Colombia.
Participating teams
: Federal University of Paraíba, João Pessoa, Paraíba, Brasil
: Arturo Prat University, Chile
: Pontificia Universidad Javeriana, Colombia
, : Pontificia Universidad Javeriana de Cali + Universidad Federal Santa Catarina + Instituto Federal Santa Catarina, Colombia/Brasil
: National Service of Learning, SENA, Colombia
: Universidad de San Buenaventura + Universidad Autónoma de Occidente, Colombia
: University of Magdalena, Santa Marta, Magdalena Department, Colombia
: National University of Colombia, Colombia
, : Universidad de la Salle + Hochschule Ostwestfalen-Lippe University, Colombia/Alemania
: University of Los Andes (Colombia)
: Universidad del Valle, Colombia
: Universidad Santiago de Cali, Colombia
: Western Institute of Technology and Higher Education, México
: National University of Engineering, Perú
: Universidad de Sevilla, España
Solar Decathlon Latin America and Caribbean 2015
The first Solar Decathlon Latin America and Caribbean was held on the campus of Universidad del Valle in Santiago de Cali, Colombia, in December 2015.
The top finishers were:
: La Casa Uruguaya, Universidad ORT Uruguay (Uruguay)
: Pontificia Universidad Javeriana de Cali and Universidad ICESI (Colombia)
, : Universidad de Sevilla and Universidad Santiago de Cali (Spain-Colombia)
The ot |
https://en.wikipedia.org/wiki/European%20Astrobiology%20Network%20Association | The European Astrobiology Network Association (EANA) coordinates and facilitates research expertise in astrobiology in Europe.
EANA was created in 2001 to coordinate the different European centers in astrobiology and the related fields previously organized in paleontology, geology, atmospheric physics, planetary science and stellar physics.
The association is administered by an Executive Council that is elected every three years and represents the European nations active in the field, such as Austria, Belgium, France, Germany, Italy, Portugal, and Spain.
The EANA Executive Council is composed of a president, two vice-presidents, a treasurer, two secretaries, and councillors. Further information about the current Executive Council can be found at http://www.eana-net.eu/index.php?page=Discover/eananetwork.
The EANA association strongly supports AbGradE (Astrobiology Graduates in Europe), an independent organisation that aims to support early-career scientists and students in astrobiology.
Objectives
The specific objectives of EANA are to:
bring together active European researchers and link their research programs
fund exchange visits between laboratories
optimize the sharing of information and resources facilities for research
promote this field of research to European funding agencies and politicians
promote research on extremophiles of relevance to environmental issues in Europe
interface with the Research Network with European bodies (e.g. European Space Agency, and the European Commission)
attract young scientists to participate
promote public interest in astrobiology, and to educate the younger generation
References
External links
European Astrobiology Network Association (EANA) - home page
Astrobiology
Biology societies
Space organizations
Scientific organizations established in 2001 |
https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Institution%20of%20Electrical%20Engineers | Proceedings of the Institution of Electrical Engineers was a series of journals which published the proceedings of the Institution of Electrical Engineers. It was originally established as the Journal of the Society of Telegraph Engineers in 1872, and was known under several titles over the years, such as Journal of the Institution of Electrical Engineers, Proceedings of the IEE and IEE Proceedings.
History
The journal was originally established in 1872, as
Journal of the Society of Telegraph Engineers (1872–1880)
It then underwent a series of name changes:
Journal of the Society of Telegraph Engineers and of Electricians (1881–1882)
Journal of the Society of Telegraph-Engineers and Electricians (1883–1888)
until, in 1889, it settled on
Journal of the Institution of Electrical Engineers (1889–1940)
The journal remained under that name for over 50 years.
From 1926 to 1940, a new journal was started
Institution of Electrical Engineers - Proceedings of the Wireless Section of the Institution (1926–1940)
In 1941, the journals were reorganized into distinct parts. From 1941 to 1948 those were
Journal of the Institution of Electrical Engineers - Part I: General
Journal of the Institution of Electrical Engineers - Part II: Power Engineering
Journal of the Institution of Electrical Engineers - Part IIA: Automatic Regulators and Servo Mechanisms
Journal of the Institution of Electrical Engineers - Part III: Communication Engineering
Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering
Journal of the Institution of Electrical Engineers - Part IIIA: Radiocommunication
Journal of the Institution of Electrical Engineers - Part IIIA: Radiolocation
In 1949, until 1954, the publications were reorganized into
Journal of the Institution of Electrical Engineers
and
Proceedings of the IEE - Part I: General
Proceedings of the IEE - Part IA: Electric Railway Traction
Proceedings of the IEE - Part II: Power Engineering
Proceedings of the IEE - |
https://en.wikipedia.org/wiki/IBM%204767 | The IBM 4767 PCIe Cryptographic Coprocessor is a hardware security module (HSM) that includes a secure cryptoprocessor implemented on a high-security, tamper resistant, programmable PCIe board. Specialized cryptographic electronics, microprocessor, memory, and random number generator housed within a tamper-responding environment provide a highly secure subsystem in which data processing and cryptography can be performed. Sensitive key material is never exposed outside the physical secure boundary in a clear format.
The IBM 4767 is validated to FIPS PUB 140-2 Level 4, the highest level of certification achievable for commercial cryptographic devices. The IBM 4767 data sheet describes the coprocessor in detail.
IBM supplies two cryptographic-system implementations:
The PKCS#11 implementation creates a high-security solution for application programs developed for this industry-standard API.
The IBM Common Cryptographic Architecture (CCA) implementation provides many functions of special interest in the finance industry, extensive support for distributed key management, and a base on which custom processing and cryptographic functions can be added.
Toolkits for custom application development are also available.
Applications may include financial PIN transactions, bank-to-clearing-house transactions, EMV transactions for integrated circuit (chip) based credit cards, and general-purpose cryptographic applications using symmetric key algorithms, hashing algorithms, and public key algorithms.
The operational keys (symmetric or RSA private) are generated in the coprocessor and are then saved either in a keystore file or in application memory, encrypted under the master key of that coprocessor. Any coprocessor with an identical master key can use those keys. Performance benefits include the incorporation of elliptic curve cryptography (ECC) and format preserving encryption (FPE) in the hardware.
Supported systems
IBM supports the 4767 on certain IBM Z, IBM Power Sys |
https://en.wikipedia.org/wiki/Mynaric | Mynaric is a manufacturer of laser communication equipment for airborne and spaceborne communication networks, so-called constellations.
History
In 2009, Mynaric was founded by former employees of the German Aerospace Center (DLR), and some of the key technologies have been licensed from DLR.
In November 2013, Mynaric demonstrated successful laser communication from a jet platform (a Tornado) for the first time. A data rate of 1 Gbit/s over a distance of 60 km was achieved at a flight speed of 800 km/h. In October 2017, Mynaric performed an IPO at the Frankfurt Stock Exchange, raising €27.3 million of growth capital.
In February 2018, Mynaric's laser communication products were inducted into the Space Technology Hall of Fame of the Space Foundation, and in April 2018, Mynaric announced a partnership with CEA-Leti regarding highly sensitive avalanche photodiodes that may enable longer link distances and reduced system complexity. In June 2018, Facebook's Connectivity Lab (related to Facebook Aquila) was reported to have achieved a bidirectional 10 Gbit/s air-to-ground connection with Mynaric's products.
In March 2019, Mynaric announced that former SpaceX Starlink vice president Bulent Altan would join its management board and that it had raised an additional $12.5 million in funding from the lead investor of an undisclosed satellite constellation.
In November 2021, Mynaric listed on Nasdaq and raised $75.9 million growth capital drawing Peter Thiel and ARK Invest as new investors. The company was also selected by Northrop Grumman as strategic supplier for laser communications and, subsequently, in June 2022, completed a ground demonstration of laser terminals that will be used to send and receive data in space as part of the U.S. National Defense Space Architecture. In July 2022, Mynaric received a strategic investment of $11.4 million from L3Harris.
Products
Mynaric offers various laser communication products for wireless data transmission between aircraft, UAVs, high-altitude platf |
https://en.wikipedia.org/wiki/Halal%20certification%20in%20Europe | Halal meat is meat from an animal slaughtered according to the Quran and Sunnah and thus permitted for consumption by Muslims.
The halal meat market is a segment of the much bigger food market that offers goods which can be deemed halal. In the case of meat, the qualification of halal addresses the practice of slaughter; it is therefore comparable to other credence attributes that refer to the method of production rather than to the intrinsic characteristics of the product.
Across the EU, an increasing number of religious and commercial organizations are promoting the segmentation of the halal meat market through qualification practices that have created an image of non-stunned meat as being of authentic halal quality.
Introduction
Across Europe, halal meat markets are experiencing a period of unprecedented growth and development, though the intensity varies from country to country. In the UK and France there has been year-on-year growth for well over a decade, while in Germany the market is just starting to develop. The growth of these markets is in some way linked to the increasing number of Muslim immigrants across Europe and to the growing consumption of meat characteristic of vertical mobility amongst second and third generation Muslims. Halal meat and halal animal products are increasingly available in non-ethnic stores, particularly supermarket chains and fast food restaurants, and much as Jewish diners in the US are attracting large numbers of non-Jewish consumers, so the consumption of halal meat products by non-Muslims is also increasing across Europe.
Stunning issue
As the market has grown the authenticity of the halal meat sold in supermarkets and fast food restaurants has also been questioned by some Muslims, who have reacted against the practice of stunning and the use of mechanical blades (in the case of poultry) allowed in the halal standards adopted by these economic actors.
Disputes between Muslims emerge from debates about the origins of Islam, w
https://en.wikipedia.org/wiki/Three-gap%20theorem | In mathematics, the three-gap theorem, three-distance theorem, or Steinhaus conjecture states that if one places n points on a circle, at angles of θ, 2θ, 3θ, ... from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. When there are three distances, the largest of the three always equals the sum of the other two. Unless θ is a rational multiple of π, there will also be at least two distinct distances.
This result was conjectured by Hugo Steinhaus, and proved in the 1950s by Vera T. Sós, , and Stanisław Świerczkowski; more proofs were added by others later. Applications of the three-gap theorem include the study of plant growth and musical tuning systems, and the theory of light reflection within a mirrored square.
Statement
The three-gap theorem can be stated geometrically in terms of points on a circle. In this form, it states that if one places n points on a circle, at angles of θ, 2θ, ..., nθ from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. An equivalent and more algebraic form involves the fractional parts of multiples of a real number. It states that, for any positive real number α and integer n, the fractional parts of the numbers α, 2α, ..., nα divide the unit interval into subintervals with at most three different lengths. The two problems are equivalent under a linear correspondence between the unit interval and the circumference of the circle, and a correspondence between the real number and the
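The theorem's circle form can be checked numerically: place the points iα mod 1 (for i = 0, …, n−1) on a circle of unit circumference, sort them, and collect the distinct gap lengths. A minimal Python sketch (function name and tolerance are illustrative):

```python
def circle_gaps(alpha, n, tol=1e-9):
    """Distinct gap lengths between the points i*alpha mod 1 (i = 0..n-1)
    placed on a circle of unit circumference."""
    pts = sorted((i * alpha) % 1.0 for i in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1.0 - pts[-1] + pts[0])      # wrap-around gap closes the circle
    distinct = []
    for g in sorted(gaps):                   # merge gaps equal up to tol
        if not distinct or g - distinct[-1] > tol:
            distinct.append(g)
    return distinct
```

For an irrational rotation such as α = √2, `circle_gaps(2 ** 0.5, n)` returns two or three lengths for every n, and whenever there are three the largest equals the sum of the other two, as the theorem asserts.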
Applications
Plant growth
In the study of phyllotaxis, the arrangements of leaves on plant stems, it has been observed that each successive leaf on the stems of many plants is turned from the previous leaf by the golden angle, approximately 137.5°. It has been suggested that this angle maximizes the sun-collecting power of the plant's leaves. If one looks end-on at a plant stem that has grown in this way, there will be a |
https://en.wikipedia.org/wiki/Ion%20implantation-induced%20nanoparticle%20formation | Ion implantation-induced nanoparticle formation is a technique for creating nanometer-sized particles for use in electronics.
Ion implantation
Ion implantation is a technique extensively used in the field of materials science for material modification. Its effect on nanomaterials allows manipulation of mechanical, electronic, morphological and optical properties.
One-dimensional nano-materials are an important contributor to the creation of nano-devices, e.g. field-effect transistors, nanogenerators and solar cells. They offer the potential of high integration density, lower power consumption, higher speed and super-high frequency.
The effects of ion implantation vary according to multiple variables. Collision cascades may occur during implantation, causing interstitials and vacancies in target materials (although these defects may be mitigated through dynamic annealing). Collision modes are nuclear collision, electron collision and charge exchange. Another process is the sputtering effect, which significantly affects the morphology and shape of nano-materials.
References
Materials science
Nanotechnology
Semiconductor fabrication materials |
https://en.wikipedia.org/wiki/Libre%20Computer%20Project | The Libre Computer Project is an effort initiated by Shenzhen Libre Technology Co., Ltd., with the goal of producing standards-compliant single-board computers (SBC) and upstream software stack to power them.
Hardware
Libre Computer Project uses crowd-funding on Indiegogo and Kickstarter to market their SBC designs. Delivery and after-sales support were poor, resulting in many complaints and dissatisfied backers.
Active Libre Computer SBC designs include:
ROC-RK3328-CC (Renegade)
The ROC-RK3328-CC "Renegade" board was funded on Indiegogo and features the following specifications:
Rockchip RK3328 SoC
4 ARM Cortex-A53 @ 1.4GHz
Cryptography Extensions
2G + 2P ARM Mali-450 @ 500MHz
OpenGL ES 1.1 / 2.0
OpenVG 1.1
Multi-Media Processor
Decoders
VP9 P2 4K60
H.265 M10P @ L5.1 4K60
H.264 H10P @ L5.1 4K60
JPEG
Encoders
H.265 1080P30 or 2x H.264 720P30
H.264 1080P30 or 2x H.264 720P30
Up to 4GB DDR4-2133 SDRAM
2 USB 2.0 Type A
1 USB 3.0 Type A
Gigabit Ethernet
3.5mm TRRS AV Jack
HDMI 2.0
MicroUSB Power In
MicroSD Card Slot with UHS support
eMMC Interface with 5.x support
IR Receiver
U-Boot Button
40 Pin Low Speed Header (PWM, I2C, SPI, GPIO)
ADC Header
Power Enable/On Header
AML-S905X-CC (Le Potato)
The AML-S905X-CC "Le Potato" board was funded on Kickstarter on 24 July 2017 and features the following specifications:
Amlogic S905X SoC
4 ARM Cortex-A53 @ 1.512GHz
Cryptography Extension
2G + 3P ARM Mali-450 @ 750MHz
OpenGL ES 1.1 / 2.0
OpenVG 1.1
Amlogic Video Engine 10
Decoders
VP9 P2 4K60
H.265 MP10@L5.1 4K60
H.264 HP@L5.1 4K30
JPEG / MJPEG
Encoders
H.264 1080P60
JPEG
Up to 2GB DDR3 SDRAM
4 USB 2.0 Type A
100 Mb Fast Ethernet
3.5mm TRRS AV Jack
HDMI 2.0
MicroUSB Power In
MicroSD Card Slot
eMMC Interface
IR Receiver
U-Boot Button
40 Pin Low Speed Header (PWM, I2C, SPI, GPIO)
Audio Headers (I2S, ADC, SPDIF)
UART Header
NOTE: GPIO Header Pin 11 or HDMI CEC is selectable by onboard jumper. They can not b |
https://en.wikipedia.org/wiki/ISO%205427 | ISO 5427 is an 8-bit extension to the KOI-7 N1 character set, which was standardised by the ISO. The first half was published in 1979, and the second half was published in 1981. It supports the Russian (also before 1918), Belarusian, Bulgarian (also before 1945), Ukrainian, Macedonian, and Serbian languages. In July and October 1983, there were some revisions.
Code table
References
Character sets
5427 |
https://en.wikipedia.org/wiki/Natural%20resources%20engineering | Natural Resources Engineering, the sixth ABET-accredited environmental engineering program in the United States, is a subset of environmental engineering that applies various branches of science in order to create new technology that aims to protect, maintain, and establish sustainable natural resources. Specifically, natural resources engineers are concerned with applying engineering concepts and solutions to prevalent environmental issues. Common natural resources this discipline of engineering works closely with include living resources such as plants and animals as well as non-living resources such as renewable energy, land, soils, and water. Natural resource engineering also involves researching and evaluating natural and societal forces. The hydrological cycle is the main component of natural forces, while the desires of other people contribute to societal forces. Some historical examples of applications of natural resources engineering include the Roman aqueducts and the Hoover Dam.
Natural resource engineering degrees require a basic understanding of core engineering classes including calculus, physics, chemistry, and engineering mechanics, as well as additional courses with a stronger focus on applications of natural resources in environmental systems. These specific courses include soil and water engineering, modeling of biological and physical systems, properties of biological materials, and systems optimization.
The overall purpose of natural resource engineering is mainly categorized as either resource development, environmental management or both. Natural resource engineers often work in a vast variety of environments ranging from urban to rural. Most natural resource engineers can be found working for groups who strive to solve current and future environmental issues such as environmental consulting firms and government agencies.
History
Natural resources engineering has always existed as an extension of biological engineering, but demand for suc |
https://en.wikipedia.org/wiki/Fingerprint%20scanner | Fingerprint scanners are biometric security systems. They are used in police stations, security industries, smartphones, and other mobile devices.
Fingerprints
People have patterns of friction ridges on their fingers; these patterns are called fingerprints. Fingerprints are uniquely detailed, durable over an individual's lifetime, and difficult to alter. Due to these unique combinations, fingerprints have become an ideal means of identification.
Types of fingerprint scanners
There are four types of fingerprint scanners: optical scanners, capacitance scanners, ultrasonic scanners, and thermal scanners. The basic function of every type of scanner is to obtain an image of a person's fingerprint and find a match for it in its database. The measure of the fingerprint image quality is in dots per inch (DPI).
Optical scanners take a visual image of the fingerprint using a digital camera.
Capacitive or CMOS scanners use capacitors and thus electric current to form an image of the fingerprint. This type of scanner tends to excel in terms of precision.
Ultrasonic fingerprint scanners use high frequency sound waves to penetrate the epidermal (outer) layer of the skin.
Thermal scanners sense the temperature differences on the contact surface, in between fingerprint ridges and valleys.
All fingerprint scanners are susceptible to being fooled by a technique that involves photographing fingerprints, processing the photographs using special software, and printing fingerprint replicas using a 3D printer.
Construction forms
There are two construction forms: the stagnant and the moving fingerprint scanner.
Stagnant: The finger must be dragged over the small scanning area. This is cheaper and less reliable than the moving form. Imaging can be less than ideal when the finger is not dragged over the scanning area at constant speed.
Moving: The finger lies on the scanning area while the scanner runs underneath. Because the scanner moves at constant speed over the fingerpri |
https://en.wikipedia.org/wiki/Mac%20OS%20Inuit | Mac OS Inuit, also called Mac OS Inuktitut or InuitSCII, is an 8-bit, single byte, extended ASCII character encoding supporting the variant of Canadian Aboriginal syllabics used by the Inuktitut language. It was designed by Doug Hitch for the government of the Northwest Territories, and adopted by Michael Everson for his fonts.
Mac OS Inuit is used by the Inuktitut localisation of the classic Mac OS, which was overseen by the Baffin Bay Divisional Board of Education with support from Everson Gunn Teoranta and authorised by Apple, although it did not ship with Apple hardware.
Layout
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
References
Character sets
Inuit
Inuit |
https://en.wikipedia.org/wiki/Mac%20OS%20Turkic%20Cyrillic | The Macintosh Turkic Cyrillic encoding is used in Apple Macintosh computers to represent texts in the Cyrillic script for Turkic languages. It was created by Michael Everson for use in his fonts, but is not an official Mac OS Codepage. It supports Azerbaijani, Bashkir, Kazakh, Kyrgyz, Tajik, Tatar, Turkmen, and Uzbek. It may also support Russian, Bulgarian, and Belarusian.
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
References
Character sets
Turkic Cyrillic |
https://en.wikipedia.org/wiki/Mac%20OS%20Barents%20Cyrillic | The Macintosh Barents Cyrillic encoding is used in Apple Macintosh computers to represent texts in Kildin Sami, Komi, and Nenets.
Layout
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
See also
ISO-IR-200: ISO 8859-5 derivative created for the same languages, also with Michael Everson's involvement.
References
Character sets
Barents Cyrillic |
https://en.wikipedia.org/wiki/Mac%20OS%20Ogham | Mac OS Ogham is a character encoding for representing Ogham text on Apple Macintosh computers. It is a superset of the Irish Standard I.S. 434:1999 character encoding for Ogham (which is registered as ISO-IR-208), adding some punctuation characters from Mac OS Roman. It is not an official Mac OS Codepage.
Layout
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
References
Character sets
Ogham
Ogham |
https://en.wikipedia.org/wiki/Pyragas%20method | In the mathematics of chaotic dynamical systems, in the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is nearly zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY (Ott, Grebogi and Yorke) methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained through solely observing the behavior of the system as a whole over a suitable period of time.
The method was proposed by the Lithuanian physicist Kęstutis Pyragas.
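In continuous time the Pyragas control signal takes the form F(t) = K[x(t − τ) − x(t)], where τ is the period of the target orbit, so the feedback vanishes once that orbit is reached. The sketch below applies the discrete-time analogue of this delayed feedback to the chaotic logistic map; the choice of map, the gain K = −0.7, and the one-step delay are illustrative assumptions, not taken from the source:

```python
# Discrete-time analogue of Pyragas delayed feedback control applied to the
# logistic map x -> r*x*(1 - x). With a one-step delay the target is the
# unstable fixed point x* = 1 - 1/r; the feedback K*(x[n-1] - x[n]) is zero
# on the target orbit, so the control is "noninvasive" once it succeeds.
R, K = 3.8, -0.7   # illustrative parameters; the uncontrolled map is chaotic

def controlled_orbit(x0, steps):
    xs = [x0, x0]                      # seed history for the one-step delay
    for _ in range(steps):
        x_prev, x = xs[-2], xs[-1]
        u = K * (x_prev - x)           # delayed feedback signal
        xs.append(R * x * (1 - x) + u)
    return xs
```

For these parameters the orbit settles onto the otherwise unstable fixed point x* = 1 − 1/r ≈ 0.7368 and the control signal decays toward zero, illustrating how the feedback intensity is nearly zero close to the desired orbit.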
References
External links
Kęstutis Pyragas homepage
Chaos theory
Nonlinear systems |
https://en.wikipedia.org/wiki/Mac%20OS%20Armenian | Mac OS Armenian is an Armenian character encoding for Mac OS created by Michael Everson for use in his fonts. It is not an official Mac OS character set.
Layout
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
References
Character sets
Armenian |
https://en.wikipedia.org/wiki/Mac%20OS%20Georgian | Mac OS Georgian is a character encoding for Mac OS created by Michael Everson for use in his fonts. It is not an official Mac OS character set.
The encoding is a form of extended ASCII, with the Georgian characters occupying the upper half of the 8-bit code space. Like the Georgian Unicode block, Mac OS Georgian encodes the characters from the Asomtavruli and Mkhedruli scripts (the former is used primarily in Georgian Orthodox Church materials, while the latter is used for most Georgian writing); it also includes a number of symbols and punctuation marks not found in 7-bit ASCII. All characters in Mac OS Georgian that also appear in Mac OS Roman are placed at the same locations as in Mac OS Roman, aiding compatibility with applications designed for Mac OS Roman.
Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
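Decoding an 8-bit extended-ASCII set like this one needs only a 128-entry table for the high half: bytes below 0x80 decode as plain ASCII, and bytes 0x80–0xFF are looked up in the table. The sketch below illustrates the mechanism with a two-entry stand-in table; the mappings shown are hypothetical placeholders, not the actual Mac OS Georgian assignments:

```python
# Hypothetical high-half entries for illustration only; the real Mac OS
# Georgian table maps all of 0x80-0xFF to Georgian letters and symbols.
HIGH_TABLE = {0x80: '\u10d0', 0x81: '\u10d1'}  # assumed: GEORGIAN LETTER AN, BAN

def decode_extended_ascii(data: bytes, high_table: dict) -> str:
    """Decode bytes: 0x00-0x7F as ASCII, 0x80-0xFF via the supplied table."""
    out = []
    for b in data:
        if b < 0x80:
            out.append(chr(b))             # low half is identical to ASCII
        else:
            out.append(high_table[b])      # raises KeyError for unmapped bytes
    return ''.join(out)
```

This two-table split is exactly why all characters shared with Mac OS Roman can sit at the same code points: only the high-half table differs between the encodings.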
References
Character sets
Georgian |
https://en.wikipedia.org/wiki/Kubilius%20model | In mathematics, the Kubilius model relies on a clarification and extension of a finite probability space on which the behaviour of additive arithmetic functions can be modeled by sums of independent random variables.
The method was introduced in Jonas Kubilius's monograph Tikimybiniai metodai skaičių teorijoje (published in Lithuanian in 1959) / Probabilistic Methods in the Theory of Numbers (published in English in 1964) .
Eugenijus Manstavičius and Fritz Schweiger wrote about Kubilius's work in 1992, "the most impressive work has been done on the statistical theory of arithmetic functions which almost created a new research area called Probabilistic Number Theory. A monograph (Probabilistic Methods in the Theory of Numbers) devoted to this topic was translated into English in 1964 and became very influential."
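The flavour of probabilistic number theory can be illustrated with the additive function ω(n), the number of distinct prime factors of n, whose mean over n ≤ N is approximately ln ln N plus a constant ≈ 0.2615. The sketch below is an Erdős–Kac-style numerical illustration of this statistical behaviour, not Kubilius's actual construction:

```python
def mean_omega(N):
    """Average number of distinct prime factors omega(n) over 2 <= n <= N,
    computed with a sieve: each prime increments all of its multiples."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:              # no smaller prime divides p => p is prime
            for m in range(p, N + 1, p):
                omega[m] += 1
    return sum(omega[2:]) / (N - 1)
```

For N = 100000 the sieve average lands close to ln ln N + 0.2615 ≈ 2.70, in line with the average order of ω(n) that the probabilistic model captures.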
References
Further reading
Number theory |
https://en.wikipedia.org/wiki/3DISCO | 3DISCO (stands for “3D imaging of solvent-cleared organs”) is a histological method that makes biological samples more transparent (so-called “cleared”) by using a series of organic solvents to match the refractive index (RI) of the tissue and the surrounding medium. Structures in transparent tissues can be examined by fluorescence microscopy without the need for time-consuming physical sectioning and subsequent reconstruction in silico.
The method was developed by a team around Ali Ertürk and Hans-Ulrich Dodt at the Max Planck Institute in Munich, primarily for clearing and imaging the unsectioned mouse brain and spinal cord. Later, the method and its modifications were successfully used in many fields of biological research to image and investigate the whole body of the mouse, the structure and function of the mouse brain, stem cells, tumor tissues, developmental processes, and whole human embryos.
History and development of the method
Use of organic solvents for clearing (making transparent) the tissue was first mentioned more than a century ago by the German anatomist Werner Spalteholz. But, with some exceptions (reviewed in ), clearing techniques were almost forgotten for the whole 20th century. Their renaissance came in the last decade, probably thanks to the spread of advanced fluorescence microscopy techniques which allow optical sectioning of the specimen (confocal, multiphoton or light sheet microscopy).
Among current clearing techniques, the first organic solvent used was a mixture of benzyl alcohol and benzyl benzoate (BABB). The authors used this solution to clear mouse brain, mouse embryos and the whole body of D. melanogaster. The main drawbacks of this solution are bleaching of the GFP signal and insufficient clearing of the highly myelinated tissue of adult animals. Therefore, many other reagents were tested with the aim of finding GFP-compatible and more efficient clearing. As a result, tetrahydrofuran (THF) and dibenzyl ether (DBE) were chosen as the best reagents for clearing. Based on those findings, the 3DISCO protocol was published in 2012 |
https://en.wikipedia.org/wiki/BOQA | Bridge Operations (or Operational) Quality Assurance (BOQA; pronounced ) is a methodology utilised in shipping which originates from the similar FOQA/FDM (Flight Operations Quality Assurance/Flight Data Monitoring) concept in aviation. BOQA is a methodology with which ship owners/operators, ship Captains, and other associated shipping stakeholders can automatically and systematically monitor, track, trend and analyse the operational quality of (seagoing) vessels. The main target of BOQA is to provide a non-punitive platform for proactive analysis of vessel data to enable the enhancement of maritime safety. The BOQA methodology can be used in both conventional manned ships and in autonomous or unmanned vessels, provided that adequate data sources are available.
History
The original template for BOQA was laid out when Royal Caribbean approached Aerobytes Limited (a market leader in FOQA) to collaborate on a similar product for the maritime industry. https://www.aerobytes.co.uk/boqa/. Discussions were held as to which breaches of performance should be detected, two recorders were installed on RCCL vessels, and discussions were also held with the Carnival group to develop the BOQA concept. Aerobytes decided to focus on its core business of aviation, and RCCL and Carnival went on to develop their own systems, along with a few other companies who saw the potential.
At present BOQA is not mandated, and therefore there are no strict rules as to what an effective BOQA system should contain, but once its considerable potential is realised it is entirely possible that this might change.
Description
BOQA is best developed as a non-punitive company-internal methodology or process, which has the overall target of assisting the ship Captain and the ship operator to maintain a high level of safety and operational quality.
BOQA has been described as:
A system which delivers 24/7 electronic monitoring and electronic alarms when set operational parameters are deviated from
During |
https://en.wikipedia.org/wiki/Aggregated%20distribution | An aggregated distribution, commonly found among predators and parasites, is a highly uneven (skewed) statistical distribution pattern in which individuals collect or aggregate in regions, which may be widely separated, where their prey or hosts are at high density. This distribution makes sampling difficult and invalidates commonly used parametric statistics.
In predators
When predators need to search for their prey, they could search at random, as was assumed in models such as those of Lotka in 1925 and Volterra in 1928. This would imply that they scatter themselves evenly across the environment. However, prey may be concentrated at high densities in some areas and scarce elsewhere. The zoologists M. P. Hassell and R. M. May noted that predators and parasites, too, might aggregate where prey was abundant, following some response curve: they observed, for example, that redshanks (predatory birds) adopted a sigmoid (S-shaped) response to the density of Corophium (amphipod) prey per square metre of mudflats. They noted, too, that several different behaviours of predators or parasites could cause them to aggregate selectively in areas where prey are at high density: they could be attracted by a volatile substance released by the prey or by the plants the prey are feeding on; they could choose to spend more time in areas where they have caught prey, as many predators appear to do; or predators could follow an individual predator which had located prey, as is seen in feeding terns. Aggregated distributions in which predators tend to spend their time in areas where prey are concentrated have the effect of stabilising prey populations, particularly when the time needed to travel between such areas is high and when the prey populations are themselves more highly clumped.
In parasites
Parasite aggregation with respect to hosts is, according to Robert Poulin "a defining feature of metazoan parasite populations." |
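The source does not give a specific statistical model, but parasite aggregation of this kind is conventionally described by a negative binomial distribution, whose variance far exceeds its mean (overdispersion): most hosts carry few parasites while a minority carry many. This overdispersion is why parametric statistics that assume Poisson-like or normally distributed counts can be misleading. A minimal NumPy sketch (the dispersion parameter k and mean are arbitrary values chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson counts: what a random, non-aggregated distribution of
# parasites over hosts would look like (variance roughly equals mean).
poisson_counts = rng.poisson(lam=5.0, size=10_000)

# Negative-binomial counts with a small dispersion parameter k:
# a standard model for aggregated parasite burdens. Smaller k means
# stronger aggregation; variance = mean + mean**2 / k.
k, mean = 0.5, 5.0
p = k / (k + mean)  # NumPy's (n, p) parameterisation, with n = k
negbin_counts = rng.negative_binomial(n=k, p=p, size=10_000)

print(f"Poisson:  mean={poisson_counts.mean():.2f}  var={poisson_counts.var():.2f}")
print(f"NegBin:   mean={negbin_counts.mean():.2f}  var={negbin_counts.var():.2f}")
```

With these values the negative-binomial variance is roughly mean + mean²/k = 5 + 50 = 55, an order of magnitude above the mean, while the Poisson variance stays near 5; the contrast is the overdispersion that characterises aggregated distributions.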