source stringlengths 31 203 | text stringlengths 28 2k |
|---|---|
https://en.wikipedia.org/wiki/Flow%20Cytometry%20Standard | Flow Cytometry Standard (FCS) is a data file standard for the reading and writing of data from flow cytometry experiments. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC.
File Format
The FCS file format describes a file that is a combination of textual data followed by binary data. The order of the file layout is as follows:
HEADER segment
TEXT segment
DATA segment
Optional ANALYSIS segment
CRC value
Optional OTHER segments
The HEADER segment is an ASCII text string that begins by identifying the version of the FCS standard used, followed by three pairs of byte offsets that designate the positions of the TEXT, DATA, and ANALYSIS segments. An example header segment is given below:
FCS3.0 58 4380 4381 5586 0 0
Because the field width of the header segment byte positions is constrained to 8 characters, the maximum position that can be stored is 99,999,999. Any position beyond that is encoded as 0 for both the start and end, and the corresponding TEXT segment keyword is used instead.
The text segment is an ASCII text string that is divided into a series of key-value pairs that are delimited by some chosen character, e.g. '|'. The first character immediately following the header segment is the delimiter. An example of a header and text segment is given below:
FCS3.0 58 4380 4381 5586 0 0|$BEGINANALYSIS|0|$BEGINDATA|4381|$BEGINSTEXT|0|$BTIM|08:24:37.64|$BYTEORD|1,2,3,4|$CELLS|RBC|...|
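The header and text segment layouts described above can be sketched as a small parser. This is a simplified illustration, not a full FCS reader: it parses the whitespace-separated header form shown in the example, whereas in an actual FCS file the version occupies bytes 0-5 and each offset occupies a fixed 8-byte ASCII field.

```python
def parse_fcs_header(header_text):
    # Version string, then three pairs of byte offsets:
    # TEXT begin/end, DATA begin/end, ANALYSIS begin/end.
    # A 0/0 pair means the offset exceeded the 8-character field
    # and the TEXT segment keyword must be consulted instead.
    fields = header_text.split()
    version = fields[0]
    text_b, text_e, data_b, data_e, anal_b, anal_e = (int(f) for f in fields[1:7])
    return {"version": version, "text": (text_b, text_e),
            "data": (data_b, data_e), "analysis": (anal_b, anal_e)}

def parse_fcs_text(segment):
    # The first character of the TEXT segment is the delimiter;
    # the remaining tokens alternate keyword, value.
    delim = segment[0]
    tokens = segment[1:].rstrip(delim).split(delim)
    return dict(zip(tokens[0::2], tokens[1::2]))

hdr = parse_fcs_header("FCS3.0 58 4380 4381 5586 0 0")
kv = parse_fcs_text("|$BEGINDATA|4381|$BYTEORD|1,2,3,4|")
```

For the example above, `hdr["text"]` is `(58, 4380)` and `kv["$BYTEORD"]` is `"1,2,3,4"`.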
To be a valid FCS file, the text segment must contain all required keywords, which describe the DATA segment format and encoding. For FCS version 3.1, the required FCS primary TEXT segment keywords are as follows:
The DATA segment of the FCS file follows after the TEXT segment and is l |
https://en.wikipedia.org/wiki/Anupam%20%28supercomputer%29 | Anupam is a series of supercomputers designed and developed by the Bhabha Atomic Research Centre (BARC) for internal use. It is mainly used for molecular dynamics simulations, reactor physics, theoretical physics, computational chemistry, computational fluid dynamics, and finite element analysis.
The latest in the series is Anupam-Aganya.
Introduction
BARC carries out inter-disciplinary and multi-disciplinary R&D activities covering a wide range of disciplines in physical sciences, chemical sciences, biological sciences and engineering. Expertise at BARC covers the entire spectrum of science and technology.
BARC started development of supercomputers under the ANUPAM project in 1991 and has since developed more than 20 different computer systems. All ANUPAM systems have employed parallel processing as the underlying philosophy and MIMD (Multiple Instruction Multiple Data) as the core architecture. BARC, being a multidisciplinary research organization, has a large pool of scientists and engineers working on various aspects of nuclear science and technology, who carry out computations of a diverse nature.
To keep the gestation period short, the parallel computers were built with commercially available off-the-shelf components, with BARC's major contribution being in the areas of system integration, system engineering, system software development, application software development, fine tuning of the system and support to a diverse set of users.
The series started with a small four-processor system in 1991 with a sustained performance of 34 MFlops. Keeping in mind the ever increasing demands from the users, new systems have been built regularly with increasing computational power. The latest in the series of supercomputers is the 4608 core ANUPAM-Adhya system developed in 2010-11, with a sustained performance of 47 TeraFLOPS on the standard High Performance Linpack (HPL) benchmark. The system is in production mode and released to user |
https://en.wikipedia.org/wiki/Kinetica%20%28software%29 | Kinetica is a distributed, memory-first OLAP database developed by Kinetica DB, Inc. Kinetica is designed to use GPUs and modern vector processors to improve performance on complex queries across large volumes of real-time data. Kinetica is well suited for analytics on streaming geospatial and temporal data.
Background
In 2009, Amit Vij and Nima Negahban founded GIS Federal, a developer of software they called GPUdb. The GIS stood for Global Intelligence Solutions. GPUdb was initially marketed for US military and intelligence applications, at Fort Belvoir for INSCOM.
In 2014 and 2016, the analyst firm International Data Corporation mentioned Kinetica for its production deployments at the US Army and United States Postal Service, respectively. As a result of their work with USPS, IDC announced that Kinetica was the recipient of the HPC Innovation Excellence Award.
On March 3, 2016, the name of the company was changed to GPUdb to match the name of the software, and a $7 million investment was announced which included Raymond J. Lane. In September 2016, it announced another $6 million investment, and an office in San Francisco, while keeping its office in Arlington, Virginia. After adding marketing and service people, the name of both the company and product was changed to Kinetica.
In June 2017, the company announced US$50 million in Series A funding led by Canvas Ventures and Meritech Capital Partners, along with new investor Citi Ventures and existing backer Ray Lane of GreatPoint Ventures.
The company has headquarters in Arlington, Virginia and regional offices in Europe and Asia Pacific.
Software
The software is designed to run on graphics processing units such as the Tesla from Nvidia. Partners include Cisco, Dell EMC, HPE, IBM, NVIDIA, Confluent, Amazon, Microsoft, Google, and Oracle. At Kinetica's core is a distributed, memory-first relational SQL database that utilizes the processing power of CPUs with the acceleration of multi-core GPU devices to ana |
https://en.wikipedia.org/wiki/Warnock%27s%20dilemma | Warnock's dilemma, named for its originator Bryan Warnock, is the problem of interpreting a lack of response to a posting in a virtual community. The term originally referred to mailing list discussions, but has been applied to Usenet posts, blogs, web forums, and online content in general. The dilemma arises because a lack of response does not necessarily imply that no one is interested in the topic, but could also mean for example that readers find the content to be exceptionally good (leaving nothing for commenters to add).
On many internet forums, only around one percent of users create new posts, while nine percent reply and 90 percent are lurkers who do not contribute to the discussion. When no users reply, the original poster has no way of knowing what lurkers think of the contribution.
Warnock's dilemma leads to online writers and publishers adopting more provocative writing strategies in order to ensure that they will get a response. However, this can also lead publishers to avoid producing the kind of content that might fail to generate comments due to its high quality. This problem arises particularly with sites that focus on viral content, such as BuzzFeed and Huffington Post.
Original description
Since Warnock's original description of the dilemma in August 2000, the expression has come into use in the Perl community.
See also
Like button
Notes
Sources
External links
Warnock's later explanations
Mention in Wired Jargon Watch
Internet culture
Dilemmas |
https://en.wikipedia.org/wiki/Model-based%20systems%20engineering | Model-based systems engineering (MBSE), according to the International Council on Systems Engineering (INCOSE), is the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases. MBSE is a technical approach to systems engineering that focuses on creating and exploiting domain models as the primary means of information exchange, rather than on document-based information exchange. MBSE technical approaches are commonly applied to a wide range of industries with complex systems, such as aerospace, defense, rail, automotive, manufacturing, etc.
History
The first known prominent public usage of the term "Model-Based Systems Engineering" is a book of the same name by A. Wayne Wymore. The MBSE term was also commonly used within the SysML Partners consortium during 2003-2005, the formative years of their Systems Modeling Language (SysML) open-source specification project, to distinguish SysML from its parent language UML v2, which was software-centric and associated with the term Model-Driven Development (MDD). The standardization of SysML in 2006 resulted in widespread modeling tool support for it and for associated MBSE processes that emphasized SysML as their lingua franca.
In September 2007, the MBSE approach was further generalized and popularized when INCOSE introduced its "MBSE 2020 Vision", which was not restricted to SysML, and supported other competitive modeling language standards, such as AP233, HLA, and Modelica. According to the MBSE 2020 Vision: "MBSE is expected to replace the document-centric approach that has been practiced by systems engineers in the past and to influence the future practice of systems engineering by being fully integrated into the definition of systems engineering processes."
As of 2014, the scope of MBSE started to cover more Modeling an |
https://en.wikipedia.org/wiki/Direct%20cool | Direct cool is one of the two major types of techniques used in domestic refrigerators, the other being the "frost-free" type. Direct-cool refrigerators produce the cooling effect by a natural convection process from cooled surfaces in the insulated compartment that is being cooled. Water vapor that contacts the cooled surface freezes. Therefore, unlike frost-free units, direct-cool units require manual defrosting of the interior. Direct cool is less expensive to produce and to operate, as it consumes less energy than a frost-free refrigerator.
References
Refrigerants
2. Direct Cool Vs Frost Free Refrigerators – Know the Differences |
https://en.wikipedia.org/wiki/Software%20supply%20chain | A software supply chain is composed of the components, libraries, tools, and processes used to develop, build, and publish a software artifact.
Software vendors often create products by assembling open-source and commercial software components. A software bill of materials (SBOM) declares the inventory of components used to build a software artifact such as a software application. It is analogous to a list of ingredients on food packaging: just as a consumer might consult a label to avoid foods that cause allergies, an SBOM can help organizations or individuals avoid consuming software that could harm them.
The concept of a BOM is well-established in traditional manufacturing as part of supply chain management. A manufacturer uses a BOM to track the parts it uses to create a product. If defects are later found in a specific part, the BOM makes it easy to locate affected products.
Usage
An SBOM is useful both to the builder (manufacturer) and the buyer (customer) of a software product. Builders often leverage available open-source and third-party software components to create a product; an SBOM allows the builder to make sure those components are up to date and to respond quickly to new vulnerabilities. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product.
While many companies use a simple spreadsheet for general BOM management, an SBOM kept in a spreadsheet carries additional risks and issues. SBOMs gain greater value when stored collectively in a repository that can be part of other automation systems and easily queried by other applications. This need for automated SBOM processing is addressed by CycloneDX and Software Package Data Exchange (SPDX), both open document standards.
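A machine-readable SBOM in the CycloneDX style can be sketched as below. The component names and versions are illustrative, and only a few of the format's fields are shown; this is not a complete, valid CycloneDX document.

```python
import json

# Schematic CycloneDX-style SBOM (field names follow the open CycloneDX
# JSON format; contents are illustrative, not a real product inventory).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

# Once SBOMs live in a queryable store, automation can answer questions
# like "which products contain this component?" in one pass:
affected = [c for c in sbom["components"] if c["name"] == "openssl"]
print(json.dumps(affected))
```

This kind of query is exactly what becomes impractical when SBOMs are kept in ad hoc spreadsheets.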
Understanding the supply chain of software, obtaining an SBOM, and using it to analyze known vulnerabilities are crucial in managing risk.
Legislation
The Cyber Supply Chain Management and Transparency Act of |
https://en.wikipedia.org/wiki/Pierre%20Djibril%20Coulibaly | Pierre Djibril Coulibaly (born June 1957, Korhogo, Ivory Coast) is an Ivorian software engineer. He is managing director of Computer NEXAT, which he created in 2003 after twenty years at SIR and as a head of IT in education.
Career
Coulibaly was a founding member and vice president of the Federation of Inventors of the Ivory Coast (FEDINCI). In 2010, Coulibaly filed an application with the Organisation Africaine de la Propriété Intellectuelle (African Intellectual Property Organization) to obtain a patent for the universal computer management software design process (patent 15165 of 30 March 2010). His work on universal management software improves the accessibility of management software for business areas hitherto neglected by application developers.
Awards
Coulibaly won the Start Award for Quality in Geneva, Switzerland, and was made a Knight of the National Order of Côte d'Ivoire for the best tertiary-sector innovation at the 2013 Carrefour international innovation convention in Cotonou, Benin.
Coulibaly was honored by Financial Afrik as one of the 100 personalities who marked Africa in 2014.
References
1957 births
Living people
Software engineers
People from Korhogo
Ivorian engineers |
https://en.wikipedia.org/wiki/ScreenOS | ScreenOS is a real-time embedded operating system for the NetScreen range of hardware firewall devices from Juniper Networks.
Features
Besides transport-level security, ScreenOS also integrates these flow-management applications:
IP gateway VPN management – ICSA-certified IPsec
IP packet inspection (low level) for protection against TCP/IP attacks
Virtualization for network segmentation
Possible NSA backdoor and 2015 "Unauthorized Code" incident
In December 2015, Juniper Networks announced that it had found unauthorized code in ScreenOS that had been present since August 2012. The two backdoors it created would allow sophisticated attackers to control the firewall of unpatched Juniper NetScreen products and decrypt network traffic. At least one of the backdoors appeared likely to be the work of a government actor, and there was speculation in the security field about whether it was the NSA. Many in the security industry praised Juniper for being transparent about the breach, but Wired noted that the sparsity of the disclosed details, together with ScreenOS's use of a random number generator with known security flaws, could suggest that the backdoor was planted deliberately.
NSA and GCHQ
A 2011 leaked NSA document says that GCHQ had current exploit capability against the following ScreenOS devices: NS5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. The exploit capabilities seem consistent with the program codenamed FEEDTROUGH.
Versions
References
External links
ScreenOS Software Documentation
Embedded operating systems
Real-time operating systems
Network operating systems
Juniper Networks
Computer networking |
https://en.wikipedia.org/wiki/BOSH%20%28software%29 | BOSH is an open-source software project that offers a toolchain for release engineering, software deployment and application lifecycle management of large-scale distributed services. The toolchain is made up of a server (the BOSH Director) and a command line tool. BOSH is typically used to package, deploy and manage cloud software. While BOSH was initially developed by VMware in 2010 to deploy the Cloud Foundry PaaS, it can be used to deploy other software (such as Hadoop, RabbitMQ, or MySQL). BOSH is designed to manage the whole lifecycle of large distributed systems.
Since March 2016, BOSH can manage deployments on both Microsoft Windows and Linux servers.
A BOSH Director communicates with a single Infrastructure as a service (IaaS) provider to manage the underlying networking and virtual machines (VMs) (or containers). Several IaaS providers are supported: Amazon Web Services EC2, Apache CloudStack, Google Compute Engine, Microsoft Azure, OpenStack, and VMware vSphere.
To help support more underlying IaaS providers, BOSH uses the concept of a Cloud Provider Interface (CPI). There is an implementation of the CPI for each of the IaaS providers listed above. Typically the CPI is used to deploy VMs, but it can be used to deploy containers as well.
Few CPIs exist for deploying containers with BOSH, and only one is actively supported: a CPI that deploys Pivotal Software's Garden containers (Garden is very similar to Docker) on a single virtual machine run by VirtualBox or VMware Workstation. In theory, any other container engine could be supported if the necessary CPIs were developed.
Because BOSH supports deployments on either VMs or containers, it uses the generic term "instances" for both. It is up to the CPI whether a BOSH "instance" is actually a VM or a container.
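The CPI abstraction described above can be sketched as an interface that the Director calls without knowing which IaaS sits underneath. The method names and the in-memory stand-in below are illustrative, not the actual BOSH CPI RPC contract.

```python
from abc import ABC, abstractmethod

class CloudProviderInterface(ABC):
    """Illustrative CPI: one implementation exists per IaaS provider."""

    @abstractmethod
    def create_instance(self, stemcell_id: str, properties: dict) -> str:
        """Provision a VM (or container) and return its identifier."""

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None:
        """Tear the VM or container back down."""

class InMemoryCPI(CloudProviderInterface):
    # Toy stand-in so the interface can be exercised without a real IaaS.
    def __init__(self):
        self.instances = {}
        self._next = 0

    def create_instance(self, stemcell_id, properties):
        self._next += 1
        instance_id = f"vm-{self._next}"
        self.instances[instance_id] = (stemcell_id, properties)
        return instance_id

    def delete_instance(self, instance_id):
        del self.instances[instance_id]

cpi = InMemoryCPI()
vm = cpi.create_instance("stemcell-ubuntu", {"cpus": 2})
cpi.delete_instance(vm)
```

Swapping in a different `CloudProviderInterface` implementation is what lets the same deployment description target EC2, vSphere, or a container engine.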
Workflow
Once installed, a BOSH server accepts uploading root filesystems (called “stemcells”) and packages (called “releases”) to |
https://en.wikipedia.org/wiki/DataVault | The DataVault was Thinking Machines' mass storage system, storing five gigabytes of data, expandable to ten gigabytes with transfer rates of 40 megabytes per second. Eight DataVaults could be operated in parallel for a combined data transfer rate of 320 megabytes per second for up to 80 gigabytes of data.
Each DataVault unit stored its data in an array of 39 individual disk drives with data spread across the drives. Each 64-bit data chunk received from the I/O bus was split into two 32-bit words. After verifying parity, the DataVault controller added 7 bits of Error Correcting Code (ECC) and stored the resulting 39 bits on 39 individual drives. Subsequent failure of any one of the 39 drives would not impair reading of the data, since the ECC code allows any single bit error to be detected and corrected.
Although operation was possible with a single failed drive, three spare drives were available to replace failed units until they were repaired. The ECC codes permit 100% recovery of the data on any one failed disk, allowing a new copy of this data to be reconstructed and written onto the replacement disk. Once this recovery was complete, the database was considered healed.
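The single-error-correction principle can be illustrated with a Hamming code over one 32-bit word. This sketch uses 6 check bits (38 total positions) and corrects single-bit errors only; the DataVault's actual 7-bit code, which would also detect double-bit errors, and its physical drive layout are not reproduced here.

```python
PARITY_POSITIONS = (1, 2, 4, 8, 16, 32)  # power-of-two slots hold check bits

def hamming_encode(word32):
    # Scatter the 32 data bits over positions 1..38, skipping parity slots;
    # each position would live on its own drive.
    bits = {}
    d = 0
    for pos in range(1, 39):
        if pos not in PARITY_POSITIONS:
            bits[pos] = (word32 >> d) & 1
            d += 1
    # Each check bit covers every position whose index has that bit set.
    for p in PARITY_POSITIONS:
        bits[p] = sum(b for q, b in bits.items() if q & p) % 2
    return bits

def hamming_decode(bits):
    bits = dict(bits)
    # Syndrome = XOR of the positions of all set bits; for a valid codeword
    # it is zero, otherwise it pinpoints the single flipped position.
    syndrome = 0
    for pos, b in bits.items():
        if b:
            syndrome ^= pos
    if syndrome:
        bits[syndrome] ^= 1
    word = 0
    d = 0
    for pos in range(1, 39):
        if pos not in PARITY_POSITIONS:
            word |= bits[pos] << d
            d += 1
    return word

# Simulate one "drive" failing (its bit flipped) and full recovery.
code = hamming_encode(0xDEADBEEF)
code[17] ^= 1
recovered = hamming_decode(code)  # == 0xDEADBEEF
```

As in the DataVault, losing any one of the positions still leaves the original word fully recoverable.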
In today's terminology this would be labeled a RAID 2 subsystem; however, these units shipped before the term RAID was coined.
The DataVault was an example of unusual industrial design. Instead of the usual rectilinear box, the cabinet had a gentle curve that made it look like an information desk or a bartender's station.
References
External links
Computer storage devices
Thinking Machines Corporation |
https://en.wikipedia.org/wiki/Certified%20penetration%20testing%20engineer | Certified Penetration Testing Engineer (C)PTE) is an internationally recognized cyber security certification administered by the United States-based information security company Mile2. The accreditation maps to the Committee on National Security Systems' 4013 education certification. The C)PTE certification is considered one of five core cyber security certifications.
Accreditations
Obtaining the C)PTE certification requires proven proficiency and knowledge of five key elements of penetration testing: data collection, scanning, enumeration, exploitation and reporting.
The C)PTE certification is among the information assurance accreditations recognized by the U.S. National Security Agency. The certification has also been approved by the U.S. Department of Homeland Security's National Initiative for Cybersecurity Careers and Studies (NICCS) and the Committee on National Security Systems.
Examination
The online exam for C)PTE accreditation lasts two hours and consists of 100 multiple choice questions.
References
External links
Mile2 C)PTE website page
Beginners Guide to Penetration Testing
Computer security qualifications
Data security
Information technology qualifications |
https://en.wikipedia.org/wiki/Shared%20memory | In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors.
Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory.
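Operating systems expose this mechanism for inter-process communication. For example, Python's standard `multiprocessing.shared_memory` module (available since Python 3.8) lets separate processes attach to the same named block:

```python
from multiprocessing import shared_memory

# One process creates a named block of shared memory...
writer = shared_memory.SharedMemory(create=True, size=16)
writer.buf[:5] = b"hello"

# ...and another process attaches to it by name, seeing the same
# bytes without any copy being made.
reader = shared_memory.SharedMemory(name=writer.name)
print(bytes(reader.buf[:5]))  # b'hello'

reader.close()
writer.close()
writer.unlink()  # free the block once every process has detached
```

Here both handles live in one process for brevity; in practice the name would be passed to a second process, which attaches with `SharedMemory(name=...)` exactly as shown.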
In hardware
In computer hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system.
Shared memory systems may use:
uniform memory access (UMA): all the processors share the physical memory uniformly;
non-uniform memory access (NUMA): memory access time depends on the memory location relative to a processor;
cache-only memory architecture (COMA): the local memories of the processors at each node are used as cache instead of as actual main memory.
A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:
access time degradation: when several processors try to access the same memory location it causes contention. Trying to access nearby memory locations may cause false sharing. Shared memory computers cannot scale very well. Most of them have ten or fewer processors;
lack of data coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data. Such cache coherence protocols can, when they work well, provide extremely hig |
https://en.wikipedia.org/wiki/Ernst%20Schering%20Prize | The Ernst Schering Prize is awarded annually by the Ernst Schering Foundation for especially outstanding basic research in the fields of medicine, biology or chemistry anywhere in the world. Established in 1991 by the Ernst Schering Research Foundation, and named after the German apothecary and industrialist, Ernst Christian Friedrich Schering, who founded the Schering Corporation, the prize is now worth €50,000.
Recipients
Source: Schering Foundation
1992 , (Center for Molecular Biology, University of Heidelberg, Germany)
1993 Christiane Nüsslein-Volhard, (Max Planck Institute for Developmental Biology in Tübingen, Germany)
1994 Bert Vogelstein, (Oncology Center, Johns Hopkins University, Baltimore, Maryland, US)
1995 Yasutomi Nishizuka, (Kobe University, Japan)
1996 Judah Folkman, (Harvard Medical School, Harvard University, Boston, US)
1997 Johann Mulzer, (Institute for Organic Chemistry, University of Vienna, Austria)
1998 Ilme Schlichting, (Max Planck Institute for Molecular Physiology in Dortmund, Germany)
1999 Michael Berridge, (Babraham Institute in Cambridge, UK)
2000 , (University of Tokyo, Japan)
2001 Kyriacos Nicolaou, (University of California, San Diego, California, and The Scripps Research Institute, La Jolla, California, US)
2002 Ian Wilmut, (The Roslin Institute in Edinburgh, UK)
2003 Svante Pääbo, (Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany)
2004 , (National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, Maryland, US)
2005 Thomas Tuschl, (Laboratory of RNA Molecular Biology, Rockefeller University, New York)
2006 Wolfgang Baumeister, (Max Planck Institute of Biochemistry in Martinsried, Germany)
2007 Carolyn Bertozzi, (University of California, Berkeley, US)
2008 Klaus Rajewsky, (Harvard Medical School, Boston, US)
2009 Rudolf Jaenisch, (Whitehead Institute, Cambridge, Massachusetts, US)
2010 Marc Feldmann and Sir Ravinder Maini, (Kennedy Institute of Rheumatology at Imperial College London, UK)
201 |
https://en.wikipedia.org/wiki/AirPair | AirPair is a service and eponymous company that connects people who need help with programming issues (usually, programmers at small technology companies or at finance companies that use technology products) and people who can help them. Unlike services such as oDesk and Elance, AirPair is not a service for outsourcing programming tasks, but rather a service that facilitates one-off knowledge transfers from people with highly specialized knowledge of particular technology stacks or programming issues to people who are in need of specialized help.
History
AirPair launched in March 2013, with founder Jonathon Kresner, who hails from Australia, working full-time, and it soon hired three other part-time developers to work alongside him. Kresner had previously founded two other startups: Preparty, a social invitation and event-booking service based in Australia, and ClimbFind, an online rock-climbing community that reached a million users. Kresner was inspired to work on AirPair because he saw the need for outside expert assistance with programming issues arise regularly at these startups.
In a November 2013 blog post, Kresner described the company's initial success in bootstrapping itself to "ramen profitability". In December 2013, AirPair was accepted into the Winter 2014 Y Combinator batch.
In March 2014, AirPair announced it would launch partnerships with Stripe, Twilio, and other companies that had their own application programming interfaces, allowing developers having trouble with the APIs to seek help over AirPair from experts on the APIs.
AirPair presented at the Y Combinator Winter 2014 Demo Day on March 25, 2014, and successfully raised over $1 million within the next 48 hours.
Reception
A review of AirPair by Will Lam stressed that because payment was based on time rather than results, it was important to use it for clearly thought-out questions where one had high confidence that the session would help.
Dennis Beatty, who met AirPair founde |
https://en.wikipedia.org/wiki/Impedance%20analyzer | An impedance analyzer is a type of electronic test equipment used to measure complex electrical impedance as a function of test frequency.
Impedance is an important parameter used to characterize electronic components, electronic circuits, and the materials used to make components. Impedance analysis can also be used to characterize materials exhibiting dielectric behavior such as biological tissue, foodstuffs or geological samples.
Impedance analyzers come in three distinct hardware implementations; together, these can probe from ultra-low to ultra-high frequencies and can measure impedances from µΩ to TΩ.
Operation
Impedance analyzers are a class of instruments which measure complex electrical impedance as a function of frequency. This involves the phase sensitive measurement of current and voltage applied to a device under test while the measurement frequency is varied over the course of the measurement. Key specifications of an impedance analyzer are the frequency range, impedance range, absolute impedance accuracy and phase angle accuracy. Further specifications include the ability to apply voltage bias and current bias while measuring, and the measurement speed.
Impedance analyzers typically offer highly accurate impedance measurements, e.g. with a basic accuracy of up to 0.05%, and a frequency measurement range from µHz to GHz. Impedance values can range over many decades from µΩ to TΩ, whereas the phase angle accuracy is in the range of 10 millidegrees. Measured impedance values include absolute impedance, the real and imaginary parts of the measured impedance, and the phase between the voltage and current. Model-derived impedance parameters such as conductance, inductance and capacitance are calculated based on a replacement circuit model and subsequently displayed.
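The quantity such an instrument sweeps can be illustrated for a simple series RC network, whose complex impedance is Z(ω) = R + 1/(jωC) with ω = 2πf. The component values below are illustrative.

```python
import cmath
import math

def series_rc_impedance(r_ohm, c_farad, f_hz):
    # Z = R + 1/(jωC): resistive real part plus capacitive reactance.
    omega = 2 * math.pi * f_hz
    return r_ohm + 1 / (1j * omega * c_farad)

# 100 Ω in series with 1 µF, measured at 1 kHz.
z = series_rc_impedance(100.0, 1e-6, 1000.0)
magnitude = abs(z)                        # absolute impedance |Z| in ohms
phase_deg = math.degrees(cmath.phase(z))  # phase angle between V and I
```

An analyzer effectively evaluates this magnitude and phase at each frequency of a sweep, then fits a replacement circuit model to the resulting curve.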
LCR meters also provide impedance measurement functionality, typically with similar accuracy but lower frequency range. The measurement frequency of LCR meters |
https://en.wikipedia.org/wiki/Palestine%20grid | The Palestine grid was the geographic coordinate system used by the Survey Department of Palestine.
The system was chosen by the Survey Department of the Government of Palestine in 1922. The projection used was the Cassini-Soldner projection. The central meridian (the line of longitude along which there is no local distortion) was chosen as that passing through a marker on the hill of Mar Elias Monastery south of Jerusalem. The false origin (zero point) of the grid was placed 100 km to the south and west of the Ali el-Muntar hill that overlooks Gaza city. The unit length for the grid was the kilometre; the British units were not even considered.
At the time the grid was established, there was no intention of mapping the lower reaches of the Negev Desert, but this later changed. Those southern regions, having a negative north-south coordinate, then became a source of confusion, which was resolved by adding 1,000 to the northing in such cases. For some military purposes, 1,000 was added to the north-south coordinates of all locations, so that they ranged uniformly from about 900 to about 1,300.
During World War II, a Military Palestine Grid was used that was similar to the Palestine Grid but used the transverse Mercator projection. The difference between the two projections was only a few metres.
After the establishment of the State of Israel, the Palestine grid continued to be used under the name of the Israel Grid or the Israeli Cassini Soldner (ICS) grid, now called the "Old Israeli Grid", with 1,000 km added to the northing component to make the north-south range continuous. It was replaced by the Israeli Transverse Mercator grid in 1994. The Palestine grid is still commonly used to specify locations in the historical and archaeological literature.
Specifying locations
The basic way of specifying a location on the Palestine grid is to write the east-west coordinate followed by the north-south coordinate using 3 digits each. For example, |
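The six-digit scheme above can be sketched as a small parser; the grid reference used in the example is illustrative, not a documented landmark.

```python
def parse_palestine_grid(ref):
    # A basic grid reference is six digits: the first three are the
    # east-west coordinate, the last three the north-south coordinate,
    # both in kilometres from the grid's false origin.
    if len(ref) != 6 or not ref.isdigit():
        raise ValueError("expected a six-digit grid reference")
    return int(ref[:3]), int(ref[3:])

east, north = parse_palestine_grid("167126")  # illustrative reference
```

Each coordinate names a 1 km grid square; finer precision was obtained by appending further digits to each half of the reference.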
https://en.wikipedia.org/wiki/Cubic%20mean | The cubic mean is a specific instance of the generalized mean with exponent p = 3.
Definition
For real numbers x1, x2, ..., xn the cubic mean is defined as:
x_cubic = ((x1^3 + x2^3 + ... + xn^3) / n)^(1/3)
For example, the cubic mean of two numbers x and y is:
((x^3 + y^3) / 2)^(1/3).
Applications
It is used for predicting the life expectancy of machine parts.
The cubic mean wind speed has been used as a measure of the local potential for wind energy.
The cubic mean is also used in biology to measure the mean dimensions of spherical bacteria (cocci) and of larger animals that are (approximately) spheroidal in shape. In this case using the conventional arithmetic mean will not give an accurate result because the size of a spherical bacterium increases as the cube of the radius.
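The definition is direct to compute; a minimal sketch for non-negative inputs:

```python
def cubic_mean(values):
    # Generalized mean with p = 3, for non-negative real inputs:
    # ((x1^3 + ... + xn^3) / n) ** (1/3)
    return (sum(x ** 3 for x in values) / len(values)) ** (1.0 / 3.0)

cubic_mean([1.0, 2.0])  # ((1 + 8) / 2) ** (1/3) ≈ 1.651
```

Note that the result (≈1.651) exceeds the arithmetic mean (1.5), reflecting the cube's extra weight on larger values such as the radius of a bigger bacterium.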
References
Means |
https://en.wikipedia.org/wiki/Congressional%20App%20Challenge | The Congressional Science Technology, Engineering and Math (STEM) Academic Competition, also known as the House App Contest or Congressional App Challenge, allows middle and high school students in participating congressional districts to compete in an annual application software ("app") development contest. Students are encouraged to design an app using any programming language on any platform, with no limits on topic or function. Winners from congressional districts have their apps featured online and in the United States Capitol Building and are invited to attend the annual #HouseofCode event.
History
The challenge was established by the United States House of Representatives in 2013 under the "Academic Competition Resolution of 2013" as a bipartisan effort to engage student creativity and participation in STEM education fields, in a similar fashion to the Congressional Art Competition. The resolution passed with 99% support – a vote of 411 to 3 – and outlined how and at what interval the competition would be hosted. The Congressional Internet Caucus Advisory Committee introduced the concept for the Congressional App Challenge in 2013, and the challenge was co-chaired by Congressional Internet Caucus co-chairs Rep. Bob Goodlatte and Rep. Anna Eshoo. Today, the Congressional App Challenge is managed by the non-profit Internet Education Foundation, in partnership with the House of Representatives. In its inaugural year, 84 congressional districts in 31 states and DC recognized 212 students for creating 109 apps. The 2018 Challenge had 5,229 students submit 1,715 apps in 222 congressional districts.
Demographics
With its focus on increasing diversity and inclusion within the computer science field, the Congressional App Challenge enrolls a large number of underrepresented minority, female, and rural students with various experience in coding. In 2018, 8% of participants were Black, 15% Hispanic, and 3% American Indian, and 36% of participants wer |
https://en.wikipedia.org/wiki/Alex%20Bateman | Alexander George Bateman is a computational biologist and Head of Protein Sequence Resources at the European Bioinformatics Institute (EBI), part of the European Molecular Biology Laboratory (EMBL) in Cambridge, UK. He has led the development of the Pfam biological database and introduced the Rfam database of RNA families. He has also been involved in the use of Wikipedia for community-based annotation of biological databases.
Education
Bateman received a Bachelor of Science degree in Biochemistry from Newcastle University in 1994. He received his PhD from the University of Cambridge in 1997, for research supervised by Cyrus Chothia at the MRC Laboratory of Molecular Biology (LMB) on the evolution of the immunoglobulin protein superfamily. During this time, he also worked with Sean Eddy to discover novel protein domains using the HMMER software.
Career and research
In 1997, Bateman joined the Wellcome Trust Sanger Institute to lead the development of the Pfam biological database. In 2003, he introduced the Rfam database of RNA families. He was also involved in providing protein analysis for the publication of the human genome.
As of 2012, he has been Head of Protein Sequence Resources at EMBL-EBI.
Bateman has also been involved in promoting the use of Wikipedia within the science community and in particular, community-based annotation of biological databases through Wikipedia, for example, annotation of the Rfam database through WikiProject RNA.
Bateman served as Executive Editor of the journal Bioinformatics from 2004 to 2012 and has also served as Editor of Nucleic Acids Research, Genome Biology and Current Protocols in Bioinformatics. In 2014, he was appointed one of the first Honorary Editors of Bioinformatics. As of 2015, Bateman also serves on the ISCB Board of Directors.
Awards and honours
Bateman was awarded the 2010 Benjamin Franklin Award in bioinformatics. He became the third former member of Richard Durbin's lab to win the award, following Sean Edd |
https://en.wikipedia.org/wiki/Photolith%20film | A photolith film is a transparent film, made with some sort of transparent plastic (formerly made of acetate). Nowadays, with the use of laser printers and computers, the photolith film can be based on polyester, vegetable paper or laser film paper. It is mainly used in all photolithography processes.
A color (polychromatic) image is separated into four basic colors: cyan, magenta, yellow and black (the so-called CMYK system, short for cyan, magenta, yellow and black), generating four photolith film images: one photo filtered with each of the three basic colors, plus a B&W film (the addition of the three). For black-and-white images, such as text or simple logos, only one photolith film is needed.
The photolith film is sometimes recorded by an optical laser process on an imagesetter machine, working from a digital file, or by a photographic process in a contact copier, if a physical copy of the original already exists. In old offset printing, plates acquire the text or images to be printed after being sensitized through a photolith film.
Photolith films, as well as vegetable paper and laser films, are used to store the content of plates, screens or other light-sensitive media as a backup, so their processes can be repeated in the future. They normally store the information of the three or four separated colours on monochrome photolith films.
See also
Azo compound
Contact copier
Ozalid
Diazo copier
References
Animation techniques
Non-impact printing
Technical drawing
Infographics |
https://en.wikipedia.org/wiki/MPLAB%20devices | The MPLAB series of devices are programmers and debuggers for Microchip PIC and dsPIC microcontrollers, developed by Microchip Technology.
The ICD family of debuggers has been produced since the release of the first Flash-based PIC microcontrollers, and the latest, the ICD 3, supports all current PIC and dsPIC devices. It is the most popular combination debugging/programming tool from Microchip.
The REAL ICE emulator is similar to the ICD, with the addition of better debugging features, and various add-on modules that expand its usage scope. The ICE is a family of discontinued in-circuit emulators for PIC and dsPIC devices, and is currently superseded by the REAL ICE.
MPLAB ICD
The MPLAB ICD is the first in-circuit debugger product by Microchip, and is currently discontinued and superseded by ICD 2. The ICD connected to the engineer's PC via RS-232, and connected to the device via ICSP.
The ICD supported devices within the PIC16C and PIC16F families, and supported full speed execution, or single step interactive debugging. Only one hardware breakpoint was supported by the ICD.
MPLAB ICD 2
The MPLAB ICD 2 is a discontinued in-circuit debugger and programmer by Microchip, and is currently superseded by ICD 3. The ICD 2 connects to the engineer's PC via USB or RS-232, and connects to the device via ICSP.
The ICD 2 supports most PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. At breakpoints, data and program memory can be read and modified using the MPLAB IDE. The ICD 2 firmware is field upgradeable using the MPLAB IDE.
The ICD 2 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. Target device voltages from 2.0V to 6.0V are supported.
MPLAB ICD 3
The MPLAB ICD 3 is an in-circuit debugger and programmer by Microchip, and is the latest in the ICD series. The ICD 3 |
https://en.wikipedia.org/wiki/Stochastic%20empirical%20loading%20and%20dilution%20model | The stochastic empirical loading and dilution model (SELDM) is a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. Although SELDM is, nominally, a highway-runoff model, it can be used to estimate flows, concentrations, and loads of runoff-quality constituents from other land-use areas as well. Because SELDM was developed by the U.S. Geological Survey, the model, source code, and all related documentation are provided free of any copyright restrictions according to U.S. copyright laws and the USGS Software User Rights Notice. SELDM is widely used to assess the potential effect of runoff from highways, bridges, and developed areas on receiving-water quality, with and without the use of mitigation measures. Stormwater practitioners evaluating highway runoff commonly use data from the Highway Runoff Database (HRDB) with SELDM to assess the risks for adverse effects of runoff on receiving waters.
SELDM is a stochastic mass-balance model. A mass-balance approach (figure 1) is commonly applied to estimate the concentrations and loads of water-quality constituents in receiving waters downstream of an urban or highway-runoff outfall. In a mass-balance model, the loads from the upstream basin and runoff source area are added to calculate the discharge, co |
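The mass-balance step described above — summing the upstream and runoff loads, then dividing by the combined discharge — can be sketched deterministically. SELDM itself treats these quantities as stochastic populations; the function and variable names here are illustrative, not SELDM's actual API:

```python
def downstream_emc(q_up, c_up, q_runoff, c_runoff):
    """Flow-weighted mass-balance mix of upstream water and runoff.

    q_up, q_runoff : event flow volumes (same units)
    c_up, c_runoff : event mean concentrations (same units, e.g. mg/L)
    """
    load = q_up * c_up + q_runoff * c_runoff  # summed constituent loads
    flow = q_up + q_runoff                    # combined discharge
    return load / flow

# A small runoff pulse (10 units at 50 mg/L) diluted by cleaner
# upstream flow (90 units at 1 mg/L):
print(downstream_emc(90.0, 1.0, 10.0, 50.0))  # 5.9
```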
https://en.wikipedia.org/wiki/Vector%20addition%20system | A vector addition system (VAS) is one of several mathematical modeling languages for the description of distributed systems. Vector addition systems were introduced by Richard M. Karp and Raymond E. Miller in 1969, and generalized to vector addition systems with states (VASS) by John E. Hopcroft and Jean-Jacques Pansiot in 1979. Both VAS and VASS are equivalent in many ways to Petri nets introduced earlier by Carl Adam Petri. Reachability in vector addition systems is Ackermann-complete (and hence nonelementary).
Informal definition
A vector addition system consists of a finite set of integer vectors. An initial vector is seen as the initial values of multiple counters, and the vectors of the VAS are seen as updates. These counters may never drop below zero. More precisely, given an initial vector with non-negative values, the vectors of the VAS can be added componentwise, provided that every intermediate vector has non-negative values. A vector addition system with states is a VAS equipped with control states. More precisely, it is a finite directed graph with arcs labelled by integer vectors. VASS have the same restriction that the counter values should never drop below zero.
Formal definitions and basic terminology
A VAS is a finite set W ⊆ Z^d of integer vectors, for some d ≥ 1.
A VASS is a finite directed graph (Q, T) with arcs labelled by integer vectors, i.e. T ⊆ Q × Z^d × Q, for some d ≥ 1.
Transitions
Let W be a VAS. Given a vector v ∈ N^d, the vector v' can be reached, in one transition, if there is some w ∈ W with v + w = v' and v' ∈ N^d.
Let (Q, T) be a VASS. Given a configuration (q, v) ∈ Q × N^d, the configuration (q', v') can be reached, in one transition, if (q, w, q') ∈ T, v + w = v', and v' ∈ N^d.
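The counter semantics above can be illustrated with a bounded breadth-first search over configurations. All names are illustrative, and note that unbounded VAS reachability is far harder (Ackermann-complete, as stated in the introduction), so this is only a toy exploration:

```python
from collections import deque

def reachable(vas, start, target, max_steps=10):
    """Bounded breadth-first search over VAS configurations.

    vas    : list of integer update vectors (tuples of equal length)
    start  : initial counter vector with non-negative entries
    Counters may never drop below zero, so any update that would
    make a component negative is discarded.
    """
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        vec, steps = frontier.popleft()
        if vec == target:
            return True
        if steps == max_steps:
            continue
        for w in vas:
            nxt = tuple(v + d for v, d in zip(vec, w))
            if all(v >= 0 for v in nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + 1))
    return False

# Two counters: (2, 0) reaches (0, 1) via (-1, +1) then (-1, 0),
# and every intermediate vector stays non-negative.
print(reachable([(-1, 1), (-1, 0)], (2, 0), (0, 1)))  # True
```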
See also
Petri net
Finite state machine
Communicating finite-state machine
Kahn process networks
Process calculus
Actor model
Trace theory
References
Formal specification languages
Models of computation
Concurrency (computer science)
Diagrams
Software modeling language |
https://en.wikipedia.org/wiki/John%20M.%20Sullivan%20%28mathematician%29 | John Matthew Sullivan (born February 25, 1963) is an American mathematician who works in Germany as a professor at the Technical University of Berlin. His research includes work on knot theory, constant-mean-curvature surfaces, mathematical foams, scientific visualization, and mesh generation.
Early life and education
Sullivan was born in Princeton, New Jersey, and graduated summa cum laude from Harvard University in 1985. He earned a master's degree from the University of Cambridge in 1986, and a doctorate from Princeton University in 1990 under the supervision of Frederick J. Almgren, Jr.
Career
After postdoctoral studies at The Geometry Center and the Mathematical Sciences Research Institute, Sullivan joined the faculty of the University of Illinois at Urbana–Champaign in 1997. He moved to Berlin in 2003, and chaired the Berlin Mathematical School from 2012 to 2014.
Awards
In 2012, he became one of the inaugural Fellows of the American Mathematical Society.
References
External links
Website at TU Berlin
Mathematical art gallery, 2010 Bridges conference
1963 births
Living people
People from Princeton, New Jersey
21st-century German mathematicians
Harvard University alumni
Alumni of the University of Cambridge
Princeton University alumni
University of Illinois Urbana-Champaign faculty
Academic staff of the Technical University of Berlin
Fellows of the American Mathematical Society
20th-century American mathematicians
21st-century American mathematicians
Mathematical artists |
https://en.wikipedia.org/wiki/Starlink | Starlink is a satellite internet constellation operated by American aerospace company SpaceX, providing coverage to over 60 countries. It also aims for global mobile phone service after 2023.
SpaceX started launching Starlink satellites in 2019. As of August 2023, it consists of over 5,000 mass-produced small satellites in low Earth orbit (LEO), which communicate with designated ground transceivers. Nearly 12,000 satellites are planned to be deployed, with a possible later extension to 42,000. SpaceX announced reaching more than 1 million subscribers in December 2022, 1.5 million subscribers in May 2023, and 2 million subscribers in September 2023. It has had a key role in the Russo-Ukrainian War.
The SpaceX satellite development facility in Redmond, Washington, houses the Starlink research, development, manufacturing and orbit control facilities. In May 2018, SpaceX estimated the cost of designing, building and deploying the constellation to be at least US$10 billion. In January 2017, SpaceX expected more than $30 billion in revenue by 2025 from its satellite constellation, while revenues from its launch business were expected to reach $5 billion in the same year.
Astronomers have raised concerns about the effect the constellation may have on ground-based astronomy, and how the satellites will add to an already congested orbital environment. SpaceX has attempted to mitigate astronomical interference concerns with measures to reduce the satellites' brightness during operation. The satellites are equipped with Hall-effect thrusters that allow them to raise their orbits, station-keep, and de-orbit at the end of their lives. They are also designed to autonomously and smoothly avoid collisions based on uplinked tracking data.
History
Background
Constellations of low Earth orbit satellites were first conceptualized in the mid-1980s as part of the Strategic Defense Initiative, culminating in Brilliant Pebbles, where weapons were to be stage |
https://en.wikipedia.org/wiki/Generalized%20spectrogram | To view a signal (taken to be a function of time) over both the time and frequency axes, a time–frequency representation is used. The spectrogram is one of the most popular time–frequency representations, and the generalized spectrogram, also called the "two-window spectrogram", is a generalization of the spectrogram.
Definition
The definition of the spectrogram relies on the Gabor transform (also called the short-time Fourier transform, STFT), whose idea is to localize a signal f in time by multiplying it with translations of a window function φ.
The definition of the spectrogram is
Sp_φ(f)(t, ω) = V_φ f(t, ω) · conj(V_φ f(t, ω)) = |V_φ f(t, ω)|²,
where V_φ f denotes the Gabor transform of f.
Based on the spectrogram, the generalized spectrogram is defined as:
Sp_{φ,ψ}(f)(t, ω) = V_φ f(t, ω) · conj(V_ψ f(t, ω)),
where:
φ and ψ are two window functions.
For φ = ψ, it reduces to the classical spectrogram:
Sp_{φ,φ}(f)(t, ω) = |V_φ f(t, ω)|²
The feature of the generalized spectrogram is that the window sizes of φ and ψ differ. Since the time–frequency resolution is affected by the window size, choosing a wide φ and a narrow ψ (or the opposite) yields high resolution in different parts of the spectrogram. Multiplying the two Gabor transforms then enhances resolution along both the time and frequency axes.
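As a numerical sketch of the two-window idea: compute two Gabor transforms of the same signal on a shared frequency grid and multiply one by the conjugate of the other. The function names, Gaussian window shapes, and FFT size below are our illustrative choices, not part of the formal definition:

```python
import numpy as np

NFFT = 256  # common frequency grid so the two STFTs can be multiplied

def gabor_transform(x, window, nfft=NFFT):
    """Discrete Gabor transform (STFT): slide the odd-length window
    along x, centering it at each sample, and FFT every frame."""
    half = len(window) // 2
    xp = np.pad(x, half)                       # zero-pad the edges
    frames = np.array([xp[t:t + len(window)] * window
                       for t in range(len(x))])
    return np.fft.fft(frames, n=nfft, axis=1)  # rows: time, cols: frequency

def generalized_spectrogram(x, win_wide, win_narrow):
    """Two-window spectrogram: STFT with a wide window (sharp in
    frequency) times the conjugate STFT with a narrow window
    (sharp in time)."""
    return (gabor_transform(x, win_wide)
            * np.conj(gabor_transform(x, win_narrow)))

# Test tone at 0.2 cycles/sample; its energy should concentrate
# near frequency bin 0.2 * NFFT ≈ 51.
t = np.arange(256)
x = np.cos(2 * np.pi * 0.2 * t)
wide = np.exp(-0.5 * ((np.arange(101) - 50) / 20.0) ** 2)
narrow = np.exp(-0.5 * ((np.arange(11) - 5) / 2.0) ** 2)
S = generalized_spectrogram(x, wide, narrow)
```

The wide window pins the peak tightly in frequency while the narrow window keeps it tightly localized in time, which is the resolution trade-off the text describes.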
Properties
Relation with Wigner Distribution
where
Time marginal condition
The generalized spectrogram satisfies the time marginal condition if and only if ,
where denotes the Dirac delta function
Frequency marginal condition
The generalized spectrogram satisfies the frequency marginal condition if and only if ,
where denotes the Dirac delta function
Conservation of energy
The generalized spectrogram satisfies the conservation of energy if and only if .
Reality analysis
The generalized spectrogram is real if and only if for some .
References
Class notes of Time frequency analysis and wavelet transform -- from Prof. Jian-Jiun Ding's course website
P. Boggiatto, G. De Donno, and A. Oliaro, “Two window spectrogram and their integrals," Advances and Applications, vol. 205, pp. 251–268, 2009.
Time |
https://en.wikipedia.org/wiki/RAMiCS | RAMiCS, the International Conference on Relational and Algebraic Methods in Computer Science, is an academic conference organized every eighteen months by an international steering committee and held in different locations mainly in Europe, but also in other continents. Like most theoretical computer science conferences, its contributions are strongly peer-reviewed. Proceedings of the conferences appear in Lecture Notes in Computer Science, and some of the stronger papers have been published in Journal of Logical and Algebraic Methods in Programming.
Early history
RAMiCS, then still called RelMiCS, was first organized by Chris Brink and Gunther Schmidt on January 17–21, 1994 in Schloß Dagstuhl, Germany as the International Seminar on Relational Methods in Computer Science. The second RelMiCS was organized by the late Armando Haeberer and held July 10–14, 1995 in Paraty near Rio de Janeiro, Brazil. The 3rd International Seminar on the Use of Relational Methods in Computer Science (RelMiCS 3) was held January 6–10, 1997 at the Albatros Hotel in Hammamet, Tunisia. A 4th International Seminar on Relational Methods in Computer Science (RelMiCS 4) took place September 14–20, 1998 at the Stefan Banach International Mathematical Centre, Warsaw, Poland. The 5th International Seminar on Relational Methods in Computer Science (RelMiCS 5) occurred January 9–14, 2000 at Valcartier near Québec, Canada. From that point on, publication was arranged with Springer in the series Lecture Notes in Computer Science.
See also
Calculus of relations
Binary relation
Heterogeneous relation
List of computer science conferences
References
Theoretical computer science conferences
Relational algebra
Recurring events established in 1994 |
https://en.wikipedia.org/wiki/Michael%20Goldsmith%20%28computer%20scientist%29 | Michael Goldsmith (born 1959) is a British computer scientist, senior research fellow and Lecturer at the University of Oxford, England.
He is a member of Oxford University's Department of Computer Science. He is an associate director of Oxford University's Cyber Security Centre, and an Oxford Martin Fellow of The Global Cyber Security Capacity Centre. He is a fellow of Worcester College, Oxford.
Career
Goldsmith is a senior research fellow at the University of Oxford's Department of Computer Science. From 2006 to 2011 he was Principal Fellow: High-Integrity Techniques in the e-Security Group of the WMG Digital Laboratory at the University of Warwick.
Publications
Goldsmith's publications cover security, cryptography in general, CSP, and formal methods in particular.
References
External links
1959 births
Living people
Alumni of the University of Oxford
British computer scientists
Members of the Department of Computer Science, University of Oxford
Fellows of Worcester College, Oxford
Formal methods people
Place of birth missing (living people) |
https://en.wikipedia.org/wiki/User%20activity%20monitoring | In the field of information security, user activity monitoring (UAM) or user activity analysis (UAA) is the monitoring and recording of user actions. UAM captures user actions, including the use of applications, windows opened, system commands executed, checkboxes clicked, text entered/edited, URLs visited and nearly every other on-screen event to protect data by ensuring that employees and contractors are staying within their assigned tasks, and posing no risk to the organization.
User activity monitoring software can deliver video-like playback of user activity and process the videos into user activity logs that keep step-by-step records of user actions that can be searched and analyzed to investigate any out-of-scope activities.
Issues
The need for UAM rose due to the increase in security incidents that directly or indirectly involve user credentials, exposing company information or sensitive files. In 2014, there were 761 data breaches in the United States, resulting in over 83 million exposed customer and employee records. With 76% of these breaches resulting from weak or exploited user credentials, UAM has become a significant component of IT infrastructure. The main populations of users that UAM aims to mitigate risks with are:
Contractors
Contractors are used in organizations to complete information technology operational tasks. Remote vendors that have access to company data are risks. Even with no malicious intent, an external user like a contractor is a major security liability.
Users
70% of regular business users admitted to having access to more data than necessary. Generalized accounts give regular business users access to classified company data. This makes insider threats a reality for any business that uses generalized accounts.
IT users
Administrator accounts are heavily monitored due to the high-profile nature of their access. However, current log tools can generate “log fatigue” on these admin accounts. Log fatigue is the overwhelming |
https://en.wikipedia.org/wiki/Polynomial%20Wigner%E2%80%93Ville%20distribution | In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function. It was proposed by Boualem Boashash and Peter O'Shea in 1994.
Introduction
Many signals in nature and in engineering applications can be modeled as , where is a polynomial phase and .
For example, it is important to detect signals of arbitrarily high-order polynomial phase. However, the conventional Wigner–Ville distribution has the limitation of being based on second-order statistics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, one able to deal with signals with nonlinear phase.
Definition
The polynomial Wigner–Ville distribution is defined as
where denotes the Fourier transform with respect to , and is the polynomial kernel given by
where is the input signal and is an even number.
The above expression for the kernel may be rewritten in symmetric form as
The discrete-time version of the polynomial Wigner–Ville distribution is given by the discrete Fourier transform of
where and is the sampling frequency.
The conventional Wigner–Ville distribution is a special case of the polynomial Wigner–Ville distribution with
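A minimal sketch of the conventional (second-order) Wigner–Ville distribution — the special case mentioned above — assuming a complex analytic input; the function and variable names are illustrative:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner–Ville distribution W[n, k] of a complex signal x.

    W[n, :] is the FFT over the lag m of x[n + m] * conj(x[n - m]),
    using only lags for which both indices stay inside the signal.
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # largest usable lag at time n
        kernel = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real      # kernel is conjugate-symmetric
    return W

# A complex exponential at normalized frequency 0.1: the lag kernel
# oscillates at twice the signal frequency, so the energy concentrates
# near FFT bin 2 * 0.1 * N.
n = np.arange(128)
x = np.exp(2j * np.pi * 0.1 * n)
W = wigner_ville(x)
```

The factor-of-two in the frequency axis is a well-known bookkeeping quirk of the lag kernel; practical implementations rescale the axis accordingly.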
Example
One of the simplest generalizations of the usual Wigner–Ville distribution kernel can be achieved by taking . The set of coefficients and must be found to completely specify the new kernel. For example, we set
The resulting discrete-time kernel is then given by
Design of a Practical Polynomial Kernel
Given a signal , where is a polynomial function, its instantaneous frequency (IF) is .
For a practical polynomial kernel , the set of coefficients and should be chosen properly such that
When ,
When
Applications
Nonlinear FM signals are common both in nature and in engineering applications. For example, the sonar system of some bats use hyperbolic FM and quadratic FM signals for e |
https://en.wikipedia.org/wiki/Electronic%20Information%20Exchange%20System | The Electronic Information Exchange System (EIES, pronounced eyes) was an early online conferencing bulletin board system that allowed real-time and asynchronous communication. The system was used to deliver courses, conduct conferencing sessions, and facilitate research. Funded by the National Science Foundation and developed from 1974 to 1978 at the New Jersey Institute of Technology (NJIT) by Murray Turoff, based on his earlier EMISARI done at the now-defunct Office of Emergency Preparedness, EIES was intended to facilitate group communications that would allow groups to make decisions based on their collective intelligence rather than the lowest common denominator. Initially conceived as an experiment in computer-mediated communication, EIES remained in use for decades because its users "just wouldn't let go" of it, eventually adapting it for legislative, medical and even spiritual uses.
Technology
In the mid-1980s, a new version called EIES-2 was developed to research the implementation of group communications in distributed environments, versus the centralized time-sharing environment used for the first version. EIES-2 had an object database architecture using over 2 dozen classes and implementing a notion of activities, which was a standardized interface for implementing nonstandard functions such as polls or list-gathering. The activities concept was similar to what would be done in today's message board applications using plug-ins. The standard message-based functions were also implemented as activities. EIES-2 ran on Unix and was written in the programming languages C and Smalltalk. EIES-2 used the X.400 database standards. Accounts were available to the public for a monthly fee of USD $75 plus connect-time charges.
Influence
In his book The Virtual Community, Howard Rheingold called EIES "the lively great-great-grandmother of all virtual communities". EIES was one of the earliest instances of groupware, if not the earliest, and some users contend it is wher |
https://en.wikipedia.org/wiki/Bioclaustration | Bioclaustration is a kind of interaction in which one organism (usually soft-bodied) is embedded in a living substrate (i.e. the skeleton of another organism); the term means "biologically walled up". In the case of symbiosis, the walling-up is not complete and both organisms stay alive (Palmer and Wilson, 1988).
References
Ecology
Ecology terminology
Symbiosis
Trace fossils |
https://en.wikipedia.org/wiki/OpenLava | OpenLava is a workload job scheduler for a cluster of computers. OpenLava was pirated from an early version of Platform LSF. Its configuration file syntax, application program interface (API), and command-line interface (CLI) have been kept unchanged. Therefore, OpenLava is mostly compatible with Platform LSF.
OpenLava was based on the Utopia research project at the University of Toronto.
OpenLava was allegedly licensed under GNU General Public License v2, but that licensing was proven to be invalid and illegal at trial.
History
In 2007, Platform Computing (now part of IBM) released Platform Lava 1.0, which is a simplified version of Platform LSF 4.2 code, licensed under GNU General Public License v2. Platform Lava had no additional releases after v1.0 and was discontinued in 2011. In June 2011, OpenLava 1.0 code was committed to GitHub.
Commercial support
In 2014, a number of former Platform Computing employees founded Teraproc Inc., which contributed development and provided commercial support for OpenLava.
Commercially supported OpenLava contains add-on features beyond those of the community-based OpenLava project.
IBM Lawsuit
In October 2016, IBM filed a lawsuit alleging copyright infringement and trade secrets misappropriation against Teraproc. The complaint accused some of the company's founders of taking “confidential and proprietary source code" for IBM's Spectrum LSF product when they left, which was then used as the basis of the competitive product OpenLava. David Bigagli, the TeraProc employee who started the OpenLava project, posted a notice on GitHub announcing that downloads for OpenLava had been disabled because of a DMCA takedown notice sent by IBM's lawyers.
Bigagli later announced that the source code for OpenLava 3.0 and 4.0 would be taken down and that the source code of 2.2 would be restored in order to regain the GitHub repository and the openlava.org website, while maintaining that the DMCA claim was fraudulent.
On September 18, 2018, the US Courts |
https://en.wikipedia.org/wiki/Software%20intelligence | Software Intelligence is insight into the inner workings and structural condition of software assets, produced by software designed to analyze database structures, software frameworks and source code in order to better understand and control complex software systems in information technology environments. Similarly to Business Intelligence (BI), Software Intelligence is produced by a set of software tools and techniques for mining data and a software system's inner structure. The results are produced automatically and feed a knowledge base containing technical documentation, which is made available to business and software stakeholders so they can make informed decisions, measure the efficiency of software development organizations, communicate about software health, and prevent software catastrophes.
History
The term Software Intelligence was used by Kirk Paul Lafler, an American engineer, entrepreneur, and consultant, who founded Software Intelligence Corporation in 1979. At that time, the term mainly related to SAS activities, in which Lafler has been an expert since 1979.
In the early 1980s, Victor R. Basili participated in different papers detailing a methodology for collecting valid software engineering data relating to software engineering, evaluation of software development, and variations.
In 2004, different software vendors in software analysis start using the terms as part of their product naming and marketing strategy.
Then in 2010, Ahmed E. Hassan and Tao Xie defined Software Intelligence as a "practice offering software practitioners up-to-date and pertinent information to support their daily decision-making processes and Software Intelligence should support decision-making processes throughout the lifetime of a software system". They go on by defining Software Intelligence as a "strong impact on modern software practice" for the upcoming decades.
Capabilities
Because of the complexity and wide range of components and subjects implied in software, Software in |
https://en.wikipedia.org/wiki/Infinifactory | Infinifactory is a puzzle video game developed and published by Zachtronics, initially released on Microsoft Windows, OS X, and Linux on June 30, 2015. The game was later released on PlayStation 4 in December 2015. In the game, the player takes the role of a human abducted by aliens and forced to construct assembly lines to create certain objects for apparently-nefarious purposes. The game combines elements of Zachtronics' previous SpaceChem and Infiniminer, with the assembly lines being built from blocks in a three-dimensional space.
Gameplay
Infinifactory is a puzzle game, structured as several sets of puzzles based on various tasks. The player takes the role of a human who is abducted by an alien race and put to work helping the aliens construct equipment; the character does not appear to be the first taken for this purpose, as the corpses of other abducted humans lie throughout the levels, and the player can find and listen to their last audio logs. The game is divided into 6 worlds with several puzzles per world. Completing a specified quota of puzzles on a world lets the player advance to the next one, as well as advancing the story. By successfully completing all the puzzles, the player's character is rescued from the alien race by other abductees to a hidden base on the alien homeworld, and begins to work with the other abductees to find a way to escape the planet and return home. Additional chapters have since been added during the game's time in Early Access, adding new block types and furthering the story of the abductees' escape attempts.
Within each puzzle, the player is tasked to deliver a number of objects, constructed from one or more types of cube, to one or more delivery points by directing and assembling the individual cubes from their spawning point. The player has an unlimited amount of time to set up the various components that make up the assembly line, including conveyor belts, welders to attach pieces to each othe |
https://en.wikipedia.org/wiki/Bigelowiella%20natans | Bigelowiella natans is a species of Chlorarachniophyte alga that is a model organism for the Rhizaria.
Chlorarachniophytes are unicellular marine algae with plastids of secondary endosymbiotic origin. Bigelowiella natans is a key resource for studying Rhizaria, a supergroup of mostly unicellular eukaryotes.
Genomes
The Bigelowiella natans genome was the first rhizarian nuclear genome to be sequenced. The genome contains 94.7 Mbp encoding for 21,708 genes.
References
Filosa
Model organisms
Protists described in 2001
Cercozoa species |
https://en.wikipedia.org/wiki/Laver%20property | In mathematical set theory, the Laver property holds between two models if they are not "too dissimilar", in the following sense.
For M and N transitive models of set theory, N is said to have the Laver property over M if and only if for every function f in M mapping ω to ω which diverges to infinity, every function g in N mapping ω to ω, and every function h in M which bounds g, there is a tree T in M such that each branch of T is bounded by h, for every n the nth level of T has cardinality at most f(n), and g is a branch of T.
A forcing notion is said to have the Laver property if and only if the forcing extension has the Laver property over the ground model. Examples include Laver forcing.
The concept is named after Richard Laver.
Shelah proved that when proper forcings with the Laver property are iterated using countable supports, the resulting forcing notion will have the Laver property as well.
The conjunction of the Laver property and the ω^ω-bounding property is equivalent to the Sacks property.
References
Forcing (mathematics) |
https://en.wikipedia.org/wiki/IMPACT%20%28computer%20graphics%29 | IMPACT (sometimes spelled Impact) is a computer graphics architecture for Silicon Graphics computer workstations. IMPACT Graphics was developed in 1995 and was available as a high-end graphics option on workstations released during the mid-1990s. IMPACT graphics gives the workstation real-time 2D and 3D graphics rendering capability similar to that of even high-end PCs made well after IMPACT's introduction. IMPACT graphics systems consist of either one or two Geometry Engines and one or two Raster Engines in various configurations.
IMPACT graphics consists of five graphics subsystems: the Command Engine, Geometry Subsystem, Raster Engine, framebuffer and Display Subsystem. IMPACT Graphics can produce resolutions up to 1600 x 1200 pixels with 32-bit color and can also process unencoded NTSC and PAL analog television signals.
IMPACT graphics subsystems come in three configurations for SGI Indigo2 IMPACT workstations: Solid IMPACT, High IMPACT, and Maximum IMPACT. The equivalent configurations also exist for the SGI Octane workstation but are referred to as SI, SSI, and MXI (I-series). Later Octane workstations used a similar configuration but with updated ASIC chips and are referred to as SE, SSE, and MXE (E-series). IMPACT uses Rambus RDRAM for texture memory.
The IMPACT graphics architecture was superseded by SGI's VPro graphics architecture in 1997.
References
Computer graphics
Graphics chips
SGI graphics |
https://en.wikipedia.org/wiki/Innate%20resistance%20to%20HIV | A small proportion of humans show partial or apparently complete innate resistance to HIV, the virus that causes AIDS. The main mechanism is a mutation of the gene encoding CCR5, which acts as a co-receptor for HIV. It is estimated that the proportion of people with some form of resistance to HIV is under 10%.
History
In 1994, Stephen Crohn became the first person discovered to be completely resistant to HIV in all tests performed, despite having partners infected by the virus. Crohn's resistance was a result of the absence of a co-receptor on the exterior of his white blood cells, which prevents HIV from infecting the CD4 cells it targets. The absence of these receptors, or rather their shortening to the point of being inoperable, is known as the delta 32 mutation. This mutation is linked to groups of people who have been exposed to HIV but remain uninfected, such as some offspring of HIV-positive mothers, health workers, and sex workers.
In early 2000, researchers discovered a small group of sex workers in Nairobi, Kenya, who were estimated to have sexual contact with 60 to 70 HIV-positive clients a year without signs of infection. These sex workers were not found to have the delta 32 mutation, leading scientists to believe other factors could create genetic resistance to HIV. Researchers from the Public Health Agency of Canada have identified 15 proteins unique to those virus-free sex workers. Later, however, some of the sex workers were discovered to have contracted the virus, leading Oxford University researcher Sarah Rowland-Jones to believe continual exposure is a requirement for maintaining immunity.
CCR5 deletion
C-C chemokine receptor type 5, also known as CCR5 or CD195, is a protein on the surface of white blood cells that is involved in the immune system as it acts as a receptor for chemokines. This is the process by which T cells are attracted to specific tissue and organ targets. Many strains of HIV use CCR5 as a co-receptor to enter and infect host cells. A few indivi |
https://en.wikipedia.org/wiki/Windows%2010%20Mobile | Windows 10 Mobile is a discontinued mobile operating system developed by Microsoft. First released in 2015, it is a successor to Windows Phone 8.1, but was marketed by Microsoft as being an edition of its PC operating system Windows 10.
Windows 10 Mobile aimed to provide greater consistency with its counterpart for PCs, including more extensive synchronization of content, Universal Windows Platform apps, as well as the capability, on supported hardware, to connect devices to an external display and use a desktop interface with mouse and keyboard input support (reminiscent of Windows on PCs). Microsoft built tools for developers to port iOS Objective-C apps with minimal modifications. Windows Phone 8.1 smartphones are eligible for upgrade to Windows 10 Mobile, pursuant to manufacturer and carrier support. Some features vary depending on hardware compatibility.
Windows 10 Mobile was designed for use on smartphones and phablets running on 32-bit ARM processor architectures. Microsoft also intended for the platform to be used on ARM tablets with screens 9 inches or smaller in size, but such devices were rarely commercially released. Windows 10 Mobile entered public beta for selected Lumia smartphones on February 12, 2015. The first Lumia smartphones powered by Windows 10 Mobile were released on November 20, 2015, while eligible Windows Phone devices began receiving updates to Windows 10 Mobile on March 17, 2016, pursuant to manufacturer and carrier support.
The platform never achieved any significant degree of popularity or market share in comparison to Android or iOS. By 2017, Microsoft had already begun to downplay Windows 10 Mobile, having discontinued active development (beyond maintenance releases) due to a lack of user and developer interest in the platform, and focused on serving incumbent mobile operating systems as part of its software and services strategy. Support for Windows 10 Mobile ended on January 14, 2020. Windows 10 Mobile had approximately a 0.01
https://en.wikipedia.org/wiki/Microsoft%20HoloLens | Microsoft HoloLens is an augmented reality (AR)/mixed reality (MR) headset developed and manufactured by Microsoft. HoloLens runs the Windows Mixed Reality platform under the Windows 10 operating system. Some of the positional tracking technology used in HoloLens can trace its lineage to the Microsoft Kinect, an accessory for Microsoft's Xbox 360 and Xbox One game consoles that was introduced in 2010.
The pre-production version of HoloLens, the Development Edition, shipped on March 30, 2016, targeted at developers in the United States and Canada for a list price of $3,000, allowing hobbyists, professionals, and corporations to try the pre-production version of HoloLens. Samsung and Asus have offered to collaborate with Microsoft on producing their own mixed-reality products based on the HoloLens concept and hardware. On October 12, 2016, Microsoft announced a global expansion of HoloLens and publicized that HoloLens would be available for preorder in Australia, Ireland, France, Germany, New Zealand and the United Kingdom. There is also a commercial suite (similar to a pro edition of Windows) with enterprise features such as BitLocker security; as of May 2017, the suite sold for US$5,000. Microsoft has also decided to rent out the HoloLens so that clients need not make the full investment, partnering with a company called Absorbents to provide HoloLens rental.
HoloLens 2 was announced at the Mobile World Congress (MWC) in Barcelona, Spain, on February 24, 2019, and was available on preorder at $3,500.
Description
The HoloLens is a head-mounted display unit connected to an adjustable, cushioned inner headband, which can tilt HoloLens up and down, as well as forward and backward. To wear the unit, the user fits the HoloLens on their head, using an adjustment wheel at the back of the headband to secure it around the crown, supporting and distributing the weight of the unit equally for comfort, before tiltin |
https://en.wikipedia.org/wiki/Mathematical%20Foundations%20of%20Quantum%20Mechanics | The book Mathematical Foundations of Quantum Mechanics (1932) by John von Neumann is an important early work in the development of quantum theory.
Publication history
The book was originally published in German in 1932 by Julius Springer, under the title Mathematische Grundlagen der Quantenmechanik.
An English translation by Robert T. Beyer was published in 1955 by Princeton University Press.
A Russian translation, edited by N. Bogolyubov, was published by Nauka in 1964.
A new English edition, edited by Nicholas A. Wheeler, was published in 2018 by Princeton University Press.
Significance
The book mainly summarizes results that von Neumann had published in earlier papers.
Its main significance may be its argument against the idea of hidden variables, on thermodynamic grounds.
See also
Mathematical formulation of quantum mechanics
Quantum Theory: Concepts and Methods
References
External links
Full online text of the 1932 German edition (facsimile) at the University of Göttingen.
1932 non-fiction books
Mathematics books
Physics textbooks
Quantum mechanics |
https://en.wikipedia.org/wiki/DeLano%20Award%20for%20Computational%20Biosciences | The DeLano Award for Computational Biosciences is a prize in the field of computational biology. It is awarded annually for "the most accessible and innovative development or application of computer technology to enhance research in the life sciences at the molecular level".
The prize was established by the American Society for Biochemistry and Molecular Biology (ASBMB) in memory of Warren Lyford DeLano, an American bioinformatician. DeLano developed the PyMOL open source molecular viewer software and was an advocate for the increased adoption of open source practices in the sciences. DeLano died unexpectedly in 2009.
Laureates include the Nobel Prize winner Michael Levitt, who was given the DeLano Award in 2014 for his work in computational bioscience.
Laureates
2023 - Eytan Ruppin
2022 - Tatyana Sharpee
2020 - Yang Zhang
2019 - Brian Kuhlman
2018 - Chris Sander
2017 - Brian K. Shoichet
2016 - Todd O. Yeates
2015 - Vijay S. Pande
2014 - Michael Levitt
2013 - Helen M. Berman
2012 - Barry Honig
2011 - Axel T. Brunger
See also
List of biology awards
List of awards in bioinformatics and computational biology
References
Bioinformatics
Biology awards |
https://en.wikipedia.org/wiki/Discrepancy%20%28algebraic%20geometry%29 | In algebraic geometry, given a pair (X, D) consisting of a normal variety X and a ℚ-divisor D on X (e.g., the canonical divisor), the discrepancy of the pair (X, D) measures the degree of the singularity of the pair.
See also
Canonical singularity
Crepant resolution
References
Algebraic geometry |
https://en.wikipedia.org/wiki/Stratix | Stratix is a brand of FPGA products developed by Intel's Programmable Solutions Group (formerly Altera). Other current Intel FPGA product lines include the Agilex, Arria and Cyclone families.
Stratix FPGAs are typically programmed in hardware description languages such as VHDL or Verilog, using the Intel Quartus Prime computer software.
Intel FPGAs have been used in automotive, optical imaging, memory, data processing and computing applications.
References
See also
Virtex (FPGA)
Field-programmable gate arrays
Intel microprocessors
Reconfigurable computing |
https://en.wikipedia.org/wiki/Rectangular%20mask%20short-time%20Fourier%20transform | In mathematics and Fourier analysis, a rectangular mask short-time Fourier transform (rec-STFT) is a short-time Fourier transform computed with the simplest possible window function, a rectangular mask. Other types of STFT may require more computation time than the rec-STFT.
The rectangular mask function can be defined for some bound B over time t as
w(t) = 1 for −B ≤ t ≤ B, and w(t) = 0 otherwise.
We can change B for different tradeoffs between desired time resolution and frequency resolution.
Rec-STFT
X(t, f) = ∫_{t−B}^{t+B} x(τ) e^{−j2πfτ} dτ
Inverse form
x(v) = ∫_{−∞}^{∞} X(t, f) e^{j2πfv} df, for any t satisfying |t − v| < B
Property
Rec-STFT has properties similar to those of the Fourier transform:
Integration
(a) ∫_{−∞}^{∞} X(t, f) e^{j2πfv} df = x(v) if |t − v| < B, and 0 otherwise.
(b) In particular, ∫_{−∞}^{∞} X(t, f) df = x(0) if |t| < B, and 0 otherwise.
Shifting property (shift along x-axis)
The rec-STFT of x(t − t0) is e^{−j2πf t0} X(t − t0, f).
Modulation property (shift along y-axis)
The rec-STFT of x(t) e^{j2πf0 t} is X(t, f − f0).
Special input
When x(t) = δ(t), X(t, f) = 1 for |t| < B and 0 otherwise.
When x(t) = 1, X(t, f) = e^{−j2πft} sin(2πfB)/(πf) = 2B e^{−j2πft} sinc(2Bf).
Linearity property
If h(t) = αx(t) + βy(t), and H(t, f), X(t, f) and Y(t, f) are their rec-STFTs, then H(t, f) = αX(t, f) + βY(t, f).
Power integration property
∫_{−∞}^{∞} |X(t, f)|² df = ∫_{t−B}^{t+B} |x(τ)|² dτ.
Energy sum property (Parseval's theorem)
∫∫ |X(t, f)|² df dt = 2B ∫_{−∞}^{∞} |x(τ)|² dτ.
Example of tradeoff with different B
From the image, when B is smaller, the time resolution is better. Otherwise, when B is larger, the frequency resolution is better.
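As a rough numerical illustration of this tradeoff, the rec-STFT can be approximated by a Riemann sum over the window [t − B, t + B]. The function and grid choices below are illustrative assumptions of this sketch, not part of any standard library:

```python
import numpy as np

def rec_stft(x, dt, t_grid, f_grid, B):
    """Approximate X(t, f) = integral_{t-B}^{t+B} x(tau) e^{-j2pi f tau} dtau
    by a Riemann sum; x is sampled on a uniform grid with spacing dt."""
    tau = np.arange(len(x)) * dt
    X = np.empty((len(t_grid), len(f_grid)), dtype=complex)
    for i, t in enumerate(t_grid):
        m = np.abs(tau - t) <= B                      # rectangular mask
        seg, ts = x[m], tau[m]
        for k, f in enumerate(f_grid):
            X[i, k] = np.sum(seg * np.exp(-2j * np.pi * f * ts)) * dt
    return X

# A 5 Hz complex tone: |X(t, f)| should peak at f = 5 on an integer grid.
dt = 1e-3
x = np.exp(2j * np.pi * 5 * np.arange(0, 1, dt))
X = rec_stft(x, dt, t_grid=[0.5], f_grid=np.arange(0, 11.0), B=0.2)
```

Increasing B narrows the peak along f (better frequency resolution) but smears the estimate over a longer time span, matching the tradeoff described above.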
Advantage and disadvantage
Compared with the Fourier transform:
Advantage: The instantaneous frequency can be observed.
Disadvantage: Higher complexity of computation.
Compared with other types of time-frequency analysis:
Advantage: Least computation time for digital implementation.
Disadvantage: Quality is worse than other types of time-frequency analysis. The jump discontinuity of the edges of the rectangular mask results in Gibbs ringing artifacts in the frequency domain, which can be alleviated with smoother windows.
See also
Uncertainty principle
References
Jian-Jiun Ding (2014) Time-frequency analysis and wavelet transform
Fourier analysis
Time–frequency analysis
Transforms |
https://en.wikipedia.org/wiki/Vessyl | Vessyl was a proposed intelligent drinking glass announced in June 2014 by Mark One, but which was never released. The cup was to have embedded sensors and the capability of linking to a smartphone to provide its user with nutritional and other data on the beverage in the cup. The creators designed Vessyl to help users make better decisions about their health and overall consumption. The cup was expected to recharge via inductive charging on a proprietary base station, to be included in the product packaging. Industrial designer Yves Béhar and his design firm Fuseproject were involved in its creation.
Vessyl
During the unveiling of Vessyl in June 2014, live demonstrations of the sensor technology to journalists received wide praise. Wired called it “a fascinating milestone”. CNN called it “groundbreaking”. And Lauren Goode of Recode reported that she was “pretty wowed”.
Originally, the device was expected to ship in "Early 2015". However, in March 2015, Mark One announced via a blog update that the shipping date would slip to "Q4 2015", once the device had reached a satisfactory level of performance.
In May 2015, Mark One secured a Series A round of new investment funding to grow the business. In June 2015, shortly after securing this funding, co-founder and CEO Justin Lee stepped down, handing the CEO role to one of the experienced candidates he had been recruiting; he no longer held a day-to-day role at Mark One and moved on to other endeavors.
In October 2015, Mark One announced a new product called Pryme Vessyl, a smart cup focused principally on Pryme, Vessyl's automatic hydration-tracking feature. At the same announcement, Mark One said that there would be further production delays to the original Vessyl.
Pryme Vessyl was available to the public in November 2015, launching in Apple Stores across North America.
In April 2016, Mark One announced a new timelin |
https://en.wikipedia.org/wiki/Joseph%20A.%20Thas | Joseph Adolphe François Thas (born 13 October 1944, Dilbeek, Belgium) is a Belgian mathematician, who works on combinatorics, incidence geometry and finite geometries.
Thas received his PhD in 1969 from Ghent University under Julien Bilo, with the thesis Een studie betreffende de projectieve rechte over de totale matrix algebra der 3×3-matrices met elementen in een algebraïsch afgesloten veld K (A study concerning the projective line over the total matrix algebra of 3×3 matrices with elements in an algebraically closed field K). In it, Thas showed how to extend projective geometry and cross-ratios with the concept of a projective line over a ring.
Thas is an emeritus professor at Ghent University.
Awards and honors
In 1994 Thas received the Euler medal. In 1998 he gave an invited address at the International Congress of Mathematicians in Berlin with lecture Finite geometries, varieties and codes. He received in 1969 the prize of the Royal Academy of Sciences, Letters and Fine Arts of Belgium, in 1970 the Scientific Louis Empain Award and in the same year the François Deruyts prize of the Royal Academy of Belgium.
In 1988 he became a member of the Royal Flemish Academy of Belgium for Science and the Arts; he was vice-director of the Class of Sciences in 1998 and director in 1999. In 1999 he was awarded an Erskine Fellowship of the University of Canterbury, New Zealand, in 2008 he was Platinum Jubilee Lecturer at the Indian Statistical Institute, and in 2012 he became one of the inaugural fellows of the American Mathematical Society.
Selected works
with Koen Thas, H. Van Maldeghem Translation generalized quadrangles, World Scientific 2006
with Stanley E. Payne Finite generalized quadrangles, Pitman 1984, 2nd edition, European Mathematical Society 2009
with J. W. P. Hirschfeld General Galois Geometries, Oxford University Press 1991
Projective geometry over a finite field and Generalized Polygons in F. Buekenhout Handbook of incidence geometry, North Holland 1995
with J. Bilo Enkele aspecten van de theorie der axiomatische projectieve vlakken, Simon Stevin, Supplement, Vol. 55, 1981
(For a complete list of pa |
https://en.wikipedia.org/wiki/Robotic%20sensors | Robotic sensors are used to estimate a robot's condition and environment. These signals are passed to a controller to enable appropriate behavior.
Sensors in robots are based on the functions of human sensory organs. Robots require extensive information about their environment in order to function effectively.
Classification
Sensors provide analogs to human senses and can monitor other phenomena for which humans lack explicit sensors.
Simple Touch: Sensing an object's presence or absence.
Complex Touch: Sensing an object's size, shape and/or hardness.
Simple Force: Measuring force along a single axis.
Complex Force: Measuring force along multiple axes.
Simple Vision: Detecting edges, holes and corners.
Complex Vision: Recognizing objects.
Proximity: Non-contact detection of an object.
Sensors can measure physical properties, such as the distance between objects, the presence of light and the frequency of sound.
They can measure:
Object Proximity: The presence/absence of an object, bearing, color, distance between objects.
Physical orientation: The coordinates of an object in space.
Heat: The wavelength of infrared or ultraviolet rays, temperature, magnitude, direction.
Chemicals: The presence, identity, and concentration of chemicals or reactants
Sound: The presence, frequency, and intensity of sound.
Motion controllers, potentiometers, tacho-generators and encoders are used as joint sensors, whereas strain-gauge based sensing is used at the end-effector location for contact force control.
Internal sensors
Internal sensors are part of the robot itself. They measure the robot's internal state, such as the position, velocity and acceleration of robot joints or end effectors.
Position sensor
Position sensors measure the position of a joint (the degree to which the joint is extended). They include:
Encoder: a digital optical device that converts motion into a sequence of digital pulses.
Potentiometer: a variable resistance device that exp |
https://en.wikipedia.org/wiki/Clement%20W.%20H.%20Lam | Clement Wing Hong Lam () is a Canadian mathematician, specializing in combinatorics. He is famous for the computer proof, with Larry Thiel and S. Swiercz, of the nonexistence of a finite projective plane of order 10.
Lam earned his PhD in 1974 under Herbert Ryser at Caltech with thesis Rational G-Circulants Satisfying the Matrix Equation . He is a professor at Concordia University in Montreal.
In 2006 he received the Euler medal. In 1992 he received the Lester Randolph Ford Award for the article The search for a finite projective plane of order 10. The eponymous Lam's problem is equivalent to finding a finite projective plane of order 10 or finding 9 orthogonal Latin squares of order 10.
See also
Experimental mathematics
References
External links
Homepage
search on author CWH Lam from Google Scholar
Canadian mathematicians
Combinatorialists
California Institute of Technology alumni
Academic staff of Concordia University
Living people
Year of birth missing (living people)
Place of birth missing (living people) |
https://en.wikipedia.org/wiki/Domain%20reduction%20algorithm | Domain reduction algorithms are algorithms used to reduce constraints and degrees of freedom in order to provide solutions for partial differential equations.
References
Algorithms |
https://en.wikipedia.org/wiki/Radeon%20300%20series | The Radeon 300 series is a series of graphics processors developed by AMD. All of the GPUs of the series are produced in 28 nm format and use the Graphics Core Next (GCN) micro-architecture.
The series includes the Fiji and Tonga GPU dies based on AMD's GCN 3 or "Volcanic Islands" architecture, which had originally been introduced with the Tonga based (though cut-down) R9 285 slightly earlier. Some of the cards in the series include the Fiji based flagship AMD Radeon R9 Fury X, cut-down Radeon R9 Fury and small form factor Radeon R9 Nano, which are the first GPUs to feature High Bandwidth Memory (HBM) technology, which AMD co-developed in partnership with SK Hynix. HBM is faster and more power efficient than GDDR5 memory, though also more expensive. However, the remaining GPUs in the series outside the Tonga based R9 380 and R9 380X are based on previous generation GPUs with revised power management, and therefore only feature GDDR5 memory (something Tonga does as well). The Radeon 300 series cards including the R9 390X were released on June 18, 2015. The flagship device, the Radeon R9 Fury X, was released on June 24, 2015, with the dual-GPU variant, the Radeon Pro Duo, being released on April 26, 2016.
Micro-architecture and instruction set
The R9 380/X along with the R9 Fury & Nano series were AMD's first cards (after the earlier R9 285) to use the third iteration of their GCN instruction set and micro-architecture. The other cards in the series feature first and second gen iterations of GCN. The table below details which GCN-generation each chip belongs to.
Ancillary ASICs
Any ancillary ASICs present on the chips are being developed independently of the core architecture and have their own version name schemes.
Multi-monitor support
The AMD Eyefinity branded on-die display controllers were introduced in September 2009 in the Radeon HD 5000 Series and have been present in all products since.
AMD TrueAudio
AMD TrueAudio was introduced with the AMD Radeon |
https://en.wikipedia.org/wiki/Image%20derivative | Image derivatives can be computed by using small convolution filters of size 2 × 2 or 3 × 3, such as the Laplacian, Sobel, Roberts and Prewitt operators. However, a larger mask will generally give a better approximation of the derivative and examples of such filters are Gaussian derivatives and Gabor filters. Sometimes high frequency noise needs to be removed and this can be incorporated in the filter so that the Gaussian kernel will act as a band pass filter. The use of Gabor filters in image processing has been motivated by some of its similarities to the perception in the human visual system.
The pixel value of the derivative image is computed as a convolution
I′ = d ∗ G,
where d is the derivative kernel, G is the pixel values in a region of the image, and ∗ is the operator that performs the convolution.
Sobel derivatives
The derivative kernels, known as the Sobel operator, are defined as follows, for the x and y directions respectively:
Kx = [[+1, 0, −1], [+2, 0, −2], [+1, 0, −1]],  Ky = [[+1, +2, +1], [0, 0, 0], [−1, −2, −1]],
so that the derivatives are Ix = Kx ∗ I and Iy = Ky ∗ I, where here ∗ denotes the 2-dimensional convolution operation.
This operator is separable and can be decomposed as the product of an interpolation kernel and a differentiation kernel, so that Kx, for example, can be written as
Kx = [1, 2, 1]ᵀ ∗ [+1, 0, −1].
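A minimal NumPy sketch (helper names are my own, not from any library) that builds the Sobel x-kernel from its separable factors and applies it to a horizontal intensity ramp, whose x-derivative is constant:

```python
import numpy as np

# Sobel x-kernel assembled from its separable factors.
smooth = np.array([1, 2, 1])        # interpolation (smoothing) factor
diff = np.array([1, 0, -1])         # differentiation factor
Kx = np.outer(smooth, diff)         # 3x3 Sobel kernel for the x direction

def conv2_valid(img, k):
    """Plain 2-D 'valid' convolution (kernel flipped, per the definition)."""
    kf = k[::-1, ::-1]
    H, W = img.shape
    h, w = kf.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kf)
    return out

# Ramp I(y, x) = x: its x-derivative is 1, so the unnormalized Sobel
# response is the constant 8 (the kernel weights sum to 4 per side,
# taken across a spacing of 2 columns).
img = np.tile(np.arange(6.0), (6, 1))
gx = conv2_valid(img, Kx)
```

Dividing the response by 8 recovers the unit slope; in practice libraries fold this normalization into the kernel or ignore it for edge detection, where only relative magnitudes matter.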
Farid and Simoncelli derivatives
Farid and Simoncelli propose to use a pair of kernels, one for interpolation and another for differentiation (compare to Sobel above). These kernels, of fixed sizes 5 x 5 and 7 x 7, are optimized so that the Fourier transform approximates their correct derivative relationship.
In Matlab code the so called 5-tap filter is
k = [0.030320 0.249724 0.439911 0.249724 0.030320];
d = [0.104550 0.292315 0.000000 -0.292315 -0.104550];
d2 = [0.232905 0.002668 -0.471147 0.002668 0.232905];
And the 7-tap filter is
k = [ 0.004711 0.069321 0.245410 0.361117 0.245410 0.069321 0.004711];
d = [ 0.018708 0.125376 0.193091 0.000000 -0.193091 -0.125376 -0.018708];
d2 = [ 0.055336 0.137778 -0.056554 -0.273118 -0.056554 0.137778 0.055336];
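To illustrate how the interpolation/differentiation pair is used (a sketch, with variable names of my own): convolving a 1-D ramp with the 5-tap differentiation filter d recovers the ramp's unit slope to within the filter's design accuracy.

```python
import numpy as np

# Farid-Simoncelli 5-tap pair, values taken from the Matlab listing above.
k = np.array([0.030320, 0.249724, 0.439911, 0.249724, 0.030320])   # interpolation
d = np.array([0.104550, 0.292315, 0.000000, -0.292315, -0.104550])  # differentiation

ramp = np.arange(20.0)                      # f(x) = x, true slope 1
slope = np.convolve(ramp, d, mode='valid')  # df/dx estimate at interior samples
```

Note that k sums to one (unit DC gain) while d sums to zero; a 2-D x-derivative is obtained by convolving rows with d and columns with k, exactly as with the separable Sobel factors.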
As an example the first order derivatives can be computed in the fol |
https://en.wikipedia.org/wiki/Cramer%E2%80%93Castillon%20problem | In geometry, the Cramer–Castillon problem is a problem stated by the Swiss mathematician Gabriel Cramer and solved in 1776 by the Italian mathematician Jean de Castillon, then resident in Berlin.
The problem consists of (see the image):
Given a circle Z and three points A, B, and C in the same plane and not on Z, to construct every possible triangle inscribed in Z whose sides (or their extensions) pass through A, B, and C respectively.
Centuries before, Pappus of Alexandria had solved a special case: when the three points are collinear. But the general case had the reputation of being very difficult.
After Castillon's geometrical construction, Lagrange found an analytic solution, easier than Castillon's. At the beginning of the 19th century, Lazare Carnot generalized the problem to n points.
References
Bibliography
External links
Geometry |
https://en.wikipedia.org/wiki/Gonit%20Sora | Gonit Sora (Assamese: গণিত চ’ৰা) is a multi-lingual (English and Assamese) web magazine devoted to publishing well written and original articles related to science and technology in general and mathematics in particular. Gonit Sora is an attempt to bridge the gap between classroom mathematics teaching and real life practical and fun mathematics. The website strives for the popularization of mathematics teaching and understanding at all levels. The name of the website is in Assamese and means ‘gateway to mathematics’. Founded on 21 April 2011 by two alumni of Tezpur University, the website publishes expository articles, interviews and quizzes.
The website has its own editors and staff writers, and its advisory board consists of academicians from all over the world, including Sujatha Ramdorai and Nayandeep Deka Baruah. Eminent Mathematicians, including Radha Charan Gupta and Sujatha Ramdorai have contributed articles for Gonit Sora.
Goals and activities
The website has the following goals:
To cater to the student community by posting relevant articles in all branches of mathematics,
To focus on the humane side of the subject, which is almost always lost in the traditional classroom approach to teaching,
To create an online repository of mathematical articles and facts that can be accessed free of cost by anyone,
To digitize the regional mathematical content in India in a form suitable for the web,
To organise workshops and outreach activities for school students to show them the beauty and joy of doing mathematics,
To create a platform for students and teachers alike to discuss ideas, and
To reveal the underlying connections and importance of mathematics in other branches of science through mathematics-related articles.
The activities of the website are:
Posting articles every week on a topic related to mathematics,
Posting interviews with mathematicians like Bruce C. Berndt, Ashoke Sen, S. R. S. Varadhan, Jayant Vis |
https://en.wikipedia.org/wiki/Annapurna%20Labs | Annapurna Labs is an Israeli microelectronics company. Since January 2015 it has been a wholly-owned subsidiary of Amazon.com. Amazon reportedly acquired the company for its Amazon Web Services division for US$350–370M.
History
Annapurna Labs, named after the Annapurna Massif in the Himalayas, was co-founded in 2011 by Bilic "Billy" Hrvoje, a Bosnian Jewish refugee, Nafea Bshara, an Arab Israeli citizen, and Ronen Boneh with investments from the independent investors Avigdor Willenz, Manuel Alba, Andy Bechtolsheim, the venture capital firm Walden International, Arm Holdings, and TSMC. Board members include Avigdor Willenz, Manuel Alba, and Lip-Bu Tan, the CEO of both Walden International and Cadence Design Systems.
The first product launched under the AWS umbrella was the AWS Nitro hardware and supporting hypervisor in November 2017. Following on from Nitro, Annapurna developed general-purpose CPUs under the Graviton family and machine-learning ASICs under the Trainium and Inferentia brands.
See also
AWS Graviton - an ARM based CPU developed by Annapurna Labs for exclusive use by Amazon Web Services.
References
External links
Official web site
ARM architecture
Fabless semiconductor companies
Amazon (company) acquisitions
Semiconductor companies of Israel
Electronics companies established in 2011 |
https://en.wikipedia.org/wiki/Strong%20measure%20zero%20set | In mathematical analysis, a strong measure zero set is a subset A of the real line with the following property:
for every sequence (εn) of positive reals there exists a sequence (In) of intervals such that |In| < εn for all n and A is contained in the union of the In.
(Here |In| denotes the length of the interval In.)
Every countable set is a strong measure zero set, and so is every union of countably many strong measure zero sets. Every strong measure zero set has Lebesgue measure 0. The Cantor set is an example of an uncountable set of Lebesgue measure 0 which is not of strong measure zero.
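The first claim has a one-line proof, sketched here directly from the definition (a standard argument, not taken verbatim from any particular source): cover the n-th point of the set with an interval shorter than εn.

```latex
A=\{a_1,a_2,\dots\},\qquad
I_n=\Bigl(a_n-\tfrac{\varepsilon_n}{3},\;a_n+\tfrac{\varepsilon_n}{3}\Bigr),\qquad
|I_n|=\tfrac{2\varepsilon_n}{3}<\varepsilon_n,\qquad
A\subseteq\bigcup_{n}I_n .
```

The Cantor set defeats this strategy: no single enumeration of intervals with prescribed, arbitrarily small lengths can cover it, which is why it has Lebesgue measure 0 without having strong measure zero.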
Borel's conjecture states that every strong measure zero set is countable. It is now known that this statement is independent of ZFC (the Zermelo–Fraenkel axioms of set theory, which is the standard axiom system assumed in mathematics). This means that Borel's conjecture can neither be proven nor disproven in ZFC (assuming ZFC is consistent).
Sierpiński proved in 1928 that the continuum hypothesis (which is now also known to be independent of ZFC) implies the existence of uncountable strong measure zero sets. In 1976 Laver used a method of forcing to construct a model of ZFC in which Borel's conjecture holds. These two results together establish the independence of Borel's conjecture.
The following characterization of strong measure zero sets was proved in 1973:
A set X has strong measure zero if and only if X + M ≠ ℝ for every meagre set M ⊆ ℝ.
This result establishes a connection to the notion of strongly meagre set, defined as follows:
A set X is strongly meagre if and only if X + N ≠ ℝ for every set N of Lebesgue measure zero.
The dual Borel conjecture states that every strongly meagre set is countable. This statement is also independent of ZFC.
References
Measure theory
Independence results |
https://en.wikipedia.org/wiki/Xputer | The Xputer is a design for a reconfigurable computer, proposed by computer scientist Reiner Hartenstein. Hartenstein uses various terms to describe the various innovations in the design, including config-ware, flow-ware, morph-ware, and "anti-machine".
The Xputer represents a move away from the traditional von Neumann computer architecture to a coarse-grained "soft arithmetic logic unit (ALU)" architecture. Parallelism is achieved by configurable elements known as reconfigurable datapath arrays (rDPA), organized as a two-dimensional array of ALUs similar to the KressArray.
Architecture
The Xputer architecture is data-stream-based, and is the counterpart of the instruction-based von Neumann computer architecture.
The Xputer architecture was one of the first coarse-grained reconfigurable architectures, and consists of a reconfigurable datapath array (rDPA) organized as a two-dimensional array of ALUs (rDPUs). The bus width between ALUs was 32 bits in the first version of the Xputer.
The ALUs (also known as rDPUs) are used for computing a single mathematical operation, such as addition, subtraction or multiplication, and can also be used purely for routing.
ALUs are mesh-connected via three types of connections, and data flow along these connections is managed by an address generation unit:
Nearest neighbour (connections between neighbouring ALUs)
Row/column back-buses
Global bus (a single global bus for interconnection between further ALUs)
Programs for the Xputer are written in the C language and compiled for the Xputer using the CoDeX compiler developed by the same group. The CoDeX compiler maps suitable portions of the C program onto the Xputer's rDPA fabric; the remainder of the program is executed on the host system, such as a personal computer.
rDPA
A reconfigurable datapath array (rDPA) is a semiconductor device containing reconfigurable data path units and programmable interconnects, first proposed by Rainer Kress in 1993, at the University of K |
https://en.wikipedia.org/wiki/Drilling%20jumbo | A Drilling jumbo or drill jumbo is a rock drilling machine.
Use
Drilling jumbos are usually used in underground mining where mining is done by drilling and blasting. They are also used in tunnelling when rock hardness prevents the use of tunnelling machines. The jumbo is considered a powerful tool that eases the labor-intensive process of mineral extraction.
Description
A drilling jumbo consists of one, two or three rock drill carriages and sometimes a platform on which the miner stands to load the holes with explosives, clear the face of the tunnel, or perform other tasks. The carriages are bolted onto the chassis, which also supports the miner's cabin and the engine. Modern drilling jumbos are relatively large, though smaller ones exist for use in cramped conditions. Whereas modern jumbos are usually fitted with rubber tires and diesel-powered, there are also variants with steel wheels that ride on rails, and even single-carriage sled-mounted ones. Electric power is also common, and historic jumbos were powered by compressed air. Electricity and compressed air produce no exhaust gases, which is preferable when work is done in smaller tunnels where good ventilation is difficult.
References
Mining equipment |
https://en.wikipedia.org/wiki/Doxbin%20%28darknet%29 | Doxbin was an onion service: a pastebin used primarily to post personal data of persons of interest, a practice known as doxing.
Due to the illegal nature of much of the information it published (such as social security numbers, bank routing information, and credit card information, all in plain-text), it was one of many sites seized during Operation Onymous, a multinational police initiative, in November 2014.
History
Doxbin was established by an individual known online as "nachash" to act as a secure, anonymous venue for the publication of dox, a term in Internet culture for personally identifiable information about individuals, including social security numbers, street addresses, usernames, emails, and passwords, obtained through a variety of legal and illegal means.
In November 2012, Doxbin's Twitter handle @Doxbin was attributed to an attack on Symantec, coordinated with Anonymous' Operation Vendetta.
It first attracted attention in March 2014 when its then-owner hijacked a popular Tor hidden service, The Hidden Wiki, pointing its visitors to Doxbin instead as a response to the maintenance of pages dedicated to child pornography links. In June 2014, their Twitter account was suspended, prompting the site to start listing the personal information of the Twitter founders and CEO. In October 2014, Doxbin hosted personal information about Katherine Forrest, a federal judge responsible for court rulings against the owner of Tor-based black market Silk Road, leading to death threats and harassment.
Doxbin and several other hidden services were seized in November 2014 as part of the multinational police initiative Operation Onymous. Shortly thereafter, one of the site's operators who avoided arrest shared the site's logs and information about how it was compromised with the Tor developers email list, suggesting it could have either been the result of a specialized distributed denial of service attack (DDoS) or explo |
https://en.wikipedia.org/wiki/List%20of%20PlayStation%20applications | This is a list of PlayStation applications currently planned or released via the PlayStation Network.
Applications
Mobile & PC
Entertainment services
Assorted
Virtual Reality
Archaic
License regions
References
PlayStation applications
Applications |
https://en.wikipedia.org/wiki/Signal%20%28software%29 | Signal is an encrypted messaging service for instant messaging, voice, and video calls. The instant messaging function includes sending text, voice notes, images, videos, and other files. Communication may be one-to-one between users, or for group messaging.
The application uses a centralized computing architecture, and is cross-platform software. It is developed by the non-profit Signal Foundation and its subsidiary, Signal Messenger LLC. Signal's software is free and open-source. Its mobile clients, desktop client and server are all published under the AGPL-3.0-only license. The official Android app generally uses the proprietary Google Play Services, although it is designed to work without them. Signal is also distributed for iOS and desktop programs for Windows, macOS, and Linux. Registration for desktop use requires an iOS or Android device.
Signal uses mobile telephone numbers as an identifier for users. It secures all communications with end-to-end encryption. The client software includes mechanisms by which users can independently verify the identity of their contacts and the integrity of the data channel.
The non-profit Signal Foundation was launched in February 2018 with initial funding of $50 million from Brian Acton. The platform has had approximately 40 million monthly active users, and the app has been downloaded more than 105 million times.
Until the feature was removed in 2023, the Android version was also optionally capable of functioning as an SMS app.
History
2010–2013: Origins
Signal is the successor of the RedPhone encrypted voice calling app and the TextSecure encrypted texting program. The beta versions of RedPhone and TextSecure were first launched in May 2010 by Whisper Systems, a startup company co-founded by security researcher Moxie Marlinspike and roboticist Stuart Anderson. Whisper Systems also produced a firewall and tools for encrypting other forms of data. All of these were proprietary enterprise mobile security software and were only availa |
https://en.wikipedia.org/wiki/Molecubes | Molecubes are a collection of modular robots created by Hod Lipson and Victor Zykov from Cornell University. A molecube is made of two rotatable halves, one with the microprocessor which represents the intelligence behind the unit, and the other with a motor for rotating the joint. A group of the cubes can be connected into a variety of shapes.
A robot constructed entirely of molecubes would be able to repair itself using extra cubes, and to create a copy of itself using the same number of cubes.
Physical self-reproduction of both a three- and a four-module robot was demonstrated.
Subsequent open-source development, with support from Microsoft Research and Festo, reduced the size and weight of the molecubes.
Additional molecube types were produced, including hinges, grippers, batteries, wheels, and cameras.
See also
Self-reconfiguring modular robot
References
External links
Youtube video: Self-replicating blocks from Cornell University
Youtube video: Festo - Molecubes
Creative Machines Lab page: Machine self-replication
GitHub repository: Molecubes - Blueprints, source code and circuit schematics
molecubes.org website: Detailed build materials and instructions (archived)
Modular design
Open-source robots
Multi-robot systems
Self-replicating machines |
https://en.wikipedia.org/wiki/CCID%20%28protocol%29 | CCID (chip card interface device) protocol is a USB protocol that allows a smartcard to be connected to a computer via a card reader using a standard USB interface, without the need for each manufacturer of smartcards to provide its own reader or protocol. This allows the smartcard to be used as a security token for authentication and data encryption, such as that used in BitLocker. Chip card interface devices come in a variety of forms. The smallest CCID form is a standard USB dongle, which may contain a SIM card or Secure Digital card inside. Another popular interface is a USB smart card reader keyboard, which, in addition to being a standard USB keyboard, has a built-in slot for accepting a smartcard. However, not all CCID-compliant devices accept removable smartcards; for example, some YubiKey hardware authentication devices support CCID by playing the role of both the card reader and the smartcard itself.
Hardware implementation
According to the CCID specification by the USB standards work group, a CCID exchanges information through a host computer over USB by using a CCID message that consists of a 10-byte header followed by message-specific data. The standard defines fourteen commands that the host computer can use to send data and status and control information in messages. Every command requires at least one response message from the CCID.
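As a sketch, the 10-byte command header described above can be packed as follows. The field layout (bMessageType, a 4-byte little-endian dwLength, bSlot, bSeq, then three message-specific bytes) and the example message type 0x62 (IccPowerOn) follow the CCID specification as understood here; treat the concrete values as illustrative.

```python
import struct

def ccid_header(msg_type, data_len, slot=0, seq=0, extra=b"\x00\x00\x00"):
    """Pack a 10-byte CCID command header: bMessageType (1 byte),
    dwLength (4 bytes, little-endian), bSlot (1), bSeq (1),
    followed by 3 message-specific bytes."""
    return struct.pack("<BIBB3s", msg_type, data_len, slot, seq, extra)

# Example: an IccPowerOn command (message type 0x62) with no data payload.
hdr = ccid_header(0x62, 0)
print(len(hdr), hex(hdr[0]))  # 10 0x62
```

The message-specific data, when present, follows immediately after these ten bytes, with its length given by dwLength.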
Software driver
CCID driver support has been natively supported by Microsoft beginning with Windows 2000. Apple has included some form of native CCID support since Mac OS X, with support evolving alongside Common Access Card and Personal Identity Verification specifications set by the US Federal Government. On Linux and other Unixes, CCID and CT-API devices are usually accessed with user-space drivers, for which no special kernel adaptation is required.
List of CCID providers
Advanced Card Systems
ActivIdentity
Baltech
Bit4id
Blutronics srl
Elyctis
Gemalto
Giesecke & Devrient
HID Global
J |
https://en.wikipedia.org/wiki/Lists%20of%20virus%20taxa | This is an index of lists of virus taxa.
By taxonomic rank
List of higher virus taxa, i.e. all taxa above the rank of family
List of virus families and subfamilies
List of virus genera (also includes subgenera)
List of virus species |
https://en.wikipedia.org/wiki/Behavioral%20game%20theory | Behavioral game theory seeks to examine how people's strategic decision-making behavior is shaped by social preferences, social utility and other psychological factors. Behavioral game theory analyzes interactive strategic decisions and behavior using the methods of game theory, experimental economics, and experimental psychology. Experiments include testing deviations from typical simplifications of economic theory such as the independence axiom and neglect of altruism, fairness, and framing effects. As a research program, the subject is a development of the last three decades.
Traditional game theory is a critical principle of economic theory, and assumes that people's strategic decisions are shaped by rationality, selfishness and utility maximisation. It focuses on the mathematical structure of equilibria, and tends to use basic rational choice theory and utility maximization as the primary principles within economic models. Rational choice theory, however, is an idealized model that assumes individuals will actively choose the option with the greatest benefit; in practice, consumers have varied preferences, and the theory's assumptions about consumer behavior are often inaccurate. In contrast to traditional game theory, behavioral game theory examines how actual human behavior tends to deviate from standard predictions and models. In order to more accurately understand these deviations and determine the factors and conditions involved in strategic decision making, behavioral game theory aims to create new models that incorporate psychological principles. Studies of behavioral game theory demonstrate that choices are not always rational and do not always represent the utility-maximizing choice.
Behavioral game theory largely utilizes empirical and theoretical research to understand human behavior. It also uses laboratory and field experiments, as well as modeling – both theoretical and computational. Recently, methods from machine lear |
https://en.wikipedia.org/wiki/Spline%20wavelet | In the mathematical theory of wavelets, a spline wavelet is a wavelet constructed using a spline function. There are different types of spline wavelets. The interpolatory spline wavelets introduced by C.K. Chui and J.Z. Wang are based on a certain spline interpolation formula. Though these wavelets are orthogonal, they do not have compact supports. There is a certain class of wavelets, unique in some sense, constructed using B-splines and having compact supports. Even though these wavelets are not orthogonal they have some special properties that have made them quite popular. The terminology spline wavelet is sometimes used to refer to the wavelets in this class of spline wavelets. These special wavelets are also called B-spline wavelets and cardinal B-spline wavelets. The Battle-Lemarie wavelets are also wavelets constructed using spline functions.
Cardinal B-splines
Let n be a fixed non-negative integer. Let Cn denote the set of all real-valued functions defined over the set of real numbers such that each function in the set, as well as its first n derivatives, is continuous everywhere. A bi-infinite sequence . . . x−2, x−1, x0, x1, x2, . . . such that xr < xr+1 for all r and such that xr approaches ±∞ as r approaches ±∞ is said to define a set of knots. A spline of order n with a set of knots {xr} is a function S(x) in Cn such that, for each r, the restriction of S(x) to the interval [xr, xr+1) coincides with a polynomial with real coefficients of degree at most n in x.
If the separation xr+1 - xr, where r is any integer, between the successive knots in the set of knots is a constant, the spline is called a cardinal spline. The set of integers Z = {. . ., -2, -1, 0, 1, 2, . . .} is a standard choice for the set of knots of a cardinal spline. Unless otherwise specified, it is generally assumed that the set of knots is the set of integers.
A cardinal B-spline is a special type of cardinal spline. For any positive integer m the cardinal B-spline of orde |
https://en.wikipedia.org/wiki/Hjorth%20parameters | Hjorth parameters are indicators of statistical properties used in signal processing in the time domain introduced by Bo Hjorth in 1970. The parameters are Activity, Mobility, and Complexity.
They are commonly used in the analysis of electroencephalography signals for feature extraction. The parameters are normalised slope descriptors (NSDs) used in EEG.
Moreover, in robotics, the Hjorth parameters are used in tactile signal processing to detect physical object properties, such as surface texture and material, and to classify touch modality via artificial robotic skin.
Parameters
Hjorth Activity
The activity parameter represents the signal power, i.e. the variance of a time function; it can indicate the surface of the power spectrum in the frequency domain. It is given by the following equation:

Activity = var(y(t)),

where y(t) represents the signal.
Hjorth Mobility
The mobility parameter represents the mean frequency, or the proportion of standard deviation of the power spectrum. It is defined as the square root of the variance of the first derivative of the signal y(t) divided by the variance of y(t):

Mobility = sqrt( var(y′(t)) / var(y(t)) ).
Hjorth Complexity
The complexity parameter represents the change in frequency. It compares the signal's similarity to a pure sine wave; the value converges to 1 the more similar the signal is. It is defined as the mobility of the first derivative of y(t) divided by the mobility of y(t):

Complexity = Mobility(y′(t)) / Mobility(y(t)).
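The three parameters can be computed directly from a sampled signal. The sketch below is illustrative only: it uses plain Python with finite differences standing in for the derivative, and checks the stated property that Complexity approaches 1 for a pure sine wave.

```python
from math import pi, sin, sqrt
from statistics import pvariance

def hjorth_parameters(y):
    """Activity, Mobility, Complexity of a sampled signal
    (derivatives approximated by finite differences)."""
    dy = [b - a for a, b in zip(y, y[1:])]      # first derivative
    ddy = [b - a for a, b in zip(dy, dy[1:])]   # second derivative
    activity = pvariance(y)
    mobility = sqrt(pvariance(dy) / activity)
    complexity = sqrt(pvariance(ddy) / pvariance(dy)) / mobility
    return activity, mobility, complexity

# A pure sine wave: Activity = 1/2 (the variance of sin), Complexity ≈ 1.
N = 10_000
y = [sin(2 * pi * 5 * i / N) for i in range(N)]
a, m, c = hjorth_parameters(y)
print(round(a, 3), round(c, 2))  # 0.5 1.0
```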
Tactile Signal Analysis
In earlier works, researchers employed the Fourier transform to interpret the obtained tactile information for texture classification. However, the Fourier transform is not appropriate for analysing non-stationary signals, in which textures are irregular or non-uniform. The short-time Fourier transform or wavelets might be the most appropriate techniques for analysing non-stationary signals. However, these methods deal with a large number of data points, thereby causing difficulties at the classification step. More features require more training samples, resulting in the growth of the computational complexity as wel |
https://en.wikipedia.org/wiki/Small%20molecule%20sensors | Small molecule sensors are an effective way to detect the presence of metal ions in solution. Although many types exist, most small molecule sensors comprise a subunit that selectively binds to a metal that in turn induces a change in a fluorescent subunit. This change can be observed in the small molecule sensor's spectrum, which can be monitored using a detection system such as a microscope or a photodiode. Different probes exist for a variety of applications, each with different dissociation constants with respect to a particular metal, different fluorescent properties, and sensitivities. They show great promise as a way to probe biological processes by monitoring metal ions at low concentrations in biological systems. Since they are by definition small and often capable of entering biological systems, they are conducive to many applications for which other, more traditional bio-sensing approaches are less effective or unsuitable.
Uses
Metal ions are essential to virtually all biological systems and hence studying their concentrations with effective probes is highly advantageous. Since metal ions are key to the causes of cancer, diabetes, and other diseases, monitoring them with probes that can provide insight into their concentrations with spatial and temporal resolution is of great interest to the scientific community. There are many applications that one can envision for small molecule sensors. It has been shown that one can use them to differentiate effectively between acceptable and harmful concentrations of mercury in fish. Further, since some types of neurons uptake zinc during their operation, these probes can be used as a way to track activity in the brain and could serve as an effective alternative to functional MRI. One can also track and quantify the growth of a cell, such as a fibroblast, that uptakes metal ions as it constructs itself. Numerous other biological processes can be tracked using small molecule sensors as many change metal concentrations as they |
https://en.wikipedia.org/wiki/Design%20space%20exploration | Design space exploration (DSE) refers to the systematic analysis and pruning of unwanted design points based on parameters of interest. While the term DSE can apply to any kind of system, this article focuses on electronic and embedded system design.
Given the complex specification of electronic systems and the plethora of design choices (choice of components, number of components, operating modes of each component, connections between components, choice of algorithm, and so on), design decisions need to be based on a systematic exploration process. The exploration process is complex, however, because of the variety of ways in which the same functionality can be implemented. A tradeoff analysis among the implementation options, based on parameters of interest, forms the basis of DSE. The parameters of interest vary across systems, but the most commonly used are power, performance, and cost. Additional factors like size, shape, and weight can be important for handheld systems such as cellphones and tablets. With the growing usage of mobile devices, energy is also becoming a mainstream optimization parameter alongside power and performance.
Owing to the complexity of the exploration process, researchers have proposed automated DSE, in which the exploration software is able to take decisions and arrive at an optimal solution on its own. However, automated DSE is not possible for all kinds of systems, so there are also semi-automated methods in which the designer steers the tool toward convergence after every iteration. Since exploration is a complex process that requires substantial computation time, researchers have developed exploration tools that give an approximate analysis of system behavior in a fraction of the time required by accurate analysis. Such tools are very important for quick comparison of design decisions and are becoming more important with the increasing complexity of designs.
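As a toy illustration of the tradeoff analysis, pruning dominated design points on two parameters of interest reduces the design space to a Pareto front. The design names and metric values below are made up; real DSE flows evaluate far richer cost models.

```python
def pareto_front(points):
    """Keep the non-dominated design points: a point is pruned when some other
    point is no worse on both metrics and strictly better on at least one
    (lower is better for both metrics here)."""
    front = []
    for name, p, l in points:
        dominated = any(p2 <= p and l2 <= l and (p2 < p or l2 < l)
                        for _, p2, l2 in points)
        if not dominated:
            front.append(name)
    return front

# Hypothetical design points: (name, power in W, latency in ms).
designs = [("2-core", 1.0, 8.0), ("4-core", 1.8, 5.0),
           ("8-core", 3.5, 4.8), ("4-core+cache", 2.0, 4.0)]
print(pareto_front(designs))  # ['2-core', '4-core', '4-core+cache']
```

Here "8-core" is pruned because "4-core+cache" is better on both power and latency; the remaining points embody genuine tradeoffs.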
To simplif |
https://en.wikipedia.org/wiki/Straight-line%20program | In mathematics, more specifically in computational algebra, a straight-line program (SLP) for a finite group G = 〈S〉 is a finite sequence L of elements of G such that every element of L either belongs to S, is the inverse of a preceding element, or the product of two preceding elements. An SLP L is said to compute a group element g ∈ G if g ∈ L, where g is encoded by a word in S and its inverses.
Intuitively, an SLP computing some g ∈ G is an efficient way of storing g as a group word over S; observe that if g is constructed in i steps, the word length of g may be exponential in i, but the length of the corresponding SLP is linear in i. This has important applications in computational group theory, by using SLPs to efficiently encode group elements as words over a given generating set.
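A minimal sketch of SLP evaluation follows; the instruction encoding is invented for illustration. Each step either loads a generator, inverts an earlier entry, or multiplies two earlier entries, so an SLP of length linear in i can reach group words of length exponential in i.

```python
def evaluate_slp(program, generators, op, inv):
    """Evaluate a straight-line program. Each step is ('gen', i) for the i-th
    generator, ('inv', j) for the inverse of entry j, or ('mul', j, k) for the
    product of entries j and k, where j and k index earlier entries."""
    values = []
    for step in program:
        if step[0] == 'gen':
            values.append(generators[step[1]])
        elif step[0] == 'inv':
            values.append(inv(values[step[1]]))
        else:  # 'mul'
            values.append(op(values[step[1]], values[step[2]]))
    return values

# Toy group: integers mod 15 under addition (so "product" is addition).
# Three doubling steps compute 8·g: a word of length 8 from an SLP of length 4.
prog = [('gen', 0), ('mul', 0, 0), ('mul', 1, 1), ('mul', 2, 2)]
vals = evaluate_slp(prog, [1], op=lambda a, b: (a + b) % 15,
                    inv=lambda a: (-a) % 15)
print(vals)  # [1, 2, 4, 8]
```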
Straight-line programs were introduced by Babai and Szemerédi in 1984 as a tool for studying the computational complexity of certain matrix group properties. Babai and Szemerédi prove that every element of a finite group G has an SLP of length O(log²|G|) in every generating set.
An efficient solution to the constructive membership problem is crucial to many group-theoretic algorithms. It can be stated in terms of SLPs as follows. Given a finite group G = 〈S〉 and g ∈ G, find a straight-line program computing g over S. The constructive membership problem is often studied in the setting of black box groups. The elements are encoded by bit strings of a fixed length. Three oracles are provided for the group-theoretic functions of multiplication, inversion, and checking for equality with the identity. A black box algorithm is one which uses only these oracles. Hence, straight-line programs for black box groups are black box algorithms.
Explicit straight-line programs are given for a wealth of finite simple groups in the online ATLAS of Finite Groups.
Definition
Informal definition
Let G be a finite group and let S be a subset of G. A sequence L = (g1,…,gm) of elements of G is a s |
https://en.wikipedia.org/wiki/Mobile%20Literacy%20in%20South%20Africa | Mobile Literacy in South Africa refers to a range of informal education projects and initiatives that support the development of literacy and enable digital fluency while using mobile devices, especially mobile phones. Mobile literacy is also known by the abbreviation mLiteracy.
The mobile Literacy ecosystem in South Africa was mapped at the mLiteracy Network Meeting hosted by the Goethe-Institut Johannesburg in January 2015. The starting point for the meeting was the UNESCO study "Reading in the Mobile Era" (2014), which takes a closer look at the implications of reading on mobile devices in developing countries.
Ecosystem of mLiteracy in South Africa
The mobile Literacy ecosystem in South Africa has developed organically through the work of several stakeholders who use mobile platforms to encourage access to text, stories and reading materials. Role players within this ecosystem include content providers, platforms, mobile networks, funding agencies, training facilities (including schools and libraries), authors and users. Several organisations working within or based in South Africa have links to and synergies with projects across Africa. Most of the projects use open licences, specifically Creative Commons licences.
Historically, the target group of most projects has been children, teenagers and young adults. Recent data from the South African Department of Basic Education showed that 53% of all Grade 3 children and 70% of all Grade 6 children scored less than 35% on the Annual National Assessment language test. With high mobile penetration in South Africa (some estimates run as high as 128%), there is potential to engage with learners and young adults at every stage of their development via mobile devices.
Content providers
African Storybook. Founded in 2013, the African Storybook has published numerous openly licensed illustrated children's stories for 2- to 10-year-olds in a variety of African languages and English. The site has tools for the creation, translatio |
https://en.wikipedia.org/wiki/Inverted-F%20antenna | An inverted-F antenna is a type of antenna used in wireless communication, mainly at UHF and microwave frequencies. It consists of a monopole antenna running parallel to a ground plane and grounded at one end. The antenna is fed from an intermediate point a distance from the grounded end. The design has two advantages over a simple monopole: the antenna is shorter and more compact, allowing it to be contained within the case of the mobile device, and it can be impedance matched to the feed circuit by the designer, allowing it to radiate power efficiently, without the need for extraneous matching components.
The inverted-F antenna was first conceived in the 1950s as a bent-wire antenna. However, its most widespread use is as a planar inverted-F antenna (PIFA) in mobile wireless devices for its space saving properties. PIFAs can be printed using the microstrip format, a widely used technology that allows printed RF components to be manufactured as part of the same printed circuit board used to mount other components.
PIFAs are a variant of the patch antenna. Many variants of this, and other forms of the inverted-F, exist that implement wideband or multi-band antennas. Techniques include coupled resonators and the addition of slots.
Evolution and history
The inverted-F antenna is an evolution of the widely-used quarter-wave monopole antenna, which consists of a conductive rod mounted perpendicularly above a conductive ground plane, fed at its base. The wire F-type antenna was invented in the 1940s. In this antenna the feed is connected to an intermediate point along the length of the antenna instead of to the base, and the base of the antenna is connected to the ground plane. The advantage of doing this is that the input impedance of the antenna is dependent on the distance of the feed point from the grounded end. The portion of the antenna between the feedpoint and the ground plane is essentially behaving as a short-circuit stub. Thus, the designer can ma |
https://en.wikipedia.org/wiki/Google%20App%20Runtime%20for%20Chrome | Android Runtime for Chrome (ARC) is a compatibility layer and sandboxing technology for running Android applications on desktop and laptop computers in an isolated environment. It allows applications to be safely run from a web browser, independent of user operating system, at near-native speeds.
Overview
The Android Runtime for Chrome is a partially open-sourced project under development by Google. It was announced by Sundar Pichai at the Google I/O 2014 developer conference. In a limited beta consumer release in September 2014, Duolingo, Evernote, Sight Words, and Vine Android applications were made available in the Chrome Web Store for installation on Chromebook devices running OS version 37 or higher.
Development by Google takes place behind closed doors, with individual repository commits, code reviews, and most issue tracking kept internal to the company. The open-sourced parts of ARC are licensed under a BSD-style license.
Development
In a limited beta consumer release in September 2014, Duolingo, Evernote, Sight Words, and Vine Android applications were made available in the Chrome Web Store for installation on Chromebook devices running OS version 37 or higher.
In October 2014, three more apps were added: CloudMagic, Onefootball, and Podcast Addict.
In March 2015, Anandtech reported that VLC media player should be added in the coming months.
On April 1, 2015, Google released ARC Welder, a Chrome Packaged App providing the ARC runtime and application packager. It is intended to give Android developers a preview of the upcoming technology and a chance to test their Android apps on the Chrome platform.
Architecture
ARC builds upon the Google Native Client. The Native Client platform is being extended with a POSIX-compatible layer on top of the NaCl Integrated Runtime and Pepper APIs which emulate the Linux environment in the foundation of an Android phone. This then allows running an almost unchanged Dalvik VM in a sandboxed environment.
|
https://en.wikipedia.org/wiki/Cas%20Cremers | Casimier Joseph Franciscus "Cas" Cremers (born 1974) is a computer scientist and a faculty member at the CISPA Helmholtz Center for Information Security in Saarbruecken, Germany.
Career
Cremers received his PhD from Eindhoven University of Technology in 2006, under the supervision of Sjouke Mauw and Erik de Vink. Between 2006 and 2013, he worked at the Information Security Group at ETH Zurich, Switzerland, until joining the University of Oxford in 2013. He was made full professor of Information Security in 2015.
His research focuses on information security, in particular the formal analysis of security protocols. This work ranges from developing mathematical foundations for protocol analysis to the development of analysis tools, notably the Scyther and Tamarin tools. Recently his research expanded into directions such as protocol standardisation, including the improvement of the ISO/IEC 9798 standard, and applied cryptography, leading to the development of new security requirements and protocols. His joint work with Marko Horvat, Sam Scott, and Thyla van der Merwe led to a not insignificant change to the TLS 1.3 specification.
In 2018 Cremers moved from Oxford University to the Cispa Helmholtz Center for Information Security at Saarbrücken.
Cremers previously worked in MSX computer game development, initially working for the Sigma Group before founding his own group Parallax; he is credited for work on nine different games, and many other demos, in a combination of roles including programmer, designer, composer, and writer. He was interviewed by blog "Distrito Entebras" on the history of his career working in MSX games development.
Publications
Cremers' publications cover security, cryptography, ISO standards, automated verification of security protocols, and formal methods.
His thesis was entitled "Scyther - Semantics and Verification of Security Protocols", and was supervised by Sjouke Mauw and Erik de Vink. Also published with Sjouke Mauw is their book Op |
https://en.wikipedia.org/wiki/List%20of%20OpenCL%20applications | The following is a list of computer programs built to take advantage of the OpenCL or WebCL heterogeneous compute frameworks.
Graphics
ACDSee
Adobe Photoshop
Affinity Photo
Capture One
Blurate
darktable
FAST: imaging Medical
GIMP
HALCON by MVTec
Helicon Focus
ImageMagick
Musemage
Pathfinder, GPU-based font rasterizer
PhotoScan
seedimg
CAD and 3D modelling
Autodesk Maya
Blender GPU rendering with NVIDIA CUDA and OptiX & AMD OpenCL
Houdini
LuxRender
Mandelbulber
Audio, video, and multimedia
AlchemistXF
CUETools
DaVinci Resolve by Blackmagic Design
FFmpeg has a number of OpenCL filters
gr-fosphor GNU Radio block for RTSA-like spectrum visualization
HandBrake
Final Cut Pro X
KNLMeansCL: Denoise plugin for AviSynth
Libav
OpenCV
RealFlow Hybrido2
Sony Catalyst
Vegas Pro by Magix Software GmbH
vReveal by MotionDSP
Total Media Theatre by ArcSoft
x264
x265 (H.265/HEVC)
Web (including WebCL)
Google Chrome (experimental)
Mozilla Firefox (experimental)
Office
Collabora Online
LibreOffice Calc
Games
Military Operations, operational level real-time strategy game where the complete army is simulated in real-time using OpenCL
Planet Explorers is using OpenCL to calculate the voxels.
BeamNG.drive is going to use OpenCL for the physics engine.
Leela Zero, open source replication of Alpha Go Zero using OpenCL for neural network computation.
Scientific computing
Advanced Simulation Library (ASL)
AMD Compute Libraries
clBLAS, complete set of BLAS level 1, 2 & 3 routines
clSPARSE, routines for sparse matrices
clFFT, FFT routines
clRNG, random numbers generators MRG31k3p, MRG32k3a, LFSR113, and Philox-4×32-10
ArrayFire: parallel computing with an easy-to-use API with JIT compiler (open source),
BEAGLE, Bayesian and Maximum Likelihood phylogenetics library
BigDFT
BOINC
Bolt, STL-compatible library for creating accelerated data parallel applications
Bullet
CLBlast: tuned clBlas
clMAGMA, OpenCL |
https://en.wikipedia.org/wiki/Simultaneous%20algebraic%20reconstruction%20technique | Simultaneous algebraic reconstruction technique (SART) is a computerized tomography (CT) imaging algorithm useful in cases when the projection data is limited; it was proposed by Anders Andersen and Avinash Kak in 1984.
It generates a good reconstruction in just one iteration, and it is superior to the standard algebraic reconstruction technique (ART).
As a measure of its popularity, researchers have proposed various extensions to SART: OS-SART, FA-SART, VW-OS-SART, SARTF, etc. Researchers have also studied how SART can best be implemented on different parallel processing architectures. SART and its proposed extensions are used in emission CT in nuclear medicine, dynamic CT, holographic tomography, and other reconstruction applications. Convergence of the SART algorithm was theoretically established in 2004 by Jiang and Wang. Further convergence analysis was done by Yan.
An application of SART to the ionosphere was presented by Hobiger et al. Their method does not use matrix algebra, and therefore it can be implemented in a low-level programming language. Its convergence speed is significantly higher than that of classical SART. A discrete version of SART, called DART, was developed by Batenburg and Sijbers.
References
Radiology
Medical imaging
Inverse problems |
https://en.wikipedia.org/wiki/Semi-simplicity | In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory, and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depend on the context.
For example, if G is a finite group, then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations). Now Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So in the case of finite groups with this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility. For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple compact Lie group is semisimple.
A square matrix T (in other words, a linear operator T : V → V, with V a finite-dimensional vector space) is said to be simple if its only invariant subspaces under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are of size 1 by 1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable.
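A standard illustration, added here for concreteness (not part of the original text): over the real numbers, the rotation matrix through an angle θ that is not a multiple of π,

```latex
R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
```

has no invariant line, so it is a simple matrix of size 2 by 2; over the complex numbers the same matrix is diagonalizable, with eigenvalues e^{iθ} and e^{−iθ}, consistent with the claim that over an algebraically closed field the simple matrices are 1 by 1.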
These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories.
Introductory example of vector spaces
If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those that contain no proper nontrivial subspaces. The |
https://en.wikipedia.org/wiki/Brenkert%20Brenograph%20Jr. | The Brenkert Brenograph Jr. was a projector used in atmospheric theatres to project moving clouds over ceilings painted blue. The effect created the illusion that theatre patrons were outdoors. The device was used primarily in theatre designs of John Eberson.
The machine was manufactured by the Brenkert Light Projection Company of Detroit. The company sold the projector for $225 in the 1920s.
The projector operated automatically with a universal electrical motor, capable of varying speeds. It used a powerful 1500 watt light bulb to display nimbus and cumulus clouds. The clouds were images on a strip of negatives that moved continuously in front of the light. The negatives were affixed to a circular disc that rotated once every 105 minutes—about the length of a typical performance.
The projector was small enough to be hidden in the theatre architectural design so that the illusion of floating clouds would be maintained.
Notes and references
Craig, Robert M. Atlanta Architecture: Art Deco to Modern Classic, 1929–1959. Gretna, LA: Pelican Pub., 1974, p. 74.
Hoffman, Scott L. A Theatre History of Marion, Ohio: John Eberson's Palace and Beyond. Charlotte, NC: The History Press, 2015, p. 30, 32–34, 62.
Welling, David. Cinema Houston: From Nickelodeon to Cineplex. Austin, TX: U. Texas P., 2010, p. 54.
The Pantagraph (Bloomington, IL), May 22, 1983, p. 13.
Projectors |
https://en.wikipedia.org/wiki/Industrial%20Internet%20Consortium | The Industrial Internet Consortium rebranded as the Industry IoT Consortium in August 2021. The Industry IoT Consortium is a program of the Object Management Group (OMG).
The Industry IoT Consortium (IIC) is an open membership organization, with 159 members as of 27 September 2021. The IIC was formed to accelerate the development, adoption and widespread use of interconnected machines and devices and intelligent analytics. Founded by AT&T, Cisco, General Electric, IBM, and Intel in March 2014, the IIC catalyzes and coordinates the priorities and key technologies of the Industrial Internet. In August 2021, the organization changed its mission to deliver transformative business value to industry, organizations, and society by accelerating adoption of a trustworthy internet of things. No products or services are sold.
Its current executive director is Richard Soley. Stephen J. Mellor serves as Chief Technical Officer for the Industry IoT Consortium.
History
The Industrial Internet Consortium (IIC) was founded on 27 March 2014 by AT&T, Cisco, General Electric, IBM, and Intel. Though its parent company is the Object Management Group, the IIC is not a standards organization. Rather, the consortium was formed to bring together industry players — from multinational corporations, small and large technology innovators to academia and governments — to accelerate the development, adoption and widespread use of Industrial Internet technologies.
The mission has changed over the years from a focus on growing an industrial internet market, to one more focused on connectivity and trustworthiness, to one focused on digital transformation and deployments. The mission, from August 2020, was "To deliver transformative business value to organizations, industry and society by accelerating adoption of a trustworthy internet of things".
Specifically, the IIC members are concerned with creating an ecosystem for insight and thought leadership, interoperability and security via reference |
https://en.wikipedia.org/wiki/Modesty%20in%20medical%20settings | Modesty in medical settings refers to the practices and equipment used to preserve patient modesty in medical examination and clinics.
Tools for modesty
Prior to the invention of the stethoscope, a physician who wanted to perform auscultation to listen to heart sounds or noise inside a body would have to physically place their ear against the body of the person being examined. In 1816, male physician René Laennec invented the stethoscope as a way to respect the modesty of a female patient, as it would have been awkward for him to put his ear on her chest.
Hospital gowns increase modesty compared to the patient presenting nude, but traditional designs have been awkward garments that expose the body. Some contemporary changes to the design of hospital gowns have been proposed.
Society and culture
In places with more cultural diversity it becomes more likely that people will make new and different requests for modesty in health care.
Special populations
Sometimes women do not access healthcare because of modesty concerns.
Muslims in non-Muslim societies sometimes make requests for modesty.
References
External links
patientmodesty.org/, a United States-based nonprofit organization advocating for modesty
Modesty
Medical privacy |
https://en.wikipedia.org/wiki/OpenFlint | OpenFlint is an open technology used for displaying ("casting") content from one computerized device on the display of another. Usually this would be from a smaller personal device (like a smartphone) to a device with a larger screen suitable for viewing by multiple spectators (like a TV).
Development of OpenFlint was initiated in 2014 by the Matchstick project, which is a crowd-funded effort to create a miniature piece of hardware suitable for running an OpenFlint server casting to a screen through an HDMI connection. This is similar in concept to Google's Chromecast device that uses Google Cast.
The Matchstick TV devices are powered by Firefox OS, but as an open technology OpenFlint itself is not tied to any specific operating system or hardware.
No consumer-grade OpenFlint-enabled products have shipped, but Matchstick developer devices have been shipping since late 2014; the first round of devices for backers of the Matchstick Kickstarter project were expected to ship in February 2015, but were delayed until August 2015.
A demonstration OpenFlint server can be set up on an ordinary laptop or desktop computer running Linux by following instructions.
The Matchstick TV dongle project was canceled due to issues implementing DRM into Firefox OS.
See also
Android TV
References
External links
Github organization
Wireless display technologies |
https://en.wikipedia.org/wiki/List%20of%20model%20organisms | This is a list of model organisms used in scientific research.
Viruses
Phages (infecting prokaryotes):
Escherichia virus Lambda (Phage lambda)
Phi X 174, the first DNA genome ever to be sequenced (circular, 5386 base pairs in length), shortly after the RNA genome of bacteriophage MS2 (in 1976).
T4 phage
Animal viruses:
SV40
Human alphaherpesvirus (Herpes simplex virus)
Plant viruses:
Tobacco mosaic virus
Prokaryotes
Bacteria:
Escherichia coli (E. coli), common Gram-negative gut bacterium widely used in molecular genetics. Main lab strain is 'K-12'.
Bacillus subtilis, endospore forming Gram-positive bacterium. Main lab strain is '168'.
Caulobacter crescentus, bacterium that divides into two distinct cells used to study cellular differentiation.
Mycoplasma genitalium, minimal organism.
Aliivibrio fischeri, quorum sensing, bioluminescence and animal-bacterial symbiosis with Hawaiian bobtail squid.
Bacteroides thetaiotaomicron, polysaccharide-degrading member of the human gut microbiota, used to study functional aspects of the gut microbiota.
Synechocystis (specifically PCC 6803), photosynthetic cyanobacterium widely used in photosynthesis research.
Pseudomonas fluorescens, soil bacterium that readily diversifies into different strains in the lab.
Azotobacter vinelandii, obligate aerobe diazotroph used in nitrogen fixation research.
Streptomyces coelicolor, soil-dwelling filamentous bacterium used to produce many clinically useful antibiotics.
Archaea:
Methanococcus and Methanosarcina, model methanogens, representing the two metabolic types of hydrogenotrophism and methylotrophism. Methanogenesis remains a key area of metabolic research.
Halobacterium salinarum and Haloferax volcanii, model Haloarchaea. The former has a reputation in the study of DNA repair. The latter is more suited to more traditional genetics due to a shorter generation time and more stable genome. This order is known for its easy uptake of genetic tools as well as resistance to c |
https://en.wikipedia.org/wiki/VRVis | VRVis Zentrum für Virtual Reality und Visualisierung (VRVis) is a research center in the area of Visual Computing in Austria. It is one of currently 22 centrally funded COMET – Competence Centers for Excellent Technologies of Austria. The VRVis Center is located in Ares Tower in Vienna.
References
Laboratories in Austria
Research institutes in Austria
Engineering organizations
TU Wien |
https://en.wikipedia.org/wiki/Transmission%20loss%20%28duct%20acoustics%29 | Transmission loss (TL) in duct acoustics describes the acoustic performance of a muffler-like system. It is frequently used in industry, such as by muffler manufacturers and the NVH (noise, vibration and harshness) departments of automobile manufacturers, and in academic studies. Generally, the higher the transmission loss of a system, the better it will perform in terms of noise cancellation.
Introduction
Transmission loss (TL) in duct acoustics is defined as the difference between the power incident on a duct acoustic device (muffler) and that transmitted downstream into an anechoic termination. Transmission loss is independent of the source, if only plane waves are incident at the inlet of the device. Transmission loss does not involve the radiation impedance inasmuch as it represents the difference between incident acoustic energy and that transmitted into an anechoic environment. Being made independent of the terminations, TL finds favor with researchers who are sometimes interested in finding the acoustic transmission behavior of an element or a set of elements in isolation from the terminations. But measurement of the incident wave in a standing wave acoustic field requires the use of impedance tube technology and may be quite laborious, unless one makes use of the two-microphone method with modern instrumentation.
Mathematical definition
By definition, the plane wave TL of an acoustic component with negligible mean flow may be described as:
TL = 10 log₁₀(W_i / W_t)
where:
W_i = S_i p_i v_i is the incident sound power in the inlet, coming towards the muffler;
W_t = S_o p_t v_t is the transmitted sound power going downstream in the outlet, out of the muffler;
S_i and S_o stand for the cross-sectional areas of the inlet and outlet of the muffler;
p_i is the acoustic pressure of the incident wave in the inlet, towards the muffler;
p_t is the acoustic pressure of the transmitted wave in the outlet, away from the muffler;
v_i is the particle velocity of the incident wave in the inlet, towards the muffler;
v_t is the particle velocity of the transmitted wave in the outlet, away from the muffler. |
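The definition above can be sketched numerically. The helper below is illustrative (its name and input values are assumptions, not from the article); it takes each plane wave's power as the product W = S·p·v of the listed quantities:

```python
import math

def transmission_loss(S_i, p_i, v_i, S_o, p_t, v_t):
    """TL = 10 * log10(W_i / W_t), with W = S * p * v for each plane wave."""
    W_i = S_i * p_i * v_i  # incident sound power at the inlet
    W_t = S_o * p_t * v_t  # transmitted sound power at the outlet
    return 10 * math.log10(W_i / W_t)

# Equal inlet/outlet areas; transmitted pressure and velocity each 10x smaller,
# so the power ratio is 100 and TL = 20 dB:
print(round(transmission_loss(0.01, 2.0, 0.005, 0.01, 0.2, 0.0005), 1))  # 20.0
```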
https://en.wikipedia.org/wiki/FreeSync | FreeSync is an adaptive synchronization technology for LCD and OLED displays that support a variable refresh rate aimed at avoiding tearing and reducing stuttering caused by misalignment between the screen's refresh rate and the content's frame rate.
FreeSync was developed by AMD and first announced in 2014 to compete against Nvidia's proprietary G-Sync. It is royalty-free, free to use, and has no performance penalty.
Overview
FreeSync dynamically adapts the display refresh rate to variable frame rates which result from irregular GPU load when rendering complex gaming content, as well as the lower 23.97/24/29.97/30 Hz used by fixed video content. This helps remove stuttering delays caused by the video interface having to finish the current frame, and screen tearing caused by starting a new frame in the middle of transmission (with vertical sync off). The range of refresh rates supported by the standard is based on the capabilities reported by the display. FreeSync can be enabled automatically by plug and play, making it transparent to the operating system and end user. FreeSync is not limited to AMD graphics cards; it is also compatible with select Nvidia graphics cards and select consoles.
Transitions between different refresh rates are seamless and undetectable to the user. The sync mechanism keeps the video interface at the established pixel clock rate but dynamically adjusts the vertical blanking interval. The monitor keeps displaying the currently received image until a new frame is presented to the video card's frame buffer then transmission of the new image starts immediately. This simple mechanism provides low monitor latency and a smooth, virtually stutter-free viewing experience, with reduced implementation complexity for the timing controller (TCON) and display panel interface. It also helps improve battery life by reducing the refresh rate of the panel when not receiving new images.
Technology
The original FreeSync is based over DisplayPort 1.2a, |
https://en.wikipedia.org/wiki/C3D%20Toolkit | C3D Toolkit is a proprietary cross-platform geometric modeling kit developed by the Russian company C3D Labs (previously part of ASCON Group). It is written in C++. It can be licensed by other companies for use in their 3D computer graphics software products. The software in which C3D Toolkit is most widely used comprises computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems.
C3D Toolkit provides routines for 3D modeling, 3D constraint solving, polygonal mesh-to-B-rep conversion, 3D visualization, and 3D file conversions etc.
History
Nikolai Golovanov is a graduate of the Mechanical Engineering department of Bauman Moscow State Technical University as a designer of space launch vehicles. Upon his graduation, he began with the Kolomna Engineering Design bureau, which at the time employed the future founders of ASCON, Alexander Golikov and Tatiana Yankina. While at the bureau, Dr Golovanov developed software for analyzing the strength and stability of shell structures.
In 1989, Alexander Golikov and Tatiana Yankina left Kolomna to start up ASCON as a private company. Although they began with just an electronic drawing board, even then they were already conceiving the idea of three-dimensional parametric modeling. This radical concept eventually changed flat drawings into three-dimensional models. The ASCON founders shared their ideas with Nikolai Golovanov, and in 1996 he moved to take up his current position with ASCON. As of 2012 he was involved in developing algorithms for C3D Toolkit.
In 2012, the earliest version of the C3D Modeler kernel was extracted from KOMPAS-3D CAD. It was later ported to a range of different platforms and marketed as a separate product.
Overview
It incorporates five modules:
C3D Modeler constructs geometric models, generates flat projections of models, performs triangulations, calculates the inertial characteristics of models, and determines whether collisions occur |
https://en.wikipedia.org/wiki/Map%20%28parallel%20pattern%29 | Map is an idiom in parallel computing where a simple operation is applied to all elements of a sequence, potentially in parallel. It is used to solve embarrassingly parallel problems: those problems that can be decomposed into independent subtasks, requiring no communication/synchronization between the subtasks except a join or barrier at the end.
When applying the map pattern, one formulates an elemental function that captures the operation to be performed on a data item that represents a part of the problem, then applies this elemental function in one or more threads of execution, hyperthreads, SIMD lanes or on multiple computers.
Some parallel programming systems, such as OpenMP and Cilk, have language support for the map pattern in the form of a parallel for loop; languages such as OpenCL and CUDA support elemental functions (as "kernels") at the language level. The map pattern is typically combined with other parallel design patterns. For example, map combined with category reduction gives the MapReduce pattern.
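A minimal sketch of the map pattern in Python (the elemental function and pool size are illustrative choices, not from the article): `pool.map` applies the elemental function to every element independently, with an implicit join when the results are collected:

```python
from concurrent.futures import ThreadPoolExecutor

def elemental(x):
    # Operates on one data item; no communication with other subtasks.
    return x * x

# Apply the elemental function across a pool of threads; collecting the
# results acts as the barrier/join at the end of the map.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(elemental, range(8)))

print(result)  # [0, 1, 4, 9, 16, 25, 36, 49]
```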
See also
Map (higher-order function)
Functional programming
Algorithmic skeleton
References
Parallel computing
Software design patterns |
https://en.wikipedia.org/wiki/Cramer%27s%20theorem%20%28algebraic%20curves%29 | In algebraic geometry, Cramer's theorem on algebraic curves gives the necessary and sufficient number of points in the real plane falling on an algebraic curve to uniquely determine the curve in non-degenerate cases. This number is n(n + 3)/2, where n is the degree of the curve. The theorem is due to Gabriel Cramer, who published it in 1750.
For example, a line (of degree 1) is determined by 2 distinct points on it: one and only one line goes through those two points. Likewise, a non-degenerate conic (polynomial equation in and with the sum of their powers in any term not exceeding 2, hence with degree 2) is uniquely determined by 5 points in general position (no three of which are on a straight line).
The intuition of the conic case is this: Suppose the given points fall on, specifically, an ellipse. Then five pieces of information are necessary and sufficient to identify the ellipse—the horizontal location of the ellipse's center, the vertical location of the center, the major axis (the length of the longest chord), the minor axis (the length of the shortest chord through the center, perpendicular to the major axis), and the ellipse's rotational orientation (the extent to which the major axis departs from the horizontal). Five points in general position suffice to provide these five pieces of information, while four points do not.
Derivation of the formula
The number of distinct terms (including those with a zero coefficient) in an n-th degree equation in two variables is (n + 1)(n + 2) / 2. This is because the n-th degree terms are numbering n + 1 in total; the (n − 1) degree terms are numbering n in total; and so on through the first degree terms and numbering 2 in total, and the single zero degree term (the constant). The sum of these is (n + 1) + n + (n – 1) + ... + 2 + 1 = (n + 1)(n + 2) / 2 terms, each with its own coefficient. However, one of these coefficients is redundant in determining the curve, because we can always divide through the polynomial equatio |
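The counting argument above can be checked for small degrees with a quick illustrative script (added here, not part of the original):

```python
# An n-th degree curve has (n+1)(n+2)/2 coefficients; one is redundant
# (overall scaling), leaving n(n+3)/2 points needed to determine the curve.
for n in range(1, 6):
    terms = (n + 1) * (n + 2) // 2
    points = n * (n + 3) // 2
    assert points == terms - 1
    print(n, terms, points)
# n=1: 3 terms, 2 points (a line); n=2: 6 terms, 5 points (a conic).
```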
https://en.wikipedia.org/wiki/High%20availability%20software | High availability software is software used to ensure that systems are running and available most of the time. High availability is a high percentage of time that the system is functioning. It can be formally defined as (1 − (downtime / total time)) × 100%. Although the minimum required availability varies by task, systems typically attempt to achieve 99.999% (5-nines) availability. This characteristic is weaker than fault tolerance, which typically seeks to provide 100% availability, albeit with significant price and performance penalties.
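The availability formula can be sketched directly (the helper name is illustrative; the well-known "five nines" budget of roughly 5.26 minutes of downtime per year is used as input):

```python
def availability_pct(downtime_hours, total_hours):
    """Availability as a percentage: (1 - downtime/total) * 100."""
    return (1 - downtime_hours / total_hours) * 100

# Five-nines availability allows about 5.26 minutes of downtime per year:
print(round(availability_pct(5.26 / 60, 365 * 24), 3))  # 99.999
```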
High availability software is measured by its performance when a subsystem fails, its ability to resume service in a state close to the state of the system at the time of the original failure, and its ability to perform other service-affecting tasks (such as software upgrades or configuration changes) in a manner that eliminates or minimizes down time. All faults that affect availability – hardware, software, and configuration – need to be addressed by high availability software to maximize availability.
Features
Typical high availability software provides features that:
Enable hardware and software redundancy:
These features include:
The discovery of hardware and software entities,
The assignment of active/standby roles to these entities,
Detection of failed components,
Notification to redundant components that they should become active, and
The ability to scale the system.
A service is not available if it cannot service all the requests being placed on it. The "scale-out" property of a system refers to the ability to create multiple copies of a subsystem to address increasing demand, and to efficiently distribute incoming work to these copies (load balancing), preferably without shutting down the system. High availability software should enable scale-out without interrupting service.
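A minimal sketch of distributing incoming work across scaled-out copies, here via round-robin dispatch (the replica names are placeholders; production load balancers also use health checks and weighting):

```python
from itertools import cycle

# Replicated subsystem copies created by scale-out (illustrative names):
replicas = ["replica-a", "replica-b", "replica-c"]
rr = cycle(replicas)  # endless round-robin iterator over the replicas

# Each incoming unit of work is handed to the next replica in turn:
assignments = [next(rr) for _ in range(6)]
print(assignments)
# ['replica-a', 'replica-b', 'replica-c', 'replica-a', 'replica-b', 'replica-c']
```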
Enable active/standby communication (notably Checkpointing):
Active subsystems need to communicate to standby subsystems to en |
https://en.wikipedia.org/wiki/DiY%20networking | DIY networking is an umbrella term for different types of grassroots networking, such as wireless community networks, mesh networks, and ad-hoc networks, stressing the possibility that wireless technology offers to create "offline" or "off-the-cloud" local area networks (LANs), which can operate outside the Internet. Do-it-yourself (DiY) networking is based on such wireless LAN networks that are created organically through the interconnection of nodes owned and deployed by individuals or small organizations. Even when the Internet is easily accessible, such DiY networks form an alternative, autonomous option for communication and services, which (1) ensures that all connected devices are in de facto physical proximity, (2) offers opportunities and novel capabilities for creative combinations of virtual and physical contact, (3) enables free, anonymous and easy access, without the need for pre-installed applications or any credentials, and (4) can create feelings of ownership and independence, and lead to the appropriation of the hybrid space in the long run.
DiY networks follow the Do-It-Yourself subculture, and provide the technological means for more participatory processes, benefiting from the grassroots engagement of citizens in the design of hybrid, digital and physical, space through novel forms of social networking, crowd sourcing, and citizen science. But for these possibilities to be materialized there are many practical, social, political, and economic challenges that need to be addressed.
Although DiY could be also used for illegal purposes, the DiY concept has become more and more popular in the mainstream academic literature, activism, art, popular media, and everyday practice, and especially in the case of communications networks there are more and more related scientific papers, books, and online articles. There is a large potential for new, novel, and free locality-aware services and opportunities that demand anonymous and easy access, such as Online So |
https://en.wikipedia.org/wiki/Distance%20oracle | In computing, a distance oracle (DO) is a data structure for calculating distances between vertices in a graph.
Introduction
Let G(V,E) be an undirected, weighted graph, with n = |V| nodes and m = |E| edges. We would like to answer queries of the form "what is the distance between the nodes s and t?".
One way to do this is simply to run Dijkstra's algorithm. This takes O(m + n log n) time, and requires no extra space (besides the graph itself).
In order to answer many queries more efficiently, we can spend some time in pre-processing the graph and creating an auxiliary data structure.
A simple data structure that achieves this goal is a matrix which specifies, for each pair of nodes, the distance between them. This structure allows us to answer queries in constant time O(1), but requires O(n²) extra space. It can be initialized in O(n³) time using an all-pairs shortest paths algorithm, such as the Floyd–Warshall algorithm.
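As a sketch of this extreme (illustrative code, not from the article), a Floyd–Warshall precompute builds the full distance matrix in O(n³) time and O(n²) space, after which every query is a constant-time lookup:

```python
INF = float("inf")

def build_distance_matrix(n, edges):
    """edges: list of (u, v, w) triples for an undirected weighted graph."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    # Floyd–Warshall relaxation: allow intermediate node k on each path.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

d = build_distance_matrix(4, [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 3, 10)])
print(d[0][3])  # 4: the path 0-1-2-3 beats the direct edge of weight 10
```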
A DO lies between these two extremes. It uses less space than the full distance matrix in order to answer queries faster than a from-scratch shortest-path computation. Most DOs have to compromise on accuracy, i.e. they don't return the exact distance but rather a constant-factor approximation of it.
Approximate DO
Thorup and Zwick describe more than 10 different DOs. They then suggest a new DO that, for every k, requires O(k·n^(1+1/k)) space, such that any subsequent distance query can be approximately answered in O(k) time. The approximate distance returned is of stretch at most 2k − 1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k − 1. The initialization time is O(k·m·n^(1/k)).
Some special cases include:
For k = 1 we get the simple distance matrix.
For k = 2 we get a structure using O(n^(3/2)) space which answers each query in constant time with approximation factor at most 3.
For k = log n, we get a structure using O(n log n) space, query time O(log n), and stretch O(log n).
Higher values of k do not improve the space or preprocessing time.
DO for general metric spaces
The oracle is built of a decreasing collection of k+1 sets of vertices:
Fo |
https://en.wikipedia.org/wiki/Teletransportation%20paradox | The teletransportation paradox or teletransport paradox (also known in alternative forms as the duplicates paradox) is a thought experiment on the philosophy of identity that challenges common intuitions on the nature of self and consciousness, formulated by Derek Parfit in his 1984 book Reasons and Persons.
The Polish science-fiction writer Stanisław Lem described the same problem in the mid-twentieth century. He put it in writing in his philosophical text Dialogs in 1957. Similarly, in Lem's Star Diaries ("Fourteenth Voyage") of 1957, the hero visits a planet and finds himself recreated from a backup record, after his death from a meteorite strike, which on this planet is a very commonplace procedure. In chapter 6 of his later discursive book "Summa Technologiae", first published in 1964, he discussed in detail the identity paradoxes associated with teleportation and hibernation of human beings.
Similar questions of identity have been raised as early as 1775.
Derek Parfit's version
Derek Parfit and others consider a hypothetical "teletransporter", a machine that puts you to sleep, records your molecular composition, breaks you down into atoms, and relays the information to Mars at the speed of light. On Mars, another machine re-creates you (from local stores of carbon, hydrogen, and so on), each atom in exactly the same relative position. Parfit poses the question of whether or not the teletransporter is actually a method of travel, or if it simply kills and makes an exact replica of the user.
Then the teletransporter is upgraded. The machine on Earth is modified not to destroy the person who enters it; instead it can simply make infinite replicas, all of whom would claim to remember entering the teletransporter on Earth in the first place.
Using thought experiments such as these, Parfit argues that any criteria we attempt to use to determine sameness of person will be lacking, because there is no further fact. What matters, to Parfit, is simply "Relation |
https://en.wikipedia.org/wiki/Karyoklepty | Karyoklepty is a strategy for cellular evolution, whereby a predator cell appropriates the nucleus of a cell from another organism to supplement its own biochemical capabilities.
In the related process of kleptoplasty, the predator sequesters plastids (especially chloroplasts) from dietary algae. The chloroplasts can still photosynthesize, but do not last long after the prey's cells are metabolised. If the predator can also sequester cell nuclei from the prey to encode proteins for the plastids, it can sustain them. Karyoklepty is this sequestration of nuclei; even after sequestration, the nuclei are still capable of transcription.
Johnson et al. described and named karyoklepty in 2007 after observing it in the ciliate species Mesodinium rubrum. Karyoklepty is a Greek compound of the words karydi ("kernel") and kleftis ("thief").
See also
Endosymbiont
References
Further reading
Ecology terminology
Endosymbiotic events
Evolutionary biology |
https://en.wikipedia.org/wiki/ArtPassport | ArtPassport is a free app for iOS. It provides 360° views of art exhibitions and can be connected to a virtual reality headset. It was chosen by TIME as one of the best 25 apps in the first five months of 2017, and was nominated for a Webby Award in 2018. The developers claimed that the app was downloaded 25,000 times in its first week on the App Store, where it received a rating of 2.5 stars out of 5.
References
Online databases |
https://en.wikipedia.org/wiki/Corran%20McLachlan | Corran Norman Stuart McLachlan (1 April 1944 – 9 August 2003) was a New Zealand research scientist and entrepreneur. McLachlan is noted for his work on epidemiological research surrounding the effects of the A1 beta-casein. He believed the existence of this protein in cows’ milk to be a public health issue contributing to both heart disease and type 1 diabetes. In February 2000, McLachlan and his business partner, Howard Paterson,
established A2 Corporation Limited (renamed The a2 Milk Company in April 2014) to market A2 cows’ milk, which was free from the A1 beta-casein.
Early life, education, and family
McLachlan attended his local primary school where he was one of two students in his class before attending Wairarapa College in Masterton from 1957 to 1961. It was here that he developed his interest in science, a discipline he continued to pursue. In 1962, McLachlan began studying at the University of Canterbury, and graduated with a first-class honours degree in chemical engineering. He then went to the University of Cambridge, where he completed a PhD on the reactions of carbon dioxide in alkaline solutions, supervised by Peter Danckwerts, in 1969.
While at Cambridge, McLachlan met a German au pair, Ulrike von Thielen, and they were married within seven months. The couple went on to have three children.
Career
In 1970, McLachlan returned to New Zealand and began working in the Chemistry Division of the Department of Scientific and Industrial Research. In 1974, he won the first United Development Corporation inventor's prize.
McLachlan first became involved in the dairy industry in 1989 when he became the managing director of Tenon Developments. In a joint venture with Morrinsville Thames Cooperative Dairy Company, they developed a method of producing cholesterol-free butter and low-fat meats using extraction technology. He remained the managing director of Tenon Developments Ltd until his death.
While the research project was dropped by the New Zealand Dair |
https://en.wikipedia.org/wiki/Consistent%20and%20inconsistent%20equations | In mathematics and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, values that, when substituted into each of the equations, make the equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations.
If a system of equations is inconsistent, then it is possible to manipulate and combine the equations in such a way as to obtain contradictory information, such as 0 = 1, or a pair of statements asserting that the same expression equals two different values (which together imply a false equality such as 1 = 2).
Both types of equation system, consistent and inconsistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined.
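The consistency test described above can be made concrete with the Rouché–Capelli theorem: a linear system A·x = b is consistent exactly when the coefficient matrix and the augmented matrix [A | b] have the same rank. A minimal Python sketch using exact rational arithmetic (the helper names are illustrative):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix over the rationals and count the pivot rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    cols = len(m[0]) if m else 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_consistent(A, b):
    """Rouché–Capelli: A·x = b is consistent iff rank(A) == rank([A | b])."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(aug)

# x + y = 1 together with x + y = 2 forces the impossible 0 = 1:
print(is_consistent([[1, 1], [1, 1]], [1, 2]))   # False
# x + y = 2 and x - y = 0 have the unique solution x = y = 1:
print(is_consistent([[1, 1], [1, -1]], [2, 0]))  # True
```

The rank comparison also distinguishes the over-, under-, and exactly determined cases mentioned above, since it makes no assumption about the shape of A.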
Simple examples
Underdetermined and consistent
The system
has an infinite number of solutions, all of them having (as can be seen by subtracting the first equation from the second), and all of them therefore having for any values of and .
The nonlinear system
has an infinitude of solutions, all involving
Since each of these systems has more than one solution, each is an indeterminate system.
Underdetermined and inconsistent
The system
has no solutions, as can be seen by subtracting the first equation from the second to obtain the impossible .
The nonlinear system
has no solutions, because if one equation is subtracted from the other we obtain the impossible .
Exactly determined and consistent
The system
has exactly one solution: .
The nonlinear system
has the two solutions and , while
has an infinite number of solutions because the third equation is the first equation plus twice the second one and hence contains no independent information; thus any value of can be chosen and values of and can be found to satisfy the first two (and hence the third) equations.
Exactly |
https://en.wikipedia.org/wiki/Plethystic%20substitution | Plethystic substitution is a shorthand notation for a common kind of substitution in the algebra of symmetric functions and that of symmetric polynomials. It is essentially basic substitution of variables, but allows for a change in the number of variables used.
Definition
The formal definition of plethystic substitution relies on the fact that the ring of symmetric functions is generated as an R-algebra by the power sum symmetric functions p_1, p_2, p_3, ....
For any symmetric function f and any formal sum of monomials A = a_1 + a_2 + ..., the plethystic substitution f[A] is the formal series obtained by making the substitutions
p_k → a_1^k + a_2^k + ...
in the decomposition of f as a polynomial in the p_k's.
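As an illustration of the definition, a plethystic substitution can be evaluated numerically by representing the formal sum A by the values of its monomials, so that p_k[A] = a_1^k + a_2^k + .... The sketch below (assumed helper names, integer inputs) checks the identity h_2 = (p_1^2 + p_2)/2 against the direct expansion of h_2[A] as a sum of products:

```python
from itertools import combinations_with_replacement

def p(k, A):
    """Power sum p_k evaluated plethystically on a formal sum A of monomials,
    with each monomial represented by its (integer) value:
    p_k[A] = a_1**k + a_2**k + ..."""
    return sum(a ** k for a in A)

def h2(A):
    """h_2 written in power sums, h_2 = (p_1**2 + p_2)/2, then evaluated on A.
    For integer inputs p_1**2 + p_2 is always even, so // is exact."""
    return (p(1, A) ** 2 + p(2, A)) // 2

A = [2, 3, 5]  # stands for the formal sum of three monomials
# Direct definition of h_2[A]: the sum of all products a_i * a_j with i <= j.
direct = sum(x * y for x, y in combinations_with_replacement(A, 2))
print(h2(A), direct)  # the two values agree
```

The same pattern extends to any symmetric function once it is written as a polynomial in the p_k's.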
Examples
If denotes the formal sum , then .
One can write to denote the formal sum , and so the plethystic substitution is simply the result of setting for each i. That is,
.
Plethystic substitution can also be used to change the number of variables: if , then is the corresponding symmetric function in the ring of symmetric functions in n variables.
Several other common substitutions are listed below. In all of the following examples, and are formal sums.
If is a homogeneous symmetric function of degree , then
If is a homogeneous symmetric function of degree , then
,
where is the well-known involution on symmetric functions that sends a Schur function to the conjugate Schur function .
The substitution is the antipode for the Hopf algebra structure on the ring of symmetric functions.
The map is the coproduct for the Hopf algebra structure on the ring of symmetric functions.
is the alternating Frobenius series for the exterior algebra of the defining representation of the symmetric group, where denotes the complete homogeneous symmetric function of degree .
is the Frobenius series for the symmetric algebra of the defining representation of the symmetric group.
External links
Combinatorics, Symmetric Functions, and Hilbert Schemes (Haiman, 2002)
References
M. Haiman, Combinatorics, |
https://en.wikipedia.org/wiki/Cognitive%20bias%20in%20animals | Cognitive bias in animals is a pattern of deviation in judgment, whereby inferences about other animals and situations may be affected by irrelevant information or emotional states. It is sometimes said that animals create their own "subjective social reality" from their perception of the input. In humans, for example, an optimistic or pessimistic bias might affect one's answer to the question "Is the glass half empty or half full?"
To explore cognitive bias, one might train an animal to expect that a positive event follows one stimulus and that a negative event follows another stimulus. For example, on many trials, if the animal presses lever A after a 20 Hz tone it gets a highly desired food, but a press on lever B after a 10 Hz tone yields bland food. The animal is then offered both levers after an intermediate test stimulus, e.g. a 15 Hz tone. The hypothesis is that the animal's "mood" will bias the choice of levers after the test stimulus; if positive, it will tend to choose lever A, if negative it will tend to choose lever B. The hypothesis is tested by manipulating factors that might affect mood – for example, the type of housing the animal is kept in.
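The hypothesis in this paradigm can be caricatured as a decision curve over tone frequency that a mood parameter shifts sideways. The toy simulation below is purely illustrative: the sigmoid form, its slope, and the mood values are assumptions, not taken from any particular study.

```python
import math
import random

def p_choose_A(tone_hz, mood, midpoint=15.0, slope=1.0):
    """Probability of pressing lever A (trained on the rewarded 20 Hz tone).
    A positive mood shifts the curve so the ambiguous 15 Hz tone is treated
    more like the 20 Hz tone; every parameter here is an illustrative guess."""
    return 1.0 / (1.0 + math.exp(-slope * (tone_hz - midpoint + mood)))

random.seed(0)
for mood in (-2.0, 0.0, 2.0):
    chose_A = sum(random.random() < p_choose_A(15.0, mood) for _ in range(1000))
    print(f"mood={mood:+.1f}: chose lever A on {chose_A} of 1000 ambiguous trials")
```

Manipulating housing conditions in a real experiment plays the role of the `mood` parameter here: the measured quantity is the shift in choices at the ambiguous stimulus.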
Cognitive biases have been shown in a wide range of species including rats, dogs, rhesus macaques, sheep, chicks, starlings and honeybees.
In rats
In what has been described as a "landmark study", the first study of cognitive bias in animals was conducted with rats. This showed that laboratory rats in unpredictable environments had a more pessimistic attitude than rats in predictable environments.
One study on rats investigated whether changes in light intensity – a short-term manipulation of emotional state – has an effect on cognitive bias. Light intensity was chosen as a treatment because this specifically relates to anxiety-induction. Rats were trained to discriminate between two different locations, in either high ('H') or low ('L') light levels. One location was rewarded with palatable food and the |