Working Draft produced as part of the W3C Semantic Web Activity. This document incorporates material developed by the Working Group, and is designed to provide the reader with the basic knowledge required to effectively use RDF in their particular applications.
This document is being released for review by W3C Members and other interested parties to encourage feedback and comments. It represents the current state of ongoing work on the Primer.
1. Introduction
2. Making Statements About Resources
2.3 The RDF Model
2.4 Structured Property Values and Blank Nodes
2.5 Typed Literals
2.6 Concepts Summary
3. An XML Syntax for RDF: RDF/XML
3.1 Basic Principles
3.2 Defining New RDF Resources
3.3 RDF/XML Summary
4. Other RDF Capabilities
4.1 RDF Containers
4.2 RDF Collections
4.3 RDF Reification
4.4 Miscellaneous RDF Facilities
4.4.1
7. Other Parts of the RDF Specification
7.1 RDF Semantics
7.2 Test Cases
8. References
8.1 Normative References
8.2 Informational References
9. Acknowledgments
A. Uniform Resource Identifiers (URIs) Survival Guide
B. Extensible Markup Language (XML) Survival Guide
Figure 0 illustrates that RDF uses URIs to identify:
RDF also provides an XML-based syntax (called RDF/XML) for recording and exchanging these graphs. The following is a small chunk of RDF in RDF/XML corresponding to the graph in Figure 0:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://www.w3.org/2000/10/swap/pim/contact#">
  <Person rdf:about="http://www.w3.org/People/EM/contact#me">
    <fullName>Eric Miller</fullName>
    <mailbox rdf:resource="mailto:em@w3.org"/>
    <personalTitle>Dr.</personalTitle>
  </Person>
</rdf:RDF>

RDF can also be used to augment other documents by making assertions about Web resources, e.g., Web pages.
@@Now that Secs 2.1 and 2.2 are appendices, need a smooth segue here. In particular, need to briefly introduce URIrefs and QNames.@@
@@Mention that full discussion of RDF URIrefs (and the graph model as a whole) is in the Concepts document.@@
Groups of statements are represented by corresponding graphs. In drawing RDF graphs, nodes that represent resources identified by URIrefs are shown as ellipses, while nodes that represent literals are shown as boxes (labeled by the literal itself). RDF graphs can be described as "labeled directed graphs", since the arcs have labels, and are "directed" (point in a specific direction, from subject to object).
@@mention that a full discussion of RDF literals is in the Concepts document.@@
So, for example, if the QName prefix foo is mapped to a namespace URIref, the QName foo:bar stands for that URIref with the simple character string bar appended.
A blank node has no identifier of its own in the drawing of the graph representing the group. (Blank nodes were previously called anonymous resources in [RDF-MS].) However, we would need some form of explicit identifier for such a node if we wanted to distinguish the various blank nodes in the triples representation of the graph. To do this, we use node identifiers, having the form _:name, to indicate the presence of blank nodes in triples.
RDF borrows a conceptual framework from XML Schema datatypes [XML-SCHEMA2] to more precisely describe these datatype requirements. RDF's use of this framework is defined in RDF Concepts and Abstract Syntax [RDF-CONCEPTS].
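The _:name node-identifier convention can be made concrete with a small sketch (Python, not part of any RDF toolkit; the URIrefs and the year/month structured value are hypothetical, modeled on this Primer's exterms: examples):

```python
# Minimal sketch: writing triples that share a blank node, using the
# _:name node-identifier convention described above.

def nt_term(term):
    """Format a term for N-Triples: blank nodes stay as _:name,
    quoted literals stay as-is, everything else becomes a <URIref>."""
    if term.startswith("_:"):
        return term                  # blank node identifier
    if term.startswith('"'):
        return term                  # already a quoted literal
    return "<%s>" % term

def to_ntriples(triples):
    return "\n".join(
        "%s %s %s ." % (nt_term(s), nt_term(p), nt_term(o))
        for s, p, o in triples)

EX = "http://www.example.org/terms/"
triples = [
    # the blank node _:d1 is the object of one triple...
    ("http://www.example.org/index.html", EX + "creation-date", "_:d1"),
    # ...and the subject of two others
    ("_:d1", EX + "year", '"1999"'),
    ("_:d1", EX + "month", '"August"'),
]
print(to_ntriples(triples))
```

The same identifier _:d1 ties the three triples together, which is exactly what the drawn graph expresses without needing any identifier at all.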
@@Framework discussion removed@@
The typed literal in Figure 9 is valid RDF, but obviously an error as far as the xsd:integer datatype is concerned, since "pumpkin" is not defined as being in the lexical space. The normative (i.e., definitive) RDF specification defining these concepts is the RDF Concepts and Abstract Syntax [RDF-CONCEPTS], which should be consulted for further information. Together with the RDF Semantics [RDF-SEMANTICS], [RDF-CONCEPTS] provides the definition of the abstract syntax for RDF, together with its formal semantics (meaning). Additional background on the basic ideas underlying RDF, and its role in providing a general language for describing Web information, can be found in [WEBDATA].
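The lexical-space point can be checked mechanically; the following is a sketch (Python; the regular expression is our own paraphrase of the xsd:integer lexical form, an optional sign followed by digits, not a normative definition):

```python
# Sketch: checking whether a literal's string is in the lexical space of
# xsd:integer, per the point above that "pumpkin"^^xsd:integer is
# well-formed RDF but ill-typed with respect to the datatype.
import re

# xsd:integer lexical form: optional sign, then one or more digits
XSD_INTEGER_LEXICAL = re.compile(r"^[+-]?[0-9]+$")

def in_integer_lexical_space(lexical_form):
    return bool(XSD_INTEGER_LEXICAL.match(lexical_form))

print(in_integer_lexical_space("27"))       # True
print(in_integer_lexical_space("pumpkin"))  # False
```

An RDF parser will happily carry "pumpkin"^^xsd:integer through the graph; it is only with respect to the datatype that the literal is an error.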
@@Following para moved here from Concepts document sect 1.2 "Background reading", and needs to be fit in.@@
RDF draws upon ideas from knowledge representation, artificial intelligence and data management, including from Conceptual Graphs, logic-based knowledge representation, frames, and relational databases. Some possible sources of background information are [Sowa], [CG], [KIF], [Hayes], [Luger], [Gray].
@@Original text continues.@@
with a triple representation of:
ex:index.html exterms:creation-date "August 16, 1999" .
Corresponding RDF/XML syntax for the graph in Figure 6 would be:
@@may be too much XML tutorial material in discussing lines 1, 2, and 3@@
@@In above, Section 2.2 now an appendix.@@
Consider the following group of statements about ex:index.html:
ex:index.html dc:creator exstaff:85740 .
ex:index.html exterms:creation-date "August 16, 1999" .
ex:index.html exterms:language "English" .
whose graph (the same as Figure 2) is shown in Figure 11:
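The three statements listed above share the subject ex:index.html, which is what lets a serializer group them under a single description. A minimal sketch of that grouping step (Python; the QName prefixes are the hypothetical ones used in this Primer):

```python
# Sketch: grouping triples by subject, the step that lets RDF/XML place
# several property elements inside one rdf:Description element.
from collections import defaultdict

triples = [
    ("ex:index.html", "dc:creator", "exstaff:85740"),
    ("ex:index.html", "exterms:creation-date", '"August 16, 1999"'),
    ("ex:index.html", "exterms:language", '"English"'),
]

by_subject = defaultdict(list)
for s, p, o in triples:
    by_subject[s].append((p, o))

for subject, props in by_subject.items():
    print(subject)
    for p, o in props:
        print("    %s -> %s" % (p, o))
```

All three property/value pairs end up under the single subject, mirroring the nesting in the RDF/XML serialization.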
@@Syntax doc. Sec 2.3@@
the RDF/XML syntax for the graph shown in Figure 11 could be written as:
@@Syntax doc. Sec 2@@
(we have added line numbers again to use in explaining the example).
Compared with the previous two examples, we've added an additional namespace declaration (in Line 3), and an additional creator property element (in Line 8). In addition, we've nested the three property elements within a single description, writing each literal value as a string within the property element's start- and end-tags. Note that in giving the value of the dc:creator element, RDF/XML requires that we write out the full URIref, rather than abbreviating it as a QName, as we've done in writing element and attribute names.
It is important to understand that the RDF/XML in the example above is an abbreviation. Figure 12 (taken from [RDF-SYNTAX]) shows a graph saying "the document '' has a title 'RDF/XML Syntax Specification (Revised)' and has an editor; the editor has a name 'Dave Beckett' and a home page ''".
This illustrates an idea we discussed near the end of Section 2. The most general approach is to assign a blank node identifier (or bnodeID) to identify the blank node. Using this approach, RDF/XML corresponding to Figure 12 could be written as follows:
@@Syntax doc. Sec 2.10@@ In this example, the bnodeID is assigned to the blank node in Line 9, and used to reference it in Line 7. The advantage of using a bnodeID over some of the other approaches described in [RDF-SYNTAX] is that a bnodeID allows the same blank node to be referred to in more than one place in the same RDF/XML document.
@@Syntax doc. Sec 2.9@@ So far we have used plain (untyped) character literals in our examples. However, you should be aware that typed literals from appropriate datatypes, such as XML Schema datatypes, can always be used instead. Of the serialization approaches described in [RDF-SYNTAX], this simple serialization approach provides the most direct representation of the actual graph structure, and is particularly recommended for applications in which the output RDF/XML is to be used in further RDF processing.
@@These aren't really *new* resources; look at Dave and Brian comments@@ For example, a catalog entry (describing a model of tent called the "Overnighter") might be written:
@@Syntax doc. Sec 2.14@@ Here, the fragment identifier item10245 serves (within the vocabulary used by example.com) as a shorthand for the complete URIref of the resource we want to describe. This fragment identifier will be interpreted relative to a base URI, in this case, the URI of the containing catalog. The full URIref for the tent is formed by taking the base URI (of the catalog) and appending #item10245. Writing #item10245 in an rdf:about attribute elsewhere would be understood to refer to this resource defined within the catalog. We could also have introduced the URIref of the catalog entry itself by specifying rdf:about="#item10245" instead of rdf:ID="item10245" (i.e., by specifying the relative URIref directly). The two forms are essentially synonyms: the full URIref formed by RDF is the same in either case.
RDF located outside the catalog could refer to this catalog entry by using the full URIref, i.e., by concatenating the relative URIref #item10245 with the catalog's base URI, in rdf:Description elements that refer to the original resource using rdf:about.
The previous example indicated that fragment identifiers such as #item10245 are interpreted relative to the retrieved URI of the document itself. As a result, the same relative URIref, #item10245, will generate the same absolute URIref no matter where the catalog is located.
Note the use of the rdf:type property to indicate that the instance belongs to class Tent. In this case, we imagine that example.com has defined its classes as part of the same vocabulary that it uses to describe its other terms (such as its exterms: properties).
@@Syntax doc. Sec 2.13; introduce term "typed node"@@.
@@Note often need to add an identifier to XML as subject; clarify how XML can be interpreted as RDF.@@
@@mention other abbreviations covered in syntax; more nested forms@@ RDF provides three predefined types (together with some associated predefined properties) for describing containers. The membership properties have names of the form rdf:_n, where n is an integer. For example, a container might be described using the following RDF/XML:
This is an example of the typed node we saw earlier in Section 3.2, and is an abbreviation of an rdf:Description element, together with an rdf:type property, describing the Bag.
The graph in Figure 15 could be written in the following RDF/XML:
RDF defines no particular meaning for this structure; it is up to applications to interpret it.
@@Also need to explain that you can't really close the members, but the intent is to close them. It's more complicated to add additional members to a given list, but you could define another list (somewhere else). You need to be able to constrain the value of s:students to being a single node. Can't do this in RDF.@@
RDF applications sometimes need to make statements about statements, for instance, to record information about when a statement was made, who made it, or other similar information. For example, consider a statement about the tent we discussed in Section 3:
@@Brian couldn't understand whether the above para was getting at the statements/stating distinction. He also couldn't understand the para below. Perhaps what I need to do is add something like the following:
To see this, given the original triple:
exproducts:item10245 exterms:weight "2.4" .
and a reification of it, together with an additional triple that says.
@@end new example@@.
@@new example@@ In addition to the RDF capabilities we've already described, RDF provides a number of other miscellaneous facilities. We cover these facilities in this section, along with some other topics which don't fit naturally into the other sections.
@@flatten this, making rdf:value into 4.4; can I have multiple anchors to preserve both miscellaneous and rdfvalue?@@
In Section 2, we saw an example that used the rdf:value property to identify the "primary" value of a structured property value:
@@Above example uses a nested Bnode and parseType=Resource; need to rewrite with a node id if I remove that abbreviation from the syntax section?@@
The same approach can be used to represent quantities using any units of measure, as well as values taken from different classification schemes or rating systems, by using the rdf:value property to give the primary value, and using additional properties to identify the classification scheme or other information that further describes the value.
You need not use rdf:value for these purposes, and RDF does not associate any particular meaning with it. rdf:value is simply provided as a convenience for use in these commonly-occurring situations.
@@This section currently does not include a discussion of rdfs:Datatype, and the declaration of specific datatypes in schemas, and requires further synchronization in a number of other areas with the RDF Vocabulary Description Language specification.@@
@@Look at discussions with Patrick for rdfs:Datatype usage.@@ For example, example.com would want to define classes such as exterms:Tent, and here we will define a class called xyz:MotorVehicle (using xyz: to stand for the namespace we will use in this example). A class represents the group of resources that belong to the class, called its instances. In this case, we intend the class xyz:MotorVehicle to represent the group of resources that are motor vehicles.
In RDF Schema, we would express this by giving the (property) resource ex:author an rdfs:range property whose value is the (class) resource ex:Person.
The rdfs:range property can also be used to indicate that the value of a property is intended to be a literal of a particular datatype. For example, if we wanted to indicate that the property ex:age was intended to have a literal value of the XML Schema datatype xsd:integer, we would give the (property) resource ex:age an rdfs:range property whose value is the resource xsd:integer. Similarly, an rdfs:domain property is used to indicate that a property such as exterms:weight is intended to be applied to instances of class ex:Book. If exterms:weight has more than one domain property, say one specifying ex:Book as the domain and another one specifying ex:MotorVehicle as the domain, this says that any resource that has an exterms:weight property is intended to be an instance of all of the classes specified as the domains.
@@Be more explicit about fact that the instances are in the same document as the schema? Show how the schema is referenced if they aren't (e.g., by URIrefs of the classes and properties, or by using isDefinedBy if you want a link to the actual document?@@
Such classes and properties are not defined by RDF itself; instead, they are described as an RDF vocabulary, using the RDF Vocabulary Description Language 1.0: RDF Schema [RDF-VOCABULARY].
RDF Schema provides basic capabilities for defining RDF vocabularies.
@@Sections could be more "tutorial"; e.g., pointing out how the examples use various RDF features to do various important things.@@ For example, dc:date is extended by properties like prism:publicationTime, prism:releaseTime, prism:expirationTime, etc.
pcv: This prefix refers to the PRISM Controlled Vocabulary.
@@Above is not very clear; it concludes the structured value stuff, so presumably could be made clearer; e.g., structured value a reference to the ISO code example.@@
@@Following paras too "hypey" and also a lot of redundant stuff. Too many "very"s.@@ It is designed for content that can be packaged into easily distinguishable sections; news sites, web logs, sports scores, stock quotes, and the like are all examples.
As Section 6.4 suggests, structured metadata using controlled vocabularies plays an important role in medicine, enabling efficient literature searches and aiding in the distribution and exchange of medical knowledge.
This will become increasingly important as richer ontology languages, such as DAML+OIL or OWL, become more widely used.
@@Fair overlap now with Concepts doc description of MT. Reduce?@@ The formal meaning of RDF is defined in the RDF Semantics [RDF-SEMANTICS].
In other words, the RDF model theory provides the formal underpinnings for all of the concepts we have described.
@@Cover Brian's comments about this sect.@@
The RDF Test Cases [RDF-TESTS] supplement the textual RDF specifications with specific examples of RDF/XML syntax and the corresponding RDF graph triples. To describe these examples, it introduces a notation called N-triples, which provides the basis for the triples notation used throughout this Primer. The test cases themselves are also published in machine-readable form at Web locations referenced by the Test Cases document, so developers can use these as the basis for.
@@This section needs editing now it's an appendix.@@
In order to make writing URIrefs easier, abbreviated forms are often used.
In later sections, we'll see how RDF uses URIrefs for identifying the subjects, predicates, and objects in statements. But before we do that, we need to briefly introduce, in the next section, the basis of how RDF statements can be physically represented and exchanged.
@@This section needs editing now it's an appendix.@@ RDF/XML provides a means of representing RDF information, and of exchanging it between machines. An example of RDF/XML was given in Section 1, and the language is described in more detail in Section 3.
Changes since the 26 April 2002 Working Draft:
Changes since the 11 November 2002 Working Draft:
@@Fill in@@
import "k8s.io/apimachinery/pkg/types"
Package types implements various generic types used throughout kubernetes.
doc.go namespacedname.go nodename.go patch.go uid.go
func (n NamespacedName) String() string
String returns the general purpose string representation
NodeName is a type that holds an api.Node's Name identifier. Being a distinct type captures intent and helps make sure that the node name is not confused with similar concepts (the hostname, the cloud provider id, the cloud provider name, etc.)
To clarify the various types:
* Node.Name is the Name field of the Node in the API. This should be stored in a NodeName.
Unfortunately, because Name is part of ObjectMeta, we can't store it as a NodeName at the API level.
* Hostname is the hostname of the local machine (from uname -n).
However, some components allow the user to pass in a --hostname-override flag, which will override this in most places. In the absence of anything more meaningful, kubelet will use Hostname as the Node.Name when it creates the Node.
* The cloud providers have their own names: GCE has InstanceName, AWS has InstanceId.
For GCE, InstanceName is the Name of an Instance object in the GCE API. On GCE, Instance.Name becomes the Hostname, and thus it makes sense also to use it as the Node.Name. But that is GCE specific, and it is up to the cloudprovider how to do this mapping. For AWS, the InstanceID is not yet suitable for use as a Node.Name, so we actually use the PrivateDnsName for the Node.Name. And this is _not_ always the same as the hostname: if we are using a custom DHCP domain it won't be.
Similarly to the above, these are constants supporting HTTP PATCH that are utilized by both the client and server, and that didn't warrant a dedicated package.
const (
	JSONPatchType           PatchType = "application/json-patch+json"
	MergePatchType          PatchType = "application/merge-patch+json"
	StrategicMergePatchType PatchType = "application/strategic-merge-patch+json"
	ApplyPatchType          PatchType = "application/apply-patch+yaml"
)
UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated.
Package types imports 1 package (graph) and is imported by 6721 packages. Updated 2019-03-06.
Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
.\" Copyright (c) 1983,
.\" @(#)dir.5	8.3 (Berkeley) 4/19/94
.\" $FreeBSD: src/share/man/man5/dir.5,v 1.12.2.5 2001/12/17 11:30:13 ru Exp $
.\" $DragonFly: src/share/man/man5/dir.5,v 1.2 2003/06/17 04:37:00 dillon Exp $
.\"
.Dd April 19, 1994
.Dt DIR 5
.Os
.Sh NAME
.Nm dir ,
.Nm dirent
.Nd directory file format
.Sh SYNOPSIS
.In dirent.h
.Sh DESCRIPTION
Directories provide a convenient hierarchical method of grouping
files while obscuring the underlying details of the storage medium.
A directory file is differentiated from a plain file
by a flag in its
.Xr inode 5
entry.
It consists of records (directory entries) each of which contains
information about a file and a pointer to the file itself.
Directory entries may contain other directories
as well as plain files; such nested directories are referred to as
subdirectories.
A hierarchy of directories and files is formed in this manner
and is called a file system (or referred to as a file system tree).
.\" An entry in this tree,
.\" nested or not nested,
.\" is a pathname.
.Pp
Each directory file contains two special directory entries; one is a pointer
to the directory itself
called dot
.Ql .\&
and the other a pointer to its parent directory called dot-dot
.Ql \&.. .
Dot and dot-dot
are valid pathnames, however,
the system root directory
.Ql / ,
has no parent and dot-dot points to itself like dot.
.Pp
File system nodes are ordinary directory files on which has
been grafted a file system object, such as a physical disk or a
partitioned area of such a disk.
(See
.Xr mount 2
and
.Xr mount 8 . )
.Pp
The directory entry format is defined in the file
.Aq sys/dirent.h
(which should not be included directly by applications):
.Bd -literal
#ifndef _SYS_DIRENT_H_
#define _SYS_DIRENT_H_

#include <machine/ansi.h>

/*
 * The dirent structure defines the format of directory entries returned by
 * the getdirentries(2) system call.
 *
 * A directory entry has a struct dirent at the front of it, containing its
 * inode number, the length of the entry, and the length of the name
 * contained in the entry.  These are followed by the name padded to a 4
 * byte boundary with null bytes.  All names are guaranteed null terminated.
 * The maximum length of a name in a directory is MAXNAMLEN.
 */

struct dirent {
	__uint32_t d_fileno;	/* file number of entry */
	__uint16_t d_reclen;	/* length of this record */
	__uint8_t  d_type;	/* file type, see below */
	__uint8_t  d_namlen;	/* length of string in d_name */
#ifdef _POSIX_SOURCE
	char	d_name[255 + 1];	/* name must be no longer than this */
#else
#define	MAXNAMLEN	255
	char	d_name[MAXNAMLEN + 1];	/* name must be no longer than this */
#endif
};

/*
 * File types
 */
#define DT_UNKNOWN	 0
#define DT_FIFO		 1
#define DT_CHR		 2
#define DT_DIR		 4
#define DT_BLK		 6
#define DT_REG		 8
#define DT_LNK		10
#define DT_SOCK		12
#define DT_WHT		14

/*
 * Convert between stat structure types and directory types.
 */
#define IFTODT(mode)	(((mode) & 0170000) >> 12)
#define DTTOIF(dirtype)	((dirtype) << 12)

/*
 * The _GENERIC_DIRSIZ macro gives the minimum record length which will hold
 * the directory entry.  This requires the amount of space in struct direct
 * without the d_name field, plus enough space for the name with a terminating
 * null byte (dp->d_namlen+1), rounded up to a 4 byte boundary.
 */
#define _GENERIC_DIRSIZ(dp) \
	((sizeof (struct dirent) - (MAXNAMLEN+1)) + (((dp)->d_namlen+1 + 3) &~ 3))

#ifdef _KERNEL
#define GENERIC_DIRSIZ(dp)	_GENERIC_DIRSIZ(dp)
#endif

#endif /* !_SYS_DIRENT_H_ */
.Ed
.Sh SEE ALSO
.Xr fs 5 ,
.Xr inode 5
.Sh BUGS
The usage of the member d_type of struct dirent is unportable as it is
.Fx Ns -specific .
It also may fail on certain filesystems, for example the cd9660 filesystem.
.Sh HISTORY
A
.Nm
file format appeared in
.At v7 .
Writings from the software side of bioinformatics and chemical informatics, with a heaping of Python thrown in for good measure. Code to taste. Best served at room temperature.
mmpdb paper, poster, and walkthrough
Last year we released the mmpdb program, a tool for identifying and using matched molecular pairs. It is based on the source code that Hussain and Rea contributed to the RDKit project. Some of the features we added are:
-
Last week I presented our poster for the 11th International Conference on Chemical Structures (ICCS), in Noordwijkerhout, The Netherlands.
In this essay I'll walk through an example of how to use mmpdb using the example data from the paper's supporting materials.
Step 1: install mmpdb
Mmpdb requires Python and RDKit. It will work with both Python 2.7 and Python 3.5+. While you can download it from the mmpdb project page, it's easier to install the package with pip, as:
pip install mmpdb
This is a pure Python installation which installs the command-line tool "mmpdb", and the Python library package "mmpdblib".
Step 2: Get the supporting data
I'll structure this as the commands to run, followed by some commentary. I'll also provide a link to the next section if you want to skip the commentary.
curl -O
unzip ci8b00173_si_001.zip
Skip to step 3.
Quoting from the paper:
We used all of the CYP3A4 (ChEMBL target ID ChEMBL340) and hERG (ChEMBL target ID ChEMBL340) data from ChEMBL23 to generate a reproducible timing benchmark. We merged all of the IC50 and Ki data for hERG and IC50, AC50, and Ki data for CYP3A4 with PCHEMBL values and removed undefined compounds and duplicates. The result was 14,377 compounds for CYP3A4 and 6192 compounds for hERG, with 302 compounds having a measured value for both hERG and CYP3A4, yielding a data set with 20,267 compounds overall. It should be noted that we employed this very coarse data cleaning and merging protocol for illustration purposes only. Additional care would need to be taken to assemble a hERG or CYP3A4 data set for actual MMPA and ensure compatibility of the assay, reproducibility of the data, etc.

The SMILES and property data files are available in the supplementary data, which is a zip file containing the following:
% unzip -l ci8b00173_si_001.zip
Archive:  ci8b00173_si_001.zip
   Length      Date    Time    Name
 ---------  ---------- -----   ----
       987  03-22-2018 17:30   test_calls.txt
   1393558  08-23-2017 15:50   ChEMBL_CYP3A4_hERG.smi
    426852  08-23-2017 15:44   ChEMBL_CYP3A4_hERG_props.txt
 ---------                     -------
   1821397                     3 files
Download and unzip that file using your tools of choice.
Step 3: Fragment the SMILES
mmpdb fragment --max-heavies 70 --max-rotatable-bonds 20 --has-header \
    ChEMBL_CYP3A4_hERG.smi --output ChEMBL_CYP3A4_hERG.fragments
Skip to step 4.
Each input molecule is fragmented into 0 or more fragment records. The algorithm uses a SMARTS pattern to identify which bonds may be fragmented. Use the --cut-smarts option to specify one of the alternative built-in patterns, or to specify your own SMARTS pattern.
It then generates all of the 1-cut, 2-cut, and 3-cut fragmentations. Use --num-cuts to limit the fragmentation to at most 2 or at most 1 cut.
The input structures come from a SMILES file. The --has-header option tells the parser to ignore the first line, because it is a header line. The built-in help, available with:
mmpdb help-smiles-format
gives more details about the --has-header option and describes how the --delimiter option works.
I'll use the ChEMBL_CYP3A4_hERG.smi which was extracted from the zip file. It contains a header line followed by 20,267 SMILES records.
The above fragment command also uses options to limit the fragmentation to input records with at most 70 heavy atoms and at most 20 rotatable bonds. Isotopically labeled hydrogens are considered heavy atoms. You can specify an alternate SMARTS definition for "rotatable bond" if you wish.
The --output tells mmpdb to save the results to the named file rather than stdout.
All of the commands support the --help option, which gives more detailed information on the available command-line options and what they do. For more details about the fragment command, do:
mmpdb fragment --help
Progress information
The fragmentation took about 40 minutes on my 7 year old laptop. I don't like long-running programs which give no output because part of me worries that the program froze, so mmpdb displays a progress indicator, like:
Fragmented record 2845/20267 (14.0%)
This indicates that 14% of the 20,267 input structures have been processed.
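The percentage in that progress line is simple arithmetic; as a sketch (an illustration only, not mmpdb's actual implementation):

```python
# Sketch of the arithmetic behind the progress line shown above.
def progress_line(done, total):
    # e.g. 2845/20267 -> 14.0375...%, formatted to one decimal place
    return "Fragmented record %d/%d (%.1f%%)" % (done, total, 100.0 * done / total)

print(progress_line(2845, 20267))
```

Emitting a line like this every few records is cheap and reassures the user that a long-running job is still making progress.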
You can disable progress information or warning messages with the "--quiet"/"-q" flag, as in:
mmpdb --quiet fragment ...
This is a global option which can apply to all of the subcommands, which is why it goes before the subcommand.
Multiprocessing and caching
The fragment command can fragment multiple input structures at the same time. By default it uses four processes. Since my laptop only has four cores, I kept the default. If you have a more powerful machine then you might want to increase the number of fragmentation jobs to run in parallel, using the "--num-jobs"/"-j" option.
If you have added or modified a few records and want to re-fragment then you can save a lot of time by having mmpdb re-use previous fragmentation information. Specify the old fragment file using the --cache option.
Step 4: Index the fragments
mmpdb index --symmetric ChEMBL_CYP3A4_hERG.fragments
Skip to step 5.
Each fragmentation contains a "constant" part and a "variable" part. In the MMP indexing step, the constants are matched up to generate a pair between one variable part and the other variable part. This pair can be written using a SMIRKS-like notation, in the form "A>>B".
(Bear in mind that a SMIRKS describes a reaction, and this isn't a reaction. "A>>B" and "B>>A" are equivalent, and a matched molecular series may also be important.)
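Because "A>>B" and "B>>A" name the same pair, code that wants one record per undirected transform can canonicalize the direction. A small sketch (Python; canonical_transform is a hypothetical helper, not part of mmpdb's API, and the SMILES fragments are the ones that appear later in the transform output):

```python
# Sketch: treating "A>>B" and "B>>A" as the same transform by always
# writing the lexicographically smaller side first.
def canonical_transform(transform):
    lhs, rhs = transform.split(">>")
    return ">>".join(sorted([lhs, rhs]))

# Both directions collapse to one canonical key:
assert canonical_transform("[*:1]C>>[*:1]CC=C") == canonical_transform("[*:1]CC=C>>[*:1]C")
print(canonical_transform("[*:1]CC=C>>[*:1]C"))  # [*:1]C>>[*:1]CC=C
```

Using a canonical key like this is the usual alternative to storing both directions explicitly, which is what the --symmetric flag discussed below does.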
The following creates an index from the fragment file and saves the results into a MMPDB database file:
% mmpdb index --symmetric ChEMBL_CYP3A4_hERG.fragments
WARNING: No --output filename specified. Saving to 'ChEMBL_CYP3A4_hERG.mmpdb'.
WARNING: Neither ujson nor cjson installed. Falling back to Python's slower built-in json decoder.
The --symmetric flag adds both "A>>B" and "B>>A" into the database. This roughly doubles the database size. This flag is useful if you want a unique identifier which expresses both the pair and the direction of transformation. It doesn't really affect the downstream processing.
The first warning message is because I prefer that people specify the output file on the command-line, using the "--output" flag. If you don't specify the output filename then it infers it based on the input filename, and it prints the filename so you know where to look. We'll probably drop the "WARNING:" part in the future, because the current behavior seems to be what people want.
The second warning message is because mmpdb was developed under Python 2.7 and we found the performance was limited by the time it took to load the fragment file; each line of the file is a JSON record. The third-party "ujson" and "cjson" JSON parsers were significantly faster than the built-in "json" module. The warning is a strong suggestion to install one of them.
However, I just discovered that under Python 3.6 the ujson package only saves about 4 seconds out of the 130 seconds. A 3% improvement is nice, but not enough to warrant the warning message. I've filed an issue to look into this further.
The output "mmpdb" file is a SQLite database, so if you are really curious about its contents you can easily access the file using tools other than mmpdb.
Step 5: Load properties
mmpdb loadprops --properties ChEMBL_CYP3A4_hERG_props.txt ChEMBL_CYP3A4_hERG.mmpdb
Skip to the 'transform' analysis.
mmpdb reads the physical property/activity information from a tab-separated file. Only one value is supported per compound+property. For details on the format, use:
mmpdb help-property-format
The ChEMBL_CYP3A4_hERG_props.txt file contains CYP3A4 and hERG_pIC50 values. The first few lines are:
CMPD_CHEMBLID   CYP3A4  hERG_pIC50
CHEMBL3612928   *       4.52
CHEMBL2425617   7       *
CHEMBL3221133   *       6.8
CHEMBL3221131   *       5.9
CHEMBL3221134   *       6.32
CHEMBL1945199   *       4.97
CHEMBL1573507   4.4     *
CHEMBL284328    5       *
CHEMBL1531676   4.9     *
CHEMBL1546374   5.35    *
CHEMBL1379480   5.45    *
CHEMBL1499545   4.9     *
CHEMBL486696    4.5     *
CHEMBL1453970   5.45    *
CHEMBL1490799   5.5     *
CHEMBL121663    4.8     *
CHEMBL282468    4.5     *
CHEMBL1500528   5.5     *
CHEMBL354349    4.6     *
CHEMBL221753    4.6     6.97
...
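As the sample rows suggest, a "*" marks a missing value for that compound/property. A small sketch of reading such a table (Python; read_props is our own illustration, not mmpdb's parser):

```python
# Sketch: reading a tab-separated property table like the one above,
# treating "*" as a missing value.
import io

SAMPLE = """CMPD_CHEMBLID\tCYP3A4\thERG_pIC50
CHEMBL3612928\t*\t4.52
CHEMBL2425617\t7\t*
CHEMBL221753\t4.6\t6.97
"""

def read_props(fileobj):
    header = fileobj.readline().split()
    records = {}
    for line in fileobj:
        fields = line.split()
        cmpd_id = fields[0]
        records[cmpd_id] = {
            name: (None if value == "*" else float(value))
            for name, value in zip(header[1:], fields[1:])
        }
    return records

props = read_props(io.StringIO(SAMPLE))
print(props["CHEMBL221753"])   # both property values present
print(props["CHEMBL2425617"])  # hERG value missing
```

Only one value is supported per compound+property, which is why a simple dict keyed by compound id is enough here.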
The following shows the output from loading the properties into the existing mmpdb database.
% mmpdb loadprops --properties ChEMBL_CYP3A4_hERG_props.txt ChEMBL_CYP3A4_hERG.mmpdb
WARNING: APSW not installed. Falling back to Python's sqlite3 module.
Using dataset: MMPs from 'ChEMBL_CYP3A4_hERG.fragments'
Reading properties from 'ChEMBL_CYP3A4_hERG_props.txt'
WARNING: the identifier column in the properties file (column 1) has a header of 'CMPD_CHEMBLID'; should be 'id', 'ID', 'Name', or 'name'
Read 2 properties for 20267 compounds from 'ChEMBL_CYP3A4_hERG_props.txt'
5474 compounds from 'ChEMBL_CYP3A4_hERG_props.txt' are not in the dataset at 'ChEMBL_CYP3A4_hERG.mmpdb'
Imported 9691 'CYP3A4' records (9691 new, 0 updated).
Imported 5340 'hERG_pIC50' records (5340 new, 0 updated).
Generated 1246741 rule statistics (1263109 rule environments, 2 properties)
Number of rule statistics added: 1246741 updated: 0 deleted: 0
Loaded all properties and re-computed all rule statistics.
It took about 90 seconds to run.
Timings showed that APSW is faster than Python's built-in sqlite3 module, so the "WARNING" is a suggestion that you install that package. Bear in mind that APSW is not available from the Python packaging index at PyPI. It can still be installed with pip, using the rather complex command given at the bottom of the APSW download page.
Analysis #1: Transform a compound
mmpdb transform \
    --smiles "C[C@@H](C(=O)OC(C)C)N[P@](=O)(OC[C@@H]1[C@H]([C@@]([C@@H](O1)n2ccc(=O)[nH]c2=O)(C)F)O)Oc3ccccc3" \
    --substructure "C[C@@H](C(=O)OC(C)C)N[P@](=O)(OC[C@@H]1[C@H]([C@@]([C@@H](O1)n2ccc(=O)[nH]c2=O)(C)F)O)Oc3ccccc3" \
    --property hERG_pIC50 --property CYP3A4 \
    ChEMBL_CYP3A4_hERG.mmpdb
mmpdb supports two analysis options. The 'transform' command applies the matched pair rules to generate transformations of an input structure, along with a prediction of the change in the requested properties.
The above example starts with sofosbuvir (specified via the --smiles option), and requires that the transform structure still have the sofosbuvir substructure. The total search on the command-line took 8 seconds. We found that we could speed it up by either loading the database into memory (with the --in-memory option; only available if APSW is installed) or by putting the data on a RAM disk.
The output is a tab-separated file with a large number of columns. This is designed to be imported into Excel or Spotfire for further analysis; there are too many columns to make sense of it here. Instead, I'll pull out one record as an example:
ID: 25
SMILES: C=CCC(C)OC(=O)[C@H](C)N[P@](=O)(OC[C@H]1O[C@@H](n2ccc(=O)[nH]c2=O)[C@](C)(F)[C@@H]1O)Oc1ccccc1
hERG_pIC50_from_smiles: [*:1]C
hERG_pIC50_to_smiles: [*:1]CC=C
hERG_pIC50_radius: 0
hERG_pIC50_fingerprint: 59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA
hERG_pIC50_rule_environment_id: 774189
hERG_pIC50_count: 2
hERG_pIC50_avg: 0.765
hERG_pIC50_std: 0.61518
hERG_pIC50_kurtosis:
hERG_pIC50_skewness:
hERG_pIC50_min: 0.33
hERG_pIC50_q1: 0.33
hERG_pIC50_median: 0.765
hERG_pIC50_q3: 1.2
hERG_pIC50_max: 1.2
hERG_pIC50_paired_t: 1.7586
hERG_pIC50_p_value: 0.32915
CYP3A4_from_smiles: [*:1]C
CYP3A4_to_smiles: [*:1]CC=C
CYP3A4_radius: 0
CYP3A4_fingerprint: 59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA
CYP3A4_rule_environment_id: 774189
CYP3A4_count: 8
CYP3A4_avg: 0.31875
CYP3A4_std: 0.50705
CYP3A4_kurtosis: -0.27899
CYP3A4_skewness: 0.66842
CYP3A4_min: -0.2
CYP3A4_q1: -0.075
CYP3A4_median: 0.2
CYP3A4_q3: 0.6
CYP3A4_max: 1.3
CYP3A4_paired_t: 1.7781
CYP3A4_p_value: 0.11863

You may think it's a bit of a duplication that both the hERG_pIC50 and CYP3A4 have identical "*_from_smiles" and "*_to_smiles" values. What we found during development was that there may be several different pairs which result in the same transform. We have some heuristics to select what we think is the most relevant transform, based on the number of pairs found with property information. However, the amount of property information may vary for different properties, causing different transforms to be selected as the "most relevant."
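Since the transform output is a plain tab-separated file, it is also easy to pull a single record into the field-per-line form shown above with Python's csv module. This sketch uses a tiny made-up sample with only a few of the many real columns:

```python
import csv
import io

def transpose_record(tsv_text, row_index=0):
    # Turn one row of a wide TSV into sorted (column, value) pairs
    # so a single record can be read vertically.
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    rows = list(reader)
    return sorted(rows[row_index].items())

# Hypothetical miniature of the 'mmpdb transform' output
sample = (
    "ID\tSMILES\thERG_pIC50_avg\tCYP3A4_avg\n"
    "25\tCC(=O)O\t0.765\t0.31875\n"
)
pairs = transpose_record(sample)
for column, value in pairs:
    print("%s: %s" % (column, value))
```

The same approach works on the real output file; only the number of columns changes.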
Analysis #2: Predict a change between two compounds
mmpdb

This second (and last) analysis example predicts the property change between two compounds. In this case it predicts the effect on the hERG_pIC50 of adding a fluorine to sofosbuvir, graphically depicted as:
The analysis takes about 3 seconds to generate the following:
% mmpdb --quiet
predicted delta: +0.220769 +/- 0.354767

(I used the --quiet option so I wouldn't get the warning message about APSW not being installed.)
See 'predict' details
Add the --save-details option to have the predict command save prediction details to the files "pred_detail_rules.txt" and "pred_detail_pairs.txt". (Use --prefix to change the output filename prefix from 'pred_detail' to something else.)
Again, these are tab-separated files meant more for Spotfire or Excel than a simple HTML page. As before, I'll just show one example record, first from pred_detail_rules.txt:
rule_environment_statistics_id: 1006880
rule_id: 110
rule_environment_id: 1016557
radius: 3
fingerprint: epWXDOOtiVLnFPhsdb89UN1noHJUTNbNiF1h/qpOhmQ
from_smiles: [*:1][H]
to_smiles: [*:1]F
count: 39
avg: 0.22077
std: 0.35477
kurtosis: 0.27687
skewness: -0.081873
min: -0.74
q1: 0
median: 0.2
q3: 0.4875
max: 1.04
paired_t: 3.8862
p_value: 0.00039518

and then from pred_detail_pairs.txt:
rule_environment_id: 698
from_smiles: [*:1][H]
to_smiles: [*:1]F
radius: 0
fingerprint: 59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA
lhs_public_id: CHEMBL488468
rhs_public_id: CHEMBL495303
lhs_smiles: CC(C)(C)CCN1CCC(CNC(=O)c2cc(Cl)cc(Cl)c2)CC1
rhs_smiles: CC(C)(C)C(F)CN1CCC(CNC(=O)c2cc(Cl)cc(Cl)c2)CC1
lhs_value: 5.71
rhs_value: 5.33
delta: -0.38
ChEMBL target sets association network
This is part 3 of a three-part series in generating fingerprint-based set similarities using chemfp. Read part 1 to see some of the ways to compare two fingerprint sets, and part 2 where I figure out how to use the ChEMBL bioactivity data.
I usually work with entity-based similarities. I have a molecule X and I want to find other molecules which are similar to it. Set-based similarities are a bit different. I think of them as comparing two objects by the clouds around them. Instead of comparing two proteins based on a more intrinsic property like their sequence or 3D structure, I might want to compare two proteins by the types of molecules which bind to them or affect them. This might reveal if two proteins have similar binding pockets or are involved in the same chemical pathway.
Before jumping into the nitty-gritty, I thought I would try a non-molecular example. Suppose you read Neal Stephenson's Cryptonomicon and enjoy it so much that you want to read more like it. "Like it" can mean many things: books that talk about technology, books which combine two different time periods, books with historical fiction taking place during the Second World War, books with many vignettes, and so on. For some, Foucault's Pendulum is like Cryptonomicon. And of course "like" can mean other books by the same author, or even by the same publishing house or imprint.
The "entity" in this case is a book. While there are many possible scoring functions, the end result is a book or list of books, likely ranked in preference order.
Suppose however you have read many books by Stephenson and want to find another author like him. Here too there are many ways to make a comparison. One is to use the book similarity function. For each author under consideration, compare all of that author's books to all of Stephenson's books, and come up with some aggregate scoring function to give the set similarity. Use that to figure out the most similar author.
If you repeat this many times you can create a network of authors, associated by similarity based on their books.
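As a toy sketch of that idea, suppose each book is modeled as a set of topic keywords and the pairwise score is a Jaccard similarity; an author-level score might then average all cross-pairs (one of many possible aggregates):

```python
def jaccard(book1, book2):
    # Pairwise similarity between two books, each modeled as a set of topics
    if not book1 and not book2:
        return 0.0
    return len(book1 & book2) / len(book1 | book2)

def author_similarity(books1, books2):
    # Aggregate set similarity: average the pairwise score over
    # all cross-pairs of the two authors' books.
    scores = [jaccard(b1, b2) for b1 in books1 for b2 in books2]
    return sum(scores) / len(scores)

# Made-up topic sets, purely for illustration
stephenson = [{"crypto", "ww2", "tech"}, {"nanotech", "tech"}]
eco = [{"crypto", "history", "conspiracy"}]
score = author_similarity(stephenson, eco)
```

Other aggregates (maximum, sum over a threshold, and so on) plug into the same two-loop structure.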
IC50 activity sets from ChEMBL
Back into the world of molecules. I want to compare target proteins in human assays based on the set of molecules with an IC50 of 1 micromolar or better against each target. I'll use the ChEMBL 21 bioactivity data to generate the data sets.
The following SQL query is based on an example Iain Wallace sent me, adapted to use the SQLite console. I'll first turn on the timer and have it save the output to the file "chembl_sets.tsv", as tab-separated fields. The query looks for "single protein" targets in humans (tax_id=9606), where the assay activity is an IC50 with better than 1000nM. For each of those assays, get the target name and the ChEMBL id for the compound used in the assay.
% sqlite3 chembl_21.db
SQLite version 3.8.5 2014-08-15 22:37:57
Enter ".help" for usage hints.
Run Time: real 25.771 user 4.440337 sys 2.698867
sqlite> quit

Most of the 26 seconds was likely spent reading from the hard disk. However, do note that this was after I did an ANALYZE on some of the tables in the database. Without the ANALYZE, I suspect the query would take a lot longer.
The above console commands produce the file "chembl_sets.tsv" where the first few line for me look like:
Beta-1 adrenergic receptor CHEMBL305153
Dopamine D4 receptor CHEMBL303519
Endothelin-converting enzyme 1 CHEMBL415967
Neprilysin CHEMBL415967
FK506-binding protein 1A CHEMBL140442
Coagulation factor X CHEMBL117716
Trypsin I CHEMBL117716
Retinoid X receptor alpha CHEMBL111217
Epidermal growth factor receptor erbB1 CHEMBL68920
Receptor protein-tyrosine kinase erbB-2 CHEMBL68920
Epidermal growth factor receptor erbB1 CHEMBL69960
Receptor protein-tyrosine kinase erbB-2 CHEMBL69960
Proteinase-activated receptor 1 CHEMBL330643
Tyrosine-protein kinase LCK CHEMBL69638
Neuropeptide Y receptor type 5 CHEMBL75193
Gonadotropin-releasing hormone receptor CHEMBL65614
...

(That's a copy&paste of the terminal output, which doesn't preserve spaces.)
Compare two real sets
In an earlier essay I came up with an ad hoc comparison function I called "sss()":
from chemfp import search
...

What score does it give to two sets which are similar?
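The body of sss() is not shown here. Based on how the score is described in the Z-score section later (a 0.8 Tanimoto threshold, a cumulative sum of similarities, a 300× scaling factor, and division by the product of the set sizes), here is a self-contained stand-in — my reconstruction, not the original code — using Python sets of on-bits in place of chemfp arenas:

```python
def tanimoto(fp1, fp2):
    # Tanimoto similarity between two fingerprints modeled as sets of on-bits
    return len(fp1 & fp2) / len(fp1 | fp2)

def sss_like(set1, set2, threshold=0.8):
    # Hypothetical reconstruction: sum all cross-pair similarities at or
    # above the threshold, scaled so an all-1.0 comparison scores 300.0.
    total = 0.0
    for fp1 in set1:
        for fp2 in set2:
            s = tanimoto(fp1, fp2)
            if s >= threshold:
                total += s
    return 300.0 * total / (len(set1) * len(set2))

identical = [{1, 2, 3}, {1, 2, 3}]
score_same = sss_like(identical, identical)      # every cross-pair is 1.0
score_diff = sss_like([{1, 2, 3}], [{7, 8, 9}])  # no pair reaches 0.8
```

The real sss() works on chemfp arenas and uses search.threshold_tanimoto_search_arena() instead of the explicit double loop.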
Load set data
The file "chembl_sets.tsv" which I just created in SQLite contains the set names and the compound ids which are in each set. The file "chembl_21.rdkit2048.fpb" created in part 1 contains compound ids and fingerprints. I can combine the two to get the fingerprints for each compound id. The first step is to read the set information, which I do with the following function:
import collections
...

I'll use that function to read the set file. I also looked through the list of target names and guessed that "Estrogen receptor alpha" and "Estrogen receptor beta" might be similar, so I'll use that as my initial test case:
>>> set_members_table = load_set_members("chembl_sets.tsv")
>>> len(set_members_table["Estrogen receptor alpha"])
912
>>> len(set_members_table["Estrogen receptor beta"])
1066
>>> len(set(set_members_table["Estrogen receptor alpha"])
...     & set(set_members_table["Estrogen receptor beta"]))
697
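The definition of load_set_members() is elided above. A plausible reconstruction which matches this usage — my own sketch, not necessarily the original code — reads each tab-separated "name, compound id" line into a dictionary of lists:

```python
import collections
import os
import tempfile

def load_set_members(filename):
    # Map each target name to the list of its compound ids,
    # read from a tab-separated "name<TAB>compound_id" file.
    set_members_table = collections.defaultdict(list)
    with open(filename) as infile:
        for line in infile:
            name, compound_id = line.rstrip("\n").split("\t")
            set_members_table[name].append(compound_id)
    return set_members_table

# Quick self-check with a tiny made-up sample file
sample = ("Dopamine D4 receptor\tCHEMBL303519\n"
          "Neprilysin\tCHEMBL415967\n"
          "Dopamine D4 receptor\tCHEMBL999\n")
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as f:
    f.write(sample)
    path = f.name
table = load_set_members(path)
os.unlink(path)
```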
Load set fingerprints
Next I'll extract the fingerprints from the FPB file. The easiest way is to find the index for each of the compound ids and pass the list of indices to the copy() method.
>>> import chemfp
>>> chembl_21 = chemfp.load_fingerprints("chembl_21.rdkit2048.fpb")
>>> target_ids1 = set_members_table["Estrogen receptor alpha"]
>>> indices1 = [chembl_21.get_index_by_id(target_id) for target_id in target_ids1]
>>> target1 = chembl_21.copy(indices1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "chemfp/arena.py", line 576, in copy
    new_indices.append(range_check[i])
TypeError: sequence index must be integer, not 'NoneType'

Well, that was unexpected. What happened? It looks like there are 8 None elements in the list of indices:
>>> indices1.count(None)
8

How could I have an activity for a compound, but not have the compound?
There can be a few reasons. Perhaps ChEMBL didn't include the structure data. Except they do. Perhaps RDKit couldn't parse the record. Except it could. The real clue came because Iain Wallace also sent me the dataset he generated with his sample SQL. There are 76 additions in my file which aren't in his, including 8 estrogen receptor alpha records:
% diff iain_sorted.tsv chembl_sets_sorted.tsv
5697a5698
> Acetylcholinesterase CHEMBL2448138
7776a7778
> Adenosine A3 receptor CHEMBL1386
9350a9353
> Alpha-2a adrenergic receptor CHEMBL1366
9410a9414
> Alpha-2a adrenergic receptor CHEMBL508338
9455a9460
> Alpha-2b adrenergic receptor CHEMBL1366
9479a9485
...
54058a54091
> Epidermal growth factor receptor erbB1 CHEMBL1909064
58244a58278,58279
> Estrogen receptor alpha CHEMBL219003
> Estrogen receptor alpha CHEMBL219004
58246a58282
> Estrogen receptor alpha CHEMBL219390
58247a58284
> Estrogen receptor alpha CHEMBL219763
58506a58544
> Estrogen receptor alpha CHEMBL373625
58555a58594
> Estrogen receptor alpha CHEMBL385993
58556a58596,58597
> Estrogen receptor alpha CHEMBL386948
> Estrogen receptor alpha CHEMBL387335
65855a65897
> Glutathione reductase CHEMBL2068507
65874a65917
...

He generated his data from one of the dump files for a server-based database, like MySQL. I generated my data from the SQLite dump file. My guess is that the SQLite file was generated slightly later, and includes a few records which were added during that time delta.
The reason my fingerprint file doesn't contain the entries is that the chembl_21.sdf file I used was also generated from the first snapshot, so doesn't include those new structures.
At least, that's my working theory. It's also purely coincidence that I happened to start with one of the few set names where this was a problem.
I'll write a function to skip ids when the id can't be found in the fingerprint arena:
def create_subset(arena, ids):
    indices = []
    for id in ids:
        idx = arena.get_index_by_id(id)
        if idx is not None:
            indices.append(idx)
    return arena.copy(indices=indices)

>>> target1 = create_subset(chembl_21, set_members_table["Estrogen receptor alpha"])
>>> target2 = create_subset(chembl_21, set_members_table["Estrogen receptor beta"])
Compare two fingerprint sets
With the two data sets in hand, it's a simple matter of calling the scoring function:
>>> sss(target1, target2)
4.986669863305914

I'll make a little helper function to compare two classes by name:
def compare(name1, name2):
    target1 = create_subset(chembl_21, set_members_table[name1])
    target2 = create_subset(chembl_21, set_members_table[name2])
    return sss(target1, target2)

and use it to compare a few other sets, judiciously chosen after I implemented the next section:
>>> compare("Estrogen receptor alpha", "Estrogen sulfotransferase")
6.107895939764545
>>> compare("Estrogen receptor alpha", "Vitamin D receptor")
6.107895939764545
>>> compare("Dopamine D5 receptor", "Histamine H1 receptor")
110.6279251170047
>>> compare("Dopamine D5 receptor", "Histamine H2 receptor")
110.6279251170047
>>> compare("Histamine H1 receptor", "Histamine H2 receptor")
24.037402406896796
>>> compare("NEDD8-activating enzyme E1 regulatory subunit", "RNase L")
100.0

A real chemist or biologist would need to tell me if these make sense.
Compare all sets
I can use the code from the previous section to generate the N×(N-1)/2 comparisons of every set with every other set. The general algorithm is to load each of the sets into its own object, which I'll call a "Subset" instance. A Subset has a "name" and an "arena", and I'll ignore empty subsets (like "Transient receptor potential cation channel subfamily M member 6"). The loading function returns N subsets. I'll iterate over the upper triangle of the comparison matrix to generate all pair scores. If the score is at least 1.0, I'll print the score and the two set ids. Otherwise I won't report anything.
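The upper-triangle iteration is a standard pattern; stripped of the chemfp details it reduces to:

```python
def upper_triangle_pairs(items):
    # Yield each unordered pair exactly once: N*(N-1)/2 pairs for N items
    n = len(items)
    for i in range(n - 1):
        for j in range(i + 1, n):
            yield items[i], items[j]

names = ["A", "B", "C", "D"]
pairs = list(upper_triangle_pairs(names))
```

Python's itertools.combinations(items, 2) is the library version of the same loop.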
I also include some progress and run-time information to stderr, which makes the code a bit more complicated to read, but helps soothe the nerves of at least this observer.
Here's the full code, which I saved into the file "set_compare.py":
# set_compare.py
from __future__ import print_function, division
import sys
import collections
import chemfp
from chemfp import search
import time

...

    start_time = time.time()
    subsets = load_subsets(chembl_21, set_members_table)
    ...

I ran it like this, which ran with 4 OpenMP threads:
% python set_compare.py > set_compare_output.txt
No members: Transient receptor potential cation channel subfamily M member 6
No members: Transient receptor potential cation channel subfamily V member 2
No members: Transient receptor potential cation channel subfamily V member 5
No members: Voltage-gated P/Q-type calcium channel alpha-1A subunit
load time: 4.57191491127
compare time: 491.902451038
That command found 5410 set comparisons. I'll show the 5 smallest, 5 largest, and values near the median and quartiles:
% wc -l set_compare_output.txt
5410 set_compare_output.txt
% sort -n set_compare_output.txt | head -5
1.00 Apoptosis regulator Bcl-2 Membrane-associated guanylate kinase-related 3
1.00 Cyclin-dependent kinase 2 Testis-specific serine/threonine-protein kinase 2
1.00 Cytochrome P450 1B1 Tubulin beta-1 chain
1.00 Fructose-1,6-bisphosphatase Receptor tyrosine-protein kinase erbB-3
1.00 Glucagon receptor Serine/threonine-protein kinase GAK
% awk 'NR==int(5410*1/4)' set_compare_output.txt   # first quartile
1.55 Cyclin-dependent kinase 4 Cyclin-dependent kinase-like 1
% awk 'NR==5410/2' set_compare_output.txt   # median
1.24 Glyceraldehyde-3-phosphate dehydrogenase liver Histone-lysine N-methyltransferase, H3 lysine-9 specific 3
% awk 'NR==int(5410*3/4)' set_compare_output.txt   # third quartile
7.62 Neuropeptide Y receptor type 1 Neuropeptide Y receptor type 2
% sort -n set_compare_output.txt | tail -5
300.00 Serine palmitoyltransferase 1 Serine palmitoyltransferase 2
300.00 Serine/threonine-protein kinase 24 Serine/threonine-protein kinase MST1
300.00 Sodium channel protein type I alpha subunit Sodium channel protein type XI alpha subunit
300.00 Ubiquitin carboxyl-terminal hydrolase 10 Ubiquitin carboxyl-terminal hydrolase 13
300.00 Xaa-Pro aminopeptidase 2 Xaa-Pro dipeptidase

The largest possible score in my similarity metric is 300.0. These last few lines indicate perfect matches.
Pre-compile sets (advanced topic)
It took 4-5 seconds to load the dataset. This was less than 1% of the overall run-time, so optimizing it is usually not worthwhile. However, suppose you want to write a set similarity web service. You'll often end up reloading the server during development, and the 4-5 second wait each time will become annoying.
One possibility is to create an FPB file for each of the subsets. The problem is that there are over 1,400 sets. By design, chemfp memory-maps each FPB file. Each memory map uses a file descriptor, and many OSes limit the number of file descriptors that a process may use. On my Mac, the default resource limit ("rlimit") is 256, though that can be increased.
The way I usually solve this in chemfp is to store all of the subsets sequentially in a single FPB file, and have a ".ranges" file which specifies the start/end range for each set. By default the FPB file reorders the fingerprints by popcount, so the fingerprints with 0 on-bits come first, then those with 1 on-bit, and so on. When the fingerprints are ordered that way, chemfp can create the sublinear search index.
I can disable reordering, so that the fingerprints are stored in the same order they were added to the FPB file. If I know that set A is between indices start and end then I can use arena[start:end] to get the subset.
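The range-file lookup amounts to parsing "start<TAB>end<TAB>name" lines into a table and slicing. A pure-Python analog, with a list standing in for the arena (the real code would slice a chemfp arena the same way):

```python
def load_ranges(lines):
    # Parse "start<TAB>end<TAB>name" lines into {name: (start, end)}
    ranges = {}
    for line in lines:
        start, end, name = line.rstrip("\n").split("\t")
        ranges[name] = (int(start), int(end))
    return ranges

range_lines = [
    "0\t77\t1-acylglycerol-3-phosphate O-acyltransferase beta\n",
    "77\t1834\t11-beta-hydroxysteroid dehydrogenase 1\n",
]
ranges = load_ranges(range_lines)

all_fingerprints = list(range(1834))   # stand-in for the unordered arena
start, end = ranges["11-beta-hydroxysteroid dehydrogenase 1"]
subset = all_fingerprints[start:end]   # arena[start:end] in the real code
```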
Create an FPB file with the sets in input order
The following program reads the chembl_21.rdkit2048.fpb and chembl_sets.tsv files and compiles a single FPB file named chembl_sets.fpb with the fingerprint sets, in order, along with a range file named "chembl_sets.ranges" containing the start/end indices and name of each set.
from __future__ import print_function, division
import collections
import chemfp

...

# return the arena indices for the given ids, skipping ids which aren't present
def get_indices(arena, ids):
    indices = []
    for id in ids:
        idx = arena.get_index_by_id(id)
        if idx is None:
            continue
        indices.append(idx)
    return indices

def main():
    chembl_21 = chemfp.load_fingerprints("chembl_21.rdkit2048.fpb")
    set_members_table = load_set_members("chembl_sets.tsv")
    with open("chembl_sets.ranges", "w") as range_file:
        with chemfp.open_fingerprint_writer("chembl_sets.fpb", reorder=False,
                                            metadata=chembl_21.metadata) as writer:
            start_index = 0
            for name, chembl_ids in sorted(set_members_table.items()):
                indices = get_indices(chembl_21, chembl_ids)
                if not indices:
                    continue  # no ids found
                end_index = start_index + len(indices)
                range_file.write("%d\t%d\t%s\n" % (start_index, end_index, name))
                writer.write_fingerprints(chembl_21.copy(indices=indices))
                start_index = end_index

if __name__ == "__main__":
    main()

I ran it and generated the two files. The "chembl_sets.ranges" file starts with the following:
0     77    1-acylglycerol-3-phosphate O-acyltransferase beta
77    1834  11-beta-hydroxysteroid dehydrogenase 1
1834  1881  11-beta-hydroxysteroid dehydrogenase 2
1881  1885  14-3-3 protein gamma
1885  1992  15-hydroxyprostaglandin dehydrogenase [NAD+]
1992  2016  2-acylglycerol O-acyltransferase 2
2016  2019  25-hydroxyvitamin D-1 alpha hydroxylase, mitochondrial
2019  2025  26S proteasome non-ATPase regulatory subunit 14
2025  2027  3-beta-hydroxysteroid dehydrogenase/delta 5-->4-isomerase type I
2027  2030  3-keto-steroid reductase
If you've been following along then nothing in the code should be new, except this line:
writer.write_fingerprints(chembl_21.copy(indices=indices))

I use the indices to make a new arena containing just those fingerprints. (By default these fingerprints will be in popcount order, which comes in useful in a bit.) I then pass the new arena to write_fingerprints(). This function takes an iterator of (id, fingerprint) pairs, which is what an arena returns if you try to iterate over it.
The end result is to save the selected ids and fingerprints to a file, in popcount order.
Search using pre-compiled fingerprints
It's tempting to load the FPB file, get the set arena using the start/end from the ranges file, and do the comparison. This will work, but it will be slow. The FPB file is not in popcount order and has no popcount index. This means that chemfp's sublinear search optimization will not be used.
If the arena is not indexed by popcount then its slice, like unordered_arena[start:end], will also not be indexed by popcount. Making a simple copy() doesn't help because copy() by default preserves the ordering. That is, the copy will be ordered if and only if the original arena is ordered.
Instead, I need to tell the copy() to always reorder, with:
unordered_arena[start:end].copy(reorder=True)
As a further optimization, if copy(reorder=True) notices that the input is already in popcount order then it will skip the step to sort the fingerprints by popcount. That's the case for us, since the compiled chembl_sets.fpb file was created by passing ordered subset arenas to write_fingerprints(), and iterating through ordered arenas returns the (id, fingerprint) pairs in increasing popcount order.
I changed the previous "set_compare.py" program to use the compiled sets file. I call this new program "set_compare2.py". The new code is the function "load_subsets()", which reads the .ranges file, gets subsets from the compiled FPB file, and makes an ordered copy of it.
# set_compare2.py
from __future__ import print_function, division
import sys
import collections
import chemfp
from chemfp import search
import time

class Subset(object):
    def __init__(self, name, arena):
        self.name = name
        self.arena = arena

def load_subsets(arena, filename):
    subsets = []
    with open(filename) as range_file:
        for line in range_file:
            start, end, name = line.rstrip("\n").split("\t")
            start = int(start)
            end = int(end)
            subset = Subset(name, arena[start:end].copy(reorder=True))
            subsets.append(subset)
    return subsets

def main():
    chembl_sets_arena = chemfp.load_fingerprints("chembl_sets.fpb")
    start_time = time.time()
    subsets = load_subsets(chembl_sets_arena, "chembl_sets.ranges")
    ...

I ran "set_compare2.py" and compared the results to "set_compare.py". The output files were exactly the same, as expected. The biggest difference was the load time:
# times from set_compare.py
load time: 4.57191491127
compare time: 491.902451038

# times from set_compare2.py
load time: 0.801107883453
compare time: 451.236635923

That saves nearly 4 seconds of load time. The run time also looks faster, but I think that's due to the variability of my desktop, which also has a few Firefox windows and more than a few tabs open.
Z-score
Up until now I've been using an ad hoc scoring function with an arbitrary scaling factor, chosen to make the background score about 0.01 across the range of input set sizes. It has a few flaws: 1) it has a maximum score of 300, which is unusual, 2) no one without specific experience with it will know how to interpret it, and 3) the standard deviation is a function of the input set sizes.
One common technique to remove those flaws is to transform the score into a "z-score" or "standard score":
zscore = (score - background_score) / background_standard_deviation

The "background score" is about 0.01 for my scoring function, but the "background standard deviation" varies based on the input set sizes. I can determine it by generating enough (let's say 25) pairs of subsets containing randomly selected fingerprints, with the same sizes as the pair I'm interested in, then computing the standard deviation of all of those comparisons. With this approach my "300*" scaling factor is irrelevant, as the numerator and denominator are equally scaled. The division by the product of the sizes also disappears, for the same reason.
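Stripped of the fingerprint machinery, the sampling procedure looks like this sketch, where the raw score is any set-comparison function (here a simple overlap count, standing in for the Tanimoto-based score):

```python
import random
import statistics

def compute_zscore(observed, raw_score, population, n1, n2,
                   num_samples=25, seed=12345):
    # Build a background distribution from random subsets of the same
    # sizes, then express the observed score in standard deviations.
    rng = random.Random(seed)   # fixed seed so the sketch is reproducible
    background = []
    for _ in range(num_samples):
        sample1 = rng.sample(population, n1)
        sample2 = rng.sample(population, n2)
        background.append(raw_score(sample1, sample2))
    mean = statistics.mean(background)
    std = statistics.pstdev(background)
    if std == 0.0:
        return 0.0
    return (observed - mean) / std

# Stand-in raw score: the number of shared ids between two sets
overlap = lambda a, b: len(set(a) & set(b))
population = list(range(1000))
z_high = compute_zscore(50, overlap, population, 100, 100)  # well above background
z_low = compute_zscore(0, overlap, population, 100, 100)    # below background
```

The real ZScorer below follows the same shape, but draws random sub-arenas from the full fingerprint arena and caches the (mean, std) per size pair.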
You likely see the downside of this scoring function - each Z-score requires 25 additional Tanimoto searches!
There are a few ways to mitigate this. First, while I have 1,400+ sets, there are only 365 different set sizes. I can reuse the background values for multiple comparisons with the same pair of sizes, which reduces the number of tests I need to do by about a factor of 15. Second, random sets rarely have matches with a 0.8 similarity, so the sublinear index should quickly reject those obvious mismatches. Third, I could cache the values to the file system, database, or other permanent storage and re-use them for future searches. That won't help the first time, but I often end up with mistakes in my code, or something I want to tweak, and will re-run the code many times before it finally works.
Fourth, and not something I'll do in this essay (if at all), I could do some curve fitting and come up with an equation which does a reasonable job of interpolating or predicting values.
(Oh, my. I was certainly optimistic by using 25 samples. After over an hour it had only processed 229 of 1408 elements. I killed the code, changed the number of samples to 10, and restarted it. I'm glad I put that cache in! If you do this for real, you might vary the number of samples as a function of the sizes. It's much harder getting a non-zero standard deviation for a 1x1 comparison than for a 1000x1000 comparison.)
The biggest change to this code is a new "ZScorer" class, which replaces the "sss" function. The main entry point is "compute_zscore()". That in turn calls "compute_raw_score()" to compute the cumulative score between two arenas, and "compute_background_values()", which computes the mean and standard deviation for the requested set sizes. The latter function also caches to an internal dictionary.
The ZScorer class also has a way to load and save the cache to a file. Before computing the set similarities I ask it to load() from the file named "zscore.cache". I then wrapped the main code in a try/finally block so I can save() the cache whether the code ran to completion or was interrupted by a ^C or coding error.
I also modified the scoring threshold so the Z-score had to be at least 10 (that is, the difference from the average background must be at least 10 standard deviations).
To make the data a bit more useful, I also included information about the number of elements in each set. These are node properties, easily determined by getting the length of the corresponding arena. I also added the number of ids common to both sets (this is a new edge property). For this I turned to Python's native 'set' type. Each Subset instance makes a set of arena ids:
class Subset(object):
    def __init__(self, name, arena):
        self.name = name
        self.arena = arena
        self.id_set = set(arena.ids)  # used to find intersection counts

so the number of elements in common is:
num_in_common = len(subset1.id_set & subset2.id_set)
As a final note before presenting the code, I called this program "set_zcompare.py". It's based on the original "set_compare.py" and does not use the pre-compiled FPB file of "set_compare2.py".
# I call this "set_zcompare.py"
from __future__ import print_function, division
import sys
import random
import collections
import time

import numpy as np

import chemfp
from chemfp import search

# Support both Python 2 and Python 3
try:
    xrange  # Check if this is Python 2
except NameError:
    xrange = range  # Use 'range' for Python 3

# Z-score based on the sum of the similar scores. Please don't use
# this scoring function for real work unless you've validated it.

def make_random_subarena(arena, n):
    indices = random.sample(xrange(len(arena)), n)
    return arena.copy(indices=indices)

class ZScorer(object):
    def __init__(self, arena, threshold=0.8, num_samples=25):
        self.arena = arena
        self.threshold = threshold
        self.num_samples = num_samples
        self.cached_values = {}

    def compute_zscore(self, arena1, arena2):
        # The main entry point
        score = self.compute_raw_score(arena1, arena2)
        mean, std = self.compute_background_values(len(arena1), len(arena2))
        if std == 0.0:
            return 0.0
        return (score - mean) / std

    def compute_raw_score(self, arena1, arena2):
        return search.threshold_tanimoto_search_arena(
            arena1, arena2, threshold=self.threshold).cumulative_score_all()

    def compute_background_values(self, i, j):
        # The scoring function is symmetric so normalize so i <= j
        if i > j:
            i, j = j, i
        # Check if it exists in the cache
        key = (i, j)
        try:
            return self.cached_values[key]
        except KeyError:
            pass
        # Does not already exist, so compute the mean and standard deviation
        scores = []
        for _ in range(self.num_samples):
            subarena1 = make_random_subarena(self.arena, i)
            subarena2 = make_random_subarena(self.arena, j)
            scores.append(self.compute_raw_score(subarena1, subarena2))
        mean, std = np.mean(scores), np.std(scores)
        values = (mean, std)
        self.cached_values[key] = values  # cache the result and return it
        return values

    def load(self, filename):
        # Load values from filename into self.cached_values
        try:
            infile = open(filename)
        except IOError:
            sys.stderr.write("Warning: cache file %r does not exist\n" % (filename,))
            return
        with infile:
            for line in infile:
                terms = line.split()
                i, j, mean, std = int(terms[0]), int(terms[1]), float(terms[2]), float(terms[3])
                self.cached_values[(i, j)] = (mean, std)

    def save(self, filename):
        # Save values from self.cached_values into the named file
        with open(filename, "w") as outfile:
            for key, value in sorted(self.cached_values.items()):
                i, j = key
                mean, std = value
                outfile.write("%s %s %s %s\n" % (i, j, mean, std))

class Subset(object):
    def __init__(self, name, arena):
        self.name = name
        self.arena = arena
        self.id_set = set(arena.ids)  # used to find intersection counts

def main():
    ...
    zscorer = ZScorer(chembl_21, threshold=0.8, num_samples=10)  # 25 took too much time
    zscorer.load("zscore.cache")
    try:
        start_time = time.time()
        subsets = load_subsets(chembl_21, set_members_table)
        load_time = time.time()
        print("name1\tsize1\tname2\tsize2\t#in_common\tzscore")  # write a header
        N = len(subsets)
        for i in range(N-1):
            sys.stderr.write("\rProcessing %d/%d" % (i, N-1))
            subset1 = subsets[i]
            for j in range(i+1, N):
                subset2 = subsets[j]
                zscore = zscorer.compute_zscore(subset1.arena, subset2.arena)
                # Use 10 standard deviations as my threshold of importance
                if zscore > 10.0:
                    sys.stderr.write("\r                          \r")
                    num_in_common = len(subset1.id_set & subset2.id_set)
                    print("%s\t%d\t%s\t%d\t%d\t%.2f" % (
                        subset1.name, len(subset1.arena),
                        subset2.name, len(subset2.arena),
                        num_in_common, zscore))
                    sys.stderr.write("\rProcessing %d/%d" % (i, N-1))
        sys.stderr.write("\r                          \r")
        compare_time = time.time()
    finally:
        zscorer.save("zscore.cache")
    print("load time:", load_time-start_time, file=sys.stderr)
    print("compare time:", compare_time-load_time, file=sys.stderr)

if __name__ == "__main__":
    main()

I ran this and saved the 6351 lines of output to "chembl_target_network_zscore_10.tsv". The first few lines look like:
name1 size1 name2 size2 #in_common zscore
1-acylglycerol-3-phosphate O-acyltransferase beta 77 TRAF2- and NCK-interacting kinase 16 0 23.56
11-beta-hydroxysteroid dehydrogenase 1 1757 11-beta-hydroxysteroid dehydrogenase 2 47 35 92.93
14-3-3 protein gamma 4 Androgen Receptor 847 0 13.05
14-3-3 protein gamma 4 Histone deacetylase 1 1598 0 12.41
14-3-3 protein gamma 4 Histone deacetylase 6 799 0 20.43
14-3-3 protein gamma 4 Histone deacetylase 8 396 0 33.06
15-hydroxyprostaglandin dehydrogenase [NAD+] 107 Aldehyde dehydrogenase 1A1 16 1 91.58
15-hydroxyprostaglandin dehydrogenase [NAD+] 107 Serine/threonine-protein kinase PIM1 540 0 17.87
15-hydroxyprostaglandin dehydrogenase [NAD+] 107 Serine/threonine-protein kinase PIM2 346 0 17.48
Network Visualization
Code, code, everywhere, and not an image to see! How do I know that the above code works as I expected, much less gives useful information which a biochemist might be interested in?
Association network visualization is widely used in bioinformatics. With some pointers from Iain, I downloaded the Java-based Cytoscape 3.4 and used it to visualize the Z-score results from the previous section.
Actually, that network proved too complicated to visualize. I used awk to reduce it to 1450 node pairs with a Z-score of at least 100, in the file named chembl_target_network_zscore_100.tsv. I loaded the file into Cytoscape (File->Import->Network->File...), selected chembl_target_network_zscore_100.tsv, made sure the 'name1' column was the query, 'size1' a query attribute, 'name2' was the target, 'size2' a target attribute, '#in_common' an integer node attribute, and 'zscore' a floating point node attribute. I tried different layouts, but the "preferred layout" seemed best. Here's a screenshot of one part:
It looks reasonable, in that I know dopamine and serotonin are related. I'm neither a chemist nor biologist, nor do I play one on the internet. My goal was to show how you might use chemfp to do this sort of analysis. This image shows I'm in the right neighborhood, so it's good enough for me to say I'm done.
ChEMBL bioactivity data
This is part 2 of a three-part series in generating fingerprint-based set similarities using chemfp. Read part 1 to see some of the ways to compare two fingerprint sets, and part 3 where I put everything together and visualize the results.
I almost exclusively use the ChEMBL structure files. I download the .sdf files and process them. ChEMBL also supplies bioactivity data, which I've never worked with. Iain Wallace suggested I look to it as a source of compound set data, and provided some example SQL queries. This blog post is primarily a set of notes for myself as I experiment with the queries and learn more about what is in the data file.
There is one bit of general advice. If you're going to use the SQLite dump from ChEMBL, make sure that you run "ANALYZE" at least on the tables of interest. This may take a few hours. I'm downloading ChEMBL-22.1 to see if it comes pre-analyzed. If it doesn't, I'll ask them to do so as part of their releases. (26 March 2017: I verified that 22.1 does not come pre-analyzed and sent them an email. Within a few hours, on a Sunday afternoon(!), I got an email thanking me for the suggestion. They only started shipping SQLite dumps with v21, and "will definitely include ANALYZE command when producing the dump" in the future. You won't need this piece of advice in the future.)
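If you prefer to script it rather than type into the sqlite3 console, here's a minimal sketch of running ANALYZE on selected tables from Python's built-in sqlite3 module. The helper name analyze_tables is my own; the table names are the ones used later in this post.

```python
import sqlite3

# Sketch (not from the original post): run ANALYZE on selected tables
# so the SQLite query planner has row statistics to work with.
def analyze_tables(db_path, table_names):
    conn = sqlite3.connect(db_path)
    try:
        for name in table_names:
            # May take minutes to hours on large tables like 'activities'.
            conn.execute("ANALYZE %s" % name)
        conn.commit()
    finally:
        conn.close()

# Hypothetical usage against the ChEMBL dump:
# analyze_tables("chembl_21.db",
#     ["target_dictionary", "assays", "activities", "molecule_dictionary"])
```

ANALYZE stores its statistics in the sqlite_stat1 table, which is how you can check whether a database has already been analyzed.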
For those playing along from home (or the office, or wherever fine SQL database engines may be found), I downloaded the SQLite dump for ChEMBL 21, which is a lovely 2542883 KB (or 2.4 GB) compressed, and 12 GB uncompressed. That link also includes dumps for MySQL, Oracle, and Postgres, as well as schema documentation.
Unpack it the usual way (it takes a while to unpack 12GB), cd into the directory, and open the database using sqlite console:
% tar xf chembl_21_sqlite.tar.gz
% cd chembl_21_sqlite
% sqlite3 chembl_21.db
SQLite version 3.8.5 2014-08-15 22:37:57
Enter ".help" for usage hints.
sqlite>
compound_structures
The 'compound_structures' table looks interesting. How many structures are there?
sqlite> select count(*) from compound_structures;
1583897

Wow. Just .. wow. That took several minutes to execute. This is a problem I've had before with large databases. SQLite doesn't store the total table size, so the initial count(*) ends up doing a full table scan. This brings in every B-tree node from disk, which requires a lot of random seeks for my poor hard disk made of spinning rust. (Hmm, Crucial says I can get a replacement 500GB SSD for only EUR 168. Hmmm.)
The second time and onwards is just fine, thanks to the power of caching.
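As an aside, when a table keeps the default integer rowid and rows are rarely deleted, "select max(rowid)" gives a cheap upper bound on the row count without the full table scan. A sketch of the idea, with a toy in-memory table (the function name is my own):

```python
import sqlite3

# Sketch (not from the original post): max(rowid) only reads the right
# edge of the rowid B-tree, while count(*) must visit every page. The
# two agree exactly only if no rows were ever deleted from the table.
def estimate_row_count(conn, table):
    return conn.execute("SELECT max(rowid) FROM %s" % table).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (smiles TEXT)")
conn.executemany("INSERT INTO compounds VALUES (?)",
                 [("C",), ("CC",), ("CCO",)])
print(estimate_row_count(conn, "compounds"))  # 3
```

For ChEMBL, which is distributed read-only, the estimate should match the true count.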
What do the structures look like? I decided to show only a few of the smallest structures to keep the results from overflowing the screen:
For fun, are there canonical SMILES which are listed multiple times? There are a few, so I decided to narrow it down to those with more than 2 instances. (None occur more than 3 times.)
sqlite> select canonical_smiles, count(*) from compound_structures group by canonical_smiles having count(*) > 2;
CC(C)Nc1cc(ccn1)c2[nH]c(nc2c3ccc(F)cc3)[S+](C)[O-]|3
CC(C)Nc1cc(ccn1)c2[nH]c(nc2c3ccc(F)cc3)[S+]([O-])C(C)C|3
CC(C)[C@@H](C)Nc1cc(ccn1)c2[nH]c(nc2c3ccc(F)cc3)[S+](C)[O-]|3
CC(C)[C@H](C)Nc1cc(ccn1)c2[nH]c(nc2c3ccc(F)cc3)[S+](C)[O-]|3
CC(C)[S+]([O-])c1nc(c2ccc(F)cc2)c([nH]1)c3ccnc(NC4CCCCC4)c3|3
...

Here are more details about the first output where the same SMILES is used multiple times. Can you spot the difference between the records?
I don't. I used the techniques of the next section to get the molfiles for each structure. The differences are in the bonds between atoms 23/24 (the sulfoxide, represented in charge-separated form) and atoms 23/25 (the methyl on the sulfur). The molfile for the first record has no assigned bond stereochemistry, the second has a down flag for the sulfoxide, and the third has a down flag for the methyl.
molfile column in compound_structures
There's a "molfile" entry. Does it really include the structure as a raw MDL molfile? Yes, yes it does:
sqlite> select molfile from compound_structures where molregno = 805;

11280714442D 1   1.00000     0.00000     0

  8  8  0  0  0  999 V2000
    6.0750   -2.5667    0.0000 C   0  0  0  0  0  0  0  0  0
    5.3625   -2.9792    0.0000 N   0  0  3  0  0  0  0  0  0
    6.7917   -2.9792    0.0000 N   0  0  0  0  0  0  0  0  0
    5.3625   -3.8042    0.0000 C   0  0  0  0  0  0  0  0  0
    4.6542   -2.5667    0.0000 C   0  0  0  0  0  0  0  0  0
    6.0750   -1.7417    0.0000 C   0  0  0  0  0  0  0  0  0
    4.6542   -1.7417    0.0000 C   0  0  0  0  0  0  0  0  0
    5.3625   -1.3292    0.0000 C   0  0  0  0  0  0  0  0  0
  2  1  1  0  0  0
  3  1  2  0  0  0
  4  2  1  0  0  0
  5  2  1  0  0  0
  6  1  1  0  0  0
  7  8  1  0  0  0
  8  6  1  0  0  0
  7  5  1  0  0  0
M  END
Why did I choose molregno = 805? I looked for a structure with 8 atoms and 8 bonds by searching for the substring " 8 8 0", which is in the counts line. (It's not a perfect solution, but rather a good-enough one.)
sqlite> select molregno from compound_structures where molfile LIKE "% 8 8 0%" limit 1;
805

I bet with a bit of effort I could count the number of rings by using the molfile to get the bond counts and use the number of "."s in the canonical_smiles to get the number of fragments.
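That bet can be sketched out: the molfile counts line gives atoms and bonds, each "." in the SMILES adds a fragment, and the (SSSR) ring count follows from the cyclomatic formula rings = bonds - atoms + fragments. A rough sketch, with a naive fixed-width parse of the counts line and a hypothetical SMILES for the 8-atom example above:

```python
# Sketch (not from the original post): estimate the ring count from a
# molfile counts line plus the canonical SMILES, using the cyclomatic
# formula rings = bonds - atoms + fragments.
def count_rings(molfile, smiles):
    counts_line = molfile.splitlines()[3]   # line 4 of a molfile is the counts line
    num_atoms = int(counts_line[0:3])       # columns 1-3: atom count
    num_bonds = int(counts_line[3:6])       # columns 4-6: bond count
    num_fragments = smiles.count(".") + 1   # "." separates fragments in SMILES
    return num_bonds - num_atoms + num_fragments

# The 8-atom, 8-bond structure from above has one ring.
# "CN1CCCCC1=N" is a hypothetical SMILES standing in for that record.
molfile = "\n\n\n  8  8  0  0  0  0  0  0  0999 V2000"
print(count_rings(molfile, "CN1CCCCC1=N"))  # 8 - 8 + 1 = 1
```

This only uses the counts line, so it never has to parse the atom and bond blocks.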
compound_properties and molecule_dictionary tables
The compound_properties table stores some molecular properties. I'll get the number of heavy atoms, the number of aromatic rings, and the full molecular weight for structure 805.
sqlite> select heavy_atoms, aromatic_rings, full_mwt from compound_properties where molregno = 805;
8|0|112.17

I've been using "805", which is an internal identifier. What's its public ChEMBL id?
sqlite> select chembl_id from molecule_dictionary where molregno = 805;
CHEMBL266980

What are some of the records with only 1 or 2 atoms?
CHEMBL1098659|1
CHEMBL115849|2
CHEMBL1160819|1
CHEMBL116336|2
CHEMBL116838|2
InChI and heavy atom count for large structures
I showed that some of the SMILES were used for two or three records. What about the InChI string? I started with:
1378059||9

After 10 minutes with no other output, I gave up. Those 9 occurrences have a NULL value, that is:
sqlite> select count(*) from compound_structures where standard_inchi is NULL;
9

I was confused at first because there are SMILES strings (I'll show only the first 40 characters), so there is structure information. The heavy atom count is also NULL.
What I'll do instead is get the molecular formula, which shows that there are over 600 heavy atoms in those structures:
sqlite> select * from compound_records where molregno = 805;
1063|805|14385|14|1-Methyl-piperidin-(2Z)-ylideneamine|1|

Some of the names of the 600+ atom molecules are too long, so I'll limit the output to the first 50 characters of the name:
The names weren't really helpful, and the images at ChEMBL were too small to make sense of the structures, so I looked at them over at PubChem. HRV-EnteroX looks like a 12-mer peptide conjugated to about 25 morpholino oligomers. Mirostipen looks like a peptide. CHEMBL1077161 looks like a nucleic acid strand.
I don't think there's anything interesting to explore in this direction so I'll move on.
Assay data

I'll take a look at assay data, which I deal with a lot less often than I do structure data. How many assays are there?
sqlite> select count(*) from assays;
1212831

Okay, and how many of them are human assays? For that I need the NCBI taxonomy id. Iain's example code uses 9606, which the NCBI web site tells me is for Homo sapiens. I don't think there's a table in the SQLite data dump with all of the taxonomy ids. The organism_class table says only:
sqlite> select * from organism_class where tax_id = 9606;
7|9606|Eukaryotes|Mammalia|Primates

The assay table "assay_organism" column stores the "[n]ame of the organism for the assay system", with the caution "[m]ay differ from the target organism (e.g., for a human protein expressed in non-human cells, or pathogen-infected human cells)." I'll throw caution to the wind and check that field:
sqlite> select count(*) from assays where assay_organism = "Homo sapiens";
291143
|17
Homo sapiens|291135
sqlite> select count(*) from assays where assay_tax_id = 9606 and assay_organism is NULL;
17

It looks like 9606 is indeed for humans.
Assay activities

What sort of assay activities are there?
sqlite> select distinct published_type from activities;
ED50 Transactivation
%
% Cell Death
...
AUC
AUC (0-24h)
AUC (0-4h)
AUC (0-infinity)
...
Change
Change HDL -C
Change MAP
Change TC
...

Okay, quite a few. There appear to be some typos as well:

A ctivity|Activity
Activ ity|Activity
Activit y|Activity
Activty|Activity

There are many ways to publish a report with IC50 data. I'll show only those that end with "IC50".
sqlite> select distinct published_type from activities where published_type like "%IC50";
-Log IC50
-Log IC50/IC50
-logIC50
Average IC50
CC50/IC50
CCIC50
CIC IC50
CIC50
Change in IC50
Cytotoxicity IC50
Decrease in IC50
EIC50
FIC50
Fold IC50
I/IC50
IC50
IC50/IC50
Increase in IC50
Log 1/IC50
Log IC50
MBIC50
MIC50
Mean IC50
RIC50
Ratio CC50/IC50
Ratio CIC95/IC50
Ratio ED50/MIC50
Ratio IC50
Ratio LC50/IC50
Ratio LD50/IC50
Ratio pIC50
Ratio plasma concentration/IC50
Relative ET-A IC50
Relative IC50
TBPS IC50
TC50/IC50
Time above IC50
fIC50
log1/IC50
logIC50
pIC50
pMIC50
rIC50

The "p" prefix, as in "pIC50", is shorthand for "-log", so "-Log IC50", "Log 1/IC50", and "pIC50" are almost certainly the same units. Let's see:
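Since the "p" prefix means -log10 with the concentration in molar units, converting between the variants is mechanical. A quick sketch (the function name is my own):

```python
import math

# Sketch (not from the original post): pIC50 = -log10(IC50 in molar).
# An IC50 of 1 micromolar (1e-6 M) corresponds to a pIC50 of 6.0;
# "Log 1/IC50" in molar units gives the same number.
def pIC50_from_nM(ic50_nM):
    return -math.log10(ic50_nM * 1e-9)

print(pIC50_from_nM(1000.0))  # 1000 nM = 1 uM -> 6.0
```

This is why a curation pipeline can fold all of those published variants into one standard type.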
IC50 types
Let's look at the "IC50" values only. How do the "published_type" and "standard_type" columns compare to each other?

IC50 activity values
How are the IC50 values measured? Here too I need to choose between "published_units" and "standard_units". A quick look at the two shows that the standard_units are less diverse.
483041

That's fully 483041/1212831 = 40% of the assays in the data dump.
How many of the IC50s are in humans? For that I need a join with the assays table using the assay_id:
240916

About 1/2 of them are in humans.
Assay target type from target_dictionary
What are the possible assay targets in humans? That information is stored in the target_dictionary:
sqlite> select distinct target_type from target_dictionary where tax_id = 9606;
SINGLE PROTEIN
ORGANISM
CELL-LINE
UNKNOWN
PROTEIN COMPLEX
SUBCELLULAR
TISSUE
NUCLEIC-ACID
PROTEIN FAMILY
PROTEIN-PROTEIN INTERACTION
PROTEIN COMPLEX GROUP
SELECTIVITY GROUP
CHIMERIC PROTEIN
MACROMOLECULE
SMALL MOLECULE
OLIGOSACCHARIDE
Remember earlier when I threw caution to the wind? How many of the assays are actually against human targets? I can join on the target id "tid" to compare the taxon id in the target vs. the taxon id in the assay:
301810
291152
Compare assay organisms with target organism
What are some of the non-human assay organisms where the target is humans?
rice
Typhi
Compounds tested against a target name
I'm interested in the "SINGLE PROTEIN" target names in humans. The target name is a manually curated field.
sqlite> select distinct pref_name from target_dictionary where tax_id = 9606 limit 5;
Maltase-glucoamylase
Sulfonylurea receptor 2
Phosphodiesterase 5A
Voltage-gated T-type calcium channel alpha-1H subunit
Dihydrofolate reductase

What are structures used in "Dihydrofolate reductase" assays? This requires three table joins: one on 'tid' to go from target_dictionary to assays, another on 'assay_id' to get to the activity, and another on 'molregno' to go from the assay to molecule_dictionary so I can get the compound's chembl_id. (To make it more interesting, three of the tables have a chembl_id column.)
1386

I'll further limit it to those with an IC50 of under 1 micromolar:
255
Run Time: real 174.561 user 18.073715 sys 23.285346

I turned on the timer to show that the query took about 3 minutes! I repeated it to ensure that it wasn't a simple cache issue. Still about 3 minutes.
ANALYZE the tables
The earlier query, without the activity filter, took 5.7 seconds when the data wasn't cached, and 0.017 seconds when cached. It found 1386 matches. The new query takes almost 3 minutes more to filter those 1386 matches down to 255. That should not happen.
This is a strong indication that the query planner used the wrong plan. I've had this happen before. My solution then was to "ANALYZE" the tables, which "gathers statistics about tables and indices and stores the collected information in internal tables of the database where the query optimizer can access the information and use it to help make better query planning choices."
It can take a while, so I limited it to the tables of interest.
sqlite> analyze target_dictionary;
Run Time: real 0.212 user 0.024173 sys 0.016268
sqlite> analyze assays;
Run Time: real 248.184 user 5.890109 sys 4.793236
sqlite> analyze activities;
Run Time: real 6742.390 user 97.862790 sys 129.854073
sqlite> analyze molecule_dictionary;
Run Time: real 33.879 user 2.195662 sys 2.043848

Yes, it took almost 2 hours to analyze the activities table. But it was worth it from a pure performance view. I ran the above code twice, with this pattern:
% sudo purge          # clear the filesystem cache
% sqlite3 chembl_21.db   # start SQLite
SQLite version 3.8.5 2014-08-15 22:37:57
Enter ".help" for usage hints.
sqlite> .timer on
sqlite> .... previous query, with filter for IC50 < 1uM ...
255
Run Time: real 8.595 user 0.038847 sys 0.141945
sqlite> .... repeat query using a warm cache ...
255
Run Time: real 0.009 user 0.005255 sys 0.003653

Nice! Now I only need to do about 60 such queries to justify the overall analysis time.
Fingerprint set similarity
This is part 1 of a three-part series in generating fingerprint-based set similarities using chemfp. Read part 2 where I figure out how to use the ChEMBL bioactivity data and part 3 where I put everything together and visualize the results.
Someone recently asked how I might use chemfp to compute the similarity between two sets of fingerprints. This would be used for something like SEA (Similarity Ensemble Approach), which "relates proteins based on the set-wise chemical similarity among their ligands. It can be used to rapidly search large compound databases and to build cross-target similarity maps."
The underlying techniques of document set similarity are used in many fields. Bioinformatics, for example, uses it to analyze gene expression microarrays to find correlations between genes which might indicate they affect similar pathways. That said, I am no expert on the topic. I implemented a SEA-like algorithm for one client, but I don't know enough to judge the effectiveness of one approach over another, nor do I know the underlying statistics.
But that's okay. I'll assume that you know these details, and just want to know how to use chemfp to get the raw numbers to feed into your scoring function.
The first step is to create and load fingerprints. I have some pre-computed fingerprints for ChEMBL 21, so I'll use that as my example. I'll use the 2048-bit RDKit fingerprints. To generate them yourself, you can use the command-line tool rdkit2fps, like this:
% rdkit2fps chembl_21.sdf.gz -o chembl_21.rdkit2048.fps.gz
The FPS file is a text-based, line-oriented format which is easy for both humans and software to read and write. Easy, however, does not mean fast. It takes about 20 seconds to load this fps.gz file:
>>> import time
>>> import chemfp
>>> filename = "chembl_21.rdkit2048.fps.gz"
>>> t1 = time.time(); chemfp.load_fingerprints(filename); t2 = time.time()
<chemfp.arena.FingerprintArena object at 0x1021aef90>
>>> t2-t1
19.388361930847168
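To show just how easy the format is to handle: each FPS record line is a hex-encoded fingerprint and an identifier separated by a tab, with "#"-prefixed header lines. Here's a rough pure-Python reader sketch (the function name and sample header values are my own, and it ignores the header contents entirely):

```python
# Sketch (not from the original post): minimal FPS record parser.
# Each non-header line is "<hex fingerprint>\t<id>"; headers start with "#".
def parse_fps_records(lines):
    for line in lines:
        if line.startswith("#"):
            continue  # skip header lines like "#num_bits=2048"
        hex_fp, _, fp_id = line.rstrip("\n").partition("\t")
        # bytearray.fromhex works on both Python 2.7 and Python 3
        yield fp_id, bytes(bytearray.fromhex(hex_fp))

sample = [
    "#FPS1\n",
    "#num_bits=16\n",
    "ffe1\tCHEMBL12345\n",
]
for fp_id, fp in parse_fps_records(sample):
    print(fp_id, len(fp))
```

Real code should use chemfp's own readers, which also validate the header and handle gzip; the point here is only that the format itself is trivial.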
Chemfp 2.0 introduced the FPB binary file format, which is more structured around the chemfp internal data structures. It's very fast for chemfp to load, but much more complicated. All of the chemfp tools know how to read and write the FPB format, so you could create the FPB file like this:
% rdkit2fps chembl_21.sdf.gz -o chembl_21.rdkit2048.fpb

If you already have an FPS (or gzip-ed FPS) file then use the fpcat tool to convert it to an FPB file:
% fpcat chembl_21.rdkit2048.fps.gz -o chembl_21.rdkit2048.fpb

The conversion took about 32 seconds on my 7 year old laptop.
I'll switch back to Python and show why I developed the FPB format. The first time I loaded it took 0.4 seconds:
>>> t1=time.time(); chemfp.load_fingerprints("chembl_21.rdkit2048.fpb"); t2=time.time()
<chemfp.fpb_io.FPBFingerprintArena object at 0x113809790>
>>> t2-t1
0.3760688304901123
>>> t1=time.time(); chemfp.load_fingerprints("chembl_21.rdkit2048.fpb"); t2=time.time()
<chemfp.fpb_io.FPBFingerprintArena object at 0x113809650>
>>> t2-t1
0.06340503692626953

That's much better than the 20 seconds it took to read from the gzipped FPS file. For what it's worth, for the RDKit 2048-bit fingerprint, the fps.gz file is slightly more compact than the .fpb file:
% ls -l chembl_21.rdkit2048.fpb chembl_21.rdkit2048.fps.gz
-rw-r--r--  1 dalke  staff  416031861 May 12  2016 chembl_21.rdkit2048.fps.gz
-rw-r--r--  1 dalke  staff  457103145 Mar 18 02:36 chembl_21.rdkit2048.fpb
Select a subset
In chemfp, a set of fingerprints is called an "arena". (I struggled for a while to come up with a good name. "A 'hand' of fingerprints" was humorous, but in the end I re-used the name from memory management.) I want to use a subset of the arena. The best solution is to use the arena.copy() method. By itself it makes a copy of the entire data set:
>>> import chemfp
>>> arena = chemfp.load_fingerprints("chembl_21.rdkit2048.fpb")
>>> subarena = arena.copy()
>>> len(arena), len(subarena)
(1583839, 1583839)
Arenas act like an array of (id, fingerprint) pairs. The first element is term [0], the second [1], and so on. For example:
>>> arena[1234]
(u'CHEMBL79699', '\x04\x00\x00\x08 ... many bytes omitted ... \x00\xc0')
>>> arena[5678]
(u'CHEMBL291214', '\x00\x00\x00\x08 ... many bytes omitted ... \x00 \x80')
I'll make another copy, but this time ask chemfp to only copy indices 1234 and 5678:
>>> subarena = arena.copy(indices=[1234, 5678])
>>> len(subarena)
2
>>> subarena.ids
[u'CHEMBL79699', u'CHEMBL291214']
>>> subarena[0]
(u'CHEMBL79699', '\x04\x00\x00\x08 ... many bytes omitted ... \x00\xc0')

(Notes: by default the subarena may rearrange the fingerprints for better search performance. If you want the new arena to have the fingerprints in the same order as the list of indices then also pass in "reorder=False". Also, the u'' here shows that I'm using chemfp 3.0 on Python 2.7. Earlier versions of chemfp return both the id and fingerprint as byte strings, while chemfp 3.0 returns the id as a Unicode string and the fingerprint as a byte string. Under Python 2, Unicode strings are shown using the 'u' prefix. Under Python 3, byte strings are represented with the 'b' prefix.)
Make a random subset
I need some subsets so I can compare them. To get started I'll just use randomly selected elements. This is pleasantly easy, using a technique I learned from reading the Python documentation for random.sample:
To choose a sample from a range of integers, use an xrange() object as an argument. This is especially fast and space efficient for sampling from a large population: sample(xrange(10000000), 60).

Note: the xrange() object is Python 2 specific. If you are using Python 3 then replace "xrange" with "range".
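One practical note: if you want the same "random" subsets from run to run (say, while debugging a set comparison), seed the generator first. A small sketch:

```python
import random

# Sketch (not from the original post): seeding makes random.sample
# reproducible between runs. 1583839 is the ChEMBL 21 arena size;
# on Python 2 use xrange() instead of range().
random.seed(12345)
first = random.sample(range(1583839), 4)
random.seed(12345)
second = random.sample(range(1583839), 4)
print(first == second)  # True
```

Without the seed, each run picks a different subset, which is what the examples below do.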
Here are some examples where I sample 4 values (without replacement) from 0...99:
>>> import random
>>> random.sample(xrange(100), 4)
[68, 49, 79, 38]
>>> random.sample(xrange(100), 4)
[77, 57, 33, 43]
>>> random.sample(xrange(100), 4)
[32, 70, 13, 53]

I'll change this to get 4 random indices in the arena:
>>> random.sample(xrange(len(arena)), 4)
[1089497, 196869, 1474590, 376331]
>>> random.sample(xrange(len(arena)), 4)
[161904, 1136119, 1455737, 518548]
>>> random.sample(xrange(len(arena)), 4)
[1092929, 218864, 1436330, 1357672]
Here's a function to create a subarena of size n, in easy-to-copy form, which I'll then execute interactively:
import random

def make_subset(arena, n):
    indices = random.sample(xrange(len(arena)), n)
    return arena.copy(indices=indices)

>>> a = make_subset(arena, 4)
>>> len(a)
4
>>> a.ids
[u'CHEMBL1448181', u'CHEMBL1532656', u'CHEMBL3480613', u'CHEMBL1765556']
Jaccard index between two sets
The Jaccard index is a widely used scoring function. (In cheminformatics we know it better as the Tanimoto, for historical reasons tied more to how knowledge diffuses than to strict date priority.) Simply put, it's defined as:
                number of elements common to both A and B
Jaccard(A, B) = -----------------------------------------
                number of unique elements in A or B

The harder part is to turn this into something meaningful. What does it mean for a fingerprint record to be common to both A and B?
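On plain identifier sets, where "in common" means the exact same element appears in both, the formula is a one-liner. A tiny sketch before getting to the fingerprint-based version (the function name is my own):

```python
# Sketch (not from the original post): Jaccard index on ordinary
# Python sets, as a baseline for the fingerprint-based variant.
def set_jaccard(A, B):
    return len(A & B) / float(len(A | B))

print(set_jaccard({"CHEMBL1", "CHEMBL2", "CHEMBL3"},
                  {"CHEMBL2", "CHEMBL3", "CHEMBL4"}))  # 2 of 4 unique -> 0.5
```

The question for fingerprints is what should replace the exact-match test in the numerator.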
One possibility is to look at the identifier, and say that a record is common to A and B if and only if the record's identifier is in both sets. However, it would be nice to capture some sense of chemical similarity. We can instead say that a record is in common if and only if they have the same fingerprints, but 100% similarity is very strict.
Let's lower the threshold a bit, and say that a record 'a' in A is common to B if and only if there is at least one record 'b' in B such that the fingerprint Tanimoto(a, b) >= 0.8.
If you think about this for a bit you'll see a couple of things. First off, this definition is not symmetric. If there is at least one similar 'b' for some 'a' then there is at least one similar 'a' for that 'b', but 'a' might be the nearest neighbor of two different 'b's. (The asymmetry could be fixed by also including the results of a search of B against A. I'm not going to consider that approach.) Second, "has a neighbor in B" can be implemented as a k=1 nearest neighbor search of A against B, which is a built-in chemfp function.
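The asymmetry is easy to demonstrate with a toy example using numbers instead of fingerprints, where "neighbor" means within 0.1 of each other (the function name and values are my own):

```python
# Toy sketch (not from the original post) of why "has at least one
# neighbor in the other set" is not a symmetric count.
def count_with_neighbor(A, B, max_dist=0.1):
    # how many elements of A have at least one neighbor in B
    return sum(1 for a in A if any(abs(a - b) <= max_dist for b in B))

A = [0.00, 0.05]   # both values are near the single value in B
B = [0.02]
print(count_with_neighbor(A, B))  # 2: each element of A has a neighbor in B
print(count_with_neighbor(B, A))  # 1: B only has one element to count
```

The single element of B is the nearest neighbor of both elements of A, so the two directions count different things.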
I'll make two subsets with 1,000 elements each, randomly sampled from the ChEMBL-21 arena:
>>> subset1 = make_subset(arena, 1000)
>>> subset2 = make_subset(arena, 1000)

then use chemfp to do the k=1 nearest-neighbor search between two arenas, with a minimum required score of 0.8:
>>> from chemfp import search
>>> results = search.knearest_tanimoto_search_arena(subset1, subset2, k=1, threshold=0.8)
Most randomly chosen fingerprints are below 0.8 threshold, so they find 0 neighbors, but some of them have 1 neighbor:
>>> len(results[0])
0
>>> len(results[1])
0
>>> len(results[26])
1

There are a few ways to count the number of elements in common. You can use normal Python functions, like:
>>> sum(len(result) for result in results)
24

A faster way is to let chemfp count all of the hits for you:
>>> results.count_all()
24
Okay, so Tanimoto similarity gives the numerator to the Jaccard set similarity. What about the denominator, which is "the number of unique elements in A or B"?
For that I'll count the total number of unique identifiers in both A and B. You saw earlier that arena.ids gives the list of identifiers for the arena. I'll use those to make a Python set made of the union of the two ids, then get the size of the resulting set:
>>> len(set(subset1.ids).union(subset2.ids))
2000

Thus, the Jaccard index of the two sets is 24/2000, or 0.012. They are not that similar.
A slightly more interesting case
The denominator of 2000 is rather boring, as it's 1000+1000, that is, the sum of the individual set sizes. That's because the two randomly chosen sets have no elements in common:
>>> len(set(subset1.ids).intersection(subset2.ids))
0

I'll make some random samples until there are at least 3 elements in common:
>>> while len(set(subset1.ids).intersection(subset2.ids)) < 3:
...     subset2 = make_subset(arena, 1000)
...
>>> len(set(subset1.ids).intersection(subset2.ids))
4
>>> set(subset1.ids).intersection(subset2.ids)
set([u'CHEMBL64050', u'CHEMBL129835', u'CHEMBL255712', u'CHEMBL88792'])
>>> len(set(subset1.ids).union(subset2.ids))
1996

These two subsets have 40 elements in common:
>>> search.knearest_tanimoto_search_arena(subset1, subset2, k=1, threshold=0.8).count_all()
40

giving them a Jaccard index of 40/1996, or 0.02. That's better than 0.012, but still not very similar.
Estimating background similarity
I want to get some idea of the average similarity between two randomly chosen sets, at different sizes and threshold scores. I'll define a 'jaccard' function to help with the calculation:
from __future__ import division   # for Python 2.x, make 1/2 be 0.5, not 0
from chemfp import search

def jaccard(A, B):
    # number of records in A with a neighbor in B at >= 0.8 similarity
    num_in_common = search.knearest_tanimoto_search_arena(
        A, B, k=1, threshold=0.8).count_all()
    # number of unique identifiers in A or B
    num_unique = len(set(A.ids).union(B.ids))
    return num_in_common / num_unique

>>> jaccard(subset1, subset2)
0.02004008016032064
>>> jaccard(make_subset(arena, 1000), make_subset(arena, 1000))
0.017508754377188594
>>> jaccard(make_subset(arena, 1000), make_subset(arena, 1000))
0.021
>>> jaccard(make_subset(arena, 1000), make_subset(arena, 1000))
0.0155
It varies quite a bit, so I'll compute 50 scores and get some information about the min, max, average, and standard deviation:
>>> scores = []
>>> for i in range(50):
...     scores.append(jaccard(make_subset(arena, 1000), make_subset(arena, 1000)))
...
>>> min(scores), max(scores)
(0.008, 0.022)
What about the average and standard deviation? Python 3.4 introduced the statistics module, which has those as built-in functions. While chemfp 3.0 supports Python 3.5 and greater, I want this blog post to work on both Python 2.7 and 3.3, so I'll import some functionality from numpy:
>>> import numpy as np
>>> np.mean(scores)
0.014819138037618857
>>> np.std(scores)
0.0025880786854096645

(One obvious question is, is standard deviation an appropriate characterization, or am I hammering a square block into a round hole? I'll let someone more knowledgeable about statistics figure that out.)
I then wrote a function to show the mean and standard deviation of 100 Jaccard scores, across the NxM cross product of some set sizes, that is, size 10x10, 10x20, ... 10x2000, 20x10, ... up to size 2000x2000. Here's the function, which is mostly taken up by code that tries to format things nicely:
The output shows the average and standard deviations on alternate lines, so I could make better use of vertical space:
>>> compute_score_table(arena)
           10        20        50        100       200       500       1000      2000
  10   0.000 +/- 0.000 +/- 0.000 +/- 0.000 +/- 0.000 +/- 0.000 +/- 0.000 +/- 0.000 +/-
       0.00000   0.00000   0.00233   0.00127   0.00121   0.00059   0.00036   0.00033
  20   0.000 +/- 0.000 +/- 0.001 +/- 0.001 +/- 0.001 +/- 0.001 +/- 0.001 +/- 0.001 +/-
       0.00332   0.00000   0.00345   0.00266   0.00199   0.00115   0.00068   0.00054
  50   0.000 +/- 0.001 +/- 0.001 +/- 0.002 +/- 0.002 +/- 0.001 +/- 0.001 +/- 0.001 +/-
       0.00233   0.00371   0.00271   0.00318   0.00322   0.00146   0.00097   0.00067
 100   0.000 +/- 0.001 +/- 0.001 +/- 0.001 +/- 0.003 +/- 0.003 +/- 0.003 +/- 0.002 +/-
       0.00127   0.00198   0.00256   0.00256   0.00382   0.00193   0.00174   0.00121
 200   0.000 +/- 0.001 +/- 0.002 +/- 0.002 +/- 0.004 +/- 0.005 +/- 0.005 +/- 0.005 +/-
       0.00121   0.00276   0.00316   0.00311   0.00305   0.00286   0.00208   0.00119
 500   0.000 +/- 0.001 +/- 0.002 +/- 0.003 +/- 0.006 +/- 0.009 +/- 0.010 +/- 0.010 +/-
       0.00110   0.00108   0.00236   0.00298   0.00314   0.00318   0.00251   0.00177
1000   0.000 +/- 0.001 +/- 0.002 +/- 0.004 +/- 0.006 +/- 0.012 +/- 0.014 +/- 0.016 +/-
       0.00061   0.00132   0.00225   0.00280   0.00291   0.00275   0.00272   0.00254
2000   0.001 +/- 0.001 +/- 0.002 +/- 0.004 +/- 0.007 +/- 0.014 +/- 0.019 +/- 0.024 +/-
       0.00128   0.00130   0.00192   0.00269   0.00293   0.00303   0.00309   0.00250

(By the way, if all of the values are 0.0 then you are likely using Python 2.7 and have omitted the "from __future__ import division" line when you defined the jaccard() function. I did that a few times when developing this essay.)
My jaccard() function is not symmetric, which you can verify by comparing the i by j terms with the j by i terms, as in 1000x2000 vs 2000x1000. Their respective values are 0.019+/-0.00309 and 0.016+/-0.00254. The averages are multiple standard deviations away from each other.
The scores are also a function of set size, which is not what you want for a good scoring function. The 2000x2000 term has an average score of 0.024 while the 1000x1000 term has an average score of 0.014. The elements in the sets are randomly selected, so I expected the two comparisons to have about the same score. This bias towards larger sets is built into my similarity definition, so I conclude that my jaccard() implementation is not a good scoring function for sets.
There are ways to improve the scoring function, for example by normalizing to the background average/standard deviation to give a Z-score, or by better use of statistics. That is "left to the student as an exercise."
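As one hedged sketch of that exercise, here's what normalizing against the background might look like: sample pairs of random sets of the same sizes, collect their raw scores, and express a new score as standard deviations above the background mean (the function name and sample values are my own):

```python
import numpy as np

# Sketch (not from the original post): normalize a raw set-similarity
# score against a background distribution of scores from randomly
# sampled sets of the same two sizes.
def zscore(raw_score, background_scores):
    mean = np.mean(background_scores)
    std = np.std(background_scores)
    return (raw_score - mean) / std

# Hypothetical background scores for, say, 1000x1000 random sets:
background = [0.012, 0.015, 0.014, 0.016, 0.013]
print(zscore(0.040, background))  # well above the background mean
```

A score of 0 then means "no better than random sets of this size", which removes the size bias by construction.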
Here's the final code in a single block so you can use it more easily:
from __future__ import division
import random
import numpy as np
import chemfp
from chemfp import search
...
compute_score_table(arena)
Details about the best match
While it isn't needed for the set-to-set comparison, you might be interested in knowing more about the closest match in the k=1 nearest search. It's easy to get the list of ids and scores for the non-empty rows:
>>> subset1 = make_subset(arena, 1000)
>>> subset2 = make_subset(arena, 1000)
>>> results = search.knearest_tanimoto_search_arena(
...     subset1, subset2, k=1, threshold=0.8)
...
>>> for result in results:
...     if result:
...         print("%-13s %.3f" % result.get_ids_and_scores()[0])
...
CHEMBL386477  0.926
CHEMBL386477  1.000
CHEMBL275879  0.954
CHEMBL2138462 0.921
CHEMBL83932   0.813
CHEMBL2403669 0.805
CHEMBL2372083 0.819
CHEMBL1213903 0.988
CHEMBL1574001 0.901
CHEMBL2337012 0.913
CHEMBL370254  0.806
CHEMBL555410  0.908
CHEMBL2337012 0.897
CHEMBL1683180 0.822
CHEMBL73244   0.801
CHEMBL605903  0.831
CHEMBL1834630 0.806
CHEMBL1475722 0.821
CHEMBL65904   0.912
CHEMBL2106627 0.812
CHEMBL108353  0.816
CHEMBL108353  0.825
It becomes a bit more complicated if you also want information about the query and its fingerprint, so let's do that. I'll also print the query id, the number of 1-bits in the query and target fingerprint (better known as the popcount), and the number of 1-bits in their fingerprint intersection, that is, the number of bits in common. Here's the code:
import random
import numpy as np
import chemfp
from chemfp import search, bitops
...

subset1 = make_subset(arena, 1000)
subset2 = make_subset(arena, 1000)
results = search.knearest_tanimoto_search_arena(
    subset1, subset2, k=1, threshold=0.8)

print("     query_id  #bits    target_id  #bits common  score")
for i, result in enumerate(results):
    if not result:
        continue  # skip elements with no matches
    query_id, query_fp = subset1[i]
    num_query_bits = bitops.byte_popcount(query_fp)
    j, score = result.get_indices_and_scores()[0]
    target_id, target_fp = subset2[j]
    num_target_bits = bitops.byte_popcount(target_fp)
    in_common = bitops.byte_intersect_popcount(query_fp, target_fp)
    print("%-13s %5d %-13s %5d %5d %.4f" % (
        query_id, num_query_bits, target_id, num_target_bits,
        in_common, score))

It generates the following output:
query_id      #bits target_id     #bits common  score
CHEMBL340793    710 CHEMBL443821    657   657 0.9254
CHEMBL2115272   710 CHEMBL1096675   758   667 0.8327
CHEMBL262239    761 CHEMBL411281    761   761 1.0000
CHEMBL529191    821 CHEMBL3434725   843   784 0.8909
CHEMBL389943    868 CHEMBL414961    887   807 0.8513
CHEMBL350381   1008 CHEMBL162288    899   899 0.8919
CHEMBL1407603  1042 CHEMBL1610293  1001   955 0.8778
CHEMBL484872   1058 CHEMBL1610293  1001   967 0.8855
CHEMBL381366   1066 CHEMBL426130   1060   985 0.8633
CHEMBL2058350  1098 CHEMBL2058354  1005  1005 0.9153
CHEMBL1898269  1112 CHEMBL1893417  1049   982 0.8329
CHEMBL132434   1123 CHEMBL15324    1158  1079 0.8977
CHEMBL332868   1128 CHEMBL77685    1160  1052 0.8511
CHEMBL2143647  1188 CHEMBL1893417  1049  1008 0.8202
CHEMBL1711444  1189 CHEMBL1310059  1231  1102 0.8361
CHEMBL3249400  1232 CHEMBL2371463  1128  1062 0.8182
CHEMBL1367025  1242 CHEMBL1352709  1345  1213 0.8828
CHEMBL77799    1290 CHEMBL1081652  1218  1135 0.8267
CHEMBL2303747  1296 CHEMBL15324    1158  1096 0.8071
CHEMBL451899   1656 CHEMBL477245   1551  1433 0.8078
CHEMBL362938   1708 CHEMBL1731136  1966  1641 0.8072
CHEMBL525425   1730 CHEMBL1731136  1966  1658 0.8135
CHEMBL516074   1742 CHEMBL3113501  1876  1691 0.8775
CHEMBL2029145  1764 CHEMBL2304149  1713  1706 0.9633
CHEMBL298265   1865 CHEMBL1731136  1966  1788 0.8752
CHEMBL373910   1906 CHEMBL1731136  1966  1830 0.8962
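As a quick cross-check, the score column follows from the popcount columns: the Tanimoto score is the number of bits in common divided by the size of the union. Using the numbers from the first output row:

```python
# Tanimoto = common / (query_bits + target_bits - common).
# These numbers come from the first row of the output above.
query_bits, target_bits, common = 710, 657, 657
score = common / (query_bits + target_bits - common)
print("%.4f" % score)  # 0.9254, matching the printed score
```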
Cumulative value scoring methods
I earlier pointed out two big problems with the jaccard() implementation I proposed. It isn't symmetric (jaccard(a, b) is rarely equal to jaccard(b, a)), and it has a bias towards larger sizes.
There's another problem. Suppose A and B both have n elements, with no overlaps (making the denominator 2n), and suppose also that each element in A has exactly one neighbor with at least a 0.8 similarity. Then the jaccard() method will report a similarity of 1.0.
Now consider a set C which again has n elements, with no overlaps with elements in A. But this time each element of A has 5 neighbors in C, and vice versa. The jaccard() function only checks if there is at least one neighbor, so will still return a score of 1.0, even though sets A and C are more compact and closer to each other than A was to B.
One way to capture that information is to look at the total number of matches between A and B, and between A and C. Instead of doing a k=1 nearest neighbor search, I'll do a threshold search and count the total number of matches between the two:
>>> A = make_subset(arena, 1000)
>>> B = make_subset(arena, 1000)
>>> C = make_subset(arena, 1000)
>>> results_AB = search.threshold_tanimoto_search_arena(A, B, threshold=0.8)
>>> results_AB.count_all()
34
>>> results_AC = search.threshold_tanimoto_search_arena(A, C, threshold=0.8)
>>> results_AC.count_all()
49

This count is also symmetric, that is, comparing (A, B) and (B, A) gives the same value.
One possible concern is that the match count assumes that all similarities are of equal importance, so a similarity of 0.8 is just as strong as 1.0. A more subtle method might use the cumulative sum of the scores in AxB and AxC, which is returned by the cumulative_score_all() method:
>>> results_AB.cumulative_score_all()
29.499029322763217
>>> results_AC.cumulative_score_all()
42.23963171858597
I pointed out earlier that my naive jaccard() implementation is biased towards comparing large sets. A similar bias applies when using counts or cumulative scores. With randomly selected elements, the count or cumulative scores will scale as the product of the set sizes:
>>> def get_count(n, m, repeat=100):
...     count = 0
...     for i in range(repeat):
...         subset1 = make_subset(arena, n)
...         subset2 = make_subset(arena, m)
...         results = search.threshold_tanimoto_search_arena(subset1, subset2)
...         count += results.count_all()
...     return count
...
>>> get_count(1000, 1000)
29709
>>> get_count(1000, 1000)
29803
>>> get_count(2000, 1000)
58237
>>> get_count(2000, 1000)
61533
>>> get_count(3000, 1000)
91164
>>> get_count(3000, 1000)
91833
>>> get_count(3000, 3000)
265121

Change the count_all() to cumulative_score_all() to get similar results using the sum of all the scores.
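The product scaling is easy to check against the transcript numbers above. Taking the first (1000, 1000) count of 29709 as the baseline, the predicted counts for the larger comparisons come out close to the observed ones:

```python
# Predict each count from the (1000, 1000) baseline, assuming the match
# count scales as the product of the two subset sizes.
baseline = 29709  # first get_count(1000, 1000) value above
for n, m, observed in [(2000, 1000, 58237), (3000, 1000, 91164),
                       (3000, 3000, 265121)]:
    predicted = baseline * (n * m) // (1000 * 1000)
    print("%d x %d: predicted %d, observed %d" % (n, m, predicted, observed))
```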
As I mentioned, an advantage of the count and cumulative score approaches over the k=1 nearest-neighbor search is that they are symmetric. Compare the following to the (3000, 1000) output of the previous code and you'll see they give about the same values:
>>> get_count(1000, 3000)
91185
>>> get_count(1000, 3000)
90586
The joy of floating point arithmetic
This is an aside. Here are two alternative ways to compute the cumulative sum of all of the scores: the first computes the cumulative_score() of each individual result, and the second sums each result's list of scores in Python.
>>> sum(result.cumulative_score() for result in results_AB)
29.499029322763214
>>> sum(sum(result.get_scores()) for result in results_AB)
29.499029322763214

It really bugs me that the cumulative_score_all() value and the other two approaches differ by -3.552713678800501e-15, which is the least significant bit of the two representations:
>>> sum(sum(result.get_scores()) for result in results_AB).hex()
'0x1.d7fc062bd0356p+4'
>>> results_AB.cumulative_score_all().hex()
'0x1.d7fc062bd0357p+4'
After a bit of research, I confirmed that it's not due to a chemfp bug, but rather because floating point addition using doubles is not associative. If I sum up all of the scores in order, I get the value from cumulative_score_all():
>>> scores = []
>>> for result in results_AB:
...     scores.extend(result.get_scores())
...
>>> sum(scores)
29.499029322763217
If I group the scores so I first sum up each row, I get the other value:
>>> scores = []
>>> for result in results_AB:
...     scores.append(sum(result.get_scores()))
...
>>> sum(scores)
29.499029322763214
Minor differences in the last significant digit are an all too common consequence of using floating point numbers, so while the discrepancy annoyed me, non-associative addition was the first explanation I thought of.
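The non-associativity is easy to reproduce without chemfp. And if an order-independent sum ever matters, the standard library's math.fsum() computes a correctly rounded result:

```python
import math

# Double-precision addition is not associative: the grouping changes
# the least significant bit of the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False
print(a, b)    # 0.6000000000000001 0.6

# math.fsum() tracks the partial sums exactly, so its result does not
# depend on the summation order.
print(math.fsum([0.1, 0.2, 0.3]) == math.fsum([0.3, 0.2, 0.1]))  # True
```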
Cumulative score-based similarity
In an earlier section I showed that the total count or the cumulative score between two randomly chosen sets scales as the product of the two set sizes. What about normalizing the cumulative score by that product?
There's probably a name for this similarity measure, but I don't know much about this field and don't know the name. I'll call it "sss" for "sum of scores similarity".
# I don't actually use this one
def sss(arena1, arena2, threshold=0.8):
    results = search.threshold_tanimoto_search_arena(
        arena1, arena2, threshold=threshold)
    return results.cumulative_score_all() / (len(arena1) * len(arena2))

After some experimentation, I decided to scale the score by 300, because that results in two sets with randomly chosen terms having a similarity of 0.01.
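The 300 comes from simple arithmetic, using the cumulative score from the earlier 1000×1000 transcript:

```python
# Back out the scaling factor from the transcript numbers above.
cumulative = 29.499029322763217  # cumulative_score_all() for a 1000x1000 run
unscaled = cumulative / (1000 * 1000)
print(unscaled)        # about 2.9e-05
print(300 * unscaled)  # about 0.009, i.e. roughly 0.01
```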
# This is the one I use.
def sss(arena1, arena2, threshold=0.8):
    results = search.threshold_tanimoto_search_arena(
        arena1, arena2, threshold=threshold)
    return 300 * results.cumulative_score_all() / (len(arena1) * len(arena2))

I made a variant of the similarity table I used before, this time with the sss() function as my similarity measure, instead of jaccard():
def compute_sss_table(arena):
    sizes = (10, 20, 50, 100, 200, 500, 1000, 2000)
    print("     " + "".join(str(size).center(12) for size in sizes))
    for size1 in sizes:
        output1 = str(size1).rjust(4) + " "
        output2 = "     "
        for size2 in sizes:
            scores = []
            for i in range(100):
                score = sss(make_subset(arena, size1),
                            make_subset(arena, size2))
                scores.append(score)
            output1 += ("%.3f +/-" % (np.mean(scores),)).ljust(12)
            output2 += ("  %.5f" % (np.std(scores),)).ljust(12)
        print(output1)
        print(output2)

The output table is very close to symmetric under transposition, with a nearly constant similarity score of 0.011:
          10          20          50          100         200         500         1000        2000
  10 0.024 +/-   0.012 +/-   0.005 +/-   0.013 +/-   0.013 +/-   0.015 +/-   0.008 +/-   0.011 +/-
       0.23916     0.12318     0.05309     0.06569     0.04212     0.03571     0.02265     0.01623
  20 0.012 +/-   0.013 +/-   0.008 +/-   0.013 +/-   0.013 +/-   0.010 +/-   0.012 +/-   0.016 +/-
       0.12176     0.08863     0.05592     0.03930     0.02858     0.01905     0.02362     0.02322
  50 0.005 +/-   0.028 +/-   0.018 +/-   0.010 +/-   0.010 +/-   0.010 +/-   0.011 +/-   0.012 +/-
       0.04816     0.09862     0.04181     0.02301     0.02007     0.01040     0.01078     0.01331
 100 0.010 +/-   0.006 +/-   0.011 +/-   0.012 +/-   0.013 +/-   0.011 +/-   0.011 +/-   0.011 +/-
       0.05136     0.02660     0.02348     0.01906     0.01725     0.00920     0.00927     0.00801
 200 0.011 +/-   0.013 +/-   0.012 +/-   0.011 +/-   0.010 +/-   0.010 +/-   0.012 +/-   0.011 +/-
       0.05310     0.03826     0.02014     0.01716     0.00940     0.00745     0.00716     0.00496
 500 0.013 +/-   0.008 +/-   0.011 +/-   0.012 +/-   0.012 +/-   0.011 +/-   0.012 +/-   0.012 +/-
       0.03374     0.01458     0.01471     0.01177     0.00873     0.00597     0.00509     0.00354
1000 0.009 +/-   0.012 +/-   0.010 +/-   0.012 +/-   0.012 +/-   0.012 +/-   0.012 +/-   0.012 +/-
       0.02291     0.02247     0.01068     0.00878     0.00639     0.00508     0.00347     0.00300
2000 0.011 +/-   0.011 +/-   0.013 +/-   0.013 +/-   0.011 +/-   0.011 +/-   0.011 +/-   0.011 +/-
       0.01723     0.01354     0.01446     0.01031     0.00552     0.00332     0.00275     0.00209

This is much more like what I expect from a similarity function!
Effect of threshold on the sss score
I picked a threshold of 0.8 because that's a pretty common value and few will object to it. It might be that a different threshold is better. How does the sss score change as a function of the threshold?
One obvious implementation is to call sss() with a different threshold each time:
>>> arena = chemfp.load_fingerprints("chembl_21.rdkit2048.fpb")
>>> subset1 = make_subset(arena, 1000)
>>> subset2 = make_subset(arena, 1000)
>>> sss(subset1, subset2, threshold=0.9)
0.0014073436068272435
>>> sss(subset1, subset2, threshold=0.8)
0.006656850064948667
>>> sss(subset1, subset2, threshold=0.7)
0.038875826785241686
>>> sss(subset1, subset2, threshold=0.6)
0.2569320521578928

(As you can see, while I managed to normalize the score as a function of set size, it's still threshold dependent.)
The obvious implementation is perfectly reasonable, but there's a more computationally efficient way. Notice that each calculation redoes the work of the one before it: every compound pair with a similarity of at least 0.9 also has a similarity of at least 0.8, and so on.
Chemfp has another way to get the same values. The cumulative_score_all() method takes extra arguments to select the min and max values to use, and if the summation should include or exclude the endpoints. I'll get a SearchResults instance and ask for its help:
>>> results = search.threshold_tanimoto_search_arena(
...     subset1, subset2, threshold=0.6)
>>> help(results.cumulative_score_all)
cumulative_score_all(self, min_score=None, max_score=None, interval='[]') method of chemfp.search.SearchResults instance
    The sum of all scores in all rows which are between *min_score* and *max_score*

    Using the default parameters this returns the sum of all of the
    scores in all of the results. With a specified range this returns
    the sum of all of the scores in that range. The cumulative score
    is also known as the raw score.

    The default *min_score* of None is equivalent to -infinity. The
    default *max_score* of None is equivalent to +infinity.

    The *interval* parameter describes the interval end conditions. The
    default of "[]" uses a closed interval, where min_score <= score <= max_score.
    The interval "()" uses the open interval where min_score < score < max_score.
    The half-open/half-closed intervals "(]" and "[)" are also supported.

    :param min_score: the minimum score in the range.
    :type min_score: a float, or None for -infinity
    :param max_score: the maximum score in the range.
    :type max_score: a float, or None for +infinity
    :param interval: specify if the end points are open or closed.
    :type interval: one of "[]", "()", "(]", "[)"
    :returns: a floating point count

I'll use the min_score parameter. First, I'll find all fingerprints with a similarity threshold of 0.6, which is the lowest score I'm concerned about, and, to reduce complexity later on, I'll merge the factor of 300 and the product of the sizes into a single scaling factor:
>>> results = search.threshold_tanimoto_search_arena(
...     subset1, subset2, threshold=0.6)
>>> scaling = 300/len(subset1)/len(subset2)

Then I'll compute the scores for each threshold, and get the same results as earlier.
>>> scaling*results.cumulative_score_all(min_score=0.9)
0.0014073436068272435
>>> scaling*results.cumulative_score_all(min_score=0.8)
0.006656850064948667
>>> scaling*results.cumulative_score_all(min_score=0.7)
0.038875826785241686
>>> scaling*results.cumulative_score_all(min_score=0.6)
0.2569320521578928
For fun, I put it in a loop and computed more thresholds:
>>> scaling = 300/len(subset1)/len(subset2)
>>> for threshold in (0.99, 0.95, 0.90, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6):
...     score = scaling * results.cumulative_score_all(min_score=threshold)
...     print("%.2f %7.4f" % (threshold, score))
...
0.99  0.0003
0.95  0.0006
0.90  0.0014
0.85  0.0030
0.80  0.0067
0.75  0.0166
0.70  0.0389
0.65  0.0921
0.60  0.2569

A quick test now shows that computing the similarities once and finding the cumulative sum for the 9 threshold levels is about 4x faster than doing 9 independent threshold calculations.
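Under the hood this is the classic sort-once trick. Here's a self-contained sketch of the idea in plain Python with made-up scores (the names are mine, not chemfp's): sort the scores, build a prefix sum, and each threshold query becomes a binary search plus one subtraction:

```python
import bisect

def cumulative_above(sorted_scores, prefix, threshold):
    # Sum of all scores >= threshold, given the scores sorted in
    # ascending order and prefix[i] == sum of the first i scores.
    i = bisect.bisect_left(sorted_scores, threshold)
    return prefix[-1] - prefix[i]

scores = sorted([0.61, 0.72, 0.85, 0.93, 0.64, 0.88])
prefix = [0.0]
for s in scores:
    prefix.append(prefix[-1] + s)

print(cumulative_above(scores, prefix, 0.8))  # 0.85 + 0.88 + 0.93
print(cumulative_above(scores, prefix, 0.6))  # all six scores
```

Building the prefix sums costs one pass; after that, each additional threshold is O(log n) instead of a fresh O(n) search.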
Weininger's Realization
Dave Weininger passed away recently. He was very well known in the chemical informatics community for his contributions to the field and for his personality. Back in the 1980s, Dave and Yosi Taitz founded Daylight Chemical Information Systems to turn some of his ideas into a business. It was very profitable. (As a bit of trivia, the "Day" in "Daylight" comes from "Dave And Yosi".)
Some of the key ideas that Dave and Daylight introduced are SMILES, SMARTS, and fingerprints (both the name and the hash-based approach). Together these made for a new way to handle chemical information search, in significantly less memory. The key realization, which I think led to the business success of the company, is that the cost of memory was decreasing faster than the rate at which chemical information was being created. This trend, combined with the memory savings of SMILES and fingerprints, made it possible to store a corporate dataset in RAM and do chemical searches about 10,000 times faster than the previous generation of hard-disk based tools, and to do so before any competition could. I call this "Weininger's Realization". As a result, the Daylight Thor and Merlin databases, along with the chemistry toolkits, became part of the core infrastructure of many pharmaceutical companies.
I don't know if there was a specific "a-ha" moment when that realization occurred. It certainly wasn't what drove Dave to work on those ideas in the first place. He was a revolutionary, a Prometheus who wanted to take chemical information from what he derisively called 'the high priests' and bring it to the people.
An interest of mine in the last few years is to understand more about the history of chemical information. The best way I know to describe the impact of Dave and Daylight is to take some of the concepts back to the roots.
You may also be interested in reading Anthony Nicholls' description of some of the ways that Dave influenced him, and Derek Lowe's appreciation of SMILES.
Errors and Omissions
Before I get there, I want to emphasize that the success of Daylight cannot be attributed to just Dave, or Dave and Yosi. Dave's brother Art and his father Joseph were coauthors on the SMILES canonicalization paper. The company hired people to help with the development, both as employees and consultants. I don't know the details of who did what, so I will say "Dave and Daylight" and hopefully reduce the all too easy tendency to give all the credit to the most visible and charismatic person.
I'm unfortunately going to omit many parts of the Daylight technologies, like SMIRKS, where I don't know enough about the topic or its effect on cheminformatics. I'll also omit other important but invisible aspects of Daylight, like documentation or the work Craig James did to make the database servers more robust to system failures. Unfortunately, it's the jockeys and horses which attract the limelight, not those who muck the stables or shoe the horses.
Also, I wrote this essay mostly from what I have in my head and from presentations I've given, which means I've almost certainly made mistakes that could be fixed by going to my notes and primary sources. Over time I hope to spot and fix those mistakes in this essay. Please let me know of anything you want me to change or improve.
Dyson and Wiswesser notations
SMILES is a "line notation", that is, a molecular representation which can be described as a line of text. Many people reading this may have only a vague idea of the history of line notations. Without that history, it's hard to understand what helped make SMILES successful.
The original line notations were developed in the 1800s. By the late 1800s chemists began to systematize the language into what is now called the IUPAC nomenclature. For example, caffeine is "1,3,7-trimethylpurine-2,6-dione". The basics of this system are taught in high school chemistry class. It takes years of specialized training to learn how to generate the correct name for complex structures.
Chemical nomenclature helps chemists index the world's information about chemical structures. In short, if you can assign a unique name to a chemical structure (a "canonical" name), then you can use standard library science techniques to find information about the structure.
The IUPAC nomenclature was developed when books and index cards were the best way to organize data. Punched card machines brought a new way of thinking about line notations. In 1946, G. Malcolm Dyson proposed a new line notation meant for punched cards. The Dyson notation was developed as a way to mechanize the process of organizing and publishing a chemical structure index. It became a formal IUPAC notation in 1960, but was already on its last legs and was dead within a few years. While it might have been useful for mechanical punched card machines, it wasn't easily repurposed for the computer needs of the 1960s. For one, it depended on superscripts and subscripts, and used characters which didn't exist on IBM punched cards.
In 1949, William J. Wiswesser proposed the Wiswesser Line Notation, universally called WLN, which could be represented in EBCDIC and (later) ASCII in a single line of text. More importantly, unlike the Dyson notation, which follows the IUPAC nomenclature tradition of starting with the longest carbon chain, WLN focuses on functional groups, and encodes many functional groups directly as symbols.
Chemists tend to be more interested in functional groups, and want to search based on those groups. For many types of searches, WLN acts as its own screen, that is, it's possible to do some types of substructure search directly on the symbols of the WLN, without having to convert the name into a molecular structure for a full substructure search. To search for structures containing a single sulfur, look for WLNs with a single occurrence of S, but not VS or US or SU. The chemical information scientists of the 1960s and 1970s developed several hundred such clever pattern searches to make effective use of the relatively limited hardware of that era.
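To give a feel for symbol-level screening, here's a toy sketch of my own (not a historical algorithm, and the real WLN screens were far more sophisticated): count the S symbols while excluding the VS, US, and SU combinations mentioned above.

```python
import re

def single_sulfur_screen(wln):
    # A purely illustrative screen: count "S" symbols that are not
    # part of "VS", "US", or "SU", and require exactly one of them.
    return len(re.findall(r"(?<![VU])S(?!U)", wln)) == 1

print(single_sulfur_screen("1S1"))  # True: one standalone S
print(single_sulfur_screen("VS"))   # False: the S is part of VS
```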
WLNs started to disappear in the early 1980s, before SMILES came on the scene. Wendy Warr summarized the advantages and disadvantages of WLNs in 1982. She wrote "The principle disadvantage of WLN is that it is not user friendly. This can only be overcome by programs which will derive a canonical WLN from something else (but no one has yet produced a cost-effective program to do this for over 90% of compounds), by writing programs to generate canonical connection tables from noncanonical WLNs, or by accepting the intervention of a skilled "middle man"."
Dyson/IUPAC and WLNs were just two of dozens, if not hundreds, of proposed line notations. Nearly every proposal suffered from a fatal flaw - they could not easily be automated on a computer. Most required postgraduate-level knowledge of chemistry, and were error-prone. The more rigorous proposals evaluated the number of mistakes made during data entry.
Among the few exceptions are the "parentheses free" notations from a pair of papers published in 1964, one by Hiz and the other by Eisman, in the same issue of the Journal of Chemical Documentation. To modern eyes they look very much like SMILES, but represented in a postfix notation. Indeed, the Eisman paper gives a very SMILES-like notation for a tree structure, "H(N(C(HC(CIC(N(HH)C(HN(IH)))I)H)H))", and a less SMILES-like notation for a cyclic structure, before describing how to convert them into a postfix form.
I consider the parentheses-free nomenclatures a precursor to SMILES, but they were not influential to the larger field of chemical information. I find this a bit odd, and part of my research has been to try and figure out why. It's not like it had no influence. A version of this notation was in the Chemical Information and Data System (CIDS) project in the 1960s and early 1970s. In 1965, "CIDS was the first system to demonstrate online [that is, not batch processing] searching of chemical structures", and CIDS wasn't completely unknown in the chemical information field.
But most of the field in the 1970s went for WLN for a line notation, or a canonical connection table.
SMILES
Dave did not know about the parentheses free line notations when he started work on SMILES, but he drew from similar roots in linguistics. Dave was influenced by Chomsky's writings on linguistics. Hiz, mentioned earlier, was at the Department of Linguistics at the University of Pennsylvania, and that's also where Eugene Garfield did his PhD work on the linguistics of chemical nomenclature.
Dave's interest in chemical representations started when he was a kid. His father, Joseph Weininger, was a chemist at G.E., with several patents to his name. He would draw pictures of chemical compounds for Dave, and Dave would, at a non-chemical level, describe how they were put together. These seeds grew into what became SMILES.
SMILES as we know it started when Dave was working for the EPA in Duluth. They needed to develop a database of environmental compounds, to be entered by untrained staff. (For the full story of this, read Ed Regis's book "The Info Mesa: Science, Business, and New Age Alchemy on the Santa Fe Plateau.") As I recall, SMILES was going to be the internal language, with a GUI for data entry, but it turned out that SMILES was easy enough for untrained data entry people to write directly.
And it's simple. I've taught the basics of SMILES to non-chemist programmers in a matter of minutes, while WLN, Dyson, and InChI, as examples of other line notations, are much more difficult to generate either by hand or by machine. Granted, those three notations have canonicalization rules built into them, which is part of the difficulty. Still, I asked Dave why something like SMILES didn't appear earlier, given that the underlying concepts existed in the literature by then.
He said he believes it's because the generation of people before him didn't grow up with a software development background. I think he's right. When I talk to a non-chemist programmer, I can say "it's a spanning tree with special rules to connect the cycles", and they understand. But that vocabulary was still new in the 1960s, and very specialized.
There's also some conservatism in how people work. Dyson defended the Dyson/IUPAC notation, saying that it was better than WLN because it was based on the longest carbon chain principle that chemists were already familiar with, even though the underlying reasons for that choice were becoming less important because of computer search. People know what works, and it's hard to change that mindset when new techniques become available.
Why the name SMILES? Dave Cosgrove told me it was because when Dave would tell people he had a new line notation, most would "groan, or worse. When he demonstrated it to them, that would rapidly turn to a broad smile as they realised how simple and powerful it is."
Exchange vs. Canonical SMILES
I not infrequently come across people who say that SMILES is a proprietary format. I disagree. I think the reason for the disagreement is that two different concepts go under the name "SMILES". SMILES is an exchange language for chemistry, and it's an identifier for chemical database search. Only the second is proprietary.
Dave wanted SMILES to be a way for chemists from around the world and across time to communicate. SMILES describes a certain molecular valence model view of the chemistry. This description does not need to be canonical, because you can always canonicalize it yourself once you have the information. I can specify hydrogen cyanide as "C#N", "N#C", or "[H][C]#[N]" and you will know what I am talking about, without needing to consult some large IUPAC standard.
He wanted people to use SMILES that way, without restriction. The first SMILES paper describes the grammar. Later work at Daylight in the 1990s extended SMILES to handle isotopes and stereochemistry. (This was originally called "isomeric SMILES", but it's the SMILES that people think of when they want a SMILES.) Daylight published the updated grammar on their website. It was later included as part of Gasteiger's "Handbook of Chemoinformatics: From Data to Knowledge in 4 Volumes". Dave also helped people at other companies develop their own SMILES parsers.
To say that SMILES as an exchange format is proprietary is opposite to what Dave wanted and what Daylight did.
What is proprietary is the canonicalization algorithm. The second SMILES paper describes the CANGEN algorithm, although it is incomplete and doesn't actually work. Nor does it handle stereochemistry, which was added years later. Even internally at Daylight, it took many years to work out all of the bugs in the implementation.
There's a good financial reason to keep the algorithm proprietary. People were willing to pay a lot of money for a good, fast chemical database, and the canonicalization algorithm was a key part of Daylight's Thor and Merlin database servers. In business speak, this is part of Daylight's "secret sauce".
On the other hand, there's little reason why that algorithm needed to be published. Abstractly speaking, publishing it would mean that different tools could generate the same canonical SMILES, so a federated data search would reduce to a text search rather than require re-canonicalization. This is one of the goals of the InChI project, but they discovered that Google didn't index the long InChI strings in a chemically useful way. They created the InChI key as a solution. SMILES has the same problem and would need a similar solution.
Noel O'Boyle published a paper pointing out that the InChI canonicalization could be used to assign the atom ordering for a SMILES string. This would give a universal SMILES that anyone could implement. There's been very little uptake of that idea, which gives a feel for how little demand there is. [Edited 20 March 2017: Noel points out that CDK uses Universal SMILES for canonical isomeric SMILES generation, so there is some uptake.]
Sometimes people also bring in the governance model when deciding if something is proprietary, or point to the copyright restrictions on the specification. I don't agree with these interpretations, and would gladly talk about them at a conference if you're interested.
Line notations and connection tables
There are decades of debate on the advantages of line notations over connection tables, or vice versa. In short, connection tables are easy to understand and parse into an internal molecule data structure, while line notations are usually more compact and can be printed on a single line. And in either case, at some point you need to turn the text representation into a data structure and treat it as a chemical compound rather than a string.
Line notations are a sort of intellectual challenge. This alone seems to justify some of the many papers proposing a new line notation. By comparison, Open Babel alone supports over 160 connection table formats, and there are untold more in-house or internal formats. Very few of these formats have ever been published, except perhaps in an appendix in a manual.
Programmers like simple formats because they are easy to parse, often quickly, and easy to maintain. Going back to the Warr quote earlier, it's hard to parse WLN efficiently.
On the other hand, line notations fit better with text-oriented systems. Back in the 1960s and 1970s, ISI (the Institute for Scientific Information) indexed a large subset of the chemistry literature and distributed the WLNs as paper publications, in a permuted table to help chemists search the publication by hand. ISI was a big proponent of WLN. And of course it was easy to put a WLN on a punched card and search it mechanically, without an expensive computer.
Even now, a lot of people use Excel or Spotfire to display their tabular data. It's very convenient to store the SMILES as a "word" in a text cell.
Line notations also tend to be smaller than connection tables. As an example, the connection table lines from the PubChem SD files (excluding the tag data) average about 4K per record. The PUBCHEM_OPENEYE_ISO_SMILES tag values average about 60 bytes in length.
Don't take the factor of 70 as being all that meaningful. The molfile format is not particularly compact, PubChem includes a bunch of "0" entries which could be omitted, and the molfile stores the coordinates, which the SMILES does not. The CAS search system in the late 1980s used about 256 bytes for each compact connection table, which is still 4x larger than the equivalent SMILES.
Dave is right. SMILES, unlike most earlier line notations, really was built with computer parsing in mind. Its context-free grammar is easy to parse using a simple stack, though still not as easy as a connection table. It doesn't require much in the way of lookup tables or state information. There's also a pretty natural mapping from the SMILES to the corresponding molecular topology.
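To make the stack-and-mapping point concrete, here's a toy reader of my own devising. It handles only single-letter atoms, branches, and single-digit ring closures; a real SMILES parser must also deal with bond symbols, charges, aromaticity, and two-letter elements:

```python
def parse_toy_smiles(smiles):
    # Build an atom list and bond list using one stack for branches
    # and a dict for pending ring-closure digits.
    atoms, bonds = [], []
    stack, rings = [], {}
    prev = None
    for ch in smiles:
        if ch == "(":
            stack.append(prev)       # remember the branch point
        elif ch == ")":
            prev = stack.pop()       # return to the branch point
        elif ch.isdigit():
            if ch in rings:
                bonds.append((rings.pop(ch), prev))  # close the ring
            else:
                rings[ch] = prev     # open a ring closure
        else:
            atoms.append(ch)
            if prev is not None:
                bonds.append((prev, len(atoms) - 1))
            prev = len(atoms) - 1
    return atoms, bonds

print(parse_toy_smiles("CC(C)C"))  # isobutane: atom 1 bonded to 0, 2, and 3
print(parse_toy_smiles("C1CC1"))   # cyclopropane: a three-membered ring
```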
What happens if you had a really fast SMILES parser? As a thought experiment which doesn't reflect real hardware, suppose you could convert 60 bytes of SMILES string to a molecule data structure faster than you could read the additional 400 bytes of connection table data. (Let's say the 10 GHz CPU is connected to the data through a 2400 baud modem.) Then clearly it's best to use a SMILES, even if it takes longer to process.
A goal for the Daylight toolkit was to make SMILES parsing so fast that there was no reason to store structures in a special internal binary format or data structure. Instead, when it's needed, parse the SMILES into a molecule object, use the molecule, and throw it away.
On the topic of muck and horseshoes, as I recall Daylight hired an outside company at one point to go through the code and optimize it for performance.
SMARTS
SMARTS is a natural recasting and extension of the SMILES grammar to define a related grammar for substructure search.
I started in chemical information in the late 1990s, with the Daylight toolkit and a background that included university courses covering grammars and regular expressions. The analogy of SMARTS to SMILES as regular expressions to strings seemed obvious, and I modeled my PyDaylight API on the equivalent Python regular expression API.
Only years later did I start to get interested in the history of chemical information, though I've only gotten up to the early 1970s so there's a big gap that I'm missing. Clearly there were molecular query representations before SMARTS. What I haven't found is a query line notation, much less one implemented in multiple systems. This is a topic I need to research more.
The term "fingerprint"
Fingerprints are a core part of modern cheminformatics. I was therefore surprised to discover that Daylight introduced the term "fingerprint" to the field, around 1990 or so.
The concept existed before then. Adamson and Bush did some of the initial work in using fingerprint similarity as a proxy for molecular similarity in 1973, and Willett and Winterman's 1986 papers [1, 2] (the latter also with Bawden) reevaluated the earlier work and informed the world of the effectiveness of the Tanimoto. (We call it "Tanimoto" instead of "Jaccard" precisely because of those Sheffield papers.)
But up until the early 1990s, published papers referred to fingerprints as the "screening set" or "bit screens", which describes the source of the fingerprint data, and didn't reify them into an independent concept. The very first papers which used "fingerprint" were by Yvonne Martin, an early Daylight user at Abbott, and John Barnard, who used "fingerprint", in quotes, in reference specifically to Daylight technology.
I spent a while trying to figure out the etymology of the term. I asked Dave about it, but it isn't the sort of topic which interests him, and he didn't remember. "Fingerprint" already existed in chemistry, for IR spectra, and the methods for matching spectra are somewhat similar to those of cheminformatics fingerprint similarity, but not enough for me to be happy with the connection. Early in his career Dave wrote software for a mass spectrometer, so I'm also not rejecting the possibility.
The term "fingerprint" was also used in cryptographic hash functions, like "Fingerprinting by Random Polynomial" by Rabin (1981). However, these fingerprints can only be used to test if two fingerprints are identical. They are specifically designed to make it hard to use the fingerprints to test for similarity of the source data.
I've also found many papers talking about image fingerprints or audio fingerprints which can be used for both identity and similarity testing; so-called "perceptual hashes". However, their use of "fingerprint" seems to have started a good decade after Daylight popularized it in cheminformatics.
Hash fingerprints
Daylight needed a new name for fingerprints because they used a new approach to screening.
Fingerprint-like molecular descriptions go back to at least the Frear code of the 1940s. Nearly all of the work in the intervening 45 years was focused on finding fragments, or fragment patterns, which would improve substructure screens.
Screen selection was driven almost entirely by economics. Screens are cheap, with data storage as the primary cost. Atom-by-atom matching, on the other hand, had a very expensive CPU cost. The more effective the screens, the better the screenout, the lower the overall cost for an exact substructure search.
The best screen would have around 50% selection/rejection on each bit, with no correlation between the bits. If such a screen existed, then an effective screen for 1 million structures would need only 20 bits. It doesn't exist, because few fragments meet those criteria. The Sheffield group in the early 1970s (who often quoted Mooers as the source of the observation) looked instead at more generic fragment descriptions, rather than specific patterns. This approach was further refined by BASIC in Basel and then at CAS to become the CAS Online screens. This is likely the pinnacle of 1970s screen development.
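The 20-bit figure is just information theory: k uncorrelated bits at 50% density partition a database into 2^k roughly equal buckets, so 1 million structures need about log2(10^6) bits:

```python
import math

# 20 ideal bits distinguish 2**20 (~1.05 million) cases,
# just enough for a 1-million-structure database.
bits_needed = math.log2(1_000_000)
print(bits_needed)   # ~19.93
```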
Even then, it had about 4000 patterns assigned to 2048 bits. (Multiple rare fragments can be assigned to the same bit with little reduction in selectivity.)
A problem with a fragment dictionary is that it can't take advantage of unknown fragments. Suppose your fragment dictionary is optimized for pharmaceutical compounds, then someone does a search for plutonium. If there isn't an existing fragment definition like "transuranic element" or "unusual atom", then the screen will not be able to reject any structures. Instead, it will slowly go through the entire data set only to return no matches.
This specific problem is well known, and the reason for the "OTHER" bit of the MACCS keys. However, other types of queries may still have an excessive number of false positives during screening.
Daylight's new approach was the enumeration-based hash fingerprint. Enumerate all subgraphs of a certain size and type (traditionally all linear paths with up to 7 atoms), choose a canonical order, and use the atom and bond types in order to generate a hash value. Use this value to seed a pseudo random number generator, then generate a few values to set bits in the fingerprint; the specific number depends on the size of the subgraph. (The details of the hash function and the number of bits to set were also part of Daylight's "secret sauce.")
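To make the idea concrete, here is a toy sketch in plain Python. This is not Daylight's algorithm (the path enumeration, canonical ordering, and hash details were part of the "secret sauce"); it only shows the shape of the approach: canonicalize each path string, hash it, seed a PRNG with the hash, and set a few bits:

```python
import random
import zlib

def path_fingerprint(paths, nbits=1024, bits_per_path=2):
    """Toy hash fingerprint: each path string sets a few pseudo-random bits."""
    fp = 0
    for path in paths:
        # Pick a canonical direction so "O-C-C" and "C-C-O" hash identically.
        canonical = min(path, path[::-1])
        seed = zlib.crc32(canonical.encode("utf-8"))
        rng = random.Random(seed)
        for _ in range(bits_per_path):
            fp |= 1 << rng.randrange(nbits)
    return fp

# The same paths, in a different order and direction, give the same bits:
fp1 = path_fingerprint(["C-C", "C-O", "C-C-O"])
fp2 = path_fingerprint(["C-O", "C-C", "O-C-C"])
assert fp1 == fp2
```

The point is that any path, including ones nobody anticipated (a path through a plutonium atom, say), contributes bits to the fingerprint without needing a predefined fragment dictionary.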
Information theory was not new in chemical information. Calvin Mooers developed superimposed coding back in the 1940s in part to improve the information density of chemical information on punched cards (and later complained about how computer scientists rediscovered it as hash tables). The Sheffield group also used information theory to guide their understanding of the screen selection problem. Feldman and Hodes in the 1970s developed screens by the full enumeration of common subgraphs of the target set and a variant of Mooers' superimposed coding.
But Daylight was able to combine information theory and computer science theory (i.e., hash tables) to develop a fingerprint generation technique which was completely new. And I do mean completely new.
Remember how I mentioned there are SMILES-like line notations in the literature, even if people never really used them? I've looked hard, and only with a large stretch of optimism can I find anything like the Daylight fingerprints before Daylight, and mostly as handwaving proposals for what might be possible. Nowadays, almost every fingerprint is based on a variation of the hash approach, rather than a fragment dictionary.
In addition, because of the higher information density, Daylight fingerprints were effective as both a substructure screen and a similarity fingerprint using only 1024 bits, instead of the 2048 bits of the CAS Online screen. This will be important in the next section.
Chemical data vs. compute power and storage size
CAS had 4 million chemical records in 1968. The only cost-effective way to store that data was on tape. Companies could and did order data tapes from CAS for use on their own corporate computers.
Software is designed for the hardware, so the early systems were built first for tape drives and then, as they became affordable, for the random-access capabilities of hard disks. A substructure search would first check against the screens to reject obvious mismatches, then for each of the remaining candidates, read the corresponding record off disk and do the full atom-by-atom match.
Apparently Derek Price came up with the "law of exponential increase" in "Science Since Babylon" (1961), which describes how science information has exponential growth. I've only heard about that second hand. Chemical data is no exception, and its exponential growth was noted, I believe, in the 1950s by Perry.
In their 1971 textbook, Lynch, et al. observed the doubling period was about 12 years. I've recalculated that number over a longer baseline, and it still holds. CAS had 4 million structures in 1968 and 100 million structures in 2015, which is a doubling every 10-11 years.
On the other hand, computers have gotten more powerful at a faster rate. For decades the effective computing power doubled every 2 years, and the amount of RAM and data storage for constant dollars has doubled even faster than that.
In retrospect it's clear that at some point it would be possible to store all of the world's chemistry data, or at least a corporate database, in memory.
Weininger's Realization
Disks are slow. Remember how the atom-by-atom search needed to pull a record off the disk? That means the computer needs to move the disk arm to the right spot and wait for the data to come by, while the CPU simply waits. If the data were in RAM then it would be 10,000x faster to fetch a randomly chosen record.
Putting it all in RAM sounds like the right solution, but in the early 1980s memory was something like $2,000 per MB while hard disk space was about $300 per MB. One million compounds with 256 bytes per connection table and 256 bytes per screen requires almost 500MB of space. No wonder people kept the data on disk.
By 1990, RAM was about $80/MB while hard disk storage was $4/MB, while the amount of chemistry data had only doubled.
Dave, or at least someone at Daylight, must have realized that the two different exponential growth rates make for a game changer, and that the Daylight approach would give them a head start over the established vendors. This is explicit in the Daylight documentation for Merlin:
The basic idea behind Merlin is that data in a computer's main memory can be manipulated roughly five orders of magnitude faster than data on its disk. Throughout the history of computers, there has been a price-capacity-speed tradeoff for data storage: Large-capacity storage (tapes, drums, disks, CD-ROMS) is affordable but slow; high-speed storage ("core", RAM) is expensive but fast. Until recently, high-speed memory was so costly that even a modest amount of chemical information had to be stored on tapes or disks.
But technology has a way of overwhelming problems like this. The amount of chemical information is growing at an alarming rate, but the size of computer memories is growing even faster: at an exponential rate. In the mid-1980's it became possible for a moderately large minicomputer to fit a chemical database of several tens of thousands of structures into its memory. By the early 1990's, a desktop "workstation" could be purchased that could hold all of the known chemicals in the world (ca. 15 million structures) in its memory, along with a bit of information about each.
On the surface, in-memory operations seem like a straightforward good deal: A computer's memory is typically 105 times faster than its disk, so everything you could do on disk is 100000 times faster when you do it in memory. But these simple numbers, while impressive, don't capture the real differences between disk- and memory-based searches:
- With disk-based systems, you formulate a search carefully, because it can take minutes to days to get your answer back. With Merlin it is usually much faster to get the answer than it is to think up the question. This has a profound effect on user's attitudes towards the EDA system.
- In disk-based systems, you typically approach with a specific question, often a question of enough significance that you are willing to invest significant effort to find the answer. With Merlin, it is possible to "explore" the database in "real-time" - to poke around and see what is there. Searches are so fast that users adopt a whole new approach to exploratory data analysis.
Scaling down
I pointed out earlier that SMILES and fingerprints take up less space. I estimate it was 1/3 the space of what CAS needed, which is the only comparison I've been able to figure out. That let Daylight scale up to larger data sets for a given price, but also scale down to smaller hardware.
Let's say you had 250,000 structures in the early 1990s. With the Daylight system you would need just under 128 MB of RAM, which meant you could buy a Sun 3, which maxed out at 128 MB, instead of a more expensive computer.
It still requires a lot of RAM, and that's where Yosi comes in. His background was in hardware sales, and he knew how to get a computer with a lot of RAM in it. Once the system was ready, Dave and his brother Art put it in the back of a van and went around the country to potential customers to give a demo, often to much astonishment that it could be so fast.
I think the price of RAM was the most important hardware factor to the success of Daylight, but it's not the only one. When I presented some of these ideas at Novartis in 2015, Bernhard Rohde correctly pointed out that decreasing price of hardware also meant that computers were no longer big purchase items bought and managed by IT, but something that even individual researchers could buy. That's another aspect of scaling down.
While Daylight did sell to corporate IT, their heart was in providing tools and especially toolkits to the programmer-chemists who would further develop solutions for their company.
Success and competitors
By about 1990, Daylight was a market success. I have no real idea how much profit the company made, but it was enough that Dave bought his own planes, including a fighter jet. When I was at the Daylight Summer School in 1998, the students over dinner came up with a minimum of $15 million in income, and a maximum of $2 million in expenses.
It was also a scientific success, measured by the number of people talking about SMILES and fingerprints in the literature.
I am not a market analyst so I can't give that context. I'm more of a scholar. I've been reading through the old issues of JCICS (now titled JCIM) trying to identify the breakthrough transition for Daylight. There is no bright line, but there are tantalizing hints between the lines.
In 1992 or so (I'll have to track it down), there's a review of a database vendor's product. The reviewer mentions that the vendor plans to have an in-memory database the next year. I can't help but interpret it as a competitor responding to the new Daylight system, and having to deal with customers who now understood Weininger's Realization.
Dave is a revolutionary
The decreasing price of RAM and hardware may help explain Daylight's market success, but Dave wasn't driven by trying to be the next big company. You can see that in how the company acted. Before the Oracle cartridge, they catered more towards the programmer-chemist. They sold VMS and then Unix database servers and toolkits, with somewhat primitive database clients written using the XView widget toolkit for X. I remember Dave once saying that the clients were meant as examples of what users could do, rather than as complete applications. A different sort of company would have developed Windows clients and servers, more tools for non-programmer chemists, and focused more on selling enterprise solutions to IT and middle managers.
A different sort of company would have tried to be the next MDL. Dave didn't think they were having fun at MDL, so why would he want to be like them?
Dave was driven by the idea of bringing chemical information away from the "high priests" who held the secret knowledge of how to make things work. Look at SMILES - Dyson and WLN required extensive training, while SMILES could be taught to non-chemists in an hour or so. Look at fingerprints - the CAS Online screens were the result of years of research in picking out just the right fragments, based on close analysis of the types of queries people do, while the hash fingerprints can be implemented in a day. Look even at the Daylight depictions, which were well known as being ugly. But Dave liked to point out that the code, at least originally, needed only 4K. That's the sort of code a non-priest could understand, and the sort of justification a revolutionary could appreciate.
Dave is a successful revolutionary, which is rare. SMILES, SMARTS and fingerprints are part of the way we think about modern cheminformatics. Innumerable tools implement them, or variations of them.
High priests of chemical information
Revolutionary zeal is powerful. I remember hearing Dave's "high priests" talk back in 1998 and feeling empowered: that yes, even as a new person in the field, cheminformatics was something I could take on on my own.
As I learn more about the history of the field, I've also learned that Dave's view is not that uncommon. In the post-war era the new field of information retrieval wanted to challenge the high priests of library science. (Unfortunately I can't find that reference now.)
Michael Lynch would have been the high priest of chemical information if there ever was one. Yet at ICCS 1988 he commented "I can recollect very little sense, in 1973, that this revolution was imminent. Georges Anderla .. noted that the future impact of very large scale integration (VLSI) was evident only to a very few at that time, so that he quite properly based his projections on the characteristics of the mainframe and minicomputer types then extant. As a result, he noted, he quite failed to see, first, that the PC would result in expertise becoming vastly more widely disseminated, with power passing out of the hands of the small priesthood of computer experts, thus tapping a huge reservoir of innovative thinking, and, second, that the workstation, rather than the dumb terminal, would become standard."
A few years ago I talked with Egon Willighagen. He is one of the CDK developers and an advocate of free and open source software for chemical information. He also used the metaphor of taking information from the high priests to the people, but in his case he meant the previous generation of closed commercial tools, like the Daylight toolkit.
Indeed, one way to think of it is that Dave, the firebrand, became the high priest of Daylight, and only the priests of Daylight control the secret knowledge of fingerprint generation and canonicalization.
That's why I no longer like the metaphor. Lynch and the Sheffield group published many books and papers, including multiple textbooks on how to work with chemical information. Dave and Daylight did a lot of work to disseminate the Daylight way of thinking about cheminformatics. These are not high priests hoarding occult knowledge, but humans trying to do the right thing in an imperfect world.
There's also danger in the metaphor. Firebrand revolutionaries don't tend to know history. Perhaps some of the temple should be saved? At the very least there might be bad feelings if you declare your ideas revolutionary, only to find out not only that they are not new, but that you are talking to the previous revolutionary who proposed them.
John Barnard told me a story of Dave and Lynch meeting at ICCS in 1988, I believe. Dave explained how his fingerprint algorithm worked. Lynch commented something like "so it's like Calvin Mooers' superimposed coding"? Lynch knew his history, and he was correct - fingerprints and superimposed coding are related, though not the same. Dave did not know the history or how to answer the question.
My view has its own danger. With 75 years of chemical information history, one might feel a paralysis of not doing something out of worry that it's been done before and you just don't know about it.
Leaving Daylight
In the early 2000s Dave became less interested in chemical information. He had grown increasingly disenchanted with the ethics of how pharmaceutical companies work and do research. Those who swear allegiance to money would have no problem just making the sale and taking the profit, but he was more interested in ideas and people. He had plenty of money.
He was concerned about the ethics of how people used his work, though I don't know how big a role that was overall. Early on, when Daylight was still located at Pomona, they got a purchase request from the US military. He didn't want the Daylight tools to be used to make chemical weapons, and put off fulfilling the sale. The military group that wanted the software contacted him and pointed out that they were actually going to use the tools to develop defenses against chemical weapons, which helped him change his mind.
I think the Daylight Oracle cartridge marked the big transition for Daylight. The 1990s was the end of custom domain-specific databases like Thor and Merlin and the rise of object-relational databases with domain-specific extensions. Norah MacCuish (then Shemetulskis) helped show how the Daylight functionality could work as an Informix DataBlade. Oracle then developed their data cartridge, and most of the industry, including the corporate IT that used the Daylight servers, followed suit.
Most of Daylight's income came from the databases, not the toolkit. Companies would rather use widely-used technology because it's easier to find people with the skills to use it, and there can be better integration with other tools. If Daylight didn't switch, then companies would turn to competitive products which, while perhaps not as elegant, were a better fit for IT needs. Daylight did switch, and the Daylight user group meetings (known as "MUG" because it started off as the Medchem User Group) started being filled by DBAs and IT support, not the geek programmer-chemists who were interested in linguistics and information theory and the other topics that excited Dave.
It didn't help that the Daylight tools were somewhat insular and static. The toolkit didn't have an officially supported way to import SD files, though there was a user-contributed package for that, which Daylight staff did maintain. Ragu Bharadwaj developed Daylight's JavaGrins chemical sketcher, which was a first step towards a more GUI-centered Daylight system, but also the only step. The RDKit, as an example of a modern chemical informatics toolkit, includes algorithms for Murcko scaffolds, matched molecular pairs, and maximum common substructure, and continues to get new ones. But Daylight didn't go that route either.
What I think it comes down to is that Dave was really interested in databases and compound collections. Daylight was also a reseller of commercial chemical databases, and Dave enjoyed looking through the different sets. He told me there was a different feel to the chemistry done in different countries, and he could spot that by looking at the data. He was less interested in the other parts of cheminformatics, and as the importance of old-school "chemical information" diminished in the newly renamed field of "chem(o)informatics", so too did his interest, as did the interest from Daylight customers.
Post-Daylight
Dave left chemical information and switched to other topics. He tried to make theobromine-free chocolate, for reasons I don't really understand, though I see now that many people buy carob as a chocolate alternative because it's theobromine free and thus stimulant free. He was also interested in binaural recording and hearing in general. He bought the house next door to turn it into a binaural recording studio.
He became very big into solar power, as a way to help improve the world. He bought 12 solar panels, which he called The Twelve Muses, from a German company and installed them for his houses. These were sun trackers, to maximize the power. Now, Santa Fe is at 2,300 m/7,000 ft. elevation, in a dry, near-desert environment. He managed to overload the transformers because the panels produced a lot more power than the German manufacturer expected. Once that was fixed, both houses were solar powered, plus he had several electric cars and motorcycles, and could feed power back into the grid for profit. He provided a recharging station for the homebrew electric car community who wanted to drive from, say, Albuquerque to Los Alamos. (This was years before Teslas were on the market). Because he had his own DC power source, he was also able to provide a balanced power system to his recording studio and minimize the power hum noise.
He tried to persuade the state of New Mexico to invest more in solar power. It makes sense, and he showed that it was possible. But while he was still a revolutionary, he was not a politician, and wasn't able to make the change he wanted.
When I last saw him in late 2015, he was making a cookbook of New Mexico desserts.
Dave will always have a place in my heart.
Andrew Dalke
Trollhättan, Sweden
2 December 2016
Changes
- 2016-12-06: Added a section on leaving Daylight, fixed typos, and added Dave Cosgrove's account about the origin of the term "SMILES".
Fragment by copy and trim
This is part of a series of essays on how to fragment a molecular graph using RDKit. These are meant to describe the low-level steps that go into fragmentation, and the ways to think about testing. To fragment a molecule in RDKit, use FragmentOnBonds(). The essays in this series are:
-
Why another fragmentation method?
How do you tell if an algorithm is correct? Sometimes you can inspect the code. More often you have test cases where you know the correct answer. For complex algorithms, especially when using real-world data, testing is even harder. It's not always clear where the edge cases are, and the real world is often more complicated than you think.
Another validation solution is to write two different algorithms which accomplish the same thing, and compare them with each other. I did that in my essay about different ways to evaluate the parity of a permutation order. That cross-validation testing was good enough, because it's easy to compute all possible input orders.

In the previous essay in this series, I cross-validated that fragment_chiral() and fragment_on_bonds() give the same results. They did. That isn't surprising, because they implement essentially the same algorithm. Not only that, but I looked at the FragmentOnBonds() implementation when I found out my first fragment_chiral() didn't give the same answers. (I didn't know I needed to ClearComputedProps() after I edit the structure.)
Cross-validation works better when the two algorithms are less similar. The way you think about a problem and implement a solution are connected. Two similar solutions may be the result of thinking about things the same way, and come with the same blind spots. (This could also be a design goal, if you want the new implementation to be "bug compatible" with the old.)
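The permutation parity case is easy to recreate. Here's a self-contained sketch (my own code, not the earlier essay's) of cross-validating two dissimilar parity algorithms against each other over every possible input:

```python
from itertools import permutations

def parity_by_inversions(perm):
    # Parity = number of out-of-order pairs, mod 2.
    n = len(perm)
    inversions = sum(1 for i in range(n)
                       for j in range(i + 1, n)
                       if perm[i] > perm[j])
    return inversions % 2

def parity_by_cycles(perm):
    # Parity from the cycle decomposition: a k-cycle needs k-1 swaps.
    seen = set()
    swaps = 0
    for start in range(len(perm)):
        node, length = start, 0
        while node not in seen:       # walk the cycle containing `start`
            seen.add(node)
            node = perm[node]
            length += 1
        swaps += max(length - 1, 0)
    return swaps % 2

# Cross-validate: exhaustive comparison over all permutations of range(4).
for perm in permutations(range(4)):
    assert parity_by_inversions(perm) == parity_by_cycles(perm), perm
```

The two functions share no code and are based on different ways of thinking about parity, which is what gives the comparison its strength.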
I've made enough hints that you likely know what I'm leading up to. RDKit's FragmentOnBonds() code currently (in mid-2016) converts directional bonds (the "/" and "\" bonds which encode E-Z stereochemistry around double bonds) into non-directional single bonds.
>>> from rdkit import Chem
>>> mol = Chem.MolFromSmiles("F/C=C/F")
>>> fragmented_mol = Chem.FragmentOnBonds(mol, [0], dummyLabels=[(0,0)])
>>> Chem.MolToSmiles(fragmented_mol, isomericSmiles=True)
'[*]C=CF.[*]F'

when I expected the last line to be
'[*]/C=C/F.[*]F'
This may or may not be what you want. Just in case, I've submitted a patch to discuss changing it in RDKit, so your experience in the future might be different than mine in the past.
I bring it up now to show how cross-validation of similar algorithms isn't always enough to figure out if the algorithms do as you expect.
Validate through re-assembly
As a reminder, in an earlier essay I did a much stronger validation test. I took the fragments, reassembled them via SMILES syntax manipulation, and compared the canonicalized result to the canonicalized input structure. These should always match, to within limits of the underlying chemistry support.
They didn't, because my original code didn't support directional bonds. Instead, in that essay I pre-processed the input SMILES string to replace the "/" and "\" characters with a "-". That gives the equivalent chemistry without directional bonds.
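That pre-processing step is just string replacement, something like:

```python
# Replace SMILES directional bond symbols with explicit single bonds.
smiles = "F/C=C/F"
no_direction = smiles.replace("/", "-").replace("\\", "-")
print(no_direction)   # 'F-C=C-F': the same connectivity, no bond directions
```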
I could use that validation technique again, but I want to explore more of what it means to cross-validate by using a different fragmentation implementation.
Fragment by 'copy and trim'
Dave Cosgrove, on the RDKit mailing list, described the solution he uses to cut a bond and preserve chirality in toolkits which don't provide a high-level equivalent to FragmentOnBonds(). This method only works on non-ring bonds, which isn't a problem for me as I want to fragment R-groups. (As I discovered while doing the final testing, my implementation of the method also assumes there is only one molecule in the structure.)
To make a fragment from a molecule, trim away all the atoms which aren't part of the fragment, except for the atom connected to the fragment. Convert that atom into the wildcard atom "*" by setting its atomic number, charge, and number of hydrogens to 0, and setting the chirality and aromaticity flags correctly. The image below shows the steps applied to cutting the ester off of aspirin:
This method never touches the bonds connected to a chiral atom of a fragment, so chirality or other bond properties (like bond direction!) aren't accidentally removed by breaking the bond and making a new one in its place.
A downside is that I need to copy and trim twice in order to get both fragments after cutting a chain bond. I can predict now the performance won't be as fast as FragmentOnBonds().
Let's try it out.
Trim using the interactive shell
I'll use aspirin as my reference structure and cut the bond to the ester (the R-OCOCH3), which is the bond between atoms 2 and 3.
>>> from rdkit import Chem
>>> aspirin = Chem.MolFromSmiles("O=C(Oc1ccccc1C(=O)O)C")
>>> bond_idx = aspirin.GetBondBetweenAtoms(2, 3).GetIdx()
>>> bond_idx
2
>>> fragmented_mol = Chem.FragmentOnBonds(aspirin, [bond_idx], dummyLabels=[(0,0)])
>>> Chem.MolToSmiles(fragmented_mol, isomericSmiles=True)
'[*]OC(C)=O.[*]c1ccccc1C(=O)O'
I'll need a way to find all of the atoms which are on the ester side of the bond. I don't think there's a toolkit function to get all the atoms on one side of a bond, so I'll write one myself.
Atom 2 is the connection point on the ester to the rest of the aspirin structure. I'll start with that atom, show that it's an oxygen, and get information about its bonds:
>>> atom = aspirin.GetAtomWithIdx(2)
>>> atom.GetSymbol()
'O'
>>> [bond.GetBondType() for bond in atom.GetBonds()]
[rdkit.Chem.rdchem.BondType.SINGLE, rdkit.Chem.rdchem.BondType.SINGLE]

I'll use the bond object to get to the atom on the other side of the bond from atom 2, and report more detailed information about the atom at the other end of each bond:
>>> for bond in atom.GetBonds():
...     other_atom = bond.GetOtherAtom(atom)
...     print(other_atom.GetIdx(), bond.GetBondType(), other_atom.GetSymbol(), other_atom.GetIsAromatic())
...
1 SINGLE C False
3 SINGLE C True

This says that atom 1 is an aliphatic carbon, and atom 3 is an aromatic carbon. This matches the structure diagram I used earlier.
I know that atom 3 is the other end of the bond which was cut, so I can stop there. What about atom 1? What is it connected to?
>>> next_atom = aspirin.GetAtomWithIdx(1)
>>> [b.GetOtherAtom(next_atom).GetIdx() for b in next_atom.GetBonds()]
[0, 2, 12]

I've already processed atom 2, so only atoms 0 and 12 are new.
What are those atoms connected to?
>>> next_atom = aspirin.GetAtomWithIdx(0)
>>> [b.GetOtherAtom(next_atom).GetIdx() for b in next_atom.GetBonds()]
[1]
>>> next_atom = aspirin.GetAtomWithIdx(12)
>>> [b.GetOtherAtom(next_atom).GetIdx() for b in next_atom.GetBonds()]
[1]

Only atom 1, which I've seen before, so I've found all of the atoms on the ester side of the bond.
Semi-automated depth-first search
There are more atoms on the other side of the bond, so I'll have to automate it somewhat. This sort of graph search is often implemented as a depth-first search (DFS) or a breadth-first search (BFS). Python lists easily work as stacks, which makes DFS slightly more natural to implement.
To give you an idea of what I'm about to explain, here's an animation of the aspirin depth-first search:
If there is the chance of a ring (called a "cycle" in graph theory), then there are two ways to get to the same atom. To prevent processing the same atom multiple times, I'll set up a set named "seen_ids", which contains the atom indices I've seen before. Since it's the start, I've only seen the two atoms which are at the end of the bond to cut.
>>> seen_ids = set([2, 3])

I also store a list of the atoms which are on this side of the bond, which at this point is only atom 3, and a stack (a Python list) of the atoms I need to process further:
>>> atom_ids = [3]
>>> stack = [aspirin.GetAtomWithIdx(3)]

I'll start by getting the top of the stack (the last element of the list) and looking at its neighbors:
>>> atom = stack.pop()
>>> neighbor_ids = [b.GetOtherAtom(atom).GetIdx() for b in atom.GetBonds()]
>>> neighbor_ids
[2, 4, 8]

I'll need to filter out the neighbors I've already seen. I'll write a helper function for that. I'll need both the atom objects and the atom ids, so this will return two lists, one for each:
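The definition of get_atoms_to_visit() appears to have been lost from this copy of the essay. Here is a reconstruction based on how the function is used in the rest of the walkthrough (same name and return values, though not necessarily the original code):

```python
def get_atoms_to_visit(atom, seen_ids):
    # Return the neighbors of `atom` whose ids are not in `seen_ids`,
    # as two parallel lists: the atom ids and the atom objects.
    atom_ids_to_visit = []
    atoms_to_visit = []
    for bond in atom.GetBonds():
        other_atom = bond.GetOtherAtom(atom)
        other_id = other_atom.GetIdx()
        if other_id not in seen_ids:
            atom_ids_to_visit.append(other_id)
            atoms_to_visit.append(other_atom)
    return atom_ids_to_visit, atoms_to_visit
```

The interactive session below then uses it on the popped atom.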
>>> atom_ids_to_visit, atoms_to_visit = get_atoms_to_visit(atom, seen_ids)
>>> atom_ids_to_visit
[4, 8]
>>> [a.GetSymbol() for a in atoms_to_visit]
['C', 'C']

These atom ids are connected to the original atom, so add them all to the atom_ids list.
>>> atom_ids.extend(atom_ids_to_visit)
>>> atom_ids
[3, 4, 8]

These haven't been seen before, so add the atoms (not the atom ids) to the stack of items to process:
>>> stack.extend(atoms_to_visit)
>>> stack
[<rdkit.Chem.rdchem.Atom object at 0x111bf37b0>, <rdkit.Chem.rdchem.Atom object at 0x111bf3760>]
>>> [a.GetIdx() for a in stack]
[4, 8]

Now that they've been seen, I don't need to process them again, so add the new ids to the set of seen ids:
>>> seen_ids.update(atom_ids_to_visit) >>> seen_ids {8, 2, 3, 4}
That's the basic loop. The stack isn't empty, so pop the top of the stack to get a new atom object, and repeat the earlier steps:
>>> atom = stack.pop()
>>> atom.GetIdx()
8
>>> atom_ids_to_visit, atoms_to_visit = get_atoms_to_visit(atom, seen_ids)
>>> atom_ids_to_visit
[7, 9]
>>> atom_ids.extend(atom_ids_to_visit)
>>> atom_ids
[3, 4, 8, 7, 9]
>>> stack.extend(atoms_to_visit)
>>> [a.GetIdx() for a in stack]
[4, 7, 9]
>>> seen_ids.update(atom_ids_to_visit)
>>> seen_ids
{2, 3, 4, 7, 8, 9}

Then another loop. I'll stick it in a while loop to process the stack until it's empty, and I'll only have it print the new atom ids added to the list:
>>> while stack:
...     atom = stack.pop()
...     atom_ids_to_visit, atoms_to_visit = get_atoms_to_visit(atom, seen_ids)
...     atom_ids.extend(atom_ids_to_visit)
...     stack.extend(atoms_to_visit)
...     seen_ids.update(atom_ids_to_visit)
...     print("added atoms", atom_ids_to_visit)
...
added atoms [10, 11]
added atoms []
added atoms []
added atoms [6]
added atoms [5]
added atoms []
added atoms []

The final set of atoms is:
>>> atom_ids [3, 4, 8, 7, 9, 10, 11, 6, 5]
Fully automated - fragment_trim()
In this section I'll put all the pieces together to make fragment_trim(), a function which implements the trim algorithm.
Find atoms to delete
The trim algorithm needs to know which atoms to delete. That will be all of the atoms on one side of the bond, except for the atom which is at the end of the bond. (I'll turn that atom into a "*" atom.) I'll use the above code to get the atom ids to delete:
def find_atom_ids_to_delete(mol, ignore_atom_id, start_atom_id):
    # (The original definition was lost in formatting; this is a
    # reconstruction of the depth-first search developed above, with
    # the parameter order inferred from the calls below.)
    seen_ids = {start_atom_id, ignore_atom_id}
    atom_ids = []
    stack = [mol.GetAtomWithIdx(start_atom_id)]
    while stack:
        atom = stack.pop()
        atom_ids_to_visit, atoms_to_visit = get_atoms_to_visit(atom, seen_ids)
        atom_ids.extend(atom_ids_to_visit)
        stack.extend(atoms_to_visit)
        seen_ids.update(atom_ids_to_visit)
    return atom_ids

and try it out:
>>> from rdkit import Chem
>>> aspirin = Chem.MolFromSmiles("O=C(Oc1ccccc1C(=O)O)C")
>>> find_atom_ids_to_delete(aspirin, 3, 2)
[1, 0, 12]
>>> find_atom_ids_to_delete(aspirin, 2, 3)
[4, 8, 7, 9, 10, 11, 6, 5]

That looks right. (I left out the other testing I did to get this right.)
Trim atoms
The trim function has two parts. The first is to turn the attachment point into the wildcard atom "*", which I'll do by removing any charges, hydrogens, chirality, or other atom properties. This atom will not be deleted. The second is to remove the other atoms on that side of the bond.
Removing atoms from a molecule is slightly tricky. The atom indices are reset after each atom is removed. (While I often use a variable name like "atom_id", the atom id is not a permanent id, but simply the current atom index.) When atom "i" is deleted, all of the atoms with a larger id "j" get the new id "j-1". That is, the larger ids are all shifted down one to fill in the gap.
This becomes a problem because I have a list of ids to delete, but as I delete them the real ids change. For example, if I delete atoms 0 and 1 from a two atom molecule, in that order, I will get an exception:
>>> rwmol = Chem.RWMol(Chem.MolFromSmiles("C#N"))
>>> rwmol.RemoveAtom(0)
>>> rwmol.RemoveAtom(1)
[13:54:02]
****
Range Error
idx
Violation occurred on line 155 in file /Users/dalke/ftps/rdkit-Release_2016_03_1/Code/GraphMol/ROMol.cpp
Failed Expression: 1 <= 0
****
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Range Error
idx
Violation occurred on line 155 in file Code/GraphMol/ROMol.cpp
Failed Expression: 1 <= 0
RDKIT: 2016.03.1
BOOST: 1_60

This is because after RemoveAtom(0) the old atom with id 1 gets the id 0. As another example, if I remove atoms 0 and 1 from a three atom molecule, I'll end up with what was originally the second atom:
>>> rwmol = Chem.RWMol(Chem.MolFromSmiles("CON"))
>>> rwmol.RemoveAtom(0)
>>> rwmol.RemoveAtom(1)
>>> Chem.MolToSmiles(rwmol)
'O'

Originally the ids were [C=0, O=1, N=2]. The RemoveAtom(0) removed the C and renumbered the remaining atoms, giving [O=0, N=1]. The RemoveAtom(1) then removed the N, leaving [O=0] as the remaining atom.
While you could pay attention to the renumbering and adjust the index of the atom to delete, the far easier solution is to sort the atom ids, and delete starting from the largest id.
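The same renumbering behavior is easy to see with a plain Python list, which shifts later indices down on deletion just as RemoveAtom() does (toy data, for illustration only):

```python
# Deleting the largest index first means earlier deletions never
# shift the positions of the indices still to be deleted.
atoms = ["C", "O", "N", "S"]
to_delete = [0, 2]
for i in sorted(to_delete, reverse=True):
    del atoms[i]
print(atoms)  # ['O', 'S']
```

Deleting in increasing order instead would remove index 0 ("C") and then index 2, which by that point refers to "S" rather than the intended "N".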
Here is the function to turn the attachment atom into a wildcard and to delete the other atoms:

def trim_atoms(mol, wildcard_atom_id, atom_ids_to_delete):
    # (The original definition was lost in formatting; this is a
    # reconstruction. The exact list of atom properties cleared here
    # is my best guess from the description above.)
    rwmol = Chem.RWMol(mol)
    # Turn the attachment atom into the wildcard atom "*".
    atom = rwmol.GetAtomWithIdx(wildcard_atom_id)
    atom.SetAtomicNum(0)
    atom.SetIsotope(0)
    atom.SetFormalCharge(0)
    atom.SetIsAromatic(False)
    atom.SetNumExplicitHs(0)
    atom.SetChiralTag(Chem.ChiralType.CHI_UNSPECIFIED)
    # Delete from the largest atom id to the smallest so the deletions
    # don't renumber the atoms which are still to be deleted.
    for atom_id in sorted(atom_ids_to_delete, reverse=True):
        rwmol.RemoveAtom(atom_id)
    return rwmol.GetMol()

(If this were a more general-purpose function I would need to call ClearComputedProps(), which is needed after any molecular editing in RDKit. I don't need to do this here because it will be done during CombineMols(), which is coming up.)
How did I figure out which properties needed to be changed to make a wildcard atom? I tracked them down during cross-validation tests of an earlier version of this algorithm.
I'll try out the new code:
>>> fragment1 = trim_atoms(aspirin, 2, [1, 0, 12])
>>> Chem.MolToSmiles(fragment1, isomericSmiles=True)
'[*]c1ccccc1C(=O)O'
>>> fragment2 = trim_atoms(aspirin, 3, [4, 8, 7, 9, 10, 11, 6, 5])
>>> Chem.MolToSmiles(fragment2, isomericSmiles=True)
'[*]OC(C)=O'
fragment_trim()
The high-level fragment code is straightforward, and I don't think it requires explanation:

def fragment_trim(mol, atom1, atom2):
    # (The original definition was lost in formatting; this is a
    # reconstruction of the copy-and-trim step described above.)
    atom_ids_to_delete1 = find_atom_ids_to_delete(mol, atom1, atom2)
    fragment1 = trim_atoms(mol, atom2, atom_ids_to_delete1)
    atom_ids_to_delete2 = find_atom_ids_to_delete(mol, atom2, atom1)
    fragment2 = trim_atoms(mol, atom1, atom_ids_to_delete2)
    return Chem.CombineMols(fragment1, fragment2)
I'll check that it does what I think it should do:
>>> new_mol = fragment_trim(aspirin, 2, 3)
>>> Chem.MolToSmiles(new_mol)
'[*]OC(C)=O.[*]c1ccccc1C(=O)O'

This matches the FragmentOnBonds() output at the start of this essay.
Testing
I used the same cross-validation method I did in my previous essay. I parsed structures from ChEMBL and checked that fragment_trim() produces the same results as FragmentOnBonds(). It doesn't. The first failure mode surprised me:
fragment_trim() only works with a single connected structure
Here's one of the failure reports:
Mismatch: record: 61 id: CHEMBL1203109 smiles: Cl.CNC(=O)c1cc(C(O)CNC(C)CCc2ccc3c(c2)OCO3)ccc1O cut: 1 2
smiles_trim: Cl.Cl.[*]C.[*]NC(=O)c1cc(C(O)CNC(C)CCc2ccc3c(c2)OCO3)ccc1O
smiles_on_bond: Cl.[*]C.[*]NC(=O)c1cc(C(O)CNC(C)CCc2ccc3c(c2)OCO3)ccc1O

Somehow the "Cl" appears twice. After thinking about it I realized that my implementation of the copy and trim algorithm assumes that the input structure is connected, that is, that it contains only a single molecule. Each copy contains a chlorine atom, so when I CombineMols() I end up with two chlorine atoms.
I could fix my implementation to handle this, but as I'm only interested in cross-validation, the easier solution is to never process a structure with multiple molecules. I decided that if the input SMILES contains multiple molecules (that is, if the SMILES contains the dot disconnection symbol ".") then I would choose the component which has the most characters, using the following:
if "." in smiles:
    # The fragment_trim() method only works on a connected molecule.
    # Arbitrarily pick the component with the longest SMILES string.
    smiles = max(smiles.split("."), key=len)
Directional bonds
After 1000 records and 31450 tests there were 362 mismatches. All of them appeared to be related to directional bonds, like this mismatch report:
Mismatch: record: 124 id: CHEMBL445177 smiles: CCCCCC/C=C\CCCCCCCc1cccc(O)c1C(=O)O cut: 7 8
smiles_trim: [*]/C=C\CCCCCC.[*]CCCCCCCc1cccc(O)c1C(=O)O
smiles_on_bond: [*]C=CCCCCCC.[*]CCCCCCCc1cccc(O)c1C(=O)O

I knew this would happen coming in to this essay, but it's still nice to see. It demonstrates that fragment_trim() is a better way to cross-validate FragmentOnBonds() than my earlier fragment_chiral() algorithm.
It's possible that other mismatches are hidden in the hundreds of reports, so I removed directional bonds from the SMILES and processed again:
if "/" in smiles:
    smiles = smiles.replace("/", "")
if "\\" in smiles:
    smiles = smiles.replace("\\", "")

That testing showed I forgot to include "atom.SetIsotope(0)" when I converted the attachment atom into a wildcard atom. Without it, the algorithm turned a "[18F]" into a "[18*]". It surprised me because I had copied it from working code I made for an earlier version of the algorithm. Looking into it, I realized I left that line out during the copy&paste. This is why regression tests using diverse real-world data are important. I stopped the testing after 100,000 records. Here is the final status line:
Processed 100000 records, 100000 molecules, 1120618 tests, 0 mismatches. T_trim: 2846.79 T_on_bond 258.18

While the trim code is slower than using the built-in function, the point of this exercise isn't to figure out the fastest implementation but to have a better way to cross-validate the original code, which it has done.
The final code
Here is the final code:
from __future__ import print_function import datetime import time from rdkit import Chem # = {ignore_atom_id, start) ####### Cross-validation code def fragment_on_bond(mol, atom1, atom2): bond = mol.GetBondBetweenAtoms(atom1, atom2) return Chem.FragmentOnBonds(mol, [bond.GetIdx()], dummyLabels=[(0, 0)])_trim = time_on_bond = 0.0 # Helper function to print status information. Since # the function is defined inside of another function, # it has access to variables in the outer function. def print_status(): print("Processed {} records, {} molecules, {} tests, {} mismatches." " T_trim: {:.2f} T_on_bond {:.2f}" .format(recno, num_mols, num_tests, num_mismatches, time_trim, time_on_bond)) filename = "/Users/dalke/databases/chembl_20_rdkit.smi" start_time = datetime.datetime.now() for recno, id, smiles in read_records(filename): if recno % 100 == 0: print_status() if "." in smiles: # The fragment_trim() method only works on a connected molecule. # Arbitrarily pick the component with the longest SMILES string. smiles = max(smiles.split("."), key=len) ## Remove directional bonds to see if there are mismatches ## which are not caused by directional bonds. #if "/" in smiles: # smiles = smiles.replace("/", "") #if "\\" in smiles: # smiles = smiles.replace("\\", "")_trim() ... t1 = time.time() mol_trim = fragment_trim(mol, a1, a2) t2 = time.time() time_trim += t2-t1 smiles_trim = Chem.MolToSmiles(mol_trim,_trim != smiles_on_bond: print("Mismatch: record: {} id: {} smiles: {} cut: {} {}".format( recno, id, smiles, a1, a2)) print(" smiles_trim:", smiles_trim)()
Software mentioned in ChEMBL SD records
When I work with real-world data sets, I usually start with PubChem before going on to ChEMBL. Why? The PubChem data is all generated by one chemistry toolkit - I'm pretty sure it's OEChem - while ChEMBL data comes from many sources.
To get a sense of the diversity, I processed the ChEMBL 21 release to get the second line of each record. The ctfile format specification says that, if the line is non-blank, the 8 characters starting in the third position should contain the program name. The documentation includes a description of the fields:
IIPPPPPPPPMMDDYYHHmmddSS…

where those fields are:
- II: two characters for the user's initials
- PPPPPPPP: eight characters for the program name
- MMDDYY: two characters for the month, two for the day, two for the year
- HHmm: two characters for the hour, two for the minutes
- … additional fields omitted …
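Those fixed-width fields can be pulled out with simple slicing. This is a sketch of my own (the function name is mine), and it assumes the two leading initials characters are blank, as they appear to be in the ChEMBL example lines below:

```python
# Slice the fixed-width fields from an SD file's second line.
# Field layout: II PPPPPPPP MMDDYY HHmm ... (see the list above).
def parse_program_fields(line):
    return {
        "initials": line[0:2],
        "program": line[2:10],
        "date": line[10:16],    # MMDDYY
        "time": line[16:20],    # HHmm
    }

fields = parse_program_fields("  SciTegic12231509382D")
print(fields["program"], fields["date"])  # SciTegic 122315
```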
SciTegic12231509382D
Mrv0541 05211314572D
08180812482D 1 1.00000 0.00000 0
Symyx 11300911332D 1 1.00000 0.00000 0

The third program wishes to remain anonymous.
Extract program names from the ChEMBL 21 SDF
I'll write a program to extract those program names and count how many times each one occurs. I don't need a general-purpose SD file reader because ChEMBL uses a subset of the SD format. For example, there are only two lines in each record which start with "CHEMBL", the title line (the first line of the record) and the data line after the "chembl_id" tag.
My code reads line by line through the file. The first time it sees a "CHEMBL" line it is the title line, so the following line (the second line of the record) contains the data. Then when it sees "> <chembl_id>" it knows to skip the following line, which will have a CHEMBL id on it.
There are two oddities here. First, the gzip reader returns byte strings. I decided to do the pattern matching on the byte strings to avoid the overhead of converting everything to Unicode when I only need a few characters from the file. Second, a Python file object is its own iterator, so I can use "infile" both in the for-loop and in the body itself.
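That second point is worth a tiny demonstration with an in-memory file (io.StringIO standing in for the gzip reader; the sample lines are made up):

```python
import io

# A file object is its own iterator, so next() inside the for-loop
# consumes the following line without disturbing the loop.
infile = io.StringIO("CHEMBL1\nprogram line\n> <chembl_id>\nCHEMBL1\nend\n")
second_lines = []
for line in infile:
    if line.startswith("> <chembl_id>"):
        next(infile)  # skip the data line so it isn't mistaken for a title
    elif line.startswith("CHEMBL"):
        second_lines.append(next(infile))
print(second_lines)  # ['program line\n']
```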
import sys, gzip
from collections import Counter

counter = Counter()
with gzip.open("/Users/dalke/databases/chembl_21.sdf.gz", "rb") as infile:
    for line in infile:
        # Get the line after the title line (which starts with 'CHEMBL')
        if line[:6] == b"CHEMBL":
            next_line = next(infile)
            # Print unique program names
            program = next_line[2:10]
            if program not in counter:
                print("New:", repr(str(program, "ascii")))
            counter[program] += 1
        if line[:13] == b"> <chembl_id>":
            ignore = next(infile)  # skip the CHEMBL

print("Done. Here are the counts for each program seen.")
for name, count in counter.most_common():
    print(repr(str(name, "ascii")), count)
Program counts
Here is the table of counts:

The number of different CDK lines is not because of version numbers but because CDK doesn't format the line correctly. The specification states that the first few fields of a non-blank line are supposed to be:
IIPPPPPPPPMMDDYYHHmmddSS…

while the CDK lines use:
CDK 7/28/10,10:58
CDK 8/10/10,12:22
CDK 9/16/09,9:40
CDK 10/7/09,10:42
CDK 11/9/10,11:20

where the date starts one byte too early. The date also isn't in the specified format.
Further examination shows these were only generated in 2009 and 2010. The current CDK implementation is correct, and a 'git annotate' suggests it was fixed in 2010.
I don't think anyone uses that line for anything, and I don't see the point of changing anything, so I don't think it's worthwhile to even ask ChEMBL to change these old records.
Use FragmentOnBonds to fragment a molecule in RDKit
This is part of a continuing series on how to fragment a molecular graph. So far I have covered:
If you are writing production code, you should almost certainly use the built-in RDKit function FragmentOnBonds() to fragment a molecule, and not use any of the code from those essays.
FragmentOnBonds
FragmentOnBonds() will fragment a molecule given a list of bond identifiers. In default use it attaches a new "dummy" atom (what I've been calling a wildcard atom) to the atoms at the end of each broken bond. It will also set the isotope label for each newly created dummy atom to the atom index of the atom to which it is attached.
>>> from rdkit import Chem
>>> vanillin = Chem.MolFromSmiles("c1(C=O)cc(OC)c(O)cc1")
>>> bond = vanillin.GetBondBetweenAtoms(4, 5)
>>> bond
<rdkit.Chem.rdchem.Bond object at 0x102bec8a0>
>>> bond.GetIdx()
4
>>> new_mol = Chem.FragmentOnBonds(vanillin, [bond.GetIdx()])
>>> Chem.MolToSmiles(new_mol, isomericSmiles=True)
'[4*]OC.[5*]c1cc(C=O)ccc1O'

I don't want a label, so I'll specify 0 for the atom labels. (An unlabeled atom has an isotope of zero.)
>>> new_mol = Chem.FragmentOnBonds(vanillin, [bond.GetIdx()], dummyLabels=[(0, 0)])
>>> Chem.MolToSmiles(new_mol, isomericSmiles=True)
'[*]OC.[*]c1cc(C=O)ccc1O'
I'll turn this into a function that I can use for testing:
def fragment_on_bond(mol, a1, a2):
    bond = mol.GetBondBetweenAtoms(a1, a2)
    return Chem.FragmentOnBonds(mol, [bond.GetIdx()], addDummies=True, dummyLabels=[(0, 0)])

That's certainly much easier than fragment_chiral()!
FragmentOnBonds implementation
The RDKit source code is available, so we can look at how this built-in function is implemented. A search for "FragmentOnBonds":
Code/GraphMol/Wrap/MolOps.cpp
1631: python::def("FragmentOnBonds", fragmentOnBondsHelper,

plus some investigation shows that the actual C++ code has the name "fragmentOnBonds". The RDKit convention is to use an initial uppercase letter for Python functions, and an initial lowercase letter for C++ functions.
A search this time for "fragmentOnBonds" has a few more hits, including this very suggestive one:
Code/GraphMol/ChemTransforms/MolFragmenter.cpp
320: ROMol *nm=fragmentOnBonds(mol,fragmentHere,addDummies,dummyLabelsHere,bondTypesHere,lCutsPerAtom);
329: ROMol *fragmentOnBonds(const ROMol &mol,const std::vector<unsigned int> &bondIndices,

That second result is the start of the fragmentOnBonds function definition. Here's the key part, with commentary.
The function can fragment multiple bonds. It iterates through the bonds in order. For each one it gets the bond type and the begin and end atoms, and if requested it stores information about the number of cuts per atom.
for (unsigned int i = 0; i < bondsToRemove.size(); ++i) {
  const Bond *bond = bondsToRemove[i];
  unsigned int bidx = bond->getBeginAtomIdx();
  unsigned int eidx = bond->getEndAtomIdx();
  Bond::BondType bT = bond->getBondType();
  res->removeBond(bidx, eidx);
  if (nCutsPerAtom) {
    (*nCutsPerAtom)[bidx] += 1;
    (*nCutsPerAtom)[eidx] += 1;
  }

FragmentOnBonds() by default will add "dummy" atoms (atoms with atomic number 0), but this can be disabled. If enabled, it will create the two atoms with atomic number 0. By default it will set the isotope to be the index of the atom it will be attached to, but that can also be specified with the dummyLabels parameter:
if (addDummies) {
  Atom *at1, *at2;
  at1 = new Atom(0);
  at2 = new Atom(0);
  if (dummyLabels) {
    at1->setIsotope((*dummyLabels)[i].first);
    at2->setIsotope((*dummyLabels)[i].second);
  } else {
    at1->setIsotope(bidx);
    at2->setIsotope(eidx);
  }

Next, make the bond from the old terminal atoms to the new wildcard atoms. By default the bond type for the new bonds is the same as the bond which was broken (determined earlier). Otherwise, there's an option to specify an alternate bond type.
unsigned int idx1 = res->addAtom(at1, false, true);
if (bondTypes) bT = (*bondTypes)[i];
res->addBond(eidx, at1->getIdx(), bT);
unsigned int idx2 = res->addAtom(at2, false, true);
res->addBond(bidx, at2->getIdx(), bT);

The last part does what the comment says it does; if the atom has a tetrahedral chirality then check if the permutation order has changed and invert the chirality if needed:
// figure out if we need to change the stereo tags on the atoms:
if (mol.getAtomWithIdx(bidx)->getChiralTag() == Atom::CHI_TETRAHEDRAL_CCW ||
    mol.getAtomWithIdx(bidx)->getChiralTag() == Atom::CHI_TETRAHEDRAL_CW) {
  checkChiralityPostMove(mol, mol.getAtomWithIdx(bidx), res->getAtomWithIdx(bidx),
                         mol.getBondBetweenAtoms(bidx, eidx));
}
if (mol.getAtomWithIdx(eidx)->getChiralTag() == Atom::CHI_TETRAHEDRAL_CCW ||
    mol.getAtomWithIdx(eidx)->getChiralTag() == Atom::CHI_TETRAHEDRAL_CW) {
  checkChiralityPostMove(mol, mol.getAtomWithIdx(eidx), res->getAtomWithIdx(eidx),
                         mol.getBondBetweenAtoms(bidx, eidx));
}

The code to check if the chirality has changed iterates through all of the bonds of the old atom ("oAt") to make a list ("newOrder") containing all of the atom identifiers on the other side of the bond. (The "OEDGE_ITER" is part of the adapter to work with the Boost Graph library. It's an "out_edge_iterator".)
void checkChiralityPostMove(const ROMol &mol, const Atom *oAt, Atom *nAt,
                            const Bond *bond) {
  INT_LIST newOrder;
  ROMol::OEDGE_ITER beg, end;
  boost::tie(beg, end) = mol.getAtomBonds(oAt);
  while (beg != end) {
    const BOND_SPTR obond = mol[*beg];
    ++beg;

The code knows that the RemoveBond()/AddBond() caused the new dummy atom to be placed at the end of the bond list, so when it sees the old bond it simply ignores it. Once it's gone through the old atoms, it adds the new wildcard atom id to the end of the list. Interestingly, this knowledge isn't something I can depend on, because it's an implementation detail which might change in the future. That's why I needed to substitute the new bond id at this point in my code.
    if (obond.get() == bond) {
      continue;
    }
    newOrder.push_back(obond->getIdx());
  }
  newOrder.push_back(bond->getIdx());

The last bit is to compute the permutation order, or as it says, "perturbation order". I believe that is a typo. Dig down a bit and it uses a selection sort to count the number of swaps needed to make the "oAt" atom list and the "newOrder" list match. The result, modulo two, gives the parity. If the parity is odd, invert the chirality:
  unsigned int nSwaps = oAt->getPerturbationOrder(newOrder);
  if (nSwaps % 2) nAt->invertChirality();
}

This was a bit different than my approach, which compared the parity before and after, but it gives the same results.
Jumping back to the fragmentOnBonds() function, the last bit of the function is:
res->clearComputedProps();
return static_cast<ROMol *>(res);

This is needed because some of the computed properties may depend on bond patterns which are no longer present. Note that it does not use SanitizeMol(), which is something I investigated in the previous essay.
While the details are a bit different between my fragment_chiral() and FragmentOnBonds(), the general approach is the same.
A possible optimization
I mentioned that checkChiralityPostMove() uses insider knowledge. It knows that the deleted bond is removed from the list and the new bond added at the end. It uses this information to build the "newOrder" list with atom indices in the correct order, before determining the difference in parity between that list and the list of atom indices around the new bond.
There's an easier way. Count the number of bonds between where the deleted bond was and the end, and take the result modulo 2. For example, if the transformation were:
initial neighbors = [1, 6, 5, 4]
  - remove bond to atom 5
neighbors after delete = [1, 6, 4]
  - add bond to new atom 9
final neighbors = [1, 6, 4, 9]

then the bond to atom 5 was in the third position, with one bond (to atom 4) between it and the end of the list. The parity is therefore odd. I added this suggestion to the RDKit issue tracker as #1033.
The proof is simple. Assume there are n elements after the bond to delete. Then n adjacent swaps are needed to bring it to the end, where it can be replaced with the connection to the new atom. Hence the parity change is n % 2.
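A quick way to convince yourself in Python (the function name is mine):

```python
# Moving one element to the end of a list by adjacent swaps takes one
# swap per element after it, so the parity change is that count mod 2.
def swaps_to_move_to_end(neighbors, value):
    position = neighbors.index(value)
    return len(neighbors) - 1 - position

# The worked example above: the bond to atom 5, with one bond (to
# atom 4) between it and the end of the list.
n = swaps_to_move_to_end([1, 6, 5, 4], 5)
print(n, n % 2)  # 1 swap, odd parity
```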
Not using SanitizeMol for this investigation
In the earlier essay in this series, "Fragment chiral molecules in RDKit", I investigated some failure cases where the chirality wasn't preserved. Greg Landrum pointed out:

    after you break one or more bonds, you really, really should re-sanitize the molecule (or at least call ClearComputedProps()) ....

I found that if I added SanitizeMol() then some of the error cases I got when using only ClearComputedProps() were no longer error cases. This may be because of a bug in FastFindRings. Greg, in the second link, writes:

    There is a workaround until this is fixed; explicitly sanitize your molecules or just call GetSymmSSSR() before you generate SMILES.
So that's what I did. However, without SanitizeMol() there were only 232 failures out of 1.4M+ input structures. Adding SanitizeMol() didn't fix all of the remaining errors. On the other hand, the performance overhead to SanitizeMol() is (as expected) larger than changing a few bonds around. In one timing test I made, the time to process 10,000 structures using fragment_chiral() without SanitizeMol() was 107 seconds, while the time with SanitizeMol() was 255 seconds. The overall processing time takes about twice as long. (There is an additional constant overhead to parse the SMILES and canonicalize the fragmented molecule, which I did not include.)
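Timing numbers like these come from accumulating the time spent in just the call of interest, the same pattern the cross-validation loop at the end of this essay uses around fragment_chiral(). A minimal sketch, with a stand-in workload instead of the real fragmentation call:

```python
import time

# Accumulate only the time spent in the call being measured.
total = 0.0
for _ in range(1000):
    t1 = time.time()
    sorted(range(100, 0, -1))   # stand-in for the fragmentation call
    t2 = time.time()
    total += t2 - t1
print("total seconds:", total)
```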
For this pedagogical investigation, I don't think the performance impact is worth the slightly lower error rate, so I will not include SanitizeMol() in this essay. That means I can use my new fragment_on_bond() function as-is, and I'll modify fragment_chiral() so it calls ClearComputedProps() instead of SanitizeMol().
In any case, the explicit call to SanitizeMol() is a workaround. I expect ClearComputedProps() will suffice in the RDKit of the future world that you, the reader, inhabit.
Cross-validation
In the earlier essay I tested the fragment_chiral() function by attempting to re-connect the fragments. If the canonicalized result doesn't match the canonicalized input structure then there's a problem. (And there were a small number of problems, at the edge of RDKit's ability to handle chemistry.)
That test ignored what SMILES calls "directional bonds", that is the "/" and "\" in "F/C=C\F", which is how SMILES denotes the stereochemistry of double bonds. This is how SMILES deals with "E-Z notation", which would be more familiar to a chemist. The tests ignored those bonds for the simple reason that FragmentOnBonds() doesn't preserve that information if it cuts the bond. In a future essay I'll show how to implement that ability.
Instead of fully testing fragment_on_bonds(), I'll do a weaker test where I compare its results to fragment_chiral(). I say it's weaker because what it shows is that they are likely bug-compatible, which is different than being correct. It's also weaker because the two implementation methods are very similar. A stronger test would use a different way to fragment.
I think of this as a cross-validation test. As Wikipedia helpfully points out, that term is overloaded. Statistics uses a different definition than chemistry; the latter usage is very similar to what's used in verification and validation, so I think I'm right to use the term.
The test code is not interesting, so I'll put it at the end and not discuss it in detail. The results weren't interesting either. There were no mismatches. I just had to wait for a few hours while I let it process the 1.4M+ structures from ChEMBL 20.
Here are the last few lines of output:
Processed 1455762 records, 1455763 molecules, 16017700 tests, 0 mismatches. T_chiral: 5912.26 T_on_bond 3554.22
elapsed time: 8:30:15

The two implementations gave identical results, and the time needed for fragment_chiral() is about 1.7x that needed for fragment_on_bond(). The fragment_on_bond() code is shorter, faster, and easier to understand, so you should always use it instead of fragment_chiral().
The code
The fragment_chiral() function here is a copy of the one from the previous essay, except I replaced the SanitizeMol(new_mol) call with new_mol.ClearComputedProps() for 2x performance, at the expense of a few incorrect structures. (The SanitizeMol() is a work-around for a current bug report. It shouldn't be needed in long-term use.) I put the code here so the test code is self-contained, except of course you'll need your own dataset.
from __future__ import print_function import time import datetime from rdkit import Chem # Two different methods to cut a bond and fragment an RDKit molecule. # - fragment_on_bond() uses RDKit's FragmentOnBonds() # - fragment_chiral() uses lower-level API calls # This also includes a cross-validation function which checks that the # two methods produce the same fragments as output. # The final two lines when I evaluated ChEMBL20 were: # # Processed 1455762 records, 1455763 molecules, 16017700 tests, 0 mismatches. T_chiral: 5912.26 T_on_bond 3554.22 # elapsed time: 8:30:15 # # The timings show that fragment_on_bond() is about 1.7x the performance of fragment_chiral(). #### Fragment using a high-level RDKit API function def fragment_on_bond(mol, atom1, atom2): bond = mol.GetBondBetweenAtoms(atom1, atom2) new_mol = Chem.FragmentOnBonds(mol, [bond.GetIdx()], dummyLabels=[(0, 0)]) # FragmentOnBonds() calls ClearComputedProps() at the end. There # is a current bug report where, as a downstream effect, that may # cause some chiralities to change, most notably on some # bridgeheads.. A workaround for now is to call SanitizeMol(), # though that ends up tripling the time. I'll stay compatible # with FragmentOnBonds() and not call it. #Chem.SanitizeMol(new_mol) return new_mol #### Fragment using low-level RDKit API functions # See # for implementation discussion CHI_TETRAHEDRAL_CW = Chem.ChiralType.CHI_TETRAHEDRAL_CW CHI_TETRAHEDRAL_CCW = Chem.ChiralType.CHI_TETRAHEDRAL_CCW def parity_shell(values): # Simple Shell sort; while O(N^2), we only deal with at most 4 values # See # for faster versions for fixed-size lists. 
values = list(values) N = len(values) num_swaps = 0 for i in range(N-1): for j in range(i+1, N): if values[i] > values[j]: values[i], values[j] = values[j], values[i] num_swaps += 1 return num_swaps % 2 def get_bond_parity(mol, atom_id): """Compute the parity of the atom's bond permutation Return None if it does not have tetrahedral chirality, 0 for even parity, or 1 for odd parity. """ atom_obj = mol.GetAtomWithIdx(atom_id) # Return None unless it has tetrahedral chirality chiral_tag = atom_obj.GetChiralTag() if chiral_tag not in (CHI_TETRAHEDRAL_CW, CHI_TETRAHEDRAL_CCW): return None # Get the list of atom ids for the each atom it's bonded to. other_atom_ids = [bond.GetOtherAtomIdx(atom_id) for bond in atom_obj.GetBonds()] # Use those ids to determine the parity return parity_shell(other_atom_ids) def set_bond_parity(mol, atom_id, old_parity, old_other_atom_id, new_other_atom_id): """Compute the new bond parity and flip chirality if needed to match the old parity""" atom_obj = mol.GetAtomWithIdx(atom_id) # Get the list of atom ids for the each atom it's bonded to. other_atom_ids = [bond.GetOtherAtomIdx(atom_id) for bond in atom_obj.GetBonds()] # Replace id from the new wildcard atom with the id of the original atom i = other_atom_ids.index(new_other_atom_id) other_atom_ids[i] = old_other_atom_id # Use those ids to determine the parity new_parity = parity_shell(other_atom_ids) if old_parity != new_parity: # If the parity has changed, invert the chirality atom_obj.InvertChirality() def fragment_chiral(mol, atom1, atom2): """Cut the bond between atom1 and atom2 and replace with connections to wildcard atoms Return the fragmented structure as a new molecule. 
""" rwmol = Chem.RWMol(mol) atom1_parity = get_bond_parity(mol, atom1) atom2_parity = get_bond_parity(mol, atom2) rwmol.RemoveBond(atom1, atom2) wildcard1 = rwmol.AddAtom(Chem.Atom(0)) wildcard2 = rwmol.AddAtom(Chem.Atom(0)) new_bond1 = rwmol.AddBond(atom1, wildcard1, Chem.BondType.SINGLE) new_bond2 = rwmol.AddBond(atom2, wildcard2, Chem.BondType.SINGLE) if atom1_parity is not None: set_bond_parity(rwmol, atom1, atom1_parity, atom2, wildcard1) if atom2_parity is not None: set_bond_parity(rwmol, atom2, atom2_parity, atom1, wildcard2) # After breaking bonds, should re-sanitize See # # or at least call ClearComputedProps(). I found that # SanitizeMol() improves chiral carbon bridgeheads handling, # though using it doubles the execution time. I'll stick with # ClearComputedProps(), which matches what FragmentOnBonds() does new_mol = rwmol.GetMol() # I must ClearComputedProps() after editing the structure. new_mol.ClearComputedProps() #Chem.SanitizeMol(new_mol) return new_mol ####### Cross-validation code_chiral = time_on_bond = 0.0 # Helper function to print status information. Since # the function is defined inside of another function, # it has access to variables in the outer function. def print_status(): print("Processed {} records, {} molecules, {} tests, {} mismatches." " T_chiral: {:.2f} T_on_bond {:.2f}" .format(recno, num_mols, num_tests, num_mismatches, time_chiral, time_on_bond)) filename = "/Users/dalke/databases/chembl_20_rdkit.smi" start_time = datetime.datetime.now() for recno, id, smiles in read_records(filename): if recno % 1000 == 0: print_status()_chiral() ... t1 = time.time() mol_chiral = fragment_chiral(mol, a1, a2) t2 = time.time() time_chiral += t2-t1 smiles_chiral = Chem.MolToSmiles(mol_chiral,_chiral != smiles_on_bond: print("Mismatch: record: {} id: {} smiles: {} cut: {} {}".format( recno, id, smiles, a1, a2)) print(" smiles_chiral:", smiles_chiral)()
Faster parity calculation

(NOTE: the day after writing this I realized I'm thinking like a C programmer. In Python the better solution is to copy the items into local variables.)
No need to sort
A sorting network will always do D comparisons, but those comparisons aren't always needed. The reason is simple - if you think of the network as a decision tree, where each comparison is a branch, then D comparisons give a tree with 2^D leaves, while distinguishing the orderings of N items only requires N! of them, so some branches can finish early.
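The counting argument can be sanity-checked directly (a trivial verification of my own, not from the original):

```python
import math

# A comparison tree of depth D has at most 2**D leaves; telling apart
# all orderings of N items needs at least N! leaves.
N, D = 4, 5
print(math.factorial(N), 2 ** D)  # 24 vs 32, so depth 5 suffices for N=4
assert math.factorial(N) <= 2 ** D
```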
I wrote some timing code which does 100,000 random selections from the possible permutations and compares the performance of the parityN() variants. Here is the N=4 decision tree:

def parity4_decision_tree(data):
    data0, data1, data2, data3 = data
    if data0 < data1:
        if data2 < data3:
            if data0 < data2:
                if data1 < data2:
                    return 0 # (0, 1, 2, 3)
                else:
                    if data1 < data3:
                        return 1 # (0, 2, 1, 3)
                    else:
                        return 0 # (0, 3, 1, 2)
            else:
                if data0 < data3:
                    if data1 < data3:
                        return 0 # (1, 2, 0, 3)
                    else:
                        return 1 # (1, 3, 0, 2)
                else:
                    return 0 # (2, 3, 0, 1)
        else:
            if data0 < data3:
                if data1 < data2:
                    if data1 < data3:
                        return 1 # (0, 1, 3, 2)
                    else:
                        return 0 # (0, 2, 3, 1)
                else:
                    return 1 # (0, 3, 2, 1)
            else:
                if data0 < data2:
                    if data1 < data2:
                        return 1 # (1, 2, 3, 0)
                    else:
                        return 0 # (1, 3, 2, 0)
                else:
                    return 1 # (2, 3, 1, 0)
    else:
        if data2 < data3:
            if data0 < data3:
                if data0 < data2:
                    return 1 # (1, 0, 2, 3)
                else:
                    if data1 < data2:
                        return 0 # (2, 0, 1, 3)
                    else:
                        return 1 # (2, 1, 0, 3)
            else:
                if data1 < data2:
                    return 1 # (3, 0, 1, 2)
                else:
                    if data1 < data3:
                        return 0 # (3, 1, 0, 2)
                    else:
                        return 1 # (3, 2, 0, 1)
        else:
            if data0 < data2:
                if data0 < data3:
                    return 0 # (1, 0, 3, 2)
                else:
                    if data1 < data3:
                        return 1 # (2, 0, 3, 1)
                    else:
                        return 0 # (2, 1, 3, 0)
            else:
                if data1 < data2:
                    if data1 < data3:
                        return 0 # (3, 0, 2, 1)
                    else:
                        return 1 # (3, 1, 2, 0)
                else:
                    return 0 # (3, 2, 1, 0)

It's easy to change the code so it generates this version instead, so I won't show it.
Are the if/else:s worthwhile?
This is written the day after I wrote the previous text. In the above, I minimized the number of comparisons, at the expense of a lot of code generation. But the sorting network doesn't add that much overhead, and it's clearly a lot easier to implement. The real test shouldn't be how much faster the if/else decision tree is over the Shell sort solution, but how much faster it is compared to the optimal sorting network.
So I did that. Here's the N=4 and N=5 code, which is clearly simpler than the 71 and 349 lines of code from the decision tree:
def parity4_network(data):
    # N=4 Bose-Nelson sorting network
    num_swaps = 0
    x0, x1, x2, x3 = data
    if x1 < x0:
        num_swaps += 1
        x0, x1 = x1, x0
    if x3 < x2:
        num_swaps += 1
        x2, x3 = x3, x2
    if x2 < x0:
        num_swaps += 1
        x0, x2 = x2, x0
    if x3 < x1:
        num_swaps += 1
        x1, x3 = x3, x1
    if x2 < x1:
        num_swaps += 1
        x1, x2 = x2, x1
    return num_swaps % 2

def parity5_network(data):
    # N=5 Bose-Nelson sorting network
    num_swaps = 0
    x0, x1, x2, x3, x4 = data
    if x1 < x0:
        num_swaps += 1
        x0, x1 = x1, x0
    if x4 < x3:
        num_swaps += 1
        x3, x4 = x4, x3
    if x4 < x2:
        num_swaps += 1
        x2, x4 = x4, x2
    if x3 < x2:
        num_swaps += 1
        x2, x3 = x3, x2
    if x3 < x0:
        num_swaps += 1
        x0, x3 = x3, x0
    if x2 < x0:
        num_swaps += 1
        x0, x2 = x2, x0
    if x4 < x1:
        num_swaps += 1
        x1, x4 = x4, x1
    if x3 < x1:
        num_swaps += 1
        x1, x3 = x3, x1
    if x2 < x1:
        num_swaps += 1
        x1, x2 = x2, x1
    return num_swaps % 2

My timing tests say that the N=4 sorting network takes 1.4x the time of the decision tree with local variables, and the N=5 sorting network takes 1.7x the time. The decision tree is clearly faster.
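A quick correctness check (mine, not from the original essay): the swap count of a sorting network, taken mod 2, must equal the permutation's parity, which can also be computed by brute-force inversion counting. The check below verifies this for all 24 orderings of 4 distinct values:

```python
# Sanity check: the swap parity returned by the N=4 Bose-Nelson network
# must equal the inversion-count parity for every ordering of 4
# distinct values.
from itertools import permutations

def parity4_network(data):
    # N=4 Bose-Nelson sorting network, counting the swaps it performs.
    num_swaps = 0
    x0, x1, x2, x3 = data
    if x1 < x0: num_swaps += 1; x0, x1 = x1, x0
    if x3 < x2: num_swaps += 1; x2, x3 = x3, x2
    if x2 < x0: num_swaps += 1; x0, x2 = x2, x0
    if x3 < x1: num_swaps += 1; x1, x3 = x3, x1
    if x2 < x1: num_swaps += 1; x1, x2 = x2, x1
    return num_swaps % 2

def inversion_parity(data):
    # Count pairs that are out of order; the parity of that count is
    # the permutation's parity.
    n = len(data)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if data[i] > data[j])
    return inversions % 2

for perm in permutations((10, 20, 30, 40)):
    assert parity4_network(perm) == inversion_parity(perm)
print("all 24 permutations agree")
```

Each compare-exchange that fires is a transposition, and the network ends in sorted order, so the number of swaps performed has the same parity as the input permutation.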
If you are really interested in performance then you could push this into C or switch to PyPy. But sometimes you just need the code to be fast enough, which is why it can be worthwhile to explore different solutions and know their tradeoffs.
Modular Programming and Modules
Modular Programming
Modular programming is a software design technique based on the general principle
of modular design. Modular design is an approach which was proven indispensable in engineering
long before the first computers. Modular design means that a complex system is broken down
into smaller parts or components, i.e. modules. These components can be independently created and
tested. In many cases, they can even be used in other systems as well.
There is hardly any product nowadays which doesn't heavily rely on modularisation - cars and mobile phones, for example. Computers belong to those products which are modularised to the utmost. So what's a must for the hardware is an unavoidable necessity for the software running on the computers.
If you want to develop programs which are readable, reliable and maintainable without too much effort, you have to use some kind of modular software design.
Importing Modules

So far we haven't explained what a Python module is. To put it in a nutshell: every file which has the file extension .py and consists of proper Python code is a module! There is no special syntax required to make such a file a module. A module can contain arbitrary objects, for example files, classes or attributes. All those objects can be accessed after an import. There are different ways to import a module. We demonstrate this with the math module:
import math

The module math provides mathematical constants and functions, e.g. π (math.pi), the sine function (math.sin()) and the cosine function (math.cos()). Every attribute or function can only be accessed by putting "math." in front of the name:
>>> math.pi
3.141592653589793
>>> math.sin(math.pi/2)
1.0
>>> math.cos(math.pi/2)
6.123031769111886e-17
>>> math.cos(math.pi)
-1.0
It's possible to import more than one module in one import statement. In this case the module names are separated by commas:
import math, random

import statements can be positioned anywhere in the program, but it's good style to place them directly at the beginning of a program. If only certain objects of a module are needed, we can import only those:
from math import sin, pi

The other objects, e.g. cos, are not available after this import. We are capable of accessing sin and pi directly, i.e. without prefixing them with "math.".
Instead of explicitly importing certain objects from a module, it's also possible to import everything from a module into the namespace of the importing module. This can be achieved by using an asterisk in the import statement:
>>> from math import *
>>> sin(3.01) + tan(cos(2.1)) + e
2.2968833711382604
>>> e
2.718281828459045
>>>

It's not recommended to use the asterisk notation in an import statement, except when working in the interactive Python shell. One reason is that the origin of a name can be quite obscure, because it can't be seen from which module it might have been imported. We will demonstrate another serious complication in the following example:
>>> from numpy import *
>>> from math import *
>>> print(sin(3))
0.1411200080598672
>>> sin(3)
0.1411200080598672
>>>

Let's slightly change the previous example by changing the order of the imports:
>>> from math import *
>>> from numpy import *
>>> print(sin(3))
0.14112000806
>>> sin(3)
0.14112000805986721
>>>
People use the asterisk notation, because it is so convenient. It means avoiding a lot of tedious typing. Another way to shrink the typing effort consists in renaming a namespace. A good example for this is the numpy module. You will hardly find an example or a tutorial, in which they will import this module with the statement
import numpy

It's like an unwritten law to import it with
import numpy as npNow you can prefix all the objects of numpy with "np." instead of "numpy.":
>>> import numpy as np
>>> np.diag([3, 11, 7, 9])
array([[ 3,  0,  0,  0],
       [ 0, 11,  0,  0],
       [ 0,  0,  7,  0],
       [ 0,  0,  0,  9]])
>>> np.e
2.718281828459045
>>>
Designing and Writing Modules

We will now turn our Fibonacci functions into a module. There is hardly anything to be done: we just save the functions in a file with the name fibonacci.py. The newly created module "fibonacci" is ready for use now. We can import this module like any other module in a program or script. We will demonstrate this in the following interactive Python shell:
>>> import fibonacci
>>> fibonacci.fib(7)
13
>>> fibonacci.fib(20)
6765
>>> fibonacci.ifib(42)
267914296
>>>
Don't try to call the recursive version of the Fibonacci function with large arguments like we did with the iterative version. A value like 42 is already too large. You will have to wait for a long time!
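The module's source isn't shown in the text. A minimal fibonacci.py consistent with the shell session above might look like this - treat the bodies as an assumption, with fib as the recursive version and ifib as the iterative one, as the surrounding text implies:

```python
# fibonacci.py -- a minimal sketch of the module, written to match the
# shell session above. The original source isn't shown, so these
# function bodies are assumptions: "fib" is the recursive version and
# "ifib" the iterative one.

def fib(n):
    """Recursive Fibonacci -- simple, but exponential running time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def ifib(n):
    """Iterative Fibonacci -- linear running time."""
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    return old
```

Saved as fibonacci.py on the Python path, this reproduces the results shown: fib(7) is 13, fib(20) is 6765 and ifib(42) is 267914296.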
As you can easily imagine: It's a pain if you have to use those functions often in your program and you always have to type in the fully qualified name, i.e. fibonacci.fib(7). One solution consists in assigning a local name to a module function to get a shorter name:
>>> fib = fibonacci.ifib >>> fib(10) 55 >>>
But it's better, if you import the necessary functions directly into your module, as we will demonstrate further down in this chapter.
More on Modules

Usually, modules contain functions or classes, but there can be "plain" statements in them as well. These statements can be used to initialize the module. They are only executed when the module is imported.
Let's look at a module, which only consists of just one statement:
print("The module is imported now!")
We save it with the name "one_time.py" and import it twice in an interactive session:
>>> import one_time
The module is imported now!
>>> import one_time
>>>

We can see that it was only imported once. Each module can only be imported once per interpreter session or per program run. If you change a module and want to reload it, you must restart the interpreter. In Python 2.x, it was possible to reimport the module by using the built-in reload, i.e. reload(modulename):
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import one_time
The module is imported now!
>>> reload(one_time)
The module is imported now!

This is not possible anymore in Python 3.x.
You will cause the following error:
>>> import one_time
The module is imported now!
>>> reload(one_time)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'reload' is not defined
>>>
Since Python 3.0 the reload built-in function has been moved into the imp standard library module. So it's still possible to reload files as before, but the functionality has to be imported. You have to execute "import imp" and use imp.reload(my_module). Alternatively, you can use "from imp import reload" and then call reload(my_module).
Example with reloading the Python3 way:
$ python3
Python 3.1.2 (r312:79147, Sep 27 2010, 09:57:50)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from imp import reload
>>> import one_time
The module is imported now!
>>> reload(one_time)
The module is imported now!

Since version 3.4 you should use the "importlib" module, because imp.reload is marked as deprecated:
>>> from importlib import reload
>>> import one_time
The module is imported now!
>>> reload(one_time)
The module is imported now!
>>>

A module has a __file__ attribute, which contains the path of the file from which the module was loaded:
>>> import numpy
>>> numpy.__file__
'/usr/lib/python3/dist-packages/numpy/__init__.py'
>>> import random
>>> random.__file__
'/usr/lib/python3.4/random.py'
>>>

The __file__ attribute doesn't always exist. This is the case with modules which are statically linked C libraries.
>>> import math
>>> math.__file__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute '__file__'
>>>
Content of a Module
With the built-in function dir() and the name of the module as an argument, you can list all valid attributes and methods for that module.
>>> import math
>>> dir(math)
['__doc__', '__loader__', '__name__', '__package__', '__spec__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isfinite', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'log2', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']
>>>

When dir() is called without an argument, it returns a list with the names in the current local scope:
>>> import math
>>> cities = ["New York", "Toronto", "Berlin", "Washington"]
>>> dir()
['__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'cities', 'math']
>>>

It's possible to get a list of the built-in functions, exceptions, and other objects by importing the builtins module:
>>> import builtins
>>> dir(builtins)
['ArithmeticError', 'AssertionError', 'AttributeError', ..., 'vars', 'zip']
>>>
Packages

It's possible to put several modules into a package. A package is basically a directory with Python files.
A package is imported like a "normal" module.
Each directory inside of the Python path which should function as a package needs to contain a file named __init__.py.
Otherwise the package can't be used as a package, i.e. it can't be imported!
First of all, we need a directory. The name of this directory will be the name of the package which we want to create. We will call our package "SimplePackage". This directory needs to contain a file with the name "__init__.py". This file can be empty, or it can contain valid Python code. This code will be executed when the package is imported, so it can be used to initialize the package, e.g. to make sure that some other modules are imported or some values are set. Now we can put into this directory all the Python files which will be the submodules of our package.
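With the directory name chosen above, the package layout looks like this (a sketch of the files described in the text):

```
SimplePackage/
    __init__.py
    a.py
    b.py
```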
We create two simple files a.py and b.py just for the sake of filling the package with modules.
The content of a.py:
def bar():
    print("Hello, function 'bar' from module 'a' calling")
The content of b.py:
def foo(): print("Hello, function 'foo' from module 'b' calling")We can import this module in the following way, if we want to use the modules a and b:
>>> from SimplePackage import a, b
>>> a.bar()
Hello, function 'bar' from module 'a' calling
>>> b.foo()
Hello, function 'foo' from module 'b' calling
>>>

It's not possible to access a and b if you import only SimplePackage:
>>> import SimplePackage
>>> SimplePackage.a.bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'a'
>>> SimplePackage.a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'a'
>>>

We can use the file __init__.py to automatically import a and b as well when we import SimplePackage. We add the following lines to the empty file __init__.py:
import SimplePackage.a
import SimplePackage.b

Now it works:
>>> import SimplePackage
>>> SimplePackage.a.bar()
Hello, function 'bar' from module 'a' calling
Next Chapter: Regular Expressions
HAVING Clause in JPQL (Java Persistence Query Language)

This example shows the use of the HAVING clause in JPQL (Java Persistence Query Language). The HAVING clause is used together with the GROUP BY clause to apply a search condition to groups or aggregates. If you don't use a GROUP BY clause, the HAVING clause is applied to the entire query.
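As an illustration (the entity name Product and its fields are hypothetical, not from the original post), a JPQL aggregate query with HAVING can be issued from a JPA EntityManager like this:

```java
// Hypothetical entity "Product" with "category" and "price" fields.
// Requires a configured JPA EntityManager "em"; this is a sketch of
// the clause in use, not a runnable standalone program.
Query query = em.createQuery(
    "SELECT p.category, AVG(p.price) FROM Product p " +
    "GROUP BY p.category " +
    "HAVING AVG(p.price) > 100");
List<Object[]> results = query.getResultList();
for (Object[] row : results) {
    System.out.println(row[0] + " -> " + row[1]);
}
```

Without the GROUP BY line, the HAVING condition would be applied to the aggregate over the entire result set, as described above.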
End Date Cannot Be Populated Correctly In Account Hierarchies Page
Last updated on MARCH 08, 2017
Applies to:Oracle Fusion General Ledger - Version 11.1.1.5.1 to 11.1.9.2.0 [Release 1.0]
Oracle Fusion General Ledger Cloud Service - Version 11.1.4.0.0 to 11.1.9.2.0 [Release 1.0]
Information in this document applies to any platform.
Symptoms
On : 11.1.8.0.0 version, Functional Setup Manager
User has entered data for values and hierarchies in the spreadsheet template. User has specified start and end date for hierarchy as 2001/01/01 and 2999/12/31 respectively.
When importing this spreadsheet data using 'Segment Values and Hierarchies Interface' the following error occurs.
Also end date shown in Account Hierarchies page is not same as entered in the template.
ERROR
-----------------------
import segment values and hierarchy completed with error
Start date 2001-01-01 must be before end date 1999-12-31.
STEPS
-----------------------
The issue can be reproduced at will with the following steps:
1. Fill the template for importing values and hierarchies.
2. Specify start and end date for hierarchy as 2001/01/01 and 2999/12/31
Cause
|
I just missed my connecting flight in Chicago, so have 3 hours to pass until the next, and decided to post some code I finally got around to writing on the plane from Zurich.
I've had a few requests for code showing how to plot using the .NET API in AutoCAD. There's an existing ObjectARX (C++) sample on the SDK, under samples/editor/AsdkPlotAPI, but there isn't a publicly posted .NET version right now.
Here's the C# code I put together. Please bear in mind that it was written during less than ideal coding conditions, and I haven't spent a substantial amount of time going through it... it seems to work fine for me, but do post a comment if you have trouble with it.
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Geometry;
using Autodesk.AutoCAD.PlottingServices;
namespace PlottingApplication
{
public class PlottingCommands
{
[CommandMethod("simplot")]
static public void SimplePlot()
{
Document doc =
Application.DocumentManager.MdiActiveDocument;
Editor ed = doc.Editor;
Database db = doc.Database;
Transaction tr =
db.TransactionManager.StartTransaction();
using (tr)
{
// We'll be plotting the current layout
BlockTableRecord btr =
(BlockTableRecord)tr.GetObject(
db.CurrentSpaceId,
OpenMode.ForRead
);
Layout lo =
(Layout)tr.GetObject(
btr.LayoutId,
OpenMode.ForRead
);
// We need a PlotInfo object
// linked to the layout
PlotInfo pi = new PlotInfo();
pi.Layout = btr.LayoutId;
// We need a PlotSettings object
// based on the layout settings
// which we then customize
PlotSettings ps =
new PlotSettings(lo.ModelType);
ps.CopyFrom(lo);
// The PlotSettingsValidator helps
// create a valid PlotSettings object
PlotSettingsValidator psv =
PlotSettingsValidator.Current;
// We'll plot the extents, centered and
// scaled to fit
psv.SetPlotType(
ps,
Autodesk.AutoCAD.DatabaseServices.PlotType.Extents
);
psv.SetUseStandardScale(ps, true);
psv.SetStdScaleType(ps, StdScaleType.ScaleToFit);
psv.SetPlotCentered(ps, true);
// We'll use the standard DWF PC3, as
// for today we're just plotting to file
psv.SetPlotConfigurationName(
ps,
"DWF6 ePlot.pc3",
"ANSI_A_(8.50_x_11.00_Inches)"
);
// We need to link the PlotInfo to the
// PlotSettings and then validate it
pi.OverrideSettings = ps;
PlotInfoValidator piv =
new PlotInfoValidator();
piv.MediaMatchingPolicy =
MatchingPolicy.MatchEnabled;
piv.Validate(pi);
// A PlotEngine does the actual plotting
// (can also create one for Preview)
if (PlotFactory.ProcessPlotState ==
ProcessPlotState.NotPlotting)
{
PlotEngine pe =
PlotFactory.CreatePublishEngine();
using (pe)
{
// Create a Progress Dialog to provide info
// and allow the user to cancel
PlotProgressDialog ppd =
new PlotProgressDialog(false, 1, true);
using (ppd)
{
ppd.set_PlotMsgString(
PlotMessageIndex.DialogTitle,
"Custom Plot Progress"
);
ppd.set_PlotMsgString(
PlotMessageIndex.CancelJobButtonMessage,
"Cancel Job"
);
ppd.set_PlotMsgString(
PlotMessageIndex.CancelSheetButtonMessage,
"Cancel Sheet"
);
ppd.set_PlotMsgString(
PlotMessageIndex.SheetSetProgressCaption,
"Sheet Set Progress"
);
ppd.set_PlotMsgString(
PlotMessageIndex.SheetProgressCaption,
"Sheet Progress"
);
ppd.LowerPlotProgressRange = 0;
ppd.UpperPlotProgressRange = 100;
ppd.PlotProgressPos = 0;
// Let's start the plot, at last
ppd.OnBeginPlot();
ppd.IsVisible = true;
pe.BeginPlot(ppd, null);
// We'll be plotting a single document
pe.BeginDocument(
pi,
doc.Name,
null,
1,
true, // Let's plot to file
"c:\\test-output"
);
// Which contains a single sheet
ppd.OnBeginSheet();
ppd.LowerSheetProgressRange = 0;
ppd.UpperSheetProgressRange = 100;
ppd.SheetProgressPos = 0;
PlotPageInfo ppi = new PlotPageInfo();
pe.BeginPage(
ppi,
pi,
true,
null
);
pe.BeginGenerateGraphics(null);
pe.EndGenerateGraphics(null);
// Finish the sheet
pe.EndPage(null);
ppd.SheetProgressPos = 100;
ppd.OnEndSheet();
// Finish the document
pe.EndDocument(null);
// And finish the plot
ppd.PlotProgressPos = 100;
ppd.OnEndPlot();
pe.EndPlot(null);
}
}
}
else
{
ed.WriteMessage(
"\nAnother plot is in progress."
);
}
}
}
}
}
A few comments on the code: I chose to plot to a file using the standard DWF plot driver/PC3 file, mainly as it avoids having to second-guess what printers/plotters everyone uses. :-) The SDK sample populates comboboxes in a dialog to allow selection of the device and the media size, but I've just gone with a choice that should work on all systems.
Here's what you should see when you launch the SIMPLOT command:
The output file should be created in "c:\test-output.dwf".
I added a simple check to fail gracefully when another plot is in progress... as this code will drive either a foreground or a background plot (depending on the value of the BACKGROUNDPLOT system variable), there's clear scope for failure if the code doesn't check via PlotFactory.ProcessPlotState (or handle the exception if another plot is already happening when you call PlotFactory.CreatePublishEngine()).
Now that I'm online I've found there's quite a similar code sample on the ADN site (also converted from the original SDK sample, I suspect):
How to plot using AutoCAD's .NET Managed API?
Sir,
I am Akash, pursuing Masters in Construction Management from Indian Institute of Technology- Madras, Chennai (India)
I had been using the VB 6.0 version till date and still would be continuing my work with that..
I have an application to be coded where in 1) I have a set of coordinates say 6 coordinates for a cuboidal shape in excel sheet.
2) I want to use them in Autocad to plot a drawing (every cuboid) based on these coordinates by importing from Excel file.
3) I have a series of such cuboids to be connected to make a long span. I want this Plot to be finally displayed on VB form where my application is there.
I look forward to receive some useful information from you at the earliest..
PS: I have a bigger constraint in switching over to .Net for my application hence, please keep VB 6.0 in mind..
Posted by: Akash | March 05, 2008 at 08:30 AM
Akash - please have your institute join ADN (if they're not already a member) or submit your question to one of the discussion groups.
Regards,
Kean
Posted by: Kean | March 05, 2008 at 09:10 AM
G'day Kean
Thanks for a great site.
I am having trouble doing consecutive prints from one method call. I am trying to produce a DWF and a PDF of a drawing, but the second plot fails because the first plot does not start immediately. I have tried sleeping the thread and checking the ProcessPlotState. I also tried starting the plot in a separate thread, but that is crashing AutoCAD (even with try/catch blocks).
Any ideas?
Regards
Michael
Posted by: Michael Csikos | May 20, 2008 at 07:35 AM
Hi Michael,
You probably need to make sure background plotting is disabled. If the BACKGROUNDPLOT sysvar is set to 1 or 3, then PLOT will background plot (which is not what you want).
Cheers,
Kean
Posted by: Kean | May 20, 2008 at 09:05 AM
Thanks Kean, that did the trick.
Posted by: Michael Csikos | May 28, 2008 at 06:38 AM
Dear Kean,
I'm currently working on VBA for AutoCAD Mechanical 2007. I wonder if there is a way to edit the pc3/pmp files programatically from this interface. The goal is to enable the users to create custom sized flowsheets without manually add the Papersize to the pc3.
So I'd like to create a mask, where the user types the desired width (height is given by the rolls in the plotter...) and they get their layout correctly sized. What I did as a workaround is that I set up a 15 m long papersize for all paper rolls and set the Cutting mode to extents. The only problem is, if I set up a Page Layout, users will get the whole 15 m as paper and this will be confusing I think... So can I access these settings programatically from VBA?
Thanks in advance,
Daniel
Posted by: Daniel Balogh | September 11, 2008 at 07:46 AM
Dear Daniel,
The recommended way of changing plot configurations is via the API, not by modifying PC3 files. This API is exposed via C++ (ObjectARX) and .NET.
I don't know of a way to access the API directly from VBA - you may need to include a C++ or .NET module in your application for this.
Regards,
Kean
Posted by: Kean | September 11, 2008 at 08:24 AM
I'm writing some code for automatic PDF plotting (Won't go into details) but I'm new to AutoCAD.Net and I've been having problems adapting your code.
I've tried stipping back my changes up to the point where it is basically a copy and paste of the above code without the progress bar in a hope to find what's wrong and I've made it to the point where even though my code is exactly the same I get this error: Autodesk.AutoCAD.Runtime.Exception: eInvalidInput
at Autodesk.AutoCAD.DatabaseServices.PlotSettingsValidator.SetPlotConfigurationName(PlotSettings plotSet, String plotDeviceName, String mediaName)
My code for that line being:
Psv.SetPlotConfigurationName(Ps, "PDF Creator - A1.pc3", "A1");
Any ideas?
Posted by: Andrew | January 05, 2009 at 06:21 AM
My best suggestion would be to double-check your assumptions by testing your code against another device/driver.
Kean
Posted by: Kean Walmsley | January 12, 2009 at 09:44 AM
I've made it to the point where this all works except for the actual plot. It gets to the line "Pe.BeginPage(PpInfo, PInfo, true, null);"
and then as soon as I step past it I get an error in AutoCAD saying "INTERNAL ERROR: !dbplotset.cpp@422: eLockViolation"
Nobody seems to be able to give me a straight answer as to what this means, or why my code has only minor differences to yours (negligible) and yet when I run them together mine hits that point and crashes while yours doesn't.
Posted by: Andrew | January 22, 2009 at 02:55 AM
A lock violation typically means that access to a document has been attempted without it being locked or a second attempt has been made to access a currently locked document - I forget exactly which.
The current document is locked implicitly when a command is entered, so perhaps you're not plotting the current document and have forgotten to lock it?
Another (less likely) possibility is that it's somehow related to background plotting.
If these suggestions don't help I suggest contacting the ADN team (if you're a member), or post it to the AutoCAD .NET Discussion Group, if not.
Kean
Posted by: Kean Walmsley | January 22, 2009 at 09:42 AM
I figured it was something to do with access to the document. That being said, I thought I had already tested for that by commenting out practically my entire program apart from the plotting code and yet I'm still getting the same error. I then thought it was because the command is being run from a modeless dialog, but figured that wouldn't have any influence on a command run from it.
At the moment my entire program is a foreach statement: foreach(string CurrentDwg in LstDwg.Items)
this encloses: Autodesk.AutoCAD.ApplicationServices.Application.DocumentManager.Open(CurrentDwg, false);
and then proceeds straight to your plot code. After this it closes and discards the changes.
Posted by: Andrew | January 26, 2009 at 11:34 PM
How is "the command" being run from a modeless dialog? If the code is simply behind a button (or another control), then it's running in the session context, so you must lock documents manually.
The best way to drive functionality from a modeless dialog is to define actual commands and fire them off using SendStringToExecute(). That takes care of locking and lots of other bits and pieces.
Although as you're looping through documents, you'll need to lock them yourself.
Kean
Posted by: Kean Walmsley | January 27, 2009 at 06:23 AM
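The locking rule discussed above can be sketched abstractly. This Python toy (all names invented; AutoCAD's actual API is .NET) only models the idea that mutating a document without holding its lock fails, the way AutoCAD raises eLockViolation:

```python
# Toy model of per-document locking: mutation requires holding the lock,
# mirroring AutoCAD's implicit lock during a command vs. explicit
# Document.LockDocument() in the session context.
class Document:
    def __init__(self):
        self.locked = False
        self.entities = []

    def lock(self):                 # rough analogue of LockDocument()
        self.locked = True
        return self

    def unlock(self):
        self.locked = False

    def add_entity(self, entity):
        if not self.locked:
            raise RuntimeError("eLockViolation: document not locked")
        self.entities.append(entity)

doc = Document()
try:
    doc.add_entity("line")          # fails: no lock held
except RuntimeError as err:
    print(err)

doc.lock()
doc.add_entity("line")              # succeeds while locked
doc.unlock()
print(doc.entities)                 # ['line']
```

The point of the sketch is only the invariant, not the API: a session-context command looping over documents must take the lock itself.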
Once you enter the CommandMethod I've set up it opens the dialog. You select the settings you want to use (drawings, job number, etc.) and press the "Go" button.
The above code is then run.
The only thing I can think of now is closing each drawing, and then opening as read only before the plot code.
I'll look into SendStringToExecute but for my purposes it may not work too well. At least I know where I'm going wrong now. Sort of.
Posted by: Andrew | January 28, 2009 at 06:32 AM
Try using Document.LockDocument() before you plot each one.
SendStringToExecute() is actually a red herring, in this case: you have a session command that is working with multiple documents, so document locking is most likely the answer.
Kean
Posted by: Kean Walmsley | January 28, 2009 at 12:50 PM
WORKED FIRST TIME!
Can't believe the entire problem was missing 3 words!
Posted by: Andrew | January 29, 2009 at 04:36 AM
Hello Kean!
If I run your function in modelspace, all is ok, but if I run your function when a layout with viewport is active i always get the error "eNotCurrentLayout" at "piv.Validate(pi);"!
Is their something which i don't understand or works this function only if the modelspace is active?
thanks in advance
Patrick
Posted by: Patrick Ottenschläger | February 23, 2009 at 03:55 PM
Hi Patrick,
It works fine when a paperspace layout is selected, but I see it does fail if the modelspace viewport within that layout is active.
I haven't coded to check for that (which it would take me some work to do), but there are couple of easy ways to code defensively for this case.
1) You could execute a PSPACE command prior to executing the plot.
2) You could use COM to do the same...
Include the namespace for AutoCAD's COM interop assembly (after adding the assembly reference, of course):
using Autodesk.AutoCAD.Interop;
And this code early on in the SIMPLOT command:
AcadDocument ad =
(AcadDocument)doc.AcadDocument;
ad.MSpace = false;
I hope this helps,
Kean
Posted by: Kean Walmsley | February 24, 2009 at 01:56 PM
I have used your example to some success, with the addition of reading the active layout's canonical media size before setting the new plot configuration.
Having solved that one, I am having difficulty modifying to produce a plot preview. Do you have any pointers?
Posted by: markc | June 01, 2009 at 07:22 PM
This post should help.
Kean
Posted by: Kean Walmsley | June 01, 2009 at 07:28 PM
Thanks for the pointers to the preview code.
I have managed finally to program a custom eTransmit command that includes the plot to PDF and DWF along with a bound drawing.
I was performing some final testing with a particularly large file (70MB) on a standard ISO plot. The plot to PDF can be completed manually, however when I execute the plot using .net I receive a System.AccessViolationException.
Any attempts to avoid this using a Try Catch results in a pure virtual function call.
The error occurs at:
pe.BeginGenerateGraphics(null);
The plot does appear to begin to work...
Any ideas?
Posted by: markc | June 25, 2009 at 11:59 AM
Hi Mark,
Sorry - I have no idea what's happening, although something about it makes it smell like a memory issue. I'd check to see whether the issue is reproducible with a slightly modified scenario:
a) on another system
b) with another, similarly-sized, drawing
c) using a different (perhaps non-PDF) driver
I'd also suggest submitting your question to the ADN team, if you're a member.
Regards,
Kean
Posted by: Kean Walmsley | June 25, 2009 at 03:52 PM
(meaning a, b or c, not a, b and c... :-)
Kean
Posted by: Kean Walmsley | June 25, 2009 at 03:53 PM
Hello Kean, Do you have any sample in RealDWG application? thank you
Posted by: Alberto Benitez | June 29, 2009 at 06:23 PM
Here's a sample, although it's unrelated to this post (you can't plot from RealDWG).
Kean
Posted by: Kean Walmsley | June 30, 2009 at 09:17 AM
http://through-the-interface.typepad.com/through_the_interface/2007/09/driving-a-basic.html
I have seen the light.
As part of a new project for work I have finally broken
down and learned Struts and JSPs. Struts is tremendously useful. I wish it had been around five years ago when I was up to my ears
in web-based applications. But JSPs I've never been impressed with. They are
good for templating but the combination of java code and html always
seemed crufty. I've been minimizing the amount of code I put in them and certainly prefer to use something like XSL to keep the UI and code separate.
Things are going fine and well, except that now I have a new
problem. I want to set up a hosting environment for some friends and family. My sister has a weblog which I wrote as a JSP. I've tried to
minimize the amount of java code in it, but it's still brittle. She can code the HTML around it but every now and then her editor breaks it. If she codes by hand then she's even more likely to damage something, and either way she certainly can't modify it with new options. She simply
doesn't understand Java code, and honestly there's no reason for her to do so.
But now I've discovered custom taglibs. These are brilliant!
From my sister's point of view there are now a few magic tags she can
use to do something. My scripts are no longer code, just extensions to
HTML, as if there was a new version of Netscape with support for a magic
<blog> tag. Back then we were eager to update our browsers and get
crazy-cool new tags, instead of being frustrated at the lack of proper support for the existing tags. Ahh, those were the days. :)
I think this teaches us a good lesson. Presentation of technology
is just as important as it's implementation. To a programmer there
is no difference between:
<% Blog blog = Blog.getBlog("rachel");
   for (int i = 0; i < blog.getNumEntries(); i++) {
       String entry = blog.getEntry(i); // hypothetical accessor; the original never defined "entry"
%>
this is my <%=entry%>
<%
   }
%>
and
<blog name="rachel">
this is my <entry/>
</blog>
But to a novice computer user who just knows HTML there is a world of
difference. With a simple tag lib I have hidden all details about iteration, data access and authentication. Now that's some serious
power.
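What the tag hides can be shown in miniature. This toy Python expander (not JSP; all names are invented for illustration) does the lookup and iteration that the scriptlet version exposed to the page author:

```python
# A "custom tag" is just a name bound to code: the page author writes the
# template, and the tag implementation hides data access and iteration.
def render_blog(name, entry_template, entries_by_blog):
    """Expand a <blog>-like tag: look up the data, loop, fill the template."""
    return "\n".join(entry_template.replace("<entry/>", e)
                     for e in entries_by_blog[name])

entries = {"rachel": ["first post", "second post"]}
print(render_blog("rachel", "this is my <entry/>", entries))
# this is my first post
# this is my second post
```

The page author only ever sees the template string; the loop and the data source live behind the tag, which is exactly the separation the post is after.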
This realization has led me on a quest to find more taglibs that
fit a hosting mentality. They are hard to find. Most taglibs are still designed for programmers. Helpers for coding frameworks. Very few are end user oriented. And the few that I've found haven't been so great. Even with such a simple user interface it's
hard to find tags that are genuinely easy to use. It takes a lot of
work to design usable software, even if it's small, and many times
it's easy to create the quick solution and move on to something else.
Maybe that's why most tags are still for developers.
My challenge this weekend was to find a tree tag. Something that
would generate a DHTML tree with as little work as possible. I was
surprised to see how many trees would not even let you specify
the tree data from within the webpage. One had three separate files.
One for the javascript, one for the style configuration, and one for
the data configuration. Great for an advanced webdesigner but not so good for the hosted user. Even some of the taglib based tree scripts
did not have optimal user interfaces.
Fortunately we live in the great world of open source where, if you don't like something, you can make it better instead of complaining. I downloaded one version, rewrote the tag classes, added some jar packaging, stirred in one Ant build file, and voilà: you get treetag, now available at code.joshy.org.
It needs more cleanup but it functions, and the UI (API) is very simple. Go check it out.
I only have one question now. To make these tags truly hosting-safe
(ie, drop in a magic tag and it just simply works) I want a way to
make the tags already be preloaded into the default namespace. Then
my users won't have to declare the taglib at the top of the page or
use namespace prefixes.
I would like to thank Guy Davis
for the original tags which I based mine on.
https://weblogs.java.net/blog/joshy/archive/2003/10/i_have_seen_the.html
Create a connection:
final client = await Client.connect('redis://localhost:6379');
Get a type-safe view of the available Redis Commands:
final commands = client.asCommands<String, String>();
Run some commands:
await commands.set('key', 'value');
final value = await commands.get('key');
print(value);
Disconnect:
await client.disconnect();
Connection string must follow the following pattern:
redis://{host}:{port}
Example:
redis://localhost:6379
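As a quick illustration, the connection-string pattern above can be handled by a standard URI parser. This Python sketch is not part of dartis (which is Dart); it only shows the shape of the URI, and the 6379 fallback is an assumption for hosts given without a port.

```python
# Parse a redis://{host}:{port} connection string with the standard library.
from urllib.parse import urlparse

def parse_redis_uri(uri):
    """Split a redis://{host}:{port} URI into (host, port)."""
    parsed = urlparse(uri)
    if parsed.scheme != "redis":
        raise ValueError("expected a redis:// URI")
    # Assumption: fall back to Redis' conventional port when none is given.
    return parsed.hostname, parsed.port or 6379

print(parse_redis_uri("redis://localhost:6379"))  # ('localhost', 6379)
```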
Clients can work in the following modes:
In this mode the client can send any command to the Redis server.
// Connect
final client = await Client.connect('redis://localhost:6379');

// Run some commands
final commands = client.asCommands<String, String>();
final result = await commands.ping();
print(result);

// Disconnect
await client.disconnect();
See
client.dart in the
example folder.
In this mode the only allowed commands are
subscribe,
unsubscribe,
psubscribe,
punsubscribe,
ping and
quit.
The replies to subscription and unsubscription commands along with the published messages are received in the form of events, so that the client can just read a coherent
Stream of events.
final pubsub = await PubSub.connect<String, String>('redis://localhost:6379');

// Subscribe to some channels and patterns
pubsub
  ..subscribe(channel: 'dev.dart')
  ..psubscribe(pattern: 'dartlang.news.*');

// Listen for server replies
pubsub.stream.listen(print, onError: print);
See
pubsub.dart in the
example folder.
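The event-stream model above can be mimicked in a few lines. This toy Python sketch (names invented; not the dartis API) shows why a single coherent stream works: subscription replies and published messages are simply appended as events in arrival order.

```python
# Toy pub/sub: every reply and message becomes one event on one stream.
class ToyPubSub:
    def __init__(self):
        self.channels = set()
        self.events = []          # stands in for the Stream of events

    def subscribe(self, channel):
        self.channels.add(channel)
        self.events.append(("subscribe", channel))

    def publish(self, channel, message):
        # Only delivered if this client is subscribed to the channel.
        if channel in self.channels:
            self.events.append(("message", channel, message))

bus = ToyPubSub()
bus.subscribe("dev.dart")
bus.publish("dev.dart", "2.0 released")
print(bus.events)
# [('subscribe', 'dev.dart'), ('message', 'dev.dart', '2.0 released')]
```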
In this mode the commands are sent to the server using the "inline command" format. Ideal for interactive sessions, like a Telnet session.
final terminal = await Terminal.connect('redis://localhost:6379');

// Run some commands
terminal.run('PING\r\n'.codeUnits);

// Listen for server replies
terminal.stream.listen(print);
Note that in this mode the commands are just lists of bytes with a trailing
\r\n.
See
terminal.dart in the
example folder.
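The inline format is easy to see in isolation. A short Python sketch (mirroring, not using, the Dart API) of building such byte lists:

```python
# An inline command is just the words joined by spaces, terminated by CRLF,
# as raw bytes -- the format the Terminal mode expects.
def inline_command(*words):
    """Join command words and terminate with CRLF, returning raw bytes."""
    return (" ".join(words) + "\r\n").encode("ascii")

print(inline_command("PING"))          # b'PING\r\n'
print(inline_command("SET", "k", "v")) # b'SET k v\r\n'
```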
In this mode the client receives all the commands processed by the Redis server. Useful for debugging.
final monitor = await Monitor.connect('redis://localhost:6379');

// Start the monitor mode
monitor.start();

// Listen for server replies
monitor.stream.listen(print);
In this mode the client cannot run any command.
See
monitor.dart in the
example folder.
The method
asCommands<K, V> of the client returns a type-safe view of the available Redis Commands.
K is the type to be used for Redis keys and
V for values. Most times, using
String for keys and values is what you want:
final commands = client.asCommands<String, String>();
However, it's fine to call this method several times in order to get views with different parameterized types:
final strings = client.asCommands<String, String>();
final bytes = client.asCommands<String, List<int>>();

String title = await strings.get('book:24902:title');
List<int> cover = await bytes.get('book:24902:cover');

// ERROR
String author = await bytes.get('book:24092:author');
Keep in mind that Redis stores sequences of bytes, not just
Strings.
Pipelining is used to send multiple commands to the server in a single call, instead of one call per command.
In this mode the client stores all the commands locally, without sending them to the server, until the
flush method is called.
// Start pipeline
client.pipeline();

// Run some commands
commands.incr('product:9238:views').then(print);
commands.incr('product:1725:views').then(print);
commands.incr('product:4560:views').then(print);

// Flush pipeline
client.flush();
The method
flush returns a list of
Futures that can be used to wait for the completion of all the commands.
// Start pipeline
client.pipeline();

// Run some commands
commands
  ..incr('product:9238:views')
  ..incr('product:1725:views')
  ..incr('product:4560:views');

// Flush pipeline
final futures = client.flush();

// Wait for all the Futures
await Future.wait<Object>(futures).then(print);
Please note that in this mode
await cannot be used to wait for the result of each command, because the returned
Futures will not be completed until
flush is called.
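The buffering contract can be modeled in a few lines. This is a deliberately simplified toy, not the dartis implementation: commands are queued locally and nothing executes until flush().

```python
# Toy pipeline: buffer commands, execute them all in one flush.
class ToyPipeline:
    def __init__(self, executor):
        self._executor = executor   # callable that runs one command
        self._queue = []

    def send(self, *command):
        self._queue.append(command)

    def flush(self):
        """Execute every buffered command in one pass; return all results."""
        results = [self._executor(cmd) for cmd in self._queue]
        self._queue.clear()
        return results

counter = {"views": 0}
def fake_incr(cmd):             # stand-in for a round trip to the server
    counter["views"] += 1
    return counter["views"]

pipe = ToyPipeline(fake_incr)
pipe.send("INCR", "product:9238:views")
pipe.send("INCR", "product:1725:views")
print(pipe.flush())  # [1, 2]
```

Nothing resolves before flush, which is exactly why awaiting an individual command inside a pipeline would deadlock.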
In this mode the server doesn't send replies for the commands, so the client doesn't need to wait for them.
This mode is started by running the
clientReply command with
ReplyMode.off or
ReplyMode.skip.
In this mode the
Futures are immediately completed with
null.
// Discard all the server replies
await commands.clientReply(ReplyMode.off);

// Run some commands
await commands.ping().then(print); // null
await commands.ping().then(print); // null
await commands.ping().then(print); // null
The following modes are available:
ReplyMode.off: In this mode the server will not reply to client commands.
ReplyMode.skip: In this mode the server will skip the reply for the next command only.
ReplyMode.on: In this mode the server will return a reply to every command.
Redis allows grouping commands together so that they are executed as a single transaction.
A transaction begins by running the
multi command, ends by running the
exec command, and can be aborted by running the
discard command.
// Start transaction
await commands.multi();

// Run some commands
commands.set(key, 1).then(print);
commands.incr(key).then(print);

// End transaction
await commands.exec(); // Or abort: commands.discard()
The
watch command can be used to perform optimistic locking over some keys. A transaction will fail if the "watched" keys are modified by another client.
// Watch
await commands.watch(key: key);

// Start transaction
await commands.multi();

// Run some commands
commands.set(key, 1).then(print);
commands.incr(key).then(print);

// End transaction
await commands.exec();
Please note that in this mode
await cannot be used to wait for the result of each command, because the returned
Futures will not be completed until
exec or
discard is called.
Don't run the
clientReply command inside a transaction. If the "fire and forget" mode is activated or deactivated inside a transaction, the client could go out of sync with the server.
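The WATCH semantics can be sketched as a version check. This Python toy models only the behavior (all names are invented; it is neither dartis nor Redis internals): an EXEC commits only if the watched key was untouched since WATCH.

```python
# Toy optimistic locking in the style of WATCH/MULTI/EXEC.
class ToyStore:
    def __init__(self):
        self.data = {}
        self.version = {}

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def watch(self, key):
        """Remember the key's current version (like WATCH)."""
        return key, self.version.get(key, 0)

    def exec_multi(self, token, ops):
        """Apply ops only if the watched key was not touched (like EXEC)."""
        key, seen = token
        if self.version.get(key, 0) != seen:
            return None          # transaction aborted
        for k, v in ops:
            self.set(k, v)
        return True

store = ToyStore()
store.set("balance", 100)

token = store.watch("balance")
assert store.exec_multi(token, [("balance", 90)]) is True   # commits

token = store.watch("balance")
store.set("balance", 50)        # another client interferes
assert store.exec_multi(token, [("balance", 40)]) is None   # aborted
```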
Note that the Redis documentation suggests Lua scripting as a more powerful alternative to transactions.
Redis allows running Lua scripts on the server.
Scripts can be executed with the
eval and
evalsha commands.
// Evaluate
await commands.eval<void>(
    'return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}',
    keys: [key1, key2],
    args: ['first', 'second']);
The result of a script can be anything. It can be ignored, like in the above example, or it can be mapped to a more useful type.
// Maps a list of server replies to a list of Strings
class _Mapper implements Mapper<List<String>> {
  @override
  List<String> map(Reply reply, RedisCodec codec) =>
      codec.decode<List<String>>(reply);
}

...

// Evaluate with a mapper
final results = await commands.eval<List<String>>(
    'return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}',
    keys: [key1, key2],
    args: ['first', 'second'],
    mapper: _Mapper());

print(results); // ['key1', 'key2', 'first', 'second']
Encoders are used for serializing all the values sent to Redis. They convert instances of any type to lists of bytes. Encoders for types
int,
double,
String and
List<int> are registered by default. UTF-8 is used for
Strings.
Custom encoders can be written by extending
Encoder or
Converter.
Example:
An encoder that encodes instances of
DateTime to lists of bytes:
class DateTimeEncoder extends Encoder<DateTime> {
  @override
  List<int> convert(DateTime value, [RedisCodec codec]) =>
      utf8.encode(value.toString());
}
Decoders are used for deserializing all the replies received from Redis. They convert lists of bytes to instances of any type, and arrays of server replies to lists of instances of any type. Decoders for types
int,
double,
String,
List<int>,
List<double>,
List<String> and
List<List<int>> are registered by default. UTF-8 is used for
Strings.
Custom decoders can be written by extending
Decoder or
Converter.
Example:
A decoder that decodes lists of bytes to instances of
DateTime.
class DateTimeDecoder extends Decoder<SingleReply, DateTime> {
  @override
  DateTime convert(SingleReply value, [RedisCodec codec]) =>
      value.bytes == null ? null : DateTime.parse(utf8.decode(value.bytes));
}
Custom encoders and decoders can be registered using the
codec member of the client:
client.codec.register(
    encoder: DateTimeEncoder(),
    decoder: DateTimeDecoder());
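The registry idea translates directly. A Python sketch (dartis itself is Dart; the class and method names here are invented) mirroring the DateTime example, with UTF-8 text on the wire:

```python
# A minimal codec registry: per-type encoders (value -> bytes) and
# decoders (bytes -> value), registered at runtime.
from datetime import datetime

class Codec:
    def __init__(self):
        self._encoders = {}   # type -> value-to-bytes function
        self._decoders = {}   # type -> bytes-to-value function

    def register(self, type_, encoder, decoder):
        self._encoders[type_] = encoder
        self._decoders[type_] = decoder

    def encode(self, value):
        return self._encoders[type(value)](value)

    def decode(self, type_, raw):
        return self._decoders[type_](raw)

codec = Codec()
codec.register(datetime,
               lambda dt: dt.isoformat().encode("utf-8"),
               lambda raw: datetime.fromisoformat(raw.decode("utf-8")))

stamp = datetime(2018, 1, 2, 3, 4, 5)
raw = codec.encode(stamp)
print(codec.decode(datetime, raw) == stamp)  # True
```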
Custom sets of commands can be written by extending
ModuleBase. This class exposes the method
run, which sends any given command line to Redis, so it can be used to implement the API of any Redis module.
Example:
A module that exposes a
HELLO name command:
class HelloModule extends ModuleBase {
  HelloModule(Client client) : super(client);

  Future<String> hello(String name) =>
      run<String>(<Object>[r'HELLO', name]);
}
Usage:
final module = HelloModule(client);
final message = await module.hello('World!');
print(message);
Note that standard Redis commands can also be rewritten to build custom interfaces.
Example:
An even more type-safe set of commands:
class TypedCommands<K> extends ModuleBase {
  TypedCommands(Client client) : super(client);

  Future<void> set<R>(K key, R value) =>
      run<void>(<Object>[r'SET', key, value]);

  Future<R> get<R>(K key) => run<R>(<Object>[r'GET', key]);
}
Usage:
final commands = TypedCommands<String>(client);

await commands.set<String>('name', 'Bob');
await commands.set<int>('age', 29);
await commands.set<List<int>>('photo', png);

final name = await commands.get<String>('name');
final age = await commands.get<int>('age');
final photo = await commands.get<List<int>>('photo');
Note that if a module works with a custom structure, like a record with multiple fields, then custom encoders and decoders should be used.
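The same wrapping idea, sketched in Python (hypothetical names; dartis is Dart): a thin typed layer over a byte-oriented store, converting values on the way in and out.

```python
# A typed wrapper over a store that only holds raw bytes, in the spirit of
# the TypedCommands example: callers pick the type per call.
class TypedCommands:
    def __init__(self, store):
        self._store = store            # maps keys to raw bytes

    def set(self, key, value):
        self._store[key] = str(value).encode("utf-8")

    def get(self, key, type_):
        return type_(self._store[key].decode("utf-8"))

commands = TypedCommands({})
commands.set("name", "Bob")
commands.set("age", 29)
print(commands.get("name", str), commands.get("age", int))  # Bob 29
```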
The logging package is used for logging messages through a custom logger named 'dartis'.
Here is a simple logging configuration that prints all messages:
import 'package:logging/logging.dart';

...

Logger.root.level = Level.INFO;
Logger.root.onRecord.listen((LogRecord record) {
  print('${record.time} ${record.level.name} ${record.loggerName} ${record.message}');
});
Set the log level according to your needs. Most times,
INFO is what you want.
ALL is good for filing issues.
Dependencies of this package can be installed with
pub get, and the test cases can
be run with
pub run test. The test cases require a Redis server running on
localhost:6379; for local development this can be started with Docker:
docker run --rm -p 127.0.0.1:6379:6379 redis
This starts a container running Redis and exposes port
6379 on localhost; when
killed using
ctrl+c the container will be deleted.
example/main.dart
// Copyright (c) 2018, Juan Mellado. All rights reserved. Use of this source
// is governed by a MIT-style license that can be found in the LICENSE file.

import 'package:dartis/dartis.dart' as redis;

void main() async {
  // Connects.
  final client = await redis.Client.connect('redis://localhost:6379');

  // Runs some commands.
  final commands = client.asCommands<String, String>();

  // SET key value
  await commands.set('key', 'value');

  // GET key
  final value = await commands.get('key');
  print(value);

  // Disconnects.
  await client.disconnect();
}
Add this to your package's pubspec.yaml file:
dependencies: dartis: ^0.3.0
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dartis/dartis.dart';
We analyzed this package on Apr 12, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms:
Low code quality prevents platform classification.
Fix
lib/src/command/commands.dart. (-91.90 points)
Analysis of
lib/src/command/commands.dart reported 49 warnings, including:
line 33 col 43: Parameters can't override default values, this method overrides 'ClusterCommands.clusterAddslots' where 'slots' has a different value.
line 45 col 43: Parameters can't override default values, this method overrides 'ClusterCommands.clusterDelslots' where 'slots' has a different value.
line 139 col 29: Parameters can't override default values, this method overrides 'GeoCommands.geoadd' where 'items' has a different value.
line 154 col 22: Parameters can't override default values, this method overrides 'GeoCommands.geohash' where 'members' has a different value.
line 159 col 22: Parameters can't override default values, this method overrides 'GeoCommands.geopos' where 'members' has a different value.
Fix platform conflicts. (-20 points)
Low code quality prevents platform classification.
https://pub.dartlang.org/packages/dartis
See also: IRC log, agenda
-> minutes 10 Jan
RESOLUTION: to accept minutes 10 Jan
HH: my laptop is kaput
RESOLUTION: to meet 24 Jan, chime to scribe, regrets Murray
Harry: where did this come from? what are the positions?
DanC: I added it while thinking about RDFa... some plans for RDFa involve adding a pointer from the XHTML namespace document to an RDFa transformation
danja: why treat XHTML special? what about SVG?
Murray: a better way would be for GRDDL processors to have config files... e.g. in an HL7 context, they'll want to cache HL7 namespaces...
Chime: let's be explicit in one of 2 ways: (1) it's a stopping point; there's no transformations there (2) there's a transformation there, e.g. RDFa
DanC: I think config files are a reasonable implementation approach, but as for the spec, that can be endorsed silently, under an implicit "do The Right Thing"
Ron: what's the downside of being silent?
DanC: one risk is that future
implementations will follow current implementations, ...
... which don't check the XHTML namespace documents; then RDFa folks try to deploy a transformation at the XHTML namespace document, and the grddl implementations would lose.
Danny: how about some sort of "you should check that your copies are reasonably current"
DanC: I think a health-warning like that is appropriate; are you willing to draft something?
Danny: I think so
<scribe> ACTION: Danja to draft a health warninng about caching namespace documents. [recorded in]
RESOLUTION: to postpone issue-tx-element
HH: we have adopted consistent
terminology: GRDDL aware agent, source document, result
[?]
... though not conformance label yet
<chimezie> My niave interpretation is that the spec defines what an agent is
RESOLUTION: to use consistent vocabulary, but not use them as conformance labels.
-> GRDDL links in HTTP? (new issue?)
<danja> for ref :
<scribe> ACTION: IanD to propose some details for GRDDL links in HTTP [recorded in]
<chimezie> DanC: I'm unable to pull from homer.w3.org:8123
<scribe> ACTION: DanC to give Brian McBride CVS access [CONTINUES] [recorded in]
<scribe> ACTION: HarryH to check in a test case on content negotiation [CONTINUES] [recorded in]
DanC: some progress on EARL
3 actions continue
<scribe> ACTION: Murry to draft paragraph giving us caveat for faithful infoset issue closure. [CONTINUES] [recorded in]
current spec is: "If an information resource IR is represented by an XML document whose root node is" ...
<danja> brb (log on fire)
chime: is it coherent to say that both the xincluding-using and the not-xinclude-using outcomes are grddl resutls?
Murray: yes, they're both GRDDL results
DanC: I think that's bad web architecture to allow this ambiguity, but I probably wouldn't object
<scribe> ACTION: Chime to work on details of 2 allowed results of xinclude test [recorded in]
Harry: I'm having trouble running this test
<scribe> ACTION: Danny to test #base-param and e-mail group [recorded in]
<scribe> ACTION:Chime's add HL7 plain XML health care use-case and check it into test suite. [recorded in]
HH: comments have asked for (1) plain XML, not XHTML, (2) something with non-trivial entialment
<scribe> ACTION: Chime to propose some primer text for the hl7 case [recorded in]
<briansuda> sure, i email Ian and CC WG
HH: Ian, please [link rq files
Ian: OK
<scribe> ACTION: Fabien to post to sawsdl list relevant questions about RDF mapping and relationship to GRDDL [recorded in]
fabien, continue that one? or withdraw?
<scribe> -- continues
HH: note SAWSDL expanded scope to not just WSDL but also XML Schema. So this seems important.
<scribe> ACTION: DanC to add a sample implementation appendix to the GRDDL spec. [CONTINUES] [recorded in]
ADJOURN.
http://www.w3.org/2007/01/17-grddl-wg-minutes.html
Hello Again, World
It has been a while. Life has a way of magicking off with your time, especially when you have five little wizards to look after. Feeling: Harry Potter-ish. Can't you tell?
Anywho, I thought we'd play with some GUI stuff today.
Note: If you need help creating basic GUIs for PowerShell, venture over to this great article by @FoxDeploy.
Crafting Our GUI
We will be using this Base GUI Script as a starting point. Download and follow along!
Note: For those of you who like to read the last page of a novel first, you can download the final script here: New-AddButtonGUI.ps1
Our GUI framework is going to look like this:
- Main Window
- StackPanel
- < Where we will add buttons >
- Button to Add Buttons
And here's the raw XAML code, directly out of Visual Studio 2015, that we're going to be working with:
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Add-A-Button" Height="350" Width="525">
    <Grid>
        <StackPanel x:Name="StackButtonsHere" HorizontalAlignment="Left" VerticalAlignment="Top" Width="170"/>
        <!-- Reconstructed: only Margin=",260,0,0" VerticalAlignment="Top" Width="170" survived extraction -->
        <Button x:Name="AddAButton" Content="The Add-A-Button Button" HorizontalAlignment="Left" Margin="10,260,0,0" VerticalAlignment="Top" Width="170"/>
    </Grid>
</Window>
Place this XAML code in the here string at the top of the base script and you'll be ready for the next step.
The Un-Enchanted Window
You've plugged the XAML code into your script, so if you run it you should get this:
Sweet! A Window! Ooo, and you even have an Add-A-Button button! Which might be cool... if it added buttons.
Close this window and go back to your PowerShell prompt. If you type Get-Variable WPF*, you should get this as a result:
Name                 Value
----                 -----
WPFAddAButton        System.Windows.Controls.Button: The Add-A-Button Button
WPFStackButtonsHere  System.Windows.Controls.StackPanel
And there we have our button and our stackpanel variables.
So how do we get our muggle of an add-a-button button to actually and magically add buttons?
Events!
No. Not like Quidditch. But I like where your head is at.
System.Windows.Controls namespace objects (buttons, textboxes, etc) can listen for events. Run:
$WPFAddAButton | Get-Member -MemberType Event
So many events! 111 to be exact. We're really only going to focus on one right now: Click!
To make our button "listen" for clicks, we're going to add a click event handler in our PowerShell code, like so:
$WPFAddAButton.Add_Click({
    # What to do when button is clicked
})
Now that the button is listening, we can tell it what to do when clicked. First, we need to create a new button object in PowerShell
$NewButton = New-Object System.Windows.Controls.Button
Note: For a list of available System.Windows.Controls classes, see the System.Windows.Controls namespace documentation on MSDN.
Then we need to add it to our stackpanel. We do this by calling the AddChild method. Doing so uses the stackpanel as the parent, and adds our new button inside it.
$WPFStackButtonsHere.AddChild($NewButton)
So the whole event looks like this:
$WPFAddAButton.Add_Click({
    $NewButton = New-Object System.Windows.Controls.Button
    $WPFStackButtonsHere.AddChild($NewButton)
})
Let's try it!
Oh my.. those buttons are tiny. That just won't do. Perhaps we should consider changing the height of our summoned buttons?
$WPFAddAButton.Add_Click({
    $NewButton = New-Object System.Windows.Controls.Button
    $NewButton.Height = 20
    $WPFStackButtonsHere.AddChild($NewButton)
})
Eureka! Wait.. Blank Buttons. What am I supposed to do with blank buttons?
Chaining Spells Together
What if I added a text box for naming them? That requires changing up our XAML code a little, so here's the new block:
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Add-A-Button" Height="350" Width="525">
    <Grid>
        <StackPanel x:Name="StackButtonsHere" HorizontalAlignment="Left" VerticalAlignment="Top" Width="170"/>
        <!-- Reconstructed: only Margin=",277,0,0" VerticalAlignment="Top" Width="170" and the TextBox survived extraction -->
        <Button x:Name="AddAButton" Content="The Add-A-Button Button" HorizontalAlignment="Left" Margin="10,277,0,0" VerticalAlignment="Top" Width="170"/>
        <TextBox x:Name="ButtonName" Text="Button Name Here" HorizontalAlignment="Left" VerticalAlignment="Top" Width="170"/>
    </Grid>
</Window>
Run the script, close the window, and let's check our variables again..
Get-Variable WPF*

Name                 Value
----                 -----
WPFAddAButton        System.Windows.Controls.Button: The Add-A-Button Button
WPFButtonName        System.Windows.Controls.TextBox: Button Name Here
WPFStackButtonsHere  System.Windows.Controls.StackPanel
Aha! A Button Name Box! Perfect! Let's make the new button's name and content be whatever is in that box. To do so we utilize the Name and Content properties of our $NewButton.
$WPFAddAButton.Add_Click({
    $NewButton = New-Object System.Windows.Controls.Button
    $NewButton.Name = $WPFButtonName.Text
    $NewButton.Content = $WPFButtonName.Text
    $NewButton.Height = 20
    $WPFStackButtonsHere.AddChild($NewButton)
})
Specialis Revelio: One could get the available properties of a button (or any other control) by using: New-Object System.Windows.Controls.Button | Get-Member -MemberType Property
A little flourish... and voila!
Merlin's Beard! More muggle buttons! What good are those?
Well, how about we make the text box change to the name of whatever button we pressed?
We know the name of each new button, because it comes from the text box. So we can create a new event nested inside the original Add_Click. We'll have to grab the actual button object we just added to the stack panel.
# Get the button you just added
$AddedButton = ($WPFStackButtonsHere.Children | Where-Object { $_.Name -eq $WPFButtonName.Text })
Here we're just finding the child object we added to the stackpanel by referencing the name that you used in the text box. Next, we'll need to add an event to that button.
# Add Click Event
$AddedButton.Add_Click({
    [System.Object]$Sender = $args[0]
    $WPFButtonName.Text = $Sender.Name
})
I don't know if you noticed, but there's some serious sorcery [sounds like a good name for something] happening here. The Add_Click event actually passes the control object into the event as $args[0]. So we assigned it as $Sender, and then told it to change the text on the button name box to be whatever $Sender.Name is. This way, whichever button you click, it pulls directly from that button's properties.
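The sender-passing pattern itself is language-neutral. A minimal Python sketch (not WPF; all names invented): each registered handler receives the control that fired the event, just as Add_Click receives the sender in $args[0].

```python
# A tiny event system: controls keep a list of handlers, and every handler
# is called with the firing control itself as the "sender" argument.
class Control:
    def __init__(self, name):
        self.name = name
        self._handlers = []

    def add_click(self, handler):
        self._handlers.append(handler)

    def click(self):
        for handler in self._handlers:
            handler(self)          # pass the control itself as the sender

name_box = {"text": ""}

button = Control("DeployServer")
button.add_click(lambda sender: name_box.update(text=sender.name))
button.click()
print(name_box["text"])  # DeployServer
```

Because the handler reads everything from the sender, the same handler can be attached to any number of dynamically created buttons.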
You're a [PowerShell] Wizard!
You've now made a GUI that has dynamically created content, where each new object actually has a function of its own. Might not be the most useful example, but perhaps it will lead to bigger and better things!
Hope you found this useful.. and I wish you the best of luck in your future Wizarding adventures!
http://126kr.com/article/7veggei05f9
Difference between revisions of "TechDraw TemplateHowTo"
Revision as of 21:30, 13 February 2020
Inkscape: document with page size and orientation
3. Use the XML Editor to add a "freecad" namespace clause to the
<svg> item.
xmlns:freecad="".
Inkscape: XML Editor adding the "freecad" namespace clause to the <svg> item:
Inkscape: tentative template layout
Create editable fields
9. Use the XML Editor to add a
freecad:editable tag to each editable
<text> item.
- Assign a meaningful field name to each editable
12. We need to shrink it.
- Edit → Select All in All Layers, or box select and select all.
- Adjust the W: and H: spinboxes to match your artwork's size in millimeters.
- Set it to the page size less any applicable margins, for example, W: 250, and H: 200.
13. Use "Align and Distribute" or the X: and Y: spinboxes to position the artwork within the limits of the page if required.
14. Your template should now look right, just like it did in the finished artwork picture above.
Remove transforms on the SVG
15. Ensure that all your editable texts are "ungrouped" with Shift+Ctrl+g.
16. Select everything on your page, Edit → Select All, and then Edit → Copy.
See agenda and IRC log.
<scribe> ScribeNick: timeless
<darobin> is this mostly for scribing or can we also make fools of ourselves?
<TabAtkins> darobin: Do we need to choose one?
<darobin> TabAtkins: well, so long as we don't talk about unicorns
<dom> "it's 203" — as in "203 Non-Authoritative Information" ? :)
IanJ: Welcome everyone
... My name is Ian Jacobs
... welcome to the first developer meeting we've ever had
... we have a great lineup
... and i will get out of the way
... your names are up here, so just get up on time
... We have w3c groups and non w3c groups represented
... timeless has accepted to scribe
... the proceedings will all be public
Arun: My name's Arun, for people who haven't met me personally
... I work for Mozilla on Firefox
... on evangelism
... A lot of what we do is reach out to developers
... to see what we should be doing
... How many people are web developers?
... that's the lion's share
... How many people are in the business of developing web applications?
... interesting a smaller number of hands
... Both as part of my work at mozilla and as someone who works on standards
... I'm an author of a spec in W3C ... File API
Arun: I also work on a spec outside W3C WebGL
Arun: We work in the Web Apps WG
... You should be able to access databases on a client just as you can on a server
... Today you guys have come to the hallowed precincts of a sausage factory
... you're actively following a list-serve
... a lot of developers out there don't follow
... to them, they may be happy to take an API and run with it
... We'd like to get them more involved in standards
... For examples, two browser vendors released browsers with a SQL API
... but two vendors: Mozilla and Microsoft indicated they don't want to do this
... We got feedback from developers indicating they didn't want a SQL API
... in HTML5 and web apps
... How many people recognize this movie?
[Back to the Future movie picture]
["Where we're going, we don't need roads"]
scribe: I wanted to condense how standards work into one day
... In fact, there's a story on CNet referring to a date in 2012
... So I thought about last night how to condense this into a day
[Slide: Morning, Afternoon, Evening]
scribe: Basic rules of thumb on time
... Morning is the period that already passed
... Morning is a little while ago
... Features that are already available in browsers today
... The afternoon is things you can build on right now, but you may not be able to do that in a cross platform manner
... How many people do things for some platforms and worry about other browsers (esp IE6)
... The evening is stuff that has yet to come
... it holds in it the promise of a good night
... it holds in it the promise of things that are still being fleshed out
... it's a bit more than that, but it's not something that you can rely on in your bag of tricks
... The morning includes things like LocalStorage
... It's supported in IE8, Opera, Safari, Firefox 3.X (?)
... As for XMLHttpRequest
... it's implemented in most browsers (?)
... and XDomainRequest (in IE8)
... and they're implemented with the same security approach (?)
... And you can grab me and we can look at snippets of code where I can show you how you can do this in a cross browser manner
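A minimal sketch of the cross-browser pattern Arun alludes to: prefer the standard XMLHttpRequest where it exists, and fall back to IE8's XDomainRequest for cross-domain requests. The helper name and the injected `global` parameter are my own additions, so the selection logic can be exercised outside a browser; this is an illustrative sketch, not code from the talk.

```javascript
// Hedged sketch: pick whichever cross-domain request object the
// environment provides. XMLHttpRequest is the standard API;
// XDomainRequest is IE8's equivalent. The "global" parameter is a
// hypothetical hook for testing, standing in for the window object.
function createRequest(global) {
  if (typeof global.XMLHttpRequest !== "undefined") {
    return new global.XMLHttpRequest();
  }
  if (typeof global.XDomainRequest !== "undefined") {
    return new global.XDomainRequest();
  }
  throw new Error("no cross-domain request object available");
}
```

In a real page you would call `createRequest(window)` and then use `open`/`send` as usual; the point is that the feature detection happens once, in one place.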
... postMessage is also available in IE8, Firefox, ...
... CSS2.1 support, reasonably good, even in IE8
... you can look to it for better support than before
<krisk> IE8
scribe: That's the morning
... The afternoon gets a little bit more interesting
... it introduces more things about the platform
... things aren't as mature
... supported in Firefox (25% of the market)
... Safari, and Chrome
[Slide: Afternoon]
scribe: HTML5 Canvas
... HTML5 Video
... HTML5 Drag and Drop
<Tobias> Is this slide online somewhere?
scribe: CSS WebFonts (...)
... and geolocation
[ Demo ]
scribe: This demo brings together a lot of pieces of the platform
... it sorta does, but the color isn't so great
... I've got a colleague of mine
... doing a fanning gesture between two iPhones
... if I click here, I've got a video supplanted between the two iPhones
<dom> Hi Developers!
scribe: this only works if video is part of the browser, I can't do this with Flash
<IanJ> demo dynamically injecting content into a canvas (within a video element)
scribe: I can also embed a video
inside the animation
... This works when video is a first class citizen of the web
... and this comes from html5
<IanJ> Arun: You get this flexibility when video is a first-class citizen of the web
scribe: here's another demo
... The color isn't great
... [about the demo]
... the pixels of the video are dumped into a canvas
... I'm going to label extracted bits
[names which people might not recognize]
[ fwiw, this usually works better, but for some reason one of the pins on the cable seems to be doing strange things ]
[arun uses the demo to establish facial recognition]
scribe: this demo uses localStorage
... because i switch between two urls
<JonathanJ>
scribe: I have a lot of demos, please come visit me later
... this is in the context of "the afternoon"
... I wanted to show you this demo
[IanJ: t-minus 10 minutes]
<dom> [very impressive demo for video]
scribe: I'm going to open Font Book, a handy-dandy application on my Mac
... what I'm going to do now
... is show a bunch of technologies working together
... HTML5 Drag and Drop
... CSS Font Face
... HTML localStorage
... contentEditable
... I'm going to drag a font onto the page
... and dropped it onto the page
... and the page restyled itself using the font
[drags a font onto the page]
[arun misses the drop target]
<JonathanJ>
scribe: I'm going to drop in Garamond
... now you can see the page has taken on a different look
... this is directly using localStorage
... to store the stuff that I dragged
... it's using the drag and drop api
... and it's using font-face to set the font
... and it's using contentEditable
... if i got to a page flickr.com/maps
... I can share my location
[ Arun shares our location ]
scribe: this is the GeoLocation API introduced in Firefox 3.5
... and I can drag my mouse over locations and see pictures from there
... this is flikr using pieces of an API that recently became a spec
... and now... "Evening"
... We're looking at pushing the hardware
... we're discussing storage
... and orientation events (angles)
... multitouch
[Arun holds up an n900 proto]
scribe: Firefox Mobile will ship for this device
[Arun demos playing a game by tilting his macbook pro]
scribe: this is a pretty popular go-kart game
[ demos an expanding red panda but tilting his laptop]
scribe: this is stuff we'd like to do in the evening
<scribe> ... pending discussion with other folks
UNKNOWN_SPEAKER: this is stuff in the evening, the promise of a tomorrow, or a tomorrow morning
... and of course there's 3d graphics
... 3d graphics are extensions of the html5 canvas element
... and exposes a new way to do hardware accelerated 3d graphics
... these are the things I'm talking about from the promise of an evening
Van Ryper (Krillion):
scribe: I've heard a lot about the web3d consortium
Arun: the deliverable of web3d (x3d)
... is an interchange format that represents 3d graphics
... it's the ability for javascript to parse such graphics
... and use webGL to expose those graphics
Robin: and someone's done that
Arun: X3D OM
... The promise of today is that javascript's performance have improved so much
IanJ: so... Don Brutzman a chance to speak
DonB: We'll be showing this tomorrow morning at X-oclock
<smfr> 9-oclock
<darobin> X3DOM:
Tom Strobers (user):
scribe: Java - JavaScript ?
<krisk> ...in the HTML5 WG Meeting
Arun: that's an interesting question
... and I'll speak in a continuum
... java has historically been used as a technology that can be used anywhere
... and so in fact can javascript
... javascript as it now runs in browsers
... and browsers run on mobile devices
Fantasai: JavaScript and Java have no relation
... JavaScript runs natively in the browser
... whereas java runs separately
Arun: JavaScript is the defacto language of the web
IanJ: I'll see if we can talk about moving things into programming languages and out of declarative languages
... I want to keep things moving
... we have a lot of interesting speakers
... we have 3 bottles of wine
... You can use a business card, or just a piece of paper
... thank you Arun
fantasai: I'm an invited expert of the CSS WG
... I've brought along a number of people from the CSS WG with exciting demos
... first speaker is David Baron
... he writes specs, makes interesting comments, ...
[ laughter ]
dbaron: ...
... there've been a lot of demos of css stuff floating around lately
... I've wanted to demo a few features that are not the ones that get the most press
... the stuff that people demo are these new visual effects
... shadows, columns, rounded corners, transforms
... one of them is border image
... the ability to take an image and then logically that image gets split up into 9 pieces
... and then you can use those slices to form the border of something else
[demo]
scribe: and this will now resize as i resize the window
... another feature that's been in specs for almost 10 years
... but that hasn't been implemented until recently
... is font-size-adjust
... it lets you get better font behavior
... one of the problems is that font size is whatever the font designer wants it to mean
... what font-size-adjust lets you do
... is that instead of letting font size do what it does
... font-size-adjust lets you operate on the x height
[ demo of font-size-adjust ]
scribe: another feature that's now pretty widely implemented
... Mozilla, WebKit, and Opera
... are CSS Media Queries
... which let you change the style of the web page
... based on characteristics of the thing that's displaying it to you
... so you can change the page based on, e.g. the width of the window
... e.g. you can specify something that only operates for windows >22em's wide
... the final thing, is a feature that I think is only implemented in Mozilla
... many designers have struggled with css using intrinsic widths in basic ways
... there's the longest word of the line
... and (??)
... so you can say the width is the min-content-width
... or the width is the max-content-width
... or that it fits the container
... which is the same algorithm used for tables
fantasai: we'll have a couple of questions after each talk
dbaron: questions now...
VanR: ... how pervasive are things
dbaron: most of these that I've demo'd are supported in Firefox, Opera and Safari, but not IE
<IanJ> quirksmode.com suggested
VanR: ... meta question, is there a way to see supported list
Tab Atkins:
scribe: quirksmode ... and ...
Ian: ... status of a test suite?
fantasai: we're working on it
... next speaker is Tab Atkins
TabA: ... i just got brought in based on my work on the gradient spec for CSS3
... I'm going to take this opportunity to go over how spec work is done
... because page designers often wonder how to get things added
... steps:
... look at the problem and figure out what the actual problem is
TabA: and then there's a mailing list
... it's a public list
... I have an example
<Tobias> thanks dbaron
TabA: gradients
... Safari introduced experimental support for css gradients in 2008
... I don't know if these will work in Chrome
... that's ok... I have other things that will work
... We kick things around on the mailing list
... later Mozilla created something similar
<smfr> TabAtkins was showing:
TabA: they said that they didn't like the way it was done
<benjick> IE have had CSS gradients for ever!
TabA: each vendor uses its own prefix (-webkit-, -moz-)
<smfr> now showing:
TabA: not the bad old browser wars
... gradients can be done in CSS, with good performance
... without the network bandwidth
... the problem with gradients, was the syntax
... we kicked it around on the mailing list
... it's all public, you can read it on the mailing list
... what it ended up with was a proposal by me
<Arron> www-style archive :
TabA: i just proposed it on the list
... it grew out of discussions with people
<smfr> Current proposal:-
TabA: talking about what people were trying to do
[shows the detailed description of the proposal ?]
scribe: these are based on the firefox implementation
... this is a Minefield build of Firefox
... it's now in the nightlies
dbaron: it will be in Firefox 3.6
... as of a few hours ago
TabA: so 3.6 will have the new syntax
... so you can do things like you want to do
[ demos creating _beautiful_ gradients ]
scribe: this is directly using the syntax
<smfr> TabAtkins is testing with
scribe: I'm just using javascript to set the background
Robin: ... do you have demos where it's animated?
TabA: it's an open problem
... it shows the evolution of an idea
... from someone identifying a problem
... to implementations
... to implementations(?) on the style list
... to proposals(?)
... so if you have a problem
... you tell us about it
... we kick it around
... or we decide to put it off until later
... browser developers are not always page authors
fantasai: coming to a browser near you
... next up, Simon Fraser
... giving a demo of transforms and transitions
... these are new drafts that are coming up
... Simon works for Apple on WebKit, and used to work for Mozilla
Simon= smfr
smfr: so...
<TabAtkins> darobin, I failed!
smfr: with transitions and transforms
... this is some content that we put together using assets from a band called Wilco (?)
... as I hover over things on the left
... you see transitions
[ shows the basic bits ]
scribe: we've got a standard color
... and the nice red color?
... using completion in textmate
... let's put a transition right here
... over one second
... so now when i go back to the page
... you can see the transition
... transitions take a comma separated list
... another thing that dbaron mentioned was transforms
... so let's put a hover on the transform
... a precanned rotate of say X degrees
... and now let's make this nice and smooth
... so let's say ... .5s
... you can use milliseconds too
... so let's go back to our original page
... but this slideshow
... you can have crossfades
... we can use a translate
... and vertical scales
... that's a keyframe animation
... it's a little bit more complex
... they're not as advanced as transitions
... we can do a spin
... we're also proposing a 3d transform
... we're rotating around the vertical axis with some perspective
IanJ: so are all the images there?
smfr: yes, it's all there, it's just css classes tweaking
Dan (HP):
scribe: we've got all these transforms that we can use on a page
... we've also got canvas
... why should you use one or the other
smfr: with canvas, you draw and then don't know what's there
... with transforms, you aren't making the content more opaque
... you still have links
... we've also got examples of applying 3d to slices of a page
... and revealing things from the page
... and we can hover over here and see hit testing still works
... all done with css transforms
... we've done this inside apple
... this demo was done by charles ying (?)
... outside apple
... it uses flikr to fetch images as a wall
... hold keys down, move backwards and forwards
[ wiring glitch, we dimmed another room ]
scribe: thanks
fantasai: so that's our three speakers
... we also have people from Microsoft here
... we've got a bunch of members from the CSS WG
Bernard Ling (?):
scribe: when does it appear in ie6? [just kidding]
dbaron: Mozilla has 2d transforms in FF3.5
... transitions will be in FF3.7
smfr: 2d transforms should be identical in behavior
dbaron: 3d would be after 3.7
<dbaron> if we did it
IanJ: how to tell css wg your idea's
fantasai: www-style@w3.org
xx-? :
scribe: is there any work being done on opacity across browsers
ms-1:
<dbaron>
ms-1: currently it's available through filters
... it's a proprietary property
... we can sit down and i can try to help you with it
... moving forward in the future
... we can see about looking to expand
fantasai: css3 color is a CR
... there's an opacity property
... I believe WebKit, Gecko and Opera
IanJ: ok...
RRSAgent: make minutes
<fantasai> Thanks everyone, your talks were amazing! :)
<dom> very impressive, indeed
Next Speaker: Philippe Le Hégaret (W3C)
IanJ: Philippe has worked on building a test suite
Philippe: let's talk about something embarrassing
... testing at w3c
... I'm responsible for [long list of wg's]
... one of my plans is how do we test all that
... talking about testing at w3c
... we already have plenty of test suites at w3c
... css1, ... DOM event 2, css2.1, ...
... why do we have those test suites?
... one of those reasons is that in 1999 in the DOM working group
... we came up with this phase called Candidate Recommendation (CR)
... "we think we're done", but now we want to prove that we're actually done
... this came out of the DOM WG
... to come out of the phase
... the WG _should_ come out with two implementations of each feature
<karl> we need to brush up the matrix
Philippe: it's a negotiation with the [TBL role]
... what working groups tend to do
... is just demonstrate that each feature has been implemented
... do they actually do this?
... no, they don't have enough resources
... and no one really wants to write tests
... but do we get interoperability on the web?
... and i would argue no
<dom> karl, I actually updated it (somewhat) a few weeks ago
Philippe: how can we make the web a better place?
... w3c has limited resources
... yes we have microsoft
... but we have limited amount of time
... limited amount of budget, for product teams as well
... so what we really want is the community to help us
... tell us what works
... you run into problems all the time
... tell us about it
... can you please submit a test about it?
... what i'd like to see is the community help us
... let's make it a bit harder
[ slide: svg, mathml, video, ...]
scribe: I can manipulate the DOM tree
... if i want to play a video, i just click a button which is just a thing with css style on it
... and it will work
... but who is going to test all this?
... while we have produced some test suites
... we haven't produced combinations of specs
... css+svg+ ...
... so how do we test that?
... first we need to test the parsers
... we need to guarantee that the document you're writing will generate one single DOM
... how do we test dynamic scripting
... if i want to test a css animation
... how do i test it if it's 3 seconds
... i don't want to test just the first frame and the last frame
... we need to understand that there are limitations
... it's impossible to test everything
... and we have to acknowledge that
... but at the same time
... we need to do something
... the most common thing
... is a test that requires a human
... a "self describing test"
... [ pass with green, fail with red ]
... we can also test plain text output
... we can compare screen shots
... if you have for example in svg
... we know exactly what the output should be
... if you have a rectangle, we know what it should be
... we can take a screen capture
... with fonts, it's different
... what dbaron did
... is that instead of trying to write tests to match a static image
... is how about we write two pages that should have the same rendering
... using different features
... that's called reftests
... the advantage is that it can be cross platform/browser
... with webkit, you can
<fantasai> ScribeNick: fantasai
<timeless> ... do a dump of a dom tree
scribe: and there are probably other ways that I'm not aware of.
<dom> Webkit DumpRenderTree, an example of layout tree comparison
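The reftest idea plh describes boils down to: render a test page and a reference page that use different features, and assert that the two outputs are identical. A toy sketch of that comparison follows; the `render` function here is a hypothetical stand-in for a real layout engine or screenshot step, so only the pass/fail logic is illustrated.

```javascript
// Toy stand-in for "render this markup and return its output".
// A real reftest harness would compare screenshots or layout-tree dumps;
// here we just normalize the markup so equivalent inputs compare equal.
function render(markup) {
  return markup.replace(/\s+/g, " ").trim().toLowerCase();
}

// The test page and the reference page use different features but are
// expected to produce the same rendering.
function refTest(testMarkup, referenceMarkup) {
  return render(testMarkup) === render(referenceMarkup) ? "PASS" : "FAIL";
}
```

The appeal, as noted above, is that the same pair of pages can be run in any browser on any platform, since nothing is compared against a fixed golden image.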
scribe: one of the things I've been trying to push inside the consortium is to have a browser testing framework
... that other groups can use. They can choose a method to test their specification.
... we want to make this as automatic as possible.
... we need to produce a lot of tests.
... e.g. Microsoft submitted 7000 tests that were all self-describing tests
... that is not scalable
... it takes a long time to go through those tests
... because of our limited resources, we need to produce a mechanism to help our working groups
... if they are reviewing tests, they are not writing specs
... they should just be able to focus on controversial tests
... if others can submit a test, then we can look if there's a problem
... we can also see if its a bug in the browser, and they have to change their impl
... We also have to be careful here, because if the tests are wrong we get interop on the wrong behavior!
... We need to have testing for all these technologies, not just one of them, or each separately
... but all of them together
... with HTML5 normatively integrating with SVG and MathML, we need to test them together, not just each on the side.
... We need to be able to test HTML inside SVG
<smfr> and SVG inside HTML inside SVG
scribe: As I said there are multiple ways to test a browser, and we should allow more than one
... The browser implementors are not going to rewrite all their tests for us
... but agree on some common formats so that we can all share tests
... We also need to have a life after the Recommendation stage
... the specs still exist after Rec, and we need to continue testing them
... I don't want W3C to run that test suite. We don't have the resources.
... We can't buy 100 servers and run tests on every possible version of every browser
... So we want to allow others to run the tests. To run screen shots on their own computer
... There are some difficulties. E.g. I don't know how to take a screen shot by script on all the platforms
... What happens then?
... We can make the test results useful to you.
... Show reports of what works, and what doesn't. Let's make the test suites useful for the community as well.
... And we should improve our validators at W3C.
... Maybe make it use test results.
... e.g. it notices You are using this feature, note that it doesn't work on FF3.6!
... We're not alone, there are others who are trying to do the same thing.
... test swarm for example is an effort from jQuery author, because he was running into the same problem
... he cannot run every OS /browser combination himself
... browser scope is interesting too. It allows you to compare screenshots across platforms
... It uses a web server locally to determine when to take the screen shot
... We need to produce these tools incrementally
... and try to get them to work on all browsers
... I think the message that I like you to get out of this is that we need help.
... I can get some help from browser vendors, but ultimately we need help from the community because you are the ones suffering every day.
... and until you tell us what is wrong, we are not able to help you
<caribou> For the record, Help W3C Validators program at
Ian: Questions for Philippe?
<darobin> interesting article on mobile testing:
Dianna Adair:
<timeless> RRSAgent: make minutes
Dianna Adair: Could there be any hooks in the syntax so that you can pass arguments to the syntax automatically, through some sort of test generation program
Dianna Adair: Are there valid simulators for the major browsers?
Dianna: So that you can push the tests against the simulated suite of browsers
plh: For the first question, yes, because we are starting from scratch
... For the other we can get screenshots of the major browsers
... browsertests.org was done by an engineer in Switzerland
... At the beginning of Sept. a few folks including me and a few Moz developers got together and started writing code to do that
<dom> BrowserTests.org
plh: We made a prototype that works on the 3 major platforms
... He is improving his browser test framework
... At W3C we have a way to do human testing, I showed a demo of the mobile web browser
... It requires a human to click pass fail pass fail
Dianna: One way I've seen that works is to set up a location and have some sort of "bugfest"
... You have people all over the world trying to test things simultaneously
plh: ...
... My goal is not to point fingers at browsers and tell them they're doing bad stuff
... I want to serve the community
IanJ: Have you set up a mailing list or public place for people to come help out?
plh: Not yet
IanJ: ACTION Phillipe? :)
plh: We need to create a group within W3C itself
... I know for example Mozilla and Microsoft are interested in helping
... We need to organize and provide a venue for the community to come together
Dianna: I propose that Universities are a great source intelligence and creativity and might be able to help
Kris (MSFT): There is a test suite alias in the HTMLWG
plh: Yes, we also want cross-tech testing
Kevin Marks: Do you know the test suite called Doctype at Google?
plh: I only mentioned the testing framework. There are plenty of efforts out there
... One thing I did in August was to collect some of that
... We are not alone, there are a lot of others trying to solve the same problem
IanJ: Ok, we have 4 more speakers after the break
... I'll hand over to Tim for now
TimBL: Thanks for coming
<karl> there was
Tim: It's important that everyone designing specs is in contact with lots and lots of people using their specs
<karl> and now
Tim: Good to have feedback, and feedback on how to get feedback.
IanJ: Next speaker, Brendan Eich, representing ECMA
... and ECMA harmony
brendan: I'm here from Mozilla
... I'm here to talk about ECMA Harmony
... which is a ... which we reached last summer
... before that, we had problems
... the figure of XX ...
... identified as gandalf ...
... there were people like Doug and sometimes me
... advocating for JS Hobbits
... small enough and based on principles from Scheme itself
... it had virtues which were only discovered years later on the web
... it was the dumb kid brother to Java
... JavaScript was supposed to be the duct tape language
... you were supposed to write it around the real language .. Java
... I think people will agree that Java is basically dead on the client
... there were problems with Java
... the main issues was that JavaScript was a dynamic language
... is a dynamic language, and will continue to be a dynamic language
... the fear with ECMAScript 4 (?)
... was that it would become a static language
... the fear, as with Visual Basic 7
... was that you take a dynamic language
... and you convert it into a large proof system
... and that's not how languages are built
... if ES4 would have been that, i'd be that guy with Gandalf
... there was a point in 2006 where the committee seemed united
... the MS representative was going to put some old version into JScript.net
... and we were all united
[ Slide: The Fellowship of JS ]
scribe: the fellowship was broken
[ Slide: Conflict (2007) ]
scribe: some of it was based on the real prospect that i was somehow working toward
... pulling ActionScript, the language that drives Flash, into the web
... and again, microsoft was working on pulling a version into JScript.net
... based on waldemar horwat
... ECMA requires consensus
... and we didn't have that
... at the time this happened in march
... it was clear to me that this wasn't going to work, someone was going to win, and someone was going to lose
... but this was going to be ok
... because it would involve improvements to the language for the web (?)
... ecma was stagnating
... 4th edition was mothballed in 2003
... netscape was dying - partially because of its own failings, and partly because of microsoft (see antitrust)
... msie was sleeping
... in 200x (?) ... there was a chance of things improving
... in April 2007, there were things like
... Microsoft's Silverlight offering
... a JScript Chess demo was converted into C#
... it was 100s of times faster
[ Slide: The Two Towers ]
* ES4
* Waldemar Horwat's work at Netscape, 1999-2003
* JScript.NET, 2000
* ActionScript 3, 2005-2006
----
* ES3.1
* Doug's recommendations
* Document JScript deviations
brendan: ...
... there were a lot of bugs in IE's implementation of JavaScript
... and MS was heavily involved in the standard writing for ECMAscript 2/3
... and there were serious bugs in the MS implementation
* "No new syntax"
brendan: ...
... if you never add things, you can't break things
... if you aren't careful, and you add global objects/methods
... you can break the web
... no new syntax doesn't save you
... time was passing, we were trying to get es4 out
[ Slide: Synthesis (2008) ]
brendan: ....
... Allen proposed meta object API
... on the ES3.1 side
... Lars Hansen on the ES4 side, "Packages must go"
... in April 2008
... Namespaces must go too (June-July)
... unfortunately, we lost Adobe
... because they felt they lost the bits they had derived from the standard
... but that's a risk when working on standards
... when we reached harmony in July in Oslo
... the language again is inspired by Scheme with influences from Self
... one of the foundations of Scheme is lexical scope
... javascript has some broken bits of scope
... Doug's teaching and attitude
... in es4 we're looking toward a strict mode
... we have a hope for "use strict mode" for es5
... similar to perl5
... we're trying to avoid "use stricter" for future versions
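A minimal illustration of the ES5 strict-mode opt-in being described (the example names are mine, not from the talk): inside a function with the `"use strict"` directive, assigning to an undeclared name throws instead of silently creating a global, and writing to a frozen object throws instead of failing silently.

```javascript
// Under ES5 strict mode, assigning to an undeclared name is a
// ReferenceError instead of silently creating a global.
function strictAssignResult() {
  "use strict";
  try {
    someUndeclaredName = 1; // throws under "use strict"
    return "no error";
  } catch (e) {
    return e instanceof ReferenceError ? "ReferenceError" : String(e);
  }
}

// Strict mode also turns silent failures into errors: writing to a
// frozen object's property is a TypeError rather than a no-op.
function strictFrozenWriteThrows() {
  "use strict";
  var frozen = Object.freeze({ a: 1 });
  try {
    frozen.a = 2; // TypeError under strict mode; silently ignored otherwise
    return false;
  } catch (e) {
    return e instanceof TypeError;
  }
}
```

Because the directive is just a string literal, old engines ignore it, which is what makes it a viable opt-in rather than a version fork.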
... that's my quick recap of how we reached harmony
... ES3.1 was renamed ES5 (March 2009)
... We decided not to trouble ECMA with fractional standard version numbering
... ES4 died
... we're not sure if Harmony will really be 5
... we might need to do some quick versions for standardization reasons
... the committee is not the gatekeeper
... a chokepoint for all innovation
[ Slide: Namespaces ]
brendan: ...
... who here knows about Namespaces in Flash?
[ hands raised ]
scribe: there's ECMAScript For XML (e4x)
... it has a lot of problems as a spec IMO
... it's a spec whose pseudo code was extracted from java code
... so you have bugs from the code, translation, etc.
... it almost broke the object model
... it had integrated query
... it had namespace objects
... you had to use ::'s to qualify stuff
... sometimes people complain about namespaces in XML documents
... es4 was much worse
... it was very powerful
... because you could use lexical scope to change how code behaves
[ Slide: Packages ]
scribe: packages are built on namespaces
... even today in actionscript, there are some funny things about them
... there's a temptation to think that using long chains of dotted things
... there's a temptation to think that the dotted things can win
... but because the language is dynamic
... the winner can be the normal object with similar property paths
... I think this problem still exists in actionscript
... and then there are problems with <script> tags
[Slide: Namespace Problems]
scribe: here's a problem with namespaces
... ambiguities can happen when scripts come along and defined a namespace later
... I'm explaining why Namespaces died in ES4
... Question: Why am I talking about why Namespaces when you already said it's dead
... Answer: it died because it had technical problems
... that we couldn't figure out how to solve
... the alternative was to require whole-program analysis
[ Slide: ES5 Meta-programming ]
scribe: the 3.1 contribution
... Create properties with getters and setters
... we have this in mozilla under a different name
... it's finally standardized
... instead of __defineGetter__/__defineSetter__/__lookupGetter__/__lookupSetter__
... We implemented this about 10 years ago in Mozilla
... but MS/Opera didn't implement it
... when live maps launched
... it treated the DOM as the ie DOM
... and then it looked to see if the host wasn't IE
... it decided it wasn't IE, then it must be Gecko
... so it used __defineSetter__/__defineGetter__
... to support it
... this caused a fire drill in Opera/Safari
... to implement this missing feature (within a week!)
... to support live maps
[ Slide: ES5 Meta-programming ]
scribe: you can define things
that don't appear in for-in loops
... because the ajax community learned about how pollution breaks iteration
... with this facility, you can not break those things
... with this, lack of syntactic salt
... you can create objects
[ Slide: Hardening Objects ]
scribe: you can make an object
that delegates to another object
... without using a constructor pattern
... you can prevent extensions, and prevent reconfig
... and prevent all writing
... this enabled a lot of what we had in ES4 classes
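A minimal sketch of the ES5 APIs described on these slides (my own example, not Brendan's code): Object.defineProperty replaces the old __defineGetter__/__defineSetter__ extensions, non-enumerable properties avoid polluting for-in loops, and Object.create plus Object.freeze cover the delegation and hardening points.

```javascript
var obj = {};

// Define an accessor property (getter) without __defineGetter__.
Object.defineProperty(obj, "answer", {
  get: function () { return 42; },
  enumerable: false,   // will not show up in for-in loops
  configurable: false
});

// Create an object that delegates to another without a constructor pattern.
var child = Object.create(obj);

// Harden: prevent extensions, reconfiguration, and writes.
Object.freeze(obj);

console.log(obj.answer);            // 42
console.log(child.answer);          // 42 (delegated through the prototype)
console.log(Object.isFrozen(obj));  // true
var keys = [];
for (var k in obj) keys.push(k);
console.log(keys.length);           // 0 — "answer" is non-enumerable
```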
[ Slide: Harmony Requirements ]
scribe: as we worked on harmony,
we realized we should state our requirements
... in some way
... we don't want to do anything that requires innovation in committee
... or abstract jumps
... we want to keep the language pleasant for casual developers
... so you could start small and grow
... javascript is in some ways a virus
... it has grown into an application programming language
... we want to keep these features
[ Slide: Harmony Goals ]
scribe: * Be a better language
for writing
... [] complex applications
... [] libraries (possibly including the DOM!)
... [] code generators
... * Switch to a testable specification
... * Improve interoperability, adopt de facto standards
... * Keep versioning as simple and linear as possible
... * A statically verifiable ...
[ Slide: Harmony Means ]
scribe: * Minimize additional
semantic state
... * Provide syntactic conveniences for:
... [] good abstraction patterns
... [] hide integrity patterns
... [] define desugaring to kernel semantics
... * Remove (via opt-in versioning or pragmas) confusing or troublesome constructs
... * Build Harmony on ES5 strict mode
... * Support virtualizability for host objects
[ Slide: Harmony Proposals ]
brendan: ...
... prognosis, it should be sorted out in 2-3 years
... things that don't make it go toward es6
... you can self host your way to a stronger language
... ECMA standards group TC39 is still strong
IanJ: thank you brendan
... that was our transition talk
... into Internet Ecosystem
scribe: I'm talking about
international domain names (IDNA ?)
... IDNA 2003
... There was a system developed in 2003 that allowed people to have international characters in domain names
... I don't know if people saw the news this week
... What happened this week is that the top level domains can have non ascii characters
[ Slide: IDNA 2003 ]
scribe: you can't do certain
things...
... IDNA 2003 is tied to Unicode 3.2
... If you look at the uppercase characters
... they're mapped to lowercase characters before they reach dns
... O with two dots is converted to a lowercase version before it gets sent out
... it gets converted with an ascii binding with something called punycode
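A hedged illustration of the behavior just described, using Python's standard-library idna codec, which implements the IDNA 2003 rules: case-fold and nameprep-map the label (so ß becomes ss), then encode any remaining non-ASCII label with punycode under an "xn--" prefix. The hostnames are examples, not from the talk.

```python
def to_dns_name(hostname):
    # Apply IDNA 2003 ToASCII to each label via the built-in codec.
    return hostname.encode("idna").decode("ascii")

print(to_dns_name("Bücher.de"))            # xn--bcher-kva.de
print(to_dns_name("sparkasse-gießen.de"))  # sparkasse-giessen.de
```

Note how the second name never reaches DNS with a ß at all: nameprep maps it to "ss" first, which is exactly the compatibility issue discussed below.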
[ Slide: IDNA 2008 ]
scribe: about 3 years ago there
was an effort to revise it
... it updates to the latest version of unicode
... and makes the system unicode version independent
... but it invalidates certain urls
... uppercase letters are invalid
... it removes a class of symbols and punctuation characters
... and it makes certain classes of characters not equivalent to other expansions
... IDNA 2008 is not yet approved
[ Slide: ISSUES ]
scribe: this causes problems for
browser vendors
... which need to retain compatibility with pages using IDNA2003
... need to match user expectations
... it causes problems for search engine vendors
... need to match old and new browsers
... need to match old and new expectations
[ Slide: UTS46 - Compatibility "Bridge" ]
scribe: It enables everything
that was allowed in 2003 with the same behavior
... it allows the new stuff allowed from 2008
... it has different things for lookup/display
display: ß, lookup: ss
[ Slide: Compatibility for Transition ]
scribe: aimed at client SW, not
registries
... allows client SW to handle both 2003 and 2008
... consensus from browsers and search
... I'll send the slides to IanJ
IanJ: thank you
[applause]
IanJ: thank you for your good
work at unicode
... you mentioned hot controversies
Mark Davis: ...
scribe: there are
controversies
... I'll introduce Eric Vanderpool (?)
... Michel Suignard
... one of the coauthors of the IRI spec
... a key issue is the compat difference between the 2003 and 2008 spec
... we've been trying to walk a delicate line
... while not trying to stomp on the IETF toes
... because it's their spec
Diane: ... ...
... If I own one name, sparkasse-gießen.de
... can you squat on sparkasse-giessen.de?
Mark Davis: ...
scribe: you can't reserve
'sparkasse-gießen.de'
... it's like case
Doug: ...
... about the heart case (I❤NY.blogspot.com)
Mark Davis: That will resolve to an all lowercase version
scribe: If you were using a
browser that implemented IDNA 2008 strictly
... it will fail
questioner: ...
... so the issue about uppercase
... does that mean that you can't type?
Mark Davis: no...
scribe: it's limited to IDN cases
<dom> (♥NY.blogspot.com/ resolves to in my FF3.5)
Doug: what's the goal in not making it work?
Mark Davis: that's part of the controversy
<smfr> Safari goes to ♥ny.blogspot.com/
scribe: it was bad to show something that was other than what was being resolved to
Robin: so what's with the heart?
Mark Davis: well, symbols and punctuation look too close to other things
scribe: We dropped ~3000 such
characters
... We dropped ~4000 characters relating to ÖBB.at
... For a lot of us, this didn't really solve the problem
Doug: so it doesn't limit you to non mix ranged characters?
Mark Davis: ...
scribe: there are a number of
guidelines in Unicode TR #36
... the problem is that there are a number of cases where it's needed and customary
... such as mixing Japanese and Latin
... and distinguishing legitimate from others
... and over time, browsers are solving the problems
Rohit: ...
... regexps that pigeon parses urls ... will that break?
Mark Davis: ... you need to replace the dot with the character class
Rohit: are you familiar with the
ipv6 handling in urlbars
... and how long it took before it was implemented?
Mark Davis: most stuff users do should work
scribe: but sure the servers will break and have probably been broken since 2003
Tom: ...
... this is at which level?
Mark Davis: this is all handled at the browser level
scribe: punycode ... was adam
costello's pun
... as far as dns is concerned, it's all xn--...
... the routing/dns system doesn't see this
... the browsers basically get to agree with this standard
Richard: ...
<IanJ> [Richard Ishida]
Richard: what if i have a
heart
... what you've been describing is something which does a transformation of these strings
... if i understand this correctly
... you will continue to use this
... we've been working with all the browser representatives and search engine companies to handle this
xx-4: from hp
... you've alluded to phishing attacks
... what's the status
Mark Davis: ...
scribe: everyone has some
approach for dealing with this
... but it isn't consistent
... I think it's a bad idea to standardize too early
... there are a lot of holes before we come up with a cohesive solution
IanJ: thank you
[applause]
Leslie's slides (originals)
IanJ: Next Leslie Daigle @ ISOC
Leslie: ... talk/pres/discuss/...
[ Slide: The Internet - Evolution and Opportunity ]
[ Slide: The Internet Society ]
scribe: Founded in 1992
... 100 members, 90 chapters ...
... promoting/sustaining internet as a platform for innovation
[ Slide: Internet Evolution ]
scribe: Incremental changes
... seven layers
... independent building blocks
... flexible + responsive
... impossible to nail up a global deployment plan
[ Slide: An external pressure ... IP addresses ]
scribe: Running out of IPv4
addresses
... last allocation from IANA predicted for Oct 2011
<rohit> apologies for interrupting the scribe, but I wanted to share a link for the dumb-app-guy question I asked earlier:
scribe: last allocation to ISP
anywhere, predicted for Feb 2013
... Lots of IPv6 addresses
<rohit> -- an example of sw currently broken and the very first request from the devs was for a regex :)
scribe: it's not going to be an ipv6 internet before the last ipv4 address is handed out
<rohit> viz "valid_url() marks correct IDN domains as invalid"
scribe: more NATs
[ Slide: Implications above the IP Addressing Layer ]
scribe: IPAffinity breaks!
... a recent roundtable of industry leaders we held included reps from Google, Yahoo, Akamai, Netflix, and Comcast
... discussed impending impact on geolocation, geoproximity, management of distribution of copyrighted materials
...
... Multiple open streams breaks!
... sharing addresses => fewer ports => ajax apps have troubles
... poor performance of web pages, e.g. Google maps
... users see google maps tiling in slowly
... users don't blame the network, they blame the server
[ Slide: Responses to the IP addressing situation]
scribe: major ISPs and content
providers are including ipv6 in their current deployment
plans
... wireless broadband LTE has IPv6 baked in
[ Slide: Opportunities in the IP Addressing Situation ]
scribe: make sure your web
servers are ipv6 capable
... don't write ipversion specific apps
... with ipv6 you can imagine a world where everything is uniquely addressable
... example of problems ...
... Opera tries to outsmart OS
... if it finds ipv6 address it will use it
... whereas vista might know to fail over to an ipv4 tunnel
... but it can't because opera didn't let it decide
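One concrete reading of the "don't write IP-version-specific apps" advice above, sketched with Python's standard socket module (my example, not from the slides): resolve with AF_UNSPEC and let the resolver offer v4, v6, or both, instead of hard-coding an address family.

```python
import socket

def resolve(host, port):
    # AF_UNSPEC means "any family the host supports"; returns
    # (family, address) pairs for v4-only, v6-only, or dual-stack hosts.
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    return [(family, sockaddr[0]) for family, _, _, _, sockaddr in infos]

# Numeric addresses resolve without any network access.
v4 = resolve("127.0.0.1", 80)
v6 = resolve("::1", 80)
```

Connecting code can then iterate over the returned list in order and use the first family that works, rather than second-guessing the OS the way the Opera anecdote describes.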
[ Slide: Another external pressure - Unwanted Traffic ]
[ Slide: Responses to Unwanted Traffic ]
scribe: no final conclusion
[ Slide: Alternatives? ]
scribe: top down imposition of
security doesn't fit the Internet
... the internet is a "network of networks"
[ Slide: Security Tools Must Address Total Threat Model ]
[ Slide: Different Security Mechanisms Are Needed for Different Threats ]
[ Slide: Too Much Security Technology Impedes Usage, without Reducing Bad Behavior ]
[ laughter ]
[ Slide: One building block: DNSSEC ]
scribe: secure results from
domain name servers
... so you can be sure whatever you get back from dns is what the dns server intended to send you
... tamper proof packaging on dns responses
... this doesn't prevent phishing
... it doesn't encrypt the data in the response
[ Slide: DNSSEC opportunities ]
scribe: with DNSSEC you have a
better platform for confidence in dns
... dnssec is deploying rather slowly
... I've referred to these in other contexts as broccoli technologies
... you should eat it, but it's better with a cheese sauce
... but there's no cheese sauce
[applause]
Phillip (Cisco): ...
scribe: when do you think ISPs will deliver IPv6 connectivity?
Leslie: ...
... some soon in Europe, and a few maybe in the US
... I think it's fair to say of the service providers thinking about it
... they will have it deployed before *they* run out
... this is slightly better than before when it was like "yeah, we have it in our research lab"
Tom: you mentioned an ISP
... what specific one?
Leslie: of the list I have here, Comcast is the access one
Mark: Have you seen a hoarding of ipv4 addresses?
Leslie: I think some have
retirement plans by auctioning them off
... In principle the regional providers have fairly strict releases of addresses
... the problem is that we're going to run out
Doug: ...
... I asked about a tutorial
... I still think you need a tutorial
... with a tweetable domain name
Leslie: yeah, that'd be great
[laughter]
scribe: part of the challenge is
that everyone's problem is different
... at some point, we'll figure out the commonalities
... we're trying to get some of the ones who have done it in a room with some who haven't
... so they can share knowledge
dbaron: David Baron,
Mozilla
... in what would you like to do with DNSSEC
... none of this is speaking for mozilla
... there are certain things i'd like to be able to do with dnssec
... among them is putting public keys in dns
<shepazu> put i18n urls in Acid4 :)
dbaron: to avoid using a CA
... another is putting email autoconfig information into dns
... another is to put a default https flag
... to say "foo.com" should go to "" instead of ""
Leslie: thanks for thinking about
the questions
... in terms of where to go with them
... some of them are pursued within IETF
... particularly, some levels of email, e.g. domain keys
... to say "this is the server allowed to send email for this domain"
... so the IETF is the right place to go for a lot of this
Kevin: ...
... what would i like to do if i had lots of addressable points
... i'd like to setup servers on my own machines
... without proxies
Leslie: yes
... it'd be good if people would stand up and say that loudly
Kevin: we've seen this problem with real time flow
Diana: ...
... what would i do if i had lots of addresses
... what if i was a washing machine
... what if i was an animal owner
... and i put a chip in each one
... or a hospital owner
... and i wanted chips in patients
... i think ...
... I think we'll run out of ipv6 within 10 years
Tom: who is the definitive place for ipv6
Leslie: all of them
IanJ: thanks Leslie
[applause]
<scribe> ScribeNick: IanJ
Speaker: Kevin Marks on OpenID,
etc.
... open social web standards
<dom> [note that JessyInk provides similar effects as Prezi in SVG]
KevinMarks: How I got to this
point.
... the problem is Babel
... see the "Map of Online communities and related points of interest"
(one example:)
KevinMarks: Histogram your users...people use 12345 and 90210 when they are lying to you :)
<timeless>
KevinMarks: You have to give
people a reason to @@
... Open social builds on existing infrastructure
... Defining an API for your favorite programming language...as long as it's javascript.
Open social v0.8 enabled new client and programming models by adding server to server protocols.
<caribou> [original pic at]
KevinMarks: Over 1 billion users
of open social stuff.
... developing REST APIs.
... Challenge is to identify "me"...people accustomed to identify selves via HTTP URIs
... WebFinger(email) -> URI
... Next thing you want to know is "my friends"
... Portable contacts....vcard + some useful fields used by most of the social networks.
<smfr> vcard
KevinMarks: what we
do....(photos, etc.)
... the model underneath that is "feeds" but those were designed for blogs.
... Activity Streams codify "actions" (that were not part of feed design)
... Notion of "flow" ...atom pub (posting; and equivalent JSON APIs)...and newer: pubsubhubbub
... a way to get around the feed polling problem.
... you don't check the feed ever N cycles...you get a post when the feed changes.
... Salmon builds on those ideas....codifies "going back up the stream and down again"
... A big chunk of the challenge is to get delegated login.
... didn't get you that much...
... not much improvement to actual user experience.
... but now we have more to help solve form-filling problem.
... you can make a business case now for using the APIs rather than creating yet-another-UI
... we are starting to see this implemented
... you can delegate your logins to the site...will go to site and get not just user identify, but richer identity as well.
... not quite convergence, but we are trying to pull them together (from different site approaches)
... OAuth is a way of issuing tokens.
... you do an HTTP request; knows who you logged in as and your creds; gives you back things you have right to.
... replaces cookies; state management doing in RESTful fashion
... google and yahoo offer this for all their services; twitter likely to as well
... empirical standards (as we experienced with microformats)
... focused on agreement rather than completeness.
... "t-shirt not a suit"
... "good enough standard"
... example of portable contacts.
... we looked at social networks and what they have in common.
... activity stream stuff...we have enough social network sites...what actions are they taking that is common enough to build a vocabulary
[end of overview of the space]
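The WebFinger(email) -> URI step in the overview above was later standardized as RFC 7033; a minimal sketch of the discovery URL that protocol defines (the account name is a made-up example, and this builds the URL only — it does not perform the HTTP request):

```python
from urllib.parse import urlencode

def webfinger_url(account):
    # account looks like "alice@example.com"; the server at that domain
    # answers on the well-known path with the user's profile URI and links.
    domain = account.split("@", 1)[1]
    query = urlencode({"resource": "acct:" + account})
    return "https://" + domain + "/.well-known/webfinger?" + query

print(webfinger_url("alice@example.com"))
# https://example.com/.well-known/webfinger?resource=acct%3Aalice%40example.com
```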
scribe: ad hoc realm.
IJ: Have you taken some to IETF?
KevinMarks: Yes.
... We've set up foundations...but then created OWF.
... as a foundation factory...to do all the legal stuff that you have to do...so you could use this in other places...model was the Apache foundation...but to do for specs what Apache does for code.
... I've worked in video standards before...didn't seem in these cases to have the same patent thicket.
Dan Appelquist (Vodafone): How would you compare this approach to integrating social networks to one based on XMPP?
KevinMarks: Bits and pieces
around that.
... there's some overlap and some you can bridge through.
... a lot of this came from open social experience...and part was moving through their comfort zone.
... there's nothing stopping you from sending this over XMPP (as transport)
DanA: I ask because I have heard a view expressed -- isn't all of this retrofitting onto existing web sites something that could be done with a different approach?
KevinMarks: pubsubhubbub stuff
closest to xmpp...
... there's some similarity, but a lot was about web developers writing web stuff....but that is changing...
... I think a lot comes down to tastes.
... You can build bridges...there are also cultural tastes among programmers.
... for some, dynamic programming languages not scary, for others it may be.
DanA: In the social web incubator
group meeting we held this week, we spent a lot of time talking
... I'm a user on one social network; I want to create a friend connection to someone on another social network.
... how would you do that?
KevinMarks: When we defined open
social, it was with one site in mind.
... but we are now at the point where it's becoming more important....xfn in microformats.
... that works like crawling foaf works.
... many sites have mixes of public and private...you can't just use a crawler over public data.
<tantek> crawling XFN works like that today, using HTML <a href> today
KevinMarks: you need to be able to provide access control.
<tantek> OAuth provides the access control for private data
KevinMarks: there are still some
issues sorting out assertions from multiple parties.
... there may be some bindings I can make that you may not want to become public.
... we've punted on some of the stickiness...we addressed some issues first (such as "no more forms asking for personal data")
... the delegation part becomes important.
... about 2000 twitter apps now.
... because you can delegate into it the list of people you are interested in.
... we are trying to correlate patterns in various apps and get commonality.
timbl: When you want to aggregate
cal info there are two ways (1) go to a site or (2) run
something on your laptop that goes to fetch info.
... if you run on your laptop you don't have delegated authentication. No site knows everything about you.
... you don't give one site access to stuff, where another site might be confused about access boundaries.
... how do you see competition between cals on desktops and sites going in the future?
KevinMarks: I would hope we could
use the same protocols for both.
... I can't get a remote site to call me back on my laptop..I have to open the connection first.
... I have to do those things over a "hanging GET" from the browser.
... rather than opening ports to listen to things
... that militates towards going to the site.
timbl: if you are a native desktop app, you can open a port.
KevinMarks: It's a NAT problem.
(e.g., from a cafe)
KevinMarks: That's driving people
to web services that feed info through.
... services can use sms, email, other protocols.
... Once we can put up servers again (with ipv6), that will help.
timbl: I think a lot has been architected differently because of NAT.
KevinMarks: Bittorrent is arguably a layer that tries to game TCP.
rohit: a couple of the big open id scares (some since resolved) hover around this issue.
KevinMarks: A big chunk of this
is constraining delegation to what is "should be."
... may not be better, but is better than name/password and associated.
timeless++
<marie> timeless++
John: Efficient XML
Interchange
... Sometimes you need just the right problem to kick you to the next level of evolution.
... web is always evolving to new places.
... part of what EXI is meant to do is take the Web/XML to new places.
<KevinMarks> my prezi is at if you can pardon my Flash
John: XML has been wildly successful: communities, vendors, open source
thanks!
scribe: we want to make it easier
to use xml in environments with bandwidth/space
limitations.
... people wanted to be able to tap into communities and tools...30 or so binary xml technologies that popped up.
... diversity is good for evolution but not particularly good for interop.
... created EXI WG
... at first, nobody believed.
... we brought experts together...9 months later and 147 pages of analysis, found one!
[EXI Benchmarks]
scribe: lots of other specs
published at w3c...will give interop across a broad set of use
cases.
... a lot of the people behind this were the people previously competing...fracturing of marketplace going away.
... we are looking at one good solution.
... source is info theory and formal language theory
... the results are great:
- bandwidth utilization
- faster processor speeds
- greater battery life
scribe: simultaneously optimizes a lot of things
... we wanted to see how it compares to compression... lots of test cases... better in every case and faster
...if you compare to packed binary formats, it consistently beats those as well
very efficient way to share data in general.
[demo time]
real-world example: sending 1M of data to an aircraft
With EXI was 1 second.
Without EXI 2:23
...there is some processing time on the other end.
...but it's not compression...you process it faster on the other end, too
on larger blocks of data (typically 64 bits). The drawback to this type of system is that if the key is discovered, all messages can be decrypted.
In the .NET framework, there are classes derived from the
System.Security.SymmetricAlgorithm class in the
System.Security.Cryptography namespace that equip us to use symmetric encryption. Each of the classes uses a different algorithm for this purpose. These algorithms use a random initialization vector (IV) which makes the encrypted data different even when using the same source data. Generally speaking, the algorithm is more secure the larger its key size. Here are the different classes available:
With .NET, we can wrap a stream of data with the
CryptoStream. This gives us a very easy way of using symmetric encryption classes. If you wrap a
FileStream with the
CryptoStream, it will encrypt data as it's being written and decrypt it as it's being read. The main weakness of this type of system is the vulnerability of the one key.
Private Rijndael As New RijndaelManaged()
Dim keyFile As New FileStream("key.bin", FileMode.CreateNew)
keyFile.Write(Rijndael.Key, 0, Rijndael.Key.Length)
keyFile.Close()
Dim Transform As ICryptoTransform = Rijndael.CreateEncryptor()
Dim outFile As New FileStream("crypt.bin", FileMode.Create)
outFile.Write(Rijndael.IV, 0, Rijndael.IV.Length)
Dim cryptStrm As New CryptoStream(outFile, Transform, CryptoStreamMode.Write)
Dim writer As New StreamWriter(cryptStrm)
writer.Write(txtSource.Text)
writer.Flush()
cryptStrm.FlushFinalBlock()
writer.Close()
To decrypt, the CryptoStream would use a decryptor and read mode instead of write mode.
Asymmetric encryption uses a separate key for encryption and decryption. The decryption key is very hard to derive from the encryption key. The encryption key is public so that anyone can encrypt a message. However, the decryption key is private, so that only the receiver is able to decrypt the message. It is common to set up "key-pairs" within a network so that each user has a public and private key. The public key is made available to everyone so that they can send messages, but the private key is only made available to the person it belongs to.
In the .NET framework, there is a
RSACryptoServiceProvider class that supports this type of encryption. This class has a default key size of 1024 bits. Because this type of encryption does not use a stream, it is more cumbersome to use. Instead of being able to wrap a
FileStream, you have to encrypt data in small blocks. This type of system is generally used to encrypt keys, not entire messages, because asymmetric encryption is slow.
' Create the RSA provider used to encrypt the symmetric key and IV.
Dim RSA As New RSACryptoServiceProvider()
Dim outFile As New FileStream("crypt.bin", FileMode.Create)
Dim Rijndael As New RijndaelManaged()
Dim EncryptedKey() As Byte = RSA.Encrypt(Rijndael.Key, False)
Dim EncryptedIV() As Byte = RSA.Encrypt(Rijndael.IV, False)
outFile.Write(EncryptedKey, 0, EncryptedKey.Length)
outFile.Write(EncryptedIV, 0, EncryptedIV.Length)
Dim Transform As ICryptoTransform = Rijndael.CreateEncryptor()
Dim cryptStrm As New CryptoStream(outFile, Transform, CryptoStreamMode.Write)
Dim writer As New StreamWriter(cryptStrm)
writer.Write(txtSource.Text)
writer.Flush()
cryptStrm.FlushFinalBlock()
writer.Close()
Cryptography is a very robust field. This article tries to point out the advantages of combining different systems into one. In the current state of cryptography, the keys are the most important tools in keeping data secure. Keeping the private keys secure and large enough will make it very difficult to crack an encryption system.
Which of the 3 solutions you propose is the best in terms of preparing the
transition to 1.7 and roles?
And the easiest to understand for users?
How can we formulate the rules in terms of when an XML prefix such as
antcontrib: is required? When is it optional?
On the details side, I did not understand why you did not prefix
the <then/> tag with antcontrib in your original example a.
And in example b, I did not understand why you prefixed
the <then/> tag with antcontrib:.
Also, do you mean that everything in this regard would be clearer as soon
as we have roles?
Cheers,
Antoine
-----Original Message-----
From: peter reilly [mailto:peter.reilly@corvil.com]
Sent: Tuesday, 11 November 2003 10:15
To: Ant Developers List
Subject: Re: AW: Namespace support in ant 1.6
On Tuesday 11 November 2003 08:57, Antoine Levy-Lambert wrote:
> >From: peter reilly [mailto:peter.reilly@corvil.com]
> >Sent: Monday, 10 November 2003 19:21
> >To: Ant Developers List
> >Subject: Namespace support in ant 1.6
> >
> >
> >Hi,
> >I would like to get some movement on the outstanding issues
> >of ant 1.6.
>
> +1
>
> >One of the outstanding issues is what namespace to use
> >for nested elements of tasks. (Discovered by introspection
> >rules).
> >
> >The choices are:
> > a) Use the default ant namespace, this is the current rule.
> > b) Use the namespace of the enclosing task or type.
> > c) Allow either - let the build script author choose.
> >
> >Using the if task from ant-contrib and assuming a
> >project tag of <project xmlns: > as an example:
>
> Do <or/> and <equals/> in your example come from ant core ? I guess so,
but
> <then> comes from ant-contrib, no ?
The <or/> and <equals/> come from
org.apache.tools.ant.taskdefs.condition.ConditionBase
which is extended by
net.sf.antcontrib.logic.IfTask,
thus, as seen by introspection, they come
from the IfTask class, hence the "antcontrib" prefix.
Of course, in ant1.7 when roles get sorted out,
<or/> and <equals/> will be taskdef'ed elements as well as hardcoded
methods of ConditionBase.
Peter
> I would have expected your examples a and b to be like this :
>
> Choice a)
>
> <antcontrib:if>
> <or>
> <equals arg1="a" arg2="${x}"/>
> <antcontrib:ispropertytrue
> </or>
> <antcontrib:then>
> <echo>both conditions are true</echo>
> </antcontrib:then>
> </antcontrib:if>
>
>
> Choice b)
>
> <antcontrib:if>
> <antcore:or>
> <equals arg1="a" arg2="${x}"/>
> <antcontrib:ispropertytrue
> </antcore:or>
> <then>
> <antcore:echo>both conditions are true</antcore:echo>
> <then>
> </antcontrib:if>
>
>
> Cheers,
>
> Antoine
Fitting-Summation
We have shown you how to perform fitting with an integral using the NAG Library, and now you'll learn how to do that without calling NAG functions. In this tutorial, we will show you how to do integration by the trapezoidal rule and include the summation procedure in the fitting function.
Minimum Origin Version Required: Origin 8.0 SR6
We will fit the same model as the integral fit using NAG:
The difference is that we will perform the integration within the fitting function. Using the trapezoidal rule, we will first divide the curve into pieces and then approximate the integral area by multiple trapezoids. The precision of the result then depends on how many trapezoids will be used. Since this is a semi-infinite integration, we will set an increment (steps) and construct trapezoids from the upper integral limit, x, to the lower integral limit, negative infinity and then accumulate the area of these trapezoids. When the increment of the area is significantly small, we will stop the summation. Before doing the summation, you should guarantee that the function is CONVERGENT, or you should include a convergence check in your code.
Select Tools:Fitting Function Organizer or alternatively press the F9 key to open the Fitting Function Organizer and then define the function as follows:
Click the button (icon) beside the Function box to open Code Builder. Define, compile and save the fitting function as follows:
#pragma warning(error : 15618)
#include <origin.h>
// Subroutine for integrand
double f(double x, double A, double xc, double w)
{
return A * exp(-2*(x-xc)*(x-xc)/w/w) / w / sqrt(PI/2);
}
//----------------------------------------------------------
//
void _nlsfsummation(
// Fit Parameter(s):
double y0, double A, double xc, double w,
// Independent Variable(s):
double x,
// Dependent Variable(s):
double& y)
{
// Beginning of editable part
// Set the tolerance for stop integration.
double dPrecision = 1e-12;
// Initialization
double dIntegral = 0.0;
double dTrapezia = 0.0;
// Steps, or Precision.
double dStep = 0.01;
// Perform integrate by trapezoidal rule.
// Note that you should guarantee that the function is CONVERGENT.
do
{
// Trapezia area.
dTrapezia = 0.5 * ( f(x, A, xc, w) + f((x-dStep), A, xc, w) ) * dStep;
// Accumulate area.
dIntegral += dTrapezia;
x -= dStep;
}while( (dTrapezia/dIntegral) > dPrecision );
// Set y value.
y = y0 + dIntegral;
// End of editable part
}
We can use the same data to test the result.
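Outside Origin, the same trapezoidal-rule summation can be sketched in plain Python and checked against the closed form of this integral, y0 + (A/2)·(1 + erf(√2·(x − xc)/w)). The function and variable names below are my own, not from the tutorial:

```python
import math

def integrand(t, A, xc, w):
    # The tutorial's f(): a Gaussian normalized so its total integral is A.
    return A * math.exp(-2 * (t - xc) ** 2 / w ** 2) / (w * math.sqrt(math.pi / 2))

def cumulative(x, y0, A, xc, w, step=0.01, tol=1e-12):
    # Accumulate trapezoids from x down toward -infinity, stopping once a
    # trapezoid adds a negligibly small fraction of the running total
    # (mirroring the do/while convergence check in the Origin C code).
    total = 0.0
    while True:
        piece = 0.5 * (integrand(x, A, xc, w) + integrand(x - step, A, xc, w)) * step
        total += piece
        x -= step
        if piece <= tol * total:
            break
    return y0 + total
```

With A = 1, xc = 0, w = 1 the exact cumulative value at x = 0 is 0.5, which the sketch reproduces to well within the trapezoid-rule error for step = 0.01.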
The code for RaptorDB is
on CodePlex () under Git source control, I will be actively
maintaining and adding features to this project as I believe
very strongly in it. I will keep this article and the source
code in sync.
RaptorDB
fastJSON
fastBinaryJSON
miniLog4net
WAHBitArray
BitArrays
hOOt
RaptorDB the key value store
MGIndex
RaptorDB has been built with the following features in mind (the original feature list did not survive extraction):
The following limitations are in this release (the original list did not survive extraction):
There are a number of competing storage systems; a few that I have looked at are listed below:
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query bitmap done (ms) : 40.0023
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows fetched (ms) : 33.0019
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows count : 300
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query bitmap done (ms) : 1.0001
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows fetched (ms) : 469.0268
2012-04-29 12:38:40|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows count : 25875
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query bitmap done (ms) : 4.0002
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows fetched (ms) : 6.0003
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows count : 500
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query bitmap done (ms) : 0
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows fetched (ms) : 677.0387
2012-04-29 12:38:45|DEBUG|1|RaptorDB.Views.ViewHandler|| query rows count : 50000
It took me around a month of intense research, debugging and tinkering to get my head around the LINQ provider interface and how it works. The title of this section is a bit harsh, but I hope it conveys the frustration I felt at the time. To be fair, what emerged is very clean, concise and elegant. Admittedly it is only the Expression evaluation part of LINQ, a fraction of what you have to go through for a full LINQ provider, but it was all I needed for RaptorDB, so I will try to explain how it was done for anybody wanting to go further, as resources on this subject are very rare.
For RaptorDB we want a "where" clause parser in LINQ which will essentially filter the view data and give us the rows. This is done with the following command:
int j = 1000;
var result = db.Query(typeof(SalesInvoice),
(SalesInvoice s) => (s.Serial > j && s.CustomerName == "aaa")
);
The main part we are focusing on is the line:
(SalesInvoice s) => (s.Serial > j && s.CustomerName == "aaa")
From this we want to parse the expression, which reads: given the SalesInvoice type (used only for denoting the property/column names, and serving no other purpose), filter where the serial number is greater than j and the customer name is "aaa". From this the query engine must determine the "column names" used, fetch them from the index file, get the associated values from those indexes, and apply logical arithmetic to the results to get what we want. There are a couple of quirks in parsing LINQ queries which have to be worked around (for example, captured variables like j are wrapped by the compiler in generated closure classes, which is dealt with in VisitMember below).
In RaptorDB we want to be able to extract and query the index for each clause in the filter expression, based on the order and logic of the expression. Because the indexes are built on the WAHBitArray, the result will be a WAHBitArray. All this is done in the following very small piece of code (compared to writing a language parser):
delegate WAHBitArray QueryExpression(string colname, RDBExpression exp, object from);
internal class QueryVisitor : ExpressionVisitor
{
public QueryVisitor(QueryExpression express)
{
qexpression = express;
}
public Stack<object> _stack = new Stack<object>();
public Stack<object> _bitmap = new Stack<object>();
QueryExpression qexpression;
protected override Expression VisitBinary(BinaryExpression b)
{
this.Visit(b.Left);
ExpressionType t = b.NodeType;
if (t == ExpressionType.Equal || t == ExpressionType.LessThan || t == ExpressionType.LessThanOrEqual ||
t == ExpressionType.GreaterThan || t == ExpressionType.GreaterThanOrEqual)
_stack.Push(b.NodeType);
this.Visit(b.Right);
t = b.NodeType;
if (t == ExpressionType.Equal || t == ExpressionType.NotEqual ||
t == ExpressionType.LessThanOrEqual || t == ExpressionType.LessThan ||
t == ExpressionType.GreaterThanOrEqual || t == ExpressionType.GreaterThan)
{
// binary expression
object lv = _stack.Pop();
ExpressionType lo = (ExpressionType)_stack.Pop();
object ln = _stack.Pop();
RDBExpression exp = RDBExpression.Equal;
if (lo == ExpressionType.LessThan)
exp = RDBExpression.Less;
else if (lo == ExpressionType.LessThanOrEqual)
exp = RDBExpression.LessEqual;
else if (lo == ExpressionType.GreaterThan)
exp = RDBExpression.Greater;
else if (lo == ExpressionType.GreaterThanOrEqual)
exp = RDBExpression.GreaterEqual;
_bitmap.Push(qexpression("" + ln, exp, lv));
}
if (t == ExpressionType.And || t == ExpressionType.AndAlso ||
t == ExpressionType.Or || t == ExpressionType.OrElse)
{
// do bitmap operations
WAHBitArray r = (WAHBitArray)_bitmap.Pop();
WAHBitArray l = (WAHBitArray)_bitmap.Pop();
if (t == ExpressionType.And || t == ExpressionType.AndAlso)
_bitmap.Push(r.And(l));
if (t == ExpressionType.Or || t == ExpressionType.OrElse)
_bitmap.Push(r.Or(l));
}
return b;
}
protected override Expression VisitMethodCall(MethodCallExpression m)
{
string s = m.ToString();
_stack.Push(s.Substring(s.IndexOf('.') + 1));
return m;
}
protected override Expression VisitMember(MemberExpression m)
{
var e = base.VisitMember(m);
var c = m.Expression as ConstantExpression;
if (c != null)
{
Type t = c.Value.GetType();
var x = t.InvokeMember(m.Member.Name, BindingFlags.GetField, null, c.Value, null);
_stack.Push(x);
}
if (m.Expression != null && m.Expression.NodeType == ExpressionType.Parameter)
{
_stack.Push(m.Member.Name);
return e;
}
return e;
}
protected override Expression VisitConstant(ConstantExpression c)
{
IQueryable q = c.Value as IQueryable;
if (q != null)
_stack.Push(q.ElementType.Name);
else if (c.Value == null)
_stack.Push(null);
else
{
_stack.Push(c.Value);
if (Type.GetTypeCode(c.Value.GetType()) == TypeCode.Object)
_stack.Pop();
}
return c;
}
}
Most of the work is done in the VisitBinary method (for evaluating both logical operations [&& ||] and clauses [b>3]), so to distinguish the two a stack is used to store the clause values for further processing. VisitBinary will be called recursively for the left and right sides of expressions, so a stack of bitmaps is also required for aggregating the results of the expression.
The constructor takes a delegate, supplied by the caller, as a handle to the underlying indexes; this class calls it when a binary clause has been completely parsed, and the result is pushed onto the bitmap stack. The VisitMember method is responsible for replacing the compiler-generated code for constant values with the appropriate value (the j in the example above). The rest of the code is generally for extracting the "column names" without the prefixes (s.Serial -> Serial etc.).
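The two-stack scheme is easier to see in miniature. The sketch below (Python, with plain sets standing in for WAHBitArray bitmaps and a dictionary standing in for the index lookup delegate — all names are illustrative) walks a small expression tree the same way: each clause pushes a bitmap, and each and/or pops two bitmaps and pushes their intersection or union:

```python
# A stand-in for the index lookup delegate: each (column, op, value)
# clause maps to the set of matching row ids (a bitmap in RaptorDB).
INDEX = {
    ("Serial", ">", 100): {1, 2, 5},
    ("CustomerName", "==", "aaa"): {2, 3},
}

def query_index(col, op, val):
    return INDEX.get((col, op, val), set())

def visit(node, bitmaps):
    # node is ("and"/"or", left, right) or a clause tuple (col, op, val)
    if node[0] in ("and", "or"):
        visit(node[1], bitmaps)          # recurse into the left side
        visit(node[2], bitmaps)          # recurse into the right side
        right, left = bitmaps.pop(), bitmaps.pop()
        bitmaps.append(left & right if node[0] == "and" else left | right)
    else:
        bitmaps.append(query_index(*node))   # a clause becomes a bitmap

expr = ("and", ("Serial", ">", 100), ("CustomerName", "==", "aaa"))
stack = []
visit(expr, stack)
print(sorted(stack.pop()))  # -> [2]
```

When the walk finishes, the single bitmap left on the stack is the set of matching rows, just as in the C# visitor above.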
As you will see below, this is so easy and simple that it just happens; you don't need to learn anything new or worry about configuration, or about breaking things at runtime, as the compiler will catch your errors at compile time. A great feature is the total absence of anything SQL-related: no schema pains, and no switching to a database management product to define and check tables and columns, as everything is in your source file.
The first thing you should do is define your entities or data classes (an approach referred to as domain-driven development). These are plain C# (or VB.NET) classes, or POCOs, like the following:
public class LineItem
{
public decimal QTY { get; set; }
public string Product { get; set; }
public decimal Price { get; set; }
public decimal Discount { get; set; }
}
public class SalesInvoice
{
public SalesInvoice()
{
ID = Guid.NewGuid();
}
public Guid ID { get; set; }
public string CustomerName { get; set; }
public string Address { get; set; }
public List<LineItem> Items { get; set; }
public DateTime Date { get; set; }
public int Serial { get; set; }
public byte Status { get; set; }
}
There is nothing special about the above; notably, there is nothing extra you need to do, such as adding attributes (even Serializable), as they are not needed. Next you create your primary view for your entities as follows:
public class SalesInvoiceView : View<SalesInvoice> // create a view for the SalesInvoice type
{
public class RowSchema // define the schema for this view
{
public NormalString CustomerName; // CustomerName is a normal string index
public DateTime InvoiceDate;
public string Address;
public int Serial;
public byte Status;
}
public SalesInvoiceView()
{
this.Name = "SalesInvoice";
this.Description = "A primary view for SalesInvoices";
this.isPrimaryList = true;
this.isActive = true;
this.BackgroundIndexing = true;
this.Schema = typeof(SalesInvoiceView.RowSchema);
this.AddFireOnTypes(typeof(SalesInvoice));
this.Mapper = (api, docid, doc) =>
{
api.Emit(docid, doc.CustomerName, doc.Date, doc.Address, doc.Serial, doc.Status);
};
}
}
This is pretty straightforward too: the nested RowSchema class defines the columns of the view for the SalesInvoice type (the NormalString type marks how the CustomerName column is indexed, while plain int, string, decimal and similar properties map directly to columns), BackgroundIndexing lets the indexes be built in the background, AddFireOnTypes tells the view which document types it applies to, and the Mapper delegate uses the api object (which also offers helpers such as Fetch, log and Query) to Emit the column values for each document.
Registering a view is as simple as:
if (rap.RegisterView(new SalesInvoiceView()).OK == false)
{
Console.WriteLine("Error registering view");
return;
}
RaptorDB will do some checks on your view and, if everything is fine, the returned OK flag will be true, which means you are good to go.
Now you can use RaptorDB and save documents as follows :
var inv = new SalesInvoice()
{
Date = FastDateTime.Now,
Serial = i % 10000,
CustomerName = "me " + i % 10,
Status = (byte)(i % 4),
Address = "df asd sdf asdf asdf"
};
inv.Items = new List<LineItem>();
for (int k = 0; k < 5; k++)
inv.Items.Add(new LineItem()
{ Product = "prod " + k, Discount = 0, Price = 10 + k, QTY = 1 + k });
rap.Save(inv.ID, inv); // save to RaptorDB
Querying is as simple as writing LINQ predicates like the following:
var q = rap.Query(typeof(SalesInvoice), // call by the view type or the primary document type
(SalesInvoice s) => (s.Serial < j) && (s.Status == 1 || s.Status == 3));
q = rap.Query("SalesItemRows", // call by the view name
(LineItem l) => (l.Product == "prod 1" || l.Product == "prod 3"));
As you can see, you can call the query in two ways: by specifying the type of the view (or the document type, for primary views), or by the string name of the view.
In the image below you can see the test application doing its work. RaptorDB was configured to do background indexing, so 100,000 documents were inserted in 12 seconds and the primary view was populated (the query results show 500 items), while the background indexer works on populating the other view defined; after a couple of queries it shows the final result of 50,000 items.
Each row in a view is keyed by the document GUID; for example:
{ invoice GUID, date, invoice number, status, salesman name }
{ invoice GUID, date, total sales amount, total sales discounts, salesman name, customer name }
For RaptorDB to function, you must define your entities and views and register them, as shown above. The save process then follows these steps:
1. GetPrimaryListForType()
2. SaveData()
3. SavePrimaryView()
4. SaveInOtherViews()
5. FindMapFunctionsForType()
6. ExecuteMapFunction()
7. DeleteFromView(DocID)
8. InsertView(newData)
This is a work in progress; I will be happy if anyone wants to join in.
Some major features were added in this release so here they are:
In this version you can change the view schema or properties, and also add new views to existing documents and have the engine rebuild the view. This is controlled via a Version property in your view definition.
The responsibility of incrementing this version number is up to you; you can decide when to do so and when it would make sense, for example:

this.Version = 2;

RaptorDB will just check the version numbers and act accordingly.
A breaking change was the removal of the NormalString type in the schema of your view, replacing it with string plus a [FullText] attribute, which is much simpler and more user-friendly.
public class RowSchema // define the schema for this view
{
[FullText]
public string CustomerName; // CustomerName is a hOOt index
public DateTime InvoiceDate;
public string Address;
public int Serial;
public byte Status;
}
RaptorDB can now parse string LINQ queries and give you the results. This can be seen in the updated console application. You would probably want to stick to LINQ in your source code, but this might be useful if you need your users to generate filters in a UI, for example. This feature will be more prevalent in the server version, as LINQ does not serialize across boundaries.
An interesting feature is that you can get near-SQL syntax like:
var q = rap.Query(typeof(SalesItemRows),
    "product = \"prod 1\" or product = \"prod 3\"");
Here column names are not case-sensitive, and you can use a single '=' and 'or' instead of the C#-style '==' and '||', etc.
You can now give the view type to the Query function for querying.
var q = rap.Query(typeof(SalesItemRows),
    (LineItem l) => (l.Product == "prod 1" || l.Product == "prod 3"));
Some major features were added in this version:
The results of your queries will now return a list of the View.Schema objects; this allows client-side data binding and LINQ aggregate queries.
A Windows application project was added to showcase the data binding capabilities of RaptorDB. You can do the same things as in the console app, but with visual feedback. To query the views, just enter your view name and the query string in the text box and press Enter. In the menu a client-side sum has been added, which will give you the following results. You can do client-side aggregate queries like the following, which is very powerful:
var q = rap.Query(typeof(SalesItemRowsView), (LineItem l) => (l.Product == "prod 1" || l.Product == "prod 3"));
// grouping
List<SalesItemRowsView.RowSchema> list = q.Rows.Cast<SalesItemRowsView.RowSchema>().ToList();
var e = from item in list group item by item.Product into grouped
select new { Product = grouped.Key,
TotalPrice = grouped.Sum(product => product.Price),
TotalQTY = grouped.Sum(product => product.QTY)
};
The main point in the above is the Cast method, which gives you typed rows that you can sum over.
To help you write less code, you can use the api.EmitObject method in your mapper, which will match the given object's properties to the view schema column names; you must make sure the names match.
this.Mapper = (api, docid, doc) =>
{
if (doc.Status == 3 && doc.Items != null)
foreach (var item in doc.Items)
api.EmitObject(docid, item);
// instead of writing the following
//api.Emit(docid, item.Product, item.QTY, item.Price, item.Discount);
};
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/script/articles/articleversion.aspx?aid=375413&av=562645&fid=1708723&df=90&mpp=10&noise=1&prof=true&sort=position&view=none&spc=none&fr=11&pageflow=fixedwidth
First-class functions can be passed around a program just like any other type of value.
Now, when a function accepts another function as its argument, or it yields another function as its return value - or both - it is said to be a higher-order function. We actually already saw an example in the previous article, if you recall the Sieve of Eratosthenes exercise, which had this function in it:
private Predicate<Integer> notInNonPrimesUpTo(int limit) {
    var sieve = new HashSet<Integer>();
    for (var n = 2; n <= (limit / 2); n++)
        for (var nonPrime = (n * 2); nonPrime <= limit; nonPrime += n)
            sieve.add(nonPrime);
    return candidate -> !sieve.contains(candidate);
}
That function is returning a Predicate. A predicate is a function that yields a boolean value. This means that notInNonPrimesUpTo is a higher-order function: it builds the sieve and yields a function that tests whether a number is within the sieve or not.
We’ve seen other examples too. Do you remember map from part three? It takes a function and applies it to all the elements in an array, yielding another array. map is a higher-order function. So is filter, because it takes a predicate, tests it on every element of an array, and uses the result of the predicate to decide whether to keep the element or discard it. qsort is a higher-order function too, because it takes a comparator function and uses it to determine the order of any two elements in the array, without knowing the types of the elements. So the previous article was full of higher-order functions, and you shouldn't be intimidated by the term. It does not mean anything rarefied or exalted. You are almost certainly using some kind of higher-order functions regularly in your work. In fact, first-class functions are useless without higher-order functions to pass them into or return them from.
Function composition.
You'll hear about this a lot in the functional programming world. To compose two functions means to arrange them so that the result of one function is applied directly as the input of the other function. Your code is probably full of examples of this, but if the code is not structured so as to highlight this fact then you may not always notice. Functional programmers are always alert to notice when functions are arranged this way, because it allows the possibility of certain programming structures, which we will come to shortly. Programmers steeped in the functional style often find it useful to consider two composed functions as a third function in its own right. Let me explain what I mean by that.
Say you have a function f that takes a value x as its argument and returns a value y:
f(x) = y
and you have another function g that takes y as its argument and returns z:
g(y) = z
Clearly, then, you can apply g to the output of f like this:
g(f(x)) = z
This implies, therefore, that there is a third function h that maps x directly to z:
h(x) = z
Functional programmers would say that h is the composition of functions f and g. In Haskell this would be defined like this:
h = g . f
In Haskell, minimalism is prized as a virtue. In Clojure, which is rather more verbose, it would be defined like this:
(def h (comp g f))
Functional programming devotees tend to view function composition this way. Personally, I don't find the practice of explicitly naming composed functions like that especially useful. In particular I don't see any difference between the Clojure above and this:
(defn h [arg] (g (f arg)))
other than that the first example is slightly more concise. FP devotees like to wax lyrical about the power of function composition, while my own outlook is rather more prosaic.
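For the record, a compose helper is a one-liner in most languages with first-class functions; a Python sketch of h = g . f:

```python
def compose(g, f):
    # compose(g, f) builds h such that h(x) == g(f(x))
    return lambda x: g(f(x))

f = lambda x: x + 1
g = lambda y: y * 2

h = compose(g, f)
print(h(3))  # -> 8, since g(f(3)) == (3 + 1) * 2
```

Whether you name the composition h or just write g(f(x)) inline is, as discussed above, largely a matter of taste.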
Function composition as plumbing.
The idea of composing functions together is not novel. In 1964, Doug McIlroy wrote this in a memo:
We should have some ways of coupling programs like garden hose – screw in another segment when it becomes necessary to massage data in another way.
The idea Doug was getting at was later realised in Unix as pipes, probably the single feature that makes Unix shell scripting so powerful. Unix pipes are a system of inter-process communication; they can be created and used directly by processes via system calls, but they can also be created in the shell by using the | symbol, like this:
program1 | program2
The effect is to create a pipe that reads everything written to standard output by program1 and feeds it verbatim to program2 via its standard input. This means that you can chain programs together like building blocks to accomplish tasks that none of the programs can do by themselves. For example, if I wanted to find the top 3 largest Java programs in a directory by lines of code, I could do this:
wc -l *.java | grep \.java | sort -nr | head -n 3
  82 Book.java
  43 Isbn.java
  38 Genre.java
McIlroy put it this way:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together.
Replace “programs” with “functions” and you have the principle of composability.
Connascence of execution.
So, I think the value of writing functions that “do one thing and do it well” is pretty much self-evident, but it might not yet be clear why it is a good idea to write functions to be composable, i.e. to work together. You may have heard of connascence. Connascence is a way of describing things that are coupled to each other in various kinds of ways. There are many different types of connascence, including:
- Connascence of name - if the name of a thing is changed, other things must be renamed to match or the program will break. Usually function calls work by connascence of name. Modern refactoring IDEs can help you when renaming things by automatically updating all the other names that need to be changed to match.
- Connascence of type - two or more things must have the same type. In statically-typed languages this can usually be enforced by the compiler, but if you’re working in a dynamically typed language then you must take care to match up types by yourself.
- Connascence of meaning - also often referred to as “magic values”, this refers to things that must be set to specific values which have certain meanings and, if altered, will break the program.
- Connascence of execution - things must happen in a certain order, in other words, temporal coupling.
It is the last one which is important to us here. It is frequently critical in programming that things are done in a certain order:
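The kind of temporally coupled code being described can be sketched as follows (a reconstruction for illustration only; the Email and Mailer classes here are stand-ins, not the original listing):

```python
class Email:
    def __init__(self):
        self.sender = None
        self.recipients = []
        self.subject = ""
        self.body = ""

class Mailer:
    def send(self, email):
        # Whatever is on the email *at this moment* is what goes out.
        return (email.sender, tuple(email.recipients), email.subject, email.body)

email = Email()
email.sender = "walter@example.com"
email.recipients.append("thedude@example.com")
email.subject = "Proposal"
sent = Mailer().send(email)
email.body = "Let's go bowling"  # too late: the message already went out with an empty body
```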
In this code, an email object is created, then the sender, recipient and subject are set, then the email is sent. After the email has been sent, it then sets the email body. Almost certainly this is wrong, and the likely outcome is the email will be sent with an empty body. Slightly less likely, but an outcome that cannot be ruled out, is that setting the body on the email after it has been sent might cause an error. Either way it is bad.
But we can design things so that it becomes impossible to do things out of order:
mailer.send(emailBuilder()
    .sender("walter@example.com")
    .addRecipient("thedude@example.com")
    .subject("Proposal")
    .body("Let's go bowling")
    .build())
Since we need an email object to pass to mailer.send, we make it so that the only way to create one and set it up is to use the builder. We remove all setter methods on the email class, so that it is impossible to modify anything on the email after it has been built. Therefore the object that is passed to mailer.send is guaranteed not to be tampered with afterwards. The builder pattern seen above is a very common way to turn imperative operations into composable functions. You can use it to wrap things that aren’t in the functional style and make them seem like they are.
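The same build-then-freeze idea works outside Java too. In this illustrative Python sketch (the names are assumptions, not a real mail API) the built email is immutable, so nothing can be modified after build():

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    # frozen=True: once built, the email has no setters at all
    sender: str
    recipients: tuple
    subject: str
    body: str

class EmailBuilder:
    def __init__(self):
        self._sender, self._recipients = None, []
        self._subject, self._body = "", ""

    def sender(self, s):        self._sender = s; return self
    def add_recipient(self, r): self._recipients.append(r); return self
    def subject(self, s):       self._subject = s; return self
    def body(self, b):          self._body = b; return self

    def build(self):
        return Email(self._sender, tuple(self._recipients), self._subject, self._body)

email = (EmailBuilder()
         .sender("walter@example.com")
         .add_recipient("thedude@example.com")
         .subject("Proposal")
         .body("Let's go bowling")
         .build())
print(email.body)  # the body is guaranteed to be set before any send can happen
```

Attempting to assign to any field of the frozen email raises an error, which makes the out-of-order bug from the previous example impossible to write.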
The dread Monad.
When I first envisaged this series of articles, I thought I was not going to mention monads at all, but as it developed I realised that any discussion of the functional style would be incomplete without them. Moreover, monads sometimes turn up without announcing themselves. I struggled for a long time to understand the monad, and the explanations I found were quite unhelpful; I believe this is why monads have got their reputation for being hard to understand. I will try to explain the concept here in terms of code, which I hope will convey it clearly enough. As always, I have an example to illustrate the point: it is a little Java project that I use to try out ideas on, which implements a simple webservice API comprising a set of endpoints that pretend to serve a library. You can search for books with it, view their details, borrow and return them, etc. There is an endpoint to retrieve a book by its ISBN number, and its implementation looks like this:
public LibraryResponse findBookByIsbn(Request request) {
    try {
        Isbn isbn = isbnFromPath(request);
        Book book = findBookByIsbn(isbn);
        SingleBookResult result = new SingleBookResult(book);
        String responseBody = new Gson().toJson(result);
        return new LibraryResponse(200, "application/json", responseBody);
    } catch (IllegalArgumentException e) {
        return new LibraryResponse(400, "text/plain", "ISBN is not valid");
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        return new LibraryResponse(500, "text/plain", "Problem at our end, sorry!");
    }
}
I deliberately messed up this code a little for our purposes here - though it's still better than much code I have seen in the wild - so let’s critique it. I really don’t like the exception handlers here. They represent special cases, and one of the things I have learned through experience is that special cases are the enemy of clean code. They disrupt the flow of the program and they make ideal hiding places for bugs.
Exceptions bring their own evil with them, being essentially gotos in disguise, but worse still, only one of the exception handlers here is handling genuinely exceptional behaviour. The other is handling part of the API's specified behaviour. We'll come back to that in a moment.
Now, we don’t need to go into the details of the web framework being used here (it’s spark-java); suffice to say that all web frameworks can be configured to trap unhandled exceptions and return a preconfigured HTTP response when they happen. Different responses can be mapped to different exception classes: it would be appropriate to return the HTTP 500 response when a top-level Exception is thrown, so we can remove that catch block from the findBookByIsbn method.
On the other hand, the 400 response “ISBN is not valid” is due to invalid input from the client and is very much part of the specified API behaviour. The isbnFromPath method is throwing an IllegalArgumentException when the parameter value from the client does not match the right format for an ISBN number. This is what I meant by a disguised GOTO; it obscures the logic because it is not immediately obvious where the exception is coming from.
There is something more that seems to be missing entirely. What happens when findBookByIsbn does not find the book? That should result in an HTTP 404 response, and in use it does, so where did that happen? Examining findBookByIsbn we see the answer:
Book findBookByIsbn(Isbn isbn) {
    return bookRepository.retrieve(isbn)
        .orElseThrow(() -> Spark.halt(NOT_FOUND_404, BOOK_NOT_FOUND));
}
This makes things even worse! Here we're making use of a framework feature by which an exception encodes an HTTP 404 response within it. This is important control flow that is completely obscured in the endpoint implementation.
So what can we do about it? We could improve things by creating specific exception types for the different outcomes, but we would still be using exceptions as a means of control flow. Alternatively, we could rewrite the code not to depend on exceptions at all:
public LibraryResponse findBookByIsbn(Request request) {
    Isbn isbn = isbnFromPath(request);
    if (isbn.valid()) {
        Optional<Book> book = findBookByIsbn(isbn);
        if (book.isPresent()) {
            SingleBookResult result = new SingleBookResult(book.get());
            String responseBody = new Gson().toJson(result);
            return new LibraryResponse(200, "application/json", responseBody);
        } else {
            return new LibraryResponse(404, "text/plain", "Book not found");
        }
    } else {
        return new LibraryResponse(400, "text/plain", "ISBN is not valid");
    }
}
At least all the different execution paths are now present in the method. This code is hardly great either, although a better solution is hinted at by the findBookByIsbn method, which has been modified to return an Optional<Book>. That Optional type speaks to us: it says that the method may or may not return a book and that we must handle both eventualities, although Optional can be used far more neatly than it is there. What we need is a way to make it similarly explicit that isbnFromPath will return either a valid ISBN number or some kind of invalid request error.
Maybe it's valid, maybe it isn't.
In Haskell there is the Either type that lets you do exactly that, and it is frequently used for error handling. Either values may be either Left or Right and the programmer must deal with both. Conventionally, the Left constructor is used for indicating an error and the Right constructor for wrapping a non-erroneous value. Personally I’m not a fan of the use of “left” and “right” in this way: those words only have meaning to me in terms of spatial orientation. Anyway, Java has its own stereotypical construction for this kind of thing, which has been established by the Stream and Optional classes. We could create a MaybeValid type to wrap values that may be valid or not, and by designing it to resemble the built-in types we could cause the least astonishment:
interface MaybeValid<T> {
    <U> MaybeValid<U> map(Function<T, U> mapping);
    <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> mapping);
    T ifInvalid(Function<RequestError, T> defaultValueProvider);
}
The ifInvalid method is the terminating operation. It is meant to return the wrapped value in the case that it is valid, and the defaultValueProvider function will supply the value when it is not valid. We can conveniently provide separate implementations for valid values and invalid values, respectively:
public class Valid<T> implements MaybeValid<T> {
    private final T value;

    public Valid(T value) { this.value = value; }

    @Override
    public <U> MaybeValid<U> map(Function<T, U> mapping) {
        return new Valid<>(mapping.apply(value));
    }

    @Override
    public <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> mapping) {
        return mapping.apply(value);
    }

    @Override
    public T ifInvalid(Function<RequestError, T> unused) {
        return value;
    }
}
The key parts here are:
- ifInvalid returns the wrapped value rather than executing the supplied function.
- map applies the wrapped value to the mapping function and returns a new MaybeValid instance wrapping the mapped value.
- flatMap applies the mapping function and simply returns its result, which is already wrapped in a MaybeValid instance.
public class Invalid<T> implements MaybeValid<T> {
    private final RequestError error;

    public Invalid(RequestError error) { this.error = error; }

    @Override
    public <U> MaybeValid<U> map(Function<T, U> unused) {
        return new Invalid<>(error);
    }

    @Override
    public <U> MaybeValid<U> flatMap(Function<T, MaybeValid<U>> unused) {
        return new Invalid<>(error);
    }

    @Override
    public T ifInvalid(Function<RequestError, T> defaultValueProvider) {
        return defaultValueProvider.apply(error);
    }
}
The crucial differences are:
- The map and flatMap methods do not execute the mapping functions; they simply return another Invalid instance. The reason they have to create a new instance is that the wrapped type might change (from T to U).
- The terminating ifInvalid method uses the defaultValueProvider function to supply the return value.
- The default value provider receives the request error as its argument, in case it needs it in order to return the appropriate result.
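To see the two implementations working together, here is a compact, self-contained sketch. It is not the article's actual code: it trims the interface down to map and ifInvalid, assumes a minimal RequestError carrying just a status and body (the original's fields aren't fully shown), and uses Java 16+ records for brevity.

```java
import java.util.function.Function;

public class MaybeValidDemo {

    // Minimal stand-in for the article's RequestError (an assumption;
    // the real class is not shown in full).
    record RequestError(int httpStatus, String body) {}

    interface MaybeValid<T> {
        <U> MaybeValid<U> map(Function<T, U> mapping);
        T ifInvalid(Function<RequestError, T> defaultValueProvider);
    }

    record Valid<T>(T value) implements MaybeValid<T> {
        public <U> MaybeValid<U> map(Function<T, U> mapping) {
            return new Valid<>(mapping.apply(value));
        }
        public T ifInvalid(Function<RequestError, T> unused) {
            return value;
        }
    }

    record Invalid<T>(RequestError error) implements MaybeValid<T> {
        public <U> MaybeValid<U> map(Function<T, U> unused) {
            return new Invalid<>(error);  // mapping never runs
        }
        public T ifInvalid(Function<RequestError, T> defaultValueProvider) {
            return defaultValueProvider.apply(error);
        }
    }

    static String describe(MaybeValid<Integer> input) {
        // The same pipeline runs over both subtypes; only Valid executes the mappings.
        return input.map(n -> n * 2)
                    .map(n -> "result=" + n)
                    .ifInvalid(err -> "error " + err.httpStatus());
    }

    public static void main(String[] args) {
        System.out.println(describe(new Valid<>(21)));                             // result=42
        System.out.println(describe(new Invalid<>(new RequestError(400, "bad")))); // error 400
    }
}
```

Running both inputs through the identical chain shows the point of the pattern: the pipeline code never mentions validity at all.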
All of this means that we need to wrap the isbnFromPath method in order to return a MaybeValid instance:
MaybeValid<Isbn> maybeValidIsbn(Request request) {
    Isbn isbn = isbnFromPath(request);
    return isbn.valid()
            ? new Valid<>(isbn)
            : new Invalid<>(new RequestError(400, "ISBN is not valid"));
}
And we must give a similar treatment to findBookByIsbn:
MaybeValid<Book> maybeValidBook(Isbn isbn) {
    return findBookByIsbn(isbn)
            .map(book -> new Valid<>(book))
            .orElseGet(() -> new Invalid<>(new RequestError(404, "Book not found")));
}
Please note that RequestError is not an exception; it does, however, contain an HTTP status code, therefore this code must live in the application component that is dealing with HTTP requests and responses. It would be inappropriate for it to live anywhere else: in a service class, for example.
Now we can rewrite the endpoint like this:
public LibraryResponse findBookByIsbn(Request request) {
    return maybeValidIsbn(request)
            .flatMap(isbn -> maybeValidBook(isbn))
            .map(book -> new SingleBookResult(book))
            .map(result -> new Gson().toJson(result))
            .map(json -> new LibraryResponse(200, "application/json", json))
            .ifInvalid(error -> new LibraryResponse(error.httpStatus(), "text/plain", error.body()));
}
Some of the lambdas could be replaced with method references but I left them as they are to bear the closest resemblance to the original code. There are other possibilities for further refactoring too. But notice how it reads clearly now as a sequence of chained operations. This is possible because the original was indeed a chain of composable functions: the return value from each function was passed as the sole argument to the next. The use of higher-order functions has allowed us to encapsulate the logic pertaining to validation errors inside the MaybeValid subtypes. In the library service there are several endpoints with requirements similar to this, and the MaybeValid class could be used to simplify all of them.
So what about the monad...?
I mentioned the dread word “monad” earlier, and you've probably guessed that MaybeValid is one, otherwise I wouldn’t have brought it up. So what is a monad exactly? First we need to clear one thing up, because you may have heard the word in the context of a “monadic function” - this is a completely different usage. It means a function with one argument (a function with two arguments is dyadic, and one with three arguments is triadic, etc.); this usage originated in APL and it has nothing to do with what we're talking about here. The monad we are talking about is a design pattern.
Doubtless you are already familiar with design patterns. The ones you already know, like Strategy, Command, Visitor etc. are all object-oriented design patterns. Monad is a functional design pattern. The Monad pattern defines what it means to chain operations together, enabling the programmer to build pipelines that process data in a series of steps, just like we have above:
- Retrieve the ISBN number from the request (may be invalid, i.e. wrong format).
- Look up the book by its ISBN number (may be invalid, i.e. not found).
- Create a SingleBookResult DTO from the retrieved book.
- Map the DTO to a JSON string.
- Create a LibraryResponse with status 200 containing the JSON.
Each step may be ‘decorated’ with the additional processing rules provided by the monad. In our case, the additional rules are:
- The step actions are only to be performed when the value is valid.
- When the value is invalid then the error is passed along instead.
The terminating operation ifInvalid makes the final decision about what to return: it returns the wrapped value if it is valid, otherwise it uses the supplied default value provider to build a suitable response from the client request error.
A formal definition.
More formally, the monad pattern is usually defined as an assemblage of the following three components, which together are known as a Kleisli triple:
- A type constructor that maps every possible type to its corresponding monadic type. This wording does not make much sense in Java. To understand it, think of generic types, e.g. Isbn → MaybeValid<Isbn>.
- A unit function that wraps a value of the underlying type in an instance of the corresponding monadic type, e.g. new Valid<Isbn>(isbn).
- A binding operation that takes a function and applies it to the underlying value. The function returns a new monadic type, which becomes the return value of the binding operation, e.g. map(book -> new SingleBookResult(book)), which yields a MaybeValid<SingleBookResult>.
If you have these three components, you have a monad.
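The same three components can be spotted in a type you already use every day. Here is a small sketch using java.util.Optional, where the generic type itself is the type constructor, Optional.of plays the unit role, and flatMap is the binding operation. The parseLength helper is a made-up function purely for illustration.

```java
import java.util.Optional;

public class KleisliTripleDemo {

    // A Kleisli-style function: from a plain value to a monadic value.
    static Optional<Integer> parseLength(String isbn) {
        return isbn.isEmpty() ? Optional.empty() : Optional.of(isbn.length());
    }

    public static void main(String[] args) {
        // 1. Type constructor: the generic type maps String -> Optional<String>.
        // 2. Unit function: wraps a plain value in the monadic type.
        Optional<String> unit = Optional.of("9780134757599");

        // 3. Binding operation: applies a function that itself returns the
        //    monadic type, e.g. String -> Optional<Integer>.
        Optional<Integer> bound = unit.flatMap(KleisliTripleDemo::parseLength);

        System.out.println(bound); // Optional[13]
    }
}
```

The point of the exercise: once you can name the unit and the bind of a type, you know it is a monad, whatever the class happens to be called.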
I heard Monads are all about encapsulating side-effects.
If you first came across the Monad pattern while learning Haskell, then most likely you would have learnt about it in the shape of the I/O Monad. The Haskell tutorial on I/O literally advises you not to worry about the Monad part for now, and that you don't need to understand it in order to do I/O. Personally, that would just have the effect of making me worry more. Probably because of this, people who learn Haskell often think that the purpose of a Monad is to encapsulate side-effects such as I/O. I'm not going to disagree, since I cannot comment on that, but I have not come to understand the Monad pattern that way.
In my view, a Monad wraps a typed value (of any type) and maintains some additional state separately from the wrapped value. We have seen two examples here. In the case of the Optional monad, the additional state is whether or not the value is present. In the case of the MaybeValid monad, it is whether or not the value is valid, plus a validation error in the case that it is not. Notice that there are two types here: the monadic type (e.g. Optional) and the wrapped type.
You can supply the Monad with a function that operates on the wrapped value. Whatever the type of the wrapped value is, the function's argument must match it. The Monad will pass its wrapped value to the function and will yield a new Monad, of the same monadic type, encapsulating the value returned by the function. This is called a “binding operation”. The wrapped type of the new Monad may be different and that is fine. For example, if you have an Optional wrapping a Date, you may bind a function that maps a Date to a String and the result will be an Optional wrapping a String. If there is some functionality associated with the Monad's additional state, the Monad handles it as part of the binding operation. For example, when you pass a function to an empty Optional, the function will not be executed; the result is another empty Optional. In this way, you can call a chain of composed functions in sequence, morphing from type to type, all within the context of the Monad.
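That short-circuiting behaviour is easy to verify directly. In this sketch (a hypothetical example, not from the article) a counter records whether the Date-to-String mapping function ever ran:

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

public class EmptyOptionalDemo {

    static final AtomicInteger CALLS = new AtomicInteger();

    static Optional<String> mapToString(Optional<java.util.Date> maybeDate) {
        // The mapping function morphs Date -> String, but it only runs
        // when a value is actually present.
        return maybeDate.map(d -> {
            CALLS.incrementAndGet();
            return d.toString();
        });
    }

    public static void main(String[] args) {
        Optional<String> none = mapToString(Optional.empty());
        Optional<String> some = mapToString(Optional.of(new java.util.Date(0)));

        System.out.println(none.isPresent()); // false - function never executed
        System.out.println(some.isPresent()); // true  - function executed once
        System.out.println(CALLS.get());      // 1
    }
}
```

The empty Optional silently skips the mapping, which is exactly the monadic "additional rule" being applied during the binding operation.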
Finally, the Monad provides a means for you to handle the value, taking account of the additional monadic state, in whatever manner is appropriate given the context of your program. The appropriate behaviour is, naturally, handled using first-class functions. The other functions used in the binding operations are thus decoupled from the additional state maintained in the Monad and freed from all responsibility for dealing with it.
In other words, the Monad provides another tool in your box for creating abstractions, helping you to reduce the global complexity of your programs.
Next time.
In the next article we will continue our investigation of higher-order functions. We will take a look at currying, and how, despite seeming on the face of it very arcane, in fact it is very useful. To do this we will solve an exercise in Clojure, which will be a rather more involved exercise than the others we have seen in this series so far. We will go through it step by step and get a glimpse of the power of REPL-driven development.
Part 3 - First-Class Functions I: Lambda Functions & Map
Part 4 - First-Class Functions II: Filter, Reduce & More
Part 5 - Higher-Order Functions I: Function Composition and Monads
Part 6 - Higher-Order Functions II: Currying
Part 8 - Persistent data structures
Hi Zach,
I checked the QuantizeGraph pass and I think probably it can benefit from a
CSE pass to eliminate additional quantize/quantize_v2 nodes. Having said
that, I think it may still be an overkill to add another NNVM pass to have
a generic common subexpression elimination pass. Currently, this
elimination logic takes only an additional 3 to 6 lines of code in each of
the two NNVM passes. Also, a generic common subexpression elimination has
its own associated maintenance costs. I think it is better to continue with
the current approach and revisit this need in the future as we add more
NNVM passes.
Anirudh
On Mon, Apr 29, 2019 at 2:22 PM Anirudh Subramanian <anirudh2290@gmail.com>
wrote:
> Hi Zach,
>
> You raise an interesting point. Thank you for the pointer!
>
> Incorporating CSE pass comes with its own cost, and the advantage it
> brings is to make the ReducePrecision nnvm pass more lightweight. Since the
> amortized cost of the ReducePrecision pass is O(1) it shouldn't matter much
> whether we add it or not from performance point of view.
>
> From maintenance point of view, I would agree that separating these two
> logics can be helpful if we have other such workflows which require the
> original Pass followed by CSE pass. Currently, as far as I know only the
> ReducePrecision pass using it. I will check to see if CSE pass can benefit
> other NNVM pass also like quantization pass apart from ReducePrecision, and
> will get back.
>
> Anirudh
>
> On Mon, Apr 29, 2019 at 11:18 AM Zach Kimberg <zachary.kimberg@gmail.com>
> wrote:
>
>> I have one suggestion. In the current design, there are the additional
>> maps from each input entry to each target casted entry dtype in order to avoid
>> creating duplicate casts. Instead of creating these, another option is to
>> use a general purpose Common Subexpression Elimination (CSE) [1] pass to
>> apply afterwards. So, you would run the mixed precision pass which creates
>> the duplicates and then the CSE pass which would remove all duplicates.
>>
>> This design is common in existing compilers like LLVM because maintaining
>> and testing the passes is much easier when they are kept as simple as
>> possible. The CSE can also be reused as necessary for other passes that
>> could create duplicates or to remove duplicate expressions in general.
>> This tutorial [2] talks about it a bit.
>>
>> Zach
>>
>> [1] -
>> [2] -
>>
>> On Mon, Apr 29, 2019 at 9:26 AM Anirudh Subramanian <
>> anirudh2290@gmail.com>
>> wrote:
>>
>> > Hi Tao,
>> >
>> > Thanks for raising this question! I thought about the existing
>> > quantization workflow and whether it can be included with the AMP API.
>> > Although quantization can be considered as mixed precision, there are
>> > differences. For example, only a small number of operators can be
>> > quantized compared to the operators that can run in FP16 precision.
>> > Thus, overriding the operators to run in original dtype vs target dtype
>> > doesn't make much sense for quantization.
>> >
>> > Also, the quantization workflow may require a calibration dataset to
>> > calibrate the min and max, and a calib_mode.
>> > Arriving at a common API for quantization with calibration and mixed
>> > precision inference (FP16 and BF16) may make the API too complicated and
>> > not very easy to use. I understand that this may cause some confusion as
>> > people may try to use a target_dtype of int8, but I think it's still
>> > better than causing user confusion with the API usage.
>> >
>> > Also, when we move the quantize_model APIs outside contrib we can
>> > consider adding them under the AMP namespace. The challenge would then
>> > be to educate users on the difference between "quantize" and "convert".
>> >
>> > Anirudh
>> >
>> > On Mon, Apr 29, 2019 at 7:45 AM Lv, Tao A <tao.a.lv@intel.com> wrote:
>> >
>> > > Thank you for the explanation. Sorry I didn't realize the proposal is
>> > > for inference only.
>> > >
>> > > Then how do you think the amp_cast and amp_multicast in this proposal
>> > > can work with the existing INT8 quantization workflow, which I think
>> > > should also be considered as 'mixed precision'?
>> > >
>> > > -----Original Message-----
>> > > From: Anirudh Subramanian [mailto:anirudh2290@gmail.com]
>> > > Sent: Monday, April 29, 2019 10:25 PM
>> > > To: dev@mxnet.incubator.apache.org
>> > > Subject: Re: Proposal for Conversion from FP32 to Mixed Precision
>> > > Models
>> > >
>> > > Hi Tao,
>> > >
>> > > The APIs proposed, "convert_model" and "convert_block", are mainly for
>> > > inference use cases, where customers bring an FP32 model to convert it
>> > > to a mixed precision model to get improved performance while not
>> > > losing out on the accuracy.
>> > > The PR: is supposed
>> > > to handle the training use cases and this proposal doesn't cover the
>> > > AMP feature added in the PR. I think ptrendx@ and canoerst@ are better
>> > > equipped to answer questions 1 and 2.
>> > >
>> > > > - more generally, what will be saved when users want to serialize
>> > > > their model to disk?
>> > >
>> > > Let's say users want to save a converted mixed precision model used for
>> > > inference to disk. It will save both the symbol, with the amp_cast and
>> > > amp_multicast operators, and the params (which are casted if
>> > > necessary).
>> > >
>> > > Anirudh
>> > >
>> > >
>> > > On Mon, Apr 29, 2019 at 6:55 AM Lv, Tao A <tao.a.lv@intel.com> wrote:
>> > >
>> > > > Thank you for sharing this, Anirudh.
>> > > >
>> > > > Curious to know:
>> > > > - what will be saved in a training checkpoint or snapshot? Can it be
>> > > > resumed on another platform which might not support the lower
>> > > > precision the previous one used?
>> > > > - what will be saved in the final symbol.json and params file when
>> > > > training is finished?
>> > > > - more generally, what will be saved when users want to serialize
>> > > > their model to disk?
>> > > >
>> > > > Thank you,
>> > > > -tao
>> > > >
>> > > > -----Original Message-----
>> > > > From: Anirudh Subramanian [mailto:anirudh2290@gmail.com]
>> > > > Sent: Monday, April 29, 2019 7:00 PM
>> > > > To: dev@mxnet.incubator.apache.org
>> > > > Subject: Proposal for Conversion from FP32 to Mixed Precision Models
>> > > >
>> > > > Hi all,
>> > > >
>> > > > I have created a doc for conversion from FP32 to Mixed Precision
>> > Models:
>> > > >
>> > > >
>>
>> > > > +to+Mixed+Precision+Models
>> > > >
>> > > > I look forward to your feedback on the same.
>> > > >
>> > > > Thanks,
>> > > > Anirudh
>> > > >
>> > >
>> >
>>
>
Five Common Daylight Saving Time Antipatterns of .NET Developers
It's 2015, and Daylight Saving Time is just around the corner. For most of North America, the clocks will "spring-forward" on Sunday, March 8th, stealing an hour of precious time from our daily routine. A few weeks later, the same thing will occur in much of Europe on Sunday, March 29th. Shortly after that, our friends in Australia will "fall-back" on Sunday, April 5th, gaining an hour back (lucky bastards!).
There are actually many changes throughout the year, and not all occur at the same time of day. Refer to this page for precise details of the upcoming DST transitions.
In this article, I'll highlight some of the most common mistakes .NET developers make in their code that might blow up when daylight saving time hits. These are the sort of things that could set off alerts in the middle of the night, or have other more drastic consequences. Wouldn't you rather sleep soundly, knowing that you've accounted for DST properly? Then read on!
#1 - Measuring Human Activity
Many applications track the actions of human beings. These include productivity trackers, health and fitness monitors, workforce management applications, security applications, and many others. If you're timing when something occurred and using that time in some sort of business logic, then you need to pay close attention.
Antipattern
public void RecordStartTime()
{
    foo.StartDateTime = DateTime.Now; // Wheee! I got a time!
    db.Save(foo);
}

public void RecordStopTime()
{
    foo.StopDateTime = DateTime.Now; // Wheee! I got another time!
    db.Save(foo);
}

public TimeSpan GetTotalTime()
{
    return foo.StopDateTime - foo.StartDateTime; // Math all the things!
}
You may see this manifest in a few other ways, but the key smells are:
- Usage of DateTime.Now to record an activity time.
- Really, any usage of DateTime.Now is a smell, especially in a web app.
- Calculation of StopDateTime - StartDateTime using local time values.
- Or any place math is done with DateTime values that have their .Kind property set to either DateTimeKind.Local or DateTimeKind.Unspecified.
Consequences
In the spring, when the clock shifts forward, an hour is lost on the clock. If you don't account for this, you will compute an extra hour when the start and stop times encompass the transition. (Bonus!)
In the fall, when the clock shifts backward, an hour is repeated on the clock. If you don't account for this, you could compute up to an hour less than the actual elapsed time. (Bummer!)
As an example, consider a timekeeping application for hourly employees. A worker who takes the night shift might clock in at 10:00 PM and clock out at 6:00 AM. On most days that is 8 hours, but if the clocks spring forward during that time then only 7 hours have actually passed. When the clocks fall back, then 9 hours have actually passed. If you don't account for DST, then you will either overpay the employee or rob them of an hour's work. This has some serious ramifications when you consider overtime laws.
Remedy
Choose one of the following approaches:
- If the local time is unimportant in your application, then record the activity with respect to UTC instead of the local time. Daylight saving time does not occur in UTC.
foo.StartDateTimeUtc = DateTime.UtcNow; ... foo.StopDateTimeUtc = DateTime.UtcNow;
- If the local time is important, then record values using a DateTimeOffset instead of a DateTime. The offset will keep track of how the time is related to UTC, which will change depending on whether DST is in effect or not.
foo.StartDateTimeOffset = DateTimeOffset.Now; ... foo.StopDateTimeOffset = DateTimeOffset.Now;
- If the user could be in some other time zone, you should take that into account when you determine the current time.
private DateTimeOffset GetCurrentTime(string timeZoneId)
{
    TimeZoneInfo tzi = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
    DateTimeOffset now = TimeZoneInfo.ConvertTime(DateTimeOffset.UtcNow, tzi);
    return now;
}
foo.StartDateTimeOffset = GetCurrentTime(theUsersTimeZoneId); ... foo.StopDateTimeOffset = GetCurrentTime(theUsersTimeZoneId);
#2 - Measuring Computational Activity
If you haven't noticed, computers are a bit different than people. In particular, they tend to be more precise. Sometimes we want to know how long they take to do things, so we might write some code like this:
Antipattern
DateTime start = DateTime.Now;

// ...do some work...

DateTime stop = DateTime.Now;
TimeSpan elapsedTime = stop - start;
Consequences
If this code happens to be "doing the work" when the clock shifts forward or backward, what kind of results do you think we might get? (hint: not good ones!)
Even without DST there are problems. You see, the system clock just isn't all that precise! Just because DateTime has 7 decimal places worth of "ticks" doesn't mean your computer can actually keep time with that level of precision. In most cases, it's closer to about 10 milliseconds worth of accuracy. If you want more precision than that, you'll need a hardware solution.
Let's not forget about clock drift. Your computer's clock is always ever-so-slightly falling out of alignment. Periodically, the operating system will sync with a network time server to correct itself. These corrections are usually on the order of a few milliseconds to a few seconds, and they happen quite frequently. If you're timing things with DateTime, who knows what kind of results you will get!
Remedy
Within a single unit of work, it's quite simple to use the System.Diagnostics.Stopwatch class.
using System.Diagnostics;
...
Stopwatch stopwatch = Stopwatch.StartNew();

// ...do some work...

stopwatch.Stop();
TimeSpan elapsedTime = stopwatch.Elapsed;
A lesser known fact about the Stopwatch class is that it can also be used even when you're not in a single unit of work. In fact, you don't even need to be in the same thread or in the same process! As long as you are comparing values on the same computer, this will work:
using System.Diagnostics;
...
long start = Stopwatch.GetTimestamp();

// ...do some work, even on another thread or process...

long stop = Stopwatch.GetTimestamp();
long elapsed = stop - start; // Maths are ok here.
TimeSpan elapsedTime = TimeSpan.FromSeconds(elapsed * (1.0 / Stopwatch.Frequency));
The Stopwatch class uses your computer's CPU for timing, through something called a QueryPerformanceCounter. You can read about QPCs in all their glory, here (A must read for time geeks.)
The only gotcha is that on some classes of older processors while running pre-Vista operating systems (read "WinXP"), there can be inconsistencies when comparing QPCs from different threads. Hopefully that's not you, but you can read more in the geeky fine print.
#3 - Using the TimeZone Class
Antipattern
Any use of the TimeZone class. For any reason, whatsoever. Period.
No, really, I'm serious. Just don't.
Consequences
The System.TimeZone class has two major flaws:

- It can only be used for the local time zone of the computer it is running on.
- If your users might be in some other time zone, it's useless.
- It is only aware of the current set of daylight saving time rules for that time zone.
- If you store data, then later retrieve it and pass it through TimeZone, you might be using the wrong set of DST rules.
- Ok, perhaps this isn't such a big deal for 2015 in the USA, since the last time we changed DST rules was in 2007. But did you know that time zones and DST rules often change for other time zones all over the world? Multiple updates are made every year. Think globally, people!
Remedy
Anything TimeZone can do, TimeZoneInfo can do better. And anything TimeZoneInfo can do, Noda Time can do better than that!

The only excuse for having System.TimeZone in your project is if you're stuck in .NET 2.0. Even then (God help you) there are better alternatives, such as TZ4NET.

Really, I don't want to hear any complaining when you try to move to .NET Core and find TimeZone has been removed. Just switch.
#4 - Field Validation
Many applications perform little or no validation when it comes to taking date and time values from their user interfaces.
Antipattern
DateTime dt = DateTime.Parse(someUserInputString);
SaveItOrCallSomeBusinessLogicWithoutAnySortOfValidation(dt);

(or ParseExact, TryParse, or TryParseExact variants - and I'm not even going into globalization issues...)
Consequences
Just because the input fits in a DateTime doesn't mean it's a valid date time.

- The value might be out of range (duh!)
- Especially if you are fitting it into a SQL datetime column, since it doesn't support years before 1753. (hint: use a datetime2)
- But more on point, the value might not exist within the target time zone, due to the DST spring-forward transition.
- And don't forget that in the fall-back DST transition, the value might exist twice. Do you know which one to use?
Consider the following graph of US Pacific Time on March 8th, 2015. If your user supplies 2:30 AM - that time doesn't exist on this day!
Now consider the following graph of US Pacific Time on November 1st, 2015. If your user supplies 1:30 AM - that time exists twice on this day. Once at 8:30 UTC, and again at 9:30 UTC.
Remedy
Validate your inputs!
- Always check for a reasonable range.
- If your user enters an invalid date, alert them. Ask them to enter a valid one.
- If your user enters an ambiguous date, prompt them to choose which value they meant.
public bool ValidateDateTime(DateTime dt, TimeZoneInfo tzi)
{
    if (dt < YourMinDateTime || dt > YourMaxDateTime)
        throw new ArgumentOutOfRangeException(); // catch and present an error dialog

    if (tzi.IsInvalidTime(dt))
        throw new ArgumentException(); // catch and present an error dialog

    if (tzi.IsAmbiguousTime(dt))
        return false; // present the user with a choice of daylight time or standard time

    return true; // all is good!
}
But what if the user isn't present?
That's unfortunate, but understandable. Perhaps you're calculating the time for a daily job to run. It needs to run at the same time every day, but what do you do when that time is invalid or ambiguous?
- Have a plan. If you don't check, who knows what will happen.
- Here's a good plan if you don't know what else to do:
public DateTime ComputeNextUtcTimeToRun(int localHour, int localMinute, TimeZoneInfo tzi)
{
    // Get the current time in the target time zone
    DateTime now = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tzi);

    // Compute the next local time
    DateTime next = new DateTime(now.Year, now.Month, now.Day, localHour, localMinute, 0);
    if (next < now)
        next = next.AddDays(1);

    // If it's invalid, advance. The clocks shifted forward, so might you!
    if (tzi.IsInvalidTime(next))
        return TimeZoneInfo.ConvertTimeToUtc(next.AddHours(1), tzi);

    // If it's ambiguous, pick the first instance. Why not - You have to pick something!
    if (tzi.IsAmbiguousTime(next))
    {
        TimeSpan offset = tzi.GetAmbiguousTimeOffsets(next).Max();
        DateTimeOffset dto = new DateTimeOffset(next, offset);
        return dto.UtcDateTime;
    }

    // It's safe to use as-is.
    return TimeZoneInfo.ConvertTimeToUtc(next, tzi);
}
BTW - if you're using Noda Time, this is what we call a ZoneLocalMappingResolver - because we're fancy like that.
#5 - Reporting
Like many systems, yours probably runs some kind of daily or weekly reports. Are you considering daylight saving time when you evaluate the results?
Antipattern
Assuming that all days have 24 hours.
Consequences
- On the day of the spring-forward transition, the day will have only 23 hours.
- On the day of the fall-back transition, the day will have 25 hours!
- Depending on your business, the totals might have significant discrepancies when compared to other days. Or perhaps not. It depends what you are doing.
Remedy
There's no cheating, a local day really does have less or more time on these DST transition days. However, you might consider some of the following mitigations:
- If aligning your "business day" to UTC is possible, then go for it. Problem solved. (But that's not always practical.)
- You might just make the business aware. Just like not all months have the same number of days, these couple of days don't have the same number of hours. Put an asterisk in the report footer.
- Ok, so you want a technical solution? Consider adjusting that day's totals as follows:
- Take the total, and divide by the number of hours in the day (23 in spring, 25 in the fall). That computes the average value per hour.
- In the spring, increase the total by that amount.
- In the fall, reduce the total by that amount.
- This will level out your results, and is good for general day-by-day or week-by-week comparison reports and graphs.
- Do not use this technique for critical financial reports! Your accountant won't appreciate having the totals not matching up!
|
Diving Into Visual Studio 2015: Debugging Improvements (Breakpoint Configurations and New Error List)
This article covers the various debugging improvements that Visual Studio 2015 has come up with. Read on and check out these features complete with examples.
Introduction
Visual Studio has always been a great IDE for code debugging. It provides numerous features for debugging and configuring the code. Being a developer, we always spend a lot of time in running and debugging the code, therefore, improvements to debugging features can have a big impact on our productivity. This article covers the debugging improvements that Visual Studio 2015 has come up with. Following are a few of the major features that will be covered in this article:
- Breakpoint configuration improvements
- New improved Error List
- Tool window support for LINQ and lambda expressions
- New PerfTips displaying execution time in the code editor
- New Diagnostic Tools window
Breakpoint Configuration Improvements
The earlier versions of Visual Studio already provided the feature of breakpoint configuration, so it's not new to developers. The only thing new in Visual Studio 2015 is the user experience and ease of using the configurations. Breakpoint configuration is now easier to use and more reachable. Visual Studio 2015 introduces a new inline toolbar. With this toolbar, you can easily open the Breakpoint Configuration Settings or enable/disable the breakpoint. Secondly, the Context menu for breakpoint configuration in Visual Studio 2015 is simplified. A few options of the Context menu have been moved to the Breakpoint Configuration Settings window. The Settings window now comes in a peek window, so you can easily check and change the settings as there will be no modal window. The whole breakpoint configuration is now divided into two parts, Actions and Conditions. Let us understand the topic in detail with practical implementation.

I am using Visual Studio 2015 Enterprise edition for this article and have added a console application named VS2015ConsoleApplication in my Visual Studio. Let's say we have a class; we are just fetching all the products and creating a new list of products for a new entity named ProductCodeWithPrice, where we list only the product code and price of products.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace VS2015ConsoleApplication
{
    public class ProductCodeWithPrice
    {
        public string ProductCode { get; set; }
        public string ProductPrice { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var myProducts = new MyProducts();
            var products = new List<ProductCodeWithPrice>();
            var allProducts = myProducts.GetProductList();

            foreach (var product in allProducts)
            {
                ProductCodeWithPrice prod = new ProductCodeWithPrice();
                prod.ProductCode = product.ProductCode;
                prod.ProductPrice = product.ProductPrice;
                products.Add(prod);
            }

            Console.ReadLine();
        }
    }
}
Now let's say we are debugging the code while a new product list is created and we want to place a breakpoint after a new ProductCodeWithPrice instance is created in a foreach loop.
When a breakpoint is put at line 27, notice the new inline toolbar. From here I can open the settings or enable and disable the breakpoint. When we right click on the breakpoint to open the context menu, we see a new simplified context menu, with most of the options that used to be present there now moved to the settings option.
Let's pick the settings option from the inline toolbar. Notice that the settings now appear in a peek window instead of a modal dialog, which makes it easy for a developer to modify the settings while debugging.
Conditions
Let's explore how conditions work. When we place a breakpoint and open the settings window, it shows options for conditions and actions, and it also shows the location of the breakpoint, with details like the file name, line number and character position. Clicking the Conditions checkbox reveals further options for how a condition can be configured.
The default is Conditional Expression, but there are two other options as well: Hit Count and Filter. The Hit Count option is used when execution needs to pause at a particular iteration of a loop.

The second drop-down list is used to validate the condition. In this case, we have placed a breakpoint after the prod object is created in each iteration. Notice that we could pick Is a multiple of or Is greater than or equal to to validate the hit count.

Let's suppose there is a scenario where we need to pause execution and check the products list after 3 iterations. So we choose the Hit Count option as the condition, pick Is equal to in the second drop-down, and type 3 in the text box next to it. This means that when the loop runs for the third time, execution pauses at line 27, hitting the breakpoint. Run the application and wait for the breakpoint to be hit.
Notice that the condition information is live: it shows the current hit count. The application stopped at the breakpoint when the hit count was 3. At this point the count can be changed, say to 4, or simply reset, and data tooltips can still be used to view the variables. If we hover over the products list we can see it already has two products (prod) in it, so we must be in the third iteration of the loop, because we break before adding to the list.
One of the interesting features of Visual Studio 2015 breakpoint configuration is that if a breakpoint is accidentally deleted, it can be restored with Ctrl+Z.
A breakpoint condition with a Hit Count can be used any time we need the breakpoint to be hit at a specific count or at some particular interval of hits; this is typically useful when processing lists of items and in recursive methods. Even while the application is still running, another condition can be added. Let's check this by adding a Conditional Expression here: say we want the breakpoint to be hit when the product code of the prod instance is "0004". So click the Add condition option while the application is stopped at the breakpoint and add a conditional expression.
You can add multiple conditions and configure your breakpoint for the desired debugging to improve productivity. When the Add condition option is clicked, a new row is added with all the options shown earlier while applying the Hit Count breakpoint. Choose the Conditional Expression option and validate it to be true when prod.ProductCode == "0004". Notice that you can write any expression in the expression text box; it can be simple, or complex with multiple && and || conditions. Moreover, IntelliSense works while typing and helps you create expressions.
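For instance, expressions like the following would be valid in the expression box. They use the prod variable from the sample code above; the compound expression is purely illustrative:

```csharp
// Break only when this particular product is reached
prod.ProductCode == "0004"

// An illustrative compound condition combining && and ||
prod.ProductCode == "0004" || (prod.ProductPrice != null && prod.ProductCode.StartsWith("00"))
```

Any expression that evaluates to a bool in the current scope can be used here.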
If you want, you can delete the prior Hit Count condition; otherwise the breakpoint will be hit multiple times. I am removing the prior condition here. Run the application and you'll see that the breakpoint is hit when the condition specified on the breakpoint becomes true.
We see here that execution stops as soon as the condition of the product code being "0004" is met.
Actions
Let us see how Actions work. By default, when the conditions are true, the debugger pauses at the particular breakpoint. This behaviour can be configured by checking the Actions option: one can choose to log a message by entering the desired text in the Message field provided.
We can also enter plain text of our choice to customise the message for better readability and understanding. A dollar sign ($) is used to display system values here; when you type a dollar in the Message field, you get a list of all the pseudo-variables that can be used in the logged message.
Curly braces {} are used to display output or variables from the code base, and you get IntelliSense support in the Message field as well. You can log the message to the output window; let's give it a try and log something at this breakpoint condition. You also have the option to Continue execution, which stops the debugger from pausing each time the breakpoint is hit. Select this option if you want to log the message without stopping at the breakpoint.
In the Actions Message field, I am logging a message when the condition of prod having product code == "0004" is true. I have configured the Message field to log $FUNCTION, $TID and $TNAME along with {prod} (the product instance) and {prod.ProductCode}. Notice that I have also used plain text like "Method : ", "Product Instance" and "Product Code" to make the message more readable. I have chosen to continue execution without stopping at the breakpoint. Let's run the application and see what happens.
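As a rough sketch (the exact contents of my Message field are not reproduced in the text), a tracepoint message that combines pseudo-variables, expressions in curly braces and plain text might look something like:

```
Method : $FUNCTION, Thread : $TID $TNAME, Product Instance : {prod}, Product Code : {prod.ProductCode}
```

Each $ token is expanded by the debugger and each {} expression is evaluated in the current scope when the breakpoint is hit.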
All the information that we defined in the Message field is logged to the output window as desired, along with the plain text, in the same sequence as defined. You can use the Log a message action any time you want to display information each time the breakpoint is hit.
New Improved Error List
The Error List in Visual Studio 2015 is much improved: you now get a live list of compiler and code analysis errors and warnings. The major improvements include the display of the error code, linked to a document on that issue; you can click the link to view the document online. Filtering has also been expanded considerably. You can still filter on the current project or document, but filtering can now also be done on error severity, the error code, a set of projects or a set of files.
The maximum error limit in Visual Studio 2015 has also been removed. Earlier, there was no way to tell how many errors we had in one go when the error count was too high; each time we fixed a certain number of errors, we were shown more errors on compilation. Now, all of your errors and warnings appear in the Error List in one go.

Let's try the Error List improvements in practice. I have intentionally made a few changes in the Main method of the Program.cs class to get some errors and warnings: I removed var from the products declaration and added an empty catch block with an empty finally block. Before compiling the code, I also enabled the Enable Code Analysis on Build option. You can find this option by right-clicking on your project, opening Properties and selecting the Code Analysis tab in the Properties window; it normally appears last, as shown in the image.
Now when we compile the code, we get a few errors and warnings as expected.
We see here that we get errors and warnings from the compiler as well as from the code analyser. A CS prefix on the error/warning code indicates it came from the compiler, and CC indicates the code analyser. We got all the expected warnings and errors, and notice that errors and warnings have their respective symbols. The tabs at the top show 2 Errors, 5 Warnings and 1 Message; you can use these to filter what you see. Say you don't want to see warnings and messages: click the respective tabs above to see only the error list. Notice that every error code is rendered as a link; when you click any error code, it redirects you to its documentation page. Let's click on CS0103, the first error, saying "The name 'products' does not exist in the current context".
We see that the link has redirected to the MSDN page containing the detailed documentation for that error.
Filtering has been expanded to cover errors, warnings and severity as well. To check that, just click at the top of the Error List column where the error/warning symbol is displayed.
As soon as you click at the top, as shown in the above image, the filter option appears right there, and you can click the filter icon to see the available filter types.
You can filter the list by checking or unchecking the checkboxes. The filter option is widely available, for code as well as for projects. You can select exactly which codes to include, as shown below in the image.
Or select which projects or files to use as a filter.
So you can see that the filtering options have been expanded to cover multiple criteria, improving a developer's control, configuration and productivity.
Conclusion
In this article, we covered the new and improved debugging techniques that Visual Studio 2015 has come up with. We covered breakpoint configuration with several practical scenarios and took a sneak peek at the improved Error List. Some of these options can also be found in prior versions of Visual Studio, but VS 2015 has made them more capable and discoverable. In the next article, we'll cover the remaining two debugging features: PerfTips and the new Diagnostic Tools window.
Published at DZone with permission of Akhil Mittal . See the original article here.
A Groovy DSL For Spring Integration
Spring Integration implements Enterprise Integration Patterns using the Spring programming model to enable messaging in Spring-based applications. Spring Integration also provides integration with external systems using declarative adapters supporting jms, http, amqp, tcp, ftp(s), smtp, and so on. Currently, configuring message flows is primarily done via Spring XML, and Spring Integration supports several namespaces to make this as succinct as possible. Earlier this year, SpringSource released a Scala DSL for Spring Integration. Now, we are pleased to announce the first milestone release (1.0.0.M1) of a Groovy DSL.
Both of these DSLs share a common goal - to provide a powerful and flexible alternative to XML configuration for Spring Integration. The two languages are also semantically similar since the Groovy DSL draws from concepts introduced by the Scala DSL. Additionally, both are essentially a facade atop Spring Integration. However the similarities end here. Many of the differences can be attributed to language differences between Scala and Groovy, most notably, static vs dynamic typing. The Groovy DSL is targeted primarily at Groovyists, who are comfortable with the hierarchical syntax of the builder pattern on which the DSL is based. This should also appeal to Java developers who can take advantage of the rich features a DSL has to offer and will find the syntax very approachable.
Hello World
Let’s start at the beginning with a Groovy example:
def builder = new IntegrationBuilder()
def flow = builder.messageFlow {
    transform {"hello, $it"}
    handle {println it}
}
flow.send('world')
This creates a Spring application context, constructs a Spring Integration message flow consisting of a transformer (transform) and a service activator (handle), and wires these endpoints with direct channels so that they will be executed in sequence. The transformer appends the message payload ("world" in this case) to the string "hello, " and the service activator prints the result to STDOUT. Voila! Here we see a simple instance of the builder pattern. For those who are not familiar with Groovy, this is valid Groovy syntax. messageFlow, transform, and handle are all methods defined by the DSL. The {} is Groovy syntax for a closure. Since parentheses and semicolons are optional in Groovy, this is equivalent to:
def flow = builder.messageFlow({
    transform({"hello, $it"});
    handle({println(it)});
});
Also, I should mention that, by default, a Groovy closure has a single implicit argument named 'it'. Closure arguments may also be named and optionally typed. For example:
transform {String payload -> "hello, $payload"}
One more thing. Groovy allows you to embed variable expressions in double quoted Strings. This is not intended to be a Groovy tutorial, but suffice it to say that all Groovy’s syntactic sugar makes for very sweet, concise and readable code. By the way, the equivalent XML for this is
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:si="http://www.springframework.org/schema/integration">
    <si:transformer ... />
    <si:service-activator ... />
</beans>
And then we would have to write about ten lines of code to initialize the Spring application context and send a message.
To run the DSL example in Java, there are a few options. One way is to load and run an external DSL script:
HelloWorld.groovy
messageFlow {
    transform {"hello, $it"}
    handle {println it}
}
Main.java
public class Main {
    public static void main(String[] args) {
        IntegrationBuilder builder = new IntegrationBuilder();
        MessageFlow flow = (MessageFlow) builder.build(new File("HelloWorld.groovy"));
        flow.send("world");
    }
}
In addition to a File instance, as shown above, the build() method also accepts an InputStream, GroovyCodeSource, Spring Resource, even a groovy.lang.Script. So if you compile your project with the Groovy compiler, you can do something like this, where HelloWorld is an instance of groovy.lang.Script.
public class Main {
    public static void main(String[] args) {
        IntegrationBuilder builder = new IntegrationBuilder();
        MessageFlow flow = (MessageFlow) builder.build(new HelloWorld());
        flow.send("world");
    }
}
Next Steps
Hopefully, this brief introduction illustrates how easy the DSL is to use. If you want to take it to the next level, the spring-integration-dsl-groovy Github repository includes a DSL User’s Guide which describes the DSL’s features in more detail with lots of examples.
In ASP.NET security lingo, two key terms are authentication and authorisation. Authentication mechanisms help ASP.NET figure out who you are. Once that is determined, authorisation mechanisms figure out whether you are allowed to access the pages you are requesting.
Two of the three types of ASP.NET authentication rely on technology outside of your Web application:
- Windows authentication integrates with the operating system (Windows NT/2000/XP) authentication mechanisms.
- Passport authentication requires interactions with a Microsoft Passport server.
But you can define the third authentication mechanism, forms authentication, completely within the confines of your own ASP.NET application. We’ll focus on that type of authentication here.
Simplicity
One of the advantages of forms authentication is its simplicity. You can have a rudimentary authentication system up and running very quickly. At its simplest, forms authentication requires you to follow these steps:
- Edit Web.config <authentication> and <authorization> elements.
- Create a standard ASPX page that someone will try to visit.
- Create a login page with username and password text boxes, and a button to submit the form.
- Handle the button’s Click event and call the forms authentication’s Authenticate and RedirectFromLoginPage methods.
Web.config changes
The <authentication> element of Web.config lets you tell ASP.NET that you want to use forms authentication. It has a child element, <forms>, that allows you to specify the login page. Within the <forms> element, you can optionally add a <credentials> element that directly assigns usernames and passwords, as in Listing A. It’s important to note that I’m explaining the simplest use of forms authentication, which means I’m willing to store the user credentials directly inside Web.config. This is not a good idea for large Web sites. More on that later.
For the user authentication to be useful in your application, you must explicitly deny access to anonymous users. This is where the <authorization> element comes in. Use a <deny> subelement with the users attribute set to “?,” as shown in Listing A.
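Listing A itself is not reproduced on this page; a minimal Web.config along the lines it describes might look like the following sketch. The user names, passwords and login page name are illustrative:

```xml
<configuration>
  <system.web>
    <!-- Forms authentication with credentials stored directly in Web.config -->
    <authentication mode="Forms">
      <forms loginUrl="Login.aspx">
        <credentials passwordFormat="Clear">
          <user name="alice" password="secret1" />
          <user name="bob" password="secret2" />
        </credentials>
      </forms>
    </authentication>
    <!-- Deny anonymous (unauthenticated) users; "?" means anonymous -->
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</configuration>
```

With this in place, any request for an ASPX page under this Web.config is redirected to Login.aspx until the user authenticates.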
Now whenever someone attempts to access an ASPX page in the application that is governed by this Web.config file, ASP.NET will ensure that the user has been authenticated. Unauthenticated users will be sent automatically to the login page specified in the <forms> element.
A standard page
To test whether your authentication settings are actually doing anything, you should create at least one ASPX page besides the login page. The sample in Listing B creates a page that simply shows the authenticated user’s username. This is nice proof that authentication actually worked. When you’re ready to test, you will try to access this page directly in your browser. If your authentication system is working, you’ll be redirected to the login page you create below.
Login page
The simplest login page has a username text box, a password text box, and a button, as the markup in Listing C shows. You must handle the button’s Click event in code to call the methods that perform authentication and redirect the user (if authenticated) back to the originally requested page.
Listing D shows the Click event handler in the codebehind class for the ASPX page in Listing C. It uses the System.Web.Security namespace’s FormsAuthentication class. The first method call, Authenticate, passes the username and password. .NET compares these to the credentials that were put directly into the <forms> element inside the Web.config file. If the username and password are valid, the RedirectFromLoginPage method is called. The nice thing here is that you don’t need to tell ASP.NET what the originally requested page was—it knows where to go when you tell it to redirect.
The parameters for the RedirectFromLoginPage method are the user’s name and a Boolean value indicating whether the authentication cookie is persistent or not. A persistent cookie will survive browser sessions until the user is specifically signed out (the SignOut method of the FormsAuthentication class).
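Listing D is not reproduced here either, but a Click handler along the lines described above might look like this sketch; the page and control names (btnLogin, txtUserName, txtPassword, lblMessage) are illustrative:

```csharp
using System;
using System.Web.Security;

public partial class LoginPage : System.Web.UI.Page
{
    protected void btnLogin_Click(object sender, EventArgs e)
    {
        // Compare the supplied credentials against the <credentials>
        // element inside Web.config.
        if (FormsAuthentication.Authenticate(txtUserName.Text, txtPassword.Text))
        {
            // Second argument: false = non-persistent authentication cookie
            FormsAuthentication.RedirectFromLoginPage(txtUserName.Text, false);
        }
        else
        {
            lblMessage.Text = "Invalid username or password.";
        }
    }
}
```

If authentication succeeds, RedirectFromLoginPage issues the cookie and sends the user back to the page they originally requested.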
After compiling all of this, if you try to use a browser to access the standard page, you will be redirected to the login page. As long as you do not authenticate properly, you will continue to sit at that login page. When you provide the correct credentials, you will be taken to the page you requested.
Improvements
As I mentioned, you won’t want to store usernames and passwords directly in the Web.config file for long. For one thing, you can’t dynamically accept new user information, such as from a sign-up form, since you can’t really use runtime code to append the new usernames and passwords to the <credentials> list inside Web.config. Also, whenever you manually change Web.config, your application restarts, which can cause all sorts of interesting anomalies for your current visitors.
So you'll want to use a separate file or database for your usernames and passwords. Once you do that, you can’t use the FormsAuthentication.Authenticate method anymore, because it looks directly to Web.config for the credentials. Instead, your database or file access code will be responsible for checking the username and password. If you find a match, you can then use the RedirectFromLoginPage method.
You might also want to think about encrypting the passwords in case someone manages to gain access to your user and password lists. The FormsAuthentication class’s HashPasswordForStoringInConfigFile can help you do this. Though its name suggests that you should use it only if you are putting credentials directly in the Web.config file, this is not really the case. It simply returns the hashed string, which can be used anywhere you like.
Finally, it’s important to note that hashing the password does not mean anything at all when it comes to “over-the-wire” security. Unless your users access your site via SSL, their passwords will be passed from browser to Web server in clear text. The password encryption described earlier is performed after the password is received from the user.
Simple and secure in a hurry
Using forms authentication, you can have a simple and secure Web application within minutes. You don’t have to place any code in your application’s ASPX files to try to determine whether the visiting user has already been authenticated. If you have forms authentication turned on in the Web.config file, ASP.NET simply won’t let the unauthenticated visitor see any of your ASPX pages. If you spent a lot of time in classic ASP developing your own authentication code, you’ll know what a big improvement this is.
In addition to having one or more main containers (or app containers), a pod can also have one or more init containers which run before the app containers. Init containers allow you to reduce and reorganize setup scripts and “glue code”.
An init container is exactly like a regular container, except that it always runs to completion and each init container must complete successfully before the next one is started. If an init container fails, Kubernetes will restart the pod until the init container succeeds. However, if the pod has a restartPolicy of Never, the pod fails when an init container fails.
You specify a container as an init container by adding an annotation. The annotation key is pod.beta.kubernetes.io/init-containers, and the annotation value is a JSON array of objects of type v1.Container. Once the feature exits beta, the init containers will be specified on the Pod Spec alongside the app containers array.
The status of the init containers is returned in another annotation, pod.beta.kubernetes.io/init-container-statuses. Init containers are handled somewhat differently from app containers: in particular, they do not support readiness probes, since they must run to completion before the pod can be ready. Otherwise, an init container has all of the fields of an app container.
If you specify multiple init containers for a pod, those containers run one at a time in sequential order. Each must succeed before the next can run. Once all init containers have run to completion, Kubernetes initializes the pod and runs the application containers as usual.
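Putting the pieces together, a pod using the beta annotation might look like the following sketch. The image names and commands are illustrative; the annotation value is the JSON array of v1.Container objects described above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  annotations:
    # Beta syntax: init containers declared as a JSON array in an annotation
    pod.beta.kubernetes.io/init-containers: '[
        {
            "name": "wait-for-myservice",
            "image": "busybox",
            "command": ["sh", "-c", "until nslookup myservice; do echo waiting; sleep 1; done"]
        }
    ]'
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ["sh", "-c", "echo app is running && sleep 3600"]
```

Here the app container is not started until the init container's DNS lookup for myservice succeeds.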
Because init containers have separate images from application containers, they have some advantages for start-up related code. These include:
- They can run tools and custom setup code that you would rather keep out of the app image (for example, there is no need to build an image FROM another image just to use a tool like sed, awk, python, or dig during setup).
Because init containers have different filesystem view (Linux namespaces) from app containers, they can be given access to Secrets that the app containers are not able to access.
Since init containers run to completion before any app containers start, and since app containers run in parallel, they provide an easier way to block or delay the startup of application containers until some precondition is met.
Because init containers run in sequence and there can be multiple init containers, they can be composed easily.
Here are some ideas for how to use init containers:
- Wait for a service to be created with a shell command like:
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
- Register this pod with a remote server with a command like:
curl -X POST -d 'instance=$(POD_NAME)&ip=$(POD_IP)'
using
POD_NAME and
POD_IP from the downward API.
- Wait for some time before starting the app container with a command like
sleep 60.
- Clone a git repository into a volume
- Place values like a POD_IP into a configuration file, and run a template tool (e.g. jinja) to generate a configuration file to be consumed by the main app container.
Complete usage examples can be found in the PetSets guide and the Production Pods guide.
Each pod may have 0..N init containers defined along with the existing 1..M app containers.
On startup of the pod, after the network and volumes are initialized, the init containers are started in order. Each container must exit successfully before the next is invoked. If a container fails to start (due to the runtime) or exits with failure, it is retried according to the pod RestartPolicy, except when the pod restart policy is RestartPolicyAlways, in which case just the init containers use RestartPolicyOnFailure.
A pod cannot be ready until all init containers have succeeded. The ports on an init container are not aggregated under a service. A pod that is being initialized is in the Pending phase but has a condition Initializing set to true.
If the pod is restarted all init containers must execute again.
Changes to the init container spec are limited to the container image field. Altering an init container image field is equivalent to restarting the pod.
Because init containers can be restarted, retried, or reexecuted, init container code should be idempotent. In particular, code that writes to files on EmptyDirs should be prepared for the possibility that an output file already exists.
An init container has all of the fields of an app container. The following fields are prohibited from being used on init containers by validation:
readinessProbe: init containers must exit for pod startup to continue, are not included in rotation, and so cannot define readiness distinct from completion.
Init container authors may use
activeDeadlineSeconds on the pod and
livenessProbe on the container to prevent init containers from failing
forever. The active deadline includes init containers.
The name of each app and init container in a pod must be unique - it is a validation error for any container to share a name.
Given the ordering and execution for init containers, the following rules for resource usage apply:
Quota and limits are applied based on the effective pod request and limit.
Pod level cGroups are based on the effective pod request and limit, the same as the scheduler.
A Pod may “restart”, causing reexecution of init containers, for the following reasons:
A cluster with Kubelet and Apiserver version 1.4.0 or greater supports init containers with the beta annotations. Support varies for other combinations of Kubelet and Apiserver version; see the release notes for details.
I can have several different versions of this build tool running simultaneously, when I'm working on more than one project, release or feature. I don't know exactly how it works, but I activate (a python term I guess) a different python environment for each version of the software that I'm working on. Apparently, Sublime tries to use this environment when I start it from a command line where I have activated the build tool. This results in a lot of tracebacks like the following when I run 'sublime_text.exe'.
- Code: Select all
Traceback (most recent call last):
File ".\sublime_plugin.py", line 1, in <module>
import os
File ".\os.py", line 398, in <module>
File ".\UserDict.py", line 83, in <module>
File "C:\views\jsc_FwMainlinePrj\pooma\Lib.zip\Lib\abc.py", line 109, in register
File "C:\views\jsc_FwMainlinePrj\pooma\Lib.zip\Lib\abc.py", line 151, in __subclasscheck__
File "C:\views\jsc_FwMainlinePrj\pooma\Lib.zip\Lib\_weakrefset.py", line 69, in __contains__
TypeError: cannot create weak reference to 'classobj' object
The path 'C:\views\jsc_FwMainlinePrj' is one of the branches of our software that I'm working on and 'C:\views\jsc_FwMainlinePrj\pooma' is the path to the build tool. I know very little about python, but to me it looks like Sublime is trying to use the python environment that comes with the build tool. Is there a way I can make Sublime use its own python environment while the python environment for our custom build tool is still activated?
04 January 2012 09:38 [Source: ICIS news]
LONDON (ICIS)--The ICIS petrochemical index (IPEX) remained flat at 300.82 in January, a minute change from its revised* December figure of 300.78.
The Asian component of the IPEX strengthened by 4.8%, offsetting falls in the US and European components.
The Asian component observed the only price rises for some of its basket products. Butadiene (BD) led these hikes, soaring by 47.8%, mainly as a result of limited availability and increased buying interest.
Chinese traders have been securing and storing material in anticipation of rising prices for their upcoming start-ups of new downstream synthetic plants in the first quarter of 2012. Traders have also been seeking to secure material ahead of the Lunar New Year holidays in the region towards the end of January.
However, BD in the US and Europe fell significantly, by 14.8% and 13.5% in dollar terms respectively, as low levels of demand persist for both regions.
The most significant price drops were in BD, polyvinyl chloride (PVC), polyethylene (PE), polypropylene (PP) and polystyrene (PS).
* The December IPEX has been revised from 300.94 to 300.78, following incorporation of the
The revised historical IPEX data is available from ICIS on request
For a full methodology of the revised IPEX, please click
A MouseEvent is handled by a MouseListener by overriding its 5 abstract methods. An example is given with a screenshot, in simple terms.
Java events are of two types: a) those generated by AWT components such as frames and buttons, known as semantic events, and b) those generated by hardware components such as the mouse and keyboard, known as low-level events. When you click a mouse, the mouse generates a mouse event, represented by the class MouseEvent. Similarly, when you type a key on the keyboard, the key generates a key event, represented by the class KeyEvent and handled by a KeyListener.
Generation of mouse events, again, is of two types. If the mouse is stable (not moving), it generates a MouseEvent handled by a MouseListener; if the mouse is in motion, it generates the same MouseEvent, but it is handled by a MouseMotionListener.
The mouse can generate five types of events (when the mouse is stable, of course), and each type is represented by one abstract method of MouseListener, as the following list illustrates:

- mousePressed(MouseEvent e): a mouse button is pressed
- mouseReleased(MouseEvent e): a pressed mouse button is released
- mouseClicked(MouseEvent e): a mouse button is pressed and released
- mouseEntered(MouseEvent e): the mouse pointer enters the component
- mouseExited(MouseEvent e): the mouse pointer exits the component
This example illustrates how to handle the events when the mouse is stable using MouseListener by overriding the above 5 abstract methods. In this code, the mouse action is displayed as a label.
import java.awt.*;
import java.awt.event.*;

public class FrameMLDemo extends Frame implements MouseListener {
    Label lab;

    public FrameMLDemo() {
        add(lab = new Label(), "South");
        lab.setFont(new Font("Monospaced", Font.BOLD, 18));
        addMouseListener(this);
        setSize(300, 300);
        setVisible(true);
    }

    // override all the 5 abstract methods of MouseListener
    public void mousePressed(MouseEvent e) {
        lab.setText("Mouse is pressed");
    }
    public void mouseReleased(MouseEvent e) {
        lab.setText("Mouse is released");
    }
    public void mouseEntered(MouseEvent e) {
        lab.setText("Mouse is entered");
    }
    public void mouseExited(MouseEvent e) {
        lab.setText("Mouse is exited");
    }
    public void mouseClicked(MouseEvent e) {
        lab.setText("Mouse is clicked");
    }

    public static void main(String snrao[]) {
        new FrameMLDemo();
    }
}
addMouseListener(this);
This method, which the frame inherits from java.awt.Component, registers the frame with the MouseListener. If this statement is omitted, the MouseListener does not receive the mouse events and no actions take place.
The Label component is explained in AWT Label – Alignment.
All the five abstract methods of MouseListener are overridden.
Output screen when the mouse exited the frame.
Output screen showing one of the actions, when the mouse is clicked.
|
https://way2java.com/awt-events/handling-mouseevent-mouselistener-example/
|
CC-MAIN-2022-33
|
refinedweb
| 357
| 58.18
|
Using the same open source .NET library as I did in my last post (Language detection and words-in-sentence classification in C#), I use some of its other machine learning capabilities to automatically generate "you may also be interested in" links to similar posts for any given post on this blog.
This site has always had a way for me to link related posts together - for example, if you scroll to the bottom of "Learning F# via some Machine Learning: The Single Layer Perceptron" then it suggests a link to "Face or no face (finding faces in photos using C# and Accord.NET)" on the basis that you might be super-excited into my fiddlings with computers being trained how to make decisions on their own. But there aren't many of these links because they're something that I have to maintain manually. Firstly, that means that I have to remember / consider every previous post and decide whether it might be worth linking to the new post that I've just finished writing and, secondly, I often just forget.
There are models in the Catalyst library* that make this possible and so I thought that I would see whether I could train it with my blog post data and then incorporate the suggestions into the final content.
* (Again, see my last post for more details on this library and a little blurb about my previous employers who are doing exciting things in the Enterprise Search space)
Specifically, I'll be using the fastText model that was published by Facebook's AI Research lab in 2015 and then rewritten in C# as part of the Catalyst library.
When I first launched my blog (just over a decade ago), I initially hosted it somewhere as an ASP.NET MVC application. Largely because I wanted to try my hand at writing an MVC app from scratch and fiddling with various settings, I think.. and partly because it felt like the "natural" thing to do, seeing as I was employed as a .NET Developer at the time!
To keep things simple, I had a single text file for each blog post and the filenames were of a particular format containing a unique post ID, date and time of publishing, whether it should appear in the "Highlights" column and any tags that should be associated with it. Like this:
1,2011,3,14,20,14,2,0,Immutability.txt
That's the very first post (it has ID 1), it was published on 2011-03-14 at 20:14:02 and it is not shown in the Highlights column (hence the final zero). It has a single tag of "Immutability". Although it has a ".txt" extension, it's actually markdown content, so ".md" would have been more logical (the reason why I chose ".txt" over ".md" will likely remain forever lost in the mists of time!)
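For anyone playing along in another language, that filename scheme is simple enough to parse by splitting on commas. Here is a rough sketch in Python (the blog code itself is C#; the function name and the tuple it returns are my own invention for illustration):

```python
from datetime import datetime

def parse_post_filename(filename):
    # Format: ID,year,month,day,hour,minute,second,isHighlight,Tag1[,Tag2...].txt
    stem = filename.rsplit(".", 1)[0]   # drop the ".txt" extension
    parts = stem.split(",")
    post_id = int(parts[0])
    published_at = datetime(*map(int, parts[1:7]))
    is_highlight = parts[7] == "1"
    tags = parts[8:]
    return post_id, published_at, is_highlight, tags

print(parse_post_filename("1,2011,3,14,20,14,2,0,Immutability.txt"))
```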
A couple of years later, I came across the project neocities.org and thought that it was a cool idea and did some (perhaps slightly hacky) work to make things work as a static site (including pushing the search logic entirely to the client) as described in The NeoCities Challenge!.
Some more years later, GitHub Pages started supporting custom domains over HTTPS (in May 2018 according to this) and so, having already moved web hosts once due to wildly inconsistent performance from the first provider, I decided to use this to-static-site logic and start publishing via GitHub Pages.
This is a long-winded way of saying that, although I publish my content these days as a static site, I write new content by running the original blog app locally and then turning it into static content later. Meaning that the original individual post files are available in the ASP.NET MVC Blog GitHub repo here:
github.com/ProductiveRage/Blog/tree/master/Blog/App_Data/Posts
Therefore, if you were sufficiently curious and wanted to play along at home, you can also access the original markdown files for my blog posts and see if you can reproduce my results.
Following shortly is some code to do just that. GitHub has an API that allows you to query folder contents and so we can get a list of blog post files without having to do anything arduous like clone the entire repo or trying to scrape the information from the site or even creating an authenticated API access application because GitHub allows us rate-limited non-authenticated access for free! Once we have the list of files, each will have a "download_url" that we can retrieve the raw content from.
To get the list of blog post files, you would call:
api.github.com/repos/ProductiveRage/Blog/contents/Blog/App_Data/Posts?ref=master
.. and get results that look like this:
[ { "name": "1,2011,3,14,20,14,2,0,Immutability.txt", "path": "Blog/App_Data/Posts/1,2011,3,14,20,14,2,0,Immutability.txt", "sha": "b243ea15c891f73550485af27fa06dd1ccb8bf45", "size": 18965, "url": "", "html_url": "", "git_url": "", "download_url": "", "type": "file", "_links": { "self": "", "git": "", "html": "" } }, { "name": "10,2011,8,30,19,06,0,0,Mercurial.txt", "path": "Blog/App_Data/Posts/10,2011,8,30,19,06,0,0,Mercurial.txt", "sha": "ab6cf2fc360948212e29c64d9c886b3dbfe0d6fc", "size": 3600, "url": "", "html_url": "", "git_url": "", "download_url": "", "type": "file", "_links": { "self": "", "git": "", "html": "" } }, ..
While the API is rate-limited, retrieving content via the "download_url" locations is not - so we can make a single API call for the list and then download all of the individual files that we want.
Note that there are a couple of files in that folder that are NOT blog posts (such as the "RelatedPosts.txt" file, which is the way that I manually associate "You may also be interested in" posts) and so each filename will have to be checked to ensure that it matches the format shown above.
The title of the blog post is not in the file name; it is always the first line of the content in the file (to obtain it, we'll need to process the file as markdown content, convert it to plain text and then look at that first line).
private static async Task<IEnumerable<BlogPost>> GetBlogPosts()
{
    // Note: The GitHub API is rate limited quite severely for non-authenticated apps, so we just
    // call it once for the list of files and then retrieve them all further down via the Download
    // URLs (which don't count as API calls). Still, if you run this code repeatedly and start
    // getting 403 "rate limited" responses then you might have to hold off for a while.
    string namesAndUrlsJson;
    using (var client = new WebClient())
    {
        // The API refuses requests without a User Agent, so set one before calling (see
        // )
        client.Headers.Add(HttpRequestHeader.UserAgent, "ProductiveRage Blog Post Example");
        namesAndUrlsJson = await client.DownloadStringTaskAsync(new Uri(
            ""
        ));
    }

    // Deserialise the response into an array of entries that have Name and Download_Url properties
    var namesAndUrls = JsonConvert.DeserializeAnonymousType(
        namesAndUrlsJson,
        new[] { new { Name = "", Download_Url = (Uri)null } }
    );

    return await Task.WhenAll(namesAndUrls
        .Select(entry =>
        {
            var fileNameSegments = Path.GetFileNameWithoutExtension(entry.Name).Split(",");
            if (fileNameSegments.Length < 8)
                return default;
            if (!int.TryParse(fileNameSegments[0], out var id))
                return default;
            var dateContent = string.Join(",", fileNameSegments.Skip(1).Take(6));
            if (!DateTime.TryParseExact(dateContent, "yyyy,M,d,H,m,s", default, default, out var date))
                return default;
            return (PostID: id, PublishedAt: date, entry.Download_Url);
        })
        .Where(entry => entry != default)
        .Select(async entry =>
        {
            // Read the file content as markdown and parse into plain text (the first line of which
            // will be the title of the post)
            string markdown;
            using (var client = new WebClient())
            {
                markdown = await client.DownloadStringTaskAsync(entry.Download_Url);
            }
            var plainText = Markdown.ToPlainText(markdown);
            var title = plainText.Replace("\r\n", "\n").Replace('\r', '\n').Split('\n').First();
            return new BlogPost(entry.PostID, title, plainText, entry.PublishedAt);
        })
    );
}

private sealed class BlogPost
{
    public BlogPost(int id, string title, string plainTextContent, DateTime publishedAt)
    {
        ID = id;
        Title = !string.IsNullOrWhiteSpace(title)
            ? title
            : throw new ArgumentException("may not be null, blank or whitespace-only");
        PlainTextContent = !string.IsNullOrWhiteSpace(plainTextContent)
            ? plainTextContent
            : throw new ArgumentException("may not be null, blank or whitespace-only");
        PublishedAt = publishedAt;
    }

    public int ID { get; }
    public string Title { get; }
    public string PlainTextContent { get; }
    public DateTime PublishedAt { get; }
}
(Note: I use the Markdig library to process markdown)
This raw blog post content needs to be transformed into Catalyst "documents", then tokenised (split into individual sentences and words), then fed into a FastText model trainer.
Before getting to the code, I want to discuss a couple of oddities coming up. Firstly, Catalyst documents are required to train the FastText model and each document instance must be uniquely identified by a UID128 value, which is fine because we can generate them from the Title text of each blog post using the "Hash128()" extension method in Catalyst. However, (as we'll see a bit further down), when you ask for vectors* from the FastText model for the processed documents, each vector comes with a "Token" string that is the ID of the source document - so that has to be parsed back into a UID128. I'm not quite sure why the "Token" value isn't also a UID128 but it's no massive deal.
* (Vectors are just 1D arrays of floating point values - the FastText algorithm does magic to produce vectors that represent the text of the documents such that the distance between them can be compared; the length of these arrays is determined by the "Dimensions" option shown below and shorter distances between vectors suggest more similar content)
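To make that footnote concrete, here is a minimal cosine-distance sketch in Python (my own illustration; the C# code later on uses the library-provided CosineDistance.NonOptimized for the same purpose):

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity: ~0 for vectors pointing the same way (similar
    # documents), larger values for unrelated or opposed content
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

v1 = [1.0, 2.0, 3.0]
v2 = [2.0, 4.0, 6.0]    # same direction, different magnitude
v3 = [-1.0, 0.5, -2.0]  # mostly opposed direction

print(cosine_distance(v1, v2))  # ~0.0 (parallel vectors)
print(cosine_distance(v1, v3))  # > 1.0 (negative similarity)
```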
Next, there are the FastText settings that I've used. The Catalyst README has some code near the bottom for training a FastText embedding model but I didn't have much luck with the default options. Firstly, when I used the "FastText.ModelType.CBow" option then I didn't get any vectors generated and so I tried changing it to "FastText.ModelType.PVDM" and things started looking promising. Then I fiddled with some of the other settings. Some of which I have a rough idea what they mean and some, erm.. not so much.
The settings that I ended up using are these:
I already mentioned changing the Data.Type / ModelType and the LossType ("NegativeSampling") is the value shown in the README. Then I felt like an obvious one to change was IgnoreCase, since that defaults to false and I think that I want it to be true - I don't care about the casing in any words when it's parsing my posts' content.
Now the others.. well, this library is built to work with systems with 10s or 100s of 1,000s of documents and that is a LOT more data than I have (currently around 120 blog posts) and so I made a few tweaks based on that. The "Epoch" count is the number of iterations that the training process will go through when constructing its model - by default, this is only 5 but I have limited data (meaning there's less for it to learn from but also that it's faster to complete each iteration) and so I bumped that up to 50. Then "Dimensions" is the size of the vectors generated - again, I figured that with limited data I would want a higher value and so I picked 512 (a nice round number if you're geeky enough) over the default 200. The "MinimumCount", I believe, relates to how often a word may appear and it defaults to 5 so I pulled it down to 1. The "ContextWindow" is (again, I think) how far to either side of any word that the process will look at in order to determine context - the larger the value, the more expensive the calculation; I bumped this from the default 5 up to 10. Then there's the "NegativeSamplingCount" value.. I have to just put my hands up and say that I have no idea what that actually does, only that I seemed to be getting better results with a value of 20 than I was with the default of 10.
With machine learning, there is almost always going to be some value to tweaking options (the "hyperparameters", if we're being all fancy) like this when building a model. Depending upon the model and the library, the defaults can be good for the general case but my tiny data set is not really what this library was intended for. Of course, machine learning experts have more idea what they're tweaking and (sometimes, at least) hopefully what results they'll get.. but I'm happy enough with where I've ended up with these.
This talk about what those machine learning experts do brings me on to the final thing that I wanted to talk about before showing the code; a little pre-processing / data-massaging. The better the data is that goes in, generally the better the results that come out will be. So another less glamorous part of the life of a Data Scientist is cleaning up data for training models.
In my case, that only extended to noticing that a few terms didn't seem to be getting recognised as essentially being the same thing and so I wanted to give it a little hand - for example, a fair number of my posts are about my "Full Text Indexer" project and so it probably makes sense to replace any instances of that string with a single concatenated word "FullTextIndexer". And I have a range of posts about React but I didn't want it to get confused with the verb "react" and so I replaced any "React" occurrence with "ReactJS" (now, this probably means that some "React" verb occurrences were incorrectly changed but I made the replacements of this word in a case-sensitive manner and felt like I would have likely used it as the noun more often than a verb with a capital letter due to the nature of my posts).
So I have a method to tidy up the plain text content a little:
private static string NormaliseSomeCommonTerms(string text) => text
    .Replace(".NET", "NET", StringComparison.OrdinalIgnoreCase)
    .Replace("Full Text Indexer", "FullTextIndexer", StringComparison.OrdinalIgnoreCase)
    .Replace("Bridge.net", "BridgeNET", StringComparison.OrdinalIgnoreCase)
    .Replace("React", "ReactJS");
Now let's get training!
Console.WriteLine("Reading posts from GitHub repo..");
var posts = await GetBlogPosts();

Console.WriteLine("Parsing documents..");
Storage.Current = new OnlineRepositoryStorage(new DiskStorage("catalyst-models"));
var language = Language.English;
var pipeline = Pipeline.For(language);
var postsWithDocuments = posts
    .Select(post =>
    {
        var document = new Document(NormaliseSomeCommonTerms(post.PlainTextContent), language)
        {
            UID = post.Title.Hash128()
        };
        pipeline.ProcessSingle(document);
        return (Post: post, Document: document);
    })
    .ToArray(); // Call ToArray to force evaluation of the document processing now

Console.WriteLine("Training FastText model..");
fastText.Train(
    postsWithDocuments.Select(postWithDocument => postWithDocument.Document),
    trainingStatus: update => Console.WriteLine($"  Progress: {update.Progress}, Epoch: {update.Epoch}")
);
Now that a model has been built that can represent all of my blog posts as vectors, we need to go through those post / vector combinations and identify others that are similar to it.
This will be achieved by using the HNSW.NET NuGet package that enables K-Nearest Neighbour (k-NN) searches over "high-dimensional space"*.
* (This just means that the vectors are relatively large; 512 in this case - two dimensions would be a point on a flat plane, three dimensions would be a physical point in space, anything with more dimensions than that is in "higher-dimensional space".. though that's not to say that any more than three dimensions is definitely a bad fit for a regular k-NN search but 512 dimensions IS going to be a bad fit and the HNSW approach will be much more efficient)
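If it helps to see what a k-NN search is doing under the hood, here is an exact (brute-force) version in Python on tiny two-dimensional data; this is purely illustrative, since the whole point of HNSW is to avoid this O(n)-per-query scan for large sets of high-dimensional vectors:

```python
import math

def knn(query, points, k):
    # Exact k-nearest-neighbours: measure the distance from the query to
    # every point and keep the k closest
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(points, key=lambda p: euclidean(query, p))[:k]

points = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (2.0, 0.0)]
print(knn((0.5, 0.5), points, 2))  # the two points nearest to (0.5, 0.5)
```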
There are useful examples on the README about "How to build a graph?" and "How to run k-NN search?" and tweaking those for the data that I have so far leads to this:
Console.WriteLine("Building recommendations..");

// Combine the blog post data with the FastText-generated vectors
var results = fastText
    .GetDocumentVectors()
    .Select(result =>
    {
        // Each document vector instance will include a "token" string that may be mapped back to the
        // UID of the document for each blog post. If there were a large number of posts to deal with
        // then a dictionary to match UIDs to blog posts would be sensible for performance but I only
        // have a 100+ and so a LINQ "First" scan over the list will suffice.
        var uid = UID128.Parse(result.Token);
        var postForResult = postsWithDocuments.First(
            postWithDocument => postWithDocument.Document.UID == uid
        );
        return (UID: uid, result.Vector, postForResult.Post);
    })
    .ToArray(); // ToArray since we enumerate multiple times below

// Construct a graph to search over, as described at
//
var graph = new SmallWorld<(UID128 UID, float[] Vector, BlogPost Post), float>(
    distance: (to, from) => CosineDistance.NonOptimized(from.Vector, to.Vector),
    DefaultRandomGenerator.Instance,
    new() { M = 15, LevelLambda = 1 / Math.Log(15) }
);
graph.AddItems(results);

// For every post, use the "KNNSearch" method on the graph to find the three most similar posts
const int maximumNumberOfResultsToReturn = 3;
var postsWithSimilarResults = results
    .Select(result =>
    {
        // Request one result too many from the KNNSearch call because it's expected that the original
        // post will come back as the best match and we'll want to exclude that
        var similarResults = graph
            .KNNSearch(result, maximumNumberOfResultsToReturn + 1)
            .Where(similarResult => similarResult.Item.UID != result.UID)
            .Take(maximumNumberOfResultsToReturn); // Just in case the original post wasn't included
        return new
        {
            result.Post,
            Similar = similarResults
                .Select(similarResult => new
                {
                    similarResult.Id,
                    similarResult.Item.Post,
                    similarResult.Distance
                })
                .ToArray()
        };
    })
    .OrderBy(result => result.Post.Title, StringComparer.OrdinalIgnoreCase)
    .ToArray();
And with that, there is a list of every post from my blog and a list of the three blog posts most similar to it!
Well, "most similar" according to the model that we trained and the hyperparameters that we used to do so. As with many machine learning algorithms, it will have started from a random state and tweaked and tweaked until it's time for it to stop (based upon the "Epoch" value in this FastText case) and so the results each time may be a little different.
However, if we inspect the results like this:
foreach (var postWithSimilarResults in postsWithSimilarResults)
{
    Console.WriteLine();
    Console.WriteLine(postWithSimilarResults.Post.Title);
    foreach (var similarResult in postWithSimilarResults.Similar.OrderBy(other => other.Distance))
        Console.WriteLine($"{similarResult.Distance:0.000} {similarResult.Post.Title}");
}
.. then there are some good results to be found! Like these:
Learning F# via some Machine Learning: The Single Layer Perceptron
0.229 How are barcodes read?? (Library-less image processing in C#)
0.236 Writing F# to implement 'The Single Layer Perceptron'
0.299 Face or no face (finding faces in photos using C# and AccordNET)
Translating VBScript into C#
0.257 VBScript is DIM
0.371 If you can keep your head when all about you are losing theirs and blaming it on VBScript
0.384 Using Roslyn to identify unused and undeclared variables in VBScript WSC components
Writing React components in TypeScript
0.376 TypeScript classes for (React) Flux actions
0.378 React and Flux with DuoCode
0.410 React (and Flux) with Bridge.net
However, there are also some less good ones - like these:
A static type system is a wonderful message to the present and future
0.271 STA ApartmentState with ASP.Net MVC
0.291 CSS Minification Regular Expressions
0.303 Publishing RSS
Simple TypeScript type definitions for AMD modules
0.162 STA ApartmentState with ASP.Net MVC
0.189 WCF with JSON (and nullable types)
0.191 The joys of AutoMapper
Supporting IDispatch through the COMInteraction wrapper
0.394 A static type system is a wonderful message to the present and future
0.411 TypeScript State Machines
0.414 Simple TypeScript type definitions for AMD modules
I'd like to get this good enough that I can include auto-generated recommendations on my blog and I don't feel like the consistency in quality is there yet. If they were all like the good examples then I'd be ploughing ahead right now with enabling it! But there are mediocre examples as well as those poorer ones above.
It's quite possible that I could get closer by experimenting with the hyperparameters more but that does tend to get tedious when you have to analyse the output of each run manually - looking through all the 120-ish post titles and deciding whether the supposed best matches are good or not. It would be lovely if I could concoct some sort of metric of "goodness" and then have the computer try lots of variations of parameters but one of the downsides of having relatively little data is that that is difficult*.
* (On the flip side, if I had 1,000s of blog posts as source data then the difficult part would be manually labelling enough of them as "quite similar" in numbers sufficient for the computer to know if it's done better or done worse with each experiment)
Fortunately, I have another trick up my sleeve - but I'm going to leave that for next time! This post is already more than long enough, I think. The plan is to combine results from another model in the Catalyst library with the FastText results and see if I can encourage things to look a bit neater.
If you want to try fiddling with this code but don't want to copy-paste the sections above into a new project, you can find the complete sample in the "Similarity" project in the solution of this repo: BlogPostSimilarity.
Posted at 22:21
Dan is a big geek who likes making stuff with computers! He can be quite outspoken so clearly needs a blog :)
In the last few minutes he seems to have taken to referring to himself in the third person. He's quite enjoying it.
|
https://www.productiverage.com/automating-suggested-related-posts-links-for-my-blog-posts
|
CC-MAIN-2022-05
|
refinedweb
| 3,595
| 52.19
|
Subject: Re: [boost] namespace boost?
From: Patrick Horgan (phorgan1_at_[hidden])
Date: 2011-01-15 17:44:53
On 01/15/2011 02:11 PM, Robert Ramey wrote:
> vicente.botet wrote:
>> ----- Original Message -----
>> From: "Denis Shevchenko"<for.dshevchenko_at_[hidden]>
>> To:<boost_at_[hidden]>
>> Sent: Saturday, January 15, 2011 9:56 PM
>> Subject: Re: [boost] namespace boost?
>>
>>
>>> 16.01.2011 00:49, vicente.botet ?????:
>>>> I find useful to have the file that include all the other files at
>>>> the boost directory level. As it is the case for Boost.Flyweight
>>>>
>>>> #include<boost/flyweight.hpp>
>>> Hmm... But what if I want to update one header-only library? If I
>>> have 'common' file that include all the other files at the boost
>>> directory level, I must replace this file AND own directory. But if
>>> all files placed in own directory, I replacing only it.
>> though I would prefer the former.
>
> My normal practice is not to use these anyway but rather pick out
> the specific one's I'm interested in. So it's not a huge issue for me.
> The reason I do this is that I like knowing that I'm including only
> the minimum required to get the job done.
Your compiler loves you for this, less work to do. I do the same,
because it is less work for the compiler, because I worry about side
effects from things I don't need, because I don't know without close
examination if any extra memory will be used by including something I
don't need, and because by including the minimum I need I can convince
myself that I have a deeper understanding.
All of this implies that I prefer people to break things into different
headers more intelligently, i.e. if something is used in multiple places
it should have a separate header so I don't get extra stuff I don't need
by including something I do need. That's one of my pet peeves.
Of course I have no objection to the existence of a parent include file
that includes everything for people that don't want to think about it
and to treat a particular part of a project as a black box. I do that
sometimes too. There's not enough time in the world to become an expert
on everything, (gad that's so frustrating, isn't it?!).
Patrick
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2011/01/175028.php
|
CC-MAIN-2020-10
|
refinedweb
| 417
| 73.88
|
uping doesn't work on LTE-M1
I am attached and connected to LTE-M in Argentina:
lte.send_at_cmd('at+cgdcont=1, "IP", "datos.personal.com"')
I also enabled the functionality with +CFUN=1.
When I import the uping.py file and try uping.ping('google.com'), I get an error at line 52.
I think it is a modem configuration issue, something I have to define, as I was playing with different AT commands after lte.pppsuspend(). One time, when I returned to LTE with lte.pppresume(), it worked, but I didn't save the configuration and since then I cannot make it run.
Any idea what to configure in the modem or in the uping script? [uping.py]
Also I can not run the simple LTE example
s = socket.socket()
s = ssl.wrap_socket(s)
s.connect(socket.getaddrinfo('', 443)[0][-1])
s.send(b"GET / HTTP/1.0\r\n\r\n")
print(s.recv(4096))
It fails in the s.connect(socket.getaddrinfo('', 443)[0][-1]) line
(/assets/uploads/files/1584659765014-uping.py)
@kjm thanks for your reply. The problem was the pppsuspend/resume; it absolutely hangs the modem for internet. The uping file and the ping function you sent work properly.
- I've found pppsuspend/resume to be unreliable, maybe avoid them for the time being & use lte.disconnect() instead if you want to run an at cmd
- Try to limit yourself to one query per post, I've found it improves your chances of getting a reply if you don't conflate multiple issues.
- There are multiple versions of most micropython libraries, many of them not functional on pycom product. Try running the version of ping below on an attached/connected gpy
def checksum(data):
    if len(data) & 0x1:  # Odd number of bytes
        data += b'\0'
    cs = 0
    for pos in range(0, len(data), 2):
        b1 = data[pos]
        b2 = data[pos + 1]
        cs += (b1 << 8) + b2
    while cs >= 0x10000:
        cs = (cs & 0xffff) + (cs >> 16)
    cs = ~cs & 0xffff
    return cs

def ping(host, count=4, timeout=5000, interval=10, quiet=False, size=64):
    import utime
    import uselect
    import uctypes
    import usocket
    import ustruct
    import uos

    # prepare packet
    assert size >= 16, "pkt size too small"
    pkt = b'Q'*size
    pkt_desc = {
        "type": uctypes.UINT8 | 0,
        "code": uctypes.UINT8 | 1,
        "checksum": uctypes.UINT16 | 2,
        "id": (uctypes.ARRAY | 4, 2 | uctypes.UINT8),
        "seq": uctypes.INT16 | 6,
        "timestamp": uctypes.UINT64 | 8,
    }  # packet header descriptor
    h = uctypes.struct(uctypes.addressof(pkt), pkt_desc, uctypes.BIG_ENDIAN)
    h.type = 8  # ICMP_ECHO_REQUEST
    h.code = 0
    h.checksum = 0
    h.id[0:2] = uos.urandom(2)
    h.seq = 1

    # init socket
    #sock = usocket.socket(usocket.AF_INET, usocket.SOCK_RAW, 1)
    sock = usocket.socket(usocket.AF_INET, 3, 1)
    sock.setblocking(0)
    sock.settimeout(timeout/1000)
    try:
        addr = usocket.getaddrinfo(host, 1)[0][-1][0]  # ip address
    except IndexError:
        not quiet and print("Could not determine the address of", host)
        return None
    sock.connect((addr, 1))
    not quiet and print("PING %s (%s): %u data bytes" % (host, addr, len(pkt)))

    seqs = list(range(1, count+1))  # [1,2,...,count]
    c = 1
    t = 0
    n_trans = 0
    n_recv = 0
    finish = False
    while t < timeout:
        if t==interval and c<=count:
            # send packet
            h.checksum = 0
            h.seq = c
            h.timestamp = utime.ticks_us()
            h.checksum = checksum(pkt)
            if sock.send(pkt) == size:
                n_trans += 1
                t = 0  # reset timeout
            else:
                seqs.remove(c)
            c += 1

        # recv packet
        while 1:
            socks, _, _ = uselect.select([sock], [], [], 0)
            if socks:
                resp = socks[0].recv(4096)
                resp_mv = memoryview(resp)
                h2 = uctypes.struct(uctypes.addressof(resp_mv[20:]), pkt_desc, uctypes.BIG_ENDIAN)
                # TODO: validate checksum (optional)
                seq = h2.seq
                if h2.type==0 and h2.id==h.id and (seq in seqs):  # 0: ICMP_ECHO_REPLY
                    #t_elasped = (utime.ticks_us()-h2.timestamp) / 1000
                    t_elasped = utime.ticks_diff(h2.timestamp, utime.ticks_us()) / 1000
                    ttl = ustruct.unpack('!B', resp_mv[8:9])[0]  # time-to-live
                    n_recv += 1
                    not quiet and print("%u bytes from %s: icmp_seq=%u, ttl=%u, time=%f ms" % (len(resp), addr, seq, ttl, t_elasped))
                    seqs.remove(seq)
                    if len(seqs) == 0:
                        finish = True
                        break
            else:
                break
        if finish:
            break

        utime.sleep_ms(1)
        t += 1

    # close
    sock.close()
    ret = (n_trans, n_recv)
    not quiet and print("%u packets transmitted, %u packets received" % (n_trans, n_recv))
    return (n_trans, n_recv)

ping('google.com')
An interesting thing is that if I put IPV4 instead of IP it works, but after a minute, if I try again, it throws an error at line 55:
addr = usocket.getaddrinfo(host, 1)[0][-1][0] # ip address
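For reference, that indexing does the same thing in desktop (CPython) Python; a quick sketch of what the expression pulls out:

```python
import socket

# getaddrinfo returns a list of (family, type, proto, canonname, sockaddr)
# tuples; [0][-1] is the first result's sockaddr, whose element [0] is the
# address string
addr_info = socket.getaddrinfo("localhost", 80)
ip = addr_info[0][-1][0]
print(ip)
```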
|
https://forum.pycom.io/topic/5816/uping-doesn-t-work-on-lte-m1
|
CC-MAIN-2021-43
|
refinedweb
| 743
| 52.97
|
Simplicity and Performance: JavaScript on the Server
For years, Douglas Crockford, the high priest of JavaScript (JS), has claimed that it is a powerful, flexible language suited to a multitude of tasks, especially if you can separate it from the ugly browser-side piece that is the Document Object Model, or DOM. Because of the browser, JavaScript is the most popular programming language around by number of users. Job sites dice.com and monster.com post more jobs for JavaScript than any other language, except Java. Of course, if JavaScript runs in a browser, or anywhere, it must have an engine. Those engines have been around since the earliest JS-capable browsers, and they have been available as separate standalone entities for several years. Thus, the potential for running JS on its own always has been available. However, JavaScript always has missed two critical elements to make it worthwhile to run on the server side.
The first missing piece was a common set of libraries. Quite simply, because JS was so focused on the browser, it was missing basic I/O libraries, such as file reading and writing, network port creation and listening, and other elements that can be found in any decent standalone language. Ruby includes them natively; Java includes them in its java.io and java.net packages. For JavaScript, running alone when all you can do is process text and data structures, but not communicate with the outside world, was rather useless. Over the years, several attempts have been made to create some form of JS I/O and Net packages, mostly wrapped around native C calls if the JS engine was written in C (such as SpiderMonkey), or around java.io and java.net calls if the JS engine was written in Java (for example, Rhino).
This began to change in early 2009 with the creation of the CommonJS Project (which, for some mystical reason, stands for Common JavaScript), which unified these efforts under a common namespace, with JS-specific semantics and included a package-inclusion system to boot.
Using Rhino as an example, you could read from a file using:
defineClass("File");

var f = new File("myfile.txt"),
    line;
while ((line = f.readLine()) !== null) {
    // do some processing
}

// this example slightly modified and simplified
// from the Mozilla Rhino site
As you can see, this is not file processing in JavaScript; it is file processing in Java! All I have done is opened the Java API to JavaScript. It is great if you really intend to program in Java, but it's of limited help if you are trying to do pure JS, and especially if your engine is not Java-based.
With CommonJS, there emerged a standard JavaScript-native interface to include a package, for example an I/O package or http package, and define many of the standard functionalities. Under the covers, the implementation may be C, Java, Erlang or Gobbledygook. All that matters is that the interface to the developer is platform-agnostic and portable from one interpreter to another.
The second missing piece was a server, similar either to Tomcat/Jetty for Java or Mongrel/Thin for Ruby, that provides a real environment, includes the necessary modules and is easy to use. Most important, it needed to take advantage of JavaScript's strengths, rather than attempt to copy a system that works for Java or Ruby. The real breakthrough was Ryan Dahl's Node.JS. Ryan combined Google's highly performant V8 engine, JavaScript's natural asynchronous semantics, a module system and the basic modules to create a server that suits JavaScript to a tee.
Most Web servers have a primary process that receives each new request. It then either forks a new process to handle the specific request, while the parent listens for more requests, or creates a new thread to do the same, essentially the same method if somewhat more efficient. The problem with processes or threads is threefold. First, they require significant resource usage (memory and CPU) for a small amount of differing code. Second, these threads often will block on various activities, such as filesystem or network access, tying up precious resources. Finally, threads and processes require context switches in the CPU. As good as modern operating systems are, context switches still are expensive.
The alternative, gaining in popularity, is event-driven, asynchronous callbacks. In an event model, everything runs in one thread. However, each request does not have its own thread. Rather, each request has a callback that is invoked when an event—like a new connection request—occurs. Several products already take advantage of the event-driven model. Nginx is a Web server with CPU utilization characteristics similar to those of the dominant Apache, but with constant memory usage, no matter how many simultaneous requests it serves. The same model has been brought to Ruby with EventMachine.
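The single-threaded event model can be caricatured in a few lines of JavaScript. This is a toy, synchronous sketch with no real sockets or timers; the handler and queue names are made up for illustration:

```javascript
// Toy event loop: one thread, a queue of events, one callback per event type.
const handlers = {};
const queue = [];

function on(type, callback) { handlers[type] = callback; }
function emit(type, payload) { queue.push({ type, payload }); }

const served = [];
on("connection", (id) => served.push("handled connection " + id));

// Two "requests" arrive; neither gets its own thread or process.
emit("connection", 1);
emit("connection", 2);

// The loop drains the queue, invoking the registered callback for each event.
while (queue.length > 0) {
  const event = queue.shift();
  handlers[event.type](event.payload);
}

console.log(served.join("; ")); // prints "handled connection 1; handled connection 2"
```

No context switches, no per-request stacks: each request is just a callback invocation on the one thread.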
As anyone who has programmed in JavaScript, and especially in asynchronous AJAX, knows, JS is extremely well suited to event-driven programming. Node.JS brilliantly combines packaging and an asynchronous event-driven model with a first-rate JS engine to create an incredibly lightweight, easy-to-use yet powerful server-side engine. Node has been in existence for less than two years and was first released to the world at large only at the end of May 2009, yet it has seen widespread adoption and has served as a catalyst for many other frameworks and projects. Quite simply, Node changes the way we write high-performance server-side nodes (pun intended) and opens up a whole new vista.
The rest of this article explores installing Node and creating two sample applications. One is the classic “hello world”, a starting point for every programming example, and the other is a simple static file Web server. More complex applications, Node-based development frameworks, package managers for Node, available hosting environments and how to host your own Node environment, will be subjects for future articles.
http://www.linuxjournal.com/article/10880?quicktabs_1=2
Enabling Tracing
This chapter describes how to enable tracing, view trace messages, and log trace messages. It contains the following sections:
About Tracing
Tracing is a tool for use primarily during development. Trace elements enable you to see the behavior of various elements in a production, for the purpose of debugging or diagnosis. You typically disable tracing before a production goes live.
The Ensemble trace mechanism works as follows:
As part of the development process, Ensemble developers add trace elements to the appropriate areas of your code. These trace elements (potentially) write trace messages at runtime. See “Adding Trace Elements” in the chapter “Programming in Ensemble” in Developing Ensemble Productions.
Note that these are messages only in a general sense; trace messages are simply strings and are unrelated to Ens.Message and its subclasses.
As part of the configuration process, do the following:
Configure the production to enable tracing. This step means that, at runtime, the trace elements are executed (rather than being ignored).
Optionally enable logging for the trace messages. This step writes trace messages to the Event Log.
Optionally configure the applicable business hosts to run in the foreground so that you can see trace messages in the Terminal while the production is running. See the last section in this chapter.
You typically disable tracing before a production goes live.
Enabling Tracing
By default, all user trace elements are enabled. You can also enable tracing of various system events.
To do so, set values for some or all of the following nodes of the ^Ens.Debug global:
For example, to enable tracing related to message queue management, enter the following command in the Terminal, in the appropriate namespace:
set ^Ens.Debug("TraceCat","queue")=1
Also see “Enabling %ETN Logging” in the chapter “Testing and Debugging” in Developing Ensemble Productions.
Enabling Logging for Trace Messages
Ensemble can also log trace messages (that is, write them to the Event Log). To enable or disable logging of trace messages, use the following settings:
For any business host, use the Log Trace Events setting. When this setting is selected, Ensemble logs all the enabled trace messages for this business host.
For the production, use the Log General Trace Events setting. When this setting is selected, Ensemble logs all enabled trace messages from production elements that are not business hosts.
There is no overlap or interaction between these settings; Log General Trace Events does not override or provide a default value for Log Trace Events.
See “Settings in All Productions” in Configuring Ensemble Productions.
Seeing Trace Messages in the Terminal
To see the trace messages in the Terminal, do the following:
If you are using Windows Vista or Windows 7, enable the Interactive Services Detection Service, as follows:
On the Windows Start menu, go to Administrative Tools > Services.
Scroll to Interactive Services Detection.
Right-click it and select Start.
Enable the Foreground setting for the business host or business hosts in which you are interested.
When you run the production, Ensemble opens a Terminal window for each foreground business host. This Terminal window shows all enabled trace messages for that business host. It also shows all log items and alerts.
On Windows Vista or Windows 7, the Interactive Services Detection Service displays a dialog box to indicate that a program is attempting to display a message. Click View the Message. The Interactive Services Detection Service then displays a window that contains one or more Terminal windows.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EMONITOR_tracing
Sometimes, it is inconvenient to place many breakpoints in the code to see how the application, or its particular module, is being executed. In such a case, it is a good idea to log various information and present it to the developer in the IDE in real time.
In this recipe, you will learn how to perform such a task using the
Debug class from the
System.Diagnostics namespace as well as present logs in the IDE within the Output window.
To complete this recipe, you need the project with two pages, represented by the
ProductsPage classes. It is necessary to pass a category identifier while navigating from
ProductsPage.
To prepare an example that shows how to log ...
https://www.oreilly.com/library/view/windows-application-development/9781786467720/ch01s18.html
This year, I got a genuine press
pass from a kindly soul at Sun. While my quest for priority seating was
still futile, I made progress on my quest to locate the stash of free
booze, and I got the chance to ask more hard-hitting questions. Here is
your intrepid reporter's take on day 2.
Last night, our cat, Mr. Darcy,
escaped, and it took a couple of hours to locate him. As a result, I
overslept and missed Gavin King's presentation on Web Beans. That's too
bad—I enjoy Gavin's presentations, particularly since he isn't shy
about voicing his opinions. I looked at the slides and was amused by this
bullet point: "True AOP is a boondoggle—an absurdly overcomplex
solution to a narrow range of problems."
Off-topic: Check out the Greenfoot entries.
More off-topic: If you are a packrat and want to get the PDFs from all the presentations, put this Groovy script
def myDocument = new XmlParser( new org.cyberneko.html.parsers.SAXParser() ).parse("")
def links = myDocument.depthFirst().A['@href'].findAll{ it.endsWith(".pdf") }
for (l in links) println("" + l);
in a file pdfs.groovy and run the following in bash:
groovy pdfs.groovy | xargs wget --user=******** --password=********
You should have received an email with the username/password.
Linda DeMichiel gave an overview about JPA 2.0. (If you haven't looked
at JPA 1.0, you should. It is very developer-friendly and you can learn it
without ever bothering with the nasty parts of the old EJB. You can use it
in standalone or plain web applications, outside an EJB container.)
JPA (as part of EJB 3.0) was a great start, but it was a 1.0 release:
there were some open issues and ambiguities, a few bugs, and some missing pieces. In
2.0, expect clarification, reduction of non-portability issues, and some
enhancements. It's a new JSR, completely decoupled from EJB.
The enhancements seemed pretty straightforward. Embeddables can contain
other embeddables; the whole hierarchy gets flattened. Sets of primitives
and strings can be declared conveniently. One-to-many bidirectional
mapping will be supported. The table-per-class inheritance strategy will
be required.
My pet peeve will be addressed,
and we'll get some kind of ordered lists. That is, when I persist
@Entity public class Quiz
{
@OneToMany @Ordered @OrderColumn // strawman syntax
private List<Question> questions;
}
the order of the questions is remembered, by storing an index in
another column. There is a headache about inserting into the middle of the
list; she wasn't sure how that would be addressed.
JPA 1.0 forces single access type (either fields or getters/setters)
for every entity hierarchy. I don't think I care since I always use
fields. Then again, maybe I do, if I want to use getter/setter for
debugging in one case.
Some JPQL limitations will be removed. You'll be able to say
SELECT d.name, SUM(c.hourlyRate * c.hoursWorked * 52) FROM Contractor c JOIN c.dept d GROUP BY d.name
There will be some syntax for dynamic queries where one adds criteria
objects rather than building up strings. It sure would be nice to have a
domain-specific language rather than cumbersome chained method calls, but
Java doesn't support DSLs (yet?)
Configuration hints will be provided in a uniform way, not as an ad-hoc
escape hatch. Obvious candidates: JDBC params, timeouts, logging, DDL
handling. They will go into the javax.persistence namespace.
We'll get better support for detached objects (another pet peeve of
mine): fetch plans, predictable behavior when unfetched state is touched.
Depending on timeline, there may be alignment with the general-purpose
validation mechanism of JSR 303.
Overall, it seems like an evolutionary improvement of an already very
good API.
My Core JSF coauthor, David Geary, has just finished a book on GWT and keeps telling me how
cool it is (“Swing for the web”). I knew nothing about it and
was skeptical when I first heard that they translate Java code into huge
gobs of client-side JavaScript. But I was quite impressed with the
presentation.
I am more than a bit scared of writing Ajax code myself. With JSF, I
can presumably just drop in someone else's well debugged Ajaxified
component. I don't actually hate writing JSF (well, except for the Stack
Trace from Hell), but I don't enjoy it much either. GWT seems non-scary,
even fun, especially if I can figure out how to use JPA on the backend.
I'll give that a try in my copious spare time.
At the show floor, I stopped by at the Eclipse booth and kvetched about
how tedious it is to install the plugins for JSF and JPA, and how much
easier it is to just use NetBeans. The fellow at the booth turned out to
be Mik Kersten, the creator of Mylar. He agreed that Eclipse
needed to get their act together with packaging but he still felt that it
had an edge because there are so many cool plugins such as, well, Mylar.
He gave me a demo, and it was cool. You can deal with bug tracker issues
right in Eclipse without ever going to Trac or (blecch) Bugzilla. Bugs are
tightly tied to code lines. Also, Mylar remembers the parts of code files
that you have recently looked at and hides everything else. You focus on
the stuff that you care about, not the mass of other files in the project.
Here is a better
explanation with screen shots by Kirill Grouchnikov.
I am glad that we have both Eclipse and NetBeans around.
It's a Java One tradition. They take an ugly Swing app and add
gratuitous eye candy with as few lines of code as possible. Such as:
The table effects had pretty nasty looking code, though. I called it
quits and went to a press-only round table discussion on JCP.
There was a distinguished panel, including Danny Coward, Linda De
Michiel, Hani
Suleiman, a bunch of folks from big corporations, and the director of
the JSR program, Onno Kluyt.
Someone asked the obvious question: What is the difference between the
JCP and OpenJDK. Answer: The JCP defines what Java is, the OpenJDK
provides an implementation. There can be only one Java standard, but there
can be multiple implementations.
I asked hard-hitting questions about the lack of openness in the expert
group deliberations, and the risk of expert groups with tunnel vision
delivering less than optimal results. It might have been polite not to
cite JSR 127 as an
example, but I showed no such restraint.
Answer: (1) The expert groups in the smoke-filled back room are a sign
of the past. Nowadays, the process is much more open. There are multiple
interim deliverables so that unhappy stakeholders can squawk early, and
many JSRs have open mailing lists. (Onno told me later that he would like
to see that as a requirement, not just an option, at some point in the
future.) (2) If a JSR goes off the deep end, the executive committee can
stop it.
When the discussion petered out, someone mumbled something about
"cocktails in room 114", and off we went. Apparently, the secret to free
drinks is to stick it out until the end.
I was so proud of my foresight that I had reserved a seat for the F3
presentation, long before I knew that it would be the star of the show and
prepared to smirk at a long line of people without pre-registration. But
there were plenty of empty spaces.
Chris Oliver gave an overview of Java FX
Script. (Doesn't that just roll off your tongue? After a couple of
failed attempts, he just called it JFX.)
He started out asking:
As we all know from the keynote, JFX is the solution: a toolkit to
produce flash-like GUIs with a declarative programming language.
The language is interpreted; it will be compiled at some later point.
It is statically typed.
You build component hierarchies in which every line, rectangle, etc. is
a component. Any Swing component can also be included. You declare
properties for each component, such as position, color, transparency,
transforms, filters (glow, noise, etc., like in Gimp/Photoshop).
If Java allowed for domain-specific sublanguages, could this be done in
Java? Or Groovy? (Chris Oliver doesn't believe in duck typing and isn't
shy about saying so.)
There is a bind keyword for binding values/components (?)
together. When one of them changes, the bound entity also changes. An
example: We have an animation
rotval = [0...360] dur 1000
That is, rotval goes from 0 to 360 in 1000 milliseconds. Bind
a rotation transform property of a rectangle to rotval. As
rotval changes, the transform changes, and the rectangle rotates.
That's how one avoids the writhing mess of listeners. According to Chris,
data binding is not a part of any mainstream language. It can be found in
functional
reactive programming languages.
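To make the bind idea concrete without JFX syntax, here is a sketch in plain Java of what data binding buys you: an observer under the hood, so the bound value follows the source without hand-written listener plumbing at every use site. The Property and BindDemo names are mine, not JFX API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal observable value; a language-level "bind" can be desugared to this.
class Property<T> {
    private T value;
    private final List<Consumer<T>> observers = new ArrayList<>();

    Property(T initial) { value = initial; }

    T get() { return value; }

    void set(T v) {
        value = v;
        for (Consumer<T> o : observers) o.accept(v); // re-fire on every change
    }

    void bind(Consumer<T> observer) {
        observers.add(observer);
        observer.accept(value); // bound side picks up the current value at once
    }
}

public class BindDemo {
    public static void main(String[] args) {
        Property<Integer> rotval = new Property<>(0);
        int[] rotation = new int[1]; // stands in for a rectangle's rotation transform
        rotval.bind(v -> rotation[0] = v);
        rotval.set(90); // the "animation" advances; the bound transform follows
        System.out.println(rotation[0]); // prints 90
    }
}
```

As rotval changes over the animation's duration, every bound property updates automatically, which is exactly how the writhing mess of listeners is avoided.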
In the Q&A, Elliotte Harold asked why not just use SVG. Answer: SVG
behavior is specified by JavaScript operations on the DOM, which is less
declarative and messier.
At the end, there were more flashy demos. Chris: “Whatever you
can do in Flash, you can do in Java. JFX gives you a faster way of
expressing it.” (I should have asked “What about
multimedia?” I'll have to work on my reporter skills.)
-1 for that. You missed out asking a very important point there. Unless Java lets developers easily write to diverse audio and video formats, competing with flash will remain a pipe dream. RIAs are not just about Java2D, sorry. We're leagues behind flash. Make no mistake.
Posted by: bharathch on May 10, 2007 at 01:12 AM
there's also the question of deployment: how streamlined is it, what's the footprint, etc..
Posted by: eitan on May 10, 2007 at 07:48 AM
Someone asked Chris about deployment and he said--quite reasonably, I think--that's not his department. F3 is delivered just like any other Java technology. From comments by Bob Brewin and others, I think it is sinking in at Sun that this is a big problem that has been neglected for a long time.
Posted by: cayhorstmann on May 10, 2007 at 08:14 AM
Check out Chris's F3 demos; he had a few video-related demos ages ago on his blog. They weren't terribly impressive since they used QuickTime, which sucks on Windows, but that's not an F3 problem, rather a Java problem. Generally I think F3 is nice, but as others have pointed out, a relatively small piece of the puzzle. The big piece of the puzzle would be proper deployment.
Posted by: vprise on May 10, 2007 at 10:46 AM
Ugh, I know very little about video, but I do know that Quicktime isn't going to cut it. That does seem a problem. When I looked at Silverlight, I was struck by how Microsoft was not playing up glitzy UIs. Their featured partners were TV and sports sites that said they could do video stuff that Flash couldn't touch.
Posted by: cayhorstmann on May 10, 2007 at 10:54 AM
That command line should be
groovy pdfs.groovy | xargs wget --http-user=***** --http-passwd=*****
Posted by: elharo on May 10, 2007 at 06:17 PM
Chris Adamson has pointed out many weaknesses with Java multimedia (e.g. JMF and QTJ) in several posts on his blog.
I don't see how JavaFX solves the underlying problem of Java's neglected multimedia support.
Posted by: andrewdavison62 on May 10, 2007 at 08:43 PM
Thanks Elliotte for the Mac parameters for wget--I ran the script on Ubuntu. I guess the Mac is ok if you don't mind futzing until it works :-)
Posted by: cayhorstmann on May 10, 2007 at 09:46 PM
http://weblogs.java.net/blog/cayhorstmann/archive/2007/05/java_one_day_2.html
This file defines all compare functions. More...
#include "sql_priv.h"
#include <m_ctype.h>
#include "sql_select.h"
#include "sql_optimizer.h"
#include "sql_parse.h"
#include "sql_time.h"
#include <algorithm>
This file defines all compare functions.
Aggregates field types from the array of items.
This function aggregates field types from the array of items. Found type is supposed to be used later as the result field type of a multi-argument function. Aggregation itself is performed by the Field::field_type_merge() function.
Create an AND expression from two expressions.
Parse date provided in a string to a MYSQL_TIME.
Parses a date provided in the string str into a MYSQL_TIME object. If the string contains an incorrect date or doesn't correspond to a date at all, then a warning is issued. The warn_type and warn_name arguments are used as the type and the name of the field when issuing the warning. If any input was discarded (trailing or non-timestamp characters), the return value will be TRUE.
http://mingxinglai.com/mysql56-annotation/item__cmpfunc_8cc.html
I am making a click-and-drag game, and I got the click and drag down, but my problem is the objects go through the walls, and I don't want that to happen.
C#
using UnityEngine;
using System.Collections;

public class Move : MonoBehaviour {
    private Vector3 screenPoint;
    private Vector3 offset;

    // Bodies reconstructed from the standard Unity click-and-drag pattern
    // this question is based on (the page extract dropped them)
    void OnMouseDown() {
        screenPoint = Camera.main.WorldToScreenPoint(transform.position);
        offset = transform.position - Camera.main.ScreenToWorldPoint(
            new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenPoint.z));
    }

    void OnMouseDrag() {
        Vector3 curScreenPoint = new Vector3(Input.mousePosition.x, Input.mousePosition.y, screenPoint.z);
        Vector3 curPosition = Camera.main.ScreenToWorldPoint(curScreenPoint) + offset;
        transform.position = curPosition;
    }
}
Do you use proper Colliders attached to the dragged object and the walls?
transform.position = curPosition strictly sets the position, regardless of colliders or rigidbodies.
I would suggest adding force to the dragged object in the direction of the mouse position.
Answer by Suddoha
·
Sep 08, 2015 at 01:32 PM
As it has already been mentioned in a comment, you shouldn't set the transform.position directly when expecting physical behaviour from this movement.
Once you solve this using proper methods like the rigidbody's AddForce methods, other issues might occur, such as bouncing behaviour at walls. And when pulled with enough speed, the objects will glitch through as well. You might need to consider another, or at least an additional, detection system using raycasts for this purpose because, as stated, forcing a physics-driven object with enough speed to collide permanently with other objects looks weird.
And that is because the physics engine calculates the new force which should normally be applied to the object (usually in another direction), whereas the player might still force the object in the other direction. It looks like a strange and weird bouncing effect then.
This can be really tricky if you work against the physics engine.
Answer by LinkoVitch
·
Sep 08, 2015 at 07:52 PM
Rather than move the object as you drag it, perform a sweep test from the current object position to the new destination position. You can then use this to see if the object would collide with anything on its way to the new destination and react accordingly.
It may be useful to display a 'ghost' of the object under the cursor and if a collision occurs stop the ghost at the position of the collision, if no collision then the ghost will be located at the mouse cursor.
When the mouse is released, you then remove the ghost object and change the position of the actual object you wanted to move to it's new position.
https://answers.unity.com/questions/1063558/rigid-body-goes-through-walls.html?sort=oldest
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Fri, May 02, 2003 at 06:46:16AM -0700, Kean Johnston wrote:
> > I am with Neil Booth on the -Wall issue. After all, the new
> > warning _is_ correct and provides useful functionality, so it
> > should be on by default. Those who are stuck with legacy source
> > trees should be able to come with necessary Makefile changes
> > order of magnitude easier that with the necessity to modify
> > each and every source file in the tree.
> In my case, I would *only* have to change 4009 makefiles. That's
> in tree 1 of 3. The others are similarly sized. Well Id have to
> change them *IF* I was affected by this, which personally I am
> not, but I can see many many cases where people will be.

If you have -W* options in each of those 4009 makefiles (as opposed to
having CFLAGS set in a few locations), yes. But how hard is it to do that
with a script? Even better is to just write a script which will change all
those rcsid/sccsids into some macros and define the macros to something
reasonable. The

#ifndef LINT
static char *rcsid = "something";
#endif

I saw in one of the mails is plain stupid, in that it not only wastes
.rodata space, but also .data and in shared libraries causes dynamic
relocations.

	Jakub
https://gcc.gnu.org/legacy-ml/gcc-patches/2003-05/msg00203.html
In the first part of this series on Stock Price Prediction Using Deep Learning, we covered all the essential concepts that are required to perform stock market analysis using neural networks. In this second article, we will execute a practical implementation of stock market price prediction using a deep learning model. The final result obtained might be helpful for users to determine if the outcome of their investments will be worth it in the long term.
Before we dive more deeply into this topic, let us explore the table of contents for this article. This list will help us understand the realistic expectations from this article, and you can feel free to skip to the sections you deem most necessary.
Table of Contents
- Introduction
- Preparing the data
- Visualization of data
- Data pre-processing and further visualization
- Analyzing the data
1. First sets of data elements
2. Second sets of data elements
- Building the deep learning model
1. Stacked LSTM model
2. Model summary and plot
3. Compiling and fitting the model
4. Analyzing results
- Making Predictions
- Conclusion
Bring this project to life
Introduction
In this article, we will study the applications of neural networks on time series forecasting to accomplish stock market price prediction. You can find the full code for this tutorial and run it on a free GPU from the ML Showcase. While you can directly run the entire notebook on your own, it is advisable that you understand and study each individual aspect and consider code blocks individually to better understand these concepts. The methodology for this project can be extended to other similar applications such as cryptocurrency price predictions as well.
The entire architecture of the stacked LSTM model for our predictions will be built using the TensorFlow deep learning framework. Apart from the TensorFlow library, we will also utilize other libraries such as pandas, matplotlib, numpy, and scikit-learn for performing various operations throughout the course of the project. These actions include loading and viewing the dataset, visualizing our data, converting it into shapes suitable for computation with stacked LSTMs, and final preparation before starting our project.
The primary objective of this article is to provide an educative view of deep learning applications in time series forecasting through the example of stock market price prediction. This example is just one of the many use cases available for you to build with neural networks (stacked LSTMs in this case). It is important to note that before implementing these ideas in real-life scenarios, you should explore the numerous tools available to you and check which ideas work best for your application. Let us now proceed to a complete breakdown of our project from scratch.
Preparing the data
The first step to complete this project on stock price prediction using deep learning with LSTMs is the collection of the data. We are going to consider a random dataset from Kaggle, which consists of Apple's historical stock data. We are going to read the CSV file using the Panda's library, and then view the first five elements of the data.
# Importing the Pandas Library for loading our dataset
# Grab The Data Here -
import pandas as pd

df = pd.read_csv('HistoricalQuotes.csv')
df.head()
Although there are many parameters to consider for the stock market prediction model, we will refer only to the Close/Last column because it gives us more of an averaged measure, and it is easier to consider just one column for training and validation purposes. Keenly observing the dataset is an essential step in solving most data science problems. After looking at the dataset, we can easily deduce that the dates are in descending order. We need to correct this.
data = df.reset_index()[' Close/Last']  # Make sure you add the leading space
data.head()
Result
0     $273.36
1     $273.52
2     $292.65
3     $288.08
4     $298.18
Name:  Close/Last, dtype: object
Then, we need to arrange our dataset in ascending order. While you could use the reindex function from the pandas module, I prefer constructing a new list, reversing it, and finally creating a new data frame for the following tasks.
Option 1
df3 = data.reindex(index=data.index[::-1])
df3.head()
Result
2517    $29.8557
2516    $29.8357
2515    $29.9043
2514    $30.1014
2513    $31.2786
Name:  Close/Last, dtype: object
Option 2
df2 = []
for i in data:
    i = i[2:]
    i = float(i)
    df2.append(i)
df2.reverse()
df1 = pd.DataFrame(df2)[0]
df1.head()
Result
0    29.8557
1    29.8357
2    29.9043
3    30.1014
4    31.2786
Name: 0, dtype: float64
Visualize Your Data
For any machine learning and deep learning problem, one of the most crucial steps is the visualization of your data. Once you visualize and pre-process the data, you can get a brief understanding of the type of model you are dealing with and the steps and measures required to solve the task. One of the best visualization libraries in the Python programming language is Matplotlib, which lets you visualize the dataset accordingly. Let us plot the data against their respective indexes. The data consists of the values of the stock at the respective intervals.
# Using the Matplotlib Library for visualizing our time-series data
import matplotlib.pyplot as plt

plt.title("Data Plot")
plt.xlabel("Index")
plt.ylabel("Data")
plt.plot(df1)
From our visualization, we can notice that the plot of the dollar rate per day usually seems to have an increasing trend.
Data pre-processing and further visualization
Our next step is to prepare our dataset. We will import the numpy and scikit-learn libraries to represent our data in the form of an array for better modeling. Since the dataset is fairly large and the values are high, it is better to scale them between 0 and 1 for better computation. We are going to use the MinMaxScaler function from the scikit-learn library to perform this action.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
df1 = scaler.fit_transform(np.array(df1).reshape(-1, 1))
df1
Result
array([[6.72575693e-05],
       [0.00000000e+00],
       [2.30693463e-04],
       ...,
       [8.83812549e-01],
       [8.19480684e-01],
       [8.18942624e-01]])
From the same sklearn module, we can import the function to split our data into training and testing datasets. The essential point here is to ensure that shuffle is set to False, because we don't want our time-series data distributed randomly. Hence, we will give a small test size and distribute the rest to the training data. Once we are done splitting the data as desired, let us proceed to visualize both the train and test datasets that we have created.
from sklearn.model_selection import train_test_split

X_train, X_test = train_test_split(df1, test_size=0.20, shuffle=False)
Training data visualization
plt.title("Data Plot")
plt.xlabel("Index")
plt.ylabel("Data")
plt.plot(X_train)
X_train[:5]
Result
array([[6.72575693e-05],
       [0.00000000e+00],
       [2.30693463e-04],
       [8.93516807e-04],
       [4.85229733e-03]])
Testing data visualization
plt.title("Data Plot")
plt.xlabel("Index")
plt.ylabel("Data")
plt.plot(X_test)
X_test[:5]
Result
array([[0.49866208],
       [0.4881699 ],
       [0.49223898],
       [0.49429034],
       [0.49378591]])
Now that we have a brief understanding of what our training and testing data look like, the next important step is to take groups of these elements and assign a prediction to each group. This procedure is how we will create our final training and testing data along with their respective training and testing predictions. These outcomes are determined by the timestep we choose for our datasets. I will use a timestep of 100 data elements with one prediction for each window. This windowing is an integral aspect of time-series analysis with deep learning models.
The elements from index 0 to 99 (the first 100 elements) constitute the first set of the training or testing dataset, and the element at index 100 constitutes the first prediction. This first outcome is stored in the corresponding results (Y) training or testing list. The elements from 1 to 100 then constitute the second set, while the element at index 101 becomes the second prediction, and so on. Using this method, we can build our dataset. Our deep learning architecture is well suited to datasets and problems of this kind in stock price prediction.
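To make this windowing scheme concrete, here is a tiny illustration with toy values and a timestep of 3 (chosen for readability; the values are made up, and the article itself uses a timestep of 100):

```python
# Toy illustration of the sliding-window scheme: each window of
# `timestep` consecutive values becomes one input set, and the value
# immediately after the window becomes its target.
series = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
timestep = 3

X, Y = [], []
for i in range(len(series) - timestep):
    X.append(series[i:i + timestep])   # elements i .. i+timestep-1
    Y.append(series[i + timestep])     # the next element is the target

print(X[0], "->", Y[0])  # [10, 11, 12] -> 13
print(X[1], "->", Y[1])  # [11, 12, 13] -> 14
```

The code that follows applies the same pattern at full scale, with windows of 100 values.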
X_train_data = []
Y_train_data = []
X_test_data = []
Y_test_data = []

train_len = len(X_train)
test_len = len(X_test)

# Create the training dataset
for i in range(train_len-101):
    a = X_train[i:(i+100), 0]
    X_train_data.append(a)
    Y_train_data.append(X_train[i + 100, 0])

# Create the test dataset
for j in range(test_len-101):
    b = X_test[j:(j+100), 0]
    X_test_data.append(b)
    Y_test_data.append(X_test[j + 100, 0])

X_train_data = np.array(X_train_data)
Y_train_data = np.array(Y_train_data)
X_test_data = np.array(X_test_data)
Y_test_data = np.array(Y_test_data)
Analyzing The Data
Now that we have a brief understanding of what we are trying to achieve with the previous explanations and code blocks, let us try to gain further insight on this topic by actually analyzing and looking at the first two sets of training data and their respective outputs that are stored in the output dataset.
First element of the first training set
X_train_data[0][0]
Result
0.024104103955989345
Second element of the second training set
X_train_data[1][1]
Result
0.024544304746736592
As the final step of our analysis and pre-processing, let us print the shapes of our training and testing dataset as well as their respective outputs. The shapes of the training and testing datasets must contain 100 elements in each of their sets, while their respective results must have the shape equivalent to the number of sets present in the training or testing datasets. Using these figures and shapes, we can finally proceed to the final step of our pre-processing.
# Printing the training and testing shapes
print("Training size of data = ", X_train_data.shape)
print("Training size of labels = ", Y_train_data.shape)
print("Testing size of data = ", X_test_data.shape)
print("Testing size of labels = ", Y_test_data.shape)
Result
Training size of data =  (1913, 100)
Training size of labels =  (1913,)
Testing size of data =  (403, 100)
Testing size of labels =  (403,)
Now that we have a brief idea of the shapes produced by our testing and training datasets as well as their respective outputs, the final essential step is to convert these existing shapes of the datasets into a form suitable for Long short-term memory (LSTM) models. Since our current structure is only a 2-dimensional space, we need to convert it into a 3-dimensional space for making it suitable to perform LSTM operations and computations.
# Converting the training and testing data into a 3-dimensional
# shape to make it suitable for LSTMs
X_train_data = X_train_data.reshape(1913, 100, 1)
X_test_data = X_test_data.reshape(403, 100, 1)

print(X_train_data.shape)
print(X_test_data.shape)
Result
(1913, 100, 1) (403, 100, 1)
Building The Deep Learning Model
We will stack LSTM layers to build the deep neural network architecture for this task. First, we will import all the libraries essential for this computation. We will use a simple Sequential architecture. I would highly recommend checking out my previous two articles introducing TensorFlow and Keras for a more detailed treatment of these topics.
However, to provide a short summary on these topics, a Sequential model is one of the simplest architectures to build your deep learning models and construct your neural networks for solving a variety of complex tasks. A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.
The only two layers that we will require for processing our computation of the stock market price prediction problem are the Dense layers and the LSTM layers. The Dense layers basically act like a fully connected layer in a neural network. They can be used either as hidden layers with a specific number of nodes or as an output layer with a specific number of nodes. Here, our Dense layer will only have one output node because we only require one output or prediction for a specific set of parameters or sets of data. We have already discussed in further detail the topic of LSTM layers previously.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM

# Build the architecture
model = Sequential()
model.add(LSTM(100, return_sequences=True, input_shape=(100,1)))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(100))
model.add(Dense(1))
With that step complete, we have finished building the stacked LSTM architecture that we can utilize for making stock market predictions. However, a few steps remain before the model can be deployed and used to make actual predictions. Let us look at the model summary and the model plot.
model.summary()
Result
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 100, 100)          40800
_________________________________________________________________
lstm_1 (LSTM)                (None, 100, 100)          80400
_________________________________________________________________
lstm_2 (LSTM)                (None, 100)               80400
_________________________________________________________________
dense (Dense)                (None, 1)                 101
=================================================================
Total params: 201,701
Trainable params: 201,701
Non-trainable params: 0
_________________________________________________________________
Model Plot
# Plot the model
from tensorflow import keras

keras.utils.plot_model(model, to_file='model.png', show_layer_names=True)
Before we move on to the compilation and training procedures, I will implement one callback that saves a checkpoint at the best validation loss achieved so far. The reason for saving the best model is that you can load it later to make further predictions without repeating the entire training and fitting procedure. During the deployment stage, the saved model can be used directly for inference.
The second and final callback that we will utilize for our model is the TensorBoard callback. We will mainly use the TensorBoard callback from the Keras module to visualize the graphs of the training and validation loss. We will use a log directory that stores the folders for the training and validation data, and pass this directory to the TensorBoard callback function. Once training is complete, we can use the saved information from the logs directory to visualize our training and validation losses.
You can also use other callbacks, as mentioned in my previous Keras article. However, I do not deem it necessary to use any of the other Keras or custom callbacks for performing this specific task. You can feel free to try out the other callbacks if you want to explore them in further detail.
# Initializing the callbacks
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import TensorBoard

checkpoint = ModelCheckpoint("checkpoint1.h5", monitor='val_loss',
                             verbose=1, save_best_only=True, mode='auto')
logdir = 'logs1'
tensorboard_Visualization = TensorBoard(log_dir=logdir)
After initializing our callbacks, we can proceed to compile the stacked LSTM model that we completed building using the Sequential modeling architecture. The loss function we will utilize for the compilation of the model is the mean squared error. The mean squared error function computes the mean of squares of errors between labels and predictions. The formulation of the equation is interpreted as follows:
$loss = mean(square(y_true - y_pred))$
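As a quick numeric illustration of this loss (the numbers are made up for the example, not taken from the dataset):

```python
# Toy mean-squared-error computation between true values and predictions.
y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]

squared_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
mse = sum(squared_errors) / len(squared_errors)
print(mse)  # (0.25 + 0.0 + 4.0) / 3 = 1.4166...
```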
As discussed previously in my Keras article, one of the best optimizers to choose by default for performing the compilation of the model is the Adam optimizer. Adam optimization is a stochastic gradient descent method that is based on an adaptive estimation of first-order and second-order moments. You can also feel free to try out other optimizers such as RMS Prop and see if you are able to achieve better results during the training procedure.
# Model compilation
model.compile(loss='mean_squared_error', optimizer='adam')
The final step after constructing and compiling the model is to train it and ensure the fitting process goes well. During the training stage, we will use the model's fit function, training on X_train_data and validating on X_test_data. The first two arguments of the fit function are the training data and its targets, respectively. We then supply the validation data, consisting of the test data and its targets, enclosed in parentheses.
I will be running the model for twenty epochs. You can choose to run it for more than the specified epochs if you want to obtain a slightly better result. I will use a batch size of 32 for the fitting process, but if you are increasing the number of epochs, you can also feel free to edit this in the form of increasing batch sizes like 64, 128, etc. The verbose parameter, when set as 1, will allow you to see the training procedure occurring in each step. You can also choose to set it as 0 or 2, as per your convenience. Finally, we will implement the callbacks function that consists of our previously defined callbacks. Now that we have completed defining all the required parameters for the fit function, we can run the code block and observe the training process.
# Training the model
model.fit(X_train_data, Y_train_data,
          validation_data=(X_test_data, Y_test_data),
          epochs=20, batch_size=32, verbose=1,
          callbacks=[checkpoint, tensorboard_Visualization])
Result
Epoch 19/20
1824/1913 [===========================>..] - ETA: 0s - loss: 1.0524e-04
Epoch 00019: val_loss improved from 0.04187 to 0.03692, saving model to checkpoint1.h5
1913/1913 [==============================] - 1s 508us/sample - loss: 1.0573e-04 - val_loss: 0.0369
Epoch 20/20
1824/1913 [===========================>..] - ETA: 0s - loss: 1.3413e-04
Epoch 00020: val_loss did not improve from 0.03692
1913/1913 [==============================] - 1s 496us/sample - loss: 1.3472e-04 - val_loss: 0.0399
Tensorboard Analysis
After executing the fitting code block, we will have a saved checkpoint labeled "checkpoint1.h5", from which we can directly load the best weights of our model and start making predictions. There is no need to retrain the entire model.
Back in the working environment, we can find a newly created directory called logs1. The logs1 directory contains the training and validation events that we will use to understand and analyze the training and validation components. The resulting graphs give us a brief description of how the loss and validation loss behave as the epochs increase. To access these graphs, open a command prompt in your workspace and, after activating your virtual environment, type the following command: tensorboard --logdir="./logs1".
After executing this command, you can open localhost in your web browser, which will direct you to the TensorBoard graphs. Let us now proceed to analyze the training and validation loss graphs.
from IPython.display import Image, display

pil_img = Image(filename='image1.png')
display(pil_img)
Since the TensorBoard graph does not give us the precise values we need in the first image, let us zoom in a bit so that we can analyze the values more closely.
pil_img = Image(filename='image2.png')
display(pil_img)
From the second image, it is clearly visible that both the training loss and validation loss are decreasing with increasing epochs. Hence, we can determine that the fitting procedure of the model is working pretty well because the model is constantly improving with the increasing number of epochs.
Predictions and Evaluations
In the next few code blocks, we will use the model's predict function to make predictions on the training and testing data. It is essential to note that we previously scaled the training and testing values to the range 0 to 1 for faster, more efficient computation. Hence, it is now time to use the scaler's inverse_transform method to return the values to their original scale. With this step, the values obtained are scaled back to their original magnitudes.
We will also compute the Root Mean Square Error (RMSE) for both the training and testing data. The method for performing this task is fairly simple. We need two additional libraries, math and sklearn.metrics. From the metrics module, we import the mean_squared_error function, which computes the mean squared error between the true values and their respective predictions. Once we have calculated the required values, we can see that the roots of these MSE values are not too bad: the model produces effective results and can be used to make reasonably accurate predictions.
train_predict = model.predict(X_train_data)
test_predict = model.predict(X_test_data)

# Transform back to original form
train_predict = scaler.inverse_transform(train_predict)
test_predict = scaler.inverse_transform(test_predict)
# Calculate RMSE performance metrics for train and test data
import math
from sklearn.metrics import mean_squared_error

# Bring the true labels back to the original scale before comparing,
# since the predictions above were already inverse-transformed
Y_train_orig = scaler.inverse_transform(Y_train_data.reshape(-1, 1))
Y_test_orig = scaler.inverse_transform(Y_test_data.reshape(-1, 1))

print("Train RMSE = ", math.sqrt(mean_squared_error(Y_train_orig, train_predict)))
print("Test RMSE = ", math.sqrt(mean_squared_error(Y_test_orig, test_predict)))
Finally, we will evaluate our model and analyze the numerous parameters. A small reference for the code is considered from here. I would recommend checking the following website as well to gain further information on this topic. For further detail on how you can build the predictive systems to evaluate and test your model, you can check out this link. The below graphical structure is an example and a brief representation of the type of graph that we will obtain when we try to make the respective predictions. Feel free to try these methods on your own from the provided references. I will not be covering them in this article because the intended purpose is to only provide an educational perspective and view on these concepts.
With this, we have successfully built our stock market prediction model with the help of LSTMs and Deep Learning.
Conclusion
We have completed the construction of a stacked LSTM deep learning model of the kind used in stock market price prediction, cryptocurrency trend analysis, and numerous other aspects of daily life. You can build similar models to understand everyday weather patterns and much more. The field of time-series forecasting with deep learning is enormous to explore, and it provides a wide variety of opportunities to work on many different kinds of unique projects and deep learning models. These models offer an excellent way to approach many time-series problems.
The previous article addressed most of the common topics related to time-series forecasting. Please check out part-1 of this series to understand most of the concepts more intuitively. The first part provides a conceptual overview of the knowledge required for some of the essential topics of time-series forecasting, as well as includes some of the theoretical explanations to the content covered in this second part. With the help of the Jupyter Notebook provided in this article, feel free to explore and try out a variety of integrations to improve the performance of the model and create much more innovative projects.
Thank you all for reading this 2-part series. In future articles, we will cover more topics on GANs, forecasting, NLP models, and so much more. Until then, enjoy coding and keep up the practice!
The fork system call is unique in that while it is called once, it returns twice: once to the child and once to the parent process. As noted in Chapter 1, "Programs and Processes," if the fork system call is successful, it returns a value of 0 to the child process and the process ID of the child to the parent process. If the fork system call fails, it returns -1 and sets the global variable errno. The failure of the system to generate a new process can be traced, by examination of the errno value, to either exceeding the limits on the number of processes (systemwide or for the specific user) or to the lack of available swap space for the new process. It is interesting to note that in theory the operating system is always supposed to leave room in the process table for at least one superuser process, which could be used to remove (kill) hung or runaway processes. Unfortunately, on many systems it is still relatively easy to write a program (sometimes euphemistically called a fork bomb) that will fill the system with dummy processes, effectively locking out system access by anyone, including the superuser.
After the fork system call, both the parent and child processes are running and continue their execution at the next statement after the fork. The return value from the fork system call can be examined, and the process can decide what code to execute next. The process receiving a 0 from the fork system call knows it is the child, as 0 is not a valid PID. Conversely, the parent process receives the PID of the child. An example of a fork system call is shown in Program 3.1.
Program 3.1 Generating a child process.
File : p3.1.cxx
/* Generating a child process */
#include <iostream>
#include <sys/types.h>
#include <unistd.h>
using namespace std;
int
main( ){
  if (fork( ) == 0)
    cout << "In the CHILD process" << endl;
  else
    cout << "In the PARENT process" << endl;
  return 0;
}
There is no guarantee as to the output sequence that will be generated by this program. For example, if we issue the command-line sequence
linux$ p3.1 ; echo DONE ; p3.1 ; echo DONE ; p3.1
numerous times, sometimes the statement In the CHILD process will be displayed before In the PARENT process, and other times it will not. The output sequence is dependent upon the scheduling algorithm used by the kernel. Keep in mind that commands separated by a semicolon on the command line are executed sequentially, with the shell waiting for each command to terminate before executing the next. The effects of process scheduling are further demonstrated by Program 3.2.
Program 3.2 Multiple activities parent/child processes.
File : p3.2.cxx
/* Multiple activities PARENT -- CHILD processes */
#include <iostream>
#include <cstring>
#include <sys/types.h>
#include <unistd.h>
using namespace std;
int
main( ) {
  static char buffer[10];
  if (fork( ) == 0) {              // In the child process
    strcpy(buffer, "CHILD...");
  } else {                         // In the parent process
    strcpy(buffer, "PARENT..");
  }
  for (int i=0; i < 3; ++i) {      // Both processes do this
    sleep(1);                      // 3 times each.
    write(1, buffer, sizeof(buffer));
  }
  return 0;
}
Figure 3.1 shows the output of this program when run twice on a local system.
Figure 3.1 Output of Program 3.2.
linux$ p3.2
PARENT..CHILD...CHILD...PARENT..PARENT..CHILD...linux$
linux$ p3.2
PARENT..CHILD...PARENT..CHILD...PARENT..
linux$ CHILD...
There are several interesting things to note about this program and its output. First, the write system call, not the cout object, was used in the program. The cout object (an instance of the ostream class defined in <iostream>) is buffered and, if used, would have resulted in the three-message output from each process being displayed all at once without any interleaving of messages. Second, the system call sleep (sleep a specified number of seconds) was used to prevent the process from running to completion within one time slice (which again would produce a homogeneous output sequence). Third, one process will always end before the other. If there is sufficient intervening time before the second process ends, the system will redisplay the prompt, thus producing the last line of output, where the output from the child process is appended to the prompt (i.e., linux$ CHILD...).
Keep in mind the system will flush an output stream (write its data to the physical media) in a variety of circumstances. This synchronization occurs when (a) a file is closed, (b) a buffer is full, (c) in C++ the flush or endl manipulators are placed in the output stream, or (d) a call is made to the sync system call.
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to inherit standard report parser class in qweb reports
I am modifying hr_payslip report in custom module. I want to add a custom method to this report for this I need to update localcontext with custom method for this report. I am trying
class payslip_report(report_sxw.rml_parse):

    def __init__(self, cr, uid, name, context):
        super(payslip_report, self).__init__(cr, uid, name, context)
        self.localcontext.update({
            'get_payslip_total': self.get_payslip_total,
        })

    def get_payslip_total(self, obj):
        payslip_line = self.pool.get('hr.payslip.line')
        res = []
        total = 0.0
        for id in range(len(obj)):
            if obj[id].appears_on_payslip is True:
                total = total + obj[id].total
        return total
and calling in qweb template like
<td><span t- </td>
but getting error "QWebException: "'NoneType' object is not callable" while evaluating
'get_payslip_lines(o.line_ids)". Can anyone help me what I am missing here?
Yogesh,
In order to inherit your "payslip_report" class, just inherit your previous class using Python inheritance, as:
from openerp.addons.[your_mocule_name].[directory_structue/parent_folder] import payslip_report
and then redefine the class,
class payslip_report(payslip_report.payslip_report):
# here you can override or add previous/new functions.......
Don't forget to keep your [your_module_name] dependency in __openerp__.py
Hope it helps you and i am posting this post after so much long time, since i got it now and it may help you or someone else :)
You should be inheriting from the payslip_report class in hr_payroll/report/report_payslip.py. First, before the class definition you should put: from hr_payroll.report import report_payslip. Then the class definition should be class payslip_report(report_payslip.payslip_report). This way, you chain your inheritance (this is python inheritance, not Odoo ORM inheritance) through the class that provides the get_payslip_lines method.
Also don't forget to re-register your report with the new parser. At the end of the py file, add the following line: report_sxw.report_sxw('report.payslip', 'hr.payslip', 'hr_payroll/report/report_payslip.rml', parser=payslip_report) adjust the information as necessary.
Hi, I've you solved your issue?
No
I'm also stuck with this. Just trying to reuse the parser for a new qweb report and it doesn't seems to be possible...
I also got stuck and had to pause the task , if you get solution please post the answer it will help me and others. I have to start that task again in 3-4 days.
I may have a solution for my case. I think that for your case you will need to override payslip_report instead of report_sxw.rml_parse, with something like: class payslip_report(payslip_report):, after importing the original payslip_report with something like import addons.hr_payroll.report.report_payslip. Hope this helps.
try this,
Did you solve this post? I have the same.
Describes the "schema" for an XML feed to "googlebase". Note that this is only an "abstract schema" in the sense that Google Base does not currently accept a "feed" in this format. Instead, Google Base accepts feeds in a wide variety of "standard" XML content feed formats (RSS 2.0, Atom). The "item/entry" subelement of an "RSS/Atom" feed corresponds to the "item" element defined below. The sub elements of an item defined herein serve as the concrete definition of the "googlebase defined extensions" to the Atom/RSS schemas. Describes the "schema" for an "item" in googlebase -- the basic unit in a "feed". Since Google Base will also accept feeds in a wide variety of "standard" XML content feed formats (RSS 2.0, Atom), some of the item's sub elements (the "baseElementGroup") are represented by their corresponding child elements of "item/entry" in "RSS/Atom". The correspondence is defined in a separate document. The rest of the item's sub elements are represented as "extensions" (belonging to the googlebase namespace) on the "item"/"entry" elements in RSS/Atom formats. We group these extension elements into three: "google defined with complex content" ("complexExtensionElementGroup"), "google defined with simple content" ("simpleExtensionElementGroup"), and "customer defined" ("customExtensionElementGroup"). Since all the elements in the google defined element groups are optional, the user is free to define all extensions as "custom". However, the user is strongly encouraged to first consult the (ever-growing) list of google defined extensions to see if the information they want to provide is represented there and, if so, use those extensions in favor of custom ones. Defines the collection of (sub)elements of an item that are "mappable" to (sub)elements of an "item/entry" in an existing "standard" content syndication format (RSS/Atom). Note that none of these elements (except "id") are top level elements in this schema.
This is because we expect them to be represented using the corresponding RSS/Atom elements. "Id" is special because RSS 1.0 does not have such an element defined. We will accept documents that have both id defined as an RSS (sub)element and the same id defined in the gn namespace; we will use the value in the gn namespace in that case. Defines the collection of (sub)elements of an item that are "mappable" as "extension (sub)elements" of an "item/entry" in an existing "standard" content syndication format (RSS/Atom) and have "complex" content. Defines the collection of (sub)elements of an item that are "mappable" as "extension (sub)elements" of an "item/entry" in an existing "standard" content syndication format (RSS/Atom) and have "simple" content. The following wildcard defines our extension mechanism. Note that while the XML schema syntax here is very loose, we will accept only elements with "simple content" (no sub elements). The elements can have an optional type attribute ("gn:elementType") associated with a value defined in the elementTypeEnumeration. Defines a shipping option for an item. Defines a time interval. The country that this listing is located in; the value must be an ISO 3166 uppercase 2-letter country code. For example, United States is US and Canada is CA. ISO 4217 standard.
Extending the jQuery prototype to access Bootstrap components
Recently while transitioning some pages from jQuery-UI to Bootstrap, I found that one feature I missed (or rather, couldn’t find in the documentation) in Bootstrap is the ability to get references to the component object. For example, if we have a button and attach a popover to it, wouldn’t it be nice to grab the HTML of the popover so we can manipulate it later? Bootstrap exposes some functions via the .popover() call, but these mostly deal with toggling the visibility. While the Bootstrap documentation doesn’t explicitly point out how to do this, it’s quite simple to extend the jQuery prototype to get at the component object.
So given some HTML:
<input type='button' id='example' value="Example" />
and some Javascript to add a Bootstrap popover to that element:
$("#example").popover({
    title: 'Example popover',
    content: "Let's get at this container.",
    placement: 'top'
});
we notice that Bootstrap itself doesn’t provide any way to grab the popover object. This is because it stores all the info that we need about the component in the trigger element’s jQuery data function:
// In Bootstrap 2.3 and under
$('#example').data('popover');

// In Bootstrap 3.0+
$('#example').data('bs.popover'); // Note the new 'bs' namespace
From there we can access the options, enabled state, arrow and tip objects, a plethora of functions, and more. Care should be taken manipulating some of these elements directly as the result may not be quite as expected (especially when dealing with positioning).
Now, if we want to abstract the .data(‘popover’) call (ie. so that if we plan to transition from 2.x to 3.x we don’t have to change every call from ‘popover’ to ‘bs.popover’, or if we plan on making any changes to the returned object), we can extend the jQuery prototype (via $.fn) to add a function:
$.fn.getPopover = function() {
    return $(this).data('popover'); // Or 'bs.popover' in Bootstrap 3.0+
};
Then from there if we want to, say, add a class to the popover, we could easily call our getter and grab the element:
var popoverObject = $('#example').getPopover().$tip;
popoverObject.addClass('info');
It is also worth noting that elements that have arrows store them separately in the $arrow property; useful in case you plan on re-positioning the component.
If we wanted to further extend this logic for multiple components, we could create an array and iterate over it:
function createBootstrapGetters() {
    // Again, you need to prepend 'bs.' here for Bootstrap 3.0+
    var list = ['popover', 'tooltip', 'carousel'];
    $.each(list, function(i, name) {
        var capitalizedName = name.charAt(0).toUpperCase() + name.slice(1);
        $.fn['get' + capitalizedName] = function() {
            return $(this).data(name);
        };
    });
}
Finally, it’s worth mentioning that this technique may not work with all components (such as dropdowns), as their data attributes are a little different. However, those components are essentially equal to the trigger elements anyway, so manipulating their HTML should be a non-issue.
Igor Shults
|
https://objectpartners.com/2013/10/02/extending-the-jquery-prototype-to-access-bootstrap-components/
|
CC-MAIN-2020-40
|
refinedweb
| 519
| 56.96
|
A couple of years ago, I wrote a post about a method for sampling from a very large data file. Reservoir sampling can efficiently draw a fixed number of samples from a huge data file while reading through it only once. Now, the natural step forward is sampling with replacement.
I sometimes need to resample large data, like a huge alignment or SNP table, with replacement for statistical analysis; that is, I need to bootstrap them. Is there a way to efficiently bootstrap a huge file? This looks a lot more difficult than sampling a fixed number of samples without replacement, because every item in the file stream must have a chance to be sampled several times.
I did some Google searches and found that this issue has been studied in various fields, from traditional biostatistics to cutting-edge data science. As I expected, the methods for efficient bootstrapping are more complicated than reservoir sampling.
Let’s start with a simple approach. The following code bootstraps the lines of a file. You need to load all lines into memory to resample them, so bootstrapping becomes memory-demanding when you process a data file with millions of lines.
import sys
import numpy

lines = [l for l in open(sys.argv[1], "r")]
for i in numpy.random.randint(0, len(lines), size=len(lines)):
    sys.stdout.write(lines[i])
The memory usage will be slightly reduced if you use a multinomial distribution instead of simply sampling lines. This is justified because the number of occurrences of each entry in the bootstrap procedure follows a multinomial distribution. This idea appears to have been proposed as early as the 1980s.
import sys
import numpy

size = sum(1 for line in open(sys.argv[1], "r"))
cnt = numpy.random.multinomial(size, [1.0/size]*size)
for line, c in zip(open(sys.argv[1], "r"), cnt):
    for i in range(c):
        sys.stdout.write(line)
A clear problem with the code above is that it requires reading the file twice: it counts the number of lines first, then bootstraps them following a multinomial distribution. Because it needs to read the file twice, it does not work on stream inputs.
Is there a way to bootstrap samples while reading a file only once? According to the literature, an algorithm called "Poisson bootstrap" does this job. Instead of sampling with a multinomial distribution of size N (N is the total number of samples), you can approximate the number of occurrences of an item by a Poisson distribution with lambda=1. This means that the bootstrap procedure is approximated by sampling each item n times while reading lines, drawing n from a Poisson distribution.
import sys
import numpy

for line in open(sys.argv[1], "r"):
    cnt = numpy.random.poisson(lam=1)
    for i in range(cnt):
        sys.stdout.write(line)
This simple but powerful code (it’s even simpler than the reservoir sampling code) performs the approximated bootstrap while reading the file only once.
I could not fully understand the mathematical justification of this procedure, but if you are interested, there are several papers discussing the properties of the Poisson bootstrap, for example Hanley & MacGibbon (2006) or Chamandy et al. (2012). (I think the first paper is more approachable than the second one.)
A disadvantage of this code is that you cannot sample a fixed number of samples. Bootstrap replicates have different sample sizes, which probably introduces extra variance into the bootstrap estimates. However, according to the papers above, the variation in sample size will be negligible when the sample size is huge.
I generated bootstrap replicates of 100 and 10000 samples with the three algorithms above, and checked how many times each sample was sampled. The three algorithms appear to be identical when the sample size is large (N=10000). (The dashed line is the expected count under a Poisson distribution.)
The variation is not quite negligible, however, when the sample size is 100.
This Poisson bootstrap procedure appears to be particularly useful when you need to resample very large data. It gets closer to the true bootstrap as you have a larger number of samples.
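For illustration, the difference between the exact multinomial counts and their Poisson approximation can be checked directly from the two distributions. This is my own sketch; the seed and sizes are arbitrary, not taken from the original experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Multinomial bootstrap: per-item counts always sum to exactly n.
multi = rng.multinomial(n, [1.0 / n] * n)

# Poisson approximation: per-item counts are independent Poisson(1),
# so the total is only approximately n.
pois = rng.poisson(lam=1.0, size=n)

print(multi.sum())  # always exactly 10000
print(pois.sum())   # close to 10000, but not exact in general
```

The per-item mean count is 1 in both cases; only the constraint that the counts sum to exactly n is relaxed by the Poisson version.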
https://tmfujis.wordpress.com/2016/06/24/bootstrap-a-very-large-data-file/
A fair while back, I had a call from someone who wanted to know if I could fit in some consultancy time to help solve some development problems. It turned out to be the least profitable work that I have ever done. I had an immediate hunch and after just five minutes on the blower to their programmer it was problem solved. Honestly, I am really, really shit at capitalism.
They were developing a client-server game for the web and had a terrible capacity issue with the server: one connection was good, two was OK but unreliable, but with three or more it was unusable. Given that they were aiming for many thousands, this was clearly not good and they were desperately worried: they had interfaced correctly with the server software that they had licensed in and couldn’t figure out what was going on.
Mr Snake here is wearing some nifty threads. His idea of threads, though, is somewhat different to mine
What’s a thread? Is it knitting related? Threads are like little mini-mes that you spawn so that you can do more than one thing at once. They get on with jobs whilst you do something else, allowing you to be super duper efficient. Problems arise if the mini-mes and you are doing jobs that require touching the same thing at the same time. Imagine this: three people engaged in a “sitting down visit” in the same toilet at the same time. That grimacing picture in your mind? That is multi-threaded programming; screw it up and the mess is… unpleasant.
Their callback was receiving and processing each packet as it arrived. Seems cool, right? Packet received, callback fired, do what needs to be done, return, pour brandy, sit in front of fire basking in a job superbly done. Well, not quite. Some of the jobs that were performed took a very long time to complete. Take those that involved database access: some took seconds to complete. Now, in a world where tens of thousands of events could occur every second, spending over a second doing stuff is simply outrageous. My guess is that their server software created a thread pool of approximately
number_of_CPUs x 4 threads. For the dual-CPU, dual-core rack-mounted test server they had, that is about 16 threads. Here is how their system collapsed: with a few people online, all moving around in the world doing stuff, there were more than a few database accesses going on. The moment 16 of these jobs were in progress at the same time, ALL THE THREADS WERE BUSY. That meant that nobody was manning the phones. Furthermore, the more people who were connected, the worse it got: an exponential growth in crapness.
One of the problems that human beings have is misunderstanding what “a long time” is when it comes to computers. The chaps working on this web application were web programmers: Flash, JavaScript, Java and other such web stuff were their weapons of development, and when the server’s documentation said “don’t spend too much time in the callback” they interpreted this as “a few seconds is fine, right?”. In fact, “too much time” was any amount of time greater than a few hundred MICROseconds. They were spending several orders of magnitude too long processing their stuff.
Their programmer, not used to real-time asynchronous performance code, immediately understood the problem but dramatically underestimated the scale of the solution. His first answer? “Ooooooooh, OK, I will optimise the database stored procedures and that should do it!” Bzzzzzzzt, wrong answer. I explained that by fast, it had to be lightning fast. His callback needed to do this:
- Take copy of the data received
- Store that copy in a list of jobs to do ASAP
- Return RIGHT NOW
It should take microseconds. Now, the thread returns, goes back into the pool and can start answering phones again. Of course, you then end up with a rapidly growing list of jobs that you need to get done, but you can create a thread to deal with those. Most importantly, though, the long tasks need to be parallelised. Instead of doing database operations one at a time, jobs like that need to be fired off in their own threads so that they can TAKE AS LONG AS THEY WANT.
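In Python terms, the copy-enqueue-return pattern above might be sketched like this. It is an illustrative toy of my own, not the original server code: the callback copies the data, queues it, and returns immediately, while a worker thread does the slow processing off the callback path.

```python
import queue
import threading

jobs = queue.Queue()
processed = []

def on_packet(data):
    # The "callback": 1) take a copy  2) store it in the job list  3) return RIGHT NOW
    jobs.put(bytes(data))

def worker():
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut down
            break
        # ...the slow database work would happen here, taking as long as it wants...
        processed.append(job)

t = threading.Thread(target=worker)
t.start()
for i in range(10):
    on_packet(b"packet")
jobs.put(None)
t.join()
print(len(processed))  # → 10
```

In a real server, long-running jobs would additionally be fanned out to their own pool of worker threads rather than handled by a single consumer.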
Parallel processing is a tough cookie for programmers to grasp. Humans are used to doing things sequentially even though we are massively distributed parallel processing systems internally. Here is a little example program that creates two threads each of which adds one to a memory location ten times. The threads are adding to the same memory location. The inexperienced would expect the answer to be 20. Indeed, mostly it is. Occasionally, though, it is 19, or 18 or even less. So many programmers make this mistake that I wonder what computer science graduates are actually taught these days beyond drinking beer, how to chat up the chicks and an encyclopaedic knowledge of Monty Python. So, the code:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static int badVibes = 0;

void* PeacefulThread(void *argument)
{
    for (int i = 0; i < 10; ++i) {
        ++badVibes;
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t thread1;
    pthread_t thread2;

    // No error detection, don't try this at home, kids!
    pthread_create(&thread1, NULL, PeacefulThread, NULL);
    pthread_create(&thread2, NULL, PeacefulThread, NULL);

    // Wait for them to complete:
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    // Show result:
    printf("Yes, this number [%-d] should be 20!\n", badVibes);
    return 0;
}
You can compile and run this code yourself if you have a Mac or a Linux machine. Run it from the command line and see what you get. Add more threads using the miracle of copy-paste and you will see less reliability in getting 20. The issue here is that incrementing is a read-modify-write operation. The two threads could perform the read at the same time (if you have two cores this is even more likely): so they both read a value, say, 5. They both then add one to it and get 6. Then they both store the result: 6. Oops. One should have made it 6 and the next made it 7 but WHICH ONE? Welcome to the nightmare, it gets so, so much worse.
Operating systems such as Linux, Unix and Windows have armies of special features1 that allow the programmer to defend against such issues. One such mechanism is that of critical sections, code that can only be executed by one thread at once. They act as a nice locked door around things that would be bad to touch by two threads simultaneously: you can enter the toilet, lock the door, drop the kids off at the pool, unlock the door and leave. The chap outside can then pop in and do his business. The trick with these locks is for them to be in use for the absolute minimum amount of time. For example, if everyone taking a dump reads the newspaper whilst sitting on the throne, then the efficiency of the toilet drops dramatically. Efficient multi-threaded programming architectures work just like this: not locking is best, if you must lock, do it briefly and read the paper later. Take the above example program. It is dead easy to fix the issue with critical sections, here is the new code for the thread:
static pthread_mutex_t cs_mutex = PTHREAD_MUTEX_INITIALIZER;

void* PeacefulThread(void *argument)
{
    // Take the critical section:
    pthread_mutex_lock(&cs_mutex);

    for (int i = 0; i < 10; ++i) {
        ++badVibes;
    }

    // We are done, release the critical section:
    pthread_mutex_unlock(&cs_mutex);
    return NULL;
}
Neat, eh? The loop is now protected as a critical section! Of course, I have also completely disabled any benefit of having two threads working on the problem as one blocks the other out until it is finished. Beginners playing with threads tend to end up in this position or at the opposite end: protection EVERYWHERE, and believe me, if you are desperate to visit the facilities, working your way through ten locked doors is not going to help. Furthermore, such over-zealous application of locks can lead to my favourite example: the deadlock. This is where you are waiting to enter a piece of code protected by a critical section but cannot, because the thread that has that section locked is itself waiting for a lock that you hold. Between the possibility of nesting yourself into disaster, overdoing it and doing it in the wrong place, I will be chuffed to bits if your braiiiiins are hurting at this point.
Just one wafer thin additional problem
As well as the maze of mutexes, it is also common to see examples of threads where threads are not needed or, worse still, no threads where they are needed… and it was this that led to the second call they gave me a week or two later. After a while, the latency between client and server was building up to incredible levels – a few hours into a run, latency was measured in minutes. Furthermore, the longest they had been able to run the server with people connected without it failing was a day. Their detective work had led them to the problem and they wanted advice on the solution. The server failed because it ran out of memory, and it ran out of memory because the queued list of jobs to do from the thread pool was longer each time that it was processed. The only way of sorting the issue was for everyone to disconnect and then to wait for a few minutes whilst everything cleared.
They had installed a double-buffered type solution to their long list of jobs. They had two lists: one that was being filled by the callbacks whilst the other was being processed. This dealt with only half of the “long lock” issue mentioned above – only a tiny lock was required to swap over the lists which was superbly efficient. Their new issue was that they then dealt with the items in the list synchronously. One at a time. So, 100 database operations that would take a second each would therefore take a minute to process. In the meanwhile, the other “live” list had got real, real big. This would repeat with processing of a list that got longer each time because clearing a list took longer than filling it.
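A minimal sketch of such a double-buffered swap (my own illustration, not their code) shows why only a tiny lock is needed: callbacks append under the lock, the drain step swaps the two lists under the same lock, and the batch is then processed with no lock held.

```python
import threading

lock = threading.Lock()
live = []  # filled by callbacks

def submit(job):
    with lock:  # held just long enough to append
        live.append(job)

def drain():
    global live
    with lock:  # held just long enough to swap the two lists
        batch, live = live, []
    return batch  # process `batch` outside the lock

for i in range(5):
    submit(i)
print(drain())  # → [0, 1, 2, 3, 4]
print(drain())  # → []
```

Their mistake was not the swap itself but processing the drained batch one job at a time, synchronously.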
This betrayed a certain naivety for asynchronous multi-threaded software architecture that is shockingly common in software development. Getting into a true asynchronous parallel mindset is terribly difficult: you have to really think about how to break jobs down and how to effectively parallelise them. Some do need to be executed in sequence but others do not. Some can be completed in micro or milliseconds, but some cannot. In the end, they created a clever sequencer for per-client database accesses and ended up with their own thread-pools for processing long duration jobs and then firing events when those jobs completed. The processing of the list became lightning quick and then a whole army of threads would go to work dealing with the database accesses, game logic and other such bits and pieces. When I last heard, their servers were running pretty much indefinitely and their biggest problem now is bugs in the licensed server software. Regrettably, it took as long to provide my advice as it has taken you to read this; total earnings, well, they had to have that on the house. However, I did get a nice note from the programmer thanking me for turning his brain into spaghetti. He said that all this stuff is like peering into a TARDIS2: it looks so small from the outside but one peers in the door and realises the full size of the nightmare!
It’s almost rocket science
Thinking asynchronously is hard. Designing software that works in a hugely parallel fashion is harder still. Putting locks in the wrong place and for the wrong durations can be catastrophic. The worst I had was a bug that took over six months to find in live servers. It turned out to be a lock command that I had copy-pasted from some other piece of code many months earlier and I had it ONE LINE down from where it should be. The error, when it rarely occurred, caused a minor fault that snowballed. The eventual crash sprung up somewhere else in the code perhaps minutes later. Debugging that, regardless of the data that you have is as close to impossible as I believe you can get. In the end, I found the issue using old-fashioned debugging: lots of logging (
printf for the win…) and exhaustive searching through “suspicious” code, i.e., I thoroughly checked my locks.
Good architecture and structure is the best defence against crap software in all cases: put down the right foundations and you can modify the building later without having to rip the whole thing down. This applies even more so to multi-threaded code. In these days of increased parallelism in CPUs, it is even more important to consider how to take advantage of this for performance and usability reasons. It is tough to retrofit and it is tough to scale top-down slicing of jobs into threads to infinite CPUs, but nearly every application has some job that would traditionally have slapped a “please wait” on the screen that could be removed with good use of a background thread or two even if it is just to display a groovy interactive display whilst something is happening.
“A learning experience is one of those things that say, “You know that thing you just did? Don’t do that.”
– Douglas Adams.
Experience leads to good solutions: you make your mistakes, you learn and you move forwards (unless you already know everything, of course). Programs in all languages by all programmers show experience levels primarily in their architecture: I saw one so-called expert demonstrate that his experience was about 8 years shorter than he had suggested by creating an application component that looked like a check-list of C++ features. I had never seen so many redundant uses of templates, classes and inheritance before (or since) in my entire life. I have used that code as an example of “how not to program C++” ever since.
Incidentally, our example can be made bug free without any use of critical sections:
void* PeacefulThread(void* argument)
{
    // Do the calculation (yes, obviously it could just return 10):
    int* pResult = (int*)argument;
    for (int i = 0; i < 10; ++i) {
        *pResult = *pResult + 1;
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    int thread1Result = 0, thread2Result = 0;
    pthread_t thread1;
    pthread_t thread2;

    // No error detection, don't try this at home, kids!
    pthread_create(&thread1, NULL, PeacefulThread, (void*)&thread1Result);
    pthread_create(&thread2, NULL, PeacefulThread, (void*)&thread2Result);

    // Wait for them to complete:
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    // Show result:
    const int finalResult = thread1Result + thread2Result;
    printf("Yes, this number [%-d] should be 20!\n", finalResult);
    return 0;
}
The cost of adding the results together synchronously still allows parallel benefits and we all win. The code is simpler, less likely to go tits up, and we have no need to play pthread bingo with our feature use.
I will leave you with a final thought. Humans tend to chop problems into known slices when designing for multiple CPUs. This scales fine for one, two, four or even eight or sixteen CPUs but diminishing returns set in fast. The ultimate solution is a more bottom-up approach: create very large populations of small virtual computers that can combine their work to solve larger problems. This approach, if taken correctly, leads to software that can scale to an infinite number of CPU cores. It borrows its philosophy from biology: the most incredible distributed parallel processing system we know of. I humbly suggest that this is the future of software development.
One day your computer will contain life.
One day, your computer will be smarter than you.
… in the meanwhile, though, the smartest component of a computer is the person sitting in front of it. Relish this time, it’s temporary.
–
1 This article, as over-bloated, vast and tedious as it is, can merely touch at the surface of this stuff. One programmer I know reckons a life-time is probably not sufficient to grasp all the intricacies of multi-threaded architecture.
2 A Dr Who reference. It is supposed to stand for Time And Space Blah Blah Relative Whatever, but I like to think of it as “Twice As Roomy, Despite Its Size”
2 Responses to Thready McParallel: the patron devil of programming
http://cobrascobras.com/2011/05/03/thready-mcparallel-the-patron-devil-of-programming/
Hello,
I'm trying to calculate factorials with my program, but it is only working for the first few numbers. It compiles fine, so I can't find the problem. Help please! Keep it simple, I am very new to C++ and programming in general.
Here is my code:
#include <iostream>
#include <cmath>
using namespace std;

int factorial(int);

int factorial(int n)
{
    int i = 0, fact = 1;
    if (n <= 1) {
        return 1;
    } else {
        for (i = 1; i <= n; i++) {
            fact = fact * i;
        }
        return fact;
    }
}

int main()
{
    int i, z, q;
    int n = 10;
    for (i = 0; i <= n; i++) {
        q = (2 * i) + 1;
        z = factorial(q);
        cout << z << endl;
    }
    system("PAUSE");
    return 0;
}
It works for i=0 to i=6, but after that the values are incorrect.
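For what it’s worth, the likely culprit is 32-bit integer overflow rather than a logic error: with q = 2i+1, i = 6 gives q = 13, and 13! ≈ 6.2 × 10^9 already exceeds INT_MAX (2^31 − 1 ≈ 2.1 × 10^9), which matches the results going wrong around that point. A quick sanity check, written in Python since its integers don't overflow:

```python
import math

INT32_MAX = 2**31 - 1  # typical range of a C++ `int`
for i in range(8):
    q = 2 * i + 1
    f = math.factorial(q)
    print(q, f, "fits" if f <= INT32_MAX else "overflows a 32-bit int")
```

Switching `fact` and `z` to a wider type such as `unsigned long long` (or `double` for approximate values) pushes the limit back, but factorials outgrow any fixed-width integer quickly.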
https://www.daniweb.com/programming/software-development/threads/435273/factorial-problems
What do pigeons and primes have in common?
Hi there!
This week we have two new missions for you, an interview with last season’s Top Coder Veky and… other niceties. Let’s dive right in!
Feeding pigeons
We got a couple of hungry birds and we need your help! When you start to feed one pigeon, a minute later two more fly by. And a minute later, another 3. Then 4... One portion of food lasts a pigeon for a minute. How many pigeons do we have to feed at least once if we have N portions of wheat?
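For the curious, a straightforward simulation sketch (my own, not an official solution) works here, assuming already-fed pigeons eat before newcomers:

```python
def feed_pigeons(n):
    # n portions of wheat; minute k brings k new pigeons.
    fed = 0       # pigeons fed at least once
    present = 0   # pigeons currently around
    minute = 0
    while n > 0:
        minute += 1
        present += minute
        eaten = min(n, present)
        fed = max(fed, eaten)  # already-fed pigeons eat first
        n -= eaten
    return fed

print(feed_pigeons(5), feed_pigeons(10))  # → 3 6
```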
Restricted Prime
We want to teach our censored calculator to check if numbers are primes or not. This crazy calculator learned new words (but forgot some) and does not accept new words and certain symbols. Plus: it hates digits! Given a number (0 < n < 10000), you should check if it is a prime or not. Your solution cannot contain any of the forbidden words, symbols or digits.
To give you an idea of how to go about this task: this solution by weerd totally made our week.
from math import *
Veky was our Top Coder in autumn - scoring level after badge after ... more badges? We got the chance to interview one of our favorite faces of the CheckiO forum. Veky answered like only he could answer...
Hacky New
https://py.checkio.org/blog/what-do-pigeons-and-primes-have-common/
Is there an easy way to check that the gradient flow is proper in the network? Or is it broken somewhere in the network?
Will this gradcheck be useful? How do I use it? Example?
Gradcheck checks a single function (or a composition) for correctness, eg when you are implementing new functions and derivatives.
For your application, which sounds more like “I have a network, where does funny business occur”, Adam Paszke’s script to find bad gradients in the computational graph might be a better starting point. Check out the thread below.
Best regards
Thomas
But when I use it in my network:
import bad_grad_viz as bad_grad
...
loss_G, loss_D = model(data_1, data_2)
get_dot = bad_grad.register_hooks(loss_1)
model.optimizer_network.zero_grad()
loss_G.backward()
loss_D.backward()
dot = get_dot()
dot.save('/path/to/dir/tmp.dot')
I get this error from bad_grad_viz.py:
grad_output = grad_output.data
AttributeError: 'NoneType' object has no attribute 'data'
More info:
model is a GAN network.
print(type(loss_G))
>>> <class 'torch.autograd.variable.Variable'>
print(type(loss_D))
>>> <class 'torch.autograd.variable.Variable'>
print(loss_G.requires_grad)
>>> True
print(loss_D.requires_grad)
>>> True
Any suggestion @apaszke ?
What if you test that condition and return False?
Try 1:
def is_bad_grad(grad_output):
    if grad_output.requires_grad == True:
        print('grad_ouput have grad')
    grad_output = grad_output.data
Error:
if grad_output.requires_grad == True:
AttributeError: 'NoneType' object has no attribute 'requires_grad'
Try 2:
def is_bad_grad(grad_output):
    if grad_output.requires_grad == False:
        print('grad_ouput doesnt have grad')
    grad_output = grad_output.data

Traceback (most recent call last):
grad_ouput doesnt have grad
grad_ouput doesnt have grad
if grad_output.requires_grad == False:
AttributeError: 'NoneType' object has no attribute 'requires_grad'
Please give more details, so that i can debug this issue.
It seems that something makes your output not require grads as much as one would expect. This could happen due to networks being in
.eval() instead of
.train(), or setting requires_grad = False manually, or volatile, or something entirely different…
If you had a minimal demo of how it happens, it would be easier to find out why it is not working.
Best regards
Thomas
I use a simple trick. I record the average gradients per layer in every training iteration and then plot them at the end. If the average gradients are zero in the initial layers of the network, then probably your network is too deep for the gradient to flow.
So this is how I do it -
def plot_grad_flow(named_parameters):
    ave_grads = []
    layers = []
    for n, p in named_parameters:
        if (p.requires_grad) and ("bias" not in n):
            layers.append(n)
            ave_grads.append(p.grad.abs().mean())
    plt.plot(ave_grads, alpha=0.3, color="b")
    plt.hlines(0, 0, len(ave_grads)+1, linewidth=1, color="k")
    plt.xticks(range(0, len(ave_grads), 1), layers, rotation="vertical")
    plt.xlim(xmin=0, xmax=len(ave_grads))
    plt.xlabel("Layers")
    plt.ylabel("average gradient")
    plt.title("Gradient flow")
    plt.grid(True)
loss = self.criterion(outputs, labels)
loss.backward()
plot_grad_flow(model.named_parameters())
This is my training iteration. I need to check the gradient. I incorporated what you suggested.
I am getting this error.
grad_output = grad_output.data
AttributeError: 'NoneType' object has no attribute 'data'
The code snippet:
for i, batch in enumerate(train_loader):
    if (i % 100) == 0:
        print(epoch, i, len(train_loader))

    # Clear the GRADIENT
    optimizer.zero_grad()

    # INPUT
    imgs, ques, ans = batch
    # imgs, ques, ans = imgs.cuda(), ques.cuda(), ans.cuda()
    imgs, ques, ans = imgs.to(device), ques.to(device), ans.to(device)

    # FORWARD PASS / OUTPUT
    ansout, queryout = net(imgs, ques, ans)

    # LOSS CALCULATED
    # Here answer is my target, thus ".data" is used
    queryloss = loss(queryout, ansout.data)
    # Here query is my target
    ansloss = loss(ansout, queryout.data)
    get_dot = check_grad.register_hooks(ansloss)

    # MSE Added
    # print(ansloss)
    sum_ans_train += ansloss.item()
    queryloss.backward(retain_graph=True)

    # BACKPROPAGATION
    ansloss.backward()
    dot = get_dot()
    dot.save('/home/Abhishek/plots/tmp.dot')
A much better implementation of the function
def plot_grad_flow(named_parameters):
    '''Plots the gradients flowing through different layers in the net during training.
    Can be used for checking for possible gradient vanishing / exploding problems.

    Usage: Plug this function in Trainer class after loss.backwards() as
    "plot_grad_flow(self.model.named_parameters())" to visualize the gradient flow'''
    ave_grads = []
    max_grads = []
    layers = []
    for n, p in named_parameters:
        if (p.requires_grad) and ("bias" not in n):
            layers.append(n)
            ave_grads.append(p.grad.abs().mean())
            max_grads.append(p.grad.abs().max())
    plt.bar(np.arange(len(max_grads)), max_grads, alpha=0.1, lw=1, color="c")
    plt.bar(np.arange(len(max_grads)), ave_grads, alpha=0.1, lw=1, color="b")
    plt.hlines(0, 0, len(ave_grads)+1, lw=2, color="k")
    plt.xticks(range(0, len(ave_grads), 1), layers, rotation="vertical")
    plt.xlim(left=0, right=len(ave_grads))
    plt.ylim(bottom=-0.001, top=0.02)  # zoom in on the lower gradient regions
    plt.xlabel("Layers")
    plt.ylabel("average gradient")
    plt.title("Gradient flow")
    plt.grid(True)
    plt.legend([Line2D([0], [0], color="c", lw=4),
                Line2D([0], [0], color="b", lw=4),
                Line2D([0], [0], color="k", lw=4)],
               ['max-gradient', 'mean-gradient', 'zero-gradient'])
simply wanted to comment that this^ is a wonderfully written function to inspect gradients -> highly recommend.
I have a peculiar problem. Thanks to the function provided above I was able to see the gradient flow, but to my dismay: the graphs should show the gradient decreasing from the right side to the left side, which is as God intended, yet in my case the graphs show the gradient decreasing from left to right, which is clearly wrong. I will be highly grateful if somebody can tell me what's going on with the network.
It has a convolutional block followed by an encoder and decoder. The network is fully convolutional.
I will be highly grateful for any help provided.
I have a class of VGG16 and I wonder if named_parameters in your function refers to model.parameters()? model is an instance of class VGG16 by the way. If your response is ‘yes’ then I receive an error ‘too many values to unpack (expected 2)’ for command ‘for n, p in model.parameters():’. Do you see the reason?
For any
nn.Module instance,
m.named_parameters() returns an iterator over pairs of
name, parameter, while
m.parameters() just returns one over the parameters.
You should be able to use
m.named_parameters().
Best regards
Thomas
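Thomas's distinction can be checked with a toy module. This is my own sketch (the module choice is arbitrary), assuming a recent PyTorch:

```python
import torch.nn as nn

# named_parameters() yields (name, parameter) pairs;
# parameters() yields only the parameters themselves.
m = nn.Linear(3, 2)

names = [name for name, p in m.named_parameters()]
print(names)                      # → ['weight', 'bias']
print(len(list(m.parameters())))  # → 2
```

So `for n, p in model.parameters():` fails with "too many values to unpack" because each item is a single tensor, not a pair.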
This helped a lot. Thank you very much.
Best Regards
There is a place in heaven for people like you! Any chance pytorch is integrating something alike soon?
@RoshanRane
I used your code for plotting the gradient flow (thank you!), and obtained this output:
This is for a single layer GRU. I was surprised to see the gradient of the hidden state stay so small. The only thing I can think of as to why this would be the case is because the hidden state is re-initialized with each training example (and thus stays small), while the other gradients accumulate as a result of being connected to learned parameters. Does that seem correct? Is this what a plot of the gradient flow in a single layer GRU should typically look like?
Alternatively, for a 4 layer LSTM, I get the following output:
Does that seem correct? Is this what a plot of the gradient flow in a multi-layer LSTM should typically look like? The larger gradient values are from the initial epochs; I am not sure why they are so much larger to start with. Thoughts?
Not sure what Line2D is in your code???
It's a function from the matplotlib library; just add this line to the top of your script:
from matplotlib.lines import Line2D
https://discuss.pytorch.org/t/check-gradient-flow-in-network/15063
Hi,
i had some loops on the annoying "bug in get_wchan" and after i hopefully
understood whats going on
arch/mips/kernel/process.c
196 unsigned long get_wchan(struct task_struct *p)
197 {
198 unsigned long schedule_frame;
199 unsigned long pc;
200
201 if (!p || p == current || p->state == TASK_RUNNING)
202 return 0;
203
204 pc = thread_saved_pc(&p->thread);
205 if (pc == (unsigned long) interruptible_sleep_on
206 || pc == (unsigned long) sleep_on) {
207 schedule_frame = ((unsigned long *)p->thread.reg30)[9];
208 return ((unsigned long *)schedule_frame)[15];
209 }
210 if (pc == (unsigned long) interruptible_sleep_on_timeout
211 || pc == (unsigned long) sleep_on_timeout) {
212 schedule_frame = ((unsigned long *)p->thread.reg30)[9];
213 return ((unsigned long *)schedule_frame)[16];
214 }
215 if (pc >= first_sched && pc < last_sched) {
216 printk(KERN_DEBUG "Bug in %s\n", __FUNCTION__);
217 }
218
219 return pc;
220 }
What does "thread_saved_pc(&p->thread);" return ? Does it really
return the exact address of the schedule functions as assumed in
205-214 ?
Most other architectures search the stack page for the calling function
but it seems their asmlinkage is more strict in the means of
location of the return address on the stack.
The current implementation is buggy not only because of the printk
but also in the wchan value it reports. Most of the output
is "schedule" itself, which means none of the tests above
were true; but when I extended the printk to also print the pc,
it never exactly matched &schedule, so a pc == schedule check won't help.
Flo
--
Florian Lohoff flo@rfc822.org +49-subject-2-change
"Technology is a constant battle between manufacturers producing bigger and
more idiot-proof systems and nature producing bigger and better idiots."
https://www.linux-mips.org/archives/linux-mips/2000-04/msg00076.html
Conservapedia talk:What is going on at CP?/Archive169
Contents
- 1 Andy sets a world record in crazy
- 2 Holy Shit
- 3 Encyclopedia Dramatica
- 4 KJV vs. Conservative Bible
- 5 Capitalism WIGO Fail!
- 6 Ray Comfort and the YouTubers
- 7 Karajou really is that batshit insane
- 8 conservo-detector
- 9 Nutshell series
- 10 "you" and "this country"
- 11 No slight against Obama
- 12 Latest toon
- 13 With friends like that ...
- 14 Jpatt on our "affirmative action President"
- 15 Who is Andy's ISP?
- 16 Grammar fail
- 17 Writing and SAT Verbal Skills lectures
- 18 Google/China
- 19 Password thief?
- 20 Not wigo-worthy...
- 21 Adding book citations to CP
- 22 "Bilingual education" according to Ed: "Education that makes you bilingual"
- 23 Another day, another Ed stub
- 24 TK's cellphone driving WIGO
- 25 Phonics
- 26 Lovely new conservative words
- 27 Crappy fit of conservative words
- 28 Guess the admin
- 29 Is Andy reformed on complex numbers?
- 30 CBP revisited
- 31 That's the question we're all asking ourselves.
- 32 TKour de force
- 33 Y'all actually support that idiotic parking proposal?
- 34 Red Ken
- 35 Rob Has His Mind In The Gutter. Again.
- 36 Obama calls healthcare collectivism "a Bolshevik plot" according to CP
- 37 Andy's smoking-lung cancer bit & Godwin
- 38 I thought of Andy
- 39 Did anyone listen to Kens Creationathisms summit? Did they mention conservapedia?
- 40 The homework assignment wigo
- 41 Haitian Children WIGO
- 42 apple eating wigo
- 43 Lancet Retraction
- 44 Blocks in 2009
- 45 The blog that respects freedom of religion...
- 46 More inexplicable from Karibou
- 47 Those liberal atheists and their assumptions...
- 48 It's been a short week...
- 49 CP is now pro-gambling
- 50 Main Page
- 51 Question about Conservapedia's problems with Einstein's Theory of Relativity
Andy sets a world record in crazy
- User: Uh here is proof Obama didn't use teleprompters to talk to children.
- Andy, making history:
In these two sentences are contained EVERYTHING YOU NEED TO KNOW ABOUT ANDY.
- "Liberals": Everything is politically motivated (no I mean everything)
- "are trying to dampen the ridicule": Liberals are highly organized schemers who conspire to score political points over trivialities
- "but pictures don't lie.": my first impression of any subject is always the correct one and I will defend it to the death
- "Obama had the teleprompters there": If any part of my argument is true, the conclusions must be correct
- "and using them to speak to reporters would be just as absurd": if multiple arguments can prove what I want, I'll accept whichever's convenient, or all of them even if they're contradictory
- "even if the liberal explanation were true.": anything that contradicts me is a desperate attempt by liberals to avoid the truth
Historians will remember this as the Schlaflyburg Address. WodewickWelease Wodewick! 03:14, 27 January 2010 (UTC)
- My god that entire section is priceless.
-?
- Three things, Karajerk. First, it may be splitting hairs, but he's only called "the messiah" and "the one" by mocking conservatives. Second, to be the most intellectually gifted president of the 21st century at this point wouldn't be that hard. Third, the earth did go under a cold spell while Obama was on his way to Copenhagen. Just sayin'.... Junggai (talk) 07:25, 27 January 2010 (UTC)
A few thoughts from my fuzzy little head...
- This has the potential to become Obama's equivalent of "invented the Internet" or "couldn't spell potato", especially since its become fodder for comedians. There is a rational explanation, but the myth is already out there.
- Of course he had a teleprompter to speak to reporters -- he probably had a prepared statement, which is usually the case for such events.
- The right wing yank-fest over Obama's use of teleprompters is a classic example of the complete paucity of their arguments. Presidents have used teleprompters for years. (I understand Ronaldus Maximus preferred note cards, but really, what's the difference?) But for some reason, it's an outrage that Obama uses one.
- I can envision their response to the Gettysburg Address: "written on the back of envelope? How deceitful to let his audience think he was speaking from the top of his head! Damn hippie with that beard." (Of course, a take on conservatives critiquing the Gettysburg Address has been done...
MDB (talk) 12:03, 27 January 2010 (UTC)
- Well if it's an address, it's not so strange that it's written on an envelope... --GTac (talk) 14:47, 27 January 2010 (UTC)
- Boo! I have just eaten
& stiltontalk 15:21, 27 January 2010 (UTC)
Holy Shit
I was just reading about O'Keefe, the guy arrested for trying to wiretap Senator Landrieu, and look what he says after being released on bail (about halfway down the page): "The truth shall set me free." Has this guy been reading Conservapedia? He's just about stupid enough to. DickTurpis (talk) 14:19, 27 January 2010 (UTC)
- That's one of the most famous Biblical quotations, and has a long history in America. Odds are slight that it has anything to do with CP. I'll agree he is deceitful and dumb enough to fit the bill, though.--Tom Moorefiat justitia 14:39, 27 January 2010 (UTC)
- My secular upbringing thwarts me again. I never knew that was from the Bible. And while I knew it was not an Andy original, in my mind it has become so associated with CP that it's hard to separate the two. Well, if O'Keefe describes himself as trustworthy then we'll know for sure. (Though I never really thought O'Keefe was likely a CP reader, just that it was an odd coincidence. Now that I know it's a common phrase I guess it isn't much of a coincidence after all.) DickTurpis (talk) 15:12, 27 January 2010 (UTC)
- Why won't CP cover the news? All those whinefests about how the "liberal media" refused to cover the ACORN issues, brought about by the same man, where's the outrage that he was caught trying to wiretap a sitting senator? We're waiting, TeaKakkke. --Irrational Atheist (talk) 15:33, 27 January 2010 (UTC)
- CP wont cover it because CP is inherently dishonest in presenting the world --Opcn (talk) 21:00, 27 January 2010 (UTC)
- The problem is that Andy has already claimed him as a conservative over the ACORN flap. Meaning, that he does not engage in deceit of any kind. Well, this was pretty damn deceitful, therefore, publishing this would make Andy seem wrong and the universe would collapse. SirChuckBWill Sysop for food 21:02, 27 January 2010 (UTC)
- See my talk page for why SirChuck is dead wrong. And I did comment on the story on Main Page Talk. --TK/MyTalkRW User #45 23:39, 27 January 2010 (UTC)
- TK is wrong (surprised?) "it seems the gentlemen in question were actually investigating the odd phenomenon of reports from citizens that whenever they called the Senator's office to complain about her support of Obamacare, they always received a 'busy signal' day and night." No, TK, the gentlemen were arrested because they stated they were representatives from the phone company to do work on the phone lines. "Authorities said two of the defendants posed as telephone repairmen in hard hats, fluorescent vests and tool belts and asked to see the phones at Landrieu's office; one of them had a tiny camera in his helmet. A third man is alleged to have waited outside in a car with a listening device to pick up transmissions. The fourth, James O'Keefe, used his cell phone to try to capture video of the scene inside, authorities said." Basel was seen manipulating the handset of the Senator's office phone. In two different areas of a federal building, the three men inside the building identified themselves as employees of the telephone company. That's a felony.
- Your "comment" was passing off bullshit lies and arguing the liberal media was wrong in its reporting. Look at the federal agent's affidavit. No matter how you try to pass it off, the three lied about who they were and a fourth had equipment to receive audio transmissions. Only the delusional would assume they were innocent and trying to do something positive for a senator, given that O'Keefe is already going to be in court for wiretapping charges in Pennsylvania. --Irrational Atheist (talk) 23:57, 27 January 2010 (UTC)
Encyclopedia Dramatica
Conservapedia hates Encyclopedia Dramatica. TK blocks Girlvinyl and Michaeldsuarez (me). Girlvinyl's block summary was simply Bye while mine was POV editing/Removal of valid content / adding material without citation: ED editor inserting nonsense, pop culture.
I asked Aschlafly to unprotect that article myself, and Aschlafly honored my request.
DMorris nominated the Encyclopedia Dramatica article for deletion, so I offered to rewrite the article. DMorris then said that a complete rewrite was needed, so I did a complete rewrite.
TK then appeared from nowhere, reverted my revisions, and removed the Deletion template without leaving any comments on the deletion nomination page. TK also protected the article using the excuse High traffic page. The protection didn't make sense since there wasn't any "high traffic", since I didn't call for /b/ackup, and since he also protected it for only 5 hours.
It should also be noted that TK has made a series of questionable revisions to the Encyclopedia Dramatica article. He removed the "need citations and sources" templates; he nominated the article for deletion, but then he apparently changed his mind. TK also needs to start using the "preview" button; "Template:Locked" doesn't exist.
I'm not sure why I'm banned. DMorris asked me to do a complete rewrite, and I did. After TK's revert, DMorris then tells us that the Conservapedia-ED comparison should be removed. I find this ironic since my rewrite removed that comparison. If DMorris wants that comparison removed, he should revert the article back to my version. --Michaeldsuarez (talk) 00:17, 28 January 2010 (UTC)
- "High traffic" might also be that a considerable number of hits to the page are being logged. When we linked the Conservapedia Commandments on the WP page for CP, to say that they aren't used as the guidelines for the site (even though the WP article said they did), TK locked that page for "high traffic" as well. If you want to edit a more serious wiki, feel free to do so here. CP is TK/Andy/Ed/Ken and whichever morons follow each one, if any. Oh, and the parodists, so obvious to us outsiders, that they create more fun to watch. --Irrational Atheist (talk) 00:22, 28 January 2010 (UTC)
- Hate to break it to you, Michael, but DMorris isn't an Admin at CP. Are you not an editor at ED? If you had that kind of interaction, normal, rational people would have emailed explaining all that instead of posting here. All my contact information is on my CP user page, for everyone to see. --TK/MyTalkRW User #45 00:52, 28 January 2010 (UTC)
- TK, stop ignoring my previous posts to you. Or shall I take them to your talk page? If you're so bored with CP because Andy's getting all the attention and no one cares about you that you have to spend most of your time here now, then at least entertain me more. --Irrational Atheist (talk) 01:02, 28 January 2010 (UTC)
- TK, where did I say that DMorris was a sysop? Yes, I'm an ED user. If I had wished to hide that, I would've chosen a different user name. Also, you barred me from sending Emails via Special:Email, so I assumed that you wouldn't welcome Emails from me. --Michaeldsuarez (talk) 01:04, 28 January 2010 (UTC)
KJV vs. Conservative Bible
Passage: Luke 11:53-54
KJV: "And as he said these things unto them, the scribes and the Pharisees began to urge him vehemently, and to provoke him to speak of many things:
Laying wait for him, and seeking to catch something out of his mouth, that they might accuse him."
Conservative Bible: "As Jesus told them off, the scribes and Pharisees furiously interrogated Him about everything,
plotting and seeking to quote Him for a politically incorrect remark to use against Him." — Unsigned, by: Night Jaguar / talk / contribs
- Damn liberal tricksters, always trying to mess with the Holy One! — Unsigned, by: Human / talk / contribs ħuman
05:00, 27 January 2010 (UTC)
- And yet they said The Message "bastardized" the Bible. Irony meters, etc. WodewickWelease Wodewick! 05:16, 27 January 2010 (UTC)
Héhé, two birds with one stone: they justify the 'enhanced interrogation techniques' so popular with the neo-cons and their self-declared 'political incorrectness'. Not a bad job for a single verse... Alain (talk) 14:41, 28 January 2010 (UTC)
Capitalism WIGO Fail!
In a misreading of what is literally the first solid and well-reported news piece I have ever seen on Conservapedia, taking it as disparaging, someone F'ed up a WIGO. It mentions liberals and some people doing a good thing for themselves. It doesn't say anything bad about them; it's a news story that points out that not everything liberals do is evil. If it had been conservatives, we would have considered CP mentioning them like that to be a compliment. CHILLAX --Opcn (talk) 07:49, 27 January 2010 (UTC)
- The WIGO was more about hate than any factual comment, totally using supposition to decide what it meant. Conservatives admire what the Google founders did, creating a market where there was none, and creating tens of thousands of jobs. The news section at CP isn't encyclopedic, it merely presents thought-provoking, sometimes controversial items, to get people to discuss. --TK/MyTalkRW User #45 08:52, 27 January 2010 (UTC)
-
- If the thoughts you mean to provoke are "What the hell?" and "Are they serious?" you've done admirably. Barikada (talk) 08:58, 27 January 2010 (UTC)
It really depends on how someone wants to discuss, and if they post like they do here, they would be banned on most sites. But the idea that they are made to trap others is not correct. That's just a happy by-product sometimes.
--TK/MyTalkRW User #45 09:20, 27 January 2010 (UTC)
- So things like having this and this on the front page at the same time isn't trollbait-- you seriously believe both are accurate? Barikada (talk) 09:24, 27 January 2010 (UTC)
- No, no, no. The news items aren't about being "encyclopaedic", "factual" or "accurate", it's just about being thought provoking and controversial. You know, like 4chan. --GTac (talk) 09:34, 27 January 2010 (UTC)
- TK is becoming worse than MC. Lets vandal bin him. Boring, lying tart that he is. Acei9 09:41, 27 January 2010 (UTC)
- The thing that annoys me the most is his unfunny overuse of the smileys. - π 09:46, 27 January 2010 (UTC)
- Isn't the vandal bin for people who spam, not those whose actions elsewhere and opinions we find extremely disagreeable? --Opcn (talk) 09:48, 27 January 2010 (UTC)
- Folks, I'm gonna have to agree with Opcn. TK may be argumentative at times, and a flat out troll at times, but we pride ourselves in keeping the 'high ground.' If you don't like what he's saying, then argue back, or even ignore him, but lets not drop the B-word until he acts like a vandal. Lets use the true value of the First Amendment. "Free Speech: It lets us know who the idiots are." - Ravenhull (talk) 11:01, 27 January 2010 (UTC)
- I'm not sure you could describe this place as "high ground" by any means. That said, I subscribe to the viewpoint that TK is not being sufficiently entertaining on these 'ere pages to warrant having her around. MaxAlex Swimming pool 11:59, 27 January 2010 (UTC)
- I understand if TK comes to RW looking for entertainment because he's growing bored with CP, but that's his own damn fault for banning every new user to come along, before they could develop into long term personalities he could play his bullying/manipulation games on. Meanwhile having him here breaks the fourth wall and reduces MY entertainment of his CP hijinx.
- I wouldn't put it past TK to be thinking a step ahead, cultivating socks on RW so that when he's done with CP he can come here, be "forgiven" by those fake personalities and start a new drama game. After all RW is now a larger and more active wiki than CP.
- I say "go back and play in your sandbox TK." A ban of the PERSONALITY "TK" is materially harmless to the person Terry K. as we and he both know, given how many socks he has. All it does is stop that PERSONALITY from trying to play drama games on this wiki.
- With that explanation, I am moving us to DEFCON HCM 5 and leveling an hour long block on TK every time I see him make a talk or mainspace edit. WodewickWelease Wodewick! 12:33, 27 January 2010 (UTC)
- TK, how can the news items provoke discussion when you ban people for 90/10, complaining, liberal arguments, etc.? You don't want a discussion, you want a trollfest. That's the problem with the CP crowd: You have to think like Andy, agree with Andy, or you don't get to stay. That's not a discussion in any sense of the word. Impress us by getting rid of that 90/10 rule and arbitrarily banning people because they say things you don't agree with, and maybe I'll consider that the news list is there for provoking discussion. --Irrational Atheist (talk) 15:32, 27 January 2010 (UTC)
In communicating with TK elsewhere I've consistently noticed that he considers the rules at CP to be very different from the rules here. I tend to agree. While I disagree with how things are run at CP and how the rules they talk about and the rules they act on are as different as night and day I do really think we should stick to our own rules. Yes, he might not be funny here; yes, you can have your sour grapes; no, that isn't a reason to block him. Rational wiki doesn't have the highground on every issue, but it was created to give people a place to speak about the issue and TK the person can say things here he cannot say as TK the sysop on CP, so we should let him speak here. What is the worst that could happen? --Opcn (talk) 20:50, 27 January 2010 (UTC)
- Have read through some of these, my naive young thing. - π 22:17, 27 January 2010 (UTC)
- Odd, I don't see much "funny" above at all. Where is it? --TK/MyTalkRW User #45 23:18, 27 January 2010 (UTC)
And it didn't bother anyone else that the header is misspelled????!!! [sad] PS, fuck you, TK. Destructive non-entity that you are. ħuman
05:42, 28 January 2010 (UTC)
- TK, no one at CP gives a shit about you. --Irrational Atheist (talk) 13:36, 28 January 2010 (UTC)
Ray Comfort and the YouTubers
Will the Conservapedia atheism and evolution articles be mentioned during the interview? I can answer that for you, Ken: no. I remember reading something somewhere from Nephilimfree where he said something along the lines of Conservapedia's not a reliable source. Dagless (talk) 15:14, 27 January 2010 (UTC)
- There's actually a reasonably good chance one of them will. Ken seems to have a quid pro quo going in which he plugs some tard's blogs and videos on youtube and they in turn mention Conservapedia. With 4 fundie idiots in a room it's hardly impossible one will bring up CP with all the nonchalance and inelegance of a radio DJ peddling Metabolife. I'm sure Ken will let us know. DickTurpis (talk) 15:27, 27 January 2010 (UTC)
- And the seven or eight people who aren't listening in to laugh at it will rejoice in the glow of another piece of shit on the landfill which is American fundamentalist Christianity. Dagless (talk) 15:33, 27 January 2010 (UTC)
- "Ray Comfort and the YouTubers." Sounds like a really shit 80s pop band. SJ Debaser 15:35, 27 January 2010 (UTC)
- Ahahahaha! Well-put, SJ. Tetronian you're clueless 18:33, 27 January 2010 (UTC)
- I love how ♥ K e n D o l l ' s ♥ news items always take place in the fucking future. ħuman
05:55, 28 January 2010 (UTC)
- Well, should we safely assume ♥ K e n D o l l ' s ♥ future involves fucking? K61824Ask me for relationship advice 00:59, 29 January 2010 (UTC)
- @Human: What about Ken's regular future? Tetronian you're clueless 01:00, 29 January 2010 (UTC)
Karajou really is that batshit insane
I didn't notice this before (but since Taj has given it a category today, it came to my attention), but Karajou really is that insane. He thinks the moon landing is a hoax? Besides the rocks we carried back, there are mirrors on the surface of the moon, and we can measure how far away the moon is by bouncing a beam of light off them! What's more, the whole "no stars in the background" argument is a sure sign of complete ignorance. Do you see stars during the day here on earth? Besides the sun, of course? You can't see them on the moon during the day, either, because of the sun. But since the moon lacks an atmosphere, there's nothing to scatter the sun's rays, thus there's no color to the sky like there is on earth. I bet the page stays up for the homeschoolers in Andy's eventual earth science class to use. --Irrational Atheist (talk) 16:40, 27 January 2010 (UTC)
- Capt. -- =w= 20:49, 27 January 2010 (UTC)
- Well, it seems he falls short of actually endorsing this view, though it's hard to tell (he certainly does nothing to refute the bogus claims). He calls it a "hoax" in the edit summary, but in context it seems he means "alleged hoax", and is referring merely to the allegations. We should try to find out, as it would be hilarious if Karajou were this insane. DickTurpis (talk) 16:49, 27 January 2010 (UTC)
- (EC) Yes, he is. Many moons[sic] ago (12?), I nominated Moon's talk page as a favourite article. They're just batty when it comes to anything they can't touch. I have just eaten
& stiltontalk 16:50, 27 January 2010 (UTC)
- The Moon-landing-as-hoax must be one of the most debunked conspiracy theories out there, along with JFK and 9/11. If CP really take a stance on it being anything other than Crazy McCrazy from Crazytown, then they are simply confirming their status as confirmed nutjobs. --Worm(t | c) 17:14, 27 January 2010 (UTC)
- JacobB reads RW. --Irrational Atheist (talk) 20:47, 27 January 2010 (UTC)
- Of course JacobB reads RW. He's one of us. DickTurpis (talk) 13:31, 28 January 2010 (UTC)
- Taj's categorization spree has also brought this gem back to light. No mention of Bush, who pushed for the corporate welfare for Wall Street before Obama was even elected? Must be nice to redo history and reality whenever necessary. --Irrational Atheist (talk) 17:00, 27 January 2010 (UTC)
- Well, no surprises there - we have always been at war with Eastasia, after all. Tetronian you're clueless 18:32, 27 January 2010 (UTC)
- Taj is doing a wonderful job of bringing forth the insanity. The only "famous example" of scientific fraud is something that isn't scientific fraud? Come on, YECs love to push the most famous fraud, Piltdown Man. Pathetic, CP. I want to see that added by the end of business today! --Irrational Atheist (talk) 19:08, 27 January 2010 (UTC)
conservo-detector
Re: the latest WIGO [2], didn't the actual author get banned? I can't remember. -- Coarb (talk) 01:03, 28 January 2010 (UTC)
- Mark left after the entire KSorenson awesomeness. I think TK then blocked him and Andy unblocked him. Either way, he's gone. Keegscee (talk) 01:19, 28 January 2010 (UTC)
- Yeah, that's him. Edited a lot of math articles, got blocking rights and then resigned. Smelled like a parodist to me; I'm pretty sure he had at least 2 other accounts over there. -- JLauttamusdiscussion 01:34, 28 January 2010 (UTC)
- So they want to use the software created for them by someone who got chased off the site? And this software is suspected to be compromised if anyone with bad intentions would know of its source code? Sounds like nothing could go wrong with this at all! --GTac (talk) 10:02, 28 January 2010 (UTC)
- Andy was so impressed by MarkGall's creation that he drools over anyone who uses it. Tetronian you're clueless 16:03, 28 January 2010 (UTC)
Nutshell series
I like these.
Logic Funneh does not require verbosity!
Poor FrankC though. He still doesn't get that adding citations to Conservapedia is a bannable offense. WodewickWelease Wodewick! 04:03, 28 January 2010 (UTC)
- This is indeed an excellent WIGO. I just wish Andy hadn't tainted the word "concise". I really loved the happenings around FrankC; these kinds of posts are my favorite. He doesn't compromise, sticks to the point, doesn't get distracted by the bullying, and doesn't really give them a reason to ban him, EXCEPT that his facts go against their opinion. With cases like these their deceit is undeniable. They didn't have an answer for his factual posts, they couldn't pin him on some ridiculous rule, so they banned him out of cowardice and deceit.
- I'm giving FrankC a slow clap of appreciation. --GTac (talk) 10:08, 28 January 2010 (UTC)
- Likewise. It was an excellent post through and through. Facts are a real bitch, huh? I've always wondered about the behind-the-scenes talk between the admins. I wouldn't be surprised if Andy actually goes crying to TK to revert facts so it looks like he never saw them. — Sincerely, Neveruse / Talk / Block 13:49, 28 January 2010 (UTC)
"you" and "this country"
Andy complaining again about Obama using the wrong person for his pronouns? St. Ronnie's 1982 SOTU, for comparison:
- "this administration": 5
- "this nation": 1
- referring to America as "her" rather than "us": 1
-- Coarb (talk) 04:51, 28 January 2010 (UTC)
- It's been 28 years since that speech. The liberals have distorted its original meaning. Keegscee (talk) 05:24, 28 January 2010 (UTC)
Ronald Reagan Translation Project
At the time, Ronald Reagan didn't have access to words that have been coined since 1982, such as "Islamofascism," "War On Terror," and "Segway." A modern translation of his State of the Union speech would have to take this into account, clarifying the original meaning with these powerful new conservative words. WodewickWelease Wodewick! 06:57, 28 January 2010 (UTC)
- So we should translate Reagan's speeches using these new conservative insights? Makes perfect sense. Tetronian you're clueless 16:01, 28 January 2010 (UTC)
No slight against Obama
However, Bill Clinton delivered a joint address from memory for several minutes when the teleprompter broke in, I believe, 1993. ConservapediaEditor (talk) 06:30, 28 January 2010 (UTC)
- You never know, maybe Obama may have been able to as well. The teleprompter is that, a prompter. I doubt anyone reads straight from the thing, otherwise they would be staring straight at it the whole time, rather than looking around the room. - π 07:00, 28 January 2010 (UTC)
- You know who was VISIBLY and AUDIBLY reading from a teleprompter? Governor Bob McDonnell from Virginia. Fafuxsake, he looked like HowTheWorldWorks talking about free market capitalism on YouTube and EVERY TIME HE LOOKED TO THE LEFT OR RIGHT we were left hanging at the end of a sentence. You could see the realization come across his face like he was thinking "Hey, I know I was homeschooled but, a sentence shouldn't end like that..." *looks back at the prompter* "Oh yeah! That word! That ends it!" *says word*. Now, Andy, THAT is what it looks like when a public speaker "simply reads what is written". Fuck you. The Foxhole Atheist (talk) 12:47, 28 January 2010 (UTC) (PS - Kudos to the president for announcing the end of DADT. It's about time. That policy is stupid.)
- @Pi: I've heard that they are trained to look back and forth between the 3 prompters so it looks like they are looking around the room. If you tape a speech by any President and fast-forward it, you can see that this is what they are really doing. And that goes for all Presidents in the last few decades. Tetronian you're clueless 16:00, 28 January 2010 (UTC)
Latest toon
My god, KackyPants just gave us the best parody template ever! CrundyTalk nerdy to me 09:52, 28 January 2010 (UTC)
- Link for the lazy. I'm astonished at this week's toon! It is NOT about climate change, AND it even has traces of an actual joke in there instead of boiling down to "I HATE LIBERALS, THATS THE JOKE"! Of course, the people actually suffering from "Barackaspeechaphobia" are the people at CP, since that name implies a fear of Barack's speeches.. --GTac (talk) 10:15, 28 January 2010 (UTC)
- This is the worst toon yet. Where did all this "Barack Obama can't talk extemporaneously" bullshit come from? Do they not remember the buttocks-spanking he gave "veteran public servant" John McCain in, hmm, all three debates? WodewickWelease Wodewick! 13:36, 28 January 2010 (UTC)
- In Conservaland, McCain won all those debates, just like Palin gave Biden a beating he'll never forget. Vulpius (talk) 14:33, 28 January 2010 (UTC)
- Prove there were no teleprompters, liberal. — Sincerely, Neveruse / Talk / Block 14:36, 28 January 2010 (UTC)
I'm amazed no one else has said that toon is a racist caricature.
- It's not overtly racist, just potentially racist. Tetronian you're clueless 15:57, 28 January 2010 (UTC)
- It's just free from liberal bias. -- Lauttydauttywe likes to party 16:01, 28 January 2010 (UTC)
- One thing I'll say for KJ, he can't draw hands for shit. Aboriginal Noise Oh, what a lovely tea party! 16:56, 28 January 2010 (UTC)
...is that comic sans? That's comic sans. God damn. X Stickman (talk) 17:36, 28 January 2010 (UTC)
- Since CP craps on copyright anyways, someone should tell Karajou to steal a font or two from Blambot! --Irrational Atheist (talk) 17:39, 28 January 2010 (UTC)
- I don't get it. The only joke I see is parody of right wingers going apeshit over Obama using a teleprompter. But to say that the guy can't speak without one is ridiculous. He's a fabulous debater and extemper. I sat in on some of his lectures at the University of Chicago law school - he would engage really tough questions from students intelligently and without the least bit of condescension. He owned McCain up and down those debates. Roundly. Painfully. I think some of the criticism of his "elitism" is frankly because he's a little wooden (like Gore), which does distance people who can't see past it and freaks them out when he does "urban" stuff like fist bumping his wife. I happen to see a real guy behind all that, but hey, I'm a liberal so I fall for all kinds of liberal deceit.
18:01, 28 January 2010 (UTC)
In fairness to Karajou, while it is dreadfully unfunny, at least there's something approaching a traditional joke format in this one, a marked step up from previous efforts. H. Randolph Twist (talk) 17:54, 28 January 2010 (UTC)
I would say it's awful timing on Koward's part, putting out a "potentially racist" cartoon just as JPratt puts up an overtly racist news item about the same subject. I will give him an infinitesimal bit of credit for actually making a joke, and relevant to something on the site it appears, but that's about it. And yeah, he's lousy at hands but hands are really tough. I'm not so hot at them either. --Kels (talk) 02:16, 29 January 2010 (UTC)
With friends like that ...
== Hello, my friend! ==
"It is good to see you this fine morning, Joaquín! I also was pleased to read the news from [[Honduras]]. The people there deserved a break! --ṬK/Admin/Talk 09:20, 28 January 2010 (EST)"
I presume the Honduras ref was the ex pres leaving. yummy
& honey(or marmalade) 01:38, 29 January 2010 (UTC)
Jpatt on our "affirmative action President"
What a prick. Is he ever going to stop deluding himself? Tetronian you're clueless 23:13, 28 January 2010 (UTC)
- That's a trick question, right? --Kels (talk) 23:15, 28 January 2010 (UTC)
- What delusion? Obama IS black isn't he? It stands to "reason" that the only way a black could be elected is by affirmative action. Jimaginator (talk) 13:23, 29 January 2010 (UTC)
Who is Andy's ISP?[edit]
Reading up on one of the blogs I follow (legal stuff for photographers) there is the question of when is an ISP immune to copyright infringement?[4] There are three parts to the test in the DMCA. The key in the tests is that "upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material" (Aiii) and "upon notification of claimed infringement as described in paragraph (3), responds expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity." (C). It might not be a bad thing to point out to Andy's ISP all of the copyright infringements going on within CP. If Andy won't take those images down - maybe his ISP might be interested. --Shagie (talk) 23:17, 28 January 2010 (UTC)
- I doubt it will make a difference - people infringe copyright every day and don't get caught. Why would they target him? Tetronian you're clueless 23:22, 28 January 2010 (UTC)
- Notifying individuals isn't a bad idea, particularly if their work is being used in a context they'd likely find unpleasant, but contacting his ISP seems a bit unsporting. Although our content doesn't have any bearing on the illegality of CP's regular misuse of copyright protected content, it would seem hypocritical to go for the nuclear option without being confident that RW's material is all correctly licenced. Besides, you can't file a formal DMCA request unless you're authorised to act as (or on behalf of) the rights holder. You could offer them a friendly tip, but the ISP wouldn't be under any obligation to take action. That's my armchair lawyer take on things. --
Ask me about your mother 23:39, 28 January 2010 (UTC)
- Doesn't the DMCA apply to the hosting company, not the ISP, if they are different? CS Miller (talk) 00:11, 29 January 2010 (UTC)
- The question is "does a friendly tip kick them into the area of being notified" - it's not a take-down request, but rather if there is a friendly tip to the ISP and then National Geographic comes in with an infringement suit and finds that the ISP was advised that there was copyrighted material on the machines. Not so much a "We demand you take CP offline" but rather "notification has been sent to National Geographic that a web server you are hosting has copyrighted material without any attribution or reasonable claim of fair use and you might want to point this out to your client." --Shagie (talk)
- It could be a case of contributory copyright infringement, but I'm not sure how the US courts would handle that kind of thing. I know that you're pretty good with adding licences to images you're uploading, but realistically RW is not exactly perfect. We've still got a lot of images to sort through. It could be hypocritical, but there are some key differences.
- We have demonstrated good faith with previous copyright infringement claims by engaging the claimants and making changes where necessary. Most of our infringement appears to be for silly little things, smileys and such. We can demonstrate that we're taking steps to correctly licence our stuff, and have deleted content that's obviously illegally posted here. I'm not aware of CP having done anything similar.
- RW is a non-profit set-up, while CP is being used for Andy's homeskooling business. The allegedly educational aspects of his business may give him some leeway, but it's certainly not a get out of jail free card for people like fairuse Karajou to spend their time pillaging pictures to slap on the turds that are their articles.
- Overall I'd be reluctant to go down the host/ISP route. It's up to you, but I'd rather just notify the copyright holder and let them decide what to do. --
Ask me about your mother 15:54, 29 January 2010 (UTC)
- DMCA requires the copyright holder or assigned agent (and there's a legal definition of assigned agent) to file the claim. The hosting company can ignore anything else. Notifying the copyright holder with all the info is the best option. (picture, url for pic, email for DMCA filing, whois for hosting company email) Some interpretations say you must mail a hardcopy via snail mail, but many sites accept email filings. Hamster (talk) 16:15, 29 January 2010 (UTC)
Grammar fail[edit]
Nice catch. Normally I don't like making fun of a person's tyops, but in the context of a man who A) uses typos and minor grammatical errors as an argument and B) is advertising a writing class, this is absolutely perfect. Corry (talk) 02:24, 29 January 2010 (UTC)
- Why the hELL does he write these "lectures" on line. He'd have a little more credibility if he dropped them on the page as fully edited things. (laconic (pronounced: luh-KAH-nik) really? Strange accent there, Andy.) yummy
& honey(or marmalade) 02:33, 29 January 2010 (UTC)
- My favourite part was the link from the home page. It's a link to this pageimg which has the heading "Welcome to Students." Now, I know he probably means a shortened version of "I wish a warm welcome to students", but was "Welcome, Students!" too difficult? This writing course should go really well. Nick Heer 03:26, 29 January 2010 (UTC)
- Will Andy be using IPA phonetics, or does he reject their internationalist bent? WodewickWelease Wodewick! 03:38, 29 January 2010 (UTC)
- If it is like last time, it will be a series of increasing-length essays bashing liberals. - π 13:07, 29 January 2010 (UTC)
- Andy makes such an issue out of people's typos (which are mainly on talk pages anyway) just because he uses Firefox's spelling checker. However, that doesn't catch homonyms, grammar, punctuation, or typos which are still proper words. So by all means, mock the miserable git for his writing errors.
ГенгисGum disease 15:37, 29 January 2010 (UTC)
He only takes people up on typos etc. when he's beaten on a point. - Oh wait a minute - that means every time someone disagrees with him. yummy
& honey(or marmalade) 15:42, 29 January 2010 (UTC)
Writing and SAT Verbal Skills lectures[edit]
As writing is a large part of what I do, I've eagerly been waiting to see what Professor Schlafly has to say about it. Writing Lecture One gives me a good clue: it's written like a high school book report. There's not much grammatically wrong with it, aside from a few inappropriate placements of commas; but most of it consists of simple declarative sentences with no life or warmth to them. The person who wrote that has no special skill with the English language. He shouldn't be teaching it. - Cuckoo (talk) 14:06, 29 January 2010 (UTC)
- You got one extra word there, Cuckoo. He shouldn't be teaching. Period. Aboriginal Noise Oh, what a lovely tea party! 14:23, 29 January 2010 (UTC)
- Remember... This is Andrew Layton Schlafly, Esq., BSE, JD that we're talking about here. He can't even speak with life and warmth. Even the guy's laugh sounds like a robot being forced at gunpoint. The Foxhole Atheist (talk) 14:31, 29 January 2010 (UTC)
- Couldn't agree with Cuckoo more, the lecture flows like a river of bricks. 'Writing is...' 'Writing is...' 'Writing often...' 'Writing is...' 'Writing can..' 'Writing is...' 'Writing has...'. What a wonderful variety of statements. EddyP (talk) 15:26, 29 January 2010 (UTC)
- It might actually be OK spoken by a decent orator, but written it's just a sequence of random bullet points. yummy
& honey(or marmalade) 16:07, 29 January 2010 (UTC)
- I'm still working on a project today, but hopefully I will get to ripping the page to shreds. Hardly anything he's brought up in that lecture pertains to improving writing *or* the verbal aspects of the SAT I test. --Irrational Atheist (talk) 16:09, 29 January 2010 (UTC)
- Might want to wait until he has "finished" it - that would be when he "assigns" it to the new class. Maybe the reason it reads like a bullet list right now is because that's all it really is - notes to build the "lecture" from. I'm sure the final draft will be equally hilarious... ħuman
02:23, 30 January 2010 (UTC)
Google/China[edit]
Anyone know what's going on with this? Ken takes his usual handful of revisions to get to an entry. Then wipes it outimg? — Unsigned, by: Worm / talk / contribs
- The ways of Ken are mysterious and not to be fathomed by the base humanity on Rationalwiki. Know ye not that he is all powerful when it cometh to Google and Alexa? yummy
& honey(or marmalade) 14:50, 29 January 2010 (UTC)
- In a nutshell (apart from CP), Google is threatening to pull out from China on the basis that China wants to censor Google's search capabilities and all that jazz. Besides that, there have been reports of chinese government attempts at hacking Google (one or the other). Conservative is thanking Google for not censoring search results, since <insert Conservative's terrible article> is number 1 among searches. He removed it because either A) He was wrong, and it wasn't anywhere near number 1, B) Google is a liberal organization (scroll down about two-thirds) and WE CAN'T BE PRAISING DEM LIBERALS, or C) Someone told him to stop being retarded.
NorsemanCyser Melomel 14:55, 29 January 2010 (UTC)
Password thief?[edit]
What would make JacobB suspect my IP of thievery? TKEtoolshedFrag Out! 18:38, 29 January 2010 (UTC)
- The guy's a turd trying to make a name for himself. He's referred to getting password change emails, which list IPs, as attempts to steal his password by "password thieves," which couldn't be farther from the truth. You can't change someone's password without access to the associated email account. It wouldn't surprise me if JacobB was using his having received such emails as an excuse for blocking other IP addresses. On the other hand, were you sending him password reset requests from your home IP address? If so, that's stupid. Don't do it anymore.
18:43, 29 January 2010 (UTC)
- How would he have gotten your IP, if not in a password reset email?
19:07, 29 January 2010 (UTC)
- I've never emailed anybody on CP. It seems like something that could get creepy real fast. I do go through a college network, so maybe they range blocked it? They won't let me sock up anymore. TKEtoolshedFrag Out! 19:45, 29 January 2010 (UTC)
Not wigo-worthy...[edit]
...because it's not funny or obnoxious, just an example of how CP spins things.
Hillary Clinton said today that she doesn't want a second term as Secretary of State because it's such a demanding job.
How does Andy phrase it? Hillary Clinton announces she'll resign as Secretary of State if Obama is reelected for a second term. Now, at the level of Secretary of State, "resign" usually implies leaving due to scandal or dispute with the boss, not just she doesn't want to keep doing the job. Andy's phrasing goes even further, almost making it sound as if her "resignation" would be to protest Obama's reelection.
MDB (talk) 16:00, 29 January 2010 (UTC)
- In other "Not wigo-worthy" news, cut-rate parodist AlexWD plays to the crowd by insinuating that global warming is less credible because bin Laden believes in itimg. — Sincerely, Neveruse / Talk / Block 16:42, 29 January 2010 (UTC)
- Well, in terms of achieving public acceptance for global warming, having bin Laden endorse the theory isn't good news. I mean, if you were a Hollywood producer, you really wouldn't want to see this review:
"Two arms raised way, way up! Best picture we've seen since Triumph of the Will. All true Aryans should see it again and again!" -- Adolf H. and Eva B., Berlin News-PicayuneMDB (talk) 17:11, 29 January 2010 (UTC)
- Surely it follows that bin Laden's support of prayer in school and religion in government would undermine the credibility of CPs advocacy of these things? --
Ask me about your mother 17:20, 29 January 2010 (UTC)
- Don't be silly -- bin Laden is supporting the wrong religion in those matters. To CP, that's the key difference between their religious zealotry and bin Laden's. MDB (talk) 17:26, 29 January 2010 (UTC)
- That, and possibly the fact that CPers have not killed thousands and thousands of people.
18:14, 29 January 2010 (UTC)
- Doesn't attempting to permanently twist and distort the mind of impressionable children and teenagers count? Besides, I'm pretty certain that OBL and AndyPandy have the same attitudes about the female of the species being educated (they should only be educated that their place is in the kitchen baking non-Mohammed shaped cookies).--
Spirit of the Cherry Blossom 18:53, 29 January 2010 (UTC)
- Does attempting to permanently twist and distort etc. etc. count as what? Being equal to bombing trains and subways and flying planes into buildings and dinghys laden with explosives into ships? No, I would say not. I'm as anti-CP as the next girl, but saying the only difference between Andy's wingnuttery and bin Laden's is the god they worship is pretty wingnut itself.
19:06, 29 January 2010 (UTC)
- Please, don't make me get the Sarcasm MarkTM out, Camembert.--
Spirit of the Cherry Blossom 22:15, 29 January 2010 (UTC)
- Whereas Andy has not personally killed anyone (to the best of my knowledge), other Christians with his views have, and continue to do so. Throw in that when something like the Tiller murder occurs, Andy and his ilk cheer it.... I say fanaticism period is a bit of a wash. SirChuckBObama/Biden? 2012 01:43, 30 January 2010 (UTC)
Adding book citations to CP[edit]
is now close to verboten, because a community of, hm let's say twenty or so active non-parodist editors is quite understandably too small to fact-check their own wiki. WodewickWelease Wodewick! 04:03, 30 January 2010 (UTC)
- Twenty? You greatly exaggerate the active usership of Conservapedia, methinks. DickTurpis (talk) 04:35, 30 January 2010 (UTC)
- TK: "If I have to choose between a more lax citation, but one that is easily verified, or one more complete, but not available to easily check, it really isn't a contest as to which I will permit to remain." ...so don't add anything that isn't common knowledge. Otherwise people would think Conservapedia is some kind of reference source, like an encyclopedia or something. WodewickWelease Wodewick! 10:51, 30 January 2010 (UTC)
- Once again TK manages to single-handedly dumb-down CP. Anybody else notice how quick the man who always says 'this is Andrew Schlafly's project and only his' is to make up destructive policy on his own, with no input from his fellow sysops. In fact, he's of the opinion that the Blue Room is not there to debate policy. --Psygremlin講話 11:04, 30 January 2010 (UTC)
- Yeah, it's so lame. Hell, I've gone to the Uni library to check out PJR's loser refs, why can't TK do that? = Lazy + Stupid. Also, who cares what TK thinks? ħuman
11:31, 30 January 2010 (UTC)
- Why stop there? I don't think they should include any citations at all. Just put a pile of links that have something or nothing to do with the subject at the end of the page. Leave it up to the readers to sort out how to figure something out. Wouldn't that be encouraging learning? -Lardashe
- You don't learn much from reading a source. It's much better to write one. --Sid (talk) 13:36, 30 January 2010 (UTC)
- As long as there's lots of pictures of Hitler/Gore_burning/Obama_in_whiteface it doesn't matter if it's referenced or not. yummy
& honey(or marmalade) 13:40, 30 January 2010 (UTC)
- Don't forget the ranking in a popular search engine starting with "G"! The higher it is, the truthier the article is (unless it's Wikipedia, of course), regardless of sources. --Sid (talk) 14:04, 30 January 2010 (UTC)
- *Gasp!* Karajou sides with the peonsimg. How long before we get a "you should have consulted with me via e-mail before giving in to your liberal instincts and posting that?" from TightKnickers? --PsygremlinParla! 16:24, 30 January 2010 (UTC)
- MLA-style is very liberal-leaning, by CP standards. It's for English and other liberal arts majors. AP or Chicago should be more preferable for conservatives, one would think? --Irrational Atheist (talk) 18:38, 30 January 2010 (UTC)
- Psychology and Science are both forbidden there. I think that eventually all you will be able to cite is chapter and verse from CBP --Opcn (talk) 05:30, 31 January 2010 (UTC)
- Has anyone seen Ed quote from a scientific journal or scholarly book? He usual uses the first thing he finds on a blog. - π 05:33, 31 January 2010 (UTC)
"Bilingual education" according to Ed: "Education that makes you bilingual"[edit]
Ed uses scare quotes just because he doesn't get the difference between bilingual education and learning a second language. --Sid (talk) 13:57, 30 January 2010 (UTC)
- Now with CP article by Ed, who tells us what the term SHOULD mean and that it's the fault of liberals (duh) that it now means the opposite of what he wants. --Sid (talk) 14:03, 30 January 2010 (UTC)
- This could be the beginnings of a new category: "How stuff ought to be, according to Ed." This is right up there with his confusing masterwork cp:Essay:Women wearing pants, but at least he had the good sense to put that in to Essay namespace. --
Ask me about your mother 14:18, 30 January 2010 (UTC)
- Aw! Look - he's even run off to Andy. 'Master! Master! Lookee. I haz new articlesimg forz youz.' He's kinda cute when he fawns like that. --Psygremlin말하십시오 14:35, 30 January 2010 (UTC)
Another day, another Ed stub[edit]
Nuff saidimg a) The man does not respect the wiki he works on to deliver tripe like that. b) Surely by now he's learnt that, like WP, nobody is going to jump in and turn his steaming pile of faeces into an article. --PsygremlinSprich! 16:40, 30 January 2010 (UTC)
- Surely you mean unlike WP? ħuman
01:04, 31 January 2010 (UTC)
- That doesn't even read like it was created by human hand. Maybe Ed's acquired a stub-o-matic bot that'll create an article and pop in a vaguely related sentence while Ed gets on with whatever it is he does. --
Ask me about your mother 01:24, 31 January 2010 (UTC)
TK's cellphone driving WIGO[edit]
TK's typical right-winger point obviously is that Nanny-Statism and excessive legislation is a Bad Thing. But is the suggestion SERIOUSLY that it's not a good idea to prevent driving while texting and the like?! Lord knows, every single study ever done - and personal and anecdotal experience of everyone I'm sure including TK himself - indicates cellphone use while driving is as dangerous, or even more dangerous, than driving while loaded. Would you have the ban lifted TK, and then what about removing the DUI offense, eh? (Would be useful for me). As usual with these fucking right-wing dipshits, they have such a small brain they can't see beyond the "Stop Big Gub'mint!" argument. Thickheads. DogPMarmite Patrol 22:05, 30 January 2010 (UTC)
- It would have been sensible to not make the ban. Laws like that do nothing but teach people nobody cares if you break laws. --Swedmann (talk) 22:31, 30 January 2010 (UTC)
- That's a ridiculous position to take, and completely illogical. Are you going to remove the DUI laws? How about we have no speed limits, can drive on any side of the road we like, add knife blades to our wheels, and do lines off the tits of hookers while we swig from Jack Daniels bottles? I suppose that makes for a freer society, but I'd guess your taxes are going to have to go up to pay for the big increases in morgue and disability budgets. DogPMarmite Patrol 23:01, 30 January 2010 (UTC)
- Nobody will obey it so it was a big waste of everything to make it. Laws should be easy and consistent. Speed limits are fundamental to driving, while having illogical "do not touch the arbitrary magical item, it's bad" laws undermine the respect towards safe driving and other laws. It's not a big effect but it exists. If they made all new cars have hand-free systems I wouldn't complain, but this doesn't make sense. --Swedmann (talk) 23:38, 30 January 2010 (UTC)
- Well, technically, I'd much prefer good Grappa or perhaps Black Bush, but I thought Jack Daniels suited the image of rails off hooker's tits. It was a writing thing, you know, not a booze choice. Work with me on this one Ace. DogPMarmite Patrol 23:12, 30 January 2010 (UTC)
- Lifting the DUI would work for me. But they only argue against "big government" when the govt. is doing things they don't like. Big government getting in the way of abortions and gay marrige is A-OK! Acei9 22:32, 30 January 2010 (UTC)
- Just for the record, DP, I would like to point out that most of the studies on this issue are seriously flawed and there have been many studies that prove the opposite: that cell phone usage has no greater effect on driving than other activities drivers engage in (applying makeup, shaving, eating etc...) SirChuckBI have very poor judgement 23:35, 30 January 2010 (UTC)
Phonics[edit]
Andy's Phonics listimg is brilliant. You can really tell from the opening line that it's Conservapedia:
He's putting the fear of God into your belly. "You haz made miztakez without phonicks!!!" SJ Debaser 14:29, 30 January 2010 (UTC)
- Anyone with a sock want to add the following:
- Relativity | Relativism = Relativity used in a physics sense is sometimes confused with relativism, thus causing poorly educated people to reject relativity because they believe it's somehow related to cultural relativism. --
Ask me about your mother 14:42, 30 January 2010 (UTC)
- Hang on, who the hell confuses Lieberman with Libertarian? Is Andy basing this list on a conversation he had 20 years ago while stoned in the back of a camper van? --
Ask me about your mother 14:43, 30 January 2010 (UTC)
- You fool! Lack of phonics obviously causes dyslexia and breast cancer. Open your mind! God's peed! Vulpius (talk) 14:48, 30 January 2010 (UTC)
- Better is "amendment" versus "agreement". That's just a baffling mistake to make. MaxAlex Swimming pool 14:54, 30 January 2010 (UTC)
- That's why it's on the Liszt.
16:11, 30 January 2010 (UTC)
- Butt ore ewe shore abut tat? --PsygremlinSnakk! 16:41, 30 January 2010 (UTC)
I'm confused. I thought, for some reason, that conservatives hated phonics as a means of learning to read. It is a late 19th century innovation, after all. Beyond that, the idea that mistakes will be made if you don't learn with phonics is as ridiculous as the opposing argument that learning phonics will lead to mistakes. We're talking about learning to read English, after all. We have words like "enough" and "colonel" and even "Thames" that sound nothing like they should under phonics, and that no one would remember how to spell or read without just using them a lot and remembering. We also have large numbers of words from a large number of different languages, each of which has its own pronunciation scheme. Consequently, there is no perfect way to learn how to read this language. It just has to be done in a systematic fashion while leaving sufficient room for the general willy-nilly of the language. Kaalis (talk) 16:51, 30 January 2010 (UTC)
- A central part of Andy's worldview is that there is a perfect way to do everything. He really doesn't like nuanced approaches. Broccoli (talk) 17:50, 30 January 2010 (UTC)
Phoney Phonetics
One."
Attributed to Vivian Buchan
Auld Nick (talk) 18:09, 30 January 2010 (UTC)
So it is Andy's opinion that Gore should have won the election but lost because of confusion over the ballot?
- Woot. AlexWD raised the relativity/relativism thing. Should be interesting to see how this one plays out, assuming TK doesn't just stomp in to spare Andy's blushes. --
Ask me about your mother 23:01, 30 January 2010 (UTC)
- Here is a growing list of mistakes that result from reading by phonics. Thieh"6+18=24" 23:11, 30 January 2010 (UTC)
- Andy's edit to the talk page re relativity/relativitism hints at another echo from the past.
ГенгисGum disease 12:51, 31 January 2010 (UTC)
Lovely new conservative words[edit]
The 2000s are coming along slowly, but this suggestion is excellent. Now if only there were some piece of equipment by which the degree of irony could be measured... MaxAlex Swimming pool 18:50, 30 January 2010 (UTC)
- didn't you even read it bro? it says it's LIBERAL behavior. HoorayForSodomy (talk) 19:31, 30 January 2010 (UTC)
Crappy fit of conservative words[edit]
Hopefully this will work. I broke down the words into 50 year segments, and what do you know, turns out shoehorning the data into one measure of exponential growth will not make it fit another measure of exponential growth. I haven't tried to fit a curve to it yet but I can't see it being anywhere near perfect. I probably should have passed this off to someone with a sock to post up on CP but I suspect it would get burnt anyways. --Opcn (talk) 08:56, 31 January 2010 (UTC)
- Gee I haven't looked at that list in a long time. There are some dumb words on that list. Phonics from the 17th century didn't mean the teaching style Andy prefers (or his mummy sells books on) originally. Rapture, I assume that Conservatives are looking forward to this event on April the 1st (too easy really). - π 09:10, 31 January 2010 (UTC)
Basic Statistics and Conservative Words[edit]
Opcn: Thank you for making the table. Aschlafly's hypothesis is:
H0: The number of conservative words follows a geometric progression and roughly doubles every century
This hypothesis is consistent with the observed numbers of words over the centuries:
But how does Aschlafly get his numbers? I can see two possibilities:
Ho.m.: Aschlafly stumbles upon a conservative word and determines the year of its origin. As the words are distributed according to H0, the observed pattern emerges.
Hc.m.: Every time Aschlafly finds a conservative word of the 16th century, he looks for two words of the 17th century, four words of the 18th century, eight words of the 19th century - and won't stop until he finds sixteen words for the 20th century. This search is obviously independent of the actual distribution of conservative words.
How can we test these hyoptheses? If Ho.m. is true, the geometric progression should be mirrored by the decades - independently from the centuries: for one word in the fifties, one should find 1.07 words in the sixties and so on.
Quite a difference. In fact, a χ²-test shows that this hypothesis has to be dismissed (for α = .05).
As for Hc.m.: in this case, one would expect a uniform distribution on the decades, i.e.,
As this hypothesis can't be dismissed (again, for α = .05), one may conclude that the way Aschlafly gets the numbers has nothing to do with a real distribution: it is probable that he pulls them out of his ass.
larronsicut fur in nocte 14:47, 31 January 2010 (UTC)
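The χ²-style comparison larron describes can be sketched in a few lines of Python. The decade counts and the critical value below are illustrative placeholders (the thread's actual tables are not reproduced here), so only the mechanics of the test carry over, not the conclusion:

```python
# Hypothetical word-coinage counts per decade of one century
# (invented numbers, standing in for the thread's elided table).
observed = [3, 5, 2, 6, 4, 3, 5, 4, 2, 6]

def chi_square(observed, expected):
    """Pearson's chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

total = sum(observed)
n = len(observed)

# Ho.m.-style expectation: the per-century doubling carries down to the
# decades, so each decade holds 2**(1/10) (about 1.07) times the previous.
r = 2 ** (1 / 10)
weights = [r ** i for i in range(n)]
geometric = [w * total / sum(weights) for w in weights]

# Hc.m.-style expectation: a uniform spread across the decades.
uniform = [total / n] * n

stat_geo = chi_square(observed, geometric)
stat_uni = chi_square(observed, uniform)

# Critical value for chi-square with 9 degrees of freedom at alpha = .05
# is roughly 16.92; a statistic above it means the hypothesis is rejected.
CRITICAL = 16.92
print(f"geometric: {stat_geo:.2f}, uniform: {stat_uni:.2f}")
```

With real data one would read off which of the two statistics clears the critical value; a library routine such as SciPy's `chisquare` would also return a p-value directly.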
- "Larron" (if that is your real name, I doubt it), you're clueless. Your comment clearly shows your verbose liberal style. I can tell by the way you spell : "Hyoptheses" that you went to public school, and probably support the censoring of classroom prayer. I would encourage you to read or translate the Bible for 5% of your spare time, it does wonders for all of us. Godspeed. Andy Schlafly
Guess the admin[edit]
"I do worry, of course, that saboteurs and cranks will put fake references into articles, for the sake of POV-pushing. A famous black conservative has complained about this. I hope some of our homeschooled contributors will take the time to pull books down from shelves and check offline references."
- Sentence #1) complete misunderstanding of the topic
- Sentence #2) vaguely disturbing non sequitur
- Sentence #3) exhortation for someone else to do the work
Yep, it's Uncle Ed. WodewickWelease Wodewick! 10:00, 31 January 2010 (UTC)
- In some parallel universe Wikipedia user #188 works diligently in his sandbox to produce fully formed articles before moving them to mainspace. He avoids mentioning himself in the articles, and when it's a topic he doesn't understand he acknowledges his limitations, allowing an expert to offer a critique. He comes up with suggestions, and then leads by doing what he suggested, instead of relying on others to run with his amazing ideas while he sits on his arse. He probably also has a really cool beard.
- CP has a rich history of citing blogs and other opinion pieces in support of their articles, and it's painfully obvious that the only reason they don't like citations from books is that someone will have to get off their arse to go check them. I can understand that attitude on a more laid-back site, but it's really odd to see on a site with a mission such as theirs. --
Ask me about your mother 11:31, 31 January 2010 (UTC)
- That whole page has now been memory-holed. Typical TK.
ГенгисGum disease 16:18, 31 January 2010 (UTC)
I don't want to research this, lest brain damage ensue. But does anyone know exactly what he's referring to when he claims a "famous black conservative has complained about" "saboteurs and cranks" putting "fake references into articles, for the sake of POV-pushing"? - Poor Excuse (talk) 14:51, 31 January 2010 (UTC) (And for that matter, why negritude is an issue here?) - Poor Excuse (talk) 14:53, 31 January 2010 (UTC)
- That's a good word. I'm going to find out if the negroes like it. Who should I call?
16:51, 31 January 2010 (UTC)
Full props to Poor excuse for name-dropping "negritude" as I'm reading about Aimé Césaire. Cosmic co-incidence. TheoryOfPractice (talk) 17:03, 31 January 2010 (UTC)
- (EC)Remember MLK was a conservative. I think he said this, on that subject, I might be wrong but can't be bothered to check because I'd have to go to a library to see a transcript.
- “And so even though we face the difficulties of today and tomorrow, I still have a dream. It is a dream deeply rooted in the American dream.
- I have a dream that one day this nation will rise up and accept that online references are better when used in an online encyclopedia. For it is the saboteurs and cranks who would use fake references to push their point of view, if we would use sources written on paper, with the ink of the liberal.”
- Internetmoniker (talk) 17:05, 31 January 2010 (UTC)
Oh, yeah. MLK was a conservative--a conservative who likened the war in Vietnam to an imperialist venture, who thought that fear of communism had undermined the true values of the American Revolution, and who thought the war was diverting resources and energy from a government that should be more firmly involved in a project of societal reform. (See Singh, Nikil Pal. Black Is a Country: Race and the Unfinished Struggle for Democracy. Cambridge: Harvard University Press, 2004, 1-2.)
- I believe the appropriate CP logic to this is: MLK was Christian minister. Christians=Conservatives. MLK=Conservative. Internetmoniker (talk) 17:40, 31 January 2010 (UTC)
- You're wrong there, IM, it's not simply the Christian status that makes MLK a conservative. Andy (and those who think like him) are more than happy to label someone Unchristian when they don't agree with their views. The reason they cling so hard to MLK as a conservative is because they need him lest they appear racist. MLK was a registered Republican who was killed before the party swap of the 1970's. Even though everyone and their brother recognizes that he would be a liberal Democrat today, the GOP (and by extension, the crazy conservatives) hold on to him as a way to say "look, we're not racist, even though we speak in racist code language and endorse policies that hurt Black America, MLK agreed with us." SirChuckBOne of those deceitful Liberals Schlafly warned you about 20:28, 31 January 2010 (UTC)
Is Andy reformed on complex numbers?[edit]
Hermitian matrices are described in terms of the complex conjugate, which Andy is afraid of. -- Coarb (talk) 00:46, 1 February 2010 (UTC)
CBP revisited[edit]
I know Andy's off on another obsession now, but really he ought to be made aware of this:
It doesn't mention Liberal bias, so there's room for a nice presentation by him.
Fretfulporpentine (talk) 11:43, 31 January 2010 (UTC)
- Holy shit it's you. Didn't know you still perused CP/RW. HoorayForSodomy (talk) 16:35, 31 January 2010 (UTC)
- Brian! How's Mrs Ugler? Totnesmartin (talk) 21:28, 31 January 2010 (UTC)
- Good to see you out and about, FP. That is an interesting article, let's hope it reaches Andy's ears. Tetronian you're clueless 14:01, 1 February 2010 (UTC)
That's the question we're all asking ourselves.
Really. TheoryOfPractice (talk) 23:20, 31 January 2010 (UTC)
- "What a crass insult. Go fuck yourself."img – Nick Heer 05:37, 1 February 2010 (UTC)
- Amazing how long it took Andy to reply to that. One would have thought, after the time lapse, he would have had a more intelligent reply (than one with a typo - TK, go fix it!) ħuman
05:58, 1 February 2010 (UTC)
- I guess I haven't been paying close attention. Isn't/Wasn't AlexWD a regular contributing editor-in-good-standing? That whole talk page seems to indicate a complete breakdown that would get anyone else banned for life.--WJThomas (talk) 12:45, 1 February 2010 (UTC)
- I'm amazed that he didn't get blocked for that. Tetronian you're clueless 13:59, 1 February 2010 (UTC)
TKour de force
Better version of WIGO:
"TK is smacked by the other admins over book citations. TK pretends to back down, but the next day he consigns the whole page to the memory hole, adds his stance to the manual of style as if the discussion never happened, and puts an intimidate ban on the offending user complete with "Retired" banner."
The point here is TK's perfidy. Debates or good faith discussions with the guy are pointless, he will just do what he wants, hoping that no one will notice (like the time he used a "vandal" IP block to shut out another CP user as well). If ever confronted on this expect lots of "MYOB," "Andy trusts me," and week long blocks for "prevarication and making up screenshots." (talk) 00:27, 2 February 2010 (UTC) JDW was apparently not cowed into submission so I bet TK is burning the midnight oil plotting ways to Politely Remove him. WodewickWelease Wodewick! 01:32, 1 February 2010 (UTC)
- What's "an intimidate ban"? 82.23.209.253
Y'all actually support that idiotic parking proposal?
I'll concede that bill has no chance in hell of passing, unless the Democrats are willing to be the new permanent minority in California. Parking is either a local issue or a private issue, not a state issue. Hell, it's trouble enough to keep enough quarters in the car to feed the meters without raising the price of parking. And the notion that increasing parking meter rates will ease congestion is bunk. All that will do is lead to the creation of more ugly parking garages. [5] ConservapediaEditor (talk) 06:30, 1 February 2010 (UTC)
- god (extra lower case) forbid that anyone should, like take a train or a bus in or anything... --Opcn (talk) 07:14, 1 February 2010 (UTC)
- Nice red herring ... ConservapediaEditor (talk) 07:32, 1 February 2010 (UTC)
- Actually he has a point. If Californians know they can't park free, they may stop and consider alternate transport. However, I think this is the wrong way to go about it. SirChuckBI brake for Schukky 07:49, 1 February 2010 (UTC)
- There are much better ways to encourage that goal, including tolling of incoming roads, and expanding public transportation routes. By increasing parking fees on the streets, all you do is push people to park in garages, or in the case of what happens here in Dallas, drive to the nearest rail station with no parking fee a couple miles away from downtown and park there. That may reduce congestion, but it does not reduce pollution. ConservapediaEditor (talk) 07:59, 1 February 2010 (UTC)
- Toll increases waiting time, and if implemented ex novo one has to build lots of expensive infrastructure and hire new employees, whereas higher parking fees are just a change in the firmware of the vending machines. Perhaps people parking at a railway station will have a tiny spark of inspiration from seeing the choo-choo's passing by every day. — Pietrow ☏ 08:17, 1 February 2010 (UTC)
- The problems with tolls are already laid out above, plus they have the unintended problem of diverting drivers around your city rather than through it. Expanding public transportation is a good idea, but if there are no new riders, the city starts losing money by the bucket. California, especially, is a state tied to its cars. There will never be a major change in the way people get to work if they don't have some reason to do so. They'll never give up their cars. Here in Denver, the RTD fixed that little parking problem by charging anyone who parks in one of their lots (unless the car is registered to that district), and that cleared up that free parking problem right away. Besides, there aren't that many large scale garages in LA with open spaces (most of them rent by the month like NYC) and I'm sure the city can block the building of any new ones. Like I said, I don't agree with the method, but it does have its own upsides. SirChuckBI brake for Schukky 08:33, 1 February 2010 (UTC)
- London's answer is the congestion charge which, although not universally popular, has noticeably not been dropped by Boris. It has made a huge difference to the centre of town and, as a keen cyclist, I'm pleased to see just how many have reached for the two wheel answer. Jack Hughes (talk) 09:33, 1 February 2010 (UTC)
- He is planning to drop the western extension, though, and even that's taking long enough. There are contracts and other things in place for the main zone which he couldn't just wipe out without paying Capita the big bucks. So, don't take Boris' "inaction" as tacit approval of the scheme. MaxAlex Swimming pool 10:31, 1 February 2010 (UTC)
- Boris doesn't approve of it, I agree, but that doesn't make it a bad concept, and as long as enough of that money goes to improvements in public transport I think it'll remain quite popular. I'd like to see the idea implemented elsewhere in the country, with money going to real improvement in transport. Cities that could really benefit are Birmingham (CENTRO desperately needs serious, long term funding commitments to finally get Metro and commuter rail up to scratch), Liverpool to finish what was started with Merseyrail, Manchester (to get the tram extensions done), and Bristol to get any kind of rail-based mass transit going ASAP. As for California, I agree parking is a local issue, but the reasoning is sound. However the proposal itself is fail. Instead of offering incentives to curtail free parking, a better scheme would be to impose a small state-wide tax on parking facilities, with that funding ring-fenced for public transport, particularly heavy commuter rail in LA, which was nothing more than a bad joke when I was there (1997). --
13:41, 1 February 2010 (UTC)
Red Ken
Wait, isn't Google China blocked in China now or something? So, are Ken's articles soaring with the nobody who can read them? I'm so confused. But not quite as confused as ♥ K e n D o l l ♥ is. --JeevesMkII The gentleman's gentleman at the other site 08:47, 1 February 2010 (UTC)
- The pitch is that it has contents *important* enough for Chinese government to block google because of it. Ergo, it matters not whether anyone can read it or whether anyone in China understands ♥ K e n D o l l ' s ♥ English. ThiehZOYG I edit like Kenservative! 12:05, 1 February 2010 (UTC)
Rob Has His Mind In The Gutter. Again.
Cummunism? Is that what Rob does after looking at pictures of Kara in her Yellow Dress? TheoryOfPractice (talk) 12:45, 1 February 2010 (UTC)
- Hehe, Freud would've had a field day with Rob. CrundyTalk nerdy to me 12:58, 1 February 2010 (UTC)
- I'm a semi-Freudian Psychology student and I have a field day with Rob.... He's the perfect mix of paranoia, blind ideology with strong attacks on something he doesn't understand, and creepy sexual behavior (who looks at non-porn porn? honestly?) SirChuckBCall the FBI 18:07, 1 February 2010 (UTC)
- Rob? Rob? ROB! It still says "cum" on the family-friendly wiki. Stop gazing longingly at naked girls and fix it. TheoryOfPractice (talk) 20:32, 1 February 2010 (UTC)
Obama calls healthcare collectivism "a Bolshevik plot" according to CP
Only he doesn't....[6] Mick McT (talk) 12:58, 1 February 2010 (UTC)
- No surprises there. Tetronian you're clueless 13:57, 1 February 2010 (UTC)
Andy's smoking-lung cancer bit & Godwin
I was just reading this cracked.com article, which had something that made me think of Conservapedia.
Remember how one of Andy's favorite phrases is about people and companies denying the relation between smoking and lung cancer? Well, you know who also wasn't blinded by this liberal deceit (according to cracked and wikipedia)? Hitler!
That's right, this time I invoked Godwin's law. What you gonna do about it, ♥ K e n D o l l ♥? --GTac (talk) 13:59, 1 February 2010 (UTC)
- But...but...liberals censor school prayer! Tetronian you're clueless 14:31, 1 February 2010 (UTC)
- And the bible from their daily lives. TKEtoolshedFrag Out! 14:40, 1 February 2010 (UTC)
- The Nazis used junk science to support their racial theories and to achieve power - our own leaders have done the same to seize power and crush our liberties. - Kind of unrelated, but that the #2 response to a WND question about global warming. — Sincerely, Neveruse / Talk / Block 15:08, 1 February 2010 (UTC)
- Our leaders don't really use junk science, they ignore real science when it doesn't work in their favor and base all their decisions on irrational fear and politics. Just sayin. TKEtoolshedFrag Out! 22:03, 1 February 2010 (UTC)
- I have a nasty feeling that politicians were abusing science for political ends long before Hitler was even born. Totnesmartin (talk) 22:21, 1 February 2010 (UTC)
I thought of Andy
the last panel especially. Opcn (talk) 21:06, 1 February 2010 (UTC)
Did anyone listen to Ken's Creationathisms summit? Did they mention Conservapedia?
I tried to listen, but then after 15 minutes I had to go and kill myself --Opcn (talk) 20:46, 31 January 2010 (UTC)
- Where is the link? I'll listen.
20:48, 31 January 2010 (UTC)
- [7] --Opcn (talk) 21:19, 31 January 2010 (UTC)
- By the power of W3C, I declare that site as a sin against web design. Etc 21:32, 31 January 2010 (UTC)
- Maybe they hired Ken for web design? --Kels (talk) 21:37, 31 January 2010 (UTC)
- Oh dear Christ.... I didn't know Stevie Wonder was designing websites. SirChuckBCall the FBI 21:58, 31 January 2010 (UTC)
- Awesome. Reminds me of Bud Uglly. I guess that dates me. Whatever. Those were the times. Mountain Blue 22:37, 31 January 2010 (UTC)
- Considering that their 'Wicca Exposed' page has links to Chick Tracts as 'refs', I think we can safely classify this page... - Ravenhull (talk) 23:49, 31 January 2010 (UTC)
The homework assignment wigo
Maybe we should strike this one? I think the students should be kept out of wigos. They're the victims here. The fun is in the wingnuts, not the kids. Internetmoniker (talk) 23:47, 1 February 2010 (UTC)
- Not maybe--definitely. I'm pretty sure picking on the students is not on. TheoryOfPractice (talk) 23:52, 1 February 2010 (UTC)
- How many students are there on CP anyway? I just find it absolutely bizarre that any parents would seriously consider enrolling their kid in a class that uses CP as an exam hall, and Andy as the examiner. *facepalm* --
00:30, 2 February 2010 (UTC)
- It's a pity, that "thing that attics stop for" was pretty funny. Oh well, yeah, leave the kids alone. But when Andy grades it there might be some good laughs. ħuman
00:32, 2 February 2010 (UTC)
- What does "thing that attics stop for" mean? This is baffling. --Fawlty (talk) 08:53, 2 February 2010 (UTC)
- It's a typo. It's supposed to read "addicts." Tetronian you're clueless 12:44, 2 February 2010 (UTC)
I think I may be the only one holding this position here, but I'm not much for leaving those kids alone at all. They're part of CP, and if they do something stupid, I'd laugh just as hard in their face. You may argue that some of them didn't choose to be there, but that's all the more reason for me to ridicule them for following Andy's insane agenda. These aren't 5-year-olds; in my eyes they're old enough to learn that writing down mindless crap will get you mocked.
But I'll go with the status quo if that's how you all feel. Just wanted to put my 2 cents in. --GTac (talk) 09:55, 2 February 2010 (UTC)
- I vote for "keep the kids out of WIGO's." I'm willing to make an exception for Andy giving an extremely lenient grade to a very flawed homework answer, but even then, the focus should be on Andy, not the kids. MDB (talk) 11:59, 2 February 2010 (UTC)
- If we go by mental age we couldn't report any of CP's funnies, there's barely one of them over the mental age of 14. yummy
& honey(or marmalade) 13:39, 2 February 2010 (UTC)
Haitian Children WIGO
This WIGO about the Christian group having legal troubles in Haiti for taking kids without permission is unfairly phrased. CP doesn't seem to be defending the Christian groups at all, but the WIGO definitely implies they are. MDB (talk) 12:21, 2 February 2010 (UTC)
- I agree. CPs write-up on this doesn't support the tagline we've used. --
Ask me about your mother 12:47, 2 February 2010 (UTC)
- I think a key rule for writing WIGO's is "the fact CP lists something as 'In the News' does not necessarily mean they are taking a stance on it."
- And as far as the actual story goes, I'm willing to chalk it up to an honest, well-intentioned mistake. MDB (talk) 13:06, 2 February 2010 (UTC)
- Yeah. It's very confusing to read a CP news article that's ostensibly free of editorial bias. I need to lie down a while. --
Ask me about your mother 13:12, 2 February 2010 (UTC)
apple eating wigo
I thought the question was pretty fucking stupid as it was not a closed system. Andy's response seemed quasi-reasonable. — Sincerely, Neveruse / Talk / Block 16:35, 2 February 2010 (UTC)
- I found it funny because one person talks about Jesus's miraculous healing, and Andy counters with very rudimentary equivalents. Is Jesus's healing equal to eating an apple or taking an aspirin now? --Irrational Atheist (talk) 16:59, 2 February 2010 (UTC)
- I couldn't tell what thermodynamics have to do with telehealing, since Andy's concept of it seems too vague to be contradicted by entropy. On the other hand, the response was a lot weirder than the question. ~ Kupochama[1][2] 17:13, 2 February 2010 (UTC)
- How so? He is saying that there are clearly methods of healing that don't contradict the laws of thermodynamics, so why should the miraculous method have to? — Unsigned, by: 131.107.0.112 / talk / contribs
- He was trying to illustrate that the system isn't closed. In a rudimentary way, he did that. — Sincerely, Neveruse / Talk / Block 17:52, 2 February 2010 (UTC)
- This 2nd Law of Thermodynamics shit has to stop. It's not applicable in nearly any of the systems for which it gets raised - they're rarely "closed" systems in the sense required. Andy's discussion reflects that he doesn't understand this.
18:42, 2 February 2010 (UTC)
- In real life it is typically beneficial to ask what the other laws of thermodynamics are whenever anyone brings up the 2nd law. If they don't know, chances are that they don't understand the 2nd law. As for healing, I think that Jesus's healing was probably very similar to apple eating or aspirin popping. --Opcn (talk) 21:20, 2 February 2010 (UTC)
- Assuming Andy's Electrical Engineering course of study was like mine, he had to take a Thermodynamics course, and would know the laws. (And if he was like me, he hated thermo for non-majors. At least at my school, the various Engineering departments assigned their worst instructors/professors to teach the non-majors courses.) MDB (talk) 00:40, 3 February 2010 (UTC)
Lancet Retraction
Any ideas what this news article will mean at CP? TheMayor (talk) 20:33, 2 February 2010 (UTC)
- Just more assertions that liberals bullied maverick Best Of The Public researchers to discredit their work. The usual blah blah. --Sid (talk) 21:19, 2 February 2010 (UTC)
Blocks in 2009
Just a few numbers:
larronsicut fur in nocte 20:18, 1 February 2010 (UTC)
- Is there a way you can factor out our block-wars (e.g., any block under 32 seconds does not count)?
ListenerXTalkerX 20:20, 1 February 2010 (UTC)
- 315 seconds would probably be a better number. What defines an "active" editor? --Opcn (talk) 20:34, 1 February 2010 (UTC)
- Active may be a misnomer: lots of editors are blocked without leaving any trace, i.e., an edit in the database - those I called inactive. Active editors made a (surviving) comment in 2009. larronsicut fur in nocte 20:41, 1 February 2010 (UTC)
- I'd rather show blocks lasting longer than a day. Just a guess, but I bet 95% of all blocks on RW are less than a day, while 99% of all CP blocks are more than a day. Unless those block wars against MC, TK, etc. lasted longer than I thought.
NorsemanCyser Melomel 20:43, 1 February 2010 (UTC)
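The kind of filter ListenerX and Norseman are asking for is easy to express in code. The sketch below is purely hypothetical — the editor names and durations are invented for illustration, not actual block-log data — and shows dropping block-war entries below a duration threshold while keeping infinite blocks, then counting distinct blocked editors:

```python
from datetime import timedelta

# Hypothetical block-log entries: (editor, duration); None means an infinite block.
# These names and durations are invented for illustration only.
blocks = [
    ("MC", None), ("MC", None), ("Ace", timedelta(seconds=31)),
    ("TK", timedelta(days=365)), ("Ace", None), ("NewUser", timedelta(days=1)),
]

def serious_blocks(log, threshold=timedelta(days=1)):
    """Drop joke/block-war entries shorter than the threshold; keep infinite blocks."""
    return [(who, dur) for who, dur in log if dur is None or dur >= threshold]

kept = serious_blocks(blocks)
print(len(kept), len({who for who, _ in kept}))  # 5 entries from 4 distinct editors
```

The same one-liner with a different threshold (32 seconds, one day) answers each of the questions above against a real log.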
Does the pic help? larronsicut fur in nocte 22:28, 1 February 2010 (UTC)
- What does "blocked editors" mean? "Current blocks" still fits on one screen... ħuman
00:16, 2 February 2010 (UTC)
- As many editors were blocked several times at RW, the 5910 blocks involved only 609 individual editors. larronsicut fur in nocte 07:35, 2 February 2010 (UTC)
- So 309 people received infinite blocks? Or a handful of people received multiple infinite blocks? How do blocks issued to force login factor in? Also, do we have the data for all time? I would imagine that the traffic slowdown slowed down CP blocks until the Colbert bump. --Opcn (talk) 03:03, 3 February 2010 (UTC)
- there were 309 infinite blocks: it's very probable that some were made for the same editor. I didn't check
- these are only blocks of editors, not of IPs. So the blocks issued to force login don't factor in at all.
- of course, as MC's HCM speeds up our blocks... larronsicut fur in nocte 09:53, 3 February 2010 (UTC)
- I had a quick look, and can't quite get the same figures as LArron (which is certainly my own failing), but quite a few of the infinite blocks are for MC, with the others spread around generally. Would need to do more work to see how many of those are still in place, but I would guess that not that many. For instance Ace got at least 9 infinite blocks, and I'm fairly sure he's still here ;) --Worm(t | c) 10:44, 3 February 2010 (UTC)
- I'm afraid it's a glitch on my side: blocks for negative amounts of time ("yesterday", "-314s", etc.) were counted as infinite: The number of blocks made explicitly for an infinite or indefinite period is a meager 134 - according to my numbers...
Here is a list of those who were blocked at least twice for infinity:
larronsicut fur in nocte 11:13, 3 February 2010 (UTC)
- and the most common combinations (infinite blocks)? Ace blocking MC (9), Goonie blocking MC(7) and Ace blocking Ace (3) ;) --Worm(t | c) 13:50, 3 February 2010 (UTC)
- Nice idea, I added a column to the table! larronsicut fur in nocte 14:30, 3 February 2010 (UTC)
Just for comparison: A list of those who were blocked at least three times in 2009 at CP (no, not only indefinitely) larronsicut fur in nocte 14:50, 3 February 2010 (UTC)
- I see a trend here: TK "doing Andy's bidness" of blocking prevaricators and troublemakers!
16:00, 3 February 2010 (UTC)
larronsicut fur in nocte 14:59, 3 February 2010 (UTC)
The blog that respects freedom of religion...
...doesn't seem to do so when they heard the news that the Air Force Academy will allow space for Wiccans to practice their religion. --Dark Paladin X (talk) 02:56, 2 February 2010 (UTC)
- Of course not - then it qualifies as "forcing your religion on someone else." Tetronian you're clueless 03:26, 2 February 2010 (UTC)
- Everyone's free to ~~wear sunscreen~~ be Christian equally. --Kels 03:38, 2 February 2010 (UTC)
- "We must coexist with evil, such as Wiccan beliefs"... and so must they... ħuman
03:43, 2 February 2010 (UTC)
- *Headdesk* It's funny to watch them blow minor things out of proportion, but other times... you just pity them. Not just them, but for all of humanity.
NorsemanCyser Melomel 04:43, 2 February 2010 (UTC)
- Conservapedians only support "freedom of religion" insofar as that means, "the State stays out of our wingnut churches," which in turn means, "the State does whatever the people in our wingnut churches say."
ListenerXTalkerX 04:51, 2 February 2010 (UTC)
- young fit women dancing naked every full moon, what's not to like? The demon summoning and hardcore demon sex might be off-putting. Hamster (talk) 05:10, 2 February 2010 (UTC)
- You seem a little confused, or joking; it is hard to tell. - π 05:14, 2 February 2010 (UTC)
- (EC) Paganism has to get a little more cultural clout before beautiful women will become pagans in volume, and seeing as how neo-pagans run screaming whenever someone mentions the word mainstream, this is a pipe dream.
ListenerXTalkerX 05:15, 2 February 2010 (UTC)
- For the record, one of the hottest girls at my high school was a Wiccan. And yes, we saw her naked all the time - the tool shed behind the theaters, the dugouts, etc... she's a UU now, with three kids... weird...
05:22, 2 February 2010 (UTC)
- Getting away from the CP hysteria (almost want to create a pic of a 'chaplain' with a pentagram on his collar just to give them a heart attack, but they don't need the help), this is a positive thing to me. It was only a few years ago that there was a big to-do at the AF Academy due to right-wingers trying to shove Christianity down the throats of a couple of non-Christians there, and the brass trying to blow it off. - Ravenhull (talk) 06:23, 2 February 2010 (UTC)
- Is that the one that Mikey Weinstein went so emphatically against?
ListenerXTalkerX 06:42, 2 February 2010 (UTC)
- I dabbled in Wicca for a while, until I realized I found it extraordinarily stupid. While I wasn't especially inclined to notice the women, I do remember that some were reasonably attractive, but I don't remember any as especially hot. (Word of explanation: Just because I am not attracted to women, it doesn't mean I can't appreciate that a woman is attractive.) MDB (talk) 12:05, 2 February 2010 (UTC)
- Agree Wicca is "extraordinarily stupid", and more than a tad intellectually dishonest, but it is a religion, and so in a secular country like the US is meant to be, its followers should have exactly the same rights as followers of other faiths. Coming back to the WIGO... I just love this nice bit of slippery slope argument... a true Jpatt classic...[1]--
20:27, 2 February 2010 (UTC)
- My description of Wicca as "extraordinarily stupid" was in no way meant to suggest that Wiccans don't deserve the same rights to practice their beliefs as do any followers of any other religion, and I'm sorry if my statement implied otherwise. MDB (talk) 12:28, 3 February 2010 (UTC)
- Wicca is a religion, ergo it is "extraordinarily stupid" by definition. But as has been said, if you're going to 'respect' certain religions, you've got to respect them all. Here in the UK, a 'Pagan Police Association' has recently started - good for them! If the extraordinarily stupid Muslim Police Association exists, why shouldn't they have a piece of the action too? I'm still hoping a Pastafarian Police Association will pop up soon... DeltaStarSenior SysopSpeciationspeed! 16:09, 3 February 2010 (UTC)
- Bunk; religions must stand or fall on their own merits, and Wicca has less going against it than creationism.
ListenerXTalkerX 16:25, 3 February 2010 (UTC)
- That's you being incredibly stupid in a nutshell. "Religion stands on its own merits! Wicca has nothing going against it!" Complete non-sequitur. You're an idiot, pseudoscientific, a paranoiac and a crank. — Sincerely, Neveruse / Talk / Block 16:28, 3 February 2010 (UTC)
- Note the complete lack of arguments in your mini-rant there. Note also your stuffing words in my mouth.
ListenerXTalkerX 16:38, 3 February 2010 (UTC)
- By all means, go ahead and list the merits of Wicca/Odinism/whatever. Note your disassociation from the burden of proof you just heralded. — Sincerely, Neveruse / Talk / Block 16:44, 3 February 2010 (UTC)
- I said that Wicca has less against it than creationism, by which I meant that the ideas of Wicca have not been blatantly falsified by 150+ years of geologic and biological science. There are arguments against Wicca, but they are primarily philosophical in nature, such as, "Wicca rips off every pagan tradition that doesn't run away fast enough," or, "We have laws of physics. Therefore, certain phenomena cannot be attributed to magic."
ListenerXTalkerX 16:50, 3 February 2010 (UTC)
- Note that you did not list any of the merits of Wicca. — Sincerely, Neveruse / Talk / Block 16:53, 3 February 2010 (UTC)
- I never said there were any.
ListenerXTalkerX 16:56, 3 February 2010 (UTC)
- You simply said it stood on them. — Sincerely, Neveruse / Talk / Block 16:59, 3 February 2010 (UTC)
- I also said it could fall on them, or the lack of them...
ListenerXTalkerX 17:00, 3 February 2010 (UTC)
- I'm sure you are both feeling the same way essentially. How about "Wicca has no merit whatsoever as an explanation of the world, but still, it is less batshit insane than creationism". There, you can be friends again now. — Pietrow ☏ 23:05, 3 February 2010 (UTC)
More inexplicable from Karibou
Latest Toon is not just weak, but really tremendously unfunny. "Shadow acquisition unit"? My, that's a baroque way of putting it. And he misses the big news: "Pawx T. Phil" predicted six more weeks of winter! Surely this must mean the game is up for the climate change denialist claque! MaxAlex Swimming pool 15:00, 2 February 2010 (UTC)
- You may have missed this story, which might make the latest toon make more sense. PETA has attempted yet another publicity stunt, suggesting that Punxsutawney Phil be replaced with an animatronic groundhog. MDB (talk) 15:06, 2 February 2010 (UTC)
- Perhaps his least funny one yet (though it's sort of hard to say, it is anti-funny, like most of his others). Really, Karajou, could you set your sights any lower? Making fun of PETA is like shooting fish in a barrel. DickTurpis (talk) 15:08, 2 February 2010 (UTC)
- Not an inexplicable joke, but a childish one. This is the sort of thing you'd see in a junior high newspaper, not something that makes the grand claims of trustworthiness that CP does. --Kels (talk) 15:09, 2 February 2010 (UTC)
- I kinda like the 50s-style robot, though. As for the joke, Karajou got stuck at "draw a funny robot". This one didn't need a speech bubble; it adds nothing. This is a problem with more of his cartoons: he thinks all cartoons should have a speech bubble, so he draws one. Then he has the problem of filling those bubbles without the ability to put something pertinent in there. So we end up with a robot-groundhog asking a real groundhog for a ? turnip ?? Internetmoniker (talk) 15:27, 2 February 2010 (UTC)
- At least the groundhog is cute. Vulpius (talk) 15:36, 2 February 2010 (UTC)
- I'm guessing the turnip is a reference to PETA opposing eating meat (but aren't groundhogs herbivores, anyway?) And the robot reminds me a bit of this famous robot. MDB (talk) 16:31, 2 February 2010 (UTC)
- ." The way-too-deep analysis cracked me up. — Sincerely, Neveruse / Talk / Block 16:52, 2 February 2010 (UTC)
- Tachikomas make a mockery of your primitive notions of robot design. Barikada (talk) 20:59, 2 February 2010 (UTC)
- To be fair, Tachikoma wheels are retractable (IIRC), so they can walk on their proper feet if that is better suited for the terrain (and the speed requirements). --Sid (talk) 21:21, 2 February 2010 (UTC)
- Maybe it's just me, but this is the first one I thought was funny. Maybe it's because his signature looks a bit like my old one (because of the grass growing on it)? ħuman
00:51, 3 February 2010 (UTC)
- I was under the impression that they just locked the wheels, but I could be mistaken. Barikada (talk) 08:59, 3 February 2010 (UTC)
- If they do replace Paux with a Faux Paux, then some ingenious kid somewhere will hack the thing. I'm just waiting for the gopher to poke out, look around, and say "You shall be assimilated. Resistance is Futile." to the assembled, now frightened masses. -- CodyH (talk) 09:56, 3 February 2010 (UTC)
- This toon is so last week Internetmoniker (talk) 10:30, 3 February 2010 (UTC)
- Possible that they just locked them. It's been a good while since I watched the series. --Sid (talk) 12:09, 3 February 2010 (UTC)
Those liberal atheists and their assumptions...
Andy's slow motion bullying of BMcP continues: "BMcP, on this site when someone states something as true, he should be able to explain why he thinks it is true. It's not enough to say the equivalent of "my assumptions are the same as that of liberals and atheists and I don't even know what those assumptions are." If you have no idea what the basis for a claim is (other than the equivalent of "liberals say so"), then please find out first, reconsider it with an open mind, and only then consider posting it."
The very possibility that a star can be 8000 years old is apparently a very radical (and liberal and atheistic) statement. His standard for proof is very interesting though, considering almost everything Conservapedia deems to be true is the equivalent of "World Nut Daily says so." Junggai (talk) 22:48, 2 February 2010 (UTC)
- The amazing thing is BMcP never claimed it was 8000 years old, he claimed it was light years away. Does Andy believe the Andromeda galaxy is 4004 light years away at most?
22:58, 2 February 2010 (UTC)
- Ahh, but you see, something 8000 light-years away would indeed mean that it is 8000 years old, because a true YEC believer doesn't believe in the Starlight Problem. Therefore, only a librul astronomer would discount the possibility that time-dilation designed™ by God makes all celestial objects in the mirror closer than they appear. BMcP is fucked. (BTW, your sig makes me hungry. Time to pull the Gruyere out of the fridge...) Junggai (talk) 23:06, 2 February 2010 (UTC)
- Mmm. Pork pie & Stilton for supper: AFK yummy
& honey(or marmalade) 23:08, 2 February 2010 (UTC)
(UI)Or BMcP can admit that the theory of relativity is within the assumptions and get over it, and since the theory of relativity is no longer correct, things can be assumed to travel at infinite speed (at least at some point in time), and as such the term "lightyear" is no longer meaningful. Q.E.D. ThiehEd Poor types in Chinese? 01:21, 3 February 2010 (UTC)
- Wrong! Relativity has nothing to do with that; just the term "light years" does. He's converting from parsecs, which are units of distance that don't depend on relativity but on simple geometry. If you accept that the distance from the Earth to the Sun is 150,000,000 km, the distances of the closer stars (i.e., this side of our spiral arm) follow from simple geometry, and nothing else.
02:27, 3 February 2010 (UTC)
- Geo-metry is only useful for measuring the Earth. For anything else it iz librul deseets. ħuman
02:53, 3 February 2010 (UTC)
- I think the "assumption" Andy wants admitted here is the idea that lightspeed is historically constant. BMcP risks losing credibility if he continues denying that Jesus can make light travel at whatever speed He wants (in addition to changing the rates of erosion, radioactive decay, genetic mutation, continental drift, etc etc etc). WodewickWelease Wodewick! 06:42, 3 February 2010 (UTC)
- I know it's been brought up before, but I really can't get my head around Andy's way of thinking with regards to miracles, or displays of divine power. His definition seems to be "something we can't do, but still within the confines of nature, i.e. they conform to natural rules", which is why he's so obsessed with enforcing this idea of historical consistency of the speed of light etc (with Jesus and the bible being the baseline). Why can't he accept the idea that god and/or Jesus, being *freakin' deities*, can change the laws of reality as they see fit? Why does he try to pick and choose what parts of science he accepts in order to make sure god and Jesus aren't breaking the laws of "reality" (his reality, that is), rather than saying "god can do what he wants because he's god" like pretty much anyone else does? X Stickman (talk) 09:31, 3 February 2010 (UTC)
- That's not just Andy, it is the entire field of creation "science" that does this type of cherry picking. Whenever they can "prove" something they use "science" (like the laws of thermodynamics for instance), whenever they can't they go for special pleading. Internetmoniker (talk) 09:44, 3 February 2010 (UTC)
- Fine Cheeses & Wine has it right above... There is no assumption about relativity, or the speed of light, in finding the distance to the Crab. The angular velocity is measured geometrically, and the radial velocity is based on the red shift (this assumes that a shift in a spectral line is caused by radial motion, and I've never heard anyone dispute that). Everything else is geometry, and the distance is in pure distance units (meters, miles, what have you... light year is also a pure distance unit, with no time involved, but its name confuses people). The only way that the speed of light comes into this is in working out the *age* of the nebula. It's possible that BCmP hasn't realised that when he talks about distance, Ashlafly thinks age. Why isn't Ashlafly concerned about large distances to other objects? Simply because we know the exact year the Crab nebula was formed, and combining this with distance - and the assumption of a fixed speed of light - that gives almost exactly the year the Crab was formed (sorry, couldn't bring myself to say "created") --Fawlty (talk) 10:13, 3 February 2010 (UTC)
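The geometry described in the post above can be put into rough numbers. A sketch follows; every figure is an illustrative placeholder, not a measured value — the point is the method (radial speed from the spectral shift, angular expansion rate from geometry, distance from their ratio):

```javascript
// Rough sketch of the expansion-parallax distance estimate described above.
// All numbers are illustrative placeholders, not measured values.
const SECONDS_PER_YEAR = 3.156e7;
const LIGHT_YEAR_METRES = 9.461e15;

// Radial expansion speed inferred from the red/blue shift (m/s, illustrative).
const radialSpeed = 1500e3;

// Angular expansion rate measured geometrically (arcsec/year, illustrative).
const arcsecPerYear = 0.22;
const radiansPerYear = (arcsecPerYear / 3600) * (Math.PI / 180);

// For small angles, distance ≈ radial speed / angular expansion rate.
const metresPerYear = radialSpeed * SECONDS_PER_YEAR;
const distanceLightYears = metresPerYear / radiansPerYear / LIGHT_YEAR_METRES;

console.log(Math.round(distanceLightYears)); // on the order of a few thousand light years
```

Note that no speed-of-light assumption enters the distance itself; it only enters when converting distance into an age.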
- It's a small wonder he hasn't brought up the naming convention yet. Why is it taking him so long? — Pietrow ☏ 23:06, 3 February 2010 (UTC)
- @Fawlty: If things (particularly light) can travel at infinite speed then the minimum age of any celestial objects can no longer be determined by distance from it (or any practical method since we can't yet take samples of anything there). therefore the linkage is decorrelated if speed of light is assume to be finite. ThiehAsk me for relationship advice 23:42, 3 February 2010 (UTC)
It's been a short week...[edit]
We already got a new toonimg again. And Kara already returns to one of his classic themes: "It snows, thus global warming doesn't exist." --Sid (talk) 12:11, 3 February 2010 (UTC)
- Wait, did Kara actually just have a setup followed by a punchline? "Cold winter day == no global warming" aside, did Kara actually just do humor? HumanisticJones (talk) 13:17, 3 February 2010 (UTC)
- Humor? No, no. Not at all. DickTurpis (talk) 13:20, 3 February 2010 (UTC)
- Worse, look at some of the lines - I think his felt-tip pen is running out. Is there a fund somewhere so that we can contribute to a new Sharpie for him? MaxAlex Swimming pool 13:31, 3 February 2010 (UTC)
- I'm with HumanisticJones -- if you accept their basic assumption that the harsh winter disproves global warming, the idea of Al Gore getting bent out of shape by some kids wanting to shovel his driveway isn't a bad joke. Not really hilarious, but its kinda funny at least. Its the underlying premise that's flawed, not the joke itself. MDB (talk) 15:00, 3 February 2010 (UTC)
- Yes, there is an actual joke in there. Global warmists get amusingly angry when confronted with the new reality : snow. He did use it before though. Internetmoniker (talk) 16:24, 3 February 2010 (UTC)
- It's effin' blizzarding here now. Just back from shop (cat food & Mars bars) & I was head to foot white. yummy
& honey(or marmalade) 17:15, 3 February 2010 (UTC)
- I believe his cartoons would improve tremendously if his pen completely ran out. Vulpius (talk) 18:43, 3 February 2010 (UTC)
CP is now pro-gambling[edit]
It seems to me that CP has never been a fan of gambling.
Until now.
Since Obama has suggested not "blow(ing) a bunch of cash in Vegas", they're all for gambling. MDB (talk) 14:23, 3 February 2010 (UTC)
- CP will support anything as long as it's against Obama. If Obama said "Smoking is bad!", Andy and TK would be crowing about how Evilbama tries to destroy the poor, poor American tobacco industry. --Sid (talk) 18:47, 3 February 2010 (UTC)
- They sure are classy. Here is another one for you TK. Internetmoniker (talk) 17:11, 3 February 2010 (UTC)
- Stealing this for Fun:Conservafake!--
talk
21:10, 3 February 2010 (UTC)
- Heh, just this afternoon I was wondering whether I should start making posters of Andy with the text "Child Molester", just to show them how childish they're being. But then I realized that would be a more truthful poster than theirs, since Andy does molest children on an intellectual level. --GTac (talk) 18:37, 3 February 2010 (UTC)
- I think just one that says "Gerbiller" would work well. There are rumors this one is true. DickTurpis (talk) 18:43, 3 February 2010 (UTC)
Question about Conservapedia's problems with Einstein's Theory of Relativity[edit]
Hey, I just found this site. I don’t if this is the right place to ask this. The people of Wikipedia told me I should ask my question here. Anyway, I’m familiar with objections creationists & conservatives make against global warming, evolution, the big bang, & the scientific established age of the earth, but I never knew these people had objections against Relativity too, so this is pretty new for me & also, I’m not familiar enough with the theory itself. Discovering Conservapedia was the first time I’ve seen & got exposed to objections against the Theory of Relativity. Are there any responses against Conservapedia's criticisms of Relativity? Are any of their claims against Relativity valid? Here is the the link. Thanks for your help.— Unsigned, by: 71.98.169.172 / talk / contribs
- Andy's brother Roger (Rschlafly on CP) has even told Andy his arguments are bullshit. It's not CP's criticisms of relativity, but Andy being the ignoramus on matters outside his scope, like normal. --Irrational Atheist (talk) 22:35, 1 February 2010 (UTC)
- Andy has a scope? Jaxe (talk) 22:40, 1 February 2010 (UTC)
- Take your time to read:
- There you'll find loads of knowledgeable people running in vain against a wall of ignorance... larronsicut fur in nocte 22:41, 1 February 2010 (UTC)
- (EDIT CONFLICTS!):The objections to relativity are all from one editor, namely the site owner Aschlafly. He believes strongly that relativity leads to moral relativism; primarily because they both start with the root word relative. He has not provided proof that the two are related. His entire argument stems from the Genetic Fallacy, He thinks that since the one is related to the other (in reality this is false) that since relativism is wrong (which he has not shown) that relativity must be false (again, this is a fallacy). Every other editor on the article wanted to see it changed, including many conservapedia Sysops and his own daughter and Brother Roger, both of whom know more about physics than he does. Andy considers himself an expert in all things, he is not. --Opcn (talk) 22:42, 1 February 2010 (UTC)
- What I don't understand is if Andy dislikes relativity so much, is why the cp:quantum mechanics page seems (to my limited understanding of subject) fairly accurate, when quantum mechanics has even more weirdnesses that relativity. I found (but forgot to bookmark) a page on WP that states that relativity precludes an omni-potent deity from being able to predict the future state of system; perhaps that is why Andy doesn't like relativity. However, quantum mechanics also states that it is impossible to predict exactly what will happen in the future; it can only predict the chances of each possible outcome. The classic example of this is the wp:double-slit experiment, which has been repeated with macro-molecules like buckyballs. (Does the ball have the wave-nature?) CSMiller 23:08, 1 February 2010 (UTC)
- Weirdness has nothing to do with it, it is all the association in Andy's head. he is convinced, that's all. --Opcn (talk) 23:35, 1 February 2010 (UTC)
- He's done an Andy: made a pronouncement totally off the top of his head and now has to stand by it no matter what. He does it all the time. yummy
& honey(or marmalade) 23:42, 1 February 2010 (UTC)
Thanks for your responses. By the way, I really liked the debate between Johanan Raatz & Andrew Schlafly on the Archive 3 of Conservapedia’s Theory of Relativity talk page. That made it one of the best wiki talk pages I’ve ever read. One thing I noticed while I was reading that debate was that Andrew Schlafly doesn’t seem to realize is that most “creation scientists” actually support & embrace the Theory of Relativity (I went to the Answers in Genesis website & looked up the Theory of Relativity) although I don’t think they don’t understand it well in my opinion. I didn't understand their explanations about something called Dr. Humphreys’s White Hole Cosmology & John Hartnett’s Cosmological Relativity. They sound very wacko to me. — Unsigned, by: 71.98.169.172 / talk / contribs
- unsigned said the magic word of the day , here's your Gerbil... Dont let Andy see it
- @BON: Andy will nitpick on you via "although I don’t think they don’t understand it well in my opinion"... beware if you still edit CP. K61824Looking for potion recipes 01:39, 2 February 2010 (UTC)
CP's denial of relativity is certainly a wonder to behold. As far as I know, Andy is the only relativity denialist, unlike other topics, such as the axiom of choice and complex numbers, in which various other sycophants have joined in the fun. Andy denies relativity at many levels. The confusion with "moral relativism" only scratches the surface of his craziness. While the connection with moral relativism is something that conservatives could rightly be alarmed about, Andy goes far beyond that. Even in discussions in which it is completely clear that the topic is just the technical correctness, he is batshit insane. He has some other non-technical problems as well:
- The confusion with moral relativism is well documented.
- He claims that the study of relativity can lead people away from reading the Bible. In one discussion he asked (KSorenson, I think it was) whether she would still support relativity if she knew that its study led people to read the Bible less. He has argued with people, during discussions of the technical correctness of relativity, how much time they spend reading the Bible. (I'm only saying this because you're a newcomer. Those of us that have been around for a while know all about this kind of behavior.) And, of course, he claims a negative correlation between Bible reading and studying relativity in college, these statistics being researched with his usual thoroughness.
- On one occasion, I don't have the reference off the top of my head, he claimed that (left-wing, of course) professors teach relativity for the purpose of leading people away from the Bible. Maybe it's just me, but it seems an awful lot of effort for such a dubious payback. All the books, courses, lectures, seminars, conferences, space probes, and particle accelerators, just to get people to stop reading the Bible? The connection isn't well established, nor the motive. And there might be more cost-effective ways, like firebombing some churches maybe? (The preceding was a joke!)
- He is in utter denial about the technical aspects. He denies that relativity explains the anomalous precession of the perihelion of Mercury, and got in a big fight with KSorenson. He refused to understand the numbers right before his eyes. KSorenson left shortly after this, along with MarkGall and PatrickD. KSorenson's departure was a great loss for CP; She, along with PatrickD, was writing up a wonderful article on general relativity. (PatrickD seems to be taking this material over to Wikiversity.)
- He insists that "nothing useful has ever come out of relativity", unlike quantum mechanics. When called out on the comparison, he moves the goalposts relentlessly. He claims all of electronics and computers as trophies for QM, presumably because QM explains the behavior of the electrons and atoms that make up transistors and such, but won't accept the role of relativity in explaining the magnetic fields that make disk drive motors work.
- He insists that relativity has nothing to do with the correct operation of the GPS satellite system, and gets into nit-picking arguments over whether the technicians uploading correction data for the relativistic gravitational time-shifting understand relativity. If they don't personally understand general relativity, then relativity has nothing to do with GPS.
- He cites with great pride the few crackpot scientists that have denied relativity, including one who was involved in some Michelson-era measurements of the speed of light.
- He cites a magazine article about an experiment comparing the comparative accuracy of relativity and the alternative Brans-Dicke theory of gravity, failing to note that the article explained that the experiment showed that Einstein was right and Brans & Dicke were wrong.
- He carries on and on about the politicization of the scientific community, at the same time complaining about the leftward tilt of the Nobel prize committee and crowing over the fact that only one Nobel prize was ever awarded directly for relativity (Hulse/Taylor, 1993.) And he denigrates their research, pointing out that they are no longer conducting their experiment. (Galileo is no longer dropping cannonballs off the Leaning Tower of Pisa. Does that mean classical physics is wrong?)
- He claims that it was pure (left-wing, of course) politics that caused Brans and Dicke not to get the Nobel prize, and caused their theory not to prevail.
- He quotes Isaac Newton's statement "I feign no hypotheses" in support of Newtonian gravity rather than Einsteinian, without having any understanding of what Newton meant.
- He confuses the (local) curvature of spacetime near gravitating bodies with the (very nearly) flat geometry of the universe as a whole.
- He has this funny notion that the Bible "proves" that relativity is wrong because of the account of the Centurion's slave (some say it was his gay lover, but I digress) in John 4:46-54. Apparently Jesus performed a faith-healing of someone who was not immediately present, and the biblical account requires that the healing effect took place instantly. There is no explanation of how the healing effect could be measured to microsecond accuracy, but I admit that I am not a biblical scholar.
- He has also gotten into some utterly bizarre arguments with people about objects having different relativistic mass for accelerations in the direction of their motion vs. accelerations transverse to it. No amount of just explaining how the equations work were able to help him.
Andy's brother Roger has not drunk the anti-relativity Kool-Aid, and occasionally spars with Andy on the technical aspects of this. However, Roger takes a back seat to no one in denying Albert Einstein's role in it. He repeatedly edits the various articles to give major credit to everyone except Einstein. In one case he cites a paper by David Hilbert (I think) that predates Einstein's major announcement of general relativity, concealing the fact that the paper was co-authored by Einstein. Of course many people contributed to relativity, but Einstein's role was central, and all sensible people know that.
An explanation for the Schlafly brothers' disdain for relativity and Einstein is hard to come by. His liberal, socialist, internationalist leanings are of course major factors. His religion may also have played a role.
See also Conservapedia:Conservapedian relativity and Conservapedia:Einstein and Relativity, Censorship of.
Gauss (talk) 04:39, 2 February 2010 (UTC)
- Einstein (liberal) > Theory of Relativity >>>>> Moral Relativism >>>> Evil Liberal Things
is simple really , + Andy the engineer seems to hate Maths Hamster (talk) 04:59, 2 February 2010 (UTC)
- ) Deny Einstein
- ) ????????????
- ) Profit
Opcn (talk) 09:18, 2 February 2010 (UTC)
- Oh man, I forgot about the argument that "Jesus healed someone instantly over a distance, hence relativity is wrong"! So many things wrong with that one, like HOW WOULD THOSE BOTP OBSERVERS WHO WROTE IT DOWN KNOW WHAT HAPPENED AT THE EXACT SAME TIME AT TOTALLY DIFFERENT PLACES? --GTac (talk) 09:49, 2 February 2010 (UTC)
- Alternatively, couldn't Jesus have violated relativity? I mean, he is, y'know, God and all, and it was a miracle. This is just one more example of the fundamentalist trend to limit God with their hatred of science, just like they limit God by insisting that He could only create life that doesn't change. MDB (talk) 11:56, 2 February 2010 (UTC)
Andy's position on the Theory of Relativity (the theory of relativity is wrong and Einstein didn't invent it) is even more obscure than the view of the Deutsche Physik in the 1930s (the theory of relativity is wrong because Einstein invented it) or the 1940s (the theory of relativity is right but don't mention Einstein). The latter one is somewhat Roger's opinion - he doesn't like Einstein at all. larronsicut fur in nocte 13:02, 4 February 2010 (UTC)
As new user interface component frameworks are created and old frameworks are replaced with emerging technologies, methods for styling those components must change with them. Long gone are the days of creating a simple HTML component and importing a simple CSS file with corresponding class names. Concerns such as style leakage, local DOM manipulation and theming are driving changes to the approaches we take. With developer IDEs, the JavaScript language itself, and extensions such as TypeScript becoming more intelligent and offering auto-completion, should our styling approaches not give us the same?
The olden days
In simple forms, a component is just a block of HTML with some corresponding JavaScript and CSS.
<div class='button'>
  <span class='label'>Click Me</span>
</div>
.button {
  display: block;
  background: blue;
}

.button .label {
  color: white;
}
There are many problems with this approach in this form. The first is that any other styles for a label class will likely affect our button label. The class names are brittle, as they are manually entered in both the HTML and the CSS, and the colours are hardcoded, making any changes or theming more difficult to maintain. Sure, you could tidy this up by using something like a BEM technique, but this really treats the symptoms of the issue rather than changing the underlying approach.
More recently, styling and theming an app was often achieved using a framework such as the highly popular Bootstrap and adding extra classes / structure to your components to bring them inline with the framework’s guidelines. Other popular approaches such as Material and Semantic have since been released and largely have implementations available for use within React, Polymer, and other frameworks.
Improvements could be made to the approach of base CSS using CSS preprocessors such as Less, Stylus, and Sass. These introduced a proprietary format of CSS including popular features such as mixins, variables, and vendor prefixing. The downside of using preprocessors is that developers must learn this format and the end result is still CSS, without solving many of the same problems including leaking styles across components.
What are we trying to solve?
So what problems are modern component styling approaches really trying to solve?
Style leakage
A common problem with using class names in global CSS is that you can have style clashes. If you have more than one node with the same class name, these nodes will receive the same CSS rules. You can strive to avoid this problem by nesting your CSS selectors and by following the principles of BEM, but this can often become very complicated.
Changes to designs
Changes to your designs when using vanilla CSS can be a challenge, as you will likely end up using find and replace to change color, padding, margin and other properties across your CSS files. Preprocessors provide variables to make this easier, but then you are introducing a new proprietary CSS language into your codebase.
Theming
Vanilla or preprocessed CSS can be challenging to theme. I’ve often found myself going down the path of writing overly complicated CSS rule definitions with highly specific selectors to override the base styles of a component, and this quickly gets messy, especially when the toolkit you are theming changes its style definitions in the next release and you have to use the browser debugging tools to inspect the CSS and identify any changes impacting your UI components.
Class name mistakes
Keeping track of how class names are spelt, and which class names apply particular styles, can be frustrating and inefficient. One small spelling mistake somewhere in your codebase can stop styles from being applied, and tracking down the cause can be time-consuming as this is not reported as an error.
The new way
There are many approaches used in modern component frameworks. These can largely be split into CSS, CSS-in-JS and inline styles. We’ll steer clear of the latter option and concentrate on those that generate actual CSS stylesheets.
So let’s dive in.
Polymer Styling
Polymer uses the cutting edge of available browser functionality, largely using polyfills to cover the gaps in browser support. As such, the framework uses the same approach for its styling. Polymer styles are written using proposed future CSS syntax and make heavy use of css-custom-properties, which allow you to create DOM-level scoped variables that can be used to alter the CSS that is applied to a given element.
<style>
  :root {
    --title-color: green;
  }
  .warning {
    --title-color: red;
  }
  .title {
    color: var(--title-color);
  }
</style>

<span class='title'>Green Title</span>
<span class='title warning'>Red Title</span>
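The scoped lookup behind css-custom-properties can be sketched in plain JavaScript. This is an illustration of the cascade behaviour, not Polymer's implementation: the innermost scope that defines a property wins, mirroring how .warning overrides :root above.

```javascript
// Sketch (not Polymer's code) of scoped custom-property resolution:
// the innermost scope that defines the property wins.
function lookupCustomProperty(name, scopes) {
  // `scopes` is ordered innermost first, like walking up DOM ancestors.
  for (const scope of scopes) {
    if (name in scope) {
      return scope[name];
    }
  }
  return undefined;
}

const rootScope = { '--title-color': 'green' };
const warningScope = { '--title-color': 'red' };

console.log(lookupCustomProperty('--title-color', [rootScope]));               // "green"
console.log(lookupCustomProperty('--title-color', [warningScope, rootScope])); // "red"
```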
Polymer utilises this approach for both its component styling and its theming. A theme file will typically include a theme class wrapping a number of css-custom-properties that are then used to manipulate the visible styles at run time. The challenge with this approach is that css-custom-properties cannot be fully polyfilled at run time, as any polyfill must account for the structure of the DOM when applying the correct CSS properties and variables to each node.

To get around this issue, Polymer has a comprehensive build-time polyfill for css-custom-properties which has knowledge of the application’s DOM structure and applies calculated variables to the generated output. At this time, Polymer has deprecated the use of external stylesheets, so all styles must be written within the Polymer element template.
Styled components
Styled components for React uses tagged template literals to create styles. They can be used to create predefined components representing button, a and other HTML elements, or to add styles to any standard React component. A CSS stylesheet is created with generated class names, which are passed to the component via the className property.
import styled from 'styled-components'

const Button = styled.button`
  background: blue;
  display: block;
  color: white;
`;
These style tags can return functions that receive component properties. For example, you could pass a property to your component to indicate that it is a primary button, and the styled.button template would use this to adapt the styles accordingly.
import styled from 'styled-components'

const Button = styled.button`
  background: ${props => props.primary ? 'green' : 'blue'};
  display: block;
  color: white;
`;

return (
  <Button primary>Primary Button</Button> // Green button
);
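The tagged template literal mechanism that styled-components builds on can be sketched in plain JavaScript. The `css` helper below is illustrative only, not the styled-components API: it shows how the static string parts are interleaved with interpolated functions that resolve against the component's props.

```javascript
// Illustrative sketch of a tagged template literal building a CSS string.
// `strings` holds the static parts; `interpolations` holds the ${...} values.
function css(strings, ...interpolations) {
  // Return a function that resolves each interpolation against props.
  return (props) =>
    strings.reduce((out, str, i) => {
      const interp = interpolations[i];
      const value = typeof interp === 'function' ? interp(props) : interp ?? '';
      return out + str + value;
    }, '');
}

const buttonStyles = css`
  background: ${(props) => (props.primary ? 'green' : 'blue')};
  color: white;
`;

console.log(buttonStyles({ primary: true }).includes('green')); // true
```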
Theming of styled-components is achieved using a <ThemeProvider> component wrapper which takes a theme property. The theme can provide variables and rules to be used within the component.
import styled from 'styled-components'

const Button = styled.button`
  background: ${props => props.theme.buttonBackground};
  display: block;
  color: white;
`;

const theme = {
  buttonBackground: 'green'
};

return (
  <ThemeProvider theme={theme}>
    <Button primary>Primary Button</Button> // Green button
  </ThemeProvider>
);
jsxstyle
jsxstyle aims to create display components that can wrap other components. These display components can take the form of block, inlineBlock, flex and other typical values for the CSS display property. Each display component accepts properties that represent CSS attributes, which are in turn used to style its contents. The built code inserts stylesheets into the DOM with unique CSS class names. This avoids style leakage and enables rules to be reused between components that share styles.
import { Block, Inline } from 'jsxstyle';

return (
  <Block backgroundColor='blue'>
    <Inline color='white'>Blue Button</Inline>
  </Block>
);
Glamorous
PayPal’s Glamorous aims to build upon the ideas of both styled-components and jsxstyle. Glamorous provides a function that receives styles as a JavaScript object, as well as providing a collection of components that can be used with property styles. The latter approach allows you to create styled components without having to give them a name.
import glamorous, { Button } from 'glamorous'

const MyButton = glamorous.button({
  backgroundColor: 'blue',
  color: 'white'
});

return (
  <MyButton>Named Button</MyButton>
  <Button backgroundColor='blue' color='white'>Anonymous Button</Button>
);
The benefit of not having to provide a name is that this avoids having to add a placeholder HTML element just to provide a reference for a group of items. Inline styles allow you to apply styles directly to a node without naming it in that manner. Glamorous’s approach to allowing similar inline styles (that turn into CSS) to be applied directly to an anonymous node / element provides the same benefit.
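The core transformation these CSS-in-JS libraries perform — turning a JavaScript style object into a CSS rule under a generated class name — can be sketched as below. This is an illustration of the general technique, not the Glamorous implementation, and the class name is a made-up placeholder.

```javascript
// Illustrative sketch: turn a JavaScript style object into a CSS rule
// under a (here, supplied) class name, converting camelCase property
// names to their kebab-case CSS equivalents.
function toCssRule(className, styleObject) {
  const body = Object.entries(styleObject)
    .map(([prop, value]) => {
      // camelCase -> kebab-case, e.g. backgroundColor -> background-color
      const cssProp = prop.replace(/[A-Z]/g, (c) => '-' + c.toLowerCase());
      return `  ${cssProp}: ${value};`;
    })
    .join('\n');
  return `.${className} {\n${body}\n}`;
}

console.log(toCssRule('btn-x1', { backgroundColor: 'blue', color: 'white' }));
```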
Theming in Glamorous is provided via a <ThemeProvider>, much like styled-components, but it also allows a theme to be directly injected into a component. Themes can provide both variables and styles to components.
import glamorous, { ThemeProvider } from 'glamorous'

const myTheme = {
  button: {
    backgroundColor: 'blue',
    color: 'white'
  }
};

const ThemeableButton = glamorous.button((props, theme) => ({
  ...theme.button
}));

return (
  <ThemeProvider theme={myTheme}>
    <ThemeableButton>Themed Button</ThemeableButton>
  </ThemeProvider>
);
CSS in JS (JSS)
CSS in JS uses CSS directly within your JavaScript code. CSS class names form the top-level object keys and each nested object contains the CSS rules. This pattern is used within React and React Native and allows styles to be combined using an array.
// React Native example
import { StyleSheet, Button } from 'react-native';

const styles = StyleSheet.create({
  button: {
    background: 'blue',
    color: 'white'
  },
  bold: {
    fontWeight: 'bold'
  }
});

return (
  <Button style={styles.button}>Native Button</Button>
  <Button style={[styles.button, styles.bold]}>Bold Button</Button>
);
In several ways, this reminds us of the early Netscape CSS predecessor, JavaScript StyleSheets (JSSS).
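The array-combining behaviour shown above can be sketched as a simple flatten: later entries in the array take precedence over earlier ones. This is an illustration of the precedence rule, not the React Native implementation.

```javascript
// Sketch of flattening an array of style objects: later entries win,
// mirroring the precedence of style={[styles.button, styles.bold]}.
function flattenStyles(styles) {
  return Object.assign({}, ...styles);
}

const button = { background: 'blue', color: 'white' };
const bold = { fontWeight: 'bold', color: 'yellow' };

console.log(flattenStyles([button, bold]));
// color resolves to 'yellow' because `bold` comes last
```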
React Themeable
React themeable is an attempt to normalise the styling and theming of React components. The idea is that all third-party components should adopt this approach to level out the inconsistencies of styling and theming. It is highly versatile and allows styles to be written using css-modules, react style, radium and plain CSS.
React components such as react-autosuggest make use of react-themeable and ship with zero styles of their own.
React-themeable provides a single function, themeable, which accepts a theme property and returns a function that may be used to decorate the relevant nodes. The returned function deals with whether the theme provided is class or style based, and automatically sets up the appropriate attributes.
import themeable from 'react-themeable';

render() {
  const theme = themeable(this.props.theme);
  return (
    <div {...theme(1, 'button')}>
      <span {...theme(2, 'label')}>Themed Button</span>
    </div>
  );
}
The theme for this component must now provide classes or styles for button and label. This approach is highly effective because it gives the component author control over which nodes can receive styles and which cannot, whilst leaving the component user free to use whatever styling approach and technology they see fit.
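The class-or-style dispatch described above can be sketched as follows. This is an illustration of the idea, not react-themeable's actual code: a string entry in the theme becomes a className attribute, while an object becomes an inline style attribute.

```javascript
// Illustrative sketch of the class-vs-style dispatch: a string theme
// entry becomes a className, an object becomes an inline style.
function themeProp(entry) {
  return typeof entry === 'string'
    ? { className: entry }
    : { style: entry };
}

console.log(themeProp('button'));           // { className: 'button' }
console.log(themeProp({ color: 'white' })); // { style: { color: 'white' } }
```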
CSS Modules
CSS modules allows you to write locally scoped class names and animation names using plain CSS. The CSS files are then imported into your JavaScript modules and the classes provided are then used to decorate your DOM nodes. When importing a CSS module, it exports a JSON object with mappings from the specified class names to the localised class names it has generated.
This means that the developer does not need to nest CSS class names in order to achieve style encapsulation. When using CSS modules, you should refrain from using id or tag selectors, as these cannot be localised by the build process.
When you pair CSS modules with typed-css-modules within a TypeScript project, you are able to get the benefit of Intellisense/auto-completion of CSS class names.
/* from this */
.button .label {
  color: white;
}

/* to this */
.label {
  color: white;
}

/* after build */
.module_name_label_35j2h3g4 {
  color: white;
}
import css from './style.css' // returns a map of class names

return (
  <button className={css.button}>
    <span className={css.label}>My button</span> // span will receive the built class name
  </button>
);
PostCSS and PostCSS-cssnext
PostCSS enables you to write cutting-edge CSS and compiles it down to a format that the browser can use, in much the same way as Babel allows you to write ES6+ code for a wide range of browsers.
PostCSS has a vast library of plugins available to enable various features. PostCSS-cssnext is one of the most useful plugins available, as it allows you to use modern CSS specification features without having to worry about browser support. These include color functions, rule nesting, css-custom-properties and more. PostCSS-cssnext also comes bundled with autoprefixer, which alleviates the need to add vendor prefixes to your CSS code and maximises browser support.
:root {
  --primary-color: green;
}

.button {
  background: var(--primary-color);

  &:hover {
    background: color(var(--primary-color) a(40%));
  }
}
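The build-time resolution of css-custom-properties can be sketched as a textual substitution of each var() reference with its defined value. This is an illustration of the idea, not PostCSS itself, and it deliberately ignores fallbacks and scoping.

```javascript
// Sketch (not PostCSS) of build-time custom-property resolution:
// substitute each var(--name) with its known value, leaving unknown
// references untouched.
function resolveCustomProperties(cssText, variables) {
  return cssText.replace(/var\((--[\w-]+)\)/g, (match, name) =>
    name in variables ? variables[name] : match
  );
}

const out = resolveCustomProperties(
  '.button { background: var(--primary-color); }',
  { '--primary-color': 'green' }
);
console.log(out); // ".button { background: green; }"
```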
Dojo 2
Dojo 2 aims to provide a typesafe styling and theming approach using the most recent technology available. With a robust widget authoring framework, it’s very important that the theming system is baked into the widget creation system. Widget theme files are authored as css-modules and compiled using PostCSS and PostCSS-cssnext. This allows widget authors to write CSS files without having to worry about cross-browser compatibility or style leakage, and without using a preprocessor language. Each widget has its own style file with a .m.css file extension to mark it as a css-module to the build system. Common module files for icons and app-level styles can be imported into your widget file, and the classes provided are used to decorate widget nodes.
css-custom-properties are used to provide CSS variables to the widgets, and these values are computed at build time so that browser compatibility is maximised. A variables.css file is included with the @dojo/widgets package and can be imported into client CSS files to allow third-party developers to use and extend the Dojo 2 look and feel in their own widgets.
Dojo 2 provides a class-level function, classes, which is used to keep track of CSS classes being added to and removed from the rendered virtual DOM, and to allow themes to be used.
import * as css from './styles/myWidget.m.css';

return v('div', {
  classes: this.classes(css.root)
}, [
  v('span', {
    classes: this.classes(css.label)
  })
]);
In order to theme a Dojo 2 widget you must pass it a theme object. The theme object can provide the theme for multiple widgets, keyed by their widget name. Dojo 2 widgets from @dojo/widgets are keyed with a dojo- prefix to avoid naming clashes. Themes can target the class names passed to this.classes. For example, to change the span element’s appearance from the above example, a theme should provide an alternative label class.
.label { color: red; }
// import theme file
import * as myWidgetCss from './themes/myTheme/myWidget.m.css';
import MyWidget from './widgets/myWidget';

const theme = { myWidget: myWidgetCss };

return w(MyWidget, { theme }); // will have a red label
Summary
JavaScript has come a long way and it is great to see that styling, theming and uses of CSS are also progressing quickly. Frameworks like Dojo 2 and Polymer are making the most of the advances in CSS specifications for css-custom-properties and localisation of class names, which is great for developers who like to write CSS. CSS in JavaScript approaches such as Glamorous and react-styles are proving very popular in the React world due to the auto-completion/Intellisense that they provide within the IDE, not to mention the options they provide for frameworks like react-native.

css-modules and TypeScript are a great combination for developers who like their styles to be in CSS files (where they should be!), but still want the Intellisense that CSS in JavaScript approaches provide. There are many options available to help you provide better styling for your applications and, most importantly, stay away from the !important declaration!
https://www.sitepen.com/blog/2017/08/17/state-of-modern-component-styling/
A columnar fork of Jawn with added backend support for CSV. The distinction between "columnar" and its asymmetric opposite, "row-oriented", is in the orientation of data structures which you are expected to create in response to the event stream. Jawn expects a single, self-contained value with internal recursive structure per row, and its Facade trait is designed around this idea. Tectonic expects many rows to be combined into a much larger batch with a flat internal structure, and the Plate class is designed around this idea.
Tectonic is also designed to support multiple backends, making it possible to write a parser for any sort of input stream (e.g. CSV, XML, etc) while driving a single Plate. At present, both CSV and JSON are supported, and we have plans to support XML in the future.
Finally, Tectonic implements a form of fast skipping based on signals returned from the user-supplied Plate implementation driving a particular parse. In this way, it is possible to achieve Mison-style projection pushdown into the parse process. At present, skip scans do not compile down to vectorized assembly instructions (SIMD), but it is theoretically possible to achieve this, despite sitting on top of the JVM.
All of these differences have led to some relatively meaningful changes within the parser implementation. Despite that, the bulk of the ideas and, in some areas, the vast bulk of the implementation of Tectonic's JSON parser is drawn directly from Jawn. Special heartfelt thanks to Erik Osheim and the rest of the Jawn contributors, without whom this project would not be possible.
Tectonic is very likely the optimal JSON parser on the JVM for producing columnar data structures. It is definitely the optimal JSON parser for producing columnar structures when you are able to compute some projections or row filters in advance, allowing skipping. When producing row-oriented structures (such as conventional JSON ASTs), it falls considerably behind Jawn both in terms of performance and usability. Tectonic is also relatively opinionated in that it assumes you will be applying it to a variably-framed input stream (corresponding to Jawn's AsyncParser) and does not provide any other operating modes.
Usage
libraryDependencies += "com.slamdata" %% "tectonic" % <version>

// if you wish to use Tectonic with fs2 (recommended)
libraryDependencies += "com.slamdata" %% "tectonic-fs2" % <version>
If using Tectonic via fs2, you can take advantage of the StreamParser Pipe to perform all of the heavy lifting:
import cats.effect.IO

import tectonic.json.Parser
import tectonic.fs2.StreamParser

// assuming MyPlate.apply[F[_]] returns an F[Plate[A]]
val parserF = Parser(MyPlate[IO], Parser.ValueStream)   // assuming whitespace-delimited json

val input: Stream[IO, Byte] = ...
input.through(StreamParser(parserF))   // => Stream[IO, Foo]
Parse errors will be captured by the stream as exceptions.
Backend Formats
Tectonic supports two formats: JSON and CSV. Each format has a number of different configuration modes which may be defined.
JSON
There are three modes in which the JSON parser may run:
- Whitespace-Delimited (Parser.ValueStream)
- Array-Wrapped (Parser.UnwrapArray)
- Just One Value (Parser.SingleValue)
Whitespace-delimited is very standard when processing very large JSON input streams, where the whitespace is likely to be a newline. "Just One Value" is quite uncommon, since it only applies to scenarios wherein you have a single row in the data. Array wrapping is common, but not universal. Any one of these modes may be passed to the parser upon initialization.
CSV
Similar to the JSON parser, the CSV parser takes its configuration as a parameter. However, the configuration is considerably more complex since there is no standard CSV mode. Delimiters and escaping vary considerably from instance to instance. Some CSV files even fail to include a header indicating field names!
To account for this, the CSV parser accepts a Config object wherein all of these values are tunable. The defaults are as follows:
final case class Config(
    header: Boolean = true,
    record: Byte = ',',
    row1: Byte = '\r',
    row2: Byte = '\n',      // if this is unneeded, it should be set to \0
    openQuote: Byte = '"',  // e.g. “
    closeQuote: Byte = '"', // e.g. ”
    escape: Byte = '"')     // e.g. \\
This roughly corresponds to the CSV style emitted by Microsoft Excel. Almost any values may be used here. The restrictions are as follows:
- row2 must not equal closeQuote
- record must not equal row1
- row2 may not validly be byte value 0, since that is the indicator for "only use the first row delimiter"
Beyond this, any characters are valid. You will note that, in the Excel defaults, the escape character is in fact the same as the close (and open) quote characters, meaning that quoted values are enclosed in "..." and interior quote characters are represented by "". Backslash (\) is also a relatively common choice here.
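The doubling convention can be illustrated with a small, self-contained sketch (this is not tectonic's internal code, just a demonstration of the quoting rule described above):

```scala
// Demonstration only: unescapes a single already-extracted field that uses
// the Excel-style convention (field wrapped in quotes, interior quotes doubled).
def unquoteField(field: String, quote: Char = '"'): String =
  if (field.length >= 2 && field.head == quote && field.last == quote)
    field
      .substring(1, field.length - 1)
      .replace(s"$quote$quote", quote.toString)
  else
    field

// "a""b" (quoted) unescapes to: a"b
println(unquoteField("\"a\"\"b\""))
```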
Just to be clear, if you wish to use a singular \n as the row delimiter, you should do something like this:
Parser.Config().copy(
  row1 = '\n',
  row2 = 0)
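Similarly, a tab-separated (TSV) variant with Unix newlines might be configured as follows (a sketch following the .copy pattern above; verify the field names against the released API before relying on this):

```scala
// Sketch: TSV with LF-only row delimiters, reusing the Config shown above
val tsvConfig = Parser.Config().copy(
  record = '\t', // tab-separated fields
  row1 = '\n',   // single-byte row delimiter...
  row2 = 0)      // ...so the second delimiter byte is disabled
```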
If the supplied configuration defines header = false, then the inference will follow the same strategy that Excel uses. Namely:
A, B, C, ..., Z, AA, AB, ...
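The naming scheme above is bijective base-26, which can be sketched as a small stand-alone function (illustrative only — not tectonic's actual inference code):

```scala
// Illustrative only: generates Excel-style column names for 0-based indices:
// 0 -> A, 25 -> Z, 26 -> AA, 27 -> AB, ...
def columnName(index: Int): String = {
  @annotation.tailrec
  def loop(i: Int, acc: List[Char]): List[Char] = {
    val c = ('A' + i % 26).toChar
    if (i < 26) c :: acc
    else loop(i / 26 - 1, c :: acc)
  }
  loop(index, Nil).mkString
}
```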
When invoking the Plate, each CSV record will be represented as a str(...) invocation, wrapped in a nestMap/unnest call, where the corresponding header value is the map key.
Leveraging SkipColumn/SkipRow
Depending on your final output representation and exactly what actions you're planning to perform on that representation, you may find that you don't need all of the data. A particularly common case (especially in columnar representations) is projection pushdown. Imagine you're evaluating an operation which is conceptually like the following SQL:
SELECT a + b FROM foo.json
The foo.json input data may contain many, many more columns than just .a and .b, and those columns may have significant substructure, numerics (which are often extremely expensive to consume), and more. In such a case, it is often extremely beneficial to instruct the parser to efficiently scan past bytes which are constituent to this superfluous structure, never even generating the Plate events which correspond to that input (see below for a rough benchmark measurement of how beneficial this can be in a contrived – but not necessarily best-case – scenario).
This technique was pioneered (to our knowledge) by IBM with the Mison Parser, and then later expanded upon by the Future Data team at Stanford in the Sparser project.
Tectonic implements this technique through the use of the Signal ADT, which is returned from most event-carrying methods on Plate:
sealed trait Signal

object Signal {
  case object SkipRow extends Signal
  case object SkipColumn extends Signal
  case object Continue extends Signal
  case object Terminate extends Signal
}
The meaning of these various signals is as follows:
- SkipRow: Efficiently scan to the end of the current row, run FinishRow, and then proceed with normal parsing. Useful for implementing filter pushdown (WHERE in SQL)
- SkipColumn: Efficiently scan to the end of the current column and then proceed with normal parsing. Useful for implementing projection pushdown (SELECT in SQL). Ignored when returned from anything other than a nest method (it's impossible to skip something after you've already parsed it).
- Continue: Proceed as usual
- Terminate: Halt the parse immediately with an error
All signals are hints. The underlying parser backend is always free to ignore them. Your Plate implementation should not assume that things will actually be skipped, since under certain circumstances (depending on the parser state machine and the nature of the input data format), skipping may not be possible. With that said, it is still best practice to produce the appropriate Skip whenever you can, as it can result in massive performance gains that are otherwise out of reach from user-land code.
Returning to the example above, you can use the DelegatingPlate utility class to implement projection pushdown of .a and .b via SkipColumn in a relatively straightforward fashion:
def pushdown[F[_]: Sync, A](delegate: Plate[A]): F[Plate[A]] = {
  Sync[F] delay {
    new DelegatingPlate[A](delegate) {
      private var depth = 0
      private var skipping = false

      override def nestMap(name: CharSequence): Signal = {
        if (depth == 0) {
          depth += 1

          if (name.toString == "a" || name.toString == "b") {
            super.nestMap(name)
          } else {
            skipping = true
            Signal.SkipColumn
          }
        } else {
          depth += 1
          Signal.SkipColumn
        }
      }

      override def nestArr(): Signal = {
        depth += 1
        if (skipping) Signal.SkipColumn else super.nestArr()
      }

      override def nestMeta(name: CharSequence): Signal = {
        depth += 1
        if (skipping) Signal.SkipColumn else super.nestMeta(name)
      }

      override def unnest(): Signal = {
        depth -= 1

        if (depth == 0 && skipping) {
          skipping = false
          Signal.Continue
        } else {
          super.unnest()
        }
      }
    }
  }
}
There's a fair bit of boilerplate here needed to track the depth of nesting. If SkipColumn were not a hint but rather an absolute mandate, this would be somewhat easier. Either way though, hopefully the general idea is clear. Wrapping this function around any other Plate will result in that delegate plate receiving almost exactly the same event stream that it would have received if the underlying dataset exclusively contained .a and .b columns.
We say "almost exactly" because there is one difference: skipped. The Plate#skipped(Int) method will be invoked one or more times by the backend any time a chunk of bytes is skipped due to a SkipColumn or SkipRow signal. The Int parameter represents the number of bytes skipped. Under some circumstances, this value may be 0, and absolute accuracy is not guaranteed (though you can probably rely on it to be within 1 or 2 bytes each time). The default implementation of this function on Plate is to simply ignore the event.
The purpose of skipped is to allow Plates to efficiently build up metrics on data even when it isn't consumed. This can be extremely useful for some types of algorithms.
Parse Errors
Tectonic backends are free to bypass error discovery in bytes which are ingested during a skip. For example, imagine a SkipColumn on the .a column in the following data:
{ "a": [1, 2}, "b": 42 }
Notice the mismatched braces ([ with }) within the .a column. If the .a column is skipped, this will not be considered a parse error by Tectonic JSON. This may seem somewhat bizarre, but it's actually quite important. Detecting errors in the input stream is actually a relatively expensive process (for example, it requires a stack representation to track array vs map nesting). The skip scan can be made vastly more efficient when error detection is discarded.
Error detection resumes appropriately as soon as the skip scan finishes. Thus, any data which is not skipped will be appropriately checked for parse errors.
Performance
Broadly speaking, the performance of Tectonic JSON is roughly on par with Jawn. Depending on your measurement assumptions, it may actually be slightly faster. It's very, very difficult to set up a fair series of measurements, due to the fact that Tectonic produces batched columnar data while Jawn produces rows on an individual basis.
Our solution to this is to use the JMH Blackhole.consumeCPU function in a special Jawn Facade and Tectonic Plate. Each event is implemented with a particular consumption, weighted by the following constants:
object FacadeTuningParams {
  object Tectonic {
    val VectorCost: Long = 4       // Cons object allocation + memory store
    val TinyScalarCost: Long = 8   // hashmap get + bitset put
    val ScalarCost: Long = 16      // hashmap get + check on array + amortized resize/allocate + array store
    val RowCost: Long = 2          // increment integer + bounds check + amortized reset
    val BatchCost: Long = 1        // (maybe) reset state + bounds check
  }

  val NumericCost: Long = 512      // scalarCost + crazy numeric shenanigans

  object Jawn {
    val VectorAddCost: Long = 32   // hashmap something + checks + allocations + stuff
    val VectorFinalCost: Long = 4  // final allocation + memory store
    val ScalarCost: Long = 2       // object allocation
    val TinyScalarCost: Long = 1   // non-volatile memory read
  }
}
- Vectors are arrays or objects
- Tiny scalars are any scalars which can be implemented with particularly concise data structures (e.g. jnull for Jawn, or arr for Tectonic)
- Scalars are everything else
- Numerics are special, since we assume realistic facades will be performing numerical parsing to ensure maximally efficient representations. Thus, both facades check for decimal and exponent. If these are lacking, then the cost paid is the ScalarCost. If these are present, then the cost paid is NumericCost, simulating a trip through BigDecimal.
- Add and Final costs for Jawn are referring to the Context functions, which are generally implemented with growable, associative data structures set to small sizes
Broadly, these costs were strongly inspired by the internal data structures of the Mimir database engine and the QData representation. In the direct comparison measurements, Signal.Continue is universally assumed as the result from Plate, voiding any advantages Tectonic can derive from projection/predicate pushdown.
Both frameworks are benchmarked through fs2 (in the case of Jawn, using jawn-fs2), which is assumed to be the mode in which both parsers will be run. This allows the benchmarks to capture nuances such as ByteVectorChunk handling, ByteBuffer unpacking and such.
As an aside, even apart from the columnar vs row-oriented data structures, Tectonic does have some meaningful optimizations relative to Jawn. In particular, Tectonic is able to maintain a much more efficient object/array parse state due to the fact that it is not relying on an implicit stack of Contexts to maintain that state for it. This is particularly noticeable for object/array nesting depth less than 64 levels, which seems to be far-and-away the most common case.
Benchmark Comparison to Jawn
The following were run on my laptop in powered mode with networking disabled, 20 warmup iterations and 20 measurement runs in a forked JVM. You can find all of the sources in the benchmarks subproject. Please note that these results are extremely dependent on the assumptions codified in FacadeTuningParams. Lower numbers are better.
Column Skip Benchmarks
Column skips – otherwise known as projection pushdown – are one of the unique features of the Tectonic parsing framework. The exact performance benefits you gain from returning SkipColumn vary considerably depending on exactly when you skip (e.g. one column, or a lot of columns), what your data looks like, and how expensive your Plate implementation is on a per-scalar basis. We can get a general idea of the performance impact of skipping though by contriving a benchmark based on the ugh10k.json dataset, which is not particularly wide, but has enough substructure to demonstrate interesting skipping.
The following benchmark selects just the .bar field out of the top-level object in each row, skipping everything else. It's being run in tectonic using the FacadeTuningParams from above, both with and without skips to demonstrate the impact:
So that's a roughly 3.47x performance jump. It's difficult to draw general conclusions from this data, other than the fact that having skips is better than not having skips. In testing on realistic workloads in Spark, IBM found that their Mison parser improved overall batch run times by up to 20x. Tectonic is (presently) unable to benefit from the CPU vectorized instructions that Mison is using to achieve theoretically optimal skip scan performance, but it's at least theoretically capable of hitting those kinds of performance gains on certain datasets with amenable workloads.
At the very least, skipping is never a bad idea.
Row-Counting Benchmark for CSV
Inspired by the uniVocity benchmarks, the Tectonic CSV benchmarks load from the 144 MB Maxmind worldcities.csv dataset, counting the total number of records. Unfortunately, we were unable to find any other async (streaming) CSV parsers on the JVM, and so the only performance comparison we are able to provide is to the time it takes to count the number of \n characters in the same file.
The line count test really just serves as a lower bound on how long it takes fs2-io to read the file contents. Thus, parsing as opposed to just scanning the characters adds roughly an order of magnitude of overhead. However, if you compare to the uniVocity benchmarks, which were performed on a more powerful machine and without the overhead of fs2, it appears that Tectonic CSV is relatively middle-of-the-road in terms of high-performance CSV parsers on the JVM. That's with almost no time spent optimizing (there's plenty of low-hanging fruit) and the fact that Tectonic CSV is an async parser, which imposes some meaningful overhead.
License
To the extent that lines of code have been copied from the Jawn codebase, they retain their original copyright and license, which is The MIT License. Original code which is unique to Tectonic is licensed under The Apache License 2.0, copyright SlamData. Files which are substantially drawn from Jawn retain both copyright headers, as well as a special thank-you to the Jawn contributors.
https://index.scala-lang.org/slamdata/tectonic/tectonic/6.0.0?target=_2.12
Questions around the testing topic come up quite often together with React Query, so I'll try to answer some of them here. I think one reason for that is that testing "smart" components (also called container components) is not the easiest thing to do. With the rise of hooks, this split into smart and presentational components has been largely deprecated. It is now encouraged to consume hooks directly where you need them rather than doing a mostly arbitrary split and drilling props down.
I think this is generally a very good improvement for colocation and code readability, but we now have more components that consume dependencies outside of "just props".
They might useContext. They might useSelector. Or they might useQuery.
Those components are technically no longer pure, because calling them in different environments leads to different results. When testing them, you need to carefully setup those surrounding environments to get things working.
Mocking network requests
Since React Query is an async server state management library, your components will likely make requests to a backend. When testing, this backend is not available to actually deliver data, and even if, you likely don't want to make your tests dependent on that.
There are tons of articles out there on how to mock data with jest. You can mock your api client if you have one. You can mock fetch or axios directly. I can only second what Kent C. Dodds has written in his article Stop mocking fetch:
Use mock service worker by @ApiMocking
It can be your single source of truth when it comes to mocking your apis:
- works in node for testing
- supports REST and GraphQL
- has a storybook addon so you can write stories for your components that useQuery
- works in the browser for development purposes, and you'll still see the requests going out in the browser devtools
- works with cypress, similar to fixtures
With our network layer being taken care of, we can start talking about React Query specific things to keep an eye on:
QueryClientProvider
Whenever you use React Query, you need a QueryClientProvider and give it a queryClient - a vessel which holds the QueryCache. The cache will in turn hold the data of your queries.
I prefer to give each test its own QueryClientProvider and create a new QueryClient for each test. That way, tests are completely isolated from each other. A different approach might be to clear the cache after each test, but I like to keep shared state between tests as minimal as possible. Otherwise, you might get unexpected and flaky results if you run your tests in parallel.
For custom hooks
If you are testing custom hooks, I'm quite certain you're using react-hooks-testing-library. It's the easiest thing there is to test hooks. With that library, we can wrap our hook in a wrapper, which is a React component to wrap the test component in when rendering. I think this is the perfect place to create the QueryClient, because it will be executed once per test:
const createWrapper = () => {
  // ✅ creates a new QueryClient for each test
  const queryClient = new QueryClient()
  return ({ children }) => (
    <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
  )
}

test("my first test", async () => {
  const { result } = renderHook(() => useCustomHook(), {
    wrapper: createWrapper(),
  })
})
For components
If you want to test a Component that uses a useQuery hook, you also need to wrap that Component in QueryClientProvider. A small wrapper around render from react-testing-library seems like a good choice. Have a look at how React Query does it internally for their tests.
Turn off retries
It's one of the most common "gotchas" with React Query and testing: The library defaults to three retries with exponential backoff, which means that your tests are likely to timeout if you want to test an erroneous query. The easiest way to turn retries off is, again, via the QueryClientProvider. Let's extend the above example:
const createWrapper = () => {
  const queryClient = new QueryClient({
    defaultOptions: {
      queries: {
        // ✅ turns retries off
        retry: false,
      },
    },
  })
  return ({ children }) => (
    <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
  )
}

test("my first test", async () => {
  const { result } = renderHook(() => useCustomHook(), {
    wrapper: createWrapper(),
  })
})
This will set the defaults for all queries in the component tree to "no retries". It is important to know that this will only work if your actual useQuery has no explicit retries set. If you have a query that wants 5 retries, this will still take precedence, because defaults are only taken as a fallback.
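For reference, the three default retries use React Query's documented default retryDelay: exponential backoff starting at one second and capped at 30 seconds. A quick sketch of the delays a failing query will wait through before your test finally sees an error (formula taken from the React Query v3 docs):

```javascript
// React Query v3's documented default retryDelay (exponential backoff, capped at 30s)
const defaultRetryDelay = (attemptIndex) =>
  Math.min(1000 * 2 ** attemptIndex, 30000)

// the three default retries wait 1s, 2s and 4s
console.log([0, 1, 2].map(defaultRetryDelay)) // [ 1000, 2000, 4000 ]
```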
setQueryDefaults
The best advice I can give you for this problem is: Don't set these options on useQuery directly. Try to use and override the defaults as much as possible, and if you really need to change something for specific queries, use queryClient.setQueryDefaults.
So for example, instead of setting retry on useQuery:
const queryClient = new QueryClient()

function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <Example />
    </QueryClientProvider>
  )
}

function Example() {
  // 🚨 you cannot override this setting for tests!
  const queryInfo = useQuery('todos', fetchTodos, { retry: 5 })
}
Set it like this:
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      retry: 2,
    },
  },
})

// ✅ only todos will retry 5 times
queryClient.setQueryDefaults('todos', { retry: 5 })

function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <Example />
    </QueryClientProvider>
  )
}
Here, all queries will retry two times, only todos will retry five times, and I still have the option to turn it off for all queries in my tests 🙌.
ReactQueryConfigProvider
Of course, this only works for known query keys. Sometimes, you really want to set some configs on a subset of your component tree. In v2, React Query had a ReactQueryConfigProvider for that exact use-case. You can achieve the same thing in v3 with a couple of lines of codes:
const ReactQueryConfigProvider = ({ children, defaultOptions }) => {
  const client = useQueryClient()
  const [newClient] = React.useState(
    () =>
      new QueryClient({
        queryCache: client.getQueryCache(),
        mutationCache: client.getMutationCache(),
        defaultOptions,
      })
  )

  return <QueryClientProvider client={newClient}>{children}</QueryClientProvider>
}
You can see this in action in this codesandbox example.
Always await the query
Since React Query is async by nature, when running the hook, you won't immediately get a result. It usually will be in loading state and without data to check. The async utilities from react-hooks-testing-library offer a lot of ways to solve this problem. For the simplest case, we can just wait until the query has transitioned to success state:
const createWrapper = () => {
  const queryClient = new QueryClient({
    defaultOptions: {
      queries: {
        retry: false,
      },
    },
  })
  return ({ children }) => (
    <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
  )
}

test("my first test", async () => {
  const { result, waitFor } = renderHook(() => useCustomHook(), {
    wrapper: createWrapper(),
  })

  // ✅ wait until the query has transitioned to success state
  await waitFor(() => result.current.isSuccess)

  expect(result.current.data).toBeDefined()
})
Silence the error console
Per default, React Query prints errors to the console. I think this is quite disturbing during testing, because you'll see 🔴 in the console even though all tests are 🟢. React Query allows overwriting that default behaviour by setting a logger, so that's what I'm usually doing:
import { setLogger } from 'react-query'

setLogger({
  log: console.log,
  warn: console.warn,
  // ✅ no more errors on the console
  error: () => {},
})
Putting it all together
I've setup a quick repository where all of this comes nicely together: mock-service-worker, react-testing-library and the mentioned wrapper. It contains four tests - basic failure and success tests for custom hooks and components. Have a look here:
That's it for today. Feel free to reach out to me on twitter if you have any questions, or just leave a comment below ⬇️
https://practicaldev-herokuapp-com.global.ssl.fastly.net/tkdodo/testing-react-query-14cb
RcppSimdJson 0.0.6: New Upstream, New Features!
A very exciting RcppSimdJson release with the updated upstream simdjson release 0.4.0 as well as a first set of new JSON parsing functions just hit CRAN. The very recent 0.4.0 release further improves the already impressive speed.
And this release brings a first set of actually user-facing functions thanks to Brendan, who put in a series of PRs! The full NEWS entry follows.
Changes in version 0.0.6 (2020-06-25)
- Created C++ integer-handling utilities for safe downcasting and integer return (Brendan in #16 closing #13).
- New JSON functions .deserialize_json and .load_json (Brendan in #16, #17, #20, #21).
- Upgrade Travis CI to 'bionic', extract package and version from DESCRIPTION (Dirk in #23).
- Upgraded to simdjson 0.4.0 (Dirk in #25 closing #24).
https://www.r-bloggers.com/2020/06/rcppsimdjson-0-0-6-new-upstream-new-features/
A few weeks ago, I took the dive into Smartphones and bought an unlocked MPX-220. Even though the phone isn’t the latest and greatest gadget that some lucky people have, I’m happier with this phone than any phone I’ve ever had.
My first late-night project with the 2005 RTM build is a Smartphone 2003 application to scrape traffic information from the state of Maryland’s Coordinated Highways Action Response Team site. Hint: (if anyone is listening) – I could have built the app in half the time if the information was available from a web service or RSS feed.
Nothing earth shattering here: source and executable.
I wanted to use generics. I couldn’t understand why the compiler didn’t like the namespaces and angle-ly brackets until I read the manual. Smartphone 2003 SE projects only target CF.NET 1.0, which does make sense, if you think about it. No generics. Also, every other project type only targets .NET 2.0. I tried to put unit tests into a separate class library, but there were problems and I ultimately ended with NUnit tests inside the main executable ( #ifdef’ed away).
Daniel Moth’s post “Invoke CF and Full Fx” assured me I wasn’t going crazy thinking asynch programming on the compact framework is hard when compared to asych-ing with the full framework. Some APIs are missing (Control.InvokeRequired), and others are mutated (Control.Invoke is crippled). I skipped the asynch thing for now.
The emulator is a tad sluggish. I needed to install ActiveSync 4.0 for Internet connectivity from the emulator. I decided to switch to a Virtual PC to avoid a developer preview version of ActiveSync from hosing my machine. The error message that appears when trying to configure network settings for the emulator points to this page. The directions on the page are hidden and confusing, but ultimately take you to the Visual Studio for Devices blog post “Virtual PC Network Driver”. The post contains some good instructions (but also 11 broken images). No driver is needed - just ActiveSync 4.0.
Two issues I haven’t resolved:
1) Visual Studio 2005 deploys the kitchen sink to the emulator on every run. Even pre-installed assemblies like System.dll get shoved into the emulator, which makes for long waits during debugging. I did some searching and found some information that indicates this scenario shouldn’t occur, but didn’t find a definitive solution.
Update: By moving from NUnit 2.2.0 to 2.2.3 (the version that officially supports 2.0), the deployment problem went away.
2) I’ve been unable to scroll the CF.NET ListView control one “page” at a time. I’ve set FocusedItem properties, SelectedItem properties, invoked Update, invoked Refresh – nothing seems to highlight a new item X items away and move focus to it.
I am a dev working in VSD.
Regarding issue #1 in your blog post i am not sure that i understood your problem. In case if you are complaining that .NetCF is getting installed every time when you start emulator afresh, there is a way to avoid it. Please read my blog post which you might find useful.
blogs.msdn.com/.../468529.aspx
You can send me an email through my blog page for further communication.
thanks
Sivarv
This is shabbar working as a dev in "Visual Studio For Devices" team in Microsoft.
Regarding Issue#1 : One scenario where it can happen is:
1. First create a VB control library solution.
2. Add a windows form to the project.
3. Change the project type to Windows application
4. Change the startup program to "Form"( Form you have added in step 2).
5. Open Build Configuration Manager and change check the deploy option for this project.
6. Now deploy the project.
All the DLLs will get deployed, although they are not supposed to be.
The reason this happens is that the desktop version of System.Deployment.dll gets added as a reference when we change the project type to Windows application (this is in the VB desktop project template). As a result it pulls in other binaries as well.
A simple workaround is to remove System.Deployment.dll from the list of references; after that everything should work fine.
Please let me know (at shabbarh@microsoft.com) whether this works for you. If it does not, please pass on the solution and project file (if you can).
Thanks
Shabbar
Sunday, May 8, 2011, Andrew Dalke <dalke@...> wrote:
>
Sphinx is a layer on top of Docutils, higher level. Sphinx does
autodocumentation (among other things), Docutils does not (although it
was once my goal for Docutils also).
> - your active/informational PEP 257 but rejected PEP 258
PEP 257 is about docstring syntax and tool-independent formatting
conventions, while PEP 258 was a grand design spec (or vision) for
Docutils, not fully implemented. 258 is marked rejected simply because
standard library inclusion is no longer part of the plan.
> - the existence of a few reST docstrings in the standard library
> *and* a few @param docstrings in trace.py
>
> (Okay, the last is likely historical cruft.)
Cruft or individual whim.
>> I suggest you look at the Sphinx docs and/or ask on the Sphinx mailing
>> lists.
>
> I will do that - thanks for the pointer
Any time.
David
--
David Goodger <>
- your active/informational PEP 257 but rejected PEP 258
- the existence of a few reST docstrings in the standard library
*and* a few @param docstrings in trace.py
(Okay, the last is likely historical cruft.)
> I suggest you look at the Sphinx docs and/or ask on the Sphinx mailing
> lists.
I will do that - thanks for the pointer
Andrew
dalke@...
On Sat, May 7, 2011 at 17:55, Andrew Dalke <dalke@...> wrote:
> I'm trying to understand reST well enough to figure out if it's
> appropriate for a project I'm working on. I've not worked with reST
> before so I've been reading the documentation, but I don't see how I
> can use it as-is for my case.
>
> The main problem is that when I have functions which return a
> dictionary, I want to document the contents of the dictionary. For
> example, the key "nAtoms" has the value "the number of atoms in the
> molecule."
>
> I haven't found much information at all about how to document return
> types in reST, much less complex examples. The few ones I found,
> which come from the Python source, looked like:
>
> :return: Path to the resulting byte compiled file.
...
> that is, single line describing a relatively simple object.
>
>
> I think there isn't a standard way to document complex object with
> reST so the right solution is to forge my own path and do something
> like:
...
> Is there an existing solution for this problem, and if so, what is
> it?
>
> If not, does this sound right, and how do I go about having sphinx
> recognize those fields and including them in the documentation?
There are any number of ways to document a dictionary return value.
As a field list, bullet list, definition list, table, etc.
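For instance, the thread's nAtoms example could carry the dictionary documentation as a field list right in the docstring. A sketch, where :key is a made-up (non-standard) field name and molecule_counts is a hypothetical function:

```python
import inspect
import re

def molecule_counts(molecule):
    """Summary counts for a molecule, returned as a dict.

    :key nAtoms: the number of atoms in the molecule
    :key nBonds: the number of bonds in the molecule
    """
    # hypothetical body; the docstring markup is the point here
    return {"nAtoms": len(molecule.atoms), "nBonds": len(molecule.bonds)}

# A field list is plain reST, so a tool can pull the entries back out:
doc = inspect.getdoc(molecule_counts)
fields = dict(re.findall(r"^:key (\w+): (.+)$", doc, flags=re.MULTILINE))
print(fields["nAtoms"])  # the number of atoms in the molecule
```

Docutils will render such a field list as a two-column table; anything more elaborate (nested structures, tables) is a matter of convention, as noted above.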
But I think you're asking in the wrong place. reST provides markup
primitives, but does not provide any conventions for
autodocumentation. The closest we ever got in the Docutils project
was in "Docstring Semantics" (not implemented):
I suggest you look at the Sphinx docs and/or ask on the Sphinx mailing
lists. The Sphinx project is all about the higher-level issues,
including autodocumentation conventions.
--
David Goodger <>
On May 8, 2011, at 3:24 AM, Ben Finney wrote:
> Don't return a dict in that case. Instead, define a type for this
> complex object, give it a meaningful name and docstring, and refer to it
> by name.
I had hoped in my short context description at the end of my post
to imply that there is no meaningful name for the return types of
individual functions.
Here's a more fleshed out example which I think shows the heart
of what I'm trying to do. Rather than working with molecules,
this example computes properties of strings.
There is a dependency chain:
strlen depends on s
lc depends on s (this is the lower-case version of s)
a depends on lc (the number of times "a" or "A" is found in s)
e depends on lc
i depends on lc
o depends on lc
u depends on lc
and given that I have "s" and want "o", I'll write the dependency
handling code, caching all intermediate calculations.
===============================
import inspect
def strlen(s):
"""The length of the string"""
return len(s)
def lc(s):
"""the string in lowercase form"""
return s.lower()
def vowelcounts(lc):
"""Counts for the number of times the vowels [aeiouAEIOU] exist
:key a: The number of times a is found
:key e: The number of times e is found
:key i: The number of times i is found
:key o: The number of times o is found
:key u: The number of times u is found
"""
return dict((c, lc.count(c)) for c in "aeiuo")
class Properties(object):
def __init__(self, s, handlers):
self.cache = {"s": s}
self.handlers = handlers
def __getitem__(self, key):
try:
return self.cache[key]
except KeyError:
pass
handler = self.handlers[key]
# Recursively resolve the values this function needs as input parameters
args = (self[name] for name in inspect.getargspec(handler).args)
# Call it and figure out if it returned a single or multiple values
retval = handler(*args)
if isinstance(retval, dict):
self.cache.update(retval)
else:
self.cache[key] = retval
return self.cache[key]
===============================
Here it is in use.
>>> handlers = dict(strlen=strlen, lc=lc, a=vowelcounts, e=vowelcounts,
... i=vowelcounts, o=vowelcounts, u=vowelcounts)
>>> p = Properties("Is this Ohio?", handlers);
>>> p.cache
{'s': 'Is this Ohio?'}
>>> p["o"]
2
>>> p.cache
{'a': 0, 's': 'Is this Ohio?', 'e': 0, 'lc': 'is this ohio?', 'i': 3, 'u': 0, 'o': 2}
>>>
This is obviously fragile. There's a few ways I plan to make it
more robust. One is to scan the docstring and find the ":key:" (or
whatever) return types. This solves a few issues:
- I can easily determine if the handler computes a single
value or a set of values (lack of :key: means the first)
- At handler registration time I can easily figure out which
properties the function claims to handle, and therefore
do the correct dependency management.
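The registration-time scan described in the list above could look something like this (a hypothetical returned_keys helper, not part of the thread's code):

```python
import inspect
import re

def returned_keys(handler):
    """Return the ':key name:' properties a handler claims to produce.

    An empty list means the handler computes a single value under its
    own name; a non-empty list means it returns a dict of values.
    """
    doc = inspect.getdoc(handler) or ""
    return re.findall(r"^:key (\w+):", doc, flags=re.MULTILINE)

def strlen(s):
    """The length of the string"""
    return len(s)

def vowelcounts(lc):
    """Counts for the number of times some vowels exist

    :key a: The number of times a is found
    :key e: The number of times e is found
    """
    return {c: lc.count(c) for c in "ae"}

print(returned_keys(strlen))       # single-valued handler: no :key fields
print(returned_keys(vowelcounts))  # multi-valued handler: one key per field
```

With this, handler registration can map each advertised key back to its handler automatically instead of spelling out the handlers dict by hand.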
In my API's current incarnation I do all of these with @descriptors
but didn't like how cumbersome it was nor how hard it was to
generate good documentation.
With this example it should be clear that the return type is
an implementation choice. That is, I could instead have
implemented the code in "vowelcounts" as
def a_count(lc):
"The number of times a is found"
return lc.count("a")
def e_count(lc):
"The number of times e is found"
return lc.count("e")
handlers = {... a: a_count, e: e_count ... }
and presented the same outward-facing API. In that API I would
like to have:
>>> p.describe("o")
The number of times o is found.
Suppose I did use individual return types, with documentation
as part of those types. Then I'll have
class VowelCounts(object):
"""Counts for the number of times the vowels [aeiouAEIOU] exist
:var a: The number of times a is found
:var e: The number of times e is found
:var i: The number of times i is found
:var o: The number of times o is found
:var u: The number of times u is found
"""
def __init__(self, a, e, i, o, u):
self.a = a
self.e = e
self.i = i
self.o = o
self.u = u
def vowelcounts(lc):
"""Count the number of times a vowel is found
:rtype: VowelCounts
""""
... implementation here ...
return VowelCounts(**dict((c, lc.count(c)) for c in "aeiuo"))
This is a lot of overhead compared to what I was thinking about,
if only because of the tedious __init__ section (namedtuple doesn't
help here because it has no docstring support), the implementation
of the collection step is more complicated and error-prone (ie,
I have to find "VowelCounts" in the rtype, assume it's in the
same module, and then parse its docstring), and the disassociation
between use and documentation means it's harder to find the
documentation for a given item. (Eg, search up or down?)
Plus, the data types will *never* be reused by another function,
so there's no savings in that regard.
I therefore just don't see your suggestion as being appropriate
for what I'm trying to do, and I hope my fleshed out example
shows why.
Andrew
dalke@...
Andrew Dalke <dalke@...> writes:
> return dict(nAtoms = len(molecule.atoms), nBonds = len(molecule.bonds),
> nHeavies = sum(1 for atom in molecule.atoms if molecule.atomic_number != 1))
>
>
> Is there an existing solution for this problem, and if so, what is it?
Don't return a dict in that case. Instead, define a type for this
complex object, give it a meaningful name and docstring, and refer to it
by name.
--
\ “… one of the main causes of the fall of the Roman Empire was |
`\ that, lacking zero, they had no way to indicate successful |
_o__) termination of their C programs.” —Robert Firth |
Ben Finney
VueJS is dead, long live VueJS!
Stefan Dorresteijn
・4 min read
With the release of the VueJS 3 "Request for Comment" documentation about two weeks ago, Evan You introduced the VueJS function-based API and has set the VueJS community ablaze. These new ideas are still in the "Request for Comments" stage, so they're far from set in stone, but because the RFC introduces such significant changes, I made a quick summary of what you need to know.
NB: All this information and much more is in the RFC, so I do suggest you read that.
Setup
VueJS 3 departs from the option-based API we've grown to love and introduces the setup() function, which will be where all the magic happens. This function single-handedly sets up the logic for our component and returns the data that's exposed to the template. The option-based API will continue to work even in VueJS 3, but this new function-based API will be the new standard.
For all the functionality we're used to from VueJS, like reactive data, computed values, methods and watchers, we import functions from vue and use them in our setup() function. Here's a basic example from the RFC:
<template>
  <div>
    <span>count is {{ count }}</span>
    <span>plusOne is {{ plusOne }}</span>
    <button @click="increment">count++</button>
  </div>
</template>

<script>
import { value, computed, watch, onMounted } from 'vue'

export default {
  setup() {
    // reactive state
    const count = value(0)
    // computed state
    const plusOne = computed(() => count.value + 1)
    // method
    const increment = () => { count.value++ }
    // watch
    watch(() => count.value * 2, v => console.log(`count * 2 is ${v}`))
    // lifecycle
    onMounted(() => console.log('mounted!'))
    // expose bindings on render context
    return { count, plusOne, increment }
  }
}
</script>
But why?
If that example doesn't make it clear why this change was introduced, or if it feels like a step back in terms of usability, I understand. I had the same initial reaction and it took me a bit of time to figure out why this change was necessary. The v2.x API is widely loved and is often the reason why people move to VueJS from other frameworks like ReactJS or AngularJS, so a change this drastic seems like a strange idea.
Encapsulation is king
The component API was created in part to make it easier to reuse code across your application. While VueJS is seriously modular and uses components, the current option-based API doesn't allow for an easy extraction of functionality that relates to a single piece of functionality or data. You need to define your data(/state), computed values and methods separately, while they might all be related. This gets confusing when components grow and methods deal with different pieces of data.
This is where the new function-based API comes in. It allows you to extract all code related to a piece of logic and put it together in what they call a "composition function", which returns reactive state. An example given in the RFC uses one of those composition functions to extract the logic of listening to the mouse position:
function useMouse() {
  const x = value(0)
  const y = value(0)
  const update = e => {
    x.value = e.pageX
    y.value = e.pageY
  }
  onMounted(() => {
    window.addEventListener('mousemove', update)
  })
  onUnmounted(() => {
    window.removeEventListener('mousemove', update)
  })
  return { x, y }
}

// in consuming component
const Component = {
  setup() {
    const { x, y } = useMouse()
    return { x, y }
  },
  template: `<div>{{ x }} {{ y }}</div>`
}
If we compare this to how we would write this functionality in the v2.x API, we can see that the functionality related to using the mouse position is all over the place, where in the v3.x API, it's quite nicely grouped in a singular function:
<template>
  <div>
    {{ x }} {{ y }}
  </div>
</template>

<script>
export default {
  data() {
    return {
      x: 0,
      y: 0,
    };
  },
  mounted() {
    window.addEventListener('mousemove', this.update);
  },
  beforeDestroy() {
    window.removeEventListener('mousemove', this.update);
  },
  methods: {
    update(e) {
      this.x = e.pageX;
      this.y = e.pageY;
    },
  },
};
</script>
And more
Encapsulation isn't the only reason why these changes are useful, so here are two other reasons why this change might be what VueJS needs.
The current option-based API in VueJS has an issue in that it doesn't properly support TypeScript type inference. The proposed changes fix this issue, and achieve full typing support, with TS code looking almost the same as JS code as a cherry on top of an already very useful pie.
VueJS is loved for its extremely small bundle size and this change shrinks that bundle even more. Since function and variable names can be shortened with standard minification (while object/class methods and properties can't), the code simply compresses better.
Thoughts?
Initial reactions to the RFC have been mixed, with some users comparing these changes to React and suggesting VueJS is losing its edge. My first response was far from positive too, but the longer I look at it, the more the encapsulation advantage starts to outweigh the cleanliness of the current API.
What are your thoughts?
Who's looking for open source contributors? (November 26th edition)
Find something to work on or promote your project here. Please shamelessly pro...
It feels to me like all the front-end frameworks are constantly trying to scratch an itch that "maybe/probably doesn't exist".
It's all good that everyone is striving to make things better, but these types of changes don't only affect the projects you are still going to do, but also those you've already done. Then a few months later there will be another change, further removing various Vue (or any other FE framework) projects from one another.
In that respect I don't like that the whole development community is the guinea pig for framework development. ie. just as you figure out how a pencil works, someone goes and changes it :)
I guess I want the option to use the current or the next version whenever I want, to infinity. Too much to ask?
Couldn't agree more. That's what separated Vue from the others: a sensible, single way of building components, where you could hop from one code base to another and find the same approach, because the framework had an opinion and consistency.
Now, the setup method will turn the common way of doing things into something much like React, where each code base is different and largely a mess. Unfortunately, components alone don't solve having messy code bases.
I forgot about this aspect, I really liked the idea that most Vue apps were structured the same way! Maybe there was a compromise to be found between the current version and the needs explored by the new one. I don't know, it's too easy to comment from afar
If I had to guess, I would say that's because 90% of tutorials begin with "let's install the Vue CLI and create a project."
One thing Vue got right was giving users the freedom to do whatever they want while still providing a recommended way of doing things.
It's like a Jedi mind trick. Imagine Evan telling you: "listen, I won't tell you how to live your life, but if you do this thing and use those other things you will find money and happiness."
You will be able to use 2.x API for all 3.x lifecycle and even beyond if community still heavily uses it. Otherwise they are very likely to release a compatibility plugin to let you use 2.x API for as long as you need
yeah great, we can remain in "compatibility" box forever while Vue team only improves for the people adopting the new API.
NO. This should be a branching into a sister project, TypeVue.js, not a 3.x deprecating the original (eventually)
these changes seem more like additions than breaking changes. Like react hooks were
Well said! Agreed
As you wrote, people have already made the comparison: it reminds me quite a bit of React's hooks. But that's not a critique; having started a new React project and used hooks, I found them to be pretty useful. Vue should not shut itself away from good concepts just to distinguish itself from other libraries/frameworks. I think incorporating good concepts and making proposals for new ones helps everybody.
The difference is that hooks live in the component and are executed on every render. Vue 3's setup function would only be executed when the component is created.
Hooks is predicated on an assumption that creating functions is cheap, which isn’t always the case, especially in some old and/or embedded environments. This change seems to solve the same problems as hooks but in a more efficient manner.
You're writing single page apps in embedded environments?
Yes. Smart TV apps tend to run in a browser and there isn’t a whole lot of horsepower to work with.
Are you using Vue for building TV apps? I would love to hear your experiences with it if you did!
I was working on the Vue app built for TV last year and it was terrible in terms of performance and general usability. I guess that team that passed it to us was partly to blame because the code was a god damn spaghetti mess.
We're currently building apps for a few TV vendors and React is working fine, the only thing that Vue does better is the animations IMO.
I’m using React, but if it was up to me, I’d probably use Svelte instead.
I’m using React, but if the choice was mine, I’d love to try Svelte instead. It doesn’t use a virtual DOM, so there’s less overhead.
I have been using Vue for a few years now and have several medium to large projects in production and I have never felt the need for what this change provides. The examples used in the RFC feel like outliers or made to be more common than they actually are.
So you've never used a mixin once on a big project? I really find it hard to believe.
I use mixins all the time. I am not sure what you are referring to. I didn't mention mixins.
That's exactly the point. Mixins are the same composition pattern that 3.x API introduces, but with greater problems, for example :
You can read more on the rendered RFC, but the point is that there can be quite many improvements that the 3.x API can give to applications, especially if they are big
I have never had an issue where two mixins clash and if so it sounds like a design problem more than a problem with the core API/framework and it should be addressed during dev time.
And when I have been reviewing a component's logic and see a prop or method being used that I can't find declared in that component, I look in the mixins, because that's likely where it's coming from.
Again I don't feel like these are common issues I have come across in the 3 years I have been using Vue professionally but that may just be me.
To be honest I hadn't had a problem either but I've never worked on a massively large Vue app either. Maybe they could have just introduced a different way to alias mixins so that methods had a prefix instead of adding a completely new way of using Vue
You can get clashes in the Function API too, if you spread together the return values of several useXXX() functions and return them in setup().
@ju66ernaut It is a design problem, but for example if you have two mixins both fetching a remote API, you need to be extra creative with the isLoading property name to avoid clashing, and check every other mixin to be 100% safe. Another problem is that if you use two third-party libraries as mixins and they both declare a property, computed, or method with the same name, you are pretty much out of options.
@amcsi Yes, but with the spread operator you are explicitly merging the keys and somehow take full responsibility of what can potentially clash 😄. With mixins your only option is to rename one of the properties, which can be problematic or even not possible if mixins come from 3rd party libs.
I don't wanna convince you that 3.x API is 100% better than the current, but as you can see the current implementation has some issues that cannot be solved easily
@matteorigon True, my bad.
Me too. I have SSR and hybrid applications running on Android and iOS, and I was always able to work around any limitations I found. I haven't even had any issues with typecasting; what am I doing wrong?? hahaha
Perhaps people writing huge projects that are crapping out because of lack of organization or bad design should investigate using microservices approaches instead?? just a suggestion
Poor organization and project structure plus spaghetti code are so much worse once developers introduce mixins to it. I was working on at least two enterprise apps written in Vue with a bunch of overly complicated mixins. If the code is bad - it can be refactored, project structure can be reorganized, but when you have components with a logic that is really tightly coupled with mixins logic it is hell on earth. In the end, we had to rewrite the whole app...
This isn't exactly correct, and is actually a huge topic of debate on the RFC currently. The current format is not disappearing; it will be offered alongside the new setup() function. And it will stay this way if users continue to show that they use the "old" way throughout the lifetime of Vue v3.
While that's absolutely true, my point was more to say that the RFC introduces an API that departs from the option-based API, even if that API will continue to be available.
Yes, definitely. With the large amount of misunderstanding on Reddit, HN, and even the RFC thread itself, I'm just trying to help clear any confusion :)
The poor core team seems like they're dealing with a lot during this comment period.
Oh yeah, I (and I'm sure they) appreciate it. I'll update the article to reflect that too :)
We should all just go vanilla. At least I'm trying. I did a lot with React, Angular and Vue, and am now digging into Svelte, but I'm tired of learning.
Being self-employed gives me the possibility to choose the stack, and that is awesome...
This reminds me that sometimes I feel tired that over the years I've been learning so many technologies to do the same exact thing :D
What do you think about svelte?
Thanks for taking the time to write this out. As a beginner, I suspected there were benefits to the change and this helped clear up the pieces that I was missing!
Personally, I kinda like the way setup() looks 👀
You can start using it right now with vue-function-api in vue2.
Stefan, I completely agree with your view (pun intended).
To be frank, I don't understand the frustration around these changes. I'm sure it's pretty clear to the devs right now that they'll need to keep supporting the old API, so I'm not at all afraid that I'll have trouble maintaining my current projects.
The useMouse example is a brilliant enough reason to use the new API in my future projects. Anything that boasts reusability, I'm for it.
Great post! I think people initially missed the benefits of encapsulation and type inference that you explained so well because the syntax looks so different. I love these changes and think they make Vue simpler overall, and the smaller bundle size you mentioned is great. Also I think the Vue team is very responsive to the community and they’ve handled changes to the framework really well in the past, so I don’t think people need to be super worried about backwards compatibility. The fact that this conversation is all happening before even a beta release reflects the Vue team’s proactive approach to communicating about its development.
I don't think the issue here is solely the RFC, it's the way the Vue team communicates these changes to the community and responds to criticism from people. The way I have seen the Vue team interact with the community and criticism, it's not a good look in my opinion.
Just look at that RFC, people are worried and confused, and I don't think Evan is doing a very good job quelling the fire, to be honest.
One of the biggest reasons people either choose Vue over React or move to it from React was the fact it's cognitively a lot simpler. React gets messy and complex once you start building large apps with it.
All people see with this new RFC is the Vue team making more work for developers and introducing code changes that make Vue look more like React.
Coupled with the fact Evan has been showing a demo of v3 (a secretive prototype not publicly available) and some impressive benchmark numbers, but it seems to me that Vue 3 is nowhere near done and won't be done in 2019.
There seems to be a lot of uncertainty, and the perception to me is they're stuck between a rock and a hard place. They don't know how Vue 3 will look, they want to improve it, but at the same time, they're being held back by their own success (see Angular 1.x).
It raises the question many others are asking: why Vue 3 instead of React?
That's quite a change. I'm all for functional composition, Vue mixins are quite limited.
Was there a real need for it? Probably not as @jaakidup hinted at. Is it better than the current version? Yes.
I wonder how long "support" for Vue 2.x will go on because I don't see companies rewriting overnight.
It's already stated in the RFC post that the 2.x API is DEPRECATED; it will be included as a compatibility mode for 3.x but not improved any longer. So any of the updates and benefits the team makes will not come to the people who want to stick with the 2.x API.
I believe support for 2.x would run until 4.x, but none of this is set in stone either
Vue.js was the first JavaScript framework I learned, and it is quite simple for beginners to get into. After learning it I jumped to Angular 2, which has its own benefits as well.
If a newcomer goes directly into React or Angular it can be overwhelming, and he or she can get fed up with JavaScript frameworks altogether.
So as far as I'm concerned, Vue.js is still a great choice, and beginners should start their JavaScript journey here :)
The Vue Core team has listened to the community feedback and decided to keep the current API and provide the new API as a plugin as stated in this tweet by Guillaume Chau twitter.com/Akryum/status/11431148....
Great decision making, it takes courage to change course!
Add it as a plugin, see if the community likes it, fixes the problems and then maybe merge it in the mainline....
Clean code is crucial to software engineering of any kind. Without it it makes maintenance a nightmare.
Don't get me wrong, I'm all for encapsulation in part BECAUSE of that reason, but using mixins to keep code in a different file sounds like a better compromise. The trade off is that mixins are like traits, but (AFAIK) they don't have the ability to rename their methods on import. So, they pollute the class with the extra methods instead of using their own class.
Unless VueJS 3 provides considerable value, or they deprecate VueJS 2 far too early, then people aren't going to migrate, even for new projects.
Encapsulation provided by this new syntax is awesome and addresses some of my current implementations in a better way.
I haven't read the RFC so far but this introduction to the new setup syntax looks really neat to me.
👍
You could try it with vue-function-api in vue2.
This is gonna be great for encapsulation indeed. I also like the fact that Vue still sticks with its philosophy of providing different approaches. Hopefully they keep the old SFC API, because like you folks said, it helps keep some kind of consistency.
Just a question: is this new API going to make Vuex deprecated? I see that we can now declare state, and mutations/actions might be imported functions.
A couple of thoughts:
This seems like something that sounds like it has some good advantages but it's definitely something that needs testing out. Why make this the standard method straight away? Why not release it as an option and get feedback on what it's actually like to use in the real world?
This is clearly a transition from declarative to imperative. There are numerous, well-documented reasons for why declarative syntax is better.
Do the advantages of composing behaviours really outweigh these?
Lol. this is by no means a transition to imperative code. It's actually the other way around... With this setup function, we can go full FP, no more 'this' etc...
I like it! Been using Vue for about 2 years and love it, but the composition function definitely reduces cognitive load for me. Separating logic based on lifecycle/config constraints usually increases cognitive load by having to jump through the code to understand what a specific user action results in. The less time I spend just trying to find out where the relevant code is located, the more time I can spend thinking about how to efficiently provide solutions and fix problems.
Firstly, this is purely an additive API - you don't need to use it, nor will you be forced to use it. The beauty of Vue is that you can use the strong API given to you as the default per 2.6.x, or you can opt in to the 3.x API.
The need to introduce this to fix mixin name clashes (i.e., solving it using the hooks principle) doesn't make too much sense to me, but hey, Evan's made some pretty good decisions so far.
You can always resort to custom naming conventions with your mixins to indicate their source and prevent clashes.
I like this format much better than the current 2.X Vue. I never really liked it before
The format reminds me quite a bit (in some ways) of Svelte 2 syntax.
Actually, you can start using the new API right now with vue-function-api in vue2.
Call me crazy, but to me the logic of mouse position listening seems much more easier to follow with the Option API. Like if you do too.
I was totally hooked on React hooks. Seeing this in Vue makes me really REALLY want to try Vue, and honestly makes me feel a little scared I will like it way too much.
Vue.js maintainers, for the sake of your project talk to Python core devs about python3 and learn from their mistakes. It looks like you are going down the same path and it's not a pleasant one...
Wow, so react hooky!
To be honest, both of these look horrible. I am so used to doing
@Component() export class MyApp extends Vue { blar blar }
Vue is starting to look a bit like React :/ Not bad thing tho
No. Just no..
Imshow in Python
How to display image data.
In [1]:
import plotly.express as px
import numpy as np

img_rgb = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]],
                    [[0, 255, 0], [0, 0, 255], [255, 0, 0]]], dtype=np.uint8)
fig = px.imshow(img_rgb)
fig.show()
Read image arrays from image files
In order to create a numerical array to be passed to px.imshow, you can use a third-party library like PIL, scikit-image or opencv. We show below how to open an image from a file with skimage.io.imread, and alternatively how to load a demo image from skimage.data.
In [2]:
import plotly.express as px
from skimage import io

img = io.imread('')
fig = px.imshow(img)
fig.show()
In [3]:
import plotly.express as px
from skimage import data

img = data.astronaut()
fig = px.imshow(img)
fig.show()
https://plotly.com/python/imshow/
|
MRM PlaybackDelegate Class
The PlaybackDelegate class manages playback of PCM audio.
- PlaybackDelegate Class Declaration
- How MRM Uses Your Implementation of PlaybackDelegate
- Pause/Resume
- Handling Underruns
- getPlaybackRate()
- start()
- stop()
- write()
- AudioFormat Enum
- StopBehavior Enum
PlaybackDelegate Class Declaration
This is a sample declaration of the PlaybackDelegate class:
class PlaybackDelegate {
public:
    virtual int getPlaybackRate() = 0;
    virtual int start(AudioFormat format, int channels, int rate) = 0;
    virtual int stop(StopBehavior behavior) = 0;
    virtual int write(int64_t pts, const void *buffer, size_t *frames) = 0;
};
How MRM Uses Your Implementation of PlaybackDelegate
On startup, getPlaybackRate() is called to get the frame rate supported by the device. For each song played, the MRM SDK calls start(), write(), and stop().
getPlaybackRate() is called from the WHA constructor before the rendering thread starts. The start(), write(), and stop() functions are guaranteed to be called from a single rendering thread.
The MRM SDK outputs audio at the rate specified by the client and performs all the needed resampling inside the MRM SDK.
Pause/Resume
The MRM SDK calls stop() when the user pauses playback and calls start() when the user resumes playback. The MRM SDK writes completely new data along with a new presentation timestamp (PTS) after resume. Thus it is not a classic pause/resume case where the hardware stops on pause, with buffers kept intact and, on resume, continues from exactly the same point it stopped at. It is a start/stop case where audio positioning must be performed again upon resume.
To prevent playing back stale data on resume, the client must drain all buffered data on stop(). Consequently, buffers on the client side should not be longer than 200 ms, or pausing playback will take a long time.
Handling Underruns
If there is no data available in WHA, write() is called with frames pointing to 0.
Depending on underlying platform code, a typical implementation may write some silence to prevent a restart of the audio pipeline if buffers are close to empty or it can wait for some predefined amount of time. The ALSA implementation in the sample app writes one period of silence.
As with an ordinary write(), the client shall return a non-zero value in case of error. If an error is reported, the MRM SDK stops the playback by calling stop(), and PlaybackFailed is reported to the Alexa Cloud.
getPlaybackRate()
Implement this function to return (in Hz) the playback frame rate supported by the device or return a negative value on error. The function is called when the client instantiates the WHA class.
virtual int getPlaybackRate() = 0;
start()
Implement this function to handle a request from the MRM to start playing content. (A typical implementation is to open the hardware device and prepare it for writing.) Return zero on success and non-zero otherwise.
When the MRM SDK has content to play, it calls start(). The MRM SDK calls start() at the beginning of every song and calls stop() at the end of every song.
virtual int start(AudioFormat format, int channels, int rate) = 0;
stop()
Implement this function to respond to a request from the MRM SDK to stop, where the handling of buffered data depends on the value of behavior. Return 0 for success and non-zero otherwise.
virtual int stop(StopBehavior behavior) = 0;
write()
Implement this function to present audio data (perform audio positioning) at the local time specified in pts. Before returning, set frames to the number of frames consumed. Return 0 for success and non-zero otherwise.
The client can consume fewer frames than specified in frames. In that case, the MRM SDK calls write() again immediately with a recalculated pts and the rest of the buffer.
The Audio Playback API is an example of a push model where processing speed is controlled by the client. If more data is available upon return from write(), the MRM SDK immediately calls write() again. If no data is available, write() is called with frames set to 0.
If the client's buffers are full, the client shall block until it can consume more data.
An approach to audio positioning:
- If the buffer is in the future, output some amount of silence, set frames to 0, and return 0. The MRM SDK will call again with the same pts and buffer.
- If the whole buffer is in the past, drop it, leave frames unchanged, and return 0. With frames unchanged, this indicates that the whole buffer was consumed, and the MRM SDK will call write() with the next pts and buffer.
- If part of the buffer is in the past, trim that part, write the rest, and return 0. The unchanged frames value indicates the whole buffer was consumed, and the MRM SDK will call write() with the next pts and buffer.
- If the buffer is on time, write the whole buffer, leave frames unchanged, and return 0. The unchanged frames value indicates the whole buffer was consumed, and the MRM SDK will call write() with the next pts and buffer.
virtual int write(int64_t pts, const void *buffer, size_t *frames) = 0;
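The positioning rules above can be summarized as a small decision function. This is an illustrative sketch, not part of the MRM SDK: the names position_buffer, pts, total, and now are hypothetical, and both times are expressed in frames for simplicity.

```python
def position_buffer(pts, total, now):
    """Decide what to do with a buffer of `total` frames whose first
    frame is due at `pts`, given the current device time `now`.
    Returns the action and the value the callee would store into
    *frames before returning 0."""
    if pts > now:                       # buffer starts in the future
        return ("write_silence", 0)     # *frames = 0 -> SDK retries same buffer
    late = now - pts
    if late >= total:                   # entire buffer is in the past
        return ("drop", total)          # *frames unchanged -> buffer consumed
    if late > 0:                        # partially late: trim, play the rest
        return ("trim_and_write", total)
    return ("write", total)             # exactly on time
```

In every case the function returns success; only the frames-consumed value tells the SDK whether to retry the same buffer or move on to the next pts.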
AudioFormat Enum
enum AudioFormat {
    // Signed 16-bit integer, interleaved
    AUDIO_FORMAT_S16
};
StopBehavior Enum
enum StopBehavior {
    // Drain (play) all the samples and then stop
    STOP_DRAIN,
    // Drop all the samples and stop
    STOP_DROP
};
https://developer.amazon.com/it-IT/docs/alexa/mrm/playbackdelegate.html
|
hi,
---- short -------
i need to get familiar with the sources of a medium-sized program, but it has many #ifdef's.
is there a tool or gcc/cpp method to get only the C code after it has been cleaned of all the #ifdef blocks?
------- long -------
unfortunately there are a lot of #ifdef and #elif preprocessor directives, most of them containing only 1-5 lines of code, but they are all nested and it's such a mess to read.
additionally, most of them are #ifdefs for different parallel computing platforms (MPI, PVM, and old stuff) and are absolutely uninteresting for me.
is there a tool, or any way, to see the code clean for just the one #define i use (in this case #define PARALLEL MPI)? like a lightweight cpp that does not modify the #includes but only deletes the #if / #else / #endif blocks i don't use?
this would reduce the code to a third at least and make it so much easier to read.
thx
thomas
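The usual tool for exactly this job is unifdef(1), which removes conditional blocks for macros you declare defined or undefined (something like unifdef -DPARALLEL=MPI file.c) while leaving everything else, including #includes, untouched. A rough Python sketch of the same idea, assuming only simple #ifdef/#ifndef/#else/#endif nesting (it deliberately passes #if and #elif expressions through unchanged):

```python
import re

def strip_ifdefs(lines, defined):
    """Keep only code active under the given set of defined macro names.
    Handles #ifdef/#ifndef/#else/#endif; #if/#elif expression forms are
    not evaluated and simply pass through."""
    out = []
    stack = []  # one bool per nesting level: is this branch active?
    for line in lines:
        m = re.match(r'\s*#\s*(ifdef|ifndef|else|endif)\b\s*(\w*)', line)
        if m:
            directive, name = m.group(1), m.group(2)
            if directive == 'ifdef':
                stack.append(name in defined)
            elif directive == 'ifndef':
                stack.append(name not in defined)
            elif directive == 'else':
                stack[-1] = not stack[-1]
            else:  # endif
                stack.pop()
            continue  # drop the directive line itself
        if all(stack):  # keep the line only if every enclosing branch is active
            out.append(line)
    return out
```

Unlike this sketch, unifdef also evaluates #elif chains and simple #if expressions, so for real code it is the safer choice.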
http://cboard.cprogramming.com/linux-programming/69443-prepro-clean-tool-sharpidsharpdef-mess.html
|
Beginning Portable Shell Scripting 186
Joe MacDonald writes "The earliest UNIX shell I encountered was the Bourne shell on a SPARCStation 2 at my university. As with many students of my generation, prior to that nearly all of my exposure to command line interfaces was some variant of DOS. I was quite proficient with the primitive scripting language that was available on such platforms but I immediately felt far out of my depth in this new environment. The commands seemed arcane, possibly dangerous, and almost immediately I regretted stepping into this unfamiliar wilderness without some sort of guide." Read below for the rest of Joe's thoughts.
It was probably a few weeks after that first, rough introduction that I returned for another round with this strange but somehow seductive tool, armed with a book I'd found and a determination to learn its secrets. I had no idea then that seventeen years later I'd still be learning new tricks, discovering new features and taking so much pleasure from sharing what I've learned with others. In fact, in those early forays into the realm of shells and scripting, I didn't even really have a strong concept of the separation between the shell and the operating system, so at the time I couldn't have conceived of how much fun I would have in later years discussing and debating the relative strengths and weaknesses of shells with friends and colleagues, but it is probably my favorite touchstone of computer geek conversation. Discussion of shell features, scripting tricks and semantics almost always results in my learning something new and interesting and having a new tool to add to my collection.
Peter's book, Beginning Portable Shell Scripting, therefore may sound like something intended as a gentle introduction, aimed at the initiate — the sort of text I'd been seeking to carry with me when I first attempted to write what I thought of as "batch files" on that now-ancient UNIX machine — but there's more truth in the subtitle, From Novice to Professional, than one might expect. He writes in an accessible, at times conversational, style and presents detailed technical information alongside a mixture of anecdotes and historical detail that does more than simply serve as a technical reference, it helps the reader understand a great deal about why things are the way they are. It was such an entertaining read that I frequently found myself skipping ahead, reading a section I knew was coming up, then resisting the urge to just keep going from that point. The first of these I encountered on page 18 in which he discusses the relative portability of printf in shell scripts. I knew what he knew, it's clearly non-portable and should be avoided, and thoroughly enjoyed the explanation of how he determined his (and by extension my) assumption was in error. Another on page 108 is the sort of good advice all UNIX users, not just those aiming to write good scripts, should take to heart. Many times, though, I've related precisely the same advice to colleagues to be met with confused stares, so it certainly bears repeating.
This book is a desktop reference in the truest sense of the term for me, it is an interesting, at times laugh-out-loud amusing, discussion of how to write shell scripts that will work on the widest possible range of Bourne-derived and POSIXly correct shells and why this is a desirable goal. In true UNIX tradition, the author doesn't provide simply a set of rules, but guidelines that will help you find your own way through the task of creating portable, maintainable shell scripts.
The real meat of the book begins in Chapter 3 (more on Chapter 2 in a moment) with a discussion of control structures and redirection, the latter being perhaps the defining characteristic of UNIX command line interfaces. I struggled somewhat with trying to decide if redirection would be better discussed after the material on how the shell parses tokens, presented in the first part of Chapter 4, but it does seem that the correct logical grouping is the one presented. It would be easy to get lost, for example, in the semantics of why the same streams of redirection tokens behave differently on different shells, but the key concept in the early chapters is that of many tools, each doing a specific task, working in concert. That objective is achieved quite effectively.
Chapters 5 and 6 go into detail (possibly too much for some, just right in my opinion) on how UNIX executes shells and how shells can spawn other shells, the costs and the benefits and the available alternatives for one to make an informed decision. Frequently there isn't one right answer whether some activity is better done in a script, in a shell function or in a subshell, but the material here will certainly aid in making those determinations. My personal bias being almost always toward writing a shell function — perhaps an indication I've had too much exposure to C programming, perhaps more due to a frugal upbringing and my own sense that spawning a whole new shell to do something is overkill — had me wishing for a larger section on the value of such constructs, but there should be enough there for me to win some converts to my cause.
By far the sections I learned the most from, however, would be Chapter 7: Shell Language Portability and Chapter 8: Utility Portability since I actively avoid exposure to other shells. I have my two preferred options and a third that I will use when presented with no alternative. While this does mean I know "my own" shells very well, it also means that I often bump into the furniture, so to speak, when I find myself using a new shell. These chapters haven't been immediately useful to me, but I know they're the ones that I'll be turning to in the future, I've needed something like them in the not-too-distant past, after all.
The final three chapters assemble the information presented in the earlier sections and suggest a sort of "best practices" approach to writing scripts. Concepts like "degrade gracefully" seem like pretty fundamental ideas when you hear them but I frequently find myself writing functions or scripts that don't do that at all when intended for a limited, usually singular, audience. It may seem like an okay idea when you're doing something for your own use, but when you write a complex function that works, then discover a bug in it two or three years later and have to return to fix it, it can be just as helpful for it to simply fail in an informative way as it would be to have detailed comments explaining the intent and the mechanics.
Truly, there's something here for everyone. In my office I'm considered something of an expert when it comes to complex regular expressions and the subtleties of using them in different editors and tools, but Chapter 2 and Appendix C both had enough new material in them that I found myself frequently making notes in the margins.
I have many, many books in my bookshelf in my office but nearly none on my desk. Beginning Portable Shell Scripting is going to be one of the very few that will be spending a great deal of time lying flat on my desk, in easy arm-reach.
You can purchase Beginning Portable Shell Scripting from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
more than this? (Score:3, Interesting)
Re: (Score:3, Insightful)
Yes.
... Well, come on, what do you expect me to say?
If you have a choice... (Score:5, Insightful)
Don't do it.
Shell scripts have horrible error handling, and quickly become a maintenance nightmare. These days, e.g. Python is installed everywhere you need to go.
Just do this:
def c(s): os.system(s)
and you have mostly covered the area where shell scripts excel. You can still write minimal "shell scripts" inside c().
Unluckily, you still *need* to grok shell scripting to some extent, or at least be able to read them. Just don't write them if you can help it.
Re:If you have a choice... (Score:5, Insightful)
;-)
Re: (Score:2, Insightful)
Re: (Score:2)
Oh yes?
How about counting each time a shell script has to spawn a separate process and python can do the job inside it?
And what when you stack together 5 such scripts?
Beyond maintainability, readability, sanity, one can also have some gains in speed and memory usage by using python;
Re: (Score:3, Insightful)
Why learn an arcane language like sh when you can learn a nice well structured language like Python and write better scripts?
Where I work pretty much everything has bash already (I install cygwin on all the Windows boxes. Of course, Python is usually there too
:) ).
If you already have a bash script (or find one via the Google), changing it is usually simpler than porting it to Python.
If you work with people that already know bash scripting but don't know Python using the lowest common denominator can be easier than training.
There is less memory overhead for a simple shell script than there is for a simple Python script.
This
Re:If you have a choice... (Score:4, Interesting)
Re: (Score:2, Interesting)
One reason may be that Python's startup time is an order of magnitude slower than a shell's, and sometimes it does matter, even for scripts of much more than 10 lines.
Re: (Score:2)
sed -n -e '// {
s/.*\(.*\).*/\1/p
}' musicbrainz.txt
Of course, I think yours is skipping the first one; maybe it's an album title? I leave fixing that as an exercise for the reader.
:)
Re:If you have a choice... (Score:4, Informative)
sed -n -e '/<title>/ {
s/.*<title>\(.*\)<\/title>.*/\1/p
q
}' < musicbrainz.xml
Forgot that "plain old text" isn't. (To verify I had it right, of course, I pasted it into a file with the original and then ran
!sort | uniq)
Re:If you have a choice... (Score:4, Insightful)
That's exactly what the Perl people said years ago, and we all know how well that worked out (for low maintenance sysadmin-type tasks). I know the sinking feeling I get every time I find a crontab entry pointing to a Perl script.
Re: (Score:2)
That's exactly what the Perl people said years ago, and we all know how well that worked out (for low maintenance sysadmin-type tasks). I know the sinking feeling I get every time I find a crontab entry pointing to a Perl script.
There is a big difference between Perl and Python. While you can write readable Perl, or write horribly opaque Python, there's a reason that Perl has a reputation as a write-only language and Python has a reputation for being quite readable.
Then there's the guy I worked with years ago who wrote (writes, I'm sure) all of his scripts in elisp...
Re:If you have a choice... (Score:4, Insightful)
That's not what they said years ago. People arguing to use Perl instead of Bash did so because Perl was just far more functional. Perl and Bash both have pretty terrible maintainability, but Perl is a million times more functional.
Python and Ruby have the functionality of Perl without the maintenance issues inherent in a language which is really a hodge-podge of ancient unix idioms.
Re:If you have a choice... (Score:5, Insightful)
maintenance issues inherent in a language which is really a hodge-podge of ancient unix idioms.
What a ridiculous claim, there are no "maintenance issues" with ancient idioms... The very fact that those techniques are ancient shows how incredibly flexible and useful they are. I'd much rather use conventions which are widely accepted and in many cases are required by Posix/SUS/XPG4 than find myself having to hack up my stuff to accommodate broad and pervasive changes such as those experienced when moving from python 2.x to python 3.x...
People who are constantly advocating against shell scripts tend to be those who see system administration as something it isn't; namely a low level development job. When in reality a sys admin uses shell scripts to glue together existing products of developers in order to manage administrative tasks. If I were an auto mechanic no one would propose that I learn to master a casting foundry and a milling machine in order to work on cars, those are clearly manufacturing/development tools AND certainly no good mechanic would suggest that using a wrench to fasten a nut to a bolt is "a hodge podge of ancient idioms" which should be replaced with whatever flavour of the week fastening system and power tool happens to be popular at the moment.
Sure there are some arcane aspects to shell scripting, but when I learned Unix in college they taught a thing called "the unix philosophy" which basically said that you should always use the smallest tools for the job, leverage the pipes/redirection, and build to a usable script which doesn't replicate existing functionality of ubiquitous tools. Seems like these days every python/perl wizard around fancies themselves an administrator and yet they waste a large portion of their time rewriting tried and true unixisms; sort, wc, cut, paste, tee, etc...
Also, get off my lawn!
Re: (Score:2)
I know the sinking feeling I get every time I find a crontab entry pointing to a Perl script.
For the younger folks out there, the feeling is similar to:
mono /script/all_your_code_belong_to_us.exe
Re: (Score:2, Funny)
I don't necessarily disagree with what you say here.
I'll just point out that your carpet is UGLY AS HELL.
And I Bash script quite a lot (and even have fun at that).
Re: (Score:2, Interesting)
I can't agree more. I switched over to python for all non-trivial scripting a couple of years ago, and I find it much more pleasant. I even sometimes use iPython instead of bash when I know I'll need to do something complex interactively.
By the way - if you like using python to control systems, you might also enjoy the func project [fedorahosted.org].
Re: (Score:2)
To me it's mixed feelings there
Even though python is very easy, you can't beat the ease of use of find / sed / others
And in python you would need to go to popen/pclose usually, to get the output and return values
Re: (Score:2)
from commands import getoutput as cmd
foo = cmd("echo bar")
I often find myself writing small, special-purpose wrappers for some commands, like ps or find. I liked the MS PowerShell's concept of piping objects instead of text. Imagine... "ps | grep name=spam | kill". Yeah, sh sucks.
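As an aside, the commands module quoted above is Python 2-only; it was removed in Python 3, where subprocess.getoutput is the drop-in replacement (a minimal sketch of the same one-liner):

```python
import subprocess

def cmd(s):
    # Python 3 equivalent of `from commands import getoutput as cmd`:
    # runs s through the shell and returns its combined output,
    # with trailing newlines stripped.
    return subprocess.getoutput(s)

foo = cmd("echo bar")
```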
Re:If you have a choice... (Score:5, Informative)
Python is nice, but hardly installed everywhere. It's available on Linux certainly, but not always on AIX or Solaris. Yes, it is just an installation away, but many of the systems I maintain require change management procedures to even chmod a file.
Shell scripts do have decent error handling for what they need to do. With traps and proper usage of error codes, they are not much different from lower level languages.
I'd agree that I now *prefer* to write longer scripts in Python. However, few of the people I work with know Python, or even Perl. They can get around with korn and bourne as these are the default scripting languages on more traditional Unix systems.
Which comes down to the gist of the issue. Do you write code in a language you prefer or one that can be maintained by the admins? I'd argue that it doesn't matter what language you use. If you write poor code in shell you will likely write poor code in Python too.
Re: (Score:3, Interesting)
Re: (Score:2)
Sounds like a rather dysfunctional working environment.
I had something like this in mind when I said "everywhere you need to go" (as opposed to just saying "everywhere").
Re: (Score:2, Informative)
> These days, e.g. Python is installed everywhere you need to go.
Sorry, but no, it isn't.
Re:If you have a choice... (Score:4, Informative)
Shell scripts have horrible error handling, and quickly become a maintenance nightmare. These days, e.g. Python is installed everywhere you need to go.
Python doesn't help much over shell scripts without extra libraries, which may or may not be present on any given system.
Python has changed incompatibly several times already.
Python has a large startup overhead:
20 seconds: 1000x python -c 'print("test")'
2 seconds: 1000x sh -c 'echo test'
Python is clumsy to use for gluing several programs together.
Python is not the same syntax as the shell. If you don't learn the shell then your day-to-day command lines are gimped.
So Ruby or Python or anything else is better for writing actual programs that do anything complicated, but there are plenty of appropriate uses for shell scripting. Ruby is actually much better... since it has a sensible syntax you could make a rubysh that wouldn't suck.
Re: (Score:2)
Python doesn't help much over shell scripts without extra libraries, which may or may not be present on any given system.
What are those extra libraries without which you can't work? Python standard library is absolutely enough for shell script like tasks, both trivial and non-trivial.
Python has changed incompatibly several times already.
It's actually quite rare. Python 3.0 was created explicitly to allow breaking of the compatibility. If you write scripts that use the new stuff, of course they won't work on older interpreters. But old scripts still work on new python 2.x interpreters.
Or do you have a concrete example in mind?
Python has a large startup overhead:
So a 0.02 second startup time is a problem for you? If you
Re: (Score:2)
.
The advantages are rapid prototyping, and many places to stop putting work into the project without losing functionality. If you have a really well defined large project, big design up front ma
Re: (Score:2)
c = os.system
Re: (Score:2)
Or better, just...
c = os.system
Ok, I cheated. I usually have it like this:
def c(s):
print ">",s
os.system(s)
Re: (Score:2)
"if [ $? != 0 ]; then" goes a long way and you get bonus points for wrapping it in a function.
What I like about the python way is that you don't have to write any of that - if something fails, the whole function fails (unless you specifically catch the exception). No need for explicit error handling to clutter your scripts.
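The exception-based behaviour described above is what subprocess gives you with check=True: a failing command raises instead of being silently ignored. A sketch, where run() is a hypothetical helper name:

```python
import subprocess

def run(cmd):
    # check=True raises CalledProcessError on a non-zero exit status,
    # so command failures propagate like any other Python exception
    # instead of needing an explicit `if [ $? != 0 ]` after every call.
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

print(run("echo hello"))
```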
Re: (Score:2)
Can python really replace shell scripting?
Yes, for everything apart from script you run with 'source' that manipulate the environment directly.
I'd really like to know. When I write scripts, I use intensively pipes and awk.
Would python allow me to do the same kind of things that awk do ?
Yes, you can call out for a shell by using os.system, os.popen, subprocess module.
I do think awk is pretty redundant though.
Python still can't replace quick scripting (Score:4, Insightful)
I find shell scripts have a nasty habit of not working quite right when moved between Linux, the BSDs and the Mac, and it's always a pain to write scripts that work correctly with spaces in file names.
Why isn't there (or is there?) a simple python cheat guide, or library, that do the same things as grep, awk, find, mv and xargs?
Re:Python still can't replace quick scripting (Score:4, Informative)
Why isn't there (or is there?) a simple python cheat guide, or library, that do the same things as grep, awk, find, mv and xargs?
re.findall, s.split(), os.walk, shutil.move,
" ".join
Re: (Score:2)
I've run into one big problem replacing find/xargs in Python: There's no good equivalent of the find '-print0' and xargs '-0' options.
This seems to work:
fs = os.popen('find -print0').read().split('\0')
Re: (Score:2)
Your idea works until you have a huge number of files (not uncommon for a backup script) and run out of memory on the
.read() call. Not pretty. To avoid that, you have to use .readline() or the file iterator, both of which have the newline limitation I mentioned.
So, it's back to actual coding then
:-). I don't have a oneliner for that, but shouldn't be too hard:
b = f.read(blocksize)
spl = b.split('\0')
... do stuff with the pieces, and reloop
Always add the last piece from the split to the beginning of the next block.
This way, you can make a memory-efficient generator that only allocates data when it needs it.
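Written out, the block-splitting generator described above looks like this (a sketch: nul_split is a made-up name, and it assumes a text-mode stream such as the output of find -print0):

```python
def nul_split(stream, blocksize=65536):
    """Yield NUL-terminated records from a file object without reading
    the whole output into memory (cf. `find -print0`)."""
    pending = ''
    while True:
        block = stream.read(blocksize)
        if not block:
            break
        parts = (pending + block).split('\0')
        pending = parts.pop()   # last piece may be incomplete; carry it over
        for part in parts:
            yield part
    if pending:                 # flush a final record with no trailing NUL
        yield pending
```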
Re: (Score:2)
Why isn't there (or is there?) a simple python cheat guide, or library, that do the same things as grep, awk, find, mv and xargs?
Forgot to mention - no need to do it all in raw python. Just call out to shell when you feel like it, if you are sure it can stay unix-only. You can do wonders with some shell invocation + os.popen(...).read()
Shell scripts are a glue language (Score:5, Insightful)
One does not write a web server in Bash, one wraps a webserver in it, pipes its output to a log analyzer, restarts it automatically if it crashes, and so on.
The most important part of any UNIX-derived shell language is not its syntax or power but the fact it lets you construct large ad-hoc applications out of a toolbox of tens of thousands of pieces.
This is where all other operating systems (that I've ever used, and that's 30-40) have failed.
Any serious developer should know several glue languages, Unix shells being the most flexible and accessible.
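A restart-it-if-it-crashes wrapper of the kind described takes only a few lines in any glue language. A sketch: supervise is a hypothetical helper, and a real supervisor would also want logging and back-off limits:

```python
import subprocess
import time

def supervise(cmd, delay=1.0, max_restarts=None):
    # Re-run cmd whenever it exits with a non-zero status;
    # a clean exit (status 0) stops the loop.
    restarts = 0
    while True:
        rc = subprocess.call(cmd)
        if rc == 0:
            return rc
        restarts += 1
        if max_restarts is not None and restarts >= max_restarts:
            return rc
        time.sleep(delay)  # brief pause before restarting
```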
Re:Shell scripts are a glue language (Score:5, Interesting)
I accept your challenge.
:-D
But seriously, yeah, you're absolutely right. Ooh, but a basic web server written as a Bourne shell script called by inetd would be so freaking cool....
Oh, no. Somebody actually did that [debian-adm...ration.org].... Yikes! Now I'm scared.
Re: (Score:2)
"The most important part of any UNIX-derived shell langauge is not its syntax or power but the fact it lets you construct large ad-hoc applications out of a toolbox of tens of thousands of pieces."
What's the advantage supposed to be? Constructing a "large ad-hoc application" doesn't sound like a great idea to me.
Re:Shell scripts are a glue language (Score:5, Interesting)
One of the points might be that there's a fairly specialized task which takes a person six hours to do, but which is NEARLY all done automatically -- just a bit of hand twiddling.
Lemme give an example that isn't portable. One of the things I do fairly frequently is take about six large toolchains distributed to me as binaries and source tarballs, and turn them into patches against upstream versions, reorganize them, delete some unused files, create configuration files that refer to the binaries, generate md5sums, and so on.
This is a task which, if I sit down at 10AM and start typing, is usually done by about 4PM. Testing takes a bit longer, and usually uncovers SOME kind of typo or thing I forgot to do.
Enter the shell script.
I tell the script where the files are, and I walk away. An hour later I have the results. Testing is also automated (another script). But testing is also uneventful, because the script never forgets a step, makes a typo, or otherwise screws up.
By the second time I did this, the script had saved me time. By the third, it had saved me close to a full working day. By now it's closer to a week of my time that wasn't spent messing with this stuff.
Portability isn't entirely crucial here, you might think? Well, not ENTIRELY crucial, except that when they had me start doing this on a new box running a different variety of Linux, the total time I spent revising the script was 0 minutes.
Re: (Score:2)
I can see the value of shell scripts for small or possibly medium size ad-hoc applications but not for large ones.
Re:Shell scripts are a glue language (Score:5, Funny)
I think "One does not write a web server in Bash" is like "One does not simply walk into Mordor." You're practically daring short people with hairy feet to attempt it.
But your point is basically good.
:)
Re: (Score:2)
Yeah but Linus is already working on the kernel...
Re: (Score:2)
I'm mentioning this just to point out that these distinctions of granularity are a bit artificial. Yes, glue languages give you composability at the coarse end of the scale. But they're often quite acceptable as programming languages as well.
Re: (Score:2)
Uh oh. Someone should have warned this guy [wikidot.com].
Re: (Score:2)
The most important part of any UNIX-derived shell language is not its syntax or power but the fact it lets you construct large ad-hoc applications out of a toolbox of tens of thousands of pieces.
Like constructing a Windows app out of ActiveX components?
No, I'm not saying that ActiveX is as good a glue as the Unix equivalents, or that Windows is as good as Unix. I'm just saying that there's more to Unix than its ability to glue stuff together.
This is where all other operating systems (that I've ever used, and that's 30-40) have failed.
I guess your definition of "failure" is "Pieter hates using it". By any other measure, there are a fair number of non-failed OSs that aren't Unix or Unix-like.
Re:Shell scripts are a glue language (Score:5, Funny)
Wow. That's a really brilliant question.
Wouldn't it have been cool if someone had written a book on the common ground of the major shells covering how to use them as a single highly-accessible and universally-available language?
:)
PSS can be a recurring problem (Score:3, Insightful)
I'm sure all of us at one time or another have had a shell script we've relied on for years fail miserably when bringing it to a new environment. The sad fact is, shell scripts were never meant to be programming languages in and of themselves, and I wonder if, knowing what we know now, it isn't overly ego-driven and masochistic to try to take this feature -- tied to a shell which is tied to an operating system -- and promote it beyond its competency.
So, let's say we take the PSS principles seriously, and abstract away any non-platform-agnostic features you can think of. A few years down the road, you've got PSS all over the shop and you want to upgrade to a different platform nominally supporting your shell of choice. Even if you shake off PSS features you thought could create incompatibilities, you discover the new system buffers differently. Or added a parameter somewhere. Point is, if you went with something like Perl which is designed for cross-compatibility you would have been fine, but now you're all wet.
Shell scripting is good for what it's meant for, but at the risk of oversimplifying with a tool analogy, I'm concerned that this falls into the trap of "If all you have is a hammer, all your problems look like nails".
Re: (Score:2)
You know, that's exactly why a book on it is useful -- because it turns out that you can, in fact, write scripts which port quite nicely.
Seriously, "buffers differently"? What's THAT supposed to mean?
That said, I won't deny for a moment that there are things shell isn't a good choice for. In fact, one of the first sections in the book is under the heading "What Shell Scripting Isn't". Because sometimes the best way to write a good program is to pick the right language to begin with. Often, even.
But some
Re: (Score:2)
Seriously, "buffers differently"? What's THAT supposed to mean?
One example is how and when data gets written to disk in the absence of a flush().
Re: (Score:2)
And what "flush()" do you expect to see in a shell script?
You're thinking at a level that generally doesn't apply well to shell scripts. Scripts rarely deal with questions such as "has this been actually written to disk" -- because that's at the wrong level. If you need that information, you shouldn't be writing in shell anyway.
But normally you don't...
Re: (Score:2)
I agree with your overall message, but...
Just some general comments... (Score:5, Informative)
First off, in the interests of full disclosure, Joe MacDonald is one of my coworkers.
Anyway... The big surprise to me was the word "Beginning", which somehow showed up in the publisher's cover pages, but which I didn't know about during the writing process. My tech reviewer was Gary V. Vaughan (yes, the autoconf/libtool guy). I bounced material off a number of seasoned expert scripters during the process. Basically, my goal was to write a book that I could use as a reference, and which would teach me something.
I succeeded beyond my wildest dreams. The discovery that printf(1) is essentially universal these days was a complete shock to me; I had no idea it was portable. During my first pass on the regular expressions section, I started by writing down what I believed I knew about sed, awk, etcetera. Then I tested it... and had to revise most of it. A number of things I was used to were GNU or BSD extensions. When Gary sent the chapter back for tech review, he'd flagged most of these things, because he "knew" the same things I did.
So everything there should be pretty thoroughly checked out now -- I realized very early on that this field was full of things "everyone knows". Many of them wrong. We tested things on a field of around 30 different versions of Unix and Linux. We tested them on unusual installs, we tested them on special cases.
Why?
Because portable shell is an incredibly portable language, and sometimes that matters. Because shell is a very powerful language, too. Because sometimes shell is all you have -- and because sometimes shell is more expressive for a task than your other choices. I love me some C, I program in C by preference much of the time -- but there are a lot of tasks I'll do in shell rather than in C. There are similarly many tasks I'd rather write in shell than in perl. Shell is what make uses to run commands, and sometimes you need to write something clever in shell because make doesn't have quite the right feature.
In short, it's something I have found consistently useful, day in and day out, for about twenty years now. I just wish I'd realized how much more there was to learn years ago, I coulda saved a lot of time...
:)
And, to answer a question hinted at earlier: Yes, now that this book exists, I keep a copy on my desk. I look stuff up in it about once a week.
Nobody's seriously comparing C and shell (Score:2)
Saying "I love C but there are things that are better in shell" is completely anachronistic.
Seriously. The question's been settled for over 20 years.
And there are other languages, you know. The question is more whether to use Python (for example) instead of shell in some cases, and when.
Re: (Score:3, Interesting)
I compare C and shell all the time. Sometimes the answer surprises me. e.g., until I knew about printf(1), I sometimes went to C if I needed to pretty-print output. Now sometimes I don't.
I will happily mix and match multiple languages; one of my first shipping products was written in shell, perl, and C. Each did some things well that the others didn't...
I tend not to use Ruby for things that I want to be portable, because not everything has Ruby around yet. I tend to avoid Python because it just never
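To make the printf(1) point concrete, here's a minimal sketch of pretty-printed columns, the kind of job that used to send me to C (the filenames and sizes are made up for the example):

```shell
#!/bin/sh
# printf(1) aligns columns portably; the format string is reused
# for each extra group of arguments, so one call prints both rows.
printf '%-12s %8s\n' "name" "bytes"
printf '%-12s %8d\n' "report.txt" 1482 "notes.md" 96
```

Because the format is recycled, you can feed printf a whole list of name/size pairs in one invocation.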
Re: (Score:3)
Care to elaborate?
I have seen some rather complex scripts that are portable that do some useful things.
Re: (Score:3, Interesting)
As usual, it comes down to use cases. Describe the useful things that are done. One might choose to write a shell script to perform some pure mathematical utility function, but this certainly isn't the usual role of such scripts. Rather, one uses a shell script when accessing files (logs, etc.) on the disk, or when opening sockets, or when spawning host level commands. Other device level access is often required, for instance, a local or UTC clock might be consulted, requiring knowledge of timezones. A
Re:portable shell scripting is an oxymoron (Score:4, Informative)
As usual, it comes down to use cases. Describe the useful things that are done.
Take a naked box and boot it (/etc/init.d/*) ?
I know it's a bit trivial but it still qualifies as useful in my book.
Actually
/etc is just pretty much a collection of fairly useful shell scripts. I've always found it interesting that Unix was mostly held together by /bin/sh (aka /bin/bash on a lot of systems nowadays) and spit. And that it worked.
To take one of the posts above where the poster had been exposed to DOS: the DOS system (although it wasn't really a system, merely a program loader) was configured by the autoexec script. All the Unixes do the same with a number of chained scripts (and their order can even change dynamically nowadays), all running sh (or an extended version of it).
I still wonder at it sometimes. It's simple and accessible on one side. And it can degenerate into an awful mess on the other
:) (less so nowadays thankfully)
More ?
Anyway, wanted something useful the shell could do? How about running the whole operating system (find a service that isn't actually handled by a #!/bin/sh script...).
Re:portable shell scripting is an oxymoron (Score:4, Informative)
Portability isn't boolean.
I wrote a wrapper around cdrecord to clean up the UI, automatically handle things like creating an isofs from directories, and so on.
It's not 100% portable; every new system, I change the path to cdrecord, the device spec for the CD drive, and the command used to eject a CD.
Everything ELSE stays the same, and I don't need to remember how to use mkisofs, or anything like it. Directories, bzipped images, whatever; it gets burned correctly. I win.
If the script were not written in otherwise-portable shell, it might not work on the broad variety of boxes I've wanted to use it on.
I've done scripts to handle tasks like "open this file" (not as flexible or smart as the OS X one, but quite good about various compressed tarballs and archives). Surprisingly portable.
I have a script for the idiom of "for every file named or provided as standard input, run it through this filter in place". Repeating commands at intervals, for a given number of times, until they fail... Tons of little utilities like this that save me time.
If you want complete applications with no dependencies, that's harder to find. That said... Have you ever used autoconf to configure something? That's a fair bit of portable shell right there...
Re: (Score:3, Insightful)
Yes, you're quite right that this says a lot about poor user interface choices in some utilities.
Thinking about it more: One of the key applications of portability involves scripts which are never ported.
It's that I don't use only one machine. I use a Mac desktop, a BSD server, and some Linux servers. Even if I'm just typing "for i in..." on the command line, I don't want to try to remember three or four different sets of commands to work across these environments. I want to write things that work the s
Re:portable shell scripting is an oxymoron (Score:5, Interesting)
Well, there is some truth to the GPP's comment. Linux and Mac OS X don't even agree on how to tell echo not to print a newline or how to enable extended regular expression mode in sed. May heaven help you if you want to do something as esoteric as creating or mounting a filesystem, creating or mounting a disk image/ramdisk, talk to a USB device in any way, get a list of processes in any useful way, etc. There's a very big lack of standardization in a lot of things you might like to do with scripts, in other words. The Single UNIX Spec and POSIX are not quite sufficient, but more annoyingly, most OSes (Linux, *BSD) out there don't even come close to conforming to it, so you end up with this dichotomy between BSD behavior and AT&T behavior.
That said, a lot of things are standardized, and many others can be worked around with clever use of variables (or possibly eval in a few extreme cases). I've written chapters on the subject myself. The big things you need to remember are that $(( $FOO + 3 )) is not portable, nor for ((...)), nor >&, nor anything involving extended regexp except using Perl, that even "the one true awk" is not quite SUS-compliant, GNU awk doubly so, bash triply so, that you should use printf instead of echo for output if you don't want newlines, that signal numbers are not portable (for trap), that proper quoting of arguments is crucial, and that you need to work with the bare minimum base behavior of utilities (using few or no flags) if you expect any hope of portability without needing to make platform-specific changes.
For some quick examples of some interesting portability issues, read some of my comments in the games at shellscriptgames.com or search for the word "compatibility" in Apple's "Shell Scripting Primer". It's a real eye opener to see how many portability problems exist even for fairly simple shell scripts.
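To make the echo/printf point concrete, a tiny sketch (plain POSIX sh assumed):

```shell
#!/bin/sh
# "echo -n" may print the "-n" literally on some systems, and
# "echo -e" is equally unportable. printf(1) covers both jobs:
printf '%s' "no trailing newline"   # instead of echo -n
printf '\n'
printf 'tab:\there\n'               # instead of echo -e
```

The printf escapes (\n, \t) behave the same everywhere printf exists, which is essentially everywhere these days.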
Re: (Score:2)
'sed', 'perl', 'awk' and 'mount' are not part of the shell.
for and trap are part of a posix standard shell.
It's subtle, but the differences are easy to understand. The best example I can think of: most Linux systems I've used use bash, but Ubuntu makes
/bin/sh a symbolic link to /bin/dash because it's faster. Many scripts are broken because they expect /bin/sh to be a symbolic link to /bin/bash. This is not an assumption you can make.
This causes a problem for example when you use bash functions like popd w
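For what it's worth, the pushd/popd case has an easy portable stand-in (a sketch; dash has no pushd at all):

```shell
#!/bin/sh
# pushd/popd are bash extensions; plain sh can simply remember
# the old directory in a variable instead.
oldpwd=$(pwd)
cd /tmp || exit 1
# ... do some work in /tmp ...
cd "$oldpwd" || exit 1
pwd   # back where we started
```

For deeper stacks you'd keep a list of directories in a variable, but one level covers the common case.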
Re: (Score:2)
Fair enough about those things being extensions, though many are supported so broadly (the $(()) math syntax, for example) that it surprises people when they don't work. Unless you code on the Almquist or Debian Almquist shell regularly, chances are very good that you'll be in for a rude awakening if you ever switch to it. Things that nearly every
/bin/sh implementation on the planet has provided for a decade or more (not just bash) simply don't work in ash or dash because they depend on behavior that isn
Re: (Score:2)
Good ones, but you forgot the biggie that I see people get tripped-up on the most, test.
Sometimes test -a means test -e and sometimes it is meant to be used as [ exp1 -a exp2 ]. Most often these days the confusion is on Solaris using ucb.
So the moral is, never use -a, -o, or -e in tests.
(Yes there were ancient versions of test that did not have AND and OR, so you use && and || from the shell instead.)
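Concretely, the safe spelling of a compound test looks like this (a sketch using a throwaway temp file):

```shell
#!/bin/sh
tmp=$(mktemp) || exit 1
# Avoid "[ -f "$tmp" -a -r "$tmp" ]": the -a/-o operators parse
# differently across test implementations. Chain separate test
# invocations with the shell's own && and || instead:
if [ -f "$tmp" ] && [ -r "$tmp" ]; then
    echo "readable regular file"
fi
rm -f "$tmp"
```

Each bracket expression is then a simple, unambiguous test, and the shell itself does the AND.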
Re: (Score:2)
From my experience, shell scripts that use a significant amount of non-shell builtin commands are not portable. The typical shell script is highly dependent on the awk, grep, sed, etc. version on the system. And that varies not only between platforms, but between OS versions.
Simple things, like the improved mv that's floating around, tend to be easier to port. But the chance of a successful port is inversely proportional to the complexity of the script. As well, usefulness, while a loaded term, tends to be
Re: (Score:2)
Care to elaborate?
I have seen some rather complex scripts that are portable that do some useful things.
Here's one: most uses of "find" together with "xargs"...
By default, "find" prints out newline-delimited filenames, while "xargs" consumes whitespace delimited values... So if you've got filenames with spaces in them, you're in trouble.
The GNU solution is the "--print0" argument to "find", and the "-0" argument to "xargs" - which uses the zero-byte as a delimiter for both commands... The problem is, this isn't supported in other implementations of "find".
This is just one example of a command where the comm
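One portable workaround for that example: POSIX find's "-exec cmd {} +" form, which batches arguments the way xargs does but passes each filename intact (a sketch with throwaway files):

```shell
#!/bin/sh
dir=$(mktemp -d) || exit 1
touch "$dir/file with spaces.txt" "$dir/plain.txt"
# "-exec ... {} +" batches filenames like xargs, but never splits
# them on whitespace, so both files are counted here:
find "$dir" -type f -exec printf '%s\n' {} + | wc -l
rm -rf "$dir"
```

It doesn't cover every xargs use (no -n, no parallelism), but for the common "run this on everything found" case it's both safe and standard.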
Re:portable shell scripting is an oxymoron (Score:5, Funny)
You're in luck! I have recently heard of a book that could help with your scripting...
Re: (Score:3, Funny)
That link should really be [thedailywtf.com]
(I should write a shell script to correct my comments.)
Bash script, ASPX page, culture clash.
Re: (Score:3, Informative)
Check out GNU autoconf. That's a good example of how a script works on *nix box. And, yes, they are useful!
Autoconf is also a horrible piece of crap. One of the better reasons to hate the concept of shell scripts, actually.
Program('hello.c')
or
SharedLibrary('foo', ['f1.c', 'f2.c', 'f3.c'])
And that's pretty much it. I'm not sure all the horrors required by autoconf would fit into a slashdot posting.
Re: (Score:2)
I think it could fill an entire book.
Re: (Score:2)
Most of those issues are primarily issues with third party autoconf m4 modules, or with people who don't know what they are doing adding functionality to configure.ac. That is not really autoconf's fault. (The fact that it is easy to mess things up if you don't know what you are doing might be autoconf's fault). And proper use of autoconf can and does make portability easier, although I will admit little outside the GNU project uses autoconf properly, and there are even exceptions within the GNU project.
The
Re: (Score:2)
Sorry, but no, that is nowhere near 'pretty much it' - as soon as you want to link in with libraries in a cross platform manner, generate platform specific release mechanisms (think Mac framework and app bundles), or do anything which doesn't fall into your example there, you're back in wild and woolly territory... and yes, those things are beyond autotools, but they're also beyond the core scons too.
Re: (Score:2)
You miss the point.
Autoconf is godawful crap, no doubt about it. But people still use it. You know why?
*IT WORKS*. It works on Linux. It works on Solaris. It works on BSD. It works on Tru64. It works on Cygwin. It works all over the place. A decently written autoconf script works for cross-compiling. It doesn't require a particular language to be installed.
I've used SCons-based projects. It was a nuisance to get it set up and working. Sure, it does some stuff nicely -- but I couldn't just grab i
Re: (Score:2)
It's obvious that you can buy simplicity by special casing. What's hard is keeping a bit of it when you generalize. I had the same exp
Re: (Score:2)
Hmm... did someone re-invent imake?
No, SCons does not generate makefiles.
SCons here was just an example to illustrate how much Automake sucks. Obviously we have other systems like CMake as well.
Re:Hardly Need a Whole Book (Score:5, Informative)
Oracle, too... (Score:2)
All the DBA's I have worked with (as well as C programmers worth their salt) have tended to use ksh by default.
I think Oracle's documentation always uses korn, and maybe I have just worked with a bunch of old IBM'ers..
Re: (Score:3, Insightful)
Sounds great.
Now, off the top of your head, what happens to variables set in the last component of a pipeline in ksh? Do you know whether it's the same on systems where "/bin/ksh" is actually pdksh?
... Oh, and just for reference, about half the Linux systems our IT department installs don't have ksh. No, I don't know why. (I only know because I can't log into them because my default login shell is /bin/ksh...)
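For anyone who hasn't hit it, here's the pipeline-variable gotcha in a sketch (the behavior differs by shell, which is exactly the point):

```shell
#!/bin/sh
var=initial
# In ksh (and zsh) the last pipeline component runs in the current
# shell, so var becomes "hello"; in bash and most sh's it runs in
# a subshell, so var is still "initial" afterwards.
echo hello | read var
echo "after pipe: $var"
# Command substitution captures the value portably in any shell:
var=$(echo hello)
echo "after substitution: $var"
```

If a script silently depends on the ksh behavior, it will "work" right up until someone runs it under a different /bin/sh.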
Re: (Score:2)
ksh isn't part of Linux normally. Sometimes it's pdksh and sometimes it's genuine AT&T ksh. Same with Cygwin. You have to explicitly add it, too.
I had to port some shell scripts from Ultrix to SunOS, Solaris, HP-UX, Irix and OSF/1. ksh wasn't part of SunOS. However, we had bought a license for ksh and put it in
/bin/ksh everywhere.
I needed functions and Ultrix
/bin/sh didn't have them. IIRC /bin/sh5 (?) did.
Anyways, ksh had to be licensed back then. This was Linux 1.09 era. pdksh wasn't even clos
Re: (Score:2)
Exactly -- you can't just "write for ksh", and even if you do, it's not universal.
You can do pretty well writing for a common subset of ksh88/pdksh, but I'd rather do the extra few minutes' work and write for plain old POSIX shell by default.
Re: (Score:2)
Want your shell to be portable? Write it in Korn Shell. You will find this shell on 15 year old *NIX boxes and the script will still work with bash on Linux.
Except when it doesn't.
There are differences between bash and ksh, and between different versions of ksh. In some cases the difference means syntax errors, but sometimes it can be more subtle and a script written for ksh runs just fine but doesn't do in bash what it did in ksh, or doesn't do in pdksh what it does in ksh88.
It is also not the case that ksh exists everywhere. It wasn't on MacOS until 10.4. It isn't on FreeBSD by default. It isn't on many Linux boxes, particularly the small network devices
Re: (Score:2)
But you gotta love that mic stand [hrgiger.com] H.R. Giger created for Jonathan Davis.
Re: (Score:2)
Not so much a guide, but "man bash"* or "info coreutils" are incredible resources for those common bits you routinely forget. If you stick to what's in coreutils, I think you've got a pretty good chance of portability.
*for systems that use bash as the shell, of course. Frankly, I think there might be too much in there, though and some of the builtins should have their own man pages.
Re: (Score:2)
Actually, that's useful in bash too sometimes.
:)
Anyway, the reason I bother is this:
I wrote a bunch of scripts in 1992 or so. I'm still using them. I haven't touched any of them in years, except for one update to deal with Linux differing from BSD in where the actual errno definitions are.
I don't have to worry about what shells are installed, I don't have to guess whether bash is "/bin/bash" or "/usr/pkg/bin/bash", I don't have to wonder whether the sysadmin bothered to install the "GNU utilities". I ju
Re: (Score:2, Insightful)
So WHO does not? (Score:2)
I've heard that said for 15 years, that might have been true 15 years ago, but now
... ?
News flash (Score:2)
Linux is obliterating commercial Unices.
It has in part something to do with the backspace key working out of the box without typing "stty erase ^H" every time. (Have they fixed that on Solaris, yet?)
Re:Oh I hate those [ "X$var" == "X" ] (Score:5, Informative)
Why bother with portable shell scripts, seriously? Everybody has bash installed, and/or zsh that is mostly compatible, and even then you have bash anyway. I understand retro-nostalgia and all that, but necrophilia is overrated
False.
The majority of systems I work on these days and the majority of systems I have worked on since the mid 90's have not had bash installed. That includes systems running FreeBSD, NetBSD, OpenBSD, AIX, Tru64, Solaris, MacOS, and even Linux. Current versions of some of those will usually have bash in a default installation, but some still do not. Companies running stable systems as important parts of their business do not generally upgrade their OS's just for the sake of novelty. Running older systems isn't usually about nostalgia or necrophilia, it's more often about not having any compelling reason to upgrade. There is also a system hygiene practice common on the BSD's of keeping the base system minimal and only adding on what is needed, a practice that helps in keeping systems secure and stable because they are easier to fully understand. This is also common in many virtualization environments, where a running OS instance is likely to exist for a very narrow purpose and intentionally have a stripped-down set of utilities fit to that narrow purpose.
Re: (Score:2)
Recent versions of MacOSX have bash by default. By recent I mean 10.4 had bash, and probably 10.3 but I'm not sure.
All Linux distribs have had bash installed by default for ever. And by all I mean 99.999% of the installed base, I'm sure you can find a silly exception.
Recent versions of Tru64
... do not exist.
As for the BSDs, Netcraft confirms it,
.. err. I don't know, what's their default shell?
And as for Solaris, its default shell -- a Soviet-era knock off of the original Unics v1.0 -- is so fucktarded tha
Re: (Score:2)
Recent versions of MacOSX have bash by default. By recent I mean 10.4 had bash, and probably 10.3 but I'm not sure.
10.3 had it. Prior versions did not.
All Linux distribs have had bash installed by default for ever. And by all I mean 99.999% of the installed base, I'm sure you can find a silly exception.
A large fraction of the Linux systems I work with are embedded versions which use things which seem to be descended from the BSD (Almquist) sh. Talking about installed base numbers is silly, because it is probably also true that 90%+ of those systems have never been seen by a competent sysadmin who has any intention of ever using anything Unixy that isn't a major Linux distro. They might as well be Windows for all of the relevanc
Sry, I don't do archaeology (Score:2)
I know paleounices suck. Does it matter to 99% of shell writers, is the question.
Re: (Score:3, Insightful)
Well, I suppose because the contents are different. We're answering different questions.
I would never dream of discouraging people from looking at and using the various free guides out there. The autoconf manual is full of useful information about portability, too.
There's more than one article or book because there's more than one topic. "Shell Scripting" is not a single topic; "portable shell" is very different from "advanced bash". It solves different problems, and is useful for different circumstance
Re: (Score:2)
Re: (Score:2)
Honestly, I just never got into python. I like perl (and hate it), and I like Ruby (and love it), but Python never "clicked" for me. Python and Tcl are the "scripting languages I just don't enjoy working in".
But... I also don't have it on everything I use. And I could get it, but that's another distraction from Getting The Job Done. So I do a ton of work in shell. Especially work that's entirely built around running other commands.
Re: (Score:2)
Honestly, I just never got into python. I like perl (and hate it), and I like Ruby (and love it), but Python never "clicked" for me. Python and Tcl are the "scripting languages I just don't enjoy working in".
I believe this is a common mindset issue. Ruby borrowed lots of stuff from Perl (which they thought was good, or appealing to converts - I'm not sure), including TIMTOWTDI, whereas the Python community thinks of Perl as a warning example more than anything else. My theory is that the different languages balance between simplicity and 'regularity' (Python) vs. 'playfulness' and 'interesting' solutions (Ruby, Perl).
Re: (Score:2)
That may be. I find Ruby much better than perl; perl always struck me as inherently ugly. Perl tolerates careful programming and writing for clarity; Ruby encourages it.
Python felt a bit too constrictive to me, if you'll pardon the pun. And I didn't like the indentation (although I grant the underlying intent, I just don't think it pays off enough).
Re: (Score:2)
Wherever you want to do an awful lot of things with (the input and output of) system utilities and/or bash builtins.
Whenever you try to do something nontrivial in between the system utilities, you start losing the benefit of shell scripts.
Re: (Score:2)
Wherever you want to do an awful lot of things with (the input and output of) system utilities and/or bash builtins. Look at the gargantuan effort that is the Knoppix boot scripts - I seriously doubt it would make sense to rewrite those in Python or Perl, since nearly every line is a pipe between utilities or redirection. And it works well.
I'm not familiar with Knoppix, but that is true for all boot script systems, and beyond being hard to do in something else, there are cases where it may be impossible. There are still systems out there that boot on very small root filesystems that do not have any shared libraries available until the rest of their storage is mounted, so you've got to have something small and statically linked to run the scripts that get your whole software edifice in place. You can have similar problems in some failure stat
Going the Other Direction (Score:2)
My first Unix shell was the Mashey shell that preceded Bourne shell, on a PDP-11 running v6.
:-) Before that I'd used RSTS-11, HPUX Basic, the IBM System 34 shell (OCL was fairly powerful, though less than sh), CMS, Plato, and various things with punch cards. Unix shell seemed powerful, flexible, and really really appropriate. I later used other shells like TSO and occasionally CP/M; I forget if I used VMS and ddt before DOS or only after.
Eventually MS-DOS came around, and it was painfully clumsy. It wa
Re: (Score:2)
VMS looked far more like RSTS-11 or RSX-11 than like Unix. I don't know if CP/M was inspired by VMS or by its predecessors. DOS certainly wasn't inspired by Unix, at least in the original versions; they clearly didn't have any of the concepts, even though they had a machine that was almost as powerful as a small PDP-11 and more powerful than an IBM System/34 (which had a quite nice shell), and later versions of DOS (around 3ish?) started emulating some Unix shell syntax while still not having the underpin
The goal of proDA is to identify differentially abundant proteins in label-free mass spectrometry data. The main challenge with this kind of data is the large number of missing values. The missing values don't occur randomly, but especially at low intensities, which means that they cannot just be ignored. Existing methods have mostly focused on replacing the missing values with some reasonable number ("imputation") and then running classical methods. But imputation is problematic because it obscures the amount of available information, which in turn can lead to over-confident predictions.
proDA, on the other hand, does not impute missing values, but constructs a
probabilistic dropout model. For each sample it fits a sigmoidal dropout
curve. This information can then be used to infer means across samples and the
associated uncertainty, without the intermediate imputation step.
proDA
supports full linear models with variance and location moderation.
For full details, please see our preprint:
Constantin Ahlmann-Eltze and Simon Anders: proDA: Probabilistic Dropout Analysis for Identifying Differentially Abundant Proteins in Label-Free Mass Spectrometry. bioRxiv 661496 (Jun 2019)
proDA is implemented as an R package.
You can install it from Bioconductor by typing the following commands into R:
if(!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("proDA")
To get the latest development version from
GitHub, you can use
the
devtools package:
# install.packages("devtools")
devtools::install_github("const-ae/proDA")
The pkgdown documentation for the package is available on.
In the following section, I will give a very brief overview on the main
functionality of the
proDA package, aimed at experienced R users.
New users are advised to skip this “quickstart” and to go directly
to section 1.3, where I give a complete walkthrough and explain in
detail what steps are necessary for the analysis of label-free mass
spectrometry data.
The three steps that are necessary to analyze the data are:
1. Load the data
2. Fit the probabilistic dropout model (proDA())
3. Identify differentially abundant proteins (test_diff())
# Load the package
library(proDA)

# Generate some dataset with known structure
syn_dataset <- generate_synthetic_data(n_proteins = 100, n_conditions = 2)

# The abundance matrix
syn_dataset$Y[1:5, ]
#>           Condition_1-1 Condition_1-2 Condition_1-3 Condition_2-1 Condition_2-2 Condition_2-3
#> protein_1      19.17814            NA      18.89003      19.90698            NA      18.83656
#> protein_2            NA            NA            NA            NA            NA            NA
#> protein_3      23.89169      24.03214      23.73394      23.54467      23.57230      23.92561
#> protein_4      20.94756      21.03668      20.76283      20.51360      21.11377      20.66439
#> protein_5      19.44029      19.74747      19.29078      19.55662      19.28023      19.75506

# Assignment of the samples to the two conditions
syn_dataset$groups
#> [1] Condition_1 Condition_1 Condition_1 Condition_2 Condition_2 Condition_2
#> Levels: Condition_1 Condition_2

# Fit the probabilistic dropout model
fit <- proDA(syn_dataset$Y, design = syn_dataset$groups)

# Identify which proteins differ between Condition 1 and 2
test_diff(fit, `Condition_1` - `Condition_2`, sort_by = "pval", n_max = 5)
#> # A tibble: 5 × 10
#>   name             pval adj_pval   diff t_statistic    se    df avg_abundance n_approx n_obs
#>   <chr>           <dbl>    <dbl>  <dbl>       <dbl> <dbl> <dbl>         <dbl>    <dbl> <dbl>
#> 1 protein_99  0.0000518  0.00518 -6.71        -18.4 0.365     4          22.4     4.96     5
#> 2 protein_91  0.000125   0.00624 -2.33        -14.7 0.158     4          20.4     6.00     6
#> 3 protein_100 0.00195    0.0649   4.74          7.23 0.657    4          20.5     3.07     3
#> 4 protein_96  0.00415    0.104   -1.69         -5.89 0.287    4          21.9     6.00     6
#> 5 protein_92  0.00674    0.135    0.933         5.15 0.181    4          20.6     6.00     6
Other helpful functions for quality control are
median_normalization() and
dist_approx().
proDA is an R package that implements a powerful probabilistic dropout model
to identify differentially abundant proteins. The package was specifically designed
for label-free mass spectrometry data and, in particular, to handle the
many missing values.
But all this is useless if you cannot load your data and get it into a shape that is usable. In the next section, I will explain how to load the abundance matrix and bring it into a useful form. The steps that I will go through are:
1. Load the proteinGroups.txt MaxQuant output table
2. Replace the zeros with NAs and take the log2() of the data
3. Normalize the data using median_normalization()
4. Inspect the sample structure (dist_approx())
5. Fit the probabilistic dropout model (proDA())
6. Identify differentially abundant proteins (test_diff())
I will now demonstrate how to load a MaxQuant output file. For more information about other approaches for loading the data, please take a look at the vignette on loading data.
MaxQuant is one of the most popular tools for handling raw MS data. It produces
a number of files. The important file that contains the protein intensities is
called
proteinGroups.txt. It is a large table with detailed information about
the identification and quantification process for each protein group (which I will
from now on just call “protein”).
This package comes with an example
proteinGroups.txt file, located in the
package folder. The file contains the reduced output from an experiment studying the different
DHHCs in Drosophila melanogaster.
system.file("extdata/proteinGroups.txt", package = "proDA", mustWork = TRUE)
#> [1] "/tmp/RtmpBsqe1q/Rinst1a542264d1eef4/proDA/extdata/proteinGroups.txt"
In this example, I will use the base R functions to load the data, because they don’t require any additional dependencies.
# Load the table into memory
maxquant_protein_table <- read.delim(
  system.file("extdata/proteinGroups.txt", package = "proDA", mustWork = TRUE),
  stringsAsFactors = FALSE
)
As I have mentioned, the table contains a lot of information (359 columns!!), but we are first of all interested in the columns which contain the measured intensities.
# I use a regular expression (regex) to select the intensity columns
intensity_colnames <- grep("^LFQ\\.intensity\\.", colnames(maxquant_protein_table),
                           value = TRUE)
head(intensity_colnames)
#> [1] "LFQ.intensity.CG1407.01" "LFQ.intensity.CG1407.02" "LFQ.intensity.CG1407.03"
#> [4] "LFQ.intensity.CG4676.01" "LFQ.intensity.CG4676.02" "LFQ.intensity.CG4676.03"

# Create the intensity matrix
abundance_matrix <- as.matrix(maxquant_protein_table[, intensity_colnames])

# Adapt the column and row names
colnames(abundance_matrix) <- sub("^LFQ\\.intensity\\.", "", intensity_colnames)
rownames(abundance_matrix) <- maxquant_protein_table$Protein.IDs

# Print some rows of the matrix with short names so they fit on the screen
abundance_matrix[46:48, 1:6]
#>                                       CG1407.01 CG1407.02 CG1407.03 CG4676.01 CG4676.02 CG4676.03
#> A0A0B4K6W1;P08970                        713400    845440         0         0   1032600         0
#> A0A0B4K6W2;A0A0B4K7S0;P55824-3;P55824   5018800   4429500   2667200         0   8780200   1395800
#> A0A0B4K6X7;A1Z8J0                             0         0         0         0         0         0
After extracting the bits from the table we most care about, we will have to modify it.
Firstly, MaxQuant codes missing values as 0. This is misleading, because the actual abundance probably was not zero, but just some value too small to be detected by the mass spectrometer. Accordingly, I will replace all 0 with NA.
Secondly, the raw intensity values have a linear mean-variance relation. This is undesirable, because a change of x units can be a large shift if the mean is small, or irrelevant if the mean is large. Luckily, to make the mean and variance independent, we can just log the intensities. Now a change of x units is as significant for highly abundant proteins as it is for lowly abundant ones.
abundance_matrix[abundance_matrix == 0] <- NA
abundance_matrix <- log2(abundance_matrix)
abundance_matrix[46:48, 1:6]
#>                                       CG1407.01 CG1407.02 CG1407.03 CG4676.01 CG4676.02 CG4676.03
#> A0A0B4K6W1;P08970                      19.44435  19.68934        NA        NA  19.97785        NA
#> A0A0B4K6W2;A0A0B4K7S0;P55824-3;P55824  22.25891  22.07871  21.34689        NA  23.06582  20.41266
#> A0A0B4K6X7;A1Z8J0                            NA        NA        NA        NA        NA        NA
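The same two preprocessing steps can also be sketched outside of R. Here is an illustrative Python/NumPy version (not part of proDA; the toy values are taken from the table above, shown only to make the transformation explicit):

```python
import numpy as np

# Toy abundance matrix; MaxQuant codes undetected proteins as 0
abundance = np.array([[713400.0, 845440.0, 0.0],
                      [5018800.0, 4429500.0, 2667200.0]])

# Step 1: a 0 means "below the detection limit", not "truly absent",
# so mark those entries as missing
abundance[abundance == 0] = np.nan

# Step 2: log2 makes the variance independent of the mean
log_abundance = np.log2(abundance)
print(np.round(log_abundance, 2))
```

Note how 713400 becomes roughly 19.44, matching the first entry of the log-transformed matrix above, while the zeros propagate as NaN.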
Quality control (QC) is essential for a successful bioinformatics analysis, because any dataset shows some unwanted variation and could even contain more serious errors, such as a sample swap.
Often we start by normalizing the data to remove potential sample-specific effects. But even this step is challenging, because the missing values cannot easily be corrected for. Thus, a first helpful plot is to look at how many missing values there are in each sample.
barplot(colSums(is.na(abundance_matrix)), ylab = "# missing values", xlab = "Sample 1 to 36")
We can see that the number of missing values differs substantially between samples (between 30% and 90%) in this dataset. If we take a look at the intensity distribution for each sample, we see that they differ substantially as well.
boxplot(abundance_matrix, ylab = "Intensity Distribution", xlab = "Sample 1 to 36")
Note that the intensity distribution is shifted upwards for samples that also have a large number of missing values (for example, the last one). This agrees with our idea that small values are more likely to be missing. On the other hand, it also demonstrates why normalization methods such as quantile normalization, which distort the data until all the distributions are equal, are problematic. I will instead apply the more "conservative" median normalization, which ignores the missing values and transforms the values so that the median difference between the sample and the average across all other samples is zero.
normalized_abundance_matrix <- median_normalization(abundance_matrix)
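The idea behind this normalization can be sketched as follows. This is a simplified Python illustration of the description above, not proDA's actual implementation, which may differ in detail:

```python
import numpy as np

def median_normalize(X):
    """Simplified sketch of median normalization on a proteins x samples
    matrix with NaNs: shift each sample (column) so that the median
    difference between that sample and the protein-wise mean of all
    other samples becomes zero. NaNs are simply ignored."""
    X = X.astype(float).copy()
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        ref = np.nanmean(others, axis=1)       # protein-wise reference
        shift = np.nanmedian(X[:, j] - ref)    # median deviation, NaNs ignored
        X[:, j] -= shift
    return X

# A sample that is uniformly shifted by +2 gets pulled back in line,
# and missing values stay missing
X = np.array([[10.0, 12.0],
              [11.0, np.nan],
              [12.0, 14.0]])
print(median_normalize(X))
```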
An important tool to identify sample swaps and outliers in the dataset is to look at the sample distance matrix. It shows the distances of samples A to B, A to C, B to C and so on.
The base R dist() function cannot handle input data that contains missing values, so we might be tempted to just replace the missing values with some realistic numbers and calculate the distance on the completed dataset. But choosing a good replacement value is challenging and can also be misleading, because samples with many missing values would appear too close to each other.

Instead, proDA provides the dist_approx() function, which takes either a fitted model (i.e. the output from proDA()) or a simple matrix (for which it internally calls proDA()) and estimates the expected distance without imputing the missing values. In addition, it reports the uncertainty associated with every estimate. The estimates for samples with many missing values will be uncertain, allowing the data analyst to discount them.
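To make the problem concrete, here is what a naive "pairwise complete" distance would look like. This Python sketch is for contrast only — it is not what dist_approx() does, since it neither models the dropout process nor yields uncertainties:

```python
import numpy as np

def pairwise_complete_dist(X):
    """Naive Euclidean distance between samples (columns) of X, using
    only the proteins observed in both samples and rescaling to the
    full number of proteins. Unlike proDA's dist_approx(), this does
    not model why values are missing and gives no uncertainty."""
    p, n = X.shape
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            both = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            m = both.sum()
            d2 = np.sum((X[both, i] - X[both, j]) ** 2)
            D[i, j] = D[j, i] = np.sqrt(d2 * p / m) if m > 0 else np.nan
    return D

# With no missing values this is just the ordinary Euclidean distance
print(pairwise_complete_dist(np.array([[0.0, 3.0], [0.0, 4.0]])))
```

The weakness is visible immediately: two samples that share only a handful of observed proteins get a distance extrapolated from very little data, with no indication of how unreliable that number is.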
da <- dist_approx(normalized_abundance_matrix)
dist_approx() returns two elements: the mean of the estimate and the associated sd. In the next step, I will plot the heatmap for three different conditions, adding the 95% confidence interval as text to each cell.
# This chunk only works if pheatmap is installed
# install.packages("pheatmap")
sel <- c(1:3,   # CG1407
         7:9,   # CG59163
         22:24) # CG6618

plot_mat <- as.matrix(da$mean)[sel, sel]
# Remove diagonal elements, so that the colorscale is not distorted
plot_mat[diag(9) == 1] <- NA

# 95% conf interval is approx `sd * 1.96`
uncertainty <- matrix(paste0(" ± ", round(as.matrix(da$sd * 1.96)[sel, sel], 1)), nrow = 9)
pheatmap::pheatmap(plot_mat,
                   cluster_rows = FALSE, cluster_cols = FALSE,
                   display_numbers = uncertainty,
                   number_color = "black")
In the next step, we will fit the actual linear probabilistic dropout model to the normalized data. But before we start, I will create a data.frame that contains some additional information on each sample, in particular to which condition that sample belongs.
# The best way to create this data.frame depends on the column naming scheme
sample_info_df <- data.frame(name = colnames(normalized_abundance_matrix),
                             stringsAsFactors = FALSE)
sample_info_df$condition <- substr(sample_info_df$name, 1, nchar(sample_info_df$name) - 3)
sample_info_df$replicate <- as.numeric(
    substr(sample_info_df$name, nchar(sample_info_df$name) - 1, 20)
)
sample_info_df
#> # A tibble: 36 × 3
#>    name       condition replicate
#>    <chr>      <chr>         <dbl>
#>  1 CG1407.01  CG1407            1
#>  2 CG1407.02  CG1407            2
#>  3 CG1407.03  CG1407            3
#>  4 CG4676.01  CG4676            1
#>  5 CG4676.02  CG4676            2
#>  6 CG4676.03  CG4676            3
#>  7 CG51963.01 CG51963           1
#>  8 CG51963.02 CG51963           2
#>  9 CG51963.03 CG51963           3
#> 10 CG5620A.01 CG5620A           1
#> # … with 26 more rows
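The name-splitting logic above can be sketched in any language. Here it is in Python, assuming the `<condition>.<replicate>` naming scheme used in this dataset (condition is everything before the two-digit suffix):

```python
names = ["CG1407.01", "CG1407.02", "CG4676.01"]

# Drop the trailing ".NN" to get the condition,
# and parse the last two digits as the replicate number
condition = [n[:-3] for n in names]
replicate = [int(n[-2:]) for n in names]

print(condition)  # ['CG1407', 'CG1407', 'CG4676']
print(replicate)  # [1, 2, 1]
```

If your columns follow a different scheme, this parsing step is the one place you need to adapt.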
Now we can call the proDA() function to actually fit the model. We specify the design using the formula notation, referencing the condition column in the sample_info_df data.frame that we have just created. In addition, I specify that I want to use the S2R condition as the reference level, because I know that it was the negative control; this way, all coefficients automatically measure how much each condition differs from the negative control.
fit <- proDA(normalized_abundance_matrix, design = ~ condition,
             col_data = sample_info_df, reference_level = "S2R")
fit
#> Parameters of the probabilistic dropout model
#>
#> The dataset contains 36 samples and 122 proteins
#> 59.7% of the values are missing
#>
#> Experimental design: y~condition
#> The model has successfully converged.
#>
#> The inferred parameters are:
#> location_prior_mean:    19.5
#> location_prior_scale:   8.37
#> location_prior_df:      3
#> variance_prior_scale:   0.283
#> variance_prior_df:      1.64
#> dropout_curve_position: 19.9, 19, 20.1, 22.8, ...
#> dropout_curve_scale:    -0.816, -0.601, -1.02, -1.31, ...
The proDAFit object prints a number of useful pieces of information: the convergence of the model, the size of the dataset, the number of missing values, and the inferred hyperparameters.

To make it easy to find the available methods on the proDAFit object, the $-operator is overloaded and shows a list of possible functions:
[Screenshot from RStudio suggesting the available functions]
# Equivalent to feature_parameters(fit)
fit$feature_parameters
#> # A tibble: 122 × 4
#>    n_approx     df      s2 n_obs
#>       <dbl>  <dbl>   <dbl> <dbl>
#>  1     12.0  0.001 3808.       5
#>  2     12.0  0.001 2439.       1
#>  3     19.3  8.93     4.07    14
#>  4     12.0  0.001  850.       6
#>  5     17.4  7.04     0.470   17
#>  6     12.0  0.001 2472.       1
#>  7     12.0  0.001 2410.       1
#>  8     28.9 18.6      0.217   29
#>  9     12.0  0.001 1798.       4
#> 10     12.0  0.001 1881.       4
#> # … with 112 more rows
Internally, the proDAFit object is implemented as a subclass of SummarizedExperiment. This means it can be subsetted, which is, for example, useful for calculating the distance of a subset of proteins and samples.
# This chunk only works if pheatmap is installed
# install.packages("pheatmap")
pheatmap::pheatmap(dist_approx(fit[1:20, 1:3], by_sample = FALSE)$mean)
Lastly, we will use a Wald test to identify the proteins for which a coefficient is significantly different from zero. The test_diff() function first takes the fit object produced by proDA() and then a contrast argument. The contrast can either be a string or an expression if we want to test more complex combinations. For example, conditionCG1407 - (conditionCG6017 + conditionCG5880) / 2 would test for the difference between CG1407 and the average of CG6017 and CG5880.

Alternatively, test_diff() also supports likelihood ratio F-tests. In that case, specify the reduced_model argument instead of the contrast argument.
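A contrast like this is nothing more than a linear combination of the fitted coefficients. The following Python sketch uses made-up coefficient values (the numbers are hypothetical, purely for illustration) to show the arithmetic:

```python
# Hypothetical coefficient estimates (log2 abundance relative to the
# S2R reference); these numbers are invented for illustration only
beta = {"conditionCG1407": 1.2,
        "conditionCG6017": 0.4,
        "conditionCG5880": 0.8}

# The contrast `conditionCG1407 - (conditionCG6017 + conditionCG5880) / 2`
# corresponds to these weights on the coefficients:
weights = {"conditionCG1407": 1.0,
           "conditionCG6017": -0.5,
           "conditionCG5880": -0.5}

diff = sum(w * beta[name] for name, w in weights.items())
print(diff)  # 1.2 - (0.4 + 0.8) / 2 = 0.6
```

The estimated difference reported by test_diff() is exactly such a weighted sum; its standard error is likewise derived from the same weight vector applied to the coefficients' covariance.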
# Test which proteins differ between condition CG1407 and S2R
# S2R is the default contrast, because it was specified as the `reference_level`
test_res <- test_diff(fit, "conditionCG1407")
test_res
#> # A tibble: 122 × 10
#>    name                  pval adj_pval    diff t_statistic    se    df avg_abundance n_approx n_obs
#>    <chr>                <dbl>    <dbl>   <dbl>       <dbl> <dbl> <dbl>         <dbl>    <dbl> <dbl>
#>  1 Q8IP47;Q9VJP8;Q9V43… 0.904    0.964 -0.132      -0.122  1.08     24          18.9     12.0     5
#>  2 A0A023GPV6;A8JV04;Q… 0.923    0.964 -0.0992     -0.0978 1.01     24          18.4     12.0     1
#>  3 A0A023GQA5;P24156    0.0356   0.265 -2.92       -2.23   1.31     24          19.3     19.3    14
#>  4 Q1RKY1;A0A0B4LG19;A… 0.667    0.964  0.632       0.435  1.45     24          18.7     12.0     6
#>  5 A0A0B4JD00;A8DY69;I… 0.919    0.964  0.0691      0.103  0.670    24          20.0     17.4    17
#>  6 A0A0B4JCT8;Q9V780    0.923    0.964 -0.0994     -0.0980 1.01     24          18.5     12.0     1
#>  7 A0A0B4LHQ4;A0A0B4JD… 0.923    0.964 -0.0990     -0.0977 1.01     24          18.4     12.0     1
#>  8 A0A0B4JCW4;Q9VHJ8;Q… 0.643    0.964 -0.197      -0.469  0.419    24          21.9     28.9    29
#>  9 Q9VDV4;A0A0B4JCY1;Q… 0.295    0.860  1.95        1.07   1.82     24          18.7     12.0     4
#> 10 A0A0B4JCY6;Q7KSF4;A… 0.598    0.964 -0.783      -0.535  1.46     24          19.0     12.0     4
#> # … with 112 more rows
This walkthrough ends with the identification of which proteins are differentially abundant. But for a real dataset, the actual analysis has only just begun. A list of significant proteins is hardly ever a publishable result; one usually needs to work out what the relevant underlying biological mechanisms are. For that problem other tools are necessary, which depend on the precise question associated with the biological problem at hand.
sessionInfo()
#> LC_TIME=en_GB
#> ] proDA_1.8.0 BiocStyle_2.22.0
#>
#> loaded via a namespace (and not attached):
#>  [1] SummarizedExperiment_1.24.0 xfun_0.27                   bslib_0.3.1
#>  [4] lattice_0.20-45             colorspace_2.0-2            vctrs_0.3.8
#>  [7] htmltools_0.5.2             stats4_4.1.1                yaml_2.2.1
#> [10] utf8_1.2.2                  rlang_0.4.12                jquerylib_0.1.4
#> [13] pillar_1.6.4                BiocGenerics_0.40.0         RColorBrewer_1.1-2
#> [16] matrixStats_0.61.0          GenomeInfoDbData_1.2.7      lifecycle_1.0.1
#> [19] stringr_1.4.0               MatrixGenerics_1.6.0        zlibbioc_1.40.0
#> [22] munsell_0.5.0               gtable_0.3.0                evaluate_0.14
#> [25] Biobase_2.54.0              knitr_1.36                  IRanges_2.28.0
#> [28] fastmap_1.1.0               GenomeInfoDb_1.30.0         fansi_0.5.0
#> [31] highr_0.9                   Rcpp_1.0.7                  scales_1.1.1
#> [34] BiocManager_1.30.16         DelayedArray_0.20.0         S4Vectors_0.32.0
#> [37] magick_2.7.3                jsonlite_1.7.2              XVector_0.34.0
#> [40] digest_0.6.28               stringi_1.7.5               bookdown_0.24
#> [43] GenomicRanges_1.46.0        grid_4.1.1                  cli_3.0.1
#> [46] tools_4.1.1                 bitops_1.0-7                magrittr_2.0.1
#> [49] sass_0.4.0                  RCurl_1.98-1.5              tibble_3.1.5
#> [52] crayon_1.4.1                pkgconfig_2.0.3             ellipsis_0.3.2
#> [55] pheatmap_1.0.12             Matrix_1.3-4                rmarkdown_2.11
#> [58] extraDistr_1.9.1            R6_2.5.1                    compiler_4.1.1
Description:
------------
On Windows systems you can use spl_autoload() to load namespaced classes and it works!
On *nix systems you can't!
Tested on Linux (Ubuntu 10.04 64 bit) with package: php5 5.3.2-1ubuntu4.2
You can test the following script on Windows and on Linux or a Mac.
Test script:
---------------
create a file called index.php
------------------------------
use My\Framework\Test;
spl_autoload_register();
$test = new Test();
echo $test;
------------------------------
Another file in a subdir My/Framework called Test.php
------------------------------
namespace My\Framework;
class Test
{
public function __toString()
{
return 'hi';
}
}
------------------------------
Expected result:
----------------
The script should produce:
hi
Actual result:
--------------
windows:
hi
Linux:
Fatal error: spl_autoload() [<a href='function.spl-autoload'>function.spl-autoload</a>]: Class My\Framework\Test could not be loaded in /.../index.php on line 7
Automatic comment from SVN on behalf of felipe
Revision:
Log: - Fixed bug #51991 (spl_autoload and *nix support with namespace)
This bug has been fixed in SVN.
Snapshots of the sources are packaged every three hours; this change
will be in the next snapshot. You can grab the snapshot at.
Thank you for the report, and for helping us make PHP better.
thanks
I just upgraded to PHP 5.3.3 and this bug still persists.
Tested on Linux (Cent OS 5.5) with PHP 5.3.3.
Same Test Script and result as the original bug.
Hi, make sure you are using the right version... (phpversion())
I've already checked that using phpinfo(). It is PHP 5.3.3.
Hi, I forgot to say... the path that PHP tries to find is lowercased.
i.e. my/framework/test.php
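For illustration, the lookup that the built-in spl_autoload() performs can be modeled roughly like this (a sketch in Python, not the actual C implementation):

```python
def spl_autoload_candidate(class_name, extension=".php"):
    """Rough model of how PHP's built-in spl_autoload() turns a fully
    qualified class name into a file path: the name is lowercased and
    namespace separators become directory separators."""
    return class_name.lower().replace("\\", "/") + extension

# On a case-insensitive filesystem (Windows, macOS by default) this path
# still matches My/Framework/Test.php; on Linux it does not — hence the bug
# only shows up on *nix systems.
print(spl_autoload_candidate("My\\Framework\\Test"))  # my/framework/test.php
```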
See bug #52406
Thanks.
Are you saying that I'll have to rename all my php file names to be lowercase on
Linux to use spl_autoload?
Exactly. You can implement your own custom autoloader to behave as you want. The lowercasing is good for some people, but is not what others want.
This is hard to believe that automatically lowercasing the path is not a bug...
the majority of PHP users are actually on *nix platforms - especially in
production. Causing the paths to automatically be lowercase drastically takes
away from the default functionality.
I apologize if I am being arrogant here - however, when you write include
'My/Path/To/A/File.php' it does not lowercase that path so why should
spl_autoload do anything different with the default behavior?
The offending code to fix this is on line 303 in trunk of php_spl.c:
lc_name = zend_str_tolower_dup(class_name, class_name_len);
Removing the str_tolower would resolve this issue and we would all be happy. I
will post to php-dev as well with a patch.
After searching around a while I found a workaround without performance loss.
1. Slower, but working with camelcase class files (like "MyClass.php"):
spl_autoload_register(function($classname) {
    require_once(__DIR__ . '/' . str_replace('\\', '/', $classname) . '.php');
});
2. Faster but not working with camelcase class files:
set_include_path(get_include_path() . PATH_SEPARATOR . __DIR__);
spl_autoload_extensions(".php");
spl_autoload_register();
spl_autoload_register( function($classname) {} );
3. Faster and working with camelcase class files:
set_include_path(get_include_path() . PATH_SEPARATOR . __DIR__);
spl_autoload_extensions(".php");
spl_autoload_register(function ($classname) { spl_autoload($classname); });
I'm sorry, my last post was not correct, case 3 does not work :-(
Bug is reproduced on 5.4.4.
Red Hat Bugzilla – Bug 195647
Review Request: redland
Last modified: 2007-11-30 17:11:35 EST
Spec URL:
SRPM URL:.
Due to review of rasqal, I've made some additional changes to this package.
Latest src.rpm:
This has been sitting in the queue for a while...
I would be happy to review it (and redland-bindings).
Look for a full review in a bit.
This review depends on #195645, which appears stalled out on the issue of
%makeinstall use. This package also uses %makeinstall.
Adding 195645 as a dependency here.
Hey Thomas. Now that rasqal is in, we can move forward on this review.
1.0.5 appears to be out, could you upgrade to that and make any other changes
you would like and I will do a review... :)
well, 1.0.5 relies on raptor 1.4.13. afaict it's still 1.4.9 in extras, so I
can't yet update.
Will fix the other issue though.
Sorry for the delay in reviewing this...
OK - Package name
OK - Spec file matches base package name.
OK - Meets Packaging Guidelines.
See below - License
See below - License field in spec matches
OK - License file included in package
OK - Spec in American English
OK - Spec is legible.
OK - Sources match upstream md5sum:
3ee58cbf5486c97ef3bc0c4368a344cc redland-1.0.4.tar.gz
3ee58cbf5486c97ef3bc0c4368a344cc redland-1.0.4.tar.gz.1
OK - Package compiles and builds on at least one arch.
OK - BuildRequires correct
OK -.
OK - final provides and requires are sane:
SHOULD Items:
OK - Should build in mock.
OK - Should build on all supported archs
OK - Should have subpackages require base package with fully versioned depend.
OK - Should have dist tag
See below - Should package latest version
Issues:
1. You have the License as: "LGPL or Apache License 2.0", but it
apparently also is optionally licensed under the GPL,
so you should add that option.
2. -devel package has a .pc file, so shouldn't it
Requires: pkgconfig
3. rpmlint our little friend says:
W: redland invalid-license LGPL or Apache License 2.0
W: redland invalid-license LGPL or Apache License 2.0
W: redland-debuginfo invalid-license LGPL or Apache License 2.0
W: redland-devel invalid-license LGPL or Apache License 2.0
Can be ignored I think.
4. You aren't packaging the newest version, but thats due to
the version of rasqal available. Is there a bug filed for updating
that?
5. The package dumps 18 files into /usr/include/ Perhaps it could be
changed to put them in a /usr/include/redland/ instead? (And then
the .pc file would need to be adjusted to include the right -I/usr/include/redland.
Thanks for reviewing!
Wrt. your issues:
1) Any LGPL package is always allowed to be used under the GPL - this is a
standard "feature" of the LGPL. As such I don't think it's necessary to add it
to the license field, since I don't see any other LGPL package doing that. What
do you think ?
2) yep, will add that
4) will file that bug now.
5) IMO this is up to upstream if this should be changed. I don't necessarily
feel it should - the headers seem to be namespaced with rdf_ - but in any case I
don't think packagers should make changes like this if they are not strictly
necessary because it creates problems for developers. What do you think ?
I will push a new package when we resolve 1) and 5)
Thanks
Thomas
(In reply to comment #8)
> 5) IMO this is up to upstream if this should be changed.
Well, system integration is the task of an rpm's maintainer ;)
> I don't necessarily
> feel it should - the headers seem to be namespaced with rdf_ - but in any case I
> don't think packagers should make changes like this if they are not strictly
> necessary because it creates problems for developers. What do you think ?
IMO: Move them into a subdir.
In reply to comment #8:
1. True. I guess it's not a big deal either way if you list it or not.
2. ok.
4. ok. Might mention in this review just for completeness.
5. Well, since the package uses pkgconfig it's pretty easy to move things and
have anything using the devel package pick up the correct include path. You are
correct however in that it would be good for upstream to make this change.
Is upstream responsive? Would they take a change like that? Can you ask them?
Some of the headers have a rdf_ namespace, but at least 2 do not.
wrt. 5:
- upstream is relatively responsive, but I only ever sent one patch upstream
- pkg-config is not the only way they provide paths to developer apps, they also
have redland-config
- there is nothing as such wrong with installing headers in /usr/include, it is
merely a style issue.. Changing stuff like this is a cost to
users/developers of the package that gets paid by the upstream maintainer, not
the packager.
- further examples of packages on my system that install headers in /usr/include
directly: gd, gmp, libidn, libjpeg, js, mx, openldap, libodbc, pilot-link,
libtiff, libtermcap, zlib
(In reply to comment #11)
> wrt. 5:
> - there is nothing as such wrong with installing headers in /usr/include, it is
> merely a style issue.
Sorry, this is not a "mere style" issue.
- It affects effectiveness of compilers when searching /usr/include (N
files/dirs more to stat).
- It raises the potential of file conflicts.
- /usr/include is "special"
ONE package isn't much of a problem, but 10's or 100's are.
>.
My position is opposite:
* /usr/include is the "system-include" directory and should largely be reserved
to essential system packages (Many of them are covered by standards, e.g. POSIX).
* In many cases, upstream doesn't care about this because they assume their
package to be installed into a "per package" hierarchy (/opt/<package> or
similar) so they aren't aware about the issues their habits could cause.
* I consider a maintainer who is not able to work around this, or unwilling to
address this issue, to be acting carelessly and negligently (I am waiting for the
day somebody installs a file named "stdint.h" or "list" to /usr/include)
> Changing stuff like this is a cost to
> users/developers of the package that gets paid by the upstream maintainer, not
> the packager.
CPPFLAGS=-I/usr/include/<package>
> - further examples of packages on my system that install headers in /usr/include
> directly: gd, gmp, libidn, libjpeg, js, mx, openldap, libodbc, pilot-link,
> libtiff, libtermcap, zlib
Yes, many of them all in the same boat, but .. many of them are out of control
of Extras.
Some of these packages however have evolved into "essential system libs" (e.g.
zlib), which would justify installing their headers to /usr/include; redland
definitely is not one of these.
Also, you should be aware that FE-policy so far had been, headers can go to
"/usr/include, if a package installs very few headers which are almost
guaranteed never to conflict".
In reply to comment #12:
I understand your strong feelings in this, and share many of them, however...
>Also, you should be aware that FE-policy so far had been, headers can go to
>"/usr/include, if a package installs very few headers which are almost
>guaranteed never to conflict".
Where is this guideline listed? I know of no such rule...
Perhaps you should get the package guidelines to include such a blocker rule?
Thomas: Can you mail upstream and ask if they will change this... for now, we
can just move forward with the include files in /usr/include, and then
hopefully they will update to use a /usr/include/<package> soon. If it looks
like they will make this change soon, you can change it in your package and then
drop the change when upstream moves to it?
I'll leave this in REVIEW for a few days to let you talk to upstream and see if
they will do this change, and when...
(In reply to comment #13)
> In reply to comment #12:
> >Also, you should be aware that FE-policy so far had been, headers can go to
> >"/usr/include, if a package installs very few headers which are almost
> >guaranteed never to conflict".
>
> Where is this guideline listed? I know of no such rule...
It's not listed anywhere. It's simply common sense and established "good"
practice by many FE packagers.
> Perhaps you should get the package guidelines to include such a blocker rule?
<sigh> Seems as if we need regulations for anything </sigh>
Please note, that I only commented and expressed my opinion, I did not block
this package review.
(In reply to comment #14):
>> Where is this guideline listed? I know of no such rule...
>It's not listed anywhere. It's simply common sense and established "good"
>practice by many FE packagers.
Right.
>> Perhaps you should get the package guidelines to include such a blocker rule?
><sigh> Seems as if we need regulations for anything </sigh>
For things that should block inclusion of a package, I think so...
>Please note, that I only commented and expressed my opinion, I did not block
>this package review.
Sure, and like I said, I share much of your opinion on this... but I don't think
it's a blocking issue for the package.
Lets see what Thomas can find out from upstream. Hopefully they will quickly do
the right thing and we will all be happy. ;)
Hey Thomas. Any news from upstream about this package?
Hey Thomas. I see 1.0.6 is out.
Does it address the issues above?
Can you update your package to 1.0.6 and we can see where we stand?
Ping? We need this package for KDE 4 (as a dependency of Soprano which is a
dependency of Nepomuk), so it would be nice if it could get in sooner rather
than later.
IMHO, putting the headers in /usr/include:
1. is upstream's call to make,
2. doesn't violate any guidelines,
3. doesn't cause any real problems (see the next paragraph for why) and
4. can be easily changed in an update (if/when upstream changes it).
Therefore, I don't see this as a good reason to block the review.
Moreover, looking at the actual names of the headers, the only ones which don't
have an rdf_* prefix are librdf.h and redland.h, both of which have practically
zero chance to conflict with another package (redland being the project name
and librdf being the name of the shared object it ships), making this a
strictly academic issue.
Therefore, please don't let this hold back this package. If you really can't
get over yourself to approve it without this change, I will.
In reply to comment #19:
If you will read my comment #13, you will see that I was not intending to block
this package based on the include file issue. I simply wanted Thomas to check
with upstream and see if they would be willing to make that change in a timely
manner.
Thomas? Any word from upstream?
In any case can you update to 1.0.6?
Thomas Vander Stichele: Ping?
Shall we consider this review stalled?
sorry, back
I discussed it with upstream, they would be willing to take a patch eventually
once it's done. I will put it on my todo list, even though I don't think it's
super-critical.
meanwhile, will update and test.
ok. If you could post an updated version for me to look over we can get this
thing approved and in so the KDE folks can use it. ;)
Thanks!
Redland 1.0.6 needs updates to raptor and rasqal first.
Currently in F7:
raptor 1.4.14
rasqal 0.9.12
Needed for redland 1.0.6:
raptor >= 1.4.15
rasqal >= 0.9.14
It's possible to update to 1.0.5, which needs:
raptor >= 1.4.9
rasqal >= 0.9.12
but not to 1.0.6 unless raptor and rasqal are updated first.
Created attachment 158091 [details]
Specfile for 1.0.5
I updated the specfile for 1.0.5: only trivial changes were needed:
- update to 1.0.5 (1.0.6 needs newer raptor and rasqal than available)
- update minimum raptor version
Kevin Fenzi: Can this be considered approved now? We can worry about updating
the whole stack to 1.0.6 later. As far as I can tell, any version will do for
KDE 4 (Soprano doesn't seem to check for any particular version), but we really
need one version in, so let's not delay the review just for the version bump.
Thomas Vander Stichele: Can you please import and build this once approved?
All the blocker issues I see are solved in the 1.0.5 version in comment #25.
I would still like to see upstream move the include files into a
/usr/include/redland/ dir, but thats not a blocker.
This package is APPROVED.
Don't forget to close this review request once it's been imported and built.
Now that it's approved, what is this package waiting for? :-(
Do you want a comaintainer to do the CVS requests, imports and builds for you?
New Package CVS Request
=======================
Package Name: redland
Short Description: Redland RDF Application Framework
Owners: thomas@apestaart.org,Kevin@tigcc.ticalc.org
Branches: FC-6 F-7
InitialCC:
Thomas Vander Stichele (the package submitter) agreed to comaintainership.
cvs done.
Built for Fedora 6, 7 and 8, filed in Bodhi (requested direct push to stable)
for F7.
(Please check the contents of .pc file on the review.
I filed this as bug 248106 )
Package Change Request
======================
Package Name: redland
Updated Fedora Owners: thomas@apestaart.org,kevin@tigcc.ticalc.org
Please change my e-mail address to lowercase, I changed it in the Fedora
Account System to match Bugzilla, which now normalizes all e-mail addresses to
lowercase.
These are the top best practices I've developed while working on Vue projects with a large code base. These tips will help you develop more efficient code that is easier to maintain and share.
When freelancing this year, I had the opportunity to work on some large Vue applications. I am talking about projects with more than 😰 a dozen Vuex stores, a high number of components (sometimes hundreds) and many views (pages). 😄 It was actually quite a rewarding experience for me as I discovered many interesting patterns to make the code scalable. I also had to fix some bad practices that resulted in the famous spaghetti code dilemma. 🍝
Thus, today I’m sharing 10 best practices with you that I would recommend to follow if you are dealing with a large code base. 🧚🏼‍♀️

1. Use Slots to Make Your Components Easier to Understand and More Powerful
I recently wrote an article about some important things you need to know regarding slots in Vue.js. It highlights how slots can make your components more reusable and easier to maintain and why you should use them.
🧐 But what does this have to do with large Vue.js projects? A picture is usually worth a thousand words, so I will paint you a picture about the first time I deeply regretted not using them.
One day, I simply had to create a popup. Nothing really complex at first sight as it was just including a title, a description and some buttons. So what I did was to pass everything as props. I ended up with three props that you would use to customize the components and an event was emitted when people clicked on the buttons. Easy peasy! 😅
But, as the project grew over time, the team requested that we display a lot of other new things in it: form fields, different buttons depending on which page it was displayed on, cards, a footer, and the list goes on. I figured out that if I kept using props to make this component evolve, it would be ok. But god, 😩 how wrong I was! The component quickly became too complex to understand as it was including countless child components, using way too many props and emitting a large number of events. 🌋 I came to experience that terrible situation in which when you make a change somewhere and somehow it ends up breaking something else on another page. I had built a Frankenstein monster instead of a maintainable component! 🤖
However, things could have been better if I had relied on slots from the start. I ended up refactoring everything to come up with this tiny component. Easier to maintain, faster to understand and way more extendable!
My point is that, from experience, projects built by developers who know when to use slots are far easier to maintain in the future. Way fewer events are being emitted, the code is easier to understand, and it offers way more flexibility as you can display whatever components you wish inside.
⚠️ As a rule of thumb, keep in mind that when you end up duplicating your child components' props inside their parent component, you should start using slots at that point.

2. Organize Your Vuex Store Properly
Usually, new Vue.js developers start to learn about Vuex because they stumbled upon one of these two issues:
That's when they create their first Vuex store, learn about modules and start organizing them in their application. 💡
The thing is that there is no single pattern to follow when creating modules. However, 👆🏼 I highly recommend you think about how you want to organize them. From what I've seen, most developers prefer to organize them per feature. For instance:
😜 On my side, I find it easier to understand when they are organized according to the data models they fetch from the API. For example:
Which one you choose is up to you. The only thing to keep in mind is that a well-organized Vuex store will result in a more productive team in the long run. It will also make newcomers better predisposed to wrap their minds around your code base when they join your team.

3. Use Actions to Make API Calls and Commit the Data
Most of my API calls (if not all) are made inside my Vuex actions. You may wonder: why is that a good place to do so? 🤨
🤷🏼♀️ Simply because most of them fetch the data I need to commit in my store. Besides, they provide a level of encapsulation and reusability I really enjoy working with. Here are some other reasons I do so:
If I need to fetch the first page of articles in two different places (let's say the blog and the homepage), I can just call the appropriate dispatcher with the right parameters. The data will be fetched, committed and returned with no duplicated code other than the dispatcher call.
If I need to create some logic to avoid fetching this first page when it has already been fetched, I can do so in one place. In addition to decreasing the load on my server, I am also confident that it will work everywhere.
I can track most of my Mixpanel events inside these actions, making the analytics code base really easy to maintain. I do have some applications where all the Mixpanel calls are solely made in the actions. 😂 I can't tell you how much of a joy it is to work this way when I don't have to understand what is tracked from what is not and when they are being sent.
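As an illustration, here is a minimal sketch (a hypothetical module with made-up names, written as a plain object so it runs without Vuex installed) of an action that fetches, commits, and skips the API call when the data is already in the store:

```javascript
// Hypothetical Vuex-style module: fetchPage commits a page of articles
// once, then returns the cached data on subsequent dispatches.
const articlesModule = {
  state: { pages: {} },
  mutations: {
    SET_PAGE(state, { page, articles }) {
      state.pages[page] = articles;
    }
  },
  actions: {
    async fetchPage({ state, commit }, { page, fetcher }) {
      // Avoid hitting the API when the page was already fetched.
      if (state.pages[page]) {
        return state.pages[page];
      }
      const articles = await fetcher(page);
      commit("SET_PAGE", { page, articles });
      return articles;
    }
  }
};
```

In a real store, fetcher would be your API client and the action would be called through this.$store.dispatch from any component that needs the data.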
4. Simplify Your Code Base With mapState, mapGetters, mapMutations and mapActions

There usually is no need to create multiple computed properties or methods when you just need to access your state/getters or call your actions/mutations inside your components. Using mapState, mapGetters, mapMutations and mapActions can help you shorten your code and make things easier to understand by grouping what is coming from your store modules in one place.
// NPM
import { mapState, mapGetters, mapActions, mapMutations } from "vuex";

export default {
  computed: {
    // Accessing root properties
    ...mapState("my_module", ["property"]),

    // Accessing getters
    ...mapGetters("my_module", ["property"]),

    // Accessing non-root properties
    ...mapState("my_module", {
      property: state => state.object.nested.property
    })
  },

  methods: {
    // Accessing actions
    ...mapActions("my_module", ["myAction"]),

    // Accessing mutations
    ...mapMutations("my_module", ["myMutation"])
  }
};
All the information you'll need on these handy helpers is available here in the official Vuex documentation. 🤩

5. Use API Factories
I usually like to create a this.$api helper that I can call anywhere to fetch my API endpoints. At the root of my project, I have an api folder that includes all my classes (see one of them below).
api
├── auth.js
├── notifications.js
└── teams.js
Each one is grouping all the endpoints for its category. Here is how I initialize this pattern with a plugin in my Nuxt applications (it is quite a similar process in a standard Vue app).
// PROJECT: API
import Auth from "@/api/auth";
import Teams from "@/api/teams";
import Notifications from "@/api/notifications";

export default (context, inject) => {
  if (process.client) {
    const token = localStorage.getItem("token");

    // Set token when defined
    if (token) {
      context.$axios.setToken(token, "Bearer");
    }
  }

  // Initialize API repositories
  const repositories = {
    auth: Auth(context.$axios),
    teams: Teams(context.$axios),
    notifications: Notifications(context.$axios)
  };

  inject("api", repositories);
};
export default $axios => ({
  forgotPassword(email) {
    return $axios.$post("/auth/password/forgot", { email });
  },

  login(email, password) {
    return $axios.$post("/auth/login", { email, password });
  },

  logout() {
    return $axios.$get("/auth/logout");
  },

  register(payload) {
    return $axios.$post("/auth/register", payload);
  }
});
Now, I can simply call them in my components or Vuex actions like this:

export default {
  methods: {
    onSubmit() {
      try {
        this.$api.auth.login(this.email, this.password);
      } catch (error) {
        console.error(error);
      }
    }
  }
};

6. Use $config to access your environment variables (especially useful in templates)
Your project probably has some global configuration variables defined in some files:
config
├── development.json
└── production.json
I like to quickly access them through a this.$config helper, especially when I am inside a template. As always, it's quite easy to extend the Vue object:
// NPM
import Vue from "vue";

// PROJECT: COMMONS
import development from "@/config/development.json";
import production from "@/config/production.json";

if (process.env.NODE_ENV === "production") {
  Vue.prototype.$config = Object.freeze(production);
} else {
  Vue.prototype.$config = Object.freeze(development);
}

7. Follow a Single Convention to Name Your Commits
As the project grows, you will need to browse the history for your components on a regular basis. If your team does not follow the same convention to name their commits, it will make it harder to understand what each one does.
I always use and recommend the Angular commit message guidelines. I follow them in every project I work on, and in many cases other team members quickly figure out that it's better to follow them too.
Following these guidelines leads to more readable messages that make commits easier to track when looking through the project history. In a nutshell, here is how it works:
git commit -am "<type>(<scope>): <subject>"

# Here are some samples
git commit -am "docs(changelog): update changelog to beta.5"
git commit -am "fix(release): need to depend on latest rxjs and zone.js"
Have a look at their README file to learn more about it and its conventions.

8. Always Freeze Your Package Versions When Your Project is in Production
I know... All packages should follow the semantic versioning rules. But the reality is, some of them don't. 😅
To avoid having to wake up in the middle of the night because one of your dependencies broke your entire project, locking all your package versions should make your mornings at work less stressful. 😇
What it means is simply this: avoid versions prefixed with ^:
{
  "name": "my project",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "axios": "0.19.0",
    "imagemin-mozjpeg": "8.0.0",
    "imagemin-pngquant": "8.0.0",
    "imagemin-svgo": "7.0.0",
    "nuxt": "2.8.1"
  },
  "devDependencies": {
    "autoprefixer": "9.6.1",
    "babel-eslint": "10.0.2",
    "eslint": "6.1.0",
    "eslint-friendly-formatter": "4.0.1",
    "eslint-loader": "2.2.1",
    "eslint-plugin-vue": "5.2.3"
  }
}

9. Use Vue Virtual Scroller When Displaying a Large Amount of Data
When you need to display a lot of rows in a given page or when you need to loop over a large amount of data, you might have noticed that the page can quickly become quite slow to render. To fix this, you can use vue-virtual-scroller.
npm install vue-virtual-scroller
It will render only the visible items in your list and reuse components and DOM elements to be as efficient and performant as possible. It really is easy to use and works like a charm! ✨
<template>
  <RecycleScroller
    class="scroller"
    :items="items"
    :item-size="32"
  >
    <div class="user">
      {{ item.name }}
    </div>
  </RecycleScroller>
</template>

10. Track the Size of Your Third-Party Packages
When a lot of people work on the same project, the number of installed packages can quickly become incredibly high if no one is paying attention to them. To avoid your application becoming slow (especially on slow mobile networks), I use the Import Cost package in Visual Studio Code. This way, I can see right from my editor how large an imported library is, and can check what's wrong when it's getting too large.
For instance, in a recent project, the entire lodash library was imported (which is approximately 24kB gzipped). The issue? Only the cloneDeep method was used. By identifying this issue with the import cost package, we fixed it with:
npm remove lodash
npm install lodash.clonedeep
The clonedeep function could then be imported where needed:
import cloneDeep from "lodash.clonedeep";
⚠️ To optimize things even further, you can also use the Webpack Bundle Analyzer package to visualize the size of your webpack output files with an interactive zoomable treemap.
Do you have other best practices when dealing with a large Vue code base? Feel free to tell me in the comments below
bassdrop
🔊 a downshift powered dropdown library for react vr
bassdrop is a downshift🏎️ powered dropdown library for building drop downs and select lists in react vr. It uses the function as child and “prop getter” patterns, which gives you maximum flexibility with a minimal API.
Table of Contents
Installation
This module is distributed via npm which is bundled with node and should be installed as one of your project's dependencies:
npm install --save bassdrop
This package also depends on react-vr, prop-types and react. Please make sure you have those installed as well.
Usage
import React from 'react';
import BassDrop from 'bassdrop';
import { Text, View, VrButton } from 'react-vr';

// Small helper function to choose item bg color.
const itemBackgroundColor = (selectedItem, highlightedIndex, item, index) => {
  if (selectedItem === item) return 'gray';
  if (highlightedIndex === index) return 'lightgray';
  return 'white';
};

const Dropdown = ({ items }) => (
  <BassDrop
    render={({
      // Prop getters
      getItemProps,
      getToggleButtonProps,
      getRootProps,
      // State
      isOpen,
      itemDisplay,
      highlightedIndex,
      selectedItem,
    }) => (
      <View {...getRootProps()}>
        <VrButton {...getToggleButtonProps()}>
          <Text>{itemDisplay}</Text>
        </VrButton>
        {isOpen ? (
          <View>
            {items.map((item, index) => (
              <View
                {...getItemProps({ item, index })}
                style={{
                  backgroundColor: itemBackgroundColor(
                    selectedItem, highlightedIndex, item, index
                  ),
                }}
              >
                <Text>{item}</Text>
              </View>
            ))}
          </View>
        ) : null}
      </View>
    )}
  />
);
…creates something like this:
Props
See the API Docs for information on the props exposed by this package. The usage example above is not an exhaustive list.
How To Render
bassdrop 🔊 uses the child callback render function pattern. This is where you render whatever you want to based on the state of bassdrop, which is passed to the callback as parameters. The function is passed as the child prop of the BassDrop component:
<BassDrop>
  {(/* parameters here */) => (
    /* your render code here */
  )}
</BassDrop>
or it can be passed as the render prop:

<BassDrop render={/* your render function here */} />
The parameters of this function can be split into two categories: State and Prop Getters.
See the API Docs for a list of these properties.
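The "prop getter" idea itself is framework-agnostic. Here is a minimal plain-JS sketch (hypothetical, not bassdrop's actual implementation) of how a component can merge its own required props with whatever the caller supplies:

```javascript
// Hypothetical toggle using a prop getter: the caller's props are kept,
// and the component composes its own onClick on top of the caller's.
function createToggle() {
  let on = false;
  return {
    isOn: () => on,
    getToggleButtonProps: (props = {}) => ({
      ...props,
      "aria-expanded": on,
      onClick: (...args) => {
        on = !on;
        if (props.onClick) props.onClick(...args);
      }
    })
  };
}
```

The caller spreads the returned props onto its own element, so both the component's behavior and the caller's customizations survive.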
Examples
Check out the demo site to see how bassdrop 🔊 works in VR. See the demo repo for the code behind the demo site.
Contributing
If you’d like to make bassdrop 🔊 better, please read our guide to contributing.
These wonderful people have contributed to bassdrop 🔊 in one way or another:
License
bassdrop is MIT licensed.
Another interesting property of those numbers is that they all are divisible by 9.
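This claim is easy to check against the six solutions reported further down the thread:

```python
# The reversible-divisor solutions listed later in this thread.
solutions = [8712, 9801, 87912, 98901, 879912, 989901]

# Every one of them is a multiple of 9.
print(all(n % 9 == 0 for n in solutions))  # True
```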
Just wanted to point out a small 'error' in the otherwise very nice function Reverse(Integer).
Function Reverse(ByVal n As Integer) As Integer
Dim rev As Integer = 0
While n > 0
rev = rev * 10 + (n Mod 10)
n = n \ 10
End While
Return rev
End Function
Try calling the Reverse function with the number 581. The result will not be what's expected.
This is because 58 / 10 does not return 5, but instead is rounded up to 6.
One solution is to change the line to:
n = Math.floor(n \ 10)
I liked the puzzle btw :)
Fun puzzle!
I had to write it in Ruby, as that's the language I'm trying to gain proficiency in.
It doesn't run fast, but it was fast to code :).
#!/usr/bin/env ruby

class Integer
  def reverse
    r = "#{self}".reverse
    return nil if r =~ /^0/ or r.to_i == self
    return r.to_i
  end
end

biggest = 1000000
(1 .. biggest).each do |n|
  if n.reverse
    puts n if n % n.reverse == 0
  end
end
Shouldn't you be getting results like 100 is divisible by 001?
@Helen - Take a look at the puzzle rules. I didn't include them since I don't consider 001 a valid number. I'm sure James Bond might take issue with that.
hi your website is very nice i want a little sorry the solution for this puzzle
"I live in water, if you cut my head I am at your door, if you cut my tail I am a fruit, if you cut both then I am with you"
please send me the solution for this
@angel - The answer is a pearl.
From the Python command prompt:
>>> for n in xrange(1000000):
... if n%10==0: continue
... r=int(str(n)[::-1])
... if n==r: continue
... if n%r==0: print n
...
8712
9801
87912
98901
879912
989901
>>>
@angel It would be pearl.
The Net’s top standards body is getting closer to speeding up XML-based software, a move that could benefit everyone from cell phone carriers to television broadcasters to the military.
Faster XML ahead?
2005-03-23 In the News 21 Comments
WTF is with all the hype about XML lately? It’s not as if it hasn’t already been around for ~30 years in one form or another.
People are like XML-THIS XML-THAT, when there are ALTERNATIVE and BETTER solutions.
“People are like XML-THIS XML-THAT, when there are ALTERNATIVE and BETTER solutions.”
Let me guess? S-Expressions.
Try: EDI.
People have XML on the Brain. The current disease/fad in IT.
XML is text. With tags. That’s it.
representing data using XML is simply too bulky
and
We’ve started using it in all kinds of situations that it wasn’t designed for.
From a bandwidth perspective, it is one of the most wasteful formats ever, and saddles you with a parsing engine that ends up requiring more horsepower than should really be needed.
The only advantage it has is you can sit down and read it as plain text, which of course makes it OH SO secure.
Ah, time to post one of my favourite quotes about XML:
“XML is not the answer. It is not even the question. To paraphrase Jamie Zawinski on regular expressions, “Some people, when confronted with a problem, think “I know, I’ll use XML.” Now they have two problems.””
taken from
not sure why there is so much discussion on this topic…gzip is stable, well understood, compresses well….webservers already use it
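The commenter's point is easy to demonstrate: because XML markup is highly repetitive, plain gzip shrinks it dramatically. A quick sketch with made-up sample data:

```python
import gzip

# Build a deliberately repetitive XML document.
record = b"<item><name>widget</name><price>9.99</price></item>"
xml = b"<items>" + record * 200 + b"</items>"

compressed = gzip.compress(xml)
print(len(xml), len(compressed))
# The compressed payload is a small fraction of the original size.
```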
That was a wonderful article you linked to.
It was designed to be a cross platform data swapping format. And now everyone has grabbed the term and tried to morph it into everything. It was great for its original reason. But now its gotten out of hand.
binary XML. which is ridiculous. if you need something concise, DON’T USE XML.
The thing about XML isn’t in the ML part, there are millions of ways to makup some data. The X is the whole point, eXstensible.
With the use of namespaces and uri’s it’s a breeze to add uncoordinated forward and backward compatible changes to a particular language. XHTML+MathML+SVG is a prime example.
xml is just a buzzword. industry hype. just like the semantic web. and like the semantic web, it gets government funding if you sprinkle your grant proposals with these magic words.
binary XML. you’re joking, right?
Well, the XML hype can’t be worse than the UML hype.
As long as there was one only binary format and there was a BSD-licensed library to make a one-to-one translation from binary to standard XML than I don’t really see a problem as it makes the changes below the logic layer of the program.
Unfortunately the error here rests solely in the bad article and synopsis.
The problem is not so much about making XML binary as it is about having to do this in order to *map*, *encapsulate*, and *transport* it over networks that don’t ‘talk XML’ or perhaps not even TCP (i.e. cell, broadcast and mil).
People have been doing this for some time though, using proprietary and non-proprietary mechanisms. Either via translating the data at the transmitter and receiver, or via encapsulating the entire payload as a ‘blob’.
The real good thing about XML is not a matter of technology. The major point is that it is a BROADLY USED standard. That's why everybody uses it: because everybody uses it.
That's the only real purpose and advantage of a standard: being used. Once it is being used, it becomes useful to use it. Got it?
Well, with XML I know that at least others can use the file to some degree, more of course if I provide a DTD!
If I throw a Yaml-File at random persons, I assume those will be deleted, at best.
If you give me a yaml file, I scream “Yeah! He’s got it!” and start coding a Ruby app. 😉
Then you are probably one under a thousand or so to speak.
Well, as I work mostly with Java or C#, however, I would use XML just because XML would pop up at some places in the framework – and the last thing that is good is having the inconsistency of including two technologies doing exactly the same tasks. In that case, technological benefits must be checked against maintenance and design issues …
Why not use ASN.1? It can be a tagged based binary protocol.
The main difference would be the tags are non-descriptive, which would require a table lookup for interpretation. Plus ASN.1 already exists and has been around since early 80’s.
Because all, XML, YAML and then ASN.1, too, I guess, are for storing data in a common framework. As long it is just your app that just stores and loads data to and from disks, it is no problem what you choose. But when you want to use it for communication, like you do for example with RSS feeds, SOAP, etc., you’ll want to use something that is wide spread, and both Yaml and ASN.1 then seem to fail in this area.
Documenting Long Classes in Jupyter Notebook07 Oct 2016
In an earlier post, I described my experiment with using Jupyter Notebook and Jekyll together to write blog posts. This turns out to be very convenient for writing about scientific software, as it allows me to make blog posts that interlace code, figures, mathematics and explanatory prose in a way that hopefully is helpful for my readers. My approach does, however, come with some annoying limitations. For instance, if I want to describe a Python class with several methods and properties, it is difficult to explain each part of that class independently. Though addressing this limitation is currently on the wishlist for the Jupyter Notebook project, the current version doesn't really have a "good" way of dealing with this limitation.
In this post, I'll describe a "bad" way of describing long classes, by using inheritance to break up long definitions. I say that this is "bad" because it creates a needlessly complicated class hierarchy and code that is far from production-quality, but I think it's still a useful technique in that it allows for clearer explanations.
from __future__ import print_function
from future.utils import with_metaclass
Next, to show that the inheritance trick works well for abstract classes, we import from the abc standard library package.
from abc import ABCMeta, abstractmethod
Getting to the example itself, suppose that you want to define a class with two methods, foo and bar. We represent this as an abstract base class (hence abc) with two abstract methods.
class AbstractBase(with_metaclass(ABCMeta, object)):
    @abstractmethod
    def foo(self, x):
        pass

    @abstractmethod
    def bar(self, y):
        pass
If we now try to describe an example concrete (that is, not abstract) class inheriting from AbstractBase, we must define both foo and bar. If we leave one out, then we get a TypeError.
class A(AbstractBase):
    def foo(self, x):
        return x
try:
    a = A()
except TypeError as ex:
    print(ex)
Can't instantiate abstract class A with abstract methods bar
Thankfully, we can use inheritance to provide the second method (in this case, bar) after we have defined our example class.
class A(A):
    def bar(self, y):
        return y ** 2
a = A()
a.bar(2)
4
What's going on here? In particular, how can A inherit from itself? Remember that in Python, classes are themselves values that can be manipulated at runtime. Thus, if we define a class B, we can do things like print that new class out to the console.
class B(object):
    pass

print(B)
<class '__main__.B'>
As with any other Python value, we can ask for the id of a class. This returns a unique number that identifies that class, even if we assign it to different variables.
print(id(B))
C = B
print(id(C))
67899912
67899912
Notably, if we define a new class that is also called B, this "hides" the old definition of B and gives us a class with a different id.
class B(object):
    pass

print(id(B))
67900856
The old class still exists, however, such that we can get to it if we assigned it to a different variable.
print(id(C))
print(C)
67899912
<class '__main__.B'>
Thus, when we make a class that "inherits from itself," it doesn't really do that per se, but rather inherits from the old value of the variable that held that class. We can confirm this by looking at the special attribute __bases__, which returns a tuple of all base classes of a class.
class D(object):
    pass

print(id(D))

class D(D):
    pass

print(id(D))
print([id(base_class) for base_class in D.__bases__])
67901800
67882920
[67901800L]
Thus, the "old value" of our class still lives on, as referred to by the __bases__ attribute of our new class. Practically speaking, this is a terrible and confusing thing to do, but it has a very nice effect for our purposes, of letting us progressively add attributes such as methods and properties to a class.
class E(object):
    x = 'foo'

class E(E):
    y = 'bar'

print(E.x, E.y)
foo bar
In this way, self-inheritance can provide a useful technique for splitting up long class definitions. That said, it's a bad idea, so please don't use this in "actual" code, and if you do, don't blame me for the confusion that results.
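For completeness, here is the whole pattern condensed into one runnable sketch (using Python 3's built-in abc module directly, so the future package is not needed):

```python
from abc import ABC, abstractmethod

class AbstractBase(ABC):
    @abstractmethod
    def foo(self, x):
        pass

    @abstractmethod
    def bar(self, y):
        pass

# First "cell": implement foo only; A is still abstract at this point.
class A(AbstractBase):
    def foo(self, x):
        return x

# Later "cell": self-inherit to add bar, making the class concrete.
class A(A):
    def bar(self, y):
        return y ** 2

a = A()
print(a.foo(3), a.bar(2))  # 3 4
```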
We finish the post by exporting to Jekyll:
!jupyter nbconvert --to html --template jekyll.tpl 2016-10-07-documenting-long-classes-jupyter-notebook
[NbConvertApp] Converting notebook 2016-10-07-documenting-long-classes-jupyter-notebook.ipynb to html [NbConvertApp] Writing 19416 bytes to 2016-10-07-documenting-long-classes-jupyter-notebook.html
Doug Robinson wrote on Wed, May 03, 2017 at 15:54:50 -0400:
> Daniel:
>
> On Mon, May 1, 2017 at 2:05 PM, Daniel Shahaf <d.s_at_daniel.shahaf.name>
> wrote:
>
> > Doug Robinson wrote on Mon, May 01, 2017 at 14:20:16 +0000:
> > > On Mon, Apr 17, 2017 at 21:13 Daniel Shahaf <d.s_at_daniel.shahaf.name>
> > wrote:
> > > > Stefan Fuhrmann wrote on Mon, Apr 17, 2017 at 22:22:33 +0200:
> > > > > On 15.03.2017 10:55, Daniel Shahaf wrote:
> > > > > >>From the 1.10 draft release notes:
> > > > > >
> > > > > >>All wildcards apply to full path segments only, i.e. * never
> > matches
> > > > > >>/, except for the case where /**/ matches zero or more path
> > segments.
> > > > > >>For example, /*/**/* will match any path which contains at least
> > > > > >>2 segments and is equivalent to /**/*/* as well as /*/*/**.
> > > > > >Are «/*/**/*» «/**/*/*» «/*/*/**» really equivalent? I would have
> > > > > >expected the first two to match any node except / and /'s immediate
> > > > > >children, but I wouldn't expect the third form to match /trunk/iota
> > > > > >where iota is a file, since the pattern has a trailing slash after
> > the
> > > > > >non-optional second component.
> > > > > How do you know that /trunk/iota is a file?
> > > >
> > > > I was reviewing the API docs as a black box, i.e., from a user
> > > > (repository admin) perspective, not from an implementation perspective.
> > > >
> > > > From that perspective, I would say that having a [/trunk/iota/**]
> > > > stanza to apply to a /trunk/iota file violates the principle of least
> > > > surprise.
> > >
> > >
> > > From a very critical point of view I agree.
> > >
> > > However, the point of wildcards is to easily reserve a complete
> > namespace.
> >
> > Sure, that's a valid use-case.
> >
> > I was envisioning that, if a [/trunk/iota/**] stanza were present, then
> > an authz query for a /trunk/iota file would return either "No access" or
> > a parse error. This would reserve the namespace, wouldn't it? Referring
> > to your next paragraph, this logic would neither leak the contents of
> > the file nor require multiple stanzas.
> >
>
> For an AuthZ check the answer is either Yes or No, not "parser error",
> right?
Wrong. An authz check can return an error. For example, `svnauthz
accessof` has exit code 2 when the authz file fails to parse.
> And it really can't be a "parser error" (invalidating the AuthZ file
> entirely) since in some other revision that "file" could be
> a "directory". So either the stanza gets skipped as "not applicable"
> (and therefore not reserving the namespace) or it gets entered as if
> the file were a directory and we're back to the behavior that I am
> expecting.
You are correct: it will not be a *parse* error since the grammar of
authz files does not depend on the contents of the repository. That
just means it will be a different kind of error — a semantic error — and
will occur at authz query time, not at authz file load time.
That would still break checkouts of /trunk, though, so it might be
better to just default /trunk/iota to "No access" and log a warning to
the server log. (Using, say, svn_repos_fs(repos)->warning().)
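For concreteness, a hypothetical authz fragment of the kind under discussion (the stanza applies to the subtree under /trunk/iota, while /trunk/iota itself may be a file in some revisions):

```
[/trunk/iota/**]
* =
harry = rw
```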
>
> > > If we do not apply that stanza apply to the file means requiring 2
> > stanzas
> > > to cover the space entirely. That's both expensive and brittle (2X
> > stanzas
> > > and requires remembering to treat them in pairs - both when adding
> > > and
> > when
> > > removing).
> > >
> > > And I think the "surprise" will be very short-lived if at all.
> > >
> > > From a cost/benefit standpoint I think it is extremely positive.
> >
> > Well, if a common task requires two stanzas, then _of course_ we'll
> > find an easier way for users to spell it. For example, we could
> > invent some new "reserve prefix" stanza syntax, or pass to
> > svn_repos_authz_check_access() the svn_node_kind_t of the path it
> > checks access to, or any number of other solutions.
> >
> > In short: there might well be a design that meets both of our
> > criteria: principle of least surprise _and_ namespace reservation.
> >
>
>?
Cheers,
Daniel
> As I said, I think the surprise, if any (none if we document it well)
> will be very short-lived.
Received on 2017-05-04 10:16:24 CEST
This is an archived mail posted to the Subversion Dev
mailing list.
Hey, I’m new to Julia and not sure about some issues regarding arrays. One of these is this one:
I just want to bind several vectors of zeros into an array. Their lengths are given in another array like this:
length = [10 20 30 40....]
Now I do the following:
zeroarray = zeros(length[1])
zeroarray = vcat([zeroarray], zeros(length[2]))
for i in 3:length(length)
    zeroarray = vcat(zeroarray, zeros(length[i]))
end
This does not look like the appropriate approach to me. I would like to know how to formulate this properly. For explanation, this is the equivalent in Python:
import numpy as np

length = [10, 20, 30, 40,....]
zeroarray = [np.zeros(length[0])]
for i in range(1, len(length)):
    zeroarray.append(np.zeros(length[i]))
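As a side note, the loop in that Python version collapses into a single comprehension, which is also the shape of the idiomatic Julia answer (`[zeros(n) for n in lengths]`). A plain-Python sketch (lists of zeros instead of numpy arrays, so it runs standalone):

```python
lengths = [10, 20, 30, 40]

# One zero-filled vector per requested length.
zeroarray = [[0.0] * n for n in lengths]

print([len(v) for v in zeroarray])  # [10, 20, 30, 40]
```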
I would be very thankful for further advice!
Believe it: the majority of programmers write "working code," but not "efficient code." As we mentioned in the beginning of this tutorial, do you want to become the "Most Valued Professional" of your company? Writing "efficient code" is an art that you must learn and practice.
Use Pascal casing for class names:
public class HelloWorld { ... }
Use Pascal casing for method names:
public class HelloWorld { void SayHello(string name) { ... } }
Use Camel casing for variables and method parameters:
public class HelloWorld { int totalCount = 0; void SayHello(string name) { string fullMessage = "Hello " + name; ... } }
Do not use Hungarian notation to name variables. In earlier days, most programmers liked it: having the data type as a prefix for the variable name and using m_ as the prefix for member variables, e.g.:

string m_sName;
int nAge;
However, in .NET coding standards, this is not recommended. Data type prefixes and m_ for member variables should not be used. All variables should use Camel casing.

Use meaningful, descriptive words to name variables: name, address, salary, etc. instead of nam, addr, sal. Do not use single-character variable names like i, n, x, etc. Use names like index and temp instead.
Namespace names should follow the standard pattern:

<company name>.<product name>.<top level module>.<bottom level module>
File name should match with class name. For example, for the class
HelloWorld, the file name should be helloworld.cs (or helloworld.vb).
Indentation and spacing: use TAB for indentation. Do not use spaces.
Comments should be at the same level as the code they describe. Curly braces ( {} ) should be at the same level as the code outside the braces. Use one blank line to separate logical groups of code.
bool SayHello (string name)
{
    string fullMessage = "Hello " + name;
    DateTime currentTime = DateTime.Now;
    string message = fullMessage + ", the time is : " + currentTime.ToShortTimeString();
    MessageBox.Show ( message );
    if ( ... )
    {
        // Do something
        // ...
        return false;
    }
    return true;
}
This code looks better than the code shown above:
bool SayHello ( string name )
{
    string fullMessage = "Hello " + name;
    DateTime currentTime = DateTime.Now;

    string message = fullMessage + ", the time is : " +
                     currentTime.ToShortTimeString();

    MessageBox.Show ( message );

    if ( ... )
    {
        // Do something
        // ...

        return false;
    }

    return true;
}
There should be one and only one blank line between each method inside the class. The curly braces should be on a separate line and not on the same line as if, for, etc.
Good:
if ( ... )
{
    // Do something
}
Not good:
if ( ... ) {
    // Do something
}
Use a single space before and after each operator and brackets.
Good:
if ( showResult == true )
{
    for ( int i = 0; i < 10; i++ )
    {
        //
    }
}
Not good:
if(showResult==true)
{
    for(int i= 0;i<10;i++)
    {
        //
    }
}
Avoid having too-large files. If a file has more than 300-400 lines of code, you must consider refactoring the code into helper classes.

Avoid writing very long methods. A method should typically have 1-25 lines of code. If a method has more than 25 lines of code, you must consider refactoring it into separate methods.

The method's name should tell what it does. Do not use misleading names. If the method name is obvious, there is no need for documentation explaining what the method does.
Good:
void SavePhoneNumber ( string phoneNumber )
{
    // Save the phone number.
}
Not good:
// This method will save the phone number.
void SaveData ( string phoneNumber )
{
    // Save the phone number.
}
Use the C# or VB.NET specific types, rather than the alias types defined in the System namespace.
Good:
int age;
string name;
object contactInfo;
Not good:
Int16 age;
String name;
Object contactInfo;
Do not hardcode numbers; use constants instead. Do not hardcode strings; use resource files. Avoid using many member variables. Declare local variables and pass them to methods instead of sharing a member variable between methods. If you share a member variable between methods, it will be difficult to track which method changed the value and when.
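The "pass local variables instead of sharing a member variable" advice can be sketched as follows (a hypothetical Java example with invented method names; the C# version looks essentially the same):

```java
class OrderTotals {

    // Values flow through parameters and return values instead of shared
    // member variables, so it is always clear where each value came from.
    static int subtotalCents(int[] priceCents) {
        int sum = 0;
        for (int p : priceCents) {
            sum += p;
        }
        return sum;
    }

    static int withTaxCents(int subtotalCents, int taxPercent) {
        // integer arithmetic keeps the example deterministic
        return subtotalCents * (100 + taxPercent) / 100;
    }

    public static void main(String[] args) {
        int subtotal = subtotalCents(new int[] { 1000, 500 });
        System.out.println(withTaxCents(subtotal, 20)); // 1500 * 120 / 100 = 1800
    }
}
```

Because no method mutates shared state, each method can be read and tested in isolation.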
Do not make the member variables public or protected. Keep them private and expose public/protected properties. Never hardcode a path or drive name in code. Get the application path programmatically and use relative paths. Never assume that your code will run from drive C:. You never know; some users may run it from a network drive or from drive Z:.
In the application start-up, do some kind of "self check" and ensure that all required files and dependencies are available in the expected locations. Check for database connections at start-up, if required. Give a friendly message to the user in case of any problems.
If the required configuration file is not found, the application should be able to create one with default values. If a wrong value is found in the configuration file, the application should throw an error or give a message and also should tell the user what the correct values are. Error messages should tell the user what to do to solve the problem. Instead of a message like "Failed to update the database," suggest what the user should do: "Failed to update the database. Please make sure the login ID and password are correct."
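The "create a config file with default values when it is missing" behavior can be sketched like this (a hypothetical standalone example, shown here in Java with java.util.Properties; the file name and default key are invented for illustration):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

class ConfigLoader {

    // Load the config file; if it does not exist, create one with defaults.
    static Properties loadOrCreate(Path path) throws IOException {
        Properties defaults = new Properties();
        defaults.setProperty("server.port", "5000"); // invented default value

        if (Files.notExists(path)) {
            try (OutputStream out = Files.newOutputStream(path)) {
                defaults.store(out, "created with default values");
            }
            return defaults;
        }

        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Path config = Files.createTempDirectory("cfg").resolve("app.properties");
        Properties p = loadOrCreate(config); // file absent, so defaults are written
        System.out.println(p.getProperty("server.port"));
        System.out.println(Files.exists(config));
    }
}
```

On first run the file is created with the defaults, so later runs read the same values back.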
Show short and friendly messages to the user, but log the actual error with all possible information. This will help a lot in diagnosing problems.
Do not write comments for every line of code and every variable declared. Write comments wherever required. Good, readable code will require very few comments. If all variables and method names are meaningful, that will make the code very readable and it will not need much commenting. Fewer lines of comments will make the code more elegant. However, if the code is not clean/readable and there are fewer comments, that is worse. If you have to use some complex or weird logic for any reason, document it very well with sufficient comments. If you initialize a numeric variable to a special number other than 0, -1, etc., document the reason for choosing that value. The bottom line is: write clean, readable code in such a way that it doesn't need any comments to understand it. Do a spell check on comments and also make sure that proper grammar and punctuation are used.
Never do a "catch exception and do nothing." If you hide an exception, you will never know if the exception happened or not. In the case of exceptions, give a friendly message to the user, but log the actual error with all possible details about the error, including the time it occurred, the method and class name, etc. Always catch only the specific exception, not generic exceptions.
There's no need to catch the general exception in all your methods. Leave it open and let the application crash. This will help you find most of the errors during the development cycle. You can have an application-level (thread-level) error handler where you can handle all general exceptions. In the case of an "unexpected general error," this error handler should catch the exception and log the error, in addition to giving a friendly message to the user before closing the application or allowing the user to "ignore and proceed." Use a separate try-catch for each task you perform and enclose only the specific piece of code inside the try-catch. This will help you find which piece of code generated the exception, and you can give specific error messages to the user.
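A minimal sketch of the "one try-catch per task, specific exceptions only" idea (shown here in Java, whose try-catch shape is nearly identical to C#; the method and messages are invented for illustration):

```java
class SpecificCatchDemo {

    // One task, one try-catch, and only the exception this task can
    // actually throw is caught, so the error message can be specific.
    static String parsePort(String raw) {
        try {
            int port = Integer.parseInt(raw);
            return "port=" + port;
        } catch (NumberFormatException e) {
            // friendly message for the user; real code would also log e in full
            return "Invalid port number: \"" + raw + "\". Please enter digits only.";
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePort("8080"));
        System.out.println(parsePort("80a0"));
    }
}
```

Because the catch block is scoped to a single task, a failure here cannot be confused with a failure somewhere else in the method.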
You may write your own custom exception classes, if required, in your application. Do not derive your custom exceptions from the base class SystemException. Instead, inherit from ApplicationException.
1. Introduction
We want to create an automated pipeline in order to ensure that no manual and error prone steps are required for building, testing and deploying the application. When a failure occurs during one of these steps, we will be automatically notified and can take necessary actions in order to resolve the issue.
What will we be doing in this post?
- Create a sample Spring Boot application.
- Push the source code to GitHub.
- Build the application with AWS CodeBuild.
- Create an AWS Elastic Beanstalk environment.
- Create an AWS CodePipeline to glue everything together.
You will need an account at GitHub (which is fairly easy to create) and an account at AWS for this. A step-by-step guide on how to create an AWS account can be found in a previous post where we explored AWS Elastic Beanstalk.
The sources which we will be using can be found at GitHub.
2. Create a Sample Project
In order to create the sample project, we go to Spring Initializr, add dependency Spring Web and generate the project.
We add a simple HelloController which will just return a welcome message and the name of the host where the application is running.
@RestController
public class HelloController {

    @GetMapping("/hello")
    public String hello() {
        String message = "Hello AWS Continuous Delivery!";
        try {
            InetAddress ip = InetAddress.getLocalHost();
            message += " From host: " + ip;
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        return message;
    }
}
Run the application locally:
$ mvn spring-boot:run
Invoke the URL and the welcome message is displayed:
$ curl
Hello AWS Continuous Delivery! From host: your-computer-name/127.0.1.1
We will also create a test for this. We are going to create a pipeline and automated test execution is a vital part of a delivery pipeline. The test invokes the URL and verifies whether the welcome message is returned.
@SpringBootTest
@AutoConfigureMockMvc
class MyAwsCdPlanetApplicationTests {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void verifyHelloMessage() throws Exception {
        String patternString = "(Hello AWS Continuous Delivery! From host: ).*\\/\\d+.\\d+.\\d+.\\d+";
        Matcher<String> regexMatcher = Matchers.matchesRegex(patternString);
        this.mockMvc.perform(get("/hello")).andExpect(status().isOk())
                .andExpect(content().string(regexMatcher));
    }
}
Build the application, run the tests and create the jar file with:
$ mvn clean install
We also need to set the server port to 5000 in the application.properties file because we will deploy the application to AWS Elastic Beanstalk. For more info on AWS Elastic Beanstalk, see one of our previous posts.
server.port=5000
Finally, push the source files to GitHub.
3. Build the Application With CodeBuild
In order to create a pipeline, we will need to be able to build the application with AWS CodeBuild. In this section, we will do the necessary setup in order to create a build project for our Spring Boot Application.
Navigate to the AWS Management Console and search for CodeBuild.
Click the Create build project button in the top right corner.
Give the project a suitable name; we name it MyAWSCDPlanet.
Choose GitHub as Source provider, choose Connect via OAuth and click the Connect to GitHub button. A window is shown in order to confirm that AWS may access your repositories.
Fill in the Repository URL. You will need to fork the repository first into your own account before you can execute these steps yourself.
In the Environment section, do the following:
- Choose Managed Image as Environment Image.
- Choose Amazon Linux 2 as Operating System.
- Choose Standard as Runtime.
- Choose aws/codebuild/amazonlinux2-x86_64-standard:3.0 as Image.
- Choose Always use the latest image for this runtime version as Image version.
In the BuildSpec section, choose Insert build commands and add the following buildspec after clicking the Edit button.
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      - echo Build started on `date`
      - mvn clean install

artifacts:
  files:
    - target/myawscdplanet-0.0.1-SNAPSHOT.jar
  discard-paths: yes
At the end, click the Create build project button at the bottom of the page.
In the build project dashboard, click the Start build button. A new screen is shown; just click the Start build button at the bottom of the page. When everything is OK, the build succeeds.
4. Create AWS Beanstalk Environment
Now that we can build the application on AWS CodeBuild, we will also need a way to deploy the application to AWS Elastic Beanstalk. In this section, we will create the AWS Elastic Beanstalk environment.
Log in to the AWS Management Console and search for the Elastic Beanstalk service.
Create a new application by clicking the Create Application button in the top right corner.
Give your application a name, we choose MyAWSCDPlanet.
Choose Java as platform. We created the Spring Boot application as a Java 11 application, therefore choose Corretto 11 as Platform branch.
Leave the application code set to Sample application and create the application. We will deploy our application as part of the delivery pipeline, so it is OK to choose the sample application at this stage. We just want to set up the environment at this moment.
A console window is shown which shows progress information. After a few minutes a green check mark is shown indicating that the environment is up and running.
Clicking on the link above the check mark brings you to a default webpage of the sample application.
Next, go to the Configuration section and click the Edit button of the Capacity section.
Set the Environment type to Single Instance. If we do not do so, a load balancer is instantiated and this might incur some extra costs. Click the Apply button at the bottom right corner of the page.
A warning is raised telling us this will reduce our capacity. We know what we are doing and can confirm this message.
Wait again a few minutes and check the application link again in order to verify the sample application is running and accessible.
5. Create a Pipeline With CodePipeline
Now that we have put the source code in GitHub, created a build and created an environment, it is time to tie all these pieces together by means of AWS CodePipeline.
In the same section where we created the CodeBuild, we navigate to the Pipeline section and click the Create pipeline button.
Give the pipeline a name, we choose MyAWSCDPlanet, and click the Next button.
We choose GitHub (Version 2) as a Source provider and click the Connect to GitHub button.
A popup window is shown where we give the connection with GitHub a name (again we choose MyAWSCDPlanet) and click the Connect to GitHub button.
Next, after confirming the permissions which are required, click the Install a new app button.
Again a popup window is shown where we can select the repositories the App connection is allowed to access. We choose only the repository we want to access and click the Install button.
Fill in your credentials and click the Connect button in the Connect to GitHub screen. Finally, we are connected to GitHub. Next, fill in the Repository name and the Branch name (we only want to deploy the app from the master branch) and choose the CodePipeline default as output format. Finally, click the Next button.
In the build stage, choose AWS CodeBuild as Build provider and select the previously created build project as Project name. Click the Next button.
In the deploy stage, choose AWS Elastic Beanstalk as Deploy provider. Fill in the previously created Elastic Beanstalk project as Application name, select the environment and click the Next button.
A review page is shown where you can double check the configuration. When everything seems ok, just press the Create pipeline button at the bottom of the page.
The pipeline will run for the first time. Each stage will indicate whether it is successful and at the end, in the deploy stage, we can click the AWS Elastic Beanstalk link. This will open the Elastic Beanstalk environment. From this point on, we can click the URL of the environment and add /hello to the URL, which will display the welcome message.
The URL returns (where the ip will be different in your case):
Hello AWS Continuous Delivery! From host: ip-172-31-2-222.eu-west-3.compute.internal/172.31.2.222
Great! We now have completed the continuous delivery pipeline on AWS.
6. Make a Change
Let’s see what happens when we make a change to the code. We add the word again to the welcome message and commit and push the change to GitHub.
String message = "Hello again AWS Continuous Delivery!";
The pipeline detects the change and almost immediately starts checking out the code and starts building. Because we did not alter the test, the build fails and the deploy step is not executed.
We first need to fix the test. It is as simple as adding the word again to the
patternString.
String patternString = "(Hello again AWS Continuous Delivery! From host: ).*\\/\\d+.\\d+.\\d+.\\d+";
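The updated pattern can be checked on its own, outside the MockMvc test (a hypothetical standalone snippet using java.util.regex; the sample strings are the messages shown elsewhere in this post):

```java
import java.util.regex.Pattern;

class PatternCheck {
    public static void main(String[] args) {
        // the updated pattern from the fixed test
        String patternString =
                "(Hello again AWS Continuous Delivery! From host: ).*\\/\\d+.\\d+.\\d+.\\d+";

        // the new message produced after the code change
        String newSample =
                "Hello again AWS Continuous Delivery! From host: ip-172-31-2-222.eu-west-3.compute.internal/172.31.2.222";
        System.out.println(Pattern.matches(patternString, newSample));

        // the old message no longer matches the updated pattern
        String oldSample =
                "Hello AWS Continuous Delivery! From host: ip-172-31-2-222.eu-west-3.compute.internal/172.31.2.222";
        System.out.println(Pattern.matches(patternString, oldSample));
    }
}
```

This mirrors what Hamcrest's matchesRegex does inside the Spring test: a full match of the response body against the pattern.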
Run mvn clean install first locally on your development machine in order to verify whether the test is successful now (which we should have done in the first place).
After committing and pushing the source code to GitHub, the pipeline finishes successfully this time. When we invoke the URL, the new hello message is returned.
Hello again AWS Continuous Delivery! From host: ip-172-31-2-222.eu-west-3.compute.internal/172.31.2.222
7. Cleanup the AWS Resources
After you are done with experimenting with all of the above, it is wise to cleanup all of the resources.
Remove the AWS Elastic Beanstalk environment. Navigate to the AWS Elastic Beanstalk Applications, select the radio button of MyAWSCDPlanet and select Delete application from the Actions menu at the right top corner.
Remove the AWS CodePipeline. Navigate to the AWS CodePipeline section, select the radio button of MyAWSCDPlanet and click the Delete pipeline button.
Remove the Amazon S3 resources. Navigate to the Amazon S3 section, select the radio button of codepipeline-<some id> and click the Delete button. Probably, you will first need to empty the bucket prior to deleting it. Use the Empty button for this.
Remove the AWS CodeBuild. Navigate to the AWS CodeBuild section, select the radio button of MyAWSCDPlanet and click the Delete build project button.
All used resources should be removed by now.
8. Conclusion
Setting up a continuous delivery pipeline with AWS is easy. The build itself runs inside a Docker container and it is also possible to add extra review steps to the pipeline if you would like to do so. But that will be something for a future blog post.
Operating networking software is hard
Here I’m not talking about operating physical networks (I don’t know anything about that), but instead about keeping software like DNS servers & load balancers & proxies working correctly.
I have been working on a team that’s responsible for a lot of networking infrastructure for a year, and I have learned a few things about operating networking infrastructure! (though I still have a lot to learn obviously). 3 overall thoughts before we start:
- Networking software often relies very heavily on the Linux kernel. So in addition to configuring the software correctly you also need to make sure that a bunch of different sysctls are set correctly, and a misconfigured sysctl can easily be the difference between “everything is 100% fine” and “everything is on fire”.
- Networking requirements change over time (for example maybe you’re doing 5x more DNS lookups than you were last year! Maybe your DNS server suddenly started returning TCP DNS responses instead of UDP which is a totally different kernel workload!). This means software that was working fine before can suddenly start having issues.
- To fix a production networking issue you often need a lot of expertise. (For example, see this great post by Sophie Haskins on debugging a kube-dns issue.) I’m a lot better at debugging networking issues than I was, but that’s only after spending a huge amount of time investing in my knowledge of Linux networking.
I am still far from an expert at networking operations but I think it seems important to:
- Very rarely make major changes to the production networking infrastructure (because it’s super disruptive)
- When you are making major changes, think really carefully about what the failure modes for the new network architecture are
- Have multiple people who are able to understand your networking setup
Switching to Kubernetes is obviously a pretty major networking change! So let’s talk about what some of the things that can go wrong are!
Kubernetes networking components
The Kubernetes networking components we’re going to talk about in this post are:
- Your overlay network backend (like flannel/calico/weave net/romana)
- kube-dns
- kube-proxy
- Ingress controllers / load balancers
- The kubelet
If you’re going to set up HTTP services you probably need all of these. I’m not using most of these components yet but I’m trying to understand them, so that’s what this post is about.
The simplest way: Use host networking for all your containers
Let’s start with the simplest possible thing you can do. This won’t let you run HTTP services in Kubernetes. I think it’s pretty safe because there are fewer moving parts.
If you use host networking for all your containers I think all you need to do is:
- Configure the kubelet to configure DNS correctly inside your containers
- That’s it
If you use host networking for literally every pod you don’t need kube-dns or kube-proxy. You don’t even need a working overlay network.
In this setup your pods can connect to the outside world (the same way any process on your hosts would talk to the outside world) but the outside world can’t connect to your pods.
This isn’t super important (I think most people want to run HTTP services inside Kubernetes and actually communicate with those services) but I do think it’s interesting to realize that at some level all of this networking complexity isn’t strictly required and sometimes you can get away without using it. Avoiding networking complexity seems like a good idea to me if you can.
Operating an overlay network
The first networking component we’re going to talk about is your overlay network. Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).
All other Kubernetes networking stuff relies on the overlay networking working correctly. You can read more about the kubernetes networking model here.
The approach Kelsey Hightower describes in Kubernetes The Hard Way seems pretty good, but it’s not really viable on AWS for clusters of more than 50 nodes or so, so I’m not going to talk about that.
There are a lot of overlay network backends (calico, flannel, weaveworks, romana) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities:
- Make sure your pods can send network requests outside your cluster
- Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.
Okay! So! What can go wrong with your overlay network?
- The overlay network is responsible for setting up iptables rules (basically iptables -t nat -A POSTROUTING -s $SUBNET -j MASQUERADE) to ensure that containers can make network requests outside Kubernetes. If something goes wrong with this rule then your containers can’t connect to the external network. This isn’t that hard (it’s just a few iptables rules) but it is important. I made a pull request because I wanted to make sure this was resilient
- Something can go wrong with adding or deleting nodes. We’re using the flannel hostgw backend and at the time we started using it, node deletion did not work.
- Your overlay network is probably dependent on a distributed database (etcd). If that database has an incident, this can cause issues. For example, one issue report says that if you have data loss in your flannel etcd cluster it can result in containers losing network connectivity. (this has now been fixed)
- You upgrade Docker and everything breaks
- Probably more things!
I’m mostly talking about past issues in Flannel here but I promise I’m not picking on Flannel – I actually really like Flannel because I feel like it’s relatively simple (for instance the vxlan backend part of it is like 500 lines of code) and I feel like it’s possible for me to reason through any issues with it. And it’s obviously continuously improving. They’ve been great about reviewing pull requests.
My approach to operating an overlay network so far has been:
- Learn how it works in detail and how to debug it (for example the hostgw network backend for Flannel works by creating routes, so you mostly just need to do sudo ip route list to see whether it’s doing the correct thing)
- Maintain an internal build so it’s easy to patch it if needed
- When there are issues, contribute patches upstream
I think it’s actually really useful to go through the list of merged PRs and see bugs that have been fixed in the past – it’s a bit time consuming but is a great way to get a concrete list of kinds of issues other people have run into.
It’s possible that for other people their overlay networks just work but that hasn’t been my experience and I’ve heard other folks report similar issues. If you have an overlay network setup that is a) on AWS and b) works on a cluster more than 50-100 nodes where you feel more confident about operating it I would like to know.
Operating kube-proxy and kube-dns?
Now that we have some thoughts about operating overlay networks, let’s talk about kube-proxy and kube-dns.
There’s a question mark next to this one because I haven’t done this. Here I have more questions than answers.
Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6)
- Every Kubernetes service gets an IP address (like 10.23.1.2)
- kube-dns resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)
- kube-proxy sets up iptables rules in order to do random load balancing between them. Kube-proxy also has a userspace round-robin load balancer but my impression is that they don’t recommend using it.
So when you make a request to my-svc.my-namespace.svc.cluster.local, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.
Some things that I can imagine going wrong with this:
- kube-dns is misconfigured
- kube-proxy dies and your iptables rules don’t get updated
- Some issue related to maintaining a large number of iptables rules
Let’s talk about the iptables rules a bit, since doing load balancing by creating a bajillion iptables rules is something I had never heard of before!
kube-proxy creates one iptables rule per target host like this: (these rules are from this github issue)
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-E4QKA7SLJRFZZ2DD
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-LZ7EGMG4DRXMY26H
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-RKIFTWKKG3OHTTMI
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CGDKBCNM24SZWCMS
-A KUBE-SVC-LI77LBOOMGYET5US -m comment --comment "default/showreadiness:showreadiness" -j KUBE-SEP-RI4SRNQQXWSTGE2Y
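The --probability values in these rules look odd at first, but they implement uniform selection: with n endpoints, rule i matches 1/(n - i) of the traffic that reaches it (the last rule matches unconditionally), so every endpoint ends up with an equal 1/n share. A small standalone sketch (hypothetical, not kube-proxy code) reproduces the numbers:

```java
import java.util.Locale;

class KubeProxyProbabilities {
    public static void main(String[] args) {
        int endpoints = 5;      // five backend pods, as in the rules shown above
        double remaining = 1.0; // fraction of traffic not yet claimed by an earlier rule

        for (int i = 0; i < endpoints; i++) {
            // rule i matches this fraction of the traffic that reaches it
            double ruleProbability = 1.0 / (endpoints - i);
            double overallShare = remaining * ruleProbability;
            System.out.printf(Locale.ROOT,
                    "rule %d: --probability %.5f, overall share %.3f%n",
                    i, ruleProbability, overallShare);
            remaining -= overallShare;
        }
    }
}
```

The printed probabilities (0.20000, 0.25000, 0.33333, 0.50000, then unconditional) match the quoted rules, and each endpoint's overall share comes out to 0.200.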
So kube-proxy creates a lot of iptables rules. What does that mean? What are the implications of that for my network? There’s a great talk from Huawei called Scale Kubernetes to Support 50,000 services that says if you have 5,000 services in your kubernetes cluster, it takes 11 minutes to add a new rule. If that happened to your real cluster I think it would be very bad.
I definitely don’t have 5,000 services in my cluster, but 5,000 isn’t SUCH a big number. The proposal they give to solve this problem is to replace this iptables backend for kube-proxy with IPVS which is a load balancer that lives in the Linux kernel.
It seems like kube-proxy is going in the direction of various Linux kernel based load balancers. I think this is partly because they support UDP load balancing, and other load balancers (like HAProxy) don’t support UDP load balancing.
But I feel comfortable with HAProxy! Is it possible to replace kube-proxy with HAProxy? I googled this and I found this thread on kubernetes-sig-network saying:
kube-proxy is so awesome, we have used in production for almost a year, it works well most of time, but as we have more and more services in our cluster, we found it was getting hard to debug and maintain. There is no iptables expert in our team, we do have HAProxy&LVS experts, as we have used these for several years, so we decided to replace this distributed proxy with a centralized HAProxy. I think this maybe useful for some other people who are considering using HAProxy with kubernetes, so we just update this project and make it open source:. If you found it’s useful , please take a look and give a try.
So that’s an interesting option! I definitely don’t have answers here, but, some thoughts:
- Load balancers are complicated
- DNS is also complicated
- If you already have a lot of experience operating one kind of load balancer (like HAProxy), it might make sense to do some extra work to use that instead of starting to use an entirely new kind of load balancer (like kube-proxy)
- I’ve been thinking about whether we want to be using kube-proxy or kube-dns at all – I think instead it might be better to just invest in Envoy and rely entirely on Envoy for all load balancing & service discovery. So then you just need to be good at operating Envoy.
As you can see my thoughts on how to operate your Kubernetes internal proxies are still pretty confused and I’m still not super experienced with them. It’s totally possible that kube-proxy and kube-dns are fine and that they will just work fine but I still find it helpful to think through what some of the implications of using them are (for example “you can’t have 5,000 Kubernetes services”).
Ingress
If you’re running a Kubernetes cluster, it’s pretty likely that you actually need HTTP requests to get into your cluster somehow. This blog post is already too long and I don’t know much about ingress yet, so we’re not going to talk about that.
Useful links
A couple of useful links, to summarize:
- The Kubernetes networking model
- How GKE networking works:
- The aforementioned talk on kube-proxy performance:
I think networking operations is important
My sense of all this Kubernetes networking software is that it’s all still quite new and I’m not sure we (as a community) really know how to operate all of it well. This makes me worried as an operator because I really want my network to keep working! :) Also I feel like as an organization running your own Kubernetes cluster you need to make a pretty large investment into making sure you understand all the pieces so that you can fix things when they break. Which isn’t a bad thing, it’s just a thing.
My plan right now is just to keep learning about how things work and reduce the number of moving parts I need to worry about as much as possible.
As usual I hope this was helpful and I would very much like to know what I got wrong in this post!
Hi...
I found some problems with the installer. For example, anaconda can't properly detect the SiS 620 video card. I searched for it in the kudzu pcitable and didn't find it, so I added it. The pci entry I used is:
0x1039 0x6306 "Card:SiS 620" "Silicon Integrated Systems [SiS]|620"
It worked fine; now we can perform a graphical installation with this video card, and the entries for Xconfigurator work fine.
the file modified is...
/RedHat/instimage/usr/share/kudzu/pcitable
just add the line, and go...
About es_MX, I found this problem when I went to do something like...
[root@geo12 /root]# rm lolis
rm: ¿borrar `lolis'? (s/n) s
[root@geo12 /root]# ls -l lolis
-rw-r--r-- 1 root root 5 Dec 11 07:37 lolis
[root@geo12 /root]#
So, the file was not erased. Then I changed the definition in /etc/sysconfig/i18n to...
LANG="es_ES"
LC_ALL="es_ES"
LINGUAS="es_ES"
OK, this is when the system is already installed, but we can fix this in the installer program too. The file to modify is this:
RedHat/instimage/usr/lib/python1.5/site-packages/todo.py
in the class Language...
class Language (SimpleConfigFile):
    def __init__ (self):
        self.info = {}
        self.langs = {
            "Czech" : "cs_CZ" ,
            "English" : "en_US" ,
            "German" : "de_DE" ,
            "Hungarian" : "hu_HU" ,
            "Icelandic" : "is_IS" ,
            "Indonesian" : "id_ID" ,
            "Italian" : "it_IT" ,
            "Norwegian" : "no_NO" ,
            "Polish" : "pl_PL" ,
            "Romanian" : "ro_RO" ,
            "Slovak" : "sk_SK" ,
            "Slovenian" : "sl_SI" ,
            "Spanish" : "es_ES" ,
            "Russian" : "ru_RU.KOI8-R" ,
            "Ukrainian" : "uk_UA" ,
        }
The original line said...
"Spanish" : "es_MX" ,
and I changed it to...
"Spanish" : "es_ES" ,
Now the installation performs well in Spanish (oh, sorry for my bad English; Spanish is my native language :) and the whole system is now correctly localized (KDE, GNOME, mc, etc.).
I hope we can share this with everyone...
Bye.
Juan Diego
Fixed in the latest installer available in RawHide.
For your first multithreaded application, you will learn the basic techniques of working with the scheduler. This means that you will learn the syntax to create, terminate, suspend, sleep, and join threads, as well as what the uses are for each of those operations.
When creating a thread, you need to use a special delegate called ThreadStart. This delegate will contain a reference to the method that contains the work you want to be performed in the thread. As explained in Chapter 9, delegates are really nothing more than special types that contain methods with specific signatures. You can invoke delegates the same way you invoke methods, and you can pass delegates as parameters to other methods, which is exactly what needs to be done when creating a thread.
To create a thread, you pass a ThreadStart delegate to the constructor of the Thread class, as shown in the following example:
ThreadStart myThreadDelegate = new ThreadStart(MyWork); Thread workerThread = new Thread(myThreadDelegate);
What you will often see in many code samples is the preceding two lines of code consolidated into the following:
Thread workerThread = new Thread(new ThreadStart(MyWork));
In this line of code, MyWork is the name of the method that matches the signature defined by the ThreadStart delegate. All ThreadStart delegates must be void methods.
The code in Listing 10.1 shows how to create and start a thread, as well as how to continuously poll the status of a thread as a crude way of checking to see if it's finished. You'll see a more elegant solution later.
using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace RunThreads
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread.CurrentThread.Name = "MAIN";
            Console.WriteLine("[{0}] Hello.", Thread.CurrentThread.Name);

            Thread workerThread = new Thread(new ThreadStart(WorkerMethod));
            workerThread.Name = "WORKER";
            Console.WriteLine("[{0}] Created the new worker thread, {1}",
                Thread.CurrentThread.Name, workerThread.Name);
            workerThread.Start();

            while (workerThread.IsAlive)
            {
                // do nothing until it's finished
            }

            Console.WriteLine("Looks like the worker thread finished its job.");
            Console.ReadLine();
        }

        static void WorkerMethod()
        {
            for (int i = 0; i < 20; i++)
            {
                Console.WriteLine("[{0}] Doing some work.", Thread.CurrentThread.Name);
            }
        }
    }
}
Figure 10.1 shows the output of this sample.
Terminating a thread involves using the Abort method on the thread. Abort can either be called on the thread instance by the block of code that initially created it, or it can be called by the running thread itself.
When a thread is aborted, a ThreadAbortException is thrown. As you will find out in various samples throughout this book, you can trap and suppress most exceptions. The ThreadAbortException is the one exception in the .NET Framework that cannot be ignored, with good reason. This exception must be allowed to travel up the chain of exception handlers in order for aborted threads to know to stop working. You can manually suppress this exception with the Thread.ResetAbort() method, as shown in Listing 10.2, which illustrates aborting a running thread.
using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace AbortThreads
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread.CurrentThread.Name = "MAIN";
            PrintMessage("Application Started.");
            Thread worker = new Thread(new ThreadStart(DoWork));
            worker.Name = "WORKER";
            worker.Start();
            Console.WriteLine("Press Enter to Abort the Thread!");
            Console.ReadLine();
            worker.Abort();
            Console.WriteLine("Thread abort signal sent.");
            Console.ReadLine();
        }

        static void PrintMessage(string msg)
        {
            Console.WriteLine("[{0}] {1}", Thread.CurrentThread.Name, msg);
        }

        static void DoWork()
        {
            try
            {
                while (true)
                {
                    Console.Write("...");
                    Thread.Sleep(100); // small time delay to simulate real work
                }
            }
            catch (Exception e)
            {
                PrintMessage("Trapped:" + e.ToString());
            }
        }
    }
}
Figure 10.2 shows a screenshot of the console output of this program. As soon as the user presses Enter, the thread abort signal is sent, and the worker thread catches (and suppresses) the ThreadAbortException. If the worker method didn't suppress it, the exception would "bubble up" and eventually cause the main application to stop, typically an undesired result.
When you suspend a thread, you tell the scheduler that the thread no longer needs to be swapped to the foreground for execution. What this means is that as soon as the thread stops executing to give time to another thread, the thread will not continue until it has been resumed.
You suspend a thread with the Suspend method. It takes no arguments and works fairly simply. To resume the thread at will, you can simply call the Resume method on that same thread.
You saw in Listing 10.2 that there is a method called Sleep that does exactly what it sounds like: causes the thread to sleep. By supplying a time interval in milliseconds, the thread will stop executing at that line for the specified duration. You can also pass a 0 as the argument, which will cause the thread to be suspended. If you specify System.Threading.Timeout.Infinite as the value, the thread will block indefinitely.
The Join method serves as a way to block the current thread until the specified thread has completed. This essentially allows the thread to wait for the completion of another method. This is where the term join comes from, where the current thread will wait for another thread to "catch up." Listing 10.3 illustrates the use of the Join method.
using System;
using System.Threading;
using System.Collections.Generic;
using System.Text;

namespace JoinTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Thread worker = new Thread(new ThreadStart(DoWork));
            worker.Start();

            // now do a 'join' to block this thread until worker
            // has completed
            worker.Join();
            Console.WriteLine("This line will not execute until 'worker' is complete.");
            Console.ReadLine();
        }

        static void DoWork()
        {
            for (int i = 0; i < 100; i++)
            {
                Thread.Sleep(100);
                Console.Write(".");
            }
        }
    }
}
The use of Join replaces the loop seen in an earlier example where the code executed a while loop that continuously looped until the IsAlive property of the executing thread was false. As mentioned earlier, using Join is a far more elegant (and thread-safe) solution.
https://flylib.com/books/en/1.237.1.59/1/
Test-Driven Development for JavaScript
This trend is interesting because it represents a shift in development that we’ve never seen before. Our applications are being composed of more and more JavaScript. However, are we effectively testing all of this newfound client-side code?
Sort of.
I believe that we must follow a more robust methodology to test our growing JavaScript applications. Today, I want to explore the idea of practicing Test-Driven Development (TDD) for client-side JavaScript.
What’s So Different About Testing JavaScript?
In web development, we can separate our concerns into two sections: the client-side (stuff that happens in a user’s web browser) and the server-side (things that happen on our application servers).
When it comes to testing, we’ve historically covered our server-side code with unit and functional tests and our client-side with integration tests. We tend to give a lot of thought and investment into server-side tests and simply verify the results with an integration test on the client-side.
However, server-side code is and was never the complete picture of web development. What the user sees on the client-side is often what has the most impact. The most impressive technical features mean nothing if the view layer is messed up!
Integration tests are helpful and effective when we're only concerned with what shows up on the webpage. They say nothing about what goes on inside the client-side code; they only show the output of that logic.
There was a period of time where integration tests could really serve as proper test coverage for what’s going on in the client-side of our applications. However, with more and more applications having bigger shares of JavaScript, we’ve got to come up with a better way of testing the client-side other than: is x showing up on the page?
To start things off, let’s look at what’s familiar about TDD on the client-side. It turns out a lot of things aren’t that different after all!
JavaScript TDD Isn’t That Different
No matter how experienced in TDD you are, don’t panic! If you’ve done TDD before, applying it to client-side JavaScript isn’t that different on a high level. With TDD, we want to stay true to a cycle of three steps:
Write tests that reflect the expected behavior of your code.
Write code to make those tests pass.
(optional) Refactor and ensure the tests still pass.
Let’s run through a quick example in EmberJS.
A unit TDD example with EmberJS
Ember is really cool because it comes with a lot of great tools for testing right out of the box. If you run something as simple as ember g model user, you'll be gifted with two important files: app/models/user.js and tests/unit/user-test.js. If you run ember test, all of your tests related to user.js should pass.
Currently, our files look something like this:
user.js
import DS from 'ember-data';

export default DS.Model.extend({
});
user-test.js
import { moduleForModel, test } from 'ember-qunit';

moduleForModel('user', 'Unit | Model | user', {
  // Specify the other units that are required for this test.
  needs: []
});

test('it exists', function(assert) {
  let model = this.subject();
  assert.ok(!!model);
});
We want to expand our model to do two things: have a name and be able to retrieve a metric called “karma” for a user. Let’s write up some tests to reflect these desires:
user-test.js
import { moduleForModel, test } from 'ember-qunit';

moduleForModel('user', 'Unit | Model | user', {
  // Specify the other units that are required for this test.
});

test('it exists', function(assert) {
  let model = this.subject();
  assert.ok(!!model);
});

test('must contain a name', function(assert) {
  let model = this.subject({ name: 'testing name' });
  assert.equal(model.get('name'), 'testing name');
});

// Note: Karma should be the combo of upvotes and downvotes against a user
test('should be able to retrieve the karma of a user', function(assert) {
  let model = this.subject({ name: 'karma user', upvotes: 10, downvotes: 5 });
  assert.equal(model.get('karma'), 5);
});
If we’ve changed nothing to
user.js, we’ll find that our Ember test suite will now fail gracefully. Next up, we’ll work to make these tests actually pass.
user.js
import Ember from 'ember';
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  upvotes: DS.attr('number'),
  downvotes: DS.attr('number'),
  karma: Ember.computed(function() {
    return this.get('upvotes') - this.get('downvotes');
  })
});
Now our tests and code are good to go!
With just a few simple steps, we were able to run through the TDD process and develop some simple but stable code! I chose not to refactor on this run through since my changes were pretty simple. However, the more complex our code becomes, the more there becomes a need for refactoring at every step.
Other frameworks and JavaScript code
Not every JavaScript framework or codebase has a built-in test runner like EmberJS (or EmberCLI) does. There are still a lot of excellent tools out there, like Mocha and Jasmine. However, there's still a long way to go when it comes to JavaScript test tooling.
While client-side unit testing doesn’t look that different from other kinds of testing we’ve experienced, there are some noticeable differences to how we should approach it.
JavaScript TDD Is Different
When implementing TDD on the client-side, we’ll need to divide our testing mindset into client-side and server-side concerns.
Client-side JavaScript cannot query a database directly. It can ask the server to run a query, or work with data already delivered to the client-side, but it cannot fetch from or insert into a database itself. This is important because we want to assume that the data being given to the client-side (in our tests) is gospel. We should treat the server-side data being sent as an external dependency of our test.
In many ways, we’re splitting up our applications into two contained testing areas: Server-side tests are concerned with things that go on inside the server; client-side tests are concerned with stuff that happens in the web browser. In order to achieve proper test coverage of these two aspects, the amount of tests we’re hauling along is going to increase. In some cases, we could have all of the following:
Client-side integration tests
Server-side functional tests
Server-side unit tests
Client-side unit tests
Client-side functional tests
Adding a lot more test weight to your application might not seem like the most attractive option. Yet, it's a consequence of having complex processing on both the server- and client-sides of your application. There's a lot more space to be covered, and understanding more of what's going on is essential!
What Does All This Mean?
If you’re implementing TDD practices for the first time, JavaScript is a great place to start. The basics of TDD aren’t too hard to get down, but it takes some practice to grasp what aspects of the domain you should test.
If you’re starting to apply previous knowledge of TDD to your frontend JavaScript, don’t forget what you know. Embrace your historical knowledge and learn to understand the domain layout of the client-side.
Ultimately, JavaScript test coverage is a hard thing to fully visualize and understand. The way we leverage the language to help us develop apps is ever changing as well. You might be adding a lot more tests to your test suite, but the understanding and coverage you’ll have is worth it!
https://www.cloudbees.com/blog/test-driven-development-for-javascript
Up tonight, I am going to use a little behavior driven development to (re-)introduce the ACE code editor (JavaScript) back into the ICE Code Editor (Dart). I already have the basics of the testing and code in place. Hopefully this will be straight-forward.
I have no intention of testing the implementation details of ACE, but I need some way to verify that I can set ACE up with Dart. The easiest way to accomplish that is to check for the existence of an ACE-specific class like ace_content. Something like this should do:

test("starts an ACE instance", (){
  var it = new Editor('ice');
  expect(document.query('.ace_content'), isNotNull);
});

Since I am BDDing here, I expect this to fail with a message to the effect that querying for the ace_content CSS class is null. And, indeed, I do get an error. Just not the error that I expected:
unittest-suite-wait-for-done
undefined:1
Exception: '': Error: line 763 pos 28: type 'ExpectException' is not loaded
String message = (e is ExpectException) ? e.message : 'Caught $e';
                            ^
malformed type used.
Stack Trace: #0 _registerException ()
...

Ew.
It turns out that ExpectException was removed a while back. So why is unittest complaining about it? That is because I force downgraded unittest yesterday so that I could get headless testing working again. I am back to running tests in the Dartium browser today, and I would like decent error messages. So I temporarily peg my app to the latest unittest via an update to pubspec.yaml:

name: ice_code_editor
version: 0.0.1
description: Code Editor + Preview
author: Chris Strom <chris@eeecomputes.com>
homepage:
dependencies:
  unittest: any
  js: any

A quick pub update, a reload of the test page, and I have a useful error message:

FAIL: defaults starts an ACE instance
undefined:1
Expected: not null
     but: was <null>.

Nice.
With that fixed, I am ready for the familiar change-the-message or make-it-pass cycle of BDD. I start by editing the Editor class to create an ACE instance via js-interop:

import 'dart:html';
import 'package:js/js.dart' as js;

class Editor {
  bool edit_only, autoupdate;
  String title;
  Editor(el, {this.edit_only:false, this.autoupdate:true, this.title}) {
    var context = js.context;
    context.ace.edit(el);
  }
}

After reloading, I have changed the message. I now get:

FAIL: defaults starts an ACE instance
undefined:1
Caught NoSuchMethodError : method not found: 'ace'
Receiver: Instance of 'Proxy'
Arguments: []

This is because I have not loaded the ACE JavaScript sources on the page. The easiest way to accomplish this is to go back and edit the test page itself to include ace.js:
<html>
  <head>
    <title>ICE Test Suite</title>
    <script src="packages/ice_code_editor/ace/ace.js" type="text/javascript" charset="utf-8"></script>
    <script type="application/dart" src="editor_test.dart"></script>
    <script src="packages/browser/dart.js"></script>
  </head>
  <body>
    <h1>Test!</h1>
  </body>
</html>

Now, when I reload, I find that I have made the test pass:

unittest-suite-wait-for-done
PASS: defaults defaults to auto-update the preview
PASS: defaults defaults to disable edit-only mode
PASS: defaults starts an ACE instance
All 3 tests passed.
unittest-suite-success

It is pretty cool that I can bundle JavaScript with my Dart package. I can install the package, then source main.dart:
<head>
  <script src="packages/ice_code_editor/ace/ace.js" type="text/javascript" charset="utf-8"></script>
  <script src="packages/browser/dart.js"></script>
  <script src="main.dart" type="application/dart"></script>
</head>
<h1>Hello</h1>
<div style="width:600px; height: 400px" id="ace"></div>

Best of all, I can dart2js that main.dart file:

➜ public git:(master) ✗ dart2js -omain.dart.js main.dart

With that, I can load the ICE Code Editor—ACE and all—in Chrome, FireFox, and even Internet Explorer:
Still, I wonder how much effort it would be to eliminate the need for the web page to source the ACE code. It would seem better to source the Dart code only, and then make the Dart code responsible for sourcing the appropriate JavaScript libraries before kicking in the js-interop stuff. I wonder if that's even feasible.
Something for tomorrow.
Day #738
https://japhr.blogspot.com/2013/05/dart-behavior-driven-develop-with-js.html
Question:
I have an interface, and some classes that inherit from it.
public interface IFoo {}
public class Bar : IFoo {}
public class Baz : IFoo {}
If I get the types which implement IFoo, how can I decide if the type will represent a Bar or a Baz (without actually creating the object)?
// Get all types in assembly.
Type[] theTypes = asm.GetTypes();

// See if a type implements IFoo.
for (int i = 0; i < theTypes.Length; i++)
{
    Type t = theTypes[i].GetInterface("IFoo");
    if (t != null)
    {
        // TODO: is t a Bar or a Baz?
    }
}
Solution:1
t is neither Bar nor Baz - it is IFoo. theTypes[i] is Bar or Baz.
Solution:2
if (theTypes[i] == typeof(Bar))
{
    // t is Bar
}
else if (theTypes[i] == typeof(Baz))
{
    // t is Baz
}
Solution:3
When you do GetInterface, you're getting the interface only. What you need to do is get only the types that implement that interface, like so:
var theTypes = asm.GetTypes().Where( x => x.GetInterface("IFoo") != null );
now you can loop through them and do this. or use a switch.
foreach (var item in theTypes)
{
    if (item == typeof(Bar))
    {
        // its Bar
    }
    else if (item == typeof(Baz))
    {
        // its Baz
    }
}
Solution:4
I think this will help with your problem:
IFoo obj = ...;
Type someType = obj.GetType();

if (typeof(Bar).IsAssignableFrom(someType)) ...
if (typeof(Baz).IsAssignableFrom(someType)) ...
Solution:5
Am I missing something?
theTypes[i] is the type.
Solution:6
A strongly-typed solution to "Does Type X implement interface I" that supports analysis/refactoring is:
Type x = ...;
bool implementsInterface = Array.IndexOf(x.GetInterfaces(), typeof(I)) >= 0;
That said, I really have no idea what you are attempting to accomplish.
http://www.toontricks.com/2018/10/tutorial-detect-class-from-interface.html
Found by Mr. Yan Rong Ge.
This vulnerability was discovered on Monday, but it is not as interesting (nor
straightforward) as the previous local one...
Description of a remote Denial of Service vulnerability in heartbeat
=======================================================================
---------
Summary
---------
The function peel_netstring() in lib/clplumbing/cl_netstring.c does not
validate the 'length' parameter in user input. A crafted heartbeat message
in netstring format can cause the master control process to access (read)
invalid memory and die of a SEGV. This happens before authentication.
Following is an example of a crafted heartbeat message: (two lines)
###
70000000:
Thus a simple command as follows (issued from another member of the cluster)
may cause SEGV to the victim heartbeat daemon (master control process). The
example uses unicast udp as the communication media between the two nodes.
$ ( echo "###" && echo "70000000:" ) | netcat -u 12.34.56.78 694
where 12.34.56.78 is the IP address of the victim node and 694 is the port
where the communication plugin listens for incoming heartbeat messages.
70000000 is an arbitrary offset that may (or may not, depending on the actual
memory layout of the process) cause SEGV.
---------
Details
---------
A backtrace (in gdb) of the master control process when receiving SEGV
is as follows:
296 if (*sp != ','){
(gdb) bt
#0
#1 0x40053ec7 in netstring2msg_rec (s=0x811212c "###\n70000000:\n", length=15,
slen=0xbfd96668) at cl_netstring.c:412
#2 0x40053fe3 in netstring2msg (s=0x811212c "###\n70000000:\n", length=15,
needauth=1) at cl_netstring.c:448
#3 0x400505fa in wirefmt2msg_ll (s=0x811212c "###\n70000000:\n", length=15,
need_auth=1) at cl_msg.c:2422
#4 0x4004f280 in msgfromIPC_ll (ch=0x80f6ac8, flag=1) at cl_msg.c:1787
#5 0x4004f2d2 in msgfromIPC (ch=0x80f6ac8, flag=1) at cl_msg.c:1802
#6 0x080500ee in read_child_dispatch (source=0x80f6ac8, user_data=0x8073700)
at heartbeat.c:1281
#7 0x40042fc7 in G_CH_dispatch_int (source=0x80fb508, callback=0,
user_data=0x0)
at GSource.c:610
#8 0x4009335c in g_main_context_dispatch () from
/opt/gnome/lib/libglib-2.0.so.0
#9 0x400967cb in g_main_context_check () from /opt/gnome/lib/libglib-2.0.so.0
#10 0x40096ae7 in g_main_loop_run () from /opt/gnome/lib/libglib-2.0.so.0
#11 0x080507a8 in master_control_process () at heartbeat.c:1535
#12 0x0804f95c in initialize_heartbeat () at heartbeat.c:990
#13 0x080562f0 in main (argc=1, argv=0xbfd96c74, envp=0xbfd96c7c)
at heartbeat.c:4842
Inspection shows that the vulnerable statement is around line 295:
sp += (*len);
if (*sp != ','){
return(HA_FAIL);
}
where (*len) is assigned in line 275:
if (sscanf(sp,"%d", len) != 1){
return(HA_FAIL);
}
where (sp) points directly to the incoming heartbeat packet.
---------------
Suggested Fix
---------------
The vulnerability described above can be fixed by validating the 'length'
parameter in user input:
295a296,298
> if (sp >= smax) {
> return (HA_FAIL);
> }
Upstream will release in 1-2 weeks time. Attach an ebuild to this bug if you
want pretesting before public disclosure.
Public now ?
CVE reference is still closed but SA21511 has been issued :
solution = update to 1.2.5 or 2.0.7
xmerlin, is 2.0.7 a valuable candidate for stabilization ?
tsunam, could you test it please ?
I think SA 21511 (followed by Ubuntu, Debian and Mandriva) concerns this issue
but i prefer to wait for Sune before opening the bug.
2.0.7 is a valuable candidate
Definitively public now, thanks to our new padawan Akraut.
x86 please test 2.0.7 and mark stable if appropriate, thanks.
*** Bug 144244 has been marked as a duplicate of this bug. ***
1.2.5 and 2.0.7 marked stable on x86.
good. Thanks. Removing x86 from CC.
Sec team : please vote on GLSA.
I personnally vote yes (very easy remote DoS having potentially heavy
consequencies)
(In reply to comment #5)
> Definitively public now, thanks to our new padawan Akraut.
>
> x86 please test 2.0.7 and mark stable if appropriate, thanks.
It was public before then, or show you that.
>
> It was public before then, or
> show you that.
>
i know that, cf my comments #2 + #3 in which SA21511 mentioned it too. It was
only 2 days ago. It was confidential the two weeks before. So what's wrong?
Wasn't i enough responsive ?
(In reply to comment #10)
> i know that, cf my comments #2 + #3 in which SA21511 mentioned it too. It was
> only 2 days ago. It was confidential the two weeks before. So what's wrong?
> Wasn't i enough responsive ?
You're doing fine. My point is that the padawan here didn't break some embargo,
it was public prior to him filing the bug. That's what was implied earlier.
> You're doing fine. My point is that the padawan here didn't break some embargo,
of course :)
> it was public prior to him filing the bug. That's what was implied earlier.
>
ok
Voting YES, let's have a GLSA.
GLSA 200608-23
http://bugs.gentoo.org/141894
If you are running a big website with a large number of pages and categories, you need to track the number of indexed pages on Google SERP at least every week. This will help you monitor the changes in your website and how they affect the number of indexed pages.
One of the first steps in the technical SEO audit is to check the number of indexed pages for your website on Google SERPs and compare them with your XML sitemap to see the difference between the numbers.
Most of the time, the numbers on Google SERPs will be different from your XML sitemap, because they will include paginated pages and sometimes other pages that you don't want indexed, like search pages and facets.
By monitoring the number of indexed pages every week, you can fix your website's bugs and issues before they get worse.
You can get the number of indexed pages on Google by using the search operator "site:" and adding the URL you want to track after it.
Most SEOs will agree that the "site:" operator is not accurate, and that's true, you can try it by yourself by doing it many times, and you will get different numbers. But still it's a good indicator to tell you how many pages are indexed.
This tutorial will get the number of indexed pages on Google for each Wikipedia language subdomain using Python, store them in Google Sheets, and visualize the data in a Google Data Studio dashboard.
Here we will use the Google Sheet as a database to store the data automatically without opening it every time you want to add new numbers.
Go to Google Sheets and create a new blank sheet, and give it "indexed_pages" name.
The first column will be used to store the Date, so we need to name the column's header "Date" and change the format of the cells to Date by selecting the entire column without the first cell, and navigate to Data -> Data Validation, and choose Date from the list.
Then we need to add the websites that we want to track by adding each one in a column. In this tutorial, we'll be tracking the below sites:
Remove the HTTPS:// and the subdomains before adding them to the sheet. By doing this, you will track the entire domain. In this tutorial, we will keep the subdomains to track the different languages for Wikipedia.
Now is the time to prepare our Python code to scrape the results from Google SERPs and store them in Google Sheets by using the gspread library.
Gspread is a Python API used to connect your script with Google Sheets, where you can add, edit and retrieve data.
To use gspread, you need to get an API key from Google Cloud Services. This API will allow your script to interact with any Google Sheets stored on your drive.
Then, a file will be downloaded to your computer. This file will be used to connect our Python script to Google Sheets.
In the JSON file; you will find an email called client_email. Take this email address and go to your Google Sheets and add this email as editor.
Grant access to the email on Google Sheets
Now take the JSON file to your Python script working directory and rename it to client_secret.json.
Keep the JSON key file in a safe place, don't share it with anyone.
We will be using the below libraries in our script to connect to Google Sheets and scrape the data from Google SERPs.
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import requests
from bs4 import BeautifulSoup as bs
from time import sleep
from tqdm.notebook import tqdm
from datetime import date
You can install any missing library by going to your cmd, then type pip install library name, example: pip install gspread.
Define the scope and the authorization
scope = ['','']
creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)
client = gspread.authorize(creds)
Open the Google Sheet
sheet = client.open('indexed_pages')
Replace "indexed_pages" with the name of your sheet.
Get the worksheet
sites_sheet = sheet.get_worksheet(0)
Get the values of the sheet in a list
data_sheet_header = sites_sheet.spreadsheet.worksheet("data").get_all_values()
Get the list of the domains
domains = data_sheet_header[0][1:]
Perform a request on Google SERPs and scrape the data
final_index_list = []
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36"

for url in tqdm(domains):
    google_serp_url = f":{url}&hl=en"
    google_request = requests.get(google_serp_url, headers={"User-Agent": ua})
    google_html = bs(google_request.text, "html.parser")
    indexed_pages = google_html.find(id="result-stats")
    final_index_list.append(indexed_pages.text.split()[1])
    sleep(2)
Here, I have added 2 seconds delay between each request to avoid getting blocked by Google.
Get the current Date and add it to the results list
today = date.today()
final_index_list.insert(0, str(today))
Insert the results into our Google Sheets
sites_sheet.append_row(final_index_list,"USER_ENTERED")
Now go to your Google Sheet, and you will find the results updated automatically without touching it.
You can perform this task every day automatically by using a cron job on your server. The results will be saved on Google sheets automatically without the need to add them manually.
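For example, a crontab entry along these lines would run it daily at 06:00. The paths and the script name here are illustrative, not from this article:

```shell
# Edit the crontab with: crontab -e
# Run the tracking script every day at 06:00 and append its output to a log.
# /home/user/indexed_pages/track.py is a hypothetical location for the script.
0 6 * * * cd /home/user/indexed_pages && /usr/bin/python3 track.py >> track.log 2>&1
```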
This is the last step in our tutorial, where we will connect our Google Sheets with Google Data Studio to create a small dashboard to display the results
Then we will add a Date Range Control to give us the ability to filter the dates that we want to show on the chart.
Click on Add a Control, and choose Date Range Control.
And that's it, now you can view your report and share it with your manager or client.
Google Data Studio gives you the ability to create complex reports by combining data from Google Sheets, Google Search Console, Google Analytics, MySQL, and other resources.
If you are looking for a more advanced SEO Dashboard, contact me on my Linkedin profile.
You can start from here to build your own SEO Dashbarod for your technical SEO audit or content and share it with your team.
https://www.nadeem.tech/python-google-sheets-and-google-data-studio-to-track-the-number-of-indexed-pages/
From: Karl Ove Hufthammer (huftis@bigfoot.com)
Date: Tue May 14 2002 - 14:27:26 EDT
Paul Rohr <paul@abisource.com> wrote in
news:3.0.5.32.20020514104302.03beb180@mail.abisource.com:
> I've just finished skimming the DC stuff too, and I agree that
> we'll need a superset approach.
I agree.
> As for switching to RDF, the big question for me is whether
> adding all that code is really need to help us solve this
> namespace problem. If not, I'm tempted to follow the simpler
> precedent already in place for HTML:
IMHO, that's a satisfactory solution.
>
>
> In short, the idea would be that we preface any of *our* keys
> which are DC-compatible with the DC prefix.
Note that this *also* solves key name clashes in different
languages. E.g. if the word 'title' means something in language
'xxx', and the UI is 'xxx', the user may try to use the reserved
name 'title' for his metadata.
> All others --
> whether defined by Word or by users -- go at top level.
I would rather see them prefixed by 'custom.' or something
similar. We *may* chose to support other metadata standards in the
future[1], and IMHO it's better if *all* keys are prefixed.
> a few other notes
> -----------------
> 1. For those of you who read the above date examples
> carefully, I'm not sure whether our canonical datetime output
> should include the timezone offsets or not. For details, see:
>
>
I would prefer if they did.
> 3. FWIW, I'm not sure it's all that safe to map Word's
> company onto DC's publisher. Word actually has a separate
> publisher keyword in their custom tag.
Then we shouldn't map it, IMO.
> 4. Getting back to my original metadata vs. document
> properties thread, is a DC.language property the right place
> to store the document's default language,
Yes.
Footnote
========
[1] E.g. A-Core <URL: >.
-- Karl Ove Hufthammer
This archive was generated by hypermail 2.1.4 : Tue May 14 2002 - 14:31:47 EDT
http://www.abisource.com/mailinglists/abiword-dev/02/May/0511.html
Manipulating the Data
00:00 Now that you can pull all the data out of an Excel workbook, let’s take a look at how you can convert that data into useful Python data structures. All the data you’ve returned so far has been in the form of tuples, which can be thought of as just immutable lists.
00:14 This should be relatively straightforward to work with. So, let’s try out an example where you need to extract product info from the spreadsheet and put it into a dictionary where the key is a product ID.
00:25 I’m going to open up the terminal, activate the environment, and start bpython.
00:33 And, like before, from openpyxl import load_workbook. Let's say that workbook = load_workbook(), with that sample spreadsheet, and then sheet will be workbook.active.
00:46 So, one way to make this dictionary is to iterate through all the rows, grabbing the product ID, and then collecting the columns that are related to the product to put into another dictionary.
00:56
This dictionary of dictionaries will let you index our product ID and then return all the pertinent info, which is pretty cool. To get started, go ahead and grab all the header values so that you know which columns you are interested in. So,
for value in sheet.iter_rows()—make sure the capitalization’s correct—and you’ll want
min_row equal to
1, and
max_row is also equal to
1. This will just grab that first row.
01:22
And, because you’re interested in the values, say
values_only=True. And from here, go ahead and
print(value). All right. So take a look at all the header titles—so,
'marketplace',
'customer_id'—and you’ll notice that you have
'product_id',
'product_parent',
'product_title', and
'product_category'. These are conveniently next to each other, so you can quickly pull all of these columns using the min and max column arguments that you used earlier. Let’s take a look. This will be
1,
2,
3,
4,
5,
6,
7, as the max column.
01:56
So, to test this out,
for value in sheet.iter_rows()—and this time, you’ll say
min_row=2, because you don’t want that header row. And then, the
min_col is going to be equal to
4, and the
max_col is going to be equal to
7, and like before, you’ll want
values_only set to
True.
02:20
And from here, go ahead and print the value. Okay! So, that printed out a bunch of stuff here—I’ll scroll to the top—and you should see that you end up with the product ID, the product’s parent, product title—which is a pretty long one—and then the product category, which should be
'Watches', for all of these.
02:39 So, this is looking pretty good. Now, it’s just a matter of taking this data and mapping it into a dictionary. I’m going to clear this.
02:49
Let’s put this into a new script so it’s a little easier to follow. I’m going to make a new file. I’ll call this something like
parse_product.py.
03:02
And to get more fancy, let’s go ahead and
import json, and then
from openpyxl import load_workbook. And also like before, let’s say,
workbook = load_workbook() and pass in the
filename of
"samples.xlsx".
sheet is still the
workbook.active.
03:23
So, the end result: you want to have a dictionary called
products—so, set this up as an empty dictionary, for now—and then, like you did in the interpreter, make a
for loop.
03:32
So,
for row in sheet.iter_rows(), and you’ll pass in the
min_row=2,
03:39
and then the
min_col=4 and the
max_col=7. Then, make sure you have the
values_only and set this equal to
True. And now, instead of printing out that
row—and I should probably have an
in up here—you’re going to make a
product_id, which is going to equal the first item in the row,
04:00
and then make the product dictionary. So, just say something like
product and then start a new dictionary, and you want to pass in the
"parent", which is
row[1], and then the
"title", which is
row[2], and then the
"category", which is that last item, so at index
3.
04:22
And now that you have this, you can say that
products and then, at that
product_id, is going to equal
product. So, this line right here has a lot going on with it, very similar-sounding variables—and I’m not helping by not having an
s here.
04:37
So, let’s break this down.
products here is what you defined up here as an empty dictionary. And then now, you’re filling this dictionary with a key that’s equal to a
product_id, and then the value at that key is going to be the
product, which is this dictionary here and has more information on the product.
04:59 So, this is a pretty cool way to take your data and set it up. Like, if you had a product ID of something you pulled out of a database. Now, you could say the product ID, and then that would pull out all the product info in a dictionary format.
05:12 This could be very useful depending on what you’re doing.
05:16
So, after this runs through every row in the worksheet, let’s take this
products dictionary and use
json to get this into a string format. We’ll just print that out, so,
json.dumps() (dump S)—to dump the dictionary into a string—and then pass in the
products dictionary. Save this. We can actually quit out of the interpreter session. And then, let’s try to run it. So say,
python parse_product.py. I got an error, so let’s see.
No [...] file or directory, so if I go up here—so, I named mine
sample. And that’s wrong. All right, let’s save that.
05:57 And this looks pretty good. Let’s go up to the top…
06:03 and here you go! You’ve got a dictionary here,
06:06
and then the first key is this ID, and then that ID represents this value, which is another dictionary, here. It has all the product information, with the
"parent",
"title", and
"category" down here. So, this is great! This might be a bit tricky to read printed out in the terminal here, but a JSON output is very useful in a lot of cases, and the dictionary that produced it would also be very useful in further Python scripts.
06:34 In the next video, you’re going to learn how to append data to a spreadsheet, before you learn how to write new spreadsheets.
Become a Member to join the conversation.
Oren Wolfe on Sept. 2, 2020
Your On-Screen ‘parse_product.py’ does not match the file in ‘openpyxl_sample_code.zip’, and the sample_code version does not work. Who ya gonna call?
|
https://realpython.com/lessons/manipulating-data/
|
I was wanting to read in a file which has 4 values on each line:
title, author, genre, price.
I want to split each value which has a ',' as the delimiter. Then I want to save it to my List each line being an entry in the list. For example
title, author, genre, price
title2, author2, genre2, price2
List[0][1] = title
List[0][2] = author
List[0][3] = genre
List[0][4] = price
List[1][1] = title2
List[1][2] = author2
List[1][3] = genre2
List[1][4] = price2
def readFile(fileName):
    List = []
    f = open(fileName, 'r')
    line = f.readline()
    x = 0
    while len(line) != 0:
        for i in range(4):
            List[x][i] = line.split(',')
        x += 1
        line = f.readline()
    f.close()
    return List
This fails with: IndexError: list index out of range
Python has you covered here, just use the csv module:

import csv

def readFile(filename):
    with open(filename, 'rb') as f:
        reader = csv.reader(f)
        return list(reader)
Your code makes several classical errors:
- str.split() returns a list; you are trying to assign that list 4 times to indices of another list. Just use the list returned by str.split() directly.
- You are reading each line with the newline (\n) included; you probably want to strip that off first.
- Your List is empty, so indexing List[x][i] fails. Use list.append() instead to add elements.
- There is no need to test len(line) != 0; just if line: is enough because empty strings are considered 'false' in a truth test. See Truth Value Testing.
- There is no need to call file.readline() each time; just use a for line in f: loop and you'll get each line one by one, because file objects are iterable.
- If you open the file in a context manager (a with statement), Python will close the file for you.
So, without the csv module, you could write your code like this:

def readFile(fileName):
    rows = []
    with open(fileName, 'r') as f:
        for line in f:
            columns = line.strip().split(',')
            rows.append(columns)
    return rows
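To sanity-check that approach, you can write a small sample file (the file name and rows here are invented for the demo) and read it back:

```python
import os
import tempfile

def readFile(fileName):
    # Strip the trailing newline, then split each line on commas.
    rows = []
    with open(fileName, 'r') as f:
        for line in f:
            columns = line.strip().split(',')
            rows.append(columns)
    return rows

# Write a small sample file in a temporary directory, then parse it.
path = os.path.join(tempfile.mkdtemp(), 'books.txt')
with open(path, 'w') as f:
    f.write('title,author,genre,price\n')
    f.write('title2,author2,genre2,price2\n')

rows = readFile(path)
print(rows[0][0], rows[1][3])  # → title price2
```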
|
https://codedump.io/share/A6wfkEcwhpe/1/split-every-word-in-line-by-comma-then-store-entire-line-in-list
|
How to Define Constants in Java
Learn more about defining constants in Java!
There seems to be a lot of confusion around the topic of constants in Java. Some people make use of integers or Strings to define constants, while others make use of enums.
I’ve also come across constants defined in their own interface — where classes that make use of the constants have to implement the interface. This strategy is often referred to as the interface constant design pattern.
In this article, we’ll have a look at the two most common strategies for storing constants in Java: integers and enums.
First and foremost, whenever you decide to use constants, you should be pretty certain that the constants won’t change over time so you can avoid recompiling.
In this article, we’ll work with a very common candidate for constants – weekdays!
Let’s assume that we have a class representing an order in an online store, where we want to keep track of which day of the week the order occurred.
Our class looks like this:
public class Order {
    private [datatype] weekDay;

    public [datatype] getWeekDay() {
        return weekDay;
    }

    public void setWeekDay([datatype] weekDay) {
        this.weekDay = weekDay;
    }
}
Note that the class won’t compile for the moment – [datatype] is simply a placeholder for the type of constant we’ll use.
Defining Constants With Integers
One of the most common ways to define constants in Java is through integers, where the integer variables are static.
public static int MONDAY = 0;
public static int TUESDAY = 1;
public static int WEDNESDAY = 2;
public static int THURSDAY = 3;
public static int FRIDAY = 4;
public static int SATURDAY = 5;
public static int SUNDAY = 6;
The first question to ask when defining integer constants is where to place them. Do we place them directly in the
Order class? Or do we give them their own class?
Since days are pretty general, and not necessarily just connected to objects of type
Order, we’ll define them in their own class
WeekDay.
public class WeekDay {
    private WeekDay() {}

    public static int MONDAY = 0;
    public static int TUESDAY = 1;
    public static int WEDNESDAY = 2;
    public static int THURSDAY = 3;
    public static int FRIDAY = 4;
    public static int SATURDAY = 5;
    public static int SUNDAY = 6;
}
You probably noticed the private constructor – this is to prevent clients from instantiating the class. The class holds only static variables, which are not tied to an object, so there is no need to instantiate the class.
Now, whenever we need to set a particular day to an order, we do it like this:
Order order = new Order();
order.setWeekDay(WeekDay.MONDAY);
And when we want to check whether an order occurred on a Friday, we can simply call write:
if(order.getWeekDay() == WeekDay.FRIDAY)
So far, so good. There surely can’t be any problems with this design?
Well, let’s assume that you’re coming back to this code a year later – and you have to check whether an order occurred on a Monday.
Oh, and of course – you have completely forgotten about the
WeekDay class...
In this scenario, you might try something like this:
if(order.getWeekDay() == 1)
At that moment, having completely forgotten about the WeekDay class, this code makes perfect sense. Monday is the first day of the week, so weekday should be 1, right?
But no, it isn’t, because the static int variable Monday is defined as 0 in our
WeekDay class!
This is a great example of why you should consider avoiding the use of integer constants. They are error-prone, confusing, and hard to debug.
Defining Constants With Enums
One other alternative to defining constants in Java is by using enums.
When using enums, our constants class will look something like this:
public enum WeekDay {
    MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY
}
Notice the absence of the private constructor — there’s no need for the programmer (you!) to enforce that the class is non-instantiable since enums are non-instantiable by default!
The syntax for setting a
WeekDay to an order is exactly the same as with integer constants:
order.setWeekDay(WeekDay.MONDAY);
There is also no difference in how we check whether the order was processed on a Friday:
if(order.getWeekDay() == WeekDay.FRIDAY)
However, the key difference is that this is the only way you can set and compare values for the weekday variable in the
Order class.
Both
order.setWeekDay(1); and
if(order.getWeekDay() == 1) will make the compiler throw an error, since you are trying to use variables of type integer when they should be of type
WeekDay!
Think back to the scenario when you had completely forgotten about the
WeekDay class.
With enums, this is no longer an issue. If you try to use integers instead of members of the
WeekDay enum, the compiler will simply throw an error, telling you that you need to use the
WeekDay enum.
Said in other words, the only thing that will work for you to check whether an order occurred on a Friday is:
if(order.getWeekDay() == WeekDay.FRIDAY)
It doesn’t get any clearer than this!
You’re no longer forced to remember the constants class, and if any clients were to use your code, they don’t have to wonder whether Monday is actually represented by a 0 or a 1.
I hope this example demonstrated to you why you should consider using enums over integers when defining constants.
Enums will make your code less error-prone, easier to read, and more maintainable!
A quick tip: If you wish to expand your Java skill-set and become an advanced Java developer, I highly recommend you to get yourself a copy of the best-selling Effective Java, by Joshua Bloch!
Published at DZone with permission of Anders Engen Olsen . See the original article here.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/how-to-define-constants-in-java?fromrel=true
|
Now that we’ve covered what functions are and some of their basic capabilities, let’s take a closer look at why they’re useful.
Why use functions?
New programmers often ask, “can’t the code we’re putting inside the function just be put directly inside main?” In many cases (particularly for simple examples), it can. However, functions provide a number of benefits that make them extremely useful in non-trivial programs.
Although it doesn’t look like it, every time you use std::cin or std::cout to do input or output, you’re using a function provided by the standard library that meets all of the above criteria.
Effectively using functions
One of the biggest challenges new programmers encounter (besides learning the language) is learning when and how to use functions effectively.
We’ll investigate this topic in more detail in lesson 1.10a -- How to design your first programs.
#include <iostream>
using namespace std;

int main()
{
    for (int i = 1; i < 20; i++)
    {
        for (int s = 20; s > i; s--)
            cout << " ";
        for (int j = 1; j <= i; j++)
            cout << " $";
        for (int g = 1; g < i; g++)
            cout << "$";
        cout << endl;
    }
    return 0;
}
Hi Alex,
You are impressive. I wanted to learn C++ at the age of 12, and I am 12 now. This will make me a master of C++ programming.
Thanks Alex! Thanks very very very much!
Hey Kanishka! I'm 15 and I started learning at the age of 7; this tutorial is awesome for learning C++. If this is the first programming language you learn and you find it interesting, congratulations! Nowadays it's a useful tool for understanding how everything works. When I join a website and I see the design I think: "this is created using margin-left at a 15% or using a container with a col of 2 at both sides".
I just wanted to tell you good luck with your development adventure 🙂
Thanks Alex, this is a great explanation of functions. I really appreciate you explaining the "why" behind many aspects of C++.
tanx alex
I can’t believe how awesome this tutorial is!!! I wonder if Alex has one for GoLang because that’s next on my list, anyways, kudos!!!
So basically, instead of using namespaces you could also just write a function that uses std::cout, and call a function that's defined as printNumber(int x) for integers.
My question is would this also work if I want to print text this way? Defining my function as printText(string x) ???
Yes, if you use std::string. We cover the basics of std::string in a future lesson.
Hey Alex, I made this simple calculator program after last tutorial and I am wondering whether I should have split calculations and things up into functions, what do you think?
Yes, this program is pretty long for one function. It would definitely benefit from being split into smaller functions.
Learning the functionality of functions, in other words how functions function, was the main function of this page about functions. Function.
Hey Alex, I’m curious. Are there any guidelines to doing anything inside main? Like, is it better to do:
Or
Personally I like to keep my main() functions as simple as possible and delegate responsibilities to subfunctions, but that’s just a matter of style. When programs are this short, it doesn’t matter all that much.
hey alex,,,,hats off,,,a great tutorial,,
I would say this is easily one of the best tutorials so far. I've been through many tutorials that overlook certain points and don't go into detail about what certain things mean. For example, "using namespace std": many tutorials do not explain what this is and why it's needed or, in your case, not needed at all.
Kudos to you, and can’t wait to go through the rest of the pages.
every time you use std::cin or std::cout to do input or output, you’re using a function provided by the standard library that meets all of the above criteria.
This statement is not correct …
cin and cout are not functions; they are objects, and when we use them the overloaded operator function cout.operator<<(expr) will be called.
The statement is correct as written. As you note yourself, when we use cin or cout to do input and output (via operator<< or operator>>), an overloaded function is called. That function meets the criteria listed.
Alex,
Thanks for adding this section. However, it brings up a lot of questions. How do the pros store the functions they write? Do they store functions for, say, integers and floats in different places? You mentioned retesting; does this mean that they don't recompile the function again? Can you give us some idea of how to do all this and expand on some of the points you listed above? I would think a set of functions to add, subtract, multiply, and divide two variables could be written with the same code, with minor changes, for all of the int, long, long long, float, etc. types. How would the pros do that? You could write a book on the things you mentioned in this section alone.
Thanks
Generally, pros will start creating pairs of header (.h) and code (.cpp) files that contain one of two things:
1) A set of related functions -- For example, math.h and math.cpp would contain math-related functions (including versions for both int and double, if that’s relevant)
2) A new data type -- For example, if you’re doing geometry, you might need a way to represent a Cartesian point. You could create a point.h and point.cpp that contains the definition for this new type.
Once these files have been created and tested once, they can be copied and reused in other programs without having to retest them.
You’re correct that functions to do arithmetic on different types (e.g. int and double) are essentially identical -- and yet, C++ makes you declare them as separate functions. This is where template programming comes in. You can declare a template function, and C++ will instantiate different flavors of that function for each type you call it with. So you could write a single template for function add(), and C++ could create an int and a double version based on this. This can be useful with functions -- it’s even more useful with “container objects” like arrays that need to be able to hold different types. Since template programming is an advanced concept (and the syntax is headache inducing), we talk about it much later in this tutorial series.
Hey there Alex, just want to say I appreciate this site a lot so far. I like how much detail and background goes into each lesson. Much better than most prior tutorials I've read. In fact, I made the program from the last quiz very easily.
Thanks!
Hi. Thank you for this site, it is really helpful 🙂 .
I would like to point out that there is a typo at the Reusability paragraph: it should be: "Once a function is written, it can be called […]".
Fixed, thanks!
Hello Alex, I really like your tutorial so far. I find it the easiest to learn from all of which I tried in the internet. Please keep up the good work 🙂
Also I’ve got one question. You call a function which uses an argument and then returns a value that you will use later on. In the function though you change the argument that is passed to it. Is there a way to use that new value in main() (Basically I tried returning 2 values in one function xD of course it didn’t work, but I was wondering how I can use an argument from a function in main() )
Yes, in chapter 7 we talk about reference parameters, which allow the value of a parameter that is changed in a function to be used by the caller.
Here’s a sneak peak:
So basically you pass a pointer to the variable in the function 🙂 This got me thinking, if you can access directly the address of a variable is it possible to change the address of that variable? I tried this code, but it is giving me an error:
First it prints out the address of x and its value, then it is supposed to move it up in memory by 4 bytes (size of a 32 bit signed int). BUT line 9 (line where I try to change the address) gives an error, "l-value required as left operand of assignment". So I guess that means you cant change the address like that because it isn’t a valid way 🙁
edit: or i guess it could be that variable addresses are read-only, I guess that could be a valid explanation as well.
You can’t change the address of a variable.
However, you can do something similar to what you’re suggesting by using pointers. Have a read about pointers in chapter 6 -- particularly around pointer arithmetic.
Thanks so much for explaining a lot on this topic and making it easier for us (me) newcomers to understand
much appreciated
Can a function call itself? Does that make any sense?
Yes, it can. This is called a recursive function call. There are quite a few algorithms that are more easily written using recursive function calls.
I talk about recursive function calls in lesson 7.10 -- Recursion.
Thank you for putting this new page up. It has given me a better insight to understand the functionality of functions in C++.
And thank you for this great C++ tutorial website:)
You’re welcome. Was the article clear? Are there any points that can be clarified further?
The article is clear as it is. It clearly communicated the idea of why functions are beneficial in programming. At first, I just learned about functions thinking of it as a necessary requirement, but this article helped me understood its meaning in much deeper context.
|
http://www.learncpp.com/cpp-tutorial/1-4b-why-functions-are-useful-and-how-to-use-them-effectively/
|
Search found 1 match
Search found 1 match • Page 1 of 1
- Fri Nov 19, 2010 11:01 pm
- Forum: Volume 4 (400-499)
- Topic: 410 - Station Balance
- Replies: 41
- Views: 16466
I got WA
I test all the test cases I found here and all of them are right. it's my code, it's make me happy if you can find any problem #include<iostream> #include<fstream> #include<math.h> #include<string> #include<algorithm> #include <iomanip> using namespace std; int main() { int n=0; int m=0; int c=1; wh...
Search found 1 match • Page 1 of 1
|
https://onlinejudge.org/board/search.php?author_id=62636&sr=posts
|
I've been working with the pre-release version of VS2015 recently, which gave the option of an "ASP.NET 5 Class Library". This has since changed in RTM to simply "Class Library (Package)", with the description:
PREVIEW - A project template for creating a class library as a NuGet package that can target any platform
I recently created one of these new Class Library projects and added in into a solution which also included an ASP.NET 5 project. I used the Package manager console to add in references to Entity Framework 7 to both projects which worked fine, adding the correct text into the project.json files in both projects (the Reference section in VS also updated accordingly). But when trying to add
using Microsoft.Data.Entity; to my Class Library project I cannot reference it at all. It works fine in the ASP.NET project.
Intellisense gives me options for 'Microsoft.CSharp' and one other namespace, but not 'Data'. I've tried creating an entirely new solution from scratch but this still hasn't helped.
I also tried adding references to the dnx XUnit stuff to the project as per their website's guidelines but these did not work either.
EDIT: I think the problem lies somewhere with the
dotnet Target Framework Moniker (TFM), which looks to have been introduced in DNX SDK 1.0.0-beta5, as there are no issues when using dnx451 in 1.0.0-beta4
I've finally found a post on github from the author of xunit, Brad Wilson who states that currently the dotnet TFM doesn't work as you would expect and that the
dnx451 and
dnxcore50 are better TFM's to target.
I just went through the process of adding 2 projects. A class library and an mvc 6 project. I could duplicate your issue. To fix it I edited the class library project.json and changed the property under "frameworks" from dotnet to the 2 frameworks dnx451 and dnxcore50 as is found in the web project. Hope this helps.
|
https://entityframeworkcore.com/knowledge-base/32482933/nuget-package-referencing-issues-in-new-nuget-class-library-template-in-visual-studio-2015
|
Class and Object

Question 1: What is the difference between struct and class in C++?

Question 1 Explanation:
In C++, the only difference between a struct and a class is the default access: members of a struct are public by default, while members of a class are private by default.
Question 2: Predict the output of the following C++ program.

#include <iostream>
using namespace std;

class Empty {};

int main()
{
    cout << sizeof(Empty);
    return 0;
}

Question 2 Explanation:
The output is 1. The size of an empty class is not zero; it is one byte, so that two distinct objects of the class have distinct addresses.
Question 3: What happens when the following C++ program is compiled?

class Test {
    int x;
};

int main()
{
    Test t;
    cout << t.x;
    return 0;
}

Question 3 Explanation:
In C++, the default access is private. Since x is a private member of Test, it is compiler error to access it outside the class.
Question 4: Which of the following is true?
Question 5: Assume that an integer and a pointer each takes 4 bytes. Also, assume that there is no alignment in objects. Predict the output of the following program.

#include <iostream>
using namespace std;

class Test {
    static int x;
    int *ptr;
    int y;
};

int main()
{
    Test t;
    cout << sizeof(t) << " ";
    cout << sizeof(Test *);
}

Question 5 Explanation:
For a compiler where pointers take 4 bytes, the statement "sizeof(Test *)" returns 4 (size of the pointer ptr). The statement "sizeof(t)" returns 8. Since static is not associated with each object of the class, we get (8 not 12).
Question 6: Which of the following is true about the following program?

#include <iostream>

class Test {
public:
    int i;
    void get();
};

void Test::get()
{
    std::cout << "Enter the value of i: ";
    std::cin >> i;
}

Test t; // Global object

int main()
{
    Test t; // local object
    t.get();
    std::cout << "value of i in local t: " << t.i << '\n';
    ::t.get();
    std::cout << "value of i in global t: " << ::t.i << '\n';
    return 0;
}

(Contributed by Pravasi Meet)

Question 6 Explanation:
The above program compiles and runs fine. As with variables, it is possible to create two objects with the same name in different scopes.
Question 7 (UGC-NET CS 2017 Nov - II): A member function can always access the data in __________ (in C++).

Question 7 Explanation:
A member function can access its class's member variables, irrespective of the access specifier under which each member variable is declared. Hence, a member function can always access the data in the class of which it is a member. So, option (A) is correct.
Question 8 (UGC-NET CS 2017 Nov - II): Which of the following is not correct for a virtual function in C++?

Question 8 Explanation:
A virtual function can't be static in C++. So, option (B) is correct.
Question 9 (UGC-NET CS 2017 Nov - II): Which of the following is not correct (in C++)?
- Class templates and function templates are instantiated in the same way
- Class templates differ from function templates in the way they are initiated
- Class template is initiated by defining an object using the template argument
- Class templates are generally used for storage classes
Question 9 Explanation:
In C++, class templates and function templates are instantiated in the same way. Class templates are not used for storage classes, and a class template is not initiated by defining an object using the template argument. So statements (2), (3), and (4) are not correct in C++, and option (C) is correct.
Question 10 (UGC NET CS 2017 Jan - II): Which of the following cannot be passed to a function in C++?

Question 10 Explanation:
A header file cannot be passed to a function in C++, while an array, a constant, and a structure can be. So, option (D) is correct.
There are 17 questions to complete.
|
https://www.geeksforgeeks.org/c-plus-plus-gq/class-and-object-gq/
|
Re: Abo3 (can someone help me?)
From: c0n (defcon_at_titan.def-con.org)
Date: 05/26/03
- Previous message: Murat Balaban: "Re: Abo3 (can someone help me?)"
- In reply to: Discussion Lists: "Abo3 (can someone help me?)"
- Next in thread: sin: "Re: Abo3 (can someone help me?)"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] [ attachment ]
Date: Mon, 26 May 2003 00:26:37 -0500 (CDT) To: Discussion Lists <discussions@lagraphico.com>
#1: abo3 only allocates 256 bytes, so we must use a larger buffer to store
our overflow.
#2:
... taken from the above paper.);
#3:
> /* constructing the buffer */
> p = evil_buffer;
> memset(p, 'B', 256); // Some junk
> p += 256;
>
> *((void **)p) = (void *) (ret);
> p += 4;
> *p = '\0';
>
> /* Two arguments are passed to the vulnerable program */
> execle("/home/user/gera/abo3", "abo3", evil_buffer, "A",
> NULL,env);
p is set to the address of evil_buffer
p is filled with 256 B's
location in p is changed to after the 256 B's
return address is added to the end of p
location in p is changed to after the return address
p is ended with a NULL
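The same buffer layout can be sketched in Python; the return address 0xbffffffa below is just the illustrative starting value from the quoted exploit, not an address to rely on:

```python
import struct

# Hypothetical return address, taken from the quoted exploit purely
# for illustration.
ret = 0xbffffffa

# 256 filler bytes, then the 4-byte little-endian address appended,
# mirroring the memset/pointer-store steps described above.
evil_buffer = b"B" * 256 + struct.pack("<I", ret)

print(len(evil_buffer), evil_buffer[-4:].hex())  # → 260 faffffbf
```

The trailing '\0' in the C version is the string terminator; a Python bytes object carries its own length, so it isn't needed here.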
hope that helps...
c0n
On Sat, 24 May 2003, Discussion Lists wrote:
> Hi all,
> This list has become far more interesting with the challenges. Thanks
> to all for the participation. Recently, a user posted a particular
> site:
>
>
>
>
> Which has the following code:
>
>
> /* abo3.c *
> * specially crafted to feed your brain by gera@core-sdi.com */
>
> /* This'll prepare you for The Next Step */
>
> int main(int argv,char **argc) {
> extern system,puts;
> void (*fn)(char*)=(void(*)(char*))&system;
> char buf[256];
>
> fn=(void(*)(char*))&puts;
> strcpy(buf,argc[1]);
> fn(argc[2]);
> exit(1);
> }
>
> The issue here is that there is an exit(1) at the end of the code. So
> even if you were to overwrite the return address, it would not matter
> because there is no return (if I understand correctly).
>
> The solution, according to this place:
>
>
>
> is that we have to stick our shellcode in an environment variable, then
> overwrite the address of that variable into the address of the fn()
> function. So they lay out the following code to do it (questions
> in-line):
>
> /*
> ** exp3.c
> ** Coded by CoreSecurity - info@core-sec.com
> **/
>
> #include <string.h>
> #include <uninstd.h>
>
> #define BUFSIZE 261
>
> /* Why 261? The vulnerable program allocates 256 I thought. Where is
> that other 5 going to/for? */
>
> /* 24 bytes shellcode */
> char shellcode[]=
> /* 1 P h \ \ s h h \ b i */
> "\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69"
> /* n P 2 */
> "\x6e\x89\xe3\x50\x53\x89\xe1\x99\xb0\x0b\xcd\x80";
> /* so it is pushing /bin/sh backwards on the stack. Aleph1 talks about
> how to create this code so I won't ask about it*/
> int main(void) {
> char *env[3] = {shellcode, NULL};
> char evil_buffer[BUFFSIZE];
> char *p;
>
> /*Calculating address of shellcode */
> int ret = 0xbffffffa - strlen(shellcode) -
> strlen("/home/user/gera/abo3");
> /* That is what I don't get. First, what is the 0xbffffffa address? Is
> that where supposedly the
> ending address of the code when everything is pushed onto the stack? I
> believe strlen calculates the
> length of a string? If that is the case, why do they need to calculate
> shellcode, and the path. I
> also assume the path is case specific. In other words, if the binary
> has a different path on my system,
> I would use that instead. */
>
> /* constructing the buffer */
> p = evil_buffer;
> memset(p, 'B', 256); // Some junk
> p += 256;
>
> *((void **)p) = (void *) (ret);
> p += 4;
> *p = '\0';
>
> /* Two arguments are passed to the vulnerable program */
> execle("/home/user/gera/abo3", "abo3", evil_buffer, "A",
> NULL,env);
> __________________________________________
> I don't completely understand much of that last part either, but I have
> the K&R book, so I will drag it out and see what I can find out.
>
>
http://www.derkeiler.com/Mailing-Lists/securityfocus/vuln-dev/2003-05/0127.html
Web Features
Evennia is its own webserver and hosts a default website and browser webclient.
Web site
The Evennia website is a Django application that ties in with the MUD
database. Since the website shares this database you could, for example,
tell website visitors how many accounts are logged into the game at the
moment, how long the server has been up and any other database
information you may want. During development you can access the website
by pointing your browser to.
You may also want to set DEBUG = True in your settings file for debugging the website. You will then see proper tracebacks in the browser rather than just error codes. Note however that this leaks a lot of memory (it stores everything all the time) and is not to be used in production. It's recommended to only use DEBUG for active web development and to turn it off otherwise.
A Django (and thus Evennia) website basically consists of three parts: a view, an associated template and a urls.py file. Think of the
view as the Python back-end and the template as the HTML files you are
served, optionally filled with data from the back-end. The urls file is
a sort of mapping that tells Django that if a specific URL is given in
the browser, a particular view should be triggered. You are wise to
review the Django documentation for details on how to use these
components.
Evennia’s default website is located in evennia/web/website. In this
folder you’ll find the simple default view as well as subfolders
templates and
static. Static files are things like images, CSS
files and Javascript.
Customizing the Website
You customize your website from your game directory. In the folder
web you’ll find folders
static,
templates,
static_overrides and
templates_overrides. The first two of those
are populated automatically by Django and used to serve the website. You
should not edit anything in them - the change will be lost. To customize
the website you’ll need to copy the file you want to change from the
web/website/template/ or
web/website/static/ path to the corresponding place under one of the _overrides directories.
Example: To override or modify
evennia/web/website/template/website/index.html you need to
add/modify
mygame/web/template_overrides/website/index.html.
How to customize the website is best described in tutorial form. See the Web Tutorial for more information.
Overloading Django views
The Python backend for every HTML page is called a Django view. A view can do all sorts of things, but its main job is to prepare the variables/data that the page can display, like how your out-of-the-box website will display statistics about the number of users and database objects.
To re-point a given page to a
view.py of your own, you need to
modify
mygame/web/urls.py. A URL pattern is a regular
expression that you need to enter in the address field of your web
browser to get to the page in question. If you put your own URL pattern
before the default ones, your own view will be used instead. The file
urls.py even marks where you should put your change.
Here’s an example:
# mygame/web/urls.py
from django.conf.urls import url, include

# default patterns
from evennia.web.urls import urlpatterns

# our own view to use as a replacement
from web.myviews import myview

# custom patterns to add
patterns = [
    # overload the main page view
    url(r'^', myview, name='mycustomview'),
]

urlpatterns = patterns + urlpatterns
Django will always look for a list named
urlpatterns which consists
of the results of
url() calls. It will use the first match it
finds in this list. Above, we add a new URL redirect from the root of
the website. It will now use our own function
myview from a new module
mygame/web/myviews.py.
If our game is found on, the regular expression "^" means we just entered mygame.com in the address bar. If we had wanted to add a view for, the regular expression would have been ^/awesome.
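Since URL patterns are ordinary regular expressions matched in order, the first-match-wins behaviour described above can be sketched with Python's re module (the patterns and view names here are hypothetical, not Evennia's own):

```python
import re

# Hypothetical patterns, checked in order: first match wins, which is why a
# custom pattern placed before the defaults overrides them.
patterns = [
    (r'^awesome', 'awesome_view'),
    (r'^', 'mycustomview'),   # matches everything, including the root
]

def resolve(path):
    """Return the name of the first view whose pattern matches the path."""
    for pattern, view in patterns:
        if re.match(pattern, path):
            return view
    return None

print(resolve('awesome'))  # awesome_view
print(resolve(''))         # mycustomview (the root page)
```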
Look at evennia/web/website/views.py to see the inputs and outputs
you must have to define a view. Easiest may be to copy the default file
to
mygame/web to have something to modify and expand on.
Restart the server and reload the page in the browser - the website will
now use your custom view. If there are errors, consider turning on
settings.DEBUG to see the full tracebacks - in debug mode you will
also log all requests in
mygame/server/logs/http_requests.log.
Web client
Evennia comes with a MUD client accessible from a normal web browser.
During development you can try it at. The client consists of several
parts, all under evennia/web/webclient/:
- templates/webclient/webclient.html and templates/base.html are the very simplistic django html templates describing the webclient layout. The base.html acts as a header that sets up all the basic stuff the client needs, so one only needs to modify the much cleaner webclient.html.
- static/webclient/evennia.js is the main evennia javascript library. This handles all communication between Evennia and the client over websockets, and via AJAX/COMET if the browser can't handle websockets. It will make the Evennia object available to the javascript namespace, which offers methods for sending and receiving data to/from the server transparently. This is intended to be used also if swapping out the gui front end.
- static/webclient/js/webclient_gui.js is the default GUI of the webclient. This handles the input line and the send button, and sends text to the right DOM elements in the html file. It makes use of the evennia.js library for all in/out and implements a "telnet-like" interface.
- static/webclient/css/webclient.css is the CSS file for the client; it also defines things like how to display ANSI/Xterm256 colors etc.
- The server-side webclient protocols are found in evennia/server/portal/webclient.py and webclient_ajax.py for the two types of connections. You can't (and should not need to) modify these.
Customizing the web client
As was the case for the website, you override the webclient from your game directory. You need to add/modify a file in the matching directory location within one of the _overrides directories.
Example: To replace/modify
evennia/web/webclient/static/webclient/js/webclient_gui.js you need
to copy and modify the file at
mygame/web/static_overrides/webclient/js/webclient_gui.js.
See the Web Tutorial for more on how to customize the webclient.
The Django ‘Admin’ Page
Django comes with a built-in admin website. This is accessible by clicking the ‘admin’ button from your game website. The admin site allows you to see, edit and create objects in your database from a graphical interface.
The behavior of the default Evennia models is controlled by files
admin.py in the Evennia package. New database models you choose to
add yourself (such as in the Web Character View Tutorial) can/will also
have
admin.py files. New models are registered to the admin website
by a call of
admin.site.register(model class, admin class) inside an
admin.py file. It is an error to attempt to register a model that has
already been registered.
To overload Evennia’s admin files you don’t need to modify Evennia
itself. To customize you can call
admin.site.unregister(model class), then follow that with
admin.site.register in one of your own admin.py files in a new app
that you add.
More reading
Evennia relies on Django for its web features. For details on expanding
your web experience, the Django documentation or the Django Book
are the main resources to look into. In Django lingo, Evennia is a
django “project” that consists of Django “applications”. For the sake of
web implementation, the relevant django “applications” in default
Evennia are
web/website or
web/webclient.
http://evennia.readthedocs.io/en/latest/Web-Features.html
For object-oriented I'd have to say it's Smalltalk. All inheritance-based OO concepts can be implemented in terms of aggregation, and having run-time mutability is a very powerful advantage.
For imperative, it's a much more difficult choice. PERL is incredibly flexible in the same ways LISP and Scheme are, but I personally feel queasy when it comes to any programming language designed with a natural language mindset. Python would be a better choice, except that really it's object-oriented, and I have major issues with any language which is whitespace-sensitive. And really, anything you can do in PERL or Python you can do in C as well (it'll just be more difficult). So really, this choice boils down to preference.
Functional is easy... Scheme. LISP with a simplified syntax, just enough imperative constructs to make it workable, and none of the crappy wannabe-objects which have slowly sneaked their way into the current implementations of Common LISP. It's also nice to have the scoping rules cleaned up and disambiguated.
Logical is another no-brainer. Prolog. It's inherently parallelizable, has clear and simple resolution and unification rules (with the notable exception of the cut operator, which I still don't really understand), and aside from a few gotchas, it's quite robust for everything from databases to expert systems to problem solvers. And, if speed is what you want, Sicstus can compile, and it's not that hard to use Prolog to prototype at an algorithmic level and then convert it to something like C++ for production uses.
That said, I use C++ for almost everything, because of the set of requirements I have for any language: it must be object-oriented, relatively platform-independent (as long as that platform is UNIX, anyway), and fast. It'd be nice if C++ had a sane object model (not having a base object really cramps its style, and leads to nasty hacks such as templates), and I've still yet to see a reasonable justification to multiple inheritance and all the crap that adds to the language (particularly at the implementation level), and it'd be nice to have some of the nice features of Turbo Pascal/Delphi (interface being encapsulated in the compiled object file, each library having its own implicit namespace, having nested functions, etc.), but overall, C++ is the language which sucks least for my needs. Oh, and although there's no real runtime mutability, you can simulate aggregation through abstract virtual methods, so at least it's got that going for it...
To name a language in common modern usage like everyone else has - I'd have to say Ruby. It combines a clean, elegant syntax with a REAL object system a-la smalltalk and this concept of blocks from CLU resulting in a tool that purrs like a kitten but roars like a lion.
One readily demonstrable aspect of Ruby's flexibility is its dynamism. Any class can be modified/augmented at any time unless it has been explicitly locked (or frozen in Ruby parlance). So if for instance the suits in charge of the statistics app you're developing decide on a whim that "XML is GOOD" and move to web-based output, you can simply augment Object (the common ancestor of every Ruby object - Java uses the same convention) like so:
class Object
def to_s
# Code to spew XML tags for each property
end
end
And that's it, now whenever any of your objects are asked to return a string (e.g. they're being printed) they'll do the right thing.
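As a concrete, runnable variant of the same idea (reopening String rather than Object, with a hypothetical to_xml method, just to keep the output tidy):

```ruby
# Minimal sketch of Ruby's open classes: any class, including core ones,
# can be reopened and augmented at runtime.
class String
  def to_xml
    "<value>#{self}</value>"   # hypothetical XML-ish wrapper
  end
end

puts "42".to_xml   # <value>42</value>
```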
https://everything2.com/title/World%2527s+most+flexible+programming+language
Given two lists [T], the following algorithm finds the intersection (values common to both lists) in O(min(m, n)) time and O(min(m, n)) space, provided that both lists are sorted in ascending order:
def intersection(A, B):
    a = len(A)
    s = 0
    R = []
    for n in B:
        for i in range(s, a):
            c = A[i]
            if n < c:
                break
            s += 1
            if n == c:
                R.append(n)
                break
        if s == a:
            break
    return R
Another algorithm for when intersections should be found in place. This modifies A. Most of the ugliness from the code stems from the fact that it is hard to iterate backwards in Python. Again all conditions from the first algorithm need to be satisfied. However:
def intersection(A, B):
    b = len(B) - 1
    a = len(A) - 1
    # Find intersections, and set a to be the index
    # where the algorithm should search from
    for i in range(b, -1, -1):
        n = B[i]
        for j in range(a, -1, -1):
            c = A[j]
            if n > c:
                break
            a -= 1
            if n == c:
                break
            del A[j]
    # everything below a is not included in the other
    # set, so delete
    for j in range(a, -1, -1):
        del A[j]
    return A
Both algorithms exploit the fact that the lists are already sorted, and break from the inner loop early to avoid a full scan whenever it makes sense, so as to avoid O(mn). It is possible to use a binary search instead of the linear scan to determine the index.
However, that would give a worse time complexity should the lists have very tightly clustered values: linear search is, at worst, O(n) but Ω(1). However, a binary search is almost always O(log n). Case in point:
Easy to find 1 in [1,2,3,4,5,6,7,8] using linear search. However a binary search needs ~3 steps.
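The binary-search alternative mentioned above could be sketched with Python's bisect module (this is my sketch, not the author's code; both lists are again assumed sorted):

```python
from bisect import bisect_left

def intersection_bisect(A, B):
    """Intersect two sorted lists, binary-searching A for each item of B."""
    result = []
    lo = 0
    for n in B:
        i = bisect_left(A, n, lo)
        if i == len(A):
            break            # everything left in B is larger than all of A
        if A[i] == n:
            result.append(n)
        lo = i               # A is sorted, so never look left of i again
    return result

print(intersection_bisect([1, 2, 3, 4, 5, 6, 7, 8], [2, 4, 9]))  # [2, 4]
```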
http://eugene-eeo.github.io/blog/intersection-algorithms.html
20 January 2010 12:04 [Source: ICIS news]
MUMBAI (ICIS news)--Reliance Industries will have to re-evaluate its strategy for acquiring LyondellBasell following a recent ruling by a US bankruptcy court, a source familiar with the developments said on Wednesday.
“The unsecured creditors’ bid [to include the Reliance offer] has been rejected. That route is closed. And the exclusivity period [for the management plan] is until 15 April. So Reliance will have to rework its strategy,” said the source.
The court on 19 January granted Lyondell Chemical, the
The court also rejected a request by unsecured creditors to expand the scope of an examiner to review how Lyondell was evaluating Reliance’s bid.
“Reliance can still make a binding bid to the LyondellBasell management and then they can take it to court; it can also increase its offer. Reliance will have to evaluate [options],” the source said.
Reliance declined to comment for this story.
Reliance first made a preliminary, non-binding offer for LyondellBasell in November 2009.
Earlier this month, Reliance was reported to have raised its offer by $1.5bn (€1.05bn) to $13.5bn.
For more on LyondellBasell and Reliance,
http://www.icis.com/Articles/2010/01/20/9327228/ruling-to-force-reliance-to-rework-lyondell-strategy-source.html
What is client data?
getRequestURL(): When the user clicks the submit button, we think that only the data entered by the user, like user name and password, is sent to the server. Of course, right. But along with it, a lot of client data goes to the server. The client data includes the protocol used by the client, the client IP address, and also the name and version of the browser (headers), etc.
Is there anyway to retrieve client data from the Servlet?
Yes, HttpServletRequest interface comes with many methods to retrieve the client data.
This program uses getRequestURL() to retrieve the URL used by the client to call the servlet on the server.
Following is the method signature as defined in HttpServletRequest interface.
Example on getRequestURL()
Client Program: ClientData.html
web.xml entry for ClientInformation servlet
<servlet>
    <servlet-name>efgh</servlet-name>
    <servlet-class>ClientInformation</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>efgh</servlet-name>
    <url-pattern>/jasmine</url-pattern>
</servlet-mapping>
Server Program: ClientInformation.java
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class ClientInformation extends HttpServlet {
    public void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        StringBuffer sb = req.getRequestURL();
        out.println("req.getRequestURL() : " + sb);
        out.close();
    }
}
Output screen of ClientData.html with text field filled up.
The output screen when submit button is clicked.
Observe, the getRequestURL() prints the complete URL (written in ACTION attribute of FORM tag) used by the client to call the servlet. This is where getRequestURL() differs from getRequestURI().
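To make the distinction concrete, here is a plain-Java sketch (no servlet container needed; the host, port and context path are hypothetical) of how getRequestURL() reconstructs the full URL from the same parts of which getRequestURI() returns only the path portion:

```java
public class Main {
    public static void main(String[] args) {
        // Pieces of a hypothetical request
        String scheme = "http";
        String host = "localhost";
        int port = 8080;
        String uri = "/myapp/jasmine";   // roughly what getRequestURI() returns

        // getRequestURL() reconstructs the full URL, without the query string
        StringBuffer url = new StringBuffer();
        url.append(scheme).append("://").append(host);
        if (port != 80) {
            url.append(':').append(port);
        }
        url.append(uri);
        System.out.println(url);   // http://localhost:8080/myapp/jasmine
    }
}
```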
https://way2java.com/servlets/request-getrequesturl-method-example-in-servlets/
I'm setting up an application that has an object called Person. Every Person has an Address. But it's more complicated than that. For various legal and accounting reasons, we need to keep track of every Address that Person has ever had. So I set it up like this:
@Entity
public class Person {
    private List<Address> addresses;

    public List<Address> getAddresses() { ... }
    public void setAddresses(List<Address> list) { ... }

    // and to make it easy to get the current address:
    public void setCurrentAddress(Address address) {
        if (addresses == null)
            addresses = new LinkedList<Address>();
        addresses.add(address);
    }

    public Address getAddress() {
        // return the last element in the addresses array
    }
}
The other option would be to have Address be its own object, and then use the session bean to install the Address into the Person just before it persists the whole thing. I'm going to see if that works, unless someone has other ideas.
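The "current address is the last list element" convention can be sketched in plain Java (a simplified stand-in with String addresses and no JPA annotations, just to illustrate the accessor pair):

```java
import java.util.LinkedList;
import java.util.List;

public class Main {
    // Simplified stand-in for the Person entity
    static class Person {
        private List<String> addresses = new LinkedList<>();

        void setCurrentAddress(String address) {
            addresses.add(address);   // history is preserved, never overwritten
        }

        String getAddress() {
            // the current address is the most recently added one
            return addresses.isEmpty() ? null
                                       : addresses.get(addresses.size() - 1);
        }
    }

    public static void main(String[] args) {
        Person p = new Person();
        p.setCurrentAddress("1 Old St");
        p.setCurrentAddress("2 New Ave");
        System.out.println(p.getAddress());   // 2 New Ave
    }
}
```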
This documentation certainly will help you:
https://developer.jboss.org/message/456297
Apparently this bug has been around for ages! What the hell? ing-linebreak-textabove-break-doesnt-justify.html
I'm using Illustrator CS 5.1, but they mention CS2 ! Are you fricken kidding me??
Illie is just doing what she is supposed to do. It is your text that is the problem.
Why have you got all those returns at line ends? They shouldn't be there.
Delete them and things will behave as you expect them to.
The text was just an example (lorem ipsum dolor...). The actual text uses a lot of hyphens to split long words and I can't use Illustrator's built-in dictionary to hyphenate them for me. These hyphens have to be EXACTLY in the right spots, hence the need for manual break lines (Shift+Enter).
You think this behaviour is normal? Because even though these are just two paragraphs, it behaves like every line is an individual paragraph. The icons are very misleading. They act exactly like the first 3.
Apparently this bug has survived across versions for YEARS.
Yup old bug. I've reported it a few times. AI just will not honor full justification if there's a soft return.
I'm with Scott and the OP can't get anyone on the team to respond to this issue.
I don't know how many times I reported it, but ever since I can remember it has been there, except maybe in v.8, but I can be wrong about that as well.
steve fairbairn wrote:
Illie is just doing what she is supposed to do. It is your text that is the problem.
Why have you got all those returns at line ends? They shouldn't be there.
Delete them and things will behave as you expect them to.
The Op is trying to set the text to a preferred way the lines should break to the next line. there is no reason that the soft returns line breaks should not be honored.
That ios what line breaks are for to be able to choose the way you want the text to flow.
D, others,
There may be a bug in the CS versions (there seems to be no soft line break up to 10), but it should be possible to get Illy to behave if you set the Word spacing in the Paragraph flyout and the Hyphenation options for the troublesome words.
You are correct Jacob soft returns were Introduced in I believe CS 2 and the justification and hyphenation should not be part of the equation if you do not desire hyphenation for instance.
It is an old bug since the inception of the soft return.
Perhaps it is fixed in CS 6 we will find out soon I already have my license on order.
But what happens if you just use the Word spacing in the Paragraph flyout, setting it to 75%, 50%, or whatever it takes?
That in itself ought to tame the 4th justified sample in the OP.
It's a problem Jacob
There are many instances where adjusting spacing will alter lines which were previously fine. Basically it's a catch 22.... you get bad lines... fix spacing to correct the bad lines... you get other bad lines. What is needed is the ability to add a soft return (a la Indesign) and maintain the justification.
The current workaround is to add a hard return and then adjust any paragraph spacing and/or line spacing for the line above the hard return. It's WAY more trouble than it should be.
I see, Scott and Wade. I may be too inclined to use hyphens to (have come across it often enough to) remember a really bad case.
there is no reason that the soft returns line breaks should not be honored.
Soft returns only work when the lines with hyphenated words exceed the available with of the text box.
In the OP's example the boxes are too wide for the hyphenation to function.
Take two minutes and go to InDesign if you want to understand how Spider wants it to work, Steve.
AI does not do it.
Not a bug by my definition but a poor feature or I won't be surprised if it is deliberately limited because they use it in the same way in Photoshop. If you copy text with non-paragraph breaks (soft returns) from Illustrator and paste it in inDesign the hidden characters show that Illustrator uses another kind of character that has nothing to do with soft return.
Trust Scott and me on this, emil emil: this is an acknowledged bug, it is not supposed to work this way.
We know this as a fact: it is not supposed to work this way, it is supposed to work as the OP expects it to work. It is a shortcoming of the type engine but also because it is not implemented properly.
Doug Katz pointed out that there were no soft returns in Illustrator and so I made the feature request, and when soft returns were supported he immediately pointed out the bug, as I recall, right here on the forum. I filed the bug immediately after it was acknowledged, after several other users confirmed it, and it was never corrected.
But several other things are
If it is acknowledged by Adobe as a bug, then it is a bug. Did Adobe respond to someone with such confirmation?
What makes me think that it may not be a bug is the fact that Photoshop also has the same bug - in its paragraph panel it has the same text justifying options and lines with soft returns are treated the same. I'm also with the OP on this expecting the normal behavior to be the same as inDesign but I've seen crippled features made by design. I'll be happy to be wrong because if it is officially acknowledged as a bug then there is a chance they may eventually fix it.
Take two minutes and go to InDesign if you want to understand how Spider wants it to work
I know all about that. InDesign has the infuriating habit of disarranging text that one has maybe spent a long time arranging.
If you insert an optional hyphen at some point, the lines previous to it it may suddenly be completely messed up although there was nothing wrong with them before.
It sometimes takes some considerable effort to hyphenate a paragraph satisfactorily and InDesign's automation of the hyphenation process frequently works against you.
If you take pride in your work a more manual approach is often more satisfactory.
You never seem to read this correctly. I clearly wrote that they have acknowledged it; do you really think I am making this up?
Trust me, I had communications with Adobe about this, in what form I cannot tell you, but this is a bug. Like it or not, and you may think what you wish, it will not change anything, and unless they have addressed this issue in CS 6 then it is a bug that is still there.
It's a bug!
There are beta testers that come to this forum regularly; if it were not a bug, trust me, they would have jumped in by now.
OK, my bad then, I missed to read that or didn't understand it is officially acknowledged by Adobe. I guess they are using the same code in Photoshop too - may be the same team is writing that part for multiple programs.
[scott w] wrote:
Note. This is not fixed in CS6.
But several other things are
Unbelievable... Come on!
Emil, I knew InDesing uses the same type engine, but Photoshop too?? And these are Adobe's flagships?
InDesign does not use the same type engine as far as I know, and I think that the Photoshop and ID type engines are more related to each other than to Illustrator's type engine.
Which is one of the problems.
But I am pretty certain, though only guessing, that this will all come together in CS 7. And perhaps this bug might be fixed in a point release.
It would make Illustrator so much more appealing.
I know at AI10 the type engine was taken from Indesign. I don't know how similar they are anymore.
One thing that has been fixed on the Mac is the ability to use the arrow keys to walk through the font list. Finally Mac users can use that feature in CS6.
As I recall it, AI didn't get the ID text engine code, what it got was most of the features of the ID engine coded into the AI engine, the main feature being every-line composition.
Yours
Vern
Orly? 7d0100196cbc5f-63a4a.html
"Illustrator uses the same composition methods for line and word breaks that is used in Adobe InDesign. For more information on using these features, see web Help."
They wrote it, but that does not mean they implemented it correctly.
Does not matter this is a bug in AI.
Same methods doesn't mean the same code. More than one way to skin a cat.
Yours
Vern
Yeah, I thought you'd say something like that. The same methods but also the same bug? Without the same code? What are the chances of that?
This problem doesn't exist in ID, so yes different code. Even aside from this problem, identical text doesn't justify the same in AI vs ID. Might not be noticable in a single paragraph, but set a couple of thousand words in both programs and compare and you'll find plenty of different line breaks.
Yours
Vern
http://forums.adobe.com/message/4363223
readline integration for IPython 5.4+ and 6.0+
Project description
rlipython
Up until version 4.2, command-line IPython had a readline frontend, which was replaced by prompt_toolkit in IPython 5. rlipython brings that classic readline functionality to IPython 5.4+ and 6.0+.
See for information.
Try it out
You can try out rlipython like this:
ipython --TerminalIPythonApp.interactive_shell_class=rlipython.TerminalInteractiveShell
Do I have to do that every time?
No. To have rlipython enabled automatically, do this:
import rlipython; rlipython.install()
This will enable rlipython for the default IPython profile if you run it using plain python or the active profile if you run it from ipython.
After running rlipython.install(), you can go back to starting IPython just by using ipython without the extra configuration flag.
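Under the hood (an assumption based on the configuration flag shown above, not verified against rlipython's source), install() amounts to setting this option in your IPython profile's configuration file, which you could also do by hand:

```python
# ~/.ipython/profile_default/ipython_config.py
# The standard IPython translation of the command-line flag
# --TerminalIPythonApp.interactive_shell_class=... into profile config:
c = get_config()
c.TerminalIPythonApp.interactive_shell_class = 'rlipython.TerminalInteractiveShell'
```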
Removal
import rlipython; rlipython.uninstall()
Python 2 or Python 3
rlipython will work in both Python 2 and Python 3. However, as of May 15th, 2017, IPython 6.0 is the only released version of IPython which supports a configurable interactive_shell_class, but IPython 6.0 only works in Python 3. So if you want to use rlipython in Python 2, you will have to install the IPython 5.x branch from git, or wait for IPython 5.4 release.
License
This code was extracted from IPython 5.x-dev, so it is under IPython’s LICENSE.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/rlipython/
Hello,
I have a .cpp file that uses IER. When trying to compile it, it gives:
ERROR! at EOF: [E0300]1 Assembly Error, No Assembly Warnings The following symbols are undefined: _IERErrors in Source - Assembler Aborted
According to the "TMS320C28x Assembly Language Tools v6.0", section 3.9.5, CPU control registers are predefined, especially IER.
What's going on?
Many thanks for your help,
Fabrice
In C/C++ code, you have to explicitly declare the control registers like this ...
extern cregister volatile unsigned int IFR;
extern cregister volatile unsigned int IER;
In reply to George Mock:
Dear George,
These are already defined in "DSP281x_Device.h" as far as I can see, and I include "DSP281x_Device.h" in my source file.
I however found out that there is a problem only when I use "IER" directly as an argument of a function call. If I don't use it as an argument, it compiles properly?!?!?!
Strange...
Thank you anyway,
In reply to Investigator:
I cannot reproduce this problem. Please double-check that the cregister declaration really is present when you compile your file by adding the --preproc_only command-line argument and inspecting the resulting .pp file. If it is present, and you still get that error message, please post a compilable test case which reproduces the problem.
In reply to Archaeologist:
Dear Archeologist,
To reproduce the problem, create a new project in CCS4 with a target of TMS320F2812. I use CGT 6.0.1.
Then add a "main.cpp" file and populate it thus:
#include "DSP281x_Device.h"void fn(volatile Uint16& reg){ reg = 0;}int main(){ fn(IER);}
Then try to build, and you should get the error.
Best regards,
The issue here is that you are attempting to create a C++ reference to IER, which is not supported by the TI compiler. Taking the address of a cregister is also not supported. The parser ought to have rejected this with an error, and the fact that it did not is a bug (now SDSCM00043533).
Here's a cutdown test case:
extern cregister volatile unsigned int IER;
void fn(volatile unsigned & reg);
int main() { fn(IER); }
You'll need to write this code like this:
void clear_IER(void) { IER = 0; }
http://e2e.ti.com/support/development_tools/compiler/f/343/p/172440/630200
This is the tests I have extracted from the given samples.
def test_provided_1(self):
    self.assertEqual('Low', solution('Heelo Codevval | Hello Codeeval'))

def test_provided_2(self):
    self.assertEqual('Critical', solution('hELLO cODEEVAL | Hello Codeeval'))

def test_provided_3(self):
    self.assertEqual('Done', solution('Hello Codeeval | Hello Codeeval'))

Notice that the two versions have the same length, and that the error checking is case sensitive.
After splitting the input line on ' | ' I simply count the bugs comparing the elements in the same position for the two components in the list:
bugs = 0
for i in range(len(data[0])):
    if data[0][i] != data[1][i]:
        bugs += 1

Then it is just a matter of switching on the bugs and returning an appropriate message. However, for some obscure reason, Python does not have a switch statement. So the best I could think of was an if/elif/else structure:
if bugs == 0:
    return 'Done'
elif bugs < 3:
    return 'Low'
elif bugs < 5:
    return 'Medium'
elif bugs < 7:
    return 'High'
else:
    return 'Critical'

That's it. Submitted to CodeEval, then pushed to GitHub, both test case and python script.
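Assembled from the fragments above, the whole function reads:

```python
def solution(line):
    """Count case-sensitive, position-wise differences between the two
    versions and map the bug count to a severity label."""
    data = line.split(' | ')
    bugs = 0
    for i in range(len(data[0])):
        if data[0][i] != data[1][i]:
            bugs += 1
    if bugs == 0:
        return 'Done'
    elif bugs < 3:
        return 'Low'
    elif bugs < 5:
        return 'Medium'
    elif bugs < 7:
        return 'High'
    else:
        return 'Critical'

print(solution('Heelo Codevval | Hello Codeeval'))  # Low
```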
http://thisthread.blogspot.com/2017/01/codeeval-testing.html