How would one go about editing file summary attributes? i.e. Title, Subject, Author, Comments, etc... Is there a class or standard function for accessing them?

Oh, there's likely to be something Windows specific. Did you start with say ?

If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support as the first necessary step to a free Europe.

yes i did!! but I am looking for something not just window specific thanks for the help any way.. but bye.

I previously posted a sample that does this here (towards the bottom under DocProps). However, it is slightly LCC-WIN specific so I made a few changes to make it compatible with MSVC. It should also work with Dev-C++, but my (fairly old) version did not have adequately up-to-date header files. The code is C++ (not C). You should still check out the linked sample because there are some notes in the readme file. Obviously, this is entirely Windows specific (you did post on the Windows board). I'm not aware of nix having a similar document properties scheme.

Code:
/* Code by TMouse. Also see: */

#include <objbase.h>
#include <stdio.h>

#if defined(_MSC_VER)
#pragma comment(lib, "ole32.lib")
#endif

// ================================================================================

HRESULT SetDocProperties(LPCWSTR szFile, LONG cProps,
                         PROPSPEC* pPropSpecs, PROPVARIANT* pPropVars)
{
	HRESULT hr;
	IPropertySetStorage* pStg = NULL;
	IPropertyStorage* pPropStg = NULL;

	// Open file. We could use StgCreateStorageEx() if we wanted to create the file.
	hr = StgOpenStorageEx(szFile, STGM_READWRITE | STGM_SHARE_EXCLUSIVE, STGFMT_ANY,
	                      0, NULL, NULL, IID_IPropertySetStorage, (void**) &pStg);
	if (SUCCEEDED(hr))
	{
		// Now open the standard document properties...
		hr = pStg->Open(FMTID_SummaryInformation,
		                STGM_READWRITE | STGM_SHARE_EXCLUSIVE, &pPropStg);

		// If the property set doesn't exist, we create it...
		if (hr == STG_E_FILENOTFOUND)
		{
			hr = pStg->Create(FMTID_SummaryInformation, &FMTID_SummaryInformation,
			                  PROPSETFLAG_DEFAULT,
			                  STGM_READWRITE | STGM_SHARE_EXCLUSIVE | STGM_CREATE,
			                  &pPropStg);
		}

		if (SUCCEEDED(hr))
		{
			// Write properties...
			hr = pPropStg->WriteMultiple(cProps, pPropSpecs, pPropVars, 100);

			// Commit changes to file...
			pPropStg->Commit(STGC_DEFAULT);
		}
	}

	if (pStg) pStg->Release();
	if (pPropStg) pPropStg->Release();

	return hr;
}

// ================================================================================

HRESULT SetDocPropertyString(LPCWSTR szFile, PROPID propid, LPCWSTR szValue)
{
	PROPSPEC PropSpec = { PRSPEC_PROPID, propid };
	PROPVARIANT PropVariant = { 0 };

	PropVariant.vt = VT_LPWSTR;
	PropVariant.pwszVal = (LPWSTR) szValue;

	return SetDocProperties(szFile, 1, &PropSpec, &PropVariant);
}

// ================================================================================

int main(void)
{
	HANDLE hFile;
	HRESULT hr;
	PROPVARIANT PropVariants[3];
	PROPSPEC PropSpecs[3];

	CoInitialize(NULL);

	// Create file to play with...
	hFile = CreateFile("TEST.TXT", GENERIC_WRITE, FILE_SHARE_READ, NULL,
	                   OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
	CloseHandle(hFile);

	// Set a single property...
	hr = SetDocPropertyString(L"test.txt", PIDSI_COMMENTS, L"These are some comments.");
	if (FAILED(hr)) printf("Failed to set property on test.txt. Is this drive NTFS?\n");

	// Now set multiple properties at once...
	PropSpecs[0].ulKind = PRSPEC_PROPID;
	PropSpecs[0].propid = PIDSI_TITLE;
	PropVariants[0].vt = VT_LPWSTR;
	PropVariants[0].pwszVal = L"Document Summaries Sample";

	PropSpecs[1].ulKind = PRSPEC_PROPID;
	PropSpecs[1].propid = PIDSI_AUTHOR;
	PropVariants[1].vt = VT_LPSTR;
	PropVariants[1].pszVal = "TMouse";

	PropSpecs[2].ulKind = PRSPEC_PROPID;
	PropSpecs[2].propid = PIDSI_KEYWORDS;
	PropVariants[2].vt = VT_LPWSTR;
	PropVariants[2].pwszVal = L"Title, Subject, Author, Comments, Document Properties, COM, OLE";

	hr = SetDocProperties(L"test.txt", 3, PropSpecs, PropVariants);
	if (FAILED(hr)) printf("Failed to set properties on test.txt.\n");

	// Set a property on a word doc (this file must already exist)...
	hr = SetDocPropertyString(L"test.doc", PIDSI_SUBJECT, L"The wonders of...");
	if (FAILED(hr)) printf("Failed to set property on test.doc. Does it exist?\n");

	CoUninitialize();

	printf("Press ENTER to continue...\n");
	getchar();

	return 0;
}

Last edited by anonytmouse; 06-24-2005 at 11:09 AM. Reason: Made clear that code is C++ and not C.

What other OS did you have in mind?

thanks anonytmouse, You were of great help thanks!
http://cboard.cprogramming.com/windows-programming/66913-file-summary-attributes.html
va_start, va_arg, va_copy, va_end — variable argument lists

#include <stdarg.h>

void va_start(va_list ap, last);
type va_arg(va_list ap, type);
void va_copy(va_list dst, va_list src);
void va_end(va_list ap);

The va_arg() macro expands to the next argument in the call, which must have a type compatible with the specified type. The parameter type is a type name specified so that the type of a pointer to an object that has the specified type can be obtained simply by adding a ‘*’ to type. If there is no next argument, or if type is not compatible with the type of the actual next argument (as promoted according to the default argument promotions, see below), random errors will occur. If the type in question is one that would normally be promoted, the promoted type should be used as the argument to va_arg(). Under the default argument promotions, char and short arguments are promoted to int, and float arguments are promoted to double.

The variable argument list must have been initialized by va_start() or va_copy(). The first use of the va_arg() macro after that of the va_start() macro returns the argument after last. Successive invocations return the values of the remaining arguments. The va_start(), va_copy() and va_end() macros return no value.

These macros replace the older, non-standard interface declared in <varargs.h>. The va_start(), va_arg() and va_end() macros conform to ISO/IEC 9899:1999 (“ISO C99”) and were introduced in ANSI X3.159-1989 (“ANSI C89”). The va_copy() macro was introduced in ISO/IEC 9899:1999 (“ISO C99”).

The stdarg macros do not permit programmers to code a function with no fixed arguments. This problem generates work mainly when converting varargs code to stdarg code, but it also creates difficulties for variadic functions that wish to pass all of their arguments on to a function that takes an argument of type va_list, such as vfprintf(3).
https://man.openbsd.org/OpenBSD-6.1/stdarg.3
W3C Recommends XSL

An Anonymous Coward writes: "The W3C upgraded XSL 1.0 to the status of a recommendation today, as they reported in a press release." From that release: "XSLT 1.0 makes it possible to significantly change the original structure of an XML document (automatic generation of tables of contents, cross-references, indexes, etc.), while XSL 1.0 makes complex document formatting possible through the use of formatting objects and properties."

AC = Not Karmawhore (Score:3, Informative)

XSL has two parts, XSL:FO and XSLT. XSL:FO is an XML format for the printed page. XSLT is a language for transforming one XML format into another. XSLT is an incredibly useful language to learn. Imagine being able to take a Docbook [docbook.org] file and spit out XHTML or XSL:FO > PDF, or... yes, even plain text. For an idea of how easy it is, I wrote a Docbook to HTML converter for 100 docbook tags in four hours, and I hadn't touched XSLT before. XSLT is based around rewriting a data tree. You can step around the tree like a file system: in a hardcoded way like //html/body/h1, or relatively like ../body. The language has loops and built-in functions for analysing the tree. For example, how many times does a paragraph (p) occur in this HTML document when its parent is a table cell (td)? count(td/p) The downside is the bloated syntax. You know those horror stories about an XML programming language that looked like <xml:if(blah= blah)> do this </xml:if> ? XSL is that. But if you can get past that you'll find one of the most wonderful things for mangling XML into whatever format you want.

I'm glad you found it simple, nobody else does. (Score:1)

XSLT is not based on rewriting the XML tree; the correct way to view the process is as creating an output tree from an input tree. Rewriting would imply that elements that were not specifically analyzed (however you define it) would still be in the output tree.
Also, you cannot reference the structure of the newly constructed output tree, only the input tree. Using XML was supposed to free you of the burden of doing much of the parsing, i.e., have a consistent format for the data. However, in XSLT you still have a very large parsing responsibility. They chose XML to represent the top-level transformation structures, but almost every element and attribute that has content has its own mini-language format. XML is probably too syntax heavy for a pure XML-only syntax. -j

XSL is truly a mistake. Beware. (Score:2)

Applying this technology to page generation and tree transformation is not fantastic. This is what you get when you assemble the same gang of professional committee sitters that tanked SGML, and give them a blank slate. Want to transform documents and render content? Try DOM and CSS. These are much simpler, more flexible, better understood technologies. CSS is actually supported by your browser, and DOM is actually supported by real programming languages.

Re:XSL is truly a mistake. Beware. (Score:3, Informative)

The thing that people are forgetting is that XSLT has been a W3C Rec for a long while. They are just recommending XSL now. XSLT has many uses that just the DOM doesn't have. With XSLT, you can specify a smaller number of canned transforms to take a completely abstract set of XML tags to a viewable form in your browser. An example of this would be an XML document to describe a program:

<PROGRAM> <NAME>Blah</NAME> <PLATFORM>Linux 3.0</PLATFORM> <DESCRIPTION>This program sucks</DESCRIPTION> </PROGRAM>

Now, tell me, what would a CSS style sheet to properly render this data look like? Sure, you can make the NAME, PLATFORM, and DESCRIPTION tags paragraph level and make them different colors, but that's not helping.
With XSLT tree transforms, you can provide a relatively simple scheme for rendering things more like:

Name: Blah
Platform: Linux 3.0
Description: This program sucks

Now, you CAN do that in DOM, but then you will have to provide source code, most likely in JavaScript, to transform the tree. And then the users who surf with JavaScript turned off (because of all the OTHER things it can do, like popping up windows at annoying moments) won't see anything at all. XSLT just transforms XML documents, nothing more. Once we have good support for XSLT in web browsers, I suspect it will become a very useful tool. Now, as for XSL, that's probably a good thing. Sure, there'll be a delay before people actually can handle the formatting objects. But XSL has a more complete formatting model that's useful for more than just webpages. I'm less familiar with it. I suspect they could have added similar functionality to CSS, but this way, your entire formatting process is usable just through transforming an XML tree. This way, you don't need to write all of the CSS parsing code, you just use your DOM. This also means you can embed XSL/XSLT code in a single XML file without any of that messy CDATA stuff. Of course, I think that you could also say that making a whole formatting system around XML parsers is about as useful as making a whole OS around OOP messaging syntax. But the biggest generator of XSL code will probably be future word processors converting the style sheets to XML/XSL code.

Wasn't it supposed to be simple? (Score:3, Interesting)

I thought that the original goal was to make a _simple_ declarative language that handled 80% of the transformations easily and left the other 20% to something else (like using a real language). Even the simplest tasks require too much code in my opinion. My first XSLT project, a learning project, was to write the game of life, as it is with every new tool.
I have two versions, the shortest being 150 lines, and it required the field to be an ugly composition of <o/> and <X/> elements. There is a very high syntax to semantics ratio. Similar operations require different syntax: inserting an element can be done literally, but inserting an attribute requires special instructions (a minimum of two elements). There is no continuity in the different ways to reference a variable binding. You use templates to generate the structure of the XML but cannot generate text structure in a similar way, neither content data nor attribute value data. Blah blah blah... I could go on. It seems like they ended up with a complex framework without strong expressive power. Why couldn't they just do DSSSL with the XML syntax if they wanted it? Besides, for me it is difficult to look at XSLT definitions/commands, obviously themselves in XML, sprinkled around literal output XML elements. But I haven't trained my XSLT syntax eye too extensively. It is probably the same way people feel about looking at my DSSSL code. p.s., I think they lie when they say it is side effect free and completely declarative. -j

Re:Wasn't it supposed to be simple? (Score:1, Insightful)

Representing the game of life in XSLT sounds difficult. I'm not sure if the language is intended for that type of thing (as you say, it was intended to fit 80% of the tasks). I agree that DSSSL is much more elegant, but it's overkill. Incidentally, what was the problem you were having?

Re:Wasn't it supposed to be simple? (Score:1)

The task is turning this document:

+-----+
|ooXXo|
|XooXX|
|XoooX|
|XXXoo|
|ooXXo|
+-----+

into this document:

+-----+
|ooXXo|
|oXooX|
|XoXoX|
|XoXoo|
|ooXXo|
+-----+

Replace the 'X's and 'o's with your favorite words. They are both very regularly structured blocks of text with a few small precise rules. Why shouldn't a document conversion language be able to do this?
Do you think that that mess of vague rules we call English would be any easier? -j

Re:Wasn't it supposed to be simple? (Score:1)

try <myElement myAttribute="{Xpath Expression}"/> All this and more at the excellent xsl-list FAQ [dpawson.co.uk], where you'll find that XSLT has a strong user base and a wide range of mature implementations, i.e. competing on performance now rather than robustness or completeness. But it really is different to non-declarative languages, so expect fun if you still enjoy learning.

Re:Wasn't it supposed to be simple? (Score:1)

When I posted, I was thinking of a particular time I needed to add an attribute to an element in the middle of a for-each. I remember it because it almost doubled the file size. One, of the many, relevant segments follows. Please, tell me how I am sucking. I am sure there is a better way of doing this (excuse the t namespace, I use it because I am lazy): <t:for-each <a><t:attribute <t:value-of</t:attribute> <t:value-of</a></t:for-each> Just as I inserted the "<a>" element, I wanted something as simple for the attributes. Since you treat a single tree and a forest the same when iterating over them, then why not for assignment, too? Can I do this, hopefully, and have it construct a forest of "a" elements? <a href="//store/*/designer/url/text()"> <t:value-of </a> That seems to make sense, I hope: make "a" elements with "href" attributes, assigning "href" the text value of each "//store/*/designer/url" element. Take the value of all the "name" elements, placing them in the tree inside the "a" element.
It would be the same thing that many other languages do with array assignment: a[4 6 8 9 10] meaning, do a scatter/gather assignment, giving the first 5 composite numbers the value of their largest prime factor. (or imagine from c: int a[0 3][0 3] = b[1 2][1 2]; assigning the corners of the 4x4 matrix to be the middle of another matrix) -j Re:Wasn't it supposed to be simple? (Score:2) No, that was CSS. XSLT was supposed to fill in another 15% or so, with only a small fraction then being left to a "real" language.
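The tree-to-text transform debated in this thread (render the <PROGRAM> example as labelled lines) can be sketched outside XSLT with a DOM-style traversal. Here is a minimal Python version using the standard library's xml.etree.ElementTree; the `render` helper and the label-from-tag convention are illustrative assumptions, not anything from the thread's XSLT:

```python
import xml.etree.ElementTree as ET

# The example document from the thread.
doc = """
<PROGRAM>
  <NAME>Blah</NAME>
  <PLATFORM>Linux 3.0</PLATFORM>
  <DESCRIPTION>This program sucks</DESCRIPTION>
</PROGRAM>
"""

def render(xml_text):
    """DOM-style transform: walk the children of the root element and
    emit one 'Label: value' line per child, in document order."""
    root = ET.fromstring(xml_text)
    lines = []
    for child in root:
        label = child.tag.capitalize()  # NAME -> Name, PLATFORM -> Platform
        lines.append(f"{label}: {child.text}")
    return "\n".join(lines)

print(render(doc))
# Name: Blah
# Platform: Linux 3.0
# Description: This program sucks
```

An equivalent XSLT stylesheet would express the same walk as template rules rather than an explicit loop; the point of contention above is only which notation carries less syntactic weight.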
https://developers.slashdot.org/story/01/10/17/018208/w3c-recommends-xsl?sdsrc=prevbtmprev
Greg Stein wrote:
> Tim Peters:
> > Trying to overload the current namespace set makes it so
> > much harder to see that these are compile-time gimmicks, and users need to
> > be acutely aware of that if they're to use it effectively. Note that I
> > understand (& wholly agree with) the need for runtime introspection.
>
> And that is the crux of the issue: I think the names that are assigned to
> these classes, interfaces, typedefs, or whatever, can follow the standard
> Python semantics and be plopped into the appropriate namespace. There is
> no overloading.

This is indeed the crux of the issue. For those that missed it last time, it became very clear to me that we are working with radically different design aesthetics when we discussed the idea of having an optional keyword that said: "this thing is usually handled at compile time but I want to handle it at runtime. I know what I am doing." Greg complained that that would require the programmer to understand too much about what was being done at compile time and what at runtime. From my point of view this is *exactly* what a programmer *needs* to know, and if we make it too hard for them to know it then we have failed.

> There is no "overloading" of namespaces. We are using Python namespaces
> just like they should be, and the type-checker doesn't even have to be all
> that smart to track this stuff.

There is an overloading of namespaces because we will separately specify the *compile time semantics* of these names. We need to separately specify these semantics because we need all compile time type checkers to behave identically. Yes, it seems elegant to make type objects seem as if they are "just like" Python objects. Unfortunately they aren't. Type objects are evaluated -- and accepted or rejected -- at compile time. Every programmer needs to understand that, and it should be blatantly obvious in the syntax, just as everything else in Python syntax is blatantly obvious.
-- Paul Prescod - ISOGEN Consulting Engineer speaking for himself Earth will soon support only survivor species -- dandelions, roaches, lizards, thistles, crows, rats. Not to mention 10 billion humans. - Planet of the Weeds, Harper's Magazine, October 1998
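[Editorial aside: the annotation machinery Python eventually grew keeps both sides of this debate visible, so a small sketch may help readers following the thread today. Annotated names live in ordinary, runtime-introspectable namespaces, while any "compile-time" checking is delegated to external static tools; the function here is purely illustrative.]

```python
def greet(name: str, times: int = 1) -> str:
    """Annotations are plain objects stored on the function object."""
    return " ".join(["hello " + name] * times)

# Runtime introspection: the annotations sit in a normal attribute,
# in a normal namespace -- no special compile-time machinery involved.
assert greet.__annotations__ == {"name": str, "times": int, "return": str}

# The interpreter itself never enforces them; a static checker would
# flag a bad call, but at runtime the code simply executes.
print(greet("world", times=2))  # hello world hello world
```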
https://mail.python.org/pipermail/python-dev/2000-January/001858.html
5. Quantum Circuit Simulation¶

quimb has powerful support for simulating quantum circuits via its ability to represent and contract arbitrary geometry tensor networks. However, because its representation is generally neither the full wavefunction (like many other simulators) nor a specific TN (for example an MPS or PEPS like some other simulators), using it is a bit different and requires potentially extra thought. Specifically, the computational memory and effort are very sensitive to what you want to compute, but also to how long you are willing to spend computing how to compute it - essentially, pre-processing.

Note: All of which is to say, you are unfortunately quite unlikely to achieve the best performance without some tweaking of the default arguments.

Nonetheless, here’s a quick preview of the kind of circuit that many classical simulators might struggle with - an 80 qubit GHZ-state prepared using a completely randomly ordered sequence of CNOTs:

[1]:
%config InlineBackend.figure_formats = ['svg']

import random
import quimb as qu
import quimb.tensor as qtn

N = 80
circ = qtn.Circuit(N)

# randomly permute the order of qubits
regs = list(range(N))
random.shuffle(regs)

# hadamard on one of the qubits
circ.apply_gate('H', regs[0])

# chain of cnots to generate GHZ-state
for i in range(N - 1):
    circ.apply_gate('CNOT', regs[i], regs[i + 1])

# sample it once
for b in circ.sample(1):
    print(b)

11111111111111111111111111111111111111111111111111111111111111111111111111111111

As mentioned above, various pre-processing steps need to occur (which will happen on the first run if not explicitly called).
The results of these are cached such that the more you sample the more the simulation should speed up:

[2]:
%%time
# sample it 8 times
for b in circ.sample(8):
    print(b)

00000000000000000000000000000000000000000000000000000000000000000000000000000000
11111111111111111111111111111111111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
11111111111111111111111111111111111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000000000000000000000000000000000
11111111111111111111111111111111111111111111111111111111111111111111111111111111
00000000000000000000000000000000000000000000000000000000000000000000000000000000
CPU times: user 321 ms, sys: 301 µs, total: 321 ms
Wall time: 308 ms

Collect some statistics:

[3]:
%%time
from collections import Counter

# sample it 100 times, count results:
Counter(circ.sample(100))

CPU times: user 211 ms, sys: 24.3 ms, total: 235 ms
Wall time: 219 ms
[3]: Counter({'11111111111111111111111111111111111111111111111111111111111111111111111111111111': 60, '00000000000000000000000000000000000000000000000000000000000000000000000000000000': 40})

5.1. Simulation Steps¶

Here’s an overview of the general steps for a tensor network quantum circuit simulation:

1. Build the tensor network representation of the circuit. This involves taking the initial state (by default the product state $ | 000 \ldots 00 \rangle $) and adding tensors representing the gates to it, possibly performing low-rank decompositions on them if beneficial.
2. Form the entire tensor network of the quantity you actually want to compute. This might include:
   - the full, dense, wavefunction (i.e. a single tensor)
   - a local expectation value or reduced density matrix
   - a marginal probability distribution to sample bitstrings from, mimicking a real quantum computer (this is what is happening above)
   - the fidelity with a target state or unitary, maybe to use automatic differentiation to train the parameters of a given circuit to perform a specific task
3. Perform local simplifications on the tensor network to make it easier (possibly trivial!) to contract. This step, whilst efficient in the complexity sense, can still introduce some significant overhead.
4. Find a contraction path for this simplified tensor network. This is a series of pairwise tensor contractions that turn the tensor network into a single tensor - represented by a binary contraction tree. The memory required for the intermediate tensors can be checked in advance at this stage.
5. Optionally slice (or ‘chunk’) the contraction, breaking it into many independent, smaller contractions, either to fit memory constraints or to introduce embarrassing parallelism.
6. Perform the contraction! Up until this point the tensors are generally very small and so can be easily passed to some other library with which to perform the actual contraction (for example, one with GPU support).

Warning: The overall computational effort and memory required in this last step are very sensitive (we are talking possibly orders of magnitude) to how well one finds the so-called ‘contraction path’ or ‘contraction tree’ - which itself can take some effort. The overall simulation is thus a careful balancing of time spent (a) simplifying, (b) path finding, and (c) contracting.

Note: It’s also important to note that this last step is where the exponential slow-down expected for generic quantum circuits will appear. Unless the circuit is trivial in some way, the tensor network simplification and path finding can only ever shave off a (potentially very significant) prefactor from an underlying exponential scaling.

5.2.
Building the Circuit¶

The main relevant object is Circuit. Under the hood this uses gate_TN_1D(), which applies an operator on some number of sites to any notionally 1D tensor network (not just an MPS), whilst maintaining the outer indices (e.g. 'k0', 'k1', 'k2', ...). The various options for applying the operator and propagating tags to it (if not contracted in) can be found in gate_TN_1D(). Note that the ‘1D’ nature of the TN is just for indexing; gates can be applied to arbitrary combinations of sites within this ‘register’. The following is a basic example of building a quantum circuit TN by applying a variety of gates to, for visualization purposes, nearest neighbors in a chain.

[4]:
# 10 qubits and tag the initial wavefunction tensors
circ = qtn.Circuit(N=10)

# initial layer of hadamards
for i in range(10):
    circ.apply_gate('H', i, gate_round=0)

# 8 rounds of entangling gates
for r in range(1, 9):

    # even pairs
    for i in range(0, 10, 2):
        circ.apply_gate('CNOT', i, i + 1, gate_round=r)

    # Z-rotations
    for i in range(10):
        circ.apply_gate('RZ', 1.234, i, gate_round=r)

    # odd pairs
    for i in range(1, 9, 2):
        circ.apply_gate('CZ', i, i + 1, gate_round=r)

    # X-rotations
    for i in range(10):
        circ.apply_gate('RX', 1.234, i, gate_round=r)

# final layer of hadamards
for i in range(10):
    circ.apply_gate('H', i, gate_round=r + 1)

circ

[4]: <Circuit(n=10, num_gates=252, gate_opts={'contract': 'auto-split-gate', 'propagate_tags': 'register'})>

The basic tensor network representing the state is stored in the .psi attribute, which we can then visualize:

[5]:
circ.psi.draw(color=['PSI0', 'H', 'CNOT', 'RZ', 'RX', 'CZ'])

Note by default the CNOT and CZ gates have been split via a rank-2 spatial decomposition into two parts acting on each site separately but connected by a new bond.
We can also graph the default (propagate_tags='register') method for adding site tags to the applied operators:

[6]:
circ.psi.draw(color=[f'I{i}' for i in range(10)])

Or, since we supplied gate_round as a keyword (which is optional), the tensors are also tagged in that way:

[7]:
circ.psi.draw(color=['PSI0'] + [f'ROUND_{i}' for i in range(10)])

All of these might be helpful when addressing only certain tensors:

[8]:
# select the subnetwork of tensors with *all* following tags
circ.psi.select(['CNOT', 'I3', 'ROUND_3'], which='all')

[8]: <TensorNetwork(tensors=1, indices=3)>

Note: The tensor(s) of each gate is/are also individually tagged like [f'GATE_{g}' for g in range(circ.num_gates)].

The full list of currently implemented gates is here:

[9]:
" ".join(qtn.circuit.GATE_FUNCTIONS)

[9]: 'H X Y Z S T X_1_2 Y_1_2 Z_1_2 W_1_2 HZ_1_2 CNOT CX CY CZ IS ISWAP IDEN SWAP RX RY RZ U3 U2 U1 CU3 CU2 CU1 FS FSIM RZZ'

5.2.1. Parametrized Gates¶

Of these gates, any which take parameters - ['RX', 'RY', 'RZ', 'U3', 'FSIM', 'RZZ'] - can be ‘parametrized’, which adds the gate to the network as a PTensor. The main use of this is that when optimizing a TN, for example, the parameters that generate the tensor data will be optimized rather than the tensor data itself.

[10]:
circ_param = qtn.Circuit(6)

for l in range(3):
    for i in range(0, 6, 2):
        circ_param.apply_gate('FSIM', random.random(), random.random(),
                              i, i + 1, parametrize=True, contract=False)
    for i in range(1, 5, 2):
        circ_param.apply_gate('FSIM', random.random(), random.random(),
                              i, i + 1, parametrize=True, contract=False)

for i in range(6):
    circ_param.apply_gate('U3', random.random(), random.random(), random.random(),
                          i, parametrize=True)

circ_param.psi.draw(color=['PSI0', 'FSIM', 'U3'])

We’ve used the contract=False option, which doesn’t try to split the gate tensor in any way, so here there is now a single tensor per two qubit gate.
In fact, for 'FSIM' and random parameters there is no low-rank decomposition that would happen anyway, but this is also the only mode compatible with parametrized tensors:

[11]:
circ_param.psi['GATE_0']

[11]: PTensor(shape=(2, 2, 2, 2), inds=('_df874bAARUu', '_df874bAARUo', '_df874bAARUf', '_df874bAARUg'), tags=oset(['FSIM', 'GATE_0', 'I0', 'I1']))

For most tasks like contraction these are transparently handled like normal tensors:

[12]:
circ_param.amplitude('101001')

[12]: (-0.028363740312282972+0.08561344641421921j)

5.3. Forming the Target Tensor Network¶

You can access the wavefunction tensor network \(U |0\rangle^{\otimes{N}}\), or more generally \(U |\psi_0\rangle\), with Circuit.psi, or just the unitary, \(U\), with Circuit.uni, and then manipulate and contract these yourself. However, there are built-in methods for constructing and contracting the tensor network to perform various common tasks.

5.3.1. Compute an amplitude¶

This computes a single wavefunction amplitude coefficient, or transition amplitude, \(c_x = \langle x | U | 000 \ldots 00 \rangle\), with, \(x=0101010101 \ldots\), for example. The probability of sampling \(x\) from this circuit is \(|c_x|^2\). Example usage:

[13]:
circ.amplitude('0101010101')

[13]: (-0.006267589645294041+0.012702244544177423j)

5.3.2. Compute a local expectation¶

For an operator \(G_{\bar{q}}\) acting on qubits \(\bar{q}\), this computes \(\langle \psi_{\bar{q}} | G_{\bar{q}} | \psi_{\bar{q}} \rangle\), where \(\psi_{\bar{q}}\) is the circuit wavefunction but only with gates which are in the ‘reverse lightcone’ (i.e. the causal cone) of qubits \(\bar{q}\). In the picture above the gates which we know cancel to the identity have been greyed out (and removed from the TN used).
Example usage:

[14]:
circ.local_expectation(qu.pauli('Z') & qu.pauli('Z'), (4, 5))

[14]: (-0.01818896518545625+1.962405932198763e-17j)

You can compute several individual expectations on the same sites by supplying a list (they are computed in a single contraction):

[15]:
circ.local_expectation(
    [qu.pauli('X') & qu.pauli('X'),
     qu.pauli('Y') & qu.pauli('Y'),
     qu.pauli('Z') & qu.pauli('Z')],
    where=(4, 5),
)

[15]: ((-0.0057847192590974456+6.1257422745431e-17j), (0.05890188167924257+3.0357660829594124e-17j), (-0.018188965185456245+5.410168840702667e-17j))

5.3.3. Compute a reduced density matrix¶

This similarly takes a subset of the qubits, \(\bar{q}\), and contracts the wavefunction with its bra, but now only over the qubits outside of \(\bar{q}\), producing a reduced density matrix \(\rho_{\bar{q}} = \mathrm{Tr}_{\bar{p}} | \psi \rangle \langle \psi |\), where the partial trace is over \(\bar{p}\), the complementary set of qubits to \(\bar{q}\). Obviously once you have \(\rho_{\bar{q}}\) you can compute many different local expectations, and so it can be more efficient than repeatedly calling local_expectation(). Example usage:

[16]:
circ.partial_trace((4, 5)).round(3)

[16]:
array([[ 0.252+0.j   ,  0.013+0.011j, -0.019+0.007j, -0.016-0.003j],
       [ 0.013-0.011j,  0.255-0.j   ,  0.013+0.014j,  0.02 +0.017j],
       [-0.019-0.007j,  0.013-0.014j,  0.254+0.j   ,  0.019+0.012j],
       [-0.016+0.003j,  0.02 -0.017j,  0.019-0.012j,  0.239-0.j   ]])

5.3.4. Compute a marginal probability distribution¶

This method computes the probability distribution over some qubits, \(\bar{q}\), conditioned on some partial fixed result on qubits \(\bar{f}\) (which can be no qubits). Here only the causal cone relevant to \(\bar{f} \cup \bar{q}\) is needed, with the remaining qubits, \(\bar{p}\), being traced out. We directly take the diagonal within the contraction using hyper-indices (depicted as a COPY-tensor above) to avoid forming the full reduced density matrix.
The result is a \(2^{|\bar{q}|}\) dimensional tensor containing the probabilities for each bit-string \(x_{\bar{q}}\), given that we have already ‘measured’ \(x_{\bar{f}}\). Example usage: [17]: p = circ.compute_marginal((1, 2), fix={0: '1', 3: '0', 4: '1'}, dtype='complex128') p [17]: array([[0.03422455, 0.02085596], [0.03080204, 0.02780321]]) [18]: qtn.circuit.sample_bitstring_from_prob_ndarray(p / p.sum()) [18]: '00' 5.3.5. Generate unbiased samples¶ The main use of compute_marginal() is as a subroutine used to generate unbiased samples from circuits. We first pick some group of qubits, \(\bar{q_A}\), to ‘measure’, then condition on the resulting bitstring \(x_{\bar{q_A}}\) to compute the marginal on the next group of qubits \(\bar{q_B}\), and so forth. Eventually we reach the ‘final marginal’, where we no longer need to trace any qubits out, so instead we can compute the conditioned wavefunction directly: since the ‘bra’ representing the partial bit-string only acts on some of the qubits, this object is still a \(2^{|\bar{q}|}\) dimensional tensor, which we sample from to get the final bit-string \(x_{\bar{q_z}}\). The overall sample generated is then the concatenation of all these bit-strings. As such, to generate a sample once we have put our qubits into \(N_g\) groups, we need to perform \(N_g\) contractions. The contractions near the beginning are generally easier since we only need the causal cone for a small number of qubits, and the contractions towards the end are easier since we have largely or fully severed the bonds between the ket and bra by conditioning. This is generally more expensive than computing local quantities but there are a couple of reprieves: Because of causality we are free to choose the order and groupings of the qubits in whichever way is most efficient. The automatic choice is to start at the qubit(s) with the smallest reverse lightcone and greedily expand (see section below).
Grouping the qubits together can have a large beneficial impact on overall computation time, but imposes a hard upper limit on the required memory like \(2^{|\bar{q}|}\). Note You can set the group size to be that of the whole system, which is equivalent to sampling from the full wavefunction; if you want to do this, it would be more efficient to call simulate_counts(), which doesn’t draw the samples individually. Once we have computed a particular marginal we can cache the result, meaning that if we come across the same sub-string result, we don’t need to contract anything again - the trivial example being the first marginal we compute. The second point is easy to understand if we think of the sampling process as the repeated exploration of a probability tree as above - which is shown for 3 qubits grouped individually, with a first sample of \(011\) drawn. If the next sample we drew was \(010\), we wouldn’t have to perform any more contractions, since we’d be following already explored branches. In the extreme case of the GHZ-state at the top, there are only two branches, so once we have generated the all-zeros and the all-ones results, we won’t need to perform any more contractions. Example usage: [19]: for b in circ.sample(10, group_size=3): print(b) 1000011000 0110000101 0111111010 1011001011 1111100100 1000100001 0110000111 0101011010 1100000000 0000111011 5.3.6. Generate samples from a chaotic circuit¶ Some circuits can be assumed to produce chaotic results, and a useful property of these is that if you remove (partially trace) a certain number of qubits, the remaining marginal probability distribution is close to uniform. This is like saying that as we travel along the probability tree depicted above, the probabilities are all very similar until we reach ‘the last few qubits’, whose marginal distribution then depends sensitively on the bit-string generated so far.
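The caching idea is easy to picture with a toy model: treat each conditional marginal as an expensive call, memoise it on the fixed prefix, and sample the GHZ state mentioned above. This is a sketch of the probability-tree behaviour only - none of these names are quimb's:

```python
import random
from functools import lru_cache

# toy 3-qubit GHZ state: only |000> and |111> carry weight
AMPS = {'000': 2 ** -0.5, '111': 2 ** -0.5}

@lru_cache(maxsize=None)
def marginal(prefix):
    """Stand-in for an expensive tensor-network contraction: the
    distribution of the next qubit given the bits fixed so far.
    lru_cache plays the role of the sampler's marginal cache."""
    p = [0.0, 0.0]
    for bits, a in AMPS.items():
        if bits.startswith(prefix):
            p[int(bits[len(prefix)])] += abs(a) ** 2
    return tuple(p)

def sample(rng=random):
    bits = ''
    for _ in range(3):
        bits += rng.choices('01', weights=marginal(bits))[0]
    return bits

samples = {sample() for _ in range(50)}
# only the two GHZ branches can ever appear, and at most 5 distinct
# marginals ('', '0', '00', '1', '11') are ever 'contracted'
assert samples <= {'000', '111'}
assert marginal.cache_info().misses <= 5
```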
If we know roughly what number of qubits suffices for this property to hold, \(m\), we can uniformly sample bit-strings for the first \(f = N - m\) qubits, and then we only need to contract the ‘final marginal’ from above. In other words, we only need to compute and sample from the final marginal, where \(\bar{m}\) is the set of marginal qubits, and \(\bar{f}\) is the set of qubits fixed to a random bit-string. If \(m\) is not too large, this is generally a very similar cost to that of computing a single amplitude. Note This task is the relevant method for classically simulating the results in “Quantum supremacy using a programmable superconducting processor”. Example usage: [20]: for b in circ.sample_chaotic(10, marginal_qubits=5): print(b) 0011100100 1101111111 0101010100 0011011111 0010101111 0000000001 1101101001 1101100111 0110111000 1101000100 Five of these qubits will now be sampled completely randomly. 5.3.7. Compute the dense vector representation of the state¶ In other words, just contract the core circ.psi object into a single tensor, where \(|\psi_{\mathrm{dense}}\rangle\) is a column vector. Unlike other simulators however, the contraction order here isn’t defined by the order the gates were applied in, meaning the full wavefunction does not necessarily need to be formed until the last few contractions. Hint For small to medium circuits, the benefits of doing this as compared with standard, ‘Schrodinger-style’ simulation might be negligible (since the overall scaling is still limited by the number of qubits). Indeed the savings are likely outweighed by the pre-processing step’s overhead if you are only running the circuit geometry once. Example usage: [21]: circ.to_dense() [21]: [[ 0.022278+0.044826j] [ 0.047567+0.001852j] [-0.028239+0.01407j ] ... [ 0.016 -0.008447j] [-0.025437-0.015225j] [-0.033285-0.030653j]] 5.3.8.
Rehearsals¶ Each of the above methods can perform a trial run, where the tensor networks and contraction paths are generated and intermediates possibly cached, but the main contraction is not performed. Either supply rehearse=True or use the corresponding partial methods: amplitude_rehearse(), local_expectation_rehearse(), partial_trace_rehearse(), compute_marginal_rehearse(), sample_rehearse(), sample_chaotic_rehearse() and to_dense_rehearse(). These each return a dict with the tensor network that would be contracted in the main part of the computation (with the key 'tn'), and the opt_einsum.PathInfo object describing the contraction path found for that tensor network (with the key 'info'). For example: [22]: rehs = circ.amplitude_rehearse() # contraction width W = qu.log2(rehs['info'].largest_intermediate) W [22]: 8.0 Upper twenties is the limit for standard (~10GB) amounts of RAM. [23]: # contraction cost # N.B. raw .opt_cost assumes *real* dtype FLOPs # * 4 to get complex dtype FLOPs (relevant for most QC) # / 2 to get dtype independent scalar OPs (the 'cost') C = qu.log10(rehs['info'].opt_cost / 2) C [23]: 3.9607560909017727 [24]: # perform contraction rehs['tn'].contract(all, optimize=rehs['info'].path, output_inds=()) [24]: (0.014567936533910614+0.039229453534987115j) sample_rehearse() and sample_chaotic_rehearse() both return a dict of dicts, where the keys of the top dict are the (ordered) groups of marginal qubits used, and the values are the rehearsal dicts as above. [25]: rehs = circ.sample_rehearse(group_size=3) rehs.keys() [25]: dict_keys([(0, 1, 2), (3, 4, 9), (5, 6, 7), (8,)]) [26]: rehs[(3, 4, 9)].keys() [26]: dict_keys(['tn', 'info']) 5.3.9.
Unitary Reverse Lightcone Cancellation¶ In several of the examples above we made use of the ‘reverse lightcone’, or the set of gates that have a causal effect on a particular set of output qubits, \(\bar{q}\), to work with a potentially much smaller TN representation of the wavefunction. This can simply be understood as cancellation of the gate unitaries, \(U^{\dagger} U = \mathbb{1}\), at the boundary where the bra and ket meet, if there are no operators or projectors breaking this bond between the bra and ket. Whilst such simplifications can be found by the local simplification methods (see below), it’s easier and quicker to drop these gates explicitly. You can see which gate tags are in the reverse lightcone of which regions of qubits by calling: [27]: # just show the first 10... lc_tags = circ.get_reverse_lightcone_tags(where=(0,)) lc_tags[:10] [27]: ('PSI0', 'GATE_0', 'GATE_1', 'GATE_2', 'GATE_3', 'GATE_4', 'GATE_5', 'GATE_6', 'GATE_7', 'GATE_8') [28]: circ.psi.draw(color=lc_tags, legend=False) We can plot the effect this selection, \(| \psi \rangle \rightarrow | \psi_{\bar{q}} \rangle\), has on the norm with the following: [29]: # get the reverse lightcone wavefunction of qubit 0 psi_q0 = circ.get_psi_reverse_lightcone(where=(0,)) # plot its norm (psi_q0.H & psi_q0).draw(color=['PSI0'] + [f'ROUND_{i}' for i in range(10)]) Note Although we have specified gate rounds here, this is not required to find the reverse lightcones, and indeed arbitrary geometry is handled too. 5.4. Locally Simplifying the Tensor Network (the simplify_sequence kwarg)¶ All of the main circuit methods take a simplify_sequence kwarg that controls local tensor network simplifications that are performed on the target TN object before the main contraction.
The kwarg is a string of letters which is cycled through until convergence, with each letter corresponding to a different method: 'A': antidiag_gauge() 'D': diagonal_reduce() 'C': column_reduce() 'R': rank_simplify() 'S': split_simplify() The final object thus depends both on which letters are specified and on their order – 'ADCRS' is the default. As an example, here is the amplitude tensor network of the circuit above, with only ‘rank simplification’ (contracting neighboring tensors that won’t increase rank) performed: [30]: ( circ # get the tensor network .amplitude_rehearse(simplify_sequence='R')['tn'] # plot it with each qubit register highlighted .draw(color=[f'I{q}' for q in range(10)]) ) You can see that only 3+ dimensional tensors remain. Now if we turn on all the simplification methods we get an even smaller tensor network: [31]: ( circ # get the tensor network .amplitude_rehearse(simplify_sequence='ADCRS')['tn'] # plot it with each qubit register highlighted .draw(color=[f'I{q}' for q in range(10)]) ) And we also now have hyper-indices - indices shared by more than two tensors - that have been introduced by the diagonal_reduce() method. Hint Of the five methods, only rank_simplify() doesn’t require looking inside the tensors at the sparsity structure. This means that, at least for the moment, it is the only method that can be back-propagated through, for example. The five methods combined can have a significant effect on the complexity of the main TN to be contracted; in the most extreme case they can reduce a TN to a scalar: [32]: norm = circ.psi.H & circ.psi norm [32]: <TensorNetwork1D(tensors=668, indices=802, L=10, max_bond=2)> [33]: norm.full_simplify_(seq='ADCRS') [33]: <TensorNetwork1D(tensors=1, indices=0, L=10, max_bond=1)> Here, full_simplify() (which is the method that cycles through the five methods above) has reduced the 668 tensors to a single scalar via local simplifications only.
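The 'cycle until convergence' driver itself is simple. In this sketch the tensor network is stood in for by a sorted tuple of tensor ranks, with two toy passes ('R' merges the two smallest tensors, 'S' drops rank-0 scalars); only the control flow mirrors full_simplify, not the actual simplification methods:

```python
def full_simplify(tn, seq, methods):
    """Apply the passes named by the letters of `seq`, in order,
    repeatedly, until one whole cycle leaves `tn` unchanged."""
    while True:
        before = tn
        for letter in seq:
            tn = methods[letter](tn)
        if tn == before:
            return tn

def merge_smallest(t):
    # 'contract' the two smallest tensors into one of the larger rank
    if len(t) < 2:
        return t
    return tuple(sorted(t[2:] + (max(t[0], t[1]),)))

def drop_scalars(t):
    kept = tuple(x for x in t if x > 0)
    return kept if kept else t

methods = {'R': merge_smallest, 'S': drop_scalars}
assert full_simplify((0, 2, 2, 3), 'RS', methods) == (3,)
```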
Clearly we know the answer should be 1 anyway, but it’s nice to confirm it can indeed be found automatically as well: [34]: norm ^ all [34]: 0.9999999999999563 5.5. Finding a Contraction Path (the optimize kwarg)¶ As mentioned previously, the main computational bottleneck (i.e. as we scale up circuits, the step that always becomes most expensive) is the actual contraction of the main tensor network objects, post simplification. The cost of this step (which is recorded in the rehearsal’s 'info' objects) can be incredibly sensitive to the contraction path - the series of pairwise merges that define how to turn the collection of tensors into a single tensor. You control this aspect of the quantum circuit simulation via the optimize kwarg, which can take a number of different types of value: - A string, specifying an opt_einsum registered path optimizer. - A custom opt_einsum.PathOptimizer instance, like those found in cotengra. - An explicit path - a sequence of pairs of ints - likely found from a previous rehearsal, for example. Note The default value is 'auto-hq', which is the highest quality preset opt_einsum has, but this is pretty unlikely to offer the best performance for large or complex circuits. As an example we’ll show how to use each type of optimize kwarg for computing the local expectation: [35]: # compute the ZZ correlation on qubits 3 & 4 ZZ = qu.pauli('Z') & qu.pauli('Z') where = (3, 4) 5.5.1. An opt_einsum preset¶ First we use the fast but low quality 'greedy' preset: [36]: rehs = circ.local_expectation_rehearse(ZZ, where, optimize='greedy') tn, info = rehs['tn'], rehs['info'] info.opt_cost [36]: Decimal('255872') Because we used a preset, the path is cached by quimb, meaning the path optimization won’t run again for the same geometry. Hint You can set a persistent disk cache for quimb to use with set_contract_path_cache().
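To demystify what a greedy preset does, and what an 'explicit path' (a sequence of pairs of ints) looks like, here is a hedged pure-Python greedy path finder over index-labelled tensors. It follows opt_einsum's numbering convention - the contracted pair is removed and the new tensor appended - but uses a toy cost model, not the library's implementation:

```python
from math import prod

def greedy_path(inputs, dims, output=()):
    """Return a contraction path as a list of pairs of ints.

    `inputs`: one tuple of index labels per tensor; `dims`: label -> size;
    `output`: labels that must survive to the final result."""
    inputs = [tuple(ix) for ix in inputs]
    path = []
    while len(inputs) > 1:
        best = None
        for i in range(len(inputs)):
            for j in range(i + 1, len(inputs)):
                union = set(inputs[i]) | set(inputs[j])
                # toy cost: size of the space the pair spans
                cost = prod(dims[ix] for ix in union)
                if best is None or cost < best[0]:
                    best = (cost, i, j, union)
        _, i, j, union = best
        path.append((i, j))
        # the new tensor keeps indices still needed elsewhere or in output
        rest = [ix for k, ix in enumerate(inputs) if k not in (i, j)]
        keep = set(output).union(*[set(ix) for ix in rest])
        inputs = rest + [tuple(ix for ix in sorted(union) if ix in keep)]
    return path

# matrix chain A(10x100) @ B(100x5) @ C(5x50): contracting A,B first is cheapest
dims = {'a': 10, 'b': 100, 'c': 5, 'd': 50}
path = greedy_path([('a', 'b'), ('b', 'c'), ('c', 'd')], dims, output=('a', 'd'))
assert path == [(0, 1), (0, 1)]
```

A path like this can be handed straight to the optimize kwarg, just like the `info.path` reused below.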
Now we can run the actual computation, reusing that path automatically: [37]: %%timeit circ.local_expectation(ZZ, where, optimize='greedy') 34.8 ms ± 361 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) We can compare this to just performing the main contraction: [38]: %%timeit tn.contract(all, optimize=info.path, output_inds=()) 4.19 ms ± 38.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) We see that most of the time is evidently spent preparing the TN. 5.5.2. An opt_einsum.PathOptimizer instance¶ You can also supply a customized PathOptimizer instance here, an example of which is the opt_einsum.RandomGreedy optimizer (which is itself called by 'auto-hq', in fact). [39]: import opt_einsum as oe # up the number of repeats and make it run in parallel opt_rg = oe.RandomGreedy(max_repeats=256, parallel=True) rehs = circ.local_expectation_rehearse(ZZ, where, optimize=opt_rg) tn, info = rehs['tn'], rehs['info'] info.opt_cost [39]: Decimal('191560') We see it has found a much better path than 'greedy', which is not so surprising. Unlike before, if we want to reuse the path found from this, we can directly access the .path attribute from the info object (or the PathOptimizer object): [40]: %%timeit circ.local_expectation(ZZ, where, optimize=info.path) 36.2 ms ± 1.44 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) We’ve shaved some time off, but not much, because the computation is not dominated by the contraction at this scale. Note If you supplied the opt_rg optimizer again, it would run for an additional 256 repeats before returning its best path – this can be useful if you want to incrementally optimize the path, check its cost and then optimize more, before switching to .path when you actually want to contract, for example. 5.5.3.
A custom opt_einsum.PathOptimizer instance¶ opt_einsum defines an interface for custom path optimizers, which other libraries, or any user, can subclass and then supply as the optimize kwarg, and which will thus be compatible with quimb. The cotengra library offers ‘hyper’-optimized contraction paths that are aimed at (and strongly recommended for) large and complex tensor networks. It provides: the optimize='hyper' preset (once cotengra is imported); the cotengra.HyperOptimizer single-shot path optimizer (like RandomGreedy above); and the cotengra.ReusableHyperOptimizer, which can be used for many contractions and caches the results (optionally to disk). The last is probably the most practical so we’ll demonstrate it here: [41]: import cotengra as ctg # the kwargs ReusableHyperOptimizer take are the same as HyperOptimizer opt = ctg.ReusableHyperOptimizer( max_repeats=16, reconf_opts={}, parallel=False, progbar=True, # directory='ctg_path_cache', # if you want a persistent path cache ) rehs = circ.local_expectation_rehearse(ZZ, where, optimize=opt) tn, info = rehs['tn'], rehs['info'] info.opt_cost log2[SIZE]: 10.00 log10[FLOPs]: 4.91: 100%|██████████| 16/16 [00:12<00:00, 1.27it/s] [41]: Decimal('80384') We can see that even for this small contraction it has improved on the RandomGreedy path cost. We could use info.path here, but since we have a ReusableHyperOptimizer path optimizer, the second time it is called on the same contraction it will simply get the path from its cache anyway: [42]: %%timeit circ.local_expectation(ZZ, where, optimize=opt) 32.9 ms ± 441 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) Again, since the main contraction is very small, we don’t see any real improvement. cotengra also has a ContractionTree object for manipulating and visualizing the contraction paths found.
[43]: tree = ctg.ContractionTree.from_info(info) tree.plot_tent() Here, the grey network at the bottom is the TN to be contracted, and the tree above it depicts the series of pairwise contractions, and their individual costs, needed to find the output answer (the node at the top). 5.6. Performing the Contraction (the backend kwarg)¶ quimb and opt_einsum both try to be agnostic to the actual arrays they manipulate / contract. Currently however, the tensor network Circuit constructs and simplifies is made up of numpy.ndarray backed tensors, since they are all generally very small: [44]: {t.size for t in tn} [44]: {4, 8} When it comes to the actual contraction however, where large tensors will appear, it can be advantageous to use a different library to perform the contractions. If you specify a backend kwarg, then before contraction the arrays will be converted to the backend, the contraction performed, and the result converted back to numpy. The list of available backends is here, including: cupy, jax (note this actively defaults to single precision), torch and tensorflow. Sampling is an excellent candidate for GPU acceleration as the same geometry TNs are contracted over and over again, and since sampling is inherently a low precision task, single precision arrays are a natural fit. [45]: for b in circ.sample(10, backend='jax', dtype='complex64'): print(b) 0110110111 0111001011 0010010100 0010100111 0110000101 1100101001 1010100110 0110010010 1100000110 1110110111 Note Both sampling methods, sample() and sample_chaotic(), default to dtype='complex64'. The other methods default to dtype='complex128'. 5.7. Performance Checklist¶ Here’s a list of things to check if you want to ensure you are getting the most performance out of your circuit simulation: What contraction path optimizer are you using? If performance isn’t great, have you tried cotengra? How are you applying the gates? For example, gate_opts={'contract': False} (no decomposition) can be better for diagonal 2-qubit gates.
How are you grouping the qubits? For sampling, there is usually a sweet spot for group_size. For chaotic sampling, you might try the last \(M\) marginal qubits rather than the first, for example. What local simplifications are you using, and in what order? simplify_sequence='SADCR' is also good sometimes. If the computation is contraction dominated, can you run it on a GPU?
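The backend mechanism behind the checklist's last item - convert the arrays, contract, convert the result back - is itself just a thin wrapper, sketched here with a hypothetical registry (for a real GPU backend the conversion pair would be something like cupy.asarray and cupy.asnumpy):

```python
# hypothetical registry: backend name -> (to_backend, from_backend)
BACKENDS = {
    'python': (lambda x: x, lambda x: x),
}

def contract_with_backend(arrays, contract, backend='python'):
    """Convert inputs to the backend, run `contract` there, and
    convert the result back - the pattern behind the backend kwarg."""
    to_b, from_b = BACKENDS[backend]
    return from_b(contract(*[to_b(a) for a in arrays]))

# inner product of two 'arrays' (plain lists) on the trivial backend
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert contract_with_backend([[1, 2], [3, 4]], dot) == 11
```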
https://quimb.readthedocs.io/en/latest/tensor-circuit.html
Implement Swipe to Dismiss The “Swipe to dismiss” pattern is common in many mobile apps. For example, if we’re writing an email app, we might want to allow our users to swipe away email messages in a list. When they do, we’ll want to move the item from the Inbox to the Trash. Flutter makes this task easy by providing the Dismissible Widget. Directions - Create List of Items - Wrap each item in a Dismissible Widget - Provide “Leave Behind” indicators 1. Create List of Items First, we’ll create a list of items we can swipe away. For more detailed instructions on how to create a list, please follow the Working with long lists recipe. Create a Data Source In our example, we’ll want 20 sample items to work with. To keep it simple, we’ll generate a List of Strings. final items = List<String>.generate(20, (i) => "Item ${i + 1}"); Convert the data source into a List At first, we’ll simply display each item in the List on screen. Users will not be able to swipe these items away just yet! ListView.builder( itemCount: items.length, itemBuilder: (context, index) { return ListTile(title: Text('${items[index]}')); }, ); 2. Wrap each item in a Dismissible Widget Now that we’re displaying a list of items, we’ll want to give our users the ability to swipe each item off the list! After the user has swiped away the item, we’ll need to run some code to remove the item from the list and display a Snackbar. In a real app, you might need to perform more complex logic, such as removing the item from a web service or database. This is where the Dismissible Widget comes into play! In our example, we’ll update our itemBuilder function to return a Dismissible Widget. Dismissible( // Each Dismissible must contain a Key. Keys allow Flutter to // uniquely identify Widgets.
key: Key(item), // We also need to provide a function that will tell our app // what to do after an item has been swiped away. onDismissed: (direction) { // Remove the item from our data source. setState(() { items.removeAt(index); }); // Show a snackbar! This snackbar could also contain "Undo" actions. Scaffold .of(context) .showSnackBar(SnackBar(content: Text("$item dismissed"))); }, child: ListTile(title: Text('$item')), ); 3. Provide “Leave Behind” indicators As it stands, our app allows users to swipe items off the List, but it might not give them a visual indication of what happens when they do. To provide a cue that we’re removing items, we’ll display a “Leave Behind” indicator as they swipe the item off the screen. In this case, a red background! For this purpose, we’ll provide a background parameter to the Dismissible Widget: background: Container(color: Colors.red). This can be seen in context in the complete example below. Complete example import 'package:flutter/foundation.dart'; import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } // MyApp is a StatefulWidget. This allows us to update the state of the // Widget whenever an item is removed. class MyApp extends StatefulWidget { MyApp({Key key}) : super(key: key); @override MyAppState createState() { return MyAppState(); } } class MyAppState extends State<MyApp> { final items = List<String>.generate(3, (i) => "Item ${i + 1}"); @override Widget build(BuildContext context) { final title = 'Dismissing Items'; return MaterialApp( title: title, theme: ThemeData( primarySwatch: Colors.blue, ), home: Scaffold( appBar: AppBar( title: Text(title), ), body: ListView.builder( itemCount: items.length, itemBuilder: (context, index) { final item = items[index]; return Dismissible( // Each Dismissible must contain a Key. Keys allow Flutter to // uniquely identify Widgets. key: Key(item), // We also need to provide a function that tells our app // what to do after an item has been swiped away. onDismissed: (direction) { // Remove the item from our data source.
setState(() { items.removeAt(index); }); // Then show a snackbar! Scaffold.of(context) .showSnackBar(SnackBar(content: Text("$item dismissed"))); }, // Show a red background as the item is swiped away background: Container(color: Colors.red), child: ListTile(title: Text('$item')), ); }, ), ), ); } }
https://flutter.dev/docs/cookbook/gestures/dismissible
Short and clean recursive Java solution (reusing code from reverse linked list):

public class Solution {
    // standard recursive reversal of a whole list, reused below
    public ListNode reverseList(ListNode head) {
        if (head == null || head.next == null) return head;
        ListNode reversed = reverseList(head.next);
        head.next.next = head;
        head.next = null;
        return reversed;
    }

    public ListNode reverseKGroup(ListNode head, int k) {
        if (head == null || head.next == null) return head;
        // advance runner to the k-th node of this group
        ListNode runner = head;
        int count = 1;
        while (count < k && runner != null) {
            runner = runner.next;
            count++;
        }
        // only reverse if a full group of k nodes exists
        if (count == k && runner != null) {
            ListNode nextGroup = runner.next;
            runner.next = null;                       // detach this group
            ListNode reversed = reverseList(head);    // reuse reverseList
            head.next = reverseKGroup(nextGroup, k);  // head is now the tail
            return reversed;
        }
        return head;
    }
}
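For anyone who wants to try the same idea outside the JVM, here is an unofficial Python port of the solution, with small helpers to build and flatten linked lists for testing:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    # plain recursive linked-list reversal, reused below
    if head is None or head.next is None:
        return head
    reversed_head = reverse_list(head.next)
    head.next.next = head
    head.next = None
    return reversed_head

def reverse_k_group(head, k):
    if head is None or head.next is None:
        return head
    # advance runner to the k-th node of this group
    runner, count = head, 1
    while count < k and runner is not None:
        runner = runner.next
        count += 1
    # only reverse if a full group of k nodes exists
    if count == k and runner is not None:
        next_group = runner.next
        runner.next = None                          # detach this group
        new_head = reverse_list(head)               # reuse reverse_list
        head.next = reverse_k_group(next_group, k)  # head is now the tail
        return new_head
    return head

def from_list(xs):
    head = None
    for x in reversed(xs):
        head = ListNode(x, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

assert to_list(reverse_k_group(from_list([1, 2, 3, 4, 5]), 2)) == [2, 1, 4, 3, 5]
```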
https://discuss.leetcode.com/topic/28994/short-and-clean-recursive-java-solution-reuse-code-from-reverse-linked-list
CodePlex - Project Hosting for Open Source Software We are implementing the cross-OS remote debugging feature for PTVS 2.0. Similar to how it is done in other Python IDEs like PyDev and Komodo, it requires the code being debugged to be modified to establish connection to the debugger. We would like to get community feedback on the API that will be used for this. Here's a minimal code sample demonstrating the necessary modifications: import ptvsd ptvsd.enable_attach(secret = 'joshua') ptvsd.wait_for_attach() # optional # Your code follows ... ptvsd.break_into_debugger() # explicit breakpoint ... After doing the above, you can pick the new "Python remote debugging" transport in the "Attach to Process" dialog in VS, specify the secret and hostname in the Qualifier textbox, and attach. Note two things here. First of all, unlike PyDev, the direction in which the connection is established is reversed: the debugged program is the server, and the IDE is the client that connects to that server. This allows connecting to the same script from different machines without having to change it to update the hostname of the IDE debugging server. To ensure that only authorized users can connect, the debugging server requires you to specify a secret value, and only those users who provide that value when attaching will be allowed to debug the process. This can be explicitly disabled if desired. The other difference is that the core API function - enable_attach - registers the necessary trace handlers and launches the debugging server on a background thread, but it does not block program execution. So, after you call it, your script continues to run as usual - except that now you can attach to it at any later time. Furthermore, the debugging server remains running, which means that you can detach from that process and re-attach at will. For scenarios where it is desirable to wait until a debugger is attached, a separate function - wait_for_attach - is provided.
For scenarios where the process being debugged is on a machine that is physically on a different network, or on the same network which is not guaranteed to be secure from eavesdropping and MITM attacks, the debugging server supports SSL connections - in this mode, a certificate file and a key file have to be provided to enable_attach (same format as used by the standard ssl module). Here are the actual function definitions with their docstrings: def enable_attach(secret, address = ('0.0.0.0', 5678), certfile = None, keyfile = None, redirect_output = True): """Enables Python Tools for Visual Studio to attach to this process remotely to debug Python code. The secret parameter is used to validate the clients - only those clients providing the valid secret will be allowed to connect to this server. On the client side, the secret is prepended to the Qualifier string, separated from the hostname by '@', e.g.: secret@myhost.cloudapp.net:5678. If secret is None, there's no validation, and any client can connect freely. The address parameter specifies the interface and port on which the debugging server should listen for TCP connections. It is in the same format as used for regular sockets of the AF_INET family, i.e. a tuple of (hostname, port). On the client side, the server is identified by the Qualifier string in the usual hostname:port format, e.g.: myhost.cloudapp.net:5678. The certfile parameter is used to enable SSL. If not specified, or if set to None, the connection between this program and the debugger will be insecure, and can be intercepted on the wire. If specified, the meaning of this parameter is the same as for ssl.wrap_socket. The keyfile parameter is used together with certfile when SSL is enabled. Its meaning is the same as for ssl.wrap_socket. The redirect_output parameter specifies whether any output (on both stdout and stderr) produced by this program should be sent to the debugger.
This function returns immediately after setting up the debugging server, and does not block program execution. If you need to block until a debugger is attached, call ptvsd.wait_for_attach. The debugger can be detached and re-attached multiple times after enable_attach is called. """ def wait_for_attach(): """If a PTVS remote debugger is attached, returns immediately. Otherwise, blocks until a remote debugger attaches to this process. """ def break_into_debugger(): """If a PTVS remote debugger is attached, pauses execution of all threads, and breaks into the debugger with the current thread as active.""" In reply to mwesterdahl76: Thanks for the review! Regarding naming, I'd prefer to keep an existing function name for the sake of clarity and discoverability for those people who haven't used the same in other IDEs (and I think that using the same name as sys.settrace is unfortunate, because the semantics differ a lot). But we can certainly add an alias for the convenience of those used to it. For threading behavior, it's definitely worth documenting it for enable_attach, now that I think of it, if only because there is a non-obvious catch there - you can only debug those threads that are started after you call it (and the one you've called it on, of course) - the debugger won't even see the threads that were there before. While it probably won't affect many people, as normally enabling debugging is the very first thing you do in the script, someone can always get there and be surprised. A good point about timeout when blocking, and easy to implement, too - definitely worth adding. 1) How was the decision to reverse the usual behavior made? I ask because while making the code being debugged the server has many benefits, it also has downsides: for example, remote servers are usually restricted in the ports they can have open. 2) Would it be possible to support debugging without modification to the code?
I'd love to be able to use Visual Studio's click-on-the-sidebar interface to set, disable and clear breakpoints, rather than use the ipdb-esque approach of sprinkling set_traces everywhere and having to insert debug-related imports and dependencies into server-side code.
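For reference, the qualifier format described in the docstrings ('secret@host:port') separates into its parts straightforwardly. This little parser is illustrative only and not part of the ptvsd module:

```python
def parse_qualifier(qualifier, default_port=5678):
    """Split a PTVS remote-debugging qualifier into (secret, host, port).

    Accepts 'secret@host:port', 'host:port', or just 'host', per the
    format described in the enable_attach docstring. A missing secret
    is returned as None, and a missing port falls back to the default.
    """
    secret = None
    if '@' in qualifier:
        secret, qualifier = qualifier.split('@', 1)
    host, _, port = qualifier.partition(':')
    return secret, host, int(port) if port else default_port

assert parse_qualifier('joshua@myhost.cloudapp.net:5678') == \
    ('joshua', 'myhost.cloudapp.net', 5678)
assert parse_qualifier('localhost') == (None, 'localhost', 5678)
```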
http://pytools.codeplex.com/discussions/430907
NOTE: FreshPorts displays only required dependencies information. Optional dependencies are not covered. This port is required by:.6.p1_2: PAM Number of commits found: 234 (showing only 34 on this page) Extend the description for openssh-portable Fix description for openssh Allow IPv6 connection if detected by configure. fix patch for build on bento - patch to fix undefined (ulong) - builds now for FreeBSD 2.2.8 Fix thinko and make it possible to disable Kerberos support on the make command line even if KRB5_HOME is set in make.conf. Mark BROKEN in Kerberos case: Simon Wilkinson has not released updated patches yet. (I hope dinoex doesn't mind my committing this.) Update to OpenSSH 3.1 OpenSSH-portable 3.1p1 Fix off-by-one error. Add option to support patches: Add patch for: readpassphrase.h Someone in the OpenSSH world doesn't understand the difference between application and implementation namespaces. This causes conflicts with <readpassphrase.h>. PKGNAMESUFFIX set for Option OPENSSH_OVERWRITE_BASE Fix MANPREFIX, so manpages are compressed strip trailing \ adding a knob to the OpenSSH port to allow people to overwrite the ssh in the base system. make OPENSSH_OVERWRITE_BASE=yes - extend patch for batch mode, so no site-specific files are installed. - Update to OpenSSH-3.0.2 - make batch-processing cleaner In BATCH mode - clean generated host keys. Give dinoex@ maintainership since he's really been maintaining it and is better suited for maintaining this port. Update to openssh-3.0.1 and openssh-portable-3.0.1p1 Update to OpenSSH 3.0 and OpenSSH-portable 3.0p1 Extracted from Changelog (not complete): cvs rm'ing patch-coredump, as the current versions are safe. It does no harm, so a second bump of PORTVERSION is not needed.
- included an patch that solves a coredump in sshd - Bumped PORTREVISION - Update to OpenSSH 2.9.9p2 - security-patch for cookie files obsolete - MD5 password support activated - Update to p2: - stripped down some patches Fix package building, slogin and its manpage is an link - slogin and manpage added to package, bumped PORTREVISION Fix FreeBSD specific patch, exit now if change of password fails. - Switch to the user's uid before attempting to unlink the auth forwarding file, nullifying the effects of a race. - Bump PORTREVISION Update maintainer email New port: OpenSSH portable, which has GNU-configure and more. Diffs to OpenSSH-OPenBSD are huge. So this is here a complete diffrent branch, no repro-copy - Did a bit cleanup in the Makefile Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 7 vulnerabilities affecting 10 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/security/openssh-portable/?page=3
When I am coding with options I find the fold method very useful. Instead of writing if-defined statements like:

if (x.isDefined && y.isRight) {
  val z = getSomething(x.get)
  if (z.isDefined) {
    ....

I can do opt.fold(<not_defined>){ defined => }.

Have you tried a for comprehension? Assuming you don't want to treat individual errors or empty optionals:

import scala.util._

val opt1 = Some("opt1")
val either2: Either[Error, String] = Right("either2")
val try3: Try[String] = Success("try3")

for {
  v1 <- opt1
  v2 <- either2.right.toOption
  v3 <- try3.toOption
} yield {
  println(s"$v1 $v2 $v3")
}

Note that Either is not right-biased, so you need to call the .right method in the for comprehension (I think cats or scalaz have a right-biased Either). Also, we are converting the Either and the Try to optionals, discarding errors.
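The idea behind both the fold and the for comprehension is "stop at the first missing value". For readers who don't know Scala, here is a rough Python analogue of that short-circuiting behavior (illustrative only; Python has no Option type, so None stands in for the empty case, and the helper name is my own):

```python
# Python sketch of "stop at the first missing value":
# mimics how the Scala for comprehension yields only when every
# Option/Either/Try in the chain holds a value.

def all_present(*values):
    """Return the values as a tuple if none is None, else None."""
    if any(v is None for v in values):
        return None
    return values

opt1 = "opt1"        # stands in for Some("opt1")
either2 = "either2"  # stands in for Right("either2"); the error branch is dropped
try3 = "try3"        # stands in for Success("try3")

result = all_present(opt1, either2, try3)
if result is not None:
    v1, v2, v3 = result
    print(f"{v1} {v2} {v3}")  # opt1 either2 try3
```

As in the Scala version, error information is discarded: a failed Either or Try simply becomes the empty case.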
https://codedump.io/share/QTuYOuSc47Vi/1/avoiding-nested-ifs-when-working-with-multiple-options-and-eithers
by Mihai Corlan

Table of contents

- The big picture
- Differences between Doctrine 2 and Doctrine 1.x
- Getting the source code
- Installing Doctrine 2 and creating the PHP project
- Creating the PHP entities
- Creating the PHP services
- Using Doctrine Query Language
- Creating the Flex client
- Doctrine 2 advantages (and some things to watch out for)
- Where to go from here

Created 24 January 2011

In May 2010, I wrote a blog post that explored how to create a simple application that used the Doctrine ORM (Object Relational Mapper) framework for PHP on the server side, Flex on the client side, and remoting via the Zend Framework to communicate between Flex and PHP. At that time my feelings on Doctrine were mixed. I chatted with Jonathan Wage of Doctrine about some of the shortcomings I found in Doctrine 1.x and he suggested that I check out Doctrine 2 Beta. I've found that Doctrine 2 is a big step forward. In this article I describe how I rewrote the application that I created for my original blog post, this time using Doctrine 2, Flex 4, the Zend Framework, and the Flash Builder data-centric development features. I'll highlight the relevant differences between Doctrine 1 and Doctrine 2 along the way. Thus, you should find this article valuable in either of these two cases:

- You are already working with Doctrine 1 and you've wondered what it would take to move to Doctrine 2
- You want to learn how to use Doctrine 2 with Flex; you know PHP and you know enough Flex not to be scared away if you see some snippets of code

If you aren't already using an ORM framework for PHP, then I strongly encourage you to consider it. For most projects, it frees you of the tedious task of writing Create, Read, Update, and Delete (CRUD) code and SQL queries. It allows you to focus on the business logic of your application.
Better yet, these advantages are multiplied when working on Rich Internet Applications because for these applications, much of the work is done on the client and not on the server. There are some aspects of using ORM with RIA that could be better. For example, when you use a server-side ORM to feed data to a rich client (and enable the client to persist changes), you need additional boilerplate code to make the whole thing work. If you don't know much about ORMs in general, you may want to read my initial blog post on the topic, Working with Doctrine 1.x, Zend Framework, and Flex, before continuing. The sample application for this article provides a simple interface for tracking student grades (or marks) in a series of courses (see Figure 1). The database structure for this application comprises four tables: marks, courses, students, and countries (see Figure 2). Note: The only difference between this database and the one used in my original blog post is the addition of a simple primary key to the marks table. While this is by no means a complex application, it has reasonably sophisticated relationships between tables. There are several courses and countries (courses and countries tables), and each student belongs to a country (many-to-one between students and countries) and receives marks for a number of courses (marks many-to-many table for students and courses tables). The Flex application reads all the data stored in the MySQL database and enables the user to fully edit student records by performing the following operations:

- Add, edit, and delete students
- Change the country for a student
- Change the courses taken by a student and assign marks for a course

The complete application includes the Flex client, the PHP application server with Zend AMF, and the MySQL database server (see Figure 3). If you're familiar with Doctrine 1.x, it will help to understand the main differences between Doctrine 2 and Doctrine 1.x.
Note: Doctrine 2 is still in Beta, so features may change before the final release. The biggest change by far is related to the main pattern used by the Doctrine 2 ORM. Doctrine 1.x versions used the Active Record pattern (Ruby on Rails uses Active Record too); whereas Doctrine 2 uses the Data Mapper pattern (Hibernate uses the same pattern). With Active Record, the entities know how to persist themselves to the database. Basically each entity extends a class from the ORM framework and implements methods such as read(), find(), delete(), and update(). Although it is not mandatory, the entities look very much like the database structure. With Data Mapper, the entities know nothing about the persistence layer and nothing about the database structure. The ORM provides the means to persist the entities and read the data into entities (from the database). From a rich client perspective, this translates into less work on the PHP side when preparing data to send across the wire when using Doctrine 2. With Doctrine 1, the data model was heavy due to the Active Record pattern. When I worked with Doctrine 1.x, I had to create a plain vanilla data model to send the data to the Flex client. In order to efficiently transform the heavy entities used by Doctrine 1.x into the plain vanilla ones used for sending the data to Flex, I had to write custom functions. When data came in from the Flex client, the reverse process was needed; I used the plain vanilla objects to build the Doctrine entity objects. An alternative could be to send all the data to Flex as arrays. Unfortunately, this approach doesn't work out of the box; you have to write functions to transform the graph of objects into a graph of arrays or associative arrays. With Doctrine 2, you don't need this extra layer of simple value objects and you can return the data as a graph of objects or arrays with the built-in capabilities. The second big difference is that Doctrine 2 requires PHP 5.3 or newer. 
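The pattern shift is easier to see in code. Here is a deliberately tiny, language-agnostic sketch in Python (class and method names are hypothetical, purely for illustration; Doctrine's real API differs):

```python
# Active Record: the entity itself knows how to persist to storage.
class ActiveRecordStudent:
    def __init__(self, db, name):
        self.db, self.name = db, name

    def save(self):
        # The entity talks to the storage layer directly.
        self.db.append({"name": self.name})

# Data Mapper: the entity is plain data; a separate mapper persists it.
class Student:
    def __init__(self, name):
        self.name = name

class StudentMapper:
    def __init__(self, db):
        self.db = db

    def persist(self, student):
        # The entity knows nothing about storage; the mapper does the work.
        self.db.append({"name": student.name})

db = []  # a list stands in for the database
ActiveRecordStudent(db, "Ada").save()
StudentMapper(db).persist(Student("Grace"))
print(db)  # [{'name': 'Ada'}, {'name': 'Grace'}]
```

The payoff described in the article follows from the second half of the sketch: because a Data Mapper entity is plain data, it can be handed to a rich client without first stripping out persistence machinery.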
Thus, if your setup requires older versions of PHP, then you have to stay with Doctrine 1.x. Of course the ripples stirred by these two changes are quite big and I think it is safe to say that when moving from Doctrine 1.x to Doctrine 2 you won't reuse much of your previous experience with Doctrine 1.x or the code you wrote. Having said this, I have to say that I am happy with the evolution of Doctrine, because I favor Data Mapper over Active Record. This article outlines the main steps in building the sample application. The sample files for this article include the PHP doctrine2_students folder, the database SQL dump for creating the tables, and the Flex code. The easiest way to get started with this project is just to extract the ZIP file and import the project in Flash Builder. Next, place the PHP code (the doctrine2_students folder) in your web root folder. Get the Doctrine 2 files, and then reconfigure the doctrine2_students/bootstrap.php file to define the Doctrine 2 location and update your database credentials. Finally, you can explore the sample Flex code or create your own new Flex/PHP project and perform the steps in the following sections, which include creating the Flex wrapper services and value objects. There are four different ways (PEAR, Package Download, GitHub, or SVN) to get the Doctrine framework. The configuration of Doctrine will differ depending on what method you use to obtain it. I used GitHub for my project, and I pulled the code outside of the Apache web root. I recommend using Eclipse PDT with Flash Builder 4 for this kind of development. Follow these steps to use this setup:

- Install Eclipse PDT and then install the Flash Builder plug-in on top of Eclipse PDT.
- Create a PHP project named students-doctrine2.
- Add Flex nature to the project by right-clicking on the project name and selecting Add/Change Project Type > Add Flex Project Type.
Make sure you select PHP for the Application Server type and you fill in the path and URL for your web root.
- After downloading Doctrine, create a folder named doctrine2_students inside the web root for the PHP services, entities, and Doctrine configuration files.
- Inside that folder, create three subfolders named entities, proxies, and services.
- Create a linked resource between the doctrine2_students folder and the Eclipse project. (Right-click on the project, choose New > Folder, click Advanced, and navigate to the folder location.)

Now you're ready to write PHP and Flex code. The next step is to create a bootstrap file (I named mine bootstrap.php and placed it inside of the doctrine2_students folder) that configures Doctrine 2 for your project. This file is used to load the framework classes, set up database access information, specify the location and annotation method of entities, and configure the different caches that will be used by the application. In the same bootstrap file you can create an instance of the EntityManager class. This is the entry point to Doctrine 2. In the sample files for this article you'll find the bootstrap.php file in the doctrine2_students folder. The file looks like this:

<?php
use Doctrine\ORM\EntityManager,
    Doctrine\ORM\Configuration;

$applicationMode = 'development';

//Doctrine Git bootstrap
$lib = '/Users/mcorlan/Documents/work/_git/doctrine/doctrine2/lib/';
require $lib . 'vendor/doctrine-common/lib/Doctrine/Common/ClassLoader.php';
$classLoader = new \Doctrine\Common\ClassLoader('Doctrine\Common', $lib . 'vendor/doctrine-common/lib');
$classLoader->register();
$classLoader = new \Doctrine\Common\ClassLoader('Doctrine\DBAL', $lib . 'vendor/doctrine-dbal/lib');
$classLoader->register();
$classLoader = new \Doctrine\Common\ClassLoader('Doctrine\ORM', $lib);
$classLoader->register();

//additional Symfony components for Doctrine-CLI Tool, YAML Mapping driver
$classloader = new \Doctrine\Common\ClassLoader('Symfony', $lib . 'vendor/');
$classloader->register();

//load entities
$classloader = new \Doctrine\Common\ClassLoader('entities', __DIR__);
$classloader->register();
$classLoader = new \Doctrine\Common\ClassLoader('proxies', __DIR__);
$classLoader->register();

//load services
$classLoader = new \Doctrine\Common\ClassLoader(null, __DIR__ . '/services');
$classLoader->register();

if ($applicationMode == 'development') {
    $cache = new \Doctrine\Common\Cache\ArrayCache;
} else {
    $cache = new \Doctrine\Common\Cache\XcacheCache();
}

$config = new Configuration;
$config->setMetadataCacheImpl($cache);
$driverImpl = $config->newDefaultAnnotationDriver(__DIR__ . '/entities');
$config->setMetadataDriverImpl($driverImpl);
$config->setQueryCacheImpl($cache);
$config->setProxyDir(dirname(__FILE__) . '/proxies');
$config->setProxyNamespace('doctrine2_students\proxies');
if ($applicationMode == "development") {
    $config->setAutoGenerateProxyClasses(true);
} else {
    $config->setAutoGenerateProxyClasses(false);
}

//database connection config
$connectionOptions = array(
    'driver' => 'pdo_mysql',
    'dbname' => 'students',
    'user' => 'mihai',
    'password' => 'mihai'
);
$GLOBALS['em'] = EntityManager::create($connectionOptions, $config);

If you use this file, remember to change the Doctrine paths as well as the MySQL username and password to match your own setup. With Doctrine 2 (and any ORM that uses the Data Mapper pattern) you have to specify how an entity is persisted by the framework and what relationships it has with other entities (if any). In Doctrine 2 you can choose from four different mechanisms: annotations, YAML, XML, and plain PHP. I initially favored annotations because all the information is stored in the entity classes as PHPDoc comments. Thus if you want to modify an entity you have only one place to look. However, after using this approach I think the XML approach is best because you get code-completion hints.
Here is the listing for the Course entity (remember there are four tables in the database and so the application needs four entities):

<?php
namespace entities;

/** @Entity
 * @Table(name="courses")
 */
class Course {
    /**
     * @Id @Column(type="integer")
     * @GeneratedValue(strategy="AUTO")
     */
    private $id;
    /**
     * @Column(type="string", length=255)
     */
    private $name;

    public function getId() {
        return $this->id;
    }
    public function getName() {
        return $this->name;
    }
    public function setName($val) {
        $this->name = $val;
    }
}

See the entities folder in the sample files for the Country, Mark, and Student entities. You use annotations to specify what table is used for the entity (remember, one row from that table will be wrapped in one instance of the entity). You can set different names for the properties if you want (in SQL you don't use camelCase notation, but in PHP or ActionScript typically you use this convention). The entities I created closely follow the structure of the database. The only difference is in how the foreign keys are represented. For example, the Student entity, which has a many-to-one relation with the Country entity, doesn't have a country_id property of type int. Instead, I added a property named country that is of type Country. Similarly, I created a property named marks that holds an array of Mark entities. For example, the marks property of a student who attends three courses will hold an array of Mark objects with three instances.

When I worked with Doctrine 1.x, I used the built-in tools to create the domain model from the database structure. With Doctrine 2, you can create the YAML out of the database schema and then generate the entities. I'm not convinced, however, that it is a good idea to do this. If you have complex schemas the generated code might not work as you expect, and you'll need to tweak it manually anyway. In any case, you have to remember to set the properties as private or protected and not public, and make sure you add getters and setters.
If you don't, you'll likely encounter some nasty bugs because Doctrine will have problems injecting the code to handle relations. With the four entities in place, it is time to create the PHP services, which will be used by the Flex client to get and persist data. Basically, you'll use the Zend Framework to invoke remote procedure calls on these objects from the Flex application. Inside the doctrine2_students/services/ folder you'll find four PHP files: CountriesService.php, CoursesService.php, MarksService.php, and StudentsService.php. If you examine the bootstrap.php file, you'll see that it loads the services folder along with the rest of the files. Here is the listing for the CountriesService class:

<?php
require_once(__DIR__ . '/../bootstrap.php');

class CountriesService {

    public function __construct() {
        $this->entityManager = $GLOBALS['em'];
    }

    public function getCountries() {
        $q = $this->entityManager->createQuery('SELECT c FROM entities\Country c ORDER BY c.name');
        return $q->getArrayResult();
    }
}

As you might expect, the complex code is inside the StudentsService class; here's the public API:

class StudentsService {
    //returns all the students
    public function getStudents() { ... }

    //save changes for an existing student, or insert a new one
    public function saveStudent($student) { ... }

    //deletes a student
    public function deleteStudent($student) { ... }
}

You can use the entry point to Doctrine 2—the EntityManager class—to query for persistent objects using a few different methods. The most powerful method is Doctrine Query Language (DQL), which resembles SQL but works on the entities you've defined in your domain model rather than on the underlying tables.
If, for example, you want to retrieve the country with the id equal to 1, you could use this code:

$id = 1;
$dql = 'SELECT c FROM entities\Country c WHERE id = ?1';
$query = $entityManager->createQuery($dql);
$query->setParameter(1, $id);
$countryEntity = $query->getResult();

If you want to change the name for this country, you could use the following code:

$countryEntity->setName('new name for your country');
//persist the changes to database
$entityManager->flush();

If you want to create a new country, you'd write this code:

$countryEntity = new entities\Country();
$countryEntity->setName('Mihai\'s country');
//set the entity to be managed by Doctrine
$entityManager->persist($countryEntity);
//persist the changes to database
$entityManager->flush();

With DQL, when you write a join it can be a filtering join (similar to the concept of a join in SQL used for limiting or aggregating results) or a fetch join (used to fetch related records and include them in the result of the main query). When you include fields from the joined entity in the SELECT clause you get a fetch join. Here is the code for the StudentsService->getStudents() method:

$dql = "SELECT s, c, m, e FROM entities\Student s JOIN s.country c JOIN s.marks m JOIN m.course e ORDER BY s.lastName, s.firstName";
$q = $this->entityManager->createQuery($dql);
return $q->getArrayResult();

Each student is retrieved along with the country he belongs to and all of his courses from the many-to-many table—all with a single DQL query. If you use print_r() to show the results you'll see the entire structure (see Figure 4). When you create a query with a fetch join and you use the getArrayResult() method on the query object instead of getResult(), you get an array or associative array of other arrays. This data is ready to be sent to the Flex client without any transformation.

Note: Handling updates to a Student is covered in the next section, Creating the Flex client.
The sample application uses remoting—enabled by Zend AMF—to communicate between the Flex client and the PHP side. In the next section you'll also see how to use Flash Builder 4 to set up the Zend Framework and create the gateway for exposing the four PHP services. With the server code in place, it is time to add the Flex code. When I developed the Doctrine 1.x version of the application, I wrote all the client code manually. For this article, you'll use the data-centric development features of Flash Builder to introspect PHP classes and create the service wrappers as well as the ActionScript value objects. The easiest way to do this is to first create the four services, and then to define the return types for the get…() operations. Follow these steps:

- From the Data/Services view click the Connect to Data/Service link or the Connect to Data/Service button (it's the third button in the toolbar and it has a plus sign in its icon).
- When the wizard opens, select PHP, and click Next.
- If this is your first time using the wizard for PHP, then you'll be given the opportunity to install the Zend Framework. Follow the instructions to install it.
- Click Browse and select the first PHP service.
- If you want to, you can change the package in which the services (and the value objects) will be created.
- Click Finish.
- Repeat these steps for each of the other three services (see Figure 5).

Now, you're ready to define the value objects used on the Flex side. Again you can use the data-centric development features. Because the StudentsService returns a complex type that uses Student, Course, Country, and Mark, it is important to start by defining those return types first. Start with CountriesService and CoursesService, and then move on to MarksService, before defining the StudentsService value object.
- To define the return type for an operation, expand the tree for the service, right-click the operation (for example, select getCountries() from CountriesService) and choose Configure Return Type.
- When the wizard opens, select Auto-detect The Return Type From Sample Data and click Next.
- After Flash Builder introspects the service, you can type a name for the value object class; for example, type Country.
- The most complex type is the return type for the StudentsService.getStudents() method. For this one, on the second page of the wizard you need to expand the nodes and choose the types you defined earlier (Course, Country, or Mark) for the type column (see Figure 6).

With the service wrappers and value objects in place, it is time to take care of the application UI and put these files to use. When the application starts, it needs to load the courses, countries, and students first. This is done in the init() function, which is registered on the creationComplete event of the application. After you create the init() function, select the getStudents() method from the Data/Services view, right-click it, and choose Generate Service Call. This command adds the following to the code:

- an instance of StudentsService
- an instance of CallResponder (you use this object to retrieve the result via the lastResult property or to register a result/fault listener for that operation)
- a method that makes the call to the selected operation and assigns the token returned by the operation to the token property of the CallResponder object

For example, here is the code generated for getStudents():

private function getStudents():void {
    getStudentsResult.token = studentsService.getStudents();
}
…
<s:CallResponder
<services:StudentsService

Take a look at the complete code in Main.mxml to see how it all fits together.
In the code you'll see that bidirectional binding is used for the lastName and registration fields of the form:

<mx:FormItem
    <s:TextInput
</mx:FormItem>
<mx:FormItem
    <s:TextInput
</mx:FormItem>
<mx:FormItem
    <mx:DateField
</mx:FormItem>

When the saveStudent() method is called it invokes the remote operation (saveStudent() from StudentsService.php) and passes along an instance of the Student ActionScript class. The PHP method (StudentsService->saveStudent()) receives an anonymous Object, so it has to manually build an instance of the Student entity and populate it with the data. Here is the complete code for the server-side saveStudent() method:

public function saveStudent($student) {
    if ($student->id) {
        //update
        $entity = $this->entityManager->find('entities\Student', $student->id);
        if (!$entity)
            throw new Exception('Error saving student!');
        $marks = $entity->getMarks();
        foreach ($marks as $mark) {
            //update mark value for existent records
            $found = false;
            foreach ($student->marks as $record) {
                if ($mark->getCourse()->getId() == $record->course->id) {
                    $mark->setMark($record->mark);
                    $found = true;
                    $key = array_search($record, $student->marks, true);
                    //remove the $record from array
                    if ($key !== false)
                        unset($student->marks[$key]);
                    break;
                }
            }
            if (!$found) {
                //remove current mark
                $entity->removeMark($mark);
                $this->entityManager->remove($mark); //remove mark from database
            }
        }
    } else {
        //insert
        $entity = new entities\Student();
        $this->entityManager->persist($entity);
    }
    $this->addNewMarks($entity, $student); //add new marks if any
    $entity->setFirstName($student->firstName);
    $entity->setLastName($student->lastName);
    $d = new DateTime();
    $d->setTimestamp($student->registration->getTimestamp());
    $entity->setRegistration($d);
    $country = $this->entityManager->find('entities\Country', $student->country->id);
    if (!$country)
        throw new Exception('Error saving student; invalid country!');
    $entity->setCountry($country);
    $this->entityManager->flush(); //save the student
}

If you
think this is way too much code for this "simple" operation, I'm mostly in agreement with you. However, it's important to remember that this code handles many tasks: creating a new student, inserting marks in the marks table, updating a student, and updating marks if they were changed. In contrast, the deleteStudent method is quite clean (remember that behind the scenes it removes all the related records from the marks many-to-many table):

public function deleteStudent($student) {
    $entity = $this->entityManager->find('entities\Student', $student->id);
    if (!$entity)
        throw new Exception('Error deleting student!');
    $this->entityManager->remove($entity);
    $this->entityManager->flush();
}

It's important to keep in mind that Doctrine 2 is still in beta, so conclusions drawn from working with it now are subject to change once the final version is released. From what I've seen, though, Doctrine 2 makes it easier to work on PHP and Flex projects. I especially love the new Data Mapper approach and the flexibility and power it provides. The entities are very light and you can easily use DQL in conjunction with getArrayResult() to build a data structure that's ready to be sent to Flex. There is no need for all the plumbing work I did for my Doctrine 1 project to send the objects on the PHP side. With Doctrine 2 you get a big productivity boost in terms of writing the PHP services and exposing the data to the Flex client. And if you think about it, the server side is not the place where most of the effort goes. So it is a good thing to have a framework that standardizes the PHP code and helps you retrieve and persist data. That said, however, you can tell that it is not a framework architected with rich clients in mind. (There is nothing particularly bad about this; the same is true of most frameworks out there.)
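Underneath the Doctrine calls, the update branch of saveStudent() is a reconciliation of two sets of marks keyed by course id: matching records are updated, records missing from the incoming data are deleted, and the remainder are inserts. Stripped of persistence concerns, the core algorithm can be sketched like this (Python used purely for illustration; the function name is hypothetical):

```python
# Sketch of the mark-reconciliation logic: compare what is stored with
# what the client sent, keyed by course id.

def reconcile_marks(existing, incoming):
    """existing/incoming: dicts mapping course_id -> mark value.
    Returns (updates, deletes, inserts)."""
    updates = {cid: incoming[cid] for cid in existing if cid in incoming}
    deletes = [cid for cid in existing if cid not in incoming]
    inserts = {cid: incoming[cid] for cid in incoming if cid not in existing}
    return updates, deletes, inserts

updates, deletes, inserts = reconcile_marks(
    existing={1: 7, 2: 9},   # marks already stored for the student
    incoming={2: 10, 3: 8},  # marks sent from the Flex client
)
print(updates, deletes, inserts)  # {2: 10} [1] {3: 8}
```

Seen this way, the PHP code's nested loops are just a quadratic implementation of this three-way split, interleaved with the entityManager calls that make each change persistent.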
While it is easy to retrieve data from the underlying persistence layer and send it to a rich client, it is relatively difficult to persist the changes made in the client. On the server side, you have to write custom code to create the PHP entities out of the data received from the client before you can persist the changes. What I feel is missing is a way to use a data structure (for example, an associative array) as the source for creating the PHP entities (more on this below). Another interesting departure from the Doctrine 1 project is that I didn't create an exact match between the ActionScript and PHP entities. When I designed the two sides of the equation, I had in mind the best domain model to serve the Flex client because all the information is edited on the client. Then I used the getArrayResult() method to retrieve associative arrays, which I sent to Flex where they are deserialized into objects. One feature I stayed away from instinctively (both with Doctrine 1 and Doctrine 2) was the ability to generate the database schema using the entities and the mapping between them. With this feature, you can start your project by first writing the PHP data model, and then use Doctrine to generate the database for you. I'm old school and I've learned that relational databases treat you well if you treat them well. Thus, I opted to create the database myself to make sure I set all the indexes and constraints that I needed. I'm not saying that there are any problems with Doctrine's feature for generating the database schema; I simply haven't tried it. Using Doctrine 2, you have very little work to do when handling the Delete and Read CRUD operations on the server side. However, Create and Update are a different story, especially when the object has associations (many-to-many, one-to-many, or many-to-one) with other entities. I thought it would be enough to retrieve the existing Student data with Doctrine and call the removeMark() method to remove a Mark.
In fact, doing this doesn't delete the entry from the many-to-many table. Instead, you have to explicitly remove the Mark instance from the Student and from the entityManager; for example:

$student->removeMark($mark);
$this->entityManager->remove($mark);

Doctrine 2 offers a Cascade feature for persist and remove. For example, here is how you define a cascade delete/update for the Mark entities on the Student object:

class Student {
    ...
    /**
     * @OneToMany(targetEntity="Mark", mappedBy="student", cascade={"persist", "remove"})
     */
    private $marks;
    ...
}

However, I found that this actually works perfectly only on delete. It is possible that I didn't fully understand the usage, I was expecting more than intended, or I just missed something. Another small glitch was related to composite primary keys. When I tried to follow Doctrine's documentation and annotate the Mark entity to compose the primary key out of student_id and course_id, I got a runtime error. As a workaround, I altered the table and added an auto-increment primary key. The only other thing I didn't like was the date handling. When you send a Date object to the PHP side, the PHP code gets a Zend_Date object (when using the Zend Framework). Because Doctrine 2 doesn't know how to handle this kind of object (it uses the PHP DateTime object), you have to handle the transformation manually. It would be helpful to either configure the Zend Framework to use the PHP DateTime instead of Zend_Date or to have Doctrine 2 handle this format. I encourage you to read the excellent documentation you'll find on the Doctrine website to better understand the inner workings of Doctrine 2, the different types of associations it supports, and in general the features it offers. The data-centric development features of Flash Builder greatly simplified the creation of the sample application's data model. They work for many server-side technologies, and in this case they really saved a lot of time.
For more information on that topic, see Ryan Stewart's three-part series of articles, starting with Flash Builder 4 and PHP – Part 1: Data-centric development. Also see Flex and PHP in the Adobe Developer Connection's Flex Developer Center for other tutorials on connecting a Flex application to a PHP back end.
https://www.adobe.com/devnet/flex/articles/flex-doctrine2-zendamf.html
Overview

We're going to briefly change things up a little bit for this blog post and switch from Java to Jython. Normal Java service will be resumed shortly! I'm somewhat new to Jython myself, so this article shares some of the basics needed to get scripts to work. I plan to create a follow-up article to cover some more advanced topics once I've figured out how things fit together. A capability of Atrium Orchestrator that has been sadly overlooked for many years is its ability to execute Jython scripts from within workflows. AO's workflow activities and its use of XPath and XSLT allow a reasonable amount of flexibility, and I'm often impressed by what's possible with some clever manipulation of XML. Unfortunately this does mean that sometimes tasks that ought to be simple become an exercise in futility; I think anyone who has wanted a loop that repeats infinitely until a condition is met will relate to this! One of the great strengths of AO is that you don't need a programming background to create simple workflows, but there are going to be times when some good old-fashioned code is just the best way to get something done. You have a number of options if you go down this route:

- Call an out-of-process script using the command-line adapter. This means just about anything you can run from the command line is available to you. Running out-of-process has its disadvantages, though, as there is an overhead in memory and process setup time.
- Call a Perl script using the Script adapter. This also uses an external out-of-process Perl instance, with the disadvantages as above.
- Call a Jython script using the Script adapter. This executes in-process and so is very quick, but Jython is an interesting beast. Prior to the 20.14.01 content release, Jython scripts were somewhat crippled by our inclusion of the out-of-date Jython 2.1 library (from 2001!). We've now moved to version 2.5.3, which is of 2012 vintage and the most recent stable release as at this writing.
Ignoring the somewhat out-of-date version of Jython used previously, why aren't more people using these scripts in their workflows? There are, of course, those people who just are not familiar with Python/Jython syntax, and I'm not intending to teach anyone the basics of the language here. I think for those who do know Jython, the issue has been one of documentation: the script adapter documentation is fairly high-level and code examples in the community are few and far between. So that’s what I aim to fix here with some practical examples of things you can achieve with Jython in AO.

Basic Setup

Firstly, let’s do some basic grid configuration. Initially we’re going to use the script activity, and this means you must have an instance of the ro-adapter-script adapter active on your grid and it MUST be called ScriptAdapter. This name is a restriction of the script activity itself; if we call the script adapter directly we can give the adapter a different name, which is essential if you want to use both Jython and Perl via this adapter.

The configuration of the script adapter is a little, well… let’s call it idiosyncratic. It asks for the path to a Perl executable, which of course is not relevant to Jython. You do, however, have to enter the path to a valid file here and it must include the string “jython”. I created an empty file called /opt/bmc/jythonDummyFile on my CDP and used this in the configuration. You DON’T need to point to a valid Python or Jython interpreter as this is provided with the script adapter. The adapter configuration will look something like this:

So now let’s start with something simple to demonstrate passing variables into and out of a Jython script. We will convert a string to upper-case; even though there is a basic transform in AO that can do this already, it’s a useful exercise.
We need an assign activity to set up the initial string, a script activity to run the Jython code, and we’ll have another assign activity that allows us to do some logging and other post-execution trickery. It looks like this:

The Set Variables activity is as follows:

It is very important to note the quotes around the string. Imagine this string will be placed directly into a statement in this way:

var1 = This is a simple string

This will generate an error in Jython. So by quoting the string, the actual assignment is:

var1 = "This is a simple string"

Of course, when passing numeric values, these don’t need quotes. The Script activity is set up thusly:

And last, but by no means least, we have the awesomeness that is our Jython script. We’re using an embedded script here, so click the “View/Edit Script” button and paste this in, ensuring that the scripting language drop-down is set to Jython:

var2 = var1.upper()

Yes, that’s it. You can see that we take the context item inputString and assign it to the Jython variable var1. The script performs an upper() operation on this string, assigning the result to var2. And upon completing execution, AO will stick var2 into our outputString context item, which we can dump out using the logging tab and see:

[outputString=THIS IS A SIMPLE STRING]

Numerics and Multiple Values

This time we’ll make things a little more complex. Set three context items:

valueA = 7
valueB = 3
myString = "Multiplying the supplied values gives: "

The Jython script is this:

returnValue = var3 + str(var1 * var2)

And you configure the Script activity in the following way:

Running this will get you the output:

[outputString=Multiplying the supplied values gives: 21]

Hopefully this is enough for you to understand the basic method used for passing variables into and out of a Jython script. So far we have only covered simple string and numeric values; how do we pass out an array or list, for example?
Stick in this script code, leaving everything else the same as it was:

returnValue = [ "Tomato", "Orange", "Lychee" ]
returnValue.append("Watermelon")
returnValue.append("Kumquat")

Any idea what we’ll get in our outputString context item?

[outputString=['Tomato', 'Orange', 'Lychee', 'Watermelon', 'Kumquat']]

Unfortunately not the most useful output in AO terms, as we need to tokenize it. Far more useful would be an XML document that we could process; however, so far I've been unable to get AO to recognize a returned XML document as XML.

Importing Modules

Well here’s the really bad news. At the moment, the Script Adapter doesn't include the standard libraries that are available for Jython, so standard Python imports don't work. You can import standard Java classes, so we could use the Java Random class to generate a pseudo-random password:

from java.util import Random

charList = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!#."
rgen = Random()
generatedPassword = ""
for _ in range(int(passLength)):
    num = rgen.nextInt(len(charList))
    generatedPassword = generatedPassword + charList[num:num+1]

I'm hoping at this stage you can set up your input and output context items without too much assistance. If you need a hint, you need to pass a number to "passLength" as the input, and "generatedPassword" is your output variable.

Wouldn't it be nice if you could use the standard Python libraries from your Jython scripts? Well, you can. Unofficially and in a non-supported fashion for the moment. Grab the Jython 2.5.3 installer and run it (you'll need a JRE installed). Select the Standalone install and this will generate a jython.jar file. Upload this to the adapter folder containing the script adapter on each peer for which it's enabled (under server/grids/<gridname>/library/adapters/implementations/ro-adapter-script_20.14.01.00_1 most likely). Rename the existing jython-2.5.3.jar to jython-2.5.3.old.jar and rename the new file from jython.jar to jython-2.5.3.jar.
Restart the script adapter. Magically, scripts like this one should now work:

import random

charList = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!#."
generatedPassword = ""
for _ in range(int(passLength)):
    num = random.randint(0, len(charList) - 1)  # -1 because randint includes both endpoints
    generatedPassword = generatedPassword + charList[num:num+1]

Did I mention that doing this is currently totally unsupported by BMC? If you'd like to see this implemented in the product, please go vote for the idea on this subject!

Debugging Your Jython Script

Debugging a script by running it in Dev Studio is something you only want to do if you’re a masochist and really enjoy digging for syntax errors in the grid.log. You are far better off writing some code against a Jython interpreter and simulating the inputs you expect from AO. Once you’ve squashed your bugs, then paste it into the script window (or link to it as a file if you prefer). At the moment, using this method means you can't use any Python imports, hence either using Java libs or sticking to some pretty basic scripting.

If you have a convenient Linux machine, your distro should have a Jython package that you can install with "apt-get install jython" or "yum install jython" depending on your Debian or Red Hat leanings. Crazy Gentoo users can figure it out for themselves. On Windows, you should be able to download the installer from the Jython.org downloads page. This will allow you to test your scripts from the command line first and move them to AO when you have them functionally complete. If you choose to upload the standalone Jython jar that includes the standard libs, you can also do your testing using Python (just bear in mind you can't import Java libs in Python!).
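The local-testing approach above can be sketched as a throwaway harness: simulate the assignments AO would inject for the input context items, run the script body unchanged, then inspect what would land in the output context item. This is plain Python run locally (the passLength value is just an example input, not anything AO-specific):

```python
import random

# Simulate AO injecting the input context item as an assignment
passLength = "12"

# --- script body, exactly as it would be pasted into the Script activity ---
charList = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890!#."
generatedPassword = ""
for _ in range(int(passLength)):
    num = random.randint(0, len(charList) - 1)  # -1: randint includes both endpoints
    generatedPassword = generatedPassword + charList[num:num+1]
# --- end script body ---

# Simulate inspecting the output context item in the logging tab
print("[generatedPassword=%s]" % generatedPassword)
assert len(generatedPassword) == int(passLength)
assert all(c in charList for c in generatedPassword)
```

Once the asserts pass locally, the body between the markers can be pasted into the Script activity as-is.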
https://communities.bmc.com/community/bmcdn/truesight-orchestration/blog/2014/10/16/introduction-to-jython-scripting-in-bao
12 March 2013 12:33 [Source: ICIS news]

LONDON (ICIS)--Average European daily chlorine production in February 2013 rose by 2.4% month on month, industry body Euro Chlor said on Tuesday.

Average daily chlorine production for the region – comprising the EU, Norway and Switzerland – was up on the month. Production during the month was 4% higher year on year, from 26,184 tonnes/day on average in February 2012.

An unusually low export tonnage meant end-of-February caustic soda stocks, at 242,715 tonnes, were 14,738 tonnes above the previous month’s figure of 227,977 tonnes, Euro Chlor said.

Caustic soda stocks were 7.7% lower year on year, from 262,822 tonnes in February 2012.

Chlorine capacity utilisation stood at 78.7% in February, compared with 76.9% the previous month, and 75.9% in February 2012.

[Table: Chlorine production & capacity utilisation – EU 27 + Norway/Switzerland; Caustic soda stocks (tonnes)]

Data source: Euro Chlor
http://www.icis.com/Articles/2013/03/12/9648730/europe-chlorine-output-caustic-soda-stocks-up-in-february.html
Definitions for a sockets "filesystem". More...

#include <sys/cdefs.h>
#include <arch/types.h>
#include <kos/limits.h>
#include <kos/fs.h>
#include <kos/net.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <stdint.h>

Go to the source code of this file.

Definitions for a sockets "filesystem".

This file provides definitions to support the BSD-sockets-like filesystem in KallistiOS. Technically, this filesystem mounts itself on /sock, but it doesn't export any files there, so that point is largely irrelevant.

The filesystem is designed to be extensible, making it possible to add additional socket family handlers at runtime. Currently, the kernel only implements UDP sockets over IPv4 and IPv6, but as mentioned, this can be extended in a fairly straightforward manner. In general, as a user of KallistiOS (someone not interested in adding additional socket family drivers), there's very little in this file that will be of interest.

Initializer for the entry field in the fs_socket_proto_t struct.

Internal sockets protocol handler.

This structure is a protocol handler used within fs_socket. Each protocol that is supported has one of these registered for it within the kernel. Generally, users will not come in contact with this structure (unless you're planning on writing a protocol handler), and it can generally be ignored. For a complete list of appropriate errno values to return from any functions that are in here, take a look at the Single Unix Specification (aka the POSIX spec), specifically the page about sys/socket.h and all the functions that it defines.

Internal representation of a socket for fs_socket.

This structure is the internal representation of a socket "file" that is used within fs_socket. A normal user will never deal with this structure directly (only protocol handlers and fs_socket itself ever see this structure directly).

Input a packet into some socket family handler.
This function is used by the lower-level network protocol handlers to input packets for further processing by upper-level protocols. This will call the input function on the family handler, if one is found. Open a socket without calling the protocol initializer. This function creates a new socket, but does not call the protocol's socket() function. This is meant to be used for things like accepting an incoming connection, where calling the regular socket initializer could cause issues. You shouldn't really have any need to call this function unless you are implementing a new protocol handler. Add a new protocol for use with fs_socket. This function registers a protocol handler with fs_socket for use when creating and using sockets. This protocol handler must implement all of the functions in the fs_socket_proto_t structure. See the code in kos/kernel/net/net_udp.c for an example of how to do this. This function is NOT safe to call inside an interrupt. Unregister a protocol from fs_socket. This function does the exact opposite of fs_socket_proto_add, and removes a protocol from use with fs_socket. It is the programmer's responsibility to make sure that no sockets are still around that are registered with the protocol to be removed (as they will not work properly once the handler has been removed).
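The runtime-extensible registration scheme these functions describe (a list of protocol handlers, matched on socket creation, removable only while no sockets still use them) can be mocked in a few lines. This is an illustrative standalone Python sketch, not the real KallistiOS C API; all names below are invented for the illustration:

```python
# Illustrative mock of a protocol-handler registry, loosely mirroring
# fs_socket_proto_add/fs_socket_proto_remove. Not the real KOS API.
_protocols = []

class ProtoHandler:
    """One entry per supported (domain, type, protocol) triple."""
    def __init__(self, domain, sock_type, protocol, socket_fn):
        self.key = (domain, sock_type, protocol)
        self.socket_fn = socket_fn  # the handler's socket() implementation

def proto_add(handler):
    # Register a new family handler at runtime
    _protocols.append(handler)

def proto_remove(handler):
    # Caller must ensure no live sockets still reference this handler,
    # mirroring the responsibility noted for fs_socket_proto_remove()
    _protocols.remove(handler)

def open_socket(domain, sock_type, protocol):
    # Find a matching family handler, as the filesystem would on socket()
    for h in _protocols:
        if h.key == (domain, sock_type, protocol):
            return h.socket_fn()
    raise OSError("EAFNOSUPPORT: no handler registered for this family")

udp = ProtoHandler("AF_INET", "SOCK_DGRAM", "IPPROTO_UDP", lambda: "udp-socket")
proto_add(udp)
print(open_socket("AF_INET", "SOCK_DGRAM", "IPPROTO_UDP"))  # udp-socket
```

The real handlers are C structs of function pointers, but the lookup-then-dispatch shape is the same idea.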
http://cadcdev.sourceforge.net/docs/kos-2.0.0/fs__socket_8h.html
Hi,

I release a new version of diskdump for kernel 2.6.7.

- IPF (IA64) support
- Fusion-MPT SCSI driver support

Source code can be downloaded from the project page. Please feel free to use it!

Best Regards,
Takao Indoh

------------------------------------

Introduction
------------

Diskdump offers a function to execute a so-called crash dump. When "panic" or "oops" happens, diskdump automatically saves system memory to the disk. We can investigate the cause of the panic using this saved memory image, which we call a crash dump.

Overview
--------

- How it works

This is a 2-stage dump, similar to the traditional UNIX dump. The 1st stage starts when a panic occurs, at which time the register state and other dump-related data is stored in a header, followed by the full contents of memory, in a dedicated dump partition. The dump partition will have been pre-formatted with per-block signatures. The 2nd stage is executed by an rc script after the next system reboot, at which time the savecore command will create the vmcore file from the contents of the dump partition. After that, the per-block signatures will be re-written over the dump partition, in preparation for the next panic.

The handling of panic is essentially the same as netdump. It inhibits interrupts, freezes all other CPUs, and then for each page of data, issues the I/O command to the host adapter driver, followed by calling the interrupt handler of the adapter driver iteratively until the I/O has completed. The difference compared to netdump is that diskdump saves memory to the dump partition in its own loop, and does not wait for instructions from an external entity.

- Safety

When diskdump is executed, of course the system is in serious trouble. Therefore, there is a possibility that user resources on the disk containing the dump partition could be corrupted. To avoid this danger, signatures are written over the complete dump partition.
When a panic occurs, the diskdump module reads the whole dump partition, and checks if the signatures are written correctly. If the signatures match, diskdump presumes that it will be writing to the correct device, and that there is a high possibility that it will be able to write the dump correctly. The signatures should be ones which have a low possibility of existing on the disk. We decided that the following format will be written in each one-block (page-size) unit on the partition. The signatures are created by a simple formula based on the block number, making it a low possibility that the created signature would ever be the same as a user resource:

	32-bit word 0:    "disk"
	32-bit word 1:    "dump"
	32-bit word 2:    block number
	32-bit word 3:    (block number+3)*11
	32-bit word 4:    ((word 3)+3)*11
	32-bit word 5:    ((word 4)+3)*11
	32-bit word 6:    ((word 5)+3)*11
	...
	32-bit word 1023: ((word 1022)+3)*11

The diskdump module also verifies that its code and data contents have not been corrupted. The dump module computes a CRC value of its module at the point that the dump device is registered, and saves it. When a panic occurs, the dump module re-computes the CRC value at that point and compares it with the saved value. If the values aren't the same, the dump knows that it has been damaged and aborts the dump process.

- Reliability

After a panic occurs, I/O is executed by the diskdump module calling the queuecommand() function of the host adapter driver, and by polling the interrupt handler directly. The dump is executed by the diskdump module and the host adapter driver only. It is executed without depending on other components of the kernel, so it works even when the panic occurs in interrupt context. (XXX To be exact, a couple of drivers are not finished completely, because they call kmalloc() as an extension of queuecommand().)

In SCSI, a host reset is executed first, so it is possible to dump with a stable bus condition. In a couple of drivers, especially in the host reset process, timers and tasklets may be used.
For these drivers, I created a helper library to emulate these functions. The helper library executes the timer and tasklet functionality, which helps to minimize the modification required to support diskdump. The size of the initrd increases slightly because the driver depends upon the helper library.

Multiple dump devices can be registered. When a panic occurs, the diskdump module checks each dump device's condition and selects the first sane device that it finds.

Diskdump and netdump can co-exist. When both modules are enabled, diskdump works in preference to netdump. If the signature checking fails, if a disk I/O error occurs, or if a double panic occurs in the diskdump process, it falls back to netdump.

- The architectures and drivers to be supported

IA32 only is supported. Regarding drivers, aic7xxx, aic79xx and qla1280 are supported. I will support some qlogic drivers later. Modification of supported drivers is needed, but the changes are very small if they are SCSI drivers.

- The consistency with netdump

The format of the saved vmcore file is completely the same as the one which is created by the netdump-server. The vmcore file created by the savecore command can be read by the existing crash utility. The save directory is /var/crash/127.0.0.1-<DATE>, which is consistent with netdump. 127.0.0.1 is an IP address which netdump can never use, so there is no conflict. Our savecore command also calls the /var/crash/scripts/netdump-nospace script, as does the netdump-server daemon.

- Impact to the kernel

The host adapter driver needs to be modified to support diskdump, but the required steps are small. For example, the modification patch for the aic7xxx/aic79xx drivers contains 100 lines for each. At a minimum, a poll handler needs to be added, which is called from diskdump to handle I/O completion. If the adapter driver does not use timers or tasklets, that's all that is required.
Even if timers or tasklets are used, it only requires a small amount of code from the emulation library.

Similar to netdump, the variable diskdump_mode, the diskdump_func hook, and the diskdump_register_hook() function have been created to register diskdump functionality. To check the result code of a Scsi_Cmnd, scsi_decide_disposition() is also exported.

scsi_done() and scsi_eh_done() discard Scsi_Cmnds when diskdump_mode is set. With this implementation, extra processing can be avoided in the extension of the outstanding completion of a Scsi_Cmnd, which is completed in the extension of the host reset process. This is the only overhead to be added to the main route. It's simply an addition of "if unlikely(diskdump_mode) return", so the overhead is negligible.

Internal structure
------------------

- The interface between diskdump.o and scsi_dump.o

scsi_dump.o is the diskdump driver for SCSI, and it registers itself to diskdump.o. (The diskdump drivers for IDE or SATA, if and when they are created, would also register themselves to diskdump.o.) scsi_mod.o defines the following structures:

struct disk_dump_type {
	void *(*probe)(struct device *);
	int (*add_device)(struct disk_dump_device *);
	void (*remove_device)(struct disk_dump_device *);
	struct module *owner;
	struct list_head list;
};

static struct disk_dump_type scsi_dump_type = {
	.probe		= scsi_dump_probe,
	.add_device	= scsi_dump_add_device,
	.remove_device	= scsi_dump_remove_device,
	.owner		= THIS_MODULE,
};

scsi_dump registers them by register_disk_dump_type(). The probe() handler is called from diskdump.o to determine whether the selected kdev_t belongs to scsi_mod.o. If probe() returns 0, diskdump.o creates a disk_dump_device structure and calls add_device(). The add_device() handler of scsi_dump.o populates the disk_dump_device_ops of the disk_dump_device.
disk_dump_device_ops is the set of handlers which are called from diskdump.o when a panic occurs:

struct disk_dump_device_ops {
	int (*sanity_check)(struct disk_dump_device *);
	int (*quiesce)(struct disk_dump_device *);
	int (*shutdown)(struct disk_dump_device *);
	int (*rw_block)(struct disk_dump_partition *, int rw,
			unsigned long block_nr, void *buf, int len);
};

The handler functions are only called when a panic occurs.

sanity_check() checks if the selected device works normally. A device which returns an error status will not be selected as the dump device.

quiesce() is called after the device is selected as the dump device. If it is SCSI, a host reset is executed and the Write Cache Enable bit of the disk device is temporarily set for the dump operation.

shutdown() is called after the dump is completed. If it is SCSI, a "SYNCHRONIZE CACHE" command is issued to the disk.

rw_block() executes I/O in one-block units. The length of the data is a page size, and it is guaranteed to be physically contiguous. In scsi_dump.o, it issues I/O by calling the queuecommand() handler from the rw_block() handler. The poll handler of the adapter driver is called until the I/O has completed.

- The interface between scsi_dump.o and the adapter driver

The SCSI adapter which supports diskdump prepares the following functions:

	int (*sanity_check)(struct scsi_device *);
	int (*quiesce)(struct scsi_device *);
	int (*shutdown)(struct scsi_device *);
	void (*poll)(struct scsi_device *);

The poll function should call the interrupt handler. It is called repeatedly after queuecommand() is issued, and until the command is completed. The other handlers are called by the handlers in scsi_dump.o which have the same names.

The adapter driver should set its own handlers in the scsi_host_template:

struct scsi_host_template {
(snipped)
#if defined(CONFIG_SCSI_DUMP) || defined(CONFIG_SCSI_DUMP_MODULE)
	/* operations for dump */
	/*
	 * dump_sanity_check() checks if the selected device works normally.
	 * A device which returns an error status will not be selected as
	 * the dump device.
	 *
	 * Status: OPTIONAL
	 */
	int (* dump_sanity_check)(struct scsi_device *);
	/*
	 * dump_quiesce() is called after the device is selected as the
	 * dump device. Usually, host reset is executed and Write Cache
	 * Enable bit of the disk device is temporarily set for the
	 * dump operation.
	 *
	 * Status: OPTIONAL
	 */
	int (* dump_quiesce)(struct scsi_device *);
	/*
	 * dump_shutdown() is called after dump is completed. Usually
	 * "SYNCHRONIZE CACHE" command is issued to the disk.
	 *
	 * Status: OPTIONAL
	 */
	int (* dump_shutdown)(struct scsi_device *);
	/*
	 * dump_poll() should call the interrupt handler. It is called
	 * repeatedly after queuecommand() is issued, and until the command
	 * is completed. If the low level device driver supports crash dump,
	 * it must have this routine.
	 *
	 * Status: OPTIONAL
	 */
	void (* dump_poll)(struct scsi_device *);
#endif
};

Supported Drivers
-----------------

Diskdump only works on disks which are connected to adapters that the following drivers control:

	aic7xxx
	aic79xx
	mptscsih

Installation
------------

1) Download software

   1. Linux kernel version 2.6.7
      linux-2.6.7.tar.bz2 can be downloaded from
   2. diskdump kernel patch
      diskdump-0.6.tar.gz can be downloaded from the project page.
   3. diskdumputils
      diskdumputils-0.4.2.tar.bz2 can be downloaded from the project page.
   4. crash command and patch
      crash can be downloaded from. A patch for crash-3.8-5.tar.gz can be downloaded from the project page.

2) Build and Install Kernel

   1. Untar the Linux kernel source
      tar -xjvf linux-2.6.7.tar.bz2
   2. Apply all patches in the diskdump-0.5.tar.gz
   3. Kernel Configuration
      a. make menuconfig
      b. Under "Device Drivers" -> "Block devices", select the following:
         i. Select "m" for "Disk dump support".
      c. Under "Device Drivers" -> "SCSI device support", select the following:
         i. Select "m" for "SCSI dump support".
      d. Under "Kernel hacking", select the following:
         i. Select "y" for "Kernel debugging".
         ii. Select "y" for "Magic SysRq key". (optional)
         iii. Select "y" for "Compile the kernel with debug info".
      e. Configure other kernel config settings as needed.
   4. make
   5. make modules_install
   6. Build an initrd if you need one
   7. Copy the kernel image to the boot directory
      ex. cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.7-diskdump
   8. Reboot

3) Build and Install diskdumputils

   1. Untar the diskdumputils package
      tar -xjvf diskdumputils-0.4.2.tar.bz2
   2. make
   3. make install

4) Build and Install crash

   1. Untar the crash package
      tar -xjvf crash-3.8-5.tar.gz
   2. Apply crash-3.8-5.patch
   3. make
   4. make install

5) Setup

The setup procedure is as follows. First a dump device must be selected. Either a whole device or a partition is fine. The dump device is wholly formatted for dump, so it cannot be shared with a file system or used as a swap partition. The size of the dump device should be big enough to save the whole dump. The size to be written by the dump is the size of the whole memory plus a header field. To determine the exact size, refer to the kernel message output after the diskdump module is loaded:

	# modprobe diskdump
	# dmesg | tail
	disk_dump: total blocks required: 262042 (header 3 + bitmap 8 + memory 262031)

In this case, 262042 is the data size in page-size units that will be written by the diskdump function.

Select the dump partition in /etc/sysconfig/diskdump, as in the following example:

	-------------------
	DEVICE=/dev/sde1
	-------------------

Next, format the dump partition. The administrator needs to execute this once:

	# service diskdump initialformat

Lastly, enable the service:

	# chkconfig diskdump on
	# service diskdump start

To test the diskdump, use Alt-SysRq-C or "echo c > /proc/sysrq-trigger". After completing the dump, a vmcore file will be created during the next reboot sequence, and saved in a directory of the name format /var/crash/127.0.0.1-<date>.

The dump format is the same as the netdump one, so we can use the crash command to analyse it:
	# crash vmlinux vmcore

!!!NOTE!!!
Be careful when you investigate timers/tasklets/workqueues. Diskdump saves the timer/tasklet/workqueue structures and clears them before dumping. Please see the following.

[timer]
Where is the structure saved?
	static tvec_base_t saved_tvec_base;
How is the structure saved?
	tvec_base_t *base = &per_cpu(tvec_bases, smp_processor_id());
	memcpy(&saved_tvec_base, base, sizeof(saved_tvec_base));
How is the structure cleared?
	init_timers_cpu(smp_processor_id());

[tasklet]
Where is the structure saved?
	struct tasklet_head saved_tasklet;
How is the structure saved?
	saved_tasklet.list = __get_cpu_var(tasklet_vec).list;
How is the structure cleared?
	__get_cpu_var(tasklet_vec).list = NULL;

[workqueue]
Where is the structure saved?
	struct cpu_workqueue_struct saved_cwq;
How is the structure saved?
	int cpu = smp_processor_id();
	struct cpu_workqueue_struct *cwq = keventd_wq->cpu_wq + cpu;
	memcpy(&saved_cwq, cwq, sizeof(saved_cwq));
How is the structure cleared?
	spin_lock_init(&cwq->lock);
	INIT_LIST_HEAD(&cwq->worklist);
	init_waitqueue_head(&cwq->more_work);
	init_waitqueue_head(&cwq->work_done);
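The per-block signature formula described in the Safety section is concrete enough to sketch directly. This is Python used purely to illustrate the arithmetic; the real check lives in the diskdump C module:

```python
# Sketch of the per-block signature: words 0-1 are the ASCII magics
# "disk"/"dump", word 2 is the block number, and each later word is
# ((previous word) + 3) * 11, truncated to 32 bits.
def block_signature(block_nr, nr_words=1024):
    sig = [
        int.from_bytes(b"disk", "little"),  # word 0
        int.from_bytes(b"dump", "little"),  # word 1
        block_nr,                           # word 2
    ]
    word = block_nr
    while len(sig) < nr_words:
        word = ((word + 3) * 11) & 0xFFFFFFFF  # keep to 32-bit words
        sig.append(word)
    return sig

sig = block_signature(5)
print(sig[2:6])  # [5, 88, 1001, 11044]
```

At panic time the module would recompute this per block and compare it with what is actually on the partition before trusting the device.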
http://lkml.org/lkml/2004/7/23/26
Hi,

How to set permissions on a list using a web service action? These links do not work anymore:

Nintex Connect - Set List Permissions with Permissions.asmx Webservice
Nintex Connect - Remove permissions on library before adding new permissions

Thanks!

Solved! Go to Solution.

Hi Evgeny,

You can use the SharePoint Permissions web service and use the AddPermission, RemovePermission and UpdatePermission methods. For info about the required parameters see the reference links below.

MSDN-Reference for Permissions Web Service: WebSvcPermissions namespace
MSDN-Reference for Permissions Methods: Permissions methods (WebSvcPermissions)

Your configured action could look like this (of course you have to change the parameter values in your workflow):

Greetings

You can as well use the REST API to achieve that:

To obtain the REQUEST DIGEST TOKEN you can follow this post:...

Regards, Tomasz

Hi Philippe,

I followed exactly the procedure you have there. However, I am getting a 500 internal error message on the web service. The SOAP envelope is as shown here:

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:RemovePermission>
      <m:objectName>Usines</m:objectName>
      <m:objectType>list</m:objectType>
      <m:permissionIdentifier>DOMAIN\myuser</m:permissionIdentifier>
      <m:permissionType>user</m:permissionType>
    </m:RemovePermission>
  </soap:Body>
</soap:Envelope>

I am using the "Run now" simulator directly from within the "Call web service" action. My goal is to be able to:

1- Break inheritance of a newly created list (stop inheriting rights from the site)
2- Add permission to a user as an Owner only to this list

I am unable to make either of the RemovePermission and AddPermission methods work. Do you have any idea why? Are there some pre-requisites?

Many thanks for your help.

Henri

Hi Henri,

unfortunately the 500 error can be caused by various reasons and doesn't really give us any hint on the actual problem.
You can try the following things:

Keep us updated on your progress!

Have a nice weekend,
Philipp

p.s.: Tomasz provided another good solution using the REST API; if everything fails you could give this a try as well.

Hi Tomasz,

Thanks a lot for your quick answer. I already had the "\" in the SOAP envelope. I also confirm that point number 2 is OK (I get the methods).

I checked the ULS, and there's one entry before the 500 internal error message that explicitly says "Impossible to find the user". It is probably linked to the error. However, I checked that both the user running the web service and the user that I want to remove the rights from do exist. Basically, I want to remove the rights for a person inside the "Members" group. Is there something else that I have to check?

Many thanks
Henri

The 500 error is not related to permissions. I have no idea why it wouldn't work for you. Another thing for you to keep in mind is that, if the user whose permission you want to revoke is in the "Members" group and the group itself has permissions to the object, then it doesn't matter if you revoke the permissions from the user himself, because he will still be granted permissions via the group. So what you should do is a bit more complicated imho, because you must remove the user from the SP group.

What SharePoint are you using? What version of Nintex?

Regards, Tomasz

Yes, because I tried this on two different environments and it didn't work. Maybe it's related to the version of Nintex?

The version of Nintex Workflow I am using is Nintex Workflow 2013 (3.1.10.0) and thus SP2013.

On my test, I created a list called "Usines" in which I have deleted all permissions from the list and manually granted access to 2 test users (domain1\user1) and (domain2\user2). When I try the web service RemovePermission with either of these users, it doesn't work!
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:
  <soap:Header>
  </soap:Header>
  <soap:Body>
    <m:RemovePermission>
      <m:objectName>Usines</m:objectName>
      <m:objectType>list</m:objectType>
      <m:permissionIdentifier>domaine1\hsa</m:permissionIdentifier>
      <m:permissionType>user</m:permissionType>
    </m:RemovePermission>
  </soap:Body>
</soap:Envelope>

Regards.
Henri

A possible cause of this 500 error can be claims authentication: if claims auth is enabled in your farm, using the domain\loginname format leads to a 500 error too. You may try providing a claims token to see if it solves this error.

Regards, Kevin

I know that this is a quite old topic, but perhaps somebody will have the same question. I recently faced the same issue in our SharePoint 2016 environment. A 500 error was thrown and the detailed error message was "user not found". The reason for that was claims authentication, as mentioned here. This web service requires an exact account name containing "i:0#.w|" in it. In order to make this service work, you need to provide the account name in the following format: "i:0#.w|domain\username". This was not the case in my previous SP 2010 environment.
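The claims-format requirement from the last reply is easy to get wrong by hand, so a tiny helper to build the identifier can be handy when assembling the SOAP body. This is a hypothetical snippet (the helper name is made up; only the "i:0#.w|domain\username" format itself comes from the thread):

```python
# Hypothetical helper: on a claims-enabled farm, Permissions.asmx wants
# the claims-encoded Windows account name, not plain DOMAIN\username.
def claims_login(domain, username):
    return "i:0#.w|%s\\%s" % (domain, username)

# This value would go into <m:permissionIdentifier>...</m:permissionIdentifier>
print(claims_login("CONTOSO", "jdoe"))  # i:0#.w|CONTOSO\jdoe
```

On classic-auth farms (as the SP 2010 poster noted) the plain DOMAIN\username form works without this wrapping.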
https://community.nintex.com/t5/Nintex-for-SharePoint/Set-list-permissions-using-web-service/m-p/15867
Unused imports

GHC has a series of bugs related to the "report unused imports" flags, including #1148, #2267, #1074, #2436, #10117, #12067. This page describes a new design.

Say that an import-item is either an entire import-all decl (eg import Foo), or a particular item in an import list (eg import Foo( ..., x, ...)). The general idea is that for each use of an imported name, we will attribute that use to one (or possibly more) import-items. Then, any import-items with no uses attributed to them are unused, and are warned about.

- Choose one of these, the "chosen import-item", and mark it "used".
- Now bleat about any import-items that are unused.

For a decl import Foo(x,y), if both the x and y items:

- import Foo dominates import Foo(x). (You could also argue that the reverse should hold.)
- Otherwise choose the textually first one.

Other notes:

- or bar are used. We could have yet finer resolution and report even unused sub-items.
- We should retain the special case of not warning about import Foo (), which implies "instance declarations only".

Lookups go through either RnEnv.lookupGreRn_maybe or RnEnv.lookup_sub_bndr. So in RnEnv.lookupGreRn_maybe, if (gre_prov gre) is (Imported _), and in RnEnv.lookup_sub_bndr, put rdr_name in a new tcg_used_rdrnames :: TcRef (Set RdrName) in TcGblEnv. All the tcg_used_rdrnames are in scope; if not, we report an error and do not add the name to tcg_used_rdrnames.

Other notes:

- Any particular (in-scope) used RdrName is brought into scope by one or more ImportSpecs. You can find these ImportSpecs in the GRE returned by the lookup.
- The unit of "unused import" reporting is one of these ImportSpecs.
- Suppose that rn is a used, imported RdrName, and iss is the [ImportSpec] that brought it into scope. Then, to a first approximation, all the iss are counted 'used'.
- We can compare ImportSpecs for equality by their SrcSpans.
- In TcRnDriver.tcRnImports, save import_decls in a new tcg_rn_rdr_imports :: Maybe [LImportDecl RdrName]in TcGblEnv Algorithm The algorithm for deciding which imports have been used is based around this datatype: data ImportInfo = ImportInfo SrcSpan SDoc (Maybe ModuleName) -- The effective module name [RdrName] -- The names the import provides Bool -- Has it been used yet? [ImportInfo] -- Child import infos We convert import declarations into trees of ImportInfos, e.g. import Foo (a, D(c1, c2)) becomes (only the SDoc and [RdrName] fields are given, as that's the interesting bit) lines as used. When we come to giving warnings, if a node is unused then we warn about it, and do not descend into the rest of that subtree, as the node we warn about subsumes its children. If the node is marked as used then we descend, looking to see if any of its children are unused. Here are how some example imports map to trees of ImportInfo, assuming Foo exports a, b, D(c1, c2). import Foo -> ImportInfo "Foo" ["a", "b", "D", "c1", "c2", "Foo.a", "Foo.b", "Foo.D", "Foo.c1", "Foo.c2"] import qualified Foo as Bar -> ImportInfo "Foo" ["Bar.a", "Bar.b", "Bar.D", "Bar.c1", "Bar.c2"] import qualified Foo (a, D) -> ImportInfo "Foo" [] ImportInfo "a" ["Foo.a"] ImportInfo "D" ["Foo.D"] import qualified Foo hiding (a, D(..)) -> ImportInfo "Foo" ["Foo.b"] import Foo (D(c1, c2)) -> ImportInfo "Foo" [] ImportInfo "D" ["D", "Foo.D"] ImportInfo "c1" ["c1", "Foo.c1"] ImportInfo "c2" ["c2", "Foo.c2"] import qualified Foo (D(..)) -> ImportInfo "Foo" [] ImportInfo "D(..)" ["Foo.D", "Foo.c1", "Foo.c2"] These trees are built by RnNames.mkImportInfo. In RnNames.warnUnusedImportDecls we make two lists of ImportInfos; one list contains all the explicit imports, e.g. import Foo (a, b) and the other contains the implicit imports, e.g. import Foo import Foo hiding (a, b) Then RnNames.markUsages is called for each RdrName that was used in the program. 
The current implementation marks all explicit import as used unless there are no such imports, in which case it marks all implicit imports as used. A small tweak to markUsages would allow it to mark only the first import it finds as used. As well as the RdrNames used in the source, we also need to mark as used the names that are exported. We first call RnNames.expandExports to expand D(..) into D(c1, c2), and then call RnNames.markExportUsages. Normally this just marks the RdrNames as used in the same way that uses in the module body are handled, but it is also possible for an entire module to be "used", if module Foo is in the export list. In this case RnNames.markModuleUsed does the hard work, marking every module imported with that name as used.
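The marking-and-warning pass over these ImportInfo trees can be illustrated outside GHC. Here is a minimal Python analogue (the class and function names are illustrative only; the real implementation is the Haskell code in RnNames described above):

```python
# Sketch of the ImportInfo marking/warning pass: mark nodes whose
# provided names are used, then warn about any unused node without
# descending into its subtree (the warning subsumes its children).

class ImportInfo:
    def __init__(self, label, provides, children=()):
        self.label = label             # e.g. 'Foo' or 'D(..)'
        self.provides = set(provides)  # names this import-item provides
        self.used = False
        self.children = list(children)

def mark_usages(info, name):
    """Mark every node that provides `name` as used."""
    if name in info.provides:
        info.used = True
    for child in info.children:
        mark_usages(child, name)

def subtree_used(info):
    return info.used or any(subtree_used(c) for c in info.children)

def warn_unused(info, warnings):
    if not subtree_used(info):
        warnings.append(f"unused import item: {info.label}")
        return  # do not descend; this warning subsumes the children
    for child in info.children:
        warn_unused(child, warnings)

# import Foo (a, D(c1, c2)), with only `a` and `c1` actually used:
tree = ImportInfo("import Foo", [], [
    ImportInfo("a", ["a", "Foo.a"]),
    ImportInfo("D", ["D", "Foo.D"], [
        ImportInfo("c1", ["c1", "Foo.c1"]),
        ImportInfo("c2", ["c2", "Foo.c2"]),
    ]),
])
for used_name in ["a", "c1"]:
    mark_usages(tree, used_name)

warnings = []
warn_unused(tree, warnings)
print(warnings)  # → ['unused import item: c2']
```

Note how using c1 keeps the enclosing D item alive, while the unused c2 sub-item is reported on its own, matching the finer-resolution reporting discussed above.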
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/UnusedImports
How many times have you looked through your code for a specific div and wasted several minutes until you found it? Or maybe you didn't even find it and tried searching for the div by the class you used to style it, but oops, you don't really remember the name of the class. It sucks, right?

Styled Components to the rescue

This is my experience using Styled Components, a CSS library which gives you a lot of power when implementing CSS code.

Cool thing number 1

Imagine you have the following component:

```jsx
const Component = () => (
  <div className='container'>
    <div className='header'>
      <p>some text</p>
    </div>
    <main>
      <div className='main_left'>
        <p>some text</p>
      </div>
      <div className='main_right'>
        <p>some text</p>
      </div>
    </main>
    <div className='footer'>
      <p>some text</p>
    </div>
  </div>
)
```

Very simple, right? Now picture this:

```jsx
import styled from 'styled-components'

const Container = styled.div`
  // css code here
`
const Header = styled.div`
  // css code here
`
const MainLeft = styled.div`
  // css code here
`
const MainRight = styled.div`
  // css code here
`
const Footer = styled.div`
  // css code here
`

const Component = () => (
  <Container>
    <Header>
      <p>some text</p>
    </Header>
    <main>
      <MainLeft>
        <p>some text</p>
      </MainLeft>
      <MainRight>
        <p>some text</p>
      </MainRight>
    </main>
    <Footer>
      <p>some text</p>
    </Footer>
  </Container>
)
```

Much cleaner, right? Notice that the generated components are not real components (they are styling components) in which you can put JS logic; each one is just a CSS definition wrapping an HTML tag, exported with an easy-to-find name. The way I see it:

HTML tag + class_name = StyledComponent

Cool thing number 2

Something to have in mind: it's reusable! And flexibly reusable.

Why flexibly reusable? On the one hand, you can declare the styled components in another JS file and import them into any React component. On the other hand, you can also do this:

Imagine a situation in which you have a select HTML tag and a button HTML tag that, in the end, you want to look the same.
You have already finished styling the select tag and you are about to start with the button tag. WAIT, try this. Of course, you first declare the Select styled component's styles:

```jsx
const Select = styled.select`
  width: 400px;
  height: 400px;
  background-color: blue;
  border: 1px solid red;
`
```

After doing this you can inherit all the styles from this Select component wherever you want, on another HTML element. I emphasize styles because that's all it inherits, the CSS, so:

```css
width: 400px;
height: 400px;
background-color: blue;
border: 1px solid red;
```

Let's continue. Imagine you want a button with the same styles as the Select. In the component you would use it like this:

```jsx
const Component = () => (
  <Select as="button" />
)
```

What we are saying with as="button" is: take all the styles from Select but render it as a button HTML tag. Note that the attributes that Select now receives are the ones that an HTML button tag would (so, no options).

Cool thing number 3

Now imagine you need to conditionally colour a p HTML tag depending on some state you have in your component, something like this:

```jsx
const Component = () => {
  const [someCondition] = useState(false)

  return (
    <p className={`base ${someCondition ? 'when_true' : 'when_false'}`}>some text</p>
  )
}
```

So, what do I see wrong here?
A couple of things:

- You need to define 2 classes (one for when the condition is true and one for when it is false).
- You will possibly have to create 3 classes (one for the base styles of the p HTML tag, one for the styles applied only when the condition is true, and one for the styles applied only when the condition is false).

In normal CSS code:

```html
<style>
  .base {
    background-color: grey;
    font-size: 1.5rem;
    font-weight: bold;
  }
  .when_true {
    color: blue;
  }
  .when_false {
    color: red;
  }
</style>
```

Now with the power of Styled Components props:

```jsx
import styled from 'styled-components'

const Paragraph = styled.p`
  background-color: grey;
  font-size: 1.5rem;
  font-weight: bold;
  color: ${props => props.conditionName ? 'blue' : 'red'};
`

const Component = () => {
  const [someCondition, setSomeCondition] = useState(false)

  return (
    <Paragraph conditionName={someCondition}>some text</Paragraph>
  )
}
```

Discussion (2)

Nice article! Just a piece of advice to avoid looking for a specific div: there are a lot of tags in HTML that have meaning, like main, header, nav, footer, ... It helps organize your markup and makes the information more accessible for screen readers, I guess 👍

Totally, man. I just used div as it's the most common one and anyone who's new to programming can relate. Thanks for the advice.
https://dev.to/niconiahi/clean-your-code-from-html-tags-with-styled-components-magic-2jdk
Neural Networks with Numpy for Absolute Beginners: Introduction

In this tutorial, you will get a brief understanding of what Neural Networks are and how they have been developed. In the end, you will gain a brief intuition as to how the network learns.

By Suraj Donthi, Computer Vision Consultant & Course Instructor at DataCamp

Artificial Intelligence has become one of the hottest fields of the current day, and most of us willing to dive into this field start off with Neural Networks! But on confronting the math-intensive concepts of Neural Networks, we just end up learning a few frameworks like TensorFlow, PyTorch etc. for implementing Deep Learning models. Moreover, just learning these frameworks without understanding the underlying concepts is like playing with a black box. Whether you want to work in industry or academia, you will be working with, tweaking and playing with the models, for which you need to have a clear understanding. Both industry and academia expect you to have full clarity on these concepts, including the math. In this series of tutorials, I'll make it extremely simple to understand Neural Networks by providing step-by-step explanations. Also, the math you'll need will be at high-school level.

Let us start with the inception of artificial neural networks and gain some inspiration as to how they evolved.

A little bit into the history of how Neural Networks evolved

It must be noted that most of the algorithms for Neural Networks that were developed during the period 1950–2000, and that exist now, are highly inspired by the workings of our brain: the neurons, their structure and how they learn and transfer data. The most popular works include the Perceptron (1958) and the Neocognitron (1980). These papers were extremely instrumental in unwiring the brain code. They try to mathematically formulate a model of the neural networks in our brain.
And everything changed after the godfather of AI, Geoffrey Hinton, formulated the back-propagation algorithm in 1986 (that's right, what you are learning is more than 30 years old!).

A biological Neuron

Our brain consists of about 100 billion such neurons which communicate through electrochemical signals. Each neuron is connected to hundreds or thousands of other neurons which constantly transmit and receive signals. But how can our brain process so much information just by sending electrochemical signals? How can the neurons understand which signal is important and which isn't? How do the neurons know what information to pass forward?

The electrochemical signals consist of strong and weak signals. The strong signals are the ones that dominate which information is important. So only a strong signal, or a combination of them, passes through the nucleus (the CPU of neurons) and is transmitted to the next set of neurons through the axons.

But how are some signals strong and some signals weak? Well, through millions of years of evolution, the neurons have become sensitive to certain kinds of signals. When a neuron encounters a specific pattern, it gets triggered (activated) and as a consequence sends strong signals to other neurons, and hence the information is transmitted.

Most of us also know that different regions of our brain are activated (or receptive) for different actions like seeing, hearing, creative thinking and so on. This is because the neurons belonging to a specific region in the brain are trained to process a certain kind of information better and hence get activated only when certain kinds of information are being sent. The figure below gives us a better understanding of the different receptive regions of the brain.

It has been shown through Neuroplasticity that the different regions of the brain can be rewired to perform totally different tasks, such as the neurons responsible for touch sensing being rewired to become sensitive to smell.
Check out this great TEDx video below to learn more about neuroplasticity.

But what is the mechanism by which the neurons become sensitive? Unfortunately, neuroscientists are still trying to figure that out! But fortunately enough, the godfather of AI, Geoff, has saved the day by inventing back propagation, which accomplishes the same task for our artificial neurons, i.e., sensitizing them to certain patterns.

In the next section, we'll explore the working of a perceptron and also gain a mathematical intuition.

Perceptron/Artificial Neuron

From the figure, you can observe that the perceptron is a reflection of the biological neuron. The inputs combined with the weights (wᵢ) are analogous to dendrites. These values are summed and passed through an activation function (like the thresholding function shown in the figure). This is analogous to the nucleus. Finally, the activated value is transmitted to the next neuron/perceptron, which is analogous to the axons.

The latent weights (wᵢ) multiplied with each input (xᵢ) depict the significance (strength) of the respective input signal. Hence, the larger the value of a weight, the more important the feature. You can infer from this architecture that the weights are what is learned in a perceptron so as to arrive at the required result. An additional bias (b, here w₀) is also learned.

Hence, when there are multiple inputs (say n), the equation can be generalized as follows:

Finally, the output of the summation (assume it is z) is fed to the thresholding activation function, where the function outputs

An Example

Let us consider our perceptron performing as logic gates to gain more intuition. Let's choose an AND gate. The truth table for the gate is shown below:

The perceptron for the AND gate can be formed as shown in the figure. It is clear that the perceptron has two inputs (here x₁ = A and x₂ = B).
We can see that for inputs x₁, x₂ and x₀ = 1, setting their weights as (w₀ = -0.5, w₁ = 0.6, w₂ = 0.6, as in the code below) and keeping the threshold function as the activation function, we can arrive at the AND gate.

Now, let's get our hands dirty, codify this and test it out!

```python
def and_perceptron(x1, x2):
    w0 = -0.5
    w1 = 0.6
    w2 = 0.6
    z = w0 + w1 * x1 + w2 * x2
    thresh = lambda x: 1 if x >= 0.5 else 0
    r = thresh(z)
    print(r)

>>> and_perceptron(1, 1)
1
```

Similarly, for the NOR gate the truth table is:

The perceptron for the NOR gate will be as below. You can set the weights as (w₀ = 0.5, w₁ = -0.6, w₂ = -0.6, as in the code below) so that you obtain a NOR gate. You can go ahead and implement this in code:

```python
def nor_perceptron(x1, x2):
    w0 = 0.5
    w1 = -0.6
    w2 = -0.6
    z = w0 + w1 * x1 + w2 * x2
    thresh = lambda x: 1 if x >= 0.5 else 0
    r = thresh(z)
    print(r)

>>> nor_perceptron(1, 1)
0
```

What you are actually calculating…

If you analyse what you were trying to do in the above examples, you will realize that you were actually trying to adjust the values of the weights to obtain the required output. Let's consider the NOR gate example and break it down into very minuscule steps to gain more understanding.

What you would usually do first is simply set some values for the weights and observe the result, say

Then the output will be as shown in the table below:

So, how can you fix the values of the weights so that you get the right output? By intuition, you can easily observe that w₀ must be increased, and w₁ and w₂ must be reduced, or rather made negative, so that you obtain the actual output. But if you break down this intuition, you will observe that you are actually finding the difference between the actual output and the predicted output and finally reflecting that on the weights…

This is a very important concept that you will be digging deeper into, and it is the core of the ideas behind gradient descent and also backward propagation.

What did you learn?

- Neurons must be made sensitive to a pattern in order to recognize it.
- So, similarly, in our perceptron/artificial neuron, the weights are what is to be learned.

In the later articles you'll fully understand how the weights are trained to recognize patterns, and also the different techniques that exist. As you'll see later, neural networks are very similar to the structure of biological neural networks.

While it is true that we learnt only a few small concepts (although very crucial ones) in this first part of the article, they will serve as a strong foundation for implementing Neural Networks. Moreover, I'm keeping this article short and sweet so that too much information is not dumped at once, which will help you absorb more!

In the next tutorial, you will learn about Linear Regression (which can otherwise be called a perceptron with a linear activation function) in detail and also implement it. The Gradient Descent algorithm, which helps learn the weights, is described and implemented in detail. Lastly, you'll be able to predict the outcome of an event with the help of Linear Regression. So, head on to the next article to implement it!

You can check out the next part of the article here: Neural Networks with Numpy for Absolute Beginners — Part 2: Linear Regression

Bio: Suraj Donthi is a Computer Vision Consultant, Author, Machine Learning and Deep Learning Trainer.

Original. Reposted with permission.

Related:
- Neural Networks – an Intuition
- The Backpropagation Algorithm Demystified
- Mastering the Learning Rate to Speed Up Deep Learning
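The error-reflection idea from the "What you are actually calculating" section can be turned into a working perceptron learning rule. Below is a minimal Python sketch (not part of the original article); the learning rate, epoch count and zero starting weights are arbitrary choices:

```python
# Minimal perceptron learning-rule sketch for the NOR gate. The update
# reflects the difference between the actual and predicted output back
# onto the weights, exactly the intuition described above.

def thresh(z):
    # Threshold activation, same as in the article's gate examples
    return 1 if z >= 0.5 else 0

def train_nor(epochs=25, lr=0.1):
    # NOR truth table: output is 1 only when both inputs are 0
    data = [((0, 0), 1), ((0, 1), 0), ((1, 0), 0), ((1, 1), 0)]
    w0 = w1 = w2 = 0.0  # arbitrary starting weights
    for _ in range(epochs):
        for (x1, x2), target in data:
            error = target - thresh(w0 + w1 * x1 + w2 * x2)
            # reflect the error back onto the weights
            w0 += lr * error
            w1 += lr * error * x1
            w2 += lr * error * x2
    return w0, w1, w2

w0, w1, w2 = train_nor()
print([thresh(w0 + w1 * x1 + w2 * x2)
       for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [1, 0, 0, 0], matching the NOR truth table
```

Starting from all-zero weights, the rule pushes w₀ up and w₁, w₂ negative until the outputs match, which is the same adjustment the article derives by intuition.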
https://www.kdnuggets.com/2019/03/neural-networks-numpy-absolute-beginners-introduction.html
The New Sencha Eclipse Plugin

We've been working on various tools that make a Sencha developer's life easier, and we started with Sencha Architect. Our new plugin makes it even easier to use Architect by providing a full set of code completion and code assistance features for Eclipse. To start, we included the eBay open source project VJET, which provides a set of base functionality for giving Eclipse a much richer understanding of JavaScript, so you get stronger typing and a better IDE experience when using JavaScript in Eclipse. But in order to make it even more useful for Sencha developers, we extended VJET in various ways to better understand Ext JS-specific constructs, like our class system, extends and more. You get support for the built-in Ext JS classes (like Ext.panel.Panel) and also your custom types, and any subclasses you make of the Ext JS types. It puts an incredible amount of power into your hands and makes developing Ext JS applications faster and easier.

To get started, you'll need to have the new version of Sencha Complete or Sencha Complete: Team, which include the plugin. Once you follow the install instructions, you'll need to install the type library that corresponds to the version of Ext JS that you're using. The type library contains metadata about the framework that describes built-in methods, classes, mixins, etc., and ensures the plugin provides the correct code assistance. The Sencha Eclipse Plugin currently supports Ext JS 4.0.7 and Ext JS 4.1. Load the type library, and once you've switched your Eclipse perspective to the VJET perspective, you'll get code assistance and syntax help as you type. Since the plugin is built directly into the Eclipse code intelligence model, you'll be able to use many familiar Eclipse features such as proposals, outlining, object hierarchy, call hierarchy, and more. I'll run through a few of the major features that the Sencha Eclipse Plugin provides for Ext JS developers.
Ext JS Method Proposals

Built-in Ext JS types and classes are well known to the Sencha Eclipse Plugin. For example, Ext.define has two signatures, and when you type "Ext.", you get the full list of items contained in the Ext namespace, and for "define", you get proposals for both signatures. The plugin also supports mixed input types, so if you have a method that can accept (for example) a string, an object or an array, the plugin suppresses errors and allows those mixed types to be passed in.

Support for Ext.application

You set configs like the appFolder, defaultUrl, viewport configuration options and more in Ext.application. When you start writing your application entry point with the Sencha Eclipse Plugin, all the configuration options for Ext.application are proposed to you as you type.

Full Class System Support, including Ext.define and Mixins

Not only does the Sencha Eclipse Plugin support built-in Ext JS types, it also supports your custom classes and your subclasses of built-in types. If you were to Ext.define("myclass", { extend: "Ext.Panel" }) as you were creating your subclass, you would get access, with code completion, to all the supertype's methods. In addition to subclassing, mixins are fully supported. When you mix in a class, all the methods for that mixin get proposed to you as you type, so you get full access to the many powerful mixins that are used throughout Ext JS.

Automatic Support for Getters and Setters

Imagine your application has a config called "enabled". Ext JS automatically generates getEnabled and setEnabled for you as a part of the class system. The Sencha Eclipse Plugin understands that framework behavior, and as you type "this.", you'll see proposed completion options for getEnabled and setEnabled, even though those are generated at runtime. It makes it easier to use proper accessor patterns when writing to your application configs.
These are just a few of the many code completion and Ext JS-specific patterns that the Sencha Eclipse Plugin supports. Since the plugin is based on VJET, you get many more features in addition to all the advanced Ext JS-specific code completion, including catching common mistakes like mismatched curly braces, mismatched quotes, and the like. It makes coding in JavaScript so much easier -- we hope you enjoy using it as much as we have enjoyed building it. You can get the Sencha Eclipse Plugin through Sencha Complete.

There are 36 responses.

Rafael Carvalho (2 years ago): The good thing is that I bought the complete pack a month ago. I would have waited if I knew it was coming…

Jim (2 years ago): Just to be clear—this article lists Sencha Complete: Team edition as the minimum requirement, right? And the Team edition is ~$2.2k/seat with a minimum of 10 seats, is that correct?

Erick The Red (2 years ago): Is this also going to be available for the Aptana Studio 3 IDE?

John Doe (2 years ago): How does this compare to the IntelliJ IDEA development experience? Is it worth creating a plugin for IDEA as well?

Ian Skerrett (2 years ago): Great to see this Eclipse plugin. I know VJET is very cool, so it is great to see you are using it. Sencha is incredibly popular, so this will be a great offering for the Eclipse community. Please feel free to list the plugin on our Eclipse Marketplace.

Ryan (2 years ago): Any plans to add support for Sencha Touch?

Michael Mullany (2 years ago): @Jim - that was an incomplete sentence which I've just fixed - the plugin is part of both Sencha Complete & Sencha Complete: Team.

Marc Fearby (2 years ago): I use Visual Studio 2010 so I'm not expecting any of this kind of love for a long time, if ever. I feel a bit like a leper using Microsoft technologies around here.

Akeem Oriola (2 years ago): I use Visual Studio 2010 too and it's so great with jQuery editing. I can only wish Sencha would take the cue and do a plugin for this IDE and save us some trouble.
venkatesh.R (2 years ago): Do you have any plan to support Sencha Touch 2? Because we are looking for Sencha Touch 2 with an Eclipse plugin. Thanks

Kazuhiro Kotsutsumi (2 years ago): I translated it into Japanese. Provision: Japan Sencha User Group

Sergio Samayoa (2 years ago): Sencha should have crafted Architect as an Eclipse plugin from the beginning so all these goodies were part of it. Now all these goodies are solely for those who craft Ext JS by hand. Regards.

Paul (2 years ago): I bought Sencha Complete July the 31st; is this Eclipse plugin freely available for me or should I renew?

Paul Jones (2 years ago): Are there any plans to license this tool on its own? I would really like to license the Eclipse plugin for my entire development team, but don't have the need or the budget to license Sencha Complete for everyone.

Alok Ranjan (2 years ago): Hello Aditya, thanks for the quick overview. As usual, Sencha rocks! If I buy Sencha Complete, is it possible that I can use Sencha Architect while other components are being used by my developer? I don't often code and it does not look logical to me to buy two copies.

Aditya Bansod (2 years ago): @Paul—you'll need to renew your support for access. @Paul Jones—right now there are no plans to license this separately from Sencha Complete and Sencha Complete: Team. @Alok—you'll need to buy two licenses of Sencha Complete in your case. You can contact our sales team via the Contact Us form on the website for additional details.

Michael (2 years ago): NetBeans support would be most welcome!
santhosh kumar (2 years ago): Better if we include Ext GWT (aka GXT) in the Eclipse plugin and also as part of Sencha Complete (Team).

Phil Smit (2 years ago): You mention at the beginning of this article: "Our new plugin makes it even easier to use Architect by providing a full set of code completion and code assistance features for Eclipse." I use Sencha Architect, but reading through the article it seems this plugin is only intended for Eclipse, or does this also enable code completion in Sencha Architect? In my view, Sencha Architect *should* also have code completion when using the built-in code editor. Can you clarify whether there is code completion for Sencha Architect? Thanks

Atila Hajnal (2 years ago): Are you planning to sell the plugin separately, not included in the Complete or Complete: Team product? It would be nice to have this plugin.

Doug Bieber (2 years ago): Too bad it currently supports 4.1.0, two releases and 6 months behind the current release curve. Plus, we're not paying $900 per developer head. You should unbundle this.

Ben (2 years ago): @Doug: Neither will we. I don't know the numbers, but it seems like a poor sales decision on their end. At least it isn't as insulting as the ones only available in Team. The way I view bundles is as a benefit to the consumer in reduced prices for purchasing multiple products. Sencha treats bundles as a way to try to force users to pay for things they don't want.

Aditya Bansod (2 years ago): @Phil—Sencha Architect does not have code completion. If you want code completion, you'll need to use the Eclipse Plugin alongside Architect.

camelcase (2 years ago): I found references to an open-source VJET extension for Ext JS that seems to do everything mentioned above. Did Sencha acquire that project or are the code bases separate?
If I did buy Sencha Complete but was not interested in Architect, would the Sencha VJET extension offer benefits for regular JavaScript development in Eclipse targeting Ext JS?

Aditya Bansod (2 years ago): @camelcase—The plugin is built on top of VJET but adds in a ton of Ext JS / Sencha-specific features, such as understanding our class system and more.

Westy (2 years ago): Sounds good, but you'll have to prise Sublime Text 2 from my cold dead hands! :D

Travis Phillips (2 years ago): Is there documentation on this plugin? Just to get a basic understanding of how to use it? Doesn't it install templates, so I can click "new", "create project" and Sencha is there to build out a basic layout for Touch?

Aditya Bansod (2 years ago): @Travis—you can get the trial here:

Yevgen (2 years ago): Bad that this plugin is available only under Complete and Team. What's the problem with selling it separately, or including it in the Ext JS package for an additional price? For example, I have Architect, and am planning to get Ext JS (I mean the core lib license), and I don't need most of what Complete is offering, but I'm really interested in the Eclipse plugin, because it alone can speed up my dev.

Chris (2 years ago): I agree with Doug and Ben. I was so excited reading about the Eclipse plugin I was mentally preparing myself to switch IDEs! Disappointed to find out that won't be happening because of the ridiculous bundling. Not worth the price tag if you don't need all of the other stuff in the Sencha Complete bundle. If you had confidence in your products to sell themselves and truly cared about your customer base, you would offer it separately. I hope you'll at least consider it.

Bruce (2 years ago): When will a JetBrains IDEA plugin be made available?

chazmanian (2 years ago): Those of us who want Sencha to unbundle the Eclipse plugin should start an online petition…

Owen (2 years ago): I believe releasing this plugin in a free mode would help increase the uptake of Sencha products by giving people a better sense of how easy it is to use.
It can be a bit bewildering at first, or if you are coming from some of the older Ext JS versions to the newer ones, where there have been many changes made.

gaoxiongxue (2 years ago): I would like to see the Ext JS operating principle.

Chris (2 years ago): @Owen - I agree. I've had a hard time convincing colleagues to make the jump to Ext JS at work because of the learning curve, so I'm the only one trying to use it. The plugin would certainly help increase the chance of adoption here. Too bad the Sencha folks are too short-sighted to see that.
http://www.sencha.com/blog/the-new-sencha-eclipse-plugin
Name | Synopsis | Interface Level | Parameters | Description | Context | See Also | Warnings

Synopsis
    #include <sys/types.h>
    #include <sys/ddi.h>

    void bzero(void *addr, size_t bytes);

Interface Level
    Architecture independent level 1 (DDI/DKI).

Parameters
    addr
        Starting virtual address of memory to be cleared.
    bytes
        The number of bytes to clear starting at addr.

Description
    The bzero() function clears a contiguous portion of memory by filling it with zeros.

Context
    The bzero() function can be called from user, interrupt, or kernel context.

See Also
    bcopy(9F), clrbuf(9F), kmem_zalloc(9F)

Warnings
    The address range specified must be within the kernel space. No range checking is done. If an address outside of the kernel space is selected, the driver may corrupt the system in an unpredictable way.
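As a user-space illustration of what bzero(9F) does (fill a contiguous memory region with zeros, with no range checking), here is a small Python ctypes sketch. It mimics the operation with memset rather than calling the kernel routine itself, which is only reachable from driver code:

```python
import ctypes

# Allocate a 16-byte buffer filled with 0xff, then zero its first
# 8 bytes, analogous to bzero(addr, 8) on a kernel buffer.
buf = ctypes.create_string_buffer(b"\xff" * 16, 16)
ctypes.memset(ctypes.addressof(buf), 0, 8)

print(buf.raw)  # first 8 bytes are now zero, the rest untouched
```

Like bzero, memset performs no bounds checking: passing a length larger than the buffer corrupts adjacent memory, which is exactly the hazard the Warnings section describes.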
http://docs.oracle.com/cd/E19082-01/819-2256/6n4icm04p/index.html
On Mon, 2005-10-10 at 15:36 +0100, John Kaputin wrote: > > I'm thinking about modifying the xxxElement interfaces to use String and > String[] instead of QName and QName[] and change the Reader to just store > the qname string from a WSDL attribute, instead of converting it to a QName > object at parse-time. I'm confused .. what's a "qname string"? > The QName object still gets created at some point > for the component model interfaces, but now the validator also has access > to the original qname string for diagnostic error reporting if necessary. Unless namespaces are resolved at parse time you have to retain lots of other info .. Sanjiva. --------------------------------------------------------------------- To unsubscribe, e-mail: woden-dev-unsubscribe@ws.apache.org For additional commands, e-mail: woden-dev-help@ws.apache.org
http://mail-archives.apache.org/mod_mbox/ws-woden-dev/200510.mbox/%3C1128957467.21550.41.camel@localhost.localdomain%3E
Hi, in this Instructable I want to show you how to build your own Linear-Servo-Motor for Linear-Movement-Applications. But first, the story behind this Instructable: When I visited one of the biggest trade fairs for Automation here in Germany, I saw a lot of Linear-Motors in Industry Applications. It was quite astonishing how fast these Motors can move with incredible precision. I thought it would be really cool if I could adapt this kind of motor in a Laser or 3D-Printer Application. Sadly, these Motors are very expensive (>1000€ per piece), so buying a Linear-Motor is not an option anymore. But after some research I thought it would be possible to build my own Linear-Motor. Theoretically, a Linear-Servo-Motor is just a three-phase synchronous motor with closed-loop position feedback. So I started to create CAD-Drawings and Electronics. After one month of research and improvements I can say I have built my own working Ironless Linear-Servo-Motor. However, there are some points which need to be improved, especially in the position regulation. This project helped me a lot to understand how a Linear-Motor works in general, and how to program a closed-loop controller for it.

Specification:
Travel-Distance: 290mm max.
Travel-Speed: 200mm/s (at 24V; I think much more is possible, but my coils are getting too hot above 24V)
Accuracy: 0.3mm (remember, there is just a 400 pulses/rev encoder)
Cost: <100€ if you have all the tools like a CNC-Mill and 3D-Printer at home

Videos:
Fast movements; the position is set in the Arduino-File:
Fixed position; the Motor tries to withstand external forces to keep its position:

### If you like my Instructable, please vote for me in the Epilog-Contest 10 ###
### You can see this motor live at Maker Faire Ruhr 2019 ###

Step 1: The Theory of the Linear Motor

Theoretically, a Linear Motor is just an unrolled three-phase Motor.
You control the linear motor like a rotary motor, by connecting a three-phase sinusoidal current to each of the coils. The coils create a periodically moving magnetic field, which results in a force that drives the motor. For a better picture, please take a look at this Wikipedia article. Sadly, the information necessary to build an ironless linear motor isn't described that well in the article, so I have written down my own experiences from building this motor. At first it sounds simple ("just three coils between some magnets"), but after some tests I can say it was really hard to find the right parameters:

The arrangement of the coils relative to the magnets: The coils need to be arranged relative to the magnets as seen above; this arrangement creates the optimal force on the three coils. Because I chose to build an ironless linear motor, meaning there is no iron or any other high-permeability material inside the coils, there are a few points you need to consider before starting to build your own motor:
- The distance between the lower and upper magnets: One of the main problems of a motor is the air gap between the coils and magnets. On the one hand, the magnetic field between the magnets increases as the distance between them decreases. On the other hand, the coils are still between the magnets; they need to be fixed in position and also need to withstand the applied current. After some experiments with different kinds of coil fixtures and distances, I found that the best distance between the magnets is 6mm (this is not the mathematically perfect distance, but it is the closest distance I could reach, considering that I make the fixture and the coils myself).
- The iron back circuit: As said above, the reluctance of the magnetic circuit is increased by low-permeability materials like air.
But if you use a high-permeability material like iron, the reluctance of the magnetic circuit drops significantly. The conclusion is to use iron sheets on the outside to mount the magnets. Thanks to these two sheets of iron alone, the magnetic flux between the two magnets is increased.
- The coils: The installed coils have a dimension of 16x26mm and are made out of 0.2mm insulated copper wire. The coils are just 3mm thick. You will find more information about the coils in Step 5.

Step 2: Mechanical Parts
The complete motor is designed with Fusion 360. Below you will find all required parts and files for the motor: Fusion 360 file: Standard parts: nuts, screws and washers: CNC-milled parts: All installed aluminium parts are milled on my DIY CNC router. The most complex part to machine is the carriage fixture, because this part requires two-sided machining. The steel parts for the forcer are handmade, because my CNC machine does not have the capability to mill steel. You can download all the CAD files at the bottom of this step. 3D-printed parts: The 3D-printed parts are made out of PLA with a resolution of 0.1mm. All required .stl files are available at the bottom of this step. If you don't have a 3D printer, you can buy the 3D-printed parts here.

Step 3: Electrical Parts
Electronic parts for the motor: Electronic parts for the PCB: Here are all the required parts for the linear motor. I tried to source all parts as cheaply as possible. Here are all the files for the required PCB. I have attached all files for manufacturing: Gerber files, Eagle CAD files and even etch templates, in case you want to create your PCB with the toner-transfer method. I had the PCB made by a professional manufacturer. The PCB is single-sided, so it is easy to make.

Step 4: Building the Motor: the Magnetic Rail
What you need for this step: What you have to do: First, take the upper steel plate and place 24 magnets with alternating polarity on it.
To get the right distance of 3.33mm between the magnets, I created an alignment tool (see picture [2] below). Once all the magnets are in the right position, fix them in place with some super glue. Now repeat this procedure with the bottom steel plate. Then combine the upper and lower steel plates using the six M4 screws, together with the "spacer" aluminium part. Because the magnets are so strong, I recommend using a piece of wood or something similar to protect the magnets while assembling the steel plates.

Step 5: The Coils
What you need for this step: This is one of the most complicated steps of the motor. I personally needed around ten iterations before I found coil dimensions that actually worked. You have to find the right balance between current tolerance and the diameter of the coil. If these two values fit together, you will have a good coil. Because there is no ferromagnetic material in the forcer to amplify the magnetic field, the field from the coils needs to be strong enough to move the motor on its own.

Specifications of my coils:
- 0.2mm copper wire
- 15 Ohm resistance
- roughly 100m of wire

Now I will show you how I made the coils: What you have to do: First, print out my tool for winding the coils; it simplifies the process of spool winding. Take the winding tool and attach some baking paper to both sides of the tool; this will be useful in the next steps. After that, I use a drilling machine to wind the wire onto the tool. The wire needs to fill the complete space between the two plates. Once finished, use some super glue to keep the wires in position; thanks to the baking paper, the glue will not stick to the fixture ;). Once the glue has dried, you can carefully remove the fixture. The 3D-printed inlay of the coil is not removed; it stays there forever. You need to repeat this step three times.
Finally, take the "forcer" aluminium part and place the three coils inside the three pre-milled pockets. To hold the coils in place I used some Kapton tape. The advantage of Kapton tape is that it is very thin but also heat resistant, which is ideal for this application.

Step 6: The Carriage
What you need for this step: What you have to do: First, attach the coil module to the "carriage" aluminium part using two M3x8mm screws. After that, connect the coils to the wires. The coils need to be connected in delta (triangle) configuration. The connection is simply made with a terminal connector. Then it is time to close the connection box; for that, use the "forcer_top.stl" file. Finally, attach the cable chain to the upper aluminium plate.

Step 7: The Encoder Plate
What you need for this step: What you have to do: First, slide the GT2 pulley onto the encoder. Then fix the pulley using the Allen screws inside the pulley. Now take the encoder and connect it to the "encoder_plate", using the three M3x16mm screws and the encoder_spacer. In the following steps, the bearings for the timing belt are attached to the "encoder_plate": Put the bearings, screws and washers together. Repeat this three times: Finally, connect the bearings to the encoder plate:

Step 8: Bringing All Parts Together
What you need for this step: What you have to do: First, put the linear rail at the front of the "magnetic-rail" module. After that, attach the two bumpers on both sides of the rail; the bumpers prevent the carriage from falling out. Now it is time to attach the "foot" aluminium parts to the "magnetic-rail" module. For this connection, I cut four M5 threads inside the aluminium extrusions; the aluminium parts are connected by four M5x20mm screws. For the next step, place the "encoder-plate" module on the "magnetic-rail" module.
The connection is made by four M4x10mm screws. Now connect the "heart" of the motor, the carriage. For the connection to the linear rail, use the four M3x8mm screws. The cable chain is connected by two M3x8mm screws. Finally, the GT2 belt is attached to the rotary encoder. And you are done! The mechanical part of the motor is completed. Now it's time to move on to electronics and programming :-).

Step 9: The PCB
The motor is mainly controlled by an Arduino Nano and the L6234 motor driver chip. As you can see in the schematic, I have also broken out some pins so that there will be connectivity for a step/dir interface (not implemented yet). For building the circuit I recommend manufacturing a PCB. You can do this yourself with the toner-transfer method, or you can order it with the Gerber files from a professional manufacturer, like I did. All the required files are available in the "Electrical Parts" step. Because I could only purchase ten PCBs at a time, I will sell some of these PCBs in my online shop for only 2€.

Step 10: Programming the Arduino
While I was experimenting with the first prototype of the linear motor, I took some inspiration from this article: berryjam.eu; this author uses the same L6234 chip as me and successfully moves a brushless motor slowly and with precision. The problem with this kind of motor control is that there is no feedback on where the rotor is located. But if you want to drive a brushless motor or a linear motor fast, you need to know where the rotor is, to switch the coils at exactly the right position. In brushless ESC controllers this is often done with Hall sensors. But for linear motors with position feedback this is not the best option, because you only know where the coils are when they are in movement. Also, the resolution of this kind of position feedback isn't very accurate.
So for my application, the linear motor, I tried a different way: I installed a rotary encoder, which is connected to the linear motor slider by a belt. This is used to measure the position of the coils, but also to read out the actual position of the slider. With this technique I solved both problems: the perfect switching of the coils, and knowing where the slider is actually located. So I wrote a small program for the Arduino which does exactly this kind of motor control. The complete code:

#include <PID_v1.h>

const int encoder0PinA = 2;
const int encoder0PinB = 3;
volatile int encoder0Pos = 0;

const int EN1 = 5;
const int EN2 = 6;
const int EN3 = 7;
const int IN1 = 9;
const int IN2 = 10;
const int IN3 = 11;

double Setpoint, Input, Output;
PID myPID(&Input, &Output, &Setpoint, 2, 5, 1, DIRECT);

// SPWM (Sine Wave)
int pwmSin[72] = {127, 138, 149, 160, 170, 181, 191, 200, 209, 217, 224, 231,
                  237, 242, 246, 250, 252, 254, 254, 254, 252, 250, 246, 242,
                  237, 231, 224, 217, 209, 200, 191, 181, 170, 160, 149, 138,
                  127, 116, 105, 94, 84, 73, 64, 54, 45, 37, 30, 23,
                  17, 12, 8, 4, 2, 0, 0, 0, 2, 4, 8, 12,
                  17, 23, 30, 37, 45, 54, 64, 73, 84, 94, 105, 116};

enum phase {PHASE1, PHASE2, PHASE3};
int count[] = {0, 24, 48};
int *pos_U = pwmSin;
int *pos_V = pwmSin + 24;
int *pos_W = pwmSin + 48;

const double abstand = 27; //24
int directionPin = 4;
int stepPin = 12;
double step_position = 0.0;
int test;
double position_in_mm = (encoder0Pos) / 40.00;

void setup() {
  pinMode(encoder0PinA, INPUT);
  pinMode(encoder0PinB, INPUT);
  attachInterrupt(0, doEncoderA, CHANGE); // encoder pin on interrupt 0 (pin 2)
  attachInterrupt(1, doEncoderB, CHANGE); // encoder pin on interrupt 1 (pin 3)
  step_position = 0;
  setPwmFrequency(IN1); // Increase PWM frequency to 32 kHz (make it inaudible)
  setPwmFrequency(IN2);
  setPwmFrequency(IN3);
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT); pinMode(IN3, OUTPUT);
  pinMode(EN1, OUTPUT); pinMode(EN2, OUTPUT); pinMode(EN3, OUTPUT);
  digitalWrite(EN1, HIGH); digitalWrite(EN2, HIGH); digitalWrite(EN3, HIGH);
  analogWrite(IN1, *pos_U);
  analogWrite(IN2, *pos_V);
  analogWrite(IN3, *pos_W);
  delay(2000);
  analogWrite(IN1, 0);
  analogWrite(IN2, 0);
  analogWrite(IN3, 0);
  encoder0Pos = 0;
  Input = encoder0Pos / 40.00;
  Setpoint = step_position;
  myPID.SetOutputLimits(-1000, 1000);
  myPID.SetMode(AUTOMATIC);
  myPID.SetSampleTime(1);
}

unsigned long previousMillis = 0;
const long interval = 500;
int ledState = LOW;
int i = 0;

void loop() {
  int positions[2] = {-100.0, -100.00};
  myPID.SetTunings(15, 0, 0.4); //val_1,0,val_
  Input = encoder0Pos / 40.00;
  myPID.Compute();
  drive(Output / 1000);
  Setpoint = positions[i];
  unsigned long currentMillis = millis();
  if (currentMillis - previousMillis >= interval) {
    previousMillis = currentMillis;
    if (i < 1) { i++; } else { i = 0; }
  }
}

double newmap(double x, double in_min, double in_max, double out_min, double out_max) {
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

int berechne() {
  double position_in_mm = (encoder0Pos) / 40.00;
  int vielfaches = position_in_mm / abstand;
  double phaseshift2 = position_in_mm - (vielfaches * abstand);
  // Serial.println(phaseshift2);
  double phaseshift3 = (phaseshift2 - 0) * (72 - 0) / (abstand - 0) + 0;
  //Serial.println(phaseshift3);
  return phaseshift3;
}

//######################## Shift-Array ########################
void shift(int **pwm_sin, int shift_distance, int array_size, phase phase_number) {
  if (shift_distance == array_size) return;
  if (shift_distance > 0) {
    if (count[phase_number] + shift_distance < array_size) {
      *pwm_sin = *pwm_sin + shift_distance;
      count[phase_number] += shift_distance;
    } else {
      int temp = count[phase_number] + shift_distance - array_size;
      *pwm_sin = *pwm_sin - count[phase_number];
      *pwm_sin = *pwm_sin + temp;
      count[phase_number] = temp;
    }
    return;
  }
  if (shift_distance < 0) {
    int temp_distance = array_size + shift_distance;
    shift(pwm_sin, temp_distance, array_size, phase_number);
  }
  return;
}

//########################### ENCODER-INTERRUPT #########################################
void doEncoderA() {
  // look for a low-to-high on channel A
  if (digitalRead(encoder0PinA) == HIGH) {
    // check channel B to see which way the encoder is turning
    if (digitalRead(encoder0PinB) == LOW) {
      encoder0Pos = encoder0Pos + 1; // CW
    } else {
      encoder0Pos = encoder0Pos - 1; // CCW
    }
  } else { // must be a high-to-low edge on channel A
    // check channel B to see which way the encoder is turning
    if (digitalRead(encoder0PinB) == HIGH) {
      encoder0Pos = encoder0Pos + 1; // CW
    } else {
      encoder0Pos = encoder0Pos - 1; // CCW
    }
  }
  //Serial.println(encoder0Pos, DEC); // use for debugging - remember to comment out
}

void doEncoderB() {
  // look for a low-to-high on channel B
  if (digitalRead(encoder0PinB) == HIGH) {
    // check channel A to see which way the encoder is turning
    if (digitalRead(encoder0PinA) == HIGH) {
      encoder0Pos = encoder0Pos + 1; // CW
    } else {
      encoder0Pos = encoder0Pos - 1; // CCW
    }
  } else { // look for a high-to-low on channel B
    // check channel A to see which way the encoder is turning
    if (digitalRead(encoder0PinA) == LOW) {
      encoder0Pos = encoder0Pos + 1; // CW
    } else {
      encoder0Pos = encoder0Pos - 1; // CCW
    }
  }
}

//#################### PWM-Motor #######################
void setPwmFrequency(int pin) {
  if (pin == 5 || pin == 6 || pin == 9 || pin == 10) {
    if (pin == 5 || pin == 6) {
      TCCR0B = TCCR0B & 0b11111000 | 0x01;
    } else {
      TCCR1B = TCCR1B & 0b11111000 | 0x01;
    }
  } else if (pin == 3 || pin == 11) {
    TCCR2B = TCCR2B & 0b11111000 | 0x01;
  }
}

Now I will explain the main procedure of my Arduino program:
1. The coils need to be aligned to a predefined point, so that the Arduino knows where the magnets are located. This is done by applying a three-phase voltage to the coils, so the slider settles into the magnetic field automatically. After that, the position of the motor is defined as zero.
For further movements, the position is then captured by interrupts on pins D2 and D3.
2. The applied sinusoidal voltage needs to be synchronised with the magnet positions. For that, I use the function berechne():

int berechne() {
  double position_in_mm = (encoder0Pos) / 40.00;
  int vielfaches = position_in_mm / abstand;
  double phaseshift2 = position_in_mm - (vielfaches * abstand);
  double phaseshift3 = (phaseshift2 - 0) * (72 - 0) / (abstand - 0) + 0;
  return phaseshift3;
}

The function returns the right index into the PWM array, so that there is always the right field orientation between the coils and the magnets.
3. With this information I am able to drive the motor. This is done by the function drive(). This function receives a double value which stands for the speed and the direction. The actual movement is created by the function shift() followed by analogWrite(). I simply shift the PWM array by 1/3 of a period, which causes a movement in the positive direction. The speed of the motor is controlled by the scale factor, which is the amplitude of the sinusoidal signal.
4. Now there is only one last essential function: the PID control loop. For that purpose I used a ready-made PID library, which can be installed through the Arduino library manager. The PID loop looks at the difference between the actual position and the desired position. The output is a double value from -1.0 to 1.0 which can be used by the drive function.
I have to say this is not the best Arduino code. Yes, it works, but there are still some problems to solve, especially in the velocity and positioning regulation loops. I am also working on an interface with step and direction signals, but there are still some problems to solve, like acceleration ramps and constant velocity. I will try to update the software soon.
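To make the commutation logic above easier to experiment with, here is a small Python model of it. This is an illustration only: the constants (40 encoder counts per mm, the 27mm magnetic period "abstand", the 72-entry table) mirror the Arduino sketch, but the sine table is regenerated from a formula rather than copied verbatim.

```python
import math

COUNTS_PER_MM = 40.0
ABSTAND = 27.0      # length of one electrical period in mm
TABLE_SIZE = 72

# Regenerate the SPWM lookup table (values centred on 127, peak 254).
pwm_sin = [int(round(127 + 127 * math.sin(2 * math.pi * i / TABLE_SIZE)))
           for i in range(TABLE_SIZE)]

def berechne(encoder_pos):
    """Mirror of the Arduino berechne(): map an encoder count to an index
    into the sine table by taking the position modulo one magnetic period
    and rescaling it to the table length."""
    position_in_mm = encoder_pos / COUNTS_PER_MM
    phase_mm = position_in_mm % ABSTAND
    return int(phase_mm * TABLE_SIZE / ABSTAND)

def three_phase_duties(encoder_pos):
    """Read the three coil PWM duties 120 degrees (24 entries) apart,
    like the pos_U / pos_V / pos_W pointers in the sketch."""
    idx = berechne(encoder_pos)
    return tuple(pwm_sin[(idx + k * TABLE_SIZE // 3) % TABLE_SIZE]
                 for k in range(3))

# Travelling exactly one electrical period returns the same index:
assert berechne(0) == berechne(int(ABSTAND * COUNTS_PER_MM))
```

Running `three_phase_duties` over a sweep of encoder positions shows the three duty cycles tracing out sine waves shifted by a third of a period, which is exactly what the coils see.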
So stay tuned for an update, and please leave some suggestions on how I could improve the code :-) The complete Arduino code can be downloaded below. The file was tested with Arduino IDE 1.8.2.

Step 11: Troubleshooting and Future Plans
Troubleshooting
Problem: The motor does not move correctly, but the magnets in the magnetic field change in some way when pushing the carriage by hand.
Solution: The magnets and the coils are not synchronised with each other. The simplest way to fix this is in software. First, print count[0] and berechne() to your serial console, but make sure the drive function is uncommented. These two values should always be the same; if they are not, please adjust encoder0Pos in setup().
Problem: The encoder does not output a stable position.
Solution: The encoder cable shield needs to be grounded; otherwise you will get a lot of noise, which creates unstable signals.
Future Plans
- Update the firmware for better position and velocity control.
- Upgrade to an STM32 microcontroller for more performance.
- Build a second motor and use both inside a CNC-controlled machine.
This is an entry in the Epilog X Contest

46 Discussions
4 weeks ago What size wire did you use? In the BOM + AliExpress link it is 0.15mm, but in the text you're talking about 0.2mm
Reply 24 days ago Sorry, this was my mistake. I meant 0.2mm copper wire. I updated the BOM :-)
8 weeks ago This is fantastic! I'm ordering parts now. Respect to you, Sir.
Reply 8 weeks ago I am glad you like it. I am looking forward to your version of the linear motor :-)
Reply 8 weeks ago Any chance of sending two boards to Australia?
Reply 6 weeks ago Hello! Parts are being made as we speak :) Is the offer for boards still on? Thank you
Reply 5 weeks ago Yes, the offer is still on :-)
Reply 5 weeks ago When, where, how much? :)
Reply 5 weeks ago Here you can buy the PCBs. If the shipping method does not suit you, please leave me a private message.
Question 6 weeks ago What about plastic instead of aluminium for the coil carriage?
Answer 5 weeks ago Plastic would not be a good option, because the coils get hot after some time. I would recommend a coil carriage made of glass fiber and epoxy resin. Then you will have no problems with the heat.
Question 6 weeks ago Have you thought about the forcer acting as a "shorted turn"? I mean, the coil sits on an aluminium plate, which can act like a shorted winding. Maybe cut the forcer from the edge to the center of each winding pocket, 3 cuts in total. Maybe this can help to increase performance a little bit?
Answer 6 weeks ago I had not thought about that yet, but you are right. I don't know how much the performance would be increased by these three cuts, but it is worth a try. Thanks for your feedback. :-)
8 weeks ago That is absolutely awesome! Great project, great documentation, great result... Respect!
8 weeks ago Why not do this project with only two coils and drive it like a stepper motor?
Reply 8 weeks ago This was my personal decision, because I wanted to learn how a three-phase motor works in general. Of course you can build linear stepper motors with only two coils, but then you will need a lot more magnets for the same resolution. Also, stepper motors are not closed loop, so there will be no position feedback.
Reply 8 weeks ago You are right, the resolution is weak, but with microsteps you can increase it a lot, and stepper motors don't always need feedback. However, I understand your point of view.
8 weeks ago There are many errors in the documentation. steel_plate_bottom bore distance: 22mm -> 20mm. Carriage hole distance: 16mm -> 10mm. Distance of magnets: 6mm -> 4.5mm
Reply 8 weeks ago Thanks for finding these errors. I will update it!
Reply 8 weeks ago I want to reproduce it without errors.
When U have a good legible Solution (JAVA)

class Solution {
    public int countNodes(TreeNode root) {
        if (root == null) return 0;
        int left = getHeightLeft(root);
        int right = getHeightRight(root);
        // If the left and right spine heights are equal, the tree is perfect,
        // so with h edges of height it has 2^(h+1) - 1 nodes.
        if (left == right) return (2 << left) - 1;
        // Otherwise recursively count the nodes in the left and right subtrees
        // and add 1 for the root.
        else return countNodes(root.left) + countNodes(root.right) + 1;
    }

    public int getHeightLeft(TreeNode root) {
        int count = 0;
        while (root.left != null) {
            count++;
            root = root.left;
        }
        return count;
    }

    public int getHeightRight(TreeNode root) {
        int count = 0;
        while (root.right != null) {
            count++;
            root = root.right;
        }
        return count;
    }
}
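For readers who want to test the approach outside of LeetCode, here is the same O(log² n) idea sketched in Python with a small harness. The TreeNode class and the complete-tree builder are redefined here, since LeetCode normally supplies the tree.

```python
class TreeNode:
    def __init__(self, val=0):
        self.val = val
        self.left = None
        self.right = None

def count_nodes(root):
    # Same idea as the Java solution: compare left-spine and right-spine
    # heights; if equal, the subtree is perfect and has 2^(h+1) - 1 nodes,
    # otherwise recurse on both children.
    if root is None:
        return 0
    left = height_left(root)
    right = height_right(root)
    if left == right:
        return (2 << left) - 1
    return count_nodes(root.left) + count_nodes(root.right) + 1

def height_left(node):
    h = 0
    while node.left:
        h += 1
        node = node.left
    return h

def height_right(node):
    h = 0
    while node.right:
        h += 1
        node = node.right
    return h

def build_complete(n):
    """Build a complete binary tree with n nodes; children of node i sit
    at indices 2i+1 and 2i+2, as in a binary heap."""
    nodes = [TreeNode(i) for i in range(n)]
    for i, node in enumerate(nodes):
        if 2 * i + 1 < n:
            node.left = nodes[2 * i + 1]
        if 2 * i + 2 < n:
            node.right = nodes[2 * i + 2]
    return nodes[0] if nodes else None
```

Looping `count_nodes(build_complete(n))` over a range of n and comparing against n is a quick way to convince yourself the height-comparison shortcut is correct.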
I'm using the Fx4 SDK in compatibility mode to compile my Fx3 applications; everything seems OK, except for popups. No text is displayed by any popup, neither my custom TitleWindow popped up with PopUpManager.createPopUp() nor my Alert.show boxes... Why could this be happening? thanks, gtb

To fix this I did the following:
1. Remove the compatibility directive: -compatibility-version=3.0
2. Add the following directive as an additional compiler option: -theme PATH_TO_FX4SDK/frameworks/themes/Halo/halo.swc
3. Change the namespaces in my CSS file: for the standard Flex components, @namespace "library://ns.adobe.com/flex/mx"; and for my custom components, @namespace "*";
Finally, make your application's framework conform to the Flex 4 SDK. gtb

If you are using Flex 4.1 then it might be related to one of these bugs: You might want to watch those and try the workarounds listed.

Thanks; possibly, although I think it is more related to the fact that each component in my .css file must explicitly state its appropriate namespace in order for the .css styles to be used... gtb
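As an illustration of the namespace change described in step 3, a migrated stylesheet might look like the sketch below. The selectors and the mx prefix are my own example, not taken from the original post.

```css
/* Prefixed namespace for the standard Flex (MX) components */
@namespace mx "library://ns.adobe.com/flex/mx";
/* Default namespace for custom components */
@namespace "*";

/* Standard component styled through the mx| prefix */
mx|Alert {
    fontSize: 12;
}

/* Custom component resolved through the default "*" namespace */
MyTitleWindow {
    backgroundColor: #FFFFFF;
}
```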
Question: A company's stock currently pays a dividend of $1.25 per share. The company is expected to grow dividends 25% next year, 20% the following year, and 15% the year after that before dividend growth settles down to a long-term average rate of 5% per year. Estimate the intrinsic value per share if the required return on the stock is 11%.
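The question calls for a standard multi-stage dividend discount model. As an illustration of the mechanics (my own sketch, not the site's official answer key), the calculation can be laid out as:

```python
def intrinsic_value(d0, high_growth, long_g, r):
    """Multi-stage dividend discount model: discount each high-growth
    dividend back to today, then add the present value of a
    Gordon-growth terminal value at the end of the high-growth phase."""
    value = 0.0
    d = d0
    for t, g in enumerate(high_growth, start=1):
        d *= 1 + g
        value += d / (1 + r) ** t
    # Terminal value at the end of the last high-growth year.
    terminal = d * (1 + long_g) / (r - long_g)
    value += terminal / (1 + r) ** len(high_growth)
    return value

price = intrinsic_value(1.25, [0.25, 0.20, 0.15], 0.05, 0.11)
# With these inputs the estimate works out to roughly $32.10 per share.
```

Each high-growth dividend ($1.5625, $1.875, $2.15625) is discounted individually at 11%, and the constant-growth tail is valued with the Gordon growth formula at the end of year 3.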
In this article I am going to explain how to access a .NET component from a COM client. I will also give you an example of how to merge two different Word documents into one document using a .NET COM object created with VB.NET. You cannot directly access a .NET component from a COM client. This is achieved by a CCW, a COM callable wrapper that acts as a proxy for the .NET object. To create a COM object for COM clients in .NET, we need to construct the application as a "Class Library." In this type of application, only one class will be created. We have to include the attribute above the public class declaration:

Imports System.Runtime.InteropServices

<ClassInterface(ClassInterfaceType.AutoDual)> _
Public Class YourClass

End Class

ClassInterfaceType is available in the System.Runtime.InteropServices namespace; all we have to do is import that namespace. Include any methods and properties according to your needs. After you are done with your code and before you build, you have to follow the process outlined below. After the build is over, you will get <yourclass>.dll and the xx.snk file. The xx.snk file is the one that you have given in the "signing" tab. Now we have finished creating two files: a DLL file and a strong-name key (.snk) file. To use these files on your target machine, you have to perform the following procedure. First, copy the two files onto your target machine and create the .tlb file using this command on the command line:

regasm <yourclass>.dll /tlb:<anyname>.tlb

After creating the TLB file, we have to put the DLL into the GAC using the command:

gacutil -i <yourclass>.dll

You can check that the <yourclass>.dll file is available on your machine by going to either C:\windows\assembly or C:\winnt\assembly, provided that Windows is installed on the C drive.
After installing the DLL into your GAC, you can use this COM object in any programming language. For example, in VB the <anyname>.tlb file will appear in the "COM" tab of the project reference window. You can check that and use it in your application. A COM object created in .NET is referenced only through a TLB (type library) file, whereas one created in VB is referenced through a DLL file.

Based on the above theoretical explanation, we will now see an example of merging two different Word documents into one document using a .NET COM object created in VB.NET. Before you start, you must decide which version of Word you are using. This is because we have to use interoperability services in order to make .NET work with Office, and for interoperability services Microsoft provides different PIAs (Primary Interop Assemblies) for different Office versions. My example here has Office 2003 working with .NET 2005. Before starting my example, you have to download the PIA for Office 2003 from the Microsoft website and then install and register the PIA on your system. The registration procedure for a particular PIA is given on the Microsoft website itself. After installing the PIA, you can check for it in C:\windows\assembly; Microsoft.Interop.Office.Word.dll should now be available on your system, provided that Windows is installed on the C drive.

In this example, I have one class library that contains three properties and one method, MergeDoc.

Properties:
FilesToMerge
InsertPageBreaks
OutputFilePath

These are write-only properties, i.e. you can set the values, but you can't get the values. After setting these properties, you should call the MergeDoc method. The result will be the merged Word document.

Imports System.Runtime.InteropServices
Imports Microsoft.Office.Core
Imports System.IO

<ClassInterface(ClassInterfaceType.AutoDual)> _
Public Class MergeWord

    ' Array containing the file names (with path) to merge.
    Dim sMergeFiles(0) As String

    ' This property accepts only a file name with path as a string.
    ' After accepting the file path, it is stored in the internal array.
    Private nCol As Integer = 0
    Private sTempFile As String

    Public WriteOnly Property FilesToMerge() As String
        Set(ByVal value As String)
            sTempFile = value
            Call display(sTempFile)
        End Set
    End Property

    Private Sub display(ByVal sTempFile As String)
        sMergeFiles(nCol) = sTempFile
        nCol = nCol + 1
        ReDim Preserve sMergeFiles(nCol)
    End Sub

    ' Property used to insert a page break between each file.
    ' If this is True a page break will appear, otherwise not.
    Private bInsertPageBreak As Boolean = True

    Public WriteOnly Property InsertPageBreaks() As Boolean
        Set(ByVal value As Boolean)
            bInsertPageBreak = value
        End Set
    End Property

    ' Use this property to give the path for saving the merged file.
    Private sOutputFilePath As String

    Public WriteOnly Property OutputFilePath() As String
        Set(ByVal value As String)
            sOutputFilePath = value
        End Set
    End Property

    Public Sub MergeDoc()
        ' Create objects for the missing type and the page break.
        Dim missing As Object = System.Type.Missing
        Dim pageBreak As Object = Microsoft.Office.Interop.Word.WdBreakType.wdPageBreak
        Dim nRow As Integer

        ' Create a new Word application.
        Dim oWord As New Microsoft.Office.Interop.Word.Application

        ' Create a new document; you can create it based on any template.
        Dim oDoc As Microsoft.Office.Interop.Word.Document = oWord.Documents.Add()
        Dim oSel As Microsoft.Office.Interop.Word.Selection
        oSel = oWord.Application.Selection

        Dim sFiles() As String
        sFiles = sMergeFiles

        ' Go through the elements of the array one by one and
        ' insert each file into the new document.
        For nRow = 0 To sFiles.Length - 2
            oSel.InsertFile(sFiles(nRow), missing, missing, missing, missing)
            ' If page breaks are enabled, insert a page break between the documents.
            If bInsertPageBreak = True Then
                oSel.InsertBreak(pageBreak)
            End If
        Next

        ' After merging the files into the new document, save the file
        ' to the path you specified in the OutputFilePath property.
        oDoc.SaveAs(sOutputFilePath, missing, missing, missing, missing, missing, _
                    missing, missing, missing, missing, missing, missing, _
                    missing, missing, missing, missing)

        ' Finally, close and release the objects from memory.
        oDoc.Close()
        oDoc = Nothing
        oWord.Quit(missing, missing, missing)
    End Sub

End Class

After creating this class, you have to follow the steps given in the theoretical explanation in order to use it.
Homework 3
Submission date: January 16th, 2020
Topics
- Sequence models for text generation
- Image generation with a Variational Autoencoder
- Generative adversarial networks
Downloading
The assignment code is available here.
Updates
2020-01-06
- Part 1: Fixed order of arguments in the post_epoch_fn for training.
- Part 2: Correction in the formula for the reparametrization trick: $\sigma^2$ replaced with $\sigma$.
To update, simply replace the original notebooks with the new ones. No code modifications are necessary, but make sure you implemented the reparametrization trick using the correct formula.
2020-01-08
- Part 3: There was an unnecessary break statement in the GAN training block in the notebook.
To update, replace the original notebook. No code modifications are necessary.
Q: How can we run long training blocks in the notebooks without running them interactively in jupyter-lab (e.g. from the command line on the server)?
A: The easiest way is to simply copy the block (and relevant import statements) into a new Python script and run that (with srun/sbatch on the server). A more automated way is to convert the whole notebook to a Python script, for example:
jupyter nbconvert Part1_Sequence.ipynb --to python
And then run it with ipython within srun or sbatch, for example:
srun -c 2 --gres=gpu:1 ipython Part1_Sequence.py
https://vistalab-technion.github.io/cs236781/assignments/hw3
This notebook contains a demonstration of new features present in the 0.45.0 release of Numba. Whilst release notes are produced as part of the CHANGE_LOG, there's nothing like seeing code in action! It should be noted that this release does not contain as much new user facing functionality as usual, a lot of necessary work was done on Numba internals instead! Included are demonstrations of:

- the new typed-list
- caching of @jit(parallel=True) functions
- newly supported NumPy and Python features

First, import the necessary from Numba and NumPy...

```python
from numba import jit, njit, config, __version__, errors, types
from numba.extending import overload
import numpy as np
assert tuple(int(x) for x in __version__.split('.')[:2]) >= (0, 45)
```

As noted in the previous release notebook, Numba Version 0.44 deprecated a number of features and issued pending-deprecation notices for others. One of the deprecations with highest impact was the pending-deprecation of reflection of List and Set types; the "typed-list" demonstrated herein is the replacement for the reflected list. The first important thing to note about the typed-list is that it is instantiated (manually or through type inference) with a fixed single type, and as a result its items must be homogeneous and of that type. This is similar to the typed dictionary added in Numba Version 0.43. The typed-list documentation can be found here and contains further notes and examples.

Demonstration of this feature starts with seeing how to change some code that would be impacted by the deprecation of the "reflected list":

```python
@njit
def foo(x):
    x.append(10)  # changes made here need "reflecting" back to `a` in the outer scope

a = [1, 2, 3]
foo(a)
```

This is the same functionality but using the new typed-list:

```python
from numba.typed import List

@njit
def foo(x):
    x.append(10)

a = List()  # Create a new typed-list
# Add the content to the typed-list, the list type is inferred from the items added
[a.append(x) for x in [1, 2, 3]]
foo(a)  # make the call
```

Taking a look at the output...

```python
from numba import typeof
print(a)          # The list looks like a "normal" python list
print(type(a))    # but is actually a Numba typed-list container
print(typeof(a))  # and it is type inferred as a `ListType[int64]` (a list of int64 items)
```

The typed list behaves the same way both inside and outside of jitted functions, the usual operators "just work"...

```python
def list_demo(jitted, a):
    print("jitted: ", jitted)
    print("input :", a)
    a.pop()
    print("a.pop() :", a)
    a.extend(a)
    print("a.extend(a) :", a)
    a.reverse()
    print("a.reverse() :", a)
    print("slice a[::-2] :", a[::-2])

list_demo(False, a.copy())       # run the demo on a copy of 'a' in a pure python function
print("-" * 20)
njit(list_demo)(True, a.copy())  # run the demo on a copy of 'a' in a jit compiled function
```

Further, typed lists can contain considerably more involved structures than those supported in the reflected list implementation. For example, this is a list-of-list-of-typed-dict being returned from a jitted function:

```python
@njit
def complicated_list_structure():
    a = List()
    for x in range(4):
        tmp = List()
        for y in range(3):
            d = dict()
            d[x] = y
            tmp.append(d)
        a.append(tmp)
    return a

print(complicated_list_structure())
```

In the same manner as with the numba.typed.Dict, it is also possible to instantiate a numba.typed.List instance with a specific type. This is useful in the case that type inference cannot automatically infer the type of the list, for example, if type inference would need to cross a function call boundary. The following demonstrates:

```python
@njit
def callee(a):
    a.append(1j)  # the list is a complex128 type

@njit
def untyped_caller():
    x = List()  # type of `x` cannot be inferred
    callee(x)
    return x

@njit
def typed_caller():
    x = List.empty_list(types.complex128)  # type of `x` is specified
    callee(x)
    return x

# This fails...
try:
    untyped_caller()
except errors.TypingError as e:
    print("Caught error: %s" % e.msg)

# This works as expected...
print("Works fine: %s" % typed_caller())
```

Most fortunately, with thanks to some side effects of the implementation details of the typed-list, the performance is generally good and in a number of use cases excellent, in comparison to the CPython interpreter. For example, racing a list append of all elements of a large array:

```python
def interpreted_append(arr):
    a = []
    for x in arr:
        a.append(x)
    return a

@njit
def compiled_append(arr):
    a = List()
    for x in arr:
        a.append(x)
    return a

arr = np.random.random(int(1e6))  # array of 1e6 elements
assert interpreted_append(arr) == list(compiled_append(arr))

# Interpreter performance
interpreter = %timeit -o interpreted_append(arr)
# JIT compiled performance
jitted = %timeit -o compiled_append(arr)
print("Speed up: %sx" % np.round(interpreter.best/jitted.best, 1))
```

This races walking lists and accessing each element...

```python
@njit
def walk(x):
    count = 0
    for v in x:
        if v == True:
            count += 1
    return count

arr = np.random.random(int(1e6)) < 0.5  # array of 1e6 True/False elements
typed_list = List()
[typed_list.append(_) for _ in arr]
builtin_list = [_ for _ in arr]

# check the results
assert walk(typed_list) == walk.py_func(builtin_list) == walk.py_func(typed_list)

interpreter = %timeit -o walk.py_func(builtin_list)
jitted = %timeit -o walk(typed_list)
print("Speed up: %sx" % np.round(interpreter.best/jitted.best, 1))
```

Caching of @jit(parallel=True) functions

Whilst a small addition on the face of it, the ability to cache functions that are decorated with @jit(parallel=True) is a huge improvement for users of Numba's automatic parallelisation. The parallelisation compilation path is the most involved of all those in Numba, and being able to cache the compilation results for reuse should drastically improve start up performance in certain applications. A quick demonstration:

```python
@njit(parallel=True, cache=True)
def parallel():
    n = int(1e4)
    x = np.zeros((n, n))
    y = np.ones((n, n))
    a = x + y
    b = a * 2
    c = a - b
    d = c / y + np.sqrt(x)
    e = np.sin(d) ** 2 + np.cos(d) ** 2
    return e

parallel()
parallel.stats
```

Newly supported NumPy and Python features:

```python
@njit
def numpy_new():
    arr = np.array([[0, 2], [3, 0]])

    # np.select
    condlist = [arr == 0, arr != 0]
    choicelist = [arr ** 2, arr ** 3]
    print("np.select:\n", np.select(condlist, choicelist, 1))

    # np.flatnonzero
    print("np.flatnonzero:\n", np.flatnonzero(arr))

    # windowing functions...
    print("np.bartlett:\n", np.bartlett(5))
    print("np.blackman:\n", np.blackman(5))
    print("np.hamming:\n", np.hamming(5))
    print("np.hanning:\n", np.hanning(5))
    print("np.kaiser:\n", np.kaiser(5, 5))

numpy_new()
```

```python
@njit
def demo_range():
    myrange = range(5, 500, 27)
    print("start:", myrange.start)
    print("stop :", myrange.stop)
    print("step :", myrange.step)
    print(32 in myrange)
    print(7 in myrange)

demo_range()
```

Also, the inspect_types method on the dispatcher now supports the signature kwarg to be symmetric with respect to the other inspect_* methods. As an example:

```python
@njit
def add_one(x):
    return x + 1

add_one(1)
add_one(1.)
add_one(1j)

print("Known signatures:", add_one.signatures)

# show the types with respect to the zeroth signature
add_one.inspect_types(signature=add_one.signatures[0], pretty=True)
```
https://nbviewer.jupyter.org/github/numba/numba-examples/blob/master/notebooks/Numba_045_release_demo.ipynb
SampleMan: PotatoKiller - 1.0

shoot'em'up

Vitaliy Kudasov (kuviman)

3d shoot em up game. Should work with python 2.6, PyOpenGL, pygame.

Links

Releases

SampleMan: PotatoKiller 0.2 — 12 Jun, 2011
SampleMan: PotatoKiller 0.4 — 17 Jun, 2011
SampleMan: PotatoKiller 1.0 — 29 Jun, 2011
SampleMan: PotatoKiller 0.5 — 17 Jun, 2011
SampleMan: PotatoKiller 0.6 — 22 Jun, 2011
SampleMan: PotatoKiller 0.3 — 14 Jun, 2011
SampleMan: PotatoKiller 0.1 — 12 Jun, 2011
SampleMan: PotatoKiller 0.7 — 25 Jun, 2011

Pygame.org account Comments

Lucian Schulte 2011-06-16 18:24:25

You may want to add the HWSURFACE and DOUBLEBUF flags when you set the window. On my machine I got a heavily garbled screen.

self.screen = pygame.display.set_mode((800, 600), HWSURFACE|OPENGL|DOUBLEBUF|fsmode)

As far as using py2exe, you may want to try cxfreeze, it's similar. I can't seem to get any OpenGL stuff to work with it though, so I'm not sure what's wrong.

Lucian Schulte 2011-06-16 19:23:34

Okay, just figured out the py2exe thing for you (I'm trying to do this for a project of my own). I'm not sure what versions you're running of things and there are some weird things, so if you have any questions email me at [email protected]

So first things first, you need to fix up your main.py file. Add these lines at the top:

```python
from ctypes import util
from OpenGL.platform import win32
```

Now in the same directory make a file called setup.py. The contents should be:

```python
from distutils.core import setup
import py2exe, sys, os

origIsSystemDLL = py2exe.build_exe.isSystemDLL
def isSystemDLL(pathname):
    if os.path.basename(pathname).lower() in ("libfreetype-6.dll", "libogg-0.dll", "sdl_ttf.dll"):
        return 0
    return origIsSystemDLL(pathname)
py2exe.build_exe.isSystemDLL = isSystemDLL

setup(windows=['main.py'],
      options={
          "py2exe": {
              "includes": ["ctypes", "logging", "pygame.mixer"],
              "excludes": ["OpenGL"],
          }
      })
```

The 3 DLL files mentioned there are because one of the newer versions of py2exe considers them system DLLs and doesn't copy them, but they aren't system DLLs and will need to be copied.

Now run:

python setup.py py2exe

Copy your data folder, the help.txt, and the settings.txt into the dist folder that is created.

Note in the setup.py script we omitted OpenGL; I believe this needs to be done for PyOpenGL version 3, so just go along with it. Now go to the folder C:\Python26\Lib\site-packages (change the version number to your python) and copy the OpenGL directory into your "dist" directory where the game is. Now run your game: main.exe

Just email me if you have any questions. But I got your game compiled and running as an exe on my machine, so you should be ok.

Виталий Кудасов 2011-06-17 10:23:00

Thanks for the feedback, but copying the OpenGL directory to the dist directory doesn't work for me. The solution was to put it inside library.zip. I've included my py2exe script in the source package.

MilanFIN 2011-06-26 16:30:09

Really nice game to play. I started to miss a sensitivity setting, because my mouse went something like 50 rounds per 20cm move. HEY ALL IT IS EASIEST TO PLAY NEAR A TREE, THEN YOU CAN LAST FOR A LONG TIME XD nice indeed, keep it on

Saluk64007 2011-07-08 03:13:49

This is frightening.
http://pygame.org/project-SampleMan%3A+PotatoKiller-1898-3416.html
COMMANDS AVAILABLE IN TCL

lsearch - DONE
lsearch ?-exact|-glob|-regexp|-command 'command'? ?-bool|-inline? ?-not? ?-nocase? ?-all? list value
Note that unlike Tcl, -exact is the default.

package - DONE
This is much simpler than Tcl. 'package require xyz' simply looks for xyz.tcl or xyz.so anywhere on $auto_path and loads the first one found. To version a package, include it in the name.

source - DONE
string - DONE
clock - DONE
array - DONE
dict - DONE
Except with, incr, for, which probably won't be done.

struct
This command will substitute Tcl's binary command. This is more or less how it should work:

struct get {
    version uint4n
    hdrlen  uint4
    tos     uint8
    totlen  uint16
    id      uint16
    flags   uint3
    fragoff uint13
    srcaddr 4 uint8
    dstaddr uint32
    rest    *
} $binary

struct set { } ... ...

struct get returns a dictionary, according to the given structure. It's possible to specify a variable name, followed by a type specifier or by a number of elements plus a type specifier (the type specifier being non-numerical, this is always unambiguous). The special type * means "all the rest". struct set should work in a similar way, with exactly the same structure, so that it's possible, for example, to use struct get, modify some field (because what we get is a dictionary), then use struct set to write the modified version of the binary data.

scan - DONE
format - DONE
try - DONE

JIM COMMANDS NOT AVAILABLE IN TCL

Larry Smith: Since Jim is a new implementation, it would be wonderful to tidy up one of the clumsier aspects of Tcl: ( and ), equivalent to [ expr { <contents of ()> } ]

onleave
Larry Smith: Suggest "at_return" or "at-return"
This command accumulates scripts to execute just before the current procedure returns to the caller. Every time onleave is called, a new script is accumulated. These scripts are executed in reverse order. If the command is called with an empty string as argument, the current scripts list is cleared. If the command is called without arguments, the current list of registered scripts is returned.
assigned to: Salvatore Sanfilippo

tailcall - DONE

JIM OOP SYSTEM

Design in progress. It may resemble the Odys object system, but I've many changes in mind. The name will obviously be Josh, Jim Object System H?. What did you expect from the guy that called the language Jim? - RS: How about JOE, Jim's Object Extension?

JIM NAMESPACES

They will be for sure very different from Tcl's namespaces, i.e. designed just to resolve name collisions. Only one level of nesting; the dot as separator, like in win32.beep. Ideally Jim namespaces should be similar in the simplicity of the concept to C's static qualifier.

Larry Smith: Actually, nesting namespaces are not that bad, they just have hard-to-read syntax. One of the more elegant features of Honeywell's Gecos 6 OS was their concept of "path expressions". The separator is ">" - and "<" is the equivalent of Unix's "..".

syntax: in (namespace name) do { }

example using a daughter namespace:

in person do {
    set name Bob
    set occupation "town idiot"
}

...the "do" block could use the semantics of the "do...when" exception handler above.

Another, referring to something in a sibling namespace:

in <family do {
    lappend members Bob
}

A path expression referring to the parent namespace:

in < do { }

A path expression referring to a daughter namespace within a sibling namespace:

in <family>Mary do {
    set occupation "hairdresser"
}

Global namespace: >

in >this>that>theother>thing do { }

escargo 5 Oct 2005 - GCOS 6 got these from Multics. Unix reduced the number of characters required by using ., .., and / in pathnames. Also, the trick of using directories with . and .. in them made the notion match pretty well.
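Looping back to the struct proposal above: it is close in spirit to Python's standard struct module, which can serve as a reference point. One difference worth noting is that Python's struct works at byte granularity, so the sub-byte fields in the proposal (uint4, uint13) have no direct equivalent there. The field names and format string below are illustrative only:

```python
import struct

# Unpack a toy 8-byte big-endian header (two unsigned shorts, one
# unsigned int) into a dict keyed by field name, mirroring the
# dictionary that 'struct get' is proposed to return.
fields = ("id", "flags", "length")
raw = struct.pack(">HHI", 7, 3, 1024)
header = dict(zip(fields, struct.unpack(">HHI", raw)))
print(header)  # {'id': 7, 'flags': 3, 'length': 1024}

# The 'struct set' direction is just packing the (possibly modified)
# dictionary back into bytes:
header["flags"] = 1
raw2 = struct.pack(">HHI", header["id"], header["flags"], header["length"])
```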
http://wiki.tcl.tk/13792
I am building a python application which uploads images to google drive. However, after working for some time my google drive upload suddenly stopped working. Whenever I try to initialize the service, the program exits with code -1073740777 (0xC0000417). I have already tried to create a new client_secret.json file with the developer console (also with a completely different Google account) and deleting the drive-python-quickstart.json credential file. My friends do not have this problem with the same code and, as I said, this has worked for me for some time too, but it suddenly stopped working. I am running Windows 10 Pro x64 with Python 3.5 32 Bit. The problem occurs when running this example program (taken from the Google quickstart guide):

```python
from __future__ import print_function
import httplib2
import os

from apiclient import discovery
import oauth2client
from oauth2client import client
from oauth2client import tools

try:
    import argparse
    flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
    flags = None

SCOPES = ''
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Drive API Quickstart'


def get_credentials():
    home_dir = os.path.expanduser('~')
    credential_dir = os.path.join(home_dir, '.credentials')
    if not os.path.exists(credential_dir):
        os.makedirs(credential_dir)
    credential_path = os.path.join(credential_dir,
                                   'drive-python-quickstart.json')

    store = oauth2client.file.Storage(credential_path)
    credentials = store.get()
    if not credentials or credentials.invalid:
        flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
        flow.user_agent = APPLICATION_NAME
        if flags:
            credentials = tools.run_flow(flow, store, flags)
        else:
            credentials = tools.run(flow, store)
        print('Storing credentials to ' + credential_path)
    return credentials


def main():
    """Shows basic usage of the Google Drive API.

    Creates a Google Drive API service object and outputs the names and
    IDs for up to 10 files.
    """
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('drive', 'v2', http=http)

    results = service.files().list(maxResults=10).execute()
    items = results.get('items', [])
    if not items:
        print('No files found.')
    else:
        print('Files:')
        for item in items:
            print('{0} ({1})'.format(item['title'], item['id']))

if __name__ == '__main__':
    main()
```

Okay, I have fixed the problem myself eventually. After a little bit of debugging, I found out the following: The oauth2client can use two different methods for opening a file. It first tries to import the class _Win32Opener, and when the import fails it uses the _FcntlOpener.
The import of _Win32Opener does not fail, but opening and locking a file using the _Win32Opener fails, so the program crashes. To force oauth2client to use the _FcntlOpener, just remove/rename the file _win32_opener.py in your python packages.

Relevant files:

[PythonDir]/Lib/site-packages/oauth2client/contrib/locked_file.py
[PythonDir]/Lib/site-packages/oauth2client/contrib/_win32_opener.py

tl;dr Just remove/rename the file [PythonDir]/Lib/site-packages/oauth2client/contrib/_win32_opener.py
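The rename described above can also be scripted. This is a hedged sketch, not part of the original answer: the helper function is hypothetical, and the throwaway directory merely stands in for the real oauth2client/contrib directory inside site-packages:

```python
import os
import tempfile

def disable_win32_opener(contrib_dir):
    """Rename _win32_opener.py so oauth2client falls back to _FcntlOpener.

    contrib_dir is the path to oauth2client/contrib in site-packages.
    Returns the backup path, or None if the file was not present.
    """
    target = os.path.join(contrib_dir, "_win32_opener.py")
    if not os.path.exists(target):
        return None
    backup = target + ".disabled"
    os.rename(target, backup)
    return backup

# Demonstration against a throwaway directory standing in for
# [PythonDir]/Lib/site-packages/oauth2client/contrib
demo = tempfile.mkdtemp()
open(os.path.join(demo, "_win32_opener.py"), "w").close()
print(disable_win32_opener(demo))  # path ending in .disabled
print(disable_win32_opener(demo))  # None, the file is already renamed
```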
https://codedump.io/share/jBbV77cRHS4p/1/python-google-drive-api-discoverybuild-fails-with-exit-code--1073740777-0xc0000417
getopt (3p)

PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
getopt, optarg, opterr, optind, optopt — command option parsing

SYNOPSIS

#include <unistd.h>

int getopt(int argc, char * const argv[], const char *optstring);
extern char *optarg;
extern int opterr, optind, optopt;

DESCRIPTION
The getopt() function is a command-line parser that shall follow Utility Syntax Guidelines 3, 4, 5, 6, 7, 9, and 10 in the Base Definitions volume of POSIX.1‐2008. If the option takes an argument:

1. If the option was the last character in the string pointed to by an element of argv, then optarg shall contain the next element of argv, and optind shall be incremented by 2.
2. Otherwise, optarg shall point to the string following the option character in that element of argv, and optind shall be incremented by 1.

The getopt() function shall return −1 without changing optind if, when it is called:

- argv[optind] is a null pointer
- *argv[optind] is not the character −
- argv[optind] points to the string "−"

It shall return −1 after incrementing optind if argv[optind] points to the string "−−".

RETURN VALUE
The getopt() function shall return the next option character from the command line, or −1 when all command line options are parsed.

ERRORS

The following sections are informative.

EXAMPLES

Parsing Command Line Options

while ((c = getopt(argc, argv, "ao:")) != -1) {
    .
    .
    .
}

cmd −ao arg path path
cmd −a −o arg path path
cmd −o arg −a path path
cmd −a −o arg −− path path
cmd −a −oarg path path
cmd −aoarg path path

Selecting Options from the Command Line

while ((c = getopt(argc, argv, Options)) != -1) {
    if ((st = strchr(Options, c)) != NULL) {
        dbtype = st - Options;
        break;
    }
}
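For a quick way to experiment with these semantics without writing C, Python's standard getopt module follows the same POSIX conventions. The option string "ao:" below matches the cmd examples above (-a is a flag, -o takes an argument):

```python
import getopt

# Same shape as the C examples: flags list first, then operands.
args = ["-a", "-o", "arg", "path1", "path2"]
opts, operands = getopt.getopt(args, "ao:")
print(opts)      # [('-a', ''), ('-o', 'arg')]
print(operands)  # ['path1', 'path2']

# The clustered form -aoarg parses identically:
print(getopt.getopt(["-aoarg", "path1", "path2"], "ao:")[0])
```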
https://readtheman.io/pages/3p/getopt
In this section, you will learn how to generate the power set of a given set of values. As you already know, a set is a finite or infinite collection of objects in which order and multiplicity have no significance. The power set is the set of all subsets of a given set. For example, if we have a set {x,y,z}, the power set of {x,y,z} will be:

P(S) = { {}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z} }

Description of code:

In the given code, firstly we have specified the set, and using Math.pow(2, len) we get the number of power-set elements, where 'len' is the number of elements in the given set. After that, we have created a binary counter over the number of power-set elements and converted each counter value to a binary String. Then the following code converts each digit of the binary number to the corresponding element in the new set:

for (int j = 0; j < pset.length(); j++) {
    if (pset.charAt(j) == '1')
        set.add(st[j]);
}

and at last we get the power set of the given set.

Here is the code:

```java
import java.util.*;

public class PowerSet {
    public static void main(String[] args) {
        String st[] = { "x", "y", "z" };
        LinkedHashSet hashSet = new LinkedHashSet();
        int len = st.length;
        int elements = (int) Math.pow(2, len);
        for (int i = 0; i < elements; i++) {
            String str = Integer.toBinaryString(i);
            int value = str.length();
            String pset = str;
            // left-pad the binary string with zeros up to 'len' digits
            for (int k = value; k < len; k++) {
                pset = "0" + pset;
            }
            LinkedHashSet set = new LinkedHashSet();
            for (int j = 0; j < pset.length(); j++) {
                if (pset.charAt(j) == '1')
                    set.add(st[j]);
            }
            hashSet.add(set);
        }
        System.out.println(hashSet.toString().replace("[", "{").replace("]", "}"));
    }
}
```

Output:
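The same binary-counter idea is compact enough to cross-check in a few lines of Python. This sketch is an illustration, not part of the original tutorial; note that it enumerates subsets in plain binary-counting order:

```python
def power_set(items):
    """Enumerate all subsets by counting from 0 to 2**n - 1 and using
    each bit of the counter to decide membership of the j-th item."""
    n = len(items)
    return [[items[j] for j in range(n) if (i >> j) & 1] for i in range(2 ** n)]

subsets = power_set(["x", "y", "z"])
print(len(subsets))  # 8
print(subsets)
```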
http://www.roseindia.net/tutorial/java/core/powerset.html
Because Scala generics are built on top of Java generics in the JVM, they suffer the same problems with type-erasure. We effectively lose track of the actual values of the type arguments for the generic type. Thankfully, Scala provides TypeTags to get around this problem. Type tags are explained in detail in other places, such as on the Scala website, and in this Stack Overflow answer, so I won't go into the details here.

Type tags are great, and I use them all the time. Maybe I use them a little too much, because sometimes I go beyond their intended usage, and run into problems. This time around, I tried to use them as keys in a map. As the Stack Overflow answer above points out, type tags are not necessarily equal, as in == equal, even though the two type tags represent an equivalent type. To see this in action, let's boot up the REPL (Scala's interactive shell). I'm going to start it up inside SBT in the emblem project, so later on I can import emblem stuff as well. This procedure should work for any project that declares a dependency on emblem. Prompts are in black, input in blue, non-prompt output in purple, and superfluous output is omitted:

```
bash% git clone
bash% cd longevity
bash% sbt
> project emblem
> console
Welcome to Scala version 2.11.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_25).
scala>
```

Okay, now we're in the REPL. Let's import all the type-taggy stuff from the Scala reflection library:

```
scala> import scala.reflect.runtime.universe._
import scala.reflect.runtime.universe._
```

Let's start with a simple type alias as an example. The type tags are not equal:

```
scala> trait A
defined trait A

scala> type B = A
defined type alias B

scala> typeTag[A] == typeTag[B]
res0: Boolean = false
```

The proper way to determine if these are equivalent types is with the =:= operator on scala.reflect.api.Types.Type, which can be accessed with method TypeTag.tpe:

```
scala> typeTag[A].tpe =:= typeTag[B].tpe
res1: Boolean = true
```

Or, equivalently:

```
scala> typeOf[A] =:= typeOf[B]
res2: Boolean = true
```

Note that TypeTag.== will fail us not only when the types are equivalent as above, but in some circumstances when they are exactly the same type:

```
scala> object x {
     |   trait C
     |   val ctag = typeTag[C]
     | }
defined object x

scala> x.ctag == typeTag[x.C]
res3: Boolean = false

scala> x.ctag.tpe =:= typeTag[x.C].tpe
res4: Boolean = true
```

Thankfully, we don't get a new, non-equal type tag every time the typeTag method is called:

```
scala> typeTag[A] == typeTag[A]
res5: Boolean = true
```

Clearly, type tags are not going to work very well as keys in a map. But emblem's TypeKeys are. Let's give it a try:

```
scala> import emblem.imports._
import emblem.imports._

scala> typeKey[A] == typeKey[B]
res6: Boolean = true

scala> object x {
     |   trait C
     |   val ckey = typeKey[C]
     | }
defined object x

scala> x.ckey == typeKey[x.C]
res7: Boolean = true
```

Because there is an implicit conversion from TypeTag to TypeKey, we can basically use a key anywhere we can use a tag. Common usage of type tags is like so:

```
scala> def grokTag[A : TypeTag] = println(typeTag[A].tpe)
grokTag: [A](implicit evidence$1: reflect.runtime.universe.TypeTag[A])Unit

scala> grokTag[List[_]]
scala.List[_]

scala> grokTag[List[Int]]
scala.List[Int]
```

We can do the exact same thing with type keys:

```
scala> def grokKey[A : TypeKey] = println(typeKey[A].tpe)
grokKey: [A](implicit evidence$1: emblem.imports.TypeKey[A])Unit

scala> grokKey[List[_]]
scala.List[_]

scala> grokKey[List[Int]]
scala.List[Int]
```

Either of the above grokking methods could have alternatively made use of Scala's implicitly:

```
scala> def grokTag[A : TypeTag] =
     |   println(implicitly[TypeTag[A]].tpe)
grokTag: [A](implicit evidence$1: reflect.runtime.universe.TypeTag[A])Unit

scala> def grokKey[A : TypeKey] =
     |   println(implicitly[TypeKey[A]].tpe)
grokKey: [A](implicit evidence$1: emblem.imports.TypeKey[A])Unit
```

You can also manually convert between tags and keys yourself:

```
scala> val tag = typeTag[A]
tag: reflect.runtime.universe.TypeTag[A] = TypeTag[A]

scala> TypeKey(tag)
res8: emblem.TypeKey[A] = TypeKey[A]

scala> val key = typeKey[A]
key: emblem.imports.TypeKey[A] = TypeKey[A]

scala> key.tag
res9: reflect.runtime.universe.TypeTag[A] = TypeTag[A]
```

Of course, the TypeKey.hashCode method is consistent with equals, so we can use them as keys in sets and maps. To see this, let's set up some types to test with:

```
scala> trait A1
scala> type A2 = A1
scala> trait B1
scala> type B2 = B1
scala> trait C1
scala> type C2 = C1
```

And now, some sets:

```
scala> val tagset1 = Set(
     |   typeTag[A1], typeTag[A2],
     |   typeTag[B1], typeTag[B2],
     |   typeTag[C1], typeTag[C2])

scala> val tagset2 = Set(typeTag[A1], typeTag[B1], typeTag[C1])

scala> val keyset1 = Set(
     |   typeKey[A1], typeKey[A2],
     |   typeKey[B1], typeKey[B2],
     |   typeKey[C1], typeKey[C2])

scala> val keyset2 = Set(typeKey[A1], typeKey[B1], typeKey[C1])
```

The sizes of these sets are 6, 3, 3 and 3, respectively. This is how they compare for equality:

```
scala> tagset1 == tagset2
res10: Boolean = false

scala> keyset1 == keyset2
res11: Boolean = true

scala> tagset1.map(TypeKey(_)) == keyset1
res12: Boolean = true
```

Clearly, a set of type tags does not have any conceptual correlation to a set of types. But the set of type keys does.

Above, when we called methods grokTag and grokKey, we specified the type argument explicitly, with calls like grokTag[A] and grokKey[A]. We can also lock down the type argument another way: by explicitly specifying the implicit arguments for TypeTag and TypeKey:

```
scala> grokTag(typeTag[List[Int]])
scala.List[Int]

scala> grokKey(typeKey[List[Int]])
scala.List[Int]
```

We can do the same thing with the tags and keys in our sets:

```
scala> tagset1.foreach(grokTag(_))
C2
A1
B2
A2
B1
C1

scala> tagset2.foreach(grokTag(_))
A1
B1
C1

scala> keyset1.foreach(grokKey(_))
A1
B1
C1

scala> keyset2.foreach(grokKey(_))
A1
B1
C1
```

We can also use type keys as keys in maps. For instance:

```
scala> var counter = Map[TypeKey[_], Int]()
counter: scala.collection.immutable.Map[emblem.TypeKey[_],Int] = Map()
```

Here's a function to increment the counter for a given key:

```
scala> def countKey[A : TypeKey]: Unit =
     |   counter += typeKey[A] ->
     |     (counter.getOrElse(typeKey[A], 0) + 1)
countKey: [A](implicit evidence$1: emblem.imports.TypeKey[A])Unit
```

Let's see how it works:

```
scala> counter(typeKey[A1])
java.util.NoSuchElementException: key not found: TypeKey[A1]
  ... 43 elided

scala> counter(typeKey[A2])
java.util.NoSuchElementException: key not found: TypeKey[A2]

scala> countKey[A1]

scala> counter(typeKey[A1])
res13: Int = 1

scala> counter(typeKey[A2])
res14: Int = 1

scala> countKey[A1]
scala> countKey[A2]
scala> countKey[A1]
scala> countKey[A2]

scala> counter(typeKey[A1])
res15: Int = 5

scala> counter(typeKey[A2])
res16: Int = 5
```

By this point, we've exercised all the basic functionality of a TypeKey. Do you have any ideas of real-life scenarios where these might come in handy?
If you do, please comment, I would love to hear about it. We'll start looking at a more realistic use-case in the next post, when we begin to investigate TypeKeyMaps.
http://scabl.blogspot.com/2015/04/emblem-1.html
C library function - bsearch()

Description

The C library function void *bsearch(const void *key, const void *base, size_t nitems, size_t size, int (*compar)(const void *, const void *)) searches an array of nitems objects, pointed to by base, for a member that matches the object pointed to by key.

Declaration

Following is the declaration for the bsearch() function.

void *bsearch(const void *key, const void *base, size_t nitems, size_t size, int (*compar)(const void *, const void *))

Parameters

- key -- This is the pointer to the object that serves as key for the search, type-casted as a void*.
- base -- This is the pointer to the first object of the array where the search is performed, type-casted as a void*.
- nitems -- This is the number of elements in the array pointed by base.
- size -- This is the size in bytes of each element in the array.
- compar -- This is the function that compares two elements.

Return Value

This function returns a pointer to an entry in the array that matches the search key. If key is not found, a NULL pointer is returned.

Example

The following example shows the usage of the bsearch() function.

```c
#include <stdio.h>
#include <stdlib.h>

int cmpfunc(const void * a, const void * b) {
   return ( *(int*)a - *(int*)b );
}

int values[] = { 5, 20, 29, 32, 63 };

int main () {
   int *item;
   int key = 32;

   /* using bsearch() to find value 32 in the array */
   item = (int*) bsearch (&key, values, 5, sizeof (int), cmpfunc);
   if( item != NULL ) {
      printf("Found item = %d\n", *item);
   } else {
      /* use key here: item is NULL and must not be dereferenced */
      printf("Item = %d could not be found\n", key);
   }

   return(0);
}
```

Let us compile and run the above program, this will produce the following result:

Found item = 32
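Python's standard library offers the same binary-search primitive via the bisect module. A rough analogue of the example above (bisect only reports an insertion position, so the membership check must be done explicitly, and the input must already be sorted, just as with bsearch):

```python
import bisect

values = [5, 20, 29, 32, 63]  # must already be sorted
key = 32

i = bisect.bisect_left(values, key)
if i < len(values) and values[i] == key:
    print("Found item =", values[i])  # Found item = 32
else:
    print("Item =", key, "could not be found")
```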
http://www.tutorialspoint.com/c_standard_library/c_function_bsearch.htm
Description

Synonyms are supported by all major database vendors, including Oracle, DB2 and MySQL. DB2 also allows a CREATE ALIAS statement, which does exactly the same as CREATE SYNONYM. Creating aliases instead of synonyms is not supported by Oracle or MySQL, so I propose that Derby not support creating aliases. Synonyms are not part of the SQL-2003 spec, but are a common SQL statement among major database vendors. The SQL standard doesn't pay as much attention to DDL, so I suspect they skipped synonyms. I will be adding two new DDL statements to Derby:

CREATE SYNONYM <SynonymSchema>.<SynonymName> FOR <TargetSchema>.<TargetName>
DROP SYNONYM <SynonymSchema>.<SynonymName>

Synonyms share the same namespace as tables or views. It is not possible to create a synonym with the same name as a table that already exists in the same schema. Similarly, a table/view can't be created that matches a synonym already present.

Issue Links
- is related to DERBY-6109 Allow CREATE SYNONYM for table function invocations. - Open

Activity

- Second version of the patch. Still to implement: dependency registration and checking (will be submitted as another patch); may make a minor reorg of code to merge resolveTableToSynonym with existing routines.
- Code has been present in the trunk for some time. Code changes were submitted as part of SVN checkins 180459, 189716 and 190182. Feature should be available in the 10.1 release.
- This new feature has been in Derby since 10.1.

Here are some implementation notes; my patch tries to implement the proposed behavior. There are two primary parts to the implementation. First, implement the DDL support, and second, implement runtime mapping of a synonym to its base table/view.

Create synonym DDL

Derby already supports creating functions/procedures using CreateAliasNode and CreateAliasConstantAction. In trying to avoid creating more nodes, I have extended these to also handle synonyms.
After parsing the CREATE SYNONYM DDL, the bind phase performs some checks on the statement, like disallowing a synonym on a temporary table (these don't exist in catalogs), etc. Most of the work is performed in the CreateAliasConstantAction. This tries to map schema information to system catalogs. Some of the constraints are:

1. TargetSchema needs to be stored as a name, rather than a schemaID. This ensures that a synonym stays valid even if the TargetSchema is dropped and recreated. Similarly, a TargetName needs to be stored as a string, instead of a tableID. TargetName need not be present at DDL time as a database object.
2. While I am providing an implementation that allows creating synonyms for tables and views, it is possible to extend this mechanism to other database objects as well, like procedures or functions. Some of the database vendors already support this.

There seem to be several options to map this info to existing catalogs:

1. Use the SYSALIASES catalog to store all synonym info. This could be achieved either by adding more columns to store TargetSchema and TargetTable or by using AliasInfo to store them as a java object. We currently store function/procedure info by creating a RoutineAliasInfo. I am proposing we follow the same approach. Since synonyms share the same namespace with views and tables, we need to check if a table/view is already present before allowing a synonym to be created.
2. It is also possible to add extra columns to SYSTABLES to hold TargetSchema/TargetTable info, if the object refers to a synonym. This approach makes it easy to ensure the same namespace is used for synonyms, tables and views. If synonyms are also allowed to be created for other database objects, then we would have to check for any namespace conflicts. Database upgrade needs to ensure creating these extra columns following a hard upgrade.

CreateAliasConstantAction also needs to catch some error conditions. Attempts to create a cycling synonym reference should result in an error.
This can be achieved by traversing a synonym chain. Also attempts to create a synonym to a table/view that doesn't already exists should raise a warning and succeed. Synonym resolution When a DML statement refers to a synonym, it needs to be resolved to its base table or base view. This can be achieved by traversing a synonym chain by reading AliasDescriptors. Other changes I will also be providing some other related changes to Derby. 1. Enhance dblook schema dumping tool to emit synonym info. Changes are required to the tool and these depend on how the synonym info is stored in the catalogs. 2. Add required dependency registering and checking. These ensure that when a synonym is dropped, for example, all plans that depend on the schema are invalidated.
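The chain traversal and cycle check described above can be sketched in plain Java. This is an illustrative toy, not Derby's actual CreateAliasConstantAction or AliasDescriptor API; the class and method names below are made up:

```java
// Toy model of synonym resolution: names map to target names, and a
// target may itself be a synonym.  Resolution walks the chain until it
// reaches a name that is not a synonym (a base table or view).
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class SynonymChain {
    private final Map<String, String> synonyms = new HashMap<>();

    /** CREATE SYNONYM name FOR target; rejects cyclic chains. */
    public void define(String name, String target) {
        synonyms.put(name, target);
        if (resolveOrNull(name) == null) {
            synonyms.remove(name);   // undo before reporting the error
            throw new IllegalStateException("cyclic synonym chain at " + name);
        }
    }

    /** Follows the chain; returns the base object name, or null on a cycle. */
    public String resolveOrNull(String name) {
        Set<String> seen = new HashSet<>();
        String current = name;
        while (synonyms.containsKey(current)) {
            if (!seen.add(current))
                return null;                 // name repeated: cycle
            current = synonyms.get(current);
        }
        return current;                      // a table or view name
    }

    public static void main(String[] args) {
        SynonymChain chain = new SynonymChain();
        chain.define("s1", "t1");
        chain.define("s2", "s1");
        System.out.println(chain.resolveOrNull("s2")); // prints t1
    }
}
```

Derby would perform the same walk against SYSALIASES rows rather than an in-memory map, but the termination conditions are identical: stop at the first name that is not a synonym, and fail when a name repeats.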
https://issues.apache.org/jira/browse/DERBY-335
Published by Justin Horton. Modified over 3 years ago.

1 Promised Abstract:

2 Session Code: Yukon Features For SkyServer Database. Jim Gray: Microsoft. Alex Szalay (and friends): Johns Hopkins. Help from: Cathan Cook (personal SkyServer), Maria A. Nieto-Santisteban (image cutout service).

3 SkyServer Overview (10 min). 10 minute SkyServer tour: Pixel space, Record space, Doc space: Ned, Set space: Web & Query Logs, Dr1 WebService. You can download (thanks to Cathan Cook) Data + Database code, Website. Data Mining the SDSS SkyServer Database, MSR-TR-2002-01.
select top 10 * from weblog..weblog where yy = 2003 and mm=7 and dd =25 order by seq desc
select top 10 * from weblog..sqlLog order by theTime Desc

4 Cutout Service (10 min). A typical web service. Show it. Show WSDL. Show fixing a bug. Rush through code. You can download it. Maria A. Nieto-Santisteban did most of this (Alex and I started it).

5 SkyQuery: Distributed Query tool using a set of web services. Fifteen

6 2MASS INT SDSS FIRST SkyQuery Portal Image Cutout. SkyQuery Structure: each SkyNode publishes a Schema Web Service and a Database Web Service. The Portal plans the query (2 phase), integrates answers, and is itself a web service.

7 Four Database Topics: Sparse tables (column vs row store, tag and index tables, pivot); Maplist (cross apply); Bookmark bug; Object Relational has arrived.

8 Column Store Pyramid. Users see fat base tables (universal relation). Define popular columns index tag table: 10% ~ 100 columns. Make many skinny indices: 1% ~ 10 columns. Query optimizer picks right plan. Automate definition & use. Fast read, slow insert/update. Data warehouse. Note: prior to Yukon, index had 16 column limit. A bane of my existence. (Diagram: Simple / Typical / Semi-join / Fat query / Obese query against TAG / INDICES / BASE.)

9 Examples.
create table base ( id bigint, f1 int primary key, f2 int, …, f1000 int)
create index tag on base (id) include (f1, …, f100)
create index skinny on base(f2, …, f17)
(Same pyramid diagram as slide 8.)

10 A Semi-Join Example.
create table fat(a int primary key, b int, c int, fat char (988))
declare @i int, @j int;
set @i = 0
again:
insert fat values(@i, cast(100*rand() as int), cast (100*rand() as int), ' ')
set @i = @i + 1;
if (@i < 1000000) goto again
create index ab on fat(a,b)
create index ac on fat(a,c)
dbcc dropcleanbuffers with no_infomsgs
select count(*) from fat with(index (0)) where c = b
-- Table 'fat'. Scan 3, reads 137,230, CPU: 1.3 s, elapsed 31.1 s.
dbcc dropcleanbuffers with no_infomsgs
select count(*) from fat where b=c
-- Table 'fat'. Scan 2, reads: 3,482, CPU 1.1 s, elapsed: 1.4 s.
(1 GB base table vs 8 MB indices: b=c via the indices is 3.4K IO / 1.4 sec; via the base table, 137K IO / 31 sec.)

11 Moving From Rows to Columns: Pivot & UnPivot. What if the table is sparse? LDAP has 7 mandatory and 1,000 optional attributes. Store row, col, value:
create table Features (object varchar, attribute varchar, value varchar, primary key (object, attribute))
select * from (features pivot value on attribute in (year, color)) as T where object = 4PNC450
Features table: (4PNC450, year, 2000), (4PNC450, color, white), (4PNC450, make, Ford), (4PNC450, model, Taurus). Result T: object 4PNC450, year 2000, color white.

12 Maplist Meets SQL – cross apply. Your table-valued function F(a,b,c) returns all objects related to a,b,c: spatial neighbors, sub-assemblies, members of a group, items in a folder, … Apply this function to each row. Classic drill-down. Use outer apply if f() may be null.
select p.*, q.* from parent as p cross apply f(p.a, p.b, p.c) as q where p.type = 1
(p1 f(p1), p2 f(p2), … pn f(pn))

13 The Bookmark Bug. SQL is a non-procedural language. The compiler/optimizer picks the procedure based on statistics. If the stats are wrong or missing, bad things happen. Queries can run VERY slowly. Strategy 1: allow users to specify plan. Strategy 2: make the optimizer smarter (and accept hints from the user).

14 An Example of the Problem. A query selects some fields of an index and of a huge table. Bookmark plan: look in index for a subset, look up the subset in the fat table. This is great if subset << table, terrible if subset ~ table. If statistics are wrong, or if predicates are not independent, you get the wrong plan. How to fix the statistics?

15 A Fix: Let user ask for stats. Create Statistics on View(f1, .., fn). Then the optimizer has the right data and picks the right plan. Statistics on Views, C. Galindo-Legaria, M. Josi, F. Waas, M. Wu, VLDB 2003.
Q3: Select count(*) from Galaxy where r 0.120
Bookmark: 34 M random IO, 520 minutes.
Create Statistics on Galaxy(objID)
Scan: 5 M sequential IO, 18 minutes.
Ultimately this should be automated, but for now, it's a step in the right direction.

16 Object Relational Has Arrived. VMs are moving inside the DB. Yukon includes the Common Language Runtime (Oracle & DB2 have similar mechanisms). So C++, VB, C# and Java are co-equal with Transact-SQL. You can define classes and methods; SQL will store the instances; access them via methods. You can put your analysis code INSIDE the database. Minimizes data movement. You can't move petabytes to the client, but we will soon have petabyte databases. (Diagram: data + code.)

17 And… Fully-async and synchronous (blocking) calls, and multi-concurrent-result sets per connection (transaction). Queues built in (service broker): fire-and-forget asynchronous processing. It listens to port 80 for SOAP calls: TP-lite is back; it's a web service. Notification service and data mining and OLAP and reporting and XML and XQuery and… But, back to OR.

18 Some Background: Table valued functions. SQL operates on tables. If you can make tables, you can extend SQL. This is the idea behind OLE/DB.
create function Evens(@maxVal int) returns @T table (a int)
begin
  while (@maxVal > 0)
  begin
    if (@maxVal % 2 = 0) insert @T values(@maxVal)
    set @maxVal = @maxVal - 1
  end
  return
end
select * from Evens(10)
a
-----------
10
8
6
4
2

19 Using Table Valued Functions For Spatial Search. Use function to return likely key ranges. Use filter predicate to eliminate objects outside the query box.
Select objID
From Objects O join fGetRanges( @latitude, @longitude, @radius) R
  on O.htmID between R.begin and R.end
where abs(o.Lat - @latitude) + abs(o.Lon – @longitude) < @radius
Table valued function returns candidate ranges of some space-filling curve. Filter discards false positives.

20 The Pre CLR design. Transact SQL sp_HTM (20 lines), plus 469 lines of glue looking like:
// Get Coordinates param datatype, and param length information of
if (srv_paraminfo(pSrvProc, 1, &bType1, &cbMaxLen1, &cbActualLen1, NULL, &fNull1) == FAIL)
  ErrorExit("srv_paraminfo failed...");
// Is Coordinate param a character string
if (bType1 != SRVBIGVARCHAR && bType1 != SRVBIGCHAR && bType1 != SRVVARCHAR && bType1 != SRVCHAR)
  ErrorExit("Coordinate param should be a string.");
// Is Coordinate param non-null
if (fNull1 || cbActualLen1 < 1 || cbMaxLen1 <= cbActualLen1)
  ErrorExit("Coordinate param is null.");
// Get pointer to Coordinate param
pzCoordinateSpec = (char *) srv_paramdata (pSrvProc, 1);
if (pzCoordinateSpec == NULL)
  ErrorExit("Coordinate param is null.");
pzCoordinateSpec[cbActualLen1] = 0;
// Get OutputVector datatype, and param length information
if (srv_paraminfo(pSrvProc, 2, &bType2, &cbMaxLen2, &cbActualLen2, NULL, &fNull2) == FAIL)
  ErrorExit("Failed to get type info on HTM Vector param...");
…plus the HTM code body.

21 The glue CLR design. Discard 450 lines of UGLY code. The HTM code body, C# SQL sp_HTM (50 lines):
using System;
using System.Data;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;
namespace HTM {
  public class HTM_wrapper {
    [DllImport("SQL_HTM.dll")]
    static extern unsafe void * xp_HTM_Cover_get (byte *str);
    public static unsafe void HTM_cover_RS(string input) {
      // convert the input from Unicode (array of 2 bytes) to an array of bytes (not shown)
      byte * input;
      byte * output;
      // invoke the HTM routine
      output = (byte *)xp_HTM_Cover_get(input);
      // Convert the array to a table
      SqlResultSet outputTable = SqlContext.GetReturnResultSet();
      if (output[0] == 'O') { // if Output is OK
        uint c = *(UInt32 *)(s + 4); // cast results as dataset
        Int64 * r = ( Int64 *)(s + 8); // Int64 r[c-1,2]
        for (int i = 0; i < c; ++i) {
          SqlDataRecord newRecord = outputTable.CreateRecord();
          newRecord.SetSqlInt64(0, r[0]);
          newRecord.SetSqlInt64(1, r[1]);
          r++; r++;
          outputTable.Insert(newRecord);
        }
      }
      // return outputTable;
    }
  }
}
Thanks!!! To Peter Kukol (who wrote this).

22 The Clean CLR design. Discard all glue code; return an array cast as a table.
CREATE ASSEMBLY HTM_A FROM '\\localhost\HTM\HTM.dll'
CREATE FUNCTION HTM_cover( @input NVARCHAR(100) )
RETURNS @t TABLE ( HTM_ID_START BIGINT NOT NULL PRIMARY KEY, HTM_ID_END BIGINT NOT NULL )
AS EXTERNAL NAME HTM_A:HTM_NS.HTM_C::HTM_cover
using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;
namespace HTM_NS {
  public class HTM_C {
    public static Int64[,2] HTM_cover(string input) {
      // invoke the HTM routine
      return (Int64[,2]) xp_HTM_Cover(input);
      // the actual HTM C# or C++ or Java or VB code goes here.
    }
  }
}
Your/My code goes here.

23 Performance (Beta1). On a 2.2 GHz Xeon:
Call a Transact SQL function: 33 μs
Call a C# function: 50 μs
Table valued function: 1,580 μs + 42 μs per row
Array (== table) valued function: 200 μs + 27 μs per row

24 The Code. A function written in C# inside the DB, and a program in the DB in a different language (TSQL) calling the function:
CREATE ASSEMBLY ReturnOneA FROM '\\localhost\C:\ReturnOne.dll'
GO
CREATE FUNCTION ReturnOne_Int( @input INT) RETURNS INT
AS EXTERNAL NAME ReturnOneA:ReturnOneNS.ReturnOneC::ReturnOne_Int
GO
---------------------------------------------
-- time echo an integer
declare @i int, @j int, @cpu_seconds float, @null_loop float
declare @start datetime, @end datetime
set @j = 0
set @i = 10000
set @start = current_Timestamp
while(@i > 0) begin
  set @j = @j + 1
  set @i = @i - 1
end
set @end = current_Timestamp
set @null_loop = datediff(ms, @start, @end) / 10.0
set @i = 10000
set @start = current_Timestamp
while(@i > 0) begin
  select @j = dbo.ReturnOne_Int(@i)
  set @j = @j + 1
  set @i = @i - 1
end
set @end = current_Timestamp
set @cpu_seconds = datediff(ms, @start, @end) / 10.0 - @null_loop
print 'average cpu time for 1,000 calls to ReturnOne_Int was ' + str(@cpu_seconds,8,2) + ' micro seconds'
using System;
using System.Data;
using System.Data.SqlServer;
using System.Data.SqlTypes;
using System.Runtime.InteropServices;
namespace ReturnOneNS {
  public class ReturnOneC {
    public static int ReturnOne_Int(int input) {
      return input;
    }

25 What Is the Significance? No more inside/outside DB dichotomy. You can put your code near the data. Indeed, we are letting users put personal databases near the data archive. This avoids moving large datasets. Just move questions and answers.

26 Meta-Message. Trying to fit science data into databases. When it does not fit, something is wrong. Look for solutions. Many solutions come from OR extensions. Some are fundamental engine changes: more structure in DB, richer operator sets, better statistics.

27 © 2002 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.

29 © 2003 Microsoft Corporation. All rights reserved. This presentation is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.
http://slideplayer.com/slide/231186/
Pandas Profiling is an amazing open-source Python library that is used for quick exploratory data analysis of your data in a few lines of code. Exploratory data analysis is a very important step in data science tasks, and this Python library saves a lot of time when exploring any dataset. If you've never used the Pandas Profiling library, this article is for you. In this article, I will present a tutorial on the Pandas Profiling library in Python.

Pandas Profiling

It is very important to explore the data you are using while working on any kind of data science task. The process of exploring your data is called exploratory data analysis. Here we use data visualization tools like Tableau and Google Data Studio, and Python libraries like Matplotlib, Seaborn, and Plotly. Exploring your dataset takes a long time; this is where the Pandas Profiling library in Python comes in. It helps you explore your entire dataset in just a few lines of code.

Besides exploratory data analysis, you can also use this library to create reports, as it provides various built-in functions that can be used to generate reports of your analysis.

I hope you now understand what the Pandas Profiling library is and why it is used. In the section below, I will take you through a tutorial on how to use this library for exploratory data analysis using Python.

Pandas Profiling in Python (Tutorial)

If you have never used this Python library before, you can easily install it with the pip command in your terminal or command prompt:

pip install pandas-profiling

You can use this Python library in any code editor, but it is recommended to use it in a Jupyter or Google Colab notebook, as it is easier to understand the reports generated by this library there. Now let's see how you can use this Python library to explore your dataset.
For this task, I will first import the necessary Python libraries and the dataset that we want to explore:

import pandas as pd
from pandas_profiling import ProfileReport

data = pd.read_csv("")

And now, you just need to write these two lines of code, and you will see the complete exploratory data analysis of your dataset:

profile = ProfileReport(data, title="Pandas Profiling Report", explorative=True)
profile

To create and save a report of your exploratory data analysis, you just need to execute the code mentioned below:

profile.to_file("your_report.html")

Summary

So this is how you can use the Pandas Profiling library in Python for a faster exploratory data analysis of your data. In simple words, this Python library helps you explore your complete dataset in just a few lines of code. I hope you liked this article on a tutorial on the Pandas Profiling library in Python. Feel free to ask your valuable questions in the comments section below.
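To make it concrete what such a report contains, here is a small, standard-library-only sketch of the per-column statistics (missing values, distinct counts, means) that pandas-profiling computes for you automatically. The function name and sample columns are mine, not part of the library:

```python
# Hand-rolled mini "profile": one stats dict per column, the way a
# profiling report summarizes each field of a dataset.
from statistics import mean

def profile(rows):
    """rows: list of dicts (one per record); returns {column: stats}."""
    columns = {key for row in rows for key in row}
    report = {}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        present = [v for v in values if v is not None]
        stats = {
            "missing": len(values) - len(present),   # None or absent
            "distinct": len(set(present)),
        }
        if present and all(isinstance(v, (int, float)) for v in present):
            stats["mean"] = mean(present)            # numeric columns only
        report[col] = stats
    return report

rows = [
    {"age": 31, "city": "Delhi"},
    {"age": 27, "city": "Delhi"},
    {"age": None, "city": "Mumbai"},
]
report = profile(rows)
print(report["age"]["missing"], report["age"]["distinct"])  # → 1 2
```

A real ProfileReport adds correlations, histograms, and duplicate detection on top of statistics like these, which is why it saves so much time.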
https://thecleverprogrammer.com/2021/09/24/pandas-profiling-in-python/
Q. Java program to check whether a number is prime or not. Here you will find an algorithm and program in Java.

Java Program to Check Whether a Number is Prime or Not

import java.util.Scanner;

class PrimeCheck {
    public static void main(String args[]) {
        int temp;
        boolean isPrime = true;
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter a positive number:");
        int num = scan.nextInt();
        scan.close();
        for (int i = 2; i <= num / 2; i++) {
            temp = num % i;
            if (temp == 0) {
                isPrime = false;
                break;
            }
        }
        if (isPrime)
            System.out.println(num + " is a Prime Number");
        else
            System.out.println(num + " is not a Prime Number");
    }
}

Output

Enter a positive number:
17
17 is a Prime Number

Enter a positive number:
25
25 is not a Prime Number
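The loop above tries divisors all the way up to num/2. It is enough to stop at the square root of num, since any factor larger than the square root pairs with one smaller than it. A sketch of that variant (the class name PrimeSqrt is ours, not part of the original program):

```java
class PrimeSqrt {
    public static boolean isPrime(int num) {
        if (num < 2) return false;              // 0, 1 and negatives are not prime
        for (int i = 2; (long) i * i <= num; i++) {
            if (num % i == 0) return false;     // found a divisor
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(17)); // true
        System.out.println(isPrime(25)); // false
    }
}
```

For num = 1,000,003 this checks about 1,000 candidates instead of 500,000.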
https://letsfindcourse.com/java-coding-questions/java-program-to-check-whether-a-number-is-prime-or-not
Order one2many lines

Is it possible to order the lines in a one2many without modifying the original class order?

Answer:

Finally, I found a way to manage this, downloading this module:

Then, I overrode the one2many whose order I want to alter; in my case the field was child_ids, and I wanted it to be ordered by email:

'child_ids' : one2many_sorted.one2many_sorted(
    'res.partner',
    'parent_id',
    'Contacts',
    order='email',
)

Note that the only difference between this field and the one2many is the order param (you can use a couple of new ones too: search and set). I also imported the library at the top of the file:

import one2many_sorted

Now I'm seeing the tree and kanban of res.partner ordered by name, but this one2many is ordered by email. Great!

Comments:

This could also be useful for opening a tree view with a particular order-by clause... maybe a context parameter like order_by?

Finally, was it possible? Now I have the same question.
https://www.odoo.com/forum/help-1/question/order-one2many-lines-29222
Whenever you open a scene, you'll see a message like this logged to the script history:

# INFO : 4034 - Loaded scene was created with build number: 10.5.98.0 - compatibility version: 1000
Application.OpenScene("C:\\Users\\blairs\\MyProject\\Scenes\\2012SAP_Scene.scn", "", "")

# INFO : 4034 - Loaded scene was created with build number: 10.1.62.0 - compatibility version: 1000
Application.OpenScene("C:\\Users\\blairs\\MyProject\\Scenes\\2012SP1_Scene.scn", "", "")

If you want to know the version of Softimage that was used to create the scene, you need to check the specific build number (and there are a couple of ways to do that; we'll get to that in a second…).

The compatibility version is more a property of Softimage itself than of the scene. You can get the value of the project.CompatibilityVersion parameter, but it's always going to be the compatibility version of the current Softimage instance, not of the loaded scene.

p = Application.Dictionary.GetObject( "project.CompatibilityVersion" )
print Application.ClassName(p)
print p.Value

# OR
print Application.GetValue( "project.CompatibilityVersion" )

To find out the version of Softimage used to "build" a scene, you can use the printver utility, or look in the scntoc file. In this context, "build" means the version of Softimage that was last used to save the scene. I note that just opening a scene and saving it isn't enough to bump up the build version. You need to do something to the scene, or at least do something and then undo it.
From Jeremie Passerin on the Softimage mailing list, here's a Python snippet that reads the version from the scntoc:

# Python Code
import xml.etree.ElementTree as etree

ext = 'scntoc'
scn = 'C:\\Users\\blairs\\Project\\Scenes\\Test.%s' % ext

tree = etree.parse( scn )
root = tree.getroot()
version = root.get("xsi_version")
LogMessage(version)

Here's a JScript snippet that reads the version from the scntoc:

var dom = new ActiveXObject("msxml2.DOMDocument.6.0");
dom.async = false;
dom.resolveExternals = false;

ext = 'scntoc';
scntoc = 'C:\\Users\\blairs\\Project\\Scenes\\Test.' + ext;
dom.load( scntoc );

var oNode = dom.selectSingleNode("xsi_file");
LogMessage( oNode.getAttribute( "xsi_version" ) );

If you don't want to rely on the existence of a scntoc, you could use the printver.exe utility that ships with Softimage. Given a scene file, printver prints a message that looks like "This Scene was built with version: 11.0.525.0". Here's a JScript snippet that runs printver and gets the version number from STDOUT:

// JScript
var WshShell = new ActiveXObject("WScript.Shell");

scn = "\\\\server\\Project\\Scenes\\Whatever.scn"
sExec = "printver " + scn

var oExec = WshShell.Exec( sExec );

while ( !oExec.StdOut.AtEndOfStream )
{
    s = oExec.StdOut.ReadLine();
    if ( s.indexOf("This Scene was built with version") != -1 )
    {
        var version = s.split(":")[1].replace(/^\s\s*/, '').replace(/\s\s*$/, '');
    }
}
LogMessage( version )

And here's a Python snippet:

import subprocess

scn = 'C:\\Users\\blairs\\Documents\\Support\\Project\\Scenes\\MySphere.scn'

p = subprocess.Popen( 'printver -l %s' % scn, stdout=subprocess.PIPE )
stdout = p.stdout.readlines()
print stdout
print stdout[-1].split(':')[1].lstrip().rstrip()

See the thread on the Softimage mailing list, which includes a VBScript snippet for getting the build version.

Comments:

The link you gave titled 'thread on the Softimage mailing list' links to a manual bike 🙂

Thanks, I updated the link
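As a footnote to the snippets above, here's a hedged, standard-library-only Python 3 helper that does the same scntoc lookup outside Softimage's script editor. The xsi_version attribute name matches the snippets above; the synthetic file below stands in for a real scntoc sitting next to the .scn:

```python
# Read the Softimage build version from a scene's .scntoc sidecar file.
import tempfile
import xml.etree.ElementTree as etree

def scene_build_version(scntoc_path):
    """Return the xsi_version attribute of the scntoc's root element."""
    root = etree.parse(scntoc_path).getroot()
    return root.get("xsi_version")

# Demo on a synthetic scntoc (a real one sits next to the .scn file):
with tempfile.NamedTemporaryFile("w", suffix=".scntoc", delete=False) as f:
    f.write('<xsi_file xsi_version="11.0.525.0"></xsi_file>')
print(scene_build_version(f.name))  # → 11.0.525.0
```

Falling back to parsing printver output when no scntoc exists, as in the snippets above, would make this robust for batch scanning a whole Scenes folder.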
https://xsisupport.com/2012/05/15/more-on-getting-the-version-of-softimage-used-to-create-a-scene/
Scalar::Andand - Guarded method invocation

VERSION

Version 0.04

SYNOPSIS

Scalar::Andand lets us write:

    $phone = Location->find('first', name => 'Johnson' )->andand->phone

and get a guarded method invocation, or safe navigation method. This snippet performs a find on the Location class, then calls phone on the result if the result is defined. If the result is not defined, the expression returns false without throwing an exception.

This module doesn't export anything to your namespace, but it does add a universal method andand, which is a far graver sin.

AUTHOR

Leon Timmermans, <leont at cpan.org>

BUGS

You have to include the module in every package where you use the magic andand method, or else it doesn't work on undefined values. This module contains more magic than is responsible; don't be surprised by weird bugs.

Note that this module was intended as a proof of concept. The author has never used it in production code, nor is he planning to do so. YMMV.

Please report any bugs or feature requests to bug-scalar-andand::Andand. You can also look for information at:

COPYRIGHT & LICENSE

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Scalar-Andand/lib/Scalar/Andand.pm
pcap_datalink(3)

NAME

pcap_datalink - get the link-layer header type

SYNOPSIS

#include <pcap/pcap.h>

int pcap_datalink(pcap_t *p);

DESCRIPTION

pcap_datalink() returns the link-layer header type for the live capture or ``savefile'' specified by p. It must not be called on a pcap descriptor created by pcap_create() that has not yet been activated by pcap_activate(). pcap-linktype(7) lists the values pcap_datalink() can return.

RETURN VALUE

pcap_datalink() returns the link-layer header type on success and PCAP_ERROR_NOT_ACTIVATED if called on a capture handle that has been created but not activated.

SEE ALSO

pcap(3), pcap-linktype(7)

7 April 2014

libpcap 1.9.0 - Generated Sat Jul 28 16:13:23 CDT 2018
http://manpagez.com/man/3/pcap_datalink/
Hey!! Ahh, all day I've been trying to learn something new, so reading over tutorials I compiled my first socket app!! The only problem is, it isn't working the way I would like it to, and I was hoping you guys could help me figure out the problems..

I apologize if it is a little messy at this time; I'm quite distraught and ready to pull my hair out, so I didn't yet take the care to fine-tune it and organize the code.. I'll do that once I can get it to work and figure out how to manipulate these damn sockets lol.. For the time being, I want to work with datagram sockets, not streaming.. Thanks..

My app: I want it to be simple and console-based. You have two options: to send or receive a message. If you choose to send, you enter the receiver's IP address and a message to send to them. If the message is received, it is returned with an asterisk (*) appended to the end of it. If you choose to receive, the app will wait until a return-request is received, then it will return the data with an asterisk (*) appended to the end of it.. Simple? Yea..
I will make it more advanced etc. after I can get the following working properly; please help me with that:

// My First Internet Application
// By Matthew Cudmore, 2005

#include <iostream>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>    // For gethostname

#define MYPORT 4689
#define QUEMAX 10

using namespace::std;

int main(void)
{
    int sockfd, new_fd, sin_size;
    socklen_t sl = sizeof(struct sockaddr);
    struct sockaddr_in my_addr;    // my address information
    struct sockaddr_in their_addr; // connector's address information

    // SOCK_STREAM or SOCK_DGRAM
    if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        // ERROR
        cout << "\n-- SOCKET error\n";
        return 0;
    }

    my_addr.sin_family = AF_INET;    // host byte order
    their_addr.sin_family = AF_INET;

    // Choose one method:
    // 1) my_addr.sin_port = htons(MYPORT); // short, network byte order
    // 2) my_addr.sin_port = htons(0);      // choose an unused port at random
    my_addr.sin_port = htons(MYPORT);       // short, network byte order
    their_addr.sin_port = htons(MYPORT);

    // Choose one method:
    // 1) my_addr.sin_addr.s_addr = inet_addr("10.12.110.57");
    // 2) inet_aton("10.12.110.57", &(my_addr.sin_addr));
    // 3) my_addr.sin_addr.s_addr = htonl(INADDR_ANY); // use my IP address
    my_addr.sin_addr.s_addr = htonl(INADDR_ANY);       // use my IP address

    memset(&(my_addr.sin_zero), '\0', 8); // zero the rest of the struct

    if (bind(sockfd, (struct sockaddr *)&my_addr, sizeof(struct sockaddr)) == -1) {
        // ERROR
        cout << "\n-- BIND error\n";
        return 0;
    }

    // ###################################################################

    char Buffer[100];
    int BtsMvd;

    cout << "Simple sockets by Matthew Cudmore...\n";
    cout << "Your IP: " << inet_ntoa(my_addr.sin_addr) << "\n";

GetChoice:

    cout << "\nEnter choice action (x = exit, s = send, a = accept): ";
    cin.getline(Buffer, 99);

    if (Buffer[0] == 'x') {
        close (sockfd);
        return 0;
    }
    else if (Buffer[0] == 's') {
        cout << "Enter the IP of the receiving machine: ";
        cin.getline(Buffer, 99);
        their_addr.sin_addr.s_addr = inet_addr(Buffer);
        memset(&(their_addr.sin_zero), '\0', 8);

        cout << "Enter a message to send to the receiving machine: ";
        cin.getline(Buffer, 99);

        if ((BtsMvd = sendto(sockfd, Buffer, strlen(Buffer), 0,
             (struct sockaddr *)&their_addr, sizeof(struct sockaddr))) != -1) {
            cout << "\nMessage sent. Awaiting reply...";
        } else {
            cout << "\nMessage could not be sent.";
            goto GetChoice;
        }

        while (1) {
            if ((BtsMvd = recvfrom(sockfd, Buffer, strlen(Buffer), 0,
                 (struct sockaddr *)&their_addr, &sl)) == -1) {
                // ERROR
                cout << "\n-- recvfrom error\n";
                return 0;
            }
            if (BtsMvd) {
                if (Buffer[BtsMvd - 1] == '*') {
                    cout << "\nReturn received.";
                    break;
                } else {
                    cout << "\nReturn-request ignored.";
                }
            }
        }
    }
    else if (Buffer[0] == 'a') {
        cout << "Waiting for incoming return-requests...";
        BtsMvd = 0;
        while (1) {
            if ((BtsMvd = recvfrom(sockfd, Buffer, strlen(Buffer), 0,
                 (struct sockaddr *)&their_addr, &sl)) == -1) {
                // ERROR
                cout << "\n-- recvfrom error\n";
                return 0;
            }
            if (BtsMvd) break;
        }
        cout << "\nReturn-request received from: " << inet_ntoa(their_addr.sin_addr);

        BtsMvd = strlen(Buffer);
        Buffer[BtsMvd] = '*';
        Buffer[BtsMvd + 1] = '\0';

        if ((BtsMvd = sendto(sockfd, Buffer, strlen(Buffer), 0,
             (struct sockaddr *)&their_addr, sizeof(struct sockaddr))) > 0) {
            cout << "\nA return (" << Buffer << ") has been sent back.";
        } else {
            cout << "\nCould not return request.";
        }
    }

    goto GetChoice;

    return 0;
}

The problems I've encountered:

- Can't retrieve my own IP address (I found out that was because INADDR_ANY is equal to zero)..
- When sending or waiting for data, the console kinda goes neutral and I can type and stuff when I shouldn't be able to..
- It won't continue through the code; perhaps it's waiting for the receiving computer to respond with an ACK...

Hmm, please heeeeellllppppp!!!! :cheesy:
https://www.daniweb.com/programming/software-development/threads/35977/please-make-my-frist-socket-app-work
qdb_gettransstate()

Return the transaction state for a QDB connection

Synopsis:

#include <qdb/qdb.h>

int qdb_gettransstate( qdb_hdl_t *db );

Library:

qdb

Description:

This function returns the transaction state for the specified QDB connection. If an SQL transaction is in progress over the connection, the function returns 1. If no SQL transaction is happening, 0 is returned. If there's an SQL error, -1 is returned (you can use qdb_geterrmsg() to get the error string).

You can use this function to determine how to clean up after an SQL error; for example, if you execute several commands in a transaction and need to determine which statement is causing the error.

Classification:

QNX Neutrino
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.qdb_en.dev_guide/topic/api/qdb_gettransstate.html
Today's Programming Praxis is about beautiful code. Specifically, it concerns a bit of C code that can match simple regular expressions. The code in question is widely considered beautiful code. Personally, I'd say the idea behind the code is good, but that the beauty of the code sample itself is held back by a language that requires too much dealing with trivial stuff (e.g. having to manually increment pointers to move through a string), making the code needlessly long. Fortunately, our assignment is to implement the algorithm using the features and idioms of our own language, so let's see what we can do with a slightly more modern language.

First, an import:

import Data.List

Since the algorithm itself isn't all that difficult, we'll focus on the features of Haskell that are used in this version. The top-level function shows pattern matching (replacing all those if statements), first-class functions (the argument for map) and partial application (match returns a function that takes a string and returns a Bool).

match :: String -> String -> Bool
match ('^':r) = matchHere r
match r = or . map (matchHere r) . tails

matchHere shows more pattern matching and adds lazy evaluation (if the check on the first character of the regex in the third line fails, the second condition is not checked).

matchHere :: String -> String -> Bool
matchHere (c:'*':r) xs = matchStar c r xs
matchHere "$" xs = null xs
matchHere (r:rs) (x:xs) = (r == '.' || r == x) && matchHere rs xs
matchHere r _ = null r

matchStar adds pattern guards to the mix.

matchStar :: Char -> String -> String -> Bool
matchStar _ r xs | matchHere r xs = True
matchStar c r (x:xs) = (c == '.' || c == x) && matchStar c r xs
matchStar _ _ _ = False

Using the test suite from Programming Praxis (shortened here due to length) we can see our function works correctly:

main :: IO ()
main = do mapM_ print [ match "a" "a",
                        match "a" "b" == False,
                        ...
                        match "a*a*a" "aaa",
                        match "a*a*a" "xxxxx" == False]

With less than half the code size of the original, and a more high-level approach, I prefer this version over the original, but I guess beauty is in the eye of the beholder.

Tags: beautiful, bonsai, code, expression, Haskell, kata, praxis, programming, regex, regular
https://bonsaicode.wordpress.com/2009/09/11/programming-praxis-beautiful-code/
Trying to improve my Haskell coding skills, I decided to test myself at solving the 2017 Advent of Code problems. It’s been a lot of fun and a great learning experience. One problem in particular stood out for me because, for the first time, it let me apply, in anger, the ideas I learned from category theory. But I’m not going to talk about category theory this time, just about programming.

The problem is really about dominoes. You get a set of dominoes, or pairs of numbers (the twist is that the numbers are not capped at 6), and you are supposed to build a chain in which the numbers match between consecutive pieces. For instance, the chain [(0, 5), (5, 12), (12, 12), (12, 1)] is admissible. Like in the real game, the pieces can be turned around, so (1, 3) can also be used as (3, 1). The goal is to build a chain that starts from zero and maximizes the score, which is the sum of the numbers on the pieces used.

The algorithm is pretty straightforward. You put all dominoes in a data structure that lets you quickly pull the pieces you need, you recursively build all possible chains, evaluate their sums, and pick the winner. Let’s start with some data structures:

type Piece = (Int, Int)
type Chain = [Piece]

At each step in the procedure, we will be looking for a domino with a number that matches the end of the current chain. If the chain is [(0, 5), (5, 12)], we will be looking for pieces with 12 on one end. It’s best to organize pieces in a map indexed by these numbers. To allow for turning the dominoes, we’ll add each piece twice. For instance, the piece (12, 1) will be added as (12, 1) and (1, 12). We can make a small optimization for symmetric dominoes, like (12, 12), by adding them only once. We’ll use the Map from the containers library:

import qualified Data.Map as Map

The Map we’ll be using is:

type Pool = Map.Map Int [Int]

The key is an integer, the value is a list of integers corresponding to the other ends of pieces. This is made clear by the way we insert each piece in the map:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) =
    if m /= n
    then add m n . add n m
    else add m n
  where
    add m n pool =
      case Map.lookup m pool of
        Nothing  -> Map.insert m [n] pool
        Just lst -> Map.insert m (n : lst) pool

I used point-free notation. If that’s confusing, here’s the translation:

addPiece :: Piece -> Pool -> Pool
addPiece (m, n) pool =
    if m /= n
    then add m n (add n m pool)
    else add m n pool

As I said, each piece is added twice, except for the symmetric ones. After using a piece in a chain, we’ll have to remove it from the pool:

removePiece :: Piece -> Pool -> Pool
removePiece (m, n) =
    if m /= n
    then rem m n . rem n m
    else rem m n
  where
    rem :: Int -> Int -> Pool -> Pool
    rem m n pool =
      case fromJust $ Map.lookup m pool of
        []  -> Map.delete m pool
        lst -> Map.insert m (delete n lst) pool

You might be wondering why I’m using the partial function fromJust (it comes from Data.Maybe; delete comes from Data.List). In industrial-strength code I would pattern match on the Maybe and issue a diagnostic if the piece were not found. Here I’m fine with a fatal exception if there’s a bug in my reasoning.

It’s worth mentioning that, like all data structures in Haskell, Map is a persistent data structure. It means that it’s never modified in place, and its previous versions persist. This is invaluable in this kind of recursive algorithm, where we use backtracking to explore multiple paths.

The input of the puzzle is a list of pieces. We’ll start by inserting them into our map. In functional programming we never think in terms of loops: we transform data. A list of pieces is a (recursive) data structure. We want to traverse it and accumulate the information stored in it into a Map. This kind of transformation is, in general, called a catamorphism. A list catamorphism is called a fold. It is specified by two things: (1) its action on the empty list (here, it turns it into Map.empty), and (2) its action on the head of the current list and the accumulated result of processing the tail. The head of the current list is a piece, and the accumulator is the Map. The function addPiece has just the right signature:

presort :: [Piece] -> Pool
presort = foldr addPiece Map.empty

I’m using a right fold, but a left fold would work fine, too. Again, this is point-free notation.

Now that the preliminaries are over, let’s think about the algorithm. My first approach was to define a bunch of mutually recursive functions that would build all possible chains, score them, and then pick the best one. After a few tries, I got hopelessly bogged down in details. I took a break and started thinking.

Functional programming is all about functions, right? Using a recursive function is the correct approach. Or is it? The more you program in Haskell, the more you realize that you get the most power by considering wholesale transformations of data structures. When creating a Map of pieces, I didn’t write a recursive function over a list — I used a fold instead. Of course, behind the scenes, fold is implemented using recursion (which, thanks to tail recursion, is usually transformed into a loop). But the idea of applying transformations to data structures is what lets us soar above the sea of details and into the higher levels of abstraction.

So here’s the new idea: let’s create one gigantic data structure that contains all admissible chains built from the domino pieces at our disposal. The obvious choice is a tree. At the root we’ll have the starting number: zero, as specified in the description of the problem. All pool pieces that have a zero at one end will start a new branch. Instead of storing the whole piece at the node, we can just store the second number — the first being determined by the parent. So a piece (0, 5) starts a branch with a 5 node right below the 0 node.
Next we’d look for pieces with a 5. Suppose that one of them is (5, 12), so we create a node with a 12, and so on. A tree with a variable list of branches is called a rose tree:

data Rose = NodeR Int [Rose]
  deriving Show

It’s always instructive to keep in mind at least one special boundary case. Consider what would happen if (0, 5) were the only piece in the pool. We’d end up with the following tree:

NodeR 0 [NodeR 5 []]

We’ll come back to this example later.

The next question is, how do we build such a tree? We start with a set of dominoes gathered in a Map. At every step in the algorithm we pick a matching domino, remove it from the pool, and start a new subtree. To start a subtree we need a number and a pool of remaining pieces. Let’s call this combination a seed. The process of building a recursive data structure from a seed is called an anamorphism. It’s a well studied and well understood process, so let’s try to apply it in our case.

The key is to separate the big picture from the small picture. The big picture is the recursive data structure (the rose tree, in our case). The small picture is what happens at a single node.

Let’s start with the small picture. We are given a seed of the type (Int, Pool). We use the number as a key to retrieve a list of matching pieces from the Pool (strictly speaking, just a list of numbers corresponding to the other ends of the pieces). Each piece will start a new subtree. The seed for such a subtree consists of the number at the other end of the piece and a new Pool with the piece removed. A function that produces seeds from a given seed looks like this:

grow (n, pool) =
  case Map.lookup n pool of
    Nothing -> []
    Just ms -> [(m, removePiece (m, n) pool) | m <- ms]

Now we have to translate this to a procedure that recreates a complete tree. The trick is to split the definition of the tree into local and global pictures. The local picture is captured by this data structure:

data TreeF a = NodeF Int [a]
  deriving Functor

Here, the recursion of the original rose tree is replaced by the type parameter a. This data structure, which describes a single node, or a very shallow tree, is a functor with respect to a (the compiler is able to automatically figure out the implementation of fmap, but you can also do it by hand). It’s important to realize that the recursive definition of a rose tree can be recovered as a fixed point of this functor. We define the fixed point as the data structure X that results from replacing a in the definition of TreeF with X. Symbolically:

X = TreeF X

In fact, this procedure of finding the fixed point can be written in all generality for any functor f. If we call the fixed point Fix f, we can define it by replacing the type argument to f with Fix f, as in:

newtype Fix f = Fix { unFix :: f (Fix f) }

Our rose tree is the fixed point of the functor TreeF:

type Tree = Fix TreeF

This splitting of the recursive part from the functor part is very convenient because it lets us use non-recursive functions to generate or traverse recursive data structures. In particular, the procedure of unfolding a data structure from a seed is captured by a non-recursive function of the following signature:

type Coalgebra f a = a -> f a

Here, a serves as the seed that generates a single node populated with new seeds. We have already seen a function that generates seeds; we only have to cast it in the form of a coalgebra:

coalg :: Coalgebra TreeF (Int, Pool)
coalg (n, pool) =
  case Map.lookup n pool of
    Nothing -> NodeF n []
    Just ms -> NodeF n [(m, removePiece (m, n) pool) | m <- ms]

The pièce de résistance is the formula that uses a given coalgebra to unfold a recursive data structure. It’s called the anamorphism:

ana :: Functor f => Coalgebra f a -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

Here’s the play-by-play: the anamorphism takes a seed and applies the coalgebra to it. That generates a single node with new seeds in place of children. Then it fmaps the whole anamorphism over this node, thus unfolding the seeds into full-blown trees. Finally, it applies the constructor Fix to produce the final tree. Notice that this is a recursive definition.

We are now in a position to build a tree that contains all admissible chains of dominoes. We do it by applying the anamorphism to our coalgebra:

tree = ana coalg

Once we have this tree, we could traverse it, or fold it, to retrieve all the chains and find the best one. But once we have our tree in the form of a fixed point, we can be smart about folds as well. The procedure is essentially the same, except that now we are collecting information from the nodes of a tree. To do this, we define a non-recursive function called the algebra:

type Algebra f a = f a -> a

The type a is called the carrier of the algebra. It plays the role of the accumulator of data. We are interested in the algebra that would help us collect chains of dominoes from our rose tree. Suppose that we have already applied this algebra to all children of a particular node. Each child tree would produce its own list of chains. Our goal is to extend those chains by adding one more piece that is determined by the current node.

Let’s start with our earlier trivial case of a tree that contains a single piece (0, 5):

NodeR 0 [NodeR 5 []]

We replace the leaf node with some value x of the still unspecified carrier type. We get:

NodeR 0 x

Obviously, x must contain the number 5, to let us recover the original piece (0, 5). The result of applying the algebra to the top node must produce the chain [(0, 5)]. These two pieces of information suggest the carrier type to be a combination of a number and a list of chains. The leaf node is turned to (5, []), and the top node produces (0, [[(0, 5)]]).
With this choice of the carrier type, the algebra is easy to implement:

chainAlg :: Algebra TreeF (Int, [Chain])
chainAlg (NodeF n []) = (n, [])
chainAlg (NodeF n lst) = (n, concat [push (n, m) bs | (m, bs) <- lst])
  where
    push :: (Int, Int) -> [Chain] -> [Chain]
    push (n, m) [] = [[(n, m)]]
    push (n, m) bs = [(n, m) : br | br <- bs]

For the leaf (a node with no children), we return the number stored in it together with an empty list. Otherwise, we gather the chains from the children. If a child returns an empty list of chains, meaning it was a leaf, we create a single-piece chain. If the list is not empty, we prepend a new piece to all the chains. We then concatenate all lists of chains into one list.

All that remains is to apply our algebra recursively to the whole tree. Again, this can be done in full generality using a catamorphism:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

We start by stripping the fixed point constructor using unFix to expose a node, apply the catamorphism to all its children, and apply the algebra to the node.

To summarize: we use an anamorphism to create a tree, then use a catamorphism to convert the tree to a list of chains. Notice that we don’t need the tree itself — we only use it to drive the algorithm. Because Haskell is lazy, the tree is evaluated on demand, node by node, as it is walked by the catamorphism.

This combination of an anamorphism followed immediately by a catamorphism comes up often enough to merit its own name. It’s called a hylomorphism, and can be written concisely as:

hylo :: Functor f => Algebra f a -> Coalgebra f b -> b -> a
hylo f g = f . fmap (hylo f g) . g

In our example, we produce a list of chains using a hylomorphism:

let (_, chains) = hylo chainAlg coalg (0, pool)

The solution of the puzzle is the chain with the maximum score:

maximum $ fmap score chains

score :: Chain -> Int
score = sum . fmap score1
  where score1 (m, n) = m + n

Conclusion

The solution that I described in this post was not the first one that came to my mind. I could have persevered with the more obvious approach of implementing a big recursive function or a series of smaller mutually recursive ones. I’m glad I didn’t. I have found out that I’m much more productive when I can reason in terms of applying transformations to data structures.

You might think that a data structure that contains all admissible chains of dominoes would be too large to fit comfortably in memory, and you would probably be right in a strict language. But Haskell is a lazy language, and data structures work more often as control structures than as storage for data.

The use of recursion schemes further simplifies programming. You can design algebras and coalgebras as non-recursive functions, which are much easier to reason about, and then apply them to recursive data structures using catamorphisms and anamorphisms. You can even combine them into hylomorphisms.

It’s worth mentioning that we routinely apply these techniques to lists. I already mentioned that a fold is nothing but a list catamorphism. The functor in question can be written as:

data ListF e a = Nil | Cons e a
  deriving Functor

A list is a fixed point of this functor:

type List e = Fix (ListF e)

An algebra for the list functor is implemented by pattern matching on its two constructors:

alg :: ListF e a -> a
alg Nil = z
alg (Cons e a) = f e a

Notice that a list algebra is parameterized by two items: the value z and the function f :: e -> a -> a. These are exactly the parameters to foldr. So, when you are calling foldr, you are defining an algebra and performing a list catamorphism. Likewise, a list anamorphism takes a coalgebra and a seed and produces a list. Finite lists are produced by the anamorphism called unfoldr:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]

You can learn more about algebras and coalgebras, from the point of view of category theory, in another blog post. The source code for this post is available on GitHub.
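For readers who want to play with the scheme outside Haskell, the same ana/cata/hylo idea can be sketched in Python without the type machinery. This is my own illustrative translation, not code from the post; the function names and the sample pieces are arbitrary:

```python
from collections import defaultdict

def add_piece(pool, m, n):
    # Each piece is stored under both of its ends; symmetric pieces only once.
    pool[m].append(n)
    if m != n:
        pool[n].append(m)

def coalg(seed):
    # Coalgebra: turn a seed (label, pool) into one node (label, child seeds).
    n, pool = seed
    children = []
    for m in pool.get(n, []):
        # Copy the pool (persistent-structure style) and remove the used piece.
        rest = {k: list(v) for k, v in pool.items()}
        rest[n].remove(m)
        if m != n:
            rest[m].remove(n)
        children.append((m, rest))
    return n, children

def chain_alg(n, child_results):
    # Algebra: each child yields (label, chains); extend them with one piece.
    if not child_results:
        return n, []
    chains = []
    for m, sub in child_results:
        if sub:
            for c in sub:
                chains.append([(n, m)] + c)
        else:
            chains.append([(n, m)])
    return n, chains

def hylo(alg, coalg, seed):
    # Hylomorphism: unfold with the coalgebra, fold with the algebra, fused.
    label, seeds = coalg(seed)
    return alg(label, [hylo(alg, coalg, s) for s in seeds])

pieces = [(0, 5), (5, 12), (12, 12), (12, 1), (1, 3)]
pool = defaultdict(list)
for m, n in pieces:
    add_piece(pool, m, n)

_, chains = hylo(chain_alg, coalg, (0, dict(pool)))
best = max(sum(m + n for m, n in c) for c in chains)
print(best)  # → 63, the chain that uses every piece
```

The pool copies make the backtracking explicit; in Haskell the persistent Map gives the same effect for free.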
https://bartoszmilewski.com/2017/12/
Hi, I have a 2D list in Python which I'm trying to fill using two for loops, but when I finish the filling procedure and print the values, I find that they are not arranged in the order of input.

def funcdist(myarr):
    diffs = 0
    testarr = []
    finarr = []
    for hx in range(len(myarr)):
        del testarr[:]
        for hy in range(len(myarr)):
            diffs = 0
            for ch1, ch2 in zip(myarr[hx], myarr[hy]):
                if ch1 != ch2:
                    diffs += 1
            testarr.append(diffs)
        finarr.append((testarr))
    return finarr

In this case there should be some 0's on the diagonal, but when I print the array after I finish, I find that the 0's are all on the right side of the array. Why is this happening? And another question: how do I add a column of 1's as the first column of the array? I have no idea how to do this. Thank you
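The symptom described above is the classic list-aliasing pitfall: `finarr.append(testarr)` stores a reference to the same list every time, and `del testarr[:]` then empties that shared list, so every row of `finarr` ends up being the last row computed (whose diagonal zero sits at the end, hence the 0's "on the right"). A minimal sketch of a fix, creating a fresh inner list each iteration and building the column of ones with list concatenation (the helper names are just illustrative):

```python
def funcdist(myarr):
    finarr = []
    for row_x in myarr:
        testarr = []  # fresh list on every outer iteration, no shared reference
        for row_y in myarr:
            # count positions where the two rows differ
            diffs = sum(1 for ch1, ch2 in zip(row_x, row_y) if ch1 != ch2)
            testarr.append(diffs)
        finarr.append(testarr)  # appends a distinct list each time
    return finarr

def prepend_ones(matrix):
    # add a column of 1's as the first column
    return [[1] + row for row in matrix]

words = ["abc", "abd", "xbc"]
dists = funcdist(words)
print(dists)                 # zeros now appear on the diagonal
print(prepend_ones(dists))
```

Alternatively, keeping the original structure, `finarr.append(testarr[:])` (appending a copy) would also fix it.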
https://www.daniweb.com/programming/software-development/threads/437373/2d-list-in-python
XSMELL README
-------------

Congratulations! You have in your hands the MOST BRILLIANTEST C++ XML CREATION LIBRARY EVER CREATED.

Have you ever needed to embed a quick snippet of HTML or XML in your C++ source code? Didn't you just hate having to use that obscure string concatenation syntax? Well no more! With the advent of XSMELL you can now use regular XML syntax directly in your source code, thanks to the reckless use of operator overloading, template meta-programming and preprocessor macros:

using namespace xsmell;

document doc =
    _ <html>_
        <head>_
            <title>"XSMELL demo"<!title>_
        <!head>_
        <body>_
            <p>"Yesssssssssssssssss!"<!p>_
            <img .src("chucknorris.png") .alt("sneezing eyes open")>_ <!img>_
        <!body>_
    <!html> _;

std::cout << doc << '\n';

That's right! Thanks to XSMELL you'll no longer suffer from S-Expression envy. You've got one up on those Lisp guys now -- smug bastards! And you no longer have to worry about generating malformed XML! After spending hours fighting obscure C++ compiler errors, you'll be 100% certain that your XML is correct.

I couldn't wait for April 1st to come around.
https://www.gamedev.net/topic/539056-c-bask-in-my-awesomeness/
18 April 2007 18:23 [Source: ICIS news]

TORONTO (ICIS news)--Methanex, the Canada-based international methanol maker, is not likely to become the target of a leveraged buyout (LBO), analysts at Toronto-based RBC Capital Markets said on Wednesday.

The analysts calculated that methanol contract prices would need to stay above $275/tonne for a long time to justify an LBO. “We believe this price assumption would be quite optimistic,” RBC said, and pointed to large new industry capacity due to come onstream in coming years. RBC expects long-term methanol prices of around $205/tonne.

The April methanol contract for the US market averaged $354/tonne, according to global chemical intelligence service ICIS pricing. That is down from a January peak at $584/tonne.

Methanex’s shares soared 8.9% on Monday, in part driven by a wave of buyouts and takeovers.

Methanex’s shares were priced at $23.18/share, down 1.61%, in early Wednesday afternoon trading.
http://www.icis.com/Articles/2007/04/18/9021777/lbo-for-canadas-methanex-unlikely--rbc.html
Introduction: Connect Your RevPi Core + RevPi DIO to Ubidots

Revolution Pi is an open, modular, and durable industrial PC based on the established Raspberry Pi while meeting the EN61131-2 standard. Equipped with the Raspberry Pi Compute Module, the RevPi Core base can be expanded seamlessly using appropriate I/O modules and fieldbus gateways for energy management, process monitoring, machine health and more.

The RevPi Core is the foundation of any application, and depending on your I/O requirements, expansion modules such as the RevPi DIO, RevPi AIO, and RevPi Gates can be attached as digital, analog, or gateway modules. In this tutorial we detail the integration of the RevPi DIO to visualize and control output signals to your machines or applications with Ubidots. The RevPi DIO digital I/O module comes with 14 digital inputs and 14 outputs, PWM (pulse width modulation), and counter inputs. For a detailed list of functionalities of the RevPi DIO, check out the Revolution Pi product brochure.

Step 1: Requirements

- Ethernet Cable
- 24 V Power Supply
- RevPi Core
- RevPi DIO
- Ubidots account - Educational License - Business License

Step 2: Hardware Setup

As with any new device setup, we recommend becoming familiar with the RevPi Core + RevPi DIO official quick start guide by Revolution Pi. Then be sure to assemble the RevPi Core + DIO correctly, referencing the below articles for additional details as needed.

- Connect your RevPi Core to Ubidots
- Connecting modules
- Mounting modules on a DIN rail
- Connecting the power supply
- Status LEDs DIO
- Digital in and outputs
- Configuration RevPi DIO
- Updating firmware on modules (Jessie)

Once your RevPi Core + RevPi DIO are configured, powered correctly, and connected to the Internet, we can continue with the firmware setup.

Step 3: Firmware Setup

1. First we must have access to the inputs and outputs of the Revolution Pi. The "python3-revpimodio" module provides all access to the IOs of the Revolution Pi and can be programmed very easily with Python3. Based on the image installed on your RevPi Core, reference this guide to make the installation properly. If you have the Jessie image on your Core, simply install the module from the Kunbus repository by running the commands below in the RevPi terminal:

- Update system packages: sudo apt-get update
- Install: sudo apt-get install python3-revpimodio2
- Update distribution (all): sudo apt-get dist-upgrade

2. Next, install the requests module for python3 by running the command below in the RevPi Core terminal:

sudo apt-get install python3-requests

3. Once each of the commands above has finished, verify everything works by opening Python3 in your RevPi Core terminal and importing the modules previously installed. Open Python3 by running the command below in the RevPi Core terminal:

python3

Once you have access to Python3, import the modules "revpimodio2" and "requests" as shown below:

import revpimodio2
import requests

If you receive an error message after importing a module, verify the issue shown and try again.

Step 4: PiCtory Setup

PiCtory lets you link up several RevPi modules, alongside the PiBridge that physically links the modules with one another, creating a configuration file. The file has to tell your RevPi Core which modules are to be found in which position, and which basic settings the modules have. To get a better idea of how it works, check out this video.

1. Open your web browser and enter the IP address of your RevPi Core in the address bar. You will then see the login window; enter the username and password where indicated. The login credentials can be found on the side of your RevPi.

- username: admin
- password: You will find it on the sticker on the side of your RevPi Core.

Then, enter the "APPS" section.

2. To start with the PiCtory settings, press the green button called "START".

3. From the device catalog select the version of your RevPi Core and assign it to the configuration board. Then, assign the RevPi DIO to the right of the RevPi Core. Remember to connect the RevPi DIO to the right of your RevPi Core using the PiBridge.

IMPORTANT NOTE: The position of the modules assigned in the PiCtory configuration has to be the same as in the physical world in order to generate the configuration file properly.

4. Now that you have the modules assigned to the configuration board, let's verify the names of the pins that we are going to use below. Two sample codes are provided: one sends a value read from an input of the RevPi DIO, and the other controls an output of the RevPi DIO.

- The input that we are going to use is Input 1; see above for the pin-out diagram. From the Value Editor section, verify that the name assigned to Input 1 is "I_1" as shown in the image below; if not, please assign it. If you skip this step the firmware code will not be able to read this pin.
- The output that we are going to use is Output 1; see above for the pin-out diagram. From the Value Editor section, verify that the name assigned to Output 1 is "O_1" as shown in the image below; if not, please assign it. If you skip this step the firmware code will miss this output and you will not be able to relay controls.

Step 5: Sending Data to Ubidots

1. To begin writing your firmware, create a Python script in the RevPi Core terminal. We are going to use the nano editor; to create the new script, run the command below:

nano ubidots_revpi_di.py

As you will see, the nano editor will open and you can begin your code.

2. Copy and paste the sample code below into the nano editor. Once pasted, assign your Ubidots Token where indicated in the script. Reference here for help locating your Ubidots token.
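The sample script itself is not reproduced in this extraction. A minimal sketch of what such a script might look like, assuming the Ubidots HTTP API v1.6 device endpoint; the device label, helper names, and the simulated input value are illustrative, not taken from the official sample:

```python
UBIDOTS_TOKEN = "YOUR-UBIDOTS-TOKEN"  # placeholder: paste your own token here
DEVICE_LABEL = "revpi-core"           # illustrative device label

def build_payload(input_value):
    # Map the DIO input reading to the variable we monitor in Ubidots
    return {"motion-detector": int(bool(input_value))}

def send_to_ubidots(payload, token=UBIDOTS_TOKEN, device=DEVICE_LABEL):
    # POST the payload to the Ubidots API (v1.6-style endpoint, an assumption)
    import requests  # installed earlier via python3-requests
    url = "https://industrial.api.ubidots.com/api/v1.6/devices/{}".format(device)
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    return requests.post(url, json=payload, headers=headers)

# On the RevPi itself the input would be read with revpimodio2, e.g.:
#   import revpimodio2
#   rpi = revpimodio2.RevPiModIO(autorefresh=True)
#   value = rpi.io.I_1.value
value = 1  # simulated reading of Input 1 (I_1)
payload = build_payload(value)
print(payload)
# On a connected RevPi you would then call:
# send_to_ubidots(payload)
```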
In this sample code we are going to read Input 1 (I_1) of the RevPi DIO module and send its status to the Ubidots cloud, to be able to monitor it and establish alarms based on the data values received.

NOTE: To save the script in the nano editor, press Ctrl+O, confirm the file name to write (ubidots_revpi_di.py) and press Enter. To close the nano editor press Ctrl+X.

3. Now let's test the script. Run the script previously created in the RevPi terminal:

python3 ubidots_revpi_di.py

Once the script begins to run, you will see the successful status code response from the Ubidots server.

4. Go to your Ubidots account and verify that the data has been received. You will see a new device automatically created in the Device section, with the device name being the MAC address of your RevPi Core. Keep reading for name changes.

Don't like the MAC address as your device's name in your Ubidots display? Don't worry! You can change the name to a friendlier one, but the device label will stay as the MAC address, so you never get confused about which device is which. Refer to the Ubidots Help Center for more on device labels and device name changes in Ubidots.

Click on any device in your Device section to visualize the variable being recorded and sent to Ubidots from our sample firmware. As you can see, our sample code has provided a motion-detector variable.

Step 6: Unit Counter Application Development

Now that the status of your input is updating in your Ubidots account, let's start playing with the Ubidots features to design and deploy your application. In this tutorial we will deploy a unit counter for boxes moving across a supply line.

First, we are going to create a rolling-window variable, which lets us compute the average, maximum, minimum, sum, or count of another variable; in this case the variable previously created (motion-detector).
For this guide, we are going to compute a sum of the variable motion-detector every minute, to know how many boxes were detected as they pass along the supply line. To create the variable, press "Add Variable". Then, select "Rolling Window".

Now select the device created > motion-detector > sum > every 1 minute, and press Save to finish. Then assign the name desired by you; in this case, we named ours "boxes".

Now that we know how many boxes our sensor is detecting, we can create an event based on the "boxes" variable to keep pace with production and be alerted if production falls behind. Our production goal is 10 "boxes" a minute. In order to maintain this goal, the RevPi will need to detect a minimum of 10 boxes per minute. To be alerted to falling production we will simply create an alert letting us know when fewer than 10 boxes were detected.

Go to the Event section of your Ubidots account and press "Add Event". Then, select the device and the variable, and assign the condition of the event. In this case, the event fires if the variable boxes is less than 10.

Now that the parameters of your event are configured, assign the action that you desire. I configured the event with an e-mail action, and as you can see above, when the event is triggered I receive the message shown.

IMPORTANT NOTE: The code provided above only reads Input 1 without establishing any sensor configuration. Based on the sensors used, add the configuration of the sensor to the code as needed.

Step 7: Receiving Data From Ubidots

In this sample application we are going to control an output of the RevPi DIO module, to be able to turn a light ON/OFF from the Ubidots cloud.

1. To be able to control an output from an Ubidots variable, you first have to create the variable. Enter your RevPi device and create a new variable by selecting "Add Variable" and pressing "Default". Then, assign it the name "light".

2. Once the variable is properly created, go to your main Ubidots Dashboard and create a control widget. Click the yellow plus (+) icon and follow the on-screen options to deploy new dashboard widgets: Select Control > Switch > RevPiCore (MAC address) > light (the variable just created) > Finish.

After constructing your new widget, the dashboard will reload and be populated with your new light control widget. This "control" widget will send its status to the RevPi DIO output to control the status of a light or any other device connected to Output 1.

3. Create a new Python script using the nano editor. To do this, run the command below in the RevPi terminal:

nano ubidots_revpi_do.py

4. Please copy and paste the sample code into the nano editor. Once pasted, assign your Ubidots Token where indicated in the script. Reference here for help locating your Ubidots token. In this sample code we are going to control an output of the RevPi DIO module to be able to turn a light ON/OFF from the Ubidots cloud.

NOTE: To save the script in the nano editor, press Ctrl+O, confirm the file name to write (ubidots_revpi_do.py) and press Enter. To close the nano editor press Ctrl+X.

5. Now let's test the script. Run the script previously created in the RevPi terminal:

python3 ubidots_revpi_do.py

Once the script begins to run, you will see the light status message.

6. Now change the status of the "Control" widget from your Ubidots Dashboard and watch the status of the RevPi DIO output change.

Step 8: Results

In just a few minutes you've integrated the RevPi Core + RevPi DIO with Ubidots, received data from your supply line for unit count, built an application to track and alert you to production requirements, and controlled the lights of your factory floor -- all by using the RevPi Core + DIO with Ubidots. To learn more, or to deploy new industrial solutions for monitoring or management, check out the full lineup of RevPi expansion modules.
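For completeness, since the control-side sample script (Step 7) is likewise not reproduced in this extraction, here is a minimal sketch assuming the Ubidots v1.6 "last value" endpoint; the device label and helper names are illustrative:

```python
UBIDOTS_TOKEN = "YOUR-UBIDOTS-TOKEN"  # placeholder token
DEVICE_LABEL = "revpi-core"           # illustrative device label

def fetch_light_value(token=UBIDOTS_TOKEN, device=DEVICE_LABEL):
    # Read the last value of the "light" variable (v1.6-style endpoint, an assumption)
    import requests  # installed earlier via python3-requests
    url = "https://industrial.api.ubidots.com/api/v1.6/devices/{}/light/lv".format(device)
    return float(requests.get(url, headers={"X-Auth-Token": token}).text)

def to_output_state(value):
    # Map the Ubidots switch value (0.0 / 1.0) to a boolean output state
    return bool(value)

# On the RevPi the output would then be driven through revpimodio2, e.g.:
#   import revpimodio2
#   rpi = revpimodio2.RevPiModIO(autorefresh=True)
#   rpi.io.O_1.value = to_output_state(fetch_light_value())
print("light ON" if to_output_state(1.0) else "light OFF")  # → light ON
```

Polling this endpoint in a loop with a short sleep is the simplest approach; the official sample may use a different mechanism.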
http://www.instructables.com/id/Connect-Your-RevPi-Core-RevPi-DIO-to-Ubidots/
13.4. lzma — Compression using the LZMA algorithm¶

New in version 3.3.

Source code: Lib/lzma.py

This module provides classes and convenience functions for compressing and decompressing data using the LZMA compression algorithm. Also included is a file interface supporting the .xz and legacy .lzma file formats used by the xz utility, as well as raw compressed streams.

The interface provided by this module is very similar to that of the bz2 module. However, note that LZMAFile is not thread-safe, unlike bz2.BZ2File, so if you need to use a single LZMAFile instance from multiple threads, it is necessary to protect it with a lock.

- exception lzma.LZMAError¶

  This exception is raised when an error occurs during compression or decompression, or while initializing the compressor/decompressor state.

13.4.1. Reading and writing compressed files¶

- lzma.open(filename, mode="rb", *, format=None, check=-1, preset=None, filters=None, encoding=None, errors=None, newline=None)¶

  Open an LZMA-compressed file in binary or text mode, returning a file object. The mode argument can be any of "r", "rb", "w", "wb", "x", "xb", "a" or "ab" for binary mode, or "rt", "wt", "xt", or "at" for text mode. The default is "rb". For binary mode, this function is equivalent to the LZMAFile constructor.

- class lzma.LZMAFile(filename=None, mode="r", *, format=None, check=-1, preset=None, filters=None)¶

  Open an LZMA-compressed file in binary mode. The mode argument can be either "r" for reading (default), "w" for overwriting, "x" for exclusive creation, or "a" for appending. These can equivalently be given as "rb", "wb", "xb" and "ab" respectively. If filename is a file object (rather than an actual file name), a mode of "w" does not truncate the file, and is instead equivalent to "a".

  When opening a file for reading, the input file may be the concatenation of multiple separate compressed streams. These are transparently decoded as a single logical stream.

13.4.2. Compressing and decompressing data in memory¶

- class lzma.LZMACompressor(format=FORMAT_XZ, check=-1, preset=None, filters=None)¶

  Create a compressor object, which can be used to compress data incrementally. The format argument specifies the container format to use. Possible values are FORMAT_XZ (the default), FORMAT_ALONE (the legacy .lzma format) and FORMAT_RAW. FORMAT_RAW does not support integrity checks, and requires that you always specify a custom filter chain (for both compression and decompression). Additionally, data compressed in this manner cannot be decompressed using FORMAT_AUTO (see LZMADecompressor).

  The check argument specifies the type of integrity check to include in the compressed data. This check is used when decompressing, to ensure that the data has not been corrupted.

  If filters is not provided, the preset argument can be used to specify a compression preset: an integer between 0 and 9 (inclusive), optionally OR-ed with the constant PRESET_EXTREME.
If neither preset nor filters are given, the default behavior is to use PRESET_DEFAULT (preset level 6). Higher presets produce smaller output, but make the compression process slower.

Note: In addition to being more CPU-intensive, compression with higher presets also requires much more memory (and produces output that needs more memory to decompress). With preset 9, for example, the overhead of an LZMACompressor object can be as high as 800 MiB. For this reason, it is generally best to stick with the default preset.

- class lzma.LZMADecompressor(format=FORMAT_AUTO, memlimit=None, filters=None)¶

  Create a decompressor object, which can be used to decompress data incrementally. For a more convenient way of decompressing an entire compressed stream at once, see decompress().

  The format argument specifies the container format that should be used. The default is FORMAT_AUTO, which can decompress both .xz and .lzma files. Other possible values are FORMAT_XZ, FORMAT_ALONE, and FORMAT_RAW.

  The memlimit argument specifies a limit (in bytes) on the amount of memory that the decompressor can use. When this argument is used, decompression will fail with an LZMAError if it is not possible to decompress the input within the given memory limit.

  The filters argument specifies the filter chain that was used to create the stream being decompressed. This argument is required if format is FORMAT_RAW, but should not be used for other formats. See Specifying custom filter chains for more information about filter chains.

  Note: This class does not transparently handle inputs containing multiple compressed streams, unlike decompress() and LZMAFile. To decompress a multi-stream input with LZMADecompressor, you must create a new decompressor for each stream.

  - decompress(data, max_length=-1)¶

    Decompress data (a bytes-like object), returning uncompressed data as bytes. Some of data may be buffered internally, for use in later calls to decompress(). The returned data should be concatenated with the output of any previous calls to decompress().

    If max_length is nonnegative, returns at most max_length bytes of decompressed data. If this limit is reached and further output can be produced, the needs_input attribute will be set to False. In this case, the next call to decompress() may provide data as b'' to obtain more of the output.
        If all of the input data was decompressed and returned (either because this was less than max_length bytes, or because max_length was negative), the needs_input attribute will be set to True.

        Attempting to decompress data after the end of stream is reached raises an EOFError. Any data found after the end of the stream is ignored and saved in the unused_data attribute.

        Changed in version 3.5: Added the max_length parameter.

13.4.3. Miscellaneous

13.4.4. Specifying custom filter chains

A filter chain can consist of up to 4 filters, and cannot be empty. The last filter in the chain must be a compression filter, and any other filters must be delta or BCJ filters.

Compression filters support the following options (specified as additional entries in the dictionary representing the filter):

- preset: A compression preset to use as a source of default values for options that are not specified explicitly.
- dict_size: Dictionary size in bytes. This should be between 4 KiB and 1.5 GiB (inclusive).
- lc: Number of literal context bits.
- lp: Number of literal position bits. The sum lc + lp must be at most 4.
- pb: Number of position bits; must be at most 4.
- mode: MODE_FAST or MODE_NORMAL.
- nice_len: What should be considered a "nice length" for a match. This should be 273 or less.
- mf: What match finder to use – MF_HC3, MF_HC4, MF_BT2, MF_BT3, or MF_BT4.
- depth: Maximum search depth used by match finder. 0 (default) means to select automatically based on other filter options.

The delta filter stores the differences between bytes, producing more repetitive input for the compressor in certain circumstances. It supports one option, dist. This indicates the distance between bytes to be subtracted. The default is 1, i.e. take the differences between adjacent bytes.

The BCJ filters are intended to be applied to machine code. They convert relative branches, calls and jumps in the code to use absolute addressing, with the aim of increasing the redundancy that can be exploited by the compressor.
These filters support one option, start_offset. This specifies the address that should be mapped to the beginning of the input data. The default is 0.

13.4.5. Examples

Reading in a compressed file:

    import lzma
    with lzma.open("file.xz") as f:
        file_content = f.read()

Creating a compressed file:

    import lzma
    data = b"Insert Data Here"
    with lzma.open("file.xz", "w") as f:
        f.write(data)

Compressing data in memory:

    import lzma
    data_in = b"Insert Data Here"
    data_out = lzma.compress(data_in)

Incremental compression:

    import lzma
    lzc = lzma.LZMACompressor()
    out1 = lzc.compress(b"Some data\n")
    out2 = lzc.compress(b"Another piece of data\n")
    out3 = lzc.compress(b"Even more data\n")
    out4 = lzc.flush()
    # Concatenate all the partial results:
    result = b"".join([out1, out2, out3, out4])

Writing compressed data to an already-open file:

    import lzma
    with open("file.xz", "wb") as f:
        f.write(b"This data will not be compressed\n")
        with lzma.open(f, "w") as lzf:
            lzf.write(b"This *will* be compressed\n")
        f.write(b"Not compressed\n")

Creating a compressed file using a custom filter chain:

    import lzma
    my_filters = [
        {"id": lzma.FILTER_DELTA, "dist": 5},
        {"id": lzma.FILTER_LZMA2, "preset": 7 | lzma.PRESET_EXTREME},
    ]
    with lzma.open("file.xz", "w", filters=my_filters) as f:
        f.write(b"blah blah blah")
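The examples cover incremental compression; the decompression counterpart can be sketched with LZMADecompressor, feeding the compressed stream in small chunks as you might when reading from a socket or a file a little at a time (the chunk size of 16 bytes is arbitrary, chosen just to exercise the buffering):

```python
import lzma

data = b"Some data\n" * 100
compressed = lzma.compress(data)

lzd = lzma.LZMADecompressor()
chunks = []
# Feed the compressed stream in small pieces; each call may return
# anywhere from zero bytes up to everything decoded so far.
for i in range(0, len(compressed), 16):
    chunks.append(lzd.decompress(compressed[i:i + 16]))

result = b"".join(chunks)
assert result == data
assert lzd.eof  # the end-of-stream marker was reached
```

Because the partial results must be concatenated, this mirrors the LZMACompressor example: collect every return value, then join.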
http://docs.activestate.com/activepython/3.6/python/library/lzma.html
How to: Host Login Pages in Your ASP.NET Web Application

Published: April 7, 2011. Updated: June 19, 2015

Applies To: Azure

- Microsoft® Azure™ Access Control Service (ACS)
- ASP.NET

This topic describes how to host a login page in your ASP.NET application. This method allows you to fully customize your login page's layout, look, and feel. The communication from your custom login page to ACS is performed over a JSON feed that ACS exposes.

To enable full control over the appearance, behavior, and location of your federated login page, ACS provides a JSON-encoded metadata feed that supplies the names, login URLs, images, and email domain names (AD FS only) for your identity providers. This feed is known as the "Home Realm Discovery Metadata Feed." ACS provides an example of a custom login page that includes the necessary code to communicate with the Home Realm Discovery Metadata Feed. This page can be downloaded and fully customized.

- Objectives
- Overview
- Summary of Steps
- Step 1 – Downloading an Example Custom Login Page
- Step 2 – Customizing the Look and Feel of Your Custom Login Page
- Step 3 – Integrating a Custom Login Page in an ASP.NET Web Application

Objectives:

- Becoming familiar with the login page options in the ACS Management Portal.
- Hosting a login page in an ASP.NET web application to provide a consistent look and feel.

Summary of Steps:

- Step 1 – Download an Example Custom Login Page
- Step 2 – Customize the Look and Feel of Your Custom Login Page
- Step 3 – Integrate a Custom Login Page in an ASP.NET Web Application

Step 1 – Download an Example Custom Login Page

This step shows how to download an example custom login page. You will customize this page to your needs and then host it in your ASP.NET application. If you have not yet been authenticated with Windows Live® ID, you will be required to do so.
After being authenticated with your Windows Live ID (Microsoft account), you are redirected to the My Projects page on the Azure portal.

1. Click the desired project name on the My Projects page.
2. On the project's detail page, locate the desired namespace, and then click the Access Control link in the Manage column.
3. On the Access Control Settings page, click Manage Access Control.
4. Scroll down to the Develop section, and then click the Application Integration link.
5. In the Login Pages section, click the Login Pages link.
6. On the Login Page Integration page, click the desired relying party application in the Relying Party Application column.
7. On the Login Page Integration: <<Your Relying Party>> page, locate the Option 2: Host the login page as part of your application section, and then click the Download Example Login Page button.
8. Save the page to a location of your choice. This is the page you will customize. The page's name is usually <<YourRealm>>LoginPageCode.html.

Step 2 – Customize the Look and Feel of Your Custom Login Page

In this step you customize the example login page you downloaded in the previous step.

- Use any HTML editor of your choice—it can be as simple as Notepad or as robust as the Visual Studio® 2010 HTML Editor.
- Design the look and feel of your custom login page as desired.

Step 3 – Integrate a Custom Login Page in an ASP.NET Web Application

In this step you integrate your newly designed custom login page with your ASP.NET web application.

- Copy your newly designed custom login page into a public location in your ASP.NET web application—usually the root folder.
- Expose the URL to your custom login page on a public page, usually Default.aspx. Unauthenticated users will click it to be authenticated.
https://msdn.microsoft.com/library/azure/gg185926.aspx
Fulltext search in SQLite and Django app

Since about version 3.5.9, SQLite contains a full-text search module called FTS3 (older releases may have FTS1 or FTS2). Using this module we can easily add fast full-text search to a Django application (or any other application using SQLite). During my tests I got FTS3 only on Python 2.6. On Python 2.5 with pysqlite it may be hard to get FTS3 (try recompiling with the -DSQLITE_ENABLE_FTS3=1 flag).

Creating virtual table

Here is an example table:

    CREATE VIRTUAL TABLE my_search USING fts3(slug, body);

FTS1 and FTS2 can be used exactly in the same way as FTS3. More on sqlite.org.

Adding data to a virtual table

We can start by importing data from an existing table. With this script (executed from the Django project folder) we can import the data:

    # -*- coding: utf-8 -*-
    from os import environ
    environ['DJANGO_SETTINGS_MODULE'] = 'settings'

    from django.db import connection, transaction
    from MYAPP.models import *

    cursor = connection.cursor()
    j = MY_SOME_MODEL.objects.all()
    iterr = 1
    for i in j:
        print iterr
        txt = i.some_txt + ' ' + i.more_txt + ' ' + i.city_of_something
        # txt should be stripped from HTML, stop words etc.
        # to get a smaller database
        cursor.execute("INSERT INTO my_search (slug, body) VALUES (%s, %s)",
                       (i.slug, txt))
        iterr += 1
    transaction.commit_unless_managed()

Indexing of new entries can be handled in Django with signals. For example, in models.py add:

    from django.db import connection, transaction
    from django.db.models import signals
    # ...

    def update_index(sender, instance, created, **kwargs):
        cursor = connection.cursor()
        txt = instance.some_txt + ' ' + instance.more_txt + ' ' + instance.city_of_something
        # txt should be stripped from HTML, stop words etc.
        # to get a smaller database
        txt = clean_to_search(txt)
        if created:  # add if object is created, not updated
            cursor.execute("INSERT INTO my_search (slug, body) VALUES (%s, %s)",
                           (instance.slug, txt))
            transaction.commit_unless_managed()

    signals.post_save.connect(update_index, sender=MY_SOME_MODEL)

Full-text in SQLite

The query looks like this:

    cursor = connection.cursor()
    cursor.execute("SELECT slug FROM my_search WHERE body MATCH %s", (query,))
    results = cursor.fetchall()

I'm testing this search solution on my JobMaster - a job-offer search site hosted on megiteam.pl. So far no problems have occurred (except for the switch from Python 2.5 to Python 2.6). It doesn't leak memory (Nginx + FastCGI), and it doesn't seem to be slow (it indexes entries faster than Whoosh, though I have no numbers on that). It's simple and easy to set up, with no Whoosh, Solr, or Xapian needed, so it's a cool way to add full-text search to SQLite-powered websites.
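The same pattern can be tried outside Django with the standard sqlite3 module. Note that plain sqlite3 uses "?" placeholders rather than the "%s" style accepted by Django's cursor; the table and rows below are illustrative, and FTS4 is used here because modern SQLite builds usually ship it alongside (or instead of) FTS3, with identical query syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# FTS4 virtual tables are created and queried just like the FTS3
# table in the article.
cur.execute("CREATE VIRTUAL TABLE my_search USING fts4(slug, body)")
rows = [
    ("django-job", "python django web developer warsaw"),
    ("c-job", "embedded c developer berlin"),
]
cur.executemany("INSERT INTO my_search (slug, body) VALUES (?, ?)", rows)
cur.execute("SELECT slug FROM my_search WHERE body MATCH ?", ("django",))
print(cur.fetchall())  # [('django-job',)]
```

MATCH performs a token search over the body column, which is why stripping HTML and stop words before inserting keeps both the index and the result noise small.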
https://rk.edu.pl/en/fulltext-search-sqlite-and-django-app/
Compare Revisions

This comparison shows the changes necessary to convert path /shark/trunk/include/ @ 921 to /shark/trunk/include/ @ 922 (Rev 921 → Rev 922).

/shark/trunk/include/kernel/var.h

21,11 → 21,11

     /** ------------
    - CVS : $Id: var.h,v 1.4 2003-03-13 13:36:28 pj Exp $
    + CVS : $Id: var.h,v 1.5 2005-01-08 14:50:58 pj Exp $
      File: $File$
    - Revision: $Revision: 1.4 $
    - Last update: $Date: 2003-03-13 13:36:28 $
    + Revision: $Revision: 1.5 $
    + Last update: $Date: 2005-01-08 14:50:58 $
      ------------
      Kernel global variables

112,14 → 112,7

      system tasks and then it ends. +*/
    - extern int mustexit; /*+ This variable is set by the system call
    -     sys_end() or sys_abort(). When a sys_end() or sys_abort is called
    -     into an event handler, we don't have to change context in the
    -     reschedule(). +*/
    - int calling_runlevel_func; /*+ this variable is set to 1 into
    + extern int calling_runlevel_func; /*+ this variable is set to 1 into
          call_runlevel_func (look at init.c) ad it is used because the
          task_activate (look at activate.c) must work in a
          different way when the system is in the global_context +*/

126,5 → 119,29

    + #define EXIT_CALLED 1
    + #define _EXIT_CALLED 2
    + extern int _exit_has_been_called; /*+ this variable is set when _exit
    +     is called. in this case, the atexit functions will not be called.
    +     Values:
    +     - 0 neither exit or _exit have been called
    +     - 1 exit has been called
    +     - 2 _exit has been called +*/
    + extern CONTEXT global_context; /*+ Context used during initialization;
    +     It references also a safe stack +*/
    + extern int runlevel; /*+ this is the system runlevel... it may be
    +     from 0 to 4:
    +     0 - init
    +     1 - running
    +     2 - shutdown
    +     3 - before halting
    +     4 - halting +*/
    + extern int event_noreschedule; /*+ This controls if the system needed
    +     to be rescheduled at the end of an IRQ/event or if must not
    +     because exit, _exit, or sys_abort_shutdown was called +*/

      __END_DECLS

      #endif /* __VAR_H__ */
http://shark.sssup.it/svn/comp.php?repname=shark&compare%5B%5D=/shark/trunk/include/@921&compare%5B%5D=/shark/trunk/include/@922
Adding Type Hints

Wing can understand several different kinds of type hints added to Python code.

PEP 484 and PEP 526 Type Annotations

Adding type hints in the styles standardized by PEP 484 (Python 3.5+) and PEP 526 (Python 3.6+) is another way to help Wing understand difficult-to-analyze code. For example, the following indicates to Wing the argument and return types of the function myFunction:

    from typing import Dict, List
    def myFunction(arg1: str, arg2: Dict) -> List:
        return arg2.get(arg1, [])

The type of variables can be indicated by a comment that follows an assignment:

    x = Something()  # type: int

Or in Python 3.6+ the type can instead be specified inline:

    x: int = Something()

The types that Wing can recognize include basic types like str and int and also the following from the typing module: List, Tuple, Dict, Set, FrozenSet, Optional, and Union.

Type Hinting with isinstance()

Another way to inform Wing of the type of a variable is to add an isinstance call to your code, for example isinstance(obj, CMyClass). This is useful in older Python versions, or when combined with debug-only runtime type checking like assert isinstance(obj, CMyClass). In cases where doing this introduces a circular import or other problems, use a conditional:

    if 0:
        import othermodule
        isinstance(obj, othermodule.CMyClass)

The source code analysis engine will still pick up on the type hint, even though it is never executed.
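Both hinting styles can be combined in one module. The sketch below (class and function names are illustrative, not part of Wing's API) annotates a lookup helper in the PEP 484 style and uses a debug-only assert isinstance check in the second:

```python
from typing import Dict, List, Optional

class CMyClass:
    def greet(self) -> str:
        return "hello"

def lookup(table: Dict[str, List[int]], key: str) -> Optional[List[int]]:
    # PEP 484 annotations describe the types with no runtime cost.
    return table.get(key)

def process(obj):
    # Debug-only runtime check that doubles as a type hint for static
    # analysis; the assert is stripped when Python runs with -O.
    assert isinstance(obj, CMyClass)
    return obj.greet()

print(process(CMyClass()))         # hello
print(lookup({"a": [1, 2]}, "a"))  # [1, 2]
```

The annotation style documents intent for every reader of the code, while the assert form also catches type mistakes at runtime during development.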
http://www.wingware.com/doc/edit/analysis-helping-type-hints
Here is the source code of the C++ program which prints Pascal's triangle. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

    /*
     * C++ Program to Print Pascal's Triangle
     */
    #include <iostream>
    using namespace std;

    int main()
    {
        int rows;
        cout << "Enter the number of rows : ";
        cin >> rows;
        cout << endl;
        for (int i = 0; i < rows; i++)
        {
            int val = 1;
            for (int j = 1; j < (rows - i); j++)
            {
                cout << "   ";
            }
            for (int k = 0; k <= i; k++)
            {
                cout << "   " << val;
                val = val * (i - k) / (k + 1);
            }
            cout << endl << endl;
        }
        cout << endl;
        return 0;
    }

    $ g++ main.cpp
    $ ./a.out
    Enter the number of rows : 5
            1
           1 1
          1 2 1
         1 3 3 1
        1 4 6 4 1

Sanfoundry Global Education & Learning Series – 1000 C++ Programs. If you wish to look at all C++ Programming examples, go to C++ Programs.
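The inner update val = val * (i - k) / (k + 1) works because each row entry is a binomial coefficient and C(n, k+1) = C(n, k) * (n - k) / (k + 1). The same recurrence can be cross-checked quickly (Python is used here only to verify the arithmetic, not as part of the program above):

```python
from math import comb

n = 4  # row index, i.e. the fifth row of the triangle
row, val = [], 1
for k in range(n + 1):
    row.append(val)
    # Integer form of the same update as in the C++ loop.
    val = val * (n - k) // (k + 1)

print(row)  # [1, 4, 6, 4, 1]
assert row == [comb(n, k) for k in range(n + 1)]
```

Because the division by (k + 1) always divides evenly at that point, the C++ version is safe with integer arithmetic as well.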
http://www.sanfoundry.com/cpp-program-print-pascal-triangle/
random OSError: I2C bus error with VLX053

Hello, I am running this library for the VL53L0X, and this is what I run as a test:

    import time
    import gc
    from machine import I2C
    import vl53l0x

    i2c = I2C(0, I2C.MASTER, baudrate=100000)
    # gc.enable()

    # Initialize I2C bus and sensor.
    vl53 = vl53l0x.VL53L0X(i2c)

    while True:
        range = vl53.read()
        print(range)
        print("free mem : ")
        print(gc.mem_free())
        time.sleep(0.1)

I tried running it on the hardware I2C pins and on other random pins. I tried various delays. I monitored the memory. Whatever I do, I just randomly get OSError: I2C bus error, which crashes everything after a few minutes of proper operation. Is there a way to bypass the error and not crash?

Yep, I used that library and the Pololu breakout board. I can't really give you an exact explanation as to what was going on, because I didn't connect the signals to a logic analyser/oscilloscope, but the only way I could get the module to start responding was with a full reboot. The sensor is very complex and the library doesn't fully implement everything, and my guess is it doesn't recover from errors very well, requiring the full reboot. It works really reliably if I don't fiddle with the wires and make sure it has a clean boot.

Try lowering your baud rate like I did:

    i2c = I2C(1, pins=('P6', 'P5'), baudrate=5000)
    v = VL53L0X.VL53L0X(i2c)

I certainly don't touch the wires when it fails, as I leave it working on my desk at night. I was testing it with an Adafruit breakout board as well. Now I am testing it with my own breakout board, without regulator and level-shifter circuits; I will see how it goes. I have to point out that I already ran the project with the same sensor on an STM32 with the official ST C libraries, without stability problems. So I guess it's not the sensor. Do you use the same library as me?

I have literally just finished a project using this sensor, and let me say it is extremely picky.
I found that the I2C communication falls over very quickly if you touch any of the wires when it's running (and trust me, I painstakingly checked every wire and solder joint to make sure it was solid); from that point onward, the module seems to fail to respond. The only way to bring it back is to completely unpower the device and try again. I also slowed down the I2C baud rate, just in case the higher speed was making things worse.
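One pragmatic way to keep the loop alive instead of crashing (a workaround, not a fix for the underlying bus problem) is to wrap the read in a retry helper that catches OSError. The sketch below is plain Python with a stand-in flaky function so it runs anywhere; on the board you would pass vl53.read instead, and possibly re-initialize the sensor after repeated failures:

```python
import time

def read_with_retry(read_fn, retries=3, delay=0.05):
    """Call read_fn(), retrying on OSError instead of crashing."""
    for attempt in range(retries):
        try:
            return read_fn()
        except OSError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)

# Stand-in for vl53.read: fails twice, then succeeds.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("I2C bus error")
    return 42

print(read_with_retry(flaky_read))  # 42
```

If every retry fails, the OSError is re-raised, so the caller can still decide to reset the bus or reboot cleanly rather than die mid-loop.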
https://forum.pycom.io/topic/3080/random-oserror-i2c-bus-error-with-vlx053
What's new in PHP 5.3?

Namespaces

Before the days of object oriented PHP, many application developers made use of verbose function names in order to avoid namespace clashes. WordPress, for example, implements functions such as wp_update_post and wp_create_user. The wp_ prefix denotes that the function pertains to the WordPress application, and reduces the chance of it clashing with any existing functions.

In an object oriented world, namespace clashes are less likely. Consider the following example code snippet, which is based on a fictional blogging application:

    <?php
    class User {
      public function set( $attribute, $value ) { ... }
      public function save() { ... }
    }

    $user = new User();
    $user->set('fullname', 'Ben Balbo');
    $user->save();

In this example, the save method will not clash with any other method, as it is contained within the User class. There's still a potential issue though: the User class might already be defined by some other part of the system if, for example, the blogging application runs within a content management system.

The solution to this issue is to use the new namespace keyword. Taking the above code again, consider the following sample files:

    <?php
    namespace MyCompany::Blog;

    class User {
      public function set( $attribute, $value ) {
        $this->$attribute = $value;
      }
      public function save() {
        echo '<p>Blog user ' . $this->fullname . ' saved</p>';
      }
    }

    <?php
    $user = new MyCompany::Blog::User();
    $user->set('fullname', 'Ben Balbo');
    $user->save();

On the surface, the advantages offered by namespacing our function might not be immediately obvious — after all, we've simply changed MyCompany_Blog_User to MyCompany::Blog::User. However, we can now create a User class for the CMS using a different namespace:

    <?php
    namespace MyCompany::CMS;

    class User {
      public function set( $attribute, $value ) {
        $this->$attribute = $value;
      }
      public function save() {
        echo '<p>CMS user ' . $this->fullname .
' saved</p>';
      }
    }

We can now use the classes MyCompany::Blog::User and MyCompany::CMS::User.

The use Keyword

Addressing classes using the full namespace still results in lengthy calls, and if you're using lots of classes from the MyCompany::Blog namespace, you might not want to retype the whole path to the class every time. This is where the use keyword comes in handy. Your application will most likely use a number of different classes at any given time. Say, for example, the user creates a new post:

    <?php
    use MyCompany::Blog;

    $user = new Blog::User();
    $post = new Blog::Post();
    $post->setUser( $user );
    $post->setTitle( $title );
    $post->setBody( $body );
    $post->save();

The use keyword is not restricted to defining namespaces in which to work. You can also use it to import single classes to your file, like so:

    <?php
    use MyCompany::Blog::User;

    $user = new User();

Namespace Aliases

Earlier, I pointed out that one advantage of namespacing is the ability to define more than one class with the same name in different namespaces. There will obviously be instances where those two classes are utilized by the same script. We could just import the namespaces; however, we also have the option of importing just the classes. To do so, we can use namespace aliasing to identify each class, like so:

    <?php
    use MyCompany::Blog::User as BlogUser;
    use MyCompany::CMS::User as CMSUser;

    $bloguser = new BlogUser();
    $bloguser->set('fullname', 'John Doe');
    $bloguser->save();

    $cmsuser = new CMSUser();
    $cmsuser->set('fullname', 'John Doe');
    $cmsuser->save();

Class Constants

Constants are now able to be defined at the class level! Note that class constants are available when you're importing namespaces, but you cannot import the constant itself. Here's an example of how we might use them:

    <?php
    namespace MyCompany;

    class Blog {
      const VERSION = '1.0.0';
    }

    <?php
    echo '<p>Blog version ' . MyCompany::Blog::VERSION . '</p>';

    use MyCompany::Blog;
    echo '<p>Blog version ' . Blog::VERSION . '</p>';

    use MyCompany::Blog::VERSION as Foo;
    echo '<p>Blog version ' . Foo . '</p>';

This will result in the following output:

    Blog version 1.0.0
    Blog version 1.0.0
    Blog version Foo

Namespaced Functions

The use of static class methods has deprecated the use of functions in the object oriented world in which we now live. However, if you do need to add a function to your package, it too will be subject to namespacing! Here's an example:

    <?php
    namespace bundle;

    function foo() {
      echo '<p>This is the bundled foo</p>';
    }

    foo(); // This prints 'This is the bundled foo'

    <?php
    function foo() {
      echo '<p>This is the global foo</p>';
    }

    require( 'lib/bundle.class.php' );

    bundle::foo(); // This prints 'This is the bundled foo'
    foo();         // This prints 'This is the global foo'

The Global Namespace

The global namespace is an important consideration when you're dealing with functions. In the previous example, you'll notice that there's no direct way of calling the global foo function from within the bundle code. The default method of resolving calls to functions is to use the current namespace. If the function cannot be found, it will look for an internal function by that name. It will not look in other namespaces automatically. To call the global foo function from within the bundle namespace, we need to target the global namespace directly. We do this by using a double colon:

    <?php
    namespace bundle;

    function foo() {
      echo '<p>This is the bundled foo</p>';
    }

    foo();   // This prints 'This is the bundled foo'
    ::foo(); // This prints 'This is the global foo'

Autoloading Namespaced Classes

If you're defining the magic __autoload function to include class definition files on demand, then you're probably making use of a directory that includes all your class files. Before we could use namespaces, this approach would suffice, as each class would be required to have a unique name. Now, though, it's possible to have multiple classes with the same name.
Luckily, the __autoload function will be passed the fully namespaced reference to the class. So in the examples above, you might expect a call such as:

    __autoload( 'MyCompany::Blog::User' );

You can now perform a string replace operation on this parameter to convert the double colons to another character. The most obvious substitute would be a directory separator character:

    function __autoload( $classname ) {
      $classname = strtolower( $classname );
      $classname = str_replace( '::', DIRECTORY_SEPARATOR, $classname );
      require_once( dirname( __FILE__ ) . '/classes/' . $classname . '.class.php' );
    }

This would take the expected call above and include the file ./classes/mycompany/blog/user.class.php.

Late Static Binding

Late static binding provides the ability for a parent class to use a static method that has been overridden in a child class. You might imagine this would be the default behaviour, but consider the following example:

    <?php
    class ParentClass {
      static public function say( $str ) {
        self::do_print( $str );
      }
      static public function do_print( $str ) {
        echo "<p>Parent says $str</p>";
      }
    }

    class ChildClass extends ParentClass {
      static public function do_print( $str ) {
        echo "<p>Child says $str</p>";
      }
    }

    ChildClass::say( 'Hello' );

You would probably expect this to return "Child says Hello". While I understand why you might expect this, you'll be disappointed to see it return "Parent says Hello". The reason for this is that references to self:: and __CLASS__ resolve to the class in which these references are used. PHP 5.3 now includes a static:: reference that resolves to the static class called at runtime:

    static public function say( $str ) {
      static::do_print( $str );
    }

With the addition of the static:: reference, the script will return the string "Child says Hello".

__callstatic

Until now, PHP has supported a number of magic methods in classes that you'll already be familiar with, such as __set, __get and __call.
PHP 5.3 introduces the __callstatic method, which acts exactly like the __call method, but it operates in a static context. In other words, the method acts on unrecognized static calls directly on the class. The following example illustrates the concept:

    <?php
    class Factory {
      static function GetDatabaseHandle() {
        echo '<p>Returns a database handle</p>';
      }

      static function __callstatic( $methodname, $args ) {
        echo '<p>Unknown static method <strong>' . $methodname . '</strong>' .
             ' called with parameters:</p>';
        echo '<pre>' . print_r( $args, true ) . '</pre>';
      }
    }

    Factory::GetDatabaseHandle();
    Factory::CreateUser();
    Factory::CreateBlogPost( 'Author', 'Post Title', 'Post Body' );

Variable Static Calls

When is a static member or method not static? When it's dynamically referenced, of course! Once again, this is an enhancement that brings object functionality to your classes. In addition to having variable variables and variable method calls, you can now also have variable static calls. Taking the factory class defined in the previous section, we could achieve the same results by invoking the following code:

    $classname = 'Factory';
    $methodname = 'CreateUser';
    $classname::$methodname();

    $methodname = 'CreateBlogPost';
    $author = 'Author';
    $posttitle = 'Post Title';
    $postbody = 'Post Body';
    $classname::$methodname( $author, $posttitle, $postbody );

You can create dynamic namespaces like so:

    <?php
    require_once( 'lib/autoload.php' );

    $class = 'MyCompany::Blog::User';
    $user = new $class();
    $user->set('fullname', 'Ben Balbo');
    $user->save();

These little touches can make your code more readable and allow you full flexibility in an object oriented sense.

MySQL Native Driver

Until version 5.3 of PHP, any interaction with MySQL usually occurred in conjunction with libmysql — a MySQL database client library. PHP 5.3's native MySQL driver has been designed from the ground up for PHP and the Zend Engine, which brings about a number of advantages.
Most obviously, the native driver is specific to PHP, and has therefore been optimised for the Zend Engine. This produces a client with a smaller footprint and faster execution times. Secondly, the native driver makes use of the Zend Engine's memory management and, unlike libmysql, it will obey the memory limit settings in PHP. The native driver has been licensed under the PHP license to avoid licensing issues.

Additional OpenSSL Functions

If you've ever had to perform any OpenSSL-related actions in your scripts (such as generating a Diffie-Hellman key or encrypting content), you'll either have performed this operation in user-land, or passed the request to a system call. A patch to the OpenSSL functionality in PHP 5.3 provides the extra functions required to perform these actions through the OpenSSL library, which not only makes your life easier and your applications faster, but allows you to reuse the proven code that comes with OpenSSL. This will be great news for anyone who's currently working with OpenID.

Improved Command Line Parameter Support

Hopefully, you'll be aware of the fact that PHP is more than just a scripting language for the Web. The command line version of PHP runs outside of the web server's environment and is useful for automating system and application processes. For example, the getopt function of PHP has been around for a while, but has been limited to a number of system types; most commonly, it didn't function under a Windows operating system environment. As of PHP 5.3, the getopt function is no longer system dependent. Hooray!

XSLT Profiling

XSLT is a complex beast, and most users of this templating mechanism will be familiar with xsltproc's profiling option. As of PHP 5.3, you can profile the transforms from within your PHP scripts.
This snippet from the example code that accompanies this article gives you an idea of how we might use it:

    $doc = new DOMDocument();
    $xsl = new XSLTProcessor();

    $doc->load('./lib/collection.xsl');
    $xsl->importStyleSheet($doc);

    $doc->load('./lib/collection.xml');
    $xsl->setProfiling("/tmp/xslt-profiling.txt");

    echo $xsl->transformToXML($doc);

    echo '<h2>Profile report</h2>';
    echo '<pre>' . file_get_contents( '/tmp/xslt-profiling.txt' ) . '</pre>';

The information produced by the profile will look something like this:

    number    match    name          mode    Calls    Tot 100us    Avg
    0                  collection            1        4            4
    1                  cd                    2        1            0
                                    Total    3        5

New Error Levels

PHP is certainly a language that has a few, er, quirks. For example, why doesn't E_ALL include all error levels? Well now it does! Yes, PHP 5.3 now includes E_STRICT as part of E_ALL. Furthermore, while E_STRICT used to report on both the usage of functionality that might become deprecated in the near future, and on bad programming practices such as defining an abstract static method, in PHP 5.3 these two errors are split into E_DEPRECATED and E_STRICT respectively, which makes a lot more sense.

Other Minor Improvements

There's a handful of other improvements coming in PHP 5.3 that either don't warrant an entire section in this article, or were untestable at the time this article was written, such as:

- Sqlite3 support via the ext/sqlite extension
- SPL's DirectoryIterator, which now implements ArrayAccess
- Two new functions: array_replace and array_replace_recursive. While these functions were undefined when tested under PHP 5.3.0, the C code that implements them suggests that they will contain similar functionality to array_merge. One exception, however, is that the array_replace function will only update values in the first array where the keys match in both arrays. Any keys that are present in the second array but don't appear in the first will be ignored.
Summary

PHP 5.3 contains much functionality that was originally slated for inclusion in PHP 6, which takes it from being a minor upgrade to a significant release that every PHP developer should start thinking about. We touched on most of the features in this article, and looked at some code that demonstrates how you might go about using these new features. Don't forget to download the code archive that accompanies this article, and have fun living on the edge!
http://www.sitepoint.com/whats-new-php-5-3/
Coding a Bing Bot

Bots are becoming increasingly popular, with good reason. Technology has come a very long way, but there are times when certain technologies seem to reach a point where they cannot go any further, a point where some new "life" is needed: a change, or an improvement. Such is the case with bots.

In the 90s, Web sites became popular. This shifted the software market to a bright and shiny new platform called the "Web." Gone were the days when developers developed software solely for the desktop market. The Web has opened up so many possibilities. People could advertise more and reach more people. New businesses started. E-Commerce started.

In the 2000s, a new player arrived: mobile applications. The coming of smartphones again changed the technology landscape. Developing for the mobile platform has opened up even more possibilities than its predecessor (the Web) could. Everything has become so easy and convenient. Now, basically ten years on since the start of the mobile age, mobile apps have reached their pinnacle. There is not much more you can do or invent that hasn't been done or invented on a mobile platform yet. So, what now? The answer is: bots.

Microsoft Bing Bots

A bot is an application that can run automated tasks. These tasks are usually repetitive, and bots perform them much faster than humans can. Because of this speed, bots can also be deployed where the required response time is shorter than a human needs to respond and process information.

I have spoken about bots before. You can find those articles here:

Let's delve a bit deeper into bots today and create a Bing Bot.

Microsoft Bot Framework SDK

The first step in creating a Bing Bot is to download the Microsoft Bot Framework SDK. The Microsoft Bot Framework is a communication service that helps connect your bots with different communication channels, such as SMS and e-mail.
The Microsoft Bot Framework includes Bot Builder to give you the tools you need to develop bots. The Bot Builder SDK is hosted on GitHub. If you are not sure how GitHub works, this article will help in understanding GitHub and Visual Studio better. Bot Builder includes several samples for you to explore and experiment with in Visual Studio.

Bot Framework Template

Step Two involves obtaining the Bot Framework Visual Studio template. You need to install this template to be able to develop any bot. Download the Bot Application template and install it by extracting the .zip file into your Visual Studio project templates directory, which is located at:

%USERPROFILE%\Documents\Visual Studio VERSION\Templates\ProjectTemplates\Visual C#\

Now, this is where the documentation seems a bit lacking, and the whole process is cumbersome. Installing the template does not mean you are ready to start developing your bot. You first have to ensure that you have all the updated ApiController classes. To update all the necessary namespaces and classes, follow these steps:

- Create the Bot Framework project. At this point in time, only C# is supported. Name the project anything descriptive. In this example, I have named my project HTGBing.
- Once the project skeleton has been created, you will notice that a lot of the classes will potentially produce errors, as shown in Figure 1.

Figure 1: Solution Explorer before update

- Right-click the project and select Manage NuGet Packages.
- In the Browse tab, type "Microsoft.Bot.Builder."

Figure 2: NuGet Packages

- Locate the Microsoft.Bot.Builder package in the list of search results, and click the Update button for that package.
- Preview the changes NuGet will make.

Figure 3: Preview NuGet Changes

- Follow the prompts to accept the changes and update the package.

Figure 4: Finished Updating NuGet Packages

Bot Framework Emulator

Step Three involves installing the Bot Framework Emulator.
You need to install the Bot Framework Emulator to test your bot properly before publishing it to the Bing portal. Download the Emulator here. After it has downloaded (it is about 60MB), run the Setup file.

Figure 5: Install Bot Framework Emulator

Figure 6 shows the Emulator in action.

Figure 6: Emulator in Action

Coding Your Microsoft Bing Bot

In the Solution Explorer, you will notice how your project is divided and what it consists of. There are three main folders:

- App_Start
- Controllers
- Dialogs

App_Start contains the files needed for your Bing Bot to start. Controllers contains the classes that interpret and digest the messages your bot sends and receives. Dialogs contains the dialog windows of your Bing Bot. Locate the Web.Config file. You will have to enter your Bing Bot's credentials in here. Obviously, you have not created the Bing Bot on the Bing Bot Portal yet, but it is good to know where to put them, so that when I cover that a bit later in this article you will know what I am talking about.
My Web.Config file that includes my Bing Bot's credentials looks like the following:

Web.Config

<?xml version="1.0" encoding="utf-8"?>
<configuration>
   <appSettings>
      <add key="BotId" value="HTGBing" />
      <add key="MicrosoftAppId" value="d8bd90d3-284f-49c7-a33a-bc047dfe7115" />
      <add key="MicrosoftAppPassword" value="E1xdHf9dNLRgwV0n9YOTEXw" />
   </appSettings>
   <system.web>
      <customErrors mode="Off" />
      <compilation debug="true" targetFramework="4.6" />
      <httpRuntime targetFramework="4.6" />
   </system.web>
   <system.webServer>
      <defaultDocument>
         <files>
            <clear />
            <add value="default.htm" />
         </files>
      </defaultDocument>
   </system.webServer>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <dependentAssembly>
            <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="8.0.0.0" />
         </dependentAssembly>
         <dependentAssembly>
            <assemblyIdentity name="System.Net.Http.Primitives" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-4.2.29.0" newVersion="4.2.29.0" />
         </dependentAssembly>
         <dependentAssembly>
            <assemblyIdentity name="System.Net.Http.Formatting" publicKeyToken="31bf3856ad364e35" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-5.2.3.0" newVersion="5.2.3.0" />
         </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>

The BotId setting must be set to the name of your bot. Once you have registered your bot on the Bing Bot Portal, Bing will supply you with the MicrosoftAppId and MicrosoftAppPassword. Locate and edit the default.htm file:

Default.htm

<!DOCTYPE html>
<html>
<head>
   <title></title>
   <meta charset="utf-8" />
</head>
<body style="font-family:'Segoe UI'">
   <h1>HTGBing</h1>
   <p>This is simply an Example Bot.</p>
</body>
</html>

Default.htm, in this case, provides the user with a simple message.
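As an aside, those appSettings entries are ordinary key/value pairs in the XML, so any XML parser can read them back. A short sketch (Python here purely for illustration, with placeholder credentials rather than the real ones) shows the idea:

```python
# Illustration only (placeholder credentials): appSettings entries are
# plain key/value pairs, so any XML parser can read them back.
import xml.etree.ElementTree as ET

CONFIG = """
<configuration>
  <appSettings>
    <add key="BotId" value="HTGBing" />
    <add key="MicrosoftAppId" value="00000000-0000-0000-0000-000000000000" />
    <add key="MicrosoftAppPassword" value="placeholder-password" />
  </appSettings>
</configuration>
"""

def read_app_settings(xml_text):
    """Return the appSettings block as a dict of key -> value."""
    root = ET.fromstring(xml_text)
    return {node.get("key"): node.get("value")
            for node in root.findall("./appSettings/add")}

settings = read_app_settings(CONFIG)
print(settings["BotId"])  # HTGBing
```

In the real project, ASP.NET reads these values for you; the point is only that the three keys are the complete set of credentials the template expects.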
Locate and edit the WebApiConfig.cs file:

WebApiConfig.cs

using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

namespace HTGBing
{
   public static class WebApiConfig
   {
      public static void Register(HttpConfiguration config)
      {
         // JSON settings
         config.Formatters.JsonFormatter.SerializerSettings
            .NullValueHandling = NullValueHandling.Ignore;
         config.Formatters.JsonFormatter.SerializerSettings
            .ContractResolver = new CamelCasePropertyNamesContractResolver();
         config.Formatters.JsonFormatter
            .SerializerSettings.Formatting = Formatting.Indented;
         JsonConvert.DefaultSettings = () => new JsonSerializerSettings()
         {
            ContractResolver = new CamelCasePropertyNamesContractResolver(),
            Formatting = Newtonsoft.Json.Formatting.Indented,
            NullValueHandling = NullValueHandling.Ignore,
         };

         // Web API configuration and services

         // Web API routes
         config.MapHttpAttributeRoutes();

         config.Routes.MapHttpRoute(
            name: "HTGBingMethod",
            routeTemplate: "api/{controller}/HTGBingMethod/{id}",
            defaults: new { id = RouteParameter.Optional }
         );
      }
   }
}

The WebApiConfig class lets you set up the messaging route; this is where you specify how messages should be sent through to the bot. The resulting endpoint resembles the URL of your bot combined with the route template.
For example, a route template such as the one supplied above might end up looking like this once done: http://localhost (or a server name):3979 (or another port number)/api/messages/HTGBingMethod/id

Add the MessagesController class:

MessagesController.cs

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

namespace HTGBing
{
   [BotAuthentication]
   public class MessagesController : ApiController
   {
      public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
      {
         if (activity.Type == ActivityTypes.Message)
         {
            await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
         }

         var response = Request.CreateResponse(HttpStatusCode.OK);
         return response;
      }
   }
}

If a Message (used to communicate between a user and a bot) was sent, the bot needs to display the Root Dialog, which you will add later. A Message that you send to your bot has various parts; these include:

- Text: The actual text being sent
- Language: The language of the message that gets sent or should be received
- Markdown: The basic formatting of the message, including paragraph markers
- Attachments: Any attachment to the message, including the type of content being sent between the user and the bot

Add the RootDialog.cs code:

RootDialog.cs

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

namespace HTGBing.Dialogs
{
   [Serializable]
   public class RootDialog : IDialog<object>
   {
      public Task StartAsync(IDialogContext context)
      {
         context.Wait(MessageReceivedAsync);
         return Task.CompletedTask;
      }

      private async Task MessageReceivedAsync(IDialogContext context,
         IAwaitable<object> result)
      {
         var activity = await result as Activity;

         if (activity.Text == "Hannes")
         {
            await context.PostAsync($"Welcome Hannes!");
         }
         else
         {
            await context.PostAsync($"Hello there, Stranger!");
         }

         context.Wait(MessageReceivedAsync);
      }
   }
}

If the user entered "Hannes," the bot will reply with a welcoming message. Otherwise, the bot will know I am a stranger and act accordingly.
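To make the route template concrete, here is a small sketch (Python, for illustration only; ASP.NET Web API performs the real route matching) of how the {controller} and {id} placeholders of the template above map onto a request path:

```python
# Illustration only: how the {controller}/{id} placeholders in the route
# template map onto a request path. ASP.NET Web API does the real matching.

def expand_route(template, **params):
    """Substitute {name} placeholders in a route template."""
    for name, value in params.items():
        template = template.replace("{%s}" % name, str(value))
    return template

path = expand_route("api/{controller}/HTGBingMethod/{id}",
                    controller="messages", id=7)
print("http://localhost:3979/" + path)
# -> http://localhost:3979/api/messages/HTGBingMethod/7
```

A request to that path is dispatched to the controller named in the {controller} segment, which is why the class added next is called MessagesController.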
Registering Your Bing Bot

To register your Bing Bot, follow these steps:

- After you have signed in, click My Bots.
- Click Create a bot.
- Click Register.

Figure 7: Register your Bing Bot

Creating a Bing Bot Profile

- Upload an icon that will represent your Bing Bot.
- Provide a Display Name for your Bing Bot.
- Provide a Handle for your Bing Bot. This value will be used in the URL for your bot and cannot be changed after registration.
- Provide a Description of your Bing Bot.

Figure 8: Bing Bot Profile

Configuring Your Bing Bot

- Provide your Bing Bot's HTTPS messaging endpoint. This is the endpoint where your bot will receive HTTP POST messages from the Bot Connector.

Figure 9: Configuring Bing Bot

- Click Create Microsoft App ID and password.
- Click Generate an app password to continue.
- Copy and securely store the password that is shown, and then click Ok.
- Click Finish and go back to Bot Framework.

Figure 10: Bing Bot Generate Application ID and Password

The App ID field is now displayed in the Bot Framework Portal.

Providing Analytics for Your Bing Bots

Provide the AppInsights Instrumentation key, AppInsights API key, and AppInsights Application ID from the corresponding resource that you've created.

Adding Owner Information

- Specify the email address(es) for the owner(s) of the bot.
- Check to indicate that you have read and agree to the Terms of Use, Privacy Statement, and Code of Conduct.
- Click Register to complete the registration process.

Figure 11: Owner information

Figure 12: Bing Bot created

Your Bing Bot is now created. Remember to supply the correct AppID and Password in your Web.Config file. You can now go back to My Bots, where your Bing Bot is shown. Clicking your newly created Bing Bot will enable you to connect to more Bing Channels and to fix any outstanding issues for your bot.

Figure 13: Connect to Channels

Please feel free to download the accompanying file for this article.
It contains the code needed to create your own bot.

Conclusion

The sooner you get acquainted with bots, the better. This article demonstrated how to create your own Bing Bot. As you can see, it is quite easy to do; you just need the right tools.
https://www.codeguru.com/csharp/csharp/cs_controls/tutorials/coding-a-bing-bot.html
CC-MAIN-2019-35
refinedweb
1,887
51.44
Member since 03-25-2015 7 3 Kudos Received 0 Solutions

02-08-2017 02:43 AM

Hi Dinesh, You can use this:

set hive.variable.substitute=true;
set hiveconf:my_date=date_sub(current_date, 10);
truncate table table_name partition (date=${hiveconf:my_date});

Hope this will help. Regards, Niranjan

Hello All, Can anyone please explain to me the difference between the Kerberos ticket_lifetime and renew_lifetime? Thanks, Niranjan

How to read the fsimage: We can use the Offline Image Viewer tool to view the fsimage data in a human-readable format. Sometimes it becomes essential to analyse the fsimage to understand the usage pattern: how many 0-byte files are created, what the space consumption pattern is, and whether the fsimage is corrupt.

Download the fsimage:

hdfs dfsadmin -fetchImage /fsimage

This will download the latest fsimage from the NameNode:

16/12/27 05:40:43 INFO namenode.TransferFsImage: Opening connection to http://<nn_hostname>:50070/getimage?getimage=1&txid=latest
16/12/27 05:40:43 INFO namenode.TransferFsImage: Transfer took 0.23s at 89.74 KB/s

Reading the fsimage: We can read the fsimage in several output formats:

1. Web is the default output format.
2. XML document.
3. Delimited.
4. ReverseXML.
5. FileDistribution is the tool for analyzing file sizes in the namespace image.

In this blog I will be focusing on two output formats: Web and Delimited.

To get the output on the web, run the oiv command with the fsimage as the input file:

hdfs oiv -i /fsimage/fsimage_0000000000000005792

16/12/27 05:48:43 INFO offlineImageViewer.FSImageHandler: Loading 9 strings
16/12/27 05:48:43 INFO offlineImageViewer.FSImageHandler: Loading 64 inodes.
16/12/27 05:48:43 INFO offlineImageViewer.FSImageHandler: Loading inode references
16/12/27 05:48:44 INFO offlineImageViewer.FSImageHandler: Loaded 0 inode references
16/12/27 05:48:44 INFO offlineImageViewer.FSImageHandler: Loading inode directory section
16/12/27 05:48:44 INFO offlineImageViewer.FSImageHandler: Loaded 32 directories
16/12/27 05:48:44 INFO offlineImageViewer.WebImageViewer: WebImageViewer started. Listening on /127.0.0.1:5978. Press Ctrl+C to stop the viewer.

Now open another terminal and run the commands below to read the fsimage:

hdfs dfs -ls webhdfs://127.0.0.1:5978/

OR

hdfs dfs -ls -R webhdfs://127.0.0.1:5978/

We can also get the output in JSON format by using curl: curl -i

To get the output into an output file:

hdfs oiv -p Delimited -i /fsimage/fsimage_0000000000000005792 -o /fsimage/fsimage.txt

We can read the data in fsimage.txt by running head fsimage.txt from the local folder.

12-24-2016 06:59 AM

Yes, after changing the directory name for 'NFSGateway dump directory' it works for me. Regards, Niranjan

12-23-2016 10:56 AM

Hi Tim, The number of nodes depends on your use case/POC, data volume, cluster usage, high availability, etc. I feel it is good to start with 5 nodes (2 master and 3 data nodes). That way you will have the option to enable HA, you can balance the workload across the 2 master nodes, and you can replicate the data with a replication factor of 3. After this you can add nodes as and when needed. Regards, Niranjan

Hi Tim, I am sharing a few links which may help you in setting up your cluster. Apart from this, I recommend you go with at least 8/16-core machines. Regards, Niranjan

08-03-2016 05:00 PM

Hi Neeraj, I am facing a similar issue after setting up the trust store and importing the certificate into the trust store.
I have set up HTTPS with a certificate, key, and password (my choice). After restarting the Ambari server and agent, I am not able to access Ambari. Am I missing something here? Could you please help to fix this issue? Thanks, Niranjan
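Coming back to the fsimage article earlier on this page: once you have a Delimited dump such as fsimage.txt, analysing it is plain text processing. A sketch (Python; the column layout here is a simplified stand-in, since the real tool's tab-separated output carries more columns) that counts the 0-byte files mentioned as one of the motivations:

```python
# Sketch: counting zero-byte files in a Delimited fsimage dump.
# The real dump is tab-separated with more columns; this simplified
# sample keeps just Path and FileSize to show the idea.
import csv
import io

SAMPLE = (
    "Path\tFileSize\n"
    "/a/empty1\t0\n"
    "/a/data.bin\t1024\n"
    "/b/empty2\t0\n"
)

def count_zero_byte_files(dump_text):
    """Count rows whose FileSize column is 0."""
    reader = csv.DictReader(io.StringIO(dump_text), delimiter="\t")
    return sum(1 for row in reader if row["FileSize"] == "0")

print(count_zero_byte_files(SAMPLE))  # 2
```

The same pattern extends to the other questions the article raises, such as aggregating space consumption per directory.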
https://community.cloudera.com/t5/user/viewprofilepage/user-id/10140
CC-MAIN-2019-39
refinedweb
675
58.79
On 05/17/2012 03:50 PM, Linus Torvalds wrote:
> Yes, I do think this is closer to the "__u32" kind of usage, and in
> general I tend to think that's true of most of the __kernel_ prefix
> things. There is very little "kernely" things about it.

The only thing "kernely" about it is that it describes the kernel ABI.

> Yes, we have to have the double underscore (or single+capitalized),
> but I think that at least personally, I'd be happier with just
> "__long" and "__ulong".

I would suggest __slong and __ulong then, to keep with the __[su]* namespace, or does the extra "s" look too much like crap?

	-hpa

--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
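The point of explicitly sized ABI type names like __u32 (versus a bare long, whose width follows the platform) can be illustrated outside C. A Python sketch using the struct module (an illustration only, not part of the thread):

```python
# Illustration (not from the thread): why an ABI header wants explicitly
# sized type names. Python's struct native "long" ("@l") follows the
# platform ABI, while standard-size codes pin the width the way
# __u32/__u64 do in the kernel's exported headers.
import struct

native_long = struct.calcsize("@l")  # platform-dependent: 4 or 8 bytes
fixed_u32 = struct.calcsize("<I")    # always 4 bytes, like a __u32
fixed_u64 = struct.calcsize("<Q")    # always 8 bytes, like a __u64
print(native_long, fixed_u32, fixed_u64)
```

The naming debate above is precisely about what to call the "follows the kernel ABI" variant without colliding with the fixed-width __u*/__s* namespace.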
http://lkml.org/lkml/2012/5/17/420
CC-MAIN-2013-48
refinedweb
131
71.04
In answer to: "I am sorry if this is a stupid question, but: How did you get the first one to work? Which one did you click on at the bottom of the various versions of the code? What kind of project type did you use? Empty? Empty Window? Etc..."

I used an empty project and loaded the source file from the site (the Visual CPP one that you can download). Then I had to change CDS_FULLSCREEN to 0 (zero). Then go to the "Project->Project Options" menu and under "Further object files or linker options:" put this line:

-lopengl32 -lglu32

Then you should be able to compile it! If you have further trouble, I can send you the project file, etc. Glad I could help, Blumojo

Hi, this is my first post, so be gentle with me ;) I am trying to create a "datatable" (similar to a database, about 10 records containing about 26 variables per record); the "datatable" will contain integer and text values. And I need to find out how to create the storage areas and how to save them to a disc file and load them back into memory. I've read a couple of books on the subject of C++ programming, but they were so confusing that they made my brain trickle out of my ears. So I was looking for some help that could explain it in a way that even a total idiot like myself could understand. Thanx, Knight^in^Shining^Armour

----- Original Message -----
From: Tomas Dolejsi
To: dev-cpp-users@...
Sent: Saturday, September 15, 2001 7:44 PM
Subject: RE: [Dev-C++] strings for win32 (ABOUT UNICODE)

Hi Giuseppe! I learned a little about Unicode and the Win32 API. Let me share this with you. Win32 contains both ASCII and wide-char versions of the functions that require text parameter(s). E.g. there's no entry point in USER32.DLL for the function called MessageBox. Instead, there are two others: MessageBoxA (for ASCII) and MessageBoxW (for wide chars). But you don't need to worry about it. Simply use the MessageBox function. How does all this work?
The real functions are defined in WinUser.h exactly like that:

int WINAPI MessageBoxA (HWND hWnd, LPCSTR lpText, LPCSTR lpCaption, UINT uType) ;
int WINAPI MessageBoxW (HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType) ;

Beside these definitions, you'll find there the following code:

#ifdef UNICODE
#define MessageBox MessageBoxW
#else
#define MessageBox MessageBoxA
#endif

It means that if you call the MessageBox function, the MessageBoxA function will be called, unless you insert the line #define UNICODE at the beginning of your own code. It is the same with the other text-based functions (TextOut, sprintf, fprintf, strlen, strstr, etc.).

There's a piece of my code I work on:

MessageBox(hWnd, "Furcadia Character Description Editor\nCopyright © 2001 Tomas Dolejsi\nVersion 1.01 (Freeware)\n\nCompiled with DevC++ 4", "Fcde", MB_OK);

If you want to use UNICODE (wide chars) instead of ASCII, adding the line #define UNICODE is one thing. You also need to convert all text (and char) constants you give the functions as parameters. The conversion is carried out with the L prefix (e.g. wchar_t * pw = L"Hello!" ;). The type wchar_t is unsigned short. There's a macro TEXT() which converts the given text if the UNICODE directive is defined.

The TEXT() macro is defined in WinNT.h:

#ifdef UNICODE
#define TEXT(q) L##q
#else
#define TEXT(q) q
#endif

That's all from me. I also attach a simple dialog example (originally from Ch. Petzold's Programming Windows). Best luck!

Hi Tomas. The code compiles on my computer too. Thanks for your help. Giuseppe

Try the Community Discussion Forum at

I found a nice editor at It's got the VC++ feel to it.

----- Original Message -----
From: Jesper
To: dev-cpp-users@...
Sent: Monday, September 17, 2001 11:30 AM
Subject: [Dev-C++] (no subject)

This has nothing to do with C++, but I was wondering if anybody knows a good mail-list for Java?
Thank you both! I tried and tried to get the malloc/free approach to work, but apparently the compiler has some "C++ safety": it wouldn't allow me to use those two. When I used the strictly C++ versions (new[] and delete[]) it worked without any problems! Guess I should bone up on my C++ :) Thanks again, Blumojo

PS (to the one requesting info on unraring): WinRar is my favorite zipping/unzipping utility - try it.

I have been using it on Windows Me, 98 and DOS; it seems to work fine on all of them. Use

#include <cstdlib>
system ("PAUSE");

to prevent the window from closing.

At 12:55 PM 9/16/01 EDT, you wrote:
>Can Dev C++ be used on Windows Me os?
>I understand that MS hid the DOS os on this version.
>When I execute a program it opens and closes.

Ray Witter
https://sourceforge.net/p/dev-cpp/mailman/dev-cpp-users/?viewmonth=200109&viewday=17
CC-MAIN-2017-13
refinedweb
896
73.17
Opened 4 years ago
Closed 3 years ago

#23739 closed defect (wontfix)

Description (last modified by )

An issue found by Victor Spitzer:

sage: NF.<a> = QuadraticField(-2)
sage: CIF(a)
...
ValueError: can not convert complex algebraic number to real interval

Change History (25)

comment:1 Changed 4 years ago by

comment:2 Changed 4 years ago by

comment:3 Changed 4 years ago by

comment:4 Changed 4 years ago by

The problem is that CIF implements its own __call__ function instead of providing an _element_constructor_ method.

comment:5 Changed 4 years ago by

- Branch set to u/mderickx/23739
- Commit set to d360262afd0929ea9334761ec5035323fa4d665d
- Status changed from new to needs_review

New commits:

comment:6 Changed 4 years ago by

see patchbot for an error

comment:7 Changed 4 years ago by

- Commit changed from d360262afd0929ea9334761ec5035323fa4d665d to f5a18fbcdd63848c8744549e86aa442a6c603ab5

Branch pushed to git repo; I updated commit sha1. New commits:

comment:8 Changed 4 years ago by

Ok, the error is fixed. Sorry for letting that one slip by.

comment:9 Changed 4 years ago by

Thanks for your work on this ticket! I think it would be better to remove the __call__() method completely, unless there is a strong reason to keep it. Also, I realized that I had an old branch lying around where I had started doing something similar (perhaps in view of #15114). I'm not sure why I didn't post it for review; I believe I stumbled upon issues related to #22029 and never got around to recycling the parts that could be. It would probably be worth comparing our approaches and taking the best of both. I don't really have time for that right now (perhaps next week...), but I've pushed my commits to u/mmezzarobba/wip/intervals in case you want to have a look.

comment:10 Changed 4 years ago by

I agree that removing __call__ would be nicer.
In fact I tried to remove __call__, but handling the optional im parameter further down the chain was getting really messy, and I still had some doctest failures that I couldn't figure out how to fix. So in the end I decided this was the cleanest solution. If you figure out a nice way to deal with this in _element_constructor_, feel free to do it; I just don't know how to do this in a clean way. Also, I think the way it works now is still quite clean, since in the case that im = None the current code behaves exactly as if there were no __call__ at all.

P.S. It is not the first place where this happens; look for example at CC.__call__, which has very similar logic.

comment:11 follow-up: ↓ 12 Changed 4 years ago by

P.S. Are you sure you pushed it?

bt-nac-c220:src mderickx$ git pull trac u/mmezzarobba/wip/intervals
fatal: Couldn't find remote ref u/mmezzarobba/wip/intervals

And I don't seem to be able to find it among your other branches as listed on

comment:12 in reply to: ↑ 11 Changed 4 years ago by

comment:13 follow-up: ↓ 14 Changed 4 years ago by

I disagree with the __call__ sending things up to the parent as a wrapped tuple. This makes the conversions from different CIF's slower because of the extra indirection, and I believe we should attempt to keep this a little more optimized. So I think it should immediately return the constructed element when im is not None. (Also, the return ans is superfluous.) Likewise, I disagree with the CC.__call__ implementation, and since the last time that was modified was 2008, it might be good to revisit that code again as well.

If you still feel like the best way forward is to remove the __call__, then I can try to help figure out what is going on and what we can do going forward.

I don't understand what you mean by this:

  since in the case that im = None the current code behaves exactly as if there were no __call__ at all.

Also, any Parent has a default an_element that.
comment:14 in reply to: ↑ 13 Changed 4 years ago by

  If you still feel like the best way forward is to remove the __call__, then I can try to help figure out what is going on and what we can do going forward.

Yes, I think removing the __call__ method completely is the way forward. If we do implement a custom __call__, we should really call the __call__ of the parent if im = None, since otherwise we break coercion. And I solved the reported issue of this ticket by fixing coercion. However, removing __call__ completely would be even nicer, since then we also have as little overhead as possible when im = None. I personally don't care too much about what happens if im != None, so I don't mind calling another function more directly in this case.

  I don't understand what you mean by this: since in the case that im = None the current code behaves exactly as if there were no __call__ at all.

What I mean by this is unrelated to performance issues, but more related to behaviour issues. In general one should not override the __call__ method, because it breaks all the nice category and coercion stuff in Sage. So I made the __call__ method behave as much as possible as if it were not there, by making the code behave as if there were no __call__ at all if im = None.

  Also, any Parent has a default an_element that.

Yes, I agree this is a good time to do this. What kind of things would be involved in this? I guess at least replacing this

ParentWithGens.__init__(self, self._real_field(), ('I',), False, category = Fields())

in the __init__ function of CIF with something else.

comment:15 Changed 4 years ago by

I looked a bit more into it, and I think that there is no nice way to get rid of __call__ and support the optional im argument, since it will be very difficult to have different code paths depending on whether im is provided or not.
The only way I see this happening with the coercion framework is by forcing every coercion morphism into CIF to be a subclass of our own categories.map class, which looks ugly. So maybe I am starting to be in favour of keeping the __call__. So am I correct in understanding that you will be happy if I leave the code as is when im = None, and change the behaviour when im != None to:

return self.element_class(x, im)

comment:16 follow-up: ↓ 17 Changed 4 years ago by

Yes, what I am advocating for is this:

def __call__(self, x, im=None):
    if im is None:
        return super(ComplexIntervalField_class, self).__call__(x)
    return complex_interval.ComplexIntervalFieldElement(self, (x, im))

For removing the old-style parent, from a quick look, it looks like the following needs to be done:

- change ParentWithGens to Parent (maybe adding one or two extra methods to keep functionality)
- specify Element = ComplexIntervalFieldElement
- replace explicit calls to ComplexIntervalFieldElement(self, args) by self.element_class(self, args)

I am pretty sure you can use _element_constructor_(self, x, im=None). Sorry for the shorter responses and not contributing actual code right now; I'm in the process of moving.

comment:17 in reply to: ↑ 16 Changed 4 years ago by

comment:18 Changed 4 years ago by

- Commit changed from f5a18fbcdd63848c8744549e86aa442a6c603ab5 to 6d01b5d02d4b3c479c7119d5458631aef1341ea6

Branch pushed to git repo; I updated commit sha1. New commits:

comment:19 Changed 4 years ago by

Ok, I tried to address all your comments. I made a minor modification to CC as well. I didn't turn it into a new-style class, since it seemed like this would be way more work than for CIF.

comment:20 Changed 4 years ago by

- Branch changed from u/mderickx/23739 to public/rings/conversions_to_CIF-23739
- Commit changed from 6d01b5d02d4b3c479c7119d5458631aef1341ea6 to f2bc7fc50cf1338f0cefca315b1c15cd1d6162b9
- Reviewers set to Travis Scrimshaw

Great, thank you. Looks good.
A good followup will be making this use UniqueRepresentation instead of its own custom cache. However, that will probably be a bit more invasive of a change, so I think it is better as a separate ticket. I added a little more documentation for _coerce_map_from_ to match what the code does. I also implemented coercions from CC to match the real version:

sage: RIF.coerce_map_from(RR)
Call morphism:
  From: Real Field with 53 bits of precision
  To:   Real Interval Field with 53 bits of precision

The rest of my changes are PEP8 and trivial formatting. If my changes look good, then positive review.

New commits:

comment:21 Changed 4 years ago by

- Status changed from needs_review to needs_work

many failing doctests

comment:22 Changed 4 years ago by

Most of them should be trivial, but there are a few where I don't know what is going on without investigating. Maarten, would you be willing to make the first attempt? Also, I don't quite understand the change in multi_polynomial.pyx. Can you explain the rationale?

comment:23 Changed 3 years ago by

- Milestone changed from sage-8.1 to sage-duplicate/invalid/wontfix

comment:24 Changed 3 years ago by

- Status changed from needs_work to positive_review

comment:25 Changed 3 years ago by

- Resolution set to wontfix
- Status changed from positive_review to closed

closing positively reviewed duplicates

First I thought this was because there was no convert or coerce map registered. But apparently the coercion framework is not even called in this conversion, because the coercion framework works without problems!
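The delegation pattern advocated in comment 16 can be illustrated outside Sage with minimal stand-in classes (hypothetical, for illustration only; these are not Sage's real classes): __call__ falls through to the generic machinery unless the extra im argument is given, so the coercion path is left untouched.

```python
# Hypothetical stand-ins (not Sage's real classes) showing comment 16's
# pattern: delegate to the superclass when im is None, otherwise build
# the element directly.

class BaseField(object):
    def __call__(self, x):
        # stand-in for the generic conversion/coercion machinery
        return ("converted", x)

class ComplexIntervalFieldSketch(BaseField):
    def __call__(self, x, im=None):
        if im is None:
            return super(ComplexIntervalFieldSketch, self).__call__(x)
        return ("element", x, im)

CIF_sketch = ComplexIntervalFieldSketch()
print(CIF_sketch(2))     # ('converted', 2) -- coercion path untouched
print(CIF_sketch(2, 3))  # ('element', 2, 3) -- explicit real/imag parts
```

This keeps the one-argument case behaving exactly as if there were no __call__ at all, which is the property the discussion above turns on.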
https://trac.sagemath.org/ticket/23739
CC-MAIN-2021-21
refinedweb
1,601
59.94
In my app, users only enter numbers. Change keyboard to numeral-friendly?

Hi all. Using Pythonista on iOS. I want to help my daughter to make a super simple program to quiz her on multiplication tables. Since she's just learning Python, this will be the bare-bones, 15 or 20 line Python script. BUT, I realize that when we run the app, the input() command will cause the iPad to invoke the normal Pythonista keyboard. And in our app, when a user enters the answers to the questions, the user is asked to only enter numbers. They would pretty much be typing:

64 <return>
12 <return>
72 <return>
...

But that standard iPad keyboard has the numerals positioned where you must press an additional key to switch to access the actual numbers. Ug. In practice, that means the user would have to press that extra "switch to numerals" key each and every time they go to type in the next answer. Is there a way I can fairly simply tell Python/Pythonista to invoke a keyboard that has numerals already pressable, without pressing the additional key? But ideally not adding a large extra chunk of code to the app to totally build a GUI? Whatever your simplest recommendation is would be appreciated! Even if it's a non-code suggestion, like some third-party keyboard I could download to the iPad :)

[[embarrassed that just the other day a similar question was posted. At the link here (but his is different in that he is building a GUI, I believe.): ]]

apple, in its infinite wisdom, decided that ipad users don't need big number entry. You can use t.keyboard_type=ui.KEYBOARD_DECIMAL_PAD and you at least start on the number screen. This works for ui.TextField and ui.TextView.

Oh, for input(), try setting the Extended Keyboard With Numbers under Keyboard settings in Pythonista. (gear menu from file manager)

Thanks for this input... your last point, about setting "extended keyboard with numbers", was exciting -- but then on my iPad, inside gear -> keyboard, there is no option for that.
Tht seemed funny, so I googled it and I admit that I saw someone on Github (user zrzka) saying to someone else "We no longer have extended keyboard with numbers.... if we will be adding it back, I will then xxxxx." Bummer! If any other thoughts pop into mind, let me know :) @estephan500 Create a my_input.py with import ui def my_input(title): tf = ui.TextField() tf.name = title tf.text = '' tf.keyboard_type = ui.KEYBOARD_DECIMAL_PAD def tf_action(sender): sender.close() tf.action = tf_action tf.present('sheet',hide_title_bar=False) tf.begin_editing() tf.wait_modal() return tf.text And your daughter may use it with from my_input import my_input x = my_input('test') print(x) ok, extended keyboard maybe went away in ios10.. @cvp Heyyy! That is very appreciated! So you pretty much showed me that I can avoid being afraid of the "ui" stuff. :) Thanks! ... I was in a hurry, so I only did one fast test of your method, and it works, that is exciting. But, I need to learn more about this. Because, for example, I noticed that the input text label appears on the screen, but on the ipad I have to tap on the input label if I want the keyboard to appear. Do you know a way to tell the app to "focus" on that input element, so that the keyboard immediately appears? (Maybe I will see this problem go away when I actually build this function into our app.) Anyway, this is cool, thank you again! The tf.begin_editing() is supposed to focus the textfield, however, you might need to add a small delay after presenting, before this approach works. @estephan500 On my iPad mini 4, the focus works but I suppose the @JonB solution will help in your configuration Great. huge thanks for this. Someday I will have all these methods in my mind and I can stop feeling "on the outer edges" of them... now also, I see I didn't need any delay. the flaw was that somehow the code that I copied into my app was not the full code that appears above. what I have re-copied works great. thanks. 
@estephan500 Mea culpa. I had put up some code without the tf.begin_editing and modified it a few minutes later to add this line... Thus, if you copied it in between, you got the first version. Sorry.

Hi! I'm the guy who asked. I have posted there how to change the type of keyboard using the UI Designer. It's super easy (once you know how to do it; I had to ask Ole). I hope it helps. Javier
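One further thought, as an editor's aside: the quiz logic itself can live in plain Python, independent of how the answer is collected, so the same script works with input() on the desktop and with the my_input helper posted earlier in the thread on the iPad. This is only a sketch; the function and parameter names here are my own invention, not part of Pythonista.

```python
import random

def make_questions(n, low=2, high=9, seed=0):
    """Generate n multiplication questions as (a, b) pairs."""
    rng = random.Random(seed)  # seeded so a run is reproducible
    return [(rng.randint(low, high), rng.randint(low, high)) for _ in range(n)]

def grade(questions, answers):
    """Count how many answers match their question's product."""
    return sum(1 for (a, b), ans in zip(questions, answers) if a * b == ans)

def run_quiz(ask=input, n=5):
    """Run the quiz; pass ask=my_input on iOS to get the decimal pad."""
    score = 0
    for a, b in make_questions(n):
        reply = ask('%d x %d = ?' % (a, b))
        if reply.strip().isdigit() and int(reply) == a * b:
            score += 1
    return score
```

Because the prompt function is a parameter, swapping keyboards is a one-argument change rather than a rewrite.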
https://forum.omz-software.com/topic/4934/in-my-app-users-only-enter-numbers-change-keyboard-to-numeral-friendly
Pysense loses connection to WiPy - WiPy reset - Andreas B. last edited by

Hi folks, I have a Pysense board updated to 0.0.7 firmware. The corresponding WiPy runs on 1.10.0.b1. After some seconds (sometimes 3, sometimes 5 or even 7 heartbeats of the WiPy (blue LED)) the WiPy stops working. Also the (for a short time enabled) connection over the serial port fails and I can only reset the WiPy. Then it starts over again: some heartbeats and then failure. The serial port shows me that after the reset everything seems to be OK:

    76 load:0x4009fa00,len:0
    ho 12 tail 0 room 4
    load:0x4009fa00,len:15344
    entry 0x400a070c
    MicroPython v1.8.6-839-g536c958c on 2017-11-15; WiPy with ESP32
    Type "help()" for more information.

Then I am able to use the REPL (i.e. 2+2 >>> 4 ...) for some seconds, and after that the WiPy stops and I have no connection any more. Reset starts the process over again, but it only works for some seconds. For me it seems to be a hardware problem (firmware 0.0.4: same problem; different WiPy: same problem; different computer: same problem). The used WiPy runs perfectly with a usual expansion board. I would appreciate it if anyone can give me a hint about what causes the problem and how to fix it! Thanks in advance and greetings, Andreas

- Xykon administrators last edited by

Can you please first try another firmware update (we have released 1.10.2.b1). If that doesn't help, please try to format the internal flash:

    import os; os.mkfs('/flash')

If that still doesn't help, please email support@pycom.io
https://forum.pycom.io/topic/2222/pysense-looses-connection-to-wipy-wipy-reset/2
US8850138B2 - System and method for managing page variations in a page delivery cache - Google Patents

Publication number: US8850138B2
Application number: US 13/619,861
Grant status: Grant
Prior art keywords: cache, page, content, request

Description

This is a continuation of and claims a benefit of priority under 35 U.S.C. §120 from U.S. patent application Ser. No. 12/208,072, filed Sep. 10, 2008, now U.S. Pat. No. 8,463,998, entitled "SYSTEM AND METHOD FOR MANAGING PAGE VARIATIONS IN A PAGE DELIVERY CACHE," which is a continuation-in-part application of U.S. patent application Ser. No. 11/825,909, filed Jul. 10, 2007, now U.S. Pat. No. 7,818,506, entitled "METHOD AND SYSTEM FOR CACHE MANAGEMENT," which is a continuation-in-part application of U.S. patent application Ser. No. 10/733,742, filed Dec. 11, 2003, now U.S. Pat. No. 7,360,025, entitled "METHOD AND SYSTEM FOR AUTOMATIC CACHE MANAGEMENT," which claims priority from Provisional Application No. 60/433,408, filed Dec. 13, 2002, entitled "EXTENSIBLE FRAMEWORK FOR CACHING AND CONFIGURABLE CACHING PARAMETERS." All applications cited within this paragraph are fully incorporated herein by reference.

This disclosure relates generally to disk-based caching systems and, more particularly, to high performance content delivery systems utilizing such caching systems to service web site requests. Even more particularly, this disclosure provides systems and methods for managing page variations in a page delivery cache.
in which to communicate data, one of the most ubiquitous of which is the World Wide Web, also referred to as the web. Users can "click on" links in the web pages being viewed to access other web pages; each time the user clicks on a link, another page is requested. Commercial web sites typically want to serve different versions of a page to different requesters even though those requesters all request the same Uniform Resource Locator (URL). For example, the front page of a site is often addressed simply as /index.html or /index.jsp, but the site operator may wish to deliver different versions of that page depending upon some property of the requester. Common examples are versions of a page in different languages. The selection of an appropriate variant to serve is commonly known as content negotiation, which is defined in the Hypertext Transfer Protocol (HTTP) specification. Existing content negotiation schemes (as typified in Request for Comments (RFCs) 2616, 2295, and 2296) apply to general characteristics of content: the language used in the content, the style of markup, etc. A user-agent (i.e., a client application used with a particular network protocol, particularly the World Wide Web) can include in a request a description of its capabilities and preferences in these areas, and a server can deduce the best version of content to send in response. For example, a client application may specify, via headers in an HTTP request, that it prefers to receive English, French, and German content, in that order; if the server receives a request for a page that is available only in French and German, it will send the French version in response. This preference will only be applied when there is a choice of representations which vary by language. It's also possible for the server to respond with a list of possible options with the expectation that the client application will then employ its own algorithm to select one of those options and request it.
These schemes rely on a certain degree of cooperation on the client application's part, and concern variations that the client application can reasonably be expected to be aware of. Currently, some servers support server-driven content negotiation as defined in the HTTP/1.1 specification. Some servers also support transparent content negotiation, which is an experimental negotiation protocol defined in RFC 2295 and RFC 2296. Some may offer support for feature negotiation as defined in these RFCs. An HTTP server like Apache provides access to representations of resource(s) within its namespace, with each representation in the form of a sequence of bytes with a defined media type, character set, encoding, etc. A resource is a conceptual entity identified by a URI (RFC 2396). In order to negotiate a resource, a server typically needs to be given information about each of the variants. In an HTTP server, this can be done in one of two ways: consult a type map (e.g., a *.var file) which names the files containing the variants explicitly, or do a search, where the server does an implicit filename pattern match and chooses from among the results. In some cases, representations or variants of a resource are stored in a cache. When a cache stores a representation, it associates it with the request URL. The next time that URL is requested, the cache can use the stored representation. However, if the resource is negotiable at the server, this might result in only the first requested variant being cached, and subsequent cache hits might return the wrong response. To prevent this, the server can mark all responses that are returned after content negotiation as non-cacheable by the clients.

Embodiments disclosed herein can increase the performance of a content delivery system servicing web site requests. In some embodiments, these web site requests are HTTP requests.
Embodiments disclosed herein can allow developers of business applications to cache different versions of content to be served in response to HTTP requests for the same URL. Example versions of a page include pages in different languages, public content for anonymous users versus secure content for authenticated users, or different versions for users belonging to different service categories (e.g., gold, silver, bronze patrons, or frequent flyers over specific mileage thresholds). In some embodiments, the following additional attributes can be used to determine what version of content to serve: - 1. The values of one or more HTTP request headers. - 2. The values of one or more HTTP cookies. - 3. The value of the HTTP query string. - 4. The existence (or lack thereof) of one or more HTTP request headers. - 5. The existence (or lack thereof) of one or more HTTP cookies. - 6. The values of one or more session attributes. In some embodiments, these are J2EE (Java Platform, enterprise edition) session attributes. In embodiments disclosed herein, when a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata, then extracts the values of the extra attributes, if any are specified, and uses them in conjunction with the request URL to select an appropriate response. The above-described scheme has many advantages. One advantage is the simplification of the URL structure of a web site. Previously, the variation dimensions have to be encoded into the URLs. For example, a common practice for multi-lingual sites is to segregate the content by adding a language specifier at the top of the URL space, as with /en/index.jsp, /fr/index.jsp, etc. 
This is tractable because it's reasonable to assume that the language choice applies to all of the pages under the language specifier, but quickly becomes intractable when individual pages are subject to different sets of variation parameters. Furthermore, it becomes difficult or impossible for humans to predict or remember URLs. For the same reason, these schemes also interfere with so-called "search engine optimization": the design of URLs that lead to high relevance ratings for major search engines. Embodiments disclosed herein can allow a site designer to keep this variation information out of the URLs themselves, thereby helping with both of those problems.

Software implementing embodiments disclosed herein may be implemented in suitable computer-executable instructions that may reside on a computer-readable storage medium. Computer 120 can include central processing unit ("CPU") 122, read-only memory ("ROM") 124, random access memory ("RAM") 126, hard drive ("HD") or storage memory 128, and input/output device(s) ("I/O") 129. I/O 129 can include a keyboard, monitor, printer, electronic pointing device (e.g., mouse, trackball, etc.), or the like. In some embodiments, the page generator software component is a subcomponent of the content delivery software component. Each of computers 120, 140, 160, and 180 is an example of a data processing system.

Updating the cache may be done in the background, without receiving a new request from a user; this allows content in the cache to be kept current and may drastically improve the performance and response time of a web site. This application manager may be part of a content deployment agent coupled to a content management system. The deployment agent may receive updated content, and the application manager may take notice when content has been updated on the deployment agent.
The application manager may also be responsible for the assembly of content to be delivered by an application server in response to a request from a user. Embodiments of an example application manager are described in the above-referenced U.S. patent application Ser. No. 11/825,909, entitled "METHOD AND SYSTEM FOR CACHE MANAGEMENT", filed Jul. 10, 2007. Examples of how a request can be regenerated and used to update cached content can be found in U.S. Pat. No. 7,360,025, entitled "METHOD AND SYSTEM FOR AUTOMATIC CACHE MANAGEMENT," the content of which is incorporated herein by reference.

Within this disclosure, content may be an application or piece of data provided by a web site, such as an HTML page, Java application, or the like. In many cases, one piece of content may be assembled from other pieces of content chosen based on a request initiated by a user of the web site.

In embodiments disclosed herein, the randomness of the addresses is useful for balancing purposes: it ensures that no one directory will be overloaded with entries, regardless of how many variants of a single URL might exist.

In some embodiments, cache 25 stores copies of pages that have been provided to client 120 through response(s) 40 to previous request(s) 50. This way, system 170 can quickly service a new request for a page if it can be determined that the new request is asking for the same page. However, in some cases, it may be desirable to serve a variation of the page even if the requester is requesting the same page. For example, it may be that user A is a gold-level customer and user B is a silver-level customer and they both want to view a marketing page containing certain market promotions. System 170 may specify that gold-level customers should be presented with a variation of the marketing page containing gold-level promotions and that silver-level customers should be presented with another variation of the marketing page containing silver-level promotions.
In some embodiments, such a presentation decision is made by a page generator component of content delivery system 170. System 170 may store these variants in cache 25 for high performance delivery of content to end users.

In embodiments disclosed herein, Page Generator (PG) 70 is responsible for actually generating pages and their variations. While PG 70 creates a page, it also records information about the page's variations, i.e., whether the page varies according to request headers, query string, cookie values, or session values. In some embodiments, PG 70 records one or more of the following pieces of information about the page's variations:
- The values of one or more HTTP request headers.
- The values of one or more HTTP cookies.
- The value of the HTTP query string.
- The existence (or lack thereof) of one or more HTTP request headers.
- The existence (or lack thereof) of one or more HTTP cookies.
- The values of one or more session variables. In some embodiments, these are J2EE (Java Platform, Enterprise Edition) session variables.

When the page is placed in the cache, the accumulated metadata about the variation scheme is also placed in the cache. If the page isn't subject to any variations, the page and the metadata are located at the same cache address, which is a function of the page's URL only.

A variation scheme represents a logical family of pages, and each member of that family lives at a different secondary address. The primary address holds the variation scheme and does not hold a member of the family. If a page exists at the primary address, that page is not subject to variation, by definition.

A method for managing page variations in a page delivery cache will now be described in detail. When PG 70 generates a page, it also records metadata and dependency information associated with the page.
In embodiments disclosed herein, cache readers can be located at the web-tier or at the application-tier of a configuration. Cache readers at the web-tier have no convenient access to session variables because the sessions are stored in the application-tier. Thus, although a web-tier cache reader can successfully resolve references to variations that involve only request headers, query string, or cookies in the variation scheme, it cannot readily resolve references to variations that involve session values in the variation scheme. In some embodiments, session value variation schemes require that a cache reader be deployed in the application-tier.

When a web-tier cache reader receives a request for a page subject to session-variable variations and consults the cache at the primary address, it will either get a cache miss or it will find a metadata entry that indicates that the requested page uses a session-variable variation scheme. In either case, it simply forwards the request to the back-end. There the request is fielded by the application-tier cache reader. That cache reader handles the request as it would any other. It first computes the primary address for the request and probes its cache; if it finds no entry it forwards the request to the page generator, and if it finds an entry it examines the metadata to determine whether a variation scheme is in effect. In this case a variation scheme is in effect, so it uses the metadata to determine what request and session data are needed to compute the secondary address, and then it probes the cache at that secondary address. If an entry exists, it uses that entry to satisfy the request, and if no entry exists it forwards the request to the page generator. The web-tier cache reader and the application-tier cache reader can be associated with different caches, or they can share the same cache.
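To make the addressing scheme concrete, here is a small Python sketch of the two-level lookup described above. It is not the patented implementation: the hash choice, the entry layout, and the way the variant key is built from the metadata are all illustrative assumptions of mine.

```python
import hashlib

def primary_address(url):
    # The primary cache address is a function of the request URL only.
    return hashlib.sha1(url.encode('utf-8')).hexdigest()

def secondary_address(url, variant_key):
    # One secondary address per member of the page's variation family.
    return hashlib.sha1((url + '|' + variant_key).encode('utf-8')).hexdigest()

def lookup(cache, request):
    """Resolve a request; return the cached page body or None on a miss.

    cache maps addresses to either ('page', body) entries or to
    ('scheme', attribute_names) metadata entries.  request is a dict
    holding the URL plus any extra attributes (headers, cookies,
    session values) the variation scheme may consult.
    """
    entry = cache.get(primary_address(request['url']))
    if entry is None:
        return None                      # miss: forward to the page generator
    kind, payload = entry
    if kind == 'page':
        return payload                   # page not subject to variation
    # A variation scheme is in effect: build the variant key from the
    # extra attributes named in the metadata, then probe again.
    key = '&'.join('%s=%s' % (name, request.get(name, '')) for name in payload)
    variant = cache.get(secondary_address(request['url'], key))
    return variant[1] if variant is not None else None
```

A writer would store the ('scheme', ...) entry at the primary address and each variant at its own secondary address; a miss at either level falls through to the page generator, matching the flow described above.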
As will be appreciated by one skilled in the art, CR 30a can be implemented in several ways. In some embodiments, CR 30a may function the same way as CR 30w. This type of implementation has the advantage that CR 30a works the same way regardless of whether CR 30w exists or not. In some embodiments, CR 30w may forward information that could help to reduce the workload on CR 30a. For example, in some cases where requests are forwarded by CR 30w, CR 30w may have already calculated the primary address and checked for an entry at that address before forwarding a request to the backend. In some cases, CR 30w may also have already determined that the request is subject to variations (step 713) before forwarding the request to the backend. Thus, CR 30w may simply forward the information with the request to CR 30a. This way, CR 30a would not have to calculate the primary address, check for an entry at that address, and/or determine if the request is subject to variations.

Although the present disclosure has been described with reference to specific embodiments, other embodiments will be apparent to, and may be made by, persons of ordinary skill in the art having reference to this description. Accordingly, the scope of the present disclosure should be determined by the following claims and their legal equivalents.
https://patents.google.com/patent/US8850138B2/en
Hi there, I'm reposting here an opinion piece of mine I sent to Chet and the various security lists after the patch was made and prior to the disclosure, for others to comment on.

Several things discussed in here:
- hardening to avoid exposing the parser to untrusted input
- duplicate env var entries
- impact of localisation on parsing.

> I am strongly in favor of Florian's suggestion to only interpret
> variables that are listed in some special BASH_FUNCDEFS variable as
> functions.

BASH_FUNCDEFS (which contains the names (and names only) of functions to be exported) would be hardening. It would be effective as hardening, but I'd argue it would not be the right fix.

The problem is that the environment is a shared namespace. Those "foo=() { body;}" variables have a content that is bash specific, so should have a BASH_ name prefix, especially considering that they can have any name. Those variables are not like other env vars (HOME, PATH...) that are shared by all applications; they are just for transferring information from one bash instance to another bash instance. Those variables are not useful to anything but bash.

Now, even in bash, having "foo=() {...}" is inconsistent, and that raises possibly another security concern. In bash, functions and variables have different name spaces, but if they have to be exported to the environment, there's a clash. And bash handles it in a dangerous way. You can have:

    foo=bar
    foo() { echo "$foo"; }

That's fine. Now, if you export the "foo" *function* and if the "foo" *variable* was already in the environment, bash puts *both* in the environment:

    $ foo=bar bash -c 'foo() { echo "$foo"; }; foo'
    bar

fine.

    $ foo=bar bash -c 'foo() { echo "$foo"; }; export -f foo; env' | grep foo
    foo=bar
    foo=() { echo "$foo"

(two env vars by the same name!).
    $ foo=bar bash -c 'foo() { echo "$foo"; }; export -f foo; bash -c foo'
    bar

That works here because bash scans its environment and assigns the one with "() {" to a function and the one with "bar" to a scalar, and bash is directly invoking bash. However, many other shells (mksh, ksh93, zsh, ash, fish, not (t)csh, yash) remove duplicate env vars from the environment (not always the same ones, depending on the shell), so things like:

    foo=bar bash -c 'foo() { echo x; }; export -f foo; sh -c "bash -c foo"'

won't necessarily work (just because "foo" happened to be in the environment, which "export -f foo" did not overwrite).

It's also arguably dangerous because it allows an attacker to hide his malicious function behind a scalar variable (though one might argue that it's not only a bash problem, since when the environment contains "foo=bar" and "foo=baz", which one is picked seems to depend on what tool queries the environment). Typically, glibc's getenv will pick the first one. That would possibly defeat an environment sanitizing wrapper, or something that logs the content of some variables. Thankfully, sudo is not fooled here: it will preserve several instances of the same variable (like DISPLAY, TERM...) but will remove all the ones that start with "() {".

I think a better fix would be to have all the function *definitions* in one env var like:

    BASH_FUNCDEFS='f(){ echo x;} g(){ echo y;} > $(date +%H)'

I would not be against bash removing env entries with duplicate names like most other shells do (or even the glibc doing that, as I don't expect it being useful and it sounds to me like an avenue for more security vulnerabilities).

Another consideration, and that was one of the aggravating aspects in CVE-2014-0475: Chet's patch restricts the name of exported functions to "legal identifiers". That's required because with:

    var=value

(where value starts with "() {"), bash runs:

    varvalue

So variables with names like "some code; foo" would cause more problems.
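To illustrate why duplicate entries matter for filtering (in Python, since the point is language-independent): a filter that, like glibc's getenv(), only looks at the first entry for a name would pass "foo=bar" and never see a "foo=() {" entry hiding behind it. A sanitizer that scans the whole raw envp list and drops both duplicates and function-looking values avoids that. The function name and exact policy below are mine, not from any real tool.

```python
def sanitize_environ(envp):
    """Drop duplicate names and entries whose value starts with '() {'.

    envp is a list of 'NAME=value' strings, as in execve()'s envp.
    Only the first occurrence of a name is kept, mirroring what
    glibc's getenv() would see; later duplicates are discarded so a
    function body cannot hide behind an earlier scalar entry.
    """
    seen = set()
    clean = []
    for entry in envp:
        name, sep, value = entry.partition('=')
        if not sep or name in seen:
            continue                      # malformed or duplicate: drop
        seen.add(name)
        if value.startswith('() {'):
            continue                      # looks like an exported bash function
        clean.append(entry)
    return clean
```

With this policy, "foo=bar" followed by "foo=() { evil; }" yields only "foo=bar": the second, function-shaped entry is removed as a duplicate before its value is ever consulted.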
Now, what bash considers a legal identifier depends on the locale.

CVE-2014-0475 was a glibc vulnerability giving attackers the ability to use locale definition files anywhere on the file systems (so malicious ones as well) with LC_ALL=../../path/to/it. The bash behaviour of deciding on /legal/ identifiers (and token separators!) based on the locale meant one could run arbitrary code (for instance by defining a locale where space was anything but "s" and "h" and relying on a command line in ~/.bashrc that contained something like blashbli (which happened to be true in Debian's default .bashrc)).

It might be worth checking that Chet's patch cannot be bypassed in locales where for instance "(", ")" and ";" are in the "alpha" character class.

--
Stephane
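As an editor's illustration of the last point (a Python sketch of the idea, not bash's actual check): a name filter that validates against an explicit ASCII pattern sidesteps the locale-dependent notion of a "legal identifier" entirely, because explicit character ranges in a regular expression do not consult LC_CTYPE the way POSIX character classes like [:alpha:] do.

```python
import re

# ASCII-only identifier check, deliberately independent of the locale:
# explicit ranges, not locale-sensitive classes such as [[:alpha:]].
_IDENT = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')

def is_safe_function_name(name):
    """Accept only names a C-locale shell would treat as identifiers."""
    return bool(_IDENT.match(name))
```

A name like "some code; foo" is rejected outright, and a hostile locale cannot widen what the pattern accepts.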
https://lists.gnu.org/archive/html/bug-bash/2014-09/msg00183.html
Today we will look into a Java zip file example. We will also compress a folder and create a zip file using a Java program.

Java ZIP

java.util.zip.ZipOutputStream can be used to compress a file into ZIP format. Since a zip file can contain multiple entries, ZipOutputStream uses java.util.zip.ZipEntry to represent a zip file entry.

Java ZIP File

Creating a zip archive for a single file is very easy: we need to create a ZipOutputStream object from the FileOutputStream object of the destination file. Then we add a new ZipEntry to the ZipOutputStream and use a FileInputStream to read the source file into the ZipOutputStream object. Once we are done writing, we need to close the ZipEntry and release all the resources.

Java Zip Folder

Zipping a directory is a little tricky: first we need to get the files list as absolute paths, then process each one of them separately. We need to add a ZipEntry for each file and use a FileInputStream to read the content of the source file into the ZipEntry corresponding to that file.

Java Zip Example

Here is the Java program showing how to zip a single file or zip a folder in Java.
package com.journaldev.files;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipFiles {

    List<String> filesListInDir = new ArrayList<String>();

    public static void main(String[] args) {
        File file = new File("/Users/pankaj/sitemap.xml");
        String zipFileName = "/Users/pankaj/sitemap.zip";

        File dir = new File("/Users/pankaj/tmp");
        String zipDirName = "/Users/pankaj/tmp.zip";

        zipSingleFile(file, zipFileName);

        ZipFiles zipFiles = new ZipFiles();
        zipFiles.zipDirectory(dir, zipDirName);
    }

    /**
     * This method zips the directory
     * @param dir
     * @param zipDirName
     */
    private void zipDirectory(File dir, String zipDirName) {
        try {
            populateFilesList(dir);
            // now zip files one by one
            // create ZipOutputStream to write to the zip file
            FileOutputStream fos = new FileOutputStream(zipDirName);
            ZipOutputStream zos = new ZipOutputStream(fos);
            for (String filePath : filesListInDir) {
                System.out.println("Zipping " + filePath);
                // for ZipEntry we need to keep only relative file path, so we used substring on absolute path
                ZipEntry ze = new ZipEntry(filePath.substring(dir.getAbsolutePath().length() + 1, filePath.length()));
                zos.putNextEntry(ze);
                // read the file and write to ZipOutputStream
                FileInputStream fis = new FileInputStream(filePath);
                byte[] buffer = new byte[1024];
                int len;
                while ((len = fis.read(buffer)) > 0) {
                    zos.write(buffer, 0, len);
                }
                zos.closeEntry();
                fis.close();
            }
            zos.close();
            fos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * This method populates all the files in a directory to a List
     * @param dir
     * @throws IOException
     */
    private void populateFilesList(File dir) throws IOException {
        File[] files = dir.listFiles();
        for (File file : files) {
            if (file.isFile())
                filesListInDir.add(file.getAbsolutePath());
            else
                populateFilesList(file);
        }
    }

    /**
     * This method compresses the single file to zip format
     * @param file
     * @param zipFileName
     */
    private static void zipSingleFile(File file, String zipFileName) {
        try {
            // create ZipOutputStream to write to the zip file
            FileOutputStream fos = new FileOutputStream(zipFileName);
            ZipOutputStream zos = new ZipOutputStream(fos);
            // add a new Zip Entry to the ZipOutputStream
            ZipEntry ze = new ZipEntry(file.getName());
            zos.putNextEntry(ze);
            // read the file and write to ZipOutputStream
            FileInputStream fis = new FileInputStream(file);
            byte[] buffer = new byte[1024];
            int len;
            while ((len = fis.read(buffer)) > 0) {
                zos.write(buffer, 0, len);
            }
            // Close the zip entry to write to zip file
            zos.closeEntry();
            // Close resources
            zos.close();
            fis.close();
            fos.close();
            System.out.println(file.getCanonicalPath() + " is zipped to " + zipFileName);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output of the above Java zip example program is:

    /Users/pankaj/sitemap.xml is zipped to /Users/pankaj/sitemap.zip
    Zipping /Users/pankaj/tmp/.DS_Store
    Zipping /Users/pankaj/tmp/data/data.dat
    Zipping /Users/pankaj/tmp/data/data.xml
    Zipping /Users/pankaj/tmp/data/xmls/project.xml
    Zipping /Users/pankaj/tmp/data/xmls/web.xml
    Zipping /Users/pankaj/tmp/data.Xml
    Zipping /Users/pankaj/tmp/DB.xml
    Zipping /Users/pankaj/tmp/item.XML
    Zipping /Users/pankaj/tmp/item.xsd
    Zipping /Users/pankaj/tmp/ms/data.txt
    Zipping /Users/pankaj/tmp/ms/project.doc

Notice that while logging the files to zip in the directory, I am printing the absolute path. But while adding a zip entry, I am using the relative path from the directory, so that when we unzip it, it will create the same directory structure. That's all for the Java zip example.

I would like to know how to zip individual files that all end in the extension .txt and then have them preserve their names.

Hi, Is there anything which allows compressing a file in different modes, like fast compression or full compression? Thanks, Nitish Kashyap

how to preserve file permissions? Thank You …..
Hiii… can you give me the source code to unzip a file with its hierarchy? Suppose I have a zip file "DICOM.zip" and it contains a directory called "root". Now "root" contains "subroot", and "subroot" contains three folders "A", "B", "C". "A" contains some files like a1.img, a2.img, a3.img etc., and "B" and "C" also contain the same type of files. So my question is: I want a Java program to unzip my "DICOM.zip" so that it extracts all the files with the same hierarchy as the source directory. Thanks in advance.

ZipFiles zipFiles = new ZipFiles(); — how can I handle this error?

Hi, how can I save many folders into the zip file?

I discovered your post on ZipOutputStream after I had finished writing my own code to create an epub (which is a zipfile by another name) from its constituent files. My code is more or less equivalent to yours, and it seems to run clean, but when I look at the epub zipfile after running the code, the file is there but it is length=0. Since there are no exceptions, I have been searching the web for posts like yours. I have run out of ideas. What might I have missed?

Can you try to run in debug mode and check if the file is getting written or not.

Hello Pankaj, can you please help me? I mentioned my problem above. Thanks in advance.
https://www.journaldev.com/957/java-zip-file-folder-example
Mat img = imread(..) vs img = imread(..)

I have img as a member in the .h file, but when I use img = imread(..); in the .cpp file, it crashes. However, using Mat img = imread(..); works. What is the difference? Thanks! Note: OpenCV 3.0 with Qt.

"it crashes" IS NO ERROR MESSAGE! What exactly happens? Which message do you get?

Is your member in the header file declared public? Is it inside a namespace? The second declaration won't even use your global variable, but rather create a local img object within the scope of your cpp file, and not update the value of your global img object. However, I am not sure if you can make a non-static object in header files. Never done so before, only know the principle.

@FooBar there weren't any error messages... a window popped up saying it stopped running and that it is checking for a solution, and in my Qt Creator IDE, after the program closed, it says: The program has unexpectedly finished.

@StevenPuttemans yes, it was in the header file, but private instead. I tried putting it as public, but it was still the same... I actually don't want to make it a local variable, but it seems like it won't crash if I do it that way... Could it be that it wasn't initialized to something? Not sure how this works.. :(

@pingping__ it seems to me that you need to dig deeper into the C++ basics of how header and source code files work together. This seems to be not an OpenCV problem at all.
https://answers.opencv.org/question/56777/mat-img-imread-vs-img-imread/
I'm trying to get MoinMoin 1.5.4 running with Python 2.3.4 (installed from an SCO Skunkworks binary).

Python 2.3.4 (#1, Aug 27 2004, 18:22:39)
[GCC 2.95.3 20030528 (SCO/p5)] on sco_sv3

One of the MoinMoin modules attempts to import cgi and triggers this traceback:

Traceback (most recent call last):
  File "./moin.cgi", line 43, in ?
    from MoinMoin.request import RequestCGI
  File "/usr/moin/lib/python2.3/site-packages/MoinMoin/request.py", line 10, in ?
    import os, re, time, sys, cgi, StringIO
  File "/opt/K/SCO/python/2.3.4/usr/lib/python2.3/cgi.py", line 39, in ?
    import urllib
  File "/opt/K/SCO/python/2.3.4/usr/lib/python2.3/urllib.py", line 26, in ?
    import socket
  File "/opt/K/SCO/python/2.3.4/usr/lib/python2.3/socket.py", line 44, in ?
    import _socket
ImportError: dynamic linker: /usr/bin/python: relocation error: symbol not found: getaddrinfo; referenced from: /opt/K/SCO/python/2.3.4/usr/lib/python2.3/lib-dynload/_socketmodule.so

getaddrinfo is not supported in OpenServer 5, but it is available under the UDK. That is, the function is present in /udk/usr/lib/libsocket.so. I've tried adjusting LD_LIBRARY_PATH without success.

My questions:
1) Is the UDK library off-limits for Python?
2) Is there an option to not use the BSD library function?
3) Finally, is there a trick to searching for shared libraries?

Thanks,
Dave Harris
http://fixunix.com/sco/112897-getaddrinfo-not-found-sco-openserver-5-0-5-a-print.html
How I configured Kong Plugins…

…with the Kong API gateway, using its plugins and advanced plugins. I was using the Kong ingress controller version 0.9.0 and API gateway version 2.0.4 for this blog, which were the latest releases at the time of writing. I was using the db-less mode of Kong, which was perfectly fine for my internal gateway requirement. But if you are installing Kong as an external gateway, it is recommended to install the full version of Kong along with a database (e.g. Redis, Postgres) to get some additional features like analytics, management dashboards etc.

Kong Plugins

Kong plugins enable us to implement policies for the applications running in a Kubernetes or non-Kubernetes environment. Below is a basic introduction, with sample templates, to some of the most commonly used API gateway features available via Kong plugins.

1) Authentication Plugins

1.1) JWT Plugin - Verify requests containing HS256- or RS256-signed JSON Web Tokens (as specified in RFC 7519). Each of your consumers will have JWT credentials (public and secret keys) which must be used to sign their JWTs.

1.2) OpenID Plugin - This plugin can be used to implement Kong as an OAuth 2.0 resource server and/or as an OpenID Connect relying party between the client and the upstream service. I will discuss later how to use this Kong OIDC plugin with an external authorization server to implement a more advanced authentication flow for your APIs, with maximum security against possible external attacks.

1.3) Key Auth Plugin - Once we apply this plugin, consumers must add their key either in a query string parameter or in a header to authenticate their requests.

2) Security Plugins

2.1) IP Restriction Plugin - Restrict access to a service or a route by either whitelisting or blacklisting IP addresses. Single IPs, multiple IPs or ranges in CIDR notation like 10.10.10.0/24 can be used.
2.2) Bot Detection Plugin - Protects a service or a route from the most common bots, and has the capability of whitelisting and blacklisting custom clients. Regular expressions are checked against the User-Agent header for whitelisting or blacklisting.

2.3) CORS Plugin - We can use this plugin to support secure cross-origin requests and data transfers between servers.

3) Traffic Control Plugins

3.1) Rate Limiting Plugin - Rate-limit how many HTTP requests a consumer can make in a given period of seconds, minutes, hours, days, months or years. API throttling can be configured based on IP, consumer, credential, service or header. Advanced use cases of the rate-limiting plugin with a consumer policy, along with the OIDC plugin, will be discussed in a later article.

3.2) Request Size Limiting Plugin - Block incoming requests whose body is greater than a specific size in megabytes. For security reasons it is recommended to enable this plugin for any service you add to Kong, to prevent a DoS (Denial of Service) attack.

3.3) Request Validator Plugin - Validate requests before they reach the upstream service. Supports request body validation according to a schema.

3.4) ACL Plugin - Restrict access to an API by whitelisting or blacklisting consumers using arbitrary ACL group names. This plugin requires an authentication plugin to have already been enabled on the API.

3.5) Canary Release Plugin - Reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users. This plugin also enables rolling back to your original upstream service, or shifting all traffic to the new version.

3.6) Response Rate Limiting Plugin - This allows you to limit the number of requests a consumer can make based on a custom response header returned by the upstream service.
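To illustrate what one of these plugin definitions looks like, here is a sketch of a rate-limiting KongPlugin resource for the Kubernetes ingress controller. The plugin name `rate-limiting` and the `config` keys follow the Kong plugin of that name, but the metadata name and the limits are made-up values you would adapt to your own setup:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-per-minute   # hypothetical name
plugin: rate-limiting
config:
  minute: 20        # allow at most 20 requests per minute
  policy: local     # keep counters on each Kong node
```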
4) Analytics and Monitoring Plugins

4.1) Prometheus Plugin - Expose metrics related to Kong and proxied upstream services in the Prometheus exposition format, which can be scraped by a Prometheus server.

5) Transformation Plugins

5.1) Request Transformer Plugin - This has enhanced capabilities to match portions of incoming requests using regular expressions, save those matched strings into variables, and substitute those strings into transformed requests via flexible templates.

5.2) Response Transformer Advanced Plugin - Transform the response sent by the upstream server on the fly in Kong, before returning the response to the client.

6) Logging Plugins

6.1) Kafka Log Plugin - Publish request and response logs to a Kafka topic.

All the above Kong plugins can be configured either as KongPlugins or KongClusterPlugins. The main difference is that cluster plugins are accessible from any namespace of the cluster, while KongPlugins are specific to a single namespace. Another important thing to note is that when you configure any plugin with the label "global: true", that plugin will be applied to all the ingress endpoints by default. If the plugin doesn't have this label, then you need to specifically configure the required plugin list inside your API endpoints.

How to apply the Plugins

Once you create the Kong plugins, the next step is to use them in your APIs. To expose your service via the Kong API gateway, you need to create an Ingress resource and add the required plugin list under the annotations. You can also use the commands below to apply the plugins to an existing ingress resource or service:

oc patch ingress ingress-ns -p '{"metadata":{"annotations":{"konghq.com/plugins":"plugin 1, plugin 2"}}}'

oc patch service service-ns -p '{"metadata":{"annotations":{"konghq.com/plugins":"plugin 1, plugin 2"}}}'

You can read further on more advanced scenarios in my next article. Thanks!
https://danuka-praneeth.medium.com/how-i-configured-kong-plugins-2134887bb2cb?responsesOpen=true&source=---------2----------------------------
QML Image display without window

sandro4912: OK, I'm new to QML and I found this example code to run:

import QtQuick 2.12

Image {
    id: root
    source: "images/background.png"
}

However, if I run this nothing shows up. I created an empty Qt Quick app with Creator and it looks like it needs a window. So this worked well:

import QtQuick 2.14
import QtQuick.Window 2.14

Window {
    visible: true
    width: 640
    height: 480

    Image {
        id: root
        source: "images/background.png"
    }
}

So, short stupid question: can Image get displayed without the Window?

Reply: How do you execute that code?

sandro4912: I just made an empty Qt Quick project and hit build and run in debug. I think the QML gets run with its corresponding main.cpp.

sandro4912: That was not the question. The question is: can this work without Window, like this?

import QtQuick 2.12

Image {
    id: root
    source: "images/background.png"
}

J.Hilk: AFAIK QQmlApplicationEngine needs a Window or similar top-level QML component. If you use a QQuickWidget / QQuickView you may get away with an Item/Image as the root element.

sandro4912: I modified my main to this:

#include <QApplication>
#include <QQuickWidget>

int main(int argc, char *argv[])
{
    QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QApplication app(argc, argv);

    auto view = new QQuickWidget;
    view->setSource(QStringLiteral("qrc:/main.qml"));
    view->show();

    return app.exec();
}

Now I wonder what the difference is between QQuickWidget and QQmlApplicationEngine. Why not do everything with QQuickWidget?

SGaist: Hi, because you might want to only use QML. QQuickWidget adds the widget module as a dependency, which might not be what you want.
https://forum.qt.io/topic/113089/qml-image-disply-without-window
My most recent column went live on MSDN today. It discusses different methods of dynamic execution of code. Executive summary: avoid Type.InvokeMember() if you can. [Update: one of my readers sent me an article with some additional timings]

public class YearClass
{
    const int StartDate = 1900;
    const int EndDate = 2050;

    int[] arr = new int[EndDate - StartDate + 1];

    public int this[int num]
    {
        get { return arr[num - StartDate]; }
        set { arr[num - StartDate] = value; }
    }
}

public class Test
{
    public static void Main()
    {
        YearClass yc = new YearClass();
        yc[1950] = 5;
    }
}

The feedback we got around E&C has been fairly polarized. There is one group who feels the way Rich does, and really wants E&C. There is another group that actively doesn't want E&C, as they feel that it encourages the wrong kind of programming. And then there's a group in the middle who typically see the value of E&C but don't think it's critical.

Min provides a new link to his excellent document on debugging the debugger.

Your task: 1) figure out what this reference is; 2) figure out why it's an appropriate one.

Jim Newkirk, one of the authors of NUnit and lately of the Microsoft PAG group, will be joining the C# team in a chat on C# and Unit Testing this Thursday. Jim has a book entitled "Test-Driven Development in Microsoft .NET" on the way. He's also a co-author of "Extreme Programming in Practice".
http://blogs.msdn.com/b/ericgu/archive/2004/03.aspx?PostSortBy=MostViewed&PageIndex=1
Red Hat Bugzilla – Bug 36760: --allow-secret-key-import not in documentation
Last modified: 2005-10-31 17:00:50 EST

Neither the man page nor the output of --help mentions --allow-secret-key-import, although the error output when trying to import a secret key without that option does mention it.

At first I thought this was an error in our original backport of the fix to this bug, but it appears to be an intended omission. From the NEWS file in 1.0.5:

* Secret keys are no longer imported unless you use the new option --allow-secret-key-import. This is a kludge and future versions will handle it in another way.
https://bugzilla.redhat.com/show_bug.cgi?id=36760
Feature #3731 (open): Easier Embedding API for Ruby

Description

=begin
With Ruby 1.9, it has become more difficult to embed Ruby in a C application correctly. It would be nice if there was a clearly documented and simple C API to embed Ruby in C programs. I know Ruby was not designed from the start to be embedded, but Ruby was used before in several products as an embedded scripting language. It should therefore be possible to do so in a more straightforward way.

Kind Regards, B.
=end

Related issues

Updated by asher (Asher Haig) almost 11 years ago

What is difficult about this? Example:

#include <ruby.h>

int main( int argc __attribute__((unused)), char *argv[] __attribute__((unused)) )
{
    ruby_init();

    // Ruby options are just like /usr/bin/ruby:
    // interpreter name, script name, argv ...
    char* options[] = { "", "rpdom_test.rb" };
    void* node = ruby_options( 2, options );

    return ruby_run_node( node );
}

Struck me as potentially a documentation issue, but the actual functionality I think is pretty straightforward?

Asher

Updated by Beoran (Beoran Aegul) almost 11 years ago

Dear Asher,

Well, I certainly agree documentation should be improved for this use case! :) However, the problem with what you suggest is that it doesn't work more than once. For example, I get a segmentation violation (crash) with a "You may encounter a bug of Ruby interpreter." on the second call to ruby_run_node() if I try this on Ruby 1.9.1p0:

#include <ruby.h>

RUBY_GLOBAL_SETUP

int main( int argc __attribute__((unused)), char *argv[] __attribute__((unused)) )
{
    RUBY_INIT_STACK
    ruby_init();
    {
        char* options[] = { "", "-e", "puts 'hello'" };
        void* node = ruby_options( 2, options );

        char* options2[] = { "", "-e", "puts 'world'" };
        void* node2 = ruby_options( 2, options2 );

        ruby_run_node( node );
        ruby_run_node( node2 );
    }
}

It may be that this bug is fixed in later versions, but I still have to install them to try. Another problem is that it's a rather unwieldy API.
It's nicer if you can run multiple scripts, or call rb_eval_string multiple times. It's also nice if you can catch syntax errors in the Ruby files loaded and handle them on the C side.

Kind Regards, B.

Updated by shyouhei (Shyouhei Urabe) almost 11 years ago
- Status changed from Open to Feedback

Updated by nahi (Hiroshi Nakamura) over 9 years ago

Updated by shyouhei (Shyouhei Urabe) over 9 years ago
- Status changed from Open to Assigned

Updated by ko1 (Koichi Sasada) about 9 years ago

I'll write a document for embedding Ruby in your application. At first in Japanese; second, translated into English. Please revise it after I finish.

Updated by ko1 (Koichi Sasada) over 8 years ago
- Priority changed from Normal to 5

Ah, sorry for pending. I need one more body (brain?) of myself.

Updated by mame (Yusuke Endoh) over 8 years ago
- Target version changed from 2.0.0 to 2.6

ko1, I think you are not divided yet. This looks like a big feature. I'm setting this to next minor. Please try it if you have time after you finish other tasks.

-- Yusuke Endoh mame@tsg.ne.jp

Updated by naruse (Yui NARUSE) over 3 years ago
- Target version deleted (2.6)
https://bugs.ruby-lang.org/issues/3731
If you are getting started with Python, it can be a bit confusing understanding what a lambda is. Let's see if I can clarify a few things straight away.

A lambda is also called an anonymous function, and that's because lambdas don't have a name. To define a lambda in Python you use the keyword lambda followed by one or more arguments, a colon (:) and a single expression.

We will start with a simple example of a lambda function to get used to its syntax, and then we will look at how a Python lambda function fits different scenarios. To practice all the examples we will use the Python interactive shell.

Let's get started!

How to Use a Lambda in Python

Let's start with the syntax of a lambda function. A lambda function starts with the lambda keyword followed by a list of comma-separated arguments. The next element is a colon (:) followed by a single expression.

lambda <argument(s)> : <expression>

As you can see, a lambda function can be defined in one line. Let's have a look at a very simple lambda that multiplies the number x (the argument) by 2:

lambda x : 2*x

Here's what happens if I define this lambda in the Python shell:

>>> lambda x : 2*x
<function <lambda> at 0x101451cb0>

I get back a function object. Interestingly, when I define a lambda I don't need a return statement as part of the expression. What happens if I include the return statement in the expression?

>>> lambda x : return 2*x
  File "<stdin>", line 1
    lambda x : return 2*x
                    ^
SyntaxError: invalid syntax

We receive a syntax error. So, no need to include return in a lambda.

How to Call a Lambda Function in Python

We have seen how to define a lambda, but how can we call it? First we will do it without assigning the function object to a variable. To do that we just need to use parentheses: we surround the lambda expression with parentheses, followed by parentheses surrounding the arguments we want to pass to the lambda.

(lambda x : 2*x)(2)

This is the output when we run it:

>>> (lambda x : 2*x)(2)
4

Sweet!
We also have another option. We can assign the function object returned by the lambda function to a variable, and then call the function using the variable name.

>>> multiply = lambda x : 2*x
>>> multiply(2)
4

I feel this kind of goes against the idea of not giving a name to a lambda, but it was worth knowing...

Before continuing with this article, make sure you try all the examples we have seen so far to get familiar with lambdas. I still remember the first time I started reading about lambdas, I was a bit confused. So don't worry if you feel the same right now :)

Passing Multiple Arguments to a Lambda Function

In the previous sections we have seen how to define and execute a lambda function. We have also seen that a lambda can have one or more arguments; let's see an example with two arguments.

Create a lambda that multiplies the arguments x and y:

lambda x, y : x*y

As you can see, the two arguments are separated by a comma.

>>> (lambda x, y : x*y)(2,3)
6

As expected, the output returns the correct number (2*3).

A lambda invoked as soon as it's defined, like in the example above, is an IIFE (Immediately Invoked Function Expression). It's basically a way to say that the lambda function is executed immediately, as soon as it's defined.

Difference Between a Lambda Function and a Regular Function

Before continuing to look at how we can use lambdas in our Python programs, it's important to see how a regular Python function and a lambda relate to each other. Let's take our previous example:

lambda x, y : x*y

We can also write it as a regular function using the def keyword:

def multiply(x, y):
    return x*y

You notice immediately three differences compared to the lambda form:

- When using the def keyword we have to specify a name for our function.
- The two arguments are surrounded by parentheses.
- We return the result of the function using the return statement.
Assigning our lambda function to a variable is optional (as mentioned previously):

multiply_lambda = lambda x, y : x*y

Let's compare the objects for these two functions:

>>> def multiply(x, y):
...     return x*y
...
>>> multiply_lambda = lambda x, y : x*y
>>> multiply
<function multiply at 0x101451d40>
>>> multiply_lambda
<function <lambda> at 0x1014227a0>

Here we can see a difference: the function defined using the def keyword is identified by the name "multiply", while the lambda function is identified by a generic <lambda> label.

And let's see what is returned by the type() function when applied to both functions:

>>> type(multiply)
<class 'function'>
>>> type(multiply_lambda)
<class 'function'>

So, the type of the two functions is the same.

Can I Use If Else in a Python Lambda?

I wonder if I can use an if else statement in a lambda function...

lambda x: x if x > 2 else 2*x

This lambda should return x if x is greater than 2, otherwise it should return x multiplied by 2. Firstly, let's confirm that its syntax is correct...

>>> lambda x: x if x > 2 else 2*x
<function <lambda> at 0x101451dd0>

No errors so far... let's test our function:

>>> (lambda x: x if x > 2 else 2*x)(1)
2
>>> (lambda x: x if x > 2 else 2*x)(2)
4
>>> (lambda x: x if x > 2 else 2*x)(3)
3

It's working well...

...at the same time, you can see that our code can become more difficult to read as we make the lambda expression more and more complex. As mentioned at the beginning of this tutorial, a lambda function can only have a single expression. This makes it applicable to a limited number of use cases compared to a regular function. Also remember: you cannot have multiple statements in a lambda expression.

How to Replace a For Loop with Lambda and Map

In this section we will see how lambdas can be very powerful when applied to iterables like Python lists.
Let’s begin with a standard Python for loop that iterates through all the elements of a list of strings and creates a new list in which all the elements are uppercase. countries = ['Italy', 'United Kingdom', 'Germany'] countries_uc = [] for country in countries: countries_uc.append(country.upper()) Here is the output: >>> countries = ['Italy', 'United Kingdom', 'Germany'] >>> countries_uc = [] >>> >>> for country in countries: ... countries_uc.append(country.upper()) ... >>> print(countries_uc) ['ITALY', 'UNITED KINGDOM', 'GERMANY'] Now we will write the same code but with a lambda. To do that we will also use a Python built-in function called map that has the following syntax: map(function, iterable, ...) The map function takes another function as first argument and then a list of iterables. In this specific example we only have one iterable, the countries list. Have you ever seen a function that takes another function as argument before? A function that takes another function as argument is called an Higher Order Function. It might sound complicated, this example will help you understand how it works. So, what does the map function do? The map function returns an iterable that is the result of the function passed as first argument applied to every element of the iterable. In our scenario the function that we will pass as first argument will be a lambda function that converts its argument into uppercase format. As iterable we will pass our list. map(lambda x: x.upper(), countries) Shall we try to execute it? >>> map(lambda x: x.upper(), countries) <map object at 0x101477890> We get back a map object. How can we get a list back instead? We can cast the map object to a list… >>> list(map(lambda x: x.upper(), countries)) ['ITALY', 'UNITED KINGDOM', 'GERMANY'] It’s obvious how using map and lambda makes this code a lot more concise compared to the one where we have use the for loop. 
Use Lambda Functions with a Dictionary

I want to try to use a lambda function to extract a specific field from a list of dictionaries. This is something that can be applied in many scenarios. Here is my list of dictionaries:

people = [{'firstname':'John', 'lastname':'Ross'}, {'firstname':'Mark', 'lastname':'Green'}]

Once again I can use the map built-in function together with a lambda function. The lambda function takes one dictionary as argument and returns the value of the firstname key.

lambda x : x['firstname']

The full map expression is:

firstnames = list(map(lambda x : x['firstname'], people))

Let's run it:

>>> firstnames = list(map(lambda x : x['firstname'], people))
>>> print(firstnames)
['John', 'Mark']

Very powerful!

Passing a Lambda to the Filter Built-in Function

Another Python built-in function that you can use together with lambdas is the filter function. Below you can see its syntax, which requires a function and a single iterable:

filter(function, iterable)

The idea here is to create an expression that, given a list, returns a new list whose elements match a specific condition defined by a lambda function. For example, given a list of numbers I want to return a list that only includes the negative ones. Here is the lambda function we will use:

lambda x : x < 0

Let's try to execute this lambda, passing a couple of numbers to it so it's clear what the lambda returns.

>>> (lambda x : x < 0)(-1)
True
>>> (lambda x : x < 0)(3)
False

Our lambda returns a boolean:

- True if the argument is negative.
- False if the argument is positive.

Now, let's apply this lambda to a filter function:

>>> numbers = [1, 3, -1, -4, -5, -35, 67]
>>> negative_numbers = list(filter(lambda x : x < 0, numbers))
>>> print(negative_numbers)
[-1, -4, -5, -35]

We get back the result expected: a list that contains all the negative numbers.

Can you see the difference compared to the map function? The filter function returns a list that contains a subset of the elements in the initial list.
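A small related trick, not covered by the tutorial itself: filter also accepts None in place of the lambda, in which case it simply keeps the truthy elements. Whether you prefer it over an explicit lambda is a matter of taste:

```python
mixed = [0, 1, '', 'hello', None, 42]

# filter(None, ...) keeps only the truthy elements
truthy = list(filter(None, mixed))
print(truthy)  # [1, 'hello', 42]

# equivalent version with an explicit lambda
truthy_explicit = list(filter(lambda x: bool(x), mixed))
print(truthy_explicit)  # [1, 'hello', 42]
```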
How Can Reduce and Lambda Be Used with a List

Another common Python built-in is the reduce function, which belongs to the functools module (so remember to run "from functools import reduce" first).

reduce(function, iterable[, initializer])

In this example we will ignore the initializer; you can find more details about it here. What does the reduce function do? Given a list of values:

[v1, v2, ..., vn]

It applies the function passed as argument to the first two elements of the iterable. The result is:

[func(v1,v2), v3, ..., vn]

Then it applies the function to the result of the previous iteration and the next element in the list:

[func(func(v1,v2),v3), v4, ..., vn]

This process continues left to right until the last element in the list is reached. The final result is a single value. To understand it in practice, we will apply a simple lambda that calculates the sum of two numbers to a list of numbers:

>>> reduce(lambda x,y: x+y, [3, 7, 10, 12, 5])
37

Here is how the result is calculated:

((((3+7)+10)+12)+5)

Does it make sense? Let's see if we can also use the reduce function to concatenate strings in a list:

>>> reduce(lambda x,y: x + ' ' + y, ['This', 'is', 'a', 'tutorial', 'about', 'Python', 'lambdas'])
'This is a tutorial about Python lambdas'

It works!

Lambda Functions Applied to a Class

Considering that lambdas can be used to replace regular Python functions, can we use lambdas as class methods? Let's find out!

I will define a class called Gorilla that contains a constructor and a run method that prints a message:

class Gorilla:
    def __init__(self, name, age, weight):
        self.name = name
        self.age = age
        self.weight = weight

    def run(self):
        print('{} starts running!'.format(self.name))

Then I create an instance of this class called Spartacus and execute the run method on it:

Spartacus = Gorilla('Spartacus', 35, 150)
Spartacus.run()

The output is:

Spartacus starts running!
Now, let’s replace the run method with a lambda function: run = lambda self: print('{} starts running!'.format(self.name)) In the same way we have done in one of the sections above we assign the function object returned by the lambda to the variable run. Notice also that: - We have removed the def keyword because we have replaced the regular function with a lambda. - The argument of the lambda is the instance of the class self. Execute the run method again on the instance of the Gorilla class… …you will see that the output message is exactly the same. This shows that we can use lambdas as class methods! It’s up to you to chose which one you prefer depending on what makes your code easy to maintain and to understand. Using Lambda with the Sorted Function The sorted built-in function returns a sorted list from an iterable. Let’s see a simple example, we will sort a list that contains the names of some planets: >>> planets = ['saturn', 'earth', 'mars', 'jupiter'] >>> sorted(planets) ['earth', 'jupiter', 'mars', 'saturn'] As you can see the sorted function orders the list alphabetically. Now, let’s say we want to order the list based on a different criteria, for example the length of each word. To do that we can use the additional parameter key that allows to provide a function that is applied to each element before making any comparison. >>> sorted(planets, key=len) ['mars', 'earth', 'saturn', 'jupiter'] In this case we have used the len() built-in function, that’s why the planets are sorted from the shortest to the longest. So, where do lambdas fit in all this? Lambdas are functions and because of this they can be used with the key parameter. For example, let’s say I want to sort my list based on the third letter of each planet. Here is how we do it… >>> sorted(planets, key=lambda p: p[2]) ['jupiter', 'earth', 'mars', 'saturn'] And what if I want to sort a list of dictionaries based on the value of a specific attribute? 
>>> people = [{'firstname':'John', 'lastname':'Ross'}, {'firstname':'Mark', 'lastname':'Green'}]
>>> sorted(people, key=lambda x: x['lastname'])
[{'firstname': 'Mark', 'lastname': 'Green'}, {'firstname': 'John', 'lastname': 'Ross'}]

In this example we have sorted the list of dictionaries based on the value of the lastname key. Give it a try!

Python Lambda and Error Handling

In the section in which we looked at the difference between lambdas and regular functions, we saw the following:

>>> multiply
<function multiply at 0x101451d40>
>>> multiply_lambda
<function <lambda> at 0x1014227a0>

Here multiply was a regular function and multiply_lambda was a lambda function. As you can see, the function object for a regular function is identified by a name, while the lambda function object is identified by a generic <lambda> name.

This also makes error handling a bit trickier with lambda functions, because Python tracebacks don't include the name of the function in which an error occurs.

Let's create a regular function and pass to it arguments that would cause the Python interpreter to raise an exception:

def calculate_sum(x, y):
    return x+y

print(calculate_sum(5, 'Not_a_number'))

When I run this code in the Python shell I get the following error:

>>> def calculate_sum(x, y):
...     return x+y
...
>>> print(calculate_sum(5, 'Not_a_number'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in calculate_sum
TypeError: unsupported operand type(s) for +: 'int' and 'str'

From the traceback we can clearly see that the error occurs at line 2 of the calculate_sum function.
Now, let’s replace this function with a lambda: calculate_sum = lambda x, y: x+y print(calculate_sum(5, 'Not_a_number')) The output is: >>> calculate_sum = lambda x,y: x+y >>> print(calculate_sum(5, 'Not_a_number')) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 1, in <lambda> TypeError: unsupported operand type(s) for +: 'int' and 'str' The type of exception and the error message are the same, but this time the traceback tells us that there was an error at line 1 of the function <lambda>. Not very useful! Imagine if you had to find the right line among 10,000 lines of code. Here is another reason for using regular functions instead of lambda functions when possible. Passing a Variable List of Arguments to a Python Lambda In this section we will see how to provide a variable list of arguments to a Python lambda. To pass a variable number of arguments to a lambda we can use *args in the same way we do with a regular function: (lambda *args: max(args))(5, 3, 4, 10, 24) When we run it we get the maximum between the arguments passed to the lambda: >>> (lambda *args: max(args))(5, 3, 4, 10, 24) 24 We don’t necessarily have to use the keyword args. What’s important is the * before args that in Python represents a variable number of arguments. Let’s confirm if that’s the case by replacing args with numbers: >>> (lambda *numbers: max(numbers))(5, 3, 4, 10, 24) 24 Still working! More Examples of Lambda Functions Before completing this tutorial let’s have a look at few more examples of lambdas. These examples should give you some more ideas if you want to use lambdas in your Python programs. 
Given a list of Linux commands, return only the ones that start with the letter 'c':

>>> commands = ['ls', 'cat', 'find', 'echo', 'top', 'curl']
>>> list(filter(lambda cmd: cmd.startswith('c'), commands))
['cat', 'curl']

From a comma-separated string with spaces, return a list that contains each word in the string without spaces:

>>> weekdays = 'monday, tuesday, wednesday, thursday, friday, saturday, sunday'
>>> list(map(lambda word: word.strip(), weekdays.split(',')))
['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday']

Generate a list of numbers with the Python range function and return the numbers greater than four:

>>> list(filter(lambda x: x > 4, range(15)))
[5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

Conclusion

In this tutorial we have seen what a Python lambda is, and how to define and execute it. We went through examples with one or more arguments, and we have also seen how a lambda returns a function object (without the need of a return statement). Now you know that a lambda is also called an anonymous function, because when you define it you don't bind it to a name.

Also, analysing the difference between regular functions and lambda functions in Python has helped us understand better how lambdas work. It's very common to use lambda functions when they are needed only once in your code. If you need a function that gets called multiple times in your codebase, using regular functions is a better approach to avoid code duplication.

Always remember how important it is to write clean code: code that anyone can quickly understand in case of bugs that need to be fixed quickly in the future. Now you have a choice between lambdas and regular functions, so make the right one! :)

I'm a Tech Lead, Software Engineer and Programming Coach. I want to help you in your journey to become a Super Developer!
https://codefather.tech/blog/what-is-lambda-python/
Is your React component not rendering?

Quick quiz: when a React component loads data from the server in componentWillMount, like the one below, what will it render?

class Quiz extends Component {
  componentWillMount() {
    axios.get('/thedata').then(res => {
      this.setState({items: res.data});
    });
  }

  render() {
    return (
      <ul>
        {this.state.items.map(item =>
          <li key={item.id}>{item.name}</li>
        )}
      </ul>
    );
  }
}

If you answered "nothing" or "a console error," congrats! If you answered "the data I fetched," keep reading ;)

State Starts Off Uninitialized

There are two important things to realize here:

- A component's state (e.g. this.state) begins life as null.
- When you fetch data asynchronously, the component will render at least once before that data is loaded – regardless of whether it's fetched in the constructor, componentWillMount, or componentDidMount.

Yes, even though constructor and componentWillMount are called before the initial render, asynchronous calls made there will not block the component from rendering. You will still hit this problem.

The Fix(es)

This is easy to fix. The simplest way: initialize state with reasonable default values in the constructor. For the component above, it would look like this:

class Quiz extends Component {
  constructor(props) {
    super(props);

    this.state = {
      items: []
    };
  }

  // componentWillMount and render stay the same as above
}

You could also handle the empty data inside render, with something like this:

render() {
  return (
    <ul>
      {this.state && this.state.items && this.state.items.map(item =>
        <li key={item.id}>{item.name}</li>
      )}
    </ul>
  );
}

This is not the ideal way to handle it, though. If you can provide a default value, do so.

Trickle-Down Failures

The lack of default or "empty state" data can bite you another way, too: when undefined state is passed as a prop to a child component.
Expanding on that example above, let's say we extracted the list into its own component:

class Quiz extends React.Component {
  constructor(props) {
    super(props);

    // Initialized, but not enough
    this.state = {};
  }

  componentWillMount() {
    // Get the data "soon"
    axios.get('/thedata').then(res => {
      this.setState({items: res.data});
    });
  }

  render() {
    return (
      <ItemList items={this.state.items}/>
    );
  }
}

function ItemList({ items }) {
  return (
    <ul>
      {items.map(item =>
        <li key={item.id}>{item.name}</li>
      )}
    </ul>
  );
}

See the problem? When Quiz first renders, this.state.items is undefined. Which, in turn, means ItemList gets items as undefined, and you get an error – Uncaught TypeError: Cannot read property 'map' of undefined – in the console.

Debugging this would be easier if ItemList had propTypes set up, like this:

function ItemList({ items }) {
  return (
    // same as above
  );
}

ItemList.propTypes = {
  items: React.PropTypes.array.isRequired
};

With this in place, you'll get this helpful message in the console: "Warning: Failed prop type: Required prop items was not specified in ItemList." However, you will still get the error – Uncaught TypeError: Cannot read property 'map' of undefined. A failed propType check does not prevent the component from rendering; it only warns. But at least this way it'll be easier to debug.

Default Props

One more way to fix this: you can provide default values for props. Default props aren't always the best answer, though. Before you set up a default prop, ask yourself if it's a band-aid fix. Is the default value there just to prevent transient errors when the data is uninitialized? Better to initialize the data properly. Is the prop truly optional? Does it make sense to render this component without that prop provided? Then a default makes sense.

This can be done a few ways.

defaultProps property

This method works whether your component is a stateless functional one, or a class that inherits from React.Component.
class MyComponent extends React.Component {
  render() {
    // ...
  }
}

MyComponent.defaultProps = {
  items: []
};

defaultProps static property

This method only works for classes, and only if your compiler is set up to support the static initializer syntax from ES7.

class MyComponent extends React.Component {
  static defaultProps = {
    items: []
  }

  render() {
    // ...
  }
}

Destructuring in render

A default can be provided using the ES6 destructuring syntax right in the render function.

class MyComponent extends React.Component {
  render() {
    const { items = [] } = this.props;

    return (
      <ItemList items={items}/>
    );
  }
}

This line says "extract the items key from this.props, and if it's undefined, set it to an empty array":

const { items = [] } = this.props;

Destructuring in arguments

If your component is of the stateless functional variety, you can destructure right in the arguments:

function ItemList({ items = [] }) {
  return (
    // Use items here. It'll default to an empty array.
  );
}

Wrap Up

In short:

- Async calls during the component lifecycle mean the component will render before that data is loaded, so…
- Initialize state in the constructor and/or be sure to handle empty data.
- Use PropTypes to aid debugging.
- Use default props when appropriate.
- Destructuring syntax is a clean, easy way to provide defaults.

Translations

This article has been translated into Korean.
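The destructuring defaults shown above are plain ES6 and can be tried outside React entirely. A small sketch, with a hypothetical itemNames helper standing in for the ItemList component:

```javascript
// The default only kicks in when the destructured key is undefined,
// which is exactly the "state not loaded yet" case from the article.
function itemNames({ items = [] } = {}) {
  return items.map(item => item.name);
}

console.log(itemNames());                                          // []
console.log(itemNames({ items: [{ name: 'a' }, { name: 'b' }] })); // [ 'a', 'b' ]
console.log(itemNames({ items: [] }));                             // []
```

Note that a destructuring default applies only when the value is undefined; passing items: null would still throw, so it is a guard against uninitialized data, not against every falsy value.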
https://daveceddia.com/watch-out-for-undefined-state/
biner.sebastien at ouranos.ca (biner) wrote...

> I would like to know if there is a module out there to help me get
> the size of a gif image. The idea is to verify that the size is not
> greater than a certain value before forcing it toward a given
> dimension in a web page.

Try this:

import struct

def is_gif(in_file):
    hdr, ver, w, h = struct.unpack('3s3sHH', in_file.read(10))
    if hdr == 'GIF':
        return ver, w, h

It will return the GIF file version (87a or 89a), the width, and the height. If the file is not a GIF, you will get None instead (or an Exception if the file-like object you passed had less than 10 bytes readable). This will save you the use of a heavy-duty library for the simple case of extracting image dimensions.

> Also, is there any way to know which function is used if we use two
> module that define the same function with the same name and arguments.
> E.g. the join function is present in the string module and the os.path
> module with string as input and output. Is there a way to force Python
> to tell me of the possible conflict?

If you have a 'join' function in some namespace and you don't know who owns it, you should probably look at reorganising your code. Alternatively:

print "Is os.path.join:", join is os.path.join
print "Is string.join:", join is string.join
print "Is str.join:", join is str.join

David.
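The snippet above is from 2004 and assumes Python 2, where struct.unpack yields str. On Python 3 the same idea needs two small adjustments: the header comparison must use a bytes literal, and GIF stores its dimensions little-endian, so the format string should say so explicitly. A sketch along the same lines (gif_dimensions is an illustrative name, not from the original post), checked against a hand-built header rather than a real file:

```python
import io
import struct

def gif_dimensions(in_file):
    """Return (version, width, height) if the stream is a GIF, else None."""
    data = in_file.read(10)
    if len(data) < 10:
        return None
    # GIF headers are little-endian; '<' makes that explicit.
    hdr, ver, w, h = struct.unpack('<3s3sHH', data)
    if hdr == b'GIF':
        return ver.decode('ascii'), w, h
    return None

# A hand-built 10-byte header for a hypothetical 320x240 GIF89a file.
fake = io.BytesIO(b'GIF' + b'89a' + struct.pack('<HH', 320, 240))
print(gif_dimensions(fake))  # ('89a', 320, 240)
```

Checking the length up front also removes the exception case the original post mentions for streams shorter than 10 bytes.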
https://mail.python.org/pipermail/python-list/2004-March/274009.html
A package that detects the browser being used to view a web page. Useful for blanket browser detection. If possible, try to use feature detection over this approach.

Import the Browser Detect package.

import 'package:browser_detect/browser_detect.dart';

Use the library's browser field to query for information about the detected browser. This field contains properties for checking the browser type and version.

if (browser.isIe && browser.version <= "9") {
  // Do something.
}

Add this to your package's pubspec.yaml file:

dependencies:
  browser_detect: ^1.0.4

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

import 'package:browser_detect/browser_detect.dart';

The package version is not analyzed because it does not support Dart 2. Until this is resolved, the package will receive a health and maintenance score of 0.

Fix lib/src/browser.dart. (-25 points)
Analysis of lib/src/browser.dart failed with 1 error:
line 33 col 69: The return type 'String' isn't a 'Match', as defined by anonymous closure.

Fix lib/src/browser_version.dart. (-0.50 points)
Analysis of lib/src/browser_version.dart reported 1 hint:
line 11 col 68: 'onError' is deprecated and shouldn't be used.

Format lib/browser_detect.dart.
Run dartfmt to format lib/browser_detect.dart.

Add SDK constraint in pubspec.yaml. (-50 points)
For information about setting SDK constraint, please see.

Package is too old. (-100 points)
The package was released more than two years ago.

Maintain CHANGELOG.md. (-20 points)
Changelog entries help clients to follow the progress in your code.

Maintain an example.
None of the files in your example/ directory matches a known example pattern. Common file name patterns include: main.dart, example.dart, or you could also use browser_detect.dart. Packages with multiple examples should use example/readme.md.
https://pub.dartlang.org/packages/browser_detect
Version: 20.0.1123.4
OS: Mac OS X 10.7.3

What steps will reproduce the problem?
1. Visit
2. Profit! (Or not :-{)

What is the expected output? What do you see instead?
I expect to see some text or some relevant information for the information stored there. Instead I see a window saying "Which service should be used for viewing?" with a link to the chrome web store under "Find more services by visiting the chrome web store."

Please use labels and text to provide additional information.

My best guess after some thought is that chrome realized that the link was an RSS feed, tried to look up an extension or app that could display an RSS feed, didn't find one, and punted the user to the store to find a good app. But:
* It wasn't at all clear that that was what was happening from the pop up; I had absolutely no clue what was meant by "What service should be used for viewing?"
* When I followed the link to the web store, it didn't point me at RSS reader apps (which would have produced an "Aha!"), it just linked to the main store.
I'm *sure* there's a better UE around here somewhere. (Note that I, with my very specialized requirements, would have been happy with a text/plain display.)

Is this a webintents feature? I haven't seen it before; opening that link for me just downloads the RSS feed.

@kalman: It would make sense that it was webintents, but I haven't intentionally configured anything WRT webintents in my browser. Did you try it with the same Chrome version I did?

Greg, we need to figure out that the picker is empty in this case and offer the chance to view or download the RSS file. Need verification from UI leads though.

Issue 130015 has been merged into this issue.

Noticed a few Google feeds worked as before. Examined the HTTP response and noticed the main difference was the HTTP header:

X-content-type-options: nosniff

Adding this to your HTTP responses seems to prevent this behaviour in Chrome 20.
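For anyone wanting to try the workaround from the last comment, here is an illustrative sketch (not part of the bug report) of a minimal WSGI app that serves a feed with the X-Content-Type-Options header set, which tells the browser not to second-guess the declared content type:

```python
# Minimal WSGI app serving an RSS feed with the nosniff header.
def feed_app(environ, start_response):
    body = b'<?xml version="1.0"?><rss version="2.0"><channel/></rss>'
    headers = [
        ('Content-Type', 'application/rss+xml'),
        ('X-Content-Type-Options', 'nosniff'),
        ('Content-Length', str(len(body))),
    ]
    start_response('200 OK', headers)
    return [body]

# Quick check without a real server: capture what the app would send.
sent = {}
def fake_start_response(status, headers):
    sent['status'], sent['headers'] = status, dict(headers)

print(b''.join(feed_app({}, fake_start_response)))
print(sent['headers']['X-Content-Type-Options'])  # nosniff
```

The same header can of course be added from any server or framework; the WSGI app here is just the shortest self-contained way to show it.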
It looks like this is happening in current beta, so changing current mstone. Also tempted to mark this a regression as the behavior has changed from 19-to-20. There are now apps that support exactly this behavior, but there's a CWS bug that's preventing them from appearing. Ultimately, the desired behavior is to support apps that'll show feed resource types in a more helpful way than a wall of XML text. (See bug 33181 ). Those are coming, as well as providing developers who need to see XML text a way to still do so. Is this going to be fixed for 20? This seems like an undesirable behavior regression. I get a blank box with no installation option. I'm seeing this in 20.0.1132.21 on the mac for all of the feeds on gbillock@: any updates? This is a bug in CWS. It should be fixed and pushed by Monday, June 11. If it is not so fixed, I will revert the new policy, and need to merge that patch. @gbillock This doesn't look fixed, can you back it out? The fix should be live today. Issue 131956 has been merged into this issue. Note: to see how this is intended to work, you can install this app: It handles the MIME type for Atom/RSS and displays the contents very nicely. This is a developer-focused app -- support by feed reader apps will provide a much nicer end-user experience than what we have now, and that's what we're focused on with the new behavior. See and pavanv@ clicked on twice on 21.0.1171.0 and it crashed. This happened in Windows and not reproducible in Mac. other interesting stuff that I observed was a blank popup with a link to "More suggestions" which takes you to chrome web store that shows no results. This is again in 21.0.1171.0, windows 7. 
Thread 0 *CRASHED* ( EXCEPTION_ACCESS_VIOLATION_READ @ 0x00000004 ) 0x6b924baa [chrome.dll] - web_intents_registry.cc:194 WebIntentsRegistry::OnWebDataServiceRequestDone(int,WDTypedResult const *) 0x6aeaf87d [chrome.dll] - web_data_service.cc:606 WebDataService::RequestCompleted(int) 0x6aa8a8ac [chrome.dll] - bind_internal.h:1254 base::internal::Invoker<2,base::internal::BindState,void (IPC::SyncChannel::SyncContext *,int),void (IPC::SyncChannel::SyncContext *,int)>,void (IPC::SyncChannel::SyncContext *,int)>::Run(base::internal::BindStateBase *) 0x6aa6f990 [chrome.dll] - message_loop.cc:465 MessageLoop::RunTask(base::PendingTask const &) 0x6aa6dece [chrome.dll] - message_loop.cc:654 MessageLoop::DoWork() 0x6abf6312 [chrome.dll] - message_pump_win.cc:238 base::MessagePumpForUI::DoRunLoop() 0x6aa6da5f [chrome.dll] - message_loop.cc:419 MessageLoop::RunInternal() 0x6ae981a6 [chrome.dll] - message_loop.cc:770 MessageLoopForUI::RunWithDispatcher(base::MessagePumpDispatcher *) 0x6ae97d57 [chrome.dll] - chrome_browser_main.cc:1909 ChromeBrowserMainParts::MainMessageLoopRun(int *) 0x6ae97abf [chrome.dll] - browser_main_loop.cc:444 content::BrowserMainLoop::RunMainMessageLoopParts() 0x6ae97a28 [chrome.dll] - browser_main_runner.cc:98 `anonymous namespace'::BrowserMainRunnerImpl::Run() 0x6aad2f02 [chrome.dll] - browser_main.cc:21 BrowserMain(content::MainFunctionParams const &) 0x6aa678c2 [chrome.dll] - content_main_runner.cc:371 content::RunNamedProcessTypeMain(std::basic_string,std::allocator > const &,content::MainFunctionParams const &,content::ContentMainDelegate *) 0x6aa67849 [chrome.dll] - content_main_runner.cc:627 content::ContentMainRunnerImpl::Run() 0x6aa5a25e [chrome.dll] - content_main.cc:35 content::ContentMain(HINSTANCE__ *,sandbox::SandboxInterfaceInfo *,content::ContentMainDelegate *) 0x6aa5a1ea [chrome.dll] - chrome_main.cc:28 ChromeMain 0x01366357 [chrome.exe] - client_util.cc:423 MainDllLoader::Launch(HINSTANCE__ *,sandbox::SandboxInterfaceInfo *) 
0x01365556 [chrome.exe] - chrome_exe_main_win.cc:31 RunChrome(HINSTANCE__ *) 0x013655c1 [chrome.exe] - chrome_exe_main_win.cc:47 wWinMain 0x013bda42 [chrome.exe] - crt0.c:275 __tmainCRTStartup 0x76933399 [kernel32.dll] + 0x00013399] BaseThreadInitThunk 0x775b9ef1 [ntdll.dll] + 0x00039ef1] __RtlUserThreadStart 0x775b9ec4 [ntdll.dll] + 0x00039ec4] _RtlUserThreadStart Issue 126411 has been merged into this issue. Issue 132406 has been merged into this issue. The work to fix this is in the chrome web store, and is slated to land Monday. Decision is wait until then and roll back the policy if needed. I just tried on Monday, June 18th and the bad behavior still exists. It seems like this should be more robust to the Web Store's state anyway. cbentzel: See the blocking bug, issue 129917 . We're getting too close to the wire, and I'm going to revert r133456 on M20. We should be ready for this on M21, but there's too much unpredictability at this point to leave it in place. sounds good to me! Please go ahead and revert the change. thanks! (label removal - the new intents picker UI has a provision for save) As discussed in email, this will be targeted to M21. Issue 133475 has been merged into this issue. Support for viewing XML raw is now available in the store. Closing this as wontfix since no code is landing. Issue 46489 has been merged into this issue. This just started happening to me. I used to be able to view RSS/XML in the browser which was very useful. All of a sudden (some time int he last 48 hours) Chrome started showing me that "services" alert and I am no longer able to just "See" the RSS. Horrible! Viewing raw RSS XML in the browser is of very little use to the majority of users. For web developers for whom this is useful, there are apps in the Chrome Web Store which provide that functionality. See bug 33181 for instance. 
As mentioned in issue 126411 (which was closed as a duplicate even though it was posted before this issue), it should be possible to view the source of XML feeds in the browser. If not by entering the URL, prepending `view-source:` to the URL should work. Could that be a potential solution/workaround for developers? Alternatively, perhaps a setting in the DevTools could enable viewing XML files. See which is a Chrome web store app that will enable you to view raw XML. (And in a nicer layout than the existing presentation.) X'ing out of the dialog is a clear sign the user isn't interested in the CWS options. If they decide that, they are met with a blank page and a blank URL bar. Instead of this dead-end, can we display the XMLViewer in this scenario? paulirish: Long-term we need to expose the browser as a service (pre-registered of course). Once the user clicks on this, and assuming the user does not decide to choose another service, the behavior should be the same as before the introduction of Intents in this flow. To me canceling the dialog is a clear sign the user clicked on the link in error, and just wants out. Closing the tab seems like the right back-off for this case. I completely understand that web developers want to be able to view the raw feed XML, but the fact that Chrome used to display that for feed links wasn't useful to the majority of users. The power of web intents is that developers who want to see raw XML can use an app enabling them to do that, and end users who want to view the contents of the feed and subscribe to it can do that using the tool of their choice. That's the whole point. :-) Why should an external app be required for something that the browser already is entirely capable of doing? aiiane: See comment 41. Issue 141122 has been merged into this issue. Issue 140233 has been merged into this issue. jhawkins: Yes, I was agreeing with comment 41. 
:) Also note that per merged-in bug, view-source: doesn't seem to work properly for feed urls with a content-type other than text/*. The use of "Feed Intent Viewer" is not really a solid solution. Once the app takes over you are no longer able to manipulate the feed URL. Something that developers generally tend to do when trying to work with feeds. That's a good feature. The viewer source is here: I know that the author is actively interested in this feature, so if you write a patch I think it'll be well received. I agree with #50. Keeping the current behavior of being able to view the XML directly in the browser is a feature too many of us use to just throw away. Make it an additional option at the bottom of all the available intents. Also note that, currently, disabling web-intents via "wrench->Privacy->Content Settings->Web Intents" results in very broken behavior whereby chrome automatically downloads the XML without prompting. I am very disappointed with this change. The feed intent viewer is terrible for web development, as it seems to be caching the xml and I can't find a way to refresh it. Furthermore, it adds an extra layer of complexity to what should be a simple process. I should be able to check a button on the intents menu to specifically state that I do not wish to open this with a plugin but instead intend to open it with the browser itself. Seeing this issue despite having Foxish live RSS 3.7 installed. When selecting certain feeds, the "reader picker" shows up. I already have a reader ! Happening (amongst many others) at using Chrome Version 21.0.1180.75 on Ubuntu 11.10 Gnome Classic. Very disapointed by this new behavior. Viewing XML file directly in the browser is really an obligatory option for me. Hope it will be fixed. Same boat here. Another option we have lost is using CSS and/or XSLT to format the XML. On other browsers you can attatch style-sheets to rss feeds to format them nicely, but Chrome always tries to download. 
I would like to be able to see the raw xml in my browser. I don't know what's going on but the current behavior is problem for me. I'm constantly asked by people what's wrong with their RSS and downloading is completely un-useful. Furthermore, when it downloads it doesn't expose the URL, so I can't copy the URL and use it to send to someone so they can stick it in their iTunes or whatever. In my opinion, the browser should never download a file like this as a default behavior. What's the average user going to do with it? I like the Firefox default behavior - render the page in the browser so it looks like a regular webpage to virtually everyone and then let me view source to see the source. And I should not have to add extensions to support this default behavior. Hope whoever you are at chromium.org are listening. Yeah, I bump this. Really bad and has somehow killed the extentions that previously read all of my XML feeds notably XML Tree and XML Viewer. Bump... very annoying. I think the argument in Comment 50 is very compelling. If I need to add or change GET parameters, even if I have installed the proper web intents, I need to copy the URL before I submit it and then paste it and edit it again. Additionally, if I open a feed in a background tab (Alt+Click) it directly downloads without the option to view it using any available intents. This is a very poor UX and a great step backward, IMO... Issue 141088 has been merged into this issue. The status says it all. They are not going to fix this one. So anyone who wants to view the feeds without being downloaded will just have to get some new browser. This issue is really annoying, but if the majority of users expect fancy views and love using plugins for everything. Well, there is really nothing to be said or done. 
Well, since it was closed on July 3rd, before many of the comments regarding Web Intents and lack of functionality provided by the Chrome apps, I have high hopes that the developers will reconsider adding in the optional functionality of directly viewing RSS feeds. Even with the plugin from comment 21, Chrome Version 21.0.1180.77 still downloads the feed in the background before the intents window opens. At this rate my downloads folder will be filled with a hundred files called "download.rss" by the end of the week. How is this not seen as a bug by the developers? This is typical RSS feed hijacking by the web browser and has been a problem in one form or another for many years. Browser makers seem to think that they know how to handle the "special case" of RSS flavored XML files and they always get it wrong when they try to add interfering design. Chrome actually had it right before this... Chrome ignored RSS and just loaded the data appropriately. My biggest complaint is how it ignores the XSL stylesheet processing instruction which should without a doubt take precedence over the browser's handling of the document. The stylesheet reference is there to tell the browser what to do and is a w3c standard. Ignoring this is poor design and an intentional bug, especially in its current design state. Understanding the motives and intentions, A compromise is surely still on the table, right? I am aware of hacky workarounds but it would be nice for once to have a development and product team actually discuss this issue in a reasonable thoughtful manner. Its not just about what developers want and the increasing focus on consumer-level web browser product. More careful and less hasty modifications needs to be the norm and this issue does not demonstrate that. #65: yes, that's a problem with a fix in progress. So based on comment #41, can we see this issue marked as "open" or at least no longer "wontfix"? This is a major issue for developers in the RIGHT NOW. 
Chome is a fantastic browser and has previously been a joy to work in. Some level of UI MUST be implemented. Web intents be damned. I shouldn't need a plugin, and I certainly don't want a browser to decide to download ANY kind of file without me explicitly telling it to. You're not apple, you're google. "Don't be evil" and let me make my own decisions on how to interact with the information comming through our wires. As stated above this breaks existing extensions. I'm sorry that most users don't need to see XML but Frankly most users don't even know what they are. please realize that web applications and sites will work better in your browser if you please the suite of people building and maintaining these apps and sites. A proposed solution in two acts: 1. When such feed URLs are prefixed with `view-source:`, there’s no doubt that the end user wants to view the source code instead of opening the Web Intents handler. So, suppress the “Which service should be used for viewing this feed?” prompt in these cases. 2. Expose Chrome’s built-in support for the `view-source:` scheme as a Web Intents handler for such feeds. That way, all problems are solved at once: * Developers can still easily view source code of feeds after a trivial one-time setup. * No third-party Chrome extensions required. I've updated the Feed Intent Viewer app () based on feedback from this thread. In version 1.1 it: - Displays the feed URL (and allows it to be modified) - Re-fetches the feed from the server when the page is reloaded If you already have it installed, you can force an immediate update by going to chrome://extensions, enabling developer mode, and pressing the "Update extensions now" button. Thanks to everyone for the feedback. We have been actively discussing a better way of handling RSS feeds. We originally marked this issue as WontFix because the vast majority of our users do not interact with RSS feeds the way the developers in this thread interact with RSS. 
For most users, viewing raw XML is not a good user experience. We believe giving these users the choice to select an RSS handling service via Web Intents is a good solution to this problem. Having said that, we are actively working toward a solution that provides an outstanding experience for our developers and casual users alike. In the meantime, there are Web Intents services that you can install via the Chrome Web Store that will provide most of the desired functionality that has been expressed in this thread. (Feed Intents Viewer is a great option:) Take care, and I will keep this thread up-to-date as our conversation progresses into actionable bugs. If web intents are to be used to handle RSS within chrome then it either needs to have a default intent handler or it needs to prompt the user to use an intent to handle the feed. RSS is so ubiquitous on the web that users will become confused about "what" they've just downloaded if there isn't a default intent to handle the file. Possibly the most frustrating UX change in Chrome yet. Bump. Few more changes like this and I go back to that slow FF. If sb. say that XML view is not useful for may people that Chrome is not browser for me, because I'm not the many people. Does anybody really think that that horrible popup is more useful than RAW XML view that has been there for many releases? I agree. I use several different content management systems and web frameworks which publish content as RSS feeds. Now, when users browse those sites and click on the RSS link icon from within Chrome their browser just downloads some random file like "Default.aspx" or "RSS.html" or "RSS.php", etc. it is very confusing for end users. Earlier someone in the conversation mentioned that this problem only affects developers, but I disagree! Us developers actually KNOW how to handle this situation and if it ONLY affected us, we really probably wouldn't care. 
But, in fact, we are so angry about this change because of what it is doing to our END USERS.... please take notice of this and know that our concerns lie with how this experience affects our content users more than we care about how it makes our development a little more difficult.... Thank you. It might be a good idea to optionally allow extensions to hijack / override the web intents dialog. For example, I extensively use the excellent "XML Tree" extension (). If this extension had the ability to "claim" certain interactions away from web-intents, I would be a very happy developer. Patrick, I do understand that you would like to see some new functionality, but please be aware that this particular thread is dedicated to a known bug currently affecting a large number of website and users. You should definitely open a new issue and start a conversation asking for the functionality that you have mentioned. Thank you! Hi Will, Good point, my comment is out of scope for this particular issue. I will open a new one. Patrick: extensions can quite readily be modified to be web intents services and handle the display of feed contents. I'd like to just see the RSS feed in a normal Chrome tab, without any downloads or this new dialog window. Thanks. What is the current status on this issue? The feed intent viewer app allows url manipulation (thanks Mihai!). There are code fixes in review for some of the edge cases we didn't account for (target=_blank was particularly bad -- bug 129784 ). Does that mean comment #41 by jhawk no longer stands? no please remove this feature or at least let the user choose a preference. i just want to see the xml like i used to. This is a real annoyance - it means I'm now breaking out a different browser to work with RSS. Surely adding an option 'view raw RSS' wouldn't hurt? Issue 150627 has been merged into this issue. End users on my site are complaining because they cannot subscribe to feeds. 
When they click on the standard RSS / Atom links, all they see is a popup dialog and an attempt to download a file with an unfamiliar extension (.webintents). I have even had a user notify me that my site tried to force them to download a virus. This is clearly not an issue that only impacts developers, who can at least research and find ways around the annoyance. But local installations of extensions on my browser will not be of any help to end users, who will likely just become confused or irritated. Please provide a solution that will make the end user experience more intuitive.
https://bugs.chromium.org/p/chromium/issues/detail?id=127313
A for loop is used to execute a block of statements a fixed number of times. The for loop in C# is similar to that in C, C++ or Java.

Syntax

A for loop has three parts:
- Initialization
- Condition, and
- An operation to be performed each time the loop is executed.

for(int i=0; i<10; i++){
    //statements
}

Working

This is how a for loop works:
- A value is initialized (only for the first time).
- An expression is evaluated.
- The body of the loop is executed if the expression is evaluated to true.
- An operation (increment or decrement) is performed.
- The above three steps are repeated until the expression is evaluated to false.

See this post to find the difference between a for loop and a while loop.

Example

using System;

namespace CSharpExamples
{
    class Program
    {
        static void Main(string[] args)
        {
            for(int i=0;i<=3;i++)
            {
                Console.WriteLine("value of i is: " + i);
            }
        }
    }
}

This program, when executed, will produce the following output:

value of i is: 0
value of i is: 1
value of i is: 2
value of i is: 3

Nested for loop

A for loop specified inside a for loop is known as a nested for loop.

Example

using System;

namespace CSharpExamples
{
    class Program
    {
        static void Main(string[] args)
        {
            for(int i=2;i>=0;i--)
            {
                for(int j=0;j<2;j++)
                {
                    Console.WriteLine("i = " + i + ", j = " + j);
                }
            }
            Console.ReadLine();
        }
    }
}

The output of this program is:

i = 2, j = 0
i = 2, j = 1
i = 1, j = 0
i = 1, j = 1
i = 0, j = 0
i = 0, j = 1
https://www.geekinsta.com/c-sharp-for-loop/
Thanks dims, I have put the patch into the JIRA issue I created earlier @ also more info is in the mail [Fwd: [NEW PATCH-FIX FOR TESTS] webservice deployment in geronimo] on the geronimo dev list and as a comment in the issue.

Basically the patch has two parts:
1) a patch for the CVS-managed folders
2) the CVS-unmanaged sample dir, zipped as sample.zip; please put it into modules/axis/src before the build (that is why in my last patch the samples were missing)

Thanks for the help
Srinath

> Srinath,
>
> - don't undo, If there are any changes in EWS, go ahead and apply it
> and i will rebuild the ews-SNAPSHOT.jar
> - Submit a JIRA for the patch to geronimo-axis module and i'll take care
> of it.
>
> thanks,
> dims
>
> On Wed, 28 Jul 2004 03:39:57 -0400 (EDT), Srinath Perera
> <hemapani@opensource.lk> wrote:
>> Sorry I have meesed it up .. it works well in the morning with the
>> updated code :(. If the Patch I sent is checked in this is fixed.. But
>> I belive it would take some time.
>>
>> 1) Shall I undo the changes in the EWS
>> 2) I can sent a patch to update the axis module to fix this and later
>> submit a new patch based on new code to the patch I sent erlier (fixing
>> test cases)
>>
>> Opps that is tough .. I got to update the ews so that the when dims
>> looking at the patch it works with the ews-SNAPSHOT.jar he is
>> downloading and doing so will breaks the current code by the updated
>> jar. Is there a way to protect the both ends.
>>
>> pls help me to sort this out
>> Thanks
>> Srinath
>>
>> > Forwarded on behalf of Alan who is having mailer problems
>> >
>> > -------- Original Message --------
>> > Subject: FW: EWS and Geronimo (THANKS)
>> > Date: Wed, 28 Jul 2004 00:28:14 -0400
>> > From: Alan D. Cabrera <adc@toolazydogs.com>
>> > To: <jboynes@gluecode.com>
>> >
>> > Ok. Two more questions.
>> > >> > First, it seems that a change in EWS has not been propagated to >> Geronimo >> > and we're getting this error: >> > >> > C:\dev\geronimo\modules\axis\src\java\org\apache\geronimo\axis\GeronimoW >> > sDeployContext.java:27: >> org.apache.geronimo.axis.GeronimoWsDeployContext >> > is not abstract and does not override abstract method isCompile() in >> > org.apache.geronimo.ews.ws4j2ee.toWs.Ws4J2 >> > eeDeployContext >> > public class GeronimoWsDeployContext implements Ws4J2eeDeployContext { >> > ^ >> > 1 error >> > >> > >> > Second, you have a jndi.properties file in your jar with the following >> > contents: >> > >> > java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory >> > java.naming.provider.url=jnp://localhost:1099 >> > java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces >> > >> > >> > Not sure if this is kosher, use wise and ASF policy wise. As for the >> > former, the Geronimo server barfs when it can't find >> > org.jnp.interfaces.NamingContextFactory when it loads up the remote >> JMX >> > server. Can we remove it? >> > >> > >> > Regards, >> > Alan >> > >> >> -----Original Message----- >> >> From: Srinath Perera [mailto:hemapani@opensource.lk] >> >> Sent: Tuesday, July 27, 2004 8:42 PM >> >> To: dev@geronimo.apache.org >> >> Subject: Re: EWS and Geronimo >> >> >> >> We were wondering should ews go to ws-fx (to a webservice project ) >> or >> > in >> >> to the geronimo. It is still not decided. I accept the fact that it >> > should >> >> use the geronimo group ID if it continue to use the Geronimo package >> >> space. >> >> Or else that should replace with "ws" and put the geronimo specific >> > code >> >> in to geronimo axis module. To me there are good reasons to both and >> > do >> >> not mind either. >> >> >> >> Thanks >> >> Srinath >> >> >> >> > Why doesn't EWS use the Geronimo maven group id? Its code is in >> the >> >> > Geronimo package-space. >> >> > Just wondering. 
>> >> > Regards,
>> >> >
>> >> > Alan
>>
>> ------------------------------------
>> Lanka Software Foundation
>> ------------------------------------
>
>
> --
> Davanum Srinivas -
>
> ------------------------------------
> Lanka Software Foundation
> ------------------------------------
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200407.mbox/%3C46510.220.247.207.122.1091018696.squirrel@220.247.207.122%3E
Sentiment Analysis of Twitter Posts on Chennai Floods using Python

Introduction

The best way to learn data science is to do data science. No second thoughts about it! One of the ways I do this is to continuously look for interesting work done by other community members. Once I understand a project, I redo or improve it on my own. Honestly, I can't think of a better way to learn data science.

As part of my search, I came across a study on sentiment analysis of the Chennai Floods on Analytics Vidhya. I decided to perform sentiment analysis of the same study using Python and add it here. Well, what can be better than building onto something great?

To get acquainted with the crisis of the Chennai Floods of 2015, you can read the complete study here. This study was done on a set of social interactions limited to the first two days of the Chennai Floods in December 2015. The objective of this article is to understand the different subjects of interaction during the floods using Python. Grouping similar messages together, with emphasis on predominant themes (rescue, food, supplies, ambulance calls), can help the government and other authorities act in the right manner during a crisis.

Building Corpus

A typical tweet is mostly a text message within a limit of 140 characters. #hashtags convey the subject of the tweet, whereas @user seeks the attention of that user. Forwarding is denoted by 'rt' (retweet) and is a measure of popularity. One can like a tweet by marking it a 'favorite'.

About 6000 tweets with the '#ChennaiFloods' hashtag, posted between 1st and 2nd Dec 2015, were collected. Jefferson's GetOldTweets utility (got) was used in Python 2.7 to collect the older tweets. One can store the tweets either in a csv file or in a database like MongoDB for further processing.
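Before collecting thousands of tweets, it helps to see how the tweet anatomy described above can be picked apart. This is a small illustrative sketch (the tweet text and user names here are made up) showing how hashtags, mentions, and the retweet marker can be extracted with plain regular expressions:

```python
import re

# a made-up tweet showing the anatomy described above
tweet = "rt @ChennaiRainsOrg: volunteers needed near Adyar #ChennaiFloods #chennairains"

hashtags = re.findall(r'#(\w+)', tweet)   # subject markers
mentions = re.findall(r'@(\w+)', tweet)   # users whose attention is sought
is_retweet = tweet.startswith('rt ')      # 'rt' marks a forwarded tweet

print(hashtags)    # ['ChennaiFloods', 'chennairains']
print(mentions)    # ['ChennaiRainsOrg']
print(is_retweet)  # True
```

The got utility below returns this kind of metadata already parsed, but the same regexes are handy when working from raw tweet text.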
import got, codecs
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']

tweetCriteria = got.manager.TweetCriteria().setQuerySearch('ChennaiFloods').setSince("2015-12-01").setUntil("2015-12-02").setMaxTweets(6000)

def streamTweets(tweets):
    for t in tweets:
        obj = {"user": t.username, "retweets": t.retweets, "favorites": t.favorites,
               "text": t.text, "geo": t.geo, "mentions": t.mentions,
               "hashtags": t.hashtags, "id": t.id, "permalink": t.permalink}
        tweetind = collection.insert_one(obj).inserted_id

got.manager.TweetManager.getTweets(tweetCriteria, streamTweets)

Tweets stored in MongoDB can be accessed from another python script. The following example shows how the whole db was converted to a Pandas dataframe.

import pandas as pd
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']
df = pd.DataFrame(list(collection.find()))

The first few records of the dataframe look as below:

Data Exploration

Once in dataframe format, it is easier to explore the data. Here are a few examples:

from nltk import FreqDist  # needed for the frequency plots below

hashtags = []
for hs in df["hashtags"]:  # Each entry may contain multiple hashtags. Split.
    hashtags += hs.split(" ")
fdist1 = FreqDist(hashtags)
fdist1.plot(10)

Top 10 Hashtags trending

As seen in the study, the most used tags were "#chennairains" and "#ICanAccommodate", apart from the original query tag "#ChennaiFloods".

- Top 10 users

users = df["user"].tolist()
fdist2 = FreqDist(users)
fdist2.plot(10)

As seen from the plot, the most active users were "TMManiac" with about 85 tweets, "Texx_willer" with 60 tweets, and so on.

Top 10 Users tweeting

Text Pre-processing

All tweets are processed to remove unnecessary things like links, non-English words, stopwords, punctuation, etc.
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords
import re, string
import nltk

tweets_texts = df["text"].tolist()
stopwords = stopwords.words('english')
english_vocab = set(w.lower() for w in nltk.corpus.words.words())

def process_tweet_text(tweet):
    if tweet.startswith('@null'):
        return "[Tweet not available]"
    tweet = re.sub(r'\$\w*', '', tweet)  # Remove tickers
    tweet = re.sub(r'https?:\/\/.*\/\w*', '', tweet)  # Remove hyperlinks
    tweet = re.sub(r'[' + string.punctuation + ']+', ' ', tweet)  # Remove punctuation like 's
    twtok = TweetTokenizer(strip_handles=True, reduce_len=True)
    tokens = twtok.tokenize(tweet)
    tokens = [i.lower() for i in tokens
              if i not in stopwords and len(i) > 2 and i in english_vocab]
    return tokens

words = []
for tw in tweets_texts:
    words += process_tweet_text(tw)

The word list generated looks like: ['time', 'history', 'temple', 'closed', 'due', 'pic', 'twitter', 'havoc', 'incessant', ...]

Text Exploration

The words are plotted again to find the most frequently used terms. A few simple words repeat more often than others: 'help', 'people', 'stay', 'safe', etc.

[('twitter', 1026), ('pic', 1005), ('help', 569), ('people', 429), ('safe', 274)]

These are immediate reactions and responses to the crisis. Some infrequent terms are [('fit', 1), ('bible', 1), ('disappear', 1), ('regulated', 1), ('doom', 1)].

Most frequently used words

Collocations are words that are found together. They can be bi-grams (two words together) or phrases like trigrams (3 words) or n-grams (n words).
from nltk.collocations import *

bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(words, 5)
finder.apply_freq_filter(5)
print(finder.nbest(bigram_measures.likelihood_ratio, 10))

The most frequently appearing bigrams are:

[('pic', 'twitter'), ('lady', 'labour'), ('national', 'media'), ('pani', 'pani'), ('team', 'along'), ('stay', 'safe'), ('rescue', 'team'), ('beyond', 'along'), ('team', 'beyond'), ('rescue', 'along')]

These depict the disastrous situation, like "stay safe", "rescue team", even a commonly used Hindi phrase "pani pani" (lots of water).

Clustering

In such crisis situations, lots of similar tweets are generated. They can be grouped together in clusters based on closeness or 'distance' amongst them. Artem Lukanin has explained the process in detail here. The TF-IDF method is used to vectorize the tweets and then cosine distance is measured to assess the similarity.

Each tweet is pre-processed and added to a list. The list is fed to the TFIDF Vectorizer to convert each tweet into a vector. Each value in the vector depends on how many times a word or a term appears in the tweet (TF) and on how rare it is amongst all tweets/documents (IDF). Below is a visual representation of the TFIDF matrix it generates.

Before using the Vectorizer, the pre-processed tweets are added to the data frame so that each tweet's association with other parameters like id and user is maintained.

cleaned_tweets = []
for tw in tweets_texts:
    words = process_tweet_text(tw)
    # Form sentences of processed words
    cleaned_tweet = " ".join(w for w in words if len(w) > 2 and w.isalpha())
    cleaned_tweets.append(cleaned_tweet)
df['CleanTweetText'] = cleaned_tweets

Vectorization is done using 1-3 n-grams, meaning phrases with 1, 2, or 3 words are used to compute frequencies, i.e. TF-IDF values. One can get cosine similarity amongst tweets/documents as well.
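To make the TF and IDF weights concrete, here is a toy computation on three made-up, already-tokenized tweets (the corpus and the helper functions are purely illustrative): a term that appears in every document, like "chennai" below, gets zero weight, while a term confined to one document scores highest.

```python
import math

# three made-up, already-tokenized tweets
docs = [["help", "rescue", "chennai"],
        ["rescue", "team", "chennai"],
        ["stay", "safe", "chennai"]]

def tf(term, doc):
    # term frequency: the term's share of the document's tokens
    return doc.count(term) / float(len(doc))

def idf(term, docs):
    # inverse document frequency: rare terms get a bigger boost
    n_containing = sum(1 for d in docs if term in d)
    return math.log(float(len(docs)) / n_containing)

print(tf("chennai", docs[0]) * idf("chennai", docs))  # 0.0 -- appears everywhere
print(tf("help", docs[0]) * idf("help", docs))        # highest -- appears in one doc
```

The real vectorizer additionally normalizes and smooths these values, but the intuition is the same.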
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(use_idf=True, ngram_range=(1, 3))
tfidf_matrix = tfidf_vectorizer.fit_transform(cleaned_tweets)
feature_names = tfidf_vectorizer.get_feature_names()  # num phrases

from sklearn.metrics.pairwise import cosine_similarity
dist = 1 - cosine_similarity(tfidf_matrix)
print(dist)

from sklearn.cluster import KMeans

num_clusters = 3
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf_matrix)
clusters = km.labels_.tolist()
df['ClusterID'] = clusters
print(df['ClusterID'].value_counts())

The K-means clustering algorithm is used to group tweets into a chosen number (say, 3) of groups. The output shows 3 clusters, with the following number of tweets in the respective clusters: most of the tweets fall in cluster id 1, and the remainder in ids 2 and 0.

The top words used in each cluster can be computed as follows:

# sort cluster centers by proximity to centroid
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
for i in range(num_clusters):
    print("Cluster {} : Words :".format(i))
    for ind in order_centroids[i, :10]:
        print(' %s' % feature_names[ind])

The result is:

- Cluster 0: Words: show mercy please people rain
- Cluster 1: Words: pic twitter zoo wall broke ground saving guilty water growing
- Cluster 2: Words: help people pic twitter safe open rain share please

Topic Modeling

Topic modeling finds the central subjects in a set of documents, tweets in this case. The following are two ways of detecting topics, i.e. clustering the tweets.

Latent Dirichlet Allocation (LDA)

LDA is commonly used to identify a chosen number (say, 6) of topics. Refer to the tutorial for more details.
from gensim import corpora, models

texts = [text for text in cleaned_tweets if len(text) > 2]
# the tweets are already cleaned above, so splitting on whitespace is enough here
doc_clean = [doc.split() for doc in texts]
dictionary = corpora.Dictionary(doc_clean)
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_clean]
ldamodel = models.ldamodel.LdaModel(doc_term_matrix, num_topics=6,
                                    id2word=dictionary, passes=5)
for topic in ldamodel.show_topics(num_topics=6, formatted=False, num_words=6):
    print("Topic {}: Words: ".format(topic[0]))
    topicwords = [w for (w, val) in topic[1]]
    print(topicwords)

The output gives us the following set of words for each topic. It is clear from the words associated with the topics that they represent certain sentiments: Topic 0 is about Caution, Topic 1 is about Help, Topic 2 is about News, etc.

Doc2Vec and K-means

The Doc2Vec methodology available in the gensim package is used to vectorize the tweets, as follows:

import gensim
from gensim.models.doc2vec import TaggedDocument

taggeddocs = []
tag2tweetmap = {}
for index, i in enumerate(cleaned_tweets):
    if len(i) > 2:  # Non empty tweets
        tag = u'SENT_{:d}'.format(index)
        sentence = TaggedDocument(words=gensim.utils.to_unicode(i).split(),
                                  tags=[tag])
        tag2tweetmap[tag] = i
        taggeddocs.append(sentence)

model = gensim.models.Doc2Vec(taggeddocs, dm=0, alpha=0.025, size=20,
                              min_alpha=0.025, min_count=0)
for epoch in range(60):
    if epoch % 20 == 0:
        print('Now training epoch %s' % epoch)
    model.train(taggeddocs)
    model.alpha -= 0.002  # decrease the learning rate
    model.min_alpha = model.alpha  # fix the learning rate, no decay

Once the trained model is ready, the tweet-vectors available in the model can be clustered using K-means.
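The "closeness" that K-means relies on in the next step is just vector similarity in the embedding space. A stdlib-only sketch of cosine similarity, using hypothetical 4-dimensional vectors standing in for the 20-dimensional Doc2Vec output:

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: 1.0 means same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# hypothetical tweet vectors; in practice these come out of the trained model
v_rescue_1 = [0.9, 0.1, 0.0, 0.2]
v_rescue_2 = [0.8, 0.2, 0.1, 0.3]
v_weather  = [0.0, 0.9, 0.8, 0.1]

print(cosine_similarity(v_rescue_1, v_rescue_2))  # high: likely the same cluster
print(cosine_similarity(v_rescue_1, v_weather))   # much lower: a different topic
```

Tweets whose vectors point in similar directions end up in the same K-means cluster below.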
from sklearn.cluster import KMeans

dataSet = model.syn0
kmeansClustering = KMeans(n_clusters=6)
centroidIndx = kmeansClustering.fit_predict(dataSet)

topic2wordsmap = {}
for i, val in enumerate(dataSet):
    tag = model.docvecs.index_to_doctag(i)
    topic = centroidIndx[i]
    # setdefault avoids dropping the words of the first tweet seen per topic
    topic2wordsmap.setdefault(topic, [])
    for w in tag2tweetmap[tag].split():
        topic2wordsmap[topic].append(w)

for i in topic2wordsmap:
    words = topic2wordsmap[i]
    print("Topic {} has words {}".format(i, words[:5]))

The result is the list of topics and the words commonly used in each. It is clear from the words associated with the topics that they represent certain sentiments: Topic 0 is about Caution, Topic 1 is about Actions, Topic 2 is about Climate, etc.

End Notes

This article shows how to implement the Capstone-Chennai Floods study using Python and its libraries. With this tutorial, one can get an introduction to various Natural Language Processing (NLP) workflows such as accessing twitter data, pre-processing text, exploration, clustering, and topic modeling.
https://www.analyticsvidhya.com/blog/2017/01/sentiment-analysis-of-twitter-posts-on-chennai-floods-using-python/
On Mon, Jan 21, 2008 at 08:34:50PM -0800, Alexander Dunlap wrote:
> However, I'm not sure how we would implement it without using error.
> The usual Haskell solution would be to use Maybe, but what would we
> have Just of, since the functions don't return real values? Is Just ()
> an accepted idiom?

Bool would be nicer than that, but presumably these functions can fail in a number of ways, so better still would be Maybe Exception or Maybe IOError. In my opinion, though, throwing an exception is best.

People using MonadIO can convert this into the variant that doesn't throw an exception by replacing

    readlineFunction args

with something like

    (try $ readlineFunction args) >> return ()

Thanks
Ian
http://www.haskell.org/pipermail/libraries/2008-January/009031.html
MP4FileInfo - Return a textual summary of an mp4 file

#include <mp4.h>

char* MP4FileInfo(
    const char* mp4FileName,
    MP4TrackId trackId = MP4_INVALID_TRACK_ID
);

Upon success, a malloc'ed string containing the summary info. Upon an error, NULL.

MP4FileInfo provides a string that contains a textual summary of the contents of an mp4 file. This includes the track ids, the track type, and track-specific information. For example, for a video track, media encoding, image size, frame rate, and bitrate are summarized.

Note that the returned string is malloc'ed, so it is the caller's responsibility to free() the string to prevent memory leaks. Also note that the returned string contains newlines and tabs which may or may not be desirable.

The following is an example of the output of MP4Info():

Track  Type   Info
1      video  MPEG-4 Simple @ L3, 119.625 secs, 1008 kbps, 352x288 @ 24.00 fps
2      audio  MPEG-4, 119.327 secs, 128 kbps, 44100 Hz
3      hint   Payload MP4V-ES for track 1
4      hint   Payload mpeg4-generic for track 2
5      od     Object Descriptors
6      scene  BIFS

MP4(3) MP4Info(3)
http://www.makelinux.net/man/3/M/MP4FileInfo
Closed Bug 636014 Opened 10 years ago Closed 9 years ago

[autoconfig] Align labels and textfields on the existing account wizard

Categories (Thunderbird :: Account Manager, defect)
Tracking (thunderbird11 fixed, thunderbird12 fixed)
Thunderbird 13.0
People (Reporter: BenB, Assigned: standard8)
References Details Attachments (3 files, 1 obsolete file)

The name/email/password fields in the Account Creation wizard have labels with a doesn't fix it. Also, they should use a <grid>, too, for the same reason as above.

Work in progress fix. This does:
- use grids for the problem areas to account for different lengths of strings.
- currently uses sizeToContent to fix the problem where the error text goes off the window.

This doesn't yet fix the vertical alignment of the results when something is found in the ispdb.

Assignee: nobody → mbanner
Status: NEW → ASSIGNED
Summary: [autoconfig] Align labels and textfields → [autoconfig] Align labels and textfields on the existing account wizard

For the vertical mis-alignment, I've been able to fix it by changing the descriptions to a textbox, but making that textbox disabled and look the same as a label. This is also good from the a11y point of view as it means we can link the labels to the text. I'm a bit concerned about overrunning the window width, but I'm fairly sure we're not going to hit really long configs. If we do, then we might have to reconsider it, but generally this should look a lot better than it does now for the majority of cases.

For the sizeToContent issue, I had a look at doing it a slightly different way, but that didn't work unfortunately, so for now, this is the best I can do, but I'll file a follow-up bug if you want.
Attachment #576165 - Attachment is obsolete: true
Attachment #576713 - Flags: ui-review?(bwinton)
Attachment #576713 - Flags: review?(bwinton)

Comment on attachment 576713 [details] [diff] [review]
The fix

Review of attachment 576713 [details] [diff] [review]:
-----------------------------------------------------------------

On Windows: Hit tab, type "jdfkls@fjdls.fdjskl", hit tab, hit shift-tab, type "fjdkls", hit shift-tab, type "fjdsklfdjskl", hit tab a bunch, and watch the fields change width. Because of this, I'm going to have to give this a ui-r-. But the code seems good, so it gets an r=me. :)

Thanks,
Blake.

Attachment #576713 - Flags: ui-review?(bwinton)
Attachment #576713 - Flags: ui-review-
Attachment #576713 - Flags: review?(bwinton)
Attachment #576713 - Flags: review+

I can confirm this behavior on Win7. This is because the password textbox is using italic text for the placeholder and switches to normal when the textbox is active. When you set #password {width: 165px;} the box width is stable. I think the width is dynamically calculated based on the font, and italic is wider than normal.

PS: The same happens with the Gloda searchbox.

Richard, any quick fixes coming to your mind?

The only quick fix would be to use non-italic text. If this is okay I can write a patch for this.

Blake, what do you mean? Can you whip up a patch with non-italic text, and we can see how that looks? Thanks, Blake.

This patch is based on Mark's patch with these changes in mail/themes/qute/mail/accountCreation.css: I added

@namespace html url("");

and

textbox.padded html|*.textbox-input:-moz-placeholder { font-style: normal; }

Because of this I'm asking only for ui-r.
Comment on attachment 590791 [details] [diff] [review]
New Fix

Stealing this ui-review from bwinton

Attachment #590791 - Flags: ui-review?(bwinton) → ui-review?(nisses.mail)

Comment on attachment 590791 [details] [diff] [review]
New Fix

Works great! Labels can become really long and widgets no longer resize. Also tested in RTL.

Attachment #590791 - Flags: ui-review?(nisses.mail) → ui-review+

Comment on attachment 590791 [details] [diff] [review]
New Fix

I didn't do a detailed review, but this looks like the right approach, it's using <grid>, and we should have done that all along. Thanks!

Attachment #590791 - Flags: feedback+

Perhaps we should file a Toolkit bug on the resizing widgets? That seems wrong in general...

Attachment #590791 - Flags: approval-comm-beta?
Attachment #590791 - Flags: approval-comm-aurora?

Checked in:

Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → Thunderbird 13.0

Attachment #590791 - Flags: approval-comm-beta?
Attachment #590791 - Flags: approval-comm-beta+
Attachment #590791 - Flags: approval-comm-aurora?
Attachment #590791 - Flags: approval-comm-aurora+

Checked into branches:

status-thunderbird11: --- → fixed
status-thunderbird12: --- → fixed
https://bugzilla.mozilla.org/show_bug.cgi?id=636014
Functionally test web applications in multiple target environments without manual work.

In the past, web UIs were built using page navigation to allow users to submit forms, etc. These days, more and more web applications use Ajax and therefore act and look a lot more like desktop applications. However, this poses problems for testing: Selenium and WebDriver are designed to work with user interactions resulting in page navigation and don't play well with AJAX apps out of the box.

GWT-based applications in particular have this problem, but there are some ways I've found to develop useful and effective tests. GWT also poses other issues in regards to simulating user input and locating DOM elements, and I discuss those below. Note that my code examples use Groovy to make them concise, but they can be pretty easily converted to Java code.

Problem 1: Handling Asynchronous Changes

One issue that developers face pretty quickly when testing applications based on GWT is detecting and waiting for a response to user interaction. For example, a user may click a button which results in an AJAX call which would either succeed and close a window or, alternatively, show an error message.

What we need is a way to block until we see the expected changes, with a timeout so we can fail if we don't see the expected changes.

Solution: Use WebDriverWait

The easiest way to do this is by taking advantage of the WebDriverWait (or Selenium's Wait). This allows you to wait on a condition and proceed when it evaluates to true. Below I use Groovy code for the conciseness of closures, but the same can be done in Java, though with a bit more code due to the need for anonymous classes.
def waitForCondition(Closure closure) {
    int timeout = 20
    WebDriverWait w = new WebDriverWait(driver, timeout)
    w.until({
        closure() // wait until this closure evaluates to true
    } as ExpectedCondition)
}

def waitForElement(By finder) {
    waitForCondition { driver.findElements(finder).size() > 0 }
}

def waitForElementRemoval(By finder) {
    waitForCondition { driver.findElements(finder).size() == 0 }
}

// now some sample test code
submitButton.click() // submit a form

// wait for the expected error summary to show up
waitForElement(By.xpath("//div[@class='error-summary']"))
// maybe some more verification here to check the expected errors
// ... correct error and resubmit
submitButton.click()
waitForElementRemoval(By.xpath("//div[@class='error-summary']"))
waitForElementRemoval(By.id("windowId"))

As you can see from the example, your code can focus on the actual test logic while handling the asynchronous nature of GWT applications seamlessly.

Problem 2: Locating Elements when you have little control over the DOM

In web applications that use templating (JSPs, Velocity, JSF, etc.), you have good control and easy visibility into the DOM structure that your pages will have. With GWT, this isn't always the case. Often, you're dealing with nested elements that you can't control at a fine level.

With WebDriver and Selenium, you can target elements using a few methods, but the most useful are by DOM element ID and XPath. How can we leverage these to get maintainable tests that don't break with minor layout changes?

Solution: Use XPath combined with IDs to limit scope

In my experience, to develop functional GWT tests in WebDriver, you should use somewhat loose XPath as your primary means of locating elements, and supplement it by scoping these calls by DOM ID, where applicable. In particular, use IDs at top level elements like windows or tabs that are unique in your application and won't exist more than once in a page.
These can help scope your XPath expressions, which can look for window or form titles, field labels, etc. Here are some examples to get you going. Note that we use // and * in our XPath to keep our expressions flexible so that layout changes do not break our tests unless they are major.

By byUserName = By.xpath("//*[@id='userTab']//*[text()='User Name']/..//input")
WebElement userNameField = webDriver.findElement(byUserName)
userNameField.sendKeys("my new user")

// maybe a user click and then wait for the window to disappear
By submitLocator = By.xpath("//*[@id='userTab']//input[@type='submit']")
WebElement submit = webDriver.findElement(submitLocator)
submit.click()

// use our helper method from Problem 1
waitForElementRemoval By.id("userTab")

Problem 3: Normal element interaction methods don't work!

GWT and derivatives (Vaadin, GXT, etc.) often are doing some magic behind the scenes as far as managing the state of the DOM goes. To the developer, this means you're not always dealing with plain <input> or <select>, etc. elements. Simply setting the value of the field through normal means may not work, and using WebDriver or Selenium's click methods may not work. WebDriver has improved in this regard, but issues still persist.

Solution: Unfortunately, just some workarounds

The main problems you're likely to encounter relate to typing into fields and clicking elements. Here are some variants that I have found necessary in the past to get around clicks not working as expected. Try them if you are hitting issues. The examples are in Selenium, but they can be adapted to the corresponding calls in WebDriver if you require them. You may also use the Selenium adapter for WebDriver (WebDriverBackedSelenium) if you want to use the examples directly.

Click Issues

Sometimes elements won't respond to a click() call in Selenium or WebDriver. In these cases, you usually have to simulate events in the browser. This was true more of Selenium before 2.0 than WebDriver.
// Selenium's click sometimes has to be simulated with events.
def fullMouseClick(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
    selenium.mouseUp locator
}

// In some cases you need only mouseDown, as mouseUp may be
// handled the same as mouseDown.
// For example, this could result in a table row being selected, then deselected.
def mouseOverAndDown(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
}

Typing Issues

These are the roundabout methods of typing I have been able to use successfully in the past when GWT doesn't recognize typed input.

// fires only key events (works for most GWT inputs)
// Useful if WebDriver sendKeys() or Selenium type() aren't cooperating.
def typeWithEvents(String locator, String text) {
    def keyEvents = ["keydown", "keypress", "keyup"]
    typeWithEvents(locator, text, keyEvents)
}

// fires key events, plus blur and focus for really picky cases
def typeWithFullEvents(String locator, String text) {
    def fullEvents = ["keydown", "keypress", "keyup", "blur", "focus"]
    typeWithEvents(locator, text, fullEvents)
}

// use this directly to customize which events are fired
def typeWithEvents(String locator, String text, def events) {
    text.eachWithIndex { ch, i ->
        selenium.type locator, text.substring(0, i + 1)
        events.each { event ->
            selenium.fireEvent locator, event
        }
    }
}

Note that the exact method that works will have to be figured out by trial-and-error, and in some cases you may get different behaviour in different browsers, so if you run your functional tests against different environments, you'll have to ensure your method works for all of them.

Conclusion

Hopefully some of you find these tips useful. There are similar tips out there but I wanted to compile a good set of examples and workarounds so that others in similar situations don't hit dead-ends or waste time on problems that require lots of guessing and time. If you have any other useful tips or workarounds, please share by leaving a comment.
Maybe you’ll save someone having to work late or on a weekend! From (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
http://java.dzone.com/articles/testing-gwt-apps-selenium-or
Bits and pieces of useful information by Kristoffer Henriksson

By default web browsers will only open 2 simultaneous connections to a named website to be a good client. If you're loading several images this means you may hamstring your bandwidth usage and make your website load slower.

Take the following HTML source as our base, off a site hosted at:

<img src="Image1.png" width="200" height="200" />
<img src="Image2.png" width="200" height="200" />
<img src="Image3.png" width="200" height="200" />
<img src="Image4.png" width="200" height="200" />
<img src="Image5.png" width="200" height="200" />
<img src="Image6.png" width="200" height="200" />

By not allowing more than 2 images to be downloaded at once, the total page load time takes longer than is necessary. Since we own the host in question we know we can safely allow more than 2 images to be downloaded at once, so we create two DNS aliases, img1.example.com and img2.example.com, that both point to. We now modify our HTML source to make use of these two new hosts:

<img src="" width="200" height="200" />
<img src="" width="200" height="200" />
<img src="" width="200" height="200" />
<img src="" width="200" height="200" />

Web browsers will now open 2 connections per named host (, img1.example.com, and img2.example.com) for a total of 6 concurrent connections. Downloading all of them at the same time decreases the amount of bandwidth available for any individual image but will maximize the overall bandwidth usage, leading to faster load times.

Nice tip. Where can I get more information about this?

The HTTP 1.1 RFC is a good place to start.

Would this not lead to an increase of DNS lookups? I can see this working on a popular web site where it is certain that DNS records are cached locally on the client's DNS server, but otherwise would you not have a DNS lookup for every 2 images, with the latency that goes with that?
agreed we are talking about a small amount of time but as you suggest it is only of benefit to those clients with bandwidth; downloading the images 2 by 2 would not take long either. I may be wrong and do accept that for popular sites and pages with a whole lot of images this would work and is a neat trick, but otherwise for the average site would the human eye notice the difference?

Oh wow... Instead of making web browsers more intelligent and recognize fast connections to allow more than 2 concurrent transfers, let's all destroy our web page sources, pollute the namespace with DNS aliases, and make my mother's internet experience via her 33.6kbps modem degrade!

Daren - Yes it does lead to more DNS lookups and you would need to do measurements yourself to see if this results in a page load speed increase. For pages with only 6 images it may not, but double or triple the images and you're looking at a likely speed increase. Your mileage may vary tremendously but I work on a very image heavy site (Virtual Earth) where images are downloaded after panning and zooming, so having multiple hosts benefits us tremendously and the overhead of extra DNS lookups only happens on the first load. Hopefully we're also popular enough that your local DNS server has our records cached.
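One practical detail when templating pages like the one in the post: assign each image to an alias deterministically, so a given image always loads from the same hostname and the browser cache is not defeated by images hopping between aliases. A hypothetical sketch (the hostnames are the ones from the post; the helper itself is mine):

```python
import zlib

HOSTS = ["img1.example.com", "img2.example.com"]

def shard_url(path):
    # a stable checksum of the path always picks the same host for the same image
    host = HOSTS[zlib.crc32(path.encode("utf-8")) % len(HOSTS)]
    return "http://%s/%s" % (host, path.lstrip("/"))

for name in ["Image1.png", "Image2.png", "Image3.png", "Image4.png"]:
    print(shard_url(name))
```

The same function can then be called wherever the page's img tags are generated, spreading requests across the aliases without hand-assigning hosts.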
http://blogs.msdn.com/kristoffer/archive/2006/12/27/loading-website-images-in-parallel.aspx
Code for teaching

What care should be taken with the code we give our students

A student emailed me a link to where the following code was posted for their class. Note: I am not soliciting for other examples of code like this.

public class Team {
    private int teamSize = 0;
    private BallPlayer player1, player2, player3, player4, player5,
                       player6, player7, player8, player9;

    public void getPlayer() {
        teamSize++;
        switch(teamSize) {
            case 1:
                player1 = new BallPlayer("first baseman");
                System.out.println(player1.getPosition() + " added to team");
                break;
            case 2:
                player2 = new BallPlayer("pitcher");
                System.out.println(player2.getPosition() + " added to team");
                break;
            case 3:
                player3 = new BallPlayer("catcher");
                System.out.println(player3.getPosition() + " added to team");
                break;
            case 4:
                player4 = new BallPlayer("left fielder");
                System.out.println(player4.getPosition() + " added to team");
                break;
            case 5:
                player5 = new BallPlayer("right fielder");
                System.out.println(player5.getPosition() + " added to team");
                break;
            case 6:
                player6 = new BallPlayer("shortstop");
                System.out.println(player6.getPosition() + " added to team");
                break;
            case 7:
                player7 = new BallPlayer("third baseman");
                System.out.println(player7.getPosition() + " added to team");
                break;
            case 8:
                player8 = new BallPlayer("Centerfielder");
                System.out.println(player8.getPosition() + " added to team");
                break;
            case 9:
                player9 = new BallPlayer("second baseman");
                System.out.println(player9.getPosition() + " added to team");
                break;
        }
    }

    public int getNumberPlayers() {
        return teamSize;
    }
}

The students were asked to clean up code in a class titled BallPlayer and one titled TeamUser, but the proper usage of Team is to call getPlayer nine times. This is code intended to teach students the switch statement in their first course in Java. I'm scratching my head over much of it, starting with the void return type for getPlayer() and the use of switch in the first place for this example.
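For what it's worth, here is one way the cleanup could go, offered as a hypothetical sketch rather than the course's intended answer: a lookup table of positions plus an array replaces the nine numbered fields and the switch entirely, and the method (renamed here to addPlayer, since it adds rather than gets) reports whether it succeeded instead of returning void. BallPlayer is reduced to a minimal stand-in because the original class isn't shown.

```java
// Hypothetical cleanup sketch, not the assignment's official solution.
class BallPlayer {                     // minimal stand-in for the course's class
    private final String position;
    BallPlayer(String position) { this.position = position; }
    String getPosition() { return position; }
}

public class Team {
    private static final String[] POSITIONS = {
        "first baseman", "pitcher", "catcher", "left fielder", "right fielder",
        "shortstop", "third baseman", "center fielder", "second baseman"
    };

    private final BallPlayer[] players = new BallPlayer[POSITIONS.length];
    private int teamSize = 0;

    // adds the next player and reports whether there was room for one
    public boolean addPlayer() {
        if (teamSize == POSITIONS.length) {
            return false;              // team is already full
        }
        players[teamSize] = new BallPlayer(POSITIONS[teamSize]);
        System.out.println(players[teamSize].getPosition() + " added to team");
        teamSize++;
        return true;
    }

    public int getNumberPlayers() {
        return teamSize;
    }

    public static void main(String[] args) {
        Team team = new Team();
        while (team.addPlayer()) { }   // fills the roster, then stops
        System.out.println(team.getNumberPlayers() + " players total");
    }
}
```

The data-driven version makes the "call it nine times" contract visible in the type, and the boolean return lets a caller notice when the tenth call does nothing.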
As a child I hated hearing adults say "do what I say and not what I do." As a teacher, I often fall short but I do aim to model the behavior I'm trying to teach. If I want the students to write test-first then I better do the same when possible. I know there's lots of bad code out there, but this class struck me. In today's Weblogs, Navaneeth Krishnan blogs about Poor Man's Web Services. He's looking at RSS in the context of a web service. Chet Haase blogs on Miscellaneous Stuffn Things. He shares his thoughts on the JavaOne call for papers, the NetBeans release, and Swing/2D effects for rich clients. Do we respect our elders enough? John Reynolds has started a conversation in Too old to program? He asks "Is programming just a career for young whipersnappers?" In Also in Java Today, ONJava is featuring a collection of goodies and gotchas from Elliote Rusty Harold's newly-updated Java Network Programming, 3rd Edition. In URLs and URIs, Proxies and Passwords, he illustrates character-encoding hazards with URLs, how to build a URI, how to get fine-grained control of proxy server communication, how to get data with an HTTP GET, and how to work with password-protected sites with Java. With J2SE 5.0, it has become easier to figure out what is going on inside the JVM as your application is running. In Building Manageability, Satadip Dutta provides an overview of the new Java Management Extensions (JMX) that make monitoring, tracking, and control easier. After surveying internal and external monitoring and control, he writes "The exposed management information from an application can alert IT operators about impending problems. The application can also use the JVM information to throttle the application and prevent it from getting into an unrecoverable state. By throttling the application, the garbage collector may get a chance to run and thereby prevent memory or thread-related problems."
In Projects and Communities, the NetBeans community announces NetBeans IDE 4.0 is now available. The 4.0 release is the first open source IDE to support J2SE 5.0, and the announcement provides more details on the latest release and pointers to the download page. From the JXTA community, James Todd blogs about Chat JXTA Style NonStop. He writes "with MyJXTA 2.3.2 launched unto the world we are now sharing ideas as to what features to take on" and invites you to contribute. Kelly O'Hair responds to comments about More flexible builds in today's Forums. "Debug builds in particular are an issue (the debug DLL's from Microsoft are not on the system, or in the free compilers, and cannot be re-distributed). One thought is to by default only build the optimized J2SE (currently by default both an optimized and debug version is built), and anyone wanting a 'debug' build would need to 'make debug' and perhaps have purchased a set of C/C++ compilers/tools that can build a debug version. I'm curious what people think of this." With regards to extending primitives, Walter Bruce begins "Perhaps it might help the discussion if we looked at what would have to be changed in Java (compiler and/or JVM) to support extended primitives." In today's java.net News Headlines:
- Java Compatibility Kit Source Released
- Sun Java Studio Enterprise 7
- JOnAS 4.2.2
- Annogen First Release
- Override JSR175 Annotations
- Berkeley DB XML 2.0.7
- JXMLPad 3.0
- yFiles v2.3 - Diagramming
http://weblogs.java.net/blog/editors/archives/2004/12/code_for_teachi.html
crawl-003
refinedweb
905
55.13
Garry Willgoose wrote:
>.

__import__() returns a value, which is the imported module, so you have to say

    user_module = __import__('siberia900')

and then

    reload(user_module)
    user_module.version

> Anyway here's the original problem again with more info.

I think the problem is that the module is being imported in the scope of load(). The first time you run load(), a local name 'siberia900' is created and bound to the imported module. The second time you run load(), this name is not available and you get the name error. One way to fix this would be to use a global dict as the namespace for the exec and eval calls. A better solution is to get rid of them entirely, as I have suggested.

Kent
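Kent's suggestion — keep a reference to the module object instead of exec/eval-ing its name in a local scope — can be sketched with the importlib API. The module name here ('json') is just a stand-in for the user's own module:

```python
import importlib

def load(name):
    """Import (or re-import) a module by name and return the module object,
    so the caller holds a real reference instead of a name buried in a
    function's local scope."""
    module = importlib.import_module(name)
    # reload() picks up any edits made to the source since the first import
    return importlib.reload(module)

m = load('json')
print(m.dumps({'a': 1}))  # -> {"a": 1}
```

Because load() returns the module, calling it a second time simply rebinds the caller's variable; there is no local-name lookup to fail.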
https://mail.python.org/pipermail/tutor/2007-November/058388.html
CC-MAIN-2016-44
refinedweb
130
74.29
To use the provided random number generator in the StandardDeck class, you need a private data member of type Rands. You then use this object to call any of the functions in the Rands class. The constructor should call the function rand_seed(). If you don’t, then you will get the exact same sequence of numbers every time you run your program. In order to get a different sequence of numbers and hence a different ordering of cards for different runs, you can use the current time to seed the generator. To do this, you need to include the header <ctime> and get the time by calling time(0). So if “r” is your data member of type Rands, you seed the generator as follows.

#include <ctime>  // needed for the time() function

r.rand_seed( (Uint) time(0) );  // seed the generator according to the time

Are you working on my question?

Yes I am stuck. What I have so far will not compile.

#ifndef CARD_H
#define CARD_H
#include <iostream>
using namespace std;

enum Suit {CLUBS, DIAMONDS, HEARTS, SPADES};
enum Rank {ACE = 1, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN, JACK, QUEEN, KING, };

class Card {
    friend ostream& operator<<( ostream &stream, const Card &card );
public:
private:
    static char *suit_str[4];   // output strings for suit
    static char *rank_str[14];  // output strings for rank
};
#endif

#ifndef DECK_H
#define DECK_H
#include <vector>
#include "card.h"
#include "rands.h"

class StandardDeck {
public:
    void shuffle();               // shuffle the deck
private:
    vector<Card> deck;            // the deck of cards
    Rands r;                      // random numbers used for shuffling
    void swap( Card &, Card & );  // used for shuffling
    int top;                      // top of deck index
    Card card;
};
#endif

#ifndef HAND_H
#define HAND_H
#include <vector>
#include "card.h"

class Hand {
public:
private:
};
#endif

/******************************************************************/
/* concatenation of the two 16-bit multiply with carry generators */
/* x(n)=a*x(n-1)+carry mod 2^16 and y(n)=b*y(n-1)+carry mod 2^16, */
/* with the number and carry packed within the same 32 bit        */
/* integer. Algorithm due to Marsaglia                            */
/******************************************************************/
#ifndef RANDS_H
#define RANDS_H
typedef unsigned int Uint;

class Rands {
public:
    Rands();
    void rand_seed( Uint );      // seed the generator
    Uint rand();                 // returns a random 32-bit integer
    double rand_float();         // returns a random float in (0,1]
    int rand( int lo, int hi );  // returns a random int from lo through hi
private:
    Uint SEED_X;
    Uint SEED_Y;
};
#endif

#include "card.h"
#include <iostream>
using namespace std;

Card::Card(Suit s, Rank r)
{
    suit = s;
    rank = r;
    for (Card::Suit s = Card::CLUBS; s <= Card::SPADES; s = Card::Suit(s+1)) {
        for (Card::Rank r = Card::TWO; r <= Card::ACE; r = Card::Rank(r+1)) {
            deck.push_back (Card (s, r));
        }
    }

ostream& operator<<( ostream &stream, const Card &card )
{
    char *Card::rank_str[14] = {ace, two, three, four, five, six, seven, eight, nine, ten, jack, queen, king };
    char *Card::suit_str[4] = {clubs, diamonds, hearts, spades};
    stream << Card::rank_str[ card.rank ] << " of " << Card::suit_str[ card.suit ];
    return stream;
}

#include "deck.h"
#include <iostream>
#include <ctime>
using namespace std;

void StandardDeck::shuffle()
{
    for (Uint i=0; i < deck.size()-1; i++)
        swap( deck, deck[ r.rand(i,deck.size()-1) ] );
    top = 0;
}

void StandardDeck::swap( Card &c1, Card &c2 )
{
    Card temp = c1;
    c1 = c2;
    c2 = temp;
}

I did not know that. How should I post then?

Yes go ahead and post the solution you worked on.

In the meantime I will go to. I have a question that was put into the waiting room that another professional said he did not know the answer to. I really need an answer ASAP. I wonder if you would take a look at it. I assume that it is available for you to see or do I have to resubmit the question? "For ATLProg only" The question is somewhat long with some code provided and it would be easier to just upload a zip file to you. and Code.zip and File ID: 257104 I need it by tomorrow.
I also have some additional information I can upload if you are willing to take another try at it. I am sending more information after this file that I think will help Okay. Thanks. You said you were out of ideas on my last question. Since that time I have found that there were errors in the code that I provided that would have prevented the program from compiling. If you were running into problems because the provided code had bugs, then the question is would you take third and final look at the problem with the corrected code to see if you can help me with it? But if you just don't want to look again that is fine too. I understand. added some notes that might help.zip I added a word document with some code that might help. Again it is a long project I know but I just need somewhere to start if nothing else. Thanks. Ok. Thanks
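Not part of the original exchange: a minimal standalone sketch of the seed-and-shuffle pattern the assignment describes, substituting the standard library's srand/rand for the course's Rands class, with the deck reduced to integers 0-51 for brevity.

```cpp
#include <cstdlib>
#include <ctime>
#include <utility>
#include <vector>

// Build a 52-card deck (as ints 0..51), seed once from the clock, then
// shuffle with the Fisher-Yates algorithm: swap each position with a
// uniformly chosen earlier-or-equal position.
std::vector<int> shuffledDeck() {
    std::vector<int> deck(52);
    for (int i = 0; i < 52; ++i) deck[i] = i;

    // Seed from the current time so each run gets a different ordering,
    // mirroring the r.rand_seed( (Uint) time(0) ) advice above.
    std::srand(static_cast<unsigned>(std::time(0)));

    for (int i = 51; i > 0; --i) {
        int j = std::rand() % (i + 1);  // random index in [0, i]
        std::swap(deck[i], deck[j]);
    }
    return deck;
}
```

The shuffle permutes the deck in place, so every card is still present exactly once after the call.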
http://www.justanswer.com/homework/7uxim-working-project-shuffling-dealing-cards.html
CC-MAIN-2017-04
refinedweb
813
70.02
The JDialog class uses the idea of an owner object. The owner is the class which is using the JDialog, either a Frame or another Dialog. It's possible to pass null as the owner, but that's a very bad habit. If no owner is defined, then various highly desirable behaviors are lost: - inheriting position from the owner - inheriting application icon from the owner - correct ALT+TAB behavior when switching between applications The last point may seem minor at first glance, but it's actually a major problem. When proper ALT+TAB behavior is absent, the dialog can easily get 'lost' behind your application. Many end users have no idea how to recover access to such lost dialogs. This becomes especially serious when the dialog is modal (which they usually are) since, not only can the user not find the dialog, but they are unable to interact with any other screens in the application. In this situation, many end users will naturally conclude that the application is hung, and they will do what they always do when an application is hung - they will reboot the machine. This issue can be addressed in your Standard Dialog class, which may enforce the rule that an owner be specified. As a backup style, your Standard Dialog might use the active frame as its owner, if none is explicitly passed by the caller. Here is a code snippet showing how to get the active frame.

import java.awt.Frame;

public final class ActiveFrame {

    /** Return the currently active frame. */
    public static Frame getActiveFrame() {
        Frame result = null;
        Frame[] frames = Frame.getFrames();
        for (int i = 0; i < frames.length; i++) {
            Frame frame = frames[i];
            if (frame.isVisible()) {
                result = frame;
                break;
            }
        }
        return result;
    }
}
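The "backup" rule described above can be factored into a small resolver. This is a hypothetical helper (the class and method names are mine, not from the article): it prefers an explicit owner and otherwise falls back to the first visible frame, returning null only when neither exists.

```java
import java.awt.Frame;

// Hypothetical owner-resolution policy for a Standard Dialog class:
// use the caller's owner when given, else fall back to the active frame.
final class DialogOwner {
    private DialogOwner() {}

    static Frame resolve(Frame explicitOwner) {
        if (explicitOwner != null) {
            return explicitOwner;  // caller followed the rule
        }
        // Fallback: scan all frames for the first visible one,
        // the same loop ActiveFrame.getActiveFrame() performs.
        for (Frame frame : Frame.getFrames()) {
            if (frame.isVisible()) {
                return frame;
            }
        }
        return null;  // no frames exist yet; caller should treat as an error
    }
}
```

A Standard Dialog constructor could then call `new JDialog(DialogOwner.resolve(owner), ...)` and log a warning when the result is null, rather than silently creating an ownerless dialog.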
http://www.javapractices.com/topic/TopicAction.do;jsessionid=CEDFF473CD05E0DC9866625E87331870?Id=230
CC-MAIN-2018-13
refinedweb
286
54.02
Add C# web project to VB Web project

Discussion in 'ASP .Net' started by Brad, Apr 1, 2004. (Views: 5,005; last reply: Ethan V, Jun 25, 2006)

Similar threads:
- Converting VS.NET 2003 web project to VS.NET 2005 web project - =?Utf-8?B?anJldHQ=?=, Sep 25, 2006, in forum: ASP .Net (Replies: 1, Views: 928; last: Laurent Bugnion, Sep 26, 2006)
- How to add project output to web setup project? - tomix, Nov 9, 2006, in forum: ASP .Net (Replies: 0, Views: 474; last: tomix, Nov 9, 2006)
- compile errors when converting web site project to web application project - John Dalberg, Mar 26, 2007, in forum: ASP .Net (Replies: 1, Views: 523; last: =?Utf-8?B?UGV0ZXIgQnJvbWJlcmcgW0MjIE1WUF0=?=, Mar 28, 2007)
- procedure to add web reference which will not create new namespace just add class in existing namesp - Deep Mehta via .NET 247, May 28, 2005, in forum: ASP .Net Web Services (Replies: 2, Views: 450; last: Dave A, May 31, 2005)
http://www.thecodingforums.com/threads/add-c-web-project-to-vb-web-project.74923/
CC-MAIN-2014-42
refinedweb
151
85.89
At some point, you'll find yourself in the situation where you need to decide on a naming scheme for an Active Directory forest and domain. This is a critical decision and should not be made while you're standing in front of the screen and typing DCPROMO. Let me elaborate a bit... Historically, Microsoft has been extremely liberal in what names are allowed within the GUI during DCPromo and user/computer account creation (or setup.exe in NT 4). This is partly because during the NT 4 days it didn't matter much what you called the domain or the computer; it pretty much worked anyway, as you only had the NetBIOS version to worry about and no possibility of any component confusing it with a DNS name. Enter Windows 2000 and AD...you suddenly have the DNS version and the NetBIOS version to worry about. Enter Windows 2003 and Forest Trusts...you now have trusts using DNS names. Additionally, since the name suffix routing engine uses the UPN of accounts as a hint to which forest it should route login requests, you need to make sure your UPN is in a proper DNS format if you want to use it outside of the forest. So, here is a list compiled from existing cases with PSS:

Bad AD domain or forest names:
Developers often need to decide whether a domain name being passed to the application is NetBIOS or DNS; with this type of format you most likely get the wrong type returned, which results in unexpected errors that can be difficult to troubleshoot.

Bad naming schemes for user or computer accounts (security principals):
The worst naming scheme possible is what I refer to as AD 360, this is where the NetBIOS name looks like the DNS name and the DNS name is single label (i.e. SLD). Example: NetBIOS name: DOMAIN.COM and DNS name: DOMAIN. Disjointed namespaces are also on the list, but don't warrant a separate entry (they'd need a separate website).
In the next release after Windows 2008, most of these will probably be blocked from being created but you should still be able to manage or upgrade existing domains that use them. See also: Naming conventions in Active Directory for computers, domains, sites, and OUs A user in a trusted Windows Server 2003 forest cannot use a UPN to log on to a trusting Windows Server 2003 forest when UPN suffixes are not DNS-compliant Information about configuring Windows for domains with single-label DNS names Error message when you join a Windows Vista-based client computer to a top level domain (TLD) that has a purely numeric suffix: "An Active Directory Domain Controller for the domain <DNS domain name> could not be contacted" Requirements for Internet Hosts -- Application and Support
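To make the ambiguity concrete, here is a hypothetical illustration (not from the article) of the kind of guess application code is forced to make when handed a bare domain name. The "AD 360" scheme defeats exactly this guess:

```java
// Hypothetical heuristic: decide whether a configured domain name is
// DNS-style or NetBIOS-style. A dotted name "looks like" DNS; a bare
// single-label name is ambiguous (NetBIOS, or a single-label DNS domain).
final class DomainNames {
    private DomainNames() {}

    static boolean looksLikeDnsName(String name) {
        // at least one interior dot, and not just a trailing root dot
        return name.indexOf('.') > 0 && !name.endsWith(".");
    }
}
```

With a NetBIOS name of DOMAIN.COM (the AD 360 case), this heuristic confidently returns the wrong answer, which is precisely the hard-to-troubleshoot failure mode the article warns about.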
http://blogs.technet.com/b/instan/archive/2008/07/03/naming-schemes-to-avoid-in-ad.aspx
CC-MAIN-2013-20
refinedweb
469
54.26
On 25 Mar 2005 12:37:29 -0800, bbands <bbands at yahoo.com> wrote: > I've a 2,000 line and growing Python script that I'd like to break up > into a modules--one class alone is currently over 500 lines. There is a > large config.ini file involved (via ConfigParser), a fair number of > passed and global variables as well as interaction with external > programs such as MySQL (via MySQLdb), R (via Rpy) and gnuplot (via > Gnuplot). Every time I have tried to break it up I end up with thorny > name-space problems and have had to stitch it back together gain. What sort of namespace problems? I think you need to tell us what your specific problems were, so that we can help you more. Peace Bill Mill bill.mill at gmail.com
https://mail.python.org/pipermail/python-list/2005-March/345798.html
CC-MAIN-2014-15
refinedweb
137
82.65
Serv. The main areas covered so far by the expert group are:
- Pluggability and extensibility
- Ease of development
- Async support
- Security enhancements
- Some enhancements to the existing APIs

Pluggability and extensibility

In keeping with one of the main themes for Java EE 6, the goal of this requirement is to provide more extension points in the servlet container to allow developers to easily use frameworks / libraries by "just dropping" a jar file in the application classpath, without requiring any additional configuration in the application. It is the job of the container to assemble all the fragments at deployment / runtime of the application. Along

Ease of Development

In servlet 3.0 one of the areas of focus is ease of development. The servlet 3.0 API makes use of annotations to enable a declarative style of programming for components of a web application. For example, to define a servlet you would use the following piece of code:

package samples;

import javax.servlet.http.annotation.*;

@Servlet(urlMappings={"/foo"})
public class SimpleSample {
}

The use of annotations makes web.xml optional, and a developer does not really need to use it unless they want to change some configuration without changing code - for example at deployment time. Also, as with all the other parts of the Java EE platform, the descriptor always overrides the metadata provided via use of annotations.

Async support

The initial proposal for Async servlet support came from Greg Wilkins from Webtide and has been discussed and refined in the expert group. Support for async servlets allows you to suspend and resume request processing and enable and disable the response depending on the need of the application. A good use case of this servlet is in writing comet style applications.
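Illustrative pseudocode only: the draft method names listed for async support (suspend / resume / complete and the isInitial / isResumed / isTimeout state checks) were still under discussion and there is no shipping implementation to compile against, so the following is a sketch, not working code, of how a comet-style servlet might use them.

```
// Pseudocode against the draft async API - not compilable anywhere.
public class CometServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        if (req.isInitial()) {
            req.suspend(30000);      // hypothetical timeout argument:
                                     // park the request, free the thread
            register(req);           // remember it so an event can resume it
        } else if (req.isResumed()) {
            writeEvent(resp);        // an event arrived while suspended
            req.complete();
        } else if (req.isTimeout()) {
            resp.setStatus(204);     // nothing happened within the window
        }
    }
}
```

The point of the design is visible even in sketch form: a waiting client no longer pins a container thread between events.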
API changes for async support:
- ServletRequest: suspend, resume, complete, isSuspended, isResumed, isTimeout, isInitial
- ServletResponse: disable, enable, isDisabled
- ServletRequestListener: requestSuspended, requestResumed, requestCompleted

Security enhancements

While this is not in the early draft of the specification, as it needs some more discussion in the expert group, I thought I would give a sneak preview of what might be in the next draft of the specification. Ron Monzillo from Sun, who is a security expert and has done work on many of the security-related JSRs, proposed the addition of methods to HttpServletRequest and HttpSession to support programmatic login and logout.

API changes for login / logout support:
- HttpServletRequest: login, logout
- HttpSession: logout

Enhancements to existing APIs

A few areas of the current API have been enhanced in ways that will help developer productivity. Listed below are some of them:

- Ability to get a ServletContext and a Response object from the Request object. To get a Context (to maybe get context information or to get a RequestDispatcher, for example) today you need to create an HttpSession object just to get a reference to a Context object. With the addition of the new APIs you can now get the Context and the associated Response from the ServletRequest. API changes for getting a Context and Response: ServletRequest: getServletContext, getServletResponse

- Mark a cookie as an HttpOnly cookie. By setting a cookie to be an HttpOnly cookie you prevent client-side scripting code from getting access to the cookie. Most modern browsers support this feature and it helps mitigate some kinds of cross-site scripting attacks. API changes to support HttpOnly cookies: Cookie: setHttpOnly, isHttpOnly

Comments:

by sendtopms - 2008-09-05 09:38
Do you have a Reference Implementation (RI) available? If so, where can I get the RI?
by ronaldtm - 2008-06-05 06:53
Unless Sun now wants Servlets to compete with MVC frameworks, it just doesn't make sense. This is different from JAX-WS. JAX-WS *is* competing with third-party WebServices libraries, in the sense that when you use JAX-WS you usually don't use Spring-WS or Axis. And, you rarely use a third-party, out-of-the-box WebService, you usually code them, because they have to expose a business-specific operation (they usually are application-specific). Servlets, instead, are completely low-level infrastructure (application-agnostic, configured on a per-application basis). Other frameworks build on it, instead of trying to 'compete' with it.

by chris_e_brown - 2008-05-02 00:48

by greeneyed - 2008-05-01 10:44
I wish you would get rid of the "up to the application server vendor" parts of security, so one would truly be able to move freely one application from one server to another. (user<->roles mappings for example) Oh, and dynamic security settings, instead of hardcoded paths in descriptors and mixing security calls in the business logic. Cheers to Ron for proposing also the long forgotten logout feature ;-). S!

by ss141213 - 2008-05-15 19:46
mode: I know generating equivalent DD from annotations at deployment time is not mandated by the spec. I mentioned that alternative only to counter the argument that annotation processing can slow things down. The implementations that care too much about speed can choose an alternative like that.

by mode - 2008-05-15 15:25
Sahoo: The specification says that all the libraries in WEB-INF/lib and WEB-INF/classes must be looked into. Beyond that it is up to the implementation.

by mode - 2008-05-15 15:24
Sahoo: Processing annotations only once is not a requirement of the specification. It can in fact use them even at runtime. We do that in JAX-WS in some cases for example. Generating the corresponding xml descriptor is one way of doing it in the implementation.

by mode - 2008-05-13 12:00
Ronaldtm: For this release we are limiting it to adding configuration at startup time only. Reconfiguration has a whole set of issues which I would rather punt on for now and revisit in a subsequent release. As for annotations - you need to define a url-mapping for every servlet / filter that you want to make available. There is a way to turn off automatic scanning and exposing the servlets if you choose to do so. We didn't see it to be a problem with Web Services (which run on top of the servlet container) if you use the annotations in the right way.

by mode - 2008-05-13 11:57
Chris: We have gone through this discussion during Java EE 5 and we noticed that the benefits to developers far outweigh the minimal delay that it takes for deploying the app. Also if you absolutely care about performance you can turn off scanning for annotations and just continue using the deployment descriptor (web.xml) to specify all the metadata.

by mode - 2008-05-13 11:53
greeneyed: We will look into what can be done for security enhancements. We are starting to take the steps towards making security enhancements in the servlet specification. - Rajiv

by ronaldtm - 2008-05-02 18:11
...

by ss141213 - 2008-05-15 00:53
Rajiv,

by ss141213 - 2008-05-15 00:48
Chris: Your point about annotation processing slowing down start up of a web app is not an issue, because most of the application servers process annotations only once, i.e., during deployment of the application. e.g., GlassFish generates corresponding XML deployment descriptors out of annotations during deployment. During subsequent start up of the application or application server, it just reads the information from the XML DD. Thanks, Sahoo
http://weblogs.java.net/blog/mode/archive/2008/04/servlet_30_jsr.html
crawl-003
refinedweb
1,221
51.78
Dr. Who Meets Metal Hey guys, So I saw this EVERYWHERE haha. I included the "I am the Doctor" tune in there as well, it reminded me of Symphony X or Liquid Tension E... High Places - Year Off From the forthcoming Album ~ "Original Colors" due 10/11 via Thrill Jockey For more Info ~... Freestylers - Cracks (Ft. Belle Humble) (Flux Pavilion Remix) Available to buy on UKF Dubstep 2010 (iTunes): AVAILABLE TO BUY NOW: Flux Pavilion's rem... Freestylers feat. Belle Humble - Cracks (Flux Pavillion Remix) Freestylers feat. Belle Humble - Cracks (Flux Pavillion Remix) Robin S - Show Me Love (Official Music Video) [1993] Join us on Spinnin' Facebook: x-posed -Point of no return (original extended 1984) I love both versions, but this is the original extended PRODUCED ARRANGEMENT & COMPOSED BY: LEWIS A.MARTINEE FOR COUNTDOWN PRODUCTION EXECUTIVE PR... Expose- Point Of No Return (1985) Original recording of Expose's "Point Of No Return", released in 1985 under the original line-up of Alejandra Lorenzo, Sandra Casanas, and Laurie M... Nu Shooz I Can't Wait Video for new shoes , " I can't wait" .Lets see how long it stays up ? Susanne Vega - Tom's Diner Sorry had to block comments, tired of my phone going off.. Doctor Who Every Title Sequence (1963-2011) Thanks To Mad Monkey Every Title From Doctor Who Other Than Big Finish And Big Finish Doctor Who: Every Story 1963 to Now - A Babelcolour Tribute As recommended in 'Doctor Who Magazine' issue 444, this is a rundown of every Doctor Who adventure from 1963's 'An Unearthly Child' right up to 201... Doctor Who: Regeneration (All The Doctor's Regenerations 1963 - 2010) All the Doctor's regenerations 1963 - 2010. * All footage taken from the original episodes (along with original audio) 1. William Hartnell - Patri... Thin Lizzy - Dr Who ( Cover '73 Berlin) Information :- Eric Bell introduces this live rockout which I just had to upload. A friend of mine sent it but try as I might I could not eliminat... 
Doctor Who Theme Guitar Lesson A different type of lesson as teach you how to play the Doctor Who Theme Tune! READ THIS!!! I have been asked for tabs and I do not have the time ... N*E*R*D & Daft Punk - Hypnotize You (Nero Remix) Nero's incredible remix of 'Hypnotize You' by N*E*R*D & Daft Punk. Become a fan of Nero: Follow them on Twitter: ht... Call Me Maybe by Carly Rae Jepsen Meets Metal Hey guys, So got 3 really big requests brewing but wanted to get one out. Saw this one a lot especially in the last couple videos. Thank you guys... Green Loontern - Daffy Duck (Dodgers) as the green lantern! for more green lantern information please visit... Merrie Melodies - Daffy Duck the Wizard HD I was surprise there wasn't already an HD version uploaded to Youtube. From episode 17 "Sunday Night Slice" ROBIN S - SHOW ME LOVE (Official Video) Official video by Delpo DJ. This is the STONE BRIDGE CLUB MIX. Robin Stone is an american singer born April 27, 1962, Queens, New York. She had a f... Homemade Flying Captain America Shield This is my flying Captain America shield, which I made from duct tape and cardboard. I have posted full instructions and a free pdf pattern for all... Ghost (2of2) live @ Fortarock Nijmegen 2011-07-02 (15:37:22) Ghost live performance @ Fortarock festival Nijmegen (Netherlands) on July 2nd 2011. Whole show (35 minutes, 2 parts) on our channel youtube.com/u... Ghost (1of2) live @ Fortarock Nijmegen 2011-07-02 (15:09:58) Ghost live performance @ Fortarock festival Nijmegen (Netherlands) on July 2nd 2011. Whole show (35 minutes, 2 parts) on our channel youtube.com/u... New Order - Bizzare Love Triangle (1987) From the album "Substance" Gojira - Toxic Garbage Island (Live at Vieilles Charrues Festival 2010) Gojira - 10 - Toxic Garbage Island Live in Carhaix, Vieilles Charrues festival July 17th 2010 Get the dvd i made for you here ! :.... Unloading a 35mm SLR I show how to unload film from a Nikon FM2 35mm SLR. 
This same procedure will apply to many similar models of film SLR cameras. 8-bit: Mastodon - The Last Baron (Part 1) full version (includes download link):... - 1 year ago 8-bit: Mastodon - Crack The Skye DOWNLOAD:... - 1 year ago David Ellefson of MEGADETH Sioux City, IA Megadeth play Sioux City, IA with Rob Zombie and Volbeat. David gets a new first-ever Jackson Kelly Bird five string bass. Rig Rundown - Dethklok's Brendon Small PG's Jordan Wagner is On Location in Des Moines, IA, at the Val Air Ballroom where he visits backstage with Dethklok's... Rig Rundown - Mastodon's Brent Hinds & Bill Kelliher PG's Jordan Wagner is On Location in Des Moines, IA, at the Val Air Ballroom where he visits backstage with Mastodon's... 8-bit: Mastodon - The Last Baron (Part 2) Part 2! - 1 year ago 07 The Last Baron part 2 - Mastodon - Glasgow Barrowlands 19-02-10 Mastodon - The Last Baron, live at Glasgow Barrowlands 19-02-10 had to split into 2 parts due to the 10 minute rule taken with Lumix DMC TZ7 07 The Last Baron part 1 - Mastodon - Glasgow Barrowlands 19-02-10 Mastodon - The Last Baron, live at Glasgow Barrowlands 19-02-10 had to split into 2 parts due to the 10 minute rule taken with Lumix DMC TZ7 Municipal Waste "Wrong Answer" Video - Gore Version Get MASSIVE AGGRESSIVE on CD, vinyl and digital download: Europe - North America - iTunes -... Mastodon - Crack the Skye (live) with Neurosis' Scott Kelly Mastodon - Crack the Skye (live) multi-cameras Crack the skye with Neurosis' Scott Kelly The Masquerade Atlanta GA 2/28/09 - Credit To: Yahoo! Se... S.O.D - Seasoning the Obese Music video of SOD Disclaimer: I do not own any of this material. Copyright held by their respective authors.
http://www.youtube.com/Slaytanicguy
crawl-003
refinedweb
977
70.43
This library is really just a core set of features which don't really belong to any particular category, and I find it handy for common use in all JavaScript I write. Something that I do a lot of in JavaScript is create namespaces. I always like to keep all code in a single namespace in the same manner which I would do with .NET. But there is a problem, JavaScript doesn't have namespaces! I'm sure everyone has written their own code to register namespaces. The code that I use is actually something someone I worked with adapted from some code that I'd written, as I thought you had to be able to use recursive functions to do it and he was just quicker at getting it written than I was :P. The method resides within my core API namespace, slace.core, as a method named registerNamespace, like so:

slace.core.registerNamespace('some.namespace');

This will create a new namespace starting at the window object, but it also has the capabilities to add the namespace from any existing namespace, eg:

slace.core.registerNamespace('web', slace);

Now the slace object will also have web to go with core. As I mentioned above JavaScript doesn't have the concept of namespaces, so how do you create a namespace in a language which doesn't do namespaces? Well namespaces in JavaScript are actually a bit of a trick, and they aren't namespaces which are familiar to .NET developers, they are actually just a series of empty objects. Take this piece of code:

slace.core.registerNamespace('slace.web.controls');

This will produce the following object:

slace = {
    web = {
        controls = {
        }
    }
};

Well technically the window object should be before slace but it's skipped for brevity, as is the slace.core object. So this is really just a set of empty objects! So you can find the code here, and let's have a look at what it does.
The crux of it is a recursive function which the namespace is passed into:

slace.core.registerNamespace = function (namespace, global) {
    var go;
    go = function (object, properties) {
        if (properties.length) {
            var propertyToDefine = properties.shift();
            if (typeof object[propertyToDefine] === 'undefined') {
                object[propertyToDefine] = {};
            }
            go(object[propertyToDefine], properties);
        }
    };
    go(global || (function () { return this; })(), namespace.split('.'));
}

In this function the argument object is what we're putting the namespace onto, and properties is an array of the namespace parts to define (having been split on the .). The last line initiates the function and either passes in the object you want to augment, or the object which is scoped as this for the method (which will be window unless you're really going to get nasty with JavaScript, but that's a topic for another time :P). This code can actually be reduced by a few lines by making it a self-executing named function (or a self-executing anonymous function if you want ;)), but due to limitations in the Visual Studio 2010 JavaScript intellisense engine it doesn't work recursively it seems. Odd bug, but easy to get around (and it makes your code a bit more readable!). The library also includes some handy extensions for detecting if a method already registered on an object, in the form of Function.method (which is from Douglas Crockford's article on JavaScript Inheritance), and the Array.prototype is also augmented to have Array.contains, Array.remove and Array.indexOf (unless it's already there).
http://www.aaron-powell.com/posts/2010-09-12-slace-core-javascript-library.html
CC-MAIN-2017-17
refinedweb
603
61.77